
Viliam comments on Open thread, Nov. 23 - Nov. 29, 2015 - Less Wrong Discussion

5 Post author: MrMind 23 November 2015 07:59AM




Comment author: Viliam 24 November 2015 09:27:01AM *  2 points [-]

I agree with that.

In the future, it would be best to derive everything from the axioms (using libraries where the frequently used theorems are already proved). The problem is that the simplest theorems we can derive from the axioms quickly are not important enough to pay for developing and using the software.

So a better approach would be for the system to accept a few theorems as (temporary) axioms. Essentially, if it is okay to use the Pythagorean theorem in a scientific paper without proving it, then in the first version of the program it would be okay to use the Pythagorean theorem as an axiom -- displaying a warning: "I have used the Pythagorean theorem without having a proof of it."

This first version would already be helpful for verifying current papers. And there is the option of providing a proof of the Pythagorean theorem from first principles later. If you add it later, you can re-run the papers and get the results with fewer warnings. If the Pythagorean theorem happens to be wrong, then as long as you have recorded the warnings for all papers, you know which ones to retract.
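A minimal sketch of this "theorem as temporary axiom" idea, in Lean 4 syntax (the `IsRightTriangle` predicate is a made-up placeholder here, not a real library name):

```lean
-- Placeholder predicate; a real development would define this geometrically.
axiom IsRightTriangle : Real → Real → Real → Prop

-- Accept the statement without proof. Lean compiles this but emits a
-- "declaration uses 'sorry'" warning here and for everything built on it --
-- exactly the kind of tracked IOU described above.
theorem pythagorean (a b c : Real) (h : IsRightTriangle a b c) :
    a ^ 2 + b ^ 2 = c ^ 2 := sorry
```

Replacing the `sorry` with an actual proof later makes the warnings disappear without touching any of the results that depend on the theorem.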

Actually, I believe such systems would be super helpful in, e.g., set theory, when you want to verify whether a proof relies on the axiom of choice. Because even if you didn't use it directly, maybe one of the theorems you used was based on it. Generally, using different sets of axioms could become easier.
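For what it's worth, existing proof assistants already do this kind of bookkeeping; in Lean, for instance, `#print axioms` lists every axiom a proof transitively depends on:

```lean
-- Excluded middle in Lean is derived from the choice axiom, so the
-- dependency shows up even though we never invoke choice directly.
theorem em_example (p : Prop) : p ∨ ¬p := Classical.em p

-- Reports the axioms this proof rests on, including Classical.choice.
#print axioms em_example
```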

Comment author: passive_fist 24 November 2015 08:04:37PM 0 points [-]

Yes, that's an insightful way of looking at how computer verification could assist in real mathematics research.

Going back to the CS analogy: programmers started out by writing everything in machine language. Then gradually people began to write commonly-used functions as libraries that you could just install and forget about (they didn't even have to be in the same language), and they wrote higher-level languages that could automatically compile to machine code. Higher and higher levels of abstraction were recognized and implemented over the years (for things like parsers, data structures, databases, etc.) until we got to modern languages like Python and Java, where programming almost feels like simply writing out your thoughts. There was very little universal coordination in all of this; it just grew out of the needs of various people. No one in 1960 sat down and said, "Ok, let's write Python."

Comment author: Lumifer 24 November 2015 08:10:11PM *  0 points [-]

until we got to modern languages like Python and Java, where programming almost feels like simply writing out your thoughts. ... No one in 1960 sat down and said, "Ok, let's write Python."

For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.

As to "writing out your thoughts", people did design such a language in 1959...

P.S. Oh, and do your thoughts flow like this..?

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
Comment author: passive_fist 24 November 2015 08:24:41PM *  1 point [-]

For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.

That current implementations of Python are fairly slow is a different matter; high-level languages need not be any slower than, say, C or Fortran, as modern JIT-compiled languages demonstrate. It just takes a lot of work to make them fast.

As to "writing out your thoughts", people did design such a language in 1959...

Lisp was also designed during that same period, and it probably proves your point even better. But 1960s Lisp was as bare-bones as it was high-level; you still had to write almost everything yourself from scratch.

Comment author: bogus 24 November 2015 08:32:09PM *  1 point [-]

But 1960s Lisp was as bare-bones as it was high-level; you still had to write almost everything yourself from scratch.

Computerized math is in the same state today. No one wants to write everything they need from scratch, unless they're working in a genuinely self-contained (i.e. 'synthetic') subfield where the prerequisites are inherently manageable. See programming-language theory (with its POPLmark challenge) and homotopy type theory as examples where computerization is indeed making quick progress.

Comment author: Lumifer 24 November 2015 09:04:28PM 0 points [-]

as bare-bones as it was high-level

Umm... LISP is elegant and expressive -- you can (and people routinely do) construct complicated environments including DSLs on top of it. But that doesn't make it high-level -- it only makes it a good base for high-level things.

But if you use "high-level" to mean "abstracted away from the hardware" then yes, it was, but that doesn't have much to do with "writing out your thoughts".

Comment author: bogus 24 November 2015 08:22:35PM 0 points [-]

For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.

LISP was definitely a thing in the 1960s, and Python is not that different. For a long time, the former was pretty much 'the one' very-high-level, application-oriented language, much like Python or Ruby today.

Comment author: Lumifer 24 November 2015 09:00:36PM *  1 point [-]

... and Python is not that different.

8-0 Allow me to disagree.

pretty much 'the one' ... application-oriented language

Allow me to disagree again. LISP was lambda calculus made flesh and was very popular in academia. Outside of the ivory towers, the suits used COBOL, and the numbers people used Fortran (followed by a whole lot of Algol-family languages) to write their applications.