gjm comments on Progress on automated mathematical theorem proving? - Less Wrong

Post author: JonahSinick 03 July 2013 06:40PM




Comment author: gjm 03 July 2013 11:23:35PM 0 points

Because doing mathematics well is something that takes really exceptional brainpower and that even most very intelligent people aren't capable of.

Just like chess.

Comment author: endoself 04 July 2013 12:01:39AM 7 points

I'd like to make explicit the connection of this idea to hard takeoff, since it's something I've thought about before but isn't stated explicitly very often. Namely, this provides some reason to think that by the time an AGI is human-level in the things humans have evolved to do, it will be very superhuman in things that humans have more difficulty with, like math and engineering.

Comment author: ciphergoth 05 July 2013 11:33:03AM 2 points

I'm against sarcasm as a way of making an argument; it makes it less pleasant to discuss things here, and it can hide a bad argument.

Comment author: gjm 05 July 2013 01:08:24PM 0 points

First of all: I hereby apologize to orthonormal if s/he felt insulted or belittled or mocked by what I wrote. That was in no way my intention.

I'm not sure "sarcasm" is quite the right term, though. I really do think that, among human beings, doing mathematics well takes exceptional brainpower, and even most very intelligent people can't do it; I really do think that playing chess well has the same feature; and I really do think that that feature is why orthonormal expects doing mathematics well to be one of the last achievements of AI before general superhumanness.

If my last paragraph had said "I don't think that's very good evidence, though; you could say the same about playing chess well, and we all know what computers have done to that" then it would have been clearer and that might have been worth the loss of brevity. But I'm not sure sarcasm is the failure mode (if failure it be) here.

Comment author: JonahSinick 03 July 2013 11:45:19PM 2 points

Following up on JoshuaZ's comment, human brains aren't at all optimized for doing higher math — it's a mystery that humans can do higher math at all. Human brains are optimized for things like visual processing and adapting to the cultures that they grow up in. So I would expect human-level math AI to be easier than human-level visual-processing AI.

> Because doing mathematics well is something that takes really exceptional brainpower and that even most very intelligent people aren't capable of.

What do you mean by "really exceptional brainpower"? And what do you mean by "doing mathematics well"?

Comment author: gjm 04 July 2013 12:08:25AM 3 points

What I meant was: (1) empirically it seems that very few human beings are good at proving theorems (or even following other people's proofs), (2) being good at proving theorems seems to correlate somewhat with other things we think of as cleverness, (3) these facts are probably part of why it seems to orthonormal (and, I bet, to lots of others) as if skill in theorem-proving would be one of the hardest things for AI to achieve, but (4) #1 and #2 hold to some extent for other things, like being good at playing chess, that also used to be thought of as particularly impressive human achievements but that seem to be easier to make computers do than all sorts of things that initially seem straightforward.

All of which is rather cumbersome, which is why I put it in the elliptical way I did.

Comment author: JonahSinick 04 July 2013 12:13:11AM 1 point

Heh, I missed the irony :-)

Comment author: JoshuaZ 04 July 2013 12:18:22AM 2 points

I didn't, but I realized someone could, which is why my other comment started off with the words "more explicitly", essentially unpacking gjm's remark.

Comment author: JoshuaZ 03 July 2013 11:38:01PM 1 point

More explicitly, there are a lot of things that humans can do easily that seem to be extremely difficult for AI (vision and language are the most obvious), and there are many things that humans have trouble with at which we can easily make AI do a decent job. At some level, this shouldn't be that surprising, because a lot of the things humans find really easy are things for which millions of years of evolution have optimized the underlying procedures.