nazgulnarsil comments on Metaphilosophical Mysteries - Less Wrong

35 Post author: Wei_Dai 27 July 2010 12:55AM




Comment author: Blueberry 27 July 2010 06:28:52PM 9 points

While people say this sometimes, I don't think this is accurate. Most of the "AI" advances, as far as I know, haven't shed a lot of light on intelligence. They may have solved problems traditionally classified as AI, but that doesn't make the solutions AI; it means we were actually wrong about what the problems required. I'm thinking specifically of statistical natural language processing, which is essentially based on finding algorithms to analyze a corpus, and then using the results on novel text. It's a useful hack, and it does give good results, but it just tells us that those problems are less interesting than we thought.
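The corpus-statistics approach described here can be illustrated with a toy sketch: count which word follows which in a corpus, then reuse those counts to predict words in novel text. This is a minimal bigram model, not any particular NLP system; the function names and tiny corpus are illustrative only.

```python
from collections import defaultdict

def train_bigrams(corpus_tokens):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, word in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][word] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

Nothing in the model knows what a cat or a mat is; it is, as the comment says, a useful hack driven entirely by corpus frequencies.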

Another example is computer chess and Go: chess programs have gotten very good based purely on pruning and brute-force search, and the advances in chess computer ability were driven by computing power, not by any AI advance. It looks like the same will be true of Go programs in the next 10 to 20 years, based on Monte Carlo techniques, but this just means that chess and Go are less interesting games than we thought. You can't brute-force a traditional "AI" problem with a really fast computer and say that you've achieved AI.
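The Monte Carlo technique mentioned for Go can be sketched in miniature on a simpler game. Below is pure Monte Carlo move selection for a toy Nim variant (take 1 or 2 stones, taking the last stone wins): for each legal move, play out many random games and pick the move with the best win rate. This is a hedged illustration of the general idea, not the tree-search refinements real Go programs use.

```python
import random

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def random_playout(stones, to_move):
    """Play uniformly random moves to the end; return the winning player (0 or 1)."""
    player = to_move
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player  # taking the last stone wins
        player = 1 - player

def monte_carlo_move(stones, player, playouts=2000):
    """Pick the move whose random playouts win most often for `player`."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(stones):
        remaining = stones - move
        if remaining == 0:
            return move  # immediate win
        wins = sum(random_playout(remaining, 1 - player) == player
                   for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = move, wins / playouts
    return best_move

# From 4 stones, taking 1 leaves 3 (a losing position for the opponent).
print(monte_carlo_move(4, 0))
```

No game knowledge is encoded beyond the rules; the "skill" comes entirely from spending compute on random playouts, which is exactly the brute-force point being made.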

Comment author: nazgulnarsil 28 July 2010 06:56:08AM -2 points

"You can't brute-force a traditional "AI" problem with a really fast computer and say that you've achieved AI."

Chinese room, etc.

Comment author: Blueberry 28 July 2010 09:22:49PM 3 points

Elaborate? I'm familiar with Searle's Chinese Room thought experiment, but I'm not sure what your point is here.

Comment author: nazgulnarsil 03 August 2010 07:58:35AM 0 points

Much of what feels like deep reasoning from the inside has been revealed by experiment to be simple pattern recognition and completion.

Comment author: mattnewport 28 July 2010 09:47:34PM 0 points

Much recent progress in problems traditionally considered to be 'AI' problems has come not from dramatic algorithmic breakthroughs or from new insights into the way human brains operate but from throwing lots of processing power at lots of data. It is possible that there are few grand 'secrets' to AI beyond this.

The way the human brain has developed suggests to me that human intelligence is not the result of evolution making a series of great algorithmic discoveries on the road to general intelligence, but rather the result of refinements to certain fairly general-purpose computational structures.

The 'secret' of human intelligence may be little more than wiring a bunch of sensors and effectors up to a bunch of computational capacity and dropping it in a complex environment. There may be no such thing as an 'interesting' AI problem by whatever definition you are using for 'interesting'.