Richard_Loosemore comments on Reason as memetic immune disorder - Less Wrong

215 Post author: PhilGoetz 19 September 2009 09:05PM




Comment author: PhilGoetz 19 September 2009 11:22:55PM * 7 points

An interesting observation! One objection is that this approach would require your AI to hold inconsistent beliefs.

Personally, I believe that fast AI systems with inconsistencies, heuristics, and habits will beat verifiably correct logic systems in most applications, and will achieve general AI long before any pure-logic system does. (This is one reason I'm skeptical that coming up with the right decision logic is a workable approach to FAI. I wish Eliezer had been at Ben Goertzel's last AGI conference, just to see what he would have said to Selmer Bringsjord's presentation, which claimed that the only safe AI would be a logic system using a consistent logic, so that we could verify that certain undesirable statements were false in that system. The AI practitioners present found the idea not just laughable but insulting. I said that he was telling us to turn the clock back to 1960 and retry the things we had spent decades failing at. Richard Loosemore gave a long, rude, and devastating reply to Bringsjord, who remained blissfully ignorant of the drubbing he'd just received.)

Comment author: Nick_Tarleton 20 September 2009 08:41:55PM 2 points

An AI doesn't have to have a purely logical structure (let alone a stupid one, e.g. structureless predicates for tables and chairs) in order for us to be able to logically prove important things about it. It seems to me that criticizing formal proofs about FAI by analogy to failed logical AI equivocates between these two things.
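Tarleton's distinction, that a system need not *be* a logic engine for us to prove properties *about* it, can be illustrated with a toy sketch. Everything below (the agent, the state space, the "unsafe" action) is my own hypothetical, not anything from the thread: the agent chooses actions by an arbitrary heuristic score with no logical structure, yet a safety property of it is established by exhaustive enumeration over its finite domain, which is a (trivial) formal proof.

```python
# Toy illustration (hypothetical example, not from the thread):
# an agent that acts by an ad-hoc heuristic, about which we can
# nonetheless prove a safety property.

STATES = range(10)
ACTIONS = ["wait", "move", "shutdown_override"]  # the last is "undesirable"

def heuristic_score(state, action):
    """An arbitrary, messy heuristic -- no logical structure at all."""
    base = (state * 7 + len(action)) % 5
    if action == "shutdown_override":
        base -= 100  # designers heavily penalize the unsafe action
    return base

def choose_action(state):
    """The agent: pick whichever action scores highest."""
    return max(ACTIONS, key=lambda a: heuristic_score(state, a))

def verify_never_unsafe():
    """Exhaustive check over the finite state space -- a trivial proof
    that the heuristic agent never selects the undesirable action."""
    return all(choose_action(s) != "shutdown_override" for s in STATES)

print(verify_never_unsafe())
```

The point of the sketch is only that the proof technique (here, exhaustive enumeration; in realistic systems, model checking or abstract interpretation) is separate from the agent's internal architecture, which is exactly the equivocation Tarleton identifies.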