MatthewB comments on Reason as memetic immune disorder - Less Wrong

215 Post author: PhilGoetz 19 September 2009 09:05PM




Comment author: PhilGoetz 19 September 2009 11:22:55PM *  7 points

An interesting observation! One objection is that this approach would require your AI to hold inconsistent beliefs.

Personally, I believe that fast AI systems with inconsistencies, heuristics, and habits will beat verifiably-correct logic systems in most applications; and will achieve general AI long before any pure-logic systems. (This is one reason why I'm skeptical that coming up with the right decision logic is a workable approach to FAI. I wish that Eliezer had been at Ben Goertzel's last AGI conference, just to see what he would have said to Selmer Bringsjord's presentation claiming that the only safe AI would be a logic system using a consistent logic, so that we could verify that certain undesirable statements were false in that system. The AI practitioners present found the idea not just laughable, but insulting. I said that he was telling us to turn the clock back to 1960 and try again the things that we spent decades failing at. Richard Loosemore gave a long, rude, and devastating reply to Bringsjord, who remained blissfully ignorant of the drubbing he'd just received.)

Comment author: MatthewB 26 December 2009 02:41:16PM 0 points

Steve Omohundro has given several talks about the consequences of a purely logical or rationally exact AI system.

His Singularity Summit 2007 talk, "The Nature of Self-Improving AI," discussed what would happen if such an agent had the wrong rules constraining its behavior. A purely logical system struck me as one possible example of the agent type he was referring to.