wedrifid comments on What if AI doesn't quite go FOOM? - Less Wrong

Post author: Mass_Driver 20 June 2010 12:03AM




Comment author: wedrifid 20 June 2010 02:42:33AM 5 points

> One of the many reasons that I will win my bet with Eliezer is that it is impossible for an AI to understand itself.

That is just... trivially false.

> If it could, it would be able to predict its own actions, and this is a logical contradiction, just as it is for us.

And that is the worst reasoning I have encountered in at least a week. Not only is it trying to foist a nonsensical definition of 'understand', an AI could predict its own actions. AND even if it couldn't, it still wouldn't be a logical contradiction. It'd just be a fact.
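A minimal sketch of the point (the names and the toy policy below are hypothetical, not from the thread): a deterministic agent whose decision procedure is a pure function can predict its own action simply by evaluating that function before acting.

```python
# Toy deterministic agent (illustrative names only). Its policy is a
# pure function, so "predicting its own action" is just a matter of
# evaluating that function ahead of time.

def policy(observation: int) -> str:
    # Trivial fixed policy: cooperate on even inputs, defect on odd.
    return "cooperate" if observation % 2 == 0 else "defect"

def predict_own_action(observation: int) -> str:
    # Self-prediction here is simply running one's own decision procedure.
    return policy(observation)

def act(observation: int) -> str:
    return policy(observation)

obs = 42
assert predict_own_action(obs) == act(obs)  # the prediction is always right
```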

Comment author: Unknowns 20 June 2010 05:29:48AM -2 points

An AI could not predict its own actions, because any intelligent agent is quite capable of implementing the algorithm: "Take the predictor's predicted action. Do the opposite."

In order to predict itself (with 100% accuracy), it would have to emulate its own programming, including the part of itself that is doing the emulating, and this would cause a never-ending loop. Thus this is impossible.
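Both claims can be sketched in a few lines (hypothetical names throughout): the "do the opposite" agent defeats any predictor it is handed, and naive prediction-by-full-self-emulation never terminates.

```python
# Sketch of the two claims above (all names illustrative). First, the
# "do the opposite" diagonalization: whatever predictor this agent is
# handed, its action refutes the prediction by construction.

ACTIONS = ("cooperate", "defect")

def contrarian_act(predictor) -> str:
    predicted = predictor()  # ask the predictor what we will do
    return ACTIONS[0] if predicted == ACTIONS[1] else ACTIONS[1]

def fixed_predictor() -> str:
    return "cooperate"  # any predictor at all will do

# The predictor is always wrong about this particular agent:
assert contrarian_act(fixed_predictor) != fixed_predictor()

# Second, the regress claim: naive self-prediction by full self-emulation
# never terminates, because the emulation contains another emulation.
def naive_self_emulating_act() -> str:
    return naive_self_emulating_act()  # unbounded recursion if called
```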

Comment author: JoshuaZ 20 June 2010 05:32:43AM 5 points

> An AI could not predict its own actions, because any intelligent agent is quite capable of implementing the algorithm: "Take the predictor's predicted action. Do the opposite."

Ok. And why would your AI decide to do so? At most you are showing that a sufficiently pathological AI won't be able to predict its own actions. How that is supposed to show that other AIs can't predict their own actions to within some degree of certainty is unclear.
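A minimal sketch of prediction "within some degree of certainty" (illustrative names; epsilon is an assumed noise parameter, not from the thread): even a stochastic agent can predict its own action distribution, and so predict its next action correctly with a known probability.

```python
# Stochastic agent sketch (hypothetical names). The agent knows its own
# randomization, so its best self-prediction is the modal action, which
# is correct with probability 1 - EPSILON.
import random

EPSILON = 0.1

def noisy_act() -> str:
    # Cooperates with probability 1 - EPSILON, defects otherwise.
    return "cooperate" if random.random() > EPSILON else "defect"

def predict_own_action() -> str:
    # Best prediction given known randomization: the modal action.
    return "cooperate"

trials = 10_000
hits = sum(predict_own_action() == noisy_act() for _ in range(trials))
print(f"self-prediction accuracy ~ {hits / trials:.2f}")  # about 0.90
```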