Unknowns comments on What if AI doesn't quite go FOOM? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (186)
An AI could not predict its own actions, because any intelligent agent is quite capable of implementing the algorithm: "Take the predictor's predicted action. Do the opposite."
To predict itself with 100% accuracy, it would have to emulate its own program, which would in turn have to emulate itself, and so on in a never-ending regress. Thus perfect self-prediction is impossible.
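Both halves of the argument can be sketched in a few lines of toy code. Everything here (the agent, the predictors, their names) is a hypothetical illustration, not anyone's actual proposal: a "contrarian" agent consults the predictor and does the opposite, so a naive predictor is always wrong, and a predictor that works by exact simulation recurses forever.

```python
def contrarian_agent(predictor):
    """Take the predictor's predicted action. Do the opposite."""
    predicted = predictor(contrarian_agent)  # ask the predictor what we'll do
    return "B" if predicted == "A" else "A"

def naive_predictor(agent):
    # A predictor that just guesses "A" without simulating the agent.
    return "A"

def perfect_predictor(agent):
    # A "perfect" predictor that simulates the agent exactly. Against the
    # contrarian agent this recurses forever: the agent runs the predictor,
    # which runs the agent, which runs the predictor...
    return agent(perfect_predictor)

print(contrarian_agent(naive_predictor))  # "B": the naive prediction is falsified

try:
    contrarian_agent(perfect_predictor)
except RecursionError:
    print("exact self-emulation never terminates")
```

The first print shows the diagonalization step (whatever is predicted, the agent does otherwise); the `RecursionError` is the never-ending loop from the emulation argument surfacing as Python's recursion limit.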
Ok. And why would your AI decide to do so? You seem to be showing that a sufficiently pathological AI won't be able to predict its own actions. It doesn't follow that other AIs won't be able to predict their own actions to within some degree of certainty.