wedrifid comments on What if AI doesn't quite go FOOM? - Less Wrong
That is just... trivially false.
And that is the worst reasoning I have encountered in at least a week. Not only is it trying to foist a nonsensical definition of 'understand', an AI could predict its own actions. And even if it couldn't, that still wouldn't be a logical contradiction. It'd just be a fact.
An AI could not predict its own actions, because any intelligent agent is quite capable of implementing the algorithm: "Take the predictor's predicted action. Do the opposite."
In order to predict itself (with 100% accuracy), it would have to be able to emulate its own programming, and this would cause a never-ending loop. Thus this is impossible.
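The diagonalization argument above can be sketched as a toy program. This is a minimal illustration, not anything from the original thread; the names `contrarian_agent`, `naive_predictor`, and `fixed_predictor` are hypothetical, and the two actions are arbitrarily labelled "A" and "B":

```python
def contrarian_agent(predictor):
    """An agent that asks the predictor what it will do, then does the opposite."""
    predicted = predictor()
    return "B" if predicted == "A" else "A"

def naive_predictor():
    # A predictor that tries to forecast the agent by simulating it.
    # Simulating the agent means calling the predictor again, so this
    # recurses forever (the "never-ending loop" in the comment above).
    return contrarian_agent(naive_predictor)

def fixed_predictor():
    # Any predictor that just commits to an answer is simply wrong:
    return "A"

print(contrarian_agent(fixed_predictor))  # prints "B" — the opposite of the prediction
```

Whatever the predictor outputs, the agent contradicts it, so no predictor (including the agent itself) can be right about this agent's action; but this only shows the impossibility for an agent built to diagonalize against its predictor.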
Ok. And why would your AI decide to do so? You seem to be showing that a sufficiently pathological AI won't be able to predict its own actions. How this shows that other AIs won't be able to predict their own actions to within some degree of certainty is unclear.