JoshuaZ comments on What if AI doesn't quite go FOOM? - Less Wrong

Post author: Mass_Driver 20 June 2010 12:03AM


Comment author: JoshuaZ 20 June 2010 05:45:53AM * 5 points

There appears to be a lot of logic here happening implicitly, because I'm not following you.

You wrote:

An AI could not predict its own actions, because any intelligent agent is quite capable of implementing the algorithm: "Take the predictor's predicted action. Do the opposite."

Now, this seems like a very narrow sort of AI: one built specifically to go and do something other than what was predicted.
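As a sketch of the algorithm in the quote above, here is what such a contrarian agent might look like. All names here (`predictor`, `ContrarianAgent`, the action labels) are illustrative, not from the original post; the point is only that any agent able to observe a prediction of itself can trivially falsify it.

```python
def predictor(agent):
    """A naive predictor: report the agent's base policy."""
    return agent.base_choice

class ContrarianAgent:
    """An agent implementing 'take the predicted action; do the opposite'."""

    def __init__(self, base_choice):
        self.base_choice = base_choice  # what it would do absent any prediction

    def act(self, predicted_action):
        # The diagonalization step: whatever was predicted, do the other thing.
        return "cooperate" if predicted_action == "defect" else "defect"

agent = ContrarianAgent(base_choice="defect")
prediction = predictor(agent)   # predictor says "defect"
actual = agent.act(prediction)  # agent does the opposite
assert actual != prediction
```

Note that this only defeats a predictor whose output is shown to the agent before it acts; it says nothing about predictions the agent never sees, which is the distinction drawn in the reply below the quote.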

For example, it is a logical contradiction for someone to predict my actions in advance (and tell me about it), because my "programming" will lead me to do something else, much like the above algorithm.

You seem to be using "logical contradiction" in a non-standard fashion. Do you mean it won't happen given how your mind operates? In that case, permit me to make a few predictions about your actions over the next 48 hours (predictions you could probably make yourself): 1) You will sleep at some point in that period. 2) You will eat at some point in that period. I make both of those with probability around .98 each. If we extend the window to one month, I'm willing to predict with similar confidence that you will make a phone call or check your email within that time. I'm pretty sure you are not going to go out of your way, as a result of these predictions, to do something else.

You also seem to be missing the point about what an AI would actually need to improve. Say, for example, that the AI has a subroutine for factoring integers. If it comes up with a better factoring algorithm, it can simply replace the subroutine with the new one; it doesn't need to think deeply about how this will alter its behavior.
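A minimal sketch of that kind of subroutine swap, with made-up function names: as long as the replacement computes the same function, the program can rebind the name and no caller needs to be re-analyzed.

```python
def factor_v1(n):
    """Smallest prime factor of n >= 2, by naive trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

def factor_v2(n):
    """Improved routine: handle 2 separately, then test only odd divisors.
    Computes the same function as factor_v1, roughly twice as fast."""
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n  # n is prime

# The "self-improvement" step is just rebinding the name. Callers are
# unaffected because the input/output behavior is identical.
factor = factor_v1
assert all(factor_v1(k) == factor_v2(k) for k in range(2, 1000))
factor = factor_v2  # swap in the better algorithm
```

The same argument applies to any component with a well-specified input/output contract: verifying the contract is preserved is a local check, not a global analysis of the system's behavior.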