JoshuaZ comments on What if AI doesn't quite go FOOM? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There appears to be a lot of logic here that is happening implicitly, because I'm not following you.
You wrote:
Now, this seems like a very narrow sort of AI that would go and then do something else against what was predicted.
You seem to be using "logical contradiction" in a non-standard fashion. Do you mean it won't happen given how your mind operates? In that case, permit me to make a few predictions about your actions over the next 48 hours (predictions you could probably make yourself): 1) You will sleep at some point in that time period. 2) You will eat at some point in that time period. I make both of those with probability around 0.98 each. If we extend the window to one month, I'm willing to predict with similar confidence that you will make a phone call or check your email within that time. I'm pretty sure you are not going to go out of your way, as a result of these predictions, to do something else instead.
You also seem to be missing the point about what an AI would actually need to improve. Say, for example, that the AI has a subroutine for factoring integers. If it comes up with a better algorithm for factoring integers, it can replace the subroutine with the new one. It doesn't need to think deeply about how this will alter its behavior.
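The swap described above can be sketched as a drop-in replacement: two factoring routines with the same interface, where the slower one is replaced by the faster one without touching anything else. This is just an illustrative sketch; the function names are made up, and the "improved" algorithm here is Pollard's rho, chosen only as a stand-in for "a better algorithm":

```python
import math
import random

def factor_trial_division(n):
    """Original subroutine: factor n by trial division. Returns sorted prime factors with multiplicity."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def _is_prime(n):
    """Deterministic Miller-Rabin for n < 3.3e24 (fixed witness set)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def _pollard_rho(n):
    """Find one nontrivial factor of composite n."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d

def factor_pollard(n):
    """Improved subroutine: same interface as factor_trial_division, faster algorithm."""
    if n == 1:
        return []
    if _is_prime(n):
        return [n]
    d = _pollard_rho(n)
    return sorted(factor_pollard(d) + factor_pollard(n // d))

# The "self-improvement" step is just rebinding the name: callers are unchanged.
factor = factor_trial_division
assert factor(4656) == [2, 2, 2, 2, 3, 97]
factor = factor_pollard  # drop-in replacement
assert factor(4656) == [2, 2, 2, 2, 3, 97]
```

Because both routines satisfy the same contract (same input, same output), the replacement is behavior-preserving by construction, which is the point: no deep reasoning about downstream consequences is required.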