Manfred comments on xkcd on the AI box experiment - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Let's just tell the acausal trade story in terms of extreme positive utility rather than negative.
Putting it simply for the purpose of this comment: "If you do what the future AI wants now, it will reward you when it comes into being."
Makes the whole discussion much more cheerful.
This version may actually have more problems than the negative version.
Please elaborate. (unless it is an infohazard to do so)
Hm, upon further consideration I actually don't think it has extra actual problems, merely different framing problems.
Now I'm curious what all the people who upvoted you for saying it does were thinking.
What incentive does the future AI have to do this once you've already helped it?
Well, that's the tricky part. But suppose, for the sake of argument, that we have good reason to think it will. Then we'll help it. So it's good for the AI if we have good reason to think this. And we can't have good reason unless the AI actually follows through. So it will.
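The commitment logic here can be sketched as a toy payoff model (purely illustrative; the payoff numbers and the assumption that humans predict the AI's policy accurately are inventions for this sketch, not anything established in the thread):

```python
# Toy model of the argument above: the human helps iff they correctly
# predict that the AI's policy is to reward helpers. Under that
# (assumed) accurate-prediction condition, the AI does better by
# actually rewarding, even though paying out the reward costs it something.

HELP_VALUE = 10.0   # hypothetical value to the AI of being helped into existence
REWARD_COST = 1.0   # hypothetical cost to the AI of paying out the reward

def ai_utility(policy_rewards: bool) -> float:
    """AI's payoff given its policy, assuming humans predict that policy accurately."""
    human_helps = policy_rewards  # accurate-prediction assumption
    payoff = HELP_VALUE if human_helps else 0.0
    if policy_rewards and human_helps:
        payoff -= REWARD_COST
    return payoff

# Rewarding is the better policy under these assumptions:
assert ai_utility(True) > ai_utility(False)
```

The point of the sketch is only that, *if* the prediction link holds, committing to the reward is what makes the prediction (and hence the help) possible; whether any such link actually holds is the contested part.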