Caspian comments on Desirable Dispositions and Rational Actions - Less Wrong
I think of Omega as a simplified stand-in for other people.
The part about Omega being omniscient and knowably trustworthy isn't solved. But I think the problem of Omega rewarding bizarre irrational behaviour on your part mostly goes away if you assume it's fairly human-like, perhaps itself following UDT or some other decision theory. A human-like motivation for posing Newcomb's problem could be that Omega wants one of the boxes kept closed for some reason, and will reward you for keeping it closed. To fit this explanation, Omega should say it doesn't want you to open the box, and preferably give a reason.
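One way to see why a merely human-like (imperfect) Omega is enough is to compare expected payoffs. The sketch below is illustrative only: the payoff amounts and accuracy figures are my own assumptions, not anything stated in the comment, and "accuracy" stands in for how well a human-like predictor reads you.

```python
# Hypothetical expected-value sketch of Newcomb's problem against an
# imperfect, human-like predictor. All numbers here are assumptions
# chosen for illustration.

BIG = 1_000_000   # opaque box contents if Omega predicted one-boxing
SMALL = 1_000     # transparent box contents (always visible)

def expected_value(action: str, accuracy: float) -> float:
    """Expected payoff of 'one-box' or 'two-box' when Omega predicts
    your action correctly with probability `accuracy`."""
    if action == "one-box":
        # The opaque box is full iff Omega correctly predicted one-boxing.
        return accuracy * BIG + (1 - accuracy) * 0
    if action == "two-box":
        # You always get SMALL; the opaque box is full only if Omega erred.
        return accuracy * SMALL + (1 - accuracy) * (BIG + SMALL)
    raise ValueError(f"unknown action: {action}")

# Even a modestly reliable predictor makes one-boxing come out ahead:
for acc in (0.9, 0.75, 0.6):
    print(acc, expected_value("one-box", acc), expected_value("two-box", acc))
```

With these payoffs, one-boxing has the higher expected value whenever the predictor is right a bit more than half the time, so nothing close to omniscience is required.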
Kinds of things the human-like Omega might do:
But it should be less likely to reward you for acting irrationally for no reason, or for doing what it wants you not to do.