Caspian comments on Desirable Dispositions and Rational Actions - Less Wrong

Post author: RichardChappell 17 August 2010 03:20AM


Comment author: Caspian 22 August 2010 02:23:04PM, 2 points

I think of Omega as a simplified stand-in for other people.

The part about Omega being omniscient and knowably trustworthy isn't solved. But I think the problem of Omega rewarding bizarre irrational behaviour on your part mostly goes away if you assume it's fairly human-like, perhaps following UDT or some other decision theory itself. The human motivation for it posing Newcomb's problem could be that it wants one of the boxes kept closed for some reason, and will reward you for keeping it closed. To make it fit this explanation, Omega should say it doesn't want you to open the box, and preferably give a reason.

Kinds of things the human-like Omega might do:

  • trust you or not based on its prediction of your behaviour.
  • prefer that you be rewarded if you act how it wants.
  • prefer that you be punished if you harm it.
  • tell you what it wants of you.

But it should be less likely to reward you for acting irrationally for no reason, or for doing what it wants you not to do.
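Treating Omega as a merely human-like, imperfect predictor makes the Newcomb comparison quantitative: one-boxing beats two-boxing whenever the predictor's accuracy is even slightly better than chance. Here is a minimal sketch, assuming the standard illustrative payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor accuracy p; these numbers are assumptions for illustration, not anything stated in the comment above.

```python
# Sketch: expected payoffs in Newcomb's problem when Omega is an
# imperfect, human-like predictor with accuracy p (probability that
# its prediction matches your actual choice).
# Payoffs are the standard illustrative values, chosen here as an assumption.

BIG = 1_000_000    # opaque box: filled iff Omega predicted one-boxing
SMALL = 1_000      # transparent box: always contains this amount

def expected_value(action: str, p: float) -> float:
    """Expected payoff for `action` given predictor accuracy p."""
    if action == "one-box":
        # You get BIG only when Omega correctly predicted one-boxing.
        return p * BIG
    elif action == "two-box":
        # You always get SMALL, plus BIG when Omega wrongly
        # predicted you would one-box.
        return SMALL + (1 - p) * BIG
    raise ValueError(f"unknown action: {action}")

for p in (0.5, 0.51, 0.9, 0.99):
    one = expected_value("one-box", p)
    two = expected_value("two-box", p)
    print(f"p={p:.2f}: one-box={one:>11,.0f}  two-box={two:>11,.0f}")
```

With these payoffs the crossover sits just above p = 0.5005, so even a modestly reliable human-like predictor is enough to make one-boxing the better bet in expectation.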