Khoth comments on A solvable Newcomb-like problem - part 1 of 3 - Less Wrong Discussion

1 Post author: Douglas_Reay 03 December 2012 09:26AM

Comment author: Khoth 03 December 2012 10:05:02AM *  2 points [-]

Omega's best strategy is to put money in both boxes if you one-box with probability >0.5. Omega's expected winnings are positive if you one-box with probability >10/11 (I think), but I don't presume to know enough about superintelligent AI psychology to know whether that's Omega's cutoff for switching strategy from "maximise money" to "screw you", so I'd just one-box.
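The 10/11 figure can be checked generically. Suppose Omega gains some amount g per correct prediction and loses l per wrong one (the specific stakes here are assumptions for illustration, not taken from the post): Omega's expected winnings p·g − (1−p)·l are positive exactly when p > l/(g+l). A minimal sketch:

```python
from fractions import Fraction

def breakeven_one_box_probability(gain, loss):
    """Smallest one-boxing probability p at which Omega's expected
    winnings p*gain - (1-p)*loss turn positive: p > loss/(gain + loss)."""
    return Fraction(loss, gain + loss)

# A hypothetical 1:10 gain/loss ratio reproduces the 10/11 cutoff.
print(breakeven_one_box_probability(1, 10))  # 10/11
```

Any payoff structure with a 1:10 gain-to-loss ratio for Omega gives the same 10/11 threshold; other ratios shift it accordingly.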

Alternative solution: Offer Omega $1,100,000,000,000 to put nothing in the boxes.

Comment author: Douglas_Reay 03 December 2012 04:56:32PM 0 points [-]

Alternative solution: Offer Omega $1,100,000,000,000 to put nothing in the boxes.

Love it. But, no, that would be caught by Alpha's monitoring before things came to that point.

I was looking for an elegant way to say "These are the only possible options; ignore any chance that there will be any other result (such as $1,000,000 in box B)." I was trying to give a context where the rules made sense. In terms of reputation being exchangeable for money, that represents both Alpha and Omega being immensely embarrassed among the AI society at screwing up the basic physical setup and invalidating their own contest. It is sort of a side issue. :-)