Eliezer_Yudkowsky comments on Counterfactual Mugging - Less Wrong

52 points | Post author: Vladimir_Nesov | 19 March 2009 06:08AM


Comment author: Nebu 19 March 2009 06:19:44PM 2 points [-]

> A (quasi)rational agent with access to genuine randomness (such as a human)

Whaddaya mean humans are rational agents with access to genuine randomness? That's what we're arguing about in the first place!

> A superintelligence could almost perfectly predict the probability distribution over my actions, but by quantum entanglement it would not be able to predict my actual actions.

Perhaps Omega is entangled with your brain such that in all the worlds in which you would choose to one-box, he would predict that you one-box, and in all the worlds in which you would choose to two-box, he would predict that you two-box?

Comment author: Eliezer_Yudkowsky 19 March 2009 07:32:52PM 3 points [-]

In the original formulation, if Omega expects you to flip a coin, he leaves box B empty.
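The rule Eliezer cites can be made concrete with a toy payoff calculation. This is only a sketch: the strategy labels and the $1,000 / $1,000,000 amounts follow the standard Newcomb's problem setup, not anything stated in this thread.

```python
def omega_fills_b(predicted_strategy):
    """Omega's policy as stated above: box B is filled only for a
    predicted committed one-boxer. A predicted coin-flipper (or
    two-boxer) finds box B empty."""
    return predicted_strategy == "one-box"

def payoff(actual_choice, predicted_strategy):
    """Box A always contains $1,000; box B contains $1,000,000
    if and only if Omega filled it."""
    b = 1_000_000 if omega_fills_b(predicted_strategy) else 0
    if actual_choice == "one-box":
        return b
    return 1_000 + b  # two-box: take both boxes

# A committed one-boxer gets $1,000,000.
committed = payoff("one-box", "one-box")

# A coin-flipper faces an empty box B either way, so the expected
# take is 0.5 * $0 + 0.5 * $1,000 = $500.
flipper = 0.5 * payoff("one-box", "coin-flip") + \
          0.5 * payoff("two-box", "coin-flip")
```

Under this policy, randomizing is strictly worse than committing to one-box, which is the force of the reply.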