Wei_Dai comments on Newcomb's Problem standard positions - Less Wrong

Post author: Eliezer_Yudkowsky 06 April 2009 05:05PM


Comment author: Wei_Dai 07 April 2009 02:53:22AM 2 points

It's not clear that reflective consistency is feasible for human beings.

Consider the following thought experiment. You’re about to be copied either once (with probability .99) or twice (with probability .01). After that, one of your two or three instances will be randomly selected to be the decision-maker. He will get to choose from the following options, without knowing how many copies were made:

A: The decision-maker will have a pleasant experience. The other(s) will have unpleasant experience(s).

B: The decision-maker will have an unpleasant experience. The other(s) will have pleasant experience(s).

Presumably, you’d like to commit your future self to pick option B. But without some sort of external commitment device, it’s hard to see how you can prevent your future self from picking option A.
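The pre-copy arithmetic behind that preference can be sketched as follows (a minimal sketch; the variable names are mine, but the probabilities and payoff structure come from the setup above):

```python
# Before copying: 1 copy (2 instances) with prob .99, 2 copies (3 instances) with prob .01.
P_ONE_COPY = 0.99
P_TWO_COPIES = 0.01

# Expected number of "other" instances besides the decision-maker.
expected_others = P_ONE_COPY * 1 + P_TWO_COPIES * 2  # 1.01

# Option A: decision-maker pleasant, others unpleasant.
pleasant_a, unpleasant_a = 1.0, expected_others
# Option B: decision-maker unpleasant, others pleasant.
pleasant_b, unpleasant_b = expected_others, 1.0

print(f"A: {pleasant_a} pleasant, {unpleasant_a:.2f} unpleasant (expected)")
print(f"B: {pleasant_b:.2f} pleasant, {unpleasant_b} unpleasant (expected)")
```

Counting all instances equally, option B yields more expected pleasant experiences (1.01 vs. 1), which is why your pre-copy self prefers it; the tension arises because the selected decision-maker, reasoning only about his own experience, prefers A.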

Comment author: cousin_it 07 April 2009 02:34:36PM 0 points

Why so complicated? Just split into two selves and play Prisoner's Dilemma with each other. A philosophically inclined person could have major fun with this experiment, e.g. inventing some sort of Agentless Decision Theory, while mathematically inclined people enjoy the show from a safe distance.

Comment author: Wei_Dai 07 April 2009 08:20:38PM 2 points

I structured my thought experiment that way specifically to avoid superrationality-type justifications for playing Cooperate in PD.