Vladimir_Nesov comments on Newcomb's Problem standard positions - Less Wrong

Post author: Eliezer_Yudkowsky 06 April 2009 05:05PM

Comment author: Vladimir_Nesov 06 April 2009 06:07:23PM  4 points

Why is your view not easily summarized? From what I can see, the solution satisfying all of the requirements looks rather simple, without even any need to define causality and the like. I may write it up at some point in the coming months, once some lingering confusions (not crucial to the main point) are resolved.

Basically, all local decisions come from the same computation that would be performed to set the most general precommitment for all possible states of the world. Expected utility maximization is defined only once, on the global state space; the actual actions then merely retrieve the global solution, given the observations encountered. Observations don't change the state space over which the expected utility optimization is defined (nor the optimal global solution or the preference order on global solutions); they only change what the decisions in a given (counterfactual) branch can affect. Since the global precommitment is the only thing that defines the local agents' decisions, the "commitment" part can be dropped, and the agents' actions can simply be defined to follow the resulting preference order.

I admit, it'd take some work to write that up understandably, but it doesn't seem to involve difficult technical issues.
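The global-policy idea above can be sketched in a few lines. This is my own toy illustration, not Nesov's formalism: the worlds, observations, and payoffs below are invented, and a policy is simply a mapping from observation to action, scored once over the whole state space rather than per encounter.

```python
from itertools import product

# Hypothetical toy model: each world state occurs with some probability and
# produces one observation; utility depends on the world and the action taken.
WORLDS = {            # world -> (probability, observation seen in that world)
    "w1": (0.5, "obs_a"),
    "w2": (0.5, "obs_b"),
}
OBSERVATIONS = ["obs_a", "obs_b"]
ACTIONS = ["one_box", "two_box"]

def utility(world, action):
    # Invented payoff table; in w1 "one_box" pays more, in w2 "two_box" does.
    table = {("w1", "one_box"): 10, ("w1", "two_box"): 3,
             ("w2", "one_box"): 2,  ("w2", "two_box"): 8}
    return table[(world, action)]

def expected_utility(policy):
    # Expected utility of a *global* policy: observations only select which
    # branch of the policy fires; the optimization ranges over whole policies,
    # never over isolated per-observation choices.
    return sum(p * utility(w, policy[obs])
               for w, (p, obs) in WORLDS.items())

# Define the maximization once, over all observation->action mappings.
policies = [dict(zip(OBSERVATIONS, acts))
            for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
global_policy = max(policies, key=expected_utility)

def act(observation):
    # A local decision merely retrieves the precomputed global solution.
    return global_policy[observation]
```

Note that `act` does no optimization of its own: the observation changes nothing about the preference order over policies, only which already-fixed branch applies.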

Comment author: Wei_Dai 07 April 2009 03:03:49AM 2 points

I think your summary is understandable enough, but I don't agree that observations should never change the optimal global solution or preference order on the global solutions, because observations can tell you which observer you are in the world, and different observers can have different utility functions. See my counter-example in a separate comment at http://lesswrong.com/lw/90/newcombs_problem_standard_positions/5u4#comments.

Comment author: Vladimir_Nesov 07 April 2009 10:49:36AM  0 points

From the global point of view, you only consider different possible experiences, which imply different possible situations. Nothing changes, because everything is determined from the global viewpoint. If you want to determine certain decisions in response to certain possible observations, you specify that globally as well, and set it in stone. Whatever happens to you, you can (mathematically speaking) consider it in advance, as an input sequence to your cognitive algorithm, and prepare a plan of action in response. The fact that you are participating in a certain mind-copying experiment is also data, to which you respond in a certain way.

This is of course not for human beings; it is for something that holds much more strongly to reflective consistency. And in that setting, changing preferences is unacceptable.
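The "prepare the plan of action in advance" framing can likewise be sketched as a plan frozen over every possible input sequence. All names and the stand-in decision rule below are hypothetical, chosen only to show that acting reduces to a lookup into a precomputed table:

```python
from itertools import product

# Illustrative sketch: treat every possible observation history as an input
# sequence to the agent's algorithm and fix the response to each in advance.
OBS = ["left", "right"]          # hypothetical observation alphabet
HORIZON = 2                      # plan over all sequences up to this length

def choose_action(history):
    # Stand-in decision rule used while building the plan; a real agent
    # would put its global expected-utility computation here.
    return "A" if history.count("left") >= history.count("right") else "B"

# Enumerate every observation sequence up to the horizon and set the
# response in stone, before any observation is actually made.
PLAN = {seq: choose_action(seq)
        for n in range(HORIZON + 1)
        for seq in product(OBS, repeat=n)}

def act(history):
    # Whatever happens, the response was already prepared.
    return PLAN[tuple(history)]
```

Nothing is decided "at runtime": encountering an observation, including the news that one is inside a mind-copying experiment, is just more input selecting a branch of the fixed plan.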