Vladimir_Nesov comments on Outlawing Anthropics: An Updateless Dilemma - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (194)
Actually... how is this an anthropic situation AT ALL?
I mean, wouldn't it be equivalent to, say, gathering 20 rational people (who understand the Prisoner's Dilemma, etc., and can certainly manage to coordinate with each other) who are allowed to meet in advance and discuss the situation...
I show up and tell them that I have two buckets of marbles, some of which are green and some of which are red.
One bucket has 18 green and 2 red, and the other bucket has 18 red and 2 green.
I will flip (or already have flipped) a logical coin. Depending on the outcome, I will use one bucket or the other.
After having an opportunity to discuss strategy, they will each be allowed to reach into the bucket without looking, pull out a marble, and look at it; then, if it's green, choose whether to pay, and so on. (In case it's not obvious, the payout rules are equivalent to the OP's.)
As near as I can determine, this situation is entirely equivalent to the OP and is in no way an anthropic one. If the OP really is an argument against anthropic updates in the presence of logical uncertainty, then it's actually an argument against the general case of Bayesian updating in the presence of logical uncertainty, even when there's no anthropic element at all!
EDIT: oh, in case it's not obvious, marbles are not replaced after being drawn from the bucket.
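The equivalence can be made concrete with a quick expected-value calculation. The payoff numbers below (+$1 per green-marble holder and -$3 per red-marble holder if the green holders accept the bet) are assumed here to match the original post; treat them as an illustrative sketch of the paradox rather than a restatement of the OP:

```python
from fractions import Fraction

# Assumed payoff rule (taken to match the OP): if the agents who drew green
# accept the bet, each green holder gains $1 and each red holder loses $3.
def payoff(n_green, n_red, accept):
    return (1 * n_green - 3 * n_red) if accept else 0

# Bucket A (coin heads): 18 green, 2 red.  Bucket B (tails): 2 green, 18 red.

# Ex-ante expected value of the "always accept" strategy, before any draw:
ev_ante = Fraction(1, 2) * payoff(18, 2, True) \
        + Fraction(1, 2) * payoff(2, 18, True)
print(ev_ante)  # -20: accepting is a losing bet before a marble is seen

# Naive Bayesian update after drawing green: P(heads | green) = 18/20.
p_heads = Fraction(18, 20)
ev_post = p_heads * payoff(18, 2, True) \
        + (1 - p_heads) * payoff(2, 18, True)
print(ev_post)  # 28/5 = +5.6: after updating, the same bet looks profitable
```

The mismatch between -20 and +5.6 is exactly the tension the comment claims carries over unchanged to this non-anthropic marble version.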
Right, and this perspective is very close to the intuition behind UDT: you consider different instances of yourself at different times as separate decision-makers that all share a common agenda (a "global strategy"), coordinated "off-stage", and implement it without change in whatever circumstances each instance encounters. The "off-stageness" of the coordination is more naturally described by TDT, which allows treating different agents as UDT-instances of the same strategy, but the precise way in which that happens remains magic.
Nesov, the reason why I regard Dai's formulation of UDT as such a significant improvement over your own is that it does not require offstage coordination. Offstage coordination requires a base theory and a privileged vantage point and, as you say, magic.
I still don't understand this emphasis. Here I sketched the sense in which I mean the global solution -- it's more about the definition of preference than about the actual computations and actions the agents perform (locally). There is an abstract concept of a global strategy that can be characterized as "offstage", but there is no offstage computation or offstage coordination, and in general the complete computation of the global strategy isn't performed even locally -- only approximations, often approximations that make it impossible to implement the globally best solution.
In the above comment, by "magic" I referred to the exact mechanism that says in what way and to what extent different agents are running the same algorithm, which is more in the domain of TDT; UDT generally doesn't talk about separate agents, only about different possible states of the same agent. This is why neither concept solves the bargaining problem: it's outside UDT's domain, and TDT takes the relevant pieces of the puzzle as given, in its causal graphs.
For further disambiguation, see for example this comment you made: