Christian_Szegedy comments on Outlawing Anthropics: An Updateless Dilemma - Less Wrong

Post author: Eliezer_Yudkowsky 08 September 2009 06:31PM




Comment author: Christian_Szegedy 10 September 2009 03:56:24AM 1 point

This committal is what you wish to optimise over from TDT/UDT, and clearly this requires knowledge about the likelihood of different decision-making groups.

I was influenced by the OP and used to think that way. However, I now think that this is not the root problem.

What if the agents face more complicated decision problems: for example, rewards depending on the parity of the number of agents voting a certain way?

I think what is essential is that the agents have to think globally (categorical imperative, hmm?).

Practically: if the agent recognizes that there is a collective decision, then it should model all conceivable available protocols (while making sure a priori that all cooperating agents perform the same or compatible analysis, if they can't communicate), and then choose the protocol with the best overall total gain. In the case of the OP, that means the second calculation in the OP, not messing around with correction factors based on responsibilities, etc.

Special considerations based on group sizes etc. may be incidentally correct in certain situations, but they are just not general enough. The crux is that the ultimate test is simply the expected-value computation for the protocol of the whole group.
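This "ultimate test" can be sketched numerically. A minimal sketch, assuming the OP's setup as I recall it (fair coin; heads gives 18 green rooms and 2 red, tails gives 2 green and 18 red) and the OP's payoffs of +$1 to each green-roomer and -$3 from each red-roomer when the bet is taken:

```python
# Expected total gain of the "everyone in a green room takes the bet" protocol.
# Assumed setup (from the OP, as I recall it): fair coin;
# heads -> 18 green / 2 red rooms, tails -> 2 green / 18 red.
# Assumed payoffs: +$1 per green-roomer, -$3 per red-roomer, if the bet is taken.

def total_gain(greens, reds, pay_green=1.0, pay_red=-3.0):
    """Total payoff across all 20 agents when every green-roomer accepts."""
    return greens * pay_green + reds * pay_red

# Second calculation of the OP: weight each world class equally (0.5 each).
ev_world = 0.5 * total_gain(18, 2) + 0.5 * total_gain(2, 18)

# Naive "anthropic" calculation: weight by the 0.9/0.1 individual update instead.
ev_anthropic = 0.9 * total_gain(18, 2) + 0.1 * total_gain(2, 18)

print(ev_world)      # negative -> the whole-group protocol refuses the bet
print(ev_anthropic)  # positive -> the naive anthropic version would accept
```

The sign flip between the two weightings is exactly the disagreement the OP turns on.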

Comment author: Jonathan_Lee 10 September 2009 11:52:00AM 1 point

Between non-communicating copies of your decision algorithm, every instance is forced to come to the same answers/distributions on all questions, as otherwise Eliezer can make money betting between different instances of the algorithm. It's not really a categorical imperative, beyond demanding consistency.

The crux of the OP is asking for a probability assessment of the world, not whether the decision theory functions.

I'm not postulating 1/n allocation of responsibility; I'm stating that the source of the confusion is conflating P(a random individual is in a world of class Ai | Data) with P(a random world is of class Ai | Data), and that these are not equal if the number of individuals with access to Data differs between distinct classes of world.

Hence in this case there are 2 classes of world: A1 with 18 Green rooms and 2 Reds, and A2 with 2 Green rooms and 18 Reds.

P(Random individual is in the A1 class | Woke up in a green room) = 0.9, by anthropic update.
P(Random world is in the A1 class | Some individual woke up in a green room) = 0.5.

Why? Because in A1, 18/20 individuals fit the description "Woke up in a green room", but in A2 only 2/20 do.

The crux of the OP is that neither a 90/10 nor a 50/50 split seems acceptable, if betting on "which world-class an individual in a Green room is in" and "which world-class the (set of all individuals in Green rooms containing this individual) is in" are identical. I assert that they are not: the first case is 0.9/0.1 A1/A2, the second is 0.5/0.5 A1/A2.

Consider a similar question where one random Green room will be asked. If you're in that room, you update both on (Green walls) and (I'm being asked), and recover the 0.5/0.5, correctly. This is close to the OP, because if we wildly assert that you and only you have free will while the others are forced, then you are special. Equally, in cases where everyone is asked and plays separately, you have 18 or 2 times the benefits, depending on whether you're in A1 or A2.

If each individual Green room played separately, then you update on (Green walls), but P(I'm being asked | Green) = 1 in either case. This is betting on whether there are 18 people in green rooms or 2, and you get the correct 0.9/0.1 split. To reproduce the OP, the offers would need to be +1/18 to Greens and -3/18 from Reds in A1, and +1/2 to Greens and -3/2 from Reds in A2; then you'd refuse to play, correctly.
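The rescaled offers in the last paragraph can be checked numerically. A sketch under the same assumed setup as before (fair coin; 18/2 vs 2/18 rooms; base payoffs +1 and -3), where each green-roomer's acceptance separately triggers the whole payoff with stakes divided by the number of greens in that world:

```python
# Each green-roomer separately decides whether to trigger the whole payoff.
# Stakes are scaled by 1/(number of greens in that world): +1/18 and -3/18
# in A1, +1/2 and -3/2 in A2, as proposed above.

def game_value(greens, reds):
    """Total payoff of one triggered game, with stakes scaled by 1/greens."""
    return greens * (1.0 / greens) + reds * (-3.0 / greens)

v1 = game_value(18, 2)   # value of one green's acceptance in A1
v2 = game_value(2, 18)   # value of one green's acceptance in A2

# With all greens accepting, the totals match the OP's unscaled bet:
total_A1 = 18 * v1   # should equal +18*1 - 2*3 = 12
total_A2 = 2 * v2    # should equal +2*1 - 18*3 = -52

# A single green-roomer, updating to 0.9/0.1, evaluates its own decision:
ev_individual = 0.9 * v1 + 0.1 * v2
print(ev_individual)  # negative -> refuse to play, correctly
```

So the per-individual 0.9/0.1 update and the whole-group 0.5/0.5 calculation now agree on refusing, which is the consistency the comment is pointing at.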

Comment deleted 10 September 2009 09:11:55AM
Comment author: Christian_Szegedy 10 September 2009 06:00:51PM 0 points

It's not about complexity; it is just the expected total gain: simply the second calculation of the OP.

I just argued that the second calculation is right and that it is what the agents should do in general (unless they are completely egoistic about their particular copies).