
cousin_it comments on Solve Psy-Kosh's non-anthropic problem - Less Wrong Discussion

Post author: cousin_it, 20 December 2010 09:24PM




Comment author: ata, 20 December 2010 10:45:32PM, 1 point

> you should do a Bayesian update: the coin is 90% likely to have come up tails. So saying "yea" gives 0.9*1000 + 0.1*100 = 910 expected donation.

I'm not sure if this is relevant to the overall nature of the problem, but in this instance, the term 0.9*1000 is incorrect because you don't know if every other decider is going to be reasoning the same way. If you decide on "yea" on that basis, and the coin came up tails, and one of the other deciders says "nay", then the donation is $0.
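The objection can be made concrete with a quick sketch. The 1000/100 payoffs and the 0.9/0.1 posterior are taken from the quoted comment, and the zero payoff on any disagreement is the point being made here; the function name and the 0.5 example value are just for illustration:

```python
P_TAILS = 0.9   # posterior probability of tails, given that you are a decider
P_HEADS = 0.1

def expected_donation(p_others_say_yea):
    """Expected donation if you say 'yea', as a function of the probability
    that every other decider also says 'yea'. On heads you are the sole
    decider, so only the tails branch depends on the others."""
    tails_payoff = 1000 * p_others_say_yea  # a single 'nay' zeroes this branch
    heads_payoff = 100
    return P_TAILS * tails_payoff + P_HEADS * heads_payoff

print(expected_donation(1.0))  # 910.0 -- the figure quoted above
print(expected_donation(0.5))  # 460.0 -- worse once coordination is in doubt
```

So the 910 figure silently assumes the other deciders' answers are perfectly correlated with yours.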

Is it possible to insert the assumption that the deciders will always reason identically (and, thus, that their decisions will be perfectly correlated) without essentially turning it back into an anthropic problem?

Comment author: cousin_it, 21 December 2010 12:12:45AM, 1 point

I'm not sure if this is relevant either, but I'm also not sure that such an assumption is needed. Note that failing to coordinate is the worst possible outcome - worse than successfully coordinating on any answer. Imagine that you inhabit case 2: you see a good argument for "yea", but no equally good argument for "nay", and there's no possible benefit to saying "nay" unless everyone else sees something that you're not seeing. Framed like this, choosing "yea" sounds reasonable, no?

Comment author: Nornagest, 21 December 2010 12:20:57AM, 1 point

There's no particular way I see to coordinate on a "yea" answer. You don't have any ability to coordinate with others while you're answering questions, and "nay" appears to be the better bet before the problem starts.
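The ex-ante claim can be checked with one line of arithmetic. The guaranteed payoff for a unanimous "nay" is not quoted in this excerpt; the $700 used below is an assumed value (as in the standard statement of the problem), while the 1000/100 "yea" payoffs come from the quoted comment:

```python
# Ex-ante view: the coin is fair, so before anyone learns they are a decider,
# a committed 'everyone says yea' policy is worth:
P_TAILS_PRIOR = 0.5
yea_ev = P_TAILS_PRIOR * 1000 + (1 - P_TAILS_PRIOR) * 100
nay_ev = 700  # ASSUMED unanimous-'nay' payoff; not stated in this thread

print(yea_ev)  # 550.0, which is less than 700, so 'nay' wins ex ante
```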

It's not uncommon to assume that everyone in a problem like this thinks in the same way you do, but I think making that assumption in this case would reduce it to an entirely different and less interesting problem -- mainly because it renders the zero in the payoff matrix irrelevant if you choose a deterministic solution.