hylleddin comments on 2013 Less Wrong Census/Survey - Less Wrong

78 Post author: Yvain 22 November 2013 09:26AM




Comment author: ThrustVectoring 22 November 2013 04:44:20AM 12 points [-]

The expected value of defecting is 4p/(p + 4(1-p)), to within one part in the number of survey takers. Whether or not you defect makes no difference as to the proportion of people who defect.

The solution is to determine how likely it is that a random participant will defect, conditional on your choice to cooperate or defect. If you're playing with a total of N copies of yourself, you cooperate and get the maximal expected payout ($60/N). If you're playing against N − 1 cooperate-bots, you defect and get 4 × $60 × (N − 1)/N if selected.

We can generalize this to partial levels. If you play with D defectors and C cooperators whose opinion you can't change, and X people who will cooperate when you cooperate (and defect when you defect), then the payouts, as fractions of the pot, are as follows:

Cooperate: (C + X)/(C + D + X)
Defect: 4C/(C + D + X)

You can solve for the break-even point by setting C + X = 4C, i.e. X = 3C.

So the answer is that you should defect, unless you think that for every person who will cooperate no matter what, there are at least three people reasoning similarly enough to reach the same answer you do (whatever that answer is).
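The payoff comparison above can be sketched numerically. This is an illustrative Python sketch, not from the original comment; C, D, and X are the counts defined above, and payouts are fractions of the $60 pot.

```python
def cooperate_payout(C, D, X):
    # If I cooperate, the X entangled players cooperate with me.
    return (C + X) / (C + D + X)

def defect_payout(C, D, X):
    # If I defect, the X entangled players defect too; defectors
    # receive 4x the cooperating fraction.
    return 4 * C / (C + D + X)

# Break-even when C + X == 4C, i.e. X == 3C, whatever D is.
assert cooperate_payout(10, 25, 30) == defect_payout(10, 25, 30)
```

At X = 3C the two payouts coincide exactly, matching the C + X = 4C condition above; for X above 3C, cooperating pays more.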

Comment author: hylleddin 22 November 2013 09:45:22AM *  0 points [-]

The expected value of defecting is 4p/(p + 4(1-p)), to within one part in the number of survey takers. Whether or not you defect makes no difference as to the proportion of people who defect.

Unless you're using timeless decision theory (TDT), if I understand it correctly (which I very well might not). In that case, the calculations by Zack show the amount of causal entanglement for which cooperation is a good choice. That is, P(others cooperate | I cooperate) and P(others defect | I defect) should both be more than 0.8 for cooperation to be a good idea.
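Where the 0.8 threshold comes from can be seen with a one-line comparison. This is a sketch under the simplifying assumption that p is the probability a random other participant makes the same choice you do, with constant factors dropped:

```python
# Sketch: p = P(a random other participant mirrors my choice).
# Payoffs are fractions of the pot; constant factors dropped.

def ev_cooperate(p):
    # If I cooperate, roughly a fraction p cooperates with me.
    return p

def ev_defect(p):
    # If I defect, only the mismatched fraction (1 - p) cooperates,
    # and defectors get 4x the cooperating fraction.
    return 4 * (1 - p)

# Break-even: p = 4(1 - p)  =>  5p = 4  =>  p = 0.8
```

So cooperating only wins when p exceeds 0.8, matching the figure quoted above.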

I do not think my decisions have that level of causal entanglement with other humans, so I defected.

Though, I just realized, I should have been basing my decision on my entanglement with lesswrong survey takers, which is probably substantially higher. Oh well.

Comment author: Oscar_Cunningham 26 November 2013 04:40:25PM 0 points [-]

Though, I just realized, I should have been basing my decision on my entanglement with lesswrong survey takers, which is probably substantially higher. Oh well.

I defected for the same reasons as you. We're entangled! Reading the responses of the other survey takers I think it's clear that very few people are entangled with us, so we did indeed make the right choice!

Comment author: hylleddin 22 November 2013 09:55:19AM 0 points [-]

Never mind, you already covered this, though in a different fashion.

Comment author: ThrustVectoring 22 November 2013 03:30:25PM 1 point [-]

Yeah, and the math is a little different: it's three entangled decision-makers for each cooperate-bot you can defect against (the number of defectors doesn't matter, surprisingly). Defecting gives you three extra chances at the money the cooperate-bots generously donate to the pool, compared to causing that many entangled people to help you make the pool even larger.
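The observation that the number of defectors drops out can be checked directly. This is an illustrative sketch reusing the C/D/X payoff fractions from upthread:

```python
def cooperate_payout(C, D, X):
    # Fraction of the pot if I cooperate (X entangled players mirror me).
    return (C + X) / (C + D + X)

def defect_payout(C, D, X):
    # Fraction of the pot if I defect: 4x the cooperating fraction.
    return 4 * C / (C + D + X)

# With exactly three entangled players per cooperate-bot (X = 3C),
# the choices tie no matter how many unconditional defectors D exist.
for D in (0, 7, 1000):
    assert cooperate_payout(5, D, 15) == defect_payout(5, D, 15)
```

D only rescales both payouts through the shared denominator, so the break-even ratio X = 3C is unaffected.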