Didn't you just re-state the prisoner's dilemma?
The prisoner's dilemma with N players is more complex than the 2-player version.
For the iterated 2-player dilemma, you cooperate when the other player cooperates and defect when the other player defects. Always cooperating is not the best strategy; you need to respond to the other player's actions.
When you have a 100,000,000-player prisoner's dilemma, where 60,000,000 players defect and 40,000,000 cooperate, what exactly are you supposed to do? To make it even more difficult, cooperation has non-zero costs (you have to do some research about political candidates), and it's not even obvious whether the expected payoff exceeds those costs.
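One way to frame that last question is as a back-of-the-envelope expected value comparison. A minimal sketch, with all numbers made up purely for illustration (the original gives none); the point is only the structure of the comparison, not the conclusion:

```python
# Sketch of the cooperation decision in an N-player dilemma like voting.
# All numbers below are hypothetical; cooperation is worthwhile only if
#   p_pivotal * benefit_if_pivotal > cost_of_cooperating.

cost_of_cooperating = 10.0   # e.g. hours of candidate research, in dollar terms
p_pivotal = 1e-7             # chance your single action changes the outcome
benefit_if_pivotal = 1e9     # total value to everyone if it does

expected_benefit = p_pivotal * benefit_if_pivotal
print(expected_benefit, expected_benefit > cost_of_cooperating)
```

With these invented numbers the expected benefit comes out ahead, but nudging `p_pivotal` down a couple of orders of magnitude flips the answer, which is exactly why the question is hard.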
For the iterated 2-player dilemma, you cooperate when the other player cooperates and defect when the other player defects. Always cooperating is not the best strategy; you need to respond to the other player's actions.
Actually, you should only cooperate if the other player would defect if you didn't cooperate. If they cooperate no matter what, defect.
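The 2-player point above is easy to demonstrate. A minimal sketch of the iterated game, using the standard textbook payoffs (the specific numbers are illustrative, not from this thread), comparing a responsive strategy (tit-for-tat) with always-cooperate against a defector:

```python
# Iterated 2-player prisoner's dilemma with standard payoffs.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Run the iterated game; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = "C"  # both treated as having opened with cooperation
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda their_last: their_last   # copy the opponent's last move
always_cooperate = lambda their_last: "C"
always_defect = lambda their_last: "D"

print(play(always_cooperate, always_defect))  # exploited every round
print(play(tit_for_tat, always_defect))       # loses only the first round
```

Always-cooperate gets exploited for the whole game, while tit-for-tat concedes only the opening round before responding in kind, which is the "respond to the other player's actions" point above.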
The current issue of the Oxford Left Review has a debate between socialist Pete Mills and two 80,000 Hours people, Ben Todd and Sebastian Farquhar: The Ethical Careers Debate, p4-9. I'm interested in it because I want to understand why people object to the ideas of 80,000 Hours. A paraphrasing:
As a socialist, Mills really doesn't like the argument that the best way to help the world's poor is probably to work in heavily capitalist industries. He seems to be avoiding engaging with Todd and Farquhar's arguments, especially replaceability. He also really doesn't like looking at things in terms of numbers, I think because numbers suggest certainty. When I calculate that in 50 years of giving away $40K a year you save 1000 lives at $2K each, that's not saying the number is exactly 1000. It's saying 1000 is my best guess, and unless I can come up with a better one it's the estimate I should use when choosing between this career path and others. He also doesn't seem to understand prediction and probability: "every revolution is impossible, until it is inevitable" may be how it feels for those living under an oppressive regime, but it's not our best probability estimate. [1]
In a previous discussion a friend was also misled by calculations. When I said "one can avert infant deaths for about $500 each" their response was "What do they do with the 500 dollars? That doesn't seem to make sense. Do they give the infant a $500 anti-death pill? How do you know it really takes a constant stream of $500 for each infant?". Have other people run into this? Bad calculations also tend to spread widely, with people saying things like "one pint of blood can save up to three lives" when the expected marginal lives saved is actually tiny. Maybe we should focus less on effectiveness estimates in smart-giving advocacy? Is there a way to show the huge gap in effect between the best charities and most charities without using them?
Maybe I should have way more of these discussions, enough that I can collect statistics on what arguments and examples work and which don't.
(I also posted this on my blog)
[1] Which is not to say you can't have big jumps in probability estimates. I could put the chance of revolution at around 5% based on historical data, then hear new information that one has just started and sounds really promising, which bumps my estimate up to 70%. Expected value calculations for jobs can work with numbers like these; it's only "impossible" and "inevitable" that break estimates.