V_V comments on Voting is like donating thousands of dollars to charity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
But that point can still be subject to the same (invalid, IMHO) argument against voting: your vote alone is not going to change the poll's percentages by any noticeable extent, hence you might as well not vote, and nobody would notice the difference.
I'll explain why I think this line of argument is invalid in another comment. EDIT: here
That's actually a better point, but it opens a can of worms: ideally, instrumentally rational agents should always win (or maximize their chance of winning, if uncertainty is involved), but does a consistent form of rationality that achieves this actually exist?
Consider two pairs of players playing a standard one-shot prisoner's dilemma, where the players are not allowed to credibly commit or communicate in any way.
In one case the players are both CooperateBots: they always cooperate because they think that God will punish them if they defect, or they feel a sense of tribal loyalty towards each other, or whatever else. These players win.
In the other case, the players are both utility maximizing rational agents. What outcome do they obtain?
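To make the comparison concrete, here is a minimal sketch with assumed payoff numbers (any values satisfying T > R > P > S would do; the thread itself doesn't specify a matrix):

```python
# Hypothetical payoffs for a standard one-shot prisoner's dilemma,
# chosen to satisfy T > R > P > S (temptation > reward > punishment > sucker).
T, R, P, S = 5, 3, 1, 0

def payoff(me, other):
    """Return my payoff given both moves ('C' for cooperate, 'D' for defect)."""
    return {('C', 'C'): R, ('C', 'D'): S,
            ('D', 'C'): T, ('D', 'D'): P}[(me, other)]

# Two CooperateBots each receive the mutual-cooperation reward:
print(payoff('C', 'C'))  # 3

# For a utility maximizer, defecting strictly dominates: whatever the
# opponent plays, 'D' yields more than 'C'...
assert payoff('D', 'C') > payoff('C', 'C')
assert payoff('D', 'D') > payoff('C', 'D')

# ...so two such agents both defect and each receive the punishment payoff:
print(payoff('D', 'D'))  # 1
```

So the pair of CooperateBots walks away with more than the pair of maximizers, which is exactly the tension the comment is pointing at.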
By having the two pairs of agents play the same game against different opponents, you are comparing two scenarios that seem similar on the surface but are fundamentally different. Making sure your opponent cooperates is not part of the PD, so you can't call this winning.

As soon as you delve into the depths of meta-PD, where players can influence other players' decisions beforehand and/or hand out additional punishment afterwards (as in most real-life situations), rational agents will devise methods that secure mutual cooperation far more reliably than loyalty or altruism or whatever. Anyone moderately rational will cooperate if the PD matrix is "cooperate and get [whatever], or defect and have all your winnings taken away by the player community and given to the other player," and will accordingly win against irrational players, while any non-playing rationalist would support such a convention. Although, depending on how and why PD games happen in the first place, this may evolve into "cooperate and have all your winnings taken away by the player community, or defect and additionally get punished in an unpleasant way."
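The convention described above can be sketched with assumed numbers (the comment gives no concrete matrix, so the values and the transfer rule here are illustrative): the community confiscates a defector's winnings and hands them to the other player, which flips which strategy dominates.

```python
# Base one-shot PD payoffs, assumed to satisfy T > R > P > S.
T, R, P, S = 5, 3, 1, 0
BASE = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

def enforced_payoff(me, other):
    """My payoff after the community's transfer rule is applied."""
    mine = BASE[(me, other)]
    if me == 'D':
        mine = 0                   # my winnings are confiscated
    if other == 'D':
        mine += BASE[(other, me)]  # I receive the defector's confiscated winnings
    return mine

# Under the convention, cooperating strictly dominates defecting:
assert enforced_payoff('C', 'C') > enforced_payoff('D', 'C')  # 3 > 0
assert enforced_payoff('C', 'D') > enforced_payoff('D', 'D')  # 5 > 1
```

With the transfer rule in place, the "dilemma" dissolves: cooperation is the individually rational move, which is the comment's point about how rational agents handle meta-PD.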
By the way, the term CooperateBot only really makes sense when talking about iterated PD, where it refers to an agent always cooperating regardless of the results of any previous rounds.
In non-iterated PD, someone who cooperates is a cooperator.
No, the cooperators actually lose when playing each other, because they gain less than they could; the only reason they get anything at all is that they are playing against other cooperators. Likewise, the defectors win when playing other defectors, and they obviously win against cooperators. Cooperating could only win if it affected your opponent's decision, which is not the case in the PD.
It seems your definition of winning is flawed in that you want your agents to achieve results that are clearly outside their influence. Rationalists should win under the constraints of reality, not invent scenarios in which they have already won.
As soon as you're talking about communities, you're talking about meta-PD, not PD, and as I've explained above, rationalist agents play meta-PD by making sure cooperation is desirable for the individual as well, so they win. End of story.
Nitpick: Superrationality is not a decision theory.
Wikipedia is not a determinative source.
Which answer is the "superrational" one in Newcomb's problem? In a game of chicken? In an ultimatum game?
Decision theories like Causal Decision Theory and Evidential Decision Theory have answers, and can explain why they reached those answers. As far as I am aware, there's no equivalent formalization of "superrationality." Until such a formalization exists, it is misleading in this type of discussion to call "superrationality" a decision theory.
And let the other guy win? Madness!