V_V comments on Voting is like donating thousands of dollars to charity - Less Wrong

32 Post author: Academian 05 November 2012 01:02AM




Comment author: V_V 08 November 2012 04:27:53PM *  -1 points [-]

But a vote for a losing candidate is not "thrown away"; it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote. Readers in non-swing states especially should consider what message they're sending with their vote before voting for any candidate, in any election, that they don't actually like.

But that point is still subject to the same (invalid, IMHO) argument against voting: your vote alone is not going to change the poll's percentages to any noticeable extent, so you might as well not vote and nobody will notice the difference.

I'll explain why I think this line of argument is invalid in another comment. EDIT: here

Also, rationalists are supposed to win. If we end up doing a fancy expected utility calculation and then neglect voting, all the while supposedly irrational voters ignore all of that and vote for their favored candidates and get them elected while ours lose... then that's, well, losing.

That's actually a better point, but it opens a can of worms: ideally, instrumentally rational agents should always win (or maximize their chance of winning, if uncertainty is involved), but does a consistent form of rationality that achieves this actually exist?

Consider two pairs of players playing a standard one-shot prisoner's dilemma, where the players are not allowed to credibly commit or communicate in any way.

In one case the players are both CooperateBots: they always cooperate because they think that God will punish them if they defect, or they feel a sense of tribal loyalty towards each other, or whatever else. These players win.

In the other case, the players are both utility maximizing rational agents. What outcome do they obtain?
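For concreteness, the situation being contrasted can be sketched with conventional one-shot PD payoffs (the specific numbers are assumed for illustration; any ordering temptation > reward > punishment > sucker gives the same result):

```python
# A minimal sketch of the one-shot prisoner's dilemma.
# Payoff values are illustrative; only the ordering T > R > P > S matters.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R for each
    ("C", "D"): (0, 5),  # sucker's payoff S vs temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P for each
}

def play(a, b):
    """Return the (player_a, player_b) payoffs for one round."""
    return PAYOFFS[(a, b)]

# Two CooperateBots each get the mutual-cooperation payoff.
print(play("C", "C"))  # (3, 3)

# For a utility maximizer, defection strictly dominates cooperation
# (5 > 3 against a cooperator, 1 > 0 against a defector), so two such
# agents both defect.
print(play("D", "D"))  # (1, 1)
```

The tension in the comment is visible directly: each rational agent plays the dominant strategy, yet the pair of CooperateBots walks away with more.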

Comment author: Andreas_Giger 08 November 2012 06:02:07PM *  1 point [-]

By having two agents play the same game against different opposition, you compare two scenarios that may seem similar on the surface but are fundamentally different. Obviously, making sure your opponent cooperates is not part of the PD, so you can't call this winning. And as soon as you delve into the depths of meta-PD, where players can influence other players' decisions beforehand and/or hand out additional punishment afterwards (as in most real-life situations), rational agents will devise methods that assure mutual cooperation far more reliably than loyalty or altruism does. Anyone moderately rational will cooperate if the PD matrix becomes "cooperate and get [whatever], or defect and have all your winnings taken away by the player community and given to the other player", and will accordingly win against irrational players, while any non-playing rationalist would support that kind of convention; although, depending on how and why PD games happen in the first place, this may evolve into "cooperate and have all winnings taken away by the player community, or defect and additionally get punished in an unpleasant way".

By the way, the term CooperateBot only really makes sense when talking about iterated PD, where it refers to an agent always cooperating regardless of the results of any previous rounds.

Comment deleted 08 November 2012 06:15:22PM [-]
Comment author: Andreas_Giger 08 November 2012 07:08:13PM *  2 points [-]

In non-iterated PD, someone who cooperates is a cooperator.

Nevertheless the CooperateBots win when playing each other, while the rational agents lose when they have no means to credibly commit.

No, the cooperators actually lose when playing each other, because they gain less than they could, and the only reason they get anything at all is that they are playing against other cooperators. Likewise, the defectors win when playing other defectors, and they obviously win against cooperators. Cooperating could only win if it affected your opponent's decision, which is not the case in PD.
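The dominance claim here can be checked mechanically with illustrative payoffs (assumed values; any standard PD ordering yields the same comparison):

```python
# Row player's payoff in a one-shot PD (illustrative assumed values).
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Against each possible opponent move, defecting earns strictly more.
# Your own choice cannot make cooperation "win", because in a one-shot
# game it does not affect what the opponent plays.
for opponent in ("C", "D"):
    assert payoff[("D", opponent)] > payoff[("C", opponent)]
```

This is the sense in which defecting "wins" pairwise: holding the opponent's move fixed, it is the strictly better reply.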

It seems your definition of winning is flawed in that you want your agents to achieve results that are clearly outside their influence. Rationalists should win under the constraints of reality, not invent scenarios in which they have already won.

Comment deleted 08 November 2012 11:29:23PM *  [-]
Comment author: Andreas_Giger 09 November 2012 07:30:50AM 1 point [-]

Clearly in a community of unconditional cooperators every agent obtains a better payoff than any agent in a community of defectors.

As soon as you're talking about communities, you're talking about meta-PD, not PD, and as I've explained above, rational agents play meta-PD by making sure cooperation is desirable for the individual as well, so they win. End of story.

Comment author: TimS 08 November 2012 11:41:33PM -1 points [-]

Nitpick: Superrationality is not a decision theory.

Comment deleted 09 November 2012 12:14:10AM [-]
Comment author: TimS 09 November 2012 01:34:15AM -2 points [-]

Wikipedia is not a determinative source.

Which answer is the "superrational" one in Newcomb's problem? In a game of chicken? In an ultimatum game?

Decision theories like Causal Decision Theory and Evidential Decision Theory have answers, and can explain why they reached those answers. As far as I am aware, there's no equivalent formalization of "superrationality." Until such a formalization exists, it is misleading in this type of discussion to call "superrationality" a decision theory.

Comment deleted 09 November 2012 02:13:23AM *  [-]
Comment author: MugaSofer 09 November 2012 10:38:06AM 0 points [-]

In a game of chicken

Swerve.

And let the other guy win? Madness!