There's a whole section on voting in the LDT For Economists page on Arbital. Also see the one for analytic philosophers, which has a few other angles on voting.
From what I can tell from your other comments on this page, you might already have internalized all the relevant intuitions, but it might be useful anyway. Superrationality is also discussed.
Sidenote: I'm a little surprised no one else mentioned it already. Somehow Arbital posts by Eliezer aren't considered as canonical as the Sequences; maybe it's the structure (rather than just the content)?
I think it's just reachability. Arbital is Far Away, and it's plausible that not everyone even knows it exists.
Gary Drescher talks about this in his old book Good and Real (p. 299). It was especially cool in that it said that even altruist CDTers can't account for the rationality of voting in sufficiently large elections. I haven't verified whether that's true or not.
It was especially cool in that it said that even altruist CDTers can’t account for the rationality of voting in sufficiently large elections.
That's pretty surprising. I checked out the page, and he unfortunately doesn't motivate what kind of model he's using, so it's hard to verify. From the book:
"If the importance of the election is presumed proportionate to the size of the electorate, then for large enough elections, expected-utility calculations cannot justify the effort of voting by appeal to the small but heavily weighted possibility that your vote will be a tiebreaker. The odds of that outcome decrease faster than linearly with the number of voters, so the expected value of your vote as a tiebreaker approaches zero, even taking account of the value to everyone combined, not just yourself. Given enough voters, then, the causal value (even to everyone) of your vote is overshadowed by the inconvenience to you of going out to vote."
In an election with two choices, in a model where everybody has a 50% chance of voting for either side, I don't think the claim is true: the probability of a tie falls only as 1/sqrt(n), so multiplying by a value proportional to the electorate gives something that grows with n. Maybe he's assuming that the outcomes of elections become easier to predict as they grow larger, because individual leanings don't average out to exactly 50/50, and any systematic lean makes a tie exponentially unlikely.
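To make the model-dependence concrete, here is a quick sketch (my own toy model, not Drescher's, and the vote counts are arbitrary) comparing the exact-tie probability when each voter is a fair coin flip versus when each voter leans 51% toward one side:

```python
import math

def tie_prob(n_voters, p=0.5):
    """Probability that n_voters (n even) independent Bernoulli(p) votes
    split exactly evenly, i.e. that one extra vote would be a tiebreaker.
    Computed in log space so large n doesn't overflow."""
    half = n_voters // 2
    log_pmf = (math.lgamma(n_voters + 1) - 2 * math.lgamma(half + 1)
               + half * math.log(p) + half * math.log(1 - p))
    return math.exp(log_pmf)

for n in (10_000, 1_000_000):
    fair = tie_prob(n, 0.5)   # decays only like 1/sqrt(n)
    lean = tie_prob(n, 0.51)  # decays exponentially in n
    # if the stakes scale with the electorate, multiply by n:
    print(n, n * fair, n * lean)
```

In the fair-coin model, n * tie_prob grows like sqrt(n), so the altruistic case for voting gets *stronger* with electorate size; with even a 51/49 lean, it collapses to essentially zero. So Drescher's "faster than linearly" claim seems to require the second kind of model.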
...That book is from 2006. I understand that it deals with the Paradox of Voting, but does it have anything that would be directly relevant to considering it in light of "acausal decision theories"? As far as I know, such theories pretty much didn't exist back then.
This idea is certainly not new, for example in an essay about TDT from 2009, Yudkowsky wrote:
Some concluding chiding of those philosophers who blithely decided that the "rational" course of action systematically loses... And celebrating of the fact that rationalists can cooperate with each other, vote in elections, and do many other nice things that philosophers have claimed they can't...
(emphasis mine)
The relevance of TDT/UDT/FDT to voting surfaced in discussions many times, but possibly nobody wrote a detailed essay on the subject.
I don't think any of the more interesting decision theories differ from CDT on a trivial expected value calculation, with no acausal paths to the payoffs. How do you see it working? Can you put some probabilities and payoffs in place to show why you think this is relevant?
But there is an obvious acausal path in this case. If other voters are using the same algorithm you are to decide whether or not to vote, or a "sufficiently similar" one (in some sense that would have to be fleshed out), then that inflates the probability that "your" decision of whether or not to vote is pivotal, because "you" are effectively multiple voters.
Is that sufficient, or do you need actual numbers? (I'd guess it is and you don't.)
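In case numbers help anyway, here is a rough sketch of the acausal argument (all parameters are made-up assumptions, and the pivotality approximation assumes a near-even race): if k voters run a "sufficiently similar" algorithm to yours, "your" decision effectively controls a bloc of k votes, and the race only needs to land within k votes of a tie for that bloc to be pivotal.

```python
import math

def p_pivotal(n, k):
    """Rough chance that a correlated bloc of k votes decides a 50/50 race
    among n other voters: the margin must fall within the bloc's size,
    roughly k times the exact-tie probability sqrt(2/(pi*n))."""
    return min(1.0, k * math.sqrt(2 / (math.pi * n)))

n = 1_000_000
benefit = 1_000.0  # assumed utility to you of your side winning
cost = 1.0         # assumed inconvenience of voting

for k in (1, 100, 10_000):
    ev = p_pivotal(n, k) * benefit - cost
    print(k, ev)
```

With k = 1 (pure CDT reasoning) the expected value is negative; even a modest correlated bloc flips the sign.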
the chance that your vote (along with everyone else's) would be pivotal because the margin was 1 vote,
I have never understood this criterion for your vote "mattering". It has the consequence that if (as will almost always be the case for a large electorate) the winner has a margin of at least 3, then no-one's vote mattered. If a committee of 5 people votes 4 to 1, then no-one's vote mattered. Two votes mattered, but no-one's vote mattered. If one of the yes voters had stayed at home that day, then every yes vote would matter, but the no vote wouldn't matter.
This does not seem like a sensible concept to attach to the word "matter". If someone on that committee was very anxious that the vote should go the way they would like, they will have done everything they could to persuade every other persuadable member to vote their way. Far from no-one's vote mattering, every vote in that situation matters. This is a frequent occurrence in parliamentary votes, when there is any doubt beforehand whether the motion will pass, and the result is of great importance to both sides. In the forthcoming US presidential election, both parties will be making tremendous efforts to "get out the vote". Yet no-one's vote "matters"?
There has been some philosophical work that makes just this point. In particular, Julia Nefsky (who I think has some EA ties?) has a whole series of papers about this. Probably the best one to start with is her PhilCompass paper here: https://onlinelibrary.wiley.com/doi/abs/10.1111/phc3.12587
Obviously I don't mean this to address the original question, though, since it's not from an FDT/UDT perspective.
I agree that this definition of "matters" is odd; it's not the one most people use in everyday speech. I think that there are ways to make other definitions rigorous (in ways that aren't addressed in the Wikipedia article I linked). But this is the narrowly consequentialist definition, so it does deserve analysis.
I echo Dagon's claim that there is no difference between CDT and FDT or UDT here (with the disclaimer that I'm not an expert). This is because you play the game with many other non-UDT agents, and UDT tends to do the same thing CDT does with respect to cooperation with non-UDT agents. (Where "non-UDT" means anything that doesn't implement ideas from the TDT/UDT/FDT bundle.)
However, a reasonable calculation shows that a vote is worth quite a lot (at least if you live in a swing state) if you consider the benefit for everyone rather than just for yourself – which seems to be what rationalists tend to do anyway on things like x-risk prevention and charity. And if you don't live in a swing state, you can try to trade your vote with someone who does. (I believe EY did this in 2016.)
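A back-of-the-envelope version of that calculation (every number here is an assumption for illustration: a swing-state pivotality on the order of 1 in 10 million, and a $100-per-resident value difference between outcomes):

```python
p_pivotal = 1e-7           # assumed chance a swing-state vote decides the election
per_person_benefit = 100.0 # assumed dollar value per resident of the better outcome
population = 330_000_000   # rough US population

selfish_ev = p_pivotal * per_person_benefit               # ~ $0.00001
altruist_ev = selfish_ev * population                     # ~ $3,300
print(selfish_ev, altruist_ev)
```

The selfish expected value is negligible, but the altruistic one comfortably exceeds any plausible cost of voting, even under straightforward CDT.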
Wait, what?
"You play the game with many other CDT agents" — this seems demonstrably false, at least, if we accept the Paradox of Voting as being a thing, in which case, CDT agents have by assumption removed themselves from the game. (I understand your response that voting may be altruistically-CDT-rational; as you know, it's been discussed before, and very rightly so. But I also think it's still worth considering the boundedly-altruistic/diagonally-dominant case.)
It seems to me that the only way you can claim there's "many other CDT agents" is if "CDT" is being used as a catch-all for "not explicitly FDT/UDT", and I'd strongly dispute that usage. I think that memetically/genetically evolved heuristics are likely to differ systematically from CDT. It may be best to create an entirely separate model for people operating under such heuristics, but if you want to force them into a pure CDT-vs-UDT-vs-random-noise (ie, mixture distribution) paradigm, I'd say they would be substantially more than 0% UDT.
ETA: I guess I can parse "other voters are CDT" as a sensible assumption if you're explicitly doing repeated-game analysis, but such an analysis would pretty much dissolve both the Paradox of Voting and the CDT vs. acausal-DTs distinction.
I think that memetically/genetically evolved heuristics are likely to differ systematically from CDT.
On reflection, I'm not sure whether I agree with this or not. I'll edit the post.
However, the point is non-essential. What I've said holds true if you replace "CDT" with "weird bundle of heuristics." The point is that it's not UDT: a UDT agent needs other agents to be UDT or similar to cooperate with them on things like voting. (Or at least that's what I believe is true and what matters for this question.) And I certainly think the UDT proportion is small enough to be modeled as 0.
I think there is a strong similarity between FDT (can't speak to UDT/TDT) and Kantian lines of thought in ethics. (To bring this out: the Kantian thought is roughly to consider yourself simply as an instance of a rational agent, and ask "can I will that all rational agents in these circumstances do what I'm considering doing?" FDT basically says "consider all agents that implement my algorithm or something sufficiently similar. What action should all those algorithm-instances output in these circumstances?" It's not identical, but it's pretty close.) Lots of people have Kantian intuitions, and to the extent that they do, I think they are implementing something quite similar to FDT. Lots of people probably vote because they think something like "well, if everyone didn't vote, that would be bad, so I'd better vote." (Insert hedging and caveats here about how there's a ton of debate over whether Kantianism is/should be consequentialist or not.) So they may be countable as at least partially FDT agents for purposes of FDT reasoning.
I think that memetically/genetically evolved heuristics are likely to differ systematically from CDT.
Here's a brief argument why they would (and why they might diverge specifically in the direction of FDT): the metric evolution optimizes for is inclusive genetic fitness, not merely fitness of the organism. Witness kin selection. The heuristics that evolution would install to exploit this would tend to be: act as if there are other organisms in the environment running a similar algorithm to you (i.e. those that share lots of genes with you), and cooperate with those. This is roughly FDT-reasoning, not CDT-reasoning.
[...] Lots of people have Kantian intuitions, and to the extent that they do, I think they are implementing something quite similar to FDT.
I've never thought about this, but your comment is persuasive. I've un-endorsed my answer and moved it to the comments.
Not sure where, or if, this fits into your thinking. In many ways I see both the paradox and many of the attempts to explain it as stemming from incorrectly specifying the question. The argument is that the payoff from voting for any given person is lower than the costs incurred, so why vote?
However, since people clearly do vote, isn't the better question to ask: what did we miss in specifying the equation, such that it implies all these people are irrational and imposing costs on themselves?
In other words, rather than accepting the claimed paradox, why not just take the empirical observation and then look for the underlying explanation? Would a good scientist ever talk about a "paradox of flight" once flight had been observed?
The Paradox of Voting, simply stated, is that voting in a large election almost certainly isn't worth your time (unless you think it's the most fun thing you could be doing). The guaranteed opportunity cost of going to vote will in most cases easily and predictably outweigh the expected benefits — the chance that your vote (along with everyone else's) would be pivotal because the margin was 1 vote, multiplied by your expected marginal utility payoff from your chosen candidate winning.
There are various well-known responses to this issue, listed in the Wikipedia article linked above. But to me, one of the obvious responses is to see this as just another instance of a chicken/snowdrift game, and to invoke the logic you might use to support cooperation in such games; that is, decision theory. I think this may even be one of the most common real-world instances where UDT/FDT might apply. I think it would also be a source of interesting edge cases for exploring the limits of UDT/FDT; that is, even small changes in how strictly you delimit which other (potential) voters to consider as UDT/FDT "co-agents" could easily swing the prescriptions you'd get. But a few quick Google searches don't turn up any write-ups considering the issue in this light. Am I missing something, or is this idea really "new" (at least, undocumented)?
ETA: Thanks to @Vanessa Kosoy, @Daniel Kokotajlo, and @strangepoop, I now have sufficient references for prior discussions of this idea. Thanks! Honorable mention to @lkaxas, who suggested a connection to Kantian ethics which is relevant, though more remote than the references given by the above three.