Vladimir_Nesov comments on Politics as Charity - Less Wrong
Why is this not a confusion? It seems on the face of it that since voters' decisions are correlated, your decision accounts for behavior of other people as well, and so you are not only casting one vote with your decision, but many votes simultaneously.
Do you believe that my decision to vote is as likely to acausally influence my opponents into voting as it is my supporters? If so, and if we can expect about equal numbers of both, doesn't that produce the same problem?
I feel genuinely guilty about prop 19's failure precisely because the reason for my failure to vote -- general procrastination and lack of organization resulting in my not registering in time -- was probably correlated with similar failures by others on my side of the issue.
That's probably a special case though.
(ETA for non-Californians: Prop 19 was a proposal to legalize the use of marijuana)
There are asymmetric versions, too: for instance, if you choose not to vote out of lack of enthusiasm, you cede the field to people who are more enthusiastic about their candidate. This effect would help candidates with special-interest appeal (a smaller base of more enthusiastic voters) against candidates with more general (but weaker) appeal.
For example, if the reason you were considering not voting was bad weather on election day, and you managed to discard that reason as one you won't be moved by in a voting decision, this decision would be common to many people irrespective of their candidate. By deciding to vote anyway, you establish that people in similar situations do vote.
This additionally calls into question "one vote" as a lower bound on the influence of your decision, making it an outright useless figure.
Right, I agree with that. But let's say I'm a Democrat. If I choose to go, maybe a thousand Democrats and a thousand Republicans all choose to go, for a net gain of zero. If I choose to stay home, a thousand Democrats and a thousand Republicans choose to stay home, for a net gain of zero.
Either way, the net gain is zero. So why bother voting?
If it's common knowledge that every eligible voter is using UDT, I think the outcome might be that everyone chooses a mixed strategy: vote with probability p (for some fairly small p, say p < 0.1) and stay home with probability 1-p. This way, the outcome of the election is almost certainly the same as if everyone votes, but its cost is much smaller.
Caveats: I don't know how to derive this mathematically from the stated assumption, and I have little idea how to apply this type of reasoning to humans. Actually it still seems plausible to me that E(total number of votes | I vote) - E(total number of votes | I don't vote) is near 1 and therefore CDT-type ("deciding vote") reasoning is a good approximation for my actual situation.
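A quick numerical check of the mixed-strategy idea above. All the numbers here (5,500 vs. 4,500 supporters, p = 0.1) are made up for illustration; the point is only that the larger side's expected margin shrinks much more slowly than the turnout cost.

```python
import random

# Toy election: every supporter of each side independently votes with
# probability p. The supporter counts and p below are illustrative
# assumptions, not figures from the thread.
def p_larger_side_wins(n_a, n_b, p, trials=500, seed=1):
    """Estimate the probability that side A out-votes side B."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes_a = sum(rng.random() < p for _ in range(n_a))
        votes_b = sum(rng.random() < p for _ in range(n_b))
        wins += votes_a > votes_b
    return wins / trials

# Expected turnout drops from 10,000 to about 1,000, but A's expected
# margin (~100 votes) is still several standard deviations (~30 votes),
# so the larger side essentially always wins anyway.
print(p_larger_side_wins(5_500, 4_500, 0.1))
```

So under this toy model the election result is nearly deterministic even at p = 0.1, which is the sense in which the mixed strategy preserves the outcome at a tenth of the cost.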
Could you please tell me what "to establish" means in the last sentence?
(Your comment made me spit out my tea. I know almost nothing about U/TDT.)
If my decision process uses UDT-type reasoning, do I have a chance of acausally influencing people who don't know about UDT-type reasoning?
#lesswrong
Does this acausally make the other residents avoid the grass as well, because they decide in approximately the same way when encountering the grass, or does it not, because they haven't even heard of TDT?
What if all the residents were LW posters?
One thing I've long wondered: in cases like these is TDT equivalent to your mom saying 'and what if everyone walked on the grass?'
I think that's exactly what you would go around asking yourself if you were a TDT-using human in a community of TDT-using humans.
No, although it is often used in that sort of way.
This is actually a good question. Gary Drescher seems to think you can, but I think Eliezer is more skeptical.
Is this a topic in Good and Real?
Yes, it's in the account of ethics, near the end.
How does one go about computing E(total number of votes | I vote) - E(total number of votes | I don't vote)?
No idea, but "deciding vote" is not it.
But my vote doesn't even acausally affect others' votes: no one's thinking "I'll only vote if Will Newsome does", their algorithm is "I'll only vote if lots of other people do", and lots of other people will vote whether I do or not. Sure, if everyone had my decision theory it'd be a tragedy of the commons, but realistically the chance is still one in a million, or maybe very slightly better. Thus the notion of "deciding vote" is only a very little bit confused. Am I wrong?
I agree... tentatively. I haven't yet spent much time considering the idea of acausal influence in its most general form, but I'm not sure I see how it would apply here; you can have some pre-election influence by virtue of what sort of person you are (or seem to be), but when it's election day, it seems like you should be able to decide to vote or not vote without your decision retroactively implying too much about what other things you could have caused.
I realize that sounds exactly like the argument for two-boxing, but I'm not convinced the causal structure is similar enough for the analogy to be valid.
(I've previously had vaguely relevant thoughts about the expected payoff of one vote. I should expand on that at some point...)
It's only necessary for you and other people to make a decision for the same reasons. These reasons can be rather abstract and simple (except for the human universal component) and move many people in the same way.
Acausal influence stems from other processes similar to you. This can be a simulated version of you, on whose action the simulating agent's choices depend. Or it can just be someone else like you, who's likely to some degree to decide the same thing for some of the same reasons.
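One way to make this concrete is a toy common-cause model (all numbers here are illustrative assumptions): each voter decides based on a shared signal, say the weather or the mood of the base, plus private noise, so decisions correlate without any causal link between voters.

```python
import random

# Each voter votes iff (shared mood + private noise) > 0. My decision
# then carries evidence about the mood, and hence about everyone else's
# decisions, without causing any of them.
def expected_totals(n_others=500, trials=2000, seed=0):
    rng = random.Random(seed)
    totals = {True: [], False: []}
    for _ in range(trials):
        mood = rng.gauss(0, 1)  # common cause shared by all voters
        my_vote = mood + rng.gauss(0, 1) > 0
        others = sum(mood + rng.gauss(0, 1) > 0 for _ in range(n_others))
        totals[my_vote].append(others + my_vote)
    return (sum(totals[True]) / len(totals[True]),
            sum(totals[False]) / len(totals[False]))

e_if_vote, e_if_stay = expected_totals()
# Conditioning on my own decision shifts the expected turnout by far
# more than 1 vote, because my decision is informative about the mood.
print(e_if_vote - e_if_stay)
```

In this sketch, E(total votes | I vote) - E(total votes | I don't vote) comes out in the hundreds, which is the sense in which the one-vote estimate can badly understate the correlation, whatever one thinks of calling it "influence".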
"Acausal influence" is superficially a contradiction, and this phrase deserves skeptical scrutiny.
The only sort of "influence" I can think of, that might defensibly be described as acausal, is the "influence" of an object (actual or possible) which is being imagined or otherwise represented in a non-perceptual way (i.e. the representation was not being caused by sense impressions ultimately caused by the object itself). But even then there may be a "causal" interpretation of where the representation's properties came from - it's just that these would be "logical causes". A representation of the Death Star has some of its properties because otherwise it wouldn't be a representation of the Death Star; it would be a representation of something else, or not a representation at all.
There seems to be a duality here. The physical properties of a physical symbol will have physical causes, while the semantic properties will have "logical" causes. I don't know how to think about these logical causes correctly - it doesn't seem right to say that they are caused by objects in other possible worlds, for example. But isn't the talk of acausal anything due simply to ignoring logical causes of properties at the semantic level?
I don't think it's worthwhile to fight the terminology: 'acausal' makes sense as opposed to 'causal' as in 'causal decision theory'. I think it's pretty sensible and defensible, even if 'timeless' might've been a better choice.
No, the more I think about it, the more I think there is a serious problem here.
"Superrationality" is just a situation in which a certain bias - a certain deviation from actual rationality - is rewarded, when enough other people have the same bias. If a bunch of people all using a "superrational decision theory" manage to achieve the big collective payoff they sought by cooperating, it's only because of the contingent fact that they happened to have a majority. And under that circumstance, ordinary decision theory would tell you to go with the flow and choose with that majority as well!
Superrationality is either an attempt to solve coordination problems through magical thinking, or it's a fancy name for visibly favoring altruism in the hope that others will too, or it's a preference for altruistic terminal values disguised as an appeal to rational self-interest.
A majority, or whatever number of cooperating people happens to be sufficient to achieve whatever goal they are trying to achieve. Because of the advantages from cooperation, the superrational contingent will often not need to be larger than the remainder.
Not in a prisoner's dilemma.
Quoting from Wikipedia because I have no real expertise on decision theory:
How exactly does superrationality differ from membership in the Club Of Always Colluding With Each Other?
I'm saving the decision theory apparatus (which actually multiplies the expected payoff of both political and non-political altruistic expenditures) for a later post. I couldn't fit everything into the first one.
Then you should've made clear that "deciding vote" is actually a lower bound, and shouldn't be interpreted as a classical "deciding vote".
I added some clarifications.
Ah, didn't see this earlier.
I don't think it multiplies the expected payoff for both in the same way. Some Bostromian division-of-responsibility principle should apply in both cases. The apparent gains are from the probability of making an important shift via group action where individual action would be unlikely to go over a tipping point, not because you're multiplying by the number of people involved.
If you acausally influence other people to vote, you'll also acausally influence them to spend time doing so. (And since they're like you, their time is as valuable as yours.) To a first approximation, the expected cost and benefit are proportional to the naïve (ignoring acausal influence) estimate. So the question of whether it's worth the effort should come out the same.
Other people's time is not as valuable as yours (to you).
Darn, you beat me to it! Given that your decision and others' decisions stem from a common cause, and you are highly correlated with them (compared to chance), then your decision is informative about their decisions. (You can think of it as deciding which world you "wake up" in.) I had elaborated before about how to apply this reasoning to PD-like problems:
Also, if I go the opposite route, and use Schwitzgebel's model and decision theory, that's not a good argument to justify voting, for with a population of 100,000,000 you actually have far less than a 1e-8 chance of swinging the outcome, because the votes other than yours are unlikely (under this causal model) to split exactly 50/50.