Vladimir_Nesov comments on Politics as Charity - Less Wrong

29 points · Post author: CarlShulman 23 September 2010 05:33AM


Comment author: Vladimir_Nesov 23 September 2010 05:56:55AM *  6 points [-]

She estimates that she has a 1 in 1,000,000 chance of casting the deciding vote

Why is this not a confusion? It seems on the face of it that since voters' decisions are correlated, your decision accounts for behavior of other people as well, and so you are not only casting one vote with your decision, but many votes simultaneously.

Comment author: Yvain 23 September 2010 05:54:28PM 7 points [-]

Do you believe that my decision to vote is as likely to acausally influence my opponents into voting as my supporters? If so, and if we can expect about equal numbers of both, doesn't that produce the same problem?

Comment author: MBlume 06 January 2011 06:48:06AM *  6 points [-]

I feel genuinely guilty about prop 19's failure precisely because the reason for my failure to vote -- general procrastination and lack of organization resulting in my not registering in time -- was probably correlated with similar failures by others on my side of the issue.

That's probably a special case though.

(ETA for non-Californians: Prop 19 was a proposal to legalize the use of marijuana)

Comment author: orthonormal 24 September 2010 12:25:44AM 3 points [-]

There are asymmetric versions, too: for instance, if you choose not to vote out of lack of enthusiasm, you cede the field to people who are more enthusiastic about their candidate. This effect would help candidates with special-interest appeal (a smaller base of more enthusiastic voters) against candidates with more general (but weaker) appeal.

Comment author: Vladimir_Nesov 23 September 2010 08:09:10PM 1 point [-]

Do you believe that my decision to vote is as likely to acausally influence my opponents into voting as my supporters?

For example, if the reason you were considering not voting was bad weather on election day, and you managed to discard that reason as one you won't be moved by in a voting decision, this decision would be common to many people irrespective of their candidate. By deciding to vote anyway, you establish that people in similar situations do vote.

This additionally calls into question "one vote" as a lower bound on the influence of your decision, making it an outright useless figure.

Comment author: Yvain 23 September 2010 08:49:56PM 5 points [-]

Right, I agree with that. But let's say I'm a Democrat. If I choose to go, maybe a thousand Democrats and a thousand Republicans all choose to go, for a net gain of zero. If I choose to stay home, a thousand Democrats and a thousand Republicans choose to stay home, for a net gain of zero.

Either way, the net gain is zero. So why bother voting?

Comment author: Wei_Dai 25 September 2010 07:02:21PM 4 points [-]

If it's common knowledge that every eligible voter is using UDT I think the outcome might be that everyone chooses a mixed strategy: vote with probability p (for some fairly small p like < 0.1) and stay home with probability 1-p. This way, the outcome of the election is almost certainly the same as if everyone votes, but its cost is much smaller.

Caveats: I don't know how to derive this mathematically from the stated assumption, and I have little idea how to apply this type of reasoning to humans. Actually it still seems plausible to me that E(total number of votes | I vote) - E(total number of votes | I don't vote) is near 1 and therefore CDT-type ("deciding vote") reasoning is a good approximation for my actual situation.
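A quick Monte Carlo sketch of the mixed-strategy idea (the electorate sizes and turnout probability here are made-up toy numbers, not anything derived from UDT):

```python
import random

def supporters_win(n_supporters, n_opponents, p):
    """Each eligible voter independently votes with probability p;
    returns True if the (majority) supporters' side gets more votes."""
    s = sum(random.random() < p for _ in range(n_supporters))
    o = sum(random.random() < p for _ in range(n_opponents))
    return s > o

random.seed(0)
# Hypothetical electorate: 6,000 supporters vs 4,000 opponents, 10% turnout.
trials = 200
wins = sum(supporters_win(6_000, 4_000, 0.1) for _ in range(trials))
win_rate = wins / trials
print(win_rate)  # the majority side wins essentially always
```

Under these toy numbers the election outcome matches full turnout almost surely, while only about a tenth of the electorate pays the cost of voting, which is the gain being described.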

Comment author: NihilCredo 27 September 2010 12:06:51AM *  0 points [-]

For example, if the reason you were considering not voting was bad weather on election day, and you managed to discard that reason as one you won't be moved by in a voting decision, this decision would be common to many people irrespective of their candidate. By deciding to vote anyway, you establish that people in similar situations do vote.

Could you please tell me what "to establish" means in the last sentence?

(Your comment made me spit out my tea. I know almost nothing about U/TDT.)

Comment author: Nisan 23 September 2010 10:43:14PM 5 points [-]

If my decision process uses UDT-type reasoning, do I have a chance of acausally influencing people who don't know about UDT-type reasoning?

Comment author: Bongo 24 September 2010 11:48:05AM *  3 points [-]

(From #lesswrong:)

  • There's new grass planted in your apartment block's front yard. If everyone walks over it, it will die, but if just a couple of people walk over it, it'll be okay. Your way would be shorter if you walked over the grass. (Tragedy-of-the-commons situation.)
  • And you've read about funky decision theories on Less Wrong, and decide to avoid the grass because you've decided that you follow TDT.
  • Does this acausally make the other residents avoid the grass as well, because they decide in approximately the same way when encountering the grass, or does it not, because they haven't even heard of TDT?

  • What if all the residents were LW posters?

Comment author: gwern 24 September 2010 11:58:48AM 1 point [-]

One thing I've long wondered: in cases like these, is TDT equivalent to your mom saying "And what if everyone walked on the grass?"

Comment author: Bongo 25 September 2010 08:34:00PM 1 point [-]

I think that's exactly what you would go around asking yourself if you were a TDT-using human in a community of TDT-using humans.

Comment author: wedrifid 24 September 2010 06:19:21PM 1 point [-]

No, although it is often used in that sort of way.

Comment author: orthonormal 24 September 2010 12:27:15AM 1 point [-]

This is actually a good question. Gary Drescher seems to think you can, but I think Eliezer is more skeptical.

Comment author: Nisan 24 September 2010 12:38:33AM 0 points [-]

Is this a topic in Good and Real?

Comment author: orthonormal 24 September 2010 01:07:43AM 0 points [-]

Yes; it's in the account of ethics, near the end.

Comment author: Wei_Dai 23 September 2010 06:32:44AM 3 points [-]

How does one go about computing E(total number of votes | I vote) - E(total number of votes | I don't vote)?

Comment author: Vladimir_Nesov 23 September 2010 06:39:37AM 4 points [-]

No idea, but "deciding vote" is not it.

Comment author: Will_Newsome 23 September 2010 06:11:08AM *  3 points [-]

But my vote doesn't even acausally affect others' votes: no one's thinking "I'll only vote if Will Newsome does", their algorithm is "I'll only vote if lots of other people do", and lots of other people will vote whether I do or not. Sure, if everyone had my decision theory it'd be a tragedy of the commons, but realistically the chance is still one in a million, or maybe very slightly better. Thus the notion of "deciding vote" is only a very little bit confused. Am I wrong?

Comment author: ata 23 September 2010 06:37:37AM *  3 points [-]

I agree... tentatively. I haven't yet spent much time considering the idea of acausal influence in its most general form, but I'm not sure I see how it would apply here; you can have some pre-election influence by virtue of what sort of person you are (or seem to be), but when it's election day, it seems like you should be able to decide to vote or not vote without your decision retroactively implying too much about what other things you could have caused.

I realize that sounds exactly like the argument for two-boxing, but I'm not convinced the causal structure is similar enough for the analogy to be valid.

(I've previously had vaguely relevant thoughts about the expected payoff of one vote. I should expand on that at some point...)

Comment author: Vladimir_Nesov 23 September 2010 06:18:53AM *  3 points [-]

But my vote doesn't even acausally affect others' votes: no one's thinking "I'll only vote if Will Newsome does"

It's only necessary for you and other people to make a decision for the same reasons. These reasons can be rather abstract and simple (except for the human universal component) and move many people in the same way.

Comment author: Eliezer_Yudkowsky 23 September 2010 07:55:40AM 6 points [-]

Acausal influence stems from other processes similar to you. This can be a simulated version of you, on whose action the simulating agent's choices depend. Or it can just be someone else like you, who's likely to some degree to decide the same thing for some of the same reasons.

Comment author: Mitchell_Porter 23 September 2010 09:49:35AM 1 point [-]

"Acausal influence" is superficially a contradiction, and this phrase deserves skeptical scrutiny.

The only sort of "influence" I can think of, that might defensibly be described as acausal, is the "influence" of an object (actual or possible) which is being imagined or otherwise represented in a non-perceptual way (i.e. the representation was not being caused by sense impressions ultimately caused by the object itself). But even then there may be a "causal" interpretation of where the representation's properties came from - it's just that these would be "logical causes". A representation of the Death Star has some of its properties because otherwise it wouldn't be a representation of the Death Star; it would be a representation of something else, or not a representation at all.

There seems to be a duality here. The physical properties of a physical symbol will have physical causes, while the semantic properties will have "logical" causes. I don't know how to think about these logical causes correctly - it doesn't seem right to say that they are caused by objects in other possible worlds, for example. But isn't the talk of acausal anything due simply to ignoring logical causes of properties at the semantic level?

Comment author: Will_Newsome 24 September 2010 04:34:00AM 2 points [-]

I don't think it's worthwhile to fight the terminology: 'acausal' makes sense as opposed to 'causal' as in 'causal decision theory'. I think it's pretty sensible and defensible, even if 'timeless' might've been a better choice.

Comment author: Mitchell_Porter 24 September 2010 05:02:57AM 0 points [-]

No, the more I think about it, the more I think there is a serious problem here.

"Superrationality" is just a situation in which a certain bias - a certain deviation from actual rationality - is rewarded, when enough other people have the same bias. If a bunch of people all using a "superrational decision theory" manage to achieve the big collective payoff they sought by cooperating, it's only because of the contingent fact that they happened to have a majority. And under that circumstance, ordinary decision theory would tell you to go with the flow and choose with that majority as well!

Superrationality is either an attempt to solve coordination problems through magical thinking, or it's a fancy name for visibly favoring altruism in the hope that others will too, or it's a preference for altruistic terminal values disguised as an appeal to rational self-interest.

Comment author: wedrifid 24 September 2010 05:22:20AM 1 point [-]

If a bunch of people all using a "superrational decision theory" manage to achieve the big collective payoff they sought by cooperating, it's only because of the contingent fact that they happened to have a majority.

Majority, or whatever number of cooperating people happens to be sufficient to achieve whatever goal they are trying to achieve. Because of the advantages from cooperation, the superrational contingent will often not need to be larger than the remainder.

Comment author: Nick_Tarleton 24 September 2010 09:39:00AM 0 points [-]

And under that circumstance, ordinary decision theory would tell you to go with the flow and choose with that majority as well!

Not in a prisoner's dilemma.
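A minimal payoff check of why "choose with the majority" fails here (the numbers are the standard illustrative PD payoffs, chosen only to satisfy T > R > P > S): defection dominates no matter what anyone else does, yet universal defection is worse than universal cooperation.

```python
# Row player's payoffs in a standard prisoner's dilemma: T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def payoff(i_cooperate, other_cooperates):
    if i_cooperate:
        return R if other_cooperates else S
    return T if other_cooperates else P

# Causal reasoning: defecting pays more whatever the other player does,
# so there is no "majority" worth conforming to.
for other in (True, False):
    assert payoff(False, other) > payoff(True, other)

# Yet everyone defecting leaves everyone worse off than everyone cooperating.
assert payoff(True, True) > payoff(False, False)
```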

Comment author: NihilCredo 27 September 2010 12:12:47AM 0 points [-]

Quoting from Wikipedia because I have no real expertise on decision theory:

Note that a superrational player playing against a game-theoretic rational player will defect, since the strategy only assumes that the superrational players will agree. A superrational player playing against a player of uncertain superrationality will sometimes defect and sometimes cooperate.

How exactly does superrationality differ from membership in the Club Of Always Colluding With Each Other?

Comment author: CarlShulman 23 September 2010 06:05:24AM 3 points [-]

I'm saving the decision theory apparatus (which actually multiplies the expected payoff of both political and non-political altruistic expenditures) for a later post. I couldn't fit everything into the first one.

Comment author: Vladimir_Nesov 23 September 2010 07:54:41AM 0 points [-]

Then you should've made clear that "deciding vote" is actually a lower estimate, and shouldn't be interpreted as a classical "deciding vote".

Comment author: CarlShulman 23 September 2010 01:17:41PM 0 points [-]

I added some clarifications.

Comment author: Eliezer_Yudkowsky 23 September 2010 07:53:27AM 0 points [-]

Ah, didn't see this earlier.

I don't think it multiplies the expected payoff for both in the same way. Some Bostromian division-of-responsibility principle should apply in both cases. The apparent gains are from the probability of making an important shift via group action where individual action would be unlikely to go over a tipping point, not because you're multiplying by the number of people involved.

Comment author: TobyBartels 23 September 2010 12:30:43PM 1 point [-]

If you acausally influence other people to vote, you'll also acausally influence them to spend time doing so. (And since they're like you, their time is as valuable as yours.) To a first approximation, the expected cost and benefit are proportional to the naïve (ignoring acausal influence) estimate. So the question of whether it's worth the effort should come out the same.
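In toy numbers (mine, purely illustrative): if your decision is correlated with k similar voters, both the expected benefit and the expected cost of the acausal "bloc" scale by the same k, so the benefit-to-cost ratio, and hence the worth-it question, is unchanged. This cancellation does assume you weigh the others' time as you weigh your own.

```python
# Naive single-voter estimate -- all numbers are made up for illustration.
p_decisive = 1e-6          # chance of casting the deciding vote
value_of_outcome = 1e9     # value (in some common unit) of the better outcome
cost_of_voting = 2.0       # your time spent voting, in the same unit

benefit = p_decisive * value_of_outcome

# Scaling by an acausal multiplier k leaves the ratio untouched.
for k in (1, 100, 10_000):
    assert (k * benefit) / (k * cost_of_voting) == benefit / cost_of_voting
print(benefit / cost_of_voting)  # 500.0 either way
```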

Comment author: Vladimir_Nesov 23 September 2010 01:57:46PM 0 points [-]

Other people's time is not as valuable as yours (to you).

Comment author: SilasBarta 23 September 2010 04:05:23PM *  -1 points [-]

Darn, you beat me to it! Given that your decision and others' decisions stem from a common cause, and you are highly correlated with them (compared to chance), your decision is informative about their decisions. (You can think of it as deciding which world you "wake up" in.) I had previously elaborated on how to apply this reasoning to PD-like problems:

In a world of identical beings, they would all "wake up" from any Prisoner's Dilemma situation finding that they had both defected, or both cooperated. Viewed in this light, it makes sense to cooperate, since it will mean waking up in the pure-cooperation world, even though your decision to cooperate did not literally cause the other parties to cooperate (and even though you perceive it this way).

Making the situation more realistic does not change this conclusion either. Imagine you are positively, but not perfectly, correlated with the other beings; and that you go through thousands of PDs at once with different partners. In that case, you can defect, and wake up having found partners that cooperated. Maybe there are many such partners. However, from the fact that you regard it as optimal to always defect, it follows that you will wake up in a world with more defecting partners than if you had regarded it as optimal in such situations to cooperate.

As before, your decision does not cause others to cooperate, but it does influence what world you wake up in.
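One way to make this concrete (the payoffs and correlation numbers are mine, for illustration only): treat the correlation as conditional probabilities and compare the expected payoff of each action.

```python
# Standard PD payoffs to you: T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def expected_payoff(i_cooperate, q_coop, q_defect):
    """q_coop = P(partner cooperates | I cooperate);
    q_defect = P(partner cooperates | I defect)."""
    q = q_coop if i_cooperate else q_defect
    if i_cooperate:
        return q * R + (1 - q) * S
    return q * T + (1 - q) * P

# Positive but imperfect correlation between you and your partners:
ev_coop = expected_payoff(True, 0.8, 0.2)    # 0.8*3 + 0.2*0 = 2.4
ev_defect = expected_payoff(False, 0.8, 0.2) # 0.2*5 + 0.8*1 = 1.8
print(ev_coop, ev_defect)
```

With no correlation (q_coop equal to q_defect) the ranking flips back to favoring defection, matching the usual causal analysis.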

Also, if I go the opposite route and use Schwitzgebel's model and decision theory, that's not a good argument to justify voting: with a population of 100,000,000, you actually have far less than a 1e-8 chance of swinging the outcome, because the other votes are unlikely (under this causal model) to split exactly 50/50 apart from your own.
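A sketch of that binomial arithmetic (this is a generic independent-voters binomial model with a free parameter p, not necessarily Schwitzgebel's exact one): the chance that the other 100,000,000 votes split exactly evenly, computed in log space to avoid underflow.

```python
from math import lgamma, log, exp, pi, sqrt

def log_prob_exact_tie(n_others, p):
    """Natural log of P(the other n_others votes split exactly 50/50),
    each modeled as an independent vote for side A with probability p.
    n_others must be even."""
    k = n_others // 2
    return (lgamma(n_others + 1) - 2 * lgamma(k + 1)
            + k * log(p) + k * log(1 - p))

N = 100_000_000
tie_at_half = exp(log_prob_exact_tie(N, 0.5))
print(tie_at_half)                   # ~8e-5, close to sqrt(2 / (pi * N))
print(log_prob_exact_tie(N, 0.505))  # around -5000 (natural log): vanishingly unlikely
```

So the naive 1-in-N guess is too low if the electorate really is balanced at exactly p = 0.5, and astronomically too high for any p noticeably away from 0.5, which is the asymmetry being pointed at here.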