Brian_Tomasik comments on Common sense as a prior - LessWrong

Post author: Nick_Beckstead 11 August 2013 06:18PM


Comment author: Brian_Tomasik 13 August 2013 01:13:43AM 1 point

Elite common sense says that voting is important for altruistic reasons.

This is because they're deontologists, not because they're consequentialists with a linear utility function. So rather than suggesting more similarity in values, it suggests less. (That said, there's more overlap between deontology and consequentialism than meets the eye.)

So I think it's best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.

It may be best to examine on a case-by-case basis. We don't need to just look at what people are doing and make inferences; we can also look at other psychological hints about how they feel regarding a given issue. Nick did suggest giving greater weight to what people believe (or, in this case, what they do) than their stated reasons for those beliefs (or actions), but he acknowledges this recommendation is controversial (e.g., Ray Dalio disagrees), and on some issues it seems like there's enough other information to outweigh whatever inferences we might draw from actions alone. For example, we know people tend to be irrational in the religious domain based on other facts and so can somewhat discount the observed behavior there.

Points taken on the other issues we discussed.

Comment author: JonahSinick 13 August 2013 03:59:45AM 0 points

This is because they're deontologists, not because they're consequentialists with a linear utility function.

How do you know this? Do you think that these people would describe their reason for voting as deontological?

Comment author: Brian_Tomasik 13 August 2013 04:18:28AM 2 points

Oh, definitely. The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.

Just ask people why they vote, and most of them will say things like "It's a civic duty," "Our forefathers died for this, so we shouldn't waste it," "If everyone didn't vote, things would be bad," ...

I Googled the question and found similar responses in this article:

One reason that people often offer for voting is “But what if everybody thought that way?” [...]

Another reason for voting, offered by political scientists and lay individuals alike, is that it is a civic duty of every citizen in a democratic country to vote in elections. It’s not about trying to affect the electoral outcome; it’s about doing your duty as a democratic citizen by voting in elections.

Interestingly, the author also says: "Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election)." This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations "magical thinking."
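The "limit as everyone approaches identity with you" point can be made concrete with a toy calculation (my own sketch with illustrative parameter names, not anything from the cited article): under purely causal reasoning your ballot controls exactly one vote, but if each of n similar voters mirrors your decision with probability rho, your choice effectively controls more.

```python
# Toy model of the "limit as everyone approaches identity with you".
# n_similar and rho are illustrative assumptions, not data.

def causal_votes_controlled():
    # Under causal reasoning, your decision moves your own ballot and
    # nothing else.
    return 1

def timeless_votes_controlled(n_similar, rho):
    # Under timeless reasoning, each of n_similar voters mirrors your
    # decision with probability rho. rho = 0 recovers the causal answer;
    # rho = 1 is the perfect-copy limit.
    return 1 + n_similar * rho
```

With rho near zero this reduces to the article author's claim that your decision doesn't affect other voters; in the perfect-copy limit it plainly does not.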

Comment author: CarlShulman 13 August 2013 08:41:20PM 2 points

Just ask people why they vote, and most of them will say things like "It's a civic duty," "Our forefathers died for this, so we shouldn't waste it," "If everyone didn't vote, things would be bad," ...

These views reflect the endorsements of various trusted political figures and groups, the active promotion of voting by those with more individual influence, and the raw observation of outcomes affected by bulk political behavior.

In other words, the common sense or deontological rules of thumb are shaped by the consequences, as the consequences drive moralizing activity. Joshua Greene has some cute discussion of this in his dissertation:

I believe that this pattern is quite general. Our intuitions are not utilitarian, and as a result it is often possible to devise cases in which our intuitions conflict with utilitarianism. But at the same time, our intuitions are somewhat constrained by utilitarianism. This is because we care about utilitarian outcomes, and when a practice is terribly anti-utilitarian, there is, sooner or later, a voice in favor of abolishing it, even if the voice is not explicitly utilitarian. Take the case of drunk driving. Drinking is okay. Driving is okay. Doing both at the same time isn’t such an obviously horrible thing to do, but we’ve learned the hard way that this intuitively innocuous, even fun, activity is tremendously damaging. And now, having moralized the issue with the help of organizations like Mothers Against Drunk Driving—what better moral authority than Mom?—we are prepared to impose very stiff penalties on people who aren’t really “bad people,” people with no general anti-social tendencies. We punish drunk driving and related offenses in a way that appears (or once appeared) disproportionately harsh because we’ve paid the utilitarian costs of not doing so.39 The same might be said of harsh penalties applied to wartime deserters and draft-dodgers. The disposition to avoid situations in which one must kill people and risk being killed is not such an awful disposition to have, morally speaking, and what could be a greater violation of your “rights” than your government’s sending you, an innocent person, off to die against your will?40 Nevertheless we are willing to punish people severely, as severely as we would punish violent criminals, for acting on that reasonable and humane disposition when the utilitarian stakes are sufficiently high.41

Comment author: JonahSinick 13 August 2013 02:34:13PM 1 point

The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.

Explicitly yes, but implicitly...?

Just ask people why they vote,

Do you have in mind average people, or, e.g., top 10% Ivy Leaguers ... ?

Just ask people why they vote, and most of them will say things like "It's a civic duty," "Our forefathers died for this, so we shouldn't waste it," "If everyone didn't vote, things would be bad," ...

These reasons aren't obviously deontological (even though they might sound like they are on first hearing). As you say in your comment, timeless decision theory is relevant (transparently so in the last two of the three reasons that you cite).

Even if people did explicitly describe their reasons as deontological, one still wouldn't know whether this was the case, because people's stated reasons are often different from their actual reasons.

One would want to probe here to try to tell whether these things reflect terminal values or instrumental values.

Comment author: Brian_Tomasik 13 August 2013 05:41:16PM 0 points

Do you have in mind average people, or, e.g., top 10% Ivy Leaguers ... ?

Both. Remember that many Ivy Leaguers are liberal-arts majors. Even many who are quantitatively oriented aren't, I suspect, familiar with the literature. I guess it takes a certain level of sophistication to think that voting doesn't make a difference in expectation, so maybe most people fall into the bucket of those who haven't really thought about the matter rigorously at all. (Remember, we're including English and Art majors here.)

You could say, "If they knew the arguments, they would be persuaded," which may be true, but that doesn't explain why they already vote without knowing the arguments. Explaining that suggests deontology as a candidate hypothesis.

These reasons aren't obviously deontological (even though they might sound like they are on first hearing).

  • "It's a civic duty" is deontological if anything is, because deontology is duty-based ethics.
  • "If everyone didn't vote, things would be bad" is an application of Kant's categorical imperative.
  • "Our forefathers died for this, so we shouldn't waste it" is not deontological -- just the sunk-cost fallacy.

Even if people did explicitly describe their reasons as deontological, one still wouldn't know whether this was the case, because people's stated reasons are often different from their actual reasons.

At some point it may become a debate about the teleological level at which you assess their "reasons." As individuals, it's very likely the value of voting is terminal in some sense, based on cultural acculturation. Taking a broader view of why society itself developed this tendency, you might say that it did so for more consequentialist / instrumental reasons.

It's similar to assessing the "reason" why a mother cares for her child. At an individual / neural level it's based on reward circuitry. At a broader evolutionary level, it's based on bequeathing genes.

Comment author: JonahSinick 13 August 2013 08:59:08PM 1 point

The main point to my mind here is that apparently deontological beliefs may originate from a combination of consequentialist values with an implicit understanding of timeless decision theory.

Comment author: wedrifid 13 August 2013 04:24:06AM 0 points

Interestingly, the author also says: "Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election)." This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations "magical thinking."

He may also be a two-boxer who thinks that one-boxing is magical thinking. However, this instance doesn't demonstrate that. Acting as if other agents will conditionally cooperate when they in fact will not is an error; indeed, it will prompt actual timeless decision theorists to defect against you.

Comment author: Brian_Tomasik 13 August 2013 05:33:01AM 2 points

Thanks! I'm not sure I understood your comment. Did you mean that if the other agents aren't similar enough to you, it's an error to assume that your cooperating will cause them to cooperate?

I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.

Comment author: wedrifid 13 August 2013 05:43:11AM 1 point

Did you mean that if the other agents aren't similar enough to you, it's an error to assume that your cooperating will cause them to cooperate?

Yes, specifically similar with respect to decision theory implementation.

I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.

He seems to be talking about humans as they exist. If (or when) he generalises to all agents, he starts being wrong.

Comment author: Brian_Tomasik 13 August 2013 05:49:38AM 1 point

Even among humans, there's something to timeless considerations, right? If you were in a real prisoner's dilemma with someone you didn't know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate? I don't claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.

Comment author: wedrifid 13 August 2013 06:50:27AM 2 points

Even among humans, there's something to timeless considerations, right? If you were in a real prisoner's dilemma with someone you didn't know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate?

Yes, it applies among (some of) that class of humans.

I don't claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.

Yes.
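The prisoner's-dilemma case discussed above can be sketched numerically (a toy model with payoffs of my choosing, where the mirroring probability p stands in for "similarity" between you and the other player):

```python
# Standard prisoner's-dilemma payoffs, temptation > reward > punishment
# > sucker. The specific numbers are illustrative assumptions.
T, R, P, S = 5, 3, 1, 0

def ev_cooperate(p):
    # With probability p the similar player mirrors you and cooperates
    # (you get R); otherwise they defect (you get S).
    return p * R + (1 - p) * S

def ev_defect(p):
    # With probability p the similar player mirrors you and defects
    # (you get P); otherwise they cooperate (you get T).
    return p * P + (1 - p) * T

# Cooperation wins once p*R + (1-p)*S > p*P + (1-p)*T, i.e.
# p > (T - S) / (T - S + R - P). For these payoffs that is 5/7.
threshold = (T - S) / (T - S + R - P)
```

On this sketch, the other player needs to be quite similar to you (p above roughly 0.71 for these payoffs) before cooperating pays, which fits the point that the effect is real among sufficiently similar humans but usually too weak to operate in the realm of voting.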