JonahSinick comments on Common sense as a prior - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (212)
How do you know this? Do you think that these people would describe their reason for voting as deontological?
Oh, definitely. The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Just ask people why they vote, and most of them will say things like "It's a civic duty," "Our forefathers died for this, so we shouldn't waste it," "If everyone didn't vote, things would be bad," ...
I Googled the question and found similar responses in this article:
Interestingly, the author also says: "Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election)." This may be mostly true in practice, but it breaks down in the limit as the other voters approach being identical to you. Based on his statements, this author seems to be a two-boxer. He calls timeless considerations "magical thinking."
These views reflect the endorsements of various trusted political figures and groups, the active promotion of voting by those with more individual influence, and the raw observation of outcomes affected by bulk political behavior.
In other words, the common sense or deontological rules of thumb are shaped by the consequences, as the consequences drive moralizing activity. Joshua Greene has some cute discussion of this in his dissertation:
Explicitly yes, but implicitly...?
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers ... ?
These reasons aren't obviously deontological (even though they might sound like they are on first hearing). As you say in your comment, timeless decision theory is relevant (transparently so in the last two of the three reasons that you cite).
Even if people did explicitly describe their reasons as deontological, one still wouldn't know whether their reasons actually were deontological, because people's stated reasons are often different from their actual reasons.
One would want to probe here to try to tell whether these things reflect terminal values or instrumental values.
Both. Remember that many Ivy Leaguers are liberal-arts majors. Even many who are quantitatively oriented, I suspect, aren't familiar with the literature. I guess it takes a certain level of sophistication to think that voting doesn't make a difference in expectation, so maybe most people fall into the bucket of those who haven't really thought about the matter rigorously at all. (Remember, we're including English and Art majors here.)
You could say, "If they knew the arguments, they would be persuaded," which may be true, but that doesn't explain why they already vote without knowing the arguments. Explaining that suggests deontology as a candidate hypothesis.
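The "doesn't make a difference in expectation" claim above can be made concrete with a back-of-the-envelope model. A minimal sketch, assuming each of n voters independently votes 50/50 between two candidates, so one vote is decisive only on an exact tie (the central binomial term, approximated via Stirling):

```python
import math

def p_decisive(n_voters: int) -> float:
    """Approximate probability that one extra vote breaks an exact tie
    among n_voters voting 50/50 independently:
    C(n, n/2) * 2**-n ~= sqrt(2 / (pi * n))."""
    return math.sqrt(2.0 / (math.pi * n_voters))

for n in (10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} voters: P(decisive) ~ {p_decisive(n):.2e}")
```

Even this model overstates the case for real elections (where the expected split is rarely exactly 50/50), but it shows why the naive expected value of a single vote is tiny at national scale.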
At some point it may become a debate about the teleological level at which you assess their "reasons." As individuals, it's very likely the value of voting is terminal in some sense, based on cultural acclimation. Taking a broader view of why society itself developed this tendency, you might say that it did so for more consequentialist / instrumental reasons.
It's similar to assessing the "reason" why a mother cares for her child. At an individual / neural level it's based on reward circuitry. At a broader evolutionary level, it's based on bequeathing genes.
The main point to my mind here is that apparently deontological beliefs may originate from a combination of consequentialist values with an implicit understanding of timeless decision theory.
He may also be a two-boxer who thinks that one-boxing is magical thinking. However, this instance doesn't demonstrate that. Acting as if other agents will conditionally cooperate when they in fact will not is an error. In fact, it will prompt actual timeless decision theorists to defect against you.
Thanks! I'm not sure I understood your comment. Did you mean that if the other agents aren't similar enough to you, it's an error to assume that your cooperating will cause them to cooperate?
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
Yes, specifically similar with respect to decision theory implementation.
He seems to be talking about humans as they exist. If (or when) he generalises to all agents he starts being wrong.
Even among humans, there's something to timeless considerations, right? If you were in a real prisoner's dilemma with someone you didn't know but who was very similar to you and had read a lot of the same things, it seems plausible that you should cooperate. I don't claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
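The "less-than-perfect copies" point can be sketched numerically. This is a toy correlation model of my own (an assumption, not anyone's formal decision theory): with probability rho the other agent's decision procedure mirrors my move, and otherwise it outputs the opposite. Under standard payoffs, cooperating becomes the better bet once rho is high enough, well short of a perfect copy:

```python
# Row player's payoffs, keyed by (my move, their move).
# Standard ordering: T=5 > R=3 > P=1 > S=0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(my_move: str, rho: float) -> float:
    """Toy model (assumption): the other agent mirrors my move with
    probability rho, and plays the opposite move otherwise."""
    opposite = "D" if my_move == "C" else "C"
    return rho * PAYOFF[(my_move, my_move)] + (1 - rho) * PAYOFF[(my_move, opposite)]

for rho in (0.5, 0.75, 1.0):
    ec, ed = expected_payoff("C", rho), expected_payoff("D", rho)
    print(f"rho={rho}: E[C]={ec:.2f}, E[D]={ed:.2f} -> play {'C' if ec > ed else 'D'}")
```

With these payoffs, cooperation wins whenever 3*rho > 5 - 4*rho, i.e. rho > 5/7: correlation well below 1 already suffices, which is the point about imperfect copies.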
Yes, it applies among (some of) that class of humans.
Yes.