TimS comments on "Politics is the mind-killer" is the mind-killer - Less Wrong
Comments (99)
I like the idea of starting a Politics Open Thread if it means I won't see any more political comments elsewhere on LW. Also it would work as a nice experiment to convince libertines like you that encouraging political discussion isn't a good idea, or convince curmudgeons like me that it is.
I like the idea of a political theory thread, but before I do it, I think it's worthwhile to think about some ground rules in order for it to be productive.
Any other points I should add (particularly about voting/karma)?
"Arguments are soldiers" is practically the definition of democracy. In theory, if my arguments are persuasive enough it will determine whether or not my neighbors or I can continue doing X or start doing Y without being fined, jailed, or killed for it. Depending on what great things I like to do or what horrible things I want to prevent my neighbors from doing, that's an awfully powerful incentive for me to risk a few minds being killed.
Now, in practice we mostly live in near-megaperson cities in multi-megaperson districts of near-gigaperson countries, whereas my above theory mostly applies to hectoperson and kiloperson tribes. But my ape brain can't quite internalize that, so the subconscious incentive remains.
But that's not even the worst of it! I try to read a range of liberal, conservative, libertarian, populist etc. news and commentary, just so that the gaps in each don't overlap so much... but it requires a conscious effort. Judging by the groupthink in reader comments on these sites, most people's behavior is the opposite of mine. Why not? Reading about how right you are is fun; reading about how wrong you are is not.
It would be very easy for new would-be LessWrong readers to see the politics threads and jump to conclusions like "Oh, these people think they're so smart, but they're actually a bunch of Blues! A wise Green like me should look elsewhere for rationality." Repeat for a few years, and the average LessWrong biases really do start to skew Blue, even bad Blue-associated ideas start going unchallenged, etc.
I think I would still love to read what LessWrong users have to say about politics. Probably on a different site. With unconnected karma and preferably unconnected pseudonyms.
I don't read about how I am wrong. I only read about how other people (sometimes including my former selves) are wrong, and that's fun too.
Respectfully, that's not a correct use of the metaphor. The point is that unwillingness to disagree with other positions simply because those positions reach the desired conclusion is evidence of being mindkilled. You don't shoot soldiers on your side, but for those thinking rationally, arguments are not soldiers, so bad ideas should always be challenged.
This is a real risk, but it's worth assessing (and figuring out how to assess) how likely it is to occur.
By "thinking rationally", you must mean epistemically, not instrumentally.
If (to use as Less-Wrong-politically-neutral an allegory as I can) you are vastly outnumbered by citizens who are wondering if maybe those birds were an omen telling us that Jupiter doesn't want heretics thrown to the lions anymore, I agree that the epistemically rational thing to do is point out that we don't have much evidence for the efficacy of augury or the existence of Jupiter, but the instrumentally rational thing to do is to smile, nod, and point out that eagles are well-known to convey the most urgent of omens. In more poetic words: you don't shoot soldiers on your side.
The metaphor seems to be as correct as any mere metaphor can get. Is it such a stretch to call an argument a "soldier" for you when it's responsible for helping defend your life, liberty, or property?
First, that's not the metaphor we were discussing. Second, the metaphor you are using allows arguments to be soldiers of any ideology, not simply democracy.
I have read "Politics is the Mind-Killer" and am discussing the same metaphor. For that matter, I'm practically recapitulating the same metaphor, to make an even stronger point: not only can politics provoke irrational impulses to support poor arguments on your "side", politics can create instrumentally rational incentives to (publicly, visibly, not internally) support poor arguments. Sometimes you support a morally dubious soldier because of jingoism, sometimes you support him because he's the best defense between you and an even worse soldier.
Would you be more specific about how you think my use of the metaphor is different and/or invalid?
I do think I've given a compelling counterexample to "bad ideas should always be [publicly] challenged" (my apologies if the implicit [publicly] was not your intended claim, but the context is that of a proposed public discussion). Have you changed your mind about that claim, or do you see a problem with my reasoning? For that matter, in my hypothetical political forum, would you be arguing for atheism or for more compassionate augury yourself?
The preposition of your second sentence suggests a miscommunication of my initial claim. I didn't intend to say "arguments are soldiers of democracy", but rather "arguments are soldiers in a democracy". You're still right that this also applies to non-democracies: in any state where public opinion affects political policy, incentives exist to try and steer opinion towards instrumentally rational ends even if this is done via epistemically irrational means. Unlimited democracy is just an abstract maximum of this effect, not the only case where it applies.
In brief, I think my interpretation is right because it is consistent with the intended lesson, which is "Don't talk about Politics on LessWrong." In other words, I understood the point of the story to be that treating arguments as soldiers interferes with believing true things.
I agree that "bad ideas should be publicly challenged" is only true if what I'm trying to do is believe true theories and not believe false theories. If I'm trying to change society (i.e. do politics), I shouldn't antagonize my allies. The risk is that I will go from disingenuously defending my allies' wrong claims to sincerely believing my allies' wrong claims, even in the face of the evidence. That's being mindkilled. In short, engaging in the coalition-building necessary to do politics is claimed to cause belief in empirically false things. I.e. "Politics is the Mindkiller."
My interpretation could be summarized in similar fashion as "really, really, don't talk about politics on LessWrong" - whether this is "consistent" or not depends on your definition of that word.
I agree with your interpretation of the point of the story... and with pretty much everything else you wrote in this comment, which I guess leaves me with little else to say.
Although, that's an example of another issue with political forums, isn't it? In an academic setting, if a speaker elicits informed agreement from the audience about their subject, that means we've all got more shared foundational material with which to build the discussion of a closely related subsequent topic. Difficult questions without obvious unanimous answers do get reached eventually, but only after enough simpler related problems have been solved to make the hard questions tractable.
Politics instead turns into debates, where discussions shut down once agreement occurs, then derail onto the less tractable topics where disagreement is most heated. Where would we be if Newton had decided "Yeah, Kepler's laws seem accurate; let me just write 'me too' and then we're on to weather prediction"?
Are some ideologies more objectively correct than others? (Abolitionists used ostracism and violence to prevail against those who would return fugitive slaves south. Up until the point of violence, many of their arguments were "soldiers." One such "soldier" was Spooner's "The Unconstitutionality of Slavery" --from the same man who later wrote "the Constitution of No Authority." He personally believed that the Constitution had no authority, but since it was revered by many conformists, he used a reference to it to show them that they should alter their position to support of abolitionism. Good for him!)
If some ideologies are more correct than others, then those arguments which are actually soldiers for those ideologies have strategic utility, but only as strategic "talking points," "soldiers," or "sticky" memes. Then, everyone who agrees with using those soldiers can identify them as such (strategy), and decide whether each is a good strategic argument, a good philosophical argument, both, or neither.
You seem to have excluded a middle option, namely "I am in favor of heretics not being thrown to the lions, and no amount of bird-related omen interpretation will sway my opinion on the subject one way or another."
Seconded on the different site, unconnected karma and unconnected pseudonyms. Also, it'd be nice if it could somehow be somewhat dissociated from LW... might be useful to have a link to it easily visible, actually, but if there is one it should be right next to a specification explaining the idea and linking to "politics is the mind-killer".
Separately, the idea of retaining a taboo on things like discussing politicians or the like, and restricting it to mostly issue discussions, also sounds useful.
Downvote spam, but otherwise avoid voting up or down - we're likely to be voting for biased reasons.
That's an awesome idea. Maybe amend it to "downvote spam, otherwise vote everything toward 0" so a minority of politically-motivated voters can't spoil the game for everyone else?
In addition to my other comment, I think it will be hard to enforce a voting norm that is so inconsistent with the voting norms on the rest of the site.
Disagree; there are successful instances of using karma in ways inconsistent with the rest of the site.
The most important counterexample here is Will Newsome's Irrationality Game post, where voting norms were reversed: the weirdest/most irrational beliefs were upvoted the most, and the most sensible/agreeable beliefs were downvoted into invisibility. Many of the comments in that thread, especially the highest-voted, have disclaimers indicating that they operate according to a different voting metric. There is no obvious indication that anyone was confused or malicious with regard to the changed local norm.
Hmm. I like the idea that expressing an idea well is rewarded, which your suggestion doesn't allow. Trying to figure out how to decide between them.
Hmm. How about:
Spam is not engagement, but the poster whose posting led to this discussion post was not really interested in a discussion.
Sounds good. Has a side-effect of there being a perceived cost for posting in the thread; you're more likely to be downvoted.
I generally counsel not downvoting for disagreement anywhere on the site. I think this needs to be stronger.
Mm. I sometimes upvote for things I think are good ideas, as an efficient alternative to a comment saying "Yes, that's right." I sometimes downvote for things I think are bad ideas, as an alternative to a comment saying "Nope, that's wrong." While I would agree that in the latter case a downvote isn't as good as a more detailed comment explaining why something is wrong, I do think it's better than nothing.
So, consider this an opportunity to convince someone to your position on downvotes, if you want to: why ought I change my behavior?
Voting is there to encourage/discourage some kinds of comments. We don't want people to not make comments just because we disagree with their contents, so we shouldn't downvote comments for disagreement.
If someone makes a good, well-reasoned comment in favor of a position I disagree with, that merits an upvote and a response.
It might be nice to have a mechanism for voting "agree/disagree" in addition to "high quality / low quality" (as I proposed 3 years ago), but in the absence of such a mechanism we should avoid mixing our signals.
The comments that float to the top should be the highest-quality ones, not the ones most in line with the LW party line.
And people should be rewarded for making high-quality comments and punished for making low-quality comments, not rewarded for expressing popular opinions and punished for expressing unpopular opinions.
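For concreteness, here is a minimal sketch, in Python and with entirely hypothetical names, of how that proposed second voting axis might be modeled: quality votes feed ranking and karma, while agreement votes are recorded and displayed but affect neither.

    # A minimal sketch (all names hypothetical) of a two-axis voting record.
    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        text: str
        quality_votes: list = field(default_factory=list)    # entries of +1 or -1
        agreement_votes: list = field(default_factory=list)  # entries of +1 or -1

        def quality_score(self):
            # Sorting and karma would use only this axis.
            return sum(self.quality_votes)

        def agreement_score(self):
            # Shown alongside the comment; ignored by ranking.
            return sum(self.agreement_votes)

    # A well-argued comment for an unpopular position can then rank highly
    # even while most readers register disagreement:
    c = Comment("example_user", "A well-reasoned case for an unpopular view.")
    c.quality_votes += [1, 1, 1]
    c.agreement_votes += [-1, -1, 1]
    assert c.quality_score() == 3
    assert c.agreement_score() == -1

Under a scheme like this, "that's wrong" and "that's badly argued" would no longer have to share a single signal.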
I agree that good, well-reasoned comments don't merit downvotes, even if I disagree with the position they support. I agree that merely unpopular opinions don't merit downvotes. I agree that low-quality comments in line with the LW party line don't merit upvotes. I agree that merely popular opinions don't merit upvotes. I agree that voting is there to encourage and discourage some kinds of comments.
What's your position on downvoting a neither-spectacularly-well-or-poorly-written comment expressing an idea that's simply false?
I don't think that type of comment should be downvoted except when the author can't take a hint and continues posting the same false idea repeatedly. Downvoting false ideas won't prevent well-intentioned people from making mistakes or failing to understand things, mostly it would just discourage them from posting at all to whatever extent they are bothered by the possibility of downvotes.
I agree with User:saturn.
An idea that's false but "spectacularly well-written" should be downvoted to the extent of its destructiveness. Stupidity (the tendency toward unwitting self-destruction) is what we're trying to avoid here, right? We're trying to avoid losing. Willful ignorance of the truth is an especially damaging form of stupidity.
Two reasonably intelligent people are unlikely to arrive at completely different and antithetical viewpoints. Thus, the very well-written but false viewpoint is far more damaging than the clearly stupid false viewpoint. If this site helps people avoid damaging their property (their brains, their bodies, their material possessions), or minimizes systemic damage to those things, then it's more highly functional, and the value is apparent even to casual observers.
Such a value is sure to be adopted and become "market standard." That seems like the best possible outcome, to me.
So, if a comment is seemingly very well-reasoned, but false, it will actually help to expand irrationality. Moreover, it's more costly to address the idea, because it "seems legit." Thus, to not sound like a jerk, you have to expend energy on politeness and form that could normally be spent on addressing substance.
HIV tricks the body into believing it's harmless by continually changing and "living to fight another day." If it was a more obvious threat, it would be identified and killed. I'd rather have a sudden flu that makes me clearly sick, but that my body successfully kills, than HIV that allows me to seem fine, but slowly kills me in 10 years. The well-worded but false argument is like a virus that slips past your body's defenses or neutralizes them. That's worse than a clearly dangerous poison because it isn't obviously dangerous.
False ideas are most dangerous when they seem to be true. And such ideas don't need to seem true to smart people; it's enough for them to seem true to 51% of voters.
If 51% of voters can't find fault with a false idea, it can be as damaging as "the state should own and control all property." Result: millions murdered (and we still dare not talk about it, lest we be accused of being "mindkilled" or "rooting for team A to the detriment of team B" --as if avoiding mass murder weren't enough of a reason for rooting for a properly-identified "right team").
Now, what if there's a reasonable disagreement, from people who know different things? Then evidence should be presented, and the final winner should become clear, or a vital area where further study is needed can be identified.
If reality is objective, but humans are highly subjective creatures due to limited brain (neocortex) size, then argument is a good way to make progress toward a LessWrong site that exhibits emergent intelligence.
I think that's a good way to use the site. I would prefer to have my interactions with this site lead me to undiscovered truths. If absolutely everyone here believes in the "zero universes" theory, then I'll watch more "Google tech talks" and read more white papers on the subject, allocating more of my time to comprehending it. If everyone here says it's a toss-up between that and the multiverse theory, or "none of the above," I might allocate my time to an entirely different and "more likely to yield results" subject.
In any case, there is an objective reality that all of us share "common ground" with. Thus, false arguments that appear well reasoned are always poorly-reasoned, to some extent. They are always a combination of thousands of variables. Upranking or downranking is a means for indicating which variables we think are more important, and which ones we think are true or false.
The goal should always be an optimal outcome, including an optimal prioritization.
Suppose you have the best recipe ever for a stevia-sweetened milkshake, your argument for it is true, valid, and good, and I make the milkshake, find it's the best thing ever, and it contains other healthy ingredients that I think will help me live longer. That serves a rational goal: I'm drinking something tasty, living longer, etc. However, if I downvote the comment because I don't want LessWrong to turn into a recipe-posting board, that might be more rational still.
What's the greatest purpose to which a tool can be used? True, I can use my pistol to hammer in nails, but if I do that, and I eventually need a pistol to defend my life, I might not have it, due to years of abuse or "sub-optimal use." Also, if I survive attacks against me, I can buy a hammer.
A LessWrong "upvote" contains an approximation of all of that: truth, utility, optimality, prioritization, importance, relevance to the community, etc. Truth is a kind of utility. If we didn't care about utility, we might discuss purely provincial interests. However, LessWrong is interested in eliminating bad thinking, and it thus makes sense to start with the worst thinking, around which there is the least "wiggle room."
If I have facial hair (or am gay), Ayn Rand followers might not like me. Ayn Rand often defended capitalism. By choosing to distance herself from people over their facial hair, she failed to prioritize her views rationally, and to perceive how others would shape her views into a cult through their extended lack of proper prioritization. So, in some ways, Rand (like the still worse Reagan) helped to delegitimize capitalism. Still, if you read what she wrote about capitalism, she was 100% right, and if you read what she wrote about facial hair, she was 100% superficial and doltish. So, on an Ayn Rand forum, if someone begins defending Rand's disapproval of facial hair, I might point out that in 2006 the USA experienced a systemic shock to its fiat currency system, and try to direct the conversation to more important matters.
I might also suggest leaving the discussions of facial hair to Western wear discussion boards.
It's vital to ALWAYS include an indication of how important a subject is. That's how marketplaces of ideas focus their trading.
Well, to the extent of its net destructiveness... that is, the difference between the destructiveness of the idea as it manifests in the specific comment, and the destructiveness of downvoting it.
But with that caveat, sure, I expect that's true.
That said, the net destructiveness of most of the false ideas I see here is pretty low, so this isn't a rule that is often relevant to my voting behavior. Other considerations generally swamp it.
That said, I have to admit I did not read this comment all the way through. Were it not a response to me, which I make a habit of not voting on, I would have downvoted it for its incoherent wall-of-text nature.
I think the norm is pretty strong. I tend to downvote for stupid, not just wrong. But it will need to be explicitly reinforced.
Edit: The norm on the site is also different if you are participating in the conversation (try not to downvote at all) or simply observing.
To call "don't downvote if I'm in the conversation" a local norm might be overstating the case. I've heard several people assert this about their own behavior, and there are good reasons for it (and equally good reasons for not upvoting if I'm in the conversation), but my own position is more "distrust the impulse to vote on something I'm emotionally engaged with."
I like that, and I think I'll use something like that in the guidelines.
To echo Alejandro1, downvotes should also go to comments which break the rules.
(There's no way to break the rule on posting too fast. That's one I'd break. Because yeah, we ought not to be able to come close to thinking as fast as our hands can type. What a shame that would be. ...Or can a well-filtered internet forum --which prides itself on being well-filtered-- have "too much information"?)
Downvoted for fallacy of gray, and because I'm feeling ornery today.
There's no fallacy of gray in there. Since votes count just as much in the thread, and our votes will be much more noisy, it would often be best to refrain from voting there. If anything, I might have expected to be accused of the opposite fallacy.
This qualification makes it not the fallacy of gray. If that qualifier was implicit from context above, I simply missed it.
I still don't see how that would relate to the fallacy of gray.
Perhaps a norm of using the anti-kibitzer for the thread?
I'm not sure that's a help for biased voting patterns (which would probably come from the views being expressed), but it might help prevent local mind-killing from spilling out onto the rest of the site.
But I don't think there's an easy mechanism for that, and comments will still show up in 'recent comments' under discussion.