
Politics is hard mode

27 RobbBB 21 July 2014 10:14PM

Summary: I don't think 'politics is the mind-killer' works well rhetorically. I suggest 'politics is hard mode' instead.


 

Some people in and catawampus to the LessWrong community have objected to "politics is the mind-killer" as a framing (/ slogan / taunt). Miri Mogilevsky explained on Facebook:

My usual first objection is that it seems odd to single politics out as a “mind-killer” when there’s plenty of evidence that tribalism happens everywhere. Recently, there has been a whole kerfuffle within the field of psychology about replication of studies. Of course, some key studies have failed to replicate, leading to accusations of “bullying” and “witch-hunts” and what have you. Some of the people involved have since walked their language back, but it was still a rather concerning demonstration of mind-killing in action. People took “sides,” people became upset at people based on their “sides” rather than their actual opinions or behavior, and so on.

Unless this article refers specifically to electoral politics and Democrats and Republicans and things (not clear from the wording), “politics” is such a frightfully broad category of human experience that writing it off entirely as a mind-killer that cannot be discussed or else all rationality flies out the window effectively prohibits a large number of important issues from being discussed, by the very people who can, in theory, be counted upon to discuss them better than most. Is it “politics” for me to talk about my experience as a woman in gatherings that are predominantly composed of men? Many would say it is. But I’m sure that these groups of men stand to gain from hearing about my experiences, since some of them are concerned that so few women attend their events.

In this article, Eliezer notes, “Politics is an important domain to which we should individually apply our rationality — but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.” But that means that we all have to individually, privately apply rationality to politics without consulting anyone who can help us do this well. After all, there is no such thing as a discussant who is “rational”; there is a reason the website is called “Less Wrong” rather than “Not At All Wrong” or “Always 100% Right.” Assuming that we are all trying to be more rational, there is nobody better to discuss politics with than each other.

The rest of my objection to this meme has little to do with this article, which I think raises lots of great points, and more to do with the response that I’ve seen to it — an eye-rolling, condescending dismissal of politics itself and of anyone who cares about it. Of course, I’m totally fine if a given person isn’t interested in politics and doesn’t want to discuss it, but then they should say, “I’m not interested in this and would rather not discuss it,” or “I don’t think I can be rational in this discussion so I’d rather avoid it,” rather than sneeringly reminding me “You know, politics is the mind-killer,” as though I am an errant child. I’m well-aware of the dangers of politics to good thinking. I am also aware of the benefits of good thinking to politics. So I’ve decided to accept the risk and to try to apply good thinking there. [...]

I’m sure there are also people who disagree with the article itself, but I don’t think I know those people personally. And to add a political dimension (heh), it’s relevant that most non-LW people (like me) initially encounter “politics is the mind-killer” being thrown out in comment threads, not through reading the original article. My opinion of the concept improved a lot once I read the article.

In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.” To which Miri replied: “Yeah, and what’s weird is that that really doesn’t seem to be Eliezer’s intent, judging by the eponymous article.”

Eliezer replied briefly, to clarify that he wasn't generally thinking of problems that can be directly addressed in local groups (but happen to be politically charged) as "politics":

Hanson’s “Tug the Rope Sideways” principle, combined with the fact that large communities are hard to personally influence, explains a lot in practice about what I find suspicious about someone who claims that conventional national politics are the top priority to discuss. Obviously local community matters are exempt from that critique! I think if I’d substituted ‘national politics as seen on TV’ in a lot of the cases where I said ‘politics’ it would have more precisely conveyed what I was trying to say.

But that doesn't resolve the issue. Even if local politics is more instrumentally tractable, the worry about polarization and factionalization can still apply, and may still make it a poor epistemic training ground.

A subtler problem with banning “political” discussions on a blog or at a meet-up is that it’s hard to do fairly, because our snap judgments about what counts as “political” may themselves be affected by partisan divides. In many cases the status quo is thought of as apolitical, even though objections to the status quo are ‘political.’ (Shades of Pretending to be Wise.)

Because politics gets personal fast, it’s hard to talk about it successfully. But if you’re trying to build a community, build friendships, or build a movement, you can’t outlaw everything ‘personal.’

And selectively outlawing personal stuff gets even messier. Last year, daenerys shared anonymized stories from women, including several that discussed past experiences where the writer had been attacked or made to feel unsafe. If those discussions are made off-limits because they relate to gender and are therefore ‘political,’ some folks may take away the message that they aren’t allowed to talk about, e.g., some harmful or alienating norm they see at meet-ups. I haven’t seen enough discussions of this failure mode to feel super confident people know how to avoid it.

Since this is one of the LessWrong memes that’s most likely to pop up in cross-subcultural dialogues (along with the even more ripe-for-misinterpretation “policy debates should not appear one-sided“…), as a first (very small) step, my action proposal is to obsolete the ‘mind-killer’ framing. A better phrase for getting the same work done would be ‘politics is hard mode’:

1. ‘Politics is hard mode’ emphasizes that ‘mind-killing’ (= epistemic difficulty) is quantitative, not qualitative. Some things might instead fall under Middlingly Hard Mode, or under Nightmare Mode…

2. ‘Hard’ invites the question ‘hard for whom?’, more so than ‘mind-killer’ does. We’re used to the fact that some people and some contexts change what’s ‘hard’, so it’s a little less likely we’ll universally generalize.

3. ‘Mindkill’ connotes contamination, sickness, failure, weakness. In contrast, ‘Hard Mode’ doesn’t imply that a thing is low-status or unworthy. As a result, it’s less likely to create the impression (or reality) that LessWrongers or Effective Altruists dismiss out-of-hand the idea of hypothetical-political-intervention-that-isn’t-a-terrible-idea. Maybe some people do want to argue for the thesis that politics is always useless or icky, but if so it should be done in those terms, explicitly — not snuck in as a connotation.

4. ‘Hard Mode’ can’t readily be perceived as a personal attack. If you accuse someone of being ‘mindkilled’, with no context provided, that smacks of insult — you appear to be calling them stupid, irrational, deluded, or the like. If you tell someone they’re playing on ‘Hard Mode,’ that’s very nearly a compliment, which makes your advice that they change behaviors a lot likelier to go over well.

5. ‘Hard Mode’ doesn’t risk bringing to mind (e.g., gendered) stereotypes about communities of political activists being dumb, irrational, or overemotional.

6. ‘Hard Mode’ encourages a growth mindset. Maybe some topics are too hard to ever be discussed. Even so, ranking topics by difficulty encourages an approach where you try to do better, rather than merely withdrawing. It may be wise to eschew politics, but we should not fear it. (Fear is the mind-killer.)

7. Edit: One of the larger engines of conflict is that people are so much worse at noticing their own faults and biases than noticing others'. People will be relatively quick to dismiss others as 'mindkilled,' while frequently flinching away from or just-not-thinking 'maybe I'm a bit mindkilled about this.' Framing the problem as a challenge rather than as a failing might make it easier to be reflective and even-handed.

This is not an attempt to get more people to talk about politics. I think this is a better framing whether or not you trust others (or yourself) to have productive political conversations.

When I playtested this post, Ciphergoth raised the worry that 'hard mode' isn't scary-sounding enough. As dire warnings go, it's light-hearted—exciting, even. To which I say: good. Counter-intuitive fears should usually be argued into people (e.g., via Eliezer's politics sequence), not connotation-ninja'd or chanted at them. The cognitive content is more clearly conveyed by 'hard mode,' and if some group (people who love politics) stands to gain the most from internalizing this message, the message shouldn't cast that very group (people who love politics) in an obviously unflattering light. LW seems fairly memetically stable, so the main issue is what would make this meme infect friends and acquaintances who haven't read the sequences. (Or Dune.)

If you just want a scary personal mantra to remind yourself of the risks, I propose 'politics is SPIDERS'. Though 'politics is the mind-killer' is fine there too.

If you and your co-conversationalists haven’t yet built up a lot of trust and rapport, or if tempers are already flaring, conveying the message ‘I’m too rational to discuss politics’ or ‘You’re too irrational to discuss politics’ can make things worse. In that context, ‘politics is the mind-killer’ is the mind-killer. At least, it’s a needlessly mind-killing way of warning people about epistemic hazards.

‘Hard Mode’ lets you speak as the Humble Aspirant rather than the Aloof Superior. Strive to convey: ‘I’m worried I’m too low-level to participate in this discussion; could you have it somewhere else?’ Or: ‘Could we talk about something closer to Easy Mode, so we can level up together?’ More generally: If you’re worried that what you talk about will impact group epistemology, you should be even more worried about how you talk about it.

How should negative externalities be handled? (Warning: politics)

-5 nigerweiss 08 May 2013 09:40PM

Politics ahead!  Read at your own risk, mind killers, etc.  Let all caveats be well and thoroughly emptored.

It seems reasonably clear to me that, from a computational perspective, functional central planning is not practically possible.  Resource allocation among many agents looks an awful lot like an exponential time problem, and the world market is quite an efficient approximation.  In the real world, markets, regulated to preclude blackmail, theft, and slavery, will tend to provide a better approximation of "correct" resource allocation between free agents than a central resource allocation algorithm could plausibly achieve without a tremendous, invasive amount of information about the desires of every market participant, and quite a lot of computing power (within a few orders of magnitude of the combined computational budget of the human species).  
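As a toy illustration of the combinatorial blow-up the paragraph above is gesturing at (this sketch is the editor's, not the original poster's), consider the simplest version of the planner's problem: assigning m indivisible goods among n agents to maximize total utility. An exhaustive planner must consider n**m assignments, so the search space grows exponentially in the number of goods:

```python
from itertools import product

def best_allocation(utilities, goods):
    """Brute-force search over every way to assign each good to one agent.

    `utilities[agent][good]` is that agent's value for that good.
    With n agents and m goods there are n**m assignments to check,
    so the running time is exponential in the number of goods.
    """
    n_agents = len(utilities)
    best_value, best_assign = float("-inf"), None
    for assign in product(range(n_agents), repeat=goods):
        value = sum(utilities[agent][good] for good, agent in enumerate(assign))
        if value > best_value:
            best_value, best_assign = value, assign
    return best_value, best_assign

# 3 agents, 4 goods: only 3**4 = 81 assignments to check.
# 3 agents, 20 goods: already ~3.5 billion; real economies are hopeless.
utilities = [[4, 1, 2, 3],
             [2, 5, 1, 1],
             [1, 2, 6, 2]]
print(best_allocation(utilities, 4))  # -> (18, (0, 1, 2, 0))
```

This toy problem ignores prices, production, and divisible goods, but it conveys the core point: exhaustive central optimization scales exponentially, while market mechanisms only require each participant to act on local information.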

It would be naive to say that we'd need exactly the computational power of the human species in order to achieve it: we can imagine how we might optimize the resource allocation scheme by quite a lot.  Populations are (at least somewhat) compressible, in that there are a number of groups of individual people who optimize for similar things, allowing you to save on simulating all of them.  Additionally, a decent chunk of human neurological and intellectual activity is not dedicated to economic optimization of any kind, which saves you some computing time there as well.  And, of course, humans are not rational, and the homunculi representing them in the optimized market simulation could be, giving them substantially more bang for their cognitive buck - we can imagine, for instance, that this market simulation would not sink billions of dollars into lotteries each year!  It may also be that the behavior of the market itself, on some level, is lawful, and a sufficiently intelligent agent could find general-case solutions that are less expensive than market simulation.    

Still, though, the amount of information and raw processing power needed to pull off central planning competitive with the market approximation seems to be out of our reach for the time being.  As a result of this, and a few other factors, my own politics tend to lean Libertarian / minarchist, and I'm aware that there is some of this sentiment in circulation on this site, though generally not explicitly.  I'm trying to refine my beliefs surrounding some of the sticky issues in Libertarian philosophy (mostly related to children and extreme policy cases), and I thought I'd ask LW what they thought about one issue in particular.  

I have been wondering whether or not there are any interventions in the economy that can have a positive expected benefit.  I honestly don't know if this is the case: put another way, the question is really asking if there are any characteristic behaviors of markets that are undesirable in some sense, and can be corrected by the application of an external law.  Furthermore, such things cannot be profitable to correct for any participant or plausibly-sized collection of participants in the market, but must be good for the market as a whole, or must be something that requires regulatory power to fix.  

An obvious example of this sort of thing is the tragedy of the commons and negative externalities.  The most pressing case study would be climate change: the science suggests, fairly firmly, that human CO2 emissions are causing long-term shifts in global climate.  How disastrous these shifts will actually be is less well settled, but there is at least a reasonable probability that it will be fairly unpleasant, in the long term.  Personally, I feel that we are likely to run into much bigger problems much sooner than the 50-200 year timescales these disasters seem to be expected on.  However, were this not the case, I find that I'm not quite sure how my ideal government, run by a few thousand much smarter and better informed copies of me, ought to respond to the issue.  I don't know what I think the ideal policy for dealing with these sorts of externalities is, and I thought I'd ask for LessWrong's thoughts on the matter.

In my own mind, I think that as light a touch as possible is probably desirable.  Law is a very blunt instrument, and crude legislation like a carbon tax could easily have its own serious negative implications (driving industry to countries that simply don't care about CO2 emissions, for example).  However, actions like subsidizing and partially deregulating nuclear power plants could help a lot by making coal-fired power plants noncompetitive.  We could also declare a policy of slowly withdrawing any government involvement in overseas oil acquisition, which would drive up the price of petroleum products and make electric cars a more appealing alternative.  However, I don't know if there would be horrifying consequences to any of these actions: this is the underlying problem - I am not as smart as the market, and guessing its moods is not something that I, or any human is going to be very good at.  However, it seems clear that some intervention is necessary in this sort of case.  Rock, hard place, you are here.  

Thoughts?  

The Problem With Rational Wiki

20 [deleted] 26 October 2012 11:31AM

Related to: RationalWiki's take on LW, David Gerard's Comments, Vladimir_M's comments, Public Drafts

I wanted to bring more attention to this argument because I've run into related discussion several times in the comment section and because it demonstrates a failure mode that LessWrong may find itself vulnerable to.

Since it has been cited as a source, especially on the reputation LessWrong may or may not have elsewhere, I think readers should be aware that Rational Wiki has a certain reputation here as well. I'm not talking about the object-level disagreements such as cryonics, existential risk, the many-worlds interpretation and artificial intelligence, because we have some reasonable disagreement on those here as well. Even its cheeky tone, while not helping its stated goals, can be amusing. I'm somewhat less forgiving about their casual approach to epistemology and their vulnerability to cargo cult science, as long as it is peer-reviewed cargo cult science.

While factually it is about as accurate as Wikipedia, it is very selective about the facts that it is interested in. For example, what would you expect a site calling itself "Rational Wiki" to have on its page about charity? Do you expect information on how much good charity actually does? What kinds of charities do not do what they say on the label? How to avoid getting misled? The ethics of charity? The psychology, sociology or economics of charity?

I'm sorry to disappoint you, but the article consists of some haphazardly arranged facts and stats on how much members of some religions give or are supposed to give to charity, a dig against Christianity and a non-sequitur unfavourable comparison of the US to Sweden. Contrast this with what you can find on the topic on sites like LessWrong or 80,000 Hours. Basically, the material presented is what a slightly left-of-centre atheist needs to win an internet debate. As is much of the rest of the site.

Indeed some entries have a clear ideological bias that is quite startling to behold on a "rational wiki" and it has been noted by some.

Now to avoid any misunderstandings there are good articles, a few LWers are contributors to the rational wiki and there is certainly nothing wrong with being a left of centre atheist! Nearly everyone on this site is an atheist, and people who identify as left wing politically form a large majority here. The tribal markers and its political agenda aren't the biggest problem. Sites with all sorts of agendas, even political ones, promoting rationality are a good thing.

Its problem is that it is an ammunition depot to aid in winning debates. Very specific kinds of debates, too. This may sound harsh, but consider: How many people reading the site who aren't already atheists will change their mind on religion? How many people who follow a "crankish" belief won't do so afterwards? While I'm sure it happens, the site obviously isn't optimized for this. How many people will read the wiki and try to find errors and biases in their own thinking to debug it, instead of breaking it further with confirmation bias or using it as a club? How many will apply this knowledge to help them with any real world problems? Truth seeking? As a source or community that could aid in that quest it is less useful and reliable than Wikipedia, which, while a rather good and extensive encyclopaedia (despite snickering to the contrary), has a subtly but importantly different stated goal.

What else remains? What other plausible function does it serve?

Let's talk about politics

-14 WingedViper 19 September 2012 05:25PM

Hello fellow LWs,

As I have read repeatedly on LW (http://lesswrong.com/lw/gw/politics_is_the_mindkiller/) you don't like discussing politics because it produces biased thinking/arguing which I agree is true for the general populace. What I find curious is that you don't seem to even try it here where people would be very likely to keep their identities small (www.paulgraham.com/identity.html). It should be the perfect (or close enough) environment to talk politics because you can have reasonable discussions here.

I do understand that you don't like to bring politics into discussions about rationality, but I don't understand why there shouldn't be dedicated political threads here. (Maybe you could flag them?)

all the best

Viper

 

Is Politics the Mindkiller? An Inconclusive Test

14 OrphanWilde 27 July 2012 05:45PM

Or is the convention against discussing politics here silly?

I propose a test.  I'm going to try to lay down some rules on voting on comments for the test here (not that I can force anybody to abide by them):

1.) Top-level comments should introduce arguments (or ridicule me and/or this test); responses should be responses to those arguments.

2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both.

3.) Try not to downvote particular comments excessively, if they're legitimate lines of argument.  A faulty line of argument provides opportunity for rebuttal, and so for our test has value even then; that is, I want some faulty lines of argument here.  If you disagree, please downvote me, instead of the faulty comments, because this post is what you want less of, not those comments.  This necessarily implies, for balance, that we not excessively upvote comments.  I'd suggest fairly arbitrary limits of 3/-3?

Edit: 4.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.  (My apologies about missing this, folks.)

I'm going to try really hard not to get personally involved, except to lay down a leading comment posing an argument against abortion, a position I don't hold, for the record.  The core of the argument isn't disingenuous; I hold that the argument is true, it just doesn't lead me to oppose abortion, because I do not hold the moral axiom by which the basic argument extends to an argument against abortion.  I'm playing devil's advocate to try to keep myself from getting sucked into the argument while providing an initial point of discussion.

Which leads me to the next point: If you see a hole in an argument, even if it's an argument for a perspective you agree with, poke through it.  The goal is to see whether we can have a constructive political argument here.

The fact that this is a test, and known to be a test, means this isn't a blind study.  Uh, try to act as if you're not being tested?

After it's gone on a little while, if this post hasn't been hopelessly downvoted and ridiculed (and thus the premise and test discarded as undesirable to begin with), we can put up a poll to see whether people found the political debates helpful, not helpful, and so on.

Journal article about politics and mindkilling

30 CronoDAS 07 September 2011 07:46AM

I just found a link to a paper written in 2003 by Geoffrey L. Cohen of Yale University.

"Party over Policy: The Dominating Impact of Group Influence on Political Beliefs"

Abstract:

Four studies demonstrated both the power of group influence in persuasion and people’s blindness to it. Even under conditions of effortful processing, attitudes toward a social policy depended almost exclusively upon the stated position of one’s political party. This effect overwhelmed the impact of both the policy’s objective content and participants’ ideological beliefs (Studies 1–3), and it was driven by a shift in the assumed factual qualities of the policy and in its perceived moral connotations (Study 4). Nevertheless, participants denied having been influenced by their political group, although they believed that other individuals, especially their ideological adversaries, would be so influenced. The underappreciated role of social identity in persuasion is discussed.

That's written in journal-ese, so I'll post a translation from the article I found that contained the link:

My favorite study (pdf) in this space was by Yale’s Geoffrey Cohen. He had a control group of liberals and conservatives look at a generous welfare reform proposal and a harsh welfare reform proposal. As expected, liberals preferred the generous plan and conservatives favored the more stringent option. Then he had another group of liberals and conservatives look at the same plans, but this time, the plans were associated with parties.

Both liberals and conservatives followed their parties, even when their parties disagreed with their preferences. So when Democrats were said to favor the stringent welfare reform, for example, liberals went right along. Three scary sentences from the piece: “When reference group information was available, participants gave no weight to objective policy content, and instead assumed the position of their group as their own. This effect was as strong among people who were knowledgeable about welfare as it was among people who were not. Finally, participants persisted in the belief that they had formed their attitude autonomously even in the two group information conditions where they had not.”

Also, the final study conducted had subjects write editorials either in support of or against a single policy proposal. The differences in how people responded in the "no group information" condition and the "my political party supports / opposes" conditions are also illuminating...