Some Thoughts Are Too Dangerous For Brains to Think

Post author: WrongBot | 13 July 2010 04:44AM
[EDIT - While I still support the general premise argued for in this post, the examples provided were fairly terrible. I won't delete this post because the comments contain some interesting and valuable discussions, but please bear in mind that this is not even close to the most convincing argument for my point.]
A great deal of the theory involved in improving computer and network security involves the definition and creation of "trusted systems", pieces of hardware or software that can be relied upon because the input they receive is entirely under the control of the user. (In some cases, this may instead be the system administrator, manufacturer, programmer, or any other single entity with an interest in the system.) The only way to protect a system from being compromised by untrusted input is to ensure that no possible input can cause harm, which requires either a robust filtering system or strict limits on what kinds of input are accepted: a blacklist or a whitelist, roughly.
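To make the blacklist/whitelist distinction concrete, here is a minimal sketch in Python. The specific patterns and the length limit are illustrative assumptions, not drawn from any real system:

```python
import re

# Blacklist: reject inputs matching known-bad patterns. Brittle, because
# any attack the list doesn't anticipate gets through.
BLACKLIST = [re.compile(p) for p in (r"<script", r"DROP\s+TABLE", "\x00")]

def blacklist_ok(data: str) -> bool:
    return not any(p.search(data) for p in BLACKLIST)

# Whitelist: accept only inputs matching a known-good grammar. Strict,
# because anything unanticipated is rejected by default.
WHITELIST = re.compile(r"^[A-Za-z0-9_\- ]{1,64}$")

def whitelist_ok(data: str) -> bool:
    return bool(WHITELIST.fullmatch(data))

print(blacklist_ok("see <script>alert(1)"))  # False: matches a bad pattern
print(whitelist_ok("hello world"))           # True: fits the allowed grammar
print(whitelist_ok("hello \x00world"))       # False: rejected by default
```

The asymmetry matters for the analogy that follows: a whitelist fails closed, while a blacklist fails open, and the brain does something much weaker than either.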
One of the downsides of having a brain designed by a blind idiot is that said idiot hasn’t done a terribly good job with limiting input or anything resembling “robust filtering”. Hence that whole bias thing. A consequence of this is that your brain is not a trusted system, which itself has consequences that go much, much deeper than a bunch of misapplied heuristics. (And those are bad enough on their own!)
In discussions of the AI-Box Experiment I’ve seen, there has been plenty of outrage, dismay, and incredulity directed towards the underlying claim: that a sufficiently intelligent being can hack a human via a text-only channel. But whether or not this is the case (and it seems likely), the vulnerability is trivial in the face of a machine that is completely integrated with your consciousness and can manipulate it, at will, towards its own ends and without your awareness.
Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it will decide the output, not you. We have been warned, here on Less Wrong, that there is dangerous knowledge; Eliezer has told us that knowing about biases can cause us harm. Nick Bostrom has written a paper describing dozens of ways in which information can hurt us, but he missed (at least) one.
The acquisition of some thoughts, discoveries, and pieces of evidence can lower our expected outcomes, even when they are true. This can be accounted for; we can debias. But some thoughts and discoveries and pieces of evidence can be used by our underhanded, untrustworthy brains to change our utility functions, a fate that is undesirable for the same reason that being forced to take a murder pill is undesirable.
(I am making a distinction here between the parts of your brain that you have access to and can introspect about, which for lack of better terms I call “you” or “your consciousness”, and the vast majority of your brain, to which you have no such access or awareness, which I call “your brain.” This is an emotional manipulation, which you are now explicitly aware of. Does that negate its effect? Can it?)

A few examples (in approximately increasing order of controversy):

Identity Politics: Paul Graham and Kaj Sotala have covered this ground, so I will not rehash their arguments. I will only add that, in the absence of a stronger aspect of your identity, truly identifying as something new is an irreversible operation. It might be overwritten again in time, but your brain will not permit an undo.
Power Corrupts: History is littered with examples of idealists seizing power only to find themselves betraying the values they once held dear. No human who values anything more than power itself should seek it; your brain will betray you. There has not yet been a truly benevolent dictator and it would be delusional at best to believe that you will be the first. You are not a mutant. (EDIT: Michael Vassar has pointed out that there have been benevolent dictators by any reasonable definition of the word.)
Opening the Door to Bigotry: I place a high value on not discriminating against sentient beings on the basis of artifacts of the birth lottery. I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
One specific and relatively common version of this involves people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them. Based on all the evidence I have, I’ve made a conscious decision to avoid seeking out information on sex differences in intelligence and other, similar kinds of research. I might be able to resist my brain’s attempts to change what I value, but I’m not willing to take that risk; not yet, not with the brain I have right now.
If you know of other ways in which a person’s brain might stealthily alter their utility function, please describe them in the comments.

If you proceed anyway...

If the big red button labelled “DO NOT TOUCH!” is still irresistible, if your desire to know demands you endure any danger and accept any consequences, then you should still think really, really hard before continuing. But I’m quite confident that a sizable chunk of the Less Wrong crowd will not be deterred, and so I have a final few pieces of advice.
  • Identify knowledge that may be dangerous. Forewarned is forearmed.
  • Try to cut dangerous knowledge out of your decision network. Don’t let it influence other beliefs or your actions without your conscious awareness. You can’t succeed completely at this, but it might help.
  • Deliberately lower dangerous priors, by acknowledging the possibility that your brain is contaminating your reasoning and then overcompensating, because you know that you’re still too overconfident.
  • Spend a disproportionate amount of time seeking contradictory evidence. If believing something could have a great cost to your values, make a commensurately great effort to be right.
  • Just don’t do it. It’s not worth it. And if I found out, I’d have to figure out where you live, track you down, and kill you.
Just kidding! That would be impossibly ridiculous.

Comments (311)

Comment author: JoshuaZ 13 July 2010 04:54:57AM *  10 points [-]

This advice bothers me a lot. Labeling possibly true knowledge as dangerous knowledge (as the example with statements about average behavior of groups) is deeply worrisome and is the sort of thing that if one isn't careful would be used by people to justify ignoring relevant data about reality. I'm also concerned that this piece conflates actual knowledge (as in empirical data) and things like group identity which seems to be not so much knowledge but rather a value association.

Comment author: WrongBot 13 July 2010 03:03:20PM *  6 points [-]

I am grouping together "everything that goes into your brain," which includes lots and lots of stuff, most of it unconscious. See research on priming, for example.

This argument is explicitly about encouraging people to justify ignoring relevant data about reality. It is, I recognize, an extremely dangerous proposition, of exactly the sort I am warning against!

At risk of making a fully general counterargument, I think it's telling that a number of commenters, yourself included, have all but said that this post is too dangerous.

  • You called it "deeply worrisome."
  • RichardKennaway called it "defeatist scaremongering."
  • Emile thinks it's Dark Side Epistemology. (And see my response.)

These are not just people dismissing this as a bad idea (which would have encouraged me to do the same); these are people worrying about a dangerous idea. I'm more convinced I'm right than I was when I wrote the post.

Comment author: JoshuaZ 13 July 2010 03:14:05PM 2 points [-]

"Deeply worrisome" may have been bad wording on my part. It might be more accurate to say that this is an attitude which is so much more often wrong than right that it is better to acknowledge the low probability of such knowledge existing, but not to actually and deliberately keep knowledge out.

Comment author: Vladimir_Nesov 13 July 2010 03:19:38PM 6 points [-]

Heh. So most of the critics base their disapproval of your post's argument on essentially the same considerations as those discussed in the post.

Comment author: Jonathan_Graehl 13 July 2010 04:43:59PM 5 points [-]

It doesn't make you right. It just makes them as wrong (or lazy) as you.

If you feel afraid that incorporating a belief would change your values, that's fine. It's understandable that you won't then dispassionately weigh the evidence for it; perhaps you'll bring a motivated skepticism to bear on the scary belief. If it's important enough that you care, then the effort is justified.

However, fighting to protect your cherished belief is going to lead to a biased evaluation of evidence, so refusing to engage the scary arguments is just a more extreme and honest version of trying to refute them.

I'd justify both practices situationally: considering the chance you weigh the evidence dispassionately but get the answer quite wrong (even your confidence estimation is off), you can err on the side of caution in protecting your most cherished values. That is, your objective function isn't just to have the best Bayesian-rational track record.

Comment author: mattnewport 13 July 2010 06:21:07PM 3 points [-]

These are not just people dismissing this as a bad idea (which would have encouraged me to do the same); these are people worrying about a dangerous idea. I'm more convinced I'm right than I was when I wrote the post.

Becoming more convinced of your own position when presented with counterarguments is a well known cognitive bias.

Comment author: WrongBot 13 July 2010 06:38:03PM 3 points [-]

Knowing about biases may have hurt you. The counterarguments are not what convinced me; it's that the counterarguments describe my post as bad because it belongs to the class of things that it is warning against.

There are other counterarguments in the comments here that have made me less convinced of my position; this is not a belief of which I am substantially certain.

Comment author: Bongo 14 July 2010 07:56:51AM 3 points [-]

Your post is not dangerous knowledge. It's dangerous advice about dangerous knowledge.

Comment author: Peter_de_Blanc 13 July 2010 05:02:34AM 5 points [-]

There has not yet been a truly benevolent dictator and it would be delusional at best to believe that you will be the first.

This is true approximately to the extent that there has never been a truly benevolent person. Power anti-corrupts.

Comment author: gwern 13 July 2010 06:27:00AM 7 points [-]

I don't understand your second sentence.

Comment author: EStokes 13 July 2010 06:54:06PM *  11 points [-]

I believe that what he's saying is that with power, people show their true colors. Consciously or not, nice people may have been nice because it benefitted them to. The fact that there were too many penalties for not being nice when they didn't have as much power was a "corruption" of their behavior, in a sense. With the power they gained, the penalties didn't matter enough compared to the benefits.

Comment author: Blueberry 13 July 2010 08:27:43PM 0 points [-]

Wow, you're really good at interpreting cryptic sentences!

Comment author: xamdam 13 July 2010 11:24:27PM 0 points [-]

I think "Elementary, dear Watson" was in order ;)

Comment author: Douglas_Knight 14 July 2010 07:31:48AM 3 points [-]

In favor of the "power just allows corrupt behavior" theory, Bueno de Mesquita offers two very nice examples of people who ruled two different states. One is Leopold of Belgium, who simultaneously ruled Belgium and the Congo. The other is Chiang Kai-shek, who sequentially ruled China and Taiwan, allegedly rather differently. (I heard him speak about these examples in this podcast. BdM, Morrow, Silverson, and Smith wrote about Leopold here, gated)

Comment author: Psychohistorian 13 July 2010 05:51:16AM *  3 points [-]

Not to evoke a recursive nightmare, but some utility function alterations appear to be strictly desirable.

As an obvious example, if I were on a diet and I could rewrite my utility function such that the utilities assigned to consuming spinach and cheesecake were swapped, I see no harm in making that edit. One could argue that my second-order utility (and all higher) function should be collapsed into my first-order one, such that this would not really change my meta-utility function, but this issue just highlights the futility of trying to cram my complex, conflicting, and oft-inconsistent desires into a utility function.

This does raise an interesting issue: if I'm a strictly selfish utilitarian, do I not want my utility function to be that which will attain the highest expected utility? Selfishness is not necessary; it just makes the question much simpler.

Comment author: NancyLebovitz 13 July 2010 12:10:53PM 2 points [-]

Anorexia could be viewed as an excessive ability to rewrite utility functions about food.

If you don't have the ability to include context, the biological blind god may serve you better than the memetic blind god.

Comment author: orthonormal 13 July 2010 02:44:05PM 0 points [-]

This does raise an interesting issue: if I'm a strictly selfish utilitarian, do I not want my utility function to be that which will attain the highest expected utility?

This is a particular form of wireheading; fortunately, for evolutionary reasons we're not able to do very much of it without advanced technology.

Comment author: Vladimir_Nesov 13 July 2010 06:38:39PM *  1 point [-]

This does raise an interesting issue: if I'm a strictly selfish utilitarian, do I not want my utility function to be that which will attain the highest expected utility?

This is a particular form of wireheading

I'd say it's rather a form of conceptual confusion: you can't change a concept ("change" is itself a "timeful" concept, meaningful only as a property within structures which are processes in the appropriate sense). But it's plausible that creating agents with slightly different explicit preference will result in a better outcome than, all else equal, if you give those agents your own preference. Of course, you'd probably need to be a superintelligence to correctly make decisions like this, at which point creation of agents with given preference might cease to be a natural concept.

Comment author: red75 13 July 2010 07:48:22PM 0 points [-]

I am afraid that advanced technology is not necessary. Literal wireheading.

Comment author: WrongBot 13 July 2010 03:13:12PM 4 points [-]

I wouldn't claim that any human is actually able to describe their own utility function; they're much too complex and riddled with strange exceptions and pieces of craziness like hyperbolic discounting.

I also think that there's some confusion surrounding the whole idea of utility functions in reality, which I should have been more explicit about. Your utility function is just a description of what you want/value; it is not explicitly about maximizing happiness. For example, I don't want to murder people, even under circumstances where it would make me very happy to do so. For this reason, I would do everything within my power to avoid taking a pill that would change my preferences such that I would then generally want to murder people; this is the murder pill I mentioned.

As for swapping the utilities of spinach and cheesecake, I think the only way that makes sense to do so would be to change how you perceive their respective tastes, which isn't a change to your utility function at all. You still want to eat food that tastes good; changing that would have much broader and less predictable consequences.

This does raise an interesting issue: if I'm a strictly selfish utilitarian, do I not want my utility function to be that which will attain the highest expected utility? Selfishness is not necessary; it just makes the question much simpler.

Only if your current utility function is "maximize expected utility." (It isn't.)

Comment author: cousin_it 13 July 2010 07:36:03AM *  3 points [-]

Your examples of "identity politics" and "power corrupts" don't seem to illustrate "dangerous knowledge". They are more like dangerous decisions. Am I missing the point?

Comment author: Vladimir_Nesov 13 July 2010 08:20:46AM 6 points [-]

Situations creating modes of thought that make your corrupted hardware turn you into a bad person.

Comment author: RichardKennaway 13 July 2010 07:48:49AM 1 point [-]

(I am making a distinction here between the parts of your brain that you have access to and can introspect about, which for lack of better terms I call “you” or “your consciousness”, and the vast majority of your brain, to which you have no such access or awareness, which I call “your brain.” This is an emotional manipulation, which you are now explicitly aware of. Does that negate its effect? Can it?)

You seem to think you know what the effect is. My immediate thought on reading "it will decide the output, not you" was "oh dear, dualism again", not "zomg im the prisoner of this alien machine!!!!", which seemed to be the effect you were going for.

Anyway, all I see here is defeatist scaremongering.

Comment author: PlaidX 13 July 2010 07:54:39AM 6 points [-]

Certain patterns of input may be dangerous, but knowledge isn't a pattern of input, it can be formatted in a myriad of ways, and it's not generally that hard to find a safe one. There's a picture of a french fry that crashes AOL instant messenger, but that doesn't mean it's the french fry that's the problem. It's just the way it's encoded.

Comment author: Emile 13 July 2010 07:55:09AM 9 points [-]

This seems to be bordering on Dark Side epistemology - and doesn't seem very well aligned with the name of this site.

Another argument against digging in some of the red flag issues is that you might acquire unpopular opinions, and if you're bad at hiding those, you might suffer negative social consequences.

Comment author: WrongBot 13 July 2010 03:03:13PM 1 point [-]

Dark Side epistemology is about protecting false beliefs, if I understand the article correctly. I'm talking about protecting your values.

Comment author: Vladimir_Nesov 13 July 2010 03:25:27PM *  5 points [-]

Anti-epistemology (the updated term for the concept) is primarily about developing immunity to rational argument, allowing you to stop the development of your understanding (of factual questions, or of moral questions) and keep incorrect answers (that usually signal belonging to a group) indefinitely. In worse forms, it fosters the development of incorrect understanding as well.

Comment author: Vladimir_Nesov 13 July 2010 08:16:36AM *  7 points [-]

I agree with the overall point: certain thoughts can make you worse off.

Whether it's difficult to judge which information is dangerous, and whether given heuristics for judging that will turn into an anti-epistemic disaster, is about solving the problem, not about the existence of the problem. In fact, a convincing argument for using a flawed knowledge-avoiding heuristic would itself be the kind of knowledge one should avoid being exposed to.

If we have an apparently unsolvable problem, with most hypothetical attempts at solution leading to disaster, we shouldn't therefore declare it illusory, and mentioning it irresponsible.

Edit: See also WrongBot's analysis of why the post gets a negative reaction for the wrong reasons.

Comment author: NancyLebovitz 13 July 2010 12:07:34PM 13 points [-]

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

This is something I haven't observed, but it's seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don't they get the sort of publicity that studies which show differences get?


Speaking of AIs getting out of the box, it's conceivable to me that an AI could talk its way out. It's a lot less plausible that an AI could get it right the first time.


And here's a thought which may or may not be dangerous, but which spooked the hell out of me when I first realized it.

Different groups have different emotional tones, and these are kept pretty stable with social pressure. Part of the social pressure is usually claims that the particular tone is superior to the alternatives (nicer, more honest, more fun, more dignified, etc.). The shocker was when I realized that the emotional tone is almost certainly the result of what a few high-status members of a group prefer or preferred, but the emotional tone is generally defended as though it's morally superior. This is true even in troll groups, who claim that emotional toughness is more valuable than anything which can be gained by not being insulting.

Comment author: Airedale 13 July 2010 03:38:36PM *  9 points [-]

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

This is something I haven't observed, but it's seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don't they get the sort of publicity that studies which show differences get?

I would also be interested in hearing if there are any studies on this subject. For me, much of WrongBot's argument hangs on how accurate these observations are. I'm still not sure I'd agree with the overall point, but more evidence on this point would make me much more inclined to consider it.

Also, WrongBot, it seems possible that the observations you've made could have alternate explanations; e.g., the people that you have witnessed change their behavior based on scientific results may not have been as originally unbiased or reluctant to change their minds on these subjects as you had believed them to be.

In other words, there may be a chicken/egg problem here. Did these people that you observed really become more bigoted/discriminatory after accepting the truth of certain studies, or did (perhaps subconscious) bigotry actually lead them to accept (and even seek out) studies showing results that confirmed this bigotry and gave them "cover" to discriminate?

Comment author: WrongBot 13 July 2010 06:29:06PM 2 points [-]

I didn't look hard enough for more evidence for this post, and I apologize.

I've recently turned up:

  • A study on clapping which indicated that people believe very strongly that they can distinguish between the sounds of clapping produced by men and women, when in reality they do only slightly better than chance. The relevant section starts at the bottom of the 4th page of that PDF. This is weak evidence that beliefs about gender influence a wide array of situations, often unconsciously.

  • This paper on sex-role beliefs and sex-difference knowledge in schoolteachers may be relevant, but it's buried behind a pay-wall.

  • Lots of studies like this one have documented how gender prejudices subconsciously affect behavior.

  • And here's a precise discussion of exactly the effect I was describing. Naturally, it too is behind a pay-wall.

Comment author: Jonathan_Graehl 13 July 2010 04:51:19PM 0 points [-]

Your point about tone being set top-down (by the high-status, or by inertia in the established community) seems to me to explain why there are so many genuinely vicious people among netizens who talk rationally and honestly about differences in populations (essentially anti-PC) - even beyond what you'd expect in that they're rebelling against an explicit "be nice" policy that most people assent to.

Comment author: NancyLebovitz 13 July 2010 05:05:07PM 1 point [-]

I'm not sure about the connection you're making. Is it combining my points that tone is set from the top, and people are apt to overshoot their prejudices beyond their evidence?

Comment author: Jonathan_Graehl 13 July 2010 06:55:03PM 0 points [-]

My old theory about the nastiness of some anti-PC reactionaries was that they came to their view out of some animus.

Your suggestion that communities' tones may be determined by that of a small number of incumbents serves as an alternative, softening explanation.

Comment author: NancyLebovitz 13 July 2010 10:44:27PM 0 points [-]

I think it's complicated. Some of it probably is animus, but it wouldn't surprise me if some of it isn't about the specific topic so much as resentment at having the rules changed with no acknowledgement made that rule changes have costs for those who are obeying them.

Comment author: Morendil 13 July 2010 05:18:58PM 3 points [-]

The shocker was when I realized that the emotional tone is almost certainly the result of what a few high-status members of a group prefer or preferred

Yes, if you have gained temporary influence over others one of the ways you can put that to further use is by trading that influence into an environment that accords with your preferences.

but the emotional tone is generally defended as though it's morally superior

Regardless of how it comes to be established as a social norm, it could be that a particular tone is more suited to a particular purpose, for instance truth-seeking or community-building or fund-raising.

(For instance, academics have a strong norm of writing in an impersonal tone, usually relying on the passive voice to achieve that. This could either be the result of contingent pressure exerted by the people who founded the field, or it could be an antidote to inflamed rhetoric which would detract from the arguments of fact and inference.)

Comment author: Sniffnoy 13 July 2010 09:31:38PM 0 points [-]

Yes, if you have gained temporary influence over others one of the ways you can put that to further use is by trading that influence into an environment that accords with your preferences.

What exactly is spent here? It looks like this is something someone with enough status in the group can do "for free".

Comment author: Morendil 13 July 2010 09:43:11PM 1 point [-]

I don't think it's ever free to use your influence over a group. Do it too often, and you come across as a despot.

As a local example, Eliezer's insistence on the use of ROT13 for spoilerish comments carried through at some status "cost" when a few dissenters objected.

Comment author: rhollerith_dot_com 13 July 2010 05:25:39PM *  10 points [-]

Different groups have different emotional tones . . . (nicer, more honest, more fun, more dignified, etc.).

Downvotes have caused me to put a lot of effort into changing the tone of my communications on Less Wrong so that they are no longer significantly less agreeable (nice) than the group average.

In the early 1990s the newsgroups about computers and other technical subjects were similar to Less Wrong: mostly male, mean IQ above 130, vastly denser in libertarians than the population of any country, the best place online for people already high in rationality to improve their rationality.

Aside from differences in the "shape" of the conversation caused by differences in the "mediating" software used to implement the conversation, the biggest difference between the technical newsgroups of the early 1990s and Less Wrong is that the tone of Less Wrong is much more agreeable.

For example, there was much less evidence IIRC of a desire to spare someone's feelings on the technical newsgroups of the early 1990s, and flames (impassioned harangues of a length almost never seen in comments here and of a level of vitriol very rare here) were very common -- but then again the mediating software probably pulled for deep nesting of replies more than Less Wrong's software does, and most of those flames occurred in very deeply nested flamewars with only 2 or 3 participants.

Comment author: Nick_Tarleton 13 July 2010 08:29:10PM 2 points [-]

Having seen both types of tone, which do you think is more effective in improving rationality and sharing ideas?

Comment author: rhollerith_dot_com 13 July 2010 10:20:09PM *  5 points [-]

The short answer is I do not know.

The slightly longer answer is that it probably does not matter unless the niceness reaches the level at which people become too deferential towards the leaders of the community, a failure mode that I personally do not worry about.

Parenthetically, none of the newsgroups I frequented in the 1990s had a leader unless my memory is epically failing me right now. Erik Naggum came the closest (on comp.lang.lisp), but the maintenance of his not-quite-leader status required him to expend a prodigious amount of time (and words) to continue to prove his expertise and commitment to Lisp and to browbeat other participants. (And my guess is that the constant public browbeating cost him at least one consulting job. It certainly did not make him look attractive.)

The most likely reason for the emotional tone of LW is that the participants the community most admire have altruism, philanthropy or a refined kind of friendliness as one of their primary motivations for participation, and for them to maintain a certain level of niceness is probably effortless or well-rehearsed and instrumentally very useful.

Specifically, Eliezer and Anna have altruism, philanthropy or human friendliness as one of their primary motivations with probability .9. There are almost certainly others here with that as one of the primary motivations, but they are hard for me to read or I just do not have enough information (in the form of either a large body of online writings like Eliezer's or sufficient face time) to form an opinion worth expressing.

More precisely, if they were less nice than they are, it would be difficult for them to fulfill their mission of improving people's rationality and networking to reduce e-risks, but if they were too nice it would have too much of an inhibitory effect on the critical (judgemental) faculties of them and their interlocutors. So they end up being less nice than the average suburban Californian, say, but significantly nicer than the average niceness of most of the online communities frequented by programmers and others whose work relies heavily on the critical faculty, i.e., where succeeding at the work requires being able to perceive very subtle faults in something.

In other words, I have a working hypothesis that there is a tension between the internal emotional state optimal for "interpersonal" goals (like networking and teaching rationality) and the state optimal for making a rational analysis of a situation or argument. This tension certainly exists for me. I have no direct evidence that the same tension exists for the leaders of this community, but again that is my tentative hypothesis.

So, IMHO the important question is not the effects of the current level of niceness but rather the effects of altruistically motivated participants. I should share my thinking on that some day when I have more time.

Comment author: [deleted] 13 July 2010 02:16:20PM 14 points [-]

A thousand times no. Really, this is a bad idea.

Yeah, some people don't value truth at any cost. And there's some sense to that. When you take a little bit of knowledge and it makes you a bad person, or an unhappy person, I can understand the argument that you'd have been better off without that knowledge.

But most of the time, I believe, if you keep thinking and learning, you'll come round right. (I.e.: when a teenager reads Ayn Rand and thinks that gives him license to be an asshole, his problem is not that he reads too much philosophy.)

You seem to be particularly worried about accidentally becoming a bigot. (I don't think most of us are in any danger of accidentally becoming supreme dictators.) I think you are safe. Think of it this way: you don't want to be a bigot. You don't want your future self to be a bigot either. So don't behave like one. No matter what you read. Commit your future self to not being an asshole.

I think fear of brainwashing is generally silly.* You will not become a Mormon from reading the Book of Mormon. You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital. You will not become a racist from reading Steve Sailer. I don't think we are such fragile creatures. Just keep an even keel and behave like a decent person, and you're free to read whatever you like.

*Actual brainwashing -- overriding your own sanity and reason -- is possible, but I think it requires a total environment, like a cult compound or an interrogation room. It's not something that reading a book can do to you.

Comment author: RichardKennaway 13 July 2010 02:49:27PM 12 points [-]

But most of the time, I believe, if you keep thinking and learning, you'll come round right. (I.e.: when a teenager reads Ayn Rand and thinks that gives him license to be an asshole, his problem is not that he reads too much philosophy.)

"A little learning is a dang'rous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again."

-- Pope

Comment author: SilasBarta 13 July 2010 04:43:10PM *  4 points [-]

That sounds like my (provisional) resolution of the conflict between "using all you know" and "don't be a bigot": you should incorporate the likelihood ratio of things that a person can't control, so long as you also observe and incorporate evidence that could outweigh such statistical, aggregate, nonspecific knowledge.

So drink deep (use all evidence), but if you don't, then avoid incorporating "dangerous knowledge" as a second-best alternative. Apply a low Bayes factor for something someone didn't choose, as long as you give them a chance to counteract it with other evidence.
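To make the "low Bayes factor, outweighable by direct evidence" policy concrete, here is a toy sketch in Python. All the numbers are invented for illustration: a weak group-level likelihood ratio of 1.2 toward some hypothesis, against a factor of 4 per piece of direct individual evidence.

```python
def update_odds(prior_odds, bayes_factors):
    """Multiply prior odds by each likelihood ratio in turn."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

prior_odds = 1.0           # 50/50 before any evidence
group_factor = 1.2         # weak statistical, aggregate, nonspecific evidence
individual_factor = 1 / 4  # one direct observation pointing the other way

# Apply the group-membership factor once, then two direct observations.
odds = update_odds(prior_odds, [group_factor, individual_factor, individual_factor])
prob = odds / (1 + odds)
print(round(prob, 3))      # the direct observations swamp the group factor
```

The point of the sketch is just that a likelihood ratio close to 1 is quickly dominated by even a couple of pieces of individual evidence, which is what "give them a chance to counteract it" amounts to in Bayesian terms.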

(Poetry still sucks, though. I'm not yet changing my mind about that.)

Comment author: Emile 13 July 2010 05:27:14PM 10 points [-]

(Poetry still sucks, though. I'm not yet changing my mind about that.)

... must ... resist ... impulse ... to ... downvote ... different ... tastes ...

Comment author: NancyLebovitz 13 July 2010 10:51:42PM 0 points [-]

The other problem with "using all you know" about groups which are subject to bigotry is that "we rule, you drool" is very basic human wiring, and there's apt to be some motivated cognition (in the people developing and giving you the information, even if you aren't engaging in it) on the subject.

Comment author: Emile 13 July 2010 03:05:16PM 2 points [-]

You seem to be particularly worried about accidentally becoming a bigot. (I don't think most of us are in any danger of accidentally becoming supreme dictators.) I think you are safe. Think of it this way: you don't want to be a bigot. You don't want your future self to be a bigot either. So don't behave like one. No matter what you read. Commit your future self to not being an asshole.

He's probably more motivated by not wanting others to become bigots - right, WrongBot?

Comment author: WrongBot 13 July 2010 03:41:42PM 6 points [-]

My motivation in writing this article was to attempt to dissuade others from courses of action that might lead them to become bigots, among other things.

But I am also personally terrified of exactly the sort of thing I describe, because I can't see a way to protect against it. If I had enough strong evidence to assign a probability of .99 to the belief that gay men have an average IQ 10 points lower than straight men (I use this example because I have no reason at all to believe it is true, and so there is less risk that someone will try to convince me of it), I don't think I could prevent that from affecting my behavior in some way. I don't think it's possible. And I disvalue such a result very strongly, so I avoid it.

I bring up dangerous thoughts because I am genuinely scared of them.

Comment author: Jonathan_Graehl 13 July 2010 04:30:34PM 5 points [-]

Why should your behavior be unaffected? If you want to spend time evaluating a person on their own merits, surely you still can.

Comment author: WrongBot 13 July 2010 06:30:54PM 1 point [-]

Just because I'll be able to do something doesn't mean that I will. I can resolve to spend time evaluating people based on their own merits all I like, but that's no guarantee at all that the resolution will last.

Comment author: [deleted] 13 July 2010 06:46:30PM 6 points [-]

You seem to think that anti-bigots evaluate people on their merits more than bigots do. Why?

If you're looking for a group of people who are more likely to evaluate people on their merits, you might try looking for a group of people who are committed to believing true things.

Comment author: [deleted] 13 July 2010 05:06:20PM 10 points [-]

The fact that you have a core value, important enough to you that you'd deliberately keep yourself ignorant to preserve that value, is evidence that the value is important enough to you that it can withstand the addition of information. Your fear is a good sign that you have nothing to fear.

For real. I have been in those shoes. Regarding this subject, and others. You shouldn't be worried.

Statistical facts like the ones you cited are not prescriptive. You don't have to treat anyone badly because of IQ. IQ does not equal worth. You don't use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.

Comment author: JenniferRM 14 July 2010 01:12:39AM *  10 points [-]

In the past I have largely agreed with the sentiment that truth and information are mostly good, and when they create problems the solution is even more truth.

But on the basis of an interest in knowing more, I sometimes try to seek evidence that supports things I think are false or that I don't want to be true. Also, I try to notice when something I agree with is asserted without good evidential support. And I don't think you supported your conclusions there with real evidence.

You don't have to treat anyone badly because of IQ. IQ does not equal worth. You don't use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.

This reads more to me like prescriptive signaling than like evidence. While it is very likely to be the case that "IQ test results" are not the same as "human worth", it doesn't follow that an arbitrary person would not change their behavior towards someone who is "measurably not very smart" in any way that dumb person might not like. And for some specific people (like WrongBot by the admission of his or her own fears) the fear may very well be justified.

When I read Cialdini's book Influence, I was struck by the number of times his chapters took the form: (1) describe mental shenanigan, (2) offer evidence that people are easily and generally tricked in this way, (3) explain how it functions as a bias when manipulated and a useful heuristic in non-evil environments, (4) offer laboratory evidence that basic warnings to people about the trick offer little protective benefit, (5) exhort the reader to "be careful anyway" with some ad hoc and untested advice.

Advice should be supported with evidence... and sometimes I think a rationalist should know when to shut up and/or bail out of a bad epistemic situation.

Evidence from implicit association tests indicates that people can be biased against other people without even being aware of it. When scientists tried to measure the degree of "cognitive work" it takes to parse racist situations, they found that observing overt racism against black people was mentally taxing to white people, while observing subtle racism against black people was mentally taxing to black people. The whites were oblivious to subtle racism and didn't even try to process it because it happened below their perceptual awareness; overt racism made them stop and momentarily ponder if maybe (shock!) we don't live in a colorblind world yet. The blacks knew racism was common (but not universal) and factored it into their model of the situation without much trouble when racism was overt; the tricky part was subtle racism, where they had to think through the details to understand what was going on.

(I feel safe saying that white people are frequently oblivious to racism, and are sometimes active but unaware perpetrators of subtle forms of racism because I myself am white. When talking about group shortcomings, I find it best to stick to the shortcomings of my own group.)

Based on information like this, I can easily imagine that I might learn a true (relatively general) fact, use it to leap to an unjustifiable conclusion with respect to an individual, have that individual be harmed by my action, and never notice unless "called on it".

But when called on it, it's quite plausible that I'd leap to defend myself and engage in a bunch of motivated cognition to deny that I could possibly ever be biased... and I'd dig myself even deeper into a hole, updating the wrong way when presented with "more evidence". So it would seem that more information would just leave me more wrong than I started with, unless something unusual happened.

(Then, to compound my bad luck I might cache defensive views of myself after generating them in the heat of an argument.)

So it seems reasonable to me that if we don't have the time to drink largely then maybe we should avoid shallow draughts. And even in that case we should be cautious about any subject that impinges on mind killer territory because more evidence really does seem to make you more biased in such areas.

I upvoted the article (from -2 to -1) because the problems I have with it are minor issues of tone, rather than major issues with the content. The general content seems to be a very fundamental rationalist "public safety message", with more familiarity assumed than is justified (like assuming everyone automatically agrees with Paul Graham and putting in a joke about violence at the end).

I don't, unfortunately, know of any experimentally validated method for predicting whether a specific person at a specific time is going to be harmed or helped by a specific piece of "true information" and this is part of what makes it hard to talk with people in a casual manner about important issues and feel justifiably responsible about it. In some sense, I see this community as existing, in part, to try to invent such methods and perhaps even to experimentally validate them. Hence the up vote to encourage the conversation :-)

Comment author: [deleted] 14 July 2010 04:02:39AM 4 points [-]

Those are good points.

What I was trying to encourage was a practice of trusting your own strength. I think that morally conscientious people (as I suspect WrongBot is) err too much on the side of thinking they're cognitively fragile, worrying that they'll become something they despise. "The best lack all conviction, while the worst are full of passionate intensity."

Believing in yourself can be a self-fulfilling prophecy; believing in your own ability to resist becoming a racist might also be self-fulfilling. There's plenty of evidence for cognitive biases, but if we're too willing to paint humans as enslaved by them, we might actually decrease rationality on average! That's why I engaged in "prescriptive signaling." It's a pep talk. Sometimes it's better to try to do something than to contemplate excessively whether it's possible.

Comment author: twanvl 13 July 2010 05:43:51PM 3 points [-]

Group statistics give only a prior, and just a few observations of any individual will overwhelm it. And if you start discriminating against gays because they have low average intelligence, then you should discriminate even more against low intelligence itself. It is not the gayness that is the important factor in that case; it just has a weak correlation.
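The "a few observations overwhelm the prior" claim can be illustrated with a toy conjugate-Gaussian update in Python. All the numbers here are invented for illustration: a group-level prior of mean 100 with wide individual uncertainty (sd 15), and three noisy individual observations (sd 10 each).

```python
def gaussian_update(prior_mean, prior_var, observations, obs_var):
    """Sequentially fold noisy observations into a Gaussian prior."""
    mean, var = prior_mean, prior_var
    for x in observations:
        k = var / (var + obs_var)            # weight given to the new datum
        mean = mean + k * (x - mean)
        var = var * obs_var / (var + obs_var)
    return mean, var

# Prior from group statistics: mean 100, variance 15**2 = 225.
# Three observations of one individual, each with variance 10**2 = 100.
posterior_mean, posterior_var = gaussian_update(100.0, 225.0, [130, 125, 128], 100.0)
print(round(posterior_mean, 1))   # far closer to the observations than to 100
```

After only three observations the posterior mean sits near 124, much closer to the individual's apparent level (~128) than to the group prior of 100, and the remaining variance has shrunk from 225 to about 29. That is the sense in which the group statistic is "just a prior".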

Comment author: satt 13 July 2010 08:17:00PM 3 points [-]

You will not become a Mormon from reading the Book of Mormon. You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital. You will not become a racist from reading Steve Sailer.

I will not, but...

Comment author: simplicio 13 July 2010 11:51:50PM 9 points [-]

You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital.

I became a Trotskyite (once upon a time) partly based on reading Trotsky's history of the Russian Revolution. Yes, I was primed for it, but... words aren't mere.

Comment author: Emile 14 July 2010 10:06:32AM 1 point [-]

Interesting - would you recommend others read it?

I'm interested in reading anything that can change my mind, but avoid some partisan stuff when it looks like it's "preaching to the choir" and that it assumes that the reader already agrees with the conclusions.

Comment author: simplicio 14 July 2010 03:49:41PM 3 points [-]

Interesting - would you recommend others read it?

Yes, if you're not young, impressionable and overidealistic. Trotsky was an incredible writer, and reading that book you do really see things from the perspective of an insider.

One of the reactionary and therefore fashionable historians in contemporary France, L. Madelin, slandering in his drawing-room fashion the great revolution – that is, the birth of his own nation – asserts that “the historian ought to stand upon the wall of a threatened city, and behold at the same time the besiegers and the besieged”: only in this way, it seems, can he achieve a “conciliatory justice.” However, the words of Madelin himself testify that if he climbs out on the wall dividing the two camps, it is only in the character of a reconnoiterer for the reaction. It is well that he is concerned only with war camps of the past: in a time of revolution standing on the wall involves great danger. Moreover, in times of alarm the priests of “conciliatory justice” are usually found sitting on the inside of four walls waiting to see which side will win.

The serious and critical reader will not want a treacherous impartiality, which offers him a cup of conciliation with a well-settled poison of reactionary hate at the bottom, but a scientific conscientiousness, which for its sympathies and antipathies – open and undisguised – seeks support in an honest study of the facts, a determination of their real connections, an exposure of the causal laws of their movement. That is the only possible historic objectivism, and moreover it is amply sufficient, for it is verified and attested not by the good intentions of the historian, for which only he himself can vouch, but the natural laws revealed by him of the historic process itself.

Comment author: [deleted] 13 July 2010 02:38:44PM 6 points [-]

One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman.

I don't think that this requires a utility-function-changing superbias. Alternatively: We think sloppily about groups, flattening fine distinctions into blanket generalizations. This bias takes the fact "women have a lower standard deviation on measures of IQ than men" as input and spits out the false fact "chicks can't be as smart as guys". If a person updates on this nonfact, and he tends to value less-intelligent individuals less and treat them differently, his valuation of all women will shift downward, fully in accordance with his existing utility function.

Placing "a high value on not discriminating against sentient beings on the basis of artifacts of the birth lottery" is not a common position. Most people discriminate freely on an individual basis. They also aren't aware of cognitive biases or how to combat them. Perhaps it's safer not to learn about between-group differences under those circumstances.

Strange advice for Less Wrong, though.

Comment author: RobinZ 13 July 2010 03:23:19PM 6 points [-]

One argument you could give a Less Wrong audience is that the information about intelligence you could learn by learning someone's gender is almost completely screened off by the information content gained by examining the person directly (e.g. through conversation, or through reading research papers).

Comment author: lmnop 13 July 2010 08:58:08PM *  6 points [-]

That is exactly what should happen, but I suspect that in real life it doesn't, largely because of anchoring and adjustment.

Suppose I know the average intelligence of a member of Group A is 115, and the average intelligence of a member of Group B is 85. After meeting and having a long, involved conversation with a specific member of either group, I should probably toss out my knowledge of the average intelligence of their group and evaluate them based on the (much more pertinent) information I have gained from the conversation. But if I behave like most people do, I won't do that. Instead, I'll adjust my estimate from the original estimate supplied by the group average. Thus, my estimate of the intelligence of a particular individual from Group A will still be very different than my estimate of the intelligence of a particular individual from Group B with the same actual intelligence even after I have had a conversation (or two, or three) with both of them. How many conversations does it take for my estimates to converge? Do my estimates ever converge?

Comment author: mattnewport 13 July 2010 09:11:15PM 4 points [-]

After meeting and having a long, involved conversation with a specific member of either group, I should probably toss out my knowledge of the average intelligence of their group and evaluate them based on the (much more pertinent) information I have gained from the conversation.

If your goal is to accurately judge intelligence this may not be a good approach. Universities moved away from basing admissions decisions primarily on interviews and towards emphasizing test scores and grades because 'long, involved conversation' tends to result in more unconscious bias than simpler, more objective measures when it comes to judging intelligence (at least as it correlates with academic achievement).

Unless you have strong reason to believe that all the unconscious biases that come into play in face to face conversation are likely to be just about right to balance out any biases based on preconceptions of particular groups you are just replacing one source of bias (preconceived stereotypes based on group membership) with another (responses to biasing factors in face to face conversation such as physical attractiveness, accent, shared interests, body language, etc.)

Comment author: Matt_Simpson 13 July 2010 03:47:45PM *  5 points [-]

I agree with the main point of this post, but I think it could have used a more thorough, worked out example. Identity politics is probably the best example of your point, but you barely go into it. Don't worry about redundancy too much; not everyone has read the original posts.

FWIW, my personal experience with politics is an anecdote in your favor.

Comment author: red75 13 July 2010 04:27:24PM 11 points [-]

One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman.

Your evidence is not quite about beliefs. I think the correct version is:

People who don't mind sharing that they believe that women have a lower... etc.

Comment author: Douglas_Knight 13 July 2010 05:30:44PM 7 points [-]

Another version is that bigots can't shut up about it.

Comment author: knb 13 July 2010 06:08:51PM *  9 points [-]

I really disagree with your argument, Wrongbot. First of all, I think responding appropriately to "dangerous" information is an important task, and one which most LW folks can achieve.

In addition, I wonder if your personal observations about people who become bigots by reading "dangerous content" are actually accurate. People who are already bigots (or are predisposed to bigotry) are probably more likely to seek out data that "confirms" their assumptions. So your anecdotal observation may be produced by a selection effect.

At bare minimum, you should give us some information about the sample your observations are based on. For example you say:

One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.

This could mean you've met a couple people like this, and never met anyone else who has encountered this data. In any case, you really don't have enough data to draw the extreme conclusion that you should ignore data.

In any case, the most fundamental problem with your point is that any attempt to preemptively prevent yourself from acquiring dangerous information is predicated on you already knowing the "dangerous" part. You can spend the rest of your life avoiding data about IQ/SAT scores, but you already know that women's scores vary somewhat less than men's scores. (Anyway, I fail to see how expecting somewhat less variance in women would affect behavior in real life.)

Comment author: jimrandomh 13 July 2010 06:29:20PM 13 points [-]

With bigotry, I think the real problem is confirmation bias. If I believe, for example, that orange-eyed people have an average IQ of only 99, and that's true, then when I talk to orange-eyed people, that belief will prime me to notice more of their faults. This would cause me to systematically underestimate the intelligence of orange-eyed people I met, probably by much more than 1 IQ point. This is especially likely because I get to observe eye color from a distance, before I have any real evidence to go on.

In fact, for the priming effect, in most people the magnitude of the real statistical correlation doesn't matter at all. Hence the resistance to acknowledging even tiny, well-proven differences between races and genders: they produce differences in perception that are not necessarily on the same order of magnitude as the differences in reality.

Comment author: lmnop 13 July 2010 08:34:23PM *  11 points [-]

This is exactly the crux of the argument. When people say that everyone should be taught that people are the same regardless of gender or race, what they really mean isn't that there aren't average differences between women and men, etc. Rather, it's that teaching people about those small differences will cause enough of them to significantly overshoot via confirmation bias that it will lead to more misjudgments of individuals overall than if people weren't taught about the differences at all; hence, people shouldn't be taught about them. I am hesitantly sympathetic to this view; it is borne out in many of the everyday interactions I observe, including those involving highly intelligent aspiring rationalists.

This doesn't mean we should stop researching gender or race differences, but that we should simultaneously research the effects of people learning about this research: how big are the differences in the perception vs. the reality of those differences? Are they big enough that anyone being taught about gender and race differences should also be taught about the risk of systematically misjudging many individuals because of this knowledge, and warned to remain vigilant against confirmation bias? When individuals are told to remain vigilant, do they still overshoot to such an extent that they become less accurate in judging people than they were before they obtained this knowledge? I would have a much better idea how to proceed, both as a society and as an individual seeking to maximize my accuracy in judging people, after finding out the answers to these questions.

Comment author: Emile 14 July 2010 11:29:32AM 2 points [-]

Those are real and important effects (that should probably have been included in the original post).

A problem with avoiding knowledge that could lead you to discriminate is that it makes it hard to judge some situations - did James Watson, Larry Summers and Stephanie Grace deserve a public shaming?

Comment author: MichaelVassar 15 July 2010 05:05:21PM 4 points [-]

Stephanie Grace, definitely not, she was sharing thoughts privately.

Summers? Not for sexism, he seemed honest and sincere in a desire to clarify issues and reach truth, but he displayed stupidity and gullibility which should be cause for shame in his position at Harvard, and to some degree as a broad social scientist and policy adviser, though not as an economic theorist narrowly construed.

Watson, probably. He said something overtly and exaggeratedly negative, said it publicly and needlessly, and has a specific public prestige which makes his words more influential. It's unfortunate that he didn't focus on some other issue, and public shame of this sort might reduce such unfortunate occurrences in the future.

Comment author: Unknowns 13 July 2010 06:31:08PM 2 points [-]

This is completely wrong. You might as well tell a baby to avoid learning language, since this will change its utility function: it will begin to have an adult's utility function instead of a baby's.

Comment author: retiredurologist 13 July 2010 07:32:47PM *  5 points [-]

WrongBot: Brendan Nyhan, the Robert Wood Johnson scholar in health policy research at the University of Michigan, spoke today on Public Radio's "Talk of the Nation" about a bias that may be reassuring to you. He calls it the "backfire effect". He says new research suggests that misinformed people rarely change their minds when presented with the facts -- and often become even more attached to their beliefs. The Boston Globe reviews the findings here as they pertain to politics. If this is correct, it seems quite likely that if you have strong anti-bigot beliefs, and you are exposed to "dangerous factual thoughts" that might conceivably sway you toward bigotry, the backfire effect should make you defend your original views even more vigorously, thus acting as a protective bias. OTOH, while listening, I wondered, "Is Nyhan saying that the only factual positions one can assume are those about which one had no previous opinion or knowledge?" Best wishes to overcome your phobia.

Comment author: Blueberry 13 July 2010 08:42:06PM *  19 points [-]

If knowing the truth makes me a bigot, then I want to be a bigot. If my values are based on not knowing certain facts, or getting certain facts incorrect, then I want my values to change.

It may help to taboo "bigot" for a minute. You seem to be lumping a number of things under a label and calling them bad.

There's the question of how we treat people who are less intelligent (regardless of group membership). I'm fine with discriminating in some ways based on intelligence of the individual, and if it does turn out that Group X is statistically less intelligent, then maybe Group X should be underrepresented in important positions. This has consequences for policy decisions. Of course, there may be a way of increasing the intelligence of Group X:

Based on all the evidence I have, I’ve made a conscious decision to avoid seeking out information on sex differences in intelligence and other, similar kinds of research.

How are you going to help a disadvantaged group if you're blinding yourself to the details of how they're disadvantaged?

Comment author: WrongBot 13 July 2010 08:57:02PM 1 point [-]

I'm fine with discriminating in some ways based on intelligence of the individual, and if it does turn out that Group X is statistically less intelligent, then maybe Group X should be underrepresented in important positions. This has consequences for policy decisions.

Agreed. But I should not make decisions about individual members of Group X based on the statistical trend associated with Group X, and I doubt my (or anyone's) ability to actually not do so in cases where I have integrated the belief that the statistical trend is true.

How are you going to help a disadvantaged group if you're blinding yourself to the details of how they're disadvantaged?

The short answer is that I'm not going to. I'm not doing research on human intelligence, and I doubt I ever will. The best I can hope to do is not further disadvantage individual members of Group X by discriminating against them on the basis of statistical trends that they may not embody.

People who are doing research that relates to human intelligence in some way should probably not follow this exact line of reasoning.

Comment author: Vladimir_M 14 July 2010 05:30:17AM *  12 points [-]

WrongBot:

But I should not make decisions about individual members of Group X based on the statistical trend associated with Group X [...]

Really? I don't think it's possible to function in any realistic human society without constantly making decisions about individuals based on the statistical trends associated with various groups to which they happen to belong (a.k.a. "statistical discrimination"). Acquiring perfectly detailed information about every individual you ever interact with is simply not possible given the basic constraints faced by humans.

Of course, certain forms of statistical discrimination are viewed as an immensely important moral issue nowadays, while others are seen simply as normal common sense. It's a fascinating question how and why exactly various forms of it happen (or fail) to acquire a deep moral dimension. But in any case, a blanket condemnation of all forms of statistical discrimination is an attitude incompatible with any realistic human way of life.

Comment author: WrongBot 14 July 2010 06:29:35AM 0 points [-]

The "deep moral dimension" generally applies to group memberships that aren't (perceived to be) chosen: sex, gender, race, class, sexual orientation, religion to a lesser extent.

These are the kinds of "Group X" to which I was referring. Discriminating against someone because they majored in Drama in college or believe in homeopathy is not even remotely equivalent to racism, sexism, and the like.

Comment author: Matt_Simpson 14 July 2010 06:42:15AM 6 points [-]

The "deep moral dimension" generally applies to group memberships that aren't (perceived to be) chosen: sex, gender, race, class, sexual orientation, religion to a lesser extent.

But you still discriminate based on sex, gender, race, class, sexual orientation and religion every day. You don't try to talk about sports with every girl you meet; you safely assume that they probably aren't interested until you receive evidence to the contrary. But if you meet a guy, then talking about sports moves higher on the list of conversation topics just because he's a guy.

Comment author: WrongBot 14 July 2010 04:49:42PM 2 points [-]

Well, I actually try to avoid talking about sports entirely, because I find the topic totally uninteresting.

But! That is mere nitpicking, and the thrust of your argument is correct. I can only say that like all human beings I regularly fail to adhere to my own moral standards, and that this does not make those standards worthless.

Comment author: Matt_Simpson 14 July 2010 04:55:15PM 4 points [-]

Well, I actually try to avoid talking about sports entirely, because I find the topic totally uninteresting.

For some reason I expected that answer. ;)

I can only say that like all human beings I regularly fail to adhere to my own moral standards, and that this does not make those standards worthless.

I find it odd that you still hold on to "not statistically discriminating" as a value. What about it do you think is immoral? (I'm not trying to be condescending here; I'm genuinely curious.)

Comment author: WrongBot 14 July 2010 06:44:03PM 2 points [-]

I value not statistically discriminating (on the basis of unchosen characteristics or group memberships) because it is an incredibly unpleasant phenomenon to experience. As a white American man I suffer proportionally much less from the phenomenon than do most people, and even the small piece of it that I pick up from being bisexual sucks.

It's not a terminal value, necessarily, but in practice it tends to act like one.

Comment author: HughRistik 14 July 2010 05:42:26PM *  2 points [-]

I can only say that like all human beings I regularly fail to adhere to my own moral standards, and that this does not make those standards worthless.

If following your moral standards is impractical, maybe those standards aren't quite right in the first place.

It is a common mistake for idealists to choose their morality without reference to practical realities. A better search plan would be to find all the practical options, and then pick whichever of those is the most moral.

If you spare women you meet from discussion of sports (or insert whatever interest you have that exhibits average sex differences) until she expresses interest in the subject, you have not failed any reasonable moral standards.

Comment author: SilasBarta 14 July 2010 05:47:47PM 2 points [-]

If you spare women you meet from discussion of sports (or insert whatever interest you have that exhibits average sex differences) until she expresses interest in the subject, you have not failed any reasonable moral standards.

Well, until you factor in the unfortunate tendency of women to be attracted to men who are indifferent to their interests :-P

Comment author: WrongBot 14 July 2010 06:48:22PM 1 point [-]

It is a common mistake for idealists to choose their morality without reference to practical realities. A better search plan would be to find all the practical options, and then pick whichever of those is the most moral.

Most moral by what standard? You're just passing the buck here.

Comment author: HughRistik 14 July 2010 06:51:04PM 0 points [-]

Moral according to your standards. I'm just suggesting a different order of operation: understanding the practicalities first, and then trying to find which of the practical options you judge most moral.

Comment author: WrongBot 14 July 2010 07:01:26PM 1 point [-]

But those standards are moral standards. If you're suggesting that one should just choose the most moral practical option, how is that any different from consequentialism?

Your first comment sounded like you were suggesting that people should choose the most moral practical standard.

Comment author: mattnewport 14 July 2010 07:21:35AM 8 points [-]

The well-documented discrimination against short men and ugly people, and the (more debatable) discrimination against the socially inept and against those whose behaviour and learning style does not conform to the compliant workers that schools are largely structured to produce, are examples of discrimination that appear to receive less attention and concern.

Comment author: NancyLebovitz 14 July 2010 12:04:51PM 2 points [-]

Opposition to discrimination doesn't just happen. It has to be organized and promoted for an extended period before there's an effect.

Afaik, that promotion typically has to include convincing people in the discriminated group that things can be different and that opposing discrimination is worth the risks and effort. In some cases, it also includes convincing them that they don't deserve to be mistreated.

Comment author: Vladimir_M 14 July 2010 05:42:55PM *  6 points [-]

WrongBot:

The "deep moral dimension" generally applies to group memberships that aren't (perceived to be) chosen: sex, gender, race, class, sexual orientation, religion to a lesser extent.

This is not an accurate description of the present situation. To take the most blatant example, every country discriminates between its own citizens and foreigners, and also between foreigners from different countries (some can visit freely, while others need hard-to-get visas). This state of affairs is considered completely normal and uncontroversial, even though it involves a tremendous amount of discrimination based on group memberships that are a mere accident of birth.

Thus, there are clearly some additional factors involved in the moralization of other forms of discrimination, and the fascinating question is what exactly they are. The question is especially puzzling considering that religion is, in most cases, much easier to change than nationality, and yet the former makes your above list, while the latter doesn't -- so the story about choice vs. accident of birth definitely doesn't hold water.

I'm also puzzled by your mention of class. Discrimination by class is definitely not a morally sensitive issue nowadays the way sex or race is. On the contrary, success in life is nowadays measured mostly by one's ability to distance and insulate oneself from the lower classes by being able to afford living in low-class-free neighborhoods and joining higher social circles. Even when it comes to you personally, I can't imagine that you would have exactly the same reaction when approached by a homeless panhandler and by someone decent-looking.

Comment author: Douglas_Knight 14 July 2010 06:19:10PM 2 points [-]

Discrimination by class is definitely not a morally sensitive issue nowadays the way sex or race is. On the contrary, success in life is nowadays measured mostly by one's ability to distance and insulate oneself from the lower classes

Without disagreeing much with your comment, I have to point out that this is a non sequitur. Moral sensitivity has nothing to do with (ordinary) actions. Among countries where the second sentence is true, there are both ones where the first is true and ones where the first is false. I don't know so much about countries where the second sentence is false.

As to religion, in places where people care about it enough to discriminate, changing it will probably alienate one's family, so it is very costly to change, although technically possible. Also, in many places, religion is a codeword for ethnic groups, so it can't be changed (eg, Catholics in US 1850-1950).

Comment author: Vladimir_M 15 July 2010 06:00:05AM *  3 points [-]

You're right that my comment was imprecise, in that I didn't specify to which societies it applies. I had in mind the modern Western societies, and especially the English-speaking countries. In other places, things can indeed be very different with regards to all the mentioned issues.

However, regarding your comment:

Moral sensitivity has nothing to do with (ordinary) actions.

That's not really true. People are indeed apt to enthusiastically extol moral principles in the abstract while at the same time violating them whenever compliance would be too costly. However, even when such violations are rampant, these acts are still different from those that don't involve any such hypocritical violations, or those that violate only weaker and less significant principles.

And in practice, when we observe people's acts and attitudes that involve their feeling of superiority over lower classes and their desire to distance themselves from them, it looks quite different from analogous behaviors with respect to e.g. race or sex. The latter sorts of statements and acts normally involve far more caution, evasion, obfuscation, and rationalization. To take a concrete example, few people would see any problem with recommending a house by saying that it's located in "a nice middle-class neighborhood" -- but imagine the shocked reactions if someone praised it by talking about the ethnic/racial composition of the neighborhood loudly and explicitly, even if the former description might in practice serve as (among other things) a codeword for the latter.

Comment author: WrongBot 13 July 2010 09:09:49PM 3 points [-]

This post is seeing some pretty heavy downvoting, but the opinions I'm seeing in the comments so far seem to be more mixed; I suppose this isn't unusual.

I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was it structured badly? Are you incredibly annoyed by the formatting errors I can't figure out how to fix?

Those are broadly the sorts of answers I'm looking for. I am specifically not looking for justifications for downvotes; really, all I want is your help in becoming stronger. With luck, I will be able to waste less of your time in the future.

Thanks.

Comment author: mattnewport 13 July 2010 09:40:43PM 4 points [-]

Was the argument being made just obviously wrong?

This, primarily. At least obviously wrong by my value system, where believing true things is a core value. To the extent that this is also the value system of Less Wrong as a whole, the post seems contrary to the core values of the site without acknowledging the conflict explicitly enough.

I didn't think the examples were very good either. I think the argument is wrong even for value systems that place a lower value on truth than mine and the examples aren't enough to persuade me otherwise.

I also found the (presumably) joke about hunting down and killing anyone who disagrees with you jarring and in rather poor taste. I'm generally in favour of tasteless and offensive jokes but this one just didn't work for me.

Comment author: Vladimir_Nesov 13 July 2010 09:43:10PM *  4 points [-]

At least obviously wrong by my value system where believing true things is a core value.

Beware identity. It seems that a hero shouldn't kill, ever, but sometimes it's the right thing to do. Unless it's your sole value, there will be situations where it should give way.

Comment author: mattnewport 13 July 2010 09:58:52PM 0 points [-]

Unless it's your sole value, there will be situations where it should give way.

This seems like it should generally be true but in practice I haven't encountered any plausible examples where I prefer ignorance. This includes a number of hypotheticals where many people claim they would prefer ignorance which leads me to believe the value I place on truth is outside the norm.

Truth / knowledge is a little paradoxical in this sense as well. I believe that killing is generally wrong but there is no paradox in killing in certain situations because it appears to be the right choice. The feedback effect of truth on your decision making / value defining apparatus makes it unlike other core values that might sometimes be abandoned.

Comment author: Vladimir_Nesov 13 July 2010 10:01:07PM 0 points [-]

This seems like it should generally be true but in practice I haven't encountered any plausible examples where I prefer ignorance. This includes a number of hypotheticals where many people claim they would prefer ignorance which leads me to believe the value I place on truth is outside the norm.

I agree with this, my objection is to the particular argument you used, not necessarily the implied conclusion.

Comment author: Tyrrell_McAllister 13 July 2010 10:59:45PM *  4 points [-]

This, primarily. At least obviously wrong by my value system where believing true things is a core value.

I really don't think that the OP can be called "obviously wrong". For example, your brain is imperfect, so it may be that believing some true things makes it less likely that you will believe other more important true things. Then, even if your core value is to believe true things, you are going to want to be careful about letting the dangerous beliefs into your head.

And the circularity that WrongBot and Vladimir Nesov have pointed out rears its head here, too. Suppose that the possibility that I pose above is true. Then, if you knew this, it might undermine the extent to which you hold believing true things to be a core value. That is precisely the kind of unwanted utility-function change that WrongBot is warning us about.

It's probably too pessimistic to say that you could never believe the dangerous true things. But it seems reasonably possible that some true beliefs are too dangerous unless you are very careful about the way in which you come to believe them. It may be unwise to just charge in and absorb true facts willy-nilly.

Here's another way to come at WrongBot's argument. It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know. It's not obvious, but it is at least plausible, that the "harm" could be that the other person's utility function would change in a way that we don't want. At least, this is certainly not obviously wrong. The final step in the argument is then to acknowledge that the "other person" might be the part of yourself over which you do not have perfect control — which is, after all, most of you.

Comment author: mattnewport 14 July 2010 12:02:00AM *  1 point [-]

It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know.

I believe some other people's reports that there are things they would prefer not to know and would be inclined to honor their preference if I knew such a secret but I can't think of any examples of such secrets for myself. In almost all cases I can think of I would want to be informed of any true information that was being withheld from me. The only possible exceptions are 'pleasant surprises' that are being kept secret on a strictly time-limited basis to enhance enjoyment (surprise gifts, parties, etc.) but I think these are not really what we're talking about.

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld. Sometimes keeping such secrets might be the 'right thing' morally (the Nazi at the door looking for fugitives) but this is not because you are acting in the interests of the party from whom you are keeping information.

Comment author: Tyrrell_McAllister 14 July 2010 12:35:31AM 2 points [-]

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld.

But this is the way to think of WrongBot's claim. The conscious you, the part over which you have deliberate control, is but a small part of the goal-seeking activity that goes on in your brain. Some of that goal-seeking activity is guided by interests that aren't really yours. Sometimes you ought to ignore the interests of these other agents in your brain. There is some possibility that you should sometimes do this by keeping information from reaching those other agents, even though this means keeping the information from yourself as well.

Comment author: Tyrrell_McAllister 14 July 2010 12:50:06AM *  5 points [-]

In almost all cases I can think of I would want to be informed of any true information that was being withheld from me.

Maybe this is an example:

I was once working hard to meet a deadline. Then I saw in my e-mail that I'd just received the referee reports for a journal article that I'd submitted. Even when a referee report recommends acceptance, it will almost always request changes, however minor. I knew that if I looked at the reports, I would feel a very strong pull to work on whatever was in them, which would probably take at least several hours. Even if I resisted this pull, resistance alone would be a major tax on my attention. My brain, of its own accord, would grab mental CPU cycles from my current project to compose responses to whatever the referees said. I decided that I couldn't spare this distraction before I met my deadline. So I left the reports unread until I'd completed my project.

In short, I kept myself ignorant because I expected that knowledge of the reports' contents would induce me to pursue the wrong actions.

Comment author: mattnewport 14 July 2010 01:05:06AM *  4 points [-]

This is an example of a pretty different kind of thing to what WrongBot is talking about. It's a hack for rationing attention or a technique for avoiding distraction and keeping focus for a period of time. You read the email once your current time-critical priority was dealt with, you didn't permanently delete it. Such tactics can be useful and I use them myself. It is quite different from permanently avoiding some information for fear of permanent corruption of your brain.

I'm a little surprised that you would have thought that this example fell into the same class of things as WrongBot or I were talking about. Perhaps we need to define what kinds of 'dangerous thought' we are talking about a little more clearly. I'm rather bemused that people are conflating this kind of avoidance of viscerally unpleasant experiences with 'dangerous thoughts' as well. It seems others are interpreting the scope of the article massively more broadly than I am.

Comment author: Tyrrell_McAllister 14 July 2010 01:32:53AM *  3 points [-]

This is an example of a pretty different kind of thing to what WrongBot is talking about.

I think that you can just twiddle some parameters with my example to see something more like WrongBot's examples. My example had a known deadline, after which I knew it would be safe to read the reports. But suppose that I didn't know exactly when it would be safe to read the reports. My current project is the sort of thing where I don't currently know when I will have done enough. I don't yet know what the conditions for success are, so I don't yet know what I need to do to create safe conditions to read the reports. It is possible that it will never be safe to read the reports, that I will never be able to afford the distraction of suppressing my brain's desire to compose responses.

My understanding is that WrongBot views group-intelligence differences analogously. The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions. That is my reading of the argument.

Comment author: WrongBot 14 July 2010 03:35:46AM 4 points [-]

More or less. I'm generally sufficiently optimistic about the future that I don't think that there are kinds of true knowledge that will continue to be dangerous indefinitely; I'm just trying to highlight things I think might not be safe right now, when we're all stuck doing serious thinking with opaquely-designed sacks of meat.

Comment author: HughRistik 14 July 2010 05:06:11AM 1 point [-]

Like Matt, I don't think your example does the same thing as WrongBot's, even with your twiddling.

WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term. If you avoided reading your reports because you wanted to avoid believing that your article needed any improvement, then I think your situation would be more analogous to WrongBot's.

The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions.

But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon. That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.

Putting oneself at risk of error for a short and capped time frame is much different from putting oneself at risk of error indefinitely.

Comment author: Tyrrell_McAllister 14 July 2010 06:09:31AM *  2 points [-]

WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term.

The beliefs that I didn't want to revise were my beliefs about the contents of the reports. Before I read them, my beliefs about their contents were general and vague. Were I to read the reports, I would have specific knowledge about what they said. My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project. Despite my intention to focus solely on my current project, my brain would allocate significant resources to composing responses to what I'd read in the reports.

But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon.

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.

To be fair, WrongBot thinks that we will be able to learn this knowledge eventually. We just shouldn't take it as obvious that we know what the safe conditions are yet.

Comment author: HughRistik 14 July 2010 06:52:45AM 1 point [-]

I still say that there is a difference between what you and WrongBot are doing, even if you're successfully shooting down my attempts to articulate it. I might need a few more tries to be able to correctly articulate that intuition.

My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project.

These are not the same types of values. You were worried about your values about priorities changing, while under time pressure. WrongBot is worried about his moral values changing about how he treats certain groups of people.

But in the "twiddled" version, I don't know when the safe conditions will occur . . .

True, but there wasn't the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.

I would be interested to hear your opinion of a little thought experiment. What if I were a creationist, and you recommended me a book debunking creationism? I say that I won't read it because it might change my values, at least not until the conditions are safe for me. If I say that I can't read it this week because I have a deadline, but maybe next week, you'll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?

It seems that since we recognize that rationalists are human, we can and should give them a pass on scrutinizing certain thoughts or investigating certain ideas when they are under time pressure or emotional pressure in the short term, like in your example. But how long can one dodge inquiry in a certain area before one's rationalist creds become suspect?

Comment author: jimrandomh 13 July 2010 10:53:29PM 9 points [-]

I think it would've been better received if some attention was given to defense mechanisms - ie, rather than phrasing it as some true things being unconditionally bad to know, phrase it as some true things being bad to know unless you have the appropriate prerequisites in place. For example, knowing about differences between races is bad unless you are very good at avoiding confirmation bias, and knowing how to detect errors in reasoning is bad unless you are very good at avoiding motivated cognition.

Comment author: Tyrrell_McAllister 13 July 2010 11:05:42PM *  6 points [-]

I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was it structured badly? Are you incredibly annoyed by the formatting errors I can't figure out how to fix?

I upvoted your post, because I think that you raise a possibility that we should consider. It should not be dismissed out of hand.

However, your examples do kind of suck :). As Sarah pointed out, none of us is likely to become a dictator, and dictators are probably not typical people. So the history of dictators is not great information about how we ought to tend to our epistemological garden. Your claims about how data on group differences in intelligence affect people would be strong evidence if it were backed up by more than anecdote and speculation. As it is, though, it is at least as likely that you are suffering from confirmation bias.

Comment author: WrongBot 14 July 2010 12:31:55AM 3 points [-]

Thank you. I should have held off on making the post for a few days and worked out better examples at the very least. I will do better.

Comment author: mattnewport 14 July 2010 02:34:08AM *  10 points [-]

I've just identified something else that was nagging at me about this post: the irony of the author of this post making an argument that closely parallels an argument some thoughtful conservatives make against condoning alternative lifestyles like polyamory.

The essence of that argument is that humans are not sufficiently intelligent, rational or self-controlled to deal with the freedom to pursue their own happiness without the structure and limits imposed by evolved cultural and social norms that keep their baser instincts in check. That cultural norms exist for a reason (a kind of cultural selection for societies with norms that give them a competitive advantage) and that it is dangerous to mess with traditional norms when we don't fully understand why they exist.

I don't really subscribe to the conservative argument (though I have more sympathy for it than the argument made in this post) but it takes a similar form to this argument when it suggests that some things are too dangerous for mere humans to meddle with.

Comment author: WrongBot 14 July 2010 03:43:46AM 0 points [-]

While there are some superficial parallels, I don't think the two cases are actually very similar.

Humans don't have a polyamory-bias; if the scientific consensus on neurotransmitters like oxytocin and vasopressin is accurate, it's quite the opposite. Deliberate action in defiance of bias is not dangerous. There's no back door for evolution to exploit.

Comment author: MichaelVassar 15 July 2010 05:07:17PM 3 points [-]

This just seems unreasoned to me.

Comment author: WrongBot 15 July 2010 05:16:53PM 0 points [-]

Erm, how so?

It occurs to me that I should clarify that when I said

Deliberate action in defiance of bias is not dangerous.

I meant that it is not dangerous thinking of the sort I have attempted to describe.

Comment author: MichaelVassar 15 July 2010 06:19:32PM 6 points [-]

Maybe I just don't see the distinction or the argument that you are making, but I still don't. Do you really think that thinking about polyamory isn't likely to impact values somewhat relative to unquestioned monogamy?

Comment author: WrongBot 15 July 2010 06:45:29PM 0 points [-]

Oh, it's quite likely to impact values. But it won't impact your values without some accompanying level of conscious awareness. It's unconscious value shifts that the post is concerned about.

Comment author: satt 13 July 2010 09:38:25PM *  8 points [-]

Here's something that might work as an alternative example that doesn't imply as much bigotry on anybody's part: a PNAS study from earlier this year found that during a school year, schoolgirls with more maths-anxious female maths teachers appear to develop more stereotyped views of gender and maths achievement, and do less well in their maths classes.

Let's suppose the results of that study were replicated and extended. Would a female maths teacher be justified in refusing to think about the debate over sex and IQ/maths achievement, on the grounds that doing so is likely to generate maths anxiety and so indirectly harm their female students' maths competence?

[Edited so the hyperlink isn't so long & ugly.]

Comment author: xamdam 13 July 2010 11:46:31PM *  9 points [-]

I think this is a worthwhile discussion.

Here are some "true things" I don't want to know about:

  • the most catchy commercial jingle in the universe
  • what 2g1c looks like. I managed to avoid it thus far
  • the day I am going to die

Comment author: [deleted] 14 July 2010 12:33:05AM 8 points [-]

I have to admit there's information I shield myself from as well.

  1. I don't like watching real people die on video. I worry about getting desensitized/dehumanized.

  2. I don't want to see 2g1c either. (by extension, most of the grungier parts of the intertubes.)

  3. I don't want to know (from experience) what heroin feels like.

I do know people who believe in total desensitization -- they think that the reflex to shudder or gag is something you have to burn out of yourself. I don't think I want that for myself, though.

Comment author: gwern 14 July 2010 10:54:26AM 4 points [-]

I don't want to see 2g1c either. (by extension, most of the grungier parts of the intertubes.)

You know, those shock videos are not as bad as they look. 2g1c is usually thought to be something along the lines of chocolate, and the infamous Tubgirl is known to be just orange juice.

(Which makes sense; eating feces is a good way to get sick.)

Comment author: teageegeepea 14 July 2010 01:52:49AM 14 points [-]

I'm surprised about the last one. I think it would be quite helpful if you could be prepared for that.

The other two are experiences you wouldn't like to have. If you had the indexical knowledge of what the catchiest jingle was, you could better avoid hearing it.

Comment author: xamdam 15 July 2010 07:18:36PM 0 points [-]

if you could be prepared for it

That's a big if ;)

I am not.

Comment author: Emile 14 July 2010 10:39:19AM *  6 points [-]
If you tell me the wild boar
Has twenty teeth, I’ll say, “Why sure.”
Or say that he has thirty three,
That number is quite all right with me
Or scream that he has ninety-nine
I’ll never say that you are lyin’,
For the number of teeth
In a wild boar’s mouth
Is a subject I’m glad
I know nothing about.

-- Shel Silverstein

Comment author: teageegeepea 14 July 2010 01:50:17AM 10 points [-]

Bryan Caplan argues against the "corrupted by power" idea with an alternative view: they were corrupt from the start, which is why they were willing to go to such extremes to attain power.

Around the time I stopped believing in God and objective morality I came around to Stirner's view: such values are "geists" haunting the mind, often distracting us from factual truths. Just as I stopped reading fiction for reasons of epistemic hygiene, I decided that chucking morality would serve a similar purpose. I certainly wouldn't trust myself to selectively filter any factual information. How can the uninformed know what to be uninformed about?

Comment author: simplicio 14 July 2010 02:39:39AM *  23 points [-]

I upvoted this post because it's a fascinating topic. But I think a trip down memory lane might be in order. This 'dangerous knowledge' idea isn't new, and examples of what was once considered dangerous knowledge should leap into the minds of anybody familiar with the Coles Notes of the history of science and philosophy (Galileo anyone?). Most dangerous knowledge seems to turn out not to be (kids know about contraception, and lo, the sky has not fallen).

I share your distrust of the compromised hardware we run on, and blindly collecting facts is a bad idea. But I'm not so sure introducing a big intentional meta-bias is a great idea. If I get myopia, my vision is not improved by tearing my eyes out.

Comment author: simplicio 14 July 2010 03:11:28AM *  15 points [-]

On reflection, I think I have an obligation to stick my neck out and address some issue of potential dangerous knowledge that really matters, rather than the triviality (to us anyway) of heliocentrism.

Suppose (worst case) that race IQ differences are real, and not explained by the Flynn effect or anything like that. I think it's beyond dispute that that would be a big boost for the racists (at least short-term), but would it be an insuperable obstacle for those of us who think ontological differences don't translate smoothly into differences in ethical worth?

The question of sex makes me fairly optimistic. Men and women are definitely distinct psychologically. And yet, as this fact has become more and more clear, I do not think sexual equality has declined. Probably the opposite - a softening of attitudes on all sides. So maybe people would actually come to grips with race IQ differences, assuming they exist.

More importantly, withholding that knowledge could be much more disastrous.

(1) If the knowledge does come out, the racists get to yell "I told you so," "Conspiracy of silence" etc. Then the IQ difference gets magnified 1000x in the public imagination.

(2) If the knowledge does not come out, then underrepresentation of certain races in e.g., higher learning stands as an ugly fact sans explanation. Society beats its head against a problem of supposed endemic racism for eternity, when the real culprit is statistical differences in mean IQ. Even though - public perceptions be damned - statistical IQ differences should have all the moral weight of "pygmies are underrepresented in basketball."

Knowing about (potential) racial IQ differences is dangerous; so is a perpetual false presumption of racism resulting from ignoring those differences if they exist. Which one generates the most angst, long-term? I don't know. But the truth is probably more sustainable than a well-intentioned fib.

Comment author: WrongBot 14 July 2010 03:50:58AM 6 points [-]

I'm inclined to agree with you.

I certainly don't think that avoiding dangerous knowledge is a good group strategy, due to (at least) difficulties with enforcement and unintended side-effects of the sort you've described here.

The question of sex makes me fairly optimistic. Men and women are definitely distinct psychologically. And yet, as this fact has become more and more clear, I do not think sexual equality has declined. Probably the opposite - a softening of attitudes on all sides. So maybe people would actually come to grips with race IQ differences, assuming they exist.

While the scientific consensus has become more clear, I'm not sure that it's reflected in popular or even intellectual opinion. Note the continuing popularity of Judith Butler in non-science academic circles, for example. Or the media's general tendency to discuss sex differences entirely outside of any scientific context. This may not be the best example.

Comment author: simplicio 14 July 2010 04:08:02AM *  8 points [-]

This may not be the best example.

Perhaps not for society at large, but what about empirically-based intellectuals themselves? Do you think knowledge of innate sex differences leads to more or less sexism among them? I think it leads to less, although my evidence is wholly anecdotal.

There is another problem with avoiding dangerous knowledge. Remember the dragon in the garage? In order to make excuses ahead of time for missing evidence, the dragon proponent needs to have an accurate representation of reality somewhere in their heart-of-hearts. This leads to cognitive dissonance.

Return to the race/IQ example. Would you rather

  • know group X has a 10 points lower average IQ than group Y, and just deal with it by trying your best to correct for confirmation bias etc., OR

  • intentionally keep yourself ignorant, while feeling deep down that something is not right.

?

I suspect the second option is worse for your behaviour towards group X. It would still be difficult for a human to do, but I'd personally rather swallow the hard pill of a 10-point average IQ difference and consciously correct for my brain's crappy heuristics, than feel queasy around group X in perpetuity because I know I'm lying to myself about them.

Comment author: MichaelVassar 15 July 2010 04:57:25PM 1 point [-]

Possibly, but faith in the truth winning out also looks like faith to me. Also, publicly at least people have to pick their battles.

Comment author: HughRistik 14 July 2010 07:29:04AM *  15 points [-]

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

One specific and relatively common version of this is people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.

The rest of the post was good, but these claims seem far too anecdotal and availability heuristicky to justify blocking yourself out of an entire area of inquiry.

When well-meaning, intelligent people like yourself refuse to examine certain areas of controversy, you consign those discourses to people with less-enlightened social attitudes. When certain beliefs are outlawed, only outlaws will hold those beliefs.

SarahC has raised some alternative ideas about how people may respond to dangerous knowledge.

As for:

Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

Why are you so comfortable with such a hasty generalization? I'm not extremely widely-read on the subject of group differences, but I've run into some writing on the subject by people who don't seem to be bigots. See Gender, Nature, and Nurture by Richard Lippa, for instance.

Why would you make a hasty generalization and then shut yourself off to evidence that could disconfirm it?

A consequence of this is that your brain is not a trusted system, which itself has consequences that go much, much deeper than a bunch of misapplied heuristics. (And those are bad enough on their own!)

Your post itself demonstrates this. You are accepting certain empirical and moral beliefs that have not been justified, such as the notion of cognitive equality between groups. Regardless of whether this hypothesis is true or not, it seems to get inordinately privileged for ideological reasons. (In my view, suspended judgment on group differences is a more rational initial attitude.)

Privileging certain hypotheses for mainly ideological reasons is not rationality, even when your ideology is really warm and fuzzy.

If you are comfortable freezing your belief system in certain areas, that's a strong symptom that your mind got hacked somewhere, and the virus is so bad that it has disabled your epistemic immune system.

Personally, like simplicio, I'm not comfortable pulling an ostrich maneuver and basing my values on empirical notions that could turn out to be lies. What a great way to destroy my own conviction in my values! I would prefer to investigate these subjects, even at risk of shaking up my values. So far, like SarahC, I haven't found my values to be shaken up all that much (though maybe I'm biased in that perception).

Comment author: WrongBot 14 July 2010 05:52:17PM 2 points [-]

I think it may be helpful to clearly distinguish between epistemic and instrumental rationality. The idea proposed in this post is actively detrimental to the pursuit of epistemic rationality; I should have acknowledged that more clearly up front.

But if one is more concerned with instrumental rationality ("winning"), then perhaps there is more value here. If you've designated a particular goal state as a winning one and then, after playing for a while, unconsciously decided to change which goal state counts as a win, then from the perspective of the you that began the game, you've lost.

I do agree that my last example was massively under-justified, especially considering the breadth of the claim.

Comment author: Kevin 14 July 2010 08:23:08AM 3 points [-]
Comment author: cousin_it 14 July 2010 09:31:22AM *  18 points [-]

In the comments here we see how LW is segmenting into "pro-truth" and "pro-equality" camps, just as it happened before with pro-PUA and anti-PUA, pro-status and anti-status, etc. I believe all these divisions are correlated and indicate a deeper underlying division within our community. Also I observe that discussions about topics that lie on the "dividing line" generate much more heat than light, and that people who participate in them tend to write their bottom lines in advance.

I'm generally reluctant to shut people up, but here's a suggestion: if you find yourself touching the "dividing line" topics in a post or comment, think twice whether it's really necessary. We may wish ourselves to be rational, but it seems we still lack the abstract machinery required to actually update our opinions when talking about these topics. Nothing is to be gained from discussing them until we have the more abstract stuff firmly in place.

Comment author: whpearson 14 July 2010 10:05:26AM *  3 points [-]

I think there are divisions within the community, but I am not sure about the correlations. Or at least they don't fit me.

I'm pro discussion of status; I liked red paper clip theory, for example. I'm anti acquiring high status for myself, and anti people telling me I should be pro that. I'm anti-pua advice, pro the occasional well backed up psychological research with PUA style flavour (finding out what women really find attractive, why the common advice is wrong etc).

I'm pretty much pro-truth, I don't think words can influence me that much (if they could I would be far more mainstream). I'm less sure about situations, if I was more status/money maximising for a while to earn money to donate to FHI etc, then I would worry that I would get sucked into the high status decadent consumer lifestyle and forget about my long term concerns.

Edit: Actually, I've just thought of a possible reason for the division you note.

If you are dominant or want to become dominant you do not want to be swayed by the words of others. So ideas are less likely to be dangerous to you or your values. If you are less-dominant you may be more susceptible to the ideas that are floating around in society as, evolutionarily, you would want to be part of whatever movement is forming so you are part of the ingroup.

I think my social coprocessor is probably broken in some weird way, so I may be an outlier.

Comment author: Blueberry 14 July 2010 04:14:40PM 3 points [-]

I'm anti-pua advice, pro the occasional well backed up psychological research with PUA style flavour (finding out what women really find attractive, why the common advice is wrong etc).

I think this is just another way of saying "I'm pro- good advice about dating and anti- bad advice about dating." I would consider the research you're discussing a form of PUA/dating advice.

Comment author: whpearson 14 July 2010 05:54:01PM *  7 points [-]

Are newtons laws billiard ball prediction advice?

In other words, there are other uses than trying to pick up girls for knowing what, on average, women like in a man. These include, but are not limited to,

  • Judging the likely ability of politicians to influence women
  • Being able to match make between friends
  • Writing realistic plots in fiction
  • Not being surprised when your friends are attracted to certain people
Comment author: Larks 15 July 2010 12:38:49AM 9 points [-]

If you're an altruist (on the 'idealist' side of WrongBot's distinction), you'd probably consider making women you know happier to be the biggest advantage.

Comment author: whpearson 15 July 2010 09:54:04AM *  4 points [-]

Most of the women I'm friends with are in relationships with men that aren't me :) So me being maximally attractive to them may not make them happier. I would need more research on how to have the correct amount of attractiveness in platonic relationships.

Sure, women like the attention of a very attractive man, but it could lead to jealousy (why is attractive man speaking to X and not me?), unrequited lust, and strife in their existing relationships.

Perhaps research on what women find creepy, and then not doing that, would be more useful for making women happier in general.

Edit: There is also the problem that if you become more attractive you might make your male friends less happy as they get less attention. Raising the general attractiveness of your male social group is another possibility, but one that would require quite an oddly rational group.

Comment author: HughRistik 15 July 2010 05:41:33AM 5 points [-]

If you are dominant or want to become dominant you do not want to be swayed by the words of others. So ideas are less likely to be dangerous to you or your values. If you are less-dominant you may be more susceptible to the ideas that are floating around in society as, evolutionarily, you would want to be part of whatever movement is forming so you are part of the ingroup.

Another possibility is that we are seeing some other personality differences in openness and/or agreeableness. People who are higher in openness and/or lower in agreeableness might be more interested in ideas that are judged politically incorrect, or antisocial.

Comment author: MichaelVassar 15 July 2010 04:53:54PM 8 points [-]

There's no social coprocessor; we evolved a giant cerebral cortex to do social processing, but some people refuse to use it for that because they can't use it in its native mode while they are also emulating a general intelligence on the same hardware.

Comment author: whpearson 16 July 2010 10:15:56AM *  2 points [-]

I was being brief (and imprecise) in my self-assessment, as that wasn't the main point of the comment. I didn't even mean broken in the sense that others might have meant it, i.e. Aspergers.

I just don't enjoy social conversation much normally. I can do it such that the other person enjoys it somewhat. An example: I was chatting to a cute dancer last night (at someone's 30th, so I was obliged to), and she invited me to watch her latest dance. I declined because I wasn't into her (or into watching dance). She was nice and pretty, nothing wrong with her, but I just don't tend to seek marginal connections with people because they don't do much for me. Historically, the people I connect with seem to have been people who challenged me or made me think in odd directions.

This I understand is an unusual way to pick people to associate with, so I think something in the way I process social signals is different from the norm. This is what I meant.

Comment author: CarlShulman 14 July 2010 10:10:24AM 0 points [-]

Any hypotheses about the common factor?

Comment author: cousin_it 14 July 2010 12:25:47PM *  6 points [-]

Not sure. I was anti-status, anti-PUA, pro-equality until age 22 or so, and then changed my opinions on all these issues at around the same time (took a couple years). So maybe there is a common cause, but I have absolutely no idea what that cause could be.

Comment author: CarlShulman 14 July 2010 01:27:39PM 5 points [-]

Reduced attachment to explicit verbal norms?

Comment author: JamesPfeiffer 14 July 2010 03:27:29PM 2 points [-]

My relevant life excerpt is similar to yours. The first two changed because of increased understanding of how humans coordinate and act socially. Not sure if there is a link to the third.

Comment author: Blueberry 14 July 2010 04:11:57PM 0 points [-]

I was anti-status, anti-PUA, pro-equality until age 22 or so, and then changed my opinions on all these issues

It's called "growing up."

Comment author: Emile 14 July 2010 12:35:10PM 2 points [-]

I agree that these politically charged issues are probably not a very good thing for the community, and that we should be extra cautious when engaging them.

Comment author: [deleted] 14 July 2010 02:15:18PM 6 points [-]

If there's a discussion about whether or not we should seek truth -- at a site about rationality -- that's a discussion worth having. It's not a side issue.

Like whpearson, I think we're not all on one side or another. I'm pro-truth. I'm anti-PUA. I don't know if I'm pro or anti status -- there's something about this community's focus on it that unsettles me, but I certainly don't disapprove of people choosing to do something high-status like become a millionaire.

You're basically talking about the anti-PC cluster. It's an interesting phenomenon. We've got instinctively and vehemently anti-PC people; we've got people trying to edge in the direction of "Hey, maybe we shouldn't just do whatever we want"; and we've got people like me who are sort of on the dividing line, anti-PC in theory but willing to walk away and withdraw association from people who actually spew a lot of hate.

I think it's an interesting issue because it deals with how we ought best to react to controversy. In the spirit of the comments I made to WrongBot, I don't think we should fear to go there; I know my rationality isn't that fragile and I doubt yours is either. (I've gotten my knee-jerk emotional responses burned out of me by people much ruder than anyone here.)

Comment author: cousin_it 14 July 2010 02:35:03PM *  13 points [-]

Anti-PC? Good name, I will use it.

I know my rationality isn't that fragile and I doubt yours is either.

What troubles me is this: your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I'm pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I'd be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different "positions". There is only one truth.

We have had some big successes already. (For example, most people here know better than to be confused by talk of "free will".) I don't think the anti-PC issue can be resolved by the drawn-out positional war we're waging, because it isn't actually making anyone change their opinions. It's just a barrage of rationalizations from all sides. We need more insight. We need a breakthrough, or maybe several, that would point out the obviously correct way to think about anti-PC issues.

Comment author: Blueberry 14 July 2010 04:06:35PM 2 points [-]

your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I'm pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I'd be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different "positions". There is only one truth.

But there are many different values. If we can't sway each other's positions, that points to a value difference.

Comment author: Vladimir_Nesov 14 July 2010 04:23:40PM 6 points [-]

If we can't sway each other's positions, that points to a value difference.

If only it was always so. Value is hard to see, so easy to rationalize.

Comment author: [deleted] 14 July 2010 08:35:45PM 4 points [-]

I think it actually is a value difference, just like Blueberry said.

I do not want to participate in nastiness (loosely defined). It's related to my inclination not to engage in malicious gossip. (Folks who know me personally consider it almost weird how uncomfortable I am with bashing people, singly or in groups.) It's not my business to stop other people from doing it, but I just don't want it as part of my life, because it's corrosive and makes me unhappy.

To refine my own position a little bit -- I'm happy to consider anti-PC issues as matters of fact, but I don't like them connotationally, because I don't like speaking ill of people when I can help it. For example, in a conversation with a friend: he says, "Don't you know blacks have a higher crime rate than whites?" I say, "Sure, that's true. But what do you want from me? You want me to say how much I hate my black neighbors? What do you want me to say?"

I don't think that's an issue that argument can dissuade me from; it's my own preference.

Comment author: steven0461 14 July 2010 09:28:00PM 0 points [-]

Asserting group inequalities means speaking more ill of one group of people but less ill of another, so doesn't that cancel out?

Comment author: [deleted] 14 July 2010 09:44:28PM 1 point [-]

I'm not talking about empirical claims, I'm talking about affect. I have zero problem with talking about group inequalities, in themselves.

Comment author: Douglas_Knight 14 July 2010 05:06:18PM 3 points [-]

Maybe it's a political correctness principal component, but it seems to me that ideas about status should not be aligned with that component. If PUA had not been mentioned, and we were just discussing Johnstone, then I think those who are ignorant of PUA, whether pro- or anti-PC, would have less extreme reactions and often completely different ones.

If people's opinions on one issue are polarizing their opinions on another, without agreement that they're logically related, something is probably going wrong and this is a cost to discussing the first issue. Also, cousin_it talked about the issues creating "camps." That's probably the mediating problem.

Comment author: WrongBot 14 July 2010 07:27:24PM 21 points [-]

My hypothesis is that this is a "realist"/"idealist" divide. Or, to put it another way, one camp is more concerned with being right and the other is more concerned with doing the right thing. ("Right" means two totally different things, here.)

Quality of my post aside (and it really wasn't very good), I think that's where the dividing line has been in the comments.

Similarly, I think most people who value PUA here value it because it works, and most people who oppose it do so on ethical or idealistic grounds. Ditto discussions of status.

The reason the arguments between these camps are so unfruitful, then, is that we're sort of arguing past each other. We're using different heuristics to evaluate desirability, and then we're surprised when we get different results; I'm as guilty of this as anyone.

Comment author: HughRistik 15 July 2010 05:36:51AM 7 points [-]

My hypothesis is that this is a "realist"/"idealist" divide.

I was thinking the same thing, when I insinuated that you were being idealistic ;) Whether this dichotomy makes sense is another question.

Similarly, I think most people who value PUA here value it because it works, and most people who oppose it do so on ethical or idealistic grounds. Ditto discussions of status.

I think this is an excellent example of what the disagreements look like superficially. I think what is actually going on is more complex, such as differences of perception of empirical matters (underlying "what works"), and different moral philosophies.

For example, if you have a deontological prescription against acting "inauthentic," then certain strategies for learning social skills will appear unethical to you. If you are a virtue ethicist, then holding certain sorts of intentions may appear unethical, whereas a consequentialist would look more at the effects of the behavior.

Although I would get pegged on the "realist" side of the divide, I am actually very idealistic. I just (a) revise my values as my empirical understanding of the world changes, and (b) believe that empirical investigation and certain morally controversial behaviors are useful to execute on my values in the real world.

For example, even though intentionally studying status is controversial, I find that social status skills are often useful for creating equality with people. I study power to gain equality. So am I a realist, or an idealist on that subject?

Another aspect of the difference we are seeing may be in this article's description of "shallowness."

Comment author: HughRistik 15 July 2010 08:03:26AM 14 points [-]

Here is another example of the way that pragmatism and idealism interact for me, from the world of pickup:

I was brought up with the value of gender equality, and with a proscription against dominating women or being a "jerk."

When I got into pickup and seduction, I encountered the theory that certain masculine behaviors, including social dominance, are a factor in female attraction to men. This theory matched my observation of many women's behavior.

While I was uncomfortable with the notion of displaying stereotypically masculine behavior (e.g. "hegemonic masculinity" from feminist theory) and acting in a dominant manner towards women, I decided to give it a try. I found that it worked. Yet I still didn't like certain types of masculine and dominance displays, and the type of interactions they created with women (even while "working" in terms of attraction and not being obviously unethical), so I started experimenting and practicing styles less reliant on dominance.

I found that there were ways of attracting women that worked quite well, and didn't depend on dominance and a narrow version of masculinity. It just took a bit of practice and creativity, and I needed my other pickup tools to be able to pull it off. Practicing a traditional form of masculinity got me the social experience necessary to figure out ways to drop that sort of masculinity.

In conclusion, I eventually affirmed my value of having equal interactions with women and avoiding dominating them. And I discovered "field tested" ways to attain success with women while adhering to that value, so I confirmed that it wasn't a silly, pie-in-the-sky ideal.

I call this an empirical approach to selecting and accomplishing a value.

Comment author: MichaelVassar 15 July 2010 04:49:18PM 9 points [-]

I strongly agree with this. Count me in the camp of believing true things in literally all situations, as I think the human brain is too biased for any other approach to result, in expectation, in doing the right thing, but also in the camp of not necessarily sharing truths that might be expected to be harmful.

Comment author: xamdam 14 July 2010 09:58:27AM 1 point [-]

By the way, some people took a similar position to yours in

What Is Your Dangerous Idea?: Today's Leading Thinkers on the Unthinkable

Comment author: steven0461 14 July 2010 05:56:22PM 2 points [-]

If you're being held back by worries about your values changing, you can always try cultivating a general habit of reverting to values held by earlier selves when doing so is relatively easy. I call it "reactionary self-help".

Comment author: PhilGoetz 14 July 2010 07:18:08PM *  3 points [-]

I don't think that makes sense. Changing back is no more desirable than any other change.

Once you've changed, you've changed. Changing your utility function is undesirable. But it isn't bad. You strive to avoid it; but once it's happened, you're glad it did.

Comment author: steven0461 14 July 2010 08:36:06PM *  3 points [-]

Right; that's what happens by default. But if you find that because your future self will want to keep its new values, you're overly reluctant to take useful actions that change your values as a side effect, you might want to precommit to roll back certain changes; or if you can't keep track of all the side effects, it's conceivable you want to turn it into a general habit. I could see this either being a good or bad idea on net.

Comment author: MichaelVassar 15 July 2010 04:46:19PM 1 point [-]

You can also try to engage in trade with your future selves, which most good formulations of CEV or its successors should probably enable.

Comment author: MichaelVassar 15 July 2010 05:17:33PM 24 points [-]

I flat-out disagree that power corrupts as the phrase is usually understood, but that's a topic worthy of rational discussion (just not now with me).

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion, a key point of faith in the American democratic religion and no more worthy of discussion than whether the Earth is old, at least for usual meanings of the word 'benevolent' and for meanings of 'dictator' which avoid the no true Scotsman fallacy. There have been benevolent democratically elected leaders in the usual sense too. How confident do you think you should be that the latter are more common than the former though? Why?

I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim. How many people would have jumped in against the claim that without belief in god there can be no morality or public order, that the moral behavior of secular people is just a habit or hold-over from Christian times, and thus that all secular societies are doomed? To me it's about equally credible.

BTW, just from the 20th century there are people from Ataturk to FDR to Lee Kuan Yew to Deng Xiaoping. More generally, more or less The Entire History of the World, especially East Asia, offers counter-examples.

Comment author: JanetK 15 July 2010 05:40:47PM 6 points [-]

I agree that statements like 'all As are Bs' are likely to be only approximately true, and if you look you will find counter-examples. But... 'power corrupts' is a fairly reliable rule of thumb, as rules of thumb go. I include a couple of refs that took all of 3 minutes to find, although I couldn't find the really good one that I noticed a year or so ago.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1298606 abstract: We investigate the effect of power differences and associated expectations in social decision-making. Using a modified ultimatum game, we show that allocators lower their offers to recipients when the power difference shifts in favor of the allocator. Remarkably, however, when recipients are completely powerless, offers increase. This effect is mediated by a change in framing of the situation: when the opponent is without power, feelings of social responsibility are evoked. On the recipient side, we show that recipients do not anticipate these higher outcomes resulting from powerlessness. They prefer more power over less, expecting higher outcomes when they are more powerful, especially when less power entails powerlessness. Results are discussed in relation to empathy gaps and social responsibility.

http://scienceblogs.com/cortex/2010/01/power.php from J Lehrer's comments: The scientists argue that power is corrupting because it leads to moral hypocrisy. Although we almost always know what the right thing to do is - cheating at dice is a sin - power makes it easier to justify the wrongdoing, as we rationalize away our moral mistake.

Comment author: MichaelVassar 15 July 2010 06:04:32PM *  4 points [-]

Voted up for using data, though I'm very far from convinced by the specific data. The first seems irrelevant or at best very weakly suggestive. Regarding the second, I'm pretty confident that scientists profoundly misunderstand what sort of thing hypocrisy is, as a consequence of the same profound misunderstanding of what sort of thing mind is which led to the failures of GOFAI. I guess I also think they misunderstand what corruption is, though I'm less clear on that.

It's really critical that we distinguish power corrupting from fear and weakness producing pro-social submission, and from fearful people invoking morality to cover over cowardice. In the usual sense of the former concept, corruption is something that should be expected, for instance, to be much more gradual. One should really notice that heroes in stories for adults are not generally rule-abiding, and frequently aren't even typically selfless. Acting more antisocial, like the people you actually admire (except when you are busy resenting their affronts to you) do, because like them you are no longer afraid, is totally different from acting like people you detest.

I don't think that "power corrupts" is a helpful approximation at the level of critical thinking ability common here. (What models are useful depends on what other models you have.)

Comment author: Vladimir_M 15 July 2010 06:20:54PM *  8 points [-]

MichaelVassar:

I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim.

I was tempted to challenge it, but I decided that it wasn't worth opening such an emotionally charged can of worms.

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion, a key point of faith in the American democratic religion and no more worthy of discussion than whether the Earth is old, at least for usual meanings of the word 'benevolent' and for meanings of 'dictator' which avoid the no true Scotsman fallacy. There have been benevolent democratically elected leaders in the usual sense too. How confident do you think you should be that the latter are more common than the former though? Why?

These are some good remarks and questions, but I'd say you're committing a fallacy when you contrast dictators with democratically elected leaders as if it were some sort of dichotomy, or even a typically occurring contrast. There have been many non-democratic political arrangements in human history other than dictatorships. Moreover, it's not at all clear that dictatorships and democracies should be viewed as disjoint phenomena. Unless we insist on a No-True-Scotsman definition of democracy, many dictatorships, including quite nasty ones, have been fundamentally democratic in the sense of basing their power on majority popular support.

Comment author: MichaelVassar 16 July 2010 08:22:19AM 4 points [-]

I agree with everything in your paragraph. The important distinction between states as I see it is more between totalitarian and non-totalitarian than between democratic and non-democratic, as the latter tends to be a fairly smooth continuum. I was working within the local parlance for an American audience.

Comment author: WrongBot 15 July 2010 06:22:25PM 3 points [-]

While I'd disagree with your description of FDR as a dictator, you're quite right about Ataturk, and your other examples expose my woefully insufficient knowledge of non-Western history. My belief has been updated, and the post will be as well, in a moment.

Thanks.

Comment author: MichaelVassar 16 July 2010 08:24:11AM *  4 points [-]

Thank you! I'm so happy to have a community where things like this happen. Are you in agreement with my description of Lincoln as a dictator below? He's less benevolent than FDR but I'd still call him benevolent and he's a more clear dictator.

Comment author: Mass_Driver 15 July 2010 06:53:19PM 10 points [-]

that's a topic worthy of rational discussion (just not now with me).

If this is a plea to be let alone on the topic, then, feel free to ignore my comment below -- I'm posting in case third parties want to respond.

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion,

Perhaps it's phrased poorly. There have certainly been plenty of dictators who often meant well and who often, on balance, did more good than harm for their country -- but such dictators are rare exceptions, and even these well-meaning, useful dictators may not have been "truly" benevolent in the sense that they presided over hideous atrocities. Obviously a certain amount of illiberal behavior is implicit in what it means to be a dictator -- to argue that FDR was non-benevolent because he served four terms or managed the economy with a heavy hand would indeed involve a "no true Scotsman" fallacy. But a well-intentioned, useful, illiberal ruler may nevertheless be surprisingly bloody, and this is a warning that should be widely and frequently promulgated, because it is true and important and people tend to forget it.

BTW, just from the 20th century there are people from Ataturk to FDR to Lee Kuan Yew to Deng Chou Ping. More generally, more or less The Entire History of the World especially East Asia are counter-examples.

  • Ataturk is often accused of playing a leading role in the Armenian genocide, and at the very least seems to have been involved in dismissing courts that were trying war criminals without providing replacement courts, and in conquering territories where Armenians were massacred shortly after the conquest.

  • Deng Xiaoping was probably the most powerful person in China at the time of the Tiananmen Square massacres, and it is not clear that he exerted any influence to attempt to disperse the protesters peacefully or even with a minimum of violence: tanks were used in urban areas and secret police hunted down thousands of dissidents even after the protests had ended. One might have hoped that a benevolent illiberal ruler, when confronted with peaceful demands for democracy, would simply say "No." and ignore the protesters except in so far as they were creating a public nuisance.

  • FDR presided over the internment of hundreds of thousands of American citizens in concentration camps solely on the basis of race, as well as the firebombing of Dresden, Hamburg, and Tokyo. The first conflagration of a residential area could have been an accident, but there is no evidence of which I am aware that the Allies ever took steps to prevent tens of thousands of civilians from being burnt alive, such as, e.g., taking care to only bomb non-urban industrial targets on hot, dry, summer days. Although Hitler is surely far more responsible than FDR for the Holocaust, a truly benevolent ruler would probably have spared an air raid or two to cut the railroad tracks that led from Jewish ghettos to German death camps. Whatever you might think about FDR's leadership (I would not presume to judge him or to say that I could have done better in his place), it was surprisingly bloody for a benevolent person.

  • Lee Kuan Yew seems to have been a fairly good dictator, but in his autobiography, he claims to have directly benefited from the US's war efforts in Vietnam, and he says that he would not have remained in power but for the US efforts. For its part, the US State Department explicitly claimed that the Vietnam war was intended to prevent countries like Lee Kuan Yew's Singapore from falling like dominoes after a possible Communization of Vietnam. Although it would probably be unfair to lay moral culpability for, e.g., My Lai or Agent Orange on Lee Kuan Yew (and thus I do not say he is in any way to blame), it is still worth noting that Lee's dictatorship was indirectly maintained by years of surprisingly bloody violence. Thus, Lee may be an exception that proves the rule -- even when you yourself, as an aspiring dictator, do not get your hands bloody as power corrupts you, it is possible that you are saved from bloody hands only by a friend who gets his hands bloody for you.

Comment author: MichaelVassar 15 July 2010 09:04:33PM 17 points [-]

I simply deny the assertion that dictators who wanted good results and got them were rare exceptions. Citation needed.

Admittedly, dictators have frequently presided over atrocities, unlike democratic rulers who have never presided over atrocities such as slavery, genocide, or more recently, say the Iraq war, Vietnam, or in an ongoing sense, the drug war or factory farming.

Human life is bloody. Power pushes the perceived responsibility for that brute fact onto the powerful. People are often scum, but avoiding power doesn't actually remove their responsibility. Practically every American can save lives for amounts of money which are fairly minor to them. What are the relevant differences between them and French aristocrats who could have done the same? I see one difference: the French aristocrats lived in a Malthusian world where they couldn't really have impacted total global suffering with the local efforts available.

How is G.W. Bush more corrupt than the people who elected him? He seems to care more for the third world poor than they do, and not obviously less for rule of law or the welfare of the US.

Playing fast and loose with geopolitical realities (Iraq is only slightly about oil, for instance), I'd like to conclude with the observation that even when you yourself, as a middle-class American, don't get your hands bloody as cheap oil etc. corrupts you, it is possible that you are saved from bloody hands by an elected representative whom you hired to do the job.

Comment author: Mass_Driver 16 July 2010 05:49:52AM 4 points [-]

Citation needed.

It's hard to find proof of what most people consider obvious, unless it's part of the Canon of Great Moments in Science (tm) and the textbook industry can make a bundle off it. Tell you what -- if you like, I'll trade you a promise to look for the citation you want for a promise to look for primary science on anthropogenic global warming. I suspect we're making the climate warmer, but I don't know where to read a peer-reviewed article documenting the evidence that we are. I'll spend any reasonable amount of time that you do looking -- 5 minutes, 15 minutes, 90 minutes -- and if I can't find anything, I'll admit to being wrong.

unlike democratic rulers who have never presided over atrocities such as slavery, genocide, or more recently, say the Iraq war, Vietnam, or in an ongoing sense, the drug war or factory farming.

Slavery, genocide, and factory farming are examples of imperfect democracy -- the definition of "citizen" simply isn't extended widely enough yet. Fortunately, people (slowly) tend to notice the inconsistency in times of relative peace and prosperity, and extend additional rights. Hence the order-of-magnitude decrease in the fraction of the global population that is enslaved, and, if you believe Steven Pinker, in the frequency of ethnic killings. As for factory farming, I sincerely hope the day when animals are treated as citizens when appropriate will come, and the quicker it comes the better I'll be pleased. On the other hand, if you glorify dictatorship, or if you give dictatorship an opening to glorify itself, it tends to pretty effectively suppress talk about widening the circle of compassion. Better to have a hypocritical system of liberties than to let vice walk the streets without paying any tribute to virtue at all; such tributes can collect compound interest over the centuries.

The Vietnam war is generally recognized as a failure of democracy; the two most popular opponents of the war were assassinated, and the papers providing the policy rationale for the war were illegally hidden, ultimately causing the downfall of President Nixon. The drug war seems to be winding down as the high cost of prisons sinks in. The war on Iraq is probably democracy's fault.

Human life is bloody. Power pushes the perceived responsibility for that brute fact onto the powerful.

True enough, but it also pushes some of the real responsibility onto the powerful. I would much rather kill one person than stand by and let ten die, but I would much rather let one person die than kill one person -- responsibility counts for something.

it is possible that you are saved from bloody hands by an elected representative who you hired to do the job.

God forbid, if you'll excuse the expression. I'm not paying anybody to butcher for me, although sometimes, despite my best efforts, they take my tax dollars for the purpose. So far as I can manage it without being thrown in jail, it's not in my name; I vote against any incumbent that commits atrocities, and campaign for people who promise not to, and buy renewable energy from the power company and fair-trade imports from the third world and humanely-labeled meat from the supermarket. I'm sure that I still benefit from all kinds of bloody shenanigans, but it's not because I want to.

Finally, are you any relation to Michael Vassar, the political philosopher and scholar of just war theory? You seem to have a mind that is open like his, and a similarly agile debating style, but you also seem considerably bitterer than his published works.

Comment author: MichaelVassar 16 July 2010 08:42:32AM 8 points [-]

Good writing style!

I don't think I glorify dictatorship, but I do think that terrible dictatorships, like Stalinist Russia, have sometimes spoken of widening circles of compassion.

I do think you are glorifying democracy. Do you have examples of perfect democracy to contrast with imperfect democracy? Slaves frequently aren't citizens, but on other occasions, such as in the immense and enslaving US prison system (with its huge rates of false conviction and of conviction for absurd crimes) or the military draft, they are. The reduction in slavery may be due to philosophical progress trickling down to the masses, or it may simply be that slavery has become less economically competitive as markets have matured.

Responsibility counts for something, but for far less among the powerful. As power increases, custom weakens, and situations become more unique, acts/omissions distinctions become less useful. As a result, rapid rises in power do frequently leave people without a moral compass, leading to terrible actions.

I appreciate your efforts to avoid indirectly causing harms.

I didn't know about the other Michael Vassar. It's an uncommon name, so I'm surprised to hear it.

Comment author: prase 16 July 2010 09:07:14AM 9 points [-]

I simply deny the assertion that dictators who wanted good results and got them were rare exceptions. Citation needed.

The standards of evaluation of goodness should be specified in greater detail first. Otherwise it is quite difficult to tell whether e.g. Atatürk was really benevolent or not, even if we agree on the goodness of his individual actions. Some of the questions:

  • are the points scored by getting desired good results cancelled by the atrocities, and to what extent?
  • could a non-dictatorial regime do better (given the conditions in the specific country and historical period), and if no, can the dictator bear full responsibility for his deeds?
  • what amount of goodness makes a dictator benevolent?

Unless we first specify the criteria, the risk of widespread rationalisation in this discussion is high.

Comment author: Kevin 16 July 2010 08:25:45AM *  4 points [-]

One might have hoped that a benevolent illiberal ruler, when confronted with peaceful demands for democracy, would simply say "No." and ignore the protesters except in so far as they were creating a public nuisance.

In America, we have grown jaded towards protests because they don't ever accomplish anything. But at their most powerful, protests become revolutions. If Deng had just ignored the protesters indefinitely, the CCP would have fallen. Perhaps the protest could have been dispersed without loss of life, but it's only very recently that police tactics have advanced to the point of being able to disperse large groups of defensively-militarized protesters without killing people. See http://en.wikipedia.org/wiki/Miami_model and compare to the failure of the police at the Seattle WTO protests of 1999.

This is a recent story about Deng's supposed backing of Tiananmen violence. http://www.nytimes.com/2010/06/05/world/asia/05china.html?_r=1

Comment author: satt 16 July 2010 03:14:52AM 3 points [-]

I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim.

I didn't challenge it because I didn't find it absurd. I've asked myself in the past whether I could think of heads of state whose orders & actions were untarnished enough that I could go ahead and call them "benevolent" without caveats, and I drew a blank.

I'd guess my definition of a benevolent leader is less inclusive than yours; judging by your child comment it seems as if you're interpreting "benevolent dictator" as meaning simply "dictators who wanted good results and got them". To me "benevolent" connotes not only good motives & good policies/behaviour but also a lack of very bad policies/behaviour. Other posters in this discussion might've interpreted it like I did.

Comment author: MichaelVassar 16 July 2010 08:05:37AM 3 points [-]

Possibly. OTOH, the poster seems to have been convinced. I draw a blank on people, dictators or not, who don't engage in very bad policies/behavior on whatever scale they are able to act on. No points for inaction in my book.

Comment author: Aurini 16 July 2010 06:13:36AM 5 points [-]

Perhaps it would be more accurate to state "The structural dynamics of dictatorial regimes demands coercion be used, while decentralized power systems allow dissent"; even the Philosopher King must murder upstarts who would take the throne. Mass Driver's comments (below) support this, with Lee Kuan Yew's power requiring violent coercion being performed on his behalf, and the examples of Democratic Despotism largely boil down to a lack of accountability and transparency in the elected leaders - essentially they became (have become) too powerful.

"Power corrupts" is just the colloquial form.

(It is possible that I am in a Death Spiral with this idea, but this analysis occurred to me spontaneously - I didn't go seeking out an explanation that fit my theory)

Comment author: MichaelVassar 16 July 2010 08:19:34AM 10 points [-]

Voted up for precision.
I see decentralization of power as less relevant than regime stability as an enabler of non-violence. Kings in long-standing monarchies, philosophical or not, need use little violence. New dictators (classically called tyrants) need use much violence. In addition, they have the advantage of having been selected for ability and the disadvantage of having been poorly educated for their position.

Of course, power ALWAYS scales up the impact of your actions. Let's say that I'm significantly more careful than average. In that case, my worst actions include doing things that have a .1% chance of killing someone every decade. Scale that up by ten million and it's roughly equivalent to killing ten thousand people once during a decade-long reign over a mid-sized country. I'd call that much better than Lincoln (who declared martial law and was an elected dictator if Hitler was one) or FDR but MUCH worse than Deng. OTOH, Lincoln and FDR lived in an anarchy, the international community, and I don't. I couldn't be as careful/scrupulous as I am if I lived in an anarchy.
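[Editor's note: the scaling argument in the comment above is straightforward expected-value arithmetic. A minimal sketch, using the commenter's assumed figures (a 0.1%-per-decade chance and a ten-million-fold amplification are his hypotheticals, not established data):]

```python
# Back-of-the-envelope check of the comment's scaling argument.
# Assumptions (the commenter's, not established facts):
#   - a careful individual's worst actions carry a 0.1% chance
#     of causing one death per decade;
#   - a ruler's actions affect roughly ten million times as many people.

p_kill_per_decade = 0.001   # 0.1% chance of causing one death per decade
scale_factor = 10_000_000   # amplification of a ruler's actions

expected_deaths = p_kill_per_decade * scale_factor
print(expected_deaths)  # 10000.0 expected deaths per decade-long reign
```

The point of the exercise is only that a linear scale-up of tiny individual risks yields casualty figures comparable to those of historical rulers, which is what the comment uses to compare against Lincoln, FDR, and Deng.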

Comment author: WrongBot 15 July 2010 07:52:33PM 10 points [-]

I've observed that quite a bit of the disagreement with the substance of my post is due to people believing that the level of distrust for one's own brain that I advocate is excessive. (See this comment by SarahC, for example.)

It occurs to me that I should explain exactly why I do not trust my own brain.

In the past week I have noted the following instances in which my brain has malfunctioned; each of them is a class of malfunction I had never previously observed in myself:

(It may be relevant to note that I have AS.)

  • I needed to open a box of plastic wrap, of the sort with a roll inside a box, a flap that lifts up, and a sharp edge under the flap. The front of the box was designed such that there were two sections separated by some perforation; there's a little set of instructions on the box that tells you to tear one of those sections off, thus giving you a functional box of plastic wrap. I spent approximately five minutes trying to tear the wrong section off, mangling the box and cutting my finger twice in the process. This was an astonishing failure to solve a basic physical task.

  • I was making bread dough, a process which necessitates measuring out 4.5 cups of flour into a bowl. My mind was not wandering to any unusual degree, nor was I distracted or interrupted. I lost count of the number of consecutive cups of flour I was pouring into the bowl; I failed to count to four and a half.

  • I was playing Puzzle Quest (a turn-based videogame that mostly involves match-3 play of the sort made popular by Bejeweled) while reading comments on LessWrong, switching between tasks every few minutes. I find that doing this gives me time to think over things I've just read; it's also fun. At one point, as I switched from looking at a comment I had just finished reading to looking at my TV screen, I suddenly began to believe that matching colored gems was the process by which one constructed sound arguments. In general. This sensation lasted approximately five seconds before reality reasserted itself.

I might not have even really noticed these brain malfunctions if I hadn't spent significant effort recently on becoming more luminous; I'm inclined to believe that there have been plenty of other such events in the past that I have failed to notice.

In any case, I hope this explains why I am so afraid of my own brain.

Comment author: xamdam 15 July 2010 09:13:13PM 3 points [-]

Come to think of it, a related argument was made, poetically, in Watchmen: Dr. Manhattan knew everything; it clearly changed his utility function (he became less human), and he mentioned appreciating not knowing the future when Adrian blocked it with tachyons. Poetry, but something to think about.