I upvoted this post because it's a fascinating topic. But I think a trip down memory lane might be in order. This 'dangerous knowledge' idea isn't new, and examples of what was once considered dangerous knowledge should leap into the minds of anybody familiar with the Coles Notes of the history of science and philosophy (Galileo anyone?). Most dangerous knowledge seems to turn out not to be (kids know about contraception, and lo, the sky has not fallen).
I share your distrust of the compromised hardware we run on, and blindly collecting facts is a bad idea. But I'm not so sure introducing a big intentional meta-bias is a great idea. If I get myopia, my vision is not improved by tearing my eyes out.
On reflection, I think I have an obligation to stick my neck out and address an issue of potentially dangerous knowledge that really matters, rather than the triviality (to us, anyway) of heliocentrism.
Suppose (worst case) that race IQ differences are real, and not explained by the Flynn effect or anything like that. I think it's beyond dispute that that would be a big boost for the racists (at least short-term), but would it be an insuperable obstacle for those of us who think ontological differences don't translate smoothly into differences in ethical worth?
The question of sex makes me fairly optimistic. Men and women are definitely distinct psychologically. And yet, as this fact has become more and more clear, I do not think sexual equality has declined. Probably the opposite - a softening of attitudes on all sides. So maybe people would actually come to grips with race IQ differences, assuming they exist.
More importantly, withholding that knowledge could be much more disastrous.
(1) If the knowledge does come out, the racists get to yell "I told you so," "Conspiracy of silence" etc. Then the IQ difference gets magnified 1000x in the public imagination.
(2) If t...
I flat-out disagree that power corrupts as the phrase is usually understood, but that's a topic worthy of rational discussion (just not now with me).
The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion, a key point of faith in the American democratic religion and no more worthy of discussion than whether the Earth is old, at least for usual meanings of the word 'benevolent' and for meanings of 'dictator' which avoid the no true Scotsman fallacy. There have been benevolent democratically elected leaders in the usual sense too. How confident do you think you should be that the latter are more common than the former though? Why?
I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim. How many people would have jumped in against the claim that without belief in god there can be no morality or public order, that the moral behavior of secular people is just a habit or hold-over from Christian times, and that thus that all secular societies are doomed? To me it's about equally credible.
BTW, just from the 20th century there are people from Ataturk to FDR to Lee Kuan Yew to Deng Xiaoping. More generally, more or less the entire history of the world, especially East Asia, provides counter-examples.
that's a topic worthy of rational discussion (just not now with me).
If this is a plea to be let alone on the topic, then, feel free to ignore my comment below -- I'm posting in case third parties want to respond.
The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion,
Perhaps it's phrased poorly. There have certainly been plenty of dictators who often meant well and who often, on balance, did more good than harm for their country -- but such dictators are rare exceptions, and even these well-meaning, useful dictators may not have been "truly" benevolent in the sense that they presided over hideous atrocities. Obviously a certain amount of illiberal behavior is implicit in what it means to be a dictator -- to argue that FDR was non-benevolent because he served four terms or managed the economy with a heavy hand would indeed involve a "no true Scotsman" fallacy. But a well-intentioned, useful, illiberal ruler may nevertheless be surprisingly bloody, and this is a warning that should be widely and frequently promulgated, because it is true and important and people tend to forget it.
...BTW, just from the 20
I simply deny the assertion that dictators who wanted good results and got them were rare exceptions. Citation needed.
Admittedly, dictators have frequently presided over atrocities, unlike democratic rulers who have never presided over atrocities such as slavery, genocide, or more recently, say the Iraq war, Vietnam, or in an ongoing sense, the drug war or factory farming.
Human life is bloody. Power pushes the perceived responsibility for that brute fact onto the powerful. People are often scum, but avoiding power doesn't actually remove their responsibility. Practically every American can save lives for amounts of money which are fairly minor to them. What are the relevant differences between them and French aristocrats who could have done the same? I see one difference: the French aristocrats lived in a Malthusian world where they couldn't really have impacted total global suffering with the local efforts available.
How is G.W. Bush more corrupt than the people who elected him? He seems to care more for the third-world poor than they do, and not obviously less for the rule of law or the welfare of the US.
Playing fast and loose with geopolitical realities (Iraq is only slightly about oil, for instance), I'd like to conclude with the observation that even when you yourself, as a middle-class American, don't get your hands bloody as cheap oil etc. corrupts you, it is possible that you are saved from bloody hands by an elected representative whom you hired to do the job.
I simply deny the assertion that dictators who wanted good results and got them were rare exceptions. Citation needed.
The standards of evaluation of goodness should be specified in greater detail first. Otherwise it is quite difficult to tell whether e.g. Atatürk was really benevolent or not, even if we agree on the goodness of his individual actions.
Unless we first specify the criteria, the risk of widespread rationalisation in this discussion is high.
MichaelVassar:
I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim.
I was tempted to challenge it, but I decided that it's not worth opening such an emotionally charged can of worms.
The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion, a key point of faith in the American democratic religion and no more worthy of discussion than whether the Earth is old, at least for usual meanings of the word 'benevolent' and for meanings of 'dictator' which avoid the no true Scotsman fallacy. There have been benevolent democratically elected leaders in the usual sense too. How confident do you think you should be that the latter are more common than the former though? Why?
These are some good remarks and questions, but I'd say you're committing a fallacy when you contrast dictators with democratically elected leaders as if it were some sort of dichotomy, or even a typically occurring contrast. There have been many non-democratic political arrangements in human history other than dictatorships. Moreover, it's not at all clear that dictatorships and democracies should be viewed as disjoint phenomena. Unless we insist on a No-True-Scotsman definition of democracy, many dictatorships, including quite nasty ones, have been fundamentally democratic in the sense of basing their power on majority popular support.
rhollerith_dot_com:
IMHO probably the worst effect of Western civilization's current overoptimism about democracy will be to inhibit experiments in forms of non-democratic government that would not have been possible before information technology (including the internet) became broadly disseminated.
I beg to differ. The worst effect is that throughout recent history, democratic ideas have regularly been foisted upon peoples and places where the introduction of democratic politics was a perfect recipe for utter disaster. I won't even try to quantify the total amount of carnage, destruction, and misery caused this way, but it's certainly well above the scale of those political mass crimes and atrocities that serve as the usual benchmarks of awfulness nowadays. Of course, all this normally gets explained away with frantic no-true-Scotsman responses whenever unpleasant questions are raised along these lines.
For full disclosure, I should add that I care particularly strongly about this because I was personally affected by one historical disaster that was brought about this way, namely the events in former Yugoslavia. Regardless of what one thinks about who bears what part of the blame for what happened there, one thing that's absolutely impossible to deny is that all the key players enjoyed democratic support confirmed by free elections.
Seconded. I live in Russia, and if you compare the well-being of citizens in Putin's epoch against Yeltsin's, Putin wins so thoroughly that it's not even funny.
Voted up for precision.
I see decentralization of power as less relevant than regime stability as an enabler of non-violence. Kings in long-standing monarchies, philosophical or not, need use little violence. New dictators (classically called tyrants) need use much violence. In addition, they have the advantage of having been selected for ability and the disadvantage of having been poorly educated for their position.
Of course, power ALWAYS scales up the impact of your actions. Let's say that I'm significantly more careful than average. In that case, my worst actions include doing things that have a .1% chance of killing someone every decade. Scale that up by ten million and it's roughly equivalent to killing ten thousand people once during a decade-long reign over a mid-sized country. I'd call that much better than Lincoln (who declared martial law and was an elected dictator if Hitler was one) or FDR, but MUCH worse than Deng. OTOH, Lincoln and FDR lived in an anarchy, the international community, and I don't. I couldn't be as careful/scrupulous as I am if I lived in an anarchy.
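The scaling arithmetic in that comment can be checked in a couple of lines. Both numbers are the commenter's hypotheticals, not measured quantities:

```python
# The commenter's back-of-envelope: a careful person's worst-case
# 0.1%-per-decade risk of causing a death, scaled to a mid-sized country.
p_kill_per_decade = 0.001    # hypothetical individual risk
population = 10_000_000      # hypothetical country size
expected_deaths = p_kill_per_decade * population
print(expected_deaths)       # 10000.0
```

This is just an expected-value calculation; it says nothing about the variance, which for a single ruler's decisions is enormous.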
If knowing the truth makes me a bigot, then I want to be a bigot. If my values are based on not knowing certain facts, or getting certain facts incorrect, then I want my values to change.
It may help to taboo "bigot" for a minute. You seem to be lumping a number of things under a label and calling them bad.
There's the question of how we treat people who are less intelligent (regardless of group membership). I'm fine with discriminating in some ways based on intelligence of the individual, and if it does turn out that Group X is statistically less intelligent, then maybe Group X should be underrepresented in important positions. This has consequences for policy decisions. Of course, there may be a way of increasing the intelligence of Group X:
Based on all the evidence I have, I’ve made a conscious decision to avoid seeking out information on sex differences in intelligence and other, similar kinds of research.
How are you going to help a disadvantaged group if you're blinding yourself to the details of how they're disadvantaged?
WrongBot:
But I should not make decisions about individual members of Group X based on the statistical trend associated with Group X [...]
Really? I don't think it's possible to function in any realistic human society without constantly making decisions about individuals based on the statistical trends associated with various groups to which they happen to belong (a.k.a. "statistical discrimination"). Acquiring perfectly detailed information about every individual you ever interact with is simply not possible given the basic constraints faced by humans.
Of course, certain forms of statistical discrimination are viewed as an immensely important moral issue nowadays, while others are seen simply as normal common sense. It's a fascinating question how and why exactly various forms of it happen (or fail) to acquire a deep moral dimension. But in any case, a blanket condemnation of all forms of statistical discrimination is an attitude incompatible with any realistic human way of life.
The well-documented discrimination against short men and ugly people, and the (more debatable) discrimination against the socially inept and those whose behaviour and learning style does not conform to the compliant workers that schools are largely structured to produce, are examples of discrimination that appear to receive less attention and concern.
I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
One specific and relatively common version of this is people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.
The rest of the post was good, but these claims seem far too anecdotal and availability heuristicky to justify blocking yourself out of an entire area of inquiry.
When well-meaning, intelligent people like yo...
In the comments here we see how LW is segmenting into "pro-truth" and "pro-equality" camps, just as it happened before with pro-PUA and anti-PUA, pro-status and anti-status, etc. I believe all these divisions are correlated and indicate a deeper underlying division within our community. Also I observe that discussions about topics that lie on the "dividing line" generate much more heat than light, and that people who participate in them tend to write their bottom lines in advance.
I'm generally reluctant to shut people up, but here's a suggestion: if you find yourself touching the "dividing line" topics in a post or comment, think twice whether it's really necessary. We may wish ourselves to be rational, but it seems we still lack the abstract machinery required to actually update our opinions when talking about these topics. Nothing is to be gained from discussing them until we have the more abstract stuff firmly in place.
My hypothesis is that this is a "realist"/"idealist" divide. Or, to put it another way, one camp is more concerned with being right and the other is more concerned with doing the right thing. ("Right" means two totally different things, here.)
Quality of my post aside (and it really wasn't very good), I think that's where the dividing line has been in the comments.
Similarly, I think most people who value PUA here value it because it works, and most people who oppose it do so on ethical or idealistic grounds. Ditto discussions of status.
The reason the arguments between these camps are so unfruitful, then, is that we're sort of arguing past each other. We're using different heuristics to evaluate desirability, and then we're surprised when we get different results; I'm as guilty of this as anyone.
Here is another example of the way that pragmatism and idealism interact for me, from the world of pickup:
I was brought up with the value of gender equality, and with a proscription against dominating women or being a "jerk."
When I got into pickup and seduction, I encountered the theory that certain masculine behaviors, including social dominance, are a factor in female attraction to men. This theory matched my observation of many women's behavior.
While I was uncomfortable with the notion of displaying stereotypically masculine behavior (e.g. "hegemonic masculinity" from feminist theory) and acting in a dominant manner towards women, I decided to give it a try. I found that it worked. Yet I still didn't like certain types of masculine and dominance displays, and the type of interactions they created with women (even while "working" in terms of attraction and not being obviously unethical), so I started experimenting and practicing styles less reliant on dominance.
I found that there were ways of attracting women that worked quite well, and didn't depend on dominance and a narrow version of masculinity. It just took a bit of practice and creativ...
I strongly agree with this. Count me in the camp of believing true things in literally all situations, as I think that the human brain is too biased for any other approach to result, in expectation, in doing the right thing, but also in the camp of not necessarily sharing truths that might be expected to be harmful.
Anti-PC? Good name, I will use it.
I know my rationality isn't that fragile and I doubt yours is either.
What troubles me is this: your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I'm pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I'd be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different "positions". There is only one truth.
We have had some big successes already. (For example, most people here know better than be confused by talk of "free will".) I don't think the anti-PC issue can be resolved by the drawn-out positional war we're waging, because it isn't actually making anyone change their opinions. It's just a barrage of rationalizations from all sides. We need more insight. We need a breakthrough, or maybe several, that would point out the obviously correct way to think about anti-PC issues.
Anti-PC? Good name
I don't think using this name is a good idea. It has strong political connotations. And while I'm sure many here aren't aware of them or are willing to ignore them, I fear this may not be true:
There's no social coprocessor; we evolved a giant cerebral cortex to do social processing, but some people refuse to use it for that because they can't use it in its native mode while also emulating a general intelligence on the same hardware.
If you're an altruist (on the 'idealist' side of WrongBot's distinction), you'd probably consider making women you know happier to be the biggest advantage.
A thousand times no. Really, this is a bad idea.
Yeah, some people don't value truth at any cost. And there's some sense to that. When you take a little bit of knowledge and it makes you a bad person, or an unhappy person, I can understand the argument that you'd have been better off without that knowledge.
But most of the time, I believe, if you keep thinking and learning, you'll come round right. (I.e.: when a teenager reads Ayn Rand and thinks that gives him license to be an asshole, his problem is not that he reads too much philosophy.)
You seem to be particularly worried about accidentally becoming a bigot. (I don't think most of us are in any danger of accidentally becoming supreme dictators.) I think you are safe. Think of it this way: you don't want to be a bigot. You don't want your future self to be a bigot either. So don't behave like one. No matter what you read. Commit your future self to not being an asshole.
I think fear of brainwashing is generally silly.* You will not become a Mormon from reading the Book of Mormon. You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital. You will not become a racist from reading Steve S...
But most of the time, I believe, if you keep thinking and learning, you'll come round right. (I.e.: when a teenager reads Ayn Rand and thinks that gives him license to be an asshole, his problem is not that he reads too much philosophy.)
"A little learning is a dang'rous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again."
-- Pope
(Poetry still sucks, though. I'm not yet changing my mind about that.)
... must ... resist ... impulse ... to ... downvote ... different ... tastes ...
You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital.
I became a Trotskyite (once upon a time) partly based on reading Trotsky's history of the Russian Revolution. Yes, I was primed for it, but... words aren't mere.
The fact that you have a core value, important enough to you that you'd deliberately keep yourself ignorant to preserve that value, is evidence that the value is important enough to you that it can withstand the addition of information. Your fear is a good sign that you have nothing to fear.
For real. I have been in those shoes. Regarding this subject, and others. You shouldn't be worried.
Statistical facts like the ones you cited are not prescriptive. You don't have to treat anyone badly because of IQ. IQ does not equal worth. You don't use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.
In the past I have largely agreed with the sentiment that truth and information are mostly good, and when they create problems the solution is even more truth.
But on the basis of an interest in knowing more, I sometimes try to seek evidence that supports things I think are false or that I don't want to be true. Also, I try to notice when something I agree with is asserted without good evidential support. And I don't think you supported your conclusions there with real evidence.
You don't have to treat anyone badly because of IQ. IQ does not equal worth. You don't use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.
This reads more to me like prescriptive signaling than like evidence. While it is very likely to be the case that "IQ test results" are not the same as "human worth", it doesn't follow that an arbitrary person would not change their behavior towards someone who is "measurably not very smart" in any way that dumb person might not like. And for some specific people (like WrongBot by the admission of his or her own fears...
With bigotry, I think the real problem is confirmation bias. If I believe, for example, that orange-eyed people have an average IQ of only 99, and that's true, then when I talk to orange-eyed people, that belief will prime me to notice more of their faults. This would cause me to systematically underestimate the intelligence of orange-eyed people I met, probably by much more than 1 IQ point. This is especially likely because I get to observe eye color from a distance, before I have any real evidence to go on.
In fact, for the priming effect, in most people the magnitude of the real statistical correlation doesn't matter at all. Hence the resistance to acknowledging even tiny, well-proven differences between races and genders: they produce differences in perception that are not necessarily on the same order of magnitude as the differences in reality.
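A toy simulation makes the priming argument concrete: if an observer docks a few points from every member of a group on sight, the perceived gap swamps a tiny real one. Every number here is invented purely for illustration:

```python
import random

random.seed(0)
TRUE_GAP = 1.0       # hypothetical real difference in group means
PRIMING_BIAS = 5.0   # hypothetical penalty a primed observer applies on sight

group_a = [random.gauss(100, 15) for _ in range(10_000)]
group_b = [random.gauss(100 - TRUE_GAP, 15) for _ in range(10_000)]

# The primed observer subtracts a few points from every member of
# group B before gathering any individual evidence.
perceived_b = [iq - PRIMING_BIAS for iq in group_b]

mean = lambda xs: sum(xs) / len(xs)
real_gap = mean(group_a) - mean(group_b)
perceived_gap = mean(group_a) - mean(perceived_b)
print(real_gap, perceived_gap)  # the perceived gap dwarfs the real one
```

The point survives any choice of bias size: the perceived gap is the real gap plus the bias, so whenever the bias exceeds the real difference, perception is dominated by the prior rather than the evidence.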
This is exactly the crux of the argument. When people say that everyone should be taught that people are the same regardless of gender or race, what they really mean isn't that there are no average differences between women and men, etc. They mean that teaching people about those small differences will cause enough of them to significantly overshoot via confirmation bias that it will lead to more misjudgments of individuals overall than if people weren't taught about the differences at all; hence, people shouldn't be taught about them. I am hesitantly sympathetic to this view; it is borne out in many of the everyday interactions I observe, including those involving highly intelligent aspiring rationalists.
This doesn't mean we should stop researching gender or race differences, but that we should simultaneously research the effects of people learning about this research: how big are the differences in the perception vs. the reality of those differences? Are they big enough that anyone being taught about gender and race differences should also be taught about the risk of systematically misjudging many individuals because of their knowledge, and warned to rem...
Bryan Caplan argues against the "corrupted by power" idea with an alternative view: they were corrupt from the start, which is why they were willing to go to such extremes to attain power.
Around the time I stopped believing in God and objective morality I came around to Stirner's view: such values are "geists" haunting the mind, often distracting us from factual truths. Just as I stopped reading fiction for reasons of epistemic hygiene, I decided that chucking morality would serve a similar purpose. I certainly wouldn't trust myself to selectively filter any factual information. How can the uninformed know what to be uninformed about?
I've observed that quite a bit of the disagreement with the substance of my post is due to people believing that the level of distrust for one's own brain that I advocate is excessive. (See this comment by SarahC, for example.)
It occurs to me that I should explain exactly why I do not trust my own brain.
In the past week I have noted the following instances in which my brain has malfunctioned; each of them is a class of malfunction I had never previously observed in myself:
(It may be relevant to note that I have AS.)
I needed to open a box of plastic wrap, of the sort with a roll inside a box, a flap that lifts up, and a sharp edge under the flap. The front of the box was designed such that there were two sections separated by some perforation; there's a little set of instructions on the box that tells you to tear one of those sections off, thus giving you a functional box of plastic wrap. I spent approximately five minutes trying to tear the wrong section off, mangling the box and cutting my finger twice in the process. This was an astonishing failure to solve a basic physical task.
I was making bread dough, a process which necessitates measuring out 4.5 cups of flour into a bowl
One specific and relatively common version of this is people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman.
Your evidence is not quite about beliefs. I think the correct version is:
People who don't mind sharing that they believe that women have a lower... etc.
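For what it's worth, the purely statistical content of the standard-deviation claim is easy to sketch: with equal means, even a modest SD gap changes far-tail representation noticeably while leaving the bulk of the distributions nearly identical. The numbers below are illustrative placeholders, not endorsed empirical values:

```python
import math

def frac_above(threshold, mean, sd):
    """Fraction of a normal(mean, sd) population above threshold."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Purely illustrative: equal means of 100, SDs differing by one point.
men   = frac_above(145, mean=100, sd=15)
women = frac_above(145, mean=100, sd=14)
print(men, women, men / women)  # a small SD gap roughly doubles far-tail representation
```

Note this cuts both ways: a lower SD also implies fewer people in the low tail, a fact the belief's holders rarely seem to dwell on.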
I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
This is something I haven't observed, but it's seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don't they get the sort of publicity that studies which show differences get?
Speaking of AIs getting out of the box, it's conceivable to me that an AI could talk its way out. It's a lot less plausible that an AI could get it right the first time.
And here's a thought which may or may not be dangerous, but which spooked the hell out of me when I first realized it.
Different groups h...
Different groups have different emotional tones... (nicer, more honest, more fun, more dignified, etc.).
Downvotes have caused me to put a lot of effort into changing the tone of my communications on Less Wrong so that they are no longer significantly less agreeable (nice) than the group average.
In the early 1990s the newsgroups about computers and other technical subjects were similar to Less Wrong: mostly male, mean IQ above 130, vastly denser in libertarians than the population of any country, the best place online for people already high in rationality to improve their rationality.
Aside from differences in the "shape" of the conversation caused by differences in the "mediating" software used to implement the conversation, the biggest difference between the technical newsgroups of the early 1990s and Less Wrong is that the tone of Less Wrong is much more agreeable.
For example, there was much less evidence IIRC of a desire to spare someone's feelings on the technical newsgroups of the early 1990s, and flames (impassioned harangues of a length almost never seen in comments here and of a level of vitriol very rare here) were very common -- but then again the mediating software probably pulled for deep nesting of replies more than Less Wrong's software does, and most of those flames occurred in very deeply nested flamewars with only 2 or 3 participants.
I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
This is something I haven't observed, but it's seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don't they get the sort of publicity that studies which show differences get?
I would also be interested in hearing if there are any studies on this subject. For me, much of WrongBot's argument hangs on how accurate these observations are. I'm still not sure I'd agree with the overall point, but more evidence on this point would make me much more inclined to consider it.
Also, Wrong...
I think this is a worthwhile discussion.
Here are some "true things" I don't want to know about:
I'm surprised about the last one. I think it would be quite helpful if you could be prepared for that.
The other two are experiences you wouldn't like to have. If you had the indexical knowledge of what the catchiest jingle was, you could better avoid hearing it.
I have to admit there's information I shield myself from as well.
I don't like watching real people die on video. I worry about getting desensitized/dehumanized.
I don't want to see 2g1c either. (by extension, most of the grungier parts of the intertubes.)
I don't want to know (from experience) what heroin feels like.
I do know people who believe in total desensitization -- they think that the reflex to shudder or gag is something you have to burn out of yourself. I don't think I want that for myself, though.
Here's something that might work as an alternative example that doesn't imply as much bigotry on anybody's part: a PNAS study from earlier this year found that, over the course of a school year, schoolgirls with more maths-anxious female maths teachers appeared to develop more stereotyped views of gender and maths achievement, and did less well in their maths classes.
Let's suppose the results of that study were replicated and extended. Would a female maths teacher be justified in refusing to think about the debate over sex and IQ/maths achievement, on the grounds that doing so is likely to generate maths anxiety and so indirectly harm her female students' maths competence?
[Edited so the hyperlink isn't so long & ugly.]
I really disagree with your argument, Wrongbot. First of all, I think responding appropriately to "dangerous" information is an important task, and one which most LW folks can achieve.
In addition, I wonder if your personal observations about people who become bigots by reading "dangerous content" are actually accurate. People who are already bigots (or are predisposed to bigotry) are probably more likely to seek out data that "confirms" their assumptions. So your anecdotal observation may be produced by a selection effect.
At bare minimum, you should give us some information about the sample your observations are based on. For example you say:
One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.
This could mean you've met a couple people like this, and never met anyone else who has encountered this dat...
This seems to be bordering on Dark Side epistemology - and doesn't seem very well aligned with the name of this site.
Another argument against digging into some of the red-flag issues is that you might acquire unpopular opinions, and if you're bad at hiding them, you might suffer negative social consequences.
I agree with the overall point: certain thoughts can make you worse off.
Whether it's difficult to judge which information is dangerous, and whether given heuristics for judging that will turn into an anti-epistemic disaster, is about solving the problem, not about the existence of the problem. In fact, a convincing argument for using a flawed knowledge-avoiding heuristic would itself be the kind of knowledge one should avoid being exposed to.
If we have an apparently unsolvable problem, with most hypothetical attempts at solution leading to disaster, we shoul...
This advice bothers me a lot. Labeling possibly true knowledge as dangerous knowledge (as in the example of statements about the average behavior of groups) is deeply worrisome, and is the sort of thing that, if one isn't careful, would be used by people to justify ignoring relevant data about reality. I'm also concerned that this piece conflates actual knowledge (as in empirical data) with things like group identity, which seems to be not so much knowledge as a value association.
I am grouping together "everything that goes into your brain," which includes lots and lots of stuff, most of it unconscious. See research on priming, for example.
This argument is explicitly about encouraging people to justify ignoring relevant data about reality. It is, I recognize, an extremely dangerous proposition, of exactly the sort I am warning against!
At risk of making a fully general counterargument, I think it's telling that a number of commenters, yourself included, have all but said that this post is too dangerous.
These are not just people dismissing this as a bad idea (which would have encouraged me to do the same); these are people worrying about a dangerous idea. I'm more convinced I'm right than I was when I wrote the post.
One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.
I'm skeptical of the notion that people actually lower their intelligence estimates of women they meet as a result of this, as opposed to using it as an excuse to reinforce a preexisting inclination to estimate women's intelligence lower than men's.
I agree with the main point of this post, but I think it could have used a more thorough, worked out example. Identity politics is probably the best example of your point, but you barely go into it. Don't worry about redundancy too much; not everyone has read the original posts.
FWIW, my personal experience with politics is an anecdote in your favor.
One specific and relatively common version of this are people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman.
I don't think that this requires a utility-function-changing superbias. Alternatively: We think sloppily about groups, flattening fine distinctions into blanket generalizations. This bias takes the fact ...
Actually, I think that if differences in group (sex, race, ethnicity, class, caste) IQ means and distributions proved to be of genetic origin, this would be a net gain in utility, since it would increase public acceptance of genetic engineering and of spending on gene-based therapies.
BTW, we already know that the differences are real in the sense that they are measured, and we have tried our very best to eliminate, say, cultural bias; since proving that the tests aren't culturally biased is impossible, it's misleading to talk of "if differences proved to be real"...
WrongBot: Brendan Nyhan, the Robert Wood Johnson scholar in health policy research at the University of Michigan, spoke today on public radio's "Talk of the Nation" about a bias that may be reassuring to you. He calls it the "backfire effect": new research suggests that misinformed people rarely change their minds when presented with the facts, and often become even more attached to their beliefs. The Boston Globe reviews the findings here as they pertain to politics. If this is correct, it seems quite likely that if you hav...
Certain patterns of input may be dangerous, but knowledge isn't a pattern of input; it can be formatted in a myriad of ways, and it's not generally that hard to find a safe one. There's a picture of a French fry that crashes AOL Instant Messenger, but that doesn't mean the French fry is the problem. It's just the way it's encoded.
I'm working on something on the subject of dangerous and predatory memes. And oh yes, predatory memes exist.
Please read this thread. When anyone talks about this sort of thing, the first reaction is "It can't happen to me, I'm far too smart for that". When it is pointed out how many people who fell for such things thought precisely that, the next reaction is a longer and more elaborate version of "It can't happen to me, I'm far too smart for that".
I'm thinking the very hardest bit is going to be getting across to people that it can happ...
There has not yet been a truly benevolent dictator and it would be delusional at best to believe that you will be the first.
This is true approximately to the extent that there has never been a truly benevolent person. Power anti-corrupts.
I believe what he's saying is that with power, people show their true colors. Consciously or not, nice people may have been nice because it benefited them to be. The penalties for not being nice when they had less power were, in a sense, a "corruption" of their behavior. With the power they gained, those penalties no longer mattered enough compared to the benefits.
This post is seeing some pretty heavy downvoting, but the opinions I'm seeing in the comments so far seem to be more mixed; I suppose this isn't unusual.
I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was...
I've just identified something else that was nagging at me about this post: the irony of the author of this post making an argument that closely parallels an argument some thoughtful conservatives make against condoning alternative lifestyles like polyamory.
The essence of that argument is that humans are not sufficiently intelligent, rational or self-controlled to deal with the freedom to pursue their own happiness without the structure and limits imposed by evolved cultural and social norms that keep their baser instincts in check. That cultural norms exist for a reason (a kind of cultural selection for societies with norms that give them a competitive advantage) and that it is dangerous to mess with traditional norms when we don't fully understand why they exist.
I don't really subscribe to the conservative argument (though I have more sympathy for it than the argument made in this post) but it takes a similar form to this argument when it suggests that some things are too dangerous for mere humans to meddle with.
I think it would've been better received if some attention had been given to defense mechanisms; i.e., rather than phrasing it as some true things being unconditionally bad to know, phrase it as some true things being bad to know unless you have the appropriate prerequisites in place. For example, knowing about differences between races is bad unless you are very good at avoiding confirmation bias, and knowing how to detect errors in reasoning is bad unless you are very good at avoiding motivated cognition.
Your examples of "identity politics" and "power corrupts" don't seem to illustrate "dangerous knowledge". They are more like dangerous decisions. Am I missing the point?
Come to think of it, a related argument was made, poetically, in Watchmen: Dr. Manhattan knew everything, and it clearly changed his utility function (he became less human); he also mentioned appreciating not knowing the future when Adrian blocked it with tachyons. Poetry, but something to think about.
This is completely wrong. You might as well tell a baby to avoid learning language, since this will change its utility function: it will begin to have an adult's utility function instead of a baby's.
Not to evoke a recursive nightmare, but some utility function alterations appear to be strictly desirable.
As an obvious example, if I were on a diet and I could rewrite my utility function such that the utilities assigned to consuming spinach and cheesecake were swapped, I see no harm in making that edit. One could argue that my second-order utility (and all higher) function should be collapsed into my first-order one, such that this would not really change my meta-utility function, but this issue just highlights the futility of trying to cram my complex, ...
If you're being held back by worries about your values changing, you can always try cultivating a general habit of reverting to values held by earlier selves when doing so is relatively easy. I call it "reactionary self-help".
(I am making a distinction here between the parts of your brain that you have access to and can introspect about, which for lack of better terms I call “you” or “your consciousness”, and the vast majority of your brain, to which you have no such access or awareness, which I call “your brain.” This is an emotional manipulation, which you are now explicitly aware of. Does that negate its effect? Can it?)
You seem to think you know what the effect is. My immediate thought on reading "it will decide the output, not you" was "oh dear, dualism a...
By the way, some people took a similar position to yours in
What Is Your Dangerous Idea?: Today's Leading Thinkers on the Unthinkable
Identity Politics: Agree- good point.
Power Corrupts: Irrelevant to those LWers who realistically will never gain large amounts of power and status. For those who do, it is a matter of the dangers of increasing control, not of avoiding dangerous thoughts.
On the comment about opening the door to bigotry: Even if bigotry has bad effects, given the limited amount of harm an individual can do and appropriate conscious suppression of effects, isn't it worth it to prevent self-delusion?
If you're going to intentionally choose false beliefs, you should at least be careful to also install an aversion to using these beliefs to decide other questions you care about such as which intellectual institutions to trust, and an aversion to passing these beliefs on to other people. It's one thing to nuke your brain and quite another to fail to encase it in lead afterward.
Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it will decide the output, not you.
This "it" may, or even should, refer to the idea itself. The same idea, the same meme, put into healthy rational brains anywhere, will yield the same decision! For the brain is just a rational machine, always doing the best possible thing.
It is the input that decides the output. The machine has no other (irrational) choice than to process the input the best way it can, and then to spit out the output.
It is not my cal...
A few examples (in approximately increasing order of controversy):
If you proceed anyway...