Some Thoughts Are Too Dangerous For Brains to Think

[EDIT - While I still support the general premise argued for in this post, the examples provided were fairly terrible. I won't delete this post because the comments contain some interesting and valuable discussions, but please bear in mind that this is not even close to the most convincing argument for my point.]
A great deal of the theory involved in improving computer and network security involves the definition and creation of "trusted systems", pieces of hardware or software that can be relied upon because the input they receive is entirely under the control of the user. (In some cases, this may instead be the system administrator, manufacturer, programmer, or any other single entity with an interest in the system.) The only way to protect a system from being compromised by untrusted input is to ensure that no possible input can cause harm, which requires either a robust filtering system or strict limits on what kinds of input are accepted: a blacklist or a whitelist, roughly.
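To make the blacklist/whitelist distinction concrete, here is a minimal sketch in Python. (This is my own illustration, not anything from the post itself; the names and patterns are hypothetical.)

    # Illustrative sketch only: two ways to guard a system against
    # untrusted input. All names and patterns here are hypothetical.
    import re

    # Blacklist: enumerate known-bad input; everything else passes.
    BLACKLIST = re.compile(r"[;&|`$]")

    # Whitelist: enumerate known-good input; everything else is rejected.
    WHITELIST = re.compile(r"[a-z0-9_-]{1,32}")

    def blacklist_ok(user_input: str) -> bool:
        # Fails open: any attack the author didn't anticipate gets through.
        return BLACKLIST.search(user_input) is None

    def whitelist_ok(user_input: str) -> bool:
        # Fails closed: only input matching the expected shape gets through.
        return WHITELIST.fullmatch(user_input) is not None

    for s in ["alice", "alice; rm -rf /", "alice\n$(evil)"]:
        print(repr(s), blacklist_ok(s), whitelist_ok(s))

The asymmetry is the point: the blacklist fails open on anything its author didn't anticipate, while the whitelist fails closed, which is why strict limits on accepted input make for the stronger guarantee.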
One of the downsides of having a brain designed by a blind idiot is that said idiot hasn’t done a terribly good job with limiting input or anything resembling “robust filtering”. Hence that whole bias thing. A consequence of this is that your brain is not a trusted system, which itself has consequences that go much, much deeper than a bunch of misapplied heuristics. (And those are bad enough on their own!)
In discussions of the AI-Box Experiment I’ve seen, there has been plenty of outrage, dismay, and incredulity directed towards the underlying claim: that a sufficiently intelligent being can hack a human via a text-only channel. But whether or not this is the case (and it seems likely), the vulnerability is trivial in the face of a machine that is completely integrated with your consciousness and can manipulate it, at will, towards its own ends and without your awareness.
Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it will decide the output, not you. We have been warned, here on Less Wrong, that there is dangerous knowledge; Eliezer has told us that knowing about biases can cause us harm. Nick Bostrom has written a paper describing dozens of ways in which information can hurt us, but he missed (at least) one.
The acquisition of some thoughts, discoveries, and pieces of evidence can lower our expected outcomes, even when they are true. This can be accounted for; we can debias. But some thoughts and discoveries and pieces of evidence can be used by our underhanded, untrustworthy brains to change our utility functions, a fate that is undesirable for the same reason that being forced to take a murder pill is undesirable.
(I am making a distinction here between the parts of your brain that you have access to and can introspect about, which for lack of better terms I call “you” or “your consciousness”, and the vast majority of your brain, to which you have no such access or awareness, which I call “your brain.” This is an emotional manipulation, which you are now explicitly aware of. Does that negate its effect? Can it?)

A few examples (in approximately increasing order of controversy):

Identity Politics: Paul Graham and Kaj Sotala have covered this ground, so I will not rehash their arguments. I will only add that, in the absence of a stronger aspect of your identity, truly identifying as something new is an irreversible operation. It might be overwritten again in time, but your brain will not permit an undo.
Power Corrupts: History is littered with examples of idealists seizing power only to find themselves betraying the values they once held dear. No human who values anything more than power itself should seek it; your brain will betray you. There has not yet been a truly benevolent dictator and it would be delusional at best to believe that you will be the first. You are not a mutant. (EDIT: Michael Vassar has pointed out that there have been benevolent dictators by any reasonable definition of the word.)
Opening the Door to Bigotry: I place a high value on not discriminating against sentient beings on the basis of artifacts of the birth lottery. I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.
One specific and relatively common version of this is people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them. Based on all the evidence I have, I’ve made a conscious decision to avoid seeking out information on sex differences in intelligence and other, similar kinds of research. I might be able to resist my brain’s attempts to change what I value, but I’m not willing to take that risk; not yet, not with the brain I have right now.
If you know of other ways in which a person’s brain might stealthily alter their utility function, please describe them in the comments.

If you proceed anyway...

If the big red button labelled “DO NOT TOUCH!” is still irresistible, if your desire to know demands you endure any danger and accept any consequences, then you should still think really, really hard before continuing. But I’m quite confident that a sizable chunk of the Less Wrong crowd will not be deterred, and so I have a final few pieces of advice.
  • Identify knowledge that may be dangerous. Forewarned is forearmed.
  • Try to cut dangerous knowledge out of your decision network. Don’t let it influence other beliefs or your actions without your conscious awareness. You can’t succeed completely at this, but it might help.
  • Deliberately lower dangerous priors, by acknowledging the possibility that your brain is contaminating your reasoning and then overcompensating, because you know that you’re still too overconfident.
  • Spend a disproportionate amount of time seeking contradictory evidence. If believing something could have a great cost to your values, make a commensurately great effort to be right.
  • Just don’t do it. It’s not worth it. And if I found out, I’d have to figure out where you live, track you down, and kill you.
Just kidding! That would be impossibly ridiculous.
318 comments

I upvoted this post because it's a fascinating topic. But I think a trip down memory lane might be in order. This 'dangerous knowledge' idea isn't new, and examples of what was once considered dangerous knowledge should leap into the minds of anybody familiar with the Coles Notes of the history of science and philosophy (Galileo anyone?). Most dangerous knowledge seems to turn out not to be (kids know about contraception, and lo, the sky has not fallen).

I share your distrust of the compromised hardware we run on, and blindly collecting facts is a bad idea. But I'm not so sure introducing a big intentional meta-bias is a great idea. If I get myopia, my vision is not improved by tearing my eyes out.

On reflection, I think I have an obligation to stick my neck out and address some issue of potential dangerous knowledge that really matters, rather than the triviality (to us anyway) of heliocentrism.

Suppose (worst case) that race IQ differences are real, and not explained by the Flynn effect or anything like that. I think it's beyond dispute that that would be a big boost for the racists (at least short-term), but would it be an insuperable obstacle for those of us who think ontological differences don't translate smoothly into differences in ethical worth?

The question of sex makes me fairly optimistic. Men and women are definitely distinct psychologically. And yet, as this fact has become more and more clear, I do not think sexual equality has declined. Probably the opposite - a softening of attitudes on all sides. So maybe people would actually come to grips with race IQ differences, assuming they exist.

More importantly, withholding that knowledge could be much more disastrous.

(1) If the knowledge does come out, the racists get to yell "I told you so," "Conspiracy of silence" etc. Then the IQ difference gets magnified 1000x in the public imagination.

(2) If t...

WrongBot
I'm inclined to agree with you. I certainly don't think that avoiding dangerous knowledge is a good group strategy, due to (at least) difficulties with enforcement and unintended side-effects of the sort you've described here. While the scientific consensus has become more clear, I'm not sure that it's reflected in popular or even intellectual opinion. Note the continuing popularity of Judith Butler in non-science academic circles, for example. Or the media's general tendency to discuss sex differences entirely outside of any scientific context. This may not be the best example.
simplicio
Perhaps not for society at large, but what about empirically-based intellectuals themselves? Do you think knowledge of innate sex differences leads to more or less sexism among them? I think it leads to less, although my evidence is wholly anecdotal.

There is another problem with avoiding dangerous knowledge. Remember the dragon in the garage? In order to make excuses ahead of time for missing evidence, the dragon proponent needs to have an accurate representation of reality somewhere in their heart-of-hearts. This leads to cognitive dissonance.

Return to the race/IQ example. Would you rather
  • know group X has a 10-point lower average IQ than group Y, and just deal with it by trying your best to correct for confirmation bias etc., OR
  • intentionally keep yourself ignorant, while feeling deep down that something is not right?

I suspect the second option is worse for your behaviour towards group X. It would still be difficult for a human to do, but I'd personally rather swallow the hard pill of a 10-point average IQ difference and consciously correct for my brain's crappy heuristics, than feel queasy around group X in perpetuity because I know I'm lying to myself about them.
[anonymous]
I think we are seeing that among the for now (fortunately) small group of relatively intelligent nonconformist people who change their opinion on this subject once they look at the data. It biases them towards unduly sympathetic judgements of everyone else who happens to hold the same opinion, or eventually leaks nonconformist, driven, principled (as in truth-seeking even when it costs them status) intelligent people to otherwise unworthy causes. This may prove to be dangerous in the long term. One can't overestimate the propaganda value of calling a well-intentioned lie out as a lie and then proving that it actually is, you know, a lie. Our biases make us very vulnerable to becoming overly suspicious of someone who has been shown to be a liar. This is doubly true of our tendency to question their motives.
MichaelVassar
Possibly, but faith in the truth winning out also looks like faith to me. Also, publicly at least people have to pick their battles.

I flat-out disagree that power corrupts as the phrase is usually understood, but that's a topic worthy of rational discussion (just not now with me).

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion, a key point of faith in the American democratic religion and no more worthy of discussion than whether the Earth is old, at least for usual meanings of the word 'benevolent' and for meanings of 'dictator' which avoid the no true Scotsman fallacy. There have been benevolent democratically elected leaders in the usual sense too. How confident do you think you should be that the latter are more common than the former though? Why?

I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim. How many people would have jumped in against the claim that without belief in god there can be no morality or public order, that the moral behavior of secular people is just a habit or hold-over from Christian times, and that thus that all secular societies are doomed? To me it's about equally credible.

BTW, just from the 20th century there are people from Ataturk to FDR to Lee Kuan Yew to Deng Xiaoping. More generally, more or less The Entire History of the World, especially East Asia, offers counter-examples.

that's a topic worthy of rational discussion (just not now with me).

If this is a plea to be let alone on the topic, then, feel free to ignore my comment below -- I'm posting in case third parties want to respond.

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion,

Perhaps it's phrased poorly. There have certainly been plenty of dictators who often meant well and who often, on balance, did more good than harm for their country -- but such dictators are rare exceptions, and even these well-meaning, useful dictators may not have been "truly" benevolent in the sense that they presided over hideous atrocities. Obviously a certain amount of illiberal behavior is implicit in what it means to be a dictator -- to argue that FDR was non-benevolent because he served four terms or managed the economy with a heavy hand would indeed involve a "no true Scotsman" fallacy. But a well-intentioned, useful, illiberal ruler may nevertheless be surprisingly bloody, and this is a warning that should be widely and frequently promulgated, because it is true and important and people tend to forget it.

BTW, just from the 20...

I simply deny the assertion that dictators who wanted good results and got them were rare exceptions. Citation needed.

Admittedly, dictators have frequently presided over atrocities, unlike democratic rulers who have never presided over atrocities such as slavery, genocide, or more recently, say the Iraq war, Vietnam, or in an ongoing sense, the drug war or factory farming.

Human life is bloody. Power pushes the perceived responsibility for that brute fact onto the powerful. People are often scum, but avoiding power doesn't actually remove their responsibility. Practically every American can save lives for amounts of money which are fairly minor to them. What are the relevant differences between them and French aristocrats who could have done the same? I see one difference: the French aristocrats lived in a Malthusian world where they couldn't really have impacted total global suffering with the local efforts available.

How is G.W. Bush more corrupt than the people who elected him? He seems to care more for the third world poor than they do, and not obviously less for the rule of law or the welfare of the US.

Playing fast and loose with geopolitical realities (Iraq is only slightly about oil, for instance), I'd like to conclude with the observation that even when you yourself, as a middle class American, don't get your hands bloody as cheap oil etc. corrupt you, it is possible that you are saved from bloody hands by an elected representative who you hired to do the job.

prase

I simply deny the assertion that dictators who wanted good results and got them were rare exceptions. Citation needed.

The standards of evaluation of goodness should be specified in greater detail first. Else it is quite difficult to tell whether e.g. Atatürk was really benevolent or not, even if we agree on the goodness of his individual actions. Some of the questions:

  • are the points scored by getting desired good results cancelled by the atrocities, and to what extent?
  • could a non-dictatorial regime do better (given the conditions in the specific country and historical period), and if not, can the dictator bear full responsibility for his deeds?
  • what amount of goodness makes a dictator benevolent?

Unless we first specify the criteria, the risk of widespread rationalisation in this discussion is high.

Blueberry
Upvoted for the umlaut!
prase
That was perhaps the cheapest upvote I ever got. Thanks. (Unfortunately Ceauşescu was anything but benevolent, else he would be mentioned and I could gather additional upvotes for the comma.)
Mass_Driver
It's hard to find proof of what most people consider obvious, unless it's part of the Canon of Great Moments in Science (tm) and the textbook industry can make a bundle off it. Tell you what -- if you like, I'll trade you a promise to look for the citation you want for a promise to look for primary science on anthropogenic global warming. I suspect we're making the climate warmer, but I don't know where to read a peer-reviewed article documenting the evidence that we are. I'll spend any reasonable amount of time that you do looking -- 5 minutes, 15 minutes, 90 minutes -- and if I can't find anything, I'll admit to being wrong.

Slavery, genocide, and factory farming are examples of imperfect democracy -- the definition of "citizen" simply isn't extended widely enough yet. Fortunately, people (slowly) tend to notice the inconsistency in times of relative peace and prosperity, and extend additional rights. Hence the order-of-magnitude decrease in the fraction of the global population that is enslaved, and, if you believe Steven Pinker, in the frequency of ethnic killings. As for factory farming, I sincerely hope the day when animals are treated as citizens when appropriate will come, and the quicker it comes the better I'll be pleased.

On the other hand, if you glorify dictatorship, or if you give dictatorship an opening to glorify itself, it tends to pretty effectively suppress talk about widening the circle of compassion. Better to have a hypocritical system of liberties than to let vice walk the streets without paying any tribute to virtue at all; such tributes can collect compound interest over the centuries.

The Vietnam war is generally recognized as a failure of democracy; the two most popular opponents of the war were assassinated, and the papers providing the policy rationale for the war were illegally hidden, ultimately causing the downfall of President Nixon. The drug war seems to be winding down as the high cost of prisons sinks in. The war on Iraq is prob
MichaelVassar
Good writing style! I don't think I glorify dictatorship, but I do think that terrible dictatorships, like Stalinist Russia, have sometimes spoken of widening circles of compassion. I do think you are glorifying democracy. Do you have examples of perfect democracy to contrast with imperfect democracy?

Slaves frequently aren't citizens, but on other occasions, such as in the immense and enslaving US prison system (with its huge rates of false conviction and of conviction for absurd crimes), or the military draft, they are. The reduction in slavery may be due to philosophical progress trickling down to the masses, or it may simply be that slavery has become less economically competitive as markets have matured.

Responsibility counts for something, but for far less among the powerful. As power increases, custom weakens, and situations become more unique, acts/omissions distinctions become less useful. As a result, rapid rises in power do frequently leave people without a moral compass, leading to terrible actions.

I appreciate your efforts to avoid indirectly causing harms. I didn't know about the other Michael Vassar. It's an uncommon name, so I'm surprised to hear it.
Mass_Driver
By which you mean, I suppose, that my skill as a rhetorician has exceeded my skill as a rationalist. Well, you may be right. Supposing you are, what do you suggest I do about it?

Well, yes, I am. Not our democracy, not any narrow technique for promoting democracy, but democracy as the broad principle that people should have a decisive say in the decisions that affect them strikes me as pretty awesome. I guess I might be claiming benefits for democracy in excess of what I have evidence to support, and that if I were an excellent rationalist, I would simply say, "I do not know what the effects of attempting democracy are." I am not an excellent rationalist. What I do is to look hard for the answers to important questions, and then, if after long searching I cannot find the answers and I have no hope of finding the answers, but the questions still seem important, I choose an answer that appeals to my intuition.

I spent the better part of my undergraduate years trying to understand what democracy is, what violence is, and whether the two have any systematic relation to each other. Scientifically speaking, my answer is that we do not know, and will not know, in all likelihood, for quite some time. Violence happens in places where researchers find it difficult or impossible to record it; death tolls are so biased by partisans of various stripes, by the credulity of an entertainment-based media, and by the fog of war that one can almost never tell which of two similarly-sized conflicts was more violent. Democracy is, at best, a correlation among several variables, each of which can only be specified with 2 or 3 bits of meaningful information, and each of which might have different effects on violence. Given the confusion, to scientifically state a relationship between democracy and violence would be ridiculous.

And, yet, I find that I very much want to know what the relationship is between democracy and violence. I can oppose all offensive wars designed to change anoth
MichaelVassar
Not at all. Rhetorical skill IS a good thing, and properly contributes to logic. Your argument seems rational to me, in the non-Spock sense that we generally encourage here. What to do? Keep on thinking AND caring! If the search you use is as fair and unbiased as you can make it, this looking hard for answers is the core of what being a good rationalist is. Possibly, you should look harder for the causes of systematic differences between people's intuitions, to see whether those causes are entangled with truth, but analysis has to stop at some point. In practice, rationalists may back themselves into permanent inaction due to uncertainty, but the theory of rationality we endorse here says we should be doing what you claim to be doing. I find it extremely disturbing that we aren't communicating this effectively, though it's clearly our fault since we aren't communicating it effectively enough to ourselves for it to motivate us to be more dynamic either.
MichaelVassar
When you say you glorify Democracy though, I think you mean something much closer to what I would call Coherent Extrapolated Volition than it is to what I would call Democracy. Something radically novel that hasn't ever been tried, or even specified in enough detail to call it a proposal without some charity.

As a factual matter, I would suggest that the systems of government that we call Democracies in the US may typically be a bit further in the CEV direction than those we typically call dictatorships, but if they are, it's a weak tendency, like the tendency of good painters to be good at basketball or something. You might detect it statistically, if you had properly operationalized it first, or vaguely suspect it's there based on intuitive perception, but you couldn't ever be very confident it was there.

It's obviously wrong to overturn cultural traditions which have been questioned but not refuted. Such traditions have some information value, if only for anthropic reasons, and more importantly, they are somewhat correlated with your values. In this particular case, if you limit your options under consideration to 'fight against invaders or do nothing' I have no objections. Real life situations usually present more options, but those weren't specified.

As an off-the-cuff example, I think it's obvious that a person who fought against the Nazis in WWII was doing something better than they would by staying home, even though the Nazis didn't invade the US and even valuing their lives moderately more highly than those of others. OTOH, the marginal expected impact of a soldier on the expected outcome of the war was surely SO MUCH less than the marginal expected impact of an independent person who put in serious effort to be an assassin, while the risk was probably not an order of magnitude smaller, so I think it's fair to say that they were still being irrational, judged as altruists, and were in most cases, well, only following orders. If they valued victory enough to b
Mass_Driver
Thanks! Wholeheartedly agree, btw.
Emile
I think you're referring to Michael Walzer.
Mass_Driver
Right! Thank you.
rela
I don't know if you're still looking for this, and if this would be an appropriate place to post links. But:

Primary evidence:
  • Temperatures increase over the last 2000 years as estimated by tree ring, marine/lake/cave proxy, ice isotope, glacier length/mass, and borehole data. Figures S-1, O-4, 2-3, 2-5, 5-3, 6-3, 7-1, 10-4, and 11-2 are probably the most useful to you. Surface Temperature Reconstructions for the Last 2,000 Years. Committee on Surface Temperature Reconstructions for the Last 2,000 Years, National Research Council. ISBN 0-309-66144-7, 160 pages (2006).
  • Anomalies in combined land-surface air and sea-surface water temperature increase significantly 1880-2009. Global-mean monthly, seasonal, and annual means, 1880-present, updated through most recent month. NASA Goddard. GISS Surface Temperature Analysis (http://data.giss.nasa.gov/gistemp/).

Other supporting evidence:
  • Earlier flowering times in the recent 25 years, with data taken over the past 250 years. Amano et al., A 250-year index of first flowering dates and its response to temperature changes (http://rspb.royalsocietypublishing.org/content/277/1693/2451.full). Proc. R. Soc. B, 22 August 2010, vol. 277, no. 1693, 2451-2457.

Contradicting evidence:
  • Extremes of monthly average temperatures in Central England do not appear to match either a "high extremes after 1780s/1850s only" or "low extremes before 1780s/1850s only" hypothesis. Manley, Central England temperatures: Monthly means 1659 to 1973 (http://onlinelibrary.wiley.com/doi/10.1002/qj.49710042511/abstract). Quarterly Journal of the Royal Meteorological Society, Volume 100, Issue 425, pages 389–405, July 1974.

Hope that's helpful.
kodos96
factory farming? huh?
Kevin
In America, we have grown jaded towards protests because they don't ever accomplish anything. But at their most powerful, protests become revolutions. If Deng had just ignored the protesters indefinitely, the CCP would have fallen. Perhaps the protest could have been dispersed without loss of life, but it's only very recently that police tactics have advanced to the point of being able to disperse large groups of defensively-militarized protesters without killing people. See http://en.wikipedia.org/wiki/Miami_model and compare to the failure of the police at the Seattle WTO protests of 1999. This is a recent story about Deng's supposed backing of Tiananmen violence. http://www.nytimes.com/2010/06/05/world/asia/05china.html?_r=1

MichaelVassar:

I'm seriously inclined to down-vote the whole comment community on this one except for Peter, though I won't, for their failure to challenge such an overt assertion of such an absurd claim.

I was tempted to challenge it, but I decided that it's not worth it to open such an emotionally charged can of worms.

The claim that there has never been a truly benevolent dictator though, that's simply a religious assertion, a key point of faith in the American democratic religion and no more worthy of discussion than whether the Earth is old, at least for usual meanings of the word 'benevolent' and for meanings of 'dictator' which avoid the no true Scotsman fallacy. There have been benevolent democratically elected leaders in the usual sense too. How confident do you think you should be that the latter are more common than the former though? Why?

These are some good remarks and questions, but I'd say you're committing a fallacy when you contrast dictators with democratically elected leaders as if it were some sort of dichotomy, or even a typically occurring contrast. There have been many non-democratic political arrangements in human history other than dictatorships. Moreover, it's not at all clear that dictatorships and democracies should be viewed as disjoint phenomena. Unless we insist on a No-True-Scotsman definition of democracy, many dictatorships, including quite nasty ones, have been fundamentally democratic in the sense of basing their power on majority popular support.

RHollerith
Good point. For example, if you squint hard enough, the choosing of a council or legislature through lots as was done for a time in the Venetian state is "democratic" in that everyone in some broad class (the people eligible to be chosen at random) had an equal chance to participate in the government, but would not meet with the approval of most modern advocates of democracy, even though IMHO it is worth trying again.

The Venetians understood that some of the people chosen by lot would be obviously incompetent at governing, so their procedure alternated phases in which a group was chosen by lot with phases in which the group that is the output of the previous phase voted to determine the makeup of the input to the next phase, with the idea that the voting phases would weed out those who were obviously incompetent. So, though there was voting, it was done only by the relatively tiny number of people who had been selected by lot -- and (if we ignore information about specific individuals) they had the same chance of becoming a legislator as the people they were voting on.

IMHO probably the worst effect of Western civilization's current overoptimism about democracy will be to inhibit experiments in forms of non-democratic government that would not have been possible before information technology (including the internet) became broadly disseminated. (Of course such experiments should be small in scale till they have built up a substantial track record.)

rhollerith_dot_com:

IMHO probably the worst effect of Western civilization's current overoptimism about democracy will be to inhibit experiments in forms of non-democratic government that would not have been possible before information technology (including the internet) became broadly disseminated.

I beg to differ. The worst effect is that throughout recent history, democratic ideas have regularly been foisted upon peoples and places where the introduction of democratic politics was a perfect recipe for utter disaster. I won't even try to quantify the total amount of carnage, destruction, and misery caused this way, but it's certainly well above the scale of those political mass crimes and atrocities that serve as the usual benchmarks of awfulness nowadays. Of course, all this normally gets explained away with frantic no-true-Scotsman responses whenever unpleasant questions are raised along these lines.

For full disclosure, I should add that I care particularly strongly about this because I was personally affected by one historical disaster that was brought about this way, namely the events in former Yugoslavia. Regardless of what one thinks about who bears what part of the blame for what happened there, one thing that's absolutely impossible to deny is that all the key players enjoyed democratic support confirmed by free elections.

Seconded. I live in Russia, and if you compare the well-being of citizens in Putin's epoch against Yeltsin's, Putin wins so thoroughly that it's not even funny.

Vladimir_Nesov
You could attribute the difference to many correlated features, such as the year beginning with "20" instead of "19".
LucasSloan
Also: The economy in Yeltsin's day was unusually bad, in deep recession due to pre-collapse economic problems, combined with the difficulties of switching over. In addition, today's economy benefits from a relatively high price for oil.
Vladimir_Nesov
That would be a less absurdist version of my point.
LucasSloan
I assumed you meant that economic growth (in general) meant that the wellbeing of people is generally going to be greater when the year count is greater. I was providing specific reasons why the economy at the time would have been worse than regressing economic growth would suggest, other than political leadership.
RHollerith
Yes, that is a very bad effect of the overoptimism about democracy. Another example: even the vast majority of those (the non-whites) who could not vote in Rhodesia were significantly better off than they came to be after the Jimmy Carter administration forced the country (now called Zimbabwe) to give them the vote.
MichaelVassar
I agree with everything in your paragraph. The important distinction between states as I see it is more between totalitarian and non-totalitarian than between democratic and non-democratic, as the latter tends to be a fairly smooth continuum. I was working within the local parlance for an American audience.
JanetK
I agree that statements like "all As are Bs" are likely to be only approximately true, and if you look you will find counter-examples. But... 'power corrupts' is a fairly reliable rule of thumb as rules of thumb go. I include a couple of refs that took all of 3 minutes to find, although I couldn't find the really good one that I noticed a year or so ago.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1298606
Abstract: We investigate the effect of power differences and associated expectations in social decision-making. Using a modified ultimatum game, we show that allocators lower their offers to recipients when the power difference shifts in favor of the allocator. Remarkably, however, when recipients are completely powerless, offers increase. This effect is mediated by a change in framing of the situation: when the opponent is without power, feelings of social responsibility are evoked. On the recipient side, we show that recipients do not anticipate these higher outcomes resulting from powerlessness. They prefer more power over less, expecting higher outcomes when they are more powerful, especially when less power entails powerlessness. Results are discussed in relation to empathy gaps and social responsibility.

http://scienceblogs.com/cortex/2010/01/power.php
From J Lehrer's comments: The scientists argue that power is corrupting because it leads to moral hypocrisy. Although we almost always know what the right thing to do is - cheating at dice is a sin - power makes it easier to justify the wrongdoing, as we rationalize away our moral mistake.
gwern
Somewhat relevant: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1548222
JanetK
I can think of a number of reasons why monarchs may suffer somewhat less from the 'power corrupts' norm: (1) often educated from childhood to use power wisely; (2) often feel their power is legit and therefore less fearful of overthrow; (3) tend to get better 'press' than other autocrats, so that abuse of power is less noticeable; (4) often have continuity and structure in their advisors inherited from the previous monarch. Despite this, there have been some pretty nasty monarchs through history - even ones that are thought of as great, like Good Queen Bess. However, if I had to live in an autocratic state I would prefer an established monarchy, all other things being equal.
MichaelVassar
Voted up for using data, though I'm very far from convinced by the specific data. The first seems irrelevant or at best very weakly suggestive. Regarding the second, I'm pretty confident that scientists profoundly misunderstand what sort of thing hypocrisy is, as a consequence of the same profound misunderstanding of what sort of thing mind is which led to the failures of GOFAI. I guess I also think they misunderstand what corruption is, though I'm less clear on that.

It's really critical that we distinguish power corrupting from fear and weakness producing pro-social submission, and from fearful people invoking morality to cover over cowardice. In the usual sense of the former concept, corruption is something that should be expected, for instance, to be much more gradual. One should really notice that heroes in stories for adults are not generally rule-abiding, and frequently aren't even typically selfless. Acting more antisocial, like the people you actually admire (except when you are busy resenting their affronts to you) do, because like them you are no longer afraid, is totally different from acting like people you detest.

I don't think that "power corrupts" is a helpful approximation at the level of critical thinking ability common here. (What models are useful depends on what other models you have.)
Aurini
Perhaps it would be more accurate to state "The structural dynamics of dictatorial regimes demands coercion be used, while decentralized power systems allow dissent"; even the Philosopher King must murder upstarts who would take the throne. Mass Driver's comments (below) support this, with Lee Kuan Yew's power requiring violent coercion being performed on his behalf, and the examples of Democratic Despotism largely boil down to a lack of accountability and transparency in the elected leaders - essentially they became (have become) too powerful. "Power corrupts" is just the colloquial form. (It is possible that I am in a Death Spiral with this idea, but this analysis occurred to me spontaneously - I didn't go seeking out an explanation that fit my theory)

Voted up for precision.
I see decentralization of power as less relevant than regime stability as an enabler of non-violence. Kings in long-standing monarchies, philosophical or not, need use little violence. New dictators (classically called tyrants) need use much violence. In addition, they have the advantage of having been selected for ability and the disadvantage of having been poorly educated for their position.

Of course, power ALWAYS scales up the impact of your actions. Let's say that I'm significantly more careful than average. In that case, my worst actions include doing things that have a .1% chance of killing someone every decade. Scale that up by ten million and it's roughly equivalent to killing ten thousand people once during a decade-long reign over a mid-sized country. I'd call that much better than Lincoln (who declared martial law and was an elected dictator if Hitler was one) or FDR but MUCH worse than Deng. OTOH, Lincoln and FDR lived in an anarchy, the international community, and I don't. I couldn't be as careful/scrupulous as I am if I lived in an anarchy.

WrongBot
While I'd disagree with your description of FDR as a dictator, you're quite right about Ataturk, and your other examples expose my woefully insufficient knowledge of non-Western history. My belief has been updated, and the post will be as well, in a moment. Thanks.
MichaelVassar
Thank you! I'm so happy to have a community where things like this happen. Are you in agreement with my description of Lincoln as a dictator below? He's less benevolent than FDR but I'd still call him benevolent and he's a more clear dictator.
WrongBot
Lincoln's a little more borderline, but so far as I'm aware, he didn't do anything to mess with the 1864 elections; I think most people would think that that keeps him on the non-dictator end of the spectrum. Of course, the validity of that election was based on a document that he was actively violating at the time, so there definitely seems to be room for debate.
MichaelVassar
In addition, there's the fact that most of the Southern States couldn't vote at the time. It was basically unthinkable that he could have lost the elections. Democratic and dictatorial aren't natural types, but I'd say Lincoln is at least as far in the dictatorial direction as Putin, Nazarbayev, or almost any other basically sane ex-Soviet leader.
satt
I didn't challenge it because I didn't find it absurd. I've asked myself in the past whether I could think of heads of state whose orders & actions were untarnished enough that I could go ahead and call them "benevolent" without caveats, and I drew a blank. I'd guess my definition of a benevolent leader is less inclusive than yours; judging by your child comment it seems as if you're interpreting "benevolent dictator" as meaning simply "dictators who wanted good results and got them". To me "benevolent" connotes not only good motives & good policies/behaviour but also a lack of very bad policies/behaviour. Other posters in this discussion might've interpreted it like I did.
MichaelVassar
Possibly. OTOH, the poster seems to have been convinced. I draw a blank on people, dictators or not, who don't engage in very bad policies/behavior on whatever scale they are able to act on. No points for inaction in my book.
Carinthium
I know somebody who used to work for Lee Kuan Yew, who has testified that in quite a few ways he at least has been corrupted (things such as creating a slush fund, giving a man who saved his life a public house he didn't qualify for, etc.).
gwern
That doesn't sound very corrupted to me. If your standard of corruption is that stringent, you could probably make a case for Barack Obama being corrupted - the Rezko below-market-price business, his aunt getting asylum and public housing, etc. (And someone like George W. Bush is even easier; Harken Energy, anyone?)
Vaniver
Um, you're going to have a hard time claiming Obama isn't corrupted, or that he was uncorrupt to begin with. (As you mention, such a claim is even harder for Bush.)
MichaelVassar
If the standard makes ALL leaders corrupt it doesn't favor democratic over dictatorial ones, nor is it a very useful standard. Relative to their power, are the benefits Obama, Lee Kuan Yew or even Bush skim greater than those typical Americans seek in an antisocial manner? Even comparable?
Vaniver
Useful for what? I agree it's not terribly useful for choosing whether person A or person B should hold role X, but I feel that question is a distraction- your design of role X is more important than your selection of a person to fill that role. And so the question of how someone acquired power is less interesting to me than the power that person has, and I think the link between the two is a lot weaker than people expect.
gwern
I'm presenting a dilemma. Either your standards for corruption are so high that you have to call both Yew & Obama corrupt, or your standards are loose enough that neither fits according to listed examples. I prefer to bite the latter bullet, but if you want to bite the former, that's your choice.
Carinthium
Isn't the intelligent solution to talk about degrees of corruption and minimisation? Measures to increase transparency over this sort of thing are almost certainly the solution to Obama-level corruption.
gwern
No, because that's a much more complex argument. Start with the simplest thing that could possibly work. If you don't reach any resolution or make any progress, then one can look into more sophisticated approaches.
Carinthium
The reason to look at it that way is that it deals with problems of what is or isn't "corrupt" in general - instead, levels to get rid of (assuming one is in a position to suppress corruption in the first place) can be set, and corruption above a maximum level dealt with.

If knowing the truth makes me a bigot, then I want to be a bigot. If my values are based on not knowing certain facts, or getting certain facts incorrect, then I want my values to change.

It may help to taboo "bigot" for a minute. You seem to be lumping a number of things under a label and calling them bad.

There's the question of how we treat people who are less intelligent (regardless of group membership). I'm fine with discriminating in some ways based on intelligence of the individual, and if it does turn out that Group X is statistically less intelligent, then maybe Group X should be underrepresented in important positions. This has consequences for policy decisions. Of course, there may be a way of increasing the intelligence of Group X:

Based on all the evidence I have, I’ve made a conscious decision to avoid seeking out information on sex differences in intelligence and other, similar kinds of research.

How are you going to help a disadvantaged group if you're blinding yourself to the details of how they're disadvantaged?

WrongBot
Agreed. But I should not make decisions about individual members of Group X based on the statistical trend associated with Group X, and I doubt my (or anyone's) ability to actually not do so in cases where I have integrated the belief that the statistical trend is true. The short answer is that I'm not going to. I'm not doing research on human intelligence, and I doubt I ever will. The best I can hope to do is not further disadvantage individual members of Group X by discriminating against them on the basis of statistical trends that they may not embody. People who are doing research that relates to human intelligence in some way should probably not follow this exact line of reasoning.

WrongBot:

But I should not make decisions about individual members of Group X based on the statistical trend associated with Group X [...]

Really? I don't think it's possible to function in any realistic human society without constantly making decisions about individuals based on the statistical trends associated with various groups to which they happen to belong (a.k.a. "statistical discrimination"). Acquiring perfectly detailed information about every individual you ever interact with is simply not possible given the basic constraints faced by humans.

Of course, certain forms of statistical discrimination are viewed as an immensely important moral issue nowadays, while others are seen simply as normal common sense. It's a fascinating question how and why exactly various forms of it happen (or fail) to acquire a deep moral dimension. But in any case, a blanket condemnation of all forms of statistical discrimination is an attitude incompatible with any realistic human way of life.

WrongBot
The "deep moral dimension" generally applies to group memberships that aren't (perceived to be) chosen: sex, gender, race, class, sexual orientation, religion to a lesser extent. These are the kinds of "Group X" to which I was referring. Discriminating against someone because they majored in Drama in college or believe in homeopathy are not even remotely equivalent to racism, sexism, and the like.

The well-documented discrimination against short men and ugly people, and the (more debatable) discrimination against the socially inept and those whose behaviour and learning style does not conform to the compliant workers that schools are largely structured to produce, are examples of discrimination that appear to receive less attention and concern.

NancyLebovitz
Opposition to discrimination doesn't just happen. It has to be organized and promoted for an extended period before there's an effect. Afaik, that promotion typically has to include convincing people in the discriminated group that things can be different and that opposing discrimination is worth the risks and effort. In some cases, it also includes convincing them that they don't deserve to be mistreated.
Vladimir_M
WrongBot: This is not an accurate description of the present situation. To take the most blatant example, every country discriminates between its own citizens and foreigners, and also between foreigners from different countries (some can visit freely, while others need hard-to-get visas). This state of affairs is considered completely normal and uncontroversial, even though it involves a tremendous amount of discrimination based on group memberships that are a mere accident of birth.

Thus, there are clearly some additional factors involved in the moralization of other forms of discrimination, and the fascinating question is what exactly they are. The question is especially puzzling considering that religion is, in most cases, much easier to change than nationality, and yet the former makes your above list, while the latter doesn't -- so the story about choice vs. accident of birth definitely doesn't hold water.

I'm also puzzled by your mention of class. Discrimination by class is definitely not a morally sensitive issue nowadays the way sex or race is. On the contrary, success in life is nowadays measured mostly by one's ability to distance and insulate oneself from the lower classes by being able to afford living in low-class-free neighborhoods and joining higher social circles. Even when it comes to you personally, I can't imagine that you would have exactly the same reaction when approached by a homeless panhandler and by someone decent-looking.
Douglas_Knight
Without disagreeing much with your comment, I have to point out that this is a non sequitur. Moral sensitivity has nothing to do with (ordinary) actions. Among countries where the second sentence is true, there are both ones where the first is true and ones where the first is false. I don't know so much about countries where the second sentence is false. As to religion, in places where people care about it enough to discriminate, changing it will probably alienate one's family, so it is very costly to change, although technically possible. Also, in many places, religion is a codeword for ethnic groups, so it can't be changed (e.g., Catholics in the US, 1850-1950).
Vladimir_M
You're right that my comment was imprecise, in that I didn't specify to which societies it applies. I had in mind the modern Western societies, and especially the English-speaking countries. In other places, things can indeed be very different with regards to all the mentioned issues.

However, regarding your comment: That's not really true. People are indeed apt to enthusiastically extol moral principles in the abstract while at the same time violating them whenever compliance would be too costly. However, even when such violations are rampant, these acts are still different from those that don't involve any such hypocritical violations, or those that violate only weaker and less significant principles. And in practice, when we observe people's acts and attitudes that involve their feeling of superiority over lower classes and their desire to distance themselves from them, it looks quite different from analogous behaviors with respect to e.g. race or sex. The latter sorts of statements and acts normally involve far more caution, evasion, obfuscation, and rationalization.

To take a concrete example, few people would see any problem with recommending a house by saying that it's located in "a nice middle-class neighborhood" -- but imagine the shocked reactions if someone praised it by talking about the ethnic/racial composition of the neighborhood loudly and explicitly, even if the former description might in practice serve as (among other things) a codeword for the latter.
Matt_Simpson
But you still discriminate based on sex, gender, race, class, sexual orientation and religion every day. You don't try to talk about sports with every girl you meet, you safely assume that they probably aren't interested until you receive evidence to the contrary. But if you meet a guy, then talking about sports moves higher on the list of conversation topics just because he's a guy.
WrongBot
Well, I actually try to avoid talking about sports entirely, because I find the topic totally uninteresting. But! That is mere nitpicking, and the thrust of your argument is correct. I can only say that like all human beings I regularly fail to adhere to my own moral standards, and that this does not make those standards worthless.
Matt_Simpson
For some reason I expected that answer. ;) I find it odd that you still hold on to "not statistically discriminating" as a value. What about it do you think is immoral? (I'm not trying to be condescending here, I'm genuinely curious)
WrongBot
I value not statistically discriminating (on the basis of unchosen characteristics or group memberships) because it is an incredibly unpleasant phenomenon to experience. As a white American man I suffer proportionally much less from the phenomenon than do most people, and even the small piece of it that I pick up from being bisexual sucks. It's not a terminal value, necessarily, but in practice it tends to act like one.
HughRistik
If following your moral standards is impractical, maybe those standards aren't quite right in the first place. It is a common mistake for idealists to choose their morality without reference to practical realities. A better search plan would be to find all the practical options, and then pick whichever of those is the most moral. If you spare women you meet from discussion of sports (or insert whatever interest you have that exhibits average sex differences) until she expresses interest in the subject, you have not failed any reasonable moral standards.
WrongBot
Most moral by what standard? You're just passing the buck here.
HughRistik
Moral according to your standards. I'm just suggesting a different order of operation: understanding the practicalities first, and then trying to find which of the practical options you judge most moral.
WrongBot
But those standards are moral standards. If you're suggesting that one should just choose the most moral practical option, how is that any different from consequentialism? Your first comment sounded like you were suggesting that people should choose the most moral practical standard.
SilasBarta
Well, until you factor in the unfortunate tendency of women to be attracted to men who are indifferent to their interests :-P
[anonymous]
People don't get to choose how intelligent they are.
Simplicius
Those people depend upon funding that is contingent on public opinion of how valid their research is. Also, by making a research question disreputable, talented people might avoid it and those with ulterior motives might flock to it. Currently the only people who dare to touch this field in any meaningful way are those who are already tenured, and while that is the whole purpose of tenure, the fact remains that even if these people, due to their age (the topic wasn't always taboo), aren't really showing the negative effects described above, they are still old. And old brains just don't work that well when it comes to coming up with new stuff.

Deciding a piece of knowledge should be considered dangerous knowledge will necessarily lead to the deception of others and self on many different levels and in many different ways. I agree with the estimation made by some others that it will produce dragon-in-the-garage dynamics, which will induce many of the same bad results and biases you seem to wish to ameliorate.

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

One specific and relatively common version of this is people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.

The rest of the post was good, but these claims seem far too anecdotal and availability heuristicky to justify blocking yourself out of an entire area of inquiry.

When well-meaning, intelligent people like yo...

WrongBot
I think it may be helpful to clearly distinguish between epistemic and instrumental rationality. The idea proposed in this post is actively detrimental to the pursuit of epistemic rationality; I should have acknowledged that more clearly up front. But if one is more concerned with instrumental rationality ("winning"), then perhaps there is more value here. If you've designated a particular goal state as a winning one and then, after playing for a while, unconsciously decided to change which goal state counts as a win, then from the perspective of the you that began the game, you've lost. I do agree that my last example was massively under-justified, especially considering the breadth of the claim.

In the comments here we see how LW is segmenting into "pro-truth" and "pro-equality" camps, just as it happened before with pro-PUA and anti-PUA, pro-status and anti-status, etc. I believe all these divisions are correlated and indicate a deeper underlying division within our community. Also I observe that discussions about topics that lie on the "dividing line" generate much more heat than light, and that people who participate in them tend to write their bottom lines in advance.

I'm generally reluctant to shut people up, but here's a suggestion: if you find yourself touching the "dividing line" topics in a post or comment, think twice about whether it's really necessary. We may wish ourselves to be rational, but it seems we still lack the abstract machinery required to actually update our opinions when talking about these topics. Nothing is to be gained from discussing them until we have the more abstract stuff firmly in place.

My hypothesis is that this is a "realist"/"idealist" divide. Or, to put it another way, one camp is more concerned with being right and the other is more concerned with doing the right thing. ("Right" means two totally different things, here.)

Quality of my post aside (and it really wasn't very good), I think that's where the dividing line has been in the comments.

Similarly, I think most people who value PUA here value it because it works, and most people who oppose it do so on ethical or idealistic grounds. Ditto discussions of status.

The reason the arguments between these camps are so unfruitful, then, is that we're sort of arguing past each other. We're using different heuristics to evaluate desirability, and then we're surprised when we get different results; I'm as guilty of this as anyone.

Here is another example of the way that pragmatism and idealism interact for me, from the world of pickup:

I was brought up with the value of gender equality, and with a proscription against dominating women or being a "jerk."

When I got into pickup and seduction, I encountered the theory that certain masculine behaviors, including social dominance, are a factor in female attraction to men. This theory matched my observation of many women's behavior.

While I was uncomfortable with the notion of displaying stereotypically masculine behavior (e.g. "hegemonic masculinity" from feminist theory) and acting in a dominant manner towards women, I decided to give it a try. I found that it worked. Yet I still didn't like certain types of masculine and dominance displays, and the type of interactions they created with women (even while "working" in terms of attraction and not being obviously unethical), so I started experimenting and practicing styles less reliant on dominance.

I found that there were ways of attracting women that worked quite well, and didn't depend on dominance and a narrow version of masculinity. It just took a bit of practice and creativ...

I strongly agree with this. Count me in the camp of believing true things in literally all situations, as I think that the human brain is too biased for any other approach to result, in expectation, in doing the right thing, but also in the camp of not necessarily sharing truths that might be expected to be harmful.

9HughRistik
I was thinking the same thing, when I insinuated that you were being idealistic ;) Whether this dichotomy makes sense is another question. I think this is an excellent example of what the disagreements look like superficially. I think what is actually going on is more complex, such as differences in perception of empirical matters (underlying "what works"), and different moral philosophies. For example, if you have a deontological prescription against acting "inauthentic," then certain strategies for learning social skills will appear unethical to you. If you are a virtue ethicist, then holding certain sorts of intentions may appear unethical, whereas a consequentialist would look more at the effects of the behavior. Although I would get pegged on the "realist" side of the divide, I am actually very idealistic. I just (a) revise my values as my empirical understanding of the world changes, and (b) believe that empirical investigation and certain morally controversial behaviors are useful for executing on my values in the real world. For example, even though intentionally studying status is controversial, I find that social status skills are often useful for creating equality with people. I study power to gain equality. So am I a realist, or an idealist on that subject? Another aspect of the difference we are seeing may be in this article's description of "shallowness."
5wedrifid
(Prompted by but completely irrelevant to the recent bump.) Come now. This is lesswrong. It is an "idealist"/"idealist" divide with slightly different ideals. :P One side's ideal just happens to be "verbal symbols should be used to further epistemic accuracy". It is very much an 'ethical or idealistic' position with all the potential for narrow mindedness that entails.
-1ChristianKl
The evidence that PUA works is largely anecdotal. A lot of people claim that one shouldn't believe in acupuncture based on anecdotal evidence; PUA, however, is a theory that plays well with other reductionist beliefs, while acupuncture doesn't. I think the following two are open questions: Given the same number of approaches, does a guy who has read PUA theories have a higher chance of getting laid? If a man's goal is a fulfilling long-term relationship with an attractive woman, is it beneficial for him to go down the PUA road? The evidence for the status hypothesis is also relatively weak. Being reductionist has nothing to do with being realist. Being reductionist brings you problems when you are faced with a system that's more complex than your model. In biology, students these days are taught that even when you know all the parts of a system you don't necessarily know what the system does: that kind of reductionism is wrong, and you actually need real evidence for theories such as the status hypothesis.
2Vaniver
Isn't one of the benefits of PUA that your number of actual approaches increases (while single, at least)?
0ChristianKl
Isn't one of the benefits of homeopathy that you get to talk to a person who promises you that you will feel better? If the control for homeopathy is doing nothing, then you find that homeopathy works. If you instead do a double-blind trial, you will probably find that homeopathy doesn't work. If you truly believe in rationalism and don't engage in it to signal status, I see no reason to use another standard for judging whether homeopathy is true than for judging whether PUA works.
2Vaniver
Aww, I respect you as a person too! (What were you trying to accomplish with this comment?) As you point out, which control you pick is significant, but my point is that what test you pick is significant too. Let's talk about basketball: you can try and determine how good players are by their free throw percentage, or you can try and determine how good players are by their average points scored per game. You're suggesting the analog of the first, which seems ludicrous because it ignores many critical skills. If someone is interested primarily in getting laid, it seems that the number they care about is mean time between lays, not percentage success on approaches. I won't comment much about your homeopathy example, except to say that even if one considers it relevant it undermines your position. Homeopathy is better than both nothing and harmful treatments (my impression is most people come to PUA from not trying at all or trying ineffectively). Generally, for any homeopathic treatment you could take there is a superior mainstream treatment, but for some no treatment is more effective than placebo (and so you're just making the decision of whether or not to pay for the benefits of placebo). Likewise, even if the only benefit of PUA is increased confidence, you have to trick yourself into that confidence somehow- and so if PUA boosts confidence PUA increases your chances, even though it did it indirectly.
4David_Gerard
Your statement concerning homeopathy turns out not to be correct. In practice, homeopathy is harmful because it replaces effective treatments in patients' minds and it soaks up medical funding. Edit: Actually, yes, I do agree with Vaniver's point as explained below: at the time of its invention, homeopathy (i.e., water) frequently gave better results than the actively harmful things many doctors were doing to their patients. That said, I'm not sure the analogy with PUAs is usably solid even in those terms ... need to come up with one that might be.
4Vaniver
Precision in language: my statement concerning homeopathy is correct, but has debatable relevance. At present, homeopathy underperforms mainstream medicine for nearly everything (like I explicitly mentioned). But I strongly suspect the only reason we're talking about an alternative medicine that originated 200 years ago is because it predated the germ theory of disease by 70 years. So, it had at least 70 years of growth as an often superior alternative to mainstream medicine, which was murdering its patients through ignorance.* As well, Avogadro's number was measured about the same time as the germ theory was put forward by Pasteur, and so for that time homeopathy had as solid a theoretical background as mainstream medicine. My feeling is that insomuch as PUA should be compared to homeopathy, it should be compared to homeopathy in 1840 -- the proponents may be totally wrong about why it works and quality data either way is likely scarce, but the paucity of strong alternatives means it's a good choice.** Heck, it might even be the analog of germ theory instead of the analog of homeopathy. *The story of Ignaz Semmelweis ought not be forgot. **Is there anyone else trying a "scientific" approach to relationships? I know there are a number of sexologists, but they seem more descriptive and less practical than PUA. Not to mention they seem more interested in the physical aspects than the tactical/strategic ones.
2NancyLebovitz
A reductionist approach to acupuncture -- it claims that all the ideas about mystical energy are mistranslations, and explains acupuncture in terms of current biology.
0wedrifid
There is an implied argument in here that is triggering my bullshit senses. The worst part is that it takes what is a valid consideration (the lamentable lack of research into effective attraction strategies) and uses it as a facade over an untenable analogy and a complete neglect of the strength of the anecdotal evidence. Relative to what, exactly? The 'gravity' hypothesis? The evidence is overwhelming.
2ChristianKl
How do you determine the strength of anecdotal evidence to decide that PUA works and acupuncture doesn't? I know quite a few people, both online and offline, who claim that acupuncture has helped them with various issues. I know people online who claim that PUA helped them. I know people online who say that they concluded, after spending over a year in the PUA community, that the field is a scam. I also know people online who have radically changed their social life without going the PUA road. As a good skeptic it's important to know that you simply don't have enough information to decide certain questions.
8wedrifid
-2[anonymous]
And as an effective homo-hypocritus it is important to recognize when the 'good skeptic' role will be a beneficial one to adopt, completely independent of the evidence.
0WrongBot
This is only true if you have insufficient math/computing ability to simulate the interactions of the system's parts. For it to be otherwise, either your information would have to actually be incomplete, or magic would have to happen.
-1ChristianKl
Thanks to Heisenberg, your information is also always incomplete. In real life you do have insufficient math/computing ability to simulate the interactions of many systems. Whether weak reductionism is true doesn't matter much for this debate. People who believe in strong reductionism find appeal in PUA theory. They believe that they have sufficient mental resources and information to calculate complex social interactions in a way that allows them to optimize those interactions. Because of their belief in strong reductionism they believe in PUA based on anecdotal evidence, and don't believe in acupuncture based on anecdotal evidence.
7[anonymous]
If there's a discussion about whether or not we should seek truth -- at a site about rationality -- that's a discussion worth having. It's not a side issue. Like whpearson, I think we're not all on one side or another. I'm pro-truth. I'm anti-PUA. I don't know if I'm pro or anti status -- there's something about this community's focus on it that unsettles me, but I certainly don't disapprove of people choosing to do something high-status like become a millionaire. You're basically talking about the anti-PC cluster. It's an interesting phenomenon. We've got instinctively and vehemently anti-PC people; we've got people trying to edge in the direction of "Hey, maybe we shouldn't just do whatever we want"; and we've got people like me who are sort of on the dividing line, anti-PC in theory but willing to walk away and withdraw association from people who actually spew a lot of hate. I think it's an interesting issue because it deals with how we ought best to react to controversy. In the spirit of the comments I made to WrongBot, I don't think we should fear to go there; I know my rationality isn't that fragile and I doubt yours is either. (I've gotten my knee-jerk emotional responses burned out of me by people much ruder than anyone here.)

Anti-PC? Good name, I will use it.

I know my rationality isn't that fragile and I doubt yours is either.

What troubles me is this: your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I'm pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I'd be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different "positions". There is only one truth.

We have had some big successes already. (For example, most people here know better than to be confused by talk of "free will".) I don't think the anti-PC issue can be resolved by the drawn-out positional war we're waging, because it isn't actually making anyone change their opinions. It's just a barrage of rationalizations from all sides. We need more insight. We need a breakthrough, or maybe several, that would point out the obviously correct way to think about anti-PC issues.

10[anonymous]

Anti-PC? Good name

I don't think using this name is a good idea. It has strong political connotations. And while I'm sure many here aren't aware of them or are willing to ignore them, I fear this may not be true:

  • For potential new readers and posters
  • Once the "camps" are firmly established.
4[anonymous]
I think it actually is a value difference, just like Blueberry said. I do not want to participate in nastiness (loosely defined). It's related to my inclination not to engage in malicious gossip. (Folks who know me personally consider it almost weird how uncomfortable I am with bashing people, singly or in groups.) It's not my business to stop other people from doing it, but I just don't want it as part of my life, because it's corrosive and makes me unhappy. To refine my own position a little bit -- I'm happy to consider anti-PC issues as matters of fact, but I don't like them connotationally, because I don't like speaking ill of people when I can help it. For example, in a conversation with a friend: he says, "Don't you know blacks have a higher crime rate than whites?" I say, "Sure, that's true. But what do you want from me? You want me to say how much I hate my black neighbors? What do you want me to say?" I don't think that's an issue that argument can dissuade me from; it's my own preference.
0[anonymous]
This discussion prompted a connection in my mind that startled me a lot. Let's put it in the open. We've been discussing the moral status of identical copies. I gave a partial reductio some time ago, but wasn't really satisfied. Now consider this: what about the welfare of your imperfect copies? Do UDT-like considerations make it provably rational to care more about creatures that share random features with you? Note that I say UDT-like considerations, not evolutionary considerations. Evolution doesn't explain professional solidarity or feminism, because neither relies on heritable traits. Ganging up looks more like a Schelling coordination game, where you benefit from seeking allies based on some random quality as long as they also get the idea of allying with you based on the same quality. And it might work better if the quality is hard to change, like sex or race. Anyone willing to work out the math is welcome to do so...
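As a first stab at that math, here is a toy sketch (the payoffs are made-up numbers, and this is a plain coordination game, not anything UDT-specific):

```python
# A toy coordination game: allying pays off only if both players key on
# the same quality. Payoff values are arbitrary assumptions.
QUALITIES = ["profession", "sex", "race"]

def payoff(mine, theirs):
    return 2 if mine == theirs else 0

def best_response(theirs):
    return max(QUALITIES, key=lambda q: payoff(q, theirs))

# Matching the other player's quality is always the best response, so
# "everyone allies on quality Q" is an equilibrium for any Q; a
# hard-to-change quality just makes the focal point more stable.
for theirs in QUALITIES:
    assert best_response(theirs) == theirs
print("matching the other's quality is always a best response")
```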
0steven0461
Asserting group inequalities means speaking more ill of one group of people but less ill of another, so doesn't that cancel out?
2[anonymous]
I'm not talking about empirical claims, I'm talking about affect. I have zero problem with talking about group inequalities, in themselves.
3Blueberry
But there are many different values. If we can't sway each other's positions, that points to a value difference.
8Vladimir_Nesov
If only it was always so. Value is hard to see, so easy to rationalize.
2[anonymous]
"Value difference" is often used as a cop-out. How did our terminal values come to be so different, anyway? If I'm extremely selfish and you're extremely selfish, we will likely have very different values, but if we are both altruistic, our values are combinations of values of all the other people in the world, so they should be pretty similar. For example, if I think society should be organized like an anthill and you think it should be organized like a pool of sharks (to borrow Ken Binmore's example), this is a factual disagreement about what would make everyone better off, not a value disagreement.
5Douglas_Knight
Maybe it's a political correctness principal component, but it seems to me that ideas about status should not be aligned with that component. If PUA had not been mentioned, and we were just discussing Johnstone, then I think those who are ignorant of PUA, whether pro- or anti-PC, would have less extreme reactions and often completely different ones. If people's opinions on one issue are polarizing their opinions on another, without agreement that they're logically related, something is probably going wrong, and this is a cost of discussing the first issue. Also, cousin_it talked about the issues creating "camps." That's probably the mediating problem.
0Risto_Saarelma
I am presently amused by imagining forum members declaring themselves "anti-truth". Though I guess there is a spectrum: from sticking to discovering and exposing widely applicable truths no matter what, through some kind of Straussian stance where only the enlightened elites can be allowed access to dangerous truths and the general populace is to be fed noble lies, and then on to even less coherent spheres of willful obscurantism and outright anti-intellectualism, where it seems that nobody is encouraged to pursue some topics. For some reason though, people who either explicitly believe that noble lies are necessary or have internalized a culture where they are built in never seem to claim to be anti-truth.
3whpearson
I think there are divisions within the community, but I am not sure about the correlations. Or at least they don't fit me. I'm pro discussion of status; I liked red paper clip theory, for example. I'm anti acquiring high status for myself, and anti people telling me I should be pro that. I'm anti-PUA advice, pro the occasional well-backed-up psychological research with PUA-style flavour (finding out what women really find attractive, why the common advice is wrong, etc.). I'm pretty much pro-truth; I don't think words can influence me that much (if they could I would be far more mainstream). I'm less sure about situations: if I were more status/money-maximising for a while to earn money to donate to FHI etc., then I would worry that I would get sucked into the high-status decadent consumer lifestyle and forget about my long-term concerns. Edit: Actually, I've just thought of a possible reason for the division you note. If you are dominant or want to become dominant, you do not want to be swayed by the words of others, so ideas are less likely to be dangerous to you or your values. If you are less dominant you may be more susceptible to the ideas that are floating around in society, as, evolutionarily, you would want to be part of whatever movement is forming so you are part of the ingroup. I think my social coprocessor is probably broken in some weird way, so I may be an outlier.

There's no social coprocessor; we evolved a giant cerebral cortex to do social processing, but some people refuse to use it for that because they can't use it in its native mode while they are also emulating a general intelligence on the same hardware.

2whpearson
I was being brief (and imprecise) in my self-assessment, as that wasn't the main point of the comment. I didn't even mean broken in the sense that others might have meant it, i.e. Aspergers. I just don't enjoy social conversation much normally. I can do it such that the other person enjoys it somewhat. An example: I was chatting to a cute dancer last night (at someone's 30th, so I was obliged to), and she invited me to watch her latest dance. I declined because I wasn't into her (or into watching dance). She was nice and pretty, nothing wrong with her, but I just don't tend to seek marginal connections with people because they don't do much for me. Historically, the people I connect with seem to have been people who have challenged me or can make me think in odd directions. This I understand is an unusual way to pick people to associate with, so I think something in the way I process social signals is different from the norm. This is what I meant.
8MichaelVassar
I know what's going on. You think of yourself and others as collections of thoughts and ideas. Since most people don't have interesting thoughts or ideas, you think they aren't interesting. OTOH, it's possible to adopt, temporarily and in a manner which automatically reverses itself, the criteria for assigning interest that the person you are associating with uses. When you do that, everyone turns out to be interesting and likable.
3whpearson
That wasn't my working hypothesis. Mine was that I have different language capabilities and that those affect which social situations I find easy and enjoyable (and so the different people I choose to associate with). For example, I can quite happily rattle off some surreal story with someone, and I enjoy helping someone plan or design something. I find it hard to narrate stories about my life or remember interesting tidbits about the world that aren't in my interest right at the moment. Oh, I can find many things interesting for a brief time, e.g. where the best place to be a dancer is (London is better than the rest of Europe) or how some school kids were playing up today. Just subconsciously my brain knows it doesn't want lots of that sort of information or social interaction, so it sends signals that I do not want to have long-term friendships with these sorts of people.
1ABranco
Hi, Michael. Can you expand that thought, and the process? Doesn't adopting the other person's criteria constitute a kind of "self-deception" if you happen to dislike/disapprove of his/her criteria? I mean that even if, despite your dislikes, you sympathize with the paths that led to that person's motivations, if reading a book happens to be a truly more interesting activity at that moment, and is an actionable alternative, I don't see how connecting with the person could be a better choice. Unless... you find something very enjoyable in this process itself that doesn't depend much on the person. I remember your comment about "liking people's territories instead of their maps" — it seems to be related here. Is it?
5Blueberry
Do you ever just associate with people you find attractive at first sight? (I can't tell if you're referring to a strip club, or what kind of dancer you mean.) You may find Prof. Richard Wiseman's research on what makes people "lucky" interesting: his research has found advantages to seeking marginal connections with people you meet.
3whpearson
Do you mean sexually attractive? Or just interesting-looking? I'll initiate conversation with interesting-looking people (who may or may not be sexually attractive). By dancer I just meant someone who does modern dance; she was a friend of a friend (I have some odd friends by this website's standards, I think). Oh, I know I should develop more marginal connections. It simply feels false to do so, though: that I am doing so in the hope of exploiting them, rather than finding them particularly interesting in their own right. I would rather not be cultivated in that fashion.
0Blueberry
I meant sexually attractive (you described the dancer as "cute" and "pretty"). Though I guess either would work.
0Blueberry
I'm not sure I understand. By 'emulating a general intelligence', do you mean consciously thinking through every action? My understanding is that people can develop social processing skills by consciously practicing unnatural habits until they become natural.
4MichaelVassar
No-one consciously thinks through every action. I mean thinking at all rather than paying total attention to the other person and letting your actions happen. If you feel that 'you' are doing something, you aren't running the brain in its native mode; you're running an emulation. It's hard to figure out how to do this from a verbal description, but if it happens you will recognize what I'm talking about, and it doesn't require any practice of anything unnatural.
4HughRistik
This is correct; at least some people can do this. For some reason, there is a cultural bias against believing that this approach works: so many people seem to believe that it doesn't, without evidence. These people are wrong; this view has already been falsified by many people. Many people learn many different disciplines through the four stages of competence (unconscious incompetence, conscious incompetence, conscious competence, unconscious competence), in sports and the arts. Conversation isn't a special exception, though it may be different from those domains by requiring more specialized mental hardware. Consciously practicing "unnatural" social habits happens to be a good way to jump-start that hardware if it is dormant. Someone without this hardware may not be able to learn how to emulate naturally social people through consciously trying to emulate them. Yet I bet that most people with social difficulties short of Asperger's aren't missing the relevant hardware; they just don't know how to use it out of social inexperience, such as from spending their formative years being isolated and bullied for being slightly different.
-1daedalus2u
I disagree. I think there is the functional equivalent of a “social coprocessor”: what I see as the fundamental trade-off along the autism spectrum is the trading of a “theory of mind” (necessary for good and nuanced communication with neurotypically developing individuals) against a “theory of reality” (necessary for good ability at tool making and tool using). http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html Because the maternal pelvis is limited in size, the infant brain is limited at birth (still ~1% of women die per childbirth (in the wild) due to cephalopelvic disproportion). The “best” time to program the fundamental neuroanatomy of the brain is in utero, during the first trimester, when the fundamental neuroanatomy of the brain is developing and when the epigenetic programming of all the neurons in the brain is occurring. The two fundamental human traits, language and tool making/using, both require a large brain with substantial plasticity over the individual's lifetime. But other than that they are pretty much orthogonal. I suspect there has been evolutionary pressure to optimize the neuroanatomy of the human infant brain at birth so as to optimize the neurological tasks that brain is likely to need to do over that individual's lifetime.
6HughRistik
Another possibility is that we are seeing some other personality differences in openness and/or agreeableness. People who are higher in openness and/or lower in agreeableness might be more interested in ideas that are judged politically incorrect, or antisocial.
1[anonymous]
The division might correlate with where people land on the various axes of the neurodiversity spectrum.
3Blueberry
I think this is just another way of saying "I'm pro- good advice about dating and anti- bad advice about dating." I would consider the research you're discussing a form of PUA/dating advice.
9whpearson
Are Newton's laws billiard-ball prediction advice? In other words, there are uses for knowing what, on average, women like in a man other than trying to pick up girls. These include, but are not limited to:
  • Judging the likely ability of politicians to influence women
  • Being able to matchmake between friends
  • Writing realistic plots in fiction
  • Not being surprised when your friends are attracted to certain people
11Larks

If you're an altruist (on the 'idealist' side of WrongBot's distinction), you'd probably consider making women you know happier to be the biggest advantage.

4whpearson
Most of the women I'm friends with are in relationships with men who aren't me :) So me being maximally attractive to them may not make them happier. I would need more research on how to have the correct amount of attractiveness in platonic relationships. Sure, women like the attention of a very attractive man, but it could lead to jealousy (why is the attractive man speaking to X and not me?), unrequited lust, and strife in their existing relationships. Perhaps research on what women find creepy, and not doing that, would be more useful for making women happier in general. Edit: There is also the problem that if you become more attractive you might make your male friends less happy, as they get less attention. Raising the general attractiveness of your male social group is another possibility, but one that would require quite an oddly rational group.
2Emile
I agree that these politically charged issues are probably not a very good thing for the community, and that we should be extra cautious when engaging them.
0CarlShulman
Any hypotheses about the common factor?
6cousin_it
Not sure. I was anti-status, anti-PUA, pro-equality until age 22 or so, and then changed my opinions on all these issues at around the same time (took a couple years). So maybe there is a common cause, but I have absolutely no idea what that cause could be.
8[anonymous]
del
5CarlShulman
Reduced attachment to explicit verbal norms?
2JamesPfeiffer
My relevant life excerpt is similar to yours. The first two changed because of increased understanding of how humans coordinate and act socially. Not sure if there is a link to the third.
1Blueberry
It's called "growing up."
1[anonymous]
I wouldn't call it that; climbing the metacontrarian ladder seems to describe it much better.
18[anonymous]

A thousand times no. Really, this is a bad idea.

Yeah, some people don't value truth at any cost. And there's some sense to that. When you take a little bit of knowledge and it makes you a bad person, or an unhappy person, I can understand the argument that you'd have been better off without that knowledge.

But most of the time, I believe, if you keep thinking and learning, you'll come round right. (I.e.: when a teenager reads Ayn Rand and thinks that gives him license to be an asshole, his problem is not that he reads too much philosophy.)

You seem to be particularly worried about accidentally becoming a bigot. (I don't think most of us are in any danger of accidentally becoming supreme dictators.) I think you are safe. Think of it this way: you don't want to be a bigot. You don't want your future self to be a bigot either. So don't behave like one. No matter what you read. Commit your future self to not being an asshole.

I think fear of brainwashing is generally silly.* You will not become a Mormon from reading the Book of Mormon. You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital. You will not become a racist from reading Steve S...

But most of the time, I believe, if you keep thinking and learning, you'll come round right. (I.e.: when a teenager reads Ayn Rand and thinks that gives him license to be an asshole, his problem is not that he reads too much philosophy.)

"A little learning is a dang'rous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again."

-- Pope

4SilasBarta
That sounds like my (provisional) resolution of the conflict between "using all you know" and "don't be a bigot": you should incorporate the likelihood ratio of things that a person can't control, so long as you also observe and incorporate evidence that could outweigh such statistical, aggregate, nonspecific knowledge. So drink deep (use all evidence), but if you don't, then avoid incorporating "dangerous knowledge" as a second-best alternative. Apply a low Bayes factor for something someone didn't choose, as long as you give them a chance to counteract it with other evidence. (Poetry still sucks, though. I'm not yet changing my mind about that.)
14Emile

(Poetry still sucks, though. I'm not yet changing my mind about that.)

... must ... resist ... impulse ... to ... downvote ... different ... tastes ...

0NancyLebovitz
The other problem with "using all you know" about groups which are subject to bigotry is that "we rule, you drool" is very basic human wiring, and there's apt to be some motivated cognition (in the people developing and giving you the information, even if you aren't engaging in it) on the subject.

You will not become a Nazi from reading Mein Kampf, or a Communist from reading Das Kapital.

I became a Trotskyite (once upon a time) partly based on reading Trotsky's history of the Russian Revolution. Yes, I was primed for it, but... words aren't mere.

2Emile
Interesting - would you recommend others read it? I'm interested in reading anything that can change my mind, but avoid some partisan stuff when it looks like it's "preaching to the choir" and assumes that the reader already agrees with the conclusions.
4simplicio
Yes, if you're not young, impressionable and overidealistic. Trotsky was an incredible writer, and reading that book you do really see things from the perspective of an insider.
0[anonymous]
This "it" may, or even should, relate to the idea itself. The same idea, the same meme, put into a healthy rational brains anywhere, will decide the same! Since the brains are just a rational machine always doing the best possible thing. It is the input, what decides the output. Machine has no other (irrational) choices, than to process the input best way it can, and then to spit out the output. It is not my calculator only, which outputs "12" to the input "5+7". It is every unbroken calculator in the world, which outputs the same. So again. The input "decides" what the output should be, not the computer (brains).
7[anonymous]
I don't know if this is a fair characterization of Steve Sailer. I'm quite sure some of his commenters are racist, but then again so are many of the commenters on any major news site. I would call him a racialist, or perhaps just an HBDer. Perhaps I'm somewhat biased in my view of him, but this interesting video, for example, seems typical of Steve Sailer's style. Is this as representative of racism as Das Kapital is of Communism or Mein Kampf of Nazism? Does racism just have a bad PR guy? Whatever one calls this, it clearly doesn't deserve the few thousand negative karma points racism has in my mind. Perhaps he is putting his best face forward here, but listening to a few parts of this discussion I half expected he would start reciting the litany of Tarski, going into a Hansonian analysis of status or telling everyone that beliefs should pay rent. He certainly touches on these topics in a slightly different vocabulary! Reword a sentence or two and it sounds like something a commenter could write on Less Wrong and get upvoted for.
0GLaDOS
Original link is broken. This seems to be the same video.
2Emile
He's probably more motivated by not wanting others to become bigots - right, WrongBot?
8WrongBot
My motivation in writing this article was to attempt to dissuade others from courses of action that might lead them to become bigots, among other things. But I am also personally terrified of exactly the sort of thing I describe, because I can't see a way to protect against it. If I had enough strong evidence to assign a probability of .99 to the belief that gay men have an average IQ 10 points lower than straight men (I use this example because I have no reason at all to believe it is true, and so there is less risk that someone will try to convince me of it), I don't think I could prevent that from affecting my behavior in some way. I don't think it's possible. And I disvalue such a result very strongly, so I avoid it. I bring up dangerous thoughts because I am genuinely scared of them.
14[anonymous]

The fact that you have a core value, important enough to you that you'd deliberately keep yourself ignorant to preserve that value, is evidence that the value is important enough to you that it can withstand the addition of information. Your fear is a good sign that you have nothing to fear.

For real. I have been in those shoes. Regarding this subject, and others. You shouldn't be worried.

Statistical facts like the ones you cited are not prescriptive. You don't have to treat anyone badly because of IQ. IQ does not equal worth. You don't use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.

In the past I have largely agreed with the sentiment that truth and information are mostly good, and when they create problems the solution is even more truth.

But on the basis of an interest in knowing more, I sometimes try to seek evidence that supports things I think are false or that I don't want to be true. Also, I try to notice when something I agree with is asserted without good evidential support. And I don't think you supported your conclusions there with real evidence.

You don't have to treat anyone badly because of IQ. IQ does not equal worth. You don't use a battery of statistics on test scores, crime rates, graduation rates, etc. to determine how you will treat individuals. You continue to behave according to your values.

This reads more to me like prescriptive signaling than like evidence. While it is very likely to be the case that "IQ test results" are not the same as "human worth", it doesn't follow that an arbitrary person would not change their behavior towards someone who is "measurably not very smart" in any way that dumb person might not like. And for some specific people (like WrongBot by the admission of his or her own fears...

6[anonymous]
Those are good points. What I was trying to encourage was a practice of trusting your own strength. I think that morally conscientious people (as I suspect WrongBot is) err too much on the side of thinking they're cognitively fragile, worrying that they'll become something they despise. "The best lack all conviction, while the worst are full of passionate intensity." Believing in yourself can be a self-fulfilling prophecy; believing in your own ability to resist becoming a racist might also be self-fulfilling. There's plenty of evidence for cognitive biases, but if we're too willing to paint humans as enslaved by them, we might actually decrease rationality on average! That's why I engaged in "prescriptive signaling." It's a pep talk. Sometimes it's better to try to do something than to contemplate excessively whether it's possible.
6Jonathan_Graehl
Why should your behavior be unaffected? If you want to spend time evaluating a person on their own merits, surely you still can.
1WrongBot
Just because I'll be able to do something doesn't mean that I will. I can resolve to spend time evaluating people based on their own merits all I like, but that's no guarantee at all that the resolution will last.
8[anonymous]
You seem to think that anti-bigots evaluate people on their merits more than bigots do. Why? If you're looking for a group of people who are more likely to evaluate people on their merits, you might try looking for a group of people who are committed to believing true things.
4twanvl
Group statistics give only a prior, and just a few observations of any individual will overwhelm it. And if you start discriminating against gays because they have low average intelligence, then you should discriminate even more against low intelligence itself. It is not the gayness that is the important factor in that case; it just has a weak correlation.
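To make that concrete, here is a minimal sketch of the standard normal-normal Bayesian update; all the numbers (the one-point gap between group priors, the observation noise) are illustrative assumptions, not real statistics:

```python
# A minimal sketch: a group average acts only as a prior, and a few
# individual observations swamp it. All numbers are made up.

def posterior_mean(prior_mean, prior_var, observations, obs_var):
    """Posterior mean of a normal mean with known observation noise."""
    n = len(observations)
    precision = 1 / prior_var + n / obs_var
    weighted_sum = prior_mean / prior_var + sum(observations) / obs_var
    return weighted_sum / precision

prior_var = 15.0 ** 2             # population spread on an IQ-style scale
obs_var = 10.0 ** 2               # noise in any single observation of a person
evidence = [125.0, 130.0, 128.0]  # a few strong observations of one individual

for group_mean in (100.0, 99.0):  # two group priors one point apart
    print(group_mean, "->", round(posterior_mean(group_mean, prior_var,
                                                 evidence, obs_var), 2))
# Prints roughly 124.1 and 123.97: after just three observations, the
# one-point difference between the group priors has shrunk to ~0.13.
```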
3daedalus2u
I see the problem of bigotry in terms of information and knowledge: bigotry occurs when there is too little knowledge. I have quite an extensive blog post on this subject. http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html My conceptualization of this may seem contrived, but I give a much more detailed explanation on my blog along with multiple examples. I see it as essentially the lack of an ability to communicate with someone that triggers xenophobia. As I see it, when two people meet and try to communicate, they do a “Turing test”, where they exchange information and try to see if the person they are communicating with is “human enough”: human enough to communicate with, be friends with, trade with, or simply human enough to not kill. What happens when you try to communicate is that you both use your “theory of mind”, what I call the communication protocols that translate the mental concepts you have in your brain into the data stream of language that you transmit: sounds, gestures, facial expressions, tone of voice, accents, etc. If the two “theories of mind” are compatible, then communication can proceed at a very high data rate, because the two theories of mind do so much data compression to fit the mental concepts into the puny data stream of language and then extract them from the data stream. However, if the two theories of mind are not compatible, then the error rate goes up, and then via the uncanny valley effect xenophobia is triggered. This initial xenophobia is a feeling, and so is morally neutral. How one then acts is not morally neutral. If you seek to understand the person who has triggered xenophobia, then your theory of mind will self-modify and eventually you will be able to understand the person, and the xenophobia will go away. If you seek to not understand the individual, or block that understanding, then the xenophobia will remain. It is exactly analogous to Nietzsche's quote “if you look i
1satt
I will not, but...

With bigotry, I think the real problem is confirmation bias. If I believe, for example, that orange-eyed people have an average IQ of only 99, and that's true, then when I talk to orange-eyed people, that belief will prime me to notice more of their faults. This would cause me to systematically underestimate the intelligence of orange-eyed people I met, probably by much more than 1 IQ point. This is especially likely because I get to observe eye color from a distance, before I have any real evidence to go on.

In fact, for the priming effect, the magnitude of the real statistical correlation doesn't matter at all in most people. Hence the resistance to acknowledging even tiny, well-proven differences between races and genders: they produce differences in perception that are not necessarily on the same order of magnitude as the differences in reality.
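A toy simulation of that claim; the size of the priming penalty is a pure assumption, and the only point is that the perceived gap tracks the priming, not the real gap:

```python
# A toy model: whatever the true gap is, the perceived gap is dominated
# by the priming bias. TRUE_GAP and PRIMING_BIAS are made-up numbers.
import random

random.seed(0)
TRUE_GAP = 1.0       # the real average difference, in IQ points
PRIMING_BIAS = 5.0   # extra faults noticed once the belief primes you

def perceived_iq(orange_eyed):
    true_iq = random.gauss(100.0 - (TRUE_GAP if orange_eyed else 0.0), 15.0)
    return true_iq - (PRIMING_BIAS if orange_eyed else 0.0)

n = 100_000
orange = sum(perceived_iq(True) for _ in range(n)) / n
others = sum(perceived_iq(False) for _ in range(n)) / n
print(f"real gap: {TRUE_GAP}, perceived gap: {others - orange:.1f}")
# Prints a perceived gap of about 6 points: the 1-point reality plus the
# 5-point priming term, with the priming term doing almost all the work.
```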

16lmnop

This is exactly the crux of the argument. When people say that everyone should be taught that people are the same regardless of gender or race, what they really mean isn't that there aren't differences on average between women and men, etc. What they mean is that being taught about those small differences will cause enough people to significantly overshoot via confirmation bias that it will overall lead to more misjudgments of individuals than if people weren't taught about those small differences at all; hence people shouldn't be taught about them. I am hesitantly sympathetic to this view; it is borne out in many of the everyday interactions I observe, including those involving highly intelligent aspiring rationalists.

This doesn't mean we should stop researching gender or race differences, but that we should simultaneously research the effects of people learning about this research: how big are the differences in the perception vs the reality of those differences? Are they big enough that anyone being taught about gender and race differences should also be taught about of the risk of them systematically misjudging many individuals because of their knowledge, and warned to rem...

2Emile
Those are real and important effects (that should probably have been included in the original post). A problem with avoiding knowledge that could lead you to discriminate is that it makes it hard to judge some situations - did James Watson, Larry Summers and Stephanie Grace deserve a public shaming?
4MichaelVassar
Stephanie Grace, definitely not; she was sharing thoughts privately. Summers? Not for sexism; he seemed honest and sincere in a desire to clarify issues and reach truth, but he displayed stupidity and gullibility which should be cause for shame in his position at Harvard, and to some degree as a broad social scientist and policy adviser, though not as an economic theorist narrowly construed. Watson, probably. He said something overtly and exaggeratedly negative, said it publicly and needlessly, and has a specific public prestige which makes his words more influential. It's unfortunate that he didn't focus on some other issue, and public shame of this sort might reduce such unfortunate occurrences in the future.
0Emile
I wasn't really looking for answers to that question, I was trying to say that if we avoid "dangerous information" (to avoid confirmation bias, etc.), and encourage others to avoid it too, we're making it harder to answer questions like that.

Bryan Caplan argues against the "corrupted by power" idea with an alternative view: they were corrupt from the start, which is why they were willing to go to such extremes to attain power.

Around the time I stopped believing in God and objective morality, I came around to Stirner's view: such values are "geists" haunting the mind, often distracting us from factual truths. Just as I stopped reading fiction for reasons of epistemic hygiene, I decided that chucking morality would serve a similar purpose. I certainly wouldn't trust myself to selectively filter any factual information. How can the uninformed know what to be uninformed about?

I've observed that quite a bit of the disagreement with the substance of my post is due to people believing that the level of distrust for one's own brain that I advocate is excessive. (See this comment by SarahC, for example.)

It occurs to me that I should explain exactly why I do not trust my own brain.

In the past week I have noted the following instances in which my brain has malfunctioned; each of them is a class of malfunction I had never previously observed in myself:

(It may be relevant to note that I have AS.)

  • I needed to open a box of plastic wrap, of the sort with a roll inside a box, a flap that lifts up, and a sharp edge under the flap. The front of the box was designed such that there were two sections separated by some perforation; there's a little set of instructions on the box that tells you to tear one of those sections off, thus giving you a functional box of plastic wrap. I spent approximately five minutes trying to tear the wrong section off, mangling the box and cutting my finger twice in the process. This was an astonishing failure to solve a basic physical task.

  • I was making bread dough, a process which necessitates measuring out 4.5 cups of flour into a bowl

...
4David_Gerard
Someone has to write this game.
4WrongBot
I'm imagining some kind of sliding-block puzzle game, with each block as a symbol or logical operator. You start off with some axioms and then have to go through and construct proofs for progressively more complex first-order logic expressions. Or maybe a game that does for syllogisms what Manufactoria does for Turing Machines. (Memetic hazard warning!) This could be promising...
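As a very rough sketch of the kind of kernel such a game could be built around (my own toy, not an actual design), the core loop is just forward chaining by modus ponens until the target expression is derived:

```python
# A toy forward-chaining prover: repeatedly apply rules whose premises
# are already known until the goal appears or nothing new can be derived.

def forward_chain(axioms, rules, goal, max_steps=100):
    known = set(axioms)
    for _ in range(max_steps):
        new = {head for premises, head in rules
               if premises <= known and head not in known}
        if not new:
            break
        known |= new
        if goal in known:
            return True
    return goal in known

# A hypothetical two-step puzzle level.
axioms = {"man(socrates)"}
rules = [
    ({"man(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)"}, "will_die(socrates)"),
]
print(forward_chain(axioms, rules, "will_die(socrates)"))  # True
```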
2Swimmer963 (Miranda Dixon-Luinenburg)
I have a tendency to do this if I want to solve a basic task and someone is watching me, especially a teacher. (I'm in nursing school, so a lot of my evaluations consist of my teacher watching me assemble equipment, not something I'm talented with to begin with.) Alone, I'll just start experimenting with different ways until I find one that works, but if I'm being watched and implicitly evaluated, paradoxically enough I'll keep trying the same failed way over again until they correct me. I don't know if this is a weird illogical attempt to avoid embarrassment, or if I'm subconsciously trying to hasten the moment that they'll just go ahead and tell me, or if it's just because enough of my brain is taken up worrying about someone watching me that the leftovers aren't capable of thinking about the task, and just default to random physical actions. I do this all the time, too. Maybe because my default state, when I'm alone and not under pressure to do something, is a kind of relaxed spacey-ness where I let my thoughts go on whatever association trains they please. People make fun of me for this, and it is irritating, but it's something I'm slowly learning to "switch off" when I really, really have to be focusing my whole attention on something. This kind of thinking happens to me all the time in the state between sleeping and waking, or during dreams themselves. It's occasionally happened to me while awake. I don't find it particularly concerning, since it's easy to notice and wears off fast.
0hesperidia
Noting that this thread is nearly two years old: AS is highly correlated with deficiency in executive function. This would explain the bread incident, although not the other two.
0WrongBot
In the intervening time I've also been convinced that I have ADD, or at least something that looks like it. My executive function is usually pretty decent.
0gwern
Is 'AS' supposed to mean 'Asperger's Syndrome'? I was thinking so and the bread incident does sound like an executive control problem, but the third TV incident sounds more like a schizophrenic sort of hallucination.
2Blueberry
It sounds like the type of unusual, creative, synaesthetic association that can occur under the influence of cannabis or psilocybin mushrooms, or just sleep deprivation.
0Vaniver
About half of my caloric intake is bread I bake, and I am terrible at counting. I keep a stack of pennies handy for exactly this reason.
15red75

One specific and relatively common version of this involves people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman.

Your evidence is not quite about beliefs. I think the correct version is:

People who don't mind sharing that they believe that women have a lower... etc.

7Douglas_Knight
Another version is that bigots can't shut up about it.
4A1987dM
Yeah. I have that belief too, but I don't point it out unless that's particularly relevant to the conversation, nor do I try to steer conversations towards that region of topicspace unless I have some compellingly strong reason to do that.

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

This is something I haven't observed, but it's seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don't they get the sort of publicity that studies which show differences get?


Speaking of AIs getting out of the box, it's conceivable to me that an AI could talk its way out. It's a lot less plausible that an AI could get it right the first time.


And here's a thought which may or may not be dangerous, but which spooked the hell out of me when I first realized it.

Different groups h...

Different groups have different emotional tones . . . (nicer, more honest, more fun, more dignified, etc.).

Downvotes have caused me to put a lot of effort into changing the tone of my communications on Less Wrong so that they are no longer significantly less agreeable (nice) than the group average.

In the early 1990s the newsgroups about computers and other technical subjects were similar to Less Wrong: mostly male, mean IQ above 130, vastly denser in libertarians than the population of any country, the best place online for people already high in rationality to improve their rationality.

Aside from differences in the "shape" of the conversation caused by differences in the "mediating" software used to implement the conversation, the biggest difference between the technical newsgroups of the early 1990s and Less Wrong is that the tone of Less Wrong is much more agreeable.

For example, there was much less evidence IIRC of a desire to spare someone's feelings on the technical newsgroups of the early 1990s, and flames (impassioned harangues of a length almost never seen in comments here and of a level of vitriol very rare here) were very common -- but then again the mediating software probably pulled for deep nesting of replies more than Less Wrong's software does, and most of those flames occurred in very deeply nested flamewars with only 2 or 3 participants.

2Nick_Tarleton
Having seen both types of tone, which do you think is more effective in improving rationality and sharing ideas?
5RHollerith
The short answer is I do not know. The slightly longer answer is that it probably does not matter unless the niceness reaches the level at which people become too deferential towards the leaders of the community, a failure mode that I personally do not worry about. Parenthetically, none of the newsgroups I frequented in the 1990s had a leader unless my memory is epically failing me right now. Erik Naggum came the closest (on comp.lang.lisp) but the maintenance of his not-quite-leader status required him to expend a prodigious amount of time (and words) to continue to prove his expertise and commitment to Lisp and to browbeat other participants. (And my guess is that the constant public browbeating cost him at least one consulting job. It certainly did not make him look attractive.) The most likely reason for the emotional tone of LW is that the participants the community most admire have altruism, philanthropy or a refined kind of friendliness as one of their primary motivations for participation, and for them to maintain a certain level of niceness is probably effortless or well-rehearsed and instrumentally very useful. Specifically, Eliezer and Anna have altruism, philanthropy or human friendliness as one of their primary motivations with probability .9. There are almost certainly others here with that as one of the primary motivations, but they are hard for me to read or I just do not have enough information (in the form of either a large body of online writings like Eliezer's or sufficient face time) to form an opinion worth expressing. More precisely, if they were less nice than they are, it would be difficult for them to fulfill their mission of improving people's rationality and networking to reduce e-risks, but if they were too nice it would have too much of an inhibitory effect on the critical (judgemental) faculties of them and their interlocutors, so they end up being less nice than the average suburban Californian, say, but significantly nicer than the

I’ve also observed that people who come to believe that there are significant differences between the sexes/races/whatevers on average begin to discriminate against all individuals of the disadvantaged sex/race/whatever, even when they were only persuaded by scientific results they believed to be accurate and were reluctant to accept that conclusion. I have watched this happen to smart people more than once. Furthermore, I have never met (or read the writings of) any person who believed in fundamental differences between the whatevers and who was not also to some degree a bigot.

This is something I haven't observed, but it's seemed plausible to me anyway. Have there been any studies (even small, lightweight studies with hypothetical trait differences) showing that sort of overshoot? If there are, why don't they get the sort of publicity that studies which show differences get?

I would also be interested in hearing if there are any studies on this subject. For me, much of WrongBot's argument hangs on how accurate these observations are. I'm still not sure I'd agree with the overall point, but more evidence on this point would make me much more inclined to consider it.

Also, Wrong... (read more)

3WrongBot
I didn't look hard enough for more evidence for this post, and I apologize. I've recently turned up:

  • A study on clapping which indicated that people believe very strongly that they can distinguish between the sounds of clapping produced by men and women, when in reality they do only slightly better than chance. The relevant section starts at the bottom of the 4th page of that PDF. This is weak evidence that beliefs about gender influence a wide array of situations, often unconsciously.
  • This paper on sex-role beliefs and sex-difference knowledge in schoolteachers may be relevant, but it's buried behind a pay-wall.
  • Lots of studies like this one have documented how gender prejudices subconsciously affect behavior.
  • And here's a precise discussion of exactly the effect I was describing. Naturally, it too is behind a pay-wall.
4Morendil
Yes, if you have gained temporary influence over others one of the ways you can put that to further use is by trading that influence into an environment that accords with your preferences. Regardless of how it comes to be established as a social norm, it could be that a particular tone is more suited to a particular purpose, for instance truth-seeking or community-building or fund-raising. (For instance, academics have a strong norm of writing in an impersonal tone, usually relying on the passive voice to achieve that. This could either be the result of contingent pressure exerted by the people who founded the field, or it could be an antidote to inflamed rhetoric which would detract from the arguments of fact and inference.)
0Sniffnoy
What exactly is spent here? It looks like this is something someone with enough status in the group can do "for free".
1Morendil
I don't think it's ever free to use your influence over a group. Do it too often, and you come across as a despot. As a local example, Eliezer's insistence on the use of ROT13 for spoilerish comments carried through at some status "cost" when a few dissenters objected.
-1Jonathan_Graehl
Your point about tone being set top-down (by the high-status, or by inertia in the established community) seems to me to explain why there are so many genuinely vicious people among netizens who talk rationally and honestly about differences in populations (essentially anti-PC) - even beyond what you'd expect from their rebelling against an explicit "be nice" policy that most people assent to.
0NancyLebovitz
I'm not sure about the connection you're making. Is it combining my points that tone is set from the top, and people are apt to overshoot their prejudices beyond their evidence?
-1Jonathan_Graehl
My old theory about the nastiness of some anti-PC reactionaries was that they came to their view out of some animus. Your suggestion that communities' tones may be determined by that of a small number of incumbents serves as an alternative, softening explanation.
0NancyLebovitz
I think it's complicated. Some of it probably is animus, but it wouldn't surprise me if some of it isn't about the specific topic so much as resentment at having the rules changed with no acknowledgement made that rule changes have costs for those who are obeying them.
[-]xamdam120

I think this is a worthwhile discussion.

Here are some "true things" I don't want to know about:

  • the most catchy commercial jingle in the universe
  • what 2g1c looks like. I've managed to avoid it thus far
  • the day I am going to die

I'm surprised about the last one. I think it would be quite helpful if you could be prepared for that.

The other two are experiences you wouldn't like to have. If you had the indexical knowledge of what the catchiest jingle was, you could better avoid hearing it.

0xamdam
That's a big if ;) I am not.
[-][anonymous]110

I have to admit there's information I shield myself from as well.

  1. I don't like watching real people die on video. I worry about getting desensitized/dehumanized.

  2. I don't want to see 2g1c either. (by extension, most of the grungier parts of the intertubes.)

  3. I don't want to know (from experience) what heroin feels like.

I do know people who believe in total desensitization -- they think that the reflex to shudder or gag is something you have to burn out of yourself. I don't think I want that for myself, though.

5gwern
You know, those shock videos are not as bad as they look. 2g1c is usually thought to be something along the lines of chocolate, and the infamous Tubgirl is known to be just orange juice. (Which makes sense; eating feces is a good way to get sick.)
7Emile
If you tell me the wild boar
Has twenty teeth, I'll say, "Why sure."
Or say that he has thirty three,
That number is quite all right with me.
Or scream that he has ninety-nine,
I'll never say that you are lyin',
For the number of teeth
In a wild boar's mouth
Is a subject I'm glad
I know nothing about.

-- Shel Silverstein
3ABranco
It's not obvious that knowing more always makes us better off, because the landscape of rationality is not smooth. The quote on Eliezer's site stating that "That which can be destroyed by the truth should be." sounded to me like too strong a claim from the very first time I read it. Many people cultivate falsehoods or use blinkers that are absolutely necessary to the preservation of their sanity (sic), and removing them could terribly jeopardize their adaptability to the environment. It could literally kill them.
0[anonymous]
I suppose this translates to things you already know, but don't want to consciously attend to. For instance, I feel compelled by the Essendon Football Club's slogan. While I am tempted to mull it over for a while to dissect its secrets, I am unlikely, from experience, to get anything meaningful out of the exercise that I could apply to increase any consequential skill set. Therefore, I'll attend to some other thought associated with my immediate environmental stimuli.
[-]satt120

Here's something that might work as an alternative example that doesn't imply as much bigotry on anybody's part: a PNAS study from earlier this year found that during a school year, schoolgirls with more maths-anxious female maths teachers appear to develop more stereotyped views of gender and maths achievement, and do less well in their maths classes.

Let's suppose the results of that study were replicated and extended. Would a female maths teacher be justified in refusing to think about the debate over sex and IQ/maths achievement, on the grounds that doing so is likely to generate maths anxiety and so indirectly harm their female students' maths competence?

[Edited so the hyperlink isn't so long & ugly.]

[-]knb120

I really disagree with your argument, WrongBot. First of all, I think responding appropriately to "dangerous" information is an important task, and one which most LW folks can achieve.

In addition, I wonder if your personal observations about people who become bigots by reading "dangerous content" are actually accurate. People who are already bigots (or are predisposed to bigotry) are probably more likely to seek out data that "confirms" their assumptions. So your anecdotal observation may be produced by a selection effect.

At bare minimum, you should give us some information about the sample your observations are based on. For example you say:

One specific and relatively common version of this is people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.

This could mean you've met a couple people like this, and never met anyone else who has encountered this dat... (read more)

[-]Emile110

This seems to be bordering on Dark Side epistemology - and doesn't seem very well aligned with the name of this site.

Another argument against digging into some of the red-flag issues is that you might acquire unpopular opinions, and if you're bad at hiding those, you might suffer negative social consequences.

1WrongBot
Dark Side epistemology is about protecting false beliefs, if I understand the article correctly. I'm talking about protecting your values.
7Vladimir_Nesov
Anti-epistemology (the updated term for the concept) is primarily about developing immunity to rational argument, allowing you to stop the development of your understanding (of factual questions, or of moral questions) and keep incorrect answers (that usually signal belonging to a group) indefinitely. In worse forms, it fosters the development of incorrect understanding as well.

I agree with the overall point: certain thoughts can make you worse off.

Whether it's difficult to judge which information is dangerous, and whether given heuristics for judging that will turn into an anti-epistemic disaster, is about solving the problem, not about the existence of the problem. In fact, a convincing argument for using a flawed knowledge-avoiding heuristic would itself be the kind of knowledge one should avoid being exposed to.

If we have an apparently unsolvable problem, with most hypothetical attempts at solution leading to disaster, we shoul... (read more)

This advice bothers me a lot. Labeling possibly true knowledge as dangerous knowledge (as in the example with statements about average behavior of groups) is deeply worrisome and is the sort of thing that, if one isn't careful, would be used by people to justify ignoring relevant data about reality. I'm also concerned that this piece conflates actual knowledge (as in empirical data) and things like group identity, which seem to be not so much knowledge as a value association.

I am grouping together "everything that goes into your brain," which includes lots and lots of stuff, most of it unconscious. See research on priming, for example.

This argument is explicitly about encouraging people to justify ignoring relevant data about reality. It is, I recognize, an extremely dangerous proposition, of exactly the sort I am warning against!

At risk of making a fully general counterargument, I think it's telling that a number of commenters, yourself included, have all but said that this post is too dangerous.

  • You called it "deeply worrisome."
  • RichardKennaway called it "defeatist scaremongering."
  • Emile thinks it's Dark Side Epistemology. (And see my response.)

These are not just people dismissing this as a bad idea (which would have encouraged me to do the same); these are people worrying about a dangerous idea. I'm more convinced I'm right than I was when I wrote the post.

8Vladimir_Nesov
Heh. So most of the critics argue their disapproval of the argument in your post based essentially on the same considerations as discussed in the post.
6Jonathan_Graehl
It doesn't make you right. It just makes them as wrong (or lazy) as you. If you feel afraid that incorporating a belief would change your values, that's fine. It's understandable that you won't then dispassionately weigh the evidence for it; perhaps you'll bring a motivated skepticism to bear on the scary belief. If it's important enough that you care, then the effort is justified. However, fighting to protect your cherished belief is going to lead to a biased evaluation of evidence, so refusing to engage the scary arguments is just a more extreme and honest version of trying to refute them. I'd justify both practices situationally: considering the chance you weigh the evidence dispassionately but get the answer quite wrong (even your confidence estimation is off), you can err on the side of caution in protecting your most cherished values. That is, your objective function isn't just to have the best Bayesian-rational track record.
3Bongo
Your post is not dangerous knowledge. It's dangerous advice about dangerous knowledge.
3mattnewport
Becoming more convinced of your own position when presented with counterarguments is a well known cognitive bias.
3WrongBot
Knowing about biases may have hurt you. The counterarguments are not what convinced me; it's that the counterarguments describe my post as bad because it belongs to the class of things that it is warning against. There are other counterarguments in the comments here that have made me less convinced of my position; this is not a belief of which I am substantially certain.
2JoshuaZ
"Deeply worrisome" may have been bad wording on my part. It might be more accurate to say that this is an attitude which is so much more often wrong than right that it is better to acknowledge the low probability of such knowledge existing but not actually deliberately keep knowledge out.

One specific and relatively common version of this is people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman. There may be exceptions, but I haven’t met them.

I'm skeptical of the notion that people tend to lower their intelligence estimates of women they meet as a result of this as opposed to using it as an excuse to reinforce their preexisting inclination to have a lower intelligence estimate of women than of men.

0Dmytry
Ya. Plus, technically, a smaller standard deviation makes for extreme differences in frequencies at the high end of the IQ range (not that I believe in it or anything).
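To make the tail arithmetic concrete, here is a minimal sketch (with purely illustrative, hypothetical numbers; it assumes normal distributions and takes no position on any empirical claim) of how even a modest difference in standard deviation, with identical means, produces large frequency ratios far out in the tail:

```python
from scipy.stats import norm

# Two hypothetical populations with the SAME mean (100) but slightly
# different standard deviations -- the numbers are made up for illustration.
MEAN = 100.0
SD_A, SD_B = 15.0, 13.0

for cutoff in (130, 145, 160):
    # norm.sf gives the survival function: the fraction above the cutoff.
    frac_a = norm.sf(cutoff, loc=MEAN, scale=SD_A)
    frac_b = norm.sf(cutoff, loc=MEAN, scale=SD_B)
    print(f"above {cutoff}: A/B frequency ratio = {frac_a / frac_b:.1f}")
```

With these made-up parameters the ratio is about 2 at 130 but roughly 16 at 160: differences in spread that are invisible near the mean dominate in the extremes.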

I agree with the main point of this post, but I think it could have used a more thorough, worked out example. Identity politics is probably the best example of your point, but you barely go into it. Don't worry about redundancy too much; not everyone has read the original posts.

FWIW, my personal experience with politics is an anecdote in your favor.

[-][anonymous]80

One specific and relatively common version of this is people who believe that women have a lower standard deviation on measures of IQ than men. This belief is not incompatible with believing that any particular woman might be astonishingly intelligent, but these people all seem to have a great deal of trouble applying the latter to any particular woman.

I don't think that this requires a utility-function-changing superbias. Alternatively: We think sloppily about groups, flattening fine distinctions into blanket generalizations. This bias takes the fact ... (read more)

9RobinZ
One argument you could give a Less Wrong audience is that the information about intelligence you could learn by learning someone's gender is almost completely screened off by the information content gained by examining the person directly (e.g. through conversation, or through reading research papers).
9lmnop
That is exactly what should happen, but I suspect that in real life it doesn't, largely because of anchoring and adjustment.

Suppose I know the average intelligence of a member of Group A is 115, and the average intelligence of a member of Group B is 85. After meeting and having a long, involved conversation with a specific member of either group, I should probably toss out my knowledge of the average intelligence of their group and evaluate them based on the (much more pertinent) information I have gained from the conversation. But if I behave like most people do, I won't do that. Instead, I'll adjust my estimate from the original estimate supplied by the group average.

Thus, my estimate of the intelligence of a particular individual from Group A will still be very different than my estimate of the intelligence of a particular individual from Group B with the same actual intelligence, even after I have had a conversation (or two, or three) with both of them. How many conversations does it take for my estimates to converge? Do my estimates ever converge?
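For an ideal reasoner, at least, the convergence question has a clean answer. Below is a minimal simulation sketch (all numbers hypothetical; it assumes Gaussian group priors and Gaussian "conversation noise", a deliberate oversimplification) showing how fast exact Bayesian updating closes the gap between the two group-anchored estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_ABILITY = 100.0  # the individual's actual level (hypothetical)
PRIOR_SD = 15.0       # spread of the group-based prior
NOISE_SD = 10.0       # how noisy one conversation's evidence is

def posterior_mean(prior_mu, observations):
    """Exact Gaussian posterior mean: precision-weighted average of
    the prior mean and the mean of the observations."""
    n = len(observations)
    prec_prior = 1.0 / PRIOR_SD**2
    prec_obs = n / NOISE_SD**2
    return (prec_prior * prior_mu + prec_obs * np.mean(observations)) / (
        prec_prior + prec_obs
    )

# The same thirty "conversations" worth of noisy evidence for both observers.
obs = rng.normal(TRUE_ABILITY, NOISE_SD, 30)

for n in (1, 3, 10, 30):
    gap = posterior_mean(115.0, obs[:n]) - posterior_mean(85.0, obs[:n])
    print(f"after {n:2d} conversations the group-anchored estimates differ by {gap:4.1f}")
```

Under these assumptions the 30-point prior gap drops to roughly 9 points after one conversation and under 1 point after thirty, so a true Bayesian's group priors wash out quickly. Anchoring and adjustment amounts to keeping extra, unwarranted weight on the prior term, in which case the gap shrinks far more slowly, and never closes at all if the adjustment is a fixed fraction of the anchor.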
5mattnewport
If your goal is to accurately judge intelligence this may not be a good approach. Universities moved away from basing admissions decisions primarily on interviews and towards emphasizing test scores and grades because 'long, involved conversation' tends to result in more unconscious bias than simpler, more objective measures when it comes to judging intelligence (at least as it correlates with academic achievement). Unless you have strong reason to believe that all the unconscious biases that come into play in face to face conversation are likely to be just about right to balance out any biases based on preconceptions of particular groups you are just replacing one source of bias (preconceived stereotypes based on group membership) with another (responses to biasing factors in face to face conversation such as physical attractiveness, accent, shared interests, body language, etc.)

Actually, I think that if differences in group (sex, race, ethnicity, class, caste) intelligence (IQ) means and distributions proved to be of genetic origin, this would be a net gain in utility, since it would increase public acceptance of genetic engineering and spending on gene-based therapies.

BTW, we already know that the differences are real, in the sense that they are measured and we have tried our very best to get rid of, say, cultural bias; proving that they aren't culturally biased is impossible, so it's misleading to talk of "if differences proved to be real"... (read more)

4Zack_M_Davis
I think you may be confusing Richard Lynn (author of such books as Race Differences in Intelligence: An Evolutionary Analysis) with James Flynn (of Flynn effect fame).
0Simplicius
Yes I actually did. Corrected. This is an interesting failure since before I checked back on this post I was 100% certain I put James Flynn.
5wedrifid
100% certain and wrong? Ooops, there goes your entire epistemic framework. :)
2Simplicius
Lol yes I see why using that phrase on this site is a bit funny. Still updating on the language used here. Wonderful site.

WrongBot: Brendan Nyhan, the Robert Wood Johnson scholar in health policy research at the University of Michigan, spoke today on Public Radio's "Talk of the Nation" about a bias that may be reassuring to you. He calls it the "backfire effect". He says new research suggests that misinformed people rarely change their minds when presented with the facts -- and often become even more attached to their beliefs. The Boston Globe reviews the findings here as they pertain to politics. If this is correct, it seems quite likely that if you hav... (read more)

Certain patterns of input may be dangerous, but knowledge isn't a pattern of input; it can be formatted in a myriad of ways, and it's not generally that hard to find a safe one. There's a picture of a french fry that crashes AOL Instant Messenger, but that doesn't mean it's the french fry that's the problem. It's just the way it's encoded.

I'm working on something on the subject of dangerous and predatory memes. And oh yes, predatory memes exist.

Please read this thread. When anyone talks about this sort of thing, the first reaction is "It can't happen to me, I'm far too smart for that". When it is pointed out how many people who fell for such things thought precisely that, the next reaction is a longer and more elaborate version of "It can't happen to me, I'm far too smart for that".

I'm thinking the very hardest bit is going to be getting across to people that it can happ... (read more)

5Vladimir_Nesov
Certainly there are people who can't be infected with strong cultish memes, and when those people believe that it can't happen to them, they are correct. There are also people who believe so incorrectly, but this is not a strong argument for impossibility of holding that belief correctly. You seem to be overstating the case, implying undue confidence.
2David_Gerard
Yes, I seem to be stating it as 1 rather than as a high probability. This is hyperbole, sorry. What I mean to get across is that it's higher than most people think. Particularly ones who consider that they think better than others.

Thinking better than most people isn't actually that hard, and with enough Less Wrong you may think quite a lot better. You still have all your cognitive biases - they're in the buggy, corrupt hardware. Knowing about them doesn't grant you immunity to them. WrongBot gives an anecdote of just how wrong a brain can be. You Are Not So Smart's about page gives a summary of the problem and the blog itself gives the examples.

I try to notice my own stupidities and I miss a ton (my loved ones are happy to help my awareness). In general, people don't have a keen sense for their own stupidities, and learning how to be rational can induce a hubris where one thinks one isn't susceptible any more. (What is the correct term for this bias?)

I do think it likely that any mind will have susceptibilities and exploits. Consider the AI box experiment. Even a human can think of an argument to convince a human to do the thing they really, really shouldn't when the subject knows the game and that the game is on; what could a human or evolved meme do when the subject isn't aware the game is on or that there's a game?
0Vladimir_Nesov
What are you arguing for using these arguments? Being protected from cults doesn't require lack of bias, and indeed lack of bias is an unattainable idealization. If you argue that presence of biases knowably confers overconfidence in the belief "I can't be captured by a cult", then correcting for that knowable bias leaves you no longer knowably biased. Since this can be said about any belief, it's not clear why it should be said about this particular one, unless you believe that this belief is more systematically incorrect than others. But then you need to argue about what distinguishes this belief from others, not about presence of bias in general. That people are not perfectly rational is not a general argument against any belief. Contrived scenarios can surprise any belief, however correct about expected scenarios.

There has not yet been a truly benevolent dictator and it would be delusional at best to believe that you will be the first.

This is true approximately to the extent that there has never been a truly benevolent person. Power anti-corrupts.

9gwern
I don't understand your second sentence.

I believe that what he's saying is that with power, people show their true colors. Consciously or not, nice people may have been nice because it benefitted them to. The fact that there were too many penalties for not being nice when they didn't have as much power was a "corruption" of their behavior, in a sense. With the power they gained, the penalties didn't matter enough compared to the benefits.

0Blueberry
Wow, you're really good at interpreting cryptic sentences!
1xamdam
I think "Elementary, dear Watson" was in order ;)
5Douglas_Knight
In favor of the "power just allows corrupt behavior" theory, Bueno de Mesquita offers two very nice examples of people who each ruled two different states. One is Leopold II of Belgium, who simultaneously ruled Belgium and the Congo. The other is Chiang Kai-shek, who sequentially ruled China and Taiwan, allegedly rather differently. (I heard him speak about these examples in this podcast. BdM, Morrow, Siverson, and Smith wrote about Leopold here, gated.)

This post is seeing some pretty heavy downvoting, but the opinions I'm seeing in the comments so far seem to be more mixed; I suppose this isn't unusual.

I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was... (read more)

I've just identified something else that was nagging at me about this post: the irony of the author of this post making an argument that closely parallels an argument some thoughtful conservatives make against condoning alternative lifestyles like polyamory.

The essence of that argument is that humans are not sufficiently intelligent, rational or self-controlled to deal with the freedom to pursue their own happiness without the structure and limits imposed by evolved cultural and social norms that keep their baser instincts in check. That cultural norms exist for a reason (a kind of cultural selection for societies with norms that give them a competitive advantage) and that it is dangerous to mess with traditional norms when we don't fully understand why they exist.

I don't really subscribe to the conservative argument (though I have more sympathy for it than the argument made in this post) but it takes a similar form to this argument when it suggests that some things are too dangerous for mere humans to meddle with.

0WrongBot
While there are some superficial parallels, I don't think the two cases are actually very similar. Humans don't have a polyamory-bias; if the scientific consensus on neurotransmitters like oxytocin and vasopressin is accurate, it's quite the opposite. Deliberate action in defiance of bias is not dangerous. There's no back door for evolution to exploit.
3MichaelVassar
This just seems unreasoned to me.
0WrongBot
Erm, how so? It occurs to me that I should clarify: when I said "not dangerous," I meant that it is not dangerous thinking of the sort I have attempted to describe.
7MichaelVassar
Maybe I just don't see the distinction or the argument that you are making, but I still don't. Do you really think that thinking about polyamory isn't likely to impact values somewhat relative to unquestioned monogamy?
0WrongBot
Oh, it's quite likely to impact values. But it won't impact your values without some accompanying level of conscious awareness. It's unconscious value shifts that the post is concerned about.
1[anonymous]
How can you be so sure? As in, I disagree. How people value different kinds of sexual behaviours seems to be very strongly influenced by the subconscious.

I think it would've been better received if some attention had been given to defense mechanisms - i.e., rather than phrasing it as some true things being unconditionally bad to know, phrase it as some true things being bad to know unless you have the appropriate prerequisites in place. For example, knowing about differences between races is bad unless you are very good at avoiding confirmation bias, and knowing how to detect errors in reasoning is bad unless you are very good at avoiding motivated cognition.

8Tyrrell_McAllister
I upvoted your post, because I think that you raise a possibility that we should consider. It should not be dismissed out of hand. However, your examples do kind of suck :). As Sarah pointed out, none of us is likely to become a dictator, and dictators are probably not typical people. So the history of dictators is not great information about how we ought to tend to our epistemological garden. Your claims about how data on group differences in intelligence affect people would be strong evidence if it were backed up by more than anecdote and speculation. As it is, though, it is at least as likely that you are suffering from confirmation bias.
3WrongBot
Thank you. I should have held off on making the post for a few days and worked out better examples at the very least. I will do better.
6mattnewport
This, primarily. At least, obviously wrong by my value system, where believing true things is a core value. To the extent that this is also the value system of Less Wrong as a whole, it seems contrary to the core values of the site without acknowledging the conflict explicitly enough.

I didn't think the examples were very good either. I think the argument is wrong even for value systems that place a lower value on truth than mine, and the examples aren't enough to persuade me otherwise.

I also found the (presumably joking) comment about hunting down and killing anyone who disagrees with you jarring and in rather poor taste. I'm generally in favour of tasteless and offensive jokes but this one just didn't work for me.
6Vladimir_Nesov
Beware identity. It seems that a hero shouldn't kill, ever, but sometimes it's the right thing to do. Unless it's your sole value, there will be situations where it should give way.
0mattnewport
This seems like it should generally be true but in practice I haven't encountered any plausible examples where I prefer ignorance. This includes a number of hypotheticals where many people claim they would prefer ignorance which leads me to believe the value I place on truth is outside the norm. Truth / knowledge is a little paradoxical in this sense as well. I believe that killing is generally wrong but there is no paradox in killing in certain situations because it appears to be the right choice. The feedback effect of truth on your decision making / value defining apparatus makes it unlike other core values that might sometimes be abandoned.
0Vladimir_Nesov
I agree with this, my objection is to the particular argument you used, not necessarily the implied conclusion.
4Tyrrell_McAllister
I really don't think that the OP can be called "obviously wrong". For example, your brain is imperfect, so it may be that believing some true things makes it less likely that you will believe other more important true things. Then, even if your core value is to believe true things, you are going to want to be careful about letting the dangerous beliefs into your head.

And the circularity that WrongBot and Vladimir Nesov have pointed out rears its head here, too. Suppose that the possibility that I pose above is true. Then, if you knew this, it might undermine the extent to which you hold believing true things to be a core value. That is precisely the kind of unwanted utility-function change that WrongBot is warning us about.

It's probably too pessimistic to say that you could never believe the dangerous true things. But it seems reasonably possible that some true beliefs are too dangerous unless you are very careful about the way in which you come to believe them. It may be unwise to just charge in and absorb true facts willy-nilly.

Here's another way to come at WrongBot's argument. It's obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know. It's not obvious, but it is at least plausible, that the "harm" could be that the other person's utility function would change in a way that we don't want. At least, this is certainly not obviously wrong. The final step in the argument is then to acknowledge that the "other person" might be the part of yourself over which you do not have perfect control — which is, after all, most of you.
2mattnewport
I believe some other people's reports that there are things they would prefer not to know and would be inclined to honor their preference if I knew such a secret, but I can't think of any examples of such secrets for myself. In almost all cases I can think of, I would want to be informed of any true information that was being withheld from me. The only possible exceptions are 'pleasant surprises' that are being kept secret on a strictly time-limited basis to enhance enjoyment (surprise gifts, parties, etc.) but I think these are not really what we're talking about.

I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the 'more harm than good' would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld. Sometimes keeping such secrets might be the 'right thing' morally (the Nazi at the door looking for fugitives) but this is not because you are acting in the interests of the party from whom you are keeping information.
5Tyrrell_McAllister
Maybe this is an example: I was once working hard to meet a deadline. Then I saw in my e-mail that I'd just received the referee reports for a journal article that I'd submitted. Even when a referee report recommends acceptance, it will almost always request changes, however minor. I knew that if I looked at the reports, I would feel a very strong pull to work on whatever was in them, which would probably take at least several hours. Even if I resisted this pull, resistance alone would be a major tax on my attention. My brain, of its own accord, would grab mental CPU cycles from my current project to compose responses to whatever the referees said. I decided that I couldn't spare this distraction before I met my deadline. So I left the reports unread until I'd completed my project. In short, I kept myself ignorant because I expected that knowledge of the reports' contents would induce me to pursue the wrong actions.
7mattnewport
This is an example of a pretty different kind of thing to what WrongBot is talking about. It's a hack for rationing attention or a technique for avoiding distraction and keeping focus for a period of time. You read the email once your current time-critical priority was dealt with, you didn't permanently delete it. Such tactics can be useful and I use them myself. It is quite different from permanently avoiding some information for fear of permanent corruption of your brain. I'm a little surprised that you would have thought that this example fell into the same class of things as WrongBot or I were talking about. Perhaps we need to define what kinds of 'dangerous thought' we are talking about a little more clearly. I'm rather bemused that people are conflating this kind of avoidance of viscerally unpleasant experiences with 'dangerous thoughts' as well. It seems others are interpreting the scope of the article massively more broadly than I am.
3ABranco
Or putting it differently:

  • One thing is to operationally avoid gaining certain data at a certain moment in order to function better overall, because we need to keep our attention focused.
  • Another thing is to strategically avoid gaining certain kinds of information that could possibly lead us astray.

I'd guess most people here agree with the kind of "self-deception" that the former entails. And it seems that the post is arguing for this kind of "self-deception" in the latter case as well, although there isn't as much consensus; some people seem to welcome any kind of truth whatsoever, at any time.

However... it seems to me now that, frankly, both cases are incredibly similar! So I may be conflating them, too. The major difference seems to be the scale adopted: checking your email is an information hazard at that moment, and you want to postpone it for a couple of hours. Knowing about certain truths is an information hazard at this moment, and you want to postpone it for a couple of... decades. If ever. When your brain is strong enough to handle it smoothly.

It all boils down to knowing we are not robots, that our brains are a kludge, and that certain stimuli (however real or true) are undesired.
3Tyrrell_McAllister
I think that you can just twiddle some parameters with my example to see something more like WrongBot's examples. My example had a known deadline, after which I knew it would be safe to read the reports. But suppose that I didn't know exactly when it would be safe to read the reports. My current project is the sort of thing where I don't currently know when I will have done enough. I don't yet know what the conditions for success are, so I don't yet know what I need to do to create safe conditions to read the reports. It is possible that it will never be safe to read the reports, that I will never be able to afford the distraction of suppressing my brain's desire to compose responses.

My understanding is that WrongBot views group-intelligence differences analogously. The argument is that it's not safe to learn such truths now, and we don't yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we've figured out the safe conditions. That is my reading of the argument.
5WrongBot
More or less. I'm generally sufficiently optimistic about the future that I don't think that there are kinds of true knowledge that will continue to be dangerous indefinitely; I'm just trying to highlight things I think might not be safe right now, when we're all stuck doing serious thinking with opaquely-designed sacks of meat.
2HughRistik
Like Matt, I don't think your example does the same thing as WrongBot's, even with your twiddling. WrongBot doesn't want the "dangerous thoughts" to influence him to revise his beliefs and values. That wasn't the case for you: you didn't want to avoid revising your beliefs about your paper; you just didn't want to deal with the cognitive distraction of it during the short term. If you avoided reading your reports because you wanted to avoid believing that your article needed any improvement, then I think your situation would be more analogous to WrongBot's. But there's another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon. That's not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely. Putting oneself at risk of error for a short and capped time frame is much different from putting oneself at risk of error indefinitely.
3Tyrrell_McAllister
The beliefs that I didn't want to revise were my beliefs about the contents of the reports. Before I read them, my beliefs about their contents were general and vague. Were I to read the reports, I would have specific knowledge about what they said. My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project. Despite my intention to focus solely on my current project, my brain would allocate significant resources to composing responses to what I'd read in the reports. But in the "twiddled" version, I don't know when the safe conditions will occur . . . To be fair, WrongBot thinks that we will be able to learn this knowledge eventually. We just shouldn't take it as obvious that we know what the safe conditions are yet.
2HughRistik
I still say that there is a difference between what you and WrongBot are doing, even if you're successfully shooting down my attempts to articulate it. I might need a few more tries to be able to correctly articulate that intuition.

These are not the same types of values. You were worried about your values about priorities changing, while under time pressure. WrongBot is worried about his moral values changing about how he treats certain groups of people.

True, but there wasn't the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.

I would be interested to hear your opinion of a little thought experiment. What if I was a creationist, and you recommended me a book debunking creationism? I say that I won't read it because it might change my values, at least not until the conditions are safe for me. If I say that I can't read it this week because I have a deadline, but maybe next week, you'll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?

It seems that since we recognize that rationalists are human, we can and should give them a pass on scrutinizing certain thoughts or investigating certain ideas when they are under time pressure or emotional pressure in the short term, like in your example. But how long can one dodge inquiry in a certain area before one's rationalist creds become suspect?
0Tyrrell_McAllister
I'm having trouble seeing this distinction. What if I had a moral obligation to do as well as possible on my current project, because people were depending on me, say? My concern would be that, if I read the reports, I would feel a pull to act immorally. I might even rationalize away the immorality under the influence of this pull. In effect, I would act according to different moral values. Would that make the situation more analogous in your view, or would something still be missing?

I'm getting the sense that the problem with my example is that it has nothing to do with political correctness. Is it key for you that WrongBot wants to keep information out of his/her brain because of political correctness specifically?

I called it a "twiddled" version because I was thinking of the uncertainty as a continuous parameter that I could set to a wide spectrum of values. In the actual situation, the dial was pegged at "almost complete certainty". But I can imagine situations where I'm very uncertain. It looks like part of your problem with this is that such a quantitative change amounts to a qualitative change in your view. Is that right?

I take it that your concern would be that losing creationism would change your moral values in a dangerous way. Whether you are being rational then depends on what "put off reading it indefinitely" means. I would say that you are being rational to avoid the book for now only if you are making a good-faith effort to determine rationally the conditions under which it would be safe to read the book, with the intention of reading the book once you've found sufficiently safe conditions.
2mattnewport
Part of the problem I'm having with your example is my perception of the magnitude of the gap between what you are talking about and WrongBot's examples. While they share certain similarities it appears roughly equivalent to a discussion about losing your entire life savings which you are comparing to the time you dropped a dime down the back of the sofa. Sometimes a sufficiently large difference of magnitude can be treated for most purposes as a difference in kind.
2HughRistik
Quantity has a quality all of its own.
0Tyrrell_McAllister
What is the axis along which the gap lies? Is it the degree of uncertainty about when it will be safe to learn the dangerous knowledge?
7mattnewport
Multiple axes:

  • Degree of uncertainty about, and sheer length of, the time before it will be 'safe'.
  • Degree of effort involved in avoidance (temporarily holding off on reading a specific email vs. actively avoiding certain knowledge and filtering all information for a long and unspecified duration).
  • Severity of consequences (delayed or somewhat sub-standard performance on a near-term project deadline vs. fundamental change or damage to your core values).
  • Scope of filtering (avoiding the detailed contents of a specific email with a known and clearly delineated area of significance vs. general avoidance of whole areas of knowledge where you may not even have a good idea of what knowledge you may be missing out on).
  • Mental resources emphasized (short-term attentional resources vs. deeply considered core beliefs and modes of thought and high-level knowledge and understanding).
0HughRistik
That's part of it, and also how far into the future one thinks that might occur.
0WrongBot
In my perception, the gap is less about certainty and more about timescale; I'd draw a line between "in a normal human lifetime" and "when I have a better brain" as the two qualitatively different timescales that you're talking about.
2Tyrrell_McAllister
But this is the way to think of WrongBot's claim. The conscious you, the part over which you have deliberate control, is but a small part of the goal-seeking activity that goes on in your brain. Some of that goal-seeking activity is guided by interests that aren't really yours. Sometimes you ought to ignore the interests of these other agents in your brain. There is some possibility that you should sometimes do this by keeping information from reaching those other agents, even though this means keeping the information from yourself as well.

Your examples of "identity politics" and "power corrupts" don't seem to illustrate "dangerous knowledge". They are more like dangerous decisions. Am I missing the point?

8Vladimir_Nesov
Situations creating modes of thought that make your corrupted hardware turn you into a bad person.

Come to think of it, a related argument was made, poetically, in Watchmen: Dr. Manhattan knew everything, it clearly changed his utility function (he became less human), and he mentioned appreciating not knowing the future when Adrian blocked it with tachyons. Poetry, but something to think about.

2ABranco
He referred to something along the lines of "the sensation of being surprised", if I recall correctly. Would you choose to know everything, if you could, but then never have this sensation again?
1[anonymous]
Would you choose to never get sick, if you could, but then never have the sensation (of getting healthy) again?

This is completely wrong. You might as well tell a baby to avoid learning language, since this will change its utility function: it will begin to have an adult's utility function instead of a baby's.

Not to evoke a recursive nightmare, but some utility function alterations appear to be strictly desirable.

As an obvious example, if I were on a diet and I could rewrite my utility function such that the utilities assigned to consuming spinach and cheesecake were swapped, I see no harm in making that edit. One could argue that my second-order utility (and all higher) function should be collapsed into my first-order one, such that this would not really change my meta-utility function, but this issue just highlights the futility of trying to cram my complex, ... (read more)

5WrongBot
I wouldn't claim that any human is actually able to describe their own utility function; they're much too complex and riddled with strange exceptions and pieces of craziness like hyperbolic discounting. I also think that there's some confusion surrounding the whole idea of utility functions in reality, which I should have been more explicit about.

Your utility function is just a description of what you want/value; it is not explicitly about maximizing happiness. For example, I don't want to murder people, even under circumstances where it would make me very happy to do so. For this reason, I would do everything within my power to avoid taking a pill that would change my preferences such that I would then generally want to murder people; this is the murder pill I mentioned.

As for swapping the utilities of spinach and cheesecake, I think the only way that makes sense to do so would be to change how you perceive their respective tastes, which isn't a change to your utility function at all. You still want to eat food that tastes good; changing that would have much broader and less predictable consequences.

Only if your current utility function is "maximize expected utility." (It isn't.)
3NancyLebovitz
Anorexia could be viewed as an excessive ability to rewrite utility functions about food. If you don't have the ability to include context, the biological blind god may serve you better than the memetic blind god.
0orthonormal
This is a particular form of wireheading; fortunately, for evolutionary reasons we're not able to do very much of it without advanced technology.
1Vladimir_Nesov
I'd say it's rather a form of conceptual confusion: you can't change a concept ("change" is itself a "timeful" concept, meaningful only as a property within structures which are processes in the appropriate sense). But it's plausible that creating agents with slightly different explicit preference will result in a better outcome than, all else equal, if you give those agents your own preference. Of course, you'd probably need to be a superintelligence to correctly make decisions like this, at which point creation of agents with given preference might cease to be a natural concept.
-1red75
I am afraid that advanced technology is not necessary. Literal wireheading.

If you're being held back by worries about your values changing, you can always try cultivating a general habit of reverting to values held by earlier selves when doing so is relatively easy. I call it "reactionary self-help".

3PhilGoetz
I don't think that makes sense. Changing back is no more desirable than any other change. Once you've changed, you've changed. Changing your utility function is undesirable. But it isn't bad. You strive to avoid it; but once it's happened, you're glad it did.
4steven0461
Right; that's what happens by default. But if you find that because your future self will want to keep its new values, you're overly reluctant to take useful actions that change your values as a side effect, you might want to precommit to roll back certain changes; or if you can't keep track of all the side effects, it's conceivable you want to turn it into a general habit. I could see this either being a good or bad idea on net.
1WrongBot
I don't think you can do this. Your future self, not sharing your values, will have no reason to honor your present self's precommitment.
1mattnewport
Precommitment implies making it expensive or impossible for your future self not to honor your commitment.
0WrongBot
Errr, how? I am familiar with the practice of precommitment, but most of the ways of creating one for oneself seem to rely on consequences not preferred by one's values. If one's values have changed, then, such a precommitment isn't very helpful.
0mattnewport
In the context of the thread we're not talking about all your values changing, just some subset. Base the precommitment around a value you do not expect to change. Money is a reliable fallback due to its fungibility.
1WrongBot
This isn't as reliable as you think. It isn't often that people change how much importance they attach to money, but it isn't rare, either. Either way, is there a good way to guarantee that you'll lose access to money when your values change? That's tough for an external party to verify when you have an incentive to lie.
1mattnewport
This is more reliable than you think. We live in a world where money is convertible to further a very wide range of values. It doesn't have to be money. You just need a value that you have no reason to expect will change significantly as a result of exposure to particular 'dangerous thoughts'. Can you honestly say that you expect exposing yourself to information about sex differences in intelligence will radically alter the relative value of money to you though?

Escrow is the general name for a good way to guarantee that your future self will be bound by your precommitment. Depending on how much money is involved this could be as informal as asking a trusted friend who shares your current values to hold some money for a specified period and promise to donate it to a charity promoting the value you fear may be at risk if they judge you to have abandoned that value.

The whole point of precommitment is that you have leverage over your future self. You can make arrangements of cost and complexity up to the limit your current self values the matter of concern and impose a much greater penalty on your future self in case of breach of contract.

Ultimately I don't believe this is your true rejection. If you wished you could find ways to make credible precommitments to your current values and then undergo controlled exposure to 'dangerous thoughts' but you choose not to. That may be a valid choice from a cost/benefit analysis by your current values but it is not because the alternative is impossible, it is just too expensive for your tastes.
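The structure of such an escrow arrangement can be made explicit with a toy decision model (entirely hypothetical numbers; this just formalizes the leverage argument above): the future self defects only when the value it places on defection exceeds the escrowed penalty, so the present self sizes the stake above any plausible post-change defection value.

```python
# Toy model of an escrow-backed precommitment (illustrative only).

def future_self_honors(defection_value: float, escrowed_stake: float) -> bool:
    """A future self with changed values still honors the commitment
    whenever breaking it costs more (the forfeited stake) than the
    changed values gain from breaking it."""
    return defection_value < escrowed_stake

# The present self fears its values may drift by at most this much (made up):
worst_case_defection_value = 5_000.0

# So it escrows a stake comfortably above that bound.
stake = 2 * worst_case_defection_value

assert future_self_honors(defection_value=4_000.0, escrowed_stake=stake)
assert not future_self_honors(defection_value=20_000.0, escrowed_stake=stake)
```

The model also makes WrongBot's objection precise: the scheme fails exactly when value drift extends to the escrowed good itself, which is why the advice above is to denominate the stake in something, like money, that the feared 'dangerous thoughts' give no particular reason to devalue.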
1Vladimir_Nesov
It's important to distinguish changes in values, from updating of knowledge about values in response to moral arguments. The latter emphatically shouldn't be opposed, otherwise you turn morally stupid.
0WrongBot
That sounds like it would be isomorphic to always encouraging the updating of instrumental values, but not terminal ones, which strikes me as an unquestionably good idea in all cases where stupidity is not a terminal value.
0Vladimir_Nesov
You don't update values, you update knowledge about values. Knowledge about terminal values might be as incomplete as knowledge about instrumental values. The difference is that with instrumental values, you usually update indifference, while with "terminal" values you start out with some idea of preference.
-2red75
What about newborns? If they have the same terminal values as adults, then the Kolmogorov complexity of the terminal values should not exceed that of the genome. Thus a) terminal values are updated, or b) terminal values are not very complex, or c) knowledge about terminal values is part of the terminal values, which implies a).
1MichaelVassar
You can also try to engage in trade with your future selves, which most good formulations of CEV or its successors should probably enable.
0Kingreaper
I don't believe I could revert back easily under normal circumstances, so I can't see this advice actually being fruitful unless that fact about me is unusual.

(I am making a distinction here between the parts of your brain that you have access to and can introspect about, which for lack of better terms I call “you” or “your consciousness”, and the vast majority of your brain, to which you have no such access or awareness, which I call “your brain.” This is an emotional manipulation, which you are now explicitly aware of. Does that negate its effect? Can it?)

You seem to think you know what the effect is. My immediate thought on reading "it will decide the output, not you" was "oh dear, dualism a... (read more)

By the way, some people took a similar position to yours in

What Is Your Dangerous Idea?: Today's Leading Thinkers on the Unthinkable

Identity Politics: Agree- good point.

Power Corrupts: Irrelevant to those LWers who realistically will never gain large amounts of power and status. For those who do it is a matter of the dangers of increasing control, not avoiding dangerous thoughts.

On the comment about opening the door to bigotry: even if bigotry has bad effects, given the limited amount of harm an individual can do and appropriate conscious suppression of effects, isn't it worth it to prevent self-delusion?

[-][anonymous]00

Don't read this article. It's way too dangerous.

[-][anonymous]00

If you're going to intentionally choose false beliefs, you should at least be careful to also install an aversion to using these beliefs to decide other questions you care about such as which intellectual institutions to trust, and an aversion to passing these beliefs on to other people. It's one thing to nuke your brain and quite another to fail to encase it in lead afterward.

[-]Thomas-40

Your brain cannot be trusted. It is not safe. You must be careful with what you put into it, because it will decide the output, not you.

This "it" may, or even should, relate to the idea itself. The same idea, the same meme, put into a healthy rational brains anywhere, will decide the same! Since the brains are just a rational machine always doing the best possible thing.

It is the input that decides the output. The machine has no other (irrational) choice than to process the input the best way it can, and then to spit out the output.

It is not my cal... (read more)

1WrongBot
This would also be true of unbroken brains, if there were any.
-5Thomas