Have no heroes, and no villains
"If you meet the Buddha on the road, kill him!"
When Edward Wilson published the book Sociobiology, Richard Lewontin and Stephen Jay Gould secretly convened a group of biologists that met regularly, for months, in the same Harvard building that housed Wilson's office, to write an angry, politicized rebuttal - one arguing not that Sociobiology was wrong, but that it was immoral - without ever telling Wilson. This proved, to me, that they were not interested in the truth. I never forgave them for this.
I constructed a narrative of evolutionary biology in which Edward Wilson and Richard Dawkins were, for various reasons, the Good Guys; and Richard Lewontin and Stephen Jay Gould were the Bad Guys.
When reading articles on group selection for this post, I was distressed to find Richard Dawkins joining in the vilification of group selection with religious fervor, while Stephen Jay Gould was the one who said,
"I have witnessed widespread dogma only three times in my career as an evolutionist, and nothing in science has disturbed me more than ignorant ridicule based upon a desire or perceived necessity to follow fashion: the hooting dismissal of Wynne-Edwards and group selection in any form during the late 1960's and most of the 1970's, the belligerence of many cladists today, and the almost ritualistic ridicule of Goldschmidt by students (and teachers) who had not read him."
This caused me great cognitive distress. I wanted Stephen Jay Gould to be the Bad Guy. I realized I was trying to find a way to dismiss Gould's statement, or at least believe that he had said it from selfish motives. Or else, to find a way to flip it around so that he was the Good Guy and someone else was the Bad Guy.
To move on, I had to consciously shatter my Good Guy/Bad Guy narrative, and accept that all of these people are sometimes brilliant, sometimes blind; sometimes share my values, and sometimes prioritize their values (e.g., science vs. politics) very differently from me. I was surprised by how painful it was to do that, even though I was embarrassed to have had the Good Guy/Bad Guy hypothesis in the first place. I don't think it was even personal - I didn't care who would be the Good Guys and who would be the Bad Guys. I just wanted there to be Good Guys and Bad Guys.
Is cryonics evil because it's cold?
There have been many previous discussions here on cryonics and why it is perceived as threatening or otherwise disagreeable. Even among LWers who are not signed up and don’t plan to, I’d say there’s a good degree of consensus that cryonics is reviled and ridiculed to a very unjustified degree. I had a thought about one possible factor contributing to its unsavory public image that I haven’t seen brought up in previous discussions:
COLD is EVIL.
Well, no, cold isn’t evil, but “COLD is EVIL/THREATENING/DANGEROUS/HARSH/LONELY/UNLOVING/SAD/DEAD” seems to be a pretty common set of conceptual metaphors. You see it in figures of speech like “cold-hearted,” “in cold blood,” “cold expression,” “icy stare,” “chilling,” “went cold,” “cold calculation,” “the cold shoulder,” “cold feet,” “stone cold,” “out cold.” (Naturally, it’s also the case that WARM is GOOD/COMFORTING/SAFE/SOCIAL/LOVING/HAPPY/ALIVE, though COOL and HOT sort of go in their own directions.) Associating something with coldness just makes it seem more threatening and less benevolent. And besides, since “COLD is DEAD,” it’s pretty hard to imagine someone as not really dead if they’re in a container of liquid nitrogen at -135°C. (Even harder if it’s just their head in there… but that’s a separate issue.) There is already a little bit of research on the effects of some of the conceptual metaphors of coldness and the way their emotional content leaks onto metaphorically associated concepts (“Cold and lonely: does social exclusion literally feel cold?”; “Experiencing physical warmth promotes interpersonal warmth.”; any others?).
Frugality and working from finite data
The scientific method is wonderfully simple, intuitive, and above all effective. Based on the available evidence, you formulate several hypotheses and assign prior probabilities to each one. Then, you devise an experiment which will produce new evidence to distinguish between the hypotheses. Finally, you perform the experiment, and adjust your probabilities accordingly.
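The updating step described above can be sketched in a few lines. This is a toy illustration with invented numbers (two hypotheses, made-up likelihoods), not a model of any particular experiment:

```python
# Minimal Bayesian update over two competing hypotheses.
# Priors and likelihoods are illustrative numbers, not real data.
priors = {"H1": 0.5, "H2": 0.5}

# P(observed result | hypothesis) for each hypothesis.
likelihoods = {"H1": 0.8, "H2": 0.2}

# Bayes' theorem: posterior ∝ prior × likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

print(posteriors)  # {'H1': 0.8, 'H2': 0.2}
```

The experiment's job is simply to make the likelihoods differ sharply between hypotheses; when it can't (as in the cosmology case below), the update step has nothing to work with.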
So far, so good. But what do you do when you cannot perform any new experiments?
This may seem like a strange question, one that leans dangerously close to unprovable philosophical statements that don't have any real-world consequences. But it is in fact a serious problem facing the field of cosmology. We must learn that when there is no new evidence that will cause us to change our beliefs (or even when there is), the best thing to do is to rationally re-examine the evidence we already have.
Self-fulfilling correlations
Correlation does not imply causation. Sometimes corr(X,Y) means X=>Y; sometimes it means Y=>X; sometimes it means W=>X, W=>Y. And sometimes it's an artifact of people's beliefs about corr(X, Y). With intelligent agents, perceived causation causes correlation.
Volvos are believed by many people to be safe. Volvo has an excellent record of being concerned with safety; they introduced 3-point seat belts, crumple zones, laminated windshields, and safety cages, among other things. But how would you evaluate the claim that Volvos are safer than other cars?
Presumably, you'd look at the accident rate for Volvos compared to the accident rate for similar cars driven by a similar demographic, as reflected, for instance, in insurance rates. (My google-fu did not find accident rates posted on the internet, but insurance rates don't come out especially pro-Volvo.) But suppose the results showed that Volvos had only 3/4 as many accidents as similar cars driven by similar people. Would that prove Volvos are safer?
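Not necessarily. Here is a toy simulation of the self-fulfilling mechanism (all numbers invented for illustration): the car contributes nothing to safety, but cautious drivers, believing Volvos are safe, disproportionately choose them, so Volvos still end up with a lower accident rate:

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Hypothetical population where accident risk depends only on
    the driver, never on the car - yet Volvos look safer."""
    volvo_acc = volvo_n = other_acc = other_n = 0
    for _ in range(n):
        cautious = random.random() < 0.5
        # Cautious drivers pick the "safe" brand 70% of the time.
        drives_volvo = random.random() < (0.7 if cautious else 0.3)
        # Risk is purely a property of the driver.
        had_accident = random.random() < (0.05 if cautious else 0.10)
        if drives_volvo:
            volvo_n += 1
            volvo_acc += had_accident
        else:
            other_n += 1
            other_acc += had_accident
    return volvo_acc / volvo_n, other_acc / other_n

volvo_rate, other_rate = simulate()
print(volvo_rate, other_rate)  # Volvo rate comes out distinctly lower
```

With these made-up parameters the Volvo accident rate converges to about 0.065 versus 0.085 for other cars - roughly the 3/4 ratio in the question - purely because belief in the correlation sorted the drivers.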
Transhumanism and the denotation-connotation gap
A word's denotation is our conscious definition of it. You can think of this as the set of things in the world with membership in the category defined by that word; or as a set of rules defining such a set. (Logicians call the former the category's extension into the world.)
A word's connotation is the emotional coloring of the word. AI geeks may think of it as a set of pairs: other concepts that get activated or inhibited by that word, together with the changes to the odds of recalling each of those concepts.
When we think analytically about a word - for instance, when writing legislation - we use its denotation. But when we are in values/judgement mode - for instance, when deciding what to legislate about, or when voting - we use its denotation less and its connotation more.
This denotative-connotative gap can cause people to behave less rationally when they become more rational. People who think and act emotionally are at least consistent. Train them to think analytically, and they will choose goals using connotation but pursue them using denotation. That's like hiring a Russian speaker to manage your affairs because he's smarter than you, but you have to give him instructions via Google translate. Not always a win.
Consider the word "human". It has wonderful connotations, to humans. Human nature, humane treatment, the human condition, what it means to be human. Often the connotations are normative rather than descriptive; behaviors we call "inhumane" are done only by humans. The denotation is bare by comparison: Featherless biped. Homo sapiens, as defined by 3 billion base pairs of DNA.
So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning
Related to: The Conjunction Fallacy, Conjunction Controversy
The heuristics and biases research program in psychology has discovered many different ways that humans fail to reason correctly under uncertainty. In experiment after experiment, researchers have shown that we use heuristics to approximate probabilities rather than making the appropriate calculation, and that these heuristics are systematically biased. However, a tweak in the experimental protocols seems to remove the biases altogether and casts doubt on whether we are actually using heuristics. Instead, it appears that the errors are simply an artifact of how our brains internally store information about uncertainty. Theoretical considerations support this view.
EDIT: The view presented here is controversial in the heuristics and biases literature; see Unnamed's comment on this post below.
EDIT 2: The author no longer holds the views presented in this post. See this comment.
A common example of the failure of humans to reason correctly under uncertainty is the conjunction fallacy. Consider the following question:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
What is the probability that Linda is:
(a) a bank teller
(b) a bank teller and active in the feminist movement
In a replication by Gigerenzer (1993), 91% of subjects ranked (b) as more probable than (a), saying that it is more likely that Linda is active in the feminist movement AND a bank teller than that Linda is simply a bank teller. The conjunction rule of probability states that the probability of two things being true is less than or equal to the probability of one of those things being true. Formally, P(A & B) ≤ P(A). So this experiment shows that people violate the conjunction rule, and thus fail to reason correctly under uncertainty. The representativeness heuristic has been proposed as an explanation for this phenomenon. To use this heuristic, you evaluate the probability of a hypothesis by comparing how "alike" it is to the data. Someone using the representativeness heuristic looks at the Linda question and sees that Linda's characteristics resemble those of a feminist bank teller much more closely than those of just a bank teller, and so they conclude that Linda is more likely to be a feminist bank teller than a bank teller.
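The conjunction rule is easy to see by counting over a concrete population. The split below is invented purely for illustration; the point is that feminist bank tellers are a subset of bank tellers, so the conjunction can never be more probable:

```python
# Imaginary population of 100 people fitting Linda's description.
# The particular counts are made up; only the subset relation matters.
population = (
    ["feminist bank teller"] * 4 +
    ["non-feminist bank teller"] * 1 +
    ["feminist non-teller"] * 80 +
    ["other"] * 15
)

bank_tellers = [p for p in population if "bank teller" in p]
feminist_tellers = [p for p in population if p == "feminist bank teller"]

p_a = len(bank_tellers) / len(population)            # P(bank teller)
p_a_and_b = len(feminist_tellers) / len(population)  # P(teller AND feminist)

# Every feminist bank teller is also a bank teller,
# so the subset can never outnumber the superset.
assert p_a_and_b <= p_a
print(p_a, p_a_and_b)  # 0.05 0.04
```

However you redistribute the 100 people, P(A & B) ≤ P(A) holds, which is what makes the majority ranking in the experiment a genuine violation.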
This is the standard story, but are people really using the representativeness heuristic in the Linda problem? Consider the following rewording of the question:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
There are 100 people who fit the description above. How many of them are:
(a) bank tellers
(b) bank tellers and active in the feminist movement
Notice that the question is now strictly in terms of frequencies. Under this version, only 22% of subjects rank (b) as more probable than (a) (Gigerenzer, 1993). The only thing that changed is the question that is asked; the description of Linda (and the 100 people) remains unchanged, so the representativeness of the description for the two groups should remain unchanged. Thus people are not using the representativeness heuristic - at least not in general.
Some Thoughts Are Too Dangerous For Brains to Think
What Intelligence Tests Miss: The psychology of rational thought
This is the fourth and final part in a mini-sequence presenting Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought.
If you want to give people a single book to introduce them to the themes and ideas discussed on Less Wrong, What Intelligence Tests Miss is probably the best currently existing book for doing so. It does take a somewhat different view of the study of bias than we do on LW: while Eliezer concentrated on the idea of the map and the territory and aspiring to the ideal of a perfect decision-maker, Stanovich's perspective is more akin to bias as a thing that prevents people from taking full advantage of their intelligence. Regardless, for someone less easily persuaded by LW's somewhat abstract ideals, reading Stanovich's concrete examples first and then proceeding to the Sequences is likely to make the content presented in the Sequences much more interesting. Even some of our terminology, such as "carving reality at the joints" and the instrumental/epistemic rationality distinction, will be more familiar to somebody who has first read What Intelligence Tests Miss.
Below is a chapter-by-chapter summary of the book.
Inside George W. Bush's Mind: Hints at What IQ Tests Miss is a brief introductory chapter. It starts with the example of president George W. Bush, mentioning that the president's opponents frequently argued against his intelligence, and even his supporters implicitly conceded the point by arguing that even though he didn't have "school smarts" he did have "street smarts". Both groups were purportedly surprised when it was revealed that the president's IQ was around 120, roughly the same as that of his 2004 presidential opponent John Kerry. Stanovich then goes on to say that this should not be surprising, for IQ tests do not tap into the tendency to actually think in an analytical manner, and that IQ has been overvalued as a concept. For instance, university admissions frequently depend on tests such as the SAT, which are pretty much pure IQ tests. The chapter ends with a disclaimer that the book is not an attempt to say that IQ tests measure nothing important, or that there are many kinds of intelligence. IQ does measure something real and important, but that doesn't change the fact that people overvalue it and are generally confused about what it actually does measure.
Dysrationalia: Separating Rationality and Intelligence talks about the phenomenon informally described as "smart but acting stupid". Stanovich notes that if we used a broad definition of intelligence, where intelligence simply meant acting in an optimal manner, then this expression wouldn't make any sense. Rather, it's a sign that people are intuitively aware of IQ and rationality as measuring two separate qualities. Stanovich then brings up the concept of dyslexia, which the DSM-IV defines as "reading achievement that falls substantially below that expected given the individual's chronological age, measured intelligence, and age-appropriate education". Similarly, the diagnostic criterion for mathematics disorder (dyscalculia) is "mathematical ability that falls substantially below that expected for the individual's chronological age, measured intelligence, and age-appropriate education". He argues that since we have a precedent for creating new disability categories when someone's ability in an important skill domain is below what would be expected for their intelligence, it would make sense to also have a category for "dysrationalia":
Dysrationalia is the inability to think and behave rationally despite adequate intelligence. It is a general term that refers to a heterogenous group of disorders manifested by significant difficulties in belief formation, in the assessment of belief consistency, and/or in the determination of action to achieve one's goals. Although dysrationalia may occur concomitantly with other handicapping conditions (e.g. sensory impairment), dysrationalia is not the result of those conditions. The key diagnostic criterion for dysrationalia is a level of rationality, as demonstrated in thinking and behavior, that is significantly below the level of the individual's intellectual capacity (as determined by an individually administered IQ test).
A Taxonomy of Bias: Mindware Problems
This is the third part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.
Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Last time, I discussed the Cognitive Miser category. Today, I will discuss Mindware Problems, which has the subcategories of Mindware Gaps and Contaminated Mindware.
Mindware Problems
Stanovich defines "mindware" as "a generic label for the rules, knowledge, procedures, and strategies that a person can retrieve from memory in order to aid decision making and problem solving".
Mindware Gaps
Previously, I mentioned two tragic cases. In one, a pediatrician incorrectly testified that the odds of two children in the same family both dying of sudden infant death syndrome were 73 million to 1. In the other, people bought into a story of "facilitated communication" helping previously non-verbal children to communicate, without looking at it in a critical manner. Stanovich uses these two as examples of a mindware gap. The people involved were lacking critical mindware: in one case, that of probabilistic reasoning, in the other, that of scientific thinking. One of the reasons why so many intelligent people can act in an irrational manner is that they're simply missing the mindware necessary for rational decision-making.
Much of the useful mindware is a matter of knowledge: knowledge of Bayes' theorem, taking into account alternative hypotheses and falsifiability, awareness of the conjunction fallacy, and so on. Stanovich also mentions something he calls strategic mindware, which refers to the disposition towards engaging the reflective mind in problem solving. These were previously mentioned as thinking dispositions, and some of them can be measured by performance-based tasks. For instance, in the Matching Familiar Figures Test (MFFT), participants are presented with a picture of an object, and told to find the correct match from an array of six other similar pictures. Reflective people have long response times and few errors, while impulsive people have short response times and numerous errors. These types of mindware are closer to strategies, tendencies, procedures, and dispositions than to knowledge structures.
Stanovich identifies mindware gaps as being involved in at least conjunction errors and ignoring base rates (missing probability knowledge), as well as the Wason selection task and confirmation bias (not considering alternative hypotheses). Incorrect lay psychological theories are identified as a combination of a mindware gap and contaminated mindware (see below). For instance, people are often blind to their own biases, because they incorrectly think that biased thinking on their part would be detectable by conscious introspection. In addition to the bias blind spot, lay psychological theory is likely to be involved in errors of affective forecasting (the forecasting of one's future emotional state).
A Taxonomy of Bias: The Cognitive Miser
This is the second part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.
Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme for bias that has two primary categories: the Cognitive Miser, and Mindware Problems. Today, I will discuss the Cognitive Miser category, which has the subcategories of Default to the Autonomous Mind, Serial Associative Cognition with a Focal Bias, and Override Failure.
The Cognitive Miser
Cognitive science suggests that our brains use two different kinds of systems for reasoning: Type 1 and Type 2. Type 1 is quick, dirty and parallel, and requires little energy. Type 2 is energy-consuming, slow and serial. Because Type 2 processing is expensive and can only work on one or at most a couple of things at a time, humans have evolved to default to Type 1 processing whenever possible. We are "cognitive misers" - we avoid unnecessarily spending Type 2 cognitive resources and prefer to use Type 1 heuristics, even though this might be harmful in a modern-day environment.
Stanovich further subdivides Type 2 processing into what he calls the algorithmic mind and the reflective mind. He argues that the reason why high-IQ people can fall prey to bias almost as easily as low-IQ people is that intelligence tests measure the effectiveness of the algorithmic mind, whereas many reasons for bias can be found in the reflective mind. An important function of the algorithmic mind is to carry out cognitive decoupling - to create copies of our mental representations about things, so that the copies can be used in simulations without affecting the original representations. For instance, a person wondering how to get a fruit down from a high tree will imagine various ways of getting to the fruit, and by doing so he operates on a mental concept that has been copied and decoupled from the concept of the actual fruit. Even when he imagines the things he might do to the fruit, he never confuses the fruit he has imagined in his mind with the fruit that's still hanging in the tree (the two concepts are decoupled). If he did, he might end up believing that he could get the fruit down by simply imagining himself taking it down. High performance on IQ tests indicates an advanced ability for cognitive decoupling.
In contrast, the reflective mind embodies various higher-level goals as well as thinking dispositions. Various psychological tests of thinking dispositions measure things such as the tendency to collect information before making up one's mind, the tendency to seek various points of view before coming to a conclusion, the disposition to think extensively about a problem before responding, the tendency to calibrate the degree of strength of one's opinion to the degree of evidence available, the tendency to think about future consequences before taking action, the tendency to explicitly weigh pluses and minuses of situations before making a decision, and the tendency to seek nuance and avoid absolutism. All things being equal, a high-IQ person would have a better chance of avoiding bias if they stopped to think things through, but a higher algorithmic efficiency doesn't help them if it's not in their nature to ever bother doing so. In tests of rational thinking where the subjects are explicitly instructed to consider the issue in a detached and objective manner, there's a correlation of .3 - .4 between IQ and test performance. But if such instructions are not given, and people are free to reason in a biased or unbiased way as they wish (like in real life), the correlation between IQ and rationality falls to nearly zero!