Nick Bostrom and Anders Sandberg (2008) have proposed what they call the "evolutionary heuristic" for evaluating possible ways to enhance humans. It begins with posing a challenge, the "evolutionary optimality challenge" or EOC: "if the proposed intervention would result in an enhancement, why have we not already evolved to be that way?"
They write that there seem to be three main categories of answers to this challenge (what follows are abbreviated quotes; see the original paper for the full explanation):
- Changed tradeoffs: "Evolution 'designed' the system for operation in one type of environment, but now we wish to deploy it in a very different type of environment..."
- Value discordance: "There is a discrepancy between the standards by which evolution measured the quality of her work, and the standards that we wish to apply..."
- Evolutionary restrictions: "We have access to various tools, materials, and techniques that were unavailable to evolution..."
In their original paper, Bostrom and Sandberg are interested in biological interventions like drugs and embryo selection, but it seems that their heuristic could also tell us a lot about "rationality techniques," i.e. methods of trying to become more rational that can be expressed in the form of how-to advice, like what you often find advocated here at LessWrong or by CFAR.
Applying the evolutionary heuristic to rationality techniques supports the value of things like statistics, science, and prediction markets. However, it also gives us reason to doubt that a rationality technique is likely to be effective when it doesn't have any good answer to the EOC.
Let's start with value discordance. I've previously noted that much human irrationality seems to be evolutionarily adaptive: "We have evolved to have an irrationally inflated view of ourselves, so as to better sell others on that view." That suggests that if you value truth more than inclusive fitness, you might want to take steps to counteract that tendency, say by actively trying to force yourself to ignore how having various beliefs will affect your self-image or others' opinions of you. (I spell this idea out a little more carefully at the previous link.)
But it seems like the kind of rationality techniques discussed at LessWrong generally don't fall into the "value discordance" category. Rather, if they make any sense at all, they're going to make sense because of differences between the modern environment and the ancestral environment. That is, they fall under the category of "changed tradeoffs." (Note: it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?)
For example, consider the availability heuristic. This is the bias that makes people wrongly assume the risk of getting attacked by a shark while swimming is really high, just because of one memorable news story they saw about a shark attack. But if you think about it, the availability heuristic probably wasn't much of a problem 100,000 years ago on the African savannah. Back then, if you heard a story about someone getting eaten by a lion, it was probably because someone in your band or a neighboring band had gotten eaten by a lion. That probably meant the chance of getting eaten by a lion was non-trivial in your area, and you needed to watch out for lions.
On the other hand, if you're a modern American and you hear a story about someone getting eaten by a shark, it's probably because you heard about it on the news. Maybe it happened hundreds of miles away in Florida. News outlets selectively report sensational stories, so for all you know that was the only person to get eaten by a shark out of 300 million Americans that year, maybe even in the past few years. Thus it is written: don't try to judge the frequency of events based on how often you hear about them on the news; use Google to find the actual statistics.
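To make the arithmetic concrete, here is a minimal sketch in Python; all the numbers are illustrative placeholders, not actual statistics:

```python
# Toy base-rate arithmetic with illustrative, made-up numbers
# (look up the real statistics before drawing conclusions).

reported_shark_deaths = 1            # hypothetical: the one sensational story you heard
us_population = 300_000_000          # the rough figure used above

annual_risk = reported_shark_deaths / us_population
print(f"Implied annual risk per person: {annual_risk:.1e}")   # ~3.3e-09

# For contrast, a far more common but less newsworthy risk
# (again an illustrative round number, not an official statistic):
car_crash_deaths = 35_000
print(f"Car crash risk per person:      {car_crash_deaths / us_population:.1e}")  # ~1.2e-04
```

The point isn't the particular numbers; it's that the base rate, not the vividness of the story, is what the risk estimate should track.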
The value—and counter-intuitiveness—of a lot of scientific techniques seems similarly explicable. Randomized, double-blind, placebo-controlled studies with 1000 subjects are hard to do if you're a hunter-gatherer band with 50 members. Even when primitive hunter-gatherers could have theoretically done a particular experiment, rigorous scientific experiments are a lot of work. They may not pass cost-benefit analysis if it's just your band that will be using the results. In order for science to be worthwhile, it helps to have a printing press that you can use to share your findings all over the world.
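To illustrate why the size of the group matters, here is a rough Monte Carlo sketch, assuming an arbitrary small treatment effect of 0.2 standard deviations, comparing how often a 50-person study versus a 1000-person study detects the effect:

```python
import random
import math

def detection_rate(n_per_group, effect=0.2, trials=2000):
    """Fraction of simulated studies that detect a true effect of `effect`
    standard deviations, using a simple one-sided two-sample z-test."""
    detected = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [random.gauss(effect, 1.0) for _ in range(n_per_group)]
        mean_diff = sum(treated) / n_per_group - sum(control) / n_per_group
        se = math.sqrt(2.0 / n_per_group)   # standard error; both groups have unit variance
        if mean_diff / se > 1.645:          # one-sided test at alpha = 0.05
            detected += 1
    return detected / trials

random.seed(0)
print("25 per group (a 50-person band):     ", detection_rate(25))    # roughly 0.17
print("500 per group (a 1000-subject study):", detection_rate(500))   # roughly 0.93
```

With only a band's worth of subjects, a modest effect usually goes undetected; with a thousand subjects it is detected most of the time, which is part of why rigorous experiments only start paying for themselves at scale.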
A third example is prediction markets, indeed markets in general. In a post last month, Robin Hanson writes (emphasis mine):
Speculative markets generally do an excellent job of aggregating information...
Yet even though this simple fact seems too obvious for finance experts to mention, the vast majority of the rest of news coverage and commentary on all other subjects today, and pretty much every day, will act as if they disagreed. Folks will discuss and debate and disagree on other subjects, and talk as if the best way for most of us to form accurate opinions on such subjects is to listen to arguments and commentary offered by various pundits and experts and then decide who and what we each believe. Yes this is the way our ancestors did it, and yes this is how we deal with disagreements in our personal lives, and yes this was usually the best method.
But by now we should know very well that we would get more accurate estimates more cheaply on most widely discussed issues of fact by creating (and legalizing), and if need be subsidizing, speculative betting markets on such topics. This isn’t just vague speculation, this is based on very well established results in finance, results too obvious to seem worth mentioning when experts discuss finance. Yet somehow the world of media continues to act as if it doesn’t know. Or perhaps it doesn’t care; punditry just isn’t about accuracy.
The evolutionary heuristic suggests a different explanation for reluctance to use prediction markets: the fact that "listen to arguments and form your own opinion" was the best method we had on the African savannah meant we evolved to use it. Thus, other methods, like prediction markets, feel deeply counter-intuitive, even for people who can appreciate their merits in the abstract.
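As a sketch of the underlying point about aggregation (a toy average-of-estimates simulation, not a model of any actual market mechanism), here is how pooling many noisy, independent guesses tends to beat a typical individual guess:

```python
import random

random.seed(1)
true_probability = 0.70    # the (unknown) truth about some question
num_participants = 500

# Each participant's estimate is the truth plus independent noise, clipped to [0, 1].
estimates = [min(1.0, max(0.0, random.gauss(true_probability, 0.15)))
             for _ in range(num_participants)]

pooled = sum(estimates) / num_participants
typical_error = sum(abs(e - true_probability) for e in estimates) / num_participants

print(f"pooled estimate:          {pooled:.3f} (error {abs(pooled - true_probability):.3f})")
print(f"typical individual error: {typical_error:.3f}")
```

The pooled error comes out far smaller than the average individual's, which is the crude intuition behind "markets aggregate information"; real markets do considerably more than simple averaging, of course.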
In short, the evolutionary heuristic supports what many people have already concluded for other reasons: most people could do better at forming their view of the world by relying more on statistics, science, and (when available) prediction markets. People likely fail to rely on them as much as they should, because those were not available in the ancestral environment, and therefore relying on them does not come naturally to us.
There's a flip side to this, though: the evolutionary heuristic might suggest that certain rationality techniques are unlikely to work. In an earlier version of this post, I suggested a CFAR experiment in trying to improve people's probability estimates as an example of such a technique. But as Benja pointed out in the comments, our ancestors faced little selection pressure for making accurate verbal probability estimates, which suggests there might be a lot of room left to improve that skill.
On the other hand, the fact that our ancestors managed to be successful without being good at making verbal probability estimates might suggest that rationality techniques based on improving that skill are unlikely to increase performance in areas where the skill isn't obviously relevant. (Yvain's post Extreme Rationality: It's Not That Great is relevant here.) On the other other hand, maybe abstract reasoning skills like making verbal probability estimates are generally useful for dealing with evolutionarily novel problems.
Because so few of our ancestors died because they got numerical probability estimates wrong.
I agree with the general idea in your post, but I don't think it strongly predicts that CFAR's experiment would fail. Moreover, if it predicts that, why doesn't it also predict that we should have evolved to sample our intuitions multiple times and average the results, since that seems to give more accurate numerical estimates? (I don't actually think this single article is very strong evidence for or against this interpretation of the hypothesis by itself, but neither do I think that CFAR's experiment is; I think the likelihood ratios aren't particularly extreme in either case.)
Ah, you're right. Will edit post to reflect that.