Okay, so, how did this survive evolution?
There is variation in the population in tolerance for risk. If a preference for certainty is detrimental to achieving goals, then those with less attachment to certainty would, ceteris paribus, have better overall reproductive success than those with greater attachment. Accordingly, a pro-certainty bias would be selected against.
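Concretely, that selection story is just replicator dynamics. A minimal sketch (Python; the 2% fitness edge and everything else here are invented for illustration, not measured):

```python
# Toy replicator dynamics for the selection argument above.
# All numbers are illustrative assumptions, not measurements.

GENERATIONS = 200

FITNESS_CERTAINTY = 1.00  # assumed fitness of certainty-preferring agents
FITNESS_EV_MAX = 1.02     # assumed 2% edge for expected-value maximizers

share_certainty = 0.5  # start from an even split
for _ in range(GENERATIONS):
    w_c = share_certainty * FITNESS_CERTAINTY
    w_m = (1 - share_certainty) * FITNESS_EV_MAX
    share_certainty = w_c / (w_c + w_m)  # one generation of selection

# Prints roughly 0.019: even a small heritable edge drives the bias out.
print(f"certainty-preferring share after {GENERATIONS} generations: "
      f"{share_certainty:.3f}")
```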
So, what's the reason it didn't get selected out of the population?
One guess, pointed out in the original comments, might be that there is reason to prefer certainty when making deals with untrustworthy agents. For instance, if I promise you a certain $24,000 and then you don't get it, you know for sure that I lied, as does everyone else who was aware of the deal, which is pretty bad for me. If I promise you a 33/34 chance of $27,000, then if you don't get it I can always claim you were just unlucky, giving me at least plausible deniability. Thus there is significant reason for you to prefer the first offer, since the more I have to lose by betraying you, the less likely I am to do it. The same argument does not carry over to the case of 33% versus 34%.
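As a quick sanity check on those figures (a sketch that just restates the numbers in the comment):

```python
# Expected values of the two offers discussed above.
certain_offer = 24_000                 # $24,000 for sure
risky_offer = (33 / 34) * 27_000       # 33/34 chance of $27,000

print(f"certain: ${certain_offer:,.0f}")   # certain: $24,000
print(f"risky:   ${risky_offer:,.0f}")     # risky:   $26,206

# The risky offer is worth ~$2,200 more in expectation, so distrust has
# to be doing real work for preferring the certain offer to be rational.
```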
I suspect that with infinite computational power on all sides this effect would vanish, and failing to deliver on any deal would decrease my trustworthiness by an amount depending on the plausibility of other explanations. However, humans don't have infinite computational power, so we tend to save time by simply labelling people as "trustworthy" or "untrustworthy", creating the incentive to bias towards absolute promises rather than probabilistic ones.
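To make that concrete, here is a sketch of the update an idealised reasoner might perform after a broken promise; the prior and the assumption that a liar never pays out are invented for illustration:

```python
def p_liar_given_no_payout(prior_liar: float, p_fail_if_honest: float) -> float:
    """Posterior P(liar | no payout), assuming a liar never pays out."""
    p_fail_if_liar = 1.0
    num = prior_liar * p_fail_if_liar
    return num / (num + (1 - prior_liar) * p_fail_if_honest)

prior = 0.10  # assumed prior probability that I'm a liar

# Broken "certain" promise: an honest dealer essentially never fails.
print(p_liar_given_no_payout(prior, p_fail_if_honest=0.001))   # ~0.99

# Broken 33/34 promise: an honest dealer fails 1/34 of the time anyway.
print(p_liar_given_no_payout(prior, p_fail_if_honest=1 / 34))  # ~0.79
```

The less plausible the innocent explanation, the harder the update, which is exactly the asymmetry that makes "certain" promises costlier to break.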
Of course, this is all quite complicated; it's just one thought that springs to mind. It may be better just to favour the null hypothesis of "evolution is stupid, the human brain is a massive kludge that doesn't normally operate on anything resembling expected utility, massive mistakes are to be expected".
Another guess is that using numbers to describe probability is new enough that our brains haven't had time to evolve any way of dealing with the difference between 33% and 34%. The concept of certainty has been around for a lot longer.
There's a negative pregnant in your statement that makes me think you believe in very recent human evolution. Is there reason to think humans have undergone any biological evolution since the development of agriculture?
Aside from lactose tolerance (or more accurately, lactase persistence, as the "wild type" is "intolerance"), there are differences in enzyme quantities in saliva due to copy number variations between those populations which have a history of consuming carbohydrates and those which do not. There are also the various resistances to malaria. For multiple reasons, including history (e.g., malaria seems to have become endemic in the Mediterranean over the course of the Roman Empire), we know these are all new, anywhere from 6,000 to 500 years before the present. I can give other examples, but these are the most clear and distinct in the literature.
Let me make sure I'm understanding correctly.
If true, that seems moderately strong evidence of biological evolution of humans since the beginning of recorded history (I'm using that interchangeably with the development of agriculture). I'm interested in the evidence for very short-term evolution in humans (<500 years) if you have something that's easy to cite.
My original point was that I'm skeptical that "social pattern" portions of our brain have undergone biological evolution since the development of agriculture. And the OP about changes in the brain allowing greater understanding of statistics seemed like that kind of assertion.
And the OP about changes in the brain allowing greater understanding of statistics seemed like that kind of assertion.
AFAICT I asserted the opposite of that. I said we haven't had recent changes in the brain allowing for greater understanding of statistics, and that's why we're so bad at them.
You're opening up a bigger debate here. I recall that Razib Khan often posts on this subject (there's plenty of evidence, but lots of distinctions to be made) on Gene Expression.
Four reasons: variation, selection, retention, and competition. If you mean biological evolution with definite and noticeable effects in the general population, lactose tolerance is an obvious example.
The vertebrate retina is a kludge, but we don't have a percentage of the population with octopus-style retinas, so there's no selectable variance to favor the genes that produce octopus-type retinas. Similarly, we can't evolve a proper set of long back bones because there's no variance in the human population to select against our ludicrous stacked-vertebrae arrangement.
But the degree to which people favor certainty does vary, and accordingly it is vulnerable to selection pressure. There must accordingly be a reason why certainty bias continues to exist.
Perhaps all variation in certainty favouring is simply due to environmental factors. Remember that all complex adaptations must be universal, so for any of the variance to be genetic there must be a simple difference, something like a single gene being present or absent, which controls how much someone desires certainty.
Even if some is genetic, I would guess that the primary difference is in which side of the System 1 vs. System 2 dichotomy is more likely to win. This affects lots of things other than certainty bias, and so may have been kept where it is by many other factors, with certainty bias being an unfortunate side effect of the general way in which System 1 works (in particular, System 1 seems bad at expressing nuances and continuous ranges; it sees the world almost entirely in good-vs-bad dichotomies).
Certainly there are no true expected utility maximisers out there, so it is no surprise that we should violate expected utility maximisation in some way.
Even having said that, if you demand an explanation the one I just gave still seems reasonably good.
Remember that all complex adaptations must be universal, so for any of the variance to be genetic there must be a simple difference, something like a single gene being present or absent, which controls how much someone desires certainty.
This doesn't appear to be the case for genetic variation in intelligence. (Also, I don't see how it follows in the first place.)
Any complex adaptation, requiring many genes to work together, cannot evolve all at once; that would be too unlikely a mutation. Instead, pieces evolve one by one, each individually useful in the context where it first appears. However, there is not enough selection pressure to evolve a new piece unless the old pieces are already universal, so you would not expect anything complicated to exist in some but not all members of a species.
With intelligence, it seems like many different factors can affect it on the margins, because the brain is a complex organ that can be slowed down, sped up, or damaged in many ways. However, I do not notice a particularly wide intelligence spread among humans: only in rare cases where something is genuinely broken do we find someone less intelligent than a chimpanzee, and we literally never find someone more intelligent by an equivalent amount.
Any complex adaptation, requiring many genes to work together, cannot evolve all at once; that would be too unlikely a mutation. Instead, pieces evolve one by one, each individually useful in the context where it first appears. However, there is not enough selection pressure to evolve a new piece unless the old pieces are already universal, so you would not expect anything complicated to exist in some but not all members of a species.
I get that. I don't see how that could imply that quantitative variation must be controlled by a single gene.
I also don't see how the magnitude of variation in intelligence affects the argument ("particularly wide intelligence spread" is subjective).
It doesn't quite have to be controlled by a single gene; I was giving an example. Something like height, which is affected by many factors, could be affected by lots of single-gene substitutions, but you would expect the overall effect to look like an averaging out, not like some humans having one set of decision-making machinery and others having a totally different set.
Perhaps all variation in certainty favouring is simply due to environmental factors.
Could very well be.
Even having said that, if you demand an explanation the one I just gave still seems reasonably good.
Yes, it does. I prefer the one paulfchristiano made, since it applies to a wider range of circumstances (interpersonal and environmental), but the untrustworthy agent explanation works well enough.
We observe two biases simultaneously:
"How silly to treat the difference between 97% and 100% so much differently than the difference between 33% and 34%!"
"How silly to assign 97% probability to things that only happen 70% of the time!"'
You don't get introspective access to the probabilities your brain is implicitly using, so this sort of error is unsurprising. Natural selection isn't going to do anything about it until people start making serious decisions on the basis of explicit expected value calculations.
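The second quoted bias is a calibration failure, which is easy to check once predictions are recorded explicitly; a minimal sketch with invented data:

```python
# Compare stated confidence with observed frequency (data invented).
predictions = [
    (0.97, True), (0.97, True), (0.97, False), (0.97, True),
    (0.97, False), (0.97, True), (0.97, True), (0.97, False),
    (0.97, True), (0.97, True),
]

stated = sum(p for p, _ in predictions) / len(predictions)
observed = sum(hit for _, hit in predictions) / len(predictions)
print(f"stated {stated:.0%}, observed {observed:.0%}")  # stated 97%, observed 70%
```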
Okay, this explanation works!
Without access to the math, we estimate probability in wide bands ("always", "usually", "sometimes", "never"), and evolution favors the "always" band because it is far less likely to leave us starving; besides, how would we have saved the excess from a jackpot win on the savannah anyway? When we then learn math, we discover that 99%, which our intuitive system used to count as "always", isn't actually always, so our half-educated intuitive system now treats it as "usually". What we then need to do is ignore the intuitive system in favor of the mathematically computed payoffs.
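A toy model of that banding story (the band thresholds are invented for illustration):

```python
def naive_band(p: float) -> str:
    """Pre-math intuition: anything that has never failed reads as 'always'."""
    if p >= 0.99:
        return "always"
    if p >= 0.60:
        return "usually"
    if p > 0.0:
        return "sometimes"
    return "never"

def half_educated_band(p: float) -> str:
    """After learning that 99% isn't literally always, only p == 1 qualifies."""
    if p == 1.0:
        return "always"
    if p >= 0.60:
        return "usually"
    if p > 0.0:
        return "sometimes"
    return "never"

print(naive_band(0.99))          # always
print(half_educated_band(0.99))  # usually -- a whole-band demotion for a 1% change
```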
Okay, I'm happy with that.
Note the implicit selection effect here: cognitive biases that did get selected out of the gene pool don't get posts on Less Wrong, so we're left with heuristics that helped more than they hurt (all things considered) in the ancestral environment.
Before I get into concocting just-so stories, have you read much of the sequence on evolution? (Particularly this post?)
(I'm not seeking to be rude; it's just that an understanding of how evolution actually operates on species is pretty important to the explanation of any one trait.)
I don't see how the cost of computation can be the deciding factor here. The exact computation (34% - 33% or whatever) would probably take fewer CPU instructions than the evolved hack. And anyway both costs are tiny, because the human brain's processing power is comparable to all computers in the world combined.
I observe that in my experience, acting confident is highly correlated with both a preference for certainty and sexual attractiveness. That might have something to do with it. Peacock tails aren't especially adaptive, either.
I suspect acting confidently functions as a costly signal that you do in fact have good information, which in turn can signal intelligence and/or contacts in high places.
This rerun doesn't get any better on a second viewing. The post basically amounts to EY insisting that the common preference pattern in the Allais paradox really is irrational. Well, no, it isn't really. Regret isn't necessarily irrational, even when an arguably good decision leads to bad luck, and neither is the desire to avoid it.
Today's post, Zut Allais!, was originally published on 20 January 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Allais Paradox, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.