Although I largely agree, there is little actual experimental support for Maslow's theory. He mostly just made it up. See http://lesswrong.com/lw/2j/schools_proliferating_without_evidence/ . See also, e.g.:
"The uncritical acceptance of Maslow's need hierarchy theory despite the lack of empirical evidence is discussed and the need for a review of recent empirical evidence is emphasized. A review of ten factor-analytic and three ranking studies testing Maslow's theory showed only partial support for the concept of need hierarchy. A large number of cross-sectional studies showed no clear evidence for Maslow's deprivation/domination proposition except with regard to self-actualization. Longitudinal studies testing Maslow's gratification/activation proposition showed no support, and the limited support received from cross-sectional studies is questionable due to numerous measurement problems."
from "Maslow reconsidered: A review of research on the need hierarchy theory", by Wahba and Bridwell.
OK, now take the next step. Since most people who are choosing love, belonging, and esteem over accuracy are not aware they are giving up accuracy, you have to wonder how you can tell when you are doing so. If you are tempted to think that you are an exception who is willing to choose accuracy instead, ask if this is just another kind of group you want to join, or another kind of esteem you hope to acquire. If so, when would this lead you to actually choose more accuracy, vs. just to tell yourself that you so choose?
It seems to me that people here are very aware of the very real ways in which they are in fact very superior to the general public, even the elite public, and also of the very real ways in which they are, in most cases, very inferior to the elite public (and frequently somewhat inferior to the general public). The problem is that they tend to morally endorse both their strengths AND their weaknesses, seeing, as Robin does, both as 'high'. I see the high/low motivation intuition as actually being somewhat rarer in the general world than Robin and most people here see it, and think that it is actually a much more common distinction in this culture than in most cultures. Partly people are simply choosing a measuring standard which makes them look impressive, but partly they are unimpressive BECAUSE they have chosen, without fully understanding what they are doing or what the consequences are, to shape themselves in order to be impressive by a different standard than most people choose.
People don't delude themselves about their own traits; they delude themselves about what traits a human without specific information about his or her self sees as good, choosing to see many of their own traits as good rather than as bad and failing to notice that people who lack those traits consistently see things otherwise. For instance, many people who are good at thinking productively about painful facts treat a trait like an insufficient reluctance to harm those who are not so gifted as a virtue, even creating slogans to that effect, such as "that which can be destroyed by the truth should be". Others rightly see this as a justification for claiming moral superiority while harming those unlike themselves, largely due to status motives.
Wouldn't rationality help people get things on the two bottom tiers? If so, shouldn't your theory predict that people in more dire circumstances are more rational, whereas I believe the opposite tends to be the case?
A much, much simpler explanation is that rationality is hard and not-rationality is easy. Just as all good families are quite similar and bad families are often uniquely different, there's basically one correct epistemology and a whole lot of incorrect ones. Because truly terrible, survival-inhibiting epistemologies have been eliminated through natural (and social) selection, we're left with a bunch that, at the very least, do not inhibit reproduction. Extremely high-quality epistemology does not appear to be particularly conducive to Darwinian reproductive success, so it never exactly got selected for. Indeed, extremely high quality epistemology may be contingent on a certain level of scientific progress, and thus may have only been practicable in the past few centuries. Imagine trying to simply exist in the world 20,000 years ago, where the only answer you could give to nearly any question about nature or how the world worked was, "I haven't a clue."
Much like r...
You might be interested in Bryan Caplan's concept of "rational irrationality" -- it seems to be more or less what you're aiming for:
http://econfaculty.gmu.edu/bcaplan/ratirnew.doc
*Abstract: Beliefs about politics and religion often have three puzzling properties: systematic bias, high certainty, and little informational basis. The theory of rational ignorance (Downs 1957) explains only the low level of information. The current paper presents a general model of “rational irrationality,” which explains all three stylized facts. According to the theory of rational irrationality, being irrational - in the sense of deviating from rational expectations - is a good like any other; the lower the private cost, the more agents buy. A peculiar feature of beliefs about politics, religion, etc. is that the private repercussions of error are virtually nonexistent, setting the private cost of irrationality at zero; it is therefore in these areas that irrational views are most apparent. The consumption of irrationality can be optimal, but it will usually not be when the private and the social cost of irrationality differ – for example, in elections.*
Excellent post. I'm also reminded of this post that I just recently read: Change Incentives, Not Minds.
Now, you’d think that a public choice EconTalk would take the position that there are systemic flaws in our current form of government and so we need to find ways to improve our systems to remove the flaws.
Instead, Boudreaux’s position, as far as I could tell, was “Well, there are systemic flaws in democracy…so let’s educate people about them”. It’s a complete non sequitur, which I found very frustrating, because someone who understands public choice ought to be able to see the glaring flaw in this strategy. It’s like saying: “In a democracy, voters are rationally ignorant…so let’s educate voters about rational ignorance to fix it”, or “In a democracy, concentrated interests with lower transaction costs tend to win in the political marketplace, so let’s teach people this to fix it”. I heard no mention of changing the rules so as to change these incentives. Incentives are like the laws of physics – they work whether or not you believe in them, and whether or not you know about them.
We can probably convince some intelligent people to become rationalists simply by talking to ...
Anybody have ideas on what might be realistically achievable rationality incentives?
One simple idea is to make rationality more popular, but even if that worked, it would probably just make "rationality" more popular, where "rationality" is defined as screaming that your opponents are irrational.
Another possibility is to increase elitism in government and business wrt mathematical & scientific knowledge - this wouldn't help the public at large be rational, but it would at least keep innumerate policy-makers out to a certain extent.
I think perhaps the most important thing we could do, though, would be to aspire to becoming more well-rounded humans ourselves. The trouble with rationalists is that we aren't typically the kind of people one appreciates unless one is already a rationalist. But a genuine phronimos is a very magnetic personality, a real role model, because they have all the other graces as well. A personality like, say, Stephen Lewis, promoting rationalism - now that would be a force. (Don't get me wrong, what he has been doing is more important.)
While it's undeniable that the social prestige of a belief is not necessarily dependent on its truth, and that social pressures often prevent people from arriving at true beliefs, it's also important to remember that:
Hence I would warn against drawing the implication that the aspiring rationalist's project is in any sense futile.
"To put it another way, irrationalists free-ride on the real-world material-comfort achievements of rationalists. "
This is way too rationality-centric. People who don't work a lot free-ride on the real-world material-comfort achievements of workaholics. People who are not creative or entrepreneurial free-ride on those who invent new products and services. There is a hell of a lot more to productivity and world accomplishments than rationality.
For example, a rational person might work much less than an irrational person who let their primitive ...
love, esteem and belonging are typically not achieved by coming up with a plan to get them (coming up with a plan to make someone like you is often called manipulative and is widely criticized).
Roko, I'm surprised you didn't see the flaw in this sentence. Trying to make a particular person like you (from scratch) is indeed a bad plan, but that's not the thing one should necessarily attempt to do in the first place, and in fact it's one of the traps people characteristically fall into by not being rational.
One who is trying to optimize "love, estee...
Voted up.
if you have to choose between fitting in with your group etc and believing the truth, you should shun the truth.
I think many people develop a rough map of other people's beliefs, to the extent that they avoid saying things that would compromise fitting in the group they're in. Speaking of which:
irrationalists free-ride on the real-world achievements of rationalists
Trying to get to level 4, are we? (Clearly I'm not ;)) Conversely, you could argue that "irrationalists" are better at getting things done due to group leverage, and that rationalists free-ride on those achievements.
Possibly you're unfairly conflating scientific knowledge and technical expertise with rationality: plenty of discoveries have been made without the aid of Bayesian Rationality, and a fair number (rising as you go back through history) without the Scientific Method either. Furthermore, this sort of knowledge doesn't really translate into private benefits, only inventions that largely benefit everyone. Quite possibly it is the dutiful, the lucky and the knowledgeable who are bearing the load.
Good post. I feel vaguely uneasy about unconditional praise for scientists, as I think there are far too many now, and many of them simply belong to a different species of politician, as illustrated in this post. Good scientists are heroes, bad scientists are just as annoying as everyone else (and maybe more so), and it's very difficult for outsiders to tell the difference.
I guess the big issue is: rationality is hard and requires sacrifice, so there's no reason to expect people to be rational unless there's some kind of pressure on them to do so. In most areas of human activity, that pressure just isn't strong enough.
Does "winning" mean achieving your "true" values, or does it just mean succeeding at doing something you're trying to do? In the former case, equating "quality of life" to "winning" sneaks in the assumption that people are egoists (and don't care about quantity of life).
However, although the comfort that we experience (in the developed world) due to our modern technology is very much a product of the analytic-rational paradigm, that comfort is given roughly equally to everyone and is certainly not given preferentially to the kind of person who most contributed causally to it happening, i.e. to scientists, engineers and great thinkers.
How much have scientists and engineers contributed to our standard of living? Probably a good amount, but why do we have scientists and engineers? My impression is that our current high ...
While the income/happiness correlation does exist, it is an internal comparison rather than an external one. See for example this breakdown by country. The data suggests that people construct their notion of happiness in part comparatively to the material wealth of those around them. While this might not apply to very basic needs (i.e. people starving) it seems that this starts to have a substantial impact before all the physiological needs are met. I'm incidentally not convinced that the top tier on Maslow's pyramid is anything other than a culturally me...
Interesting post, good explanation of what's keeping rationality from being more practically useful, duly upvoted, but change the spelling in the title to "tragedy".
You're getting at something that feels half-right, but only half.
the comfort that we experience (in the developed world) due to our modern technology is very much a product of the analytic-rational paradigm, that comfort is given roughly equally to everyone
Generalizing from too small a sample? I have argued previously that the analytic-rational paradigm mostly has an impact on societies, but societies are not homogeneous collections of individuals: large numbers of individuals in "comfortable" societies are in fact quite miserable, the few who...
There's some truth to this, but I think this is half the story.
Clearly people do trade rationality for signaling (as evidenced by people changing their predictions when you ask them to bet), but people are also bad at it when they're trying (there are plenty of examples of people failing even when they don't get any signaling benefit).
You have to decide how much effort to put into living up to your epistemic potential, and how much into increasing that potential.
It seems that we work harder on the latter, as well as the former in some specific cases where we know that epistemic rationality is especially important.
the effects that their beliefs have on their quality of life, i.e. on how it is that beliefs make people win
Equating "quality of life" to "winning" seems to me to sneak in the assumption that people are egoists with short time horizons (in their "true" values and not just their stated values).
uptake of cryonics ...
As if it were anything other than irrational beliefs that people on lesswrong take up to signal their belonging to this community... Yes, everyone else is irrational, except the particular group the speaker belongs to.
I can see thousands of identical claims being made in thousands of different communities, each criticizing other groups' signaling irrationality.
In Brief: Making yourself happy is not best achieved by having true beliefs, primarily because the contribution of true beliefs to material comfort is a public good that you can free ride on, but the signaling benefits and happiness benefits of convenient falsehoods pay back locally, i.e. you personally benefit from your adoption of convenient falsehoods. The consequence is that many people hold beliefs about important subjects in order to feel a certain way or be accepted by a certain group. Widespread irrationality is ultimately an incentive problem.
Note: this article has been edited to take into account Tom McCabe, Vladimir_M and Morendil's comments.[1]
In asking why the overall level of epistemic rationality in the world is low and what we can do to change that, it is useful to think about the incentives that many people face concerning the effects that their beliefs have on their quality of life, i.e. on how it is that beliefs make people win.
People have various real and perceived needs, of which our material/practical needs and our emotional needs are two very important subsets. Material/practical needs include adequate nutrition, warmth and shelter, clothing, freedom from crime or attack by hostiles, sex and healthcare. Our emotional needs include status, friendship, family, love, a feeling of belonging and perhaps something called "self-actualization".
Data strongly suggests that when material and practical needs are not satisfied, people live extremely miserable lives (this can be seen in the happiness/income correlation—note that very low incomes predict very low happiness). The comfortable life that we lead in developed countries seems to mostly protect us from the lowest depths of anguish, and I would postulate that a reasonable explanation is that almost all of us never starve, die of cold or get killed in violence.
The comfort that we experience (in the developed world) due to our modern technology is very much a product of the analytic-rational paradigm. That is to say, a tradition of rational, analytic thinking stretching back through Watson & Crick, Bardeen, Einstein, Darwin, Adam Smith, Newton, Bacon, etc., is a crucial (necessary, and "nearly" sufficient) reason for our comfort.
However, that comfort is given roughly equally to everyone and is certainly not given preferentially to the kind of person who most contributed causally to it happening, including scientists, engineers and great thinkers (mostly because the people who make crucial contributions are usually dead by the time the bulk of the benefits arrive). To put it another way, irrationalists free-ride on the real-world material-comfort achievements of rationalists.
This means that once you find yourself in a more economically developed country, your individual decisions about improving the quality of your own life will (mostly) not involve thinking in the rational vein that got you to the quite high quality of life you already enjoy. I have been reading a good self-help book which laments that studies have shown that 50% of one's happiness in life is genetically determined—highlighting the transhumanist case for re-engineering humanity for our own benefit—but that does not mean that to be individually happier you should become an advocate for transhumanist paradise engineering, because such a project is a public good. It would be like trying to get to work faster by single-handedly building a subway.
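To make the free-rider logic concrete, here is a toy calculation (the numbers are made up purely for illustration). Suppose a rationality-driven project costs its builder $c = 100$ units of effort and yields a comfort benefit of $b = 2$ units to each of $N = 1000$ people, builder included:

\[
bN = 2 \times 1000 = 2000 \gg c = 100, \qquad b - c = 2 - 100 = -98 < 0.
\]

The project is well worth doing from society's point of view, but any individual's private return from building it is negative, so the individually sensible move is to wait and ride free on whoever does build it.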
The rational paradigm works well for societies, but not obviously for individuals
Instead, to ask what incentives apply to people's choice of beliefs and overall paradigm is to ask what beliefs will best facilitate the fulfillment of those needs to which individual incentives apply. Since our material/practical needs are relatively easily fulfilled (at least amongst non-dirt-poor people in the west), we turn our attention to emotional needs such as:
love and belonging, friendship, family, intimacy, group membership, esteem, status and respect from others, sense of self-respect, confidence
The beliefs that most contribute to these things generally deviate from factual accuracy, because factually accurate beliefs are picked out as being "special" or optimal only by the planning model of winning, and love, esteem and belonging are typically not achieved by coming up with a plan to get them (coming up with a plan to make someone like you is often called manipulative and is widely criticized). In fact, love and belonging are typically much better fostered by shared non-analytic or false beliefs, for example a common belief in God or in something like religion (e.g. New Age stuff), in a political party or a left/right/traditional/liberal alignment, and/or by personality variables, which are themselves influenced by beliefs in a way that doesn't go via the planning model.
The bottom line is that many people's "map" is not really like an ordinary map, in that its design criterion is not simply to reflect the territory; it is designed to make them fit into a group (religion, politics), feel good about themselves (belief in immortal soul and life after death), fit into a particular cultural niche or signal personality (e.g. belief in Chakras/Auras). Because of the way that incentives are set up, this may in many cases be individually utility maximizing, i.e. instrumentally rational. This seems to fit with the data—80% of the world are theists, including a majority of people in the USA, and as we have complained many times on this site, the overall level of rationality across many different topics (quality of political debate, uptake of cryonics, lack of attention paid to "big picture" issues such as the singularity, dreadful inefficiency of charity) is low.
Bryan Caplan has an economic theory that formalizes this: he calls it rational irrationality. (Thanks to Vladimir_M for pointing out that Caplan had already formalized the idea.)
If the most pleasant belief for an individual differs from the belief dictated by rational expectations, agents weigh the hedonic benefits of deviating from rational expectations against the expected costs of self-delusion.
Beliefs respond to relative price changes just like any other good. On some level, adherents remain aware of what price they have to pay for their beliefs. Under normal circumstances, the belief that death in holy war carries large rewards is harmless, so people readily accept the doctrine. But in extremis, as the tide of battle turns against them, the price of retaining this improbable belief suddenly becomes enormous. Widespread apostasy is the result as long as the price stays high; believers flee the battlefield in disregard of the incentive structure they recently affirmed. But when the danger passes, the members of the routed army can, and barring a shift in preferences will, return to their original belief. They face no temptation to convert to a new religion or flirt with atheism.
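A minimal formal sketch of the trade-off Caplan describes (the notation here is mine, chosen for illustration, and not taken from his paper): let the agent pick a degree of deviation from rational expectations $x \ge 0$, getting hedonic benefit $B(x)$ (increasing and concave) at expected material cost $p\,C(x)$, where $C$ is increasing and convex and $p$ is the probability that the belief ever has to be acted on:

\[
\max_{x \ge 0} \; U(x) = B(x) - p\,C(x), \qquad \text{with } B'(x^*) = p\,C'(x^*) \text{ at an interior optimum.}
\]

When $p \approx 0$, as in politics and religion where the private repercussions of error are virtually nonexistent, the chosen $x^*$ is large; when $p$ suddenly jumps, as when the tide of battle turns, $x^*$ collapses toward zero, which is the battlefield apostasy in the passage above.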
[1] The article was originally written with a large emphasis on Maslow's hierarchy of needs, but it seems that this may be a "truthy" idea that propagates despite failures to confirm it experimentally.