In Brief: Making yourself happy is not best achieved by holding true beliefs. The contribution of true beliefs to material comfort is a public good that you can free ride on, whereas the signaling and happiness benefits of convenient falsehoods pay back locally: you personally benefit from adopting them. The consequence is that many people hold beliefs about important subjects in order to feel a certain way or to be accepted by a certain group. Widespread irrationality is ultimately an incentive problem.
Note: this article has been edited to take into account comments from Tom McCabe, Vladimir_M and Morendil.1
In asking why the overall level of epistemic rationality in the world is low and what we can do to change that, it is useful to think about the incentives that many people face concerning the effects that their beliefs have on their quality of life, i.e. on how it is that beliefs make people win.
People have various real and perceived needs, of which material/practical needs and emotional needs are two very important subsets. Material/practical needs include adequate nutrition, warmth and shelter, clothing, freedom from crime or attack by hostiles, sex and healthcare. Emotional needs include status, friendship, family, love, a feeling of belonging and perhaps something called "self-actualization".
Data strongly suggests that when material and practical needs are not satisfied, people live extremely miserable lives (this can be seen in the happiness/income correlation—note that very low incomes predict very low happiness). The comfortable life that we lead in developed countries seems to mostly protect us from the lowest depths of anguish, and I would postulate that a reasonable explanation is that almost none of us starves, freezes to death or is killed in violence.
The comfort that we experience (in the developed world) due to our modern technology is very much a product of the analytic-rational paradigm. That is to say, a tradition of rational, analytic thinking stretching back through Watson and Crick, Bardeen, Einstein, Darwin, Adam Smith, Newton, Bacon, and others is a crucial (necessary, and "nearly" sufficient) cause of our comfort.
However, that comfort is distributed roughly equally to everyone, and certainly not given preferentially to the people who contributed most to bringing it about: scientists, engineers and great thinkers (mostly because those who make crucial contributions are usually dead by the time the bulk of the benefits arrive). To put it another way, irrationalists free-ride on the real-world material-comfort achievements of rationalists.
This means that once you find yourself in a more economically developed country, your individual decisions about improving your own quality of life will (mostly) not involve the kind of rational thinking that put you at the quite high quality of life you already enjoy. I have been reading a good self-help book which laments that studies have shown roughly 50% of one's happiness in life to be genetically determined. That highlights the transhumanist case for re-engineering humanity for our own benefit, but it does not mean that to be individually happier you should become an advocate for transhumanist paradise engineering, because such a project is a public good. It would be like trying to get to work faster by single-handedly building a subway.
The rational paradigm works well for societies, but not obviously for individuals
Instead, to ask what incentives apply to people's choice of beliefs and overall paradigm is to ask what beliefs will best facilitate the fulfillment of those needs to which individual incentives apply. Since our material/practical needs are relatively easily fulfilled (at least among the non-poor in the West), we turn our attention to emotional needs such as:
love and belonging, friendship, family, intimacy, group membership, esteem, status and respect from others, sense of self-respect, confidence
The beliefs that most contribute to these things generally deviate from factual accuracy. Factually accurate beliefs are picked out as "special" or optimal by the planning model of winning, but love, esteem and belonging are typically not achieved by coming up with a plan to get them (coming up with a plan to make someone like you is often called manipulative, and is widely criticized). In fact, love and belonging are typically much better fostered by shared nonanalytic or false beliefs: for example, a common belief in God or in something like religion (e.g. New Age spirituality), in a political party or a left/right/traditional/liberal alignment, and/or by personality variables, which are themselves influenced by beliefs in a way that does not go via the planning model.
The bottom line is that many people's "map" is not really like an ordinary map, in that its design criterion is not simply to reflect the territory. It is designed to make them fit into a group (religion, politics), feel good about themselves (belief in an immortal soul and life after death), or fit into a particular cultural niche and signal personality (e.g. belief in chakras/auras). Given the way incentives are set up, this may in many cases be individually utility-maximizing, i.e. instrumentally rational. This seems to fit the data: 80% of the world's population are theists, including a majority of people in the USA, and, as we have complained many times on this site, the overall level of rationality across many different topics (the quality of political debate, uptake of cryonics, the lack of attention paid to "big picture" issues such as the singularity, the dreadful inefficiency of charity) is low.
Bryan Caplan has an economic theory that formalizes this; he calls it rational irrationality (thanks to Vladimir_M for pointing out that Caplan had already formalized the idea):
If the most pleasant belief for an individual differs from the belief dictated by rational expectations, agents weigh the hedonic benefits of deviating from rational expectations against the expected costs of self-delusion.
Beliefs respond to relative price changes just like any other good. On some level, adherents remain aware of what price they have to pay for their beliefs. Under normal circumstances, the belief that death in holy war carries large rewards is harmless, so people readily accept the doctrine. But in extremis, as the tide of battle turns against them, the price of retaining this improbable belief suddenly becomes enormous. Widespread apostasy is the result as long as the price stays high; believers flee the battlefield in disregard of the incentive structure they recently affirmed. But when the danger passes, the members of the routed army can, and barring a shift in preferences will, return to their original belief. They face no temptation to convert to a new religion or flirt with atheism.
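Caplan's trade-off can be sketched as a toy decision rule (an illustrative sketch with made-up numbers, not Caplan's actual formalism): an agent retains a pleasant falsehood so long as its hedonic benefit exceeds the expected material cost of holding it, and abandons it when circumstances raise the probability of paying that cost.

```python
# Toy sketch of Caplan-style "rational irrationality".
# The function name and all numbers are hypothetical, chosen only to
# illustrate the threshold logic described in the quote above.

def keeps_false_belief(hedonic_benefit, cost_if_wrong, p_cost):
    """Retain the pleasant falsehood iff its hedonic benefit exceeds
    the expected material cost (probability x magnitude) of holding it."""
    return hedonic_benefit > p_cost * cost_if_wrong

# Peacetime: acting on the doctrine almost never costs anything,
# so the belief is cheap and is retained.
print(keeps_false_belief(hedonic_benefit=1.0, cost_if_wrong=100.0, p_cost=0.001))  # True

# The tide of battle turns: the expected cost spikes, the belief is
# dropped, and "widespread apostasy is the result".
print(keeps_false_belief(hedonic_benefit=1.0, cost_if_wrong=100.0, p_cost=0.5))    # False
```

When the danger passes, p_cost falls back toward zero and the inequality flips again, which matches the quote's observation that the routed army returns to its original belief.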
1: The article was originally written with a large emphasis on Maslow's hierarchy of needs, but it seems this may be a "truthy" idea that propagates despite failing to be confirmed experimentally.
People don't delude themselves about their own traits; they delude themselves about which traits a person without specific information about himself or herself would see as good, choosing to see many of their own traits as good rather than bad, and failing to notice that people who lack those traits consistently see things otherwise. For instance, many people who are good at thinking productively about painful facts treat an insufficient reluctance to harm those who are not so gifted as a virtue, even creating slogans to that effect such as "that which can be destroyed by the truth should be". Others rightly see this as a justification for claiming moral superiority while harming those unlike themselves, largely due to status motives.
I'd have thought that the biggest risk from being attached to "That which can be destroyed by the truth should be" is being too ready to believe statements which can cause destruction.