The deeper solution to the mystery of moralism—Belief in morality and free will is hazardous to your mental health
[Crossposted.]
Sexual predator Jerry Sandusky will serve his time in a minimum-security prison, where he’s allowed groups of visitors five days a week.
What beliefs in moral realism and free will do is nothing less than change the architecture of decision-making. When we practice principles of integrity and internalize them, they and nonmoral considerations co-determine our System 1 judgments. According to moral realism and free will, by contrast, moral good is the product of conscious free choice, so System 2 contrasts its moral opinion with System 1’s intuition, for which System 2 compensates, and usually overcompensates.

Consider a voter who must weigh the duty to vote against the duty to avoid “lowering the bar” when both candidates are ideologically and programmatically distasteful. System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide instead? It makes the qualitative judgment that System 1 is biased one way or the other and corrects for that bias. This implicates the overcompensation bias: conscious attempts to counteract biases usually overcorrect. A voter who thinks a bias toward shirking duty needs correcting will vote when, all things considered, he doesn’t really want to. A voter who corrects for “lowering the bar” will be excessively purist. Whatever standard the voter uses will be taken too far.
- It retards people in adaptively changing their principles of integrity.
- It prevents people from questioning their so-called foundations.
- It systematically exaggerates the compellingness of moral claims.
Let's talk about politics
Hello fellow LWs,
As I have read repeatedly on LW (http://lesswrong.com/lw/gw/politics_is_the_mindkiller/), you don't like discussing politics because it produces biased thinking and arguing, which I agree is true for the general populace. What I find curious is that you don't seem even to try it here, where people would be very likely to keep their identities small (www.paulgraham.com/identity.html). It should be the perfect (or close enough) environment for talking politics, because you can have reasonable discussions here.
I do understand that you don't like to bring politics into discussions about rationality, but I don't understand why there shouldn't be dedicated political threads here. (Maybe you could flag them?)
all the best
Viper
Subsuming Purpose, Part 1
Summary:
The purpose of this entry is to establish the existence of local equilibria that can shift an ends-driven organization (an organization whose primary focus is a particular purpose) into a means-driven organization (an organization whose primary focus is the means of achieving its purpose, rather than the purpose itself).
Imagine you run a charity, and you have two star employees; one shares your goals without emphasizing any particular means, while the other believes in the cause but believes firmly that fundraising is the best means to that end. Both contribute to your charity, but the fundraiser does more good overall. The fundraiser enables your organization. Whom do you choose as your successor?
Who will your successor choose as their successor?
The person who believes in the purpose will choose the best person for achieving that purpose. The person who believes in a specific means to achieve that end will choose the best person for those means. The means will subsume the ends. A person who values a specific means, say fundraising, is more likely to promote fellow fundraisers; he values their contributions more. Specialists, and in particular the lines of thinking which lead to specialization, create rigidity in the organization.
Suppose that you choose the fundraiser. The fundraiser, by dint of having chosen to specialize in fundraising, probably believes that fundraising is more important than the alternative means of supporting the organization: he will probably choose to promote other effective fundraisers over their alternatives.
And now people who don't agree that fundraising should come first will see their charity becoming increasingly subverted: fundraising is rewarded over the charitable purpose of the organization. They will leave, or protest; if their protests aren't heeded, for example because fundraisers who believe in fundraising already run the organization, they may be marginalized. Such individuals may be selected out, either by self-selection or by management's explicit opposition to promoting people who are likely to cause trouble for them in the future.
Generalized:
In the example above, I made one particular assumption: that somebody who possesses some choice-driven characteristic X (competency at fundraising in the example) is more likely to believe that X is important, and will favor X over alternative characteristics. It's not necessary that this always be the case; a generalist may also possess characteristic X. It's only necessary that p(Y|X) > p(Y|¬X), where X is possession of characteristic X, and Y is belief that X is an important characteristic to have (in the example, belief that fundraising is the most valuable pursuit for the charitable organization).
Any preference, once established, which follows a tendency such that p(Y|X) > p(Y|¬X) will cement itself into the organization once given a foothold; those who are selected based on X will also have, on average, a preference for X. They in turn will select individuals with X.
The danger of organizational specialization, as opposed to individual specialization, arises when the preference for X extends to a preference for the preference itself: when, given two people who both have X, those who also prefer X (those who have characteristic Y) are favored over those who do not. This is the point at which selecting people for X and Y becomes a runaway process, a process which may subsume the original purpose of the organization.
When those who do not have a preference for X begin to believe that X has already overtaken the original purpose of the organization, the meaningful possibilities are that they will either fight it or leave. If they simply leave, they harden the preference for X; there are fewer individuals left in the organization who oppose Y. If they fight it and win, they've won for a day; an equilibrium has not yet been reached. If they fight it and lose, they establish a preference for the preference itself: people who disagree with the orthodoxy of X begin to be seen as potential conflict creators in the organization, and, just as problematically, the revealed preference for X may alter the decisions of those who might otherwise have joined; a non-Y individual may choose another organization that better suits their preferences.
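The selection dynamic above can be sketched as a toy simulation. All the numbers here are invented for illustration: p(Y|X) = 0.7 versus p(Y|¬X) = 0.3, a 20-member organization, and ten-candidate hiring pools. When believers in X (those with Y) get to screen the candidate pool, the fraction of the organization holding belief Y drifts upward from its 0.5 baseline toward a higher equilibrium, roughly 0.625 under these parameters.

```python
import random

# Invented parameters: people who possess characteristic X are likelier
# to hold the belief Y that X is important, i.e. p(Y|X) > p(Y|not-X).
P_Y_GIVEN_X = 0.7
P_Y_GIVEN_NOT_X = 0.3

def random_person(rng):
    """Return a (has_X, has_Y) pair for a fresh candidate."""
    has_x = rng.random() < 0.5
    p_y = P_Y_GIVEN_X if has_x else P_Y_GIVEN_NOT_X
    return (has_x, rng.random() < p_y)

def simulate(n_members=20, n_rounds=200, seed=None):
    """Replace members one at a time. With probability equal to the current
    Y-fraction, a Y-believer screens the hiring pool and favors candidates
    with X. Returns the final fraction of members holding belief Y."""
    rng = random.Random(seed)
    org = [random_person(rng) for _ in range(n_members)]
    for _ in range(n_rounds):
        candidates = [random_person(rng) for _ in range(10)]
        y_fraction = sum(1 for _, y in org if y) / len(org)
        if rng.random() < y_fraction:
            # A Y-believer does the screening: X-possessors are favored
            # (fall back to the full pool in the rare all-non-X case).
            candidates = [c for c in candidates if c[0]] or candidates
        org.pop(0)                      # longest-serving member leaves
        org.append(rng.choice(candidates))
    return sum(1 for _, y in org if y) / len(org)

if __name__ == "__main__":
    # Without screening the long-run Y-fraction sits near 0.5; with it,
    # each hire has Y with probability 0.5 + 0.2*y, a fixed point at 0.625.
    print(simulate(seed=0))
```

The point of the sketch is only that a small conditional gap, p(Y|X) > p(Y|¬X), plus selection on X, is enough to ratchet belief Y upward without anyone ever selecting on Y directly.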
Every Cause Wants to be a Cult. Every belief wants to be an orthodoxy. Orthodoxy is a stable equilibrium, the pit surrounding the gently sloped hill of idea diversity.
The supposedly hard problem of consciousness and the nonexistence of sense data: Is your dog a conscious being?
When is Winning not Winning?
Lately I'd gotten jaded enough that I simply accepted that different rules apply to the elite class. As Hanson would say, most rules are there specifically to curtail those who don't have the ability to avoid them and to be side-stepped by those who do - it's why we evolved such big, manipulative brains. So when this video recently made the rounds it shocked me to realize how far my values had drifted over the past several years.
(The video is not about politics; it is about status. My politics are far from Penn's.)
http://www.youtube.com/watch?v=wWWOJGYZYpk&feature=sharek
It's good we have people like Penn around to remind us what it was like to be teenagers and still expect the world to be fair, so our brains can be used for more productive things.
By the measure our society currently uses, Obama was winning. Penn was not. Yet Penn’s approach is the winning strategy for society. Brain power is wasted on status games and social manipulation when it could be used for actually making things better. The machinations of the elite class are a huge drain of resources that could be better used in almost any other pursuit. And yet the elites are admired high-status individuals who are viewed as “winning” at life. They sit atop huge piles of utility. Idealists like Penn are regarded as immature for insisting on things as low-status as “the rules should be fair and apply identically to everyone, from the inner-city crack dealer to the Harvard post-grad.”
The “Rationalists Should Win” meme is a good one, but it risks corrupting our goals. If we focus too much on “Rationalists Should Win” we risk going for near-term gains that benefit us. Status, wealth, power, sex. Basically hedonism – things that feel good because we’ve evolved to feel good when we get them. Thus we feel we are winning, and we’re even told we are winning by our peers and by society. But these things aren’t of any use to society. A society of such “rationalists” would make only feeble and halting progress toward grasping the dream of defeating death and colonizing the stars.
It is important to not let one’s concept of “winning” be corrupted by Azathoth.
ADDED 5/23:
It seems the majority of comments on this post are from people who disagree on the basis that rationality is a tool for achieving ends, but not for telling you which ends are worth achieving.
I disagree. As is written, "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which." And rationality can help decide which is which. In fact, without rationality you are much more likely to be partially or fully mistaken when you decide.
Global warming is a better test of irrationality than theism
Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate.
Theism is a symptom of excess compartmentalisation, of not realising that absence of evidence is evidence of absence, of belief in belief, of privileging the hypothesis, and similar failings. But these are not intrinsically huge problems. Indeed, someone with a mild case of theism can have the same anticipations as someone without, and update on evidence in the same way. If they have moved their belief beyond refutation, then in theory it fails to constrain their anticipations at all; and often this is the case in practice.
Contrast that with someone who denies the existence of anthropogenic global warming (AGW). This has all the signs of hypothesis privileging, but also reeks of fake justification, motivated skepticism, massive overconfidence (if they are truly ignorant of the facts of the debate), and simply the raising of politics above rationality. If I knew someone was a global warming skeptic, then I would expect them to be wrong in their beliefs and their anticipations, and to refuse to update when evidence worked against them. I would expect their judgement to be much more impaired than a theist's.
Of course, reverse stupidity isn't intelligence: simply accepting AGW doesn't make one more rational. I work in England, in a university environment, so my acceptance of AGW is the default position and not a sign of rationality. But if someone is in a milieu that discourages belief in AGW (one stereotype being heavily Republican areas of the US) and has risen above this, then kudos to them: their acceptance of AGW is indeed a sign of rationality.
Hiroshima Day
On August 6th, in 1945, the world saw the first use of atomic weapons against human targets. On this day 63 years ago, humanity lost its nuclear virginity. Until the end of time we will be a species that has used fission bombs in anger.
Time has passed, and we still haven't blown up our world, despite a close call or two. Which makes it difficult to criticize the decision - would things still have turned out all right, if anyone had chosen differently, anywhere along the way?
Maybe we needed to see the ruins, of the city and the people.
Maybe we didn't.
There's an ongoing debate - and no, it is not a settled issue - over whether the Japanese would have surrendered without the Bomb. But I would not have dropped the Bomb even to save the lives of American soldiers, because I would have wanted to preserve that world where atomic weapons had never been used - to not cross that line. I don't know about history to this point; but the world would be safer now, I think, today, if no one had ever used atomic weapons in war, and the idea was not considered suitable for polite discussion.
I'm not saying it was wrong. I don't know for certain that it was wrong. I wouldn't have thought that humanity could make it this far without using atomic weapons again. All I can say is that if it had been me, I wouldn't have done it.