The deeper solution to the mystery of moralism—Believing in morality and free will are hazardous to your mental health

-19 metaphysicist 14 October 2012 01:21PM

[Crossposted.]

The complex relationship between Systems 1 and 2 and construal level

The distinction between pre-attentive and focal-attentive mental processes has dominated cognitive psychology for some 35 years. In the past decade, another cognitive dichotomy has arisen, specific to social psychology: processes of abstract construal (far cognition) versus concrete construal (near cognition). This essay theorizes about the relationship between these dichotomies to clarify further how believing in the existence of free will and in the objective existence of morality can thwart reason by causing you to choose what you don’t want.

The state of the art on pre-attentive and focal-attentive processes is Daniel Kahneman’s book Thinking, Fast and Slow, where he calls pre-attentive processes System 1 and focal-attentive processes System 2. The reification of processes into fictional systems also resembles Freud’s System Cs (Conscious) and System Pcs (Preconscious). I’ll adopt the language of System 1 and System 2, but readers can apply their understanding of the conscious/preconscious, pre-attentive/focal-attentive, or automatic/controlled processes dichotomies. They name the same distinction: System 1 consists of processes that occur quickly and effortlessly, in parallel, outside awareness; System 2 consists of processes that occur slowly and effortfully, in sequential awareness. (“Awareness” in this context refers to the contents of working memory, which accompany System 2 activity, rather than to raw experience.)

To integrate Systems 1 and 2 with construal-level theory, note that System 2—the conscious part of our minds—can perform any of three routines in deciding whether to take some action, such as whether to vote in an election. Voting is a good example not just for timeliness but also for its linkage to our main concern with morality: it is a clear case of an action without tangible benefit. The potential voter might:

Case 1. Make a conscious decision to vote based on applying the principle that citizens owe a duty to vote in elections.
Case 2. Decide to be open to the candidates’ substantive positions and vote only if either candidate seems worthy of support.
Case 3. Experience a change of mind between 1 and 2.

The preceding were examples of the three routines System 2 can perform:

Case 1. Make the choice.
Case 2. “Program” System 1 to make the choice based on automatic criteria that don’t require sequential thinking.
Case 3. Interrupt System 1 in the face of anomalies.

When System 2 initiates action, whether it retains the power to decide or passes it to System 1 is the difference between concrete and abstract construal. The second routine is key to understanding how Systems 1 and 2 work together to produce the effects construal-level theory predicts. Keep in mind that the unconscious, automatic System 1 includes not just hardwired patterns but also skilled habits. Meanwhile, System 2 is notoriously “lazy,” unwilling to interrupt System 1 as in Case 3. Letting System 1 have its way produces the perennial biases that plague it, but the highest levels of expertise also occur in System 1.

A delegated System 1 operates with the potentially complex holistic patterns that typify far cognition. This pattern counts as far because we offload distant matters to System 1 but exercise sequential control through System 2 as immediacy looms—although there are many exceptions. It is critical to distinguish far cognition from the lazy failure of System 2 to perform properly in Case 3; such failure isn’t specific to either mode. Far cognition—System 1 acting as delegate for System 2—is a narrower concept than automatic cognition, but far cognition is automatic cognition. Near cognition admits no easy cross-classification.

Belief in free will and moral realism undermine our “fast and frugal heuristics”

The two most important recent books on the cognitive psychology of judgment and decision are Thinking, Fast and Slow by Daniel Kahneman and Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer, and both insist on the contrast between their positions, although the conflicts aren’t obvious. Kahneman explains System 1 biases as arising when its mechanisms are employed outside their range of evolutionary usefulness; Gigerenzer describes “fast and frugal heuristics” that sometimes misfire to produce biases. Where these half-empty and half-full positions on heuristics and biases really differ is in their overall appraisal of near and far processes: Gigerenzer is a far thinker and Kahneman a near thinker, and each is naturally biased toward his preferred mode. Far thought shows more confidence in fast-and-frugal heuristics, since it offloads to System 1, whose province is to employ them.

The fast-and-frugal-heuristics way of thinking is particularly useful for understanding the effect of moral realism and free will: they cause System 2 to supplant System 1 in decision-making. When we apply principles of integrity to regulate our conduct, we sometimes do better in far mode, where System 2 offloads the task of determining compliance to System 1. If, on the contrary, you have a principle of integrity that includes an absolute obligation to vote, you act as in Case 1: on a conscious decision. But principles of integrity do not really take this absolute form; that illusion is begotten by moral realism. A principle of integrity flexible enough for actual use might favor voting (based, say, on a general principle embracing an obligation to perform civic duties) but disfavor it for “lowering the bar” when there’s only a choice between the lesser of two evils. Objectively applying these principles depends on an honest appraisal of the strength of your commitment to each virtue. System 2 is incapable of this feat; when it can be accomplished, it’s due to System 1’s automatic skills, operating unconsciously. Principles of integrity are applied more accurately in far mode than in near mode. [Hat tip to Overcoming Bias for these convenient phrases.]

But belief in moral realism and free will impel moral actors to apply their principles in near-mode. Objective morality and moral realism imply that compliance with morality results from freely willed acts. I’m not going to defend this premise thoroughly here, but this thought experiment might carry some persuasive weight. Read the following in near mode, and introspect your emotions:

 

Sexual predator Jerry Sandusky will serve his time in a minimal security prison, where he’s allowed groups of visitors five days a week.

 


Some readers will experience a sense of outrage. Then remind yourself: there’s no free will. If you believe the reminder, your outrage will subside; if you’ve long been a convinced and consistent determinist, you might not need to remind yourself. Morality inculpates based on acts of free will: morality and free will are inseparable.

A point I must emphasize because of its novelty: it’s System 1 that ordinarily determines what you want. System 2 doesn’t ordinarily deliberate about the subject directly; it deliberates about relevant facts, but in the end, you can only intuit your volition. You can’t deduce it.

What belief in moral realism and free will does is nothing less than change the architecture of decision-making. When we practice principles of integrity and internalize them, they and nonmoral considerations co-determine our System 1 judgments. According to moral realism and free will, by contrast, moral good is the product of conscious free choice, so System 2 contrasts its moral opinion with System 1’s intuition and compensates for the difference, usually overcompensating. The voter had to weigh the imperative of the duty to vote against the duty to avoid “lowering the bar” when both candidates are ideologically and programmatically distasteful. System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide these propositions? System 2 makes the qualitative judgment that System 1 is biased one way or the other and corrects it. This implicates the overcompensation bias, in which conscious attempts to counteract biases usually overcorrect. A voter who thinks correction is needed for a bias toward shirking duty will vote without really wanting to, all things considered. A voter who thinks correction is needed for a bias toward "lowering the bar" will be excessively purist. Whatever standard the voter uses will be taken too far.

Belief in moral realism and free will biases practical reasoning

This essay presents the third of three ways that belief in objective morality and free will can cause people to do what they don’t want to do:

 

  1. It retards people in adaptively changing their principles of integrity.
  2. It prevents people from questioning their so-called foundations.
  3. It systematically exaggerates the compellingness of moral claims.

 

Some will be tempted to think that the third either is contrary to experience or is socially desirable. It’s neither. In moralism, an exaggerated subjective sense of duty and excessive sense of guilt co-exist with unresponsiveness to morality’s practical demands.

Let's talk about politics

-14 WingedViper 19 September 2012 05:25PM

Hello fellow LWs,

As I have read repeatedly on LW (http://lesswrong.com/lw/gw/politics_is_the_mindkiller/), you don't like discussing politics because it produces biased thinking and arguing, which I agree is true of the general populace. What I find curious is that you don't seem to even try it here, where people would be very likely to keep their identities small (www.paulgraham.com/identity.html). It should be the perfect (or close enough) environment for talking politics, because you can have reasonable discussions here.

I do understand that you don't like to bring politics into discussions about rationality, but I don't understand why there shouldn't be dedicated political threads here. (Maybe you could flag them?)

all the best

Viper

 

Subsuming Purpose, Part 1

1 OrphanWilde 10 August 2012 06:45PM

 

Summary:

The purpose of this entry is to establish the existence of local equilibria that introduce deviations into an ends-driven organization (an organization whose primary focus is a particular purpose) and transform it into a means-driven organization (one whose primary focus is the means of achieving its purpose rather than the purpose itself).

Subsuming Purpose, Part 1

Imagine you run a charity, and you have two star employees; one shares your goals without any emphasis on a means, the other believes in the cause but believes firmly in fundraising as the best means to that end.  Both contribute to your charity, but the fundraiser does more good overall.  The fundraiser enables your organization.  Who do you set as your successor?

Who will your successor choose as their successor?

The person who believes in the purpose will choose the best person for achieving that purpose.  The person who believes in a specific means to that end will choose the best person for those means.  The means will subsume the ends.  A person who values a specific means, say fundraising, is more likely to promote fellow fundraisers; he values their contributions more.  Specialists, and in particular the lines of thinking which lead to specialization, create rigidity in the organization.

Suppose that you choose the fundraiser.  The fundraiser, by dint of having chosen to specialize in fundraising, probably believes that fundraising is more important than the alternative means of supporting the organization: he will probably choose to promote other effective fundraisers over their alternatives.

And now the people who don't agree that fundraising is the best means will start protesting, seeing their charity becoming increasingly subverted: fundraising is rewarded over the charitable purpose of the organization.  If their protests aren't heeded (for example, because fundraisers who believe in fundraising already run the organization), they may leave or be marginalized.  Such individuals may be selected out, either by self-selection or by management's explicit opposition to introducing people likely to cause trouble for them in the future.

Generalized:

In the example above, I made one particular assumption: that somebody who possesses some choice-driven characteristic X (competency at fundraising, in the example) is more likely to believe that X is important, and will favor X over alternative characteristics.  It's not necessary that this always be the case; a generalist may also possess characteristic X.  It's only necessary that p(Y|X) > p(Y|¬X), where X is possession of characteristic X, and Y is the belief that X is an important characteristic to have (in the example, the belief that fundraising is the most valuable pursuit for the charitable organization).

Any preference which, once established, follows such a tendency will cement itself into the organization once given a foothold: those who are selected for X will also have, on average, a preference for X, and will in turn select individuals with X.

The danger of organizational specialization, as opposed to individual specialization, arises when the preference extends to a preference for the preference itself: when, given two people who both possess X, the one who also prefers X (who has characteristic Y) is favored over the one who does not.  This is the point at which selecting people for X and Y becomes a runaway process, one which may subsume the original purpose of the organization.

When those who do not have a preference for X begin to believe that X has already overtaken the original purpose of the organization, the meaningful possibilities are that they will either fight it or leave.  If they simply leave, they harden the preference for X: there are fewer individuals in the organization who oppose Y.  If they fight it and win, they've won for a day; an equilibrium has not yet been reached.  If they fight it and lose, they establish a preference for preference: people who disagree with the orthodoxy of X begin to be seen as potential conflict creators.  Just as problematically, revealing the preference for X may alter the decisions of those who might otherwise enter the organization; a non-Y individual may choose another organization that better suits their preferences.

Every Cause Wants to be a Cult.  Every belief wants to be an orthodoxy.  Orthodoxy is a stable equilibrium, the pit surrounding the gently sloped hill of idea diversity.
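The selection dynamic described above can be sketched as a toy simulation. All the specific rates below are invented for illustration; the only assumptions doing real work are that believers preferentially select fellow believers and that non-believers leave at a higher rate:

```python
def hire(f, p_yy=0.9, p_yn=0.5):
    """Expected share of belief-Y holders among new members, when a
    fraction f of selectors hold Y. Selectors holding Y hire Y-candidates
    at rate p_yy; indifferent selectors hire them at the base rate p_yn.
    (Both rates are invented for illustration.)"""
    return f * p_yy + (1 - f) * p_yn

def attrition(f, retain_non_y=0.8):
    """Non-believers leave at a higher rate, renormalizing the population."""
    return f / (f + (1 - f) * retain_non_y)

f = 0.5  # start with believers at the candidate base rate
for generation in range(50):
    f = attrition(hire(f))
# f settles near 0.88, well above the 0.5 base rate: once the
# preference has a foothold, the equilibrium entrenches it.
```

With these particular numbers, hiring alone already pushes the fixed point from 0.5 up to about 0.83; the departure of dissenters hardens it further. That self-reinforcing fixed point is the stable equilibrium of orthodoxy described above.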

 

The supposedly hard problem of consciousness and the nonexistence of sense data: Is your dog a conscious being?

-13 [deleted] 04 August 2012 01:57AM
Of dogs and cows
Are dogs conscious? My guess is that you think so: that’s why they’re termed “sentient.” We assume that dogs see the world much as we do, despite being receptive to different information; they experience the same conscious sense data that we experience. You might nevertheless be prepared to concede the ultimate unfathomability of the question; but if not, consider a related question: when did consciousness first arise in the course of organic evolution?

The reason the questions are obscure deserves scrutiny. I think I know you’re conscious because you say you know what I’m saying when I mention “consciousness” or “experience,” but the limits of my knowledge of consciousness are telling: I will never find some physical structure to explain it. This isn’t due to lack of empirical research or of theoretical ingenuity. To explain an observation, you must describe it, and the language used for describing conscious experience is the same language used for describing the object the experience refers to. The most I can do to describe the experienced “brownness” is to refer to its cause. When I see a brown cow, I can only describe the raw experience as “brown”: the color that ordinarily gives rise to the experience. Thus, I necessarily omit from the description exactly what I want to explain: the qualitative character of the “brown” experience.

Apparent self-evidence
If qualitative consciousness existed, it would be utterly inexplicable; yet the evidence of direct experience seems self-evidently to support its existence. This seemingly immediate awareness of our raw mental states seems to be just what it is like to be ourselves (Thomas Nagel). Regardless of the apparent indubitability of the intuition that we have raw experiential states, the intuition remains nothing more than a belief, and beliefs are subject to illusions that mislead us systematically.

One reason you might resist the conclusion that qualitative experience is illusory is that wholesale distortion of reality regarding objects of seemingly immediate awareness seems implausible, precisely because of our intimate connection with our own experience. But scientific developments can render seemingly unrelated philosophical positions plausible. The work of neurologists such as Oliver Sacks should caution us against the prejudice that some experiences are so basic they resist radical distortion and fabrication. One example Sacks describes is a brain-damaged patient who mistook his wife for a hat. Neuroscientists conclude that cognitive functions are assemblies of modules, which makes it less startling that beliefs can be so radically wrong.

There’s also a conceptual reason for the reluctance of philosophers and scientists to reject the intuition of raw sense experience: lack of clarity about how to characterize the prewired belief responsible for the illusion. The intuition seems too complex and sophisticated to be an innate belief; philosophers trying to nail down the precise content of the belief that qualia exist have had recourse to thought experiments remote from actual experience, and nobody seems to have characterized the essence of qualia. My suggestion: the illusion of qualitative awareness is the belief that when we perceive or imagine objects, we have independently real experiences characterizable only by the terms used to describe the external objects themselves. Qualia are inherently ineffable contents of perception or imagination.

The illusion’s evolution
This definition also suggests an evolutionary explanation for the illusion of qualitative experience. Thought doesn’t depend on the illusion of consciousness; one can easily conceive of an intelligent being without illusory beliefs about the nature of its own thinking. But the illusion of consciousness might have encouraged the development of thinking. Perhaps human ancestors evolved the innate belief that they have experiences with properties corresponding to those of their referents because this belief encouraged them to make mental models of the world, that is, to engage in the offline thinking unique to our species. Objectified conscious experience could encourage mental-model making by generalizing a prior insight: that you can predict the behavior of one external object by manipulating a similar external object. Our ancestors would then need only substitute internal objects for external ones.

Bypassing “sense data” in the theory of knowledge
According to one longstanding theory in epistemology, sense data are our only basis for knowing the external world. Taken to its logical conclusion, this doctrine leads to skepticism about the external world’s existence: sense data, supposedly our window onto the world, become an insuperable barrier to cognition, for if all our knowledge is nothing but a construction from sense data, then our sense data are all we know. We can’t get outside our own minds.

The reason the sense-data theory leads to skeptical conclusions goes back to ineffability. If we know the world only through sense data, we can draw conclusions about the world only by analogy, that is, by forming a relationship between two descriptions. But ineffable sense data have nothing in common with a world of things except their names, such as “brownness.”

Two illusions
This account of raw experience as an adaptive illusion brings clarity to the argument that free will is illusory. The sense of free will, I concluded, is the misperception that experienced deciding causes behavior. But “experience” doesn’t exist. Compatibilist free will is incoherent because it assumes the causal efficacy of unreal raw experience.

When is Winning not Winning?

13 Eneasz 22 May 2012 04:25PM

 

Lately I'd gotten jaded enough that I simply accepted that different rules apply to the elite class. As Hanson would say, most rules are there specifically to curtail those who don't have the ability to avoid them and to be side-stepped by those who do - it's why we evolved such big, manipulative brains. So when this video recently made the rounds it shocked me to realize how far my values had drifted over the past several years.

(the video is not about politics, it is about status. My politics are far from those of Penn)
http://www.youtube.com/watch?v=wWWOJGYZYpk&feature=sharek

 

It's good we have people like Penn around to remind us what it was like to be teenagers and still expect the world to be fair, so our brains can be used for more productive things.

By the measure our society currently uses, Obama was winning. Penn was not. Yet Penn’s approach is the winning strategy for society. Brain power is wasted on status games and social manipulation when it could be used for actually making things better. The machinations of the elite class are a huge drain of resources that could be better used in almost any other pursuit. And yet the elites are admired high-status individuals who are viewed as “winning” at life. They sit atop huge piles of utility. Idealists like Penn are regarded as immature for insisting on things as low-status as “the rules should be fair and apply identically to everyone, from the inner-city crack dealer to the Harvard post-grad.”

The “Rationalists Should Win” meme is a good one, but it risks corrupting our goals. If we focus too much on “Rationalists Should Win” we risk going for near-term gains that benefit us. Status, wealth, power, sex. Basically hedonism – things that feel good because we’ve evolved to feel good when we get them. Thus we feel we are winning, and we’re even told we are winning by our peers and by society. But these things aren’t of any use to society. A society of such “rationalists” would make only feeble and halting progress toward grasping the dream of defeating death and colonizing the stars.

It is important to not let one’s concept of “winning” be corrupted by Azathoth.

 

ADDED 5/23:

It seems the majority of comments on this post are from people who disagree on the basis that rationality is a tool for achieving ends, not for telling you which ends are worth achieving.

 

I disagree. As is written "The Choice between Good and Bad is not a matter of saying 'Good!'  It is about deciding which is which." And rationality can help to decide which is which. In fact without rationality you are much more likely to be partially or fully mistaken when you decide.

 

Global warming is a better test of irrationality than theism

-2 Stuart_Armstrong 16 March 2012 05:10PM

Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate.

Theism is a symptom of excess compartmentalisation, of not realising that absence of evidence is evidence of absence, of belief in belief, of privileging the hypothesis, and of similar failings. But these are not intrinsically huge problems. Indeed, someone with a mild case of theism can have the same anticipations as someone without, and update on evidence in the same way. If they have moved their belief beyond refutation, then in theory it fails to constrain their anticipations at all; and often this is the case in practice.

Contrast that with someone who denies the existence of anthropogenic global warming (AGW). This has all the signs of hypothesis privileging, but also reeks of fake justification, motivated skepticism, massive overconfidence (if they are truly ignorant of the facts of the debate), and simply the raising of politics above rationality. If I knew someone was a global warming skeptic, then I would expect them to be wrong in their beliefs and their anticipations, and to refuse to update when evidence worked against them. I would expect their judgement to be much more impaired than a theist's.

Of course, reverse stupidity isn't intelligence: simply because one accepts AGW, doesn't make one more rational. I work in England, in a university environment, so my acceptance of AGW is the default position and not a sign of rationality. But if someone is in a milieu that discouraged belief in AGW (one stereotype being heavily Republican areas of the US) and has risen above this, then kudos to them: their acceptance of AGW is indeed a sign of rationality.

Hiroshima Day

1 Eliezer_Yudkowsky 06 August 2008 11:15PM

On August 6th, in 1945, the world saw the first use of atomic weapons against human targets.  On this day 63 years ago, humanity lost its nuclear virginity.  Until the end of time we will be a species that has used fission bombs in anger.

Time has passed, and we still haven't blown up our world, despite a close call or two.  Which makes it difficult to criticize the decision - would things still have turned out all right, if anyone had chosen differently, anywhere along the way?

Maybe we needed to see the ruins, of the city and the people.

Maybe we didn't.

There's an ongoing debate - and no, it is not a settled issue - over whether the Japanese would have surrendered without the Bomb.  But I would not have dropped the Bomb even to save the lives of American soldiers, because I would have wanted to preserve that world where atomic weapons had never been used - to not cross that line.  I don't know about history up to this point; but the world would be safer now, I think, today, if no one had ever used atomic weapons in war, and the idea was not considered suitable for polite discussion.

I'm not saying it was wrong.  I don't know for certain that it was wrong.  I wouldn't have thought that humanity could make it this far without using atomic weapons again.  All I can say is that if it had been me, I wouldn't have done it.
