
Modularity, signaling, and belief in belief

19 Kaj_Sotala 13 November 2011 11:54AM

This is the fourth part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

In the previous post, Strategic ignorance and plausible deniability, we discussed some ways by which people might have modules designed to keep them away from certain kinds of information. These arguments were relatively straightforward.

The next step up is the hypothesis that our "press secretary module" might be designed to contain information that is useful for certain purposes, even if other modules have information that not only conflicts with this information, but is also more likely to be accurate. That is, some modules are designed to acquire systematically biased - i.e. false - information, including information that other modules "know" is wrong.

continue reading »

How to enjoy being wrong

20 lincolnquirk 27 July 2011 05:48AM

Related to: Reasoning Isn't About Logic, It's About Arguing; It is OK to Publicly Make a Mistake and Change Your Mind.

Examples of being wrong

A year ago, in arguments or in thought, I would often:

  • avoid criticizing my own thought processes or decisions when discussing why my startup failed
  • overstate my expertise on a topic (how to design a program written in assembly language), then have to quickly justify a position and defend it based on limited knowledge and cached thoughts, rather than admitting "I don't know"
  • defend a position (whether doing an MBA is worthwhile) based on the "common wisdom" of a group I identify with, without any actual knowledge, or having thought through it at all
  • defend a position (whether a piece of artwork was good or bad) because of a desire for internal consistency (I argued it was good once, so felt I had to justify that position)
  • defend a political or philosophical position (libertarianism) which seemed attractive, based on cached thoughts or cached selves rather than actual reasoning
  • defend a position ("cashiers like it when I fish for coins to make a round amount of change"), hear a very convincing argument for its opposite ("it takes up their time, other customers are waiting, and they're better at making change than you"), but continue arguing for the original position. In this scenario, I actually updated -- thereafter, I didn't fish for coins in my wallet anymore -- but still didn't admit it in the original argument.
  • defend a policy ("I should avoid albacore tuna") even when the basis for that policy (mercury risk) has been countered by factual evidence (in this case, the amount of mercury per can is so small that you would need to eat 10 cans per week before the mercury would even begin to register).
  • provide evidence for a proposition ("I am getting better at poker") where I actually thought it was just luck, but wanted to believe the proposition
  • when someone asked "why did you [do a weird action]?", I would regularly attempt to justify the action in terms of reasons that "made logical sense", rather than admitting that I didn't know why I made a choice, or examining myself to find out why.

Now, I very rarely get into these sorts of situations. If I do, I state out loud: "Oh, I'm rationalizing," or perhaps "You're right," abort that line of thinking, and retreat to analyzing reasons why I emitted such a wrong statement.

We rationalize because we don't like admitting we're wrong. (Is this obvious? Do I need to cite it?) One possible evo-psych explanation: rationalization is an adaptation which improved fitness by making it easier for tribal humans to bring others around to their point of view.

Over the last year, I've self-modified to mostly not mind being wrong, and in some cases even enjoy being wrong. I still often start to rationalize, and in some cases get partway through the thought, before noticing the opportunity to correct the error. But when I notice that opportunity, I take it, and get a flood of positive feedback and self-satisfaction as I update my models.

continue reading »

Trivers on Self-Deception

33 Yvain 12 July 2011 09:04PM

People usually have good guesses about the origins of their behavior. If they eat, we believe them when they say it was because they were hungry; if they go to a concert, we believe them when they say they like the music, or want to go out with their friends. We usually assume people's self-reports of their motives are accurate.

Discussions of signaling usually make the opposite assumption: that our stated (and mentally accessible) reasons for actions are false. For example, a person who believes they are donating to charity to "do the right thing" might really be doing it to impress others; a person who buys an expensive watch because "you can really tell the difference in quality" might really want to conspicuously consume wealth.

Signaling theories share the behaviorist perspective that actions do not derive from thoughts, but rather that actions and thoughts are both selected behavior. In this paradigm, predicted reward might lead one to signal, but reinforcement of positive-affect-producing thoughts might create the thought "I did that because I'm a nice person".

Robert Trivers is one of the founders of evolutionary psychology, responsible for ideas like reciprocal altruism and parent-offspring conflict. He also developed a theory of consciousness which provides a plausible explanation for the distinction between selected actions and selected thoughts.

TRIVERS' THEORY OF SELF-DECEPTION

Trivers starts from the same place a lot of evolutionary psychologists start from: small bands of early humans that had grown successful enough that food and safety were less important determinants of reproduction than social status.

The Invention of Lying may have been a very silly movie, but the core idea - that a good liar has a major advantage in a world of people unaccustomed to lies - is sound. The evolutionary invention of lying led to an "arms race" between better and better liars and more and more sophisticated mental lie detectors.

There's some controversy over exactly how good our mental lie detectors are or can be. There are certainly cases in which it is possible to catch lies reliably: my mother can identify my lies so accurately that I can't even play minor pranks on her anymore. But there's also some evidence that there are certain people who can reliably detect lies from any source at least 80% of the time without any previous training: microexpressions expert Paul Ekman calls them (sigh...I can't believe I have to write this) Truth Wizards, and identifies them at about one in four hundred people.

The psychic unity of mankind should preclude the existence of a miraculous genetic ability like this in only one in four hundred people: if it's possible, it should have achieved fixation. Ekman believes that everyone can be trained to this level of success (and has created the relevant training materials himself) but that his "wizards" achieve it naturally; perhaps because they've had a lot of practice. One can speculate that in an ancestral environment with a limited number of people, more face-to-face interaction and more opportunities for lying, this sort of skill might be more common; for what it's worth, a disproportionate number of the "truth wizards" found in the study were Native Americans, though I can't find any information about how traditional their origins were or why that should matter.

If our ancestors were good at lie detection - either "truth wizard" good or just the good that comes from interacting with the same group of under two hundred people for one's entire life - then anyone who could beat the lie detectors would get the advantages that accrue from being the only person able to lie plausibly.

Trivers' theory is that the conscious/unconscious distinction is partly based around allowing people to craft narratives that paint them in a favorable light. The conscious mind gets some sanitized access to the output of the unconscious, and uses it along with its own self-serving bias to come up with a socially admirable story about its desires, emotions, and plans. The unconscious then goes and does whatever has the highest expected reward - which may be socially admirable, since social status is a reinforcer - but may not be.

HOMOSEXUALITY: A CASE STUDY

It's almost a truism by now that some of the people who most strongly oppose homosexuality may be gay themselves. The truism is supported by research: the Journal of Abnormal Psychology published a study measuring penile erection in 64 homophobic and nonhomophobic heterosexual men upon watching different types of pornography, and found significantly greater erection upon watching gay pornography in the homophobes. Although somehow this study has gone fifteen years without replication, it provides some support for the folk theory.

Since in many communities openly declaring oneself homosexual is low status or even dangerous, these men have an incentive to lie about their sexuality. Because their facade may not be perfect, they also have an incentive to take extra efforts to signal heterosexuality, for example by attacking gay people (something which, in theory, a gay person would never do).

Although a few now-outed gays admit to having done this consciously, Trivers' theory offers a model in which this could also occur subconsciously. Homosexual urges never make it into the sanitized version of thought presented to consciousness, but the unconscious is able to deal with them. It objects to homosexuality (motivated by internal reinforcement - reduction of worry about personal orientation), and the conscious mind toes the party line by believing that there's something morally wrong with gay people and only I have the courage and moral clarity to speak out against it.

This provides a possible evolutionary mechanism for what Freud described as reaction formation, the tendency to hide an impulse by exaggerating its opposite. A person wants to signal to others (and possibly to themselves) that they lack an unacceptable impulse, and so exaggerates the opposite as "proof".

SUMMARY

Trivers' theory has been summed up by calling consciousness "the public relations agency of the brain". It consists of a group of thoughts selected because they paint the thinker in a positive light, and of speech motivated in harmony with those thoughts. This ties together signaling, the many self-promotion biases that have thus far been discovered, and the increasing awareness that consciousness is more of a side office in the mind's organizational structure than it is a decision-maker.

Not for the Sake of Selfishness Alone

22 lukeprog 02 July 2011 05:37PM

Related: Fake Selfishness, Not for the Sake of Pleasure Alone, Not for the Sake of Happiness (Alone), Value is Fragile, Fake Fake Utility Functions

No one deserves thanks from another about something he has done for him or goodness he has done. He is either willing to get a reward from God, therefore he wanted to serve himself. Or he wanted to get a reward from people, therefore he has done that to get profit for himself. Or to be mentioned and praised by people, therefore, it is also for himself. Or due to his mercy and tenderheartedness, so he has simply done that goodness to pacify these feelings and treat himself.

- Mohammed Ibn Al-Jahm Al-Barmaki

In a 1990 experiment, Jack Dovidio made subjects feel empathy for a young woman by asking them to imagine what she felt as she faced a particular problem.1 Half the subjects focused on one problem faced by the woman, while the other half focused on a different problem she faced. When given the opportunity to help the woman, subjects in the high empathy condition were more likely to help than subjects in the low empathy condition, and the increase was specific to the problem that had been used to evoke empathy.

What does this study say about altruism and selfishness?

Some people think that humans are purely selfish, that we act for selfish motives alone. They will re-interpret any counter-example you give ("But wouldn't you sacrifice your life to save the rest of the human species?") as being compatible with purely selfish motives.

Are they right? Do we act for selfish motives alone?

Let's examine the evidence.2

We begin with a rough sketch of human motivation. We have 'ultimate' desires: things we desire for their own sake. We also have 'instrumental' desires: things we desire because we believe they will satisfy our ultimate desires.

I instrumentally desire to go to the kitchen because I ultimately desire to eat a brownie and I believe brownies are in the kitchen. But if I come to believe brownies are in the dining room and not the kitchen, I will instrumentally desire to walk to the dining room instead, to fulfill my ultimate desire to eat a brownie. Or perhaps my desire to eat a brownie is also an instrumental desire, and my ultimate desire is to taste something sweet, and I instrumentally desire to eat a brownie because I believe that eating a brownie will satisfy my desire to taste something sweet.

Of course, desires compete with each other. Perhaps I have an ultimate desire to taste something sweet, and thus I instrumentally desire to eat a brownie. But I also have an ultimate desire for regular sex, and I believe that eating a brownie will contribute to obesity that will lessen the chances of satisfying my desire for regular sex. In this case, the 'stronger' desire will determine my action.
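
To make the structure of this model concrete, here is a minimal toy sketch (mine, not from the post; the goals, strength values, and function names are invented for illustration). Ultimate desires are ends held for their own sake, beliefs map each end to the action thought to satisfy it, and when ultimate desires conflict the strongest one determines the action:

    # Toy model: ultimate desires are held for their own sake; instrumental
    # desires are derived from them via beliefs about what will satisfy them.
    from dataclasses import dataclass

    @dataclass
    class UltimateDesire:
        goal: str        # e.g. "taste something sweet"
        strength: float  # how strongly the desire is held

    # Beliefs map an ultimate goal to the action believed to satisfy it.
    beliefs = {
        "taste something sweet": "walk to the kitchen and eat a brownie",
        "have regular sex": "skip the brownie and stay in shape",
    }

    def choose_action(desires, beliefs):
        """The 'stronger' ultimate desire determines the action taken."""
        strongest = max(desires, key=lambda d: d.strength)
        return beliefs[strongest.goal]

    desires = [
        UltimateDesire("taste something sweet", strength=0.4),
        UltimateDesire("have regular sex", strength=0.7),
    ]
    print(choose_action(desires, beliefs))  # -> "skip the brownie and stay in shape"

    # Updating a belief changes the instrumental action without touching the
    # ultimate desire, as in the brownies-moved-to-the-dining-room example.
    beliefs["taste something sweet"] = "walk to the dining room and eat a brownie"

The point of the sketch is only that instrumental desires shift whenever beliefs change, while an ultimate desire loses out only when a stronger ultimate desire outweighs it.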

The full picture is more complicated than this,3 but we only need a basic picture to assess the claim that we act for selfish motives alone.

We might categorize ultimate desires like this:4

  • Type 1: self-directed desires for one's own pleasure (or the avoidance of one's own pain)
  • Type 2: self-directed desires for one's own well-being more broadly (type 1 is a special case)
  • Type 3: desires directed at neither one's own nor anyone else's well-being
  • Type 4: desires directed at the well-being of others

Psychological egoists think all ultimate desires are of type 2. Psychological hedonists are a subset of egoists who think that all ultimate desires are of type 1. Psychological altruists think that at least some ultimate desires are of type 4. If some ultimate desires are of type 3, but none are of type 4, then both egoism and altruism are false.

Previously, I presented neurobiological evidence that psychological hedonism is false. In short: desire and pleasure are encoded separately by the brain, and we sometimes desire things that are not aimed at producing pleasure, and in fact we sometimes desire things that do not produce pleasure when we get them.

But can we also disprove the claim that we act for selfish reasons alone (psychological egoism), by showing that normal humans have desires for the well-being of others?

continue reading »

The True Rejection Challenge

43 Alicorn 27 June 2011 07:18AM

An exercise:

Name something that you do not do but should/wish you did/are told you ought, or that you do less than is normally recommended.  (For instance, "exercise" or "eat vegetables".)

Make an exhaustive list of your sufficient conditions for avoiding this thing.  (If you suspect that your list may be non-exhaustive, mention that in your comment.)

Precommit that: If someone comes up with a way to do the thing which doesn't have any of your listed problems, you will at least try it.  It counts if you come up with this response yourself upon making your list.

(Based on: Is That Your True Rejection?)

Edit to add: Kindly stick to the spirit of the exercise; if you have no advice in line with the exercise, this is not the place to offer it.  Do not drift into confrontational or abusive demands that people adjust their restrictions to suit your cached suggestion, and do not offer unsolicited other-optimizing.

To alleviate crowding, Armok_GoB has created a second thread for this challenge.

Leadership and Self Deception, Anatomy of Peace

9 TimFreeman 06 May 2011 03:56AM

I highly recommend reading Leadership and Self Deception (henceforth "L&SD") by the Arbinger Institute (Amazon, Barnes and Noble, Google Books, Arbinger Institute Home Page). The sequel, Anatomy of Peace, is also good, but this article is based on a reading of L&SD.

They give a simple model of one cause of much everyday subtle neurotic behavior, and have practical suggestions for dealing with it. They present this indirectly, as a first-person narrative in which a new executive at a fictional company is taught this by his managers. The book has its good and bad points, with the good points hugely outweighing the bad. This post contains:

  • a summary of what's good and bad about the book, without spoilers;
  • a description of the main points of the book, which may or may not prevent people from actually understanding and using that information;
  • a list of some unanswered questions I had when I finished reading the book; and
  • some additional plausible assertions that, if true, would clarify the answers to those questions.

A prominent problem with many groups of highly intelligent people is that high intelligence makes it possible to deceive oneself more effectively, which leads to pointless social conflict. I hope this model is good enough to help intelligent people identify the tendency to self-deceive in social contexts and at least partially compensate for it.

continue reading »

So You've Changed Your Mind

60 Spurlock 28 April 2011 07:42PM

Related to: Politics is the mind-killer, Entangled Truths, Contagious Lies, The Importance of Saying "Oops", Leave a Line of Retreat, You Can Face Reality

This is something I wrote, sort of in brain-dump mode, in the process of trying to organize my thoughts for a song I'm working on. I don't think it covers any new ground for this community, but I was somewhat taken with the way it turned out and figured I'd go ahead and post it for LW's enjoyment.


So you've changed your mind. Given up your sacred belief, the one that defined so much of who you are for so long.

You are probably feeling pretty scared right now.

Your life revolved around this. There is not a single aspect of your life that will not feel the effects of this momentous tumult. Right now though, you're still in shock. You know that later, little by little, as you lie awake in bed or stare at your desk at work, the idea will creep its way through the web of your mind. It will touch each and every idea, and change it, and move on. And that changed idea will change other ideas, and those ideas will change as well. Who are you as a person if not the person who holds that idea? For as this new notion gradually but violently makes its way through your skull, will it not upset everything that you know, everything that you do, everything that you are? Will you not then be another person?

The thought is terrifying. What person will you be?

continue reading »

Value Deathism

26 Vladimir_Nesov 30 October 2010 06:20PM

Ben Goertzel:

I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology. I think it's fairly robust.

Robin Hanson:

Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors. The risks of attempting a world government anytime soon to prevent this outcome seem worse overall.

We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes), or desirability of death (deathism proper). But of course the claims are separate, and shouldn't influence each other.

Change in values of the future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, won't be anywhere near as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally "business as usual", but if we look at the end result, the risks of value drift are the same. And it is difficult to make it so that the future is optimized: to stop the uncontrolled "evolution" of value (value drift) or to recover more of the astronomical waste.

Regardless of difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but still it's not OK, the value judgment cares not for feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's done. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.

Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model

46 Yvain 04 August 2010 09:16AM

Related to: Alien Parasite Technical Guy, A Master-Slave Model of Human Preferences

In Alien Parasite Technical Guy, Phil Goetz argues that mental conflicts can be explained as a conscious mind (the "alien parasite") trying to take over from an unsuspecting unconscious.

Last year, Wei Dai presented a model (the master-slave model) with some major points of departure from Phil's: in particular, the conscious mind was a special-purpose subroutine and the unconscious had a pretty good idea what it was doing1. But Wei said at the beginning that his model ignored akrasia.

I want to propose an expansion and slight amendment of Wei's model so it includes akrasia and some other features of human behavior. Starting with the signaling theory implicit in Wei's writing, I'll move on to show why optimizing for signaling ability would produce behaviors like self-signaling and akrasia, speculate on why the same model would also promote some of the cognitive biases discussed here, and finish with even more speculative links between a wide range of conscious-unconscious conflicts.

The Signaling Theory of Consciousness

This model begins with the signaling theory of consciousness. In the signaling theory, the conscious mind is the psychological equivalent of a public relations agency. The mind-at-large (hereafter called U for "unconscious" and similar to Wei's "master") has the socially unacceptable primate drives you would expect of a fitness-maximizing agent: sex, status, and survival. These are unsuitable for polite society, where only socially admirable values like true love, compassion, and honor are likely to win you friends and supporters. U could lie and claim to support the admirable values, but most people are terrible liars and society would probably notice.

So you wall off a little area of your mind (hereafter called C for “conscious” and similar to Wei's “slave”) and convince it that it has only admirable goals. C is allowed access to the speech centers. Now if anyone asks you what you value, C answers "Only admirable things like compassion and honor, of course!" and no one detects a lie because the part of the mind that's moving your mouth isn't lying.

This is a useful model because it replicates three observed features of the real world: people say they have admirable goals, they honestly believe on introspection that they have admirable goals, but they tend to pursue more selfish goals. But so far, it doesn't explain the most important question: why do people sometimes pursue their admirable goals and sometimes not?

continue reading »

The Threat of Cryonics

36 lsparrish 03 August 2010 07:57PM

It is obvious that many people find cryonics threatening. Most of the arguments encountered in debates on the topic are not calculated to persuade on objective grounds, but function as curiosity-stoppers. Here are some common examples:

  • Elevated burden of proof. As if cryonics demands more than a small amount of evidence to be worth trying.
  • Elevated cost expectation. Thinking that cryonics is (and could only ever be) affordable only for the very rich.
  • Unresearched suspicions regarding the ethics and business practices of cryonics organizations.
  • Sudden certainty that earth-shattering catastrophes are just around the corner.
  • Assuming the worst about the moral attitudes of humanity's descendants towards cryonics patients.
  • Associations with prescientific mummification, or sci-fi that handwaves the technical difficulties.

The question is: what causes this sensation that cryonics is a threat? What does it specifically threaten?

continue reading »
