The Illusion of Sameness

10 Elizabeth 22 January 2011 06:41AM

I have lately been pondering two contradictory human beliefs: the belief in our own exceptionalism and the belief in our own ordinariness.  We model the world with ourselves as prototypical humans, using our own emotions and reactions and thought processes to run a program predicting the behavior of others.  That is, after all, what our mirror neurons evolved for.  However, when it comes to our abilities or our intelligence or our problems, we believe we are something out of the ordinary.  The second bias is easier to compensate for than the first, but it is the first that interests me, because even when confronted with significant evidence of your own difference, it is extremely difficult to really internalize it and change your model of the world.

I consider it likely that among those reading this, more of you grew up in families of intelligent, educated people than the national average.  It is also likely that the number of readers who grew up in liberal and nonreligious families exceeds the norm.  Not all, certainly.  Quite possibly not even the majority.  However, I think there must be many among you who understand the shock of leaving your family and community, perhaps to go to college, and slowly discovering that there are people who don't think like you.  People who don't share your foundational knowledge, who trust different authorities, who have completely different default settings.

There are two separate pieces to this: the first is our default assumption that other people's beliefs and authorities are similar to our own, and the second is our modeling of other people's mental processes as similar to our own.  The second seldom comes up in everyday life, unless one is explicitly discussing mental processes, as in the comments on this post.  The first comes up fairly frequently, but here I must apologize for bringing up the mind-killer, for it is most apparent in politics.  I will endeavor to keep my example brief.

In the spring of 2010 I was substitute teaching in a rural area of upstate New York.  I was in the teacher's room eating lunch, with ten or so other teachers, when the subject of the BP oil spill arose, as it was the major current event at the time.  My experience dictated that the conversation would start with "Isn't this a terrible thing?" and proceed to "Oil companies shouldn't be allowed to make a mess they can't clean up" or "Shouldn't we invest in clean energy?"  However, though the conversation began as I expected, I was subsequently informed that the oil companies were fully capable of cleaning it up, and that the reason it had not been cleaned up already was a conspiracy on the part of President Obama.

This was particularly shocking to me because there were no warning signs.  These were people who were all educated to the Master's degree level.  I had spoken to several on more innocuous topics, and they seemed both interesting and intelligent.  (I realize that this reveals a potential bias on my part regarding a correlation between intelligence and a liberal bent.)  They seemed, in every respect, to be people like me.

How could I have better predicted this?  I remain at a loss.  The only significant difference between that group and the people who react according to my model is region of origin, but that oversimplifies the question.  I am not only confused, I am viscerally uncomfortable.  How do we model people whose cultural contexts and information-delivering authorities are fundamentally different from our own, without lumping them into a faceless group?

Reference Points

32 lionhearted 17 November 2010 08:09AM

I just spent some time reading Thomas Schelling's "Choice and Consequence" and I heartily recommend it. Here's a Google Books link to the chapter I was reading, "The Intimate Contest for Self-Command."

It's fascinating, and if you like LessWrong, rationality, understanding things, decision theories, figuring people and the world out - well, then I think you'd like Schelling. Actually, you'll probably be amazed at how much of his stuff you're already familiar with - he established a heck of a lot of modern thinking on game theory.

Allow me to depart from Schelling a moment, and talk of Sam Snyder. He's a very intelligent guy who has lots of intelligent thoughts. Here's a link to his website - there are massive amounts of data and references there, so if you visit, I'd recommend you just skim the site until you find something interesting. You'll probably find something interesting pretty quickly.

I got a chance to have a conversation with him a while back, and we covered immense amounts of ground. He introduced me to a concept I've been thinking about nonstop since learning it from him - reference points.

Now, he explained it very eloquently, and I'm afraid I'll mangle his explanation and not do it justice. But to make a long story really short: your reference points strongly affect your motivation.

An example would help.

What does the average person think of when he thinks of running? He thinks of huffing, puffing, being tired and sore, having a hard time getting going, looking fat in workout clothes, and being embarrassed at being out of shape. A lot of people try running at some point in their lives, and most don't keep doing it.

On the other hand, what does a regular runner think of? He thinks of the "runner's high" and gliding across the pavement, enjoying a great run, and feeling like a million bucks afterwards.

Since that conversation, I've been trying to change my reference points. For instance, if I feel like I'd like some fried food, I try not to imagine/reference eating the salty, greasy food. Yes, eating french fries and a grilled chicken sandwich will be salty and fatty and delicious. It's a superstimulus; we're not really evolved to handle that stuff appropriately.

So when most people think of a McChicken sandwich, large fries, and a large drink, they think about the grease and salt and sugar and how good it'll taste.

I still like that stuff. In fact, since I quit a lot of vices, sometimes I crave the few I have left even harder. But I was able to cut my junk food consumption way down by changing my reference point. When I start to have a desire for that sort of food, I think about how my stomach and energy levels are going to feel 90 minutes after eating it. The answer is: not too good. So I go out to a local restaurant and order plain chicken, rice, and vegetables, and I feel good later.

continue reading »

Vipassana Meditation: Developing Meta-Feeling Skills

23 [deleted] 18 October 2010 04:55PM

Followup to: Understanding vipassana meditation

I explain how to practice vipassana meditation (a form of Buddhist meditation), giving instructions, advice, and measures of progress. By practicing vipassana one becomes aware of (and exerts control over) the process of affective judgment. This process may underlie important (and subtle) mental patterns of feeling that are responsible for common rationality failures. While I've tried to give helpful information, you should meditate at your own risk; you may experience mental instability or change in undesirable ways. This is a somewhat brief post containing the information I've personally found most helpful on my meditative journey; see other guides (like this one) for more.

continue reading »

The Halo Effect

23 Eliezer_Yudkowsky 30 November 2007 12:58AM

The affect heuristic is how an overall feeling of goodness or badness contributes to many other judgments, whether it's logical or not, whether you're aware of it or not.  Subjects told about the benefits of nuclear power are likely to rate it as having fewer risks; stock analysts rating unfamiliar stocks judge them as generally good or generally bad—low risk and high returns, or high risk and low returns—in defiance of ordinary economic theory, which says that risk and return should correlate positively.

The halo effect is the manifestation of the affect heuristic in social psychology.  Robert Cialdini, in Influence: Science and Practice, summarizes:

Research has shown that we automatically assign to good-looking individuals such favorable traits as talent, kindness, honesty, and intelligence (for a review of this evidence, see Eagly, Ashmore, Makhijani, & Longo, 1991).  Furthermore, we make these judgments without being aware that physical attractiveness plays a role in the process.  Some consequences of this unconscious assumption that "good-looking equals good" scare me.  For example, a study of the 1974 Canadian federal elections found that attractive candidates received more than two and a half times as many votes as unattractive candidates (Efran & Patterson, 1976).  Despite such evidence of favoritism toward handsome politicians, follow-up research demonstrated that voters did not realize their bias.  In fact, 73 percent of Canadian voters surveyed denied in the strongest possible terms that their votes had been influenced by physical appearance; only 14 percent even allowed for the possibility of such influence (Efran & Patterson, 1976).  Voters can deny the impact of attractiveness on electability all they want, but evidence has continued to confirm its troubling presence (Budesheim & DePaola, 1994).

continue reading »

The Trouble With "Good"

83 Yvain 17 April 2009 02:07AM

Related to: How An Algorithm Feels From Inside, The Affect Heuristic, The Power of Positivist Thinking

I am a normative utilitarian and a descriptive emotivist: I believe utilitarianism is the correct way to resolve moral problems, but that the normal mental algorithms for resolving moral problems use emotivism.

Emotivism, aka the yay/boo theory, is the belief that moral statements, however official they may sound, are merely personal opinions of preference or dislike. Thus, "feeding the hungry is a moral duty" corresponds to "yay for feeding the hungry!" and "murdering kittens is wrong" corresponds to "boo for kitten murderers!"

Emotivism is a very nice theory of what people actually mean when they make moral statements. Billions of people around the world, even the non-religious, happily make moral statements every day without having any idea what they reduce to or feeling like they ought to reduce to anything.

Emotivism also does a remarkably good job capturing the common meanings of the words "good" and "bad". An average person may have beliefs like "pizza is good, but seafood is bad", "Israel is good, but Palestine is bad", "the book was good, but the movie was bad", "atheism is good, theism is bad", "evolution is good, creationism is bad", and "dogs are good, but cats are bad". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word "good" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use "right" to mean both "correct" and "opposite of left". It feels like they're all just the same thing. The moral theory that captures that feeling is emotivism. Yay pizza, books, Israelis, atheists, dogs, and evolution! Boo seafood, Palestinians, movies, theists, creationism, and cats!

continue reading »

Don't judge a skill by its specialists

49 Academian 26 September 2010 08:56PM

tl;dr: The marginal benefits of learning a skill shouldn't be judged heavily on the performance of people who have had it for a long time. People are unfortunately susceptible to these poor judgments via the representativeness heuristic.

Warn and beware of the following kludgy argument, which I hear often and have to dispel or refine:

"Naively, learning «skill type» should help my performance in «domain». But people with «skill type» aren't significantly better at «domain», so learning it is unlikely to help me."

In the presence or absence of obvious mediating factors, skills otherwise judged as "inapplicable" might instead present low-hanging fruit for improvement.  But people too often toss them away, using biased heuristics to continue being lazy and mentally stagnant.  Here are some parallel examples to give the general idea (these are just illustrative, and might be wrong):

Weak argument: "Gamers are awkward, so learning games won't help my social skills."
Mediating factor: Lack of practice with face-to-face interaction.
Ideal: Socialite acquires moves-ahead thinking and learns about signalling to help get a great charity off the ground.

Weak argument: "Physicists aren't good at sports, so physics won't help me improve my game."
Mediating factor: Lack of exercise.
Ideal: Athlete or coach learns basic physics and tweaks training to gain a leading edge.

Weak argument: "Mathematicians aren't romantically successful, so math won't help me with dating."
Mediating factor: Aversion to unstructured environments.
Ideal: Serial dater learns basic probability to combat cognitive biases in selecting partners.

Weak argument: "Psychologists are often depressed, so learning psychology won't help me fix my problems."
Mediating factor: Time spent with unhappy people.
Ideal: College student learns basic neuropsychology and restructures study/social routine to better accommodate unconscious brain functions.

continue reading »

Money: The Unit of Caring

95 Eliezer_Yudkowsky 31 March 2009 12:35PM

Previously in series: Helpless Individuals

Steve Omohundro has suggested a folk theorem to the effect that, within the interior of any approximately rational, self-modifying agent, the marginal benefit of investing additional resources in anything ought to be about equal.  Or, to put it a bit more exactly, shifting a unit of resource between any two tasks should produce no increase in expected utility, relative to the agent's utility function and its probabilistic expectations about its own algorithms.

This resource balance principle implies that—over a very wide range of approximately rational systems, including even the interior of a self-modifying mind—there will exist some common currency of expected utilons, by which everything worth doing can be measured.
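Stated as a formula (my notation; this exact form appears in neither Omohundro nor the post), the balance condition is the familiar Lagrange condition for allocating a fixed budget:

```latex
% Allocate a fixed resource budget R across tasks r_1, ..., r_n to
% maximize expected utility.  At an interior optimum, every marginal
% return is forced to a single common value lambda:
\max_{r_1,\dots,r_n} \; E\!\left[U(r_1,\dots,r_n)\right]
\quad \text{s.t.} \quad \sum_{i=1}^{n} r_i = R
\quad \Longrightarrow \quad
\frac{\partial\, E[U]}{\partial r_i}
  = \frac{\partial\, E[U]}{\partial r_j}
  = \lambda
\quad \text{for all } i, j.
```

The common value λ, the shadow price of a marginal unit of resource, is the "common currency of expected utilons" in question: if any two marginal returns differed, shifting a unit between them would raise expected utility.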

In our society, this common currency of expected utilons is called "money".  It is the measure of how much society cares about something.

This is a brutal yet obvious point, which many are motivated to deny.

With this audience, I hope, I can simply state it and move on.  It's not as if you thought "society" was intelligent, benevolent, and sane up until this point, right?

I say this to make a certain point held in common across many good causes.  Any charitable institution you've ever had a kind word for, certainly wishes you would appreciate this point, whether or not they've ever said anything out loud.  For I have listened to others in the nonprofit world, and I know that I am not speaking only for myself here...

continue reading »

One Life Against the World

32 Eliezer_Yudkowsky 18 May 2007 10:06PM

Followup to: Scope Insensitivity

"Whoever saves a single life, it is as if he had saved the whole world."

-- The Talmud, Sanhedrin 4:5

It's a beautiful thought, isn't it? Feel that warm glow.

I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet - it's a bit complicated, but essentially, I managed to turn someone's whole life around by leaving an anonymous blog comment. I wasn't expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.

Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.

But if you ever have a choice, dear reader, between saving a single life and saving the whole world - then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.

continue reading »

Applying Behavioral Psychology on Myself

53 John_Maxwell_IV 20 June 2010 06:25AM

In which I attempt to apply findings from behavioral psychology to my own life.

Behavioral Psychology Finding #1: Habituation

The psychological process of "extinction" or "habituation" occurs when a stimulus is administered repeatedly to an animal, causing the animal's response to gradually diminish.  You can imagine that if you were to eat your favorite food for breakfast every morning, it wouldn't be your favorite food after a while.  Habituation tends to happen the fastest when the following three conditions are met:

  • The stimulus is delivered frequently
  • The stimulus is delivered in small doses
  • The stimulus is delivered at regular intervals

Source is here.

Applied Habituation

I had a project I was working on that was really important to me, but whenever I started working on it I would get demoralized.  So I habituated myself to the project: I alternated 2 minutes of work with 2 minutes of sitting in the yard for about 20 minutes.  This worked.
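The 2-on/2-off routine above instantiates all three conditions: exposures that are frequent, small, and regularly spaced. A minimal sketch of that schedule (my own illustration, not the author's code; the function name and defaults are made up):

```python
def habituation_schedule(work_min=2, rest_min=2, total_min=20):
    """Build an alternating work/rest schedule: frequent, small,
    regularly spaced exposures to the demoralizing stimulus."""
    schedule, elapsed = [], 0
    while elapsed < total_min:
        schedule.append(("work", work_min))   # small dose of the stimulus
        elapsed += work_min
        if elapsed < total_min:
            schedule.append(("rest", rest_min))  # regular recovery gap
            elapsed += rest_min
    return schedule

# The author's routine: ten alternating 2-minute blocks over 20 minutes.
blocks = habituation_schedule()
```

Shrinking `work_min` makes the doses smaller and more frequent, which by the three conditions above should speed habituation further.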

continue reading »

Too busy to think about life

85 Academian 22 April 2010 03:14PM

Many adults maintain their intelligence through a dedication to study or hard work.  I suspect this is related to sub-optimal levels of careful introspection among intellectuals.

If someone asks you what you want for yourself in life, do you have the answer ready at hand?  How about what you want for others?  Human values are complex, which means your talents and technical knowledge should help you think about them.  Just as in your work, complexity shouldn't be a curiosity-stopper.  It means "think", not "give up now."

But there are so many terrible excuses stopping you...

Too busy studying?  Life is the exam you are always taking.  Are you studying for that?  Did you even write yourself a course outline?

Too busy helping?  Decision-making is the skill you are always using, or always lacking, as much when you help others as yourself.  Isn't something you use constantly worth improving on purpose?

Too busy thinking to learn about your brain?  That's like being too busy flying an airplane to learn where the engines are.  Yes, you've got passengers in real life, too: the people whose lives you affect.

Emotions too irrational to think about them?  Irrational emotions are things you don't want to think for you, and therefore are something you want to think about.  By analogy, children are often irrational, and no one sane concludes that we therefore shouldn't think about their welfare, or that they shouldn't exist.

So set aside a date.  Sometime soon.  Write yourself some notes.  Find that introspective friend of yours, and start solving for happiness.  Don't have one?  For the first time in history, you've got LessWrong.com!

Reasons to make the effort:

Happiness is a pairing between your situation and your disposition. Truly optimizing your life requires adjusting both variables: what happens, and how it affects you.

You are constantly changing your disposition.  The question is whether you'll do it with a purpose.  Your experiences change you, and you affect those, as well as how you think about them, which also changes you.  It's going to happen.  It's happening now.  Do you even know how it works?  Put your intelligence to work and figure it out!

The road to harm is paved with ignorance.  Using your capability to understand yourself and what you're doing is a matter of responsibility to others, too.  It makes you better able to be a better friend.

You're almost certainly suffering from Ugh Fields: unconscious don't-think-about-it reflexes that form via Pavlovian conditioning.  The issues most in need of your attention are often ones you just happen not to think about, for reasons undetectable to you.

How not to waste the effort:

Don't wait till you're sad.  Only thinking when you're sad gives you a skewed perspective.  Don't infer that you can think better when you're sad just because that's the only time you try to be thoughtful.  Sadness often makes it harder to think: you're farther from happiness, which can make happiness more difficult to empathize with and understand.  Nonetheless we often have to think when sad, because something bad may have happened that needs addressing.

Introspect carefully, not constantly.  Don't interrupt your work every 20 minutes to wonder whether it's your true purpose in life.  Respect that question as something that requires concentration, note-taking, and solid blocks of scheduled time.  In those times, check over your analysis by trying to confound it, so lingering doubts can be justifiably quieted by remembering how thorough you were.

Re-evaluate on an appropriate time-scale.  Try devoting a few days before each semester or work period to look at your life as a whole.  At these times you'll have accumulated experience data from the last period, ripe and ready for analysis.  You'll have more ideas per hour that way, and feel better about it.  Before starting something new is also the most natural and opportune time to affirm or change long-term goals.  Then, barring large unexpected opportunities, stick to what you decide until the next period, when you've gathered enough experience to warrant new reflection.

(The absent-minded driver is a mathematical example of how planning outperforms constant re-evaluation.  When not engaged in deep and careful introspection, we're all absent-minded drivers to a degree.)
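For readers who haven't met the example, it can be sketched numerically. The payoffs below are the standard Piccione-Rubinstein ones (my assumption; the post doesn't specify them): exiting at the first intersection pays 0, exiting at the second pays 4, driving past both pays 1. The driver can't tell the intersections apart, so his entire strategy is one probability p of continuing:

```python
def expected_payoff(p):
    # Exit at the first intersection: probability (1 - p), payoff 0.
    # Continue once, then exit at the second: probability p*(1 - p), payoff 4.
    # Continue past both intersections: probability p*p, payoff 1.
    return 4 * p * (1 - p) + 1 * p * p

# Planning: commit to the single best p before setting out.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
# The planned optimum is p = 2/3, with expected payoff 4/3.
```

Re-deriving p afresh at each intersection, using naive conditional reasoning about which intersection this probably is, famously pulls the driver away from this planned optimum; that is the sense in which committing to a plan beats constant re-evaluation.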

Lost about where to start?  I think Alicorn's story is an inspiring one.  Learn to understand and defeat procrastination/akrasia.  Overcome your cached selves so you can grow freely (definitely read their possible strategies at the end).  Foster an everyday awareness that you are a brain, and in fact more like two half-brains.

These suggestions are among the top-rated LessWrong posts, so they'll be of interest to lots of intellectually-minded, rationalist-curious individuals.  But you have your own task ahead of you, that only you can fulfill.

So don't give up.  Don't procrastinate it.  If you haven't done it already, schedule a day and time right now when you can realistically assess

  • how you want your life to affect you and other people, and
  • what you must change to better achieve this.

Eliezer has said "I want you to live."  Let me say:

I want you to be better at your life.
