Modularity, signaling, and belief in belief

18 Kaj_Sotala 13 November 2011 11:54AM

This is the fourth part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

In the previous post, Strategic ignorance and plausible deniability, we discussed some ways by which people might have modules designed to keep them away from certain kinds of information. These arguments were relatively straightforward.

The next step up is the hypothesis that our "press secretary module" might be designed to contain information that is useful for certain purposes, even when other modules hold conflicting information that is more likely to be accurate. That is, some modules are designed to acquire systematically biased - i.e. false - information, including information that other modules "know" is wrong.

continue reading »

Strategic ignorance and plausible deniability

37 Kaj_Sotala 10 August 2011 09:30AM

This is the third part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

The press secretary of an organization is tasked with presenting outsiders with the best possible image of the organization. While they're not supposed to outright lie, they do use euphemisms and try to only mention the positive sides of things.

A plot point in the TV series West Wing is that the President of the United States has a disease which he wants to hide from the public. The White House Press Secretary is careful to ask whether there's anything she needs to know about the President's health, instead of whether there's anything she should know. As the President's disease is technically something she should know but not something she needs to know, this allows the President to hide the disease from her without lying to her (and by extension, to the American public). As she then doesn't need to lie either, she can do her job better.

If our minds are modular, critical information can be kept away from the modules that are associated with consciousness and speech production. It can often be better if the parts of the system that exist to deal with others are blissfully ignorant, or even actively mistaken, about information that exists in other parts of the system.

In one experiment, people could choose between two options. Choosing option A meant they got $5, and someone else also got $5. Option B meant that they got $6 and the other person got $1. About two thirds were generous and chose option A.

A different group of people played a slightly different game. As before, they could choose between $5 or $6 for themselves, but they didn't know how their choice would affect the other person's payoff. They could find out, however – if they just clicked a button, they'd be told whether the choice was between $5/$5 and $6/$1, or $5/$1 and $6/$5. From a subject's point of view, clicking a button might tell them that picking the option they actually preferred meant they were costing the other person $4. Not clicking meant that they could honestly say that they didn't know what their choice cost the other person. It turned out that about half of the people refused to look at the other player's payoffs, and that many more subjects chose $6/? than $5/?.
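
The payoff structure is worth making concrete. Below is a minimal sketch of the two conditions in Python (my own illustration; the names are not from the book or the original study). The key feature it shows: the chooser's own payoff never depends on the hidden information, which is exactly what makes not looking so cheap.

```python
# Baseline condition: payoffs listed as (chooser, other person).
KNOWN_GAME = {"A": (5, 5), "B": (6, 1)}

# Hidden-information condition: the true game is one of two possibilities,
# revealed only if the subject clicks the button.
POSSIBLE_GAMES = [
    {"A": (5, 5), "B": (6, 1)},  # choosing B costs the other person $4
    {"A": (5, 1), "B": (6, 5)},  # choosing B helps the other person
]

def own_payoff(game, choice):
    """The chooser's own payoff, which staying ignorant never changes."""
    return game[choice][0]

# Whichever game is actually in play, B pays the chooser $6 and A pays $5,
# so in dollar terms the subject loses nothing by refusing to look.
for game in POSSIBLE_GAMES:
    assert own_payoff(game, "A") == 5
    assert own_payoff(game, "B") == 6
```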

There are many situations where not knowing something means you can avoid a lose-lose situation. If you know your friend is guilty of a serious crime and you are called to testify in court, you must either betray your friend or commit perjury. If you see a building on fire, and a small boy comes to tell you that a cat is caught in the window, your options are to either risk yourself to save the cat, or take the reputational hit of neglecting a socially perceived duty to rescue it. (Footnote in the book: "You could kill the boy, but then you've got other problems.") In the trolley problem, many people will consider both options wrong: in one setup, 87% of the people who were asked thought that pushing a man onto the tracks to save five was wrong, and 62% said that not pushing him was wrong. Better to never see the people on the tracks at all. And besides getting your reputation besmirched for not trying to save someone, you may face legal consequences: many nations have actual "duty to rescue" laws which require you to act if you see someone in serious trouble.

In general, people (and societies) often believe that if you know about something bad, you have a duty to stop it. If you don't know about something, then obviously you can't be blamed for not stopping it. So we should expect that part of our behavior is designed to avoid finding out information that would impose an unpleasant duty on us.

I personally tend to notice this conflict if I see people in public places who look like they might be sleeping or passed out. Most likely, they're just sleeping and don't want to be bothered. If they're drunk or on drugs, they could even be aggressive. But then there's always the chance that they have some kind of a condition and need medical assistance. Should I go poke them to make sure? You can't be blamed if you act like you didn't notice them, some part of me whispers. Remember the suggestion that you can fight the bystander effect by singling out a person and asking them directly for help? You can't pretend you haven't noticed a duty if the duty is pointed out to you directly. As for the bystander effect in general, there's less of a perceived duty to help if everyone else ignores the person, too. (But then this can't be the sole explanation, because people are most likely to act when they're alone and there's nobody else around to know about their duty. The bystander effect isn't actually discussed in the book; this paragraph is my own speculation.)

The police may also prefer not to know about some minor crime that is being committed. If it's known that they're ignoring drug use (say), they lose some of their authority and may end up punished by their superiors. If they don't ignore it, they may spend all of their time doing minor busts instead of concentrating on more serious crime. Parents may also pretend that they don't notice their kids engaging in some minor misbehavior, if they don't want to lose their authority but don't feel like interfering either.

In effect, the value of ignorance comes from the costs of others seeing you know something that puts you in a position in which you are perceived to have a duty and must choose to do one of two costly acts – punish, or ignore. In my own lab, we have found that people know this. When our subjects are given the opportunity to punish someone who has been unkind in an economic game, they do so much less when their punishment won't be known by anyone. That is, they decline to punish when the cloak of anonymity protects them.

The (soon-to-expire) "don't ask, don't tell" policy of the United States military can be seen as an institutionalization of this rule. Soldiers are forbidden from revealing information about their sexuality, which would force their commanders to discharge them. On the other hand, commanders are also forbidden from inquiring into the matter and finding out.

A related factor is the desire for plausible deniability. A person who wants to have multiple sexual partners may resist getting tested for sexually transmitted diseases. If he were tested, he might find out he had a disease, and then he'd be accused of knowingly endangering others if he didn't tell them about it. If he isn't tested, he'll only be accused of not finding out that information, which is often considered less serious.

These are examples of situations where it's advantageous to be ignorant of something. But there are also situations where it is good to be actively mistaken. More about them in the next post.

Consistently Inconsistent

60 Kaj_Sotala 04 August 2011 10:33PM

Robert Kurzban's Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind is a book about how our brains are composed of a variety of different, interacting systems. While that premise is hardly new, many of our intuitions are still grounded in the idea of a unified, non-compartmental self. Why Everyone (Else) Is a Hypocrite takes the modular view and systematically attacks a number of ideas based on the unified view, replacing them with a theory based on the modular view. It clarifies a number of issues previously discussed on Overcoming Bias and Less Wrong, and even debunks some outright fallacious theories that we on Less Wrong have implicitly accepted. It is quite possibly the best single book on psychology that I've read. In this post and the posts that follow, I will be summarizing some of its most important contributions.

Chapter 1: Consistently Inconsistent (available for free here) presents evidence of our brains being modular, and points out some implications of this.

As previously discussed, severing the connection between the two hemispheres of a person's brain causes some odd effects. Present the left hemisphere with a picture of a chicken claw, and the right with a picture of a wintry scene. Now show the patient an array of cards with pictures of objects on them, and ask them to point (with each hand) at something related to what they saw. The hand controlled by the left hemisphere points to a chicken; the hand controlled by the right hemisphere points to a snow shovel. Fine so far.

But what happens when you ask the patient to explain why they pointed to those objects in particular? The left hemisphere is in control of the verbal apparatus. It knows that it saw a chicken claw, and it knows that it pointed at the picture of the chicken, and that the hand controlled by the other hemisphere pointed at the picture of a shovel. Asked to explain this, it comes up with the explanation that the shovel is for cleaning up after the chicken. While the right hemisphere knows about the snowy scene, it doesn't control the verbal apparatus and can't communicate directly with the left hemisphere, so this doesn't affect the reply.

Now one asks: what did "the patient" think was going on? A crucial point of the book is that there's no such thing as the patient. "The patient" is just two different hemispheres, to some extent disconnected. You can either ask what the left hemisphere thinks, or what the right hemisphere thinks. But asking about "the patient's beliefs" is a wrong question. If you know what the left hemisphere believes, what the right hemisphere believes, and how this influences the overall behavior, then you know all that there is to know.

continue reading »

Modularity and Buzzy

24 Kaj_Sotala 04 August 2011 11:35AM

This is the second part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

Chapter 2: Evolution and the Fragmented Brain. Braitenberg's Vehicles are thought experiments that use Matchbox car-like vehicles. A simple one might have a sensor that makes the car drive away from heat. A more complex one has four sensors: one for light, one for temperature, one for organic material, and one for oxygen. This can already produce some complex behaviors: "It dislikes high temperature, turns away from hot places, and at the same time seems to dislike light bulbs with even greater passion, since it turns toward them and destroys them." Adding simple modules specialized for different tasks, such as avoiding high temperatures, can make the overall behavior increasingly complex as the modules' influences interact.
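
As a rough illustration of the idea (my own sketch, not code from the book), each "module" can be thought of as an independent mapping from a sensor reading to a steering contribution, with the contributions simply summed:

```python
# A cartoon Braitenberg-style vehicle: independent sensor-to-steering
# modules whose outputs are added together. Positive values mean "turn
# toward", negative mean "turn away"; the gains are invented.

def avoid_heat(temperature: float) -> float:
    """Steer away from hot places, in proportion to the temperature."""
    return -0.5 * temperature

def attack_lights(light: float) -> float:
    """Steer toward light bulbs, with a larger gain -- 'greater passion'."""
    return 1.5 * light

def steering(temperature: float, light: float) -> float:
    # No module knows about the others; the seemingly purposeful behavior
    # emerges from summing their independent outputs.
    return avoid_heat(temperature) + attack_lights(light)

print(steering(temperature=2.0, light=3.0))  # 3.5: the light-lust wins here
```

Nothing in the vehicle "decides" to destroy light bulbs; that description only applies to the summed behavior of modules that are individually quite dumb.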

A "module", in the context of the book, is an information-processing mechanism specialized for some function. It's comparable to a subroutine in a computer program, operating relatively independently of other parts of the code. There's a strong reason to believe that human brains are composed of a large number of modules, for specialization yields efficiency.

Consider a hammer or screwdriver. Both tools have very specific shapes, for they've been designed to manipulate objects of a certain shape in a specific way. If they were shaped differently, they'd work worse for their intended purposes. Workers will do better if they have both hammers and screwdrivers in their toolbox, instead of one "general" tool meant to perform both functions. Likewise, a toaster is specialized for toasting bread, with slots just large enough for the bread to fit in, but small enough to efficiently deliver the heat to both sides of the bread. You could toast bread with a butane torch, but it would be hard to toast it evenly – assuming you didn't just immolate the bread. The toaster "assumes" many things about the problem it has to solve – the shape of the bread, the amount of time the toast needs to be heated, that the socket it's plugged into will deliver the right kind of power, and so on. You could use the toaster as a paperweight or a weapon, but since it isn't specialized for those tasks, it would do poorly at them.

To the extent that a problem has regularities, an efficient solution to that problem will embody those regularities. This is true for both physical objects and computational ones. Microsoft Word is worse for writing code than a dedicated programming environment, which has all kinds of specialized tools for the tasks of writing, running, and debugging code.

continue reading »

Your Evolved Intuitions

16 lukeprog 05 May 2011 04:21PM

Part of the sequence: Rationality and Philosophy

We have already examined one source of our intuitions: attribute substitution heuristics. Today we examine a second source of our intuitions: biological evolution.

Evolutionary psychology

Evolutionary psychology1 has been covered on Less Wrong many times before, but let's review anyway.

Lions walk on four legs and hunt for food. Skunks defend themselves with a spray. Spiders make webs. Each species is shaped by its own selection pressures, and its nature differs from that of other species.

Certain evolved psychological mechanisms in humans are part of what makes us like each other and not like lions, skunks, and spiders.

These mechanisms evolved to solve specific adaptive problems. It is not an accident that people around the world prefer calorie-rich foods,2 that women around the world prefer men with resources,3 that men around the world prefer women with signs of fertility,4 or that most of us inherently fear snakes and spiders but not cars and electrical outlets.5

As an example of evolutionary psychology at work, consider the 'hunter-gatherer hypothesis' that men evolved psychological mechanisms to aid in hunting, while women evolved psychological mechanisms to aid in gathering.6 This hypothesis leads to a list of bold predictions. If the hypothesis is correct, then:

  1. Men in modern tribal societies should spend a lot of time hunting, and women more time gathering.
  2. Humans should show a greater tendency toward strong male coalitions than similar species in which males do not hunt much, because strong male coalitions are required to hunt big game.
  3. Because meat from most game comes in quantities larger than a single hunter can consume, and because hunting success is highly variable (one week may be a success, but perhaps not the next week), humans should exhibit food sharing and reciprocal altruism.
  4. We should expect to see a sexual division of labor, due to the different traits conducive for hunting vs. gathering.
  5. Men should exploit status gains to be had from 'showing off' large hunting successes.
  6. Men should have superior cognitive ability to navigate across large distances and perform the 3D mental rotation tasks required for throwing spears and similar hunting acts. Women should have superior cognitive ability with spatial location memory and object arrays.

And as it turns out, all these predictions are correct.7 (And no, evolutionary psychologists do not only offer 'postdictions' or 'just so' stories. Besides, probability theory does not have separate categories for 'predictions' and 'postdictions'.)
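
The parenthetical point can be cashed out directly in Bayes' theorem. The support that evidence E lends a hypothesis H is

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

and nothing on the right-hand side refers to the time at which E was observed or H was formulated; a 'postdiction' and a 'prediction' with the same likelihoods confer exactly the same support.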

continue reading »

Guilt: Another Gift Nobody Wants

63 Yvain 31 March 2011 12:27AM

Evolutionary psychology has made impressive progress in understanding the origins of morality. Along with the many posts about these origins on Less Wrong, I recommend Robert Wright's The Moral Animal for an excellent introduction to the subject.

Guilt does not naturally fall out of these explanations. One can imagine a mind design that, although often behaving morally for the same reasons we do, sometimes decides a selfish approach is best and pursues that approach without compunction. In fact, this design would have advantages: it would remove a potentially crippling psychological burden, prevent loss of status from admission of wrongdoing, and allow more rational calculation of when moral actions are or are not advantageous. So why guilt?

In one of the few existing writings I could find on the subject, Tooby and Cosmides theorize that "guilt functions as an emotion mode specialized for recalibration of regulatory variables that control trade-offs in welfare between self and other."

If I understand their meaning, they are saying that when an action results in a bad outcome, guilt is a byproduct of updating your mental processes so that it doesn't happen again. In their example, if you don't share food with your sister, and your sister starves and becomes sick, your brain gives you a strong burst of negative emotion around the event so that you reconsider your decision not to share. It is generally a bad idea to disagree with Tooby and Cosmides, but this explanation doesn't satisfy me for several reasons.

First, guilt is just as associated with good outcomes as bad outcomes. If I kill my brother so I can inherit the throne, then even if everything goes according to plan and I become king, I may still feel guilt. But why should I recalibrate here? My original assumptions - that fratricide would be easy and useful - were entirely correct. But I am still likely to feel bad about it. In fact, some criminals report feeling "relieved" when caught, as if a negative outcome decreased their feelings of guilt instead of exacerbating them.

Second, guilt is not only an emotion, but an entire complex of behaviors. Our modern word self-flagellation comes from the old practice of literally whipping oneself out of feelings of guilt or unworthiness. We may not literally self-flagellate anymore, but when I feel guilty I am less likely to do activities I enjoy and more likely to deliberately make myself miserable.

Third, although guilt can be very private, it has an undeniable social aspect. People have messaged me at 3 AM just to tell me how guilty they feel about something they did to someone I've never met; this sort of outpouring of emotion can even be therapeutic. The aforementioned self-flagellators would parade around town in their sackcloth and ashes, just in case anyone didn't know how guilty they felt. And we expect guilt in certain situations: a criminal who feels guilty about what ey has done may get a shorter sentence.

Fourth, guilt sometimes occurs even when a person has done nothing wrong. People who through no fault of their own are associated with disasters can nevertheless report "survivor's guilt" and feel like events were partly their fault. If this is a tool for recalibrating choices, it is a very bad one. This is not a knockdown argument - a lot of mental adaptations are very bad at what they do - but it should at least raise suspicion that there is another part to the puzzle besides recalibration.

continue reading »

You're in Newcomb's Box

38 HonoreDB 05 February 2011 08:46PM

Part 1:  Transparent Newcomb with your existence at stake

Related: Newcomb's Problem and Regret of Rationality

Omega, a wise and trustworthy being, presents you with a one-time-only game and a surprising revelation.  

"I have here two boxes, each containing $100," he says.  "You may choose to take both Box A and Box B, or just Box B.  You get all the money in the box or boxes you take, and there will be no other consequences of any kind.  But before you choose, there is something I must tell you."

Omega pauses portentously.

"You were created by a god: a being called Prometheus.  Prometheus was neither omniscient nor particularly benevolent.  He was given a large set of blueprints for possible human embryos, and for each blueprint that pleased him he created that embryo and implanted it in a human woman.  Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created.  If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused.  Prometheus's predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight."

Do you take both boxes, or only Box B?

continue reading »

How are critical thinking skills acquired? Five perspectives

9 matt 22 October 2010 02:29AM

Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking, Debate tools: an experience report

In How are critical thinking skills acquired? Five perspectives, Tim van Gelder discusses the acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.

In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis.   This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice.  We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems.   To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach.   In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students.   (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)

LW has been introduced to argument mapping before.

The Meaning of Life

13 b1shop 17 September 2010 07:29PM

Fifteen thousand years ago, our ancestors bred dogs to serve man. In merely 150 centuries, we shaped collies to herd our sheep and Pekingese to sit in our emperors' sleeves. Wild wolves can't understand us, but we teach their domesticated counterparts tricks for fun. And, most importantly of all, dogs get emotional pleasure out of serving their master. When my family's terrier runs to the kennel, she does so with blissful, self-reinforcing obedience.

When I hear amateur philosophers ponder the meaning of life, I worry humans suffer from the same embarrassing shortcoming.

It's not enough to find a meaningful cause. These monkeys want to look in the stars and see their lives' purpose described in explicit detail. They expect to comb through ancient writings and suddenly discover an edict reading "the meaning of life is to collect as many paperclips as possible" and then happily go about their lives as imperfect, yet fulfilled paperclip maximizers.

I'd expect us to shout "life is without mandated meaning!" with lungs full of joy. There are no rules we have to follow, only the consequences we choose for ourselves and our fellow humans. Huzzah!

But most humans want nothing more than to surrender to a powerful force. See Augustine's conception of freedom, the definition of the word Islam, or Popper's "The Open Society and Its Enemies." When they can't find one overwhelming enough, they furrow their brow and declare with frustration that life has no meaning.

This is part denunciation and part confession. At times, I've felt the same way. I worry man is a domesticated species.

continue reading »

Morality as Parfitian-filtered Decision Theory?

24 SilasBarta 30 August 2010 09:37PM

Non-political follow-up to: Ungrateful Hitchhikers (offsite)

Related to: Prices or Bindings?, The True Prisoner's Dilemma

Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even if it can no longer cause any future benefit.  A mind that can identify such actions might put them in a different category which enables it to perform them, in defiance of the (futureward) consequentialist concerns that normally need to motivate it.  Our evolutionary history has put us through such "Parfitian filters", and the corresponding actions, viewed from the inside, feel like "something we should do", even if we don’t do it, and even if we recognize the lack of a future benefit.  Therein lies the origin of our moral intuitions, as well as the basis for creating the category "morality" in the first place.

Introduction: What kind of mind survives Parfit's Dilemma?

Parfit's Dilemma – my version – goes like this: You are lost in the desert and near death.  A superbeing known as Omega finds you and considers whether to take you back to civilization and stabilize you.  It is a perfect predictor of what you will do, and only plans to rescue you if it predicts that you will, upon recovering, give it $0.01 from your bank account.  If it doesn’t predict you’ll pay, you’re left in the desert to die. [1]

So what kind of mind wakes up from this?  One that would give Omega the money.  Most importantly, the mind is not convinced to withhold payment on the basis that the benefit was received only in the past.  Even if it recognizes that no future benefit will result from this decision -- and only future costs will result -- it decides to make the payment anyway.
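
The selection effect is easy to see in a toy simulation (my own sketch, not from the post; the 90% predictor accuracy and the population sizes are invented, and the post's Omega is a perfect predictor):

```python
import random

random.seed(0)
ACCURACY = 0.9  # invented for illustration; the post's Omega is perfect

def rescued(pays: bool) -> bool:
    """Omega rescues iff it predicts payment; its prediction matches the
    mind's true disposition with probability ACCURACY."""
    prediction = pays if random.random() < ACCURACY else not pays
    return prediction

# A population of minds: True = disposed to pay the $0.01, False = not.
population = [True] * 500 + [False] * 500
survivors = [pays for pays in population if rescued(pays)]

# The desert is a Parfitian filter: the minds that wake up afterwards are
# overwhelmingly payers, though paying buys the payer nothing further.
print(sum(survivors) / len(survivors))  # roughly 0.9
```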

continue reading »
