Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Consistently Inconsistent

60 Kaj_Sotala 04 August 2011 10:33PM

Robert Kurzban's Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind is a book about how our brains are composed of a variety of different, interacting systems. While that premise is hardly new, many of our intuitions are still grounded in the idea of a unified, non-compartmental self. Why Everyone (Else) Is a Hypocrite takes the modular view and systematically attacks a number of ideas based on the unified view, replacing them with a theory based on the modular view. It clarifies a number of issues previously discussed on Overcoming Bias and Less Wrong, and even debunks some outright fallacious theories that we on Less Wrong have implicitly accepted. It is quite possibly the best single book on psychology that I've read. In this post and the posts that follow, I will be summarizing some of its most important contributions.

Chapter 1: Consistently Inconsistent (available for free here) presents evidence of our brains being modular, and points out some implications of this.

As previously discussed, severing the connection between the two hemispheres of a person's brain causes some odd effects. Present the left hemisphere with a picture of a chicken claw, and the right with a picture of a wintry scene. Now show the patient an array of cards with pictures of objects on them, and ask them to point (with each hand) to something related to what they saw. The hand controlled by the left hemisphere points to a chicken, the hand controlled by the right hemisphere points to a snow shovel. Fine so far.

But what happens when you ask the patient to explain why they pointed to those objects in particular? The left hemisphere is in control of the verbal apparatus. It knows that it saw a chicken claw, and it knows that it pointed at the picture of the chicken, and that the hand controlled by the other hemisphere pointed at the picture of a shovel. Asked to explain this, it comes up with the explanation that the shovel is for cleaning up after the chicken. While the right hemisphere knows about the snowy scene, it doesn't control the verbal apparatus and can't communicate directly with the left hemisphere, so this doesn't affect the reply.

Now one asks: what did “the patient” think was going on? A crucial point of the book is that there's no such thing as the patient. “The patient” is just two different hemispheres, to some extent disconnected. You can either ask what the left hemisphere thinks, or what the right hemisphere thinks. But asking about “the patient's beliefs” is a wrong question. If you know what the left hemisphere believes, what the right hemisphere believes, and how this influences the overall behavior, then you know all that there is to know.

continue reading »

Modularity and Buzzy

24 Kaj_Sotala 04 August 2011 11:35AM

This is the second part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

Chapter 2: Evolution and the Fragmented Brain. Braitenberg's Vehicles are thought experiments that use Matchbox car-like vehicles. A simple one might have a sensor that made the car drive away from heat. A more complex one has four sensors: one for light, one for temperature, one for organic material, and one for oxygen. This can already cause some complex behaviors: “It dislikes high temperature, turns away from hot places, and at the same time seems to dislike light bulbs with even greater passion, since it turns toward them and destroys them.” Adding simple modules specialized for different tasks, such as avoiding high temperatures, can make the overall behavior increasingly complex as the modules' influences interact.

A “module”, in the context of the book, is an information-processing mechanism specialized for some function. It's comparable to a subroutine in a computer program, operating relatively independently of other parts of the code. There's a strong reason to believe that human brains are composed of a large number of modules, for specialization yields efficiency.
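The subroutine analogy can be made concrete with a minimal sketch (mine, not the book's): each module is an independent function that maps a sensor reading to a steering contribution, and the vehicle's overall behavior is just the sum of their outputs. The function names and weights here are illustrative assumptions.

```python
# Minimal sketch of a Braitenberg-style vehicle. Each "module" is an
# independent subroutine; none of them knows about the others, yet their
# combined output yields the vehicle's overall steering behavior.

def avoid_heat(temperature):
    # Steer away from heat, in proportion to how hot it is.
    return -1.0 * temperature

def seek_light(light):
    # Steer toward light, with a stronger weight -- the vehicle
    # "dislikes light bulbs with even greater passion".
    return 2.0 * light

def steering(temperature, light):
    # The modules' influences simply sum; the apparent complexity of the
    # behavior comes from their interaction, not from a central controller.
    return avoid_heat(temperature) + seek_light(light)

print(steering(temperature=3.0, light=0.0))  # -3.0: heat alone, steer away
print(steering(temperature=3.0, light=2.0))  # 1.0: the light drive wins out
```

With only two modules the behavior is easy to predict; the point is that adding more such independent rules makes the aggregate behavior look increasingly purposeful without any module having the whole picture.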

Consider a hammer or screwdriver. Both tools have very specific shapes, for they've been designed to manipulate objects of a certain shape in a specific way. If they were of a different shape, they'd work worse for the purpose they were intended for. Workers will do better if they have both hammers and screwdrivers in their toolbox, instead of one “general” tool meant to perform both functions. Likewise, a toaster is specialized for toasting bread, with slots just large enough for the bread to fit in, but small enough to efficiently deliver the heat to both sides of the bread. You could toast bread with a butane torch, but it would be hard to toast it evenly – assuming you didn't just immolate the bread. The toaster “assumes” many things about the problem it has to solve – the shape of the bread, the amount of time the toast needs to be heated, that the socket it's plugged into will deliver the right kind of power, and so on. You could use the toaster as a paperweight or a weapon, but not being specialized for those tasks, it would do poorly at them.

To the extent that a problem has regularities, an efficient solution to the problem will embody those regularities. This is true for both physical objects and computational ones. Microsoft Word is worse for writing code than a dedicated programming environment, which has all kinds of specialized tools for the task of writing, running and debugging code.

continue reading »

Nature: Red, in Truth and Qualia

35 orthonormal 29 May 2011 11:50PM

Previously: Seeing Red: Dissolving Mary's Room and Qualia, A Study of Scarlet: The Conscious Mental Graph

When we left off, we'd introduced a hypothetical organism called Martha whose actions are directed by a mobile graph of simple mental agents. The tip of the iceberg, consisting of the agents that are connected to Martha's language centers, we called the conscious subgraph. Now we're going to place Martha into a situation like Mary's Room: we'll say that a large unconscious agent of hers (like color vision) has never been active, we'll grant her an excellent conscious understanding of that agent, and then we'll see what happens when we activate it for the first time.

But first, there's one more mental agent we need to introduce, one which serves a key purpose in Martha's evolutionary history: a simple agent that identifies learning.

continue reading »

A Study of Scarlet: The Conscious Mental Graph

29 orthonormal 27 May 2011 08:13PM

Sequel to: Seeing Red: Dissolving Mary's Room and Qualia

Seriously, you should read first: Dissolving the Question, How an Algorithm Feels From Inside

In the previous post, we introduced the concept of qualia and the thought experiment of Mary's Room, set out to dissolve the question, and decided that we were seeking a simple model of a mind which includes both learning and a conscious/subconscious distinction. Since for now we're just trying to prove a philosophical point, we don't need to worry whether our model corresponds well to the human mind (though it would certainly be convenient if it did); we'll therefore pick an abstract mathematical structure that we can analyze more easily.

continue reading »

Seeing Red: Dissolving Mary's Room and Qualia

38 orthonormal 26 May 2011 05:47PM

Essential Background: Dissolving the Question

How could we fully explain the difference between red and green to a colorblind person?

Well, we could of course draw the analogy between colors of the spectrum and tones of sound; have them learn which objects are typically green and which are typically red (or better yet, give them a video camera with a red filter to look through); explain many of the political, cultural and emotional associations of red and green, and so forth... but it seems that the actual difference between our experience of redness and our experience of greenness is something much harder to convey. If we focus in on that aspect of experience, we end up with the classic philosophical concept of qualia, and the famous thought experiment known as Mary’s Room1.

Mary is a brilliant neuroscientist who has been colorblind from birth (due to a retina problem; her visual cortex would work normally if it were given the color input). She’s an expert on the electromagnetic spectrum, optics, and the science of color vision. We can postulate, since this is a thought experiment, that she knows and fully understands every physical fact involved in color vision; she knows precisely what happens, on various levels, when the human eye sees red (and the optic nerve transmits particular types of signals, and the visual cortex processes these signals, etc).

One day, Mary gets an operation that fixes her retinas, so that she finally sees in color for the first time. And when she wakes up, she looks at an apple and exclaims, "Oh! So that's what red actually looks like."2

Now, this exclamation poses a challenge to any physical reductionist account of subjective experience. For if the qualia of seeing red could be reduced to a collection of basic facts about the physical world, then Mary would have learned those facts earlier and wouldn't learn anything extra now – but of course it seems that she really does learn something when she sees red for the first time. This is not merely the god-of-the-gaps argument that we haven't yet found a full reductionist explanation of subjective experience, but an intuitive proof that no such explanation would be complete.

The argument in academic philosophy over Mary's Room remains unsettled to this day (though it has an interesting history, including a change of mind on the part of its originator). If we ignore the topic of subjective experience, the arguments for reductionism appear to be quite overwhelming; so why does this objection, in a domain in which our ignorance is so vast3, seem so difficult for reductionists to convincingly reject?

Veterans of this blog will know where I'm going: a question like this needs to be dissolved, not merely answered.

continue reading »

Suffering as attention-allocational conflict

49 Kaj_Sotala 18 May 2011 03:12PM

I previously characterized Michael Vassar's theory on suffering as follows: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." While not too far off the mark, it turns out this wasn't what he actually said. Instead, he said that suffering is a conflict between two (or more) attention-allocation mechanisms in the brain.

I have been successful at using this different framing to reduce the amount of suffering I feel. The method goes like this. First, I notice that I'm experiencing something that could be called suffering. Next, I ask, what kind of an attention-allocational conflict is going on? I consider the answer, attend to the conflict, resolve it, and then I no longer suffer.

An example is probably in order, so here goes. Last Friday, there was a Helsinki meetup with Patri Friedman present. I had organized the meetup, and wanted to go. Unfortunately, I already had other obligations for that day, ones I couldn't back out from. One evening, I felt considerable frustration over this.

Noticing my frustration, I asked: what attention-allocational conflict is this? It quickly became obvious that two systems were fighting it out:

* The Meet-Up System was trying to convey the message: “Hey, this is a rare opportunity to network with a smart, high-status individual and discuss his ideas with other smart people. You really should attend.”
* The Prior Obligation System responded with the message: “You've already previously agreed to go somewhere else. You know it'll be fun, and besides, several people are expecting you to go. Not going bears an unacceptable social cost, not to mention screwing over the other people's plans.”

Now, I wouldn't have needed to consciously reflect on the messages to be aware of them. It was hard to not be aware of them: it felt like my consciousness was in a constant crossfire, with both systems bombarding it with their respective messages.

But there's an important insight here, one which I originally picked up from PJ Eby. If a mental subsystem is trying to tell you something important, then it will persist in doing so until it's properly acknowledged. Trying to push away the message means it has not been properly addressed and acknowledged, meaning the subsystem has to continue repeating it.
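The persist-until-acknowledged dynamic can be sketched as a toy model (mine, not Eby's or Kurzban's; all names here are illustrative): a subsystem keeps re-emitting its message until an explicit acknowledgment flag is set, and merely ignoring the output does nothing to quiet it.

```python
# Toy model of a mental subsystem that repeats its message until it
# is properly acknowledged. Ignoring the message leaves it active.

class Subsystem:
    def __init__(self, message):
        self.message = message
        self.acknowledged = False

    def speak(self):
        # An unacknowledged message keeps coming back, every time we check.
        return None if self.acknowledged else self.message

    def acknowledge(self):
        # Only explicitly attending to the message silences the subsystem.
        self.acknowledged = True

obligation = Subsystem("You already agreed to go somewhere else.")
print(obligation.speak())  # the message is delivered...
print(obligation.speak())  # ...and repeated; pushing it away is not acknowledgment
obligation.acknowledge()
print(obligation.speak())  # None: once addressed, the subsystem quiets down
```

The design choice worth noticing is that `speak` is stateless with respect to how many times it has been ignored: repetition costs the subsystem nothing, which is why suppression is a losing strategy.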

continue reading »

Mapping our maps: types of knowledge

5 Swimmer963 27 April 2011 02:16AM

Related to: Map and Territory.

This post is based on ideas that came to me during my second-year nursing Research Methods class. The fact that I did terribly in this class maybe indicates that I shouldn't be trying to explain it to anyone, but it also has a lot to do with the way I zoned out for most of every class, mulling over the material that would later become this post.

Types of map: the level of abstraction, or ‘how many steps away from reality’?

Probably in the third or fourth Research Methods class, we learned that any given research proposal could be placed into one of the following four categories:

  • Descriptive
  • Exploratory
  • Explanatory
  • Predictive

continue reading »

Were atoms real?

61 AnnaSalamon 08 December 2010 05:30PM

Related to: Dissolving the Question, Words as Hidden Inferences

In what sense is the world “real”?  What are we asking, when we ask that question?

I don’t know.  But G. Polya recommends that when facing a difficult problem, one look for similar but easier problems that one can solve as warm-ups.  I would like to do one of those warm-ups today; I would like to ask what disguised empirical question scientists were asking in 1860, when they debated (fiercely!) whether atoms were real.[1]

Let’s start by looking at the data that swayed these, and similar, scientists.

Atomic theory:  By 1860, it was clear that atomic theory was a useful pedagogical device.  Atomic theory helped chemists describe several regularities:

  • The law of definite proportions (chemicals combining to form a given compound always combine in a fixed ratio)
  • The law of multiple proportions (the ratios in which chemicals combine when forming distinct compounds, such as carbon dioxide and carbon monoxide, form simple integer ratios; this holds for many different compounds, including complicated organic compounds).
  • If fixed volumes of distinct gases are isolated, at a fixed temperature and pressure, their masses form these same ratios.
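The law of multiple proportions is simple arithmetic, and a quick check makes the regularity vivid. This sketch is mine, not the post's; the atomic masses are approximate standard values.

```python
# The law of multiple proportions, checked for carbon monoxide (CO) and
# carbon dioxide (CO2), using approximate standard atomic masses.
C_MASS, O_MASS = 12.0, 16.0

# Grams of oxygen combining with one gram of carbon in each compound:
o_per_c_in_co = O_MASS / C_MASS        # CO:  one O per C
o_per_c_in_co2 = 2 * O_MASS / C_MASS   # CO2: two O per C

# The two ratios stand in a simple integer ratio (exactly 2:1) -- the kind
# of regularity that is natural if matter comes in discrete units, and
# surprising if substances could combine in arbitrary continuous amounts.
print(o_per_c_in_co2 / o_per_c_in_co)  # 2.0
```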

Despite this usefulness, there was considerable debate as to whether atoms were “real” or were merely a useful pedagogical device.  Some argued that substances might simply prefer to combine in certain ratios and that such empirical regularities were all there was to atomic theory; it was needless to additionally suppose that matter came in small unbreakable units.

continue reading »

What is the group selection debate?

28 Academian 02 November 2010 02:02AM

Related to Group selection update, The tragedy of group selectionism

tl;dr: In competitive selection processes, selection is a two-place word: there's something being selected (a cause), and something it's being selected for (an effect). The phrase group-level gene selection helps dissolve questions and confusion surrounding the less descriptive phrase "group selection".

(Essential note for new readers on reduction: Reality does not seem to keep track of different "levels of organization" and apply different laws at each level; rather, it seems that the patterns we observe at higher levels are statistical consequences of the laws and initial conditions at the lower levels. This is the "reductionist thesis.")

When I first encountered people debating "whether group selection is real", I couldn't see what there was to possibly debate about. I've since realized the debate is mostly a confusion arising from a cognitive misuse of a two-place "selection" relation.

Causes being selected versus effects they're being selected for.

A gene is an example of a Replicating Cause. (So is a meme, but let's postpone that discussion here.) A gene has many effects, one of which is that what we call "copies" of it tend to crop up in reality, through various mechanisms that involve cellular and organismal reproduction.

For example, suppose a particular human gene X causes cells containing it to immediately reproduce without bound, i.e. the gene is "cancerous". One effect is that there will soon be many more cells with that gene, hence more copies of the gene. Another effect is that the human organism containing it is liable to die without passing it on, hence fewer copies of the gene (once the dead organism starts to decay). If that's what happens, the gene itself can be considered unfit: all things considered, its various effects eventually lead it to stop existing.

(An individual in the next generation can still "get cancer", though, if a mutation in one produces a new cancerous gene, Y. This is what happens in reality.)

Thus, cancers are examples of where higher-complexity mechanisms trump lower-complexity mechanisms: organism-level gene selection versus cellular-level gene selection. Note that the Replicating Cause being selected is always the gene, but it is being selected for its net effects occurring on various levels.

So what's left to debate about?

continue reading »

Your intuitions are not magic

65 Kaj_Sotala 10 June 2010 12:11AM

This article is an attempt to summarize basic material, and thus probably won't have anything new for the hard core posting crowd. If you're new and this article got you curious, we recommend the Sequences.

People who know a little bit of statistics - enough to use statistical techniques, not enough to understand why or how they work - often end up horribly misusing them. Statistical tests are complicated mathematical techniques, and to work, they tend to make numerous assumptions. The problem is that if those assumptions are not valid, most statistical tests do not cleanly fail and produce obviously false results. Neither do they require you to carry out impossible mathematical operations, like dividing by zero. Instead, they simply produce results that do not tell you what you think they tell you. As a formal system, pure math exists only inside our heads. We can try to apply it to the real world, but if we are misapplying it, nothing in the system itself will tell us that we're making a mistake.

Examples of misapplied statistics have been discussed here before. Cyan discussed a "test" that could only produce one outcome. PhilGoetz critiqued a statistical method which implicitly assumed that taking a healthy dose of vitamins had a comparable effect as taking a toxic dose.

Even a very simple statistical technique, like taking the correlation between two variables, might be misleading if you forget about the assumptions it's making. When someone says "correlation", they are most commonly talking about Pearson's correlation coefficient, which seeks to gauge whether there's a linear relationship between two variables. In other words, if X increases, does Y also tend to increase (or decrease)? However, like with vitamin dosages and their effects on health, two variables might have a non-linear relationship. Increasing X might increase Y up to a certain point, after which increasing X would decrease Y. Simply calculating Pearson's correlation on two such variables might cause someone to get a low correlation, and therefore conclude that there's no relationship, or only a weak one, between the two. (See also Anscombe's quartet.)
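The failure is easy to demonstrate concretely. Here is a small sketch (mine, not the post's) computing Pearson's r from its definition for an inverted-U relationship: Y is a perfect function of X, yet the linear correlation comes out near zero.

```python
def pearson_r(xs, ys):
    """Pearson's correlation coefficient, computed from its definition."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

xs = list(range(-50, 51))

# A perfect but non-linear relationship: Y rises as X approaches 0,
# then falls again -- like health as a function of vitamin dose.
ys = [-(x ** 2) for x in xs]

print(pearson_r(xs, xs))  # close to 1.0: a linear relationship is detected
print(pearson_r(xs, ys))  # close to 0.0: the perfect non-linear one is invisible
```

The test isn't wrong, exactly; it answers the question it was designed to answer (is there a *linear* trend?), which is simply not the question the careless user thinks they asked.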

The lesson here, then, is that not understanding how your analytical tools work will get you incorrect results when you try to analyze something. A person who doesn't stop to consider the assumptions of the techniques she's using is, in effect, thinking that her techniques are magical. No matter how she might use them, they will always produce the right results. Of course, assuming that makes about as much sense as assuming that your hammer is magical and can be used to repair anything. Even if you had a broken window, you could fix that by hitting it with your magic hammer. But I'm not only talking about statistics here, for the same principle can be applied in a more general manner.

continue reading »
