Less Wrong is a community blog devoted to refining the art of human rationality.

Confused as to usefulness of 'consciousness' as a concept

35 KnaveOfAllTrades 13 July 2014 11:01AM

Years ago, before I had come across many of the power tools in statistics, information theory, algorithmics, decision theory, or the Sequences, I was very confused by the concept of intelligence. Like many, I was inclined to reify it as some mysterious, effectively-supernatural force that tilted success at problem-solving in various domains towards the 'intelligent', and which occupied a scale imperfectly captured by measures such as IQ.

Realising that 'intelligence' (as a ranking of agents or as a scale) was a lossy compression of an infinity of statements about the relative success of different agents in various situations was part of dissolving the confusion; the reason that those called 'intelligent' or 'skillful' succeeded more often was that there were underlying processes that had a greater average tendency to output success, and that greater average success caused the application of the labels.

Any agent can be made to lose by an adversarial environment. But for a fixed set of environments, there might be some types of decision processes that do relatively better over that set of environments than other processes, and one can quantify this relative success in any number of ways.

It's almost embarrassing to write that since put that way, it's obvious. But it still seems to me that intelligence is reified (for example, look at most discussions about IQ), and the same basic mistake is made in other contexts, e.g. the commonly-held teleological approach to physical and mental diseases or 'conditions', in which the label is treated as if—by some force of supernatural linguistic determinism—it *causes* the condition, rather than the symptoms of the condition, in their presentation, causing the application of the labels. Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.

For the sake of brevity, even when we realise these approximations, we often use them without commenting upon or disclaiming our usage, and in many cases this is sensible. Indeed, in many cases it's not clear what the exact, decompressed form of a concept would be, or it seems obvious that there can in fact be no single, unique rigorous form of the concept, but that the usage of the imprecise term is still reasonably consistent and correlates usefully with some relevant phenomenon (e.g. tendency to successfully solve problems). Hearing that one person has a higher IQ than another might allow one to make more reliable predictions about who will have the higher lifetime income, for example.

However, widespread use of such shorthands has drawbacks. If a term like 'intelligence' is used without concern or without understanding of its core (i.e. tendencies of agents to succeed in varying situations, or 'efficient cross-domain optimization'), then it might be used teleologically; the term is reified (the mental causal graph goes from "optimising algorithm->success->'intelligent'" to "'intelligent'->success").

In this teleological mode, it feels like 'intelligence' is the 'prime mover' in the system, rather than a description applied retroactively to a set of correlations. But knowledge of those correlations makes the term redundant; once we are aware of the correlations, the term 'intelligence' is just a pointer to them, and does not add anything to them. Despite this, it seems to me that some smart people get caught up in obsessing about reified intelligence (or measures like IQ) as if it were a magical key to all else.

Over the past while, I have been leaning more and more towards the conclusion that the term 'consciousness' is used in similarly dubious ways, and today it occurred to me that there is a very strong analogy between the potential failure modes of discussion of 'consciousness' and the potential failure modes of discussion of 'intelligence'. In fact, I suspect that the perils of 'consciousness' might be far greater than those of 'intelligence'.


A few weeks ago, Scott Aaronson posted to his blog a criticism of integrated information theory (IIT). IIT attempts to provide a quantitative measure of the consciousness of a system. (Specifically, a nonnegative real number phi). Scott points out what he sees as failures of the measure phi to meet the desiderata of a definition or measure of consciousness, thereby arguing that IIT fails to capture the notion of consciousness.

What I read and understood of Scott's criticism seemed sound and decisive, but I can't shake a feeling that such arguments about measuring consciousness are missing the broader point that all such measures of consciousness are doomed to failure from the start, in the same way that arguments about specific measures of intelligence are missing a broader point about lossy compression.

Let's say I ask you to make predictions about the outcome of a game of half-court basketball between Alpha and Beta. Your prior knowledge is that Alpha always beats Beta at (individual versions of) every sport except half-court basketball, and that Beta always beats Alpha at half-court basketball. From this fact you assign Alpha a Sports Quotient (SQ) of 100 and Beta an SQ of 10. Since Alpha's SQ is greater than Beta's, you confidently predict that Alpha will beat Beta at half-court.

Of course, that would be wrong, wrong, wrong; the SQ's are encoding (or compressing) the comparative strengths and weaknesses of Alpha and Beta across various sports, and in particular the fact that Alpha always loses to Beta at half-court. (In fact, not even that much information is encoded, since other combinations of results might lead to the same scores.) So to just look at the SQ's as numbers and use that as your prediction criterion is a knowably inferior strategy compared to looking at the details of the case in question, i.e. the actual past results of half-court games between the two.
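The lossiness is easy to make concrete. The following sketch is my own construction (the names and scoring rule are hypothetical, not from the post): compressing the per-sport results into a single scalar "SQ" discards exactly the matchup-specific information needed to predict the half-court game.

```python
# results[(a, b)][sport] = name of the winner when a plays b at that sport.
results = {
    ("Alpha", "Beta"): {
        "tennis": "Alpha",
        "golf": "Alpha",
        "swimming": "Alpha",
        "half_court": "Beta",
    },
}

def sq(player):
    """A lossy scalar summary: total wins across all sports."""
    return sum(1 for winners in results.values()
               for w in winners.values() if w == player)

def predict_by_sq(a, b, sport):
    """Knowably inferior: compares scalars, ignoring which sport is played."""
    return a if sq(a) >= sq(b) else b

def predict_by_record(a, b, sport):
    """Looks up the actual past results for this matchup and sport."""
    return results[(a, b)][sport]

print(predict_by_sq("Alpha", "Beta", "half_court"))      # prints Alpha (wrong)
print(predict_by_record("Alpha", "Beta", "half_court"))  # prints Beta (right)
```

Tabooing "SQ" and working from `results` directly never loses information, because the scalar was computed from `results` in the first place.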

Since measures like this fictional SQ or actual IQ or fuzzy (or even quantitative) notions of consciousness are at best shorthands for specific abilities or behaviours, tabooing the shorthand should never leave you with less information, since a true shorthand, by its very nature, does not add any information.

When I look at something like IIT, which (if Scott's criticism is accurate) assigns a superhuman consciousness score to a system that evaluates a polynomial at some points, my reaction is simply, "Well, this kind of flaw is pretty much inevitable in such an overambitious definition."

Six months ago, I wrote:

"...it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?))..."

Mark Friedenbach replied recently (so, a few months later):

"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"

I feel like if Mark had made that reply soon after my comment, I might have had a hard time formulating why, but that I would have been inclined towards disputing that my computer is conscious. As it is, at this point I am struggling to see that there is any meaningful disagreement here. Would we disagree over what my computer can do? What information it can process? What tasks it is good for, and for which not so much?

What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?

Even if we did disagree (or at least did not agree) over, say, an average human's ability to detect and avoid ultraviolet light without artificial aids and modern knowledge, this lack of agreement would not feel like a messy, confusing philosophical one. It would feel like one tractable to direct experimentation. You know, like, blindfold the experimenters, expose experimental subjects to ultraviolet light and control subjects to other light, and see how the two groups react. Just like if we were arguing about whether Alpha or Beta is the better athlete, there would be no mystery left over once we'd agreed about their relative abilities at every athletic activity. At most there would be terminological bickering over which scoring rule over athletic activities we should be using to measure 'athletic ability', but not any disagreement for any fixed measure.

I have been turning it over for a while now, and I am struggling to think of contexts in which consciousness really holds up to attempts to reify it. If asked why it doesn't make sense to politely ask a virus to stop multiplying because it's going to kill its host, a conceivable response might be something like, "Erm, you know it's not conscious, right?" This response might well do the job. But if pressed to cash out this response, what we're really concerned with is the absence of the usual physical-biological processes by which talking at a system might affect its behaviour, so that there is no reason to expect the polite request to increase the chance of the favourable outcome. Sufficient knowledge of physics and biology could make this even more rigorous, and no reference need be made to consciousness.

The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is *defined* (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'. But then it's not clear why we should call this moral criterion 'consciousness'; insomuch as consciousness is about information processing or understanding an environment, it's not obvious what connection this has to moral worth. And insomuch as consciousness is the Magic Token of Moral Worth, it's not clear what it has to do with information processing.

If we relabelled zxcv=conscious and rewrote, "We shouldn't eat chickens because they're zxcv," then this makes it clearer that the explanation is not entirely satisfactory; what does zxcv have to do with moral worth? Well, what does consciousness have to do with moral worth? Conservation of argumentative work and the usual prohibitions on equivocation apply: You can't introduce a new sense of the word 'conscious' then plug it into a statement like "We shouldn't eat chickens because they're conscious" and dust your hands off as if your argumentative work is done. That work is done only if one's actual values and the definition of consciousness to do with information processing already exactly coincide, and this coincidence is known. But it seems to me like a claim of any such coincidence must stem from confusion rather than actual understanding of one's values; valuing a system commensurate with its ability to process information is a fake utility function.

When intelligence is reified, it becomes a teleological fake explanation; consistently successful people are consistently successful because they are known to be Intelligent, rather than their consistent success causing them to be called intelligent. Similarly consciousness becomes teleological in moral contexts: We shouldn't eat chickens because they are called Conscious, rather than 'these properties of chickens mean we shouldn't eat them, and chickens also qualify as conscious'.

So it is that I have recently been very skeptical of the term 'consciousness' (though grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?

An attempt to dissolve subjective expectation and personal identity

35 Kaj_Sotala 22 February 2013 08:44PM

I attempt to figure out a way to dissolve the concepts of 'personal identity' and 'subjective expectation' down to the level of cognitive algorithms, in a way that would let one bite the bullets of the anthropic trilemma. I proceed by considering four clues which seem important: 1) the evolutionary function of personal identity, 2) a sense of personal identity being really sticky, 3) an undefined personal identity causing undefined behavior in our decision-making machinery, and 4) our decision-making machinery being more strongly grounded in our subjective expectation than in abstract models. Taken together, these seem to suggest a solution.

I ended up re-reading some of the debates about the anthropic trilemma, and it struck me as odd that, aside from a few references to personal identity being an evolutionary adaptation, there seemed to be no attempt to reduce the concept to the level of cognitive algorithms. Several commenters thought that there wasn't really any problem, and Eliezer asked them to explain why the claim that there isn't any problem nevertheless violated the intuitive rules of subjective expectation. That seemed like a very strong indication that the question needs to be dissolved, but almost none of the attempted answers seemed to do that, instead trying to solve the question via decision theory without ever addressing the core issue of subjective expectation. rwallace's I-less Eye argued - I believe correctly - that subjective anticipation isn't ontologically fundamental, but still didn't address the question of why it feels like it is.

Here's a sketch of a dissolvement. It seems relatively convincing to me, but I'm not sure how others will take it, so let's give it a shot. Even if others find it incomplete, it should at least help provide clues that point towards a better dissolvement.

Clue 1: The evolutionary function of personal identity.

Let's first consider the evolutionary function. Why have we evolved a sense of personal identity?

The first answer that always comes to everyone's mind is that our brains have evolved for the task of spreading our genes, which involves surviving at least for as long as it takes to reproduce. Simpler neural functions, like maintaining a pulse and having reflexes, obviously do fine without a concept of personal identity. But if we wish to use abstract, explicit reasoning to advance our own interests, we need some definition for exactly whose interests it is that our reasoning process is supposed to be optimizing. So evolution comes up with a fuzzy sense of personal identity, so that optimizing the interests of this identity also happens to optimize the interests of the organism in question.

That's simple enough, and this point was already made in the discussions so far. But that doesn't feel like it would resolve our confusion yet, so we need to look at the way that personal identity is actually implemented in our brains. What is the cognitive function of personal identity?

Clue 2: A sense of personal identity is really sticky.

Even people who disbelieve in personal identity don't really seem to disalieve it: for the most part, they're just as likely to be nervous about their future as anyone else. Even advanced meditators who go out trying to dissolve their personal identity seem to still retain some form of it. PyryP claims that at one point, he reached a stage in meditation where the experience of “somebody who experiences things” shattered and he could turn it entirely off, or attach it to something entirely different, such as a nearby flower vase. But then the experience of having a self began to come back: it was as if the brain was hardwired to maintain one, and to reconstruct it whenever it was broken. I asked him to comment on that for this post, and he provided the following:

continue reading »

Causal Reference

30 Eliezer_Yudkowsky 20 October 2012 10:12PM

Followup to: The Fabric of Real Things, Stuff That Makes Stuff Happen

Previous meditation: "Does your rule forbid epiphenomenalist theories of consciousness, in which consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place - for there to be no detectable difference internally, not just externally - 'consciousness' would have to be something created by the atoms in the brain, but which didn't affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."

Is it coherent to imagine a universe in which a real entity can be an effect but not a cause?

Well... there's a couple of senses in which it seems imaginable. It's important to remember that imagining things yields info primarily about what human brains can imagine. It only provides info about reality to the extent that we think imagination and reality are systematically correlated for some reason.

That said, I can certainly write a computer program in which there's a tier of objects affecting each other, and a second tier - a lower tier - of epiphenomenal objects which are affected by them, but don't affect them. For example, I could write a program to simulate some balls that bounce off each other, and then some little shadows that follow the balls around.
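A minimal sketch of such a program, under my own assumptions about the details (the post doesn't give code): the ball tier evolves by reading only ball state, while the shadow tier is computed from the balls each frame and is never read by anything in the ball tier.

```python
# A toy two-tier universe: balls affect each other and the walls;
# shadows are affected by the balls but never affect anything.

def step(balls):
    """Advance the ball tier: balls drift and bounce off the walls of [0, 10].
    Note this function never reads a shadow."""
    for b in balls:
        b["x"] += b["v"]
        if b["x"] < 0 or b["x"] > 10:
            b["v"] = -b["v"]                  # balls respond to their own tier
            b["x"] = max(0.0, min(10.0, b["x"]))
    return balls

def project_shadows(balls):
    """The epiphenomenal tier: recomputed *from* the balls each frame.
    Shadows are effects but never causes within the simulated universe."""
    return [b["x"] + 0.5 for b in balls]      # each shadow trails its ball

balls = [{"x": 1.0, "v": 0.3}, {"x": 9.0, "v": -0.2}]
for _ in range(5):
    balls = step(balls)
shadows = project_shadows(balls)
```

Crucially, deleting `project_shadows` changes nothing about the balls' trajectories, which is exactly the sense in which the lower tier is causally inert.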

But then I only know about the shadows because I'm outside that whole universe, looking in. So my mind is being affected by both the balls and shadows - to observe something is to be affected by it. I know where the shadow is, because the shadow makes pixels be drawn on screen, which make my eye see pixels. If your universe has two tiers of causality - a tier with things that affect each other, and another tier of things that are affected by the first tier without affecting them - then could you know that fact from inside that universe?

continue reading »

The Protagonist Problem

17 atucker 23 October 2011 03:06AM

Followup to: Neural Correlates of Conscious Access; Related to: How an Algorithm Feels From the Inside, Dissolving the Question

Global Workspace Theory and its associated Theater Metaphor are empirically plausible, but why should the process they describe result in consciousness? Why should globally available information being processed by separate cognitive modules make us talk about being conscious?

Sure, that's how brains see stuff, but why would that make us think that it's happening to anyone? What in the world corresponds to a self?

So far, I've only encountered two threads of thought that try to approach this problem: the Social Cognitive Interface of Kurzban, and the Self-Model theories like those of Metzinger and Damasio.

I’ll be talking about the latter, starting off with what self-models are, and a bit about how they’re constructed. Then I’ll say what a self-model theory is.

Humans as Informational Processing Systems

Questions: What exactly is there for things to happen to? What can perceive things?

Well, bodies exist, and stuff can happen to them. So let's start there.

Humans have bodies which include informational processing systems called brains. Our brains are causally entangled with the outside world, and are capable of mapping it. Sensory inputs are transformed into neural representations which can then be used in performing adaptive responses to the environment.

In addition to receiving sensory input from our eyes, ears, nose, tongue, skin, etc, we get sensory input about the pH level of our blood, various hormone concentrations, etc. We map not only things about the outside world, but things about our own bodies. Our brain's models of our bodies also include things like limb position.

From the third person, brains are capable of representing the bodies that they're attached to. Humans are information processing systems which, in the process of interacting with the environment, maintain a representation of themselves used by the system for the purposes of the system.

Answers: We exist. We can perceive things. What we see as being our "self" is our brain's representation of ourselves. Generalizably, a "self" is a product of a system representing itself.

Note: I don't mean to assert that human self-modeling is accomplished by a single neurological system or module, but I do mean to say that there is a nonzero set of systems which, when taken together, can be elegantly expressed as being part of a self-model which presents information about a person to the person's brain.

Bodily Self Models

Human self-models seem to normally be based on sensory input, but can be separated from it. Your bodily self-model looks a lot like the sensory homunculus. [Image: sensory homunculus, courtesy of Wikipedia]
Phantom limb syndrome is a phenomenon where, after a limb is amputated, a person continues to feel it. Their self-model continues to include the limb, even though they no longer receive sensory input from it. Phantom limb syndrome has also been reported by people who, due to congenital circumstances, never had those limbs. This suggests that body models are, to some extent, based on neurological structures that humans are born with.

Freaky stuff happens when a body model and sensory inputs don't coincide. Apotemnophilia is a disorder where people want to amputate one of their otherwise healthy limbs, complaining that their body is "overcomplete", or that the limb is "intrusive". They also have very specific and consistent specifications for the amputation that they want, suggesting that the desire comes from a stable trait rather than, say, attention-seeking. They don't want to get an amputation, they want a particular amputation. Which sounds pretty strange.

This is distinct from somatoparaphrenia, where a patient denies that a limb is theirs but is fairly apathetic towards it. Somatoparaphrenia is caused by damage to both S1 and the superior parietal lobule, leading to a limb which isn't represented in the self-model and that they don't get sensory input from. Hence, it's not theirs and it's just sorta hanging out there, but it's not particularly distressing or creepy. Apotemnophilia can be described as lacking a limb in the self-model, but continuing to get input from it. Imagine if you felt a bunch of armness coming into your back.

In some sense, your brain also marks this model of the body as being you. I'll talk more about it in another article, but for now just notice that that's important. It's useful to know that our body is in fact ours for planning purposes, etc.

Self Models and Global Availability

Anosognosia  is the disorder where someone has a disability, but is unable/unwilling to believe that they have that disability. They insist that they don't move paralyzed arms because they don't want to, or that they can actually see while they're stumbling around bumping into things.

This is also naturally explained in terms of self-model theory. A blind person with anosognosia isn't able to see, and doesn't receive visual information, but they still represent themselves as seeing. So when you ask them about it, or they try and plan, they assert that they can still see. When the brain damage leading to blindness and anosognosia occurs, they stop being able to see, but their self-model isn't updated to reflect that.

Blindsight is the reverse case, where someone is able to see but doesn't represent themselves as seeing.

In both cases, the person's inability to represent themselves as having particular properties interferes with those properties being referred to by other cognitive modules, such as those of speech or planning.

Self-Model Theories of Consciousness

Self-Model Theories hold that we're self aware because we're able to pay attention to ourselves in the same way that we pay attention to other things. You map yourself based on sensory inputs the same way that you map other things, and identify your model as being you.

We think that things are happening to someone because we're able to notice that things are happening to something.

That's true of lots of animals though. What makes humans more conscious?

Humans are better at incorporating further processing based on the self-model into the self-model. Animals form representations of and act in the environment, but humans can talk about their representations. Animals represent things, but they don't represent their representation. The lights are on, and somebody's home, but they don't know they're home.
Animals and Humans (and some Robots) all represent themselves, but Humans are really good at representing other things -- like intentions -- and incorporating that into their self-model.
So umm... How does that work?
To be continued...
The vast majority of this article is from Being No One.
Thanks again to John Salvatier for reading early versions of this post, as well as getting a few papers for me.
Metzinger, T. (2003). Being No One: The Self-Model Theory of Subjectivity. Nature (p. 713). MIT Press. Chapter 7

Kurzban, R., & Aktipis, C. A. (2007). Modularity and the social mind: are psychologists too self-ish? Personality and social psychology review : an official journal of the Society for Personality and Social Psychology, Inc, 11(2), 131-49. doi:10.1177/1088868306294906


Ramachandran, V. S., Brang, D., McGeoch, P. D., & Rosar, W. (2009). Sexual and food preference in apotemnophilia and anorexia: interactions between “beliefs” and “needs” regulated by two-way connections between body image and limbic structures. Perception, 38(5), 775-777. doi:10.1068/p6350

Neural Correlates of Conscious Access

23 atucker 07 October 2011 11:12PM

Summary: Neuroimaging scans and EEG readings of nonconscious and conscious stimuli are compared, showing particular patterns in conscious processes. These findings are in line with predictions made by the Global Workspace Theory of consciousness, in which consciousness is closely related to interaction between specialized modules of the brain.

When a bunch of photons hit your eye, it unleashes a long chain of cause and effect that leads to an image being mapped in your brain. When does that image become conscious?

Conscious and Unconscious Perception1
The most basic method of discriminating between conscious and unconscious information is to ask the subject if they noticed it. However, people can respond to information that they don't report. What does it mean to notice something then?

Merikle et al performed experiments in the 80s which helped to resolve this question. In the Stroop task, people are asked to name the ink color of color words. Words printed in their own color (e.g. "green" in green ink) are easier to process than words printed in a conflicting color (e.g. "green" in red ink). Merikle modified the Stroop task, using only two colors (red and green), and using the word to prime subjects to describe the color. As was expected, when "green" comes before a green square, subjects respond faster than with no priming.

However, when the situation is regularly reversed, so that the "red" prime normally comes before a green square (and vice versa), people also respond comparably faster. That is to say, subjects are able to notice that the prime and stimulus are incongruent, and act on that information to respond faster to the stimuli.

When the reversed prime ("red" before green) is flashed for such a short time span that people don't report seeing it, they are unable to use this information to react faster to the green stimulus, and the typical Stroop effect is observed -- being subliminally primed with a congruent color speeds up recognition, being subliminally primed with an incongruent color slows it.
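The pattern of results can be summarized schematically. This is my own simplification, with made-up reaction times purely for illustration: a consciously seen prime can be used strategically (subjects learn that "red" usually precedes green), while a subliminal prime produces only automatic congruency effects.

```python
def reaction_time(prime, stimulus, conscious, reversed_context):
    """Schematic reaction time (ms) for one trial; numbers are arbitrary."""
    BASE = 500
    if conscious and reversed_context:
        # Subject exploits the contingency: the incongruent prime
        # *predicts* the upcoming color, so it speeds responding.
        return BASE - 50 if prime != stimulus else BASE
    # Without conscious access, only automatic priming remains.
    if prime == stimulus:
        return BASE - 30   # congruent prime speeds recognition
    return BASE + 30       # incongruent prime slows it (typical Stroop)

# Conscious, reversed context: "red" before green is fast.
print(reaction_time("red", "green", conscious=True, reversed_context=True))
# Same pairing presented subliminally: the incongruent prime now slows it.
print(reaction_time("red", "green", conscious=False, reversed_context=True))
```

The point of the toy model is just the qualitative reversal: the same prime-stimulus pair speeds responses when consciously accessible and slows them when subliminal.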

There are methods of interfering with subjects' reports (muteness trivially, anosognosia creepily), but for most humans reportability very closely corresponds to what is normally considered conscious perception.
People's brains respond to information even if the person is unaware of it, but there are measurable differences in perception without awareness and perception with it.
Methods of Manipulating Perception2
Nonconscious stimulation is split up into two categories: subliminal and preconscious. A subliminal stimulus is one in which the bottom-up stimulus information is so reduced that people cannot detect it, even if they're paying attention to it. A preconscious stimulus is one that is potentially noticeable (i.e. it's presented in a way that subjects can normally report seeing it), but not reported because of other distractions.
[Image: dichoptic masking, from Zeki 2003]
To present a stimulus subliminally you can:
  • Mask a stimulus, by presenting it close in time to other unrelated or interfering stimuli. (i.e. a word flashed for 33 ms is noticeable by itself, but not when preceded and followed by geometric shapes)2,3
  • Use dichoptic masking, where you present two different images to each eye, and the subject reports seeing something which is neither of those4
  • Use flash suppression, where you show one eye an image and flash shapes in the other eye to interfere with image perception5
To present something preconsciously you can:
  • Use inattentional blindness, where you present something that participants aren't focusing on.
  • Distract them! Present another stimulus and then quickly follow it with the one that you're interested in presenting preconsciously during their attentional blink.6
Neurological Differences
So what's the biggest difference between when people do and don't report seeing something?
Across various different methods of nonconscious stimulation, a few patterns emerged. When your eyes are stimulated, areas in your visual cortex (in the back of your head) undergo activity to process it regardless of whether or not you report seeing it. When people do report something, much more of the brain "lights up".
[Image from Dehaene et al 2011]
This lighting up also corresponds with recurrent processing, and ERP components in the P3b range. Recurrent processing is simply when a signal whips back and forth between specialized subregions, such as when signals from the visual cortex go to the frontoparietal region and then back to the visual cortex.1,7

The idea that conscious access is related to recurrent processing in the frontoparietal region stands up to experimental verification. Researchers are able to interfere with conscious reports of information independently of stimulus identification simply by applying transcranial magnetic stimulation to the prefrontal cortex, without changing the stimulus.8
So basically, consciousness seems to be related to widespread neural activity in cortical areas, as well as recurrent signalling and some particular components of EEG readings. So what?
The Global Workspace Theory1,9,10
The Global Workspace Theory of Consciousness asserts that consciousness is related to information from the various specialized subregions of the brain becoming “globally available” for attention, motor control, and cognitive reference. This explains phenomena like blindsight fairly elegantly, saying that visual information in the scotoma ceases to be conscious information because it ceases to be globally available to the system. Dehaene adds that neurons with long axons in the frontoparietal cortex are probably the Global Workspace.1
Baars and Dennett are fond of the theater metaphor of consciousness. There’s a spotlight of attention on the stage, and actors (specialized cortical systems) come into and out of this to play their parts. This group of interacting subagents is actually somewhat close to orthonormal's model for dissolving qualia. Behind the scenes, directors and stagehands (decision processes, attention direction, contextual systems) arrange the scenes. Everywhere we shine the spotlight we see consciousness, because consciousness is attached to the light.
No part of the system is conscious, but there’s a show going on. And that’s what we see.
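The workspace idea can be sketched as a toy program (my own illustrative code, not from any of the cited papers; the module names and salience numbers are made up): every specialized processor runs regardless of consciousness, but only the most salient output gets broadcast back to all processors, i.e. becomes "globally available".

```python
class Module:
    """A specialized processor. It does its local processing either way,
    but can only *use* information that has been globally broadcast."""
    def __init__(self, name):
        self.name = name
        self.received = []  # broadcast (i.e. "conscious") contents visible here

    def process(self, stimulus):
        # Local processing happens whether or not the result becomes conscious.
        return {"source": self.name,
                "content": stimulus["content"],
                "salience": stimulus["salience"]}


def workspace_cycle(modules, stimuli):
    """One cycle of a toy global workspace: all modules process their inputs,
    but only the most salient result is broadcast to every module; the
    losing results stay nonconscious."""
    candidates = [m.process(s) for m, s in zip(modules, stimuli)]
    winner = max(candidates, key=lambda c: c["salience"])
    for m in modules:
        m.received.append(winner)  # broadcast = global availability
    return winner


visual = Module("visual")
auditory = Module("auditory")
winner = workspace_cycle(
    [visual, auditory],
    [{"content": "red circle", "salience": 0.9},
     {"content": "faint hum", "salience": 0.2}],
)
# The faint hum was processed but never broadcast: processing without access,
# loosely analogous to the blindsight case above.
print(winner["content"])
```

On this toy model, "removing global availability" (never broadcasting a module's output) leaves the module's processing intact, which is the structure the blindsight data seems to demand.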
Next Obvious Question:
Okay, so why does that make us talk about consciousness? Why would we use the first person?
To be continued...


Huge thanks to John Salvatier for getting me a bunch of the papers, for editing feedback, and for putting up with my previous attempts to write an article like this. Also thanks to mtaran, falenas108, and RS (you don't know him) for reading drafts of this article.

Images are from Zeki 2003 and Dehaene 2011, respectively. I'd be very happy if someone helped me format that to show up with the pictures.

1. Merikle & Joordens, 1997

2. Dehaene, S., & Changeux, J.-P., 2011

3. Breitmeyer & Ogmen, 2007

4. Moutoussis & Zeki, 2002; image from Zeki 2003

5. Tsuchiya & Koch, 2005

6. Marti et al., 2010

7. Lamme, 2006

8. Rounis et al., 2010

9. Baars, 1997



Baars, B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.

Breitmeyer, B. G., & Ogmen, H. (2007). Visual masking. Scholarpedia, 2(7), 3330.

Dehaene, S., & Changeux, J.-P. (2011). Experimental and theoretical approaches to conscious processing. Neuron, 70(2), 200-27. doi:10.1016/j.neuron.2011.03.018

Kouider, S., & Dehaene, S. (2007). Levels of processing during non-conscious perception: a critical review of visual masking. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 362(1481), 857-75. doi:10.1098/rstb.2007.2093

Lamme, V. A. F. (2006). Towards a true neural stance on consciousness. Trends in Cognitive Sciences, 10(11). doi:10.1016/j.tics.2006.09.001

Merikle, P. M., & Joordens, S. (1997). Parallels between perception without attention and perception without awareness. Consciousness and Cognition, 6(2-3), 219-36. doi:10.1006/ccog.1997.0310

Lau, H., & Rosenthal, D. (2011). Empirical support for higher-order theories of conscious awareness. Trends in Cognitive Sciences, 15(8), 365-373. doi:10.1016/j.tics.2011.05.009

Marti, S., Sackur, J., Sigman, M., & Dehaene, S. (2010). Mapping introspection’s blind spot: reconstruction of dual-task phenomenology using quantified introspection. Cognition, 115(2), 303-13. doi:10.1016/j.cognition.2010.01.003

Metzinger, T. (2003). Being No One: The Self-Model Theory of Subjectivity. MIT Press.

Moutoussis, K., & Zeki, S. (2002). The relationship between cortical activation and perception investigated with invisible stimuli. Proceedings of the National Academy of Sciences, 99(14), 9527.

Rounis, E., Maniscalco, B., Rothwell, J., Passingham, R., & Lau, H. (2010). Theta-burst transcranial magnetic stimulation to the prefrontal cortex impairs metacognitive visual awareness. Cognitive Neuroscience, 1(3), 165-175. doi:10.1080/17588921003632529

Tsuchiya, N., & Koch, C. (2005). Continuous flash suppression reduces negative afterimages. Nature neuroscience, 8(8), 1096-101. doi:10.1038/nn1500

Zeki, S. (2003). The disunity of consciousness. Trends in Cognitive Sciences, 7(5), 214-218. doi:10.1016/S1364-6613(03)00081-0



Blindsight and Consciousness

14 atucker 22 September 2011 06:42PM

Thomas Metzinger is a philosopher who pays lots of attention to cognitive science and psychology, and likes to think about consciousness. Most of the interesting ideas that follow come from his books The Ego Tunnel and Being No One. I hope to write a series of posts summarizing some of the evidence and arguments in Being No One, which focuses on consciousness.


Blindsight patients have damage to their primary visual cortex (V1), leading to a scotoma, or blind area, in the visual field. Most but not all visual signals pass through V1, so the signals that bypass it can still influence the brain through very restricted channels. Blindsight patients don't report seeing things in their scotoma, and don't initiate plans based on what is there. If they're thirsty and there's a bottle of water in their scotoma, they don't pick it up and drink it.

Human subjects and animal subjects are treated differently in psychological experiments regarding what they do and don't know. Humans are generally asked to report on their own experience, while animal actions are observed. We get interesting results when we ask people to report on their experience, while also observing their actions.

If you ask blindsight patients what they see in their scotoma, they reply that they can't see anything there. However, if you tell them to do things like "grab the thing in your scotoma," they can grasp it. If you ask them to guess what's in it, they perform better than chance. Some blindsight patients can tell that something is moving in their scotoma, but they can't tell you what it is. They often describe this awareness as a hunch.

Most people consider it fair to say that blindsight patients are not conscious of the things in their scotoma.

Attention and Conscious Experience

Patients with blindsight can act on visual information in their scotomas in some ways, but they can't notice it.

Metzinger argues that humans don't have a conscious experience of what we can't pay attention to. Note: There's a difference between can't pay attention to, and not currently paying attention to.

Visual information in the scotoma isn't accessible to the parts of my brain that plan, or the parts that cause me to say "I can see X". My unconscious is able to use this information in forced-choice situations, but it isn't available to me.

Constraints on Theories of Consciousness

Any theory which says that you need to be conscious in order to do things is probably wrong. Also, robots work. And machine learning exists. See also unconscious goals.

It's possible for your brain to refer to something, but not have it be consciously available to you. It's also possible to change what these things are.

The parts of your brain causing you to say that you notice something can be cut off from the parts that let you do things. This implies that some neural processes lead to you being conscious and others don't, and that those processes can be interrupted without ruining everything.

Citations, Notes:

1"The Case of Blindsight" by Weiskrantz in the Blackwell Companion to Consciousness (you can get it here, though there are other places on the internet that talk about blindsight)

Heavily drawn from The Ego Tunnel and Being No One (both by Metzinger).

Thanks to John Salvatier for reviewing drafts of this post.


Modularity and Buzzy

24 Kaj_Sotala 04 August 2011 11:35AM

This is the second part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

Chapter 2: Evolution and the Fragmented Brain. Braitenberg's Vehicles are thought experiments that use Matchbox car-like vehicles. A simple one might have a sensor that made the car drive away from heat. A more complex one has four sensors: one for light, one for temperature, one for organic material, and one for oxygen. This can already cause some complex behaviors: ”It dislikes high temperature, turns away from hot places, and at the same time seems to dislike light bulbs with even greater passion, since it turns toward them and destroys them.” Adding simple modules specialized for different tasks, such as avoiding high temperatures, can make the overall behavior increasingly complex as the modules' influences interact.

A ”module”, in the context of the book, is an information-processing mechanism specialized for some function. It's comparable to a subroutine in a computer program, operating relatively independently of other parts of the code. There's a strong reason to believe that human brains are composed of a large number of modules, for specialization yields efficiency.

Consider a hammer or screwdriver. Both tools have very specific shapes, for they've been designed to manipulate objects of a certain shape in a specific way. If they were of a different shape, they'd work worse for the purpose they were intended for. Workers will do better if they have both hammers and screwdrivers in their toolbox, instead of one ”general” tool meant to perform both functions. Likewise, a toaster is specialized for toasting bread, with slots just large enough for the bread to fit in, but small enough to efficiently deliver the heat to both sides of the bread. You could toast bread with a butane torch, but it would be hard to toast it evenly – assuming you didn't just immolate the bread. The toaster ”assumes” many things about the problem it has to solve – the shape of the bread, the amount of time the toast needs to be heated, that the socket it's plugged into will deliver the right kind of power, and so on. You could use the toaster as a paperweight or a weapon, but not being specialized for those tasks, it would do poorly at them.

To the extent that a problem has regularities, an efficient solution to that problem will embody those regularities. This is true for both physical objects and computational ones. Microsoft Word is worse for writing code than a dedicated programming environment, which has all kinds of specialized tools for the task of writing, running and debugging code.


Trivers on Self-Deception

33 Yvain 12 July 2011 09:04PM

People usually have good guesses about the origins of their behavior. If they eat, we believe them when they say it was because they were hungry; if they go to a concert, we believe them when they say they like the music, or want to go out with their friends. We usually assume people's self-reports of their motives are accurate.

Discussions of signaling usually make the opposite assumption: that our stated (and mentally accessible) reasons for actions are false. For example, a person who believes they are donating to charity to "do the right thing" might really be doing it to impress others; a person who buys an expensive watch because "you can really tell the difference in quality" might really want to conspicuously consume wealth.

Signaling theories share the behaviorist perspective that actions do not derive from thoughts, but rather that actions and thoughts are both selected behavior. In this paradigm, predicted reward might lead one to signal, but reinforcement of positive-affect producing thoughts might create the thought "I did that because I'm a nice person".

Robert Trivers is one of the founders of evolutionary psychology, responsible for ideas like reciprocal altruism and parent-offspring conflict. He also developed a theory of consciousness which provides a plausible explanation for the distinction between selected actions and selected thoughts.


Trivers starts from the same place a lot of evolutionary psychologists start from: small bands of early humans that had grown successful enough that food and safety were less important determinants of reproduction than social status.

The Invention of Lying may have been a very silly movie, but the core idea - that a good liar has a major advantage in a world of people unaccustomed to lies - is sound. The evolutionary invention of lying led to an "arms race" between better and better liars and more and more sophisticated mental lie detectors.

There's some controversy over exactly how good our mental lie detectors are or can be. There are certainly cases in which it is possible to catch lies reliably: my mother can identify my lies so accurately that I can't even play minor pranks on her anymore. But there's also some evidence that there are certain people who can reliably detect lies from any source at least 80% of the time without any previous training: microexpressions expert Paul Ekman calls them (sigh...I can't believe I have to write this) Truth Wizards, and identifies them at about one in four hundred people.

The psychic unity of mankind should preclude the existence of a miraculous genetic ability like this in only one in four hundred people: if it's possible, it should have achieved fixation. Ekman believes that everyone can be trained to this level of success (and has created the relevant training materials himself) but that his "wizards" achieve it naturally; perhaps because they've had a lot of practice. One can speculate that in an ancestral environment with a limited number of people, more face-to-face interaction and more opportunities for lying, this sort of skill might be more common; for what it's worth, a disproportionate number of the "truth wizards" found in the study were Native Americans, though I can't find any information about how traditional their origins were or why that should matter.

If our ancestors were good at lie detection - either "truth wizard" good or just the good that comes from interacting with the same group of under two hundred people for one's entire life - then anyone who could beat the lie detectors would get the advantages that accrue from being the only person able to lie plausibly.

Trivers' theory is that the conscious/unconscious distinction is partly based around allowing people to craft narratives that paint them in a favorable light. The conscious mind gets some sanitized access to the output of the unconscious, and uses it along with its own self-serving bias to come up with a socially admirable story about its desires, emotions, and plans. The unconscious then goes and does whatever has the highest expected reward - which may be socially admirable, since social status is a reinforcer - but may not be.


It's almost a truism by now that some of the people who most strongly oppose homosexuality may be gay themselves. The truism is supported by research: the Journal of Abnormal Psychology published a study measuring penile erection in 64 homophobic and nonhomophobic heterosexual men upon watching different types of pornography, and found significantly greater erection upon watching gay pornography in the homophobes. Although somehow this study has gone fifteen years without replication, it provides some support for the folk theory.

Since in many communities openly declaring one's self homosexual is low status or even dangerous, these men have an incentive to lie about their sexuality. Because their facade may not be perfect, they also have an incentive to take extra efforts to signal heterosexuality by for example attacking gay people (something which, in theory, a gay person would never do).

Although a few now-outed gays admit to having done this consciously, Trivers' theory offers a model in which this could also occur subconsciously. Homosexual urges never make it into the sanitized version of thought presented to consciousness, but the unconscious is able to deal with them. It objects to homosexuality (motivated by internal reinforcement - reduction of worry about personal orientation), and the conscious mind toes the party line by believing that there's something morally wrong with gay people and only I have the courage and moral clarity to speak out against it.

This provides a possible evolutionary mechanism for what Freud described as reaction formation, the tendency to hide an impulse by exaggerating its opposite. A person wants to signal to others (and possibly to themselves) that they lack an unacceptable impulse, and so exaggerates the opposite as "proof".


Trivers' theory has been summed up by calling consciousness "the public relations agency of the brain". It consists of a group of thoughts selected because they paint the thinker in a positive light, and of speech motivated in harmony with those thoughts. This ties together signaling, the many self-promotion biases that have thus far been discovered, and the increasing awareness that consciousness is more of a side office in the mind's organizational structure than it is a decision-maker.

Voluntary Behavior, Conscious Thoughts

24 Yvain 11 July 2011 10:13PM

Skinner proposes a surprisingly easy way to dissolve the problem of what it means for an action to be "voluntary", or "under voluntary control".

We commonly perceive certain actions as under voluntary control: for example, I can control what words I'm typing right now, or whether I go out for dinner tonight. Other actions are not under voluntary control: for example, absent some exciting technique like biofeedback I can't control my heartbeat or my core body temperature or the amount of bile produced by my liver.

Other, larger-scale actions also get classified as involuntary. Many people consider sleepwalking involuntary, including the bizarre "sleep-eating" behaviors some people display on Ambien and related drugs. The tics of Tourette's are involuntary. Our emotions and preferences are at least a little involuntary: office workers might like to be able to will away their boredom, or mourners their sorrow, but most can't.

Here "involuntary" needs to be distinguished from "hard-to-resist". Most people do not define smoking as an involuntary behavior, because, although people may smoke even when they wish they wouldn't, they have the feeling that they could have chosen not to smoke; they just didn't.

The philosophy of voluntary versus involuntary behavior seems to run up against a wall when it hits the question of "what is truly me?". If we make the reductionist identification of "me" with "my brain", well, clearly it's my brain controlling sleepwalking and boredom, but it still doesn't feel like I am controlling these things. Trying to go deeper ends up hopelessly vague, usually with talk of "higher level brain processes" versus "lower level brain processes" and an identification of "myself" with the higher ones. There may be a role for this kind of talk, but it couldn't hurt to look for something more explanatory.

Skinner, true to his quest, explains the distinction without any discussion of "brain processes" or "self". He says that voluntary behavior is behavior subject to operant conditioning, and involuntary behavior is everything else.

It might be clearer to define voluntary behavior as fully transparent to reinforcement. Imagine a man with a gun, threatening to shoot me if I go out for dinner tonight. The fear of punishment will be effective: I'll avoid going out. Lust for reward, too, would be effective. If Bill Gates offered me $1 billion to stay in, that's what I'd do.

But when our masked gunman tells me to increase my body temperature by two degrees or he'll shoot, he is out of luck. And no matter how much money Bill Gates offers me for same, he can't make me give myself a fever either.

There is a place, too, for the hard-to-resist behaviors in all this: these are behaviors which can be affected by reward, but as yet have not been. If a masked man held his gun to the head of smokers and told them to stop or he'd shoot, they would stop. But thus far, none of the potential rewards of not smoking have been sufficient to change smokers' behavior.
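Skinner's criterion can be sketched in code (an illustrative toy of my own, not Skinner's formalism; the action names and update rule are assumptions). The point is structural: a "voluntary" behavior exposes a hook that reward can act on, while an "involuntary" one simply has no update rule:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

class Operant:
    """A behavior 'transparent to reinforcement': reward strengthens the
    propensity of whichever action was just taken, so future choices shift."""
    def __init__(self, actions):
        self.strength = {a: 1.0 for a in actions}

    def act(self):
        # Sample an action with probability proportional to its strength.
        total = sum(self.strength.values())
        r = random.uniform(0, total)
        for action, s in self.strength.items():
            r -= s
            if r <= 0:
                return action
        return action

    def reinforce(self, action, reward):
        self.strength[action] += reward  # the law of effect, crudely

# "Go out" vs. "stay in": pay the agent every time it stays in
# (Bill Gates playing the role of reinforcer).
agent = Operant(["go_out", "stay_in"])
for _ in range(200):
    choice = agent.act()
    agent.reinforce(choice, 1.0 if choice == "stay_in" else 0.0)

# An 'involuntary' process, by contrast, has no update rule at all:
# core body temperature sits at its set point no matter what is offered.
body_temp = 37.0

print(agent.strength["stay_in"] > agent.strength["go_out"], body_temp)
```

After training, "stay in" dominates the agent's choices, while no schedule of rewards ever touches `body_temp` — which is all Skinner's voluntary/involuntary distinction requires.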


The idea of voluntary behavior is tied so intimately to the idea of the self, or of consciousness (the easy problem, not the hard one), that one would hope that a new approach to one might be able to shed some light on the other. If voluntary action depends on transparency to reinforcement, where does that leave consciousness?

I haven't been able to find Skinner's beliefs on this subject (when he talks about consciousness, it's usually to deny it as an ontologically fundamental entity) and I've never seen anywhere near as elegant a reduction. But an explanation in the spirit of reinforcement learning would have to start by insisting on treating thoughts and emotions as effects rather than causes. Instead of explaining my choice of restaurant by saying I thought about it and decided McDonalds was best, it would be more accurate to say that previous experiences with McDonalds caused both the thought "I should go to McDonalds" and the behavior of going to McDonalds.

There is an intuitive connection between thought and language, and Soviet psychologist Lev Vygotsky made the connection more explicit; he found that children begin by speaking their stream of consciousness aloud to inform other people, and eventually learn to suppress that stream into nonvocal (subvocal?) thought.

The last post in this sequence discussed different reinforcement of thought and action. Speech and thought make a natural category as opposed to action; both are fast and easy, and so less likely to be affected by time and effort discounting. Both are point actions as opposed to a long project like learning Swahili or quitting smoking. And both bring reinforcement not through normal sensory channels (saying a word doesn't give pleasure in the same way smoking a cigarette might, nor pain in the same way having to study a boring grammar textbook might) but in what they say about you as a person and how they affect other people's real (and perceived) opinion of you.

So even if there is no governor anywhere unifying all thoughts and words, they may come out in harmony because they were selected by the same processes for the same reasons. And actions may not end up so harmonious, because they suffer from differential reinforcement.

Such harmony resembles the idea of a core "me", of whom all my thoughts are a part, and who has complete power over my organs of speech - but who is sometimes at odds with my actions or emotions.

The reinforcement governing thought and speech is most likely to be internal reinforcement based on your own self-perception and on others' perception of you. If there's a good reason reputation management processes need to be different from decision-making processes, understanding that difference could help understand the evolutionary history of a perceived difference between the conscious and unconscious mind. One such reason is provided by Robert Trivers' theory of social consciousness, the subject of tomorrow's post.

Being Wrong about Your Own Subjective Experience

37 lukeprog 24 April 2011 08:24PM

Hume was skeptical of induction and causality. Descartes began his philosophy by doubting everything. Both thought we may be in great error about the external world. But neither could bring themselves to seriously doubt the contents of their own subjective conscious experience.

Philosophers and non-philosophers alike often say: "I may not know whether that is really a yellow banana before me, but surely I know the character of my visual experience of a yellow banana! I may not know whether I really just dropped a barbell on my toe, but surely I know the subjective character of my pain experience, right?"

In this article I hope to persuade you that yes, you can be wrong about the subjective quality of your own conscious experience. In fact, such errors are common.


Human echolocation

Thomas Nagel famously said that we cannot imagine the subjective experience of bat sonar:

Bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine.1

Hold up a book in front of your face at arm's length, close your eyes, and say something loudly. Can you hear the emptiness of the space in front of you? Now close your eyes again, hold the book directly in front of your face, and speak again. Can you hear that the book is closer?

I'll bet you can, and thus you may be more bat-like than Nagel seems to think is possible, and more bat-like than you have previously thought. When I discovered this, I realized that not only had I been wrong about my perceptual capabilities, I had also been ignorant of the daily content of my subjective auditory experience.

Blind people can be especially good at using echolocation to navigate the world. Just like bats and dolphins and whales (but less accurately), humans can make sounds and then hear how nearby objects reflect and modify those sounds. People with normal vision can also learn to echolocate to some degree with training, for example detecting the location of walls while blindfolded.2 After some practice, blindfolded people can use sound to distinguish objects of different shapes and textures (at a rate significantly better than chance).3

You can try this yourself. Get a friend to blindfold you and then move their hand to one of four quadrants of space in front of your face. Try hissing or talking loudly and see if you can tell something about where your friend's hand is. Have your friend move their hand to another quadrant, and try again. Do this a few dozen times. I suspect you will find that after a while you'll do better than chance at locating the quadrant your friend's hand is in, and you may be able to tell something about its distance as well. If so, you are echolocating. You are having an auditory experience of the physical location of an object - something you may not have realized that you can do, something you probably have been doing your whole life without much realizing it.

Alternatively, have a friend blindfold you and place you some unspecified distance from a wall. Step toward the wall a few inches at a time, speaking loudly, and stop when the wall is directly in front of you. Most people find they can do this quite reliably. But of course you can't see or touch the wall, and the wall is making no sound of its own. You are echolocating.

One final test to prove it to yourself, this one relevant to shape and texture. Close your eyes, repeat some syllable, and have a friend hold one of three objects in front of your face: a book, a wadded-up T-shirt, and a mixing bowl. I think you'll find that you can distinguish between these three silent objects better than chance, and that the book will sound solid, the T-shirt will sound soft, and the mixing bowl will sound hollow. You are echolocating shape and texture.

