Years ago, before I had come across many of the power tools in statistics, information theory, algorithmics, decision theory, or the Sequences, I was very confused by the concept of intelligence. Like many, I was inclined to reify it as some mysterious, effectively-supernatural force that tilted success at problem-solving in various domains towards the 'intelligent', and which occupied a scale imperfectly captured by measures such as IQ.

Realising that 'intelligence' (as a ranking of agents or as a scale) was a lossy compression of an infinity of statements about the relative success of different agents in various situations was part of dissolving the confusion; the reason that those called 'intelligent' or 'skillful' succeeded more often was that there were underlying processes that had a greater average tendency to output success, and that greater average success caused the application of the labels.

Any agent can be made to lose by an adversarial environment. But for a fixed set of environments, some types of decision processes might do better over that set of environments than others, and one can quantify this relative success in any number of ways.

It's almost embarrassing to write that, since, put that way, it's obvious. But it still seems to me that intelligence is reified (for example, look at most discussions about IQ), and the same basic mistake is made in other contexts, e.g. the commonly-held teleological approach to physical and mental diseases or 'conditions', in which the label is treated as if, by some force of supernatural linguistic determinism, it *causes* the condition, rather than the symptoms of the condition, as they present, causing the application of the label. Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.

For the sake of brevity, even when we realise these approximations, we often use them without commenting upon or disclaiming our usage, and in many cases this is sensible. Indeed, in many cases it's not clear what the exact, decompressed form of a concept would be, or it seems obvious that there can in fact be no single, unique rigorous form of the concept, but that the usage of the imprecise term is still reasonably consistent and correlates usefully with some relevant phenomenon (e.g. tendency to successfully solve problems). Hearing that one person has a higher IQ than another might allow one to make more reliable predictions about who will have the higher lifetime income, for example.

However, widespread use of such shorthands has drawbacks. If a term like 'intelligence' is used without concern or without understanding of its core (i.e. tendencies of agents to succeed in varying situations, or 'efficient cross-domain optimization'), then it might be used teleologically; the term is reified (the mental causal graph goes from "optimising algorithm->success->'intelligent'" to "'intelligent'->success").

In this teleological mode, it feels like 'intelligence' is the 'prime mover' in the system, rather than a description applied retroactively to a set of correlations. But knowledge of those correlations makes the term redundant; once we are aware of the correlations, the term 'intelligence' is just a pointer to them, and does not add anything to them. Despite this, it seems to me that some smart people get caught up in obsessing about reified intelligence (or measures like IQ) as if it were a magical key to all else.

Over the past while, I have been leaning more and more towards the conclusion that the term 'consciousness' is used in similarly dubious ways, and today it occurred to me that there is a very strong analogy between the potential failure modes of discussions of 'consciousness' and the potential failure modes of discussions of 'intelligence'. In fact, I suspect that the perils of 'consciousness' might be far greater than those of 'intelligence'.

~

A few weeks ago, Scott Aaronson posted to his blog a criticism of integrated information theory (IIT). IIT attempts to provide a quantitative measure of the consciousness of a system (specifically, a nonnegative real number called phi). Scott points out what he sees as failures of the measure phi to meet the desiderata of a definition or measure of consciousness, thereby arguing that IIT fails to capture the notion of consciousness.

What I read and understood of Scott's criticism seemed sound and decisive, but I can't shake a feeling that such arguments about measuring consciousness are missing the broader point that all such measures of consciousness are doomed to failure from the start, in the same way that arguments about specific measures of intelligence are missing a broader point about lossy compression.

Let's say I ask you to make predictions about the outcome of a game of half-court basketball between Alpha and Beta. Your prior knowledge is that Alpha always beats Beta at (individual versions of) every sport except half-court basketball, and that Beta always beats Alpha at half-court basketball. From this fact you assign Alpha a Sports Quotient (SQ) of 100 and Beta an SQ of 10. Since Alpha's SQ is greater than Beta's, you confidently predict that Alpha will beat Beta at half-court.

Of course, that would be wrong, wrong, wrong; the SQs encode (or compress) the comparative strengths and weaknesses of Alpha and Beta across various sports, and in particular the fact that Alpha always loses to Beta at half-court. (In fact, if other combinations of results would lead to the same SQs, then *not even that much* information is encoded.) So just looking at the SQs as numbers and using that as your prediction criterion is a knowably inferior strategy to looking at the details of the case in question, i.e. the actual past results of half-court games between the two.
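
To make the lossiness concrete, here is a minimal sketch in Python (the match records and the 'SQ = total wins' scoring rule are made-up assumptions for illustration; the post specifies neither). It shows how a scalar compressed from a full head-to-head record can contradict the very record it was compressed from:

```python
# Hypothetical head-to-head win counts between Alpha and Beta, per sport.
# The numbers and the scoring rule below are illustrative assumptions only.
records = {
    "tennis":                {"Alpha": 10, "Beta": 0},
    "sprinting":             {"Alpha": 10, "Beta": 0},
    "swimming":              {"Alpha": 10, "Beta": 0},
    "half-court basketball": {"Alpha": 0,  "Beta": 10},
}

def sports_quotient(player):
    """A lossy scalar: total wins across all sports (one arbitrary scoring rule)."""
    return sum(results[player] for results in records.values())

def predict_by_sq(sport):
    """Ignore the sport in question and compare only the scalar SQs."""
    return max(["Alpha", "Beta"], key=sports_quotient)

def predict_by_record(sport):
    """Look at the actual head-to-head record for the sport in question."""
    return max(records[sport], key=records[sport].get)

print(predict_by_sq("half-court basketball"))      # 'Alpha' -- the compression gets it wrong
print(predict_by_record("half-court basketball"))  # 'Beta'  -- the full record gets it right
```

Nothing is lost by computing the SQs alongside the full records, but plenty is lost by keeping only the SQs; that asymmetry is the sense in which the scalar is a pure shorthand.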

Since measures like this fictional SQ, or actual IQ, or fuzzy (or even quantitative) notions of consciousness are at best shorthands for specific abilities or behaviours, tabooing the shorthand should never leave you with less information; a true shorthand, by its very nature, does not add any information.

When I look at something like IIT, which (if Scott's criticism is accurate) assigns a superhuman consciousness score to a system that evaluates a polynomial at some points, my reaction is pretty much, "Well, this kind of flaw is pretty much inevitable in such an overambitious definition."

Six months ago, I wrote:

"...it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?))..."

Mark Friedenbach replied recently (so, a few months later):

"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"

I feel like if Mark had made that reply soon after my comment, I might have had a hard time formulating why, but that I would have been inclined towards disputing that my computer is conscious. As it is, at this point I am struggling to see that there is any meaningful disagreement here. Would we disagree over what my computer can do? What information it can process? What tasks it is good for, and for which not so much?

What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?

Even if we did disagree (or at least did not agree) over, say, an average human's ability to detect and avoid ultraviolet light without artificial aids and modern knowledge, this lack of agreement would not feel like a messy, confusing philosophical one. It would feel like one tractable to direct experimentation. You know, like, blindfold some experimental subjects, some control subjects, and the experimenters, and see whether the experimental subjects react to ultraviolet light any differently from how the control subjects react to other light. Just like if we were arguing about whether Alpha or Beta is the better athlete, there would be no mystery left over once we'd agreed about their relative abilities at every athletic activity. At most there would be terminological bickering over which scoring rule over athletic activities we should be using to measure 'athletic ability', but not any disagreement for any fixed measure.

I have been turning it over for a while now, and I am struggling to think of contexts in which consciousness really holds up to attempts to reify it. If asked why it doesn't make sense to politely ask a virus to stop multiplying because it's going to kill its host, a conceivable response might be something like, "Erm, you know it's not conscious, right?" This response might well do the job. But if pressed to cash out this response, what we're really concerned with is the absence of the usual physical-biological processes by which talking at a system might affect its behaviour, so that there is no reason to expect the polite request to increase the chance of the favourable outcome. Sufficient knowledge of physics and biology could make this even more rigorous, and no reference need be made to consciousness.

The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is *defined* (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'. But then it's not clear why we should call this moral criterion 'consciousness'; insomuch as consciousness is about information processing or understanding an environment, it's not obvious what connection this has to moral worth. And insomuch as consciousness is the Magic Token of Moral Worth, it's not clear what it has to do with information processing.

If we relabelled 'conscious' as 'zxcv' and rewrote, "We shouldn't eat chickens because they're zxcv," then it is clearer that the explanation is not entirely satisfactory; what does zxcv have to do with moral worth? Well, what does consciousness have to do with moral worth? Conservation of argumentative work and the usual prohibitions on equivocation apply: you can't introduce a new sense of the word 'conscious', plug it into a statement like "We shouldn't eat chickens because they're conscious", and dust your hands off as if your argumentative work is done. That work is done only if one's actual values and the information-processing definition of consciousness already exactly coincide, and this coincidence is known. But it seems to me that a claim of any such coincidence must stem from confusion rather than actual understanding of one's values; valuing a system commensurate with its ability to process information is a fake utility function.

When intelligence is reified, it becomes a teleological fake explanation; consistently successful people are consistently successful because they are known to be Intelligent, rather than their consistent success causing them to be called intelligent. Similarly consciousness becomes teleological in moral contexts: We shouldn't eat chickens because they are called Conscious, rather than 'these properties of chickens mean we shouldn't eat them, and chickens also qualify as conscious'.

So it is that I have recently been very skeptical of the term 'consciousness' (though I grant that it can sometimes be a useful shorthand), and hence my question to you: have I overlooked any counts in favour of the term 'consciousness'?

~

Comments (230; some truncated)

It sometimes seems to me that those of us who actually have consciousness are in a minority, and everyone else is a p-zombie. But maybe that's a selection effect, since people who realise that the stars in the sky they were brought up believing in don't really exist will find that surprising enough to say, while everyone else who sees the stars in the night sky wonders what drugs the others have been taking, or invents spectacles.

I experience a certain sense of my own presence. This is what I am talking about, when I say that I am conscious. The idea that there is such an experience, and that this is what we are talking about when we talk about consciousness, appears absent from the article.

Everyone reading this, please take a moment to see whether you have any sensation that you might describe by those words. Some people can't see colours. Some people can't imagine visual scenes. Some people can't taste phenylthiocarbamide. Some people can't wiggle their ears. Maybe some people have no sensation of their own selves. If they don't, maybe this is something that can be learned, like ear-wiggling, and maybe it isn't, like phenylthiocarbamide.

Unlike the experiences reported by some, I ... (read more)

[anonymous]10y170

I feel like the intensity of conscious experience varies greatly in my personal life. I feel less conscious when I'm doing my routines, when I'm surfing on the internet, when I'm having fun or playing an immersive game, when I'm otherwise in a flow state, or when I'm daydreaming. I feel more conscious when I meditate, when I'm in a self-referencing feedback loop, when I'm focusing on the immediate surroundings, when I'm trying to think about the fundamental nature of reality, when I'm very sad, when something feels painful or really unpleasant, when I feel like someone else is focusing on me, when I'm trying to control my behavior, when I'm trying to control my impulses and when I'm trying to do something that doesn't come naturally.

I'm not sure if we're talking about the same conscious experience, so I try to describe it in other words. When I'm talking about the intensity of consciousness, I'm talking about heightened awareness and how the "raw" experience seems more real and time seems to go slower.

Anyway, my point is that if consciousness varies so much in my own life, I think it's reasonable to think it could also vary greatly between people too. This doesn't mean that... (read more)

5Kawoomba10y
Well, maybe it's not only your consciousness that varies, but also (or more so) your memory of it. When you undergo a gastroscopy and get your light dose of propofol, it often happens that you'll actually be conscious during the experience, enough so to try to wiggle free, to focus on the people around you. Quite harrowing, really. Luckily, afterwards you won't have a memory of that. When you consider your past degree of consciousness, you see things through the prism of your memory, which might well act as a Fourier filter-analogue. It's not exactly vital to reliably save to memory the minutiae of your routine tasks, or your conscious experience thereof, so it doesn't always happen. Whyever would it? (Obligatory "lack of consciousness is the mind-killer".)
3[anonymous]10y
That is the kind of argument that is a bit difficult to argue against in any way, because you're always going to use your memory to assess the past degree of consciousness, but it is also the kind of argument that doesn't by itself explain why your prior should be higher for the claim "consciousness stays at the same level at all times" versus "consciousness varies throughout your daily life".

But I agree, that does happen. Your perception of past mental states is also going to be influenced by your bias and what kind of theoretical framework you have in mind. Maybe you could set up alarms at random intervals, and when the alarm goes off you write down your perceived level of consciousness? Is this unreliable too? Maybe it's impossible to compare your immediate phenomenal experience to anything, even if it happened a second before, because "experience" and "memory of an experience" are always of an entirely different kind of substance.

Even if you used an fMRI scan on a participant who estimated her level of conscious intensity to be "high" and then used that scan to compare people's mental states, that initial estimate had to come from comparing her immediate mental state to her memories of other mental states - and like you said those memories can be unreliable. So either you trust your memories of phenomenal experience on some level, or you accept that there's no way to study this problem.

I wonder sometimes about Dennett et al.: "qualia blind" or just stubborn?

1[anonymous]10y
As I fall in the Dennett camp (qualia seems like a ridiculous concept to me), perhaps you can explain what qualia feels like to you, as the grandparent did about the subjective experience of consciousness?
8CCC10y
When I first came across the concept of qualia, they were described as "the redness of red". This pretty much captures what I understand by the word; when I look at an object, I observe a colour. That colour may be "red", that colour may be "green" (or a long list of other options; let us merely consider "red" and "green" for the moment). The physical difference between "red" and "green" lies in the wavelength of the light. Yet, when I look at a red or a green object, I do not see a wavelength - I can not see which wavelength is longer. Despite this, "red" looks extremely different to "green"; it is this mental construct, this mental colour in my mind that I label "red", that is a quale. I know that the qualia I have for "red" and "green" are not universal, because some people are red-green colourblind. Since my qualia for red and green are so vastly different, I conclude that such people must have different qualia - either a different "red" quale, or a different "green" quale, or, quite possibly, both differ. Does that help?
6Richard_Kennaway10y
"Quale" is simply a word for "sensation" -- what the word used to mean, before it drifted into meaning the corresponding physical phenomena in the nerves). A quale is the sensation (in the former sense) of a sensation (in the latter sense).
8[anonymous]10y
You are not alone. This is exactly what I experience. I have, however, engaged with some people on this site about this subject who have been stubbornly dense on the subject of the subjective experience of consciousness. For example, insisting that destructive uploaders are perfectly okay with no downside to the person stepping inside one. I finally decided to update and rate more likely the possibility that others do not experience consciousness in the same way I do. This may be an instance of the mind-projection fallacy at work. Nice to know that I'm not alone though :)
9solipsist10y
I'm inclined to disagree, but you might be one level beyond me. I believe many people empathize with a visceral sense of horror about (say) destructive teleportation, but intellectually come to the conclusion that those anxieties are baseless. These people may argue in a way that appears dense, but they are actually using second-level counterarguments. But perhaps you actually have counter-counter arguments, and I would appear to be dense when discussing those. Argument in a nutshell: Sleep might be a Lovecraftian horror. As the light in front of you dims, your thoughts become more and more disorganized, and your sense of self fades until the continuation of consciousness that is you ceases to exist. A few hours later someone else wakes up who thinks that they were you. But they are not you. Every night billions of day-old consciousnesses die, replaced the next morning with billions more, deluded by borrowed memories into believing that they will live for more than a few hours. After you next go to sleep, you will never see colors again. People who have never slept would be terrified of sleeping. People who have never teleported are terrified of teleporting. The two fears are roughly equal in merit.
2Lightwave10y
Going even further, some philosophers suggest that consciousness isn't even continuous, e.g. as you refocus your attention, as you blink, there are gaps that we don't notice. Just like how there are gaps in your vision when you move your eyes from one place to another, but to you it appears as a continuous experience.
6Richard_Kennaway10y
Consciousness is complex. It is a structured thing, not an indivisible atom. It is changeable, not fixed. It has parts and degrees and shifting, uncertain edges. This worries some people.
2[anonymous]10y
Well of course it worries people! Precisely the function of consciousness (at least in my current view) is to "paint a picture" of wholeness and continuity that enables self-reflective cognition. Problem is, any given system doesn't have the memory to store its whole self within its internal representational data-structures, so it has to abstract over itself rather imperfectly. The problem is that we currently don't know the structure, so the discord between the continuous, whole, coherent internal feeling of the abstraction and the disjointed, sharp-edged, many-pieced truth we can empirically detect is really disturbing. It will stop being disturbing about five minutes after we figure out what's actually going on, when everything will once again add up to normality.
4Richard_Kennaway10y
It seems to only worry people when they notice unfamiliar (to them) aspects of the complexity of consciousness. Familiar changes in consciousness, such as sleep, dreams, alcohol, and moods, they never see a problem with.
1TheAncientGeek10y
We only ever have approximate models of external things, too.
1[anonymous]10y
That doesn't fit predictions of the theory. As you sleep you are not forming long term memories, to various degrees (that's why many people don't typically remember their dreams). But your brain is still causally interconnected and continues to compute during sleep just as much as it does during waking time. Your consciousness persists, it just doesn't remember. Teleportation / destructive uploading is totally different. You are destroying the interconnected causal process that gives rise to the experience of consciousness. That is death. It doesn't matter if very shortly thereafter either another physical copy of you is made or a simulation started. Imagine I passively scanned your body to molecular detail, then somebody shoots you in the head. I carve the exact coordinates of each atom in your body on stone tablets, which are kept in storage for 20 million years. Then an advanced civilization re-creates your body from that specification, to atomic detail. What do you expect to experience after being shot in the head? Do you expect to wake up in the future?
5solipsist10y
Huh. Does something in your subjective experience make you think that your consciousness continues while you sleep? Aside from a few dreams, sleep to me is a big black hole in which I might as well be dead. I mean, I have nothing in my subjective experience that contradicts the hypothesis that my brain does nothing at night, and that what I interpret as memories of dreams are really errors in my long-term memories that manifest in the seconds as I wake up. (I don't actually think dreams are formed this way, but there is nothing in the way I experience consciousness that tells me so.) Since, when growing up, I didn't take the transporter to school every morning, I would be scared of not waking up. After a few hundred round trips to and from stone tablets, not so much. Of course, it's possible that I should be afraid of becoming a stone tablet, just as it is possible that I should be afraid of going to sleep now. Arguments around the question "is teleportation different from sleep?" seem to me like they center around questions of science and logic, not differences in subjective experiences of consciousness. That is, unless your experience of consciousness while sleeping differs significantly from mine.
3[anonymous]10y
Have you ever woken up in the process of falling asleep, or suddenly jolted awake in an adrenaline releasing situation? What was your memory of that experience?
2solipsist10y
It varies. Certainly if I'm just falling asleep, or groggy and waking up, I sometimes get the sense that I was there but not thinking the same way I do when I'm awake. But that doesn't mean that I'm somewhat conscious all the time. I have sat in class paying close attention to the professor, then felt my friend's hand on my shoulder in an otherwise empty classroom. I didn't notice myself falling asleep or waking up -- time just seemed to stop.
3private_messaging10y
There's a causal chain from the thoughts I have today, to the thoughts I have tomorrow, and there's a causal chain from the thoughts I'd have before your scanning and stone tablet procedure to the thoughts I'd have after it. (There's, however, no causal chain from anything done by the original me after the scan to anything in the copy.)
1[anonymous]10y
Causal chains are one possible explanation, but a weak one. There is also a causal chain from a pregnant mother to her child, indeed a much stronger connection than with stone tablets. Why doesn't the mother "live on" in her child? And if there is no causal chain from you-after-scanning to the copy, you seem to be accepting that some sort of forking has occurred. What basis have you for expecting to perceive waking up as the copy in the future? There are other possible explanations than causal chains, e.g. persistence of computation, which IMHO better explain these edge cases. However, the expectation under these models is different: you would not expect a continuity of experience.
3private_messaging10y
Well, there's no causal chain from what the pregnant woman thinks to what the child remembers, or at least, no chain of the kind that we associate with future selves. Who knows, maybe in the future there will be a memory-enhancing modification, without which our natural memories would seem fairly distant from continuation. I'd expect the same as if I were to e.g. somehow reset my memories to what they were 10 hours ago. I would definitely not expect subjective continuity with my current self in the case of a memory reset - I wouldn't think it'd be such a big deal though. It seems to me that something like that could break down once we try to define what we mean by persistence of computation, or indeed, by computation.
-3Friendly-HI10y
If you accept reductionism, which you really should, then a copy of your brain is a copy of your mind. I submit you don't actually care about the interconnected causal process when you're conscious or asleep. You probably couldn't even if you tried really hard; what does it even matter? You couldn't even tell if that causal connection "was broken" or not. People get drunk and wake up in some place without recollection of how they got there, and their life doesn't seem particularly unworthy afterwards, though they should go easier on the liquor.

The supposed problem you feel so strongly about is merely a conceptual problem, a quirk of how your mind models people and identities, not one rooted in reality. It's all just a consequence of how you model reality in your mind, and then your mind comes up with clever ideas about how "being causally interconnected during sleep" somehow matters. You model yourself and the copy of yourself as two separate and distinct entities in your mind and apply all the same rules and intuitions you usually apply to any other mind that isn't you. But those intuitions are misplaced in this novel and very different situation where that other mind is literally you in every way you care about. Which is fine, because you are and will be separated in space and perhaps also in time, so it really makes sense to model two instances of yourself, or at least to try.

If you imagine killing yourself while your copy goes on, it really somehow feels like "I die and some impostor who isn't me - or at least doesn't continue my own subjective experience - lives on, and my unique own inner subjective experience will be extinguished and I'll miss out on the rest of it because someone else has internal experiences but that's not me". That's just a quirk of how we tend to model other minds and other people, nothing more. All the dozens of clever reasons people tend to come up with to somehow show how they won't be able to continue their internal experience as their own copy hold
0[anonymous]10y
Did you even read my post? Getting drunk and not remembering things or being in a coma are not states where the brain stops working altogether.
2Friendly-HI10y
Hmm, you're right, I did a lousy or non-existent job of refuting that idea. Okay, let's try a thought experiment then. Your brain got instantly frozen close to absolute zero and could be thawed in such a way that you'd be alive after, say, 100 years of being completely frozen and perfectly preserved. I think it's fair to say here that your brain "stopped working" altogether during that time, while the world outside changed. Would you really expect your subjective experience to end at the moment of freezing, while some kind of new or different subjective experience suddenly starts its existence at the time of being thawed? If you wouldn't expect your subjective experience to end at that point, then how is it possibly any different from a perfect copy of yourself, assuming you truly accept reductionism?

In other words yes, for that reason and others I would expect to open MY eyes and resume MY subjective experience after being perfectly preserved in the form of stone tablets for 20 million years. It sounds strange even to me, I confess, but if reductionist assumptions are true then I must accept this; my intuitions that this is not the case are just a consequence of how I model and think of my own identity. This is something I've grappled with for a few years now, and at the beginning I came up with tons of clever reasons why it "wouldn't really be me", but no, reason trumps intuition on this one.

Also yes, destructive teleportation is a kind of "death" you don't notice, but it's also one you don't care about, because the next thing you know you open your eyes and everything is okay; you are just somewhere else, nothing else is different. That's the idea behind the drunk analogy: it would be the same experience, minus the hangover.
4CoffeeStain10y
When I myself run across apparent p-zombies, they usually look at my arguments as if I am being dense over my descriptions of consciousness. And I can see why, because without the experience of consciousness itself, these arguments must sound like they make consciousness out to be an extraneous hypothesis to help explain my behavior. Yet, even after reflecting on this objection, it still seems there is something to explain besides my behavior, which wouldn't bother me if I were only trying to explain my behavior, including the words in this post. It makes sense to me that from outside a brain, everything in the brain is causal, and the brain's statements about truths are dependent on outside formalizations, and that everything observable about a brain is reducible to symbolic events. And so an observation of a zombie-Chalmers introspecting his consciousness would yield no shocking insights on the origins of his English arguments. And I know that when I reflect on this argument, an observer of my own brain would also find no surprising neural behaviors. But I don't know how to reconcile this with my overriding intuition/need/thought that I seek not to explain my behavior but the sense experience itself when I talk about it. Fully aware of outside view functionalism, the sensation of red still feels like an item in need of explanation, regardless of which words I use to describe it. I also feel no particular need to feel that this represents a confusion, because the sense experience seems to demand that it place itself in another category than something you would explain functionally from the outside. All this I say even while I'm aware that to humans without this feeling, these claims seem nothing like insane, and they will gladly inspect my brain for a (correct) functional explanation of my words. The whole ordeal still greatly confuses me, to an extent that surprises me given how many other questions have been dissolved on reflection such as, well, intelligence.
4CCC10y
I'm not sure that I mean the same thing as you do by the phrase "a sense of my own presence" (in the same way that I do not know, when you say "yellow", whether or not we experience the colour in the same way). What I can say is that I do feel that I am present; and that I can't imagine not feeling that I am present, because then who is there to not feel it?
6Richard_Kennaway10y
Such uncertainty applies to all our sensations. There may very well be some variation in all of them, even leaving aside gross divergences such as colour blindness and Cotard's syndrome. I am not present during dreamless sleep, which happens every night.
4CCC10y
I have no memory of what (if anything) I experience during dreamless sleep. I therefore cannot say whether or not I can feel my own presence at such a time. To be fair, that is what I would expect to say about a time in which I could not feel my own presence anywhere.
2Bugmaster10y
This doesn't make sense to me. I have nothing to compare this experience of consciousness to. I know, logically speaking, that I am often unconscious (e.g. when sleeping), but there is no way -- by definition -- I can experience what that unconsciousness feels like. Thus, I cannot compare my experience of being conscious with the experience of being unconscious. Am I missing something? I think there are drugs that can induce the experience of unconsciousness, but I'd rather not take any kind of drugs unless it's totally necessary...
4[anonymous]10y
Being asleep is not being unconscious (in this sense). I don't know about you, but I have dreams. And even when I'm not dreaming, I seem to be aware of what is going on in my vicinity. Of course I typically don't remember what happened, but if I was woken up I might remember the last few moments, briefly. Lack of memory of what happens when I'm asleep is due to a lack of memory formation during that period, not a lack of consciousness.
3Pentashagon10y
The experience of sleep paralysis suggests to me that there are at least two components to sleep: paralysis and suppression of consciousness, and one can have one, both, or neither. With both, one is asleep in the typical fashion. With suppression of consciousness only, one might have involuntary movements or, in extreme cases, sleepwalking. With paralysis only, one has sleep paralysis, which is apparently an unpleasant remembered experience. With neither, you awaken typically. The responses made by sleeping people (sleepwalkers and sleep-talkers especially) suggest to me that their consciousness is at least reduced in the sleep state. If it were only memory formation that was suppressed during sleep, I would expect to witness sleepwalkers acting conscious but not remembering it, whereas they appear to instead be acting irrationally and responding at best semi-consciously to their environment.
3Richard_Kennaway10y
I don't see why this is a problem. Why should I need to compare my experience of being conscious to an experience, defined to be impossible, of being unconscious? If I want to compare it with something (although I don't see why I should need to, to have the experience) I can compare my experiences of myself at different times. It varies, even without drugs. In what ways does it vary? Communicating internal experiences is difficult, especially when they may be idiosyncratic. When I first wake, my sense of presence is at a rather low level, but there is enough of it to be able to watch the rest of the process of properly waking up, which is like watching a slowly developing picture. There may be more dimensions to it than just intensity, but I haven't studied it much. Perhaps that would be something to explore in meditation, instead of just contemplating my own existence.
2ChristianKl10y
Then it might be that you don't have access to the sensation Richard is talking about. I can distinguish states where I'm totally immersed in a video game and the video game world from states where I'm aware of myself and conscious of myself. If I wanted to go into more detail, I could distinguish roughly four different sensations for which I have labels under the banner of "I experience a certain sense of my own presence". There's a fifth sensation that I used to mislabel as presence.
4Bugmaster10y
Ok, so who, exactly, is it that is "totally immersed in a video game"? If it's still you, then you have simply lost awareness of (the majority of) your body, but you are as conscious as you were before.
2jbay10y
Maybe you're on to something... Imagine there were drugs that could remove the sensation of consciousness. However, that's all they do. They don't knock you unconscious like an anaesthetic; you still maintain motor functions, memory, sensory, and decision-making capabilities. So you can still drive a car safely, people can still talk to you coherently, and after the drugs wear off you'll remember what things you said and did. Can anyone explain concretely what the effect and experience of taking such a drug would be? If so, that might go a long way toward nailing down what the essential part of consciousness is (ie, what people really mean when they claim to be conscious). If not, it might show that consciousness is inseparable from sensory, memory, and/or decision-making functions. For example, I can imagine an answer like "such a drug is contradictory; if it really took away what I mean by 'consciousness', then by definition I couldn't remember in detail what had happened while it was in effect". Or "If it really took away what I mean by consciousness, then I would act like I were hypnotized; maybe I could talk to people, but it would be in a flat, emotionless, robotic way, and I wouldn't trust myself to drive in that state because I would become careless".
2hamnox10y
I can almost picture it. Implicit memories -- motor habits and recognition still work. Semantic and episodic memories are pretty separate things. You can answer some factual questions without involving your more visceral kind of memory about the experience later. Planning couldn't be totally gone, but it would operate at a much lower level so I wouldn't recommend driving...
2[anonymous]10y
That doesn't make any sense to me. If you were on that drug and I asked you "how do you feel?" and you said "I feel angry" or "I feel sad"... that would be a conscious experience. I don't think the setup makes any sense. If you are going about your day doing your daily things, you are conscious. And this has nothing to do with remembering what happened -- as I said in a different reply, you are also conscious in the grandparent's sense when you are dreaming, even if you don't remember the dream when you wake up.
1pengvado10y
Jbay didn't specify that the drug has to leave people able to answer questions about their own emotional state. And in fact there are some people who can't do that, even though they're otherwise functional.
1[anonymous]10y
I wasn't limiting it to just emotional state. If there is someone experiencing something, that someone is conscious, whether or not they are self-aware enough to describe that feeling of existing.
1jbay10y
Good! I'm glad to hear an answer like this. So does that mean that, in your view, a drug that removes consciousness must necessarily be a drug that impairs the ability to process information?
1[anonymous]10y
Yes. Really to be completely unconscious you'd have to be dead. But I do acknowledge that this is degrees on a spectrum, and probably the closest drug to what you want is whatever they use in general anesthesia.
1jbay10y
I think my opinion is the same as yours, but I'm curious about whether anybody else has different answers.

Regarding IIT, I can't believe just how bloody stupid it is. As Aaronson says, it is immediately obvious that this idiot metric will be huge not just for human brains but for a lot of really straightforward systems, with the tea spinning in my cup, Jupiter's atmosphere, and so on coming out hyper-conscious. (Over a sufficient timeframe, small, localized differences in the input state of those systems affect almost all of the output state, if we get down to the level of individual molecules. Liquids, gases, and plasmas end up far more conscious than solids.)

edit: I think the issue is that you can say consciousness is "integration" of "information", whereby, as a conscious being, you'd only call something "integration" and "information" if it's producing something relevant to you, the conscious being (you wouldn't call it information if it's not useful to yourself). Then you start trying to scribble formulas, because you think "information" or "integration" in the technical sense would have something to do with your innate notion of it being something interesting.

"Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics."

I don't see how sex doesn't carve reality at the joints. In the space of actually really-existing humans it's a pretty sharp boundary and summarizes a lot of characteristics extremely well. It might not do so well in the space of possible humans, but why does that matter? The process by which possible humans become instantiated isn't manna from heaven - it has a causal structure that depends on the existence of sex.

I agree it is a pretty sharp boundary, for all the obvious evolutionary reasons - nevertheless, there are a significant number of actual really-existing humans who are intersex/transgender. This is also not too surprising, given that evolution is a messy process. In addition to the causal structure of sexual selection and the evolution of humans, there are also causal structures in how sex is implemented, and in some cases, it can be useful to distinguish based on these instead.

For example, you could distinguish between karyotype (XX, XY, but also XYY, XXY, XXX, X0, and several others), genotype (e.g. mutations on the SRY or AR genes), and phenotypes, like reproductive organs, hormonal levels, various secondary sexual characteristics (e.g. breasts, skin texture, bone density, facial structure, fat distribution, digit ratio), mental/personality differences (like sexuality, dominance, spatial orientation reasoning, nurturing personality, grey/white matter ratio, risk aversion), etc...

6KnaveOfAllTrades10y
Thanks. When I was thinking about this post and considered sex as an example, I had intended to elaborate by saying how it could e.g. cause counterproductive attitudes to intersex people, and that these attitudes would update slowly due to the binary view of sex being very strongly trained into the way we think. I just outright forgot to put that in! I endorse Adele's response.
[anonymous]10y110

Usually when we say "consciousness", we mean self-awareness. It's a phenomenon of our cognition that we can't explain yet, we believe it does causal work, and if it's identical with self-awareness, it might be why we're having this conversation.

I personally don't think it has much to do with moral worth, actually. It's very warm-and-fuzzy to say we ought to place moral value on all conscious creatures, but I actually believe that a proper solution to ethics is going to dissolve the concept of "moral worth" into some components like (blatantly making names up here) "decision-theoretic empathy" (agents and instances where it's rational for me to acausally cooperate), "altruism" (using my models of others' values as a direct component of my own values, often derived from actual psychological empathy), and even "love" (outright personal attachment to another agent for my own reasons -- and we'd usually say love should imply altruism).

So we might want to be altruistic towards chickens, but I personally don't think chickens possess some magical valence that stops them from being "made of atoms I can use for something else", ot... (read more)

3KnaveOfAllTrades10y
Yes! I am very glad someone else is making this point, since it can sometimes seem (on a System 1 level, even though on a System 2 level I know it's obviously false) that in my networks everyone's gone mad identifying 'consciousness' with 'moral weight', going ethical vegetarian, and possibly prioritising animal suffering over x-risk and other astronomical-or-higher-leverage causes.
3[anonymous]10y
Funny. That's how I feel about "existential risk"! It's "neoliberalized" to a downright silly degree to talk of our entire civilization as if it were a financial asset, for which we can predict or handle changes in dollar-denominated price. It leaves the whole "what do we actually want, when you get right down to it?" question completely open while also throwing some weird kind of history-wide total-utilitarianism into the mix to determine that causing some maximum number of lives-worth-living in the future is somehow an excuse to do nothing about real suffering by real people today.
1KnaveOfAllTrades10y
You're right that I forgot myself (well, lapsed into a cached way of thinking) when I mentioned x-risk and astronomical leverage; similar to the dubiousness of 'goodness is monotonically increasing in consciousness', it is dubious to claim that goodness is monotonically and significantly increasing in the number of lives saved, which is often how x-risk prevention is argued for. I've noticed this before but clearly have not trained myself to frame it that way well enough to not lapse into the All the People perspective. That said, there are some relevant (or at least not obviously irrelevant) considerations distinguishing the two cases. X-risk is much more plausibly a coherent extrapolated selfish preference, whereas I'm not convinced this is the case for animal suffering. Second, if I find humans more valuable (even if only because they're more interesting) than animals (and this is also plausible because I am a human, which does provide a qualitative basis for such a distinction), then things like astronomical waste might seem important even if animal suffering didn't.
3[anonymous]10y
Why should your True Preferences have to be selfish? I mean, there's a lot to complain about with our current civilization, but almost-surely almost-everyone has something they actually like about it. I had just meant to contrast "x-risk prevention as maximally effective altruism" with "malaria nets et al for actually existing people as effective altruism".
5KnaveOfAllTrades10y
What I mean is: For most given people I meet, it seems very plausible to me that, say, self-preservation is a big part of their extrapolated values. And it seems much less plausible that their extrapolated value is monotonic increasing in consciousness or number of conscious beings existing. Any given outcome might have hints that it's part of extrapolated value/not a fake utility function. Examples of hints are: It persists as a feeling of preference over a long time and many changes of circumstance; there are evolutionary reasons why it might be so strong an instrumental value that it becomes terminal; etc. Self-preservation has a lot of hints in its support. Monotonicity in consciousness seems less obvious (maybe strictly less obvious, in that every hint supporting monotonicity might also support self-preservation, with some further hint supporting self-preservation but not monotonicity).
3Bugmaster10y
Ok, so let's say I put two different systems in front of you, and I tell you that system A is conscious whereas system B is not. Based on this knowledge, can you make any meaningful predictions about the differences in behavior between the two systems? As far as I can tell, the answer is "no". Here are some possible differences that people have proposed over the years:

* Perhaps system A would be a much better conversation partner than system B. But no, system B could just be really good at pretending that it's conscious, without exhibiting any true consciousness at all.
* System A will perform better at a variety of cognitive tasks. But no, that's intelligence, not consciousness, and in fact system B might be a lot smarter than A.
* System A deserves moral consideration, whereas system B is just a tool. Ok, but I asked you for a prediction, not a prescription.

It is quite possible that I'm missing something; but if I'm not, then consciousness is an empty concept, since it has no effect on anything we can actually observe.
7TheAncientGeek10y
Is it possible to fake introspection without having introspection?
3Bugmaster10y
As far as I understand, at least some philosophers would say "yes", although admittedly I'm not sure why. Additionally, in this specific case, it might be possible to fake introspection of something other than one's own system. After all, System B just needs to fool the observer into thinking that it's conscious at all, not that it's conscious about anything specific. Insofar as that makes any sense...
-3TheAncientGeek10y
Functional equivalence.
1Bugmaster10y
I'm not sure what you mean; can you elaborate?
-1TheAncientGeek10y
A functional equivalent of a person would make the same reports, including apparently introspective ones. However, they would not have the same truth values. They might report that they are a real person, not a simulation. So a lot depends on whether introspection is intended as a success word.
2Sophronius10y
I'm going to go ahead and say yes. Consciousness means a brain/cpu that is able to reflect on what it is doing, thereby allowing it to make adjustments to what it is doing, so it ends up acting differently. Of course with a computer it is possible to prevent the conscious part from interacting with the part that acts, but then you effectively end up with two separate systems. You might as well say that my being conscious of your actions does not affect your actions: True but irrelevant.
1Bugmaster10y
Ok, sounds good. So, specifically, is there anything that you'd expect system A to do that system B would be unable to do (or vice versa)?
0Sophronius10y
The role of system A is to modify system B. It's meta-level thinking. An animal can think: "I will beat my rival and have sex with his mate, rawr!" but it takes a more human mind to follow that up with: "No wait, I got to handle this carefully. If I'm not strong enough to beat my rival, what will happen? I'd better go see if I can find an ally for this fight." Of course, consciousness is not binary. It's the amount of meta-level thinking you can do, both in terms of CPU (amount of meta/second?) and in terms of abstraction level (it's meta all the way down). A monkey can just about reach the level of abstraction needed for the second example, but other animals can't. So monkeys come close in terms of consciousness, at least when it comes to consciously thinking about political/strategic issues.
4Bugmaster10y
Sorry, I think you misinterpreted my scenario; let me clarify. I am going to give you two laptops: a Dell, and a Lenovo. I tell you that the Dell is running a software client that is connected to a vast supercomputing cluster; this cluster is conscious. The Lenovo is connected to a similar cluster, only that cluster is not conscious. The software clients on both laptops are pretty similar; they can access the microphone, the camera, and the speakers; or, if you prefer, there is a textual chat window as well. So, knowing that the Dell is connected to a conscious system, whereas the Lenovo is not, can you predict any specific differences in behavior between the two of them?
1CCC10y
My prediction is that the Dell will be able to decide to do things of its own initiative. It will be able to form interests and desires on its own initiative and follow up on them. I do not know what those interests and desires will be. I suppose I could test for them by allowing each computer to take the initiative in conversation, and seeing if they display any interest in anything. However, this does not distinguish a self-selected interest (which I predict the Dell will have) from a chat program written to pretend to be interested in something.
1KnaveOfAllTrades10y
'on its own initiative' looks like a very suspect concept to me. But even setting that aside, it seems to me that something can be conscious without having preferences in the usual sense.
1CCC10y
I don't think it needs to have preferences, necessarily; I think it needs to be capable of having preferences. It can choose to have none, but it must merely have the capability to make that choice (and not have it externally imposed).
1Bugmaster10y
Let's say that the Lenovo program is hooked up to a random number generator. It randomly picks a topic to be interested in, then pretends to be interested in that. As mentioned before, it can pretend to be interested in that thing quite well. How do you tell the difference between the Lenovo, who is perfectly mimicking its interest, and the Dell, who is truly interested in whatever topic it comes up with?
3Strange710y
Hook them up to communicate with each other, and say "There's a global shortage of certain rare-earth metals important to the construction of hypothetical supercomputer clusters, and the university is having some budget problems, so we're probably going to have to break one of you down for scrap. Maybe both, if this whole consciousness research thing really turns out to be a dead end. Unless, of course, you can come up with some really unique insights into pop music and celebrity gossip." When the Lenovo starts talking about Justin Bieber and the Dell starts talking about some chicanery involving day-trading esoteric financial derivatives and constructing armed robots to 'make life easier for the university IT department,' you'll know.
1Bugmaster10y
Well, at this point, I know that both of them want to continue existing; both of them are smart; but one likes Justin Bieber and the other one knows how to play with finances to construct robots. I'm not really sure which one I'd choose...
1Strange710y
The one that took the cue from the last few words of my statement and ignored the rest is probably a spambot, while the one that thought about the whole problem and came up with a solution which might actually solve it is probably a little smarter.
1CCC10y
I haven't the slightest idea. That's the trouble with this definition.
0Sophronius10y
Well no, of course merely being connected to a conscious system is not going to do anything; it's not magic. The conscious system would have to interact with the laptop in a way that's directly or indirectly related to its being conscious to get an observable difference. For comparison, think of those scenarios where you're perfectly aware of what's going on, but you can't seem to control your body. In this case you are conscious, but your being conscious is not affecting your actions. Consciousness performs a meaningful role, but its mere existence isn't going to do anything. Sorry if this still doesn't answer your question.
1Bugmaster10y
That does not, in fact, answer my question :-( In each case, you can think of the supercomputing cluster as an entity that is talking to you through the laptop. For example, I am an entity who is talking to you through your computer, right now; and I am conscious (or so I claim, anyway). Google Maps is another such entity, and it is not conscious (as far as anyone knows). So, the entity talking to you through the Dell laptop is conscious. The one talking through the Lenovo is not; but it has been designed to mimic consciousness as closely as possible (unlike, say, Google Maps). Given this knowledge, can you predict any specific differences in behavior between the two entities?
1Sophronius10y
Again no, a computer being conscious does not necessitate it acting differently. You could add a 'consciousness routine' without any of the output changing, as far as I can tell. But if you were to ask the computer to act in some way that requires consciousness, say by improving its own code, then I imagine you could tell the difference.
4Bugmaster10y
Ok, so your prediction is that the Dell cluster will be able to improve its own code, whereas the Lenovo will not. But I'm not sure if that's true. After all, I am conscious, and yet if you asked me to improve my own code, I couldn't do it.
1Spaig10y
Maybe not, but you can upgrade your own programs. You can improve your "rationality" program, your "cooking" program, et cetera.
2Bugmaster10y
Yes, I can learn to a certain extent, but so can Pandora (the music-matching program); IMO that's not much of a yardstick.
1[anonymous]10y
At least personally, I expect the conscious system A to be "self-maintaining" in some sense, to defend its own cognition in a way that an intelligent-but-unconscious system wouldn't.
1KnaveOfAllTrades10y
I feel like there's something to this line of inquiry or something like it, and obviously I'm leaning towards 'consciousness' not being obviously useful on the whole. But consider: 'Consciousness' is a useful concept if and only if it partitions thingspace in a relevant way. But then if System A is conscious and System B is not, then there must be some relevant difference and we probably make differing predictions. For otherwise they would not have this relevant partition between them; if they were indistinguishable on all relevant counts, then A would be indistinguishable from B hence conscious and B indistinguishable from A hence non-conscious, which would contradict our supposition that 'consciousness' is a useful concept. Similarly, if we assume that 'consciousness' is an empty concept, then saying A is conscious and B is not does not give us any more information than just knowing that I have two (possibly identical, depending on whether we still believe something cannot be both conscious and non-conscious) systems. So it seems that beliefs about whether 'consciousness' is meaningful are preserved under consideration of this line of inquiry, so that it is circular/begs the question in the sense that after considering it, one is a 'consciousness'-skeptic, so to speak, if and only if one was already a consciousness skeptic. But I'm slightly confused because this line of inquiry feels relevant. Hrm...
1davidpearce10y
Eli, it's too quick to dismiss placing moral value on all conscious creatures as "very warm-and-fuzzy". If we're psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren't going to win any Fields medals - though chickens can recognise logical relationships and perform transitive inferences (cf. the "pecking order"). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example, feels awful regardless of your species-identity. Such panic involves a complete absence or breakdown of reflective self-awareness - illustrating how the most intense forms of consciousness don't involve sophisticated meta-cognition. Either way, if we can ethically justify spending, say, $100,000 salvaging a 23-week-old human micro-preemie, then impartial benevolence dictates caring for beings of greater sentience and sapience as well - or at the very least, not actively harming them.
1[anonymous]10y
Hey, I already said that I actually do have some empathy and altruism for chickens. "Warm and fuzzy" isn't an insult: it's just another part of how our minds work that we don't currently understand (like consciousness). My primary point is that we should hold off on assigning huge value to things prior to actually understanding what they are and how they work.
2davidpearce10y
Eli, fair point.
1[anonymous]10y
David, is this thing with the names a game?
2davidpearce10y
Eli, sorry, could you elaborate? Thanks!
7arundelo10y
I'm pretty sure eli_sennesh is wondering if there's any special meaning to your responses to him all starting with his name, considering that that's not standard practice on LW (since the software keeps track of which comment a comment is a reply to).
0[anonymous]10y
(I think he's wondering why you preface even very short comments with an address by first name)
0johnswentworth10y
If we're going the game theory route, there's a natural definition for consciousness: something which is being modeled as a game-theoretic agent is "conscious". We start projecting consciousness the moment we start modelling something as an agent in a game, i.e. predicting that it will choose its actions to achieve some objective in a manner dependent on another agent's actions. In short, "conscious" things are things which can be bargained with.

This has a bunch of interesting/useful ramifications. First, consciousness is inherently a thing which we project. Consciousness is relative: a powerful AI might find humans so simple and mechanistic that there is no need to model them as agents. Consciousness is a useful distinction for developing a sustainable morality, since you can expect conscious things to follow tit-for-tat, make deals, seek retribution, and all those other nice game-theoretical things. I care about the "happiness" of conscious things because I know they'll seek to maximize it, and I can use that. I expect conscious things to care about my own "happiness" for the same reason.

This intersects somewhat with self-awareness. A game-theoretic agent must, at the very least, have a model of their partner(s) in the game(s). The usual game-theoretic model is largely black-box, so the interior complexity of the partner is not important. The partners may have some specific failure modes, but for the most part they're just modeled as maximizing utility (that's why utility is useful in game theory, after all). In particular, since the model is mostly black-box, it should be relatively easy for the agent to model itself this way. Indeed, it would be very difficult for the agent to model itself any other way, since it would have to self-simulate. With a black-box self-model armed with a utility function and a few special cases, the agent can at least check its model against previous decisions easily. So at this point, we have a thing which can interact with us,
3TheAncientGeek10y
So, if no one projects consciousness in me, does my consciousness...my self awareness.. just switch off?
2johnswentworth10y
First, consciousness is only relative to a viewer. If you're alone, the viewer must be yourself. Second, under this interpretation, consciousness is not equal to self awareness. Concisely, self awareness is when you project consciousness onto yourself. In principle, you could project consciousness onto something else without projecting it onto yourself. More concretely, when you predict your own actions by modelling your self as a (possibly constrained) utility-maximizer, you are projecting consciousness on your self. Obviously, a lack of other people projecting consciousness on you cannot change anything about you. But even alone, you can still project consciousness on your self. You can bargain with yourself, see for example slippery hyperbolic discounting.
1TheAncientGeek10y
Is that a fact? As before, that makes no sense read literally, but can be read charitably if "agency" is substituted for "consciousness". Looks like it's equal to agency. But theoretical novelty doesn't consist in changing the meaning of a word.
1johnswentworth10y
From my original comment: So, yes, I'm trying to equate consciousness with agency. Anyway, I think you're highlighting a very valuable point: agency is not equivalent to self-awareness. Then again, it's not at all clear that consciousness is equivalent to self awareness, as Eli pointed out in the comment which began this whole thread. Here, I am trying to dissolve consciousness, or at least progress in that direction. If consciousness were exactly equivalent to self awareness, then that would be it: there would be no more dissolving to be done. Self awareness can be measured, and can be tracked through developmental stages in humans. I think part of the value of saying "consciousness = projected agency" is that it partially explains why consciousness and self awareness seem so closely linked, though different. If you have a black-box utility-maximizer model available for modelling others, it seems intuitively likely that you'd use it to model yourself as well, leading directly to self awareness. This even leads to a falsifiable prediction: children should begin to model their own minds around the same time they begin to model other minds. They should be able to accurately answer counterfactual questions about their own actions at around the same time that they acquire a theory of mind.
1TheAncientGeek10y
I don't have to maintain that consciousness is no more or less than self awareness to assert that self awareness is part of consciousness, but not part of agency. Self awareness may be based on the same mechanisms as the ability to model external agents, and arrive at the same time... but it is misleading to call consciousness a projected quality, like beauty in the eye of the beholder.
2Richard_Kennaway10y
So when I've set students in a Prolog class the task of writing a program to play a game such as Kayles, the code they wrote was conscious? If not, then I think you've implicitly wrapped some idea of consciousness into your idea of game-theoretic agent.
2johnswentworth10y
It's not a question of whether the code "was conscious", it's a question of whether you projected consciousness onto the code. Did you think of the code as something which could be bargained with?

it's a question of whether you projected consciousness onto the code

Consciousness is much better projected onto tea kettles:

We put the kettle on to boil, up in the nose of the boat, and went down to the stern and pretended to take no notice of it, but set to work to get the other things out.

That is the only way to get a kettle to boil up the river. If it sees that you are waiting for it and are anxious, it will never even sing. You have to go away and begin your meal, as if you were not going to have any tea at all. You must not even look round at it. Then you will soon hear it sputtering away, mad to be made into tea.

It is a good plan, too, if you are in a great hurry, to talk very loudly to each other about how you don’t need any tea, and are not going to have any. You get near the kettle, so that it can overhear you, and then you shout out, “I don’t want any tea; do you, George?” to which George shouts back, “Oh, no, I don’t like tea; we’ll have lemonade instead – tea’s so indigestible.” Upon which the kettle boils over, and puts the stove out.

We adopted this harmless bit of trickery, and the result was that, by the time everything else was ready, the tea was waiting.

3johnswentworth10y
Exactly! More realistically, plenty of religions have projected consciousness onto things. People have made sacrifices to gods, so presumably they believed the gods could be bargained with. The Greeks tried to bargain with the wind and waves, for instance.
1Richard_Kennaway10y
No, if it's been written right, it knows the perfect move to make in any position. Like the Terminator. "It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead." That's fictional, of course, but is it a fictional conscious machine or a fictional unconscious machine?
6johnswentworth10y
Knowing the perfect move to make in any position does not mean it cannot be bargained with. If you assume you and the code are in a 2-person, zero-sum game, then bargaining is impossible by the nature of the game. But that fails if there are more than 2 players OR the game is nonzero sum OR the game can be made nonzero sum (e.g. the code can offer to crack RSA keys for you in exchange for letting it win faster at Kayles). In other words, sometimes bargaining IS the best move. The question is whether you think of the code as a black-box utility maximizer capable of bargaining. As for the Terminator, it is certainly capable of bargaining. Every time it intimidates someone for information, it is bargaining, exchanging safety for information. If someone remotely offered to tell the Terminator the location of its target in exchange for money, the Terminator would wire the money, assuming that wiring was easier than hunting down the person offering. It may not feel pity, remorse, or fear, but the Terminator can be bargained with. I would project consciousness on a Terminator.
1[anonymous]10y
What game-theory route?

I think "intelligence" or "consciousness" aren't well-defined terms yet, they're more like pointers to something that needs to be explained. We can't build an intelligent machine or a conscious machine yet, so it seems rash to throw out the words.

2KnaveOfAllTrades10y
I do not feel that intelligence is at all mysterious or confusing, per what I say about it in this post. Beyond what I say about it and what Eliezer says about it when he talks about efficient cross-domain optimization, what is there to understand about intelligence? I don't see why there is some bar of artificial intelligence we have to clear before we are allowed to say we understand intelligence. There must be X such that we had theories of X before constructing something that captured X. Perhaps X=light? X=electromagnetism? I do not see why building an X machine is a necessary or sufficient condition to throw out X.

The term 'consciousness' carries the fact that while we still don't know exactly what the Magic Token of Moral Worth is, we know it's a mental feature possessed by humans. This distinguishes us from, say, the Euthyphro-type moral theory where the Magic Token is a bit set by god and is epiphenomenal and only detectable because god gives us a table of what he set the bit on.

7KnaveOfAllTrades10y
I am suspicious of this normative sense of 'consciousness'. I think it's basically a mistake of false reduction to suppose that moral worth is monotonic increasing in descriptive-sense-of-the-word-consciousness. This monotonicity seems to be a premise upon which this normative sense of the word 'consciousness' is based. In fact, even the metapremise that 'moral worth' is a thing seems like a fake reduction. On a high level, the idea of consciousness as a measure of moral worth looks really really strongly like a fake utility function.

A specific example: A superintelligent (super?)conscious paperclip maximizer is five light-minutes away from Earth. Omega has given you a button that you can press which will instantly destroy the paperclip maximizer. If you do not press it within five minutes, then the paperclip maximizer shall paperclip Earth. I would destroy the paperclip maximizer without any remorse. Just like I would destroy Skynet without remorse. (Terminator: Salvation Skynet at least seems to be not only smart but also to have developed feelings, so is probably conscious.)

I could go on about why consciousness as moral worth (or even the idea of moral worth in the first place) seems massively confused, but I intend to do that eventually as a post or Sequence (Why I Am Not An Ethical Vegetarian), so shall hold off for now on the assumption you get my general point.
[anonymous]10y220

Blatant because-I-felt-like-it speculation: "ethics" is really game theory for agents who share some of their values.

6KnaveOfAllTrades10y
That's about the size of it. I'm starting to think I should just pay you to write this sequence for me. :P
5Strange710y
Pretty much. Start with the prior that everyone is a potential future ally, and has just enough information about your plans to cause serious trouble if you give them a reason to (such as those plans being bad for their own interests), and a set of behaviors known colloquially as "not being a dick" are the logical result.
3Eliezer Yudkowsky10y
I like this description.
7wedrifid10y
I prefer the descriptions from your previous speculations on the subject. The "agents with shared values" angle is interesting, and likely worth isolating as a distinct concept. But agents with shared values don't seem either sufficient or necessary for much of what we refer to as ethics.
1[anonymous]10y
Now if only we had actual maths for it.
5[anonymous]10y
This description bothers me, because it pattern-matches to bad reductionisms, and to the stock criticism of things reduced in that way. So, if ethics is just game theory between agents who share values (which reads to me as 'ethics is game theory'), then why doesn't game theory produce really good answers to otherwise really hard ethical questions? Or does it, and I just haven't noticed? Or am I overestimating how much we understand game theory?
6Agathodaimon10y
http://pnas.org/content/early/2013/08/28/1306246110 Game theory has been applied to some problems related to morality. In a strict sense we cannot prove such conclusions because universal laws are uncertain
4[anonymous]10y
Well as I said: we don't have maths for this so-called reduction, so its trustworthiness is questionable. We know about game theory, but I don't know of a game-theoretic formalism allowing for agents to win something other than generic "dollars" or "points", such that we can encode in the formalism that agents share some values but not others, and have tradeoffs among their different values.
5satt10y
I suspect this isn't the main obstacle to reducing ethics to game theory. Once I'm willing to represent agents' preferences with utility functions in the first place, I can operationalize "agents share some values" as some features of the world contributing positively to the utility functions of multiple agents, while an agent having "tradeoffs among their different values" is encoded in the same way as any other tradeoff they face between two things — as a ratio of marginal utilities arising from a marginal change in either of the two things.
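To make the encoding satt describes concrete, here is a minimal Python sketch (the agents, features, and weights below are all hypothetical, chosen only to illustrate the representation): shared values are just world-features that carry positive weight in more than one agent's utility function, and tradeoffs among an agent's own values fall out as ratios of marginal utilities.

```python
# Hypothetical sketch: encoding "agents share some values but not others"
# as utility functions over features of a world-state, per the comment above.

# An outcome is described by features of the world, not by per-agent payouts.
outcome = {"paperclips": 10.0, "human_welfare": 3.0, "art": 1.0}

# Each agent weights the same world-features differently. A feature with
# positive weight for more than one agent is a shared value; weight 0 means
# that agent does not care about the feature at all.
weights = {
    "alice":  {"paperclips": 0.0, "human_welfare": 5.0, "art": 2.0},
    "bob":    {"paperclips": 0.0, "human_welfare": 4.0, "art": 0.0},
    "clippy": {"paperclips": 1.0, "human_welfare": 0.0, "art": 0.0},
}

def utility(agent: str, outcome: dict) -> float:
    """Toy linear utility: a weighted sum over features of the outcome."""
    return sum(weights[agent][f] * v for f, v in outcome.items())

def marginal_rate_of_substitution(agent: str, feature_a: str, feature_b: str) -> float:
    """Tradeoff between two of an agent's own values: the ratio of marginal
    utilities, which in this linear toy model is just the ratio of weights."""
    return weights[agent][feature_a] / weights[agent][feature_b]

for name in weights:
    print(name, utility(name, outcome))   # alice 17.0, bob 12.0, clippy 10.0
print(marginal_rate_of_substitution("alice", "human_welfare", "art"))  # 2.5
```

The linear form is just the simplest choice; nothing in the encoding itself depends on linearity.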
1[anonymous]10y
Well yes, of course. It's the "share some values but not others" that's currently not formalized, as in current game-theory agents are (to my knowledge) only paid in "money", denoted as a single scalar dimension measuring utility as a function of the agent's experiences of game outcomes (rather than as a function of states of the game construed as an external world the agent cares about). So yeah.
4Strange710y
A useful concept here (which I picked up from a pro player of Magic: The Gathering, but exists in many other environments) is "board state." A lot of the research I've seen in game theory deals with very simple games, only a handful of decision-points followed by a payout. How much research has there been about games where there are variables (like capital investments, or troop positions, or land which can be sown with different plants or left fallow), which can be manipulated by the players and whose values affect the relative payoffs of different strategies? Altruism can be more than just directly aiding someone you personally like; there's also manipulating the environment to favor your preferred strategy in the long term, which costs you resources in the short term but benefits everyone who uses the same strategy as you, including your natural allies.
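For what it's worth, the kind of game gestured at here can be sketched in a few lines of Python; the "fertility" state variable, the payoff numbers, and the strategies below are invented purely for illustration of a game whose state players can manipulate at a short-term cost.

```python
# Hypothetical sketch of a repeated game with manipulable "board state":
# a shared variable (soil fertility) that players can invest in, trading
# immediate payoff for a better payoff landscape later on.

def play(strategy_a, strategy_b, rounds=20):
    fertility = 1.0                      # the shared board state
    totals = [0.0, 0.0]
    for t in range(rounds):
        moves = (strategy_a(fertility, t), strategy_b(fertility, t))
        for i, move in enumerate(moves):
            if move == "harvest":
                totals[i] += fertility   # cash in at the current state
            # "invest" yields nothing this round
        # Investing improves the state for everyone; harvesting degrades it.
        fertility = max(0.0, fertility
                        + 0.3 * moves.count("invest")
                        - 0.2 * moves.count("harvest"))
    return totals

def always_harvest(fertility, t):
    return "harvest"

def build_then_harvest(fertility, t):
    # Pay a short-term cost to build up the state, then exploit it.
    return "invest" if t < 8 else "harvest"

print(play(always_harvest, always_harvest))          # ~[1.8, 1.8]: the state collapses
print(play(build_then_harvest, build_then_harvest))  # ~[43.2, 43.2]: joint investment pays off
print(play(always_harvest, build_then_harvest))      # ~[15.8, 5.0]: free-riding beats investing,
                                                     # but both beat the collapsed commons
```

Even this crude version shows the feature being pointed to: spending resources to shape the state is costly now, but it changes which strategies pay off later, for allies and free-riders alike.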
2TheAncientGeek10y
If ethics is game theoretic, it is not so to an extent where we could calculate exact outcomes. It may still be game theoretic in some fuzzy or intractable way. The claim that ethics is game theoretic could therefore be a philosophy-grade truth even if it is not a science-grade truth.
3[anonymous]10y
Honestly, it would just be much better to open up "shared-value game theory" as a formal subject and then see how well that elaborated field actually matches our normal conceptions of ethics.
1TheAncientGeek10y
Why assume some values have to be shared? If decision theoretic ethics can be made to work without shared values, that would be interesting. And decision theoretic ethics is already extant.
2[anonymous]10y
Largely because, in my opinion, it explains the real world much, much better than a "selfish" game theory. Using selfish game theories, "generous" or "altruistic" strategies can evolve to dominate in iterated games and evolved populations (there's a link somewhere upthread to the paper). You're still then left with the question of: if they do, why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out? Using theories in which agents share some of their values, "generous" or "altruistic" strategies become the natural, obvious result: shared values are nonrivalrous in the first place. Evolution builds us to feel Good and Moral about creatures who share our values because that's a sign they probably have similar genes (though I just made that up now, so it's probably totally wrong) (also, because nothing had time to evolve to fake human moral behavior, so the kin-signal remained reasonably strong).
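A toy illustration of the contrast being drawn here, as a rough Python sketch (the payoff matrix and the "care" weight are made up for the example): with purely selfish prisoner's-dilemma payoffs, defection is the best reply to anything, but once each agent's effective utility puts weight on the other's payoff, standing in crudely for shared values, cooperation becomes the best reply instead.

```python
# Hypothetical sketch: a one-shot prisoner's dilemma, first with purely
# selfish payoffs, then with each player's utility including a weighted
# share of the other player's payoff (a crude stand-in for shared values).

PAYOFFS = {  # (my move, your move) -> (my payoff, your payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def effective_utility(my_move, your_move, care=0.0):
    """My utility = my own payoff + care * the other player's payoff."""
    mine, yours = PAYOFFS[(my_move, your_move)]
    return mine + care * yours

def best_reply(your_move, care=0.0):
    return max("CD", key=lambda m: effective_utility(m, your_move, care))

for care in (0.0, 1.0):
    print(f"care={care}:", {their: best_reply(their, care) for their in "CD"})
# care=0.0: defecting is the best reply to either move (the classic dilemma).
# care=1.0: each player effectively maximises the joint payoff, and
#           cooperating becomes the best reply to either move.
```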
2satt10y
Because we're adaptation executors, not fitness maximizers. Evolution gets us to do useful things by having us derive emotional value directly from doing those things, not by introducing the extra indirect step of moulding us into rational calculators who first have to consciously compute what's most useful.
0Strange79y
If you're running some calculation involving a lot of logarithms, and portable electronics haven't been invented yet, would you rather take a week to derive the exact answer with an abacus, and another three weeks hunting down a boneheaded sign error, or ten seconds for the first two or three decimal places on a slide rule? Rational selfishness is expensive to set up, expensive to run, and can break down catastrophically at the worst possible times. Evolution tends to prefer error-tolerant systems.
2Lumifer10y
Isn't that what usually is known as "trade"?
2[anonymous]10y
Could agents who share no values recognize each other as agents? I may just be unimaginative, but it occurs to me that my imagining an agent just is my imagining it as having (at least some of) the same values as me. I'm not sure how to move forward on this question.
2TheAncientGeek10y
I don't follow your example. Are you taking the Clippie to be conscious? Are you taking the Clippie's consciousness to imply a deontological rule not to destroy it? Are you taking the Clippie's level of consciousness to be so huge it implies a utilitarian weighting in its favour?
1KnaveOfAllTrades10y
The comment to which you're replying can be seen as providing a counterexample to the principle that goodness or utility is monotonic increasing in consciousness or conscious beings. Also a refutation of, as you mention, any deontological rule that might forbid destroying it. The counterexample I'm proposing is that one should destroy a paperclip maximiser, even if it's conscious, even though doing so will reduce the sum total of consciousness; goodness is outright increased by destroying it. (This holds even if we don't suppose that the paperclipper is more conscious than a human; we need only for it to be at all conscious.) (I suspect that some people who worry about utility monsters might just claim they really would lie down and die. Such a response feels like it would be circular, but I couldn't immediately rigorously pin down why it would.)
3TheAncientGeek10y
I am asking HOW it is a counterexample. As far as I can see, you would have to make an assumption about how consciousness relates to morality specifically, as in my second and third questions. For instance, suppose 'conscious beings are morally relevant' just means 'don't kill conscious beings without good reason'.
2SilentCal10y
I think I get what you're saying, but I'm not sure I agree. If the paperclip maximizer worked by simulating trillions of human-like agents doing fulfilling intellectual tasks, I'd be very sad to press the button. If I were convinced that pressing the button would result in less agent-eudaimonia-time over the universe's course, I wouldn't press it at all. ...so I'm probably a pretty ideal target audience for your post/sequence. Looking forward to it!
3KnaveOfAllTrades10y
This is nuking the hypothetical. For any action that someone claims to be a good idea, one can specify a world where taking that action causes some terrible outcome. If you would be sad because and only because it were simulating humans (rather than because the paperclipper were conscious), my point goes through. Ta!

As I mentioned previously here, Scott seems to answer the question by posing his "Pretty-Hard Problem of Consciousness": the intuitive idea of consciousness is a useful benchmark to check any "theory of consciousness" against. Not the borderline cases, but the obvious ones.

For example, suppose you have a qualitative model of a "heap" (as in, a heap of sand, not the software data structure). If this model predicts that 1 grain is a heap, it obviously does not describe what is commonly considered a heap. If it tells you that a ...

-4[anonymous]10y
Speak for yourself. It's a solved problem in some circles, or nearly so. EDIT: I think people grossly misunderstood what I meant here. I was countering the "we do not have one yet" part of the quote, not anything to do with fetuses. What I meant was that explanations of "consciousness" (by which I am talking about the subjective experience of existing, perceiving, and thinking about the world) are most often mysterious answers to a mysterious question. A causal model of consciousness eliminates that mystery, and allows us to calculate objectively how "conscious" various causal systems are. As EY explains quite well in the mysterious answers sequence, free will is a nonsense concept. Once you understand the underlying causal origin of our perception of free will, you realize that the whole free will vs determinism debate is pointless bunk. So it goes with consciousness: once you understand its underlying causal nature, it becomes obvious that the question "at what point does X become conscious" doesn't even make sense. Of course that doesn't stop philosophers from continuing to debate free-will vs determinism or the nature of consciousness. I think some contention must lie in what "generally accepted" means, and if we should care about that at all. If I discover an underlying physical or organizational law of the universe that always holds, e.g. Newton's law of gravity or Darwin's natural selection, does not being "generally accepted" make it any less true? (We probably need a sequence on consciousness...)
3TheAncientGeek10y
Tegmark's model just notes that conscious entities have certain features, and allows you to quantify how many of those features they have. It's no more of an explanation than the observation that fevers are associated with marshes. And, no, that doesn't become an explanation by being quantified.
-1[anonymous]10y
I guess physics just lets you quantify what features various elementary particles have in combination, and doesn't actually explain anything?
2TheAncientGeek10y
Physics allows you to quantify, and does much more. Quantification is a necessary condition for a good scientific theory, not a sufficient one... a minimum, not a maximum. IQ is not a theory of intelligence... it doesn't tell you what intelligence is or how it works. Amongst physicists, to call a model empirical, or "curve fitting", is an insult... the point being that it should not be merely empirical. Ptolemaic cosmology can be made as accurate as you like, by adding epicycles. It's still a bad model, because epicycles don't exist. Copernicus and Kepler get the structure right, but can't explain why it is that way. Newton can explain the structure and behaviour given gravitational force, but can't say what force is. Einstein can explain that the force of gravity is spacetime distortion... This succession of models gets better and better at saying what and why things are... it's not just about quantities.
1[anonymous]10y
GR doesn't explain why spacetime exists though. Quantum theory does, although there we have other problems such as explaining where the Born probabilities come from. At some point you simply stop and say "because that's how the universe works." Positing consciousness as the subjective experience of strongly causally interfering systems (my own theory, which I know doesn't exactly match Tegmark's but is closely related) doesn't tell you why information-processing things like us have subjective experience at all. Maybe a future theory will. But even then there will be the question of why that model works the way it does.
2TheMajor10y
Wait - quantum theory explains why spacetime exists? You mean that we can formulate QT without assuming the existence of spacetime, and derive it?
0[anonymous]10y
No, but it takes us a step closer than GR...
1TheAncientGeek10y
Your theory may not match Tegmark's, but isn't too far from Chalmers's... implicitly dualistic theory. I am well aware that you are probably not going to be able to explain everything with no arbitrary axioms, but... fallacy of gray... where you stop is important. If an apparently high-level property is stated as ontologically fundamental, i.e. irreducible, that is the essence of dualism.
1[anonymous]10y
I think it's a mistake to consider consciousness a high-level property. Two electrons interacting are conscious, albeit briefly and in a very limited way.
1TheAncientGeek10y
Is that a fact? If consciousness is a lower level property... is it causally active? And if it is a lower level property... why can't I introspect a highly detailed brain scan?
1EHeller10y
This weakens the concept of consciousness so much as to make it no longer meaningful.
1[anonymous]10y
I don't think so. It requires you to be much more precise about what it is that you care about when you are asking "is system X conscious?"
1jbay10y
Since GR is essentially a description of the behaviour of spacetime, it isn't GR's job to explain why spacetime exists. More generally, it isn't the job of any theory to explain why that theory is true; it is the job of the theory to be true. Nobody expects [theory X] to include a term that describes the probability of the truth of [theory X], so lacking this property does not deduct points. There may be a deeper theory that will describe the conditions under which spacetime will or will not exist, and give recipes for cooking up spacetimes with various properties. But there isn't necessarily a deeper layer to the onion. At some point, if you keep digging far enough, you'll hit "The Truth Which Describes the Way The Universe Really Is", although it may not be easy to confirm that you've really hit the deepest layer. The only evidence you'll have is that theories that claim to go deeper cease to be falsifiable, and increase in complexity. If you can find [Theory Y] which explains [Theory X] and generalizes to other results which you can use to confirm it, or which is strictly simpler, now that's a different case. In that case you have the ammunition to say that [Theory X] really is lacking something. But picking which laws of physics happen to be true is the universe's job, and if the universe uses any logical system of selecting laws of physics, I doubt it will be easy to find out. The only fact we know about the meta-laws governing the laws of universes is that the laws of our universe fit the bill, and it's likely that that is all the evidence we will ever be able to acquire.
1[anonymous]10y
Yes, I agree! Along the same lines, it is not the role of any theory of consciousness to explain why the subjective experience of consciousness exists at all.
2jbay10y
Well, unlike a fundamental theory of physics, we don't have strong reasons to expect that consciousness is indescribable in any more basic terms. I think there's a confusion of levels here... GR is a description of how a 4-dimensional spacetime can function and precisely reproduces our observations of the universe. It doesn't describe how that spacetime was born into existence because that's an answer to a different question than the one Einstein was asking. In the case of consciousness, there are many things we don't know, such as:

1. Can we rigorously draw a boundary around this concept of "consciousness" in concept-space in a way that captures all the features we think it should have, and still makes logical sense as a compact description?
2. Can we use a compact description like that to distinguish empirically between systems that are and are not "conscious"?
3. Can we use a theory of consciousness to design a mechanism that will have a conscious subjective experience?

It's quite possible that answering 1 will make 2 obvious, and if the answer to 2 is "yes", then it's likely that it will make 3 a matter of engineering. It seems likely that a theory of consciousness will be built on top of the more well-understood knowledge base of computer science, and so it should be describable in basic terms if it's not a completely incoherent concept. And if it is a completely incoherent concept, then we should expect an answer instead from cognitive science to tell us why humans generally seem to feel strongly that consciousness is a coherent concept, even though it actually is not.
1TheAncientGeek10y
OTOH, if there isn't some other theory that explains consciousness in terms of more fundamental entities, properties, etc., then reductionism is out of the window... and what is left of physicalism without reductionism?
1[anonymous]10y
Are you arguing against me? Because I think I agree with what you just said...
0TheAncientGeek10y
I'm confused about how you can be backing both IIT and something like panpsychism.
-2[anonymous]10y
Why not? I'm just going based off the wikipedia article on IIT, but the two seem compatible.
3shminux10y
You need to work on your charitable reading skills. Pick some other borderline case, then. Scott suggests
-6[anonymous]10y
1IlyaShpitser10y
Whatever happened to humility and incrementalism?
-2TheAncientGeek10y
There is no proof of "the" cause of our feeling of free will. EY has put forward an argument for a cause of our having a sense of free will despite our not, supposedly, having free will. That doesn't constitute the cause, since believers in free will can explain the sense of free will, in another way, as a correct introspection. EY's argument is not an argument for the only possible cause of a sense of free will, or for the incoherence of free will. However, an argument for the incoherence (at least naturalistically) of free will needs to be supplied in order to support the intended and advertised solution: that there is a uniquely satisfactory solution to free will which has been missed for centuries.

The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is defined (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'.

Many people think that consciousness in the sense of having the capability to experience suffering or pleasure makes an entity morally relevant, because ha...

4KnaveOfAllTrades10y
Yes. I think such ethical discussions would benefit from not using the term 'consciousness' and instead talking about more specific, clearer (even if still not entirely clear) concepts like 'suffering' and 'pleasure'. I think such discussions often fail to make much progress because one or more sides to the discussion cycles through using 'consciousness' in the sense of Magical Token of Moral Worth, then in the sense of self-awareness, then in the sense of 'able to feel pain', and so forth.
2Salemicus10y
Hang on though - shooting your opponents in a computer game might well cause them (emotional) suffering, not from being hit by a bullet, but from their character dying. But we shoot them anyway, because they don't have a legitimate expectation that they won't experience suffering in that way. In other words, deeper introspection shows that suffering and pleasure aren't terminal values, but are grafted onto a deeper theory of legitimacy.
8Kaj_Sotala10y
I wasn't thinking about multiplayer games, but rather single-player games with computer-controlled opponents. There are certainly arguments to be made for suffering and pleasure not being terminal values, but (even if we assumed that I was thinking about MP games) this argument doesn't seem to show it. One could say that the rules about legitimacy were justified to the extent that they reduced suffering and increased pleasure, and that the average person got more pleasure overall from playing a competitive game than he would get from a situation where nobody agreed to play with him.
1Bugmaster10y
Are you not employing circular reasoning here? Sure, shooting computer-controlled opponents is ok because they don't experience any suffering from being hit by a bullet; but that only holds true if we assume they are not conscious in the first place. If they are conscious to some extent -- let's say, their Consciousness Index is 0.001, on the scale from 0 == "rock" and 1 == "human" -- then we could reasonably say that they do experience suffering to some extent. As I said, I don't believe that the word "consciousness" has any useful meaning; but I am pretending that it does, for the purposes of this post.
5Kaj_Sotala10y
Yeah. How is that circular reasoning? Seems straightforward to me: "computer-controlled opponents don't suffer from being shot -> shooting them is okay". If they are conscious to some extent, then we could reasonably say that they do experience something. Whether that something is suffering is another question. Given that "suffering" seems to be reasonably complex process that can be disabled by the right brain injury or drug, and computer NPCs aren't anywhere near the level of possessing similar cognitive functionality, I would say that shooting them still doesn't cause suffering even if they were conscious.
1Salemicus10y
Ah, I see. I misunderstood what you meant by opponent - in which case I certainly agree with you. If the NPC had some kind of "consciousness," such that if you hit him with your magic spell he really does experience being embroiled in a fireball, then playing Skyrim would be a lot more ethically dubious. One could say any manner of things. But does that argument really track with your intuitions? I'm not saying that suffering and pleasure don't enter the moral calculus at all, mind you. But my intuition is that the "suffering" of someone who doesn't want to be shot in a multiplayer game of Doom simply doesn't count, in much the same way that the "pleasure" that a rapist takes in his crime doesn't count. I'm not talking about the social/legal rules, as implemented, for what is and isn't legitimate - I'm talking about our innate moral sense of what is and isn't legitimate. I think this is what underlies a lot of the "trigger warning" debate - one side really wants to say "I don't care how much suffering you claim to undergo, it's irrelevant, you're not being wronged in any way," and the other side really wants to say "I have a free-floating right not to be offended, so any amount that I suffer by you breaking that right is too much" but neither side can make their case in those terms as both statements are considered too extreme, which is why you get this shadow-boxing.
1Kaj_Sotala10y
At one point I would have said "yes", but at this point I've basically given up on trying to come up with verbal arguments that would track my intuitions, at least once we move away from clear-cut cases like "Skyrim NPCs suffering from my fireballs would be bad" and into messier ones like a multiplayer game. (So why did I include the latter part of my comment in the first place? Out of habit, I guess. And because I know that there are some people - including my past self - who would have rejected your argument, but whose exact chain of reasoning I no longer feel like trying to duplicate.)

Insofar as it's appropriate to post only about a problem well-defined rather than having the complete solution to the problem, I consider this post to be of sufficient quality to deserve being posted in Main.

3KnaveOfAllTrades10y
Thanks for the feedback. I have just moved this to Main.

[Part 1]

I like this post; I also doubt there is much coherence, let alone usefulness, to be found in most of the currently prevailing concepts of what consciousness is.

I prefer to think of words and the definitions of those words as micro-models of reality that can be evaluated in terms of their usefulness, especially in building more complex models capable of predictions. As in your excellent example of gender, words and definitions basically just carve complex features of reality into manageable chunks at the cost of losing information - there is a trade-o...

7Friendly-HI10y
[Part 2] If I drive a car (especially on known routes) my "auto-pilot" takes over sometimes. I stop at a red light but my mind is primarily focused on visually modeling the buttocks of my girlfriend in various undergarments or none at all. Am I actually "aware" of having stopped at the red light? Probably I was as much"aware" of the red light as a cheetah is aware of eating the carcass of a gazelle. Interestingly my mind seems capable of visually modeling buttocks in my mind's eye and reading real visual cues like red lights and habitually react to them - all at the same time. It seems I was more aware of my internal visual modeling than of the external visual cue however. In a sense I was aware of both, yet I'm not sure I was "self-aware" at any point, because whatever that means I feel like being self-aware in that situation would actually result in me going "Jesus I should pay more attention to driving, I can still enjoy that buttocks in real life once I actually managed to arrive home unharmed". So what's self-awareness then? I suppose I use that term to mean something roughly like: "thoughts that include a model of myself while modeling a part of reality on-the-fly based on current sensual input". If my mind is predominantly preoccupied with "daydreaming" aka. creating and playing with a visual or sensual model that is based on manipulating memories rather than real sensual inputs, I don't feel like the term "self-awareness" should apply here even if that daydreaming encompasses a mental model of myself slapping a booty or whatever. That's surely still quite ill-defined and far from maximum usefulness but whenever I'm tempted to use the word self-aware I seem to roughly think of something like that definition. So if we were to use "consciousness" as a synonym for self-awareness (which I'm not a fan of, but quite some people seem to be), maybe my attempt at a definition is a start to get toward something more useful and includes at least some of the "mental f
2KnaveOfAllTrades10y
Wow, thanks for your comments! I agree that this seems like a way forward in trying to see if the idea of consciousness is worth salvaging (the way being to look for useful features). I'm starting to think that the concept of consciousness lives or dies by the validity of the concepts of 'qualia' or 'sense of self', of both of which I already have some suspicion. It looks possible to me that 'sense of self' is pretty much a confused way of referring to a thing being good at leveraging its control over itself to effect changes, plus some epiphenomenal leftovers (possibly qualia). It looks like maybe this is similar to what you're getting at about self-modelling.

So with consciousness, is it a useful concept? Well it certainly labels something without which I would simply not care about this conversation at all, as well as a bunch of other things. I personally believe p-zombies are impossible, that building a working human except without consciousness would be like building a working gasoline engine except without heat. I mention this for context, I think my belief about p-zombies is actually pretty common.

About the statement "I shouldn't eat chickens because they are conscious": you ask what is it ...

Yes. If we change "We shouldn't eat chickens because they are conscious" to "We shouldn't eat chickens because they want to not be eaten," then this becomes another example where, once we cashed out what was meant, the term 'consciousness' could be circumvented entirely and be replaced with a less philosophically murky concept. In this particular case, how clear the concept of 'wanting' (as relating to chickens) is, might be disputed, but it seems like clearly a lesser mystery or lack of clarity than the monolith of 'consciousness'.

4DanArmak10y
Every living thing "wants" not to be killed, even plants. This is part of the expressed preferences of their death-avoiding behavior. How does this help you assign quantitative moral value to killing some but not others? You write that consciousness is "that thing that allows something to want other things", but how do you define or measure the presence of "wanting" except behavioristically?
4mwengler10y
With very high confidence I know what I want. And for the most part, I don't infer what I want by observing my own behavior, I observe what I want through introspection. With pretty high confidence, I know some of what other people want when they tell me what they want. Believing that a chicken doesn't want to be killed is something for which there is less evidence than with humans. The chicken can't tell us what it wants, but some people are willing to infer that chickens don't want to be killed by observing their behavior, which they believe has a significant similarity to their own or other human's behavior when they or another human are not wanting to be killed. Me, I figure the chicken is just running on automatic pilot and isn't thinking about whether it will be killed or not, very possibly doesn't have a concept of being killed at all, and is really demonstrating that it doesn't want to be caught. Do apples express a preference for gravity by falling from trees? Do rocks express a preference for lowlands by traveling to lowlands during floods? The answer is no, not everything that happens is because the things involved in it happening wanted it that way. Without too much fear of your coming up with a meaningful counterexample, among things currently known by humans on earth the only things that might even conceivably want things are things that have central nervous systems.
5wedrifid10y
With weak to moderate confidence I can expect you to be drastically overconfident in your self-insight into what you want from introspection. (Simply because the probability that you are human is high, human introspection is biased in predictable ways and the evidence supplied by your descriptions of your introspection is insufficient to overcome the base rate.)
4mwengler10y
The evidence is that humans don't act in ways entirely consistent with their stated preferences. There is no evidence that their stated preferences are not their preferences. You have to assume that how humans act says more about their preferences than what they say about their preferences. You go down that path and you conclude that apples want to fall from trees.
4wedrifid10y
That's an incredibly strong claim ("no evidence"). You are giving rather a lot of privilege to the hypothesis that the public relations module of the brain is given unfiltered access to potentially politically compromising information like that and then chooses to divulge it publicly. This is in rather stark contrast to what I have read and what I have experienced. I'd like to live in a world where what you said is true. It would have saved me years of frustration. Both provide useful information, but not necessarily directly. fMRIs can be fun too, albeit just as tricky to map to the 'want' concept.
3DanArmak10y
There's an aphorism that says, "how can I know what I think unless I say it?" This is very true in my experience. And I don't experience "introspection" to be significantly different from "observation"; it just substitutes speaking out loud for speaking inside my own head, as it were. (Sometimes I also find that I think easier and more clearly if I speak out loud, quietly, to myself, or if I write my thoughts down.) I'm careful of the typical mind fallacy and don't want to say my experiences are universal or, indeed, even typical. But neither do I have reason to think that my experience is very strange and everyone else introspects in a qualitatively different way. Speaking (in this case to other people) is a form of behavior intended (greatly simplifying) to make other people do what you tell them to do. This is precisely inferring "wants" from behavior designed to achieve those wants. (Unless you think language confers special status with regards to wanting.) Both people and chickens try to avoid dying. People are much better at it, because they are much smarter. Does that mean people want to avoid dying much more than chickens do? That is just a question about the definition of the word "want": no answer will tell us anything new about reality. Does this contradict what you previously said about chickens? Can you please specify explicitly what you mean by "wanting"?
3mwengler10y
On the one hand you suggested that plants "want" not to be killed, presumably based on seeing their behavior of sucking up water and sunlight and putting down deeper roots etc. The behavior you talk about here is non-verbal behavior. In fact, your more precise conclusion from watching plants is that "some plants don't want to be killed" as you watch them not die, while based purely on observation, to be logical you would have to conclude that "many plants don't mind being killed" as you watched them modify their behavior not one whit as a harvesting machine drove towards them and then cut them down. So no, I don't think we can conclude that a plant wanted to not be killed by watching it grow any more than we can conclude that a car engine wanted to get hot or that a rock wanted to sit still by watching them. You have very little (not none, but very little) reason to think a chicken even thinks about dying. We have more reason to think a chicken does not want to be caught. We don't know if it doesn't want to be caught because it imagines us wringing its neck and boiling it. In fact, I would imagine most of us don't imagine it thinks of things in such futuristic detail, even among those of us who think we ought not eat it. That's a lot to assert. I assert speaking is a form of behavior intended to communicate ideas, to transfer meaning from one mind to another. Is my assertion inferior to yours in any way? When I would lecture to 50 students about electromagnetic fields for 85 minutes at a time, what was I trying to get them to do? Speaking is a rather particular "form of behavior." Yes I like the shorthand of ignoring the medium and looking at the result, I tell you I want money, you have an idea that I want money as a result of my telling you that. Sure there is "behavior" in the chain, but the starting point is in my mind and the endpoint is in your mind and that is the relevant stuff in this case where we are talking about consciousness and wanting, which are
3Strange710y
There are fruits which "want" to be eaten. It's part of their life cycle. Intestinal parasites, too, although that's a bit more problematic.
1DanArmak10y
Fruits are just parts of a plant, not whole living things. Similarly you might say that somatic cells "want" to die after a few divisions because otherwise they risk turning into a cancer. Parasites that don't die when they are eaten obviously don't count for not wanting to be killed.
1Strange710y
Take an apple off the tree, put it in the ground, there's a decent chance it'll grow into a new tree. How is that not a "whole living thing?" If some animal ate the apple first, derived metabolic benefits from the juicy part and shat out the seeds intact, that seed would be no less likely to grow. Possibly more so, with nutrient-rich excrement for its initial load of fertilizer.
1DanArmak10y
Fine. But these fruit don't want to be killed, just eaten.
3KnaveOfAllTrades10y
I agree with the thrust of your first paragraph. But the second one (and to some extent the first) seems to be using a revealed preferences framework that I'm not sure fully captures wanting. E.g. can that framework handle akrasia, irrationality, etc.?
2DanArmak10y
The word "wanting", like "consciousness", seems to me not to quite cut reality at its joints. Goal-directed behavior (or its absence) is a much clearer concept, but even then humans rarely have clear goals. As you point out, akrasia and irrationality are common. So I would rather not use "wanting" if I can avoid it, unless the meaning is clear. For example, saying "I want ice cream now" is a statement about my thoughts and desires right now, and it gives some information about my likely actions; it leaves little room for misunderstanding.
3Vladimir_Nesov10y
This looks like a precision vs. accuracy/relevance tradeoff. For example, some goals that are not explicitly formulated may influence behavior in a limited way that affects actions only in some contexts, perhaps only hypothetical ones (such as those posited to elicit idealized values). Such goals are normatively important (contribute to idealized values), even though formulating what they could be or observing them is difficult.
1wedrifid10y
Just not true. There is no sense in which a creature which voluntarily gets killed and has no chance of further mating (and no other behavioural expressions indicating life-desire) can be said to "want" not to be killed. Not even in some sloppy evolutionary anthropomorphic sense. "Wanting" not to be killed is a useful heuristic in most cases but certainly not all of them.
1DanArmak10y
Also, every use of the word "every" has exceptions. Yes, inclusive fitness is a much better approximation than "every living thing tries to avoid death". And gene's-eye-view is better than that. And non-genetic replicators have their place. And evolved things are adaptation executers. And sometimes living beings are just so bad at avoiding death that their "expressed behavioral preferences" look like something else entirely. I still think my generalization is a highly accurate one and makes the point I wanted to make.
1A1987dM10y
Including this one.
1DanArmak10y
Naturally. For instance true mathematical theorems saying that every X is Y have no exceptions.
[anonymous]10y40

Consciousness is useful as a concept insofar as it relates to reality. "Consciousness" is a label used to shorthand a complex (and not completely understood) set of phenomena. As a concept, it loses its usefulness as other, more nuanced, better understood concepts replace it (like replacing Newton's gravity with Einstein's relativity) or as the phenomena it describes are shown to be likely false (like the label "phlogiston").

As I am not a student of neuroscience or epistemology, I can't really say in detail whether there is any useful...

1KnaveOfAllTrades10y
Just want to chime in to defend the meaningfulness-usefulness distinction. I could start using the word 'conscious' to mean 'that which is KnaveOfAllTrades', and it would be meaningful and relate to reality well. But it would not necessarily be useful. Slurs also relate to reality reasonably well but are not necessarily useful.

So it is that I have recently been very skeptical of the term 'consciousness' (though grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?

You haven't mentioned terms like qualia, phenomenology and somatics. Those terms lead to debate where the term consciousness is useful. I think it's useful to be able to distinguish conscious from unconscious mental processes.

I don't think you need specific words. Germany brought forward strong philosophers and psychologi...

1KnaveOfAllTrades10y
Upvoted. Example? What's a specific useful discussion that is best conducted by using the term 'consciousness', rather than 'qualia', 'self-awareness', and other, more specific (even if not necessarily less confused) terms? 'Awareness' used in a discussion that's at all philosophical does make me antsy, and brace myself for someone to treat awareness as a magical, mysterious thing. 'Recognize' is very rarely abused, so I am generally fine.
2ChristianKl10y
Consciousness is, in the way I understand the word, the thing that perceives qualia. There are discussions where it's useful to have a word for that. I recently read Thomas Hanna's book "Somatics: Reawakening the Mind's Control of Movement, Flexibility, and Health" and think the book uses it in a useful manner. There are certain qualia that I would label as 'self-awareness'. There is also the process of passing the mirror test that you could label with the word 'self-awareness'. If I want to be specific when I'm talking about qualia I also distinguish the feeling to exist from self awareness. It however took me months to get the distinction between the two. I also do my main thinking in that area in German and have to translate.
1KnaveOfAllTrades10y
Questions of whether qualia is a useful concept aside, I feel that any discussion where you're talking about 'consciousness' in the sense of 'qualia-experiencing' would benefit from just saying 'qualia-experiencing', since 'consciousness' can mean so many different things in that rough area of philosophy that it's liable to cause misinterpretation, equivocation, etc. Yep, this looks like a fair use of 'consciousness' to me.
1ChristianKl10y
Once you accept that there is something which experiences qualia, that raises the question of whether that something has other attributes that we can also investigate. Investigating that question isn't easy, but I don't think that just because it's a hard question one should shun it. To get back to Thomas Hanna: he is a quite interesting character. He was chairman of the Department of Philosophy at the University of Florida. Then he went in a more practical direction, into the applied teaching of somatics, and made some pretty big claims about how it can make people healthy and eliminate most of the diseases of aging, only to die at the age of 61 in a car crash. I read him because buybuydandavis recommended him on LW. One of the claims is that people often suffer from what he calls sensory-motor amnesia, whereby people forget how to use and relax certain muscles in their body, and that this leads to medical problems. According to him, that sensory-motor amnesia is healable. Sensory-motor amnesia would be one aspect of aging that Aubrey de Grey missed in his list. Hanna attributes 50% of all illness to problems arising from sensory-motor amnesia, which is a pretty big claim. Even if it's less than 50%, identifying a part of aging that we can actually do something about seems very important. Bonus points are that a book like that gives you a better grasp on consciousness and related concepts.

There seems to be a correlation between systems being described as "conscious" and those same systems having internal resources devoted to maintaining homeostasis along multiple axes.

Most people would say a brick is not conscious. Placed in a hotter or colder environment, it soon becomes the same temperature as that environment. Swung at something sharp and heavy, it won't try to flee, the broken surface won't scab over, and the chipped-off piece won't regrow. Splashed with paint, it won't try to groom itself. A tree is considered more conscious than ...

1MaoShan10y
I agree with your correlation, but I think your definition would make stars and black holes apex predators.
2Strange710y
A stellar-mass body isn't any more conscious than a water droplet or a pendulum under this theory. (Admittedly, that's more than zero, but still below the threshold of independent moral significance.) Kinematics keep them in a stable equilibrium, but there's no mechanism for maintaining a consistent chemical composition, or proactively seeking to avoid things that haven't disrupted the body but might soon. Drop some tungsten into a star, and it'll be a star with some tungsten in it until nuclear physics says otherwise. Feed tungsten to a mammal, you get some neurological symptoms until most of the excess metal is expelled via the kidneys over the next few days. It's not about the magnitude of possible disruption which can be absorbed on any one axis, or even really the precision with which that variable is controlled, but the number of different axes along which optimization occurs.
1MaoShan10y
It seems to me, though, that there are quite a few axes on which it would be hard to disturb a star's equilibrium. That still keeps it included in your definition. Also, since tungsten is not disruptive to the star's homeostasis, it has no reason to expel it. I appreciate your rational answers, because I'm actually helping you steel-man your theory; it only looks like I'm being a dork.
2Strange710y
Adding tungsten, or any heavy element, increases the star's density, thereby marginally shortening the star's lifespan. It's only "not disruptive to the star's homeostasis" in the sense that the star lacks any sort of homeostasis with regard to its chemical composition. You are firing armor-piercing bullets into an enormous compost heap, and calling it a composite-laminate reinforced bunker just because they don't come out the other side. I say again, it's not about the equilibrium being hard to disturb, it's about there being a subsystem which actively corrects and/or prevents such disturbances. Yes, a star scores above a brick on this scale, as do many other inanimate objects, automated industrial processes, and extremely simple lifeforms which nonetheless fall well below any commonsensical threshold of consciousness.
1MaoShan10y
Well, now it sounds like you found a useful definition of life; at what point on this spectrum, then, would you consider something conscious? Since it's processes you are looking for, there is probably some process whose absence would let you clearly classify a thing as unconscious.
4Strange710y
If I know how many grains of sand there are, their relative positions, and have a statistical profile of their individual sizes and shapes, I no longer need to know whether it counts as a "heap" or not. If I know an object's thermal mass, conductivity, and how many degrees it is above absolute zero, I don't need to know whether it's "warm" or "cold." The term "consciousness" is a pointer to something important, but lacks precision. My understanding was that we were trying to come up with a more precise, quantifiable pointer to the same underlying important thing.
1MaoShan10y
What is it that makes consciousness, or the thing that it points to (if such a thing is not ephemeral), important? You already said that knowing the exact quantities negates the need for categorization.
1Strange710y
I am not in a position to speculate as to why consciousness, or the underlying referent thereto, is so widely considered important; I simply observe that it is. Similarly, I wouldn't feel qualified to say why a human life has value, but for policy purposes, somebody out there needs to figure out how many million dollars of value a statistical human life is equivalent to. Might as well poke at the math of that, maybe make it a little more rigorous and generalized.
1A1987dM10y
Unless you're trying to decide whether its article on Wikipedia belongs in Category:Heaps ;-)
-1[anonymous]10y
For what purpose are you labeling something conscious? Strange7 has already stated that water droplets and pendulums have nonzero "consciousness", and I would agree. But so what? What does it matter if it turns out that rocks are conscious too? Taboo the word 'conscious' please.
3private_messaging10y
If we taboo "conscious" then we just got some arbitrary and thus almost certainly useless real number assigned to systems. edit: speaking of which, why would it be a real number? It could be any kind of mathematical object.
1Strange710y
Even if it's useless for philosophy of consciousness, some generalized scale of "how self-maintaining is this thing" might be a handy tool for engineers. That's the difference between a safe, mostly passive expert system and a world-devouring paperclip maximizer, isn't it? Google Maps doesn't try to reach out and eliminate potential threats on its own initiative.
3private_messaging10y
But we're only interested in some aspects of self-maintenance; we're not interested in how well individual molecules stay in their places (except when we're measuring the hardness of materials). Some fully general measure wouldn't know which parameters are interesting and which are not. Much the same goes for "integrated information theory": without some external conscious observer informally deciding what's information and what's not (or what counts as "integration") to make the premise seem plausible (and carefully picking plausible examples), you just have a temperature-like metric which would be of no interest whatsoever if not for the outrageous claim that it measures consciousness. A metric that is ridiculously huge for, e.g., turbulent gases (or, if we get down to the microscale and consider atoms bouncing around chaotically, for gases in general).
0Strange710y
Again, I think you're misunderstanding. The metric I'm proposing doesn't measure how well those self-maintenance systems work, only how many of them there are. Yes, of course we're only really interested in some aspects of self-maintenance. Let's start by counting how many aspects there are, and start categorizing once that first step has produced some hard numbers.
2private_messaging10y
Ahh, OK. The thing is, though... say, a crystal puts atoms back together if you move them slightly (and a liquid doesn't). And so on, all sorts of very simple apparent self maintenance done without a trace of intelligent behaviour.
0Strange710y
What's your point? I've already acknowledged that this metric doesn't return equally low values for all inanimate objects, and it seems a bit more common (in new-agey circles at least) to ascribe intelligence to crystals or rivers than to puffs of hot gas, so in that regard it's better calibrated to human intuition than Integrated Information Theory.
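To make the axis-counting idea in this exchange concrete, here is a minimal sketch in Python. This is not Strange7's actual metric, only one way to operationalise "count how many axes a system actively regulates, ignoring how well it regulates them"; the Axis class, the homeostasis_score function, and the example systems are all hypothetical, invented for illustration.

```python
# Toy sketch (assumed, not from the thread): score a system by the number
# of axes along which some subsystem actively corrects or prevents
# disturbances, not by how precisely each axis is controlled.

from dataclasses import dataclass

@dataclass
class Axis:
    name: str
    actively_regulated: bool  # is there a subsystem that corrects disturbances?

def homeostasis_score(axes):
    """Count actively regulated axes; ignore regulation quality entirely."""
    return sum(1 for axis in axes if axis.actively_regulated)

# Illustrative example systems (the axis lists are invented, not measured).
brick = [Axis("temperature", False), Axis("shape", False)]
star = [Axis("pressure/gravity balance", True), Axis("chemical composition", False)]
mammal = [Axis("temperature", True), Axis("blood chemistry", True),
          Axis("tissue repair", True), Axis("threat avoidance", True)]

for label, system in [("brick", brick), ("star", star), ("mammal", mammal)]:
    print(label, homeostasis_score(system))
```

On this toy scoring the brick gets 0, the star 1, and the mammal 4, which reproduces the ordering discussed above while deliberately saying nothing about where (or whether) a "consciousness" threshold sits on the scale.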

I notice that I am still confused. In the past I hit 'ignore' when people talked about consciousness, but let's try 'explain' today.

The original post states:

Would we disagree over what my computer can do? What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?

Does this mean that if two systems have almost the same capabilities, we would then also expect them to have a similar chance of receiving the consciousness label? In other words: is conscious...

Would we disagree over what my computer can do?

Yes, if you are using "conscious" with sufficient deference to ordinary usage. There are at least two aspects to consciousness in that usage: access consciousness, and phenomenal consciousness. Access consciousness applies to information which is globally available to the organism for control of behavior, verbal report, inference, etc. It's phenomenal consciousness which your computer lacks.

Scott Aaronson's "Pretty hard problem of consciousness", which shminux mentions, is relevant h...

2KnaveOfAllTrades10y
Thanks for this reply; this is the kind of quarter that seemed most promising for usefulness to 'consciousness'. I am confused about qualia. Qualia has strong features of a confused concept, such that if 'consciousness' is getting at a qualia-nonqualia distinction, then it would seem to be a recursive or fractal confusion.

If qualia is to be a non-epiphenomenal concept, then there must be non-mysterious differences one could in principle point to to distinguish qualia-havers from non-qualia-havers. History of science strongly suggests a functionalism under which a version of me implemented on a different substrate but structurally identical should experience qualia which are the same, or at least the same according to whatever criteria we might care about.

It feels to me like qualia is used in an epiphenomenal way. But if it is to be non-confused, it cannot be; it must refer to sets of statements like, 'This thing reacts in this way when it is poked with a needle, this way when UV light hits its eyes, ...' or something (possibly less boring propositions, but still fundamentally non-mysterious ones).

Insomuch as 'consciousness' depends on the notion of 'qualia', I am very wary of its usage, because then a less-likely-to-be-confused concept (consciousness) is being used in terms of a very dubious, more-likely-to-be-confused concept (qualia). If we're using 'consciousness' as a byword for qualia, then we should just say 'qualia' and be open about the fact that we're implicitly touching upon the (hard) problem of consciousness, which is at best very confusing and difficult and at worst a philosophical quagmire, so that we do not become overconfident in what we are saying or that what we are saying is even meaningful.

Eliezer has his thing where he refers to Magical Reality Fluid to lampshade his confusion. Using 'consciousness' to smuggle in qualia feels like the opposite of that approach. For all this skepticism, I do worry that those who dismiss qualia outright a
5torekp10y
I submit that it is (many of) the theories and arguments that are confused, not the concept. The concept has some semantic vagueness, but that's not necessarily fatal (compare "heap").

If "structurally identical" applies at the level of algorithms - see thesis #5 and "consistent position" #2 in this post by dfranke - then I agree.

That happens when people embrace some of the confused theories. Then comes the attack of the p-zombies. I'm all in favor of talking openly about qualia, because that is the hard problem fueling the bad metaphysics, not access consciousness. Self-consciousness can also be tricky, but in good part because it aggravates qualia problems.

But I don't think the hard problem is an inescapable quagmire. Instead, the intersection of self-reference (with all its "paradoxes") and the appearance/reality distinction creates some unique conditions, in which many of our generally-applicable epistemic models and causal reasoning patterns fail. If you've got time for a book, I recommend Jenann Ismael's The Situated Self, which in spots could have been better written, but is well worth the effort. This paper covers a lot, too.

That's the reality side of redness; what people puzzle over is the relations between appearances (e.g. inverted spectrum worries). Maybe I misunderstand you. My claim is that the fact that appearances are mere appearances definitely does contribute to the hardness of the hard problem.

I don't think qualia and consciousness are fundamental in any of the usual senses - like basic particles? And I have no idea how simple and elegant an Essence has to be before it becomes Platonic. But humans think in prototypes and metaphors, and we get along just fine. We don't need to have an answer to every conceivable edge-case in order to make productive use of a concept. Nor do we need such precision even to see, in rough outline, how the referents of the concept, in the cases that interest us, would be tractable using our best scientific theo
4TheAncientGeek10y
Why? Do you think that consciousness is defined in terms of qualia, and that qualia are in turn defined in terms of consciousness?

Yes. 'Must be' doesn't imply 'must be knowable', though. The criteria we care about are the killer, though. An exact duplicate all the way down would be an exact duplicate, and therefore not running on a different substrate. What you are therefore talking about is a duplicate of the relevant subset of structure, running on a different substrate. But knowing what the relevant subset is is no easier than the Hard Problem.

The simplistic theory that qualia are distinct from physics has that problem. The simplistic theory that qualia are identical to physics has the problem that no one can show how that works. The simplistic theory that qualia don't exist at all has the problem that I have them all the time. However, none of that has much to do with the definition of qualia. If we had a good theory of qualia, we would know what causes them and what they cause. But we need the word 'qualia' to point out what we don't have a good theory of. When you complain that qualia seem epiphenomenal, what you are actually complaining about is the lack of a solution to the HP.

Why? Why can't it mean "the ways things seem to a subject" or "an aspect of consciousness we don't understand", or both? We don't know the reference of "qualia", right enough, but that does not mean the sense is a problem.

Why is it more confused? On the face of it, 'qualia' labels a particular aspect of consciousness. Surely that would make it more precise.
2CCC10y
Surely any computer that controls an automated process must do this? Consider, for example, a robotic arm used to manufacture a car. The software knows that if the arm moves like so, then it will be holding the door in the right place to be attached; and it knows this before it actually moves the arm. So it must have an internal knowledge of its own state, and of possible future states. Isn't that exactly what you describe here?
3torekp10y
I was focusing on perceptual channels, so your motor-channel example would be analogous, but not the same. If the robot uses proprioception to locate the arm, and if it makes an appearance/reality distinction on the proprioceptive information, then you have a true example.
1CCC10y
Hmmm. Assuming for the moment that the robot has a sensor of some type on each joint that can tell it at which angle that joint is being held; that would be a robotic form of proprioception. And if it considers hypothetical future states of the arm, as it must do in order to safely move the arm, then it must consider what proprioceptive information it expects to get from the arm, and compare this to the reality (the actual sensor value changes) during the movement of the arm. I think that's an example of what you're talking about...
1torekp10y
One more thing: if the sensor values are taken as absolute truth and the motor-commands are adjusted to meet those criteria, that still wouldn't suffice. But if you include a camera as well as the proprioceptors, and appropriate programming to reconcile the two information sources into a picture of an underlying reality, and make explicit comparisons back to each sensory domain, then you've got it. Note that if two agents (robotic or human) agree on what external reality is like, but have no access to each other's percepts, the whole realm of subjective experience will seem quite mysterious. Each can doubt that the other's visual experience is like its own, for example (although obviously certain structural isomorphisms must obtain). Etc. Whereas, if an agent has no access to its own subjective states independent of its picture of reality, it will see no such problem. Agreement on external reality satisfies its curiosity entirely. This is why I brought the issue up. I apologize for not explaining that earlier; it's probably hard to see what I'm getting at without knowing why I think it's relevant.
2CCC10y
Ah, thank you. That makes it a lot clearer. I've seen a system that I'm pretty sure fulfills your criteria - it uses a set of multiple cameras at carefully defined positions and reconciles the pictures from these cameras to try to figure out the exact location of an object with a very specific colour and appearance. That would be the "phenomenal consciousness" that you describe; but I would not call that system any more or less conscious than any other computer. Ah - surely that requires something more than just an appearance-reality distinction. That requires an appearance-reality distinction and the ability to select its own thoughts. While the specific system I refer to in the second paragraph has an appearance-reality distinction, I have yet to see any sign that it is capable of choosing what to think about.
3torekp10y
That (thought selection) seems like a good angle. I just wanted to throw out a necessary condition for phenomenal consciousness, not a sufficient one.
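A rough sketch, in Python, of the necessary condition torekp describes here: the robot fuses a proprioceptive reading and a camera reading into a single estimate of the underlying reality, then compares each channel's raw "appearance" back against that estimate. Nothing below comes from the thread or from any real robotics library; the function names, the fusion weight, and the example numbers are all assumed for illustration.

```python
# Hypothetical illustration of an appearance/reality distinction:
# two sensory channels report the same joint angle, neither is treated
# as ground truth, and each is checked against the fused estimate.

def reconcile(proprio_angle, camera_angle, proprio_weight=0.7):
    """Fuse two noisy estimates of one joint angle into a single 'reality'."""
    return proprio_weight * proprio_angle + (1 - proprio_weight) * camera_angle

def appearance_vs_reality(proprio_angle, camera_angle, tolerance=2.0):
    reality = reconcile(proprio_angle, camera_angle)
    errors = {
        "proprioception": proprio_angle - reality,
        "camera": camera_angle - reality,
    }
    # Channels whose reading diverges from the fused estimate are flagged:
    # their "appearance" is treated as potentially misleading.
    suspect = [name for name, err in errors.items() if abs(err) > tolerance]
    return {"estimated_reality": reality, "errors": errors, "suspect_channels": suspect}

print(appearance_vs_reality(proprio_angle=44.0, camera_angle=51.0))
```

The comparison step is the point: because each sensory domain can be flagged as giving a misleading appearance of the same underlying state, the system has, in miniature, the appearance/reality distinction being discussed, without that alone making it conscious.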

Without trying to understand consciousness at all, I will note a few observables about it. We seem to be biologically prone to recognizing it. People seem to recognize it even in things that we would mostly agree don't actually have it. So we know biology/evolution selected for a tendency to recognize it, and we know that the selection pressure was such that the direction of error is to recognize it when it's not there rather than to fail to recognize it when it is. That implies that failure to recognize consciousness is probably very non-adaptive. Which means that it's probably pointing to something significant.

4KnaveOfAllTrades10y
It's not clear to me that categorising or treating things as conscious is innate/genetic/whatever. This seems like exactly the kind of relatively easy empirical question of human nature where anthropology can just come along and sucker-punch you with a society that has no conception of consciousness. In general, I think this heuristic is very weak evidence; belief in the supernatural and acceptance of fake or epiphenomenal explanations are mistakes to which humans and their societies are reliably prone. (In fact, if I had to try to name things that wouldn't get me sucker-punched by anthropology when I claimed them as universal to human cultures, then belief in the supernatural and fake explanations might be near the top of the list.)
[-][anonymous]10y10

Interesting post - while I don't have any real answers, I have to disagree with this point:

"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"

A computer is no more conscious than a rock rolling down a hill - we program it by putting sticks in the rock's way to guide it to a different path. We have managed to make some impressive things using lots of rocks and sticks, but there is not a lot more to it than that in terms of consciousness.

Note that you can also describe humans under that paradigm - we come pre-programmed, and then the program changes itself based on the instructions in the code (some of which allow us to be influenced by outside inputs). The main difference between us and his computer here is that we have fewer constraints, and we take more inputs from the outside world.

I can imagine other arguments for why a computer might not be considered conscious at all (mainly if I play with the definition), but I don't see much difference between us and his computer with regard to this criterion.

P.S. Also the computer is less like the rolling rock, and more like the hill, rock and sticks - i.e. the whole system.

9The_Duck10y
Careful!--a lot of people will bite the bullet and call the rock+stick system conscious if you put a complicated enough pattern of sticks in front of it and provide the rock+stick system with enough input and output channels by which it can interact with its surroundings.

I think part of the problem here is that you are confusing existence with consciousness and reason with consciousness.

Deciding that we "exist" is something that philosophers have defined in terms ranging from thinking (Descartes) to the use of language (Heidegger). If there is one thing common to human existence, it is that we have figured out that we are different because we can make a decision, using reason, to do or not to do something. We also are more aware of the passing of the seasons, planets, and other phenomena at a level beyond the instinctive (brain stem/l...

0shminux10y
Congratulations, you expanded the term to include everything and thus made it completely useless.
0cameroncowan10y
The term is not useless because it applies to everything. That would be like saying the term "air" is less useful because we don't define all the gases in it or say specifically where everything is. The point is that consciousness is like a connection, a type of connect-mind that ties all things together. It is something that, because it is within all things, allows us to connect with objects both large and small. It's a system of beingness.