It sometimes seems to me that those of us who actually have consciousness are in a minority, and everyone else is a p-zombie. But maybe that's a selection effect, since people who realise that the stars in the sky they were brought up believing in don't really exist will find that surprising enough to say so, while everyone else who sees the stars in the night sky wonders what drugs the others have been taking, or invents spectacles.
I experience a certain sense of my own presence. This is what I am talking about, when I say that I am conscious. The idea that there is such an experience, and that this is what we are talking about when we talk about consciousness, appears absent from the article.
Everyone reading this, please take a moment to see whether you have any sensation that you might describe by those words. Some people can't see colours. Some people can't imagine visual scenes. Some people can't taste phenylthiocarbamide. Some people can't wiggle their ears. Maybe some people have no sensation of their own selves. If they don't, maybe this is something that can be learned, like ear-wiggling, and maybe it isn't, like phenylthiocarbamide.
Unlike the experiences reported by some, I ...
I feel like the intensity of conscious experience varies greatly in my personal life. I feel less conscious when I'm doing my routines, when I'm surfing on the internet, when I'm having fun or playing an immersive game, when I'm otherwise in a flow state, or when I'm daydreaming. I feel more conscious when I meditate, when I'm in a self-referencing feedback loop, when I'm focusing on the immediate surroundings, when I'm trying to think about the fundamental nature of reality, when I'm very sad, when something feels painful or really unpleasant, when I feel like someone else is focusing on me, when I'm trying to control my behavior, when I'm trying to control my impulses and when I'm trying to do something that doesn't come naturally.
I'm not sure if we're talking about the same conscious experience, so I'll try to describe it in other words. When I'm talking about the intensity of consciousness, I'm talking about heightened awareness and how the "raw" experience seems more real and time seems to go slower.
Anyway, my point is that if consciousness varies so much in my own life, I think it's reasonable to think it could vary greatly between people too. This doesn't mean that...
Regarding IIT, I can't believe just how bloody stupid it is. As Aaronson says, it is immediately obvious that this idiot metric will be huge not just for human brains but for a lot of really straightforward systems: the tea spinning in my cup, Jupiter's atmosphere, and so on all come out hyper-conscious. (Over a sufficient timeframe, small, localized differences in the input state of those systems affect almost all of the output state, if we get down to the level of individual molecules; a toy simulation below illustrates this. Liquids, gases, and plasmas end up far more conscious than solids.)
edit: I think the issue is that you can say consciousness is "integration" of "information", but as a conscious being you'd only call something "integration" or "information" if it produces something relevant to you, the conscious being (you wouldn't call it information if it isn't useful to you). Then you start scribbling formulas because you think "information" or "integration" in the technical sense has something to do with that innate notion of something interesting going on.
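To make the parenthetical point concrete, here is a minimal toy sketch in Python. This is my own illustration under made-up dynamics, not IIT's actual phi calculation: perturb one cell of a crude 'liquid' whose cells keep averaging with their neighbours, and of a crude 'solid' whose cells never interact, then count how much of the output state ends up differing.

```python
# Toy illustration only (not IIT's phi): in a mixing system, a tiny local
# difference in the input ends up affecting almost all of the output state,
# while in a non-mixing system it stays confined to one cell.
import numpy as np

rng = np.random.default_rng(0)
n, steps = 100, 200

def diffusive(state, steps):
    """A crude 'liquid': each cell repeatedly averages with its neighbours (periodic ring)."""
    s = state.copy()
    for _ in range(steps):
        s = (np.roll(s, 1) + s + np.roll(s, -1)) / 3.0
    return s

def frozen(state, steps):
    """A crude 'solid': cells never interact, so nothing mixes."""
    return state.copy()

base = rng.normal(size=n)
poked = base.copy()
poked[0] += 1e-3  # a small, localized difference in the input state

for name, dynamics in [("diffusive", diffusive), ("frozen", frozen)]:
    diff = np.abs(dynamics(poked, steps) - dynamics(base, steps))
    affected = np.count_nonzero(diff > 1e-12)
    print(f"{name}: {affected}/{n} output cells affected by a one-cell input change")
```

On this deliberately crude proxy, the mixing system ends up with every output cell affected while the frozen one has exactly one, which is the shape of the worry about liquids, gases, and plasmas scoring high on integration-style measures.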
Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.
I don't see how sex doesn't carve reality at the joints. In the space of actually really-existing humans it's a pretty sharp boundary and summarizes a lot of characteristics extremely well. It might not do so well in the space of possible humans, but why does that matter? The process by which possible humans become instantiated isn't manna from heaven - it has a causal structure that depends on the existence of sex.
I agree it is a pretty sharp boundary, for all the obvious evolutionary reasons - nevertheless, there are a significant number of actual really-existing humans who are intersex/transgender. This is also not too surprising, given that evolution is a messy process. In addition to the causal structure of sexual selection and the evolution of humans, there are also causal structures in how sex is implemented, and in some cases, it can be useful to distinguish based on these instead.
For example, you could distinguish between karyotype (XX, XY, but also XYY, XXY, XXX, X0 and several others), genotype (e.g. mutations on SRY or AR genes), and phenotypes, like reproductive organs, hormonal levels, various secondary sexual characteristics (e.g. breasts, skin texture, bone density, facial structure, fat distribution, digit ratio), mental/personality differences (like sexuality, dominance, spatial orientation reasoning, nurturing personality, grey/white matter ratio, risk aversion), etc...
Usually when we say "consciousness", we mean self-awareness. It's a phenomenon of our cognition that we can't explain yet, we believe it does causal work, and if it's identical with self-awareness, it might be why we're having this conversation.
I personally don't think it has much to do with moral worth, actually. It's very warm-and-fuzzy to say we ought to place moral value on all conscious creatures, but I actually believe that a proper solution to ethics is going to dissolve the concept of "moral worth" into some components like (blatantly making names up here) "decision-theoretic empathy" (agents and instances where it's rational for me to acausally cooperate), "altruism" (using my models of others' values as a direct component of my own values, often derived from actual psychological empathy), and even "love" (outright personal attachment to another agent for my own reasons -- and we'd usually say love should imply altruism).
So we might want to be altruistic towards chickens, but I personally don't think chickens possess some magical valence that stops them from being "made of atoms I can use for something else", ot...
it's a question of whether you projected consciousness onto the code
Consciousness is much better projected onto tea kettles:
We put the kettle on to boil, up in the nose of the boat, and went down to the stern and pretended to take no notice of it, but set to work to get the other things out.
That is the only way to get a kettle to boil up the river. If it sees that you are waiting for it and are anxious, it will never even sing. You have to go away and begin your meal, as if you were not going to have any tea at all. You must not even look round at it. Then you will soon hear it sputtering away, mad to be made into tea.
It is a good plan, too, if you are in a great hurry, to talk very loudly to each other about how you don’t need any tea, and are not going to have any. You get near the kettle, so that it can overhear you, and then you shout out, “I don’t want any tea; do you, George?” to which George shouts back, “Oh, no, I don’t like tea; we’ll have lemonade instead – tea’s so indigestible.” Upon which the kettle boils over, and puts the stove out.
We adopted this harmless bit of trickery, and the result was that, by the time everything else was ready, the tea was waiting.
I think "intelligence" or "consciousness" aren't well-defined terms yet, they're more like pointers to something that needs to be explained. We can't build an intelligent machine or a conscious machine yet, so it seems rash to throw out the words.
The term 'consciousness' carries the fact that while we still don't know exactly what the Magic Token of Moral Worth is, we know it's a mental feature possessed by humans. This distinguishes us from, say, the Euthyphro-type moral theory where the Magic Token is a bit set by god and is epiphenomenal and only detectable because god gives us a table of what he set the bit on.
Blatant because-I-felt-like-it speculation: "ethics" is really game theory for agents who share some of their values.
As I mentioned previously here, Scott seems to answer the question by posing his "Pretty-Hard Problem of Consciousness": the intuitive idea of consciousness is a useful benchmark to check any "theory of consciousness" against. Not the borderline cases, but the obvious ones.
For example, suppose you have a qualitative model of a "heap" (as in, a heap of sand, not the software data structure). If this model predicts that 1 grain is a heap, it obviously does not describe what is commonly considered a heap. If it tells you that a ...
The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is defined (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'.
Many people think that consciousness in the sense of having the capability to experience suffering or pleasure makes an entity morally relevant, because ha...
Insofar as it's appropriate to post about a well-defined problem rather than a complete solution to it, I consider this post to be of sufficient quality to deserve being posted in Main.
[Part 1]
I like this post; I also doubt there is much coherence, let alone usefulness, to be found in most of the currently prevailing concepts of what consciousness is.
I prefer to think of words and the definitions of those words as micro-models of reality that can be evaluated in terms of their usefulness, especially in building more complex models capable of predictions. As in your excellent example of gender, words and definitions basically just carve complex features of reality into manageable chunks at the cost of losing information - there is a trade-o...
So with consciousness, is it a useful concept? Well, it certainly labels something without which I would simply not care about this conversation at all, as well as a bunch of other things. I personally believe p-zombies are impossible, and that building a working human except without consciousness would be like building a working gasoline engine except without heat. I mention this for context; I think my belief about p-zombies is actually pretty common.
About the statement "I shouldn't eat chickens because they are conscious": you ask what is it ...
Yes. If we change "We shouldn't eat chickens because they are conscious" to "We shouldn't eat chickens because they want to not be eaten," then this becomes another example where, once we cash out what was meant, the term 'consciousness' can be circumvented entirely and replaced with a less philosophically murky concept. In this particular case, how clear the concept of 'wanting' (as it relates to chickens) is might be disputed, but it seems clearly a lesser mystery, or lack of clarity, than the monolith of 'consciousness'.
Consciousness is useful as a concept insofar as it relates to reality. "Consciousness" is a label used as shorthand for a complex (and not completely understood) set of phenomena. As a concept, it loses its usefulness as other, more nuanced, better-understood concepts replace it (like replacing Newton's gravity with Einstein's relativity) or as the phenomena it describes are shown likely not to exist (like the label "phlogiston").
As I am not a student of neuroscience or epistemology, I can't really say in detail whether there is any useful...
So it is that I have recently been very skeptical of the term 'consciousness' (though grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?
You haven't mentioned terms like qualia, phenomenology, and somatics. Those terms lead to debates in which the term 'consciousness' is useful. I think it's useful to be able to distinguish conscious from unconscious mental processes.
I don't think you need specific words. Germany brought forward strong philosophers and psychologi...
There seems to be a correlation between systems being described as "conscious" and those same systems having internal resources devoted to maintaining homeostasis along multiple axes.
Most people would say a brick is not conscious. Placed in a hotter or colder environment, it soon becomes the same temperature as that environment. Swung at something sharp and heavy it won't try to flee, the broken surface won't scab over, the chipped-off piece won't regrow. Splashed with paint, it won't try to groom itself. A tree is considered more conscious than ...
I notice that I am still confused. In the past I hit 'ignore' when people talked about consciousness, but let's try 'explain' today.
The original post states:
Would we disagree over what my computer can do? What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?
Does this mean that if two systems have almost the same capabilities that we would then also expect them to have a similar chance of receiving the consciousness label? In other words: is conscious...
Would we disagree over what my computer can do?
Yes, if you are using "conscious" with sufficient deference to ordinary usage. There are at least two aspects to consciousness in that usage: access consciousness, and phenomenal consciousness. Access consciousness applies to information which is globally available to the organism for control of behavior, verbal report, inference, etc. It's phenomenal consciousness which your computer lacks.
Scott Aaronson's "Pretty hard problem of consciousness", which shminux mentions, is relevant h...
Without trying to understand consciousness at all, I will note a few observables about it. We seem to be biologically prone to recognize it. People seem to recognize it even in things that we would mostly agree don't actually have it. So we know biology/evolution selected for a tendency to recognize it, and we know that the selection pressure was such that the direction of error is to recognize it when it's not there rather than to fail to recognize it when it is. That implies that failure to recognize consciousness is probably very non-adaptive. Which means that it's probably pointing to something significant.
Interesting post - while I don't have any real answers, I have to disagree with this point:
"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"
A computer is no more conscious than a rock rolling down a hill - we program it by putting sticks in the rock's way to guide it onto a different path. We have managed to make some impressive things using lots of rocks and sticks, but there is not a lot more to it than that in terms of consciousness.
Note that you can also describe humans under that paradigm - we come pre-programmed, and then the program changes itself based on the instructions in the code (some of which allow us to be influenced by outside inputs). The main difference between us and his computer here is that we have fewer constraints, and we take more inputs from the outside world.
I can imagine other arguments for why a computer might not be considered conscious at all (mainly if I play with the definition), but I don't see much difference between us and his computer with regard to this criterion.
P.S. Also the computer is less like the rolling rock, and more like the hill, rock and sticks - i.e. the whole system.
I think part of the problem here is that you are confusing existence with consciousness and reason with consciousness.
Deciding that we "exist" is something philosophers have grounded in everything from thinking (Descartes) to the use of language (Heidegger). If there is one thing common to human existence, it is that we have figured out that we are different because we can make a decision, using reason, to do or not to do something. We are also more aware of the passing of the seasons, the planets, and other phenomena at a level beyond the instinctive (brain stem/l...
Years ago, before I had come across many of the power tools in statistics, information theory, algorithmics, decision theory, or the Sequences, I was very confused by the concept of intelligence. Like many, I was inclined to reify it as some mysterious, effectively-supernatural force that tilted success at problem-solving in various domains towards the 'intelligent', and which occupied a scale imperfectly captured by measures such as IQ.
Realising that 'intelligence' (as a ranking of agents or as a scale) was a lossy compression of an infinity of statements about the relative success of different agents in various situations was part of dissolving the confusion; the reason that those called 'intelligent' or 'skillful' succeeded more often was that there were underlying processes that had a greater average tendency to output success, and that greater average success caused the application of the labels.
Any agent can be made to lose by an adversarial environment. But for a fixed set of environments, some types of decision processes might do better over that set of environments than other processes, and one can quantify this relative success in any number of ways.
It's almost embarrassing to write that since put that way, it's obvious. But it still seems to me that intelligence is reified (for example, look at most discussions about IQ), and the same basic mistake is made in other contexts, e.g. the commonly-held teleological approach to physical and mental diseases or 'conditions', in which the label is treated as if—by some force of supernatural linguistic determinism—it *causes* the condition, rather than the symptoms of the condition, in their presentation, causing the application of the labels. Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.
For the sake of brevity, even when we realise these approximations, we often use them without commenting upon or disclaiming our usage, and in many cases this is sensible. Indeed, in many cases it's not clear what the exact, decompressed form of a concept would be, or it seems obvious that there can in fact be no single, unique rigorous form of the concept, but that the usage of the imprecise term is still reasonably consistent and correlates usefully with some relevant phenomenon (e.g. tendency to successfully solve problems). Hearing that one person has a higher IQ than another might allow one to make more reliable predictions about who will have the higher lifetime income, for example.
However, widespread use of such shorthands has drawbacks. If a term like 'intelligence' is used without concern or without understanding of its core (i.e. tendencies of agents to succeed in varying situations, or 'efficient cross-domain optimization'), then it might be used teleologically; the term is reified (the mental causal graph goes from "optimising algorithm->success->'intelligent'" to "'intelligent'->success").
In this teleological mode, it feels like 'intelligence' is the 'prime mover' in the system, rather than a description applied retroactively to a set of correlations. But knowledge of those correlations makes the term redundant; once we are aware of the correlations, the term 'intelligence' is just a pointer to them, and does not add anything to them. Despite this, it seems to me that some smart people get caught up in obsessing about reified intelligence (or measures like IQ) as if it were a magical key to all else.
Over the past while, I have been leaning more and more towards the conclusion that the term 'consciousness' is used in similarly dubious ways, and today it occurred to me that there is a very strong analogy between the potential failure modes of discussions of 'consciousness' and those of discussions of 'intelligence'. In fact, I suspect that the perils of 'consciousness' might be far greater than those of 'intelligence'.
~
A few weeks ago, Scott Aaronson posted to his blog a criticism of integrated information theory (IIT). IIT attempts to provide a quantitative measure of the consciousness of a system. (Specifically, a nonnegative real number phi). Scott points out what he sees as failures of the measure phi to meet the desiderata of a definition or measure of consciousness, thereby arguing that IIT fails to capture the notion of consciousness.
What I read and understood of Scott's criticism seemed sound and decisive, but I can't shake a feeling that such arguments about measuring consciousness are missing the broader point that all such measures of consciousness are doomed to failure from the start, in the same way that arguments about specific measures of intelligence are missing a broader point about lossy compression.
Let's say I ask you to make predictions about the outcome of a game of half-court basketball between Alpha and Beta. Your prior knowledge is that Alpha always beats Beta at (individual versions of) every sport except half-court basketball, and that Beta always beats Alpha at half-court basketball. From this fact you assign Alpha a Sports Quotient (SQ) of 100 and Beta an SQ of 10. Since Alpha's SQ is greater than Beta's, you confidently predict that Alpha will beat Beta at half-court.
Of course, that would be wrong, wrong, wrong; the SQs are encoding (or compressing) the comparative strengths and weaknesses of Alpha and Beta across various sports, and in particular that Alpha always loses to Beta at half-court. (In fact, not even that much information is necessarily encoded, since other combinations of results might lead to the same scores.) So to just look at the SQs as numbers and use that as your prediction criterion is a knowably inferior strategy to looking at the details of the case in question, i.e. the actual past results of half-court games between the two.
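Here is a quick sketch of that point in Python, with hypothetical head-to-head results and a made-up SQ formula (the numbers are purely illustrative and don't reproduce the 100 and 10 above): the compressed scalar throws away exactly the information needed to call the half-court game correctly.

```python
# Hypothetical head-to-head records: 1 means Alpha beat Beta at that sport,
# 0 means Beta won. These are made-up illustrative values.
records = {
    "tennis": 1,
    "sprinting": 1,
    "swimming": 1,
    "golf": 1,
    "half-court basketball": 0,
}

def sq(wins):
    """A made-up scalar 'Sports Quotient': share of head-to-heads won, times 100."""
    wins = list(wins)
    return 100 * sum(wins) / len(wins)

alpha_sq = sq(records.values())                  # 80.0
beta_sq = sq(1 - w for w in records.values())    # 20.0

# Prediction from the compressed scalars alone:
scalar_prediction = "Alpha" if alpha_sq > beta_sq else "Beta"
# Prediction from the uncompressed, sport-specific record:
record_prediction = "Alpha" if records["half-court basketball"] else "Beta"

print(f"SQ-based prediction for half-court: {scalar_prediction}")      # Alpha (wrong)
print(f"Record-based prediction for half-court: {record_prediction}")  # Beta
```

Both predictions come from the same data; the scalar just discards the part that mattered here.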
Since measures like this fictional SQ or actual IQ or fuzzy (or even quantitative) notions of consciousness are at best shorthands for specific abilities or behaviours, tabooing the shorthand should never leave you with less information, since a true shorthand, by its very nature, does not add any information.
When I look at something like IIT, which (if Scott's criticism is accurate) assigns a superhuman consciousness score to a system that evaluates a polynomial at some points, my reaction is pretty much, "Well, this kind of flaw is pretty much inevitable in such an overambitious definition."
Six months ago, I wrote:
"...it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?))..."
Mark Friedenbach replied recently (so, a few months later):
"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"
I feel like if Mark had made that reply soon after my comment, I might have had a hard time formulating why, but that I would have been inclined towards disputing that my computer is conscious. As it is, at this point I am struggling to see that there is any meaningful disagreement here. Would we disagree over what my computer can do? What information it can process? What tasks it is good for, and for which not so much?
What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?
Even if we did disagree (or at least did not agree) over, say, an average human's ability to detect and avoid ultraviolet light without artificial aids and modern knowledge, this lack of agreement would not feel like a messy, confusing philosophical one. It would feel like one tractable to direct experimentation. You know, like, blindfold some experimental subjects, control subjects, and experimenters, and compare how the experimental subjects react to ultraviolet light with how the control subjects react to other kinds of light. Just like if we were arguing about whether Alpha or Beta is the better athlete, there would be no mystery left over once we'd agreed about their relative abilities at every athletic activity. At most there would be terminological bickering over which scoring rule over athletic activities we should be using to measure 'athletic ability', but not any disagreement for any fixed measure.
I have been turning it over for a while now, and I am struggling to think of contexts in which consciousness really holds up to attempts to reify it. If asked why it doesn't make sense to politely ask a virus to stop multiplying because it's going to kill its host, a conceivable response might be something like, "Erm, you know it's not conscious, right?" This response might well do the job. But if pressed to cash out this response, what we're really concerned with is the absence of the usual physical-biological processes by which talking at a system might affect its behaviour, so that there is no reason to expect the polite request to increase the chance of the favourable outcome. Sufficient knowledge of physics and biology could make this even more rigorous, and no reference need be made to consciousness.
The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is *defined* (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'. But then it's not clear why we should call this moral criterion 'consciousness'; insomuch as consciousness is about information processing or understanding an environment, it's not obvious what connection this has to moral worth. And insomuch as consciousness is the Magic Token of Moral Worth, it's not clear what it has to do with information processing.
If we relabelled zxcv=conscious and rewrote, "We shouldn't eat chickens because they're zxcv," then this makes it clearer that the explanation is not entirely satisfactory; what does zxcv have to do with moral worth? Well, what does consciousness have to do with moral worth? Conservation of argumentative work and the usual prohibitions on equivocation apply: You can't introduce a new sense of the word 'conscious', then plug it into a statement like "We shouldn't eat chickens because they're conscious" and dust your hands off as if your argumentative work is done. That work is done only if one's actual values and the information-processing definition of consciousness already exactly coincide, and this coincidence is known. But it seems to me like a claim of any such coincidence must stem from confusion rather than actual understanding of one's values; valuing a system commensurately with its ability to process information is a fake utility function.
When intelligence is reified, it becomes a teleological fake explanation; consistently successful people are consistently successful because they are known to be Intelligent, rather than their consistent success causing them to be called intelligent. Similarly consciousness becomes teleological in moral contexts: We shouldn't eat chickens because they are called Conscious, rather than 'these properties of chickens mean we shouldn't eat them, and chickens also qualify as conscious'.
So it is that I have recently been very skeptical of the term 'consciousness' (though grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?