Consciousness is a difficult question because it is poorly defined and because it is the subjective experience of the entity experiencing it. Because an individual experiences their own consciousness directly, that experience is always richer and more compelling than the perception of consciousness in any other entity; your own consciousness always seems more “real” and richer than the would-be consciousness of another entity.

Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness. However, there must be certain computational functions that must be accomplished for consciousness to be experienced. I am not attempting to discuss all computational functions that are necessary; this is just a first step at enumerating some of them and considering implications.

First, an entity must have a “self detector”: a pattern-recognition structure that it uses to recognize its own state of being an entity, and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can't be conscious that it is an entity. To rephrase Descartes: "I perceive myself to be an entity, therefore I am an entity." It is possible to be an entity and not perceive that one is an entity; this happens in humans, though rarely. Other computational structures may be necessary as well, but without the ability to recognize itself as an entity, an entity cannot be conscious.

All pattern recognition is inherently subject to errors, usually classified as type 1 (false positives) or type 2 (false negatives). In pattern recognition there is an inherent trade-off between false positives and false negatives: reducing both at once is much more difficult than trading one off against the other.
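A toy illustration of this trade-off (my own sketch; the signal distributions and threshold values are invented for illustration, not taken from any biological data): a detector that fires whenever a noisy signal crosses a threshold can lower one error rate only by raising the other.

```python
import random

random.seed(0)

# Invented toy signals: values near 1.0 come from real entities,
# values near 0.0 from background noise.
entities = [random.gauss(1.0, 0.5) for _ in range(10_000)]
noise = [random.gauss(0.0, 0.5) for _ in range(10_000)]

def detect(signal, threshold):
    """Fire ("entity present") when the signal crosses the threshold."""
    return signal > threshold

for threshold in (0.2, 0.5, 0.8):
    fn_rate = sum(not detect(s, threshold) for s in entities) / len(entities)
    fp_rate = sum(detect(s, threshold) for s in noise) / len(noise)
    print(f"threshold={threshold}: false positives={fp_rate:.1%}, "
          f"false negatives={fn_rate:.1%}")
```

Moving the threshold only slides errors from one column to the other; reducing both at once requires better separation of the underlying signals, which is the harder problem.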

I suspect that detection of external entities evolved before detection of internal entities, driven by predator avoidance, prey capture, offspring recognition, and mate detection. Social organisms can also recognize other members of their group. The objectives in recognizing other entities vary, and so the fidelity of recognition required varies as well. Entity detection is followed by entity identification and the sorting of the entity into classes that determine how to interact with it: a potential mate: reproduce; offspring: feed and care for; a predator: run away.

Humans have entity detectors and those detectors exhibit false positives (detecting the entity (spirit) of the tree, rock, river and other inanimate objects, pareidolia) and false negatives (not recognizing a particular ethnic group as fully human). Evolution tends to bias detection toward false positives: a false alarm in detecting a predator is a lot better than a non-detection of a predator.
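The bias falls out of a simple expected-cost calculation. Reusing the same invented setup as the sketch above (the costs are hypothetical, not measured biology): if a missed predator is a thousand times worse than a false alarm, the cost-minimizing threshold drops far below the "balanced" point.

```python
import random

random.seed(0)

# Hypothetical, illustrative costs: a false alarm wastes a little energy,
# a missed predator is likely fatal.
COST_FALSE_ALARM = 1
COST_MISS = 1000

predators = [random.gauss(1.0, 0.5) for _ in range(10_000)]  # real threats
harmless = [random.gauss(0.0, 0.5) for _ in range(10_000)]   # wind, shadows

def expected_cost(threshold):
    miss_rate = sum(s <= threshold for s in predators) / len(predators)
    alarm_rate = sum(s > threshold for s in harmless) / len(harmless)
    return miss_rate * COST_MISS + alarm_rate * COST_FALSE_ALARM

# Scan candidate thresholds; the minimum lands well below the midpoint
# (0.5), i.e. the optimal detector is biased toward crying wolf.
best = min((t / 10 for t in range(-20, 21)), key=expected_cost)
print("cost-minimizing threshold:", best)
```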

How the human entity detector works is not fully understood; I have a hypothesis, which I outline in my blog post on xenophobia.

I suggest that when two humans meet, they unconsciously do the equivalent of a Turing Test, with the objective of determining whether the other entity is “human enough”. Essentially they try to communicate, and if the error rate is too high (due to non-consilience of communication protocols), then xenophobia is triggered via the uncanny valley effect. In the context of this discussion, the entity detector defaults to non-human (or non-green-beard; see below).
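Here is a hedged sketch of that hypothesis (my own formalization; the protocols, signals, and tolerance value are all invented): model each person's communication protocol as a mapping from social signals to expected behavior, and trigger xenophobia when the mismatch rate exceeds a tolerance, defaulting to "non-green-beard".

```python
# A sketch under invented assumptions, not an established model of social cognition.

def communication_error_rate(mine: dict, theirs: dict) -> float:
    """Fraction of social signals the two protocols interpret differently."""
    signals = mine.keys() | theirs.keys()
    mismatches = sum(mine.get(s) != theirs.get(s) for s in signals)
    return mismatches / len(signals)

def classify_stranger(mine: dict, theirs: dict, tolerance: float = 0.3) -> str:
    # The default classification is non-human / non-green-beard;
    # acceptance has to be earned by consilient communication.
    if communication_error_rate(mine, theirs) > tolerance:
        return "non-green-beard (xenophobia triggered)"
    return "green-beard (human enough)"

us = {"greeting": "bow", "eye_contact": "brief", "humor": "irony"}
them = {"greeting": "handshake", "eye_contact": "steady", "humor": "irony"}
print(classify_stranger(us, them))  # 2 of 3 signals mismatch -> xenophobia
```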

I think the uncanny valley effect is an artifact of the human entity detector (which evolved only to facilitate the survival and reproduction of the humans who carry it). Killing entities that are close enough to be potential competitors and rivals, but not so close that they might be kin, is a good evolutionary strategy: the functional equivalent of the mythic green-beard gene (a gene that causes both the expression of a green beard and the compulsion to kill all without a green beard). Because humans evolved a large brain recently, the “non-green-beard detector” can't be instantiated by neural structures directly and completely specified by genes; it must have a learned component. I think this learned component is the mechanism behind cultural xenophobia and religious bigotry.

Back to consciousness. Consciousness implies the continuity of entity identity over time. The individual that is conscious at time=1 is self-perceived to be “the same” individual as is conscious at time=2. What does this actually mean? Is there a one-to-one correspondence between the two entities? No, there is not. The entity at time=1 will evolve into different entities at time=2 depending on the experience-path the entity has taken.

If a snapshot of the entity at time=1 is taken, duplicated into multiple exact copies, and each copy is allowed to have different experiences, then at time=2 there will be multiple different entities derived from the original. Is any one of these entities “more” like the original entity? No: they are all equally derived from the original, and all equally like it. Each of them will have the subjective experience that it is derived from the original (because it is), the subjective experience that it is the original (because as far as each one knows, it is), and therefore the subjective experience that all the others must be impostors.
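The thought experiment is easy to make concrete. In this minimal sketch (the "entity" is just a dictionary of memories; no claim about real minds is intended), every copy carries an unbroken memory chain back to the snapshot, so each one's internal evidence for being the original is identical:

```python
import copy

# Snapshot at time=1, then three exact copies with divergent experiences.
original = {"memories": ["t=1: I exist"], "claim": "I am the original"}

copies = [copy.deepcopy(original) for _ in range(3)]
for entity, experience in zip(copies, ["saw a sunrise", "read a book", "climbed a hill"]):
    entity["memories"].append(f"t=2: {experience}")

# Each copy's memories run continuously back to time=1, so each concludes,
# from identical kinds of evidence, that it is the original.
for entity in copies:
    print(entity["claim"], "|", entity["memories"])
```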

This seeming paradox arises because of the limited resolution of the human entity detector. Can that detector distinguish between extremely similar versions of entities? To do pattern recognition, one must have a pattern for comparison, and the more exacting the comparison, the more exacting the reference pattern must be. In the limit, a perfect comparison requires an exact and complete representation: a complete, 100%-fidelity emulation of the entity, so that a one-to-one comparison can be made. I think this relates to what Nietzsche was talking about when he said:

“Whoever battles with monsters had better see that it does not turn him into a monster. And if you gaze long into an abyss, the abyss will gaze back into you.”

To be able to perceive anything, one must have data for the pattern recognition necessary to detect whatever is being perceived. If the pattern-recognition computational structures are unable to identify something, that something cannot be perceived. To perceive the abyss, you must have a mapping of the abyss inside of you. Because humans have self-modifying pattern-recognition structures, those structures self-modify to become better at detecting whatever is being observed. As you stare into the abyss, your brain becomes more abyss-like to optimize abyss detection.
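The resolution point above can be put in code. In this toy sketch (my construction; the "entities" are just lists of numbers), a comparison can only distinguish two entities down to the resolution of the stored reference, so a perfect comparison needs a reference as detailed as the entity itself:

```python
def matches(entity, reference, resolution):
    """Compare two 'entities' (lists of floats) coarsened to a given resolution."""
    coarsen = lambda values: [round(v / resolution) for v in values]
    return coarsen(entity) == coarsen(reference)

original = [0.111, 0.222, 0.333]
slightly_changed = [0.112, 0.221, 0.333]

print(matches(original, slightly_changed, resolution=0.01))    # True: too coarse to tell apart
print(matches(original, slightly_changed, resolution=0.0001))  # False: fine enough to distinguish
```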

With the understanding that to detect an entity one must have pattern recognition that can recognize that entity, the appearance of continuity of consciousness is seen to be an artifact of human pattern recognition. Human entity detection necessarily compares the observed entity with a reference entity. When that reference entity is the self, there is always a one-to-one correspondence between what is observed to be the self and the reference entity (which is the self), so the observed entity is always identified as the self. It is not that there is actual continuity of a self-entity over time; rather, there is the illusion of continuity, because the reference is changing exactly as the entity is changing.

There are some rare cases (depersonalization disorder) where people feel “not themselves”: they think they are a substitute, or dead, or somehow not the actual person they once were. This dissociation sometimes happens under extreme traumatic stress, and there is some thought that it is protective; the self dissociates so that it is not there to experience the trauma and be irreparably injured by it (this may not be correct). This may also be what happens during anesthesia.
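A toy model of that continuity illusion (my own construction, not a claim about neural implementation): if the reference pattern is generated from the current self rather than stored separately, the "am I still me?" check is true by construction, no matter how much the entity changes.

```python
class Entity:
    def __init__(self, state):
        self.state = state

    def reference_pattern(self):
        # The reference is not a frozen snapshot of a past self;
        # it is read off the current state.
        return self.state

    def still_me(self):
        # Always True: the reference changes exactly as the entity changes.
        return self.state == self.reference_pattern()

e = Entity({"age": 20})
print(e.still_me())   # True
e.state["age"] = 80   # decades of change
print(e.still_me())   # still True: the change never registers
```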

I think this solves the “problem” of uploading: how can the uploaded entity be identical to the non-uploaded entity? The actual answer is that it can't be, but that doesn't matter. If the uploaded entity “feels” or “believes” it is the same entity as the non-uploaded entity, then as far as the uploaded entity is concerned, it is. I appreciate that this may not be a satisfactory solution to anyone but the uploaded entity, because it implies that continuity of consciousness is merely an illusion, essentially a hallucination caused by a defect in the entity-detection pattern recognition. The data from the non-uploaded entity doesn't really matter; all that matters is whether the uploaded entity “feels” it is the same.

I think that in a very real sense, those who seek personal immortality via cryonics or via uploading are pursuing an illusion: the perpetuation of the illusion of self-entity continuity. It is the same illusion pursued by those who believe in an immortal soul, and the same illusion the ancient Egyptians pursued via mummification. If the entity is to be self-identical in perpetuity, then it cannot change. If it cannot change, then it cannot have new experiences. If it has new experiences and changes, then it is not the same entity that it was before those changes.

In terms of an AI: the AI can only be conscious if it has an entity detector that detects itself and uses itself as the pattern for that detection. It can only be conscious of aspects of itself that its entity detector has access to. For example, humans are not conscious of the data processing that goes on in the visual cortex. Why? Because the human entity detector does not attempt to map that conceptual space. If the AI's entity detector doesn't map part of its own computational equipment, then the AI won't be conscious of that part of its own data processing either.
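As a concrete sketch (hypothetical module names, not a real architecture): give the system several subsystems but let its self-model map only some of them; whatever is unmapped still runs, but never shows up under introspection, like the visual cortex in humans.

```python
# All processing that is actually happening:
subsystems = {
    "goal_planner": "deciding what to do next",
    "language": "composing a reply",
    "vision_pipeline": "edge detection on frame 4512",  # unmapped below
}

# The self-model's coverage: the entity detector never maps the vision pipeline.
self_model_coverage = {"goal_planner", "language"}

def introspect():
    """Return only the processing the self-model has access to."""
    return {name: activity for name, activity in subsystems.items()
            if name in self_model_coverage}

print(introspect())  # the vision pipeline's work is invisible to "consciousness"
```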

A recipe for friendly AI might be to program the AI to use the coherent extrapolated volition of a select group of humans as its reference entity for entity detection. In effect, that may be what some cultures are already trying to accomplish through ancestor and hero worship: attempting to mold future generations by holding up ideals as examples. That may be analogous to what EY was getting at in discussing what he wants to protect. If the AI were given a compulsion to become ever more like the CEV of its reference entity, there would be limits to how much it could change.

That might be a better use for the data that some humans want to upload in trying to achieve personal immortality. They can't achieve personal immortality, because the continuity of entity identity is an illusion. Selecting which humans to use would be tricky, but if their coherent extrapolated volition could be “captured”, combined, and then used as the reference entity for the AI, it might be a good idea. The AI would then be no worse (and no better) than the sum of those individuals. Anyone who wants to be selected is probably not suitable; we do not want individuals who seek to acquire power by clawing their way to the top of the social power hierarchy by forcing others down (in a zero-sum manner). The mind-set I think is most appropriate is that of a parent nurturing a child: not living vicariously through the child, but acting for the child's benefit. A certain turn-over per generation would keep the AI connected to present humanity while allowing for change.

I think allowing “wild-type” AIs (individuals who upload themselves) is probably too dangerous; such an upload is really just a monument to the uploader's egotistical fantasy of entity continuity. Just like the Pyramids, except a pyramid that could change into something fatally destructive (uFAI).

There are some animals that “think” and act like they are people, such as dogs that have been completely acclimated to humans. What has happened is that the dog is using a “human-like” self-representation as the reference for its entity pattern recognition, but because of its limited cognitive capacities, it doesn't recognize that the humans it observes are different from itself. An AI could be designed to think that it was human (once we knew how to actually design any AI, designing it to think it was human would be easy).

Humans can do this too (emulate another entity such that they think they are that entity); I think that is in essence what Stockholm Syndrome causes. Under severe trauma, following dissociation and depersonalization, the self re-forms, but in a pattern that matches, identifies with, and bonds to the perpetrator of the trauma. The traumatized person has attempted to emulate the “green-beard persona” to avoid the death and abuse being perpetrated upon them by the person with the “green beard”.

This may be the solution to Fermi's Paradox.  There may be no galaxy spanning AIs because by the time civilizations can accomplish such things they realize that continuity of entity identity is an illusion and have grown beyond wanting to spend effort on illusions. 

83 comments

I think you've hit on an important point in asking what dissociation syndromes show about the way the mind processes "selfhood", and you could expand upon that by considering a whole bunch of interesting altered states that seem to correspond to something in the temporal lobe (I can't remember the exact research).

I didn't completely follow the rest of the article. Is "consciousness" even the right term to use here? It has way too many meanings, and some of them aren't what you're talking about here - for example, I don't see why there can't be an entity that has subjective experience but no personal identity or self-knowledge. Consider calling the concept you're looking for "personal identity" instead.

I also take issue with some of the language around continuity of personal identity being an illusion. I agree with you that it probably doesn't correspond to anything in the universe, but it belongs, along with morality, in the category of "things we're not forced to go along with by natural law, but which are built into our goal system, such that finding they don't have any objective basis doesn't force us to give them up". I don't think aliens would be philosophically rash enough to stop existing just because of a belief that personal identity is an illusion.

Also, paragraph breaks!

daedalus2u · 14y · score 4
Yvain, what I mean by illusion is: perceptions not corresponding to objective reality due to defects in the sensory information processing used as the basis for that perception. Optical illusions are examples of perceptions that don't correspond to reality because of how our nervous system processes light signals. Errors in perception, either false positives or false negatives, are illusions.

In some of the meditative traditions there is the goal of "losing the self". I have never studied those traditions and don't know much about them. I do know about dissociation from PTSD.

There can be entities that are not self-aware. I think that most animals that don't recognize themselves in a mirror fit in the category of not recognizing themselves as entities. That was not the focus of what I wanted to talk about. To be self-aware, an entity must have an entity detector that registers “self” upon exposure to certain stimuli. Some animals do recognize other entities but don't recognize themselves as “self”; they perceive another entity in a mirror, not themselves.

The first dubious statement in the post seems to be this:

Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness.

How can you make such a statement about the entire future of science? A couple quotes:

"We may determine their forms, their distances, their bulk and their motions, but we can never know anything about their chemical and mineralogical structure" - Auguste Comte talking about stars in 1835

"Heavier than air flying machines are impossible" - Lord Kelvin, 1895

The second dubious statement comes right after the first:

However there must be certain computational functions that must be accomplished for consciousness to be experienced.

The same question applies: how on Earth do you know that? Where's your evidence? Sharing opinions only gets us so far!

And it just goes downhill from there.

nawitus · 14y · score 0
He is probably talking about the hard problem of consciousness, e.g. whether qualia exist. While it's conceptually possible to have empirical tests for subjective consciousness, it seems extremely unlikely. We can already imagine a computational simulation of the brain, and an empirical test for qualia seems impossible pretty much by definition. Sure, it's possible to test whether the simulation has self-awareness from a computational standpoint (and it will have that, since it's a human brain simulation).
Oscar_Cunningham · 14y · score 4
If there is a (physical) cause for qualia, such that qualia occur if and only if that cause is present, and we work out what that cause is, then we have an empirical test for subjective consciousness. I wouldn't call that "extremely unlikely".
nawitus · 14y · score 0
Yet qualia cannot be measured empirically (at least that's the consensus), which makes such tests extremely unlikely. And this discussion seems to be turning into a regular qualia debate. I'm not sure if that's desirable.
WrongBot · 14y · score 2
Yet. No one knows what science doesn't know.
Oscar_Cunningham · 14y · score 0
I agree that it's not desirable.
[anonymous] · 14y · score 0
I completely share your reluctance to start yet another discussion about qualia, but there's an important meta-issue here. No nontrivial statement about the real world (e.g. the impossibility of something) can ever be true "by definition", because that would make dictionaries capable of sympathetic magic.
daedalus2u · 14y · score -1
With all due respect to Lord Kelvin, he personally knew of heavier than air flying machines. We now call them birds. He called them birds too.

I'm not sure he realized they were machines, though.

daedalus2u · 14y · score -1
Yes, and some people today don't realize that the brain does computations on sensory input in order to accomplish pattern recognition, and without that computation there is no pattern recognition and no perception. Of anything.

I confess, I am lost. It seems we are in an arguments as soldiers situation in which everyone is shooting at everyone else. To recap:

  • You said "we can never “know for sure” that an entity is actually experiencing consciousness". (Incidentally, I agree.)
  • Cousin_it criticised, comparing you to Kelvin.
  • You responded, pointing out that the Kelvin quote is odd, given what we suspect Kelvin knew (Why did you do this?)
  • I suggest the Kelvin quote was maybe not so odd, given his misconceptions (Why did I do this???)
  • You point out that people today (what people?) have misconceptions as severe as Kelvin's.

This is either a rhetorical master stroke, or just random lashing out. I can't tell. I am completely lost. WTF is going on?

daedalus2u · 14y · score -4
My purpose in pointing this out was to say that yes, people today are making the same types of category errors as Kelvin was: the mistaken belief that some types of objects are fundamentally not comparable (in Kelvin's case, living things and machines; in my example, computations by a sensory neural network and computations by a machine pattern-recognition system). They are both doing computations and can both be compared as computing devices; they both need computational resources to accomplish the computations and data to do the computations on. For either of them to detect something, they need both data and computation resources. Even when the thing being detected is consciousness.

Why there is the need/desire to keep “consciousness” as a special category of things/objects for which the normal rules of sensory detection do not apply is not something I understand.

My experience has always been that if you look hard enough for errors you will find them. If someone wants to look for trivial errors and so discount whatever is present that is not trivial error, then discussion of difficult problems becomes impossible. My motivation for my original post was not to do battle, but to discuss the computational requirements of consciousness and consciousness detection. If that is such a hot-button topic that people feel the need to attack me, my arguments, and my ineptness at text formatting, pile on, and vote my karma down to oblivion, then perhaps LW is not ready to discuss such things and I should move it to my blog.
[anonymous] · 14y

then perhaps LW is not ready to discuss such things

Uh, what? The post is poorly written along a number of dimensions, and was downvoted because people don't want to see poorly written posts on the front page. The comments are pointing out specific problems with it. To interpret that as a problem with the community is a fairly egregious example of cognitive dissonance.

Perplexed · 14y · score 4
So, if I understand you, detecting consciousness in someone else is something like detecting anger in someone else: of course we can't do it perfectly, but we can still do it. Makes sense to me. Happy to have fed you the straight line.

I understand your frustration. FWIW, I upvoted you some time ago, not because I liked your post, but rather because it wasn't nearly bad enough to be downvoted that far. Maybe I felt a bit guilty.

I don't really think there is "the need/desire to keep “consciousness” as a special category of things/objects", at least not in this community. However, there is a kind of exhaustion regarding the topic, and an intuition that the topic can quickly become a quicksand.

As I said, I found your title attractive because I thought it would be something like "here are the computations which we know/suspect that a conscious entity must accomplish, and here is how big/difficult they are". Well, maybe the posting started with that, but then it shifted from computation to establish/maintain consciousness, to computation to recognize consciousness, to who knows what else. My complaint was that your posting was disorganized. But down at the sentence/paragraph level, it struck me as competent and occasionally interesting. I hope you don't let this bad experience drive you from LW.
daedalus2u · 14y · score -1
Perplexed, if detecting consciousness in someone else requires data and computation, why is our own consciousness special such that it doesn't require data and computation to be detected? No one has presented any evidence or any arguments that our own consciousness is special. Until I see a reasonable argument otherwise, my default is that my own consciousness is not special and that everyone else's consciousness is not special either.

I appreciate that some people do privilege their own consciousness. My interpretation of that self-privileging is that it is not based on any rational examination of the issue but merely on feelings. If there is a rational examination of the issue I would like to see it. If every other instance of detecting consciousness requires data and pattern recognition, then why doesn't the self-detection of self-consciousness require data and pattern recognition?

If people are exhausted by a topic, they should not read posts on it. If people are afraid of getting caught in quicksand, they should stay away from it. If people find their intuition not useful, they should not rely on it.

When I asserted that self-detection of self-consciousness requires data and computation resources, I anticipated it being labeled a self-evident and/or obvious and/or trivial statement. To have it labeled as “opinion” is completely perplexing to me. To have that labeling as “opinion” upvoted means that multiple people must share it. How can any type of cognition happen without data and computation resources? Any type of information processing requires data and computation resources. Even a dualist treatment posits mythical immaterial data and mythical immaterial computation resources to do the necessary information processing. To be asked for “evidence” that cognition requires computation resources is something I find bizarre. It is not something I know how to respond to. When multiple people need to see evidence that cognition requires computation resou…
GuySrinivasan · 14y · score 8
If smart people disagree so bizarrely, smart money's on a misunderstanding, not a disagreement. E.g. here, what might cousin_it have meant that's not insane? Perhaps that he wants evidence that there must be *certain* computational functions, rather than that he wants evidence that there *must be* certain computational functions.
daedalus2u · 14y · score 1
GuySrinivasan, I really can't figure out what is being meant. In my next sentence I say I am not trying to describe all computations that are necessary, and in the sentence after that I start talking about entity-detection computation structures being necessary. I think that is a pretty clear description of a certain cognitive structure that requires computational resources for an entity to recognize itself. What is it that cousin_it is disputing and wants me to provide evidence for? That an entity doesn't need a “self-detector” to recognize itself? That a “self-detector” doesn't require pattern recognition? That pattern recognition doesn't require computation? I really don't understand. But some other people must have understood it, because they upvoted the comment; maybe some of those people could explain it to me.
cousin_it · 14y · score 5
That consciousness requires a self detector thingy. This may or may not be true - you haven't given enough evidence either way. Sure, humans are conscious and they can also self-detect; so what? At this stage it's like claiming that flight requires flapping your wings.
daedalus2u · 14y · score 0
It is your contention that an entity can be conscious without being aware that it is conscious? There are entities that are not aware of being conscious. To me, if an entity is not aware of being conscious (i.e. is unconscious of being conscious), then it is unconscious. By my understanding of the term, the one thing an entity must be aware of to be conscious is its own consciousness. I see that as an inherent part of the definition. I cannot conceive of a definition of “consciousness” that allows for a conscious entity to be unaware that it is conscious. Could you give me a definition of "consciousness" that allows for being unaware of being conscious?
thomblake · 14y · score 3
If all that consciousness entails is being aware of being conscious, it doesn't mean anything at all, does it? We could just as well say: "My machine is fepton! I know this because it's aware of being fepton; just ask, and it will tell you that it's fepton! What's fepton, you ask? Well, it's the property of being aware of being fepton!" I'm not allowed, under your definition, to posit a conscious being that is aware of every fact about the universe except the fact of its own consciousness, only because a being with such a description would be unconscious, by definition. It seems to be a pretty useless thing to be aware of.
daedalus2u · 14y · score 0
If a being is not aware of being conscious, then it is not conscious no matter what else it is aware of. I am not saying that all consciousness entails is being aware of being conscious, but it does at a minimum entail that. If an entity does not have self-awareness, then it is not conscious, no matter what other properties that entity has. You are free to make up any hypothetical entities and states that you want, but the term “consciousness” has a generally recognized meaning. If you want to deviate from that meaning you have to tell me what you mean by the term, otherwise my default is the generally recognized meaning. Could you give me a definition of "consciousness" that allows for being unaware of being conscious?
cousin_it · 14y · score 3
10 seconds ago I was unaware of being conscious: my attention was directed elsewhere. Does that mean I was unconscious? How about a creature who spends all its life like that? - will you claim that it's only conscious because it has a potential possibility of noticing its own consciousness, or something?
daedalus2u · 14y · score 0
Yes, if you are not aware of being conscious then you are unconscious. You may have the capacity to be conscious, but if you are not using that capacity, because you are asleep, are under anesthesia, or because you have sufficiently dissociated from being conscious, then you are not conscious at that moment. There are states where people do “black-out”, that is where they seemingly function appropriately but have no memory later of those periods. Those states can occur due to drug use, they can also happen via psychogenic processes called a fugue state. There is also the term semiconscious. Maybe that would be the appropriate term to use when an entity capable of consciousness is not using that capacity.
NancyLebovitz · 14y · score 0
Do you consider flow states (being so fascinated by something that you forget yourself and the passage of time) as not being conscious?
daedalus2u · 14y · score 0
Yes. I would consider those states to be “unconscious”. I am not using “conscious” or “unconscious” as pejorative terms or as terms with any type of value, but purely as descriptive terms that describe the state of an entity. If an entity is not self-aware in the moment, then it is not conscious.

People are not self-aware of the data processing their visual cortex is doing (at least I am not). When you are not aware of the data processing you are doing, the outcome of that data processing is “transparent” to you; that is, the output is achieved without an understanding of the path by which the output was achieved. Because you don't have the ability to influence the data processing your visual cortex is doing, the output is susceptible to optical illusions.

Dissociation is not uncommon. In thinking about it, I think I dissociate quite a bit, and that it is fairly easy for me to dissociate. I do my best intellectual work when I am in what I call a “dissociative focus”, where I really am quite oblivious to a lot of extraneous things and even to my physical state: hunger, fatigue, those kinds of things. I think that entering a dissociative state is not uncommon, particularly under conditions of very high stress. I think there is a reason for that: under conditions of very high stress, all computational resources of the brain are needed to deal with whatever is causing that stress. Spending computational resources on being conscious or self-aware is a luxury that an entity can't afford while it is “running from a bear” (to use my favorite extreme stress state).

I haven't looked at the living luminously sequences carefully, but I think I mostly disagree with it as something to strive for. It is ok, and if that is what you want to do that is fine, but I don't aspire to think that way. Trying to think that way would interfere with what I am trying to accomplish. I see living while being extremely conscious of self (i.e. what I understand to be the luminous state), and…
Perplexed · 14y · score 5
It strikes me as bizarre too, particularly here. So, you have to ask yourself whether you are misinterpreting. Maybe they are asking for evidence of something else.

You are asking me to think about topics I usually try to avoid. I believe that most talk about cognition is confused, and doubt that I can do any better. But here goes. During the evolutionary development of human cognition, we passed through these stages:

* (1) recognition of others (i.e. animate objects) as volitional agents who act so as to maximize the achievement of their own preferences. The ability to make this discrimination between animate and inanimate is a survival skill, as is the ability to infer the preferences of others.
* (2) recognition of others as epistemic agents who have beliefs about the world. The ability to infer others' beliefs is also a survival skill.
* (3) recognition that among the beliefs of others is the belief that we ourselves are volitional and epistemic agents. It is a very important survival skill to infer the beliefs of others about ourselves.
* (4) roughly at the same time, we come to understand that the beliefs of others that we are volitional and epistemic agents appear to be true. This realization is certainly interesting, but has little survival value. However, some folks call this realization "consciousness" and believe it is a big deal.
* (5) finally, we develop language so that we can both (a) discuss, and (b) introspect on, all of the above. This turns out, by accident as it were, to have enormous survival value and is the thing that makes us human. And some other folk call this linguistic ability "consciousness", rather than applying that label to the mere awareness of an equivalence in cognitive function between self and other.

So that is my off-the-cuff theory of consciousness. It certainly requires social cognition and it probably requires language. It obviously requires computation. It is relatively useless, but it is the inevitable byproduct of…
red75 · 14y · score 0
The self-model theory of subjectivity can also suggest (7): the ability to explicitly represent the state of our own knowledge, intentions, focus of attention, etc.; the ability to analyse the performance of our own brain and find ways to circumvent limitations; the ability to control the brain's resource allocation by learned (vs evolved) procedures. An interesting thing about consciousness is that the map is a part of the territory it describes, and as the map should be represented by neuronal connections and activity, it can presumably influence the territory.
daedalus2u · 14y · score 1
Yes, and 1, 2, 3, 4, 5, 6, and 7 all require data and computation resources. And to compare a map with a territory one needs a map (i.e. data), a comparator (i.e. a pattern-recognition device), and computational resources to compare the data with the territory using the comparator. When one is thinking about internal states, the map, the territory, and the comparator are all internal. That they are internal does not obviate the need for them.
thomblake · 14y · score -2
It's perplexing to me that you would be perplexed by this. Is it not your opinion? I would assume it is your opinion, since you have asserted it. It is clearly not your opinion that its negation is true.

Good first approximation: don't write posts about consciousness if you don't want to be downvoted.

Oscar_Cunningham · 14y · score 9
Better approximation: don't write posts about consciousness unless you have read about mysterious answers to mysterious questions, and you've had an insight that makes consciousness seem less mysterious than before.
daedalus2u · 14y · score 1
I had read mysterious answers to mysterious questions. I think I do have an explanation that makes consciousness seem less mysterious and which does not introduce any additional mysteries. Unfortunately I seem to be the only one who appreciates that. Maybe if I had started out to discuss the computational requirements of the perception of consciousness there would have been less objection. But I don't see any way to differentiate between perception of consciousness and consciousness. I don't think you can have one without the other.
nawitus · 14y · score 4
There should be some kind of "read this first before talking about consciousness" post which would at least provide some definitions, so that articles about consciousness would be comprehensible.

Well, I liked the title.

Then in the second paragraph, I was a little disappointed to read:

I am not attempting to discuss all computational functions that are necessary; this is just a first step at enumerating some of them and considering implications.

But I think, "That is ok, a partial list is a step in the right direction." The next paragraph begins promisingly with the word "First ...". But then I read and read and ... well, I never found a paragraph which started with the word "Second..." Nor could I find anything much about computation or data. So I skipped to the conclusion. But I couldn't find anything in the last 5 paragraphs or so that was even talking about consciousness.

So, I guess I have changed my mind about liking the title.

This is my first article on LW, so be gentle.

I tried. I really tried.

daedalus2u, taboo "consciousness".

wedrifid · 14y · score 3
Good idea. And since Daedalus mentions that he is new, let's also give him a link that explains why tabooing a word can be a useful way to facilitate understanding of a topic.
[anonymous] · 14y · score 0
How about: consciousness is a sensory input that senses the brain's own internal state, and of which the brain makes use in the same way as its other senses.
Perplexed · 14y · score 1
That sounds a lot like the "Global Workspace" theory of consciousness. Just ran across this.
nawitus · 14y · score 0
Consciousness actually means a number of different things, so any one definition will make discussion problematic. There really should be a number of different definitions for qualia/subjective consciousness, empirical consciousness etc.
daedalus2u · 14y · score 0
nawitus, my post was too long as it is. If I had included multiple discussions of multiple definitions of consciousness and qualia, you would either still be reading it or would have stopped because it was too long.
nawitus · 14y · score 0
And that's why we need an article somewhere which would define some common terms, so you don't have to define them all over again in every article about consciousness.
daedalus2u · 14y · score 0
[Consciousness]: The subjective state of being self-aware that one is an autonomous entity that can differentially regulate what one is thinking about.
wedrifid · 14y · score 5
So, for example, any computer program that has the ability to parse and understand relevant features of its own source code and also happens to have a few 'if' statements in some of the relevant areas. It may actually exclude certain humans that I would consider conscious. (I believe Yvain mentioned this too.)
daedalus2u · 14y · score 0
I am talking about minimum requirements, not sufficient requirements. I am not sure what you mean by "understand relevant features of its own source code". I don't know any humans that I would consider conscious that don't fit the definition of consciousness that I am using. If you have a different definition I would be happy to consider it.
wedrifid · 14y · score 2
Those two seem to be the same thing in this context. No, it's as good as any. Yet the 'any' I've seen are all incomplete. Just be very careful that when you are discussing one element of 'consciousness' you are careful to only come to conclusions that require that element of consciousness and not some part of consciousness that is not included in your definition. For example I don't consider the above definition to be at all relevant to the Fermi paradox.
daedalus2u · 14y · score 1
To be a car, a machine at a minimum must have wheels. Wheels are not sufficient to make a machine into a car. To be conscious, an entity must be self-aware of self-consciousness. To be self-aware of self-consciousness, an entity must have a "self-consciousness detector". A self-consciousness detector requires data and computation resources to do the pattern recognition necessary to detect self-consciousness. What else consciousness requires I don't know, but I know it must require detection of self-consciousness.
wedrifid · 14y · score 0
"Necessary" but not sufficient.
Perplexed · 14y · score 7
I'm not convinced it is even necessary. For example, I did not learn that I am conscious by using a consciousness detector. Instead, I was taught that I am conscious. It happened in fifth grade Spelling class. I recall that I learned both the word "conscious" and "unconscious" that day, and I unlearned the non-word "unconscience". I sometimes think that philosophers pretend too strenuously that we all work out these things logically as adults, when we all know that the reality is that we are catechised into them as children.
daedalus2u · 14y · score -1
Perplexed, how do you know you do not have a consciousness detector? Do you see because you use a light detector? Or because you use your eyes? Or because you learned what the word “see” means? When you understand spoken language, do you use a sound detector? A word detector? Do the parts of your brain that you use to decode sounds into words, into language, into meaning not do computations on the signals those parts receive from your ears?

The only reason you can think a thought is because there are neural structures that are instantiating that thought. If your neural structures were incapable of instantiating a thought, you would be unable to think that thought. Many people are unable to think many thoughts. It takes many years to train a brain to be able to think about quantum mechanics. I am unable to think accurately about quantum mechanics; my brain does not have the neural structures to do so. My brain also does not have the neural structures to understand Chinese. If it did, I would be able to understand Chinese, which I cannot do. There has to be a one-to-one correspondence between the neural structures that instantiate a mental activity and the ability to do that mental activity. The brain is not magic; it is chemistry and physics just like everything else. If a brain can do something, it is because it has the structures that can do it.

Why is consciousness different than sight or hearing? If consciousness is something that can be detected, there need to be brain structures that are doing the detecting. If consciousness is not something that can be detected, then what is it that we are talking about? This is very basic stuff. I am just stating logical identities here. I don't understand where the disagreement is coming from.
Perplexed · 14y · score 9
I'm not sure there is a disagreement. As I said, I don't spend much time thinking about consciousness, and even less time reading about it, so please bear with me as I struggle to communicate.

Suppose I have a genetic defect such that my consciousness detector is broken. How would I know that? As I say, I didn't discover that I am conscious by introspection. I was told that I am conscious when I was young enough to believe what I was told. I was told that all the other people I know are conscious, except maybe when they are asleep or knocked out after a fall. I was told that no one really knows for sure whether my dog Cookie was conscious, but that the ants in my ant farm almost certainly were not. Based on this information, I constructed a kind of operational definition for the term. But I really had (and still have) no idea whether my definition matched anyone else's.

But here is the thing. I have a friend whose color-red qualia detector has a genetic defect. He learned the meaning of the word "red" and the word "green" as a child, just like me. But he didn't know that he had learned the wrong meanings until a teacher became suspicious and sent him to have his vision checked. See, they can detect defects in color-red qualia detectors. So, he knows his is defective. He now knows that when he was told the meaning of the word red by example, he got the wrong idea.

So how do I know that my consciousness detector is working? I do notice that even though most people were told the meaning of the word back in grade school just like me, they don't all seem to have the same idea. Are some of their consciousness detectors broken? Is mine broken? Are you and I in disagreement? If you think that we are in disagreement, do you now understand where the disagreement is coming from?

One thing I am pretty sure of: if I do have a broken consciousness detector due to a genetic defect, this defect hasn't really hurt me too much. It doesn't seem to be something crucial. I do fine ju…
Pavitra · 14y · score 6
Assume you do, in fact, have a consciousness detector. Do you trust it to work correctly in weird edge cases? Humans have fairly advanced hardwired circuitry for detecting other humans, but our human detectors fail completely when presented with a photograph or a movie screen. We see a picture of a human, and it looks like a human.
Zetetic · 14y · score 0
That seems like a very confusing way of saying this. You aren't 'self-aware of self-consciousness'; self-consciousness is, as far as I can tell in this context, equivalent to self-awareness. The phrase is totally redundant. The only meaningful reduction I can make out here is that you think that to be conscious a person has to be self-aware. I think it's probably a mistake to propose a "self-consciousness detector". What is really going on? You can focus on previously made patterns of thought and actions and ask questions about them for future reference. Why did I do this? Why did I think that? You are noticing a very complex internal process, and in doing so applying another complex internal process to the memory of that process, in order to gather useful or attractive information (I am ignorant of the physical processes that dictate when, and about what, we think during metacognition).
JoshuaZ · 14y · score 4
"self-aware" "differentially regulate" and "what one is thinking about" carry almost as much baggage as consciousness. I'm not sure that this particularly unpacking helps much.
[anonymous] · 14y · score 3
This needs further unpacking. You seem to be referring to (at least) three things simultaneously: Qualia, Self-Awareness, and Executive Control. I can imagine having any one of those without the others, which may be why so many people are disputing some of your assertions, and why your post seems so disorganized.
KrisC · 14y · score 0
Is memory necessary?
Jayson_Virissimo · 14y · score 0
I interpret your definition as being specifically about self-consciousness, not consciousness in general. Is this a good interpretation? Do you mean explicit (conceptual) self-awareness or implicit self-awareness, or both? If the former, young children probably wouldn't be conscious, but if the latter, then just about every animal would be.
knb · 14y

This may be the solution to Fermi's Paradox. There may be no galaxy spanning AIs because by the time civilizations can accomplish such things they realize that continuity of entity identity is an illusion and have grown beyond wanting to spend effort on illusions.

I don't quite get this part. How would this explain the Fermi Paradox? Why would all civilizations cease growth and expansion just because identity continuity is an illusion? Why would such an evolutionarily unstable pattern be replicated broadly amongst so many alien species?

This might be nit-picking, but I think it’s necessary, given the confusing subjects:

Humans have entity detectors and those detectors exhibit false positives (detecting the entity (spirit) of the tree, rock, river and other inanimate objects, pareidolia) and false negatives (not recognizing a particular ethnic group as fully human).

These, particularly the negative, aren't failures of the entity detectors, but of the entity classifiers. As an example, slavers do recognize slaves & potential slaves as entities; they simply classify them as non-human…

TobyBartels · 14y · score 0
I agree with you about the slaves, but I disagree with your not-quite-examples from physics; I think that these are false negatives, certainly as much as your examples of false negatives from mathematics. People did once misclassify planetary orbits as "intervention by angelic beings" and falling down as "elements seeking their level", but the real point of Newton's law of gravity is that it developed an entirely new classification, "aspects of universal gravitation" for them; Newton recognised the previously unrecognised entity of universal gravitation. Similarly, while memes were not classified as "subjects of Darwinian evolution" until recently, we entirely missed the existence of Darwinian evolution for most of the history of recorded human thought. Conversely, if you don't agree that these are examples of entities that weren't recognised as entities, why are (say) irrational numbers an example? People knew about the ratio of the length of the diagonal of a square to the length of its side; they just misclassified it as a rational number. But again, the important discovery was that there was such a classification as "irrational number" at all; that the cited ratio is an instance is only of secondary importance.
bogdanb · 14y · score 1
I think I understand your point; I started to disagree, but then I realized the source of the disagreement. Let me see if I can explain better.

I see a white-skinned Homo sapiens and I classify it as “God-made inheritor of Earth”, and I see a black-skinned Homo sapiens and I classify it as “soul-less automaton”, and (implicitly) consider the two categories disjunct and unrelated.

Case (a): I change my mind later, and re-classify the black-skinned H.s. in the first category. Ignoring for the moment the “God” part, this is clearly a mis-classification.

Case (b): I change my mind, and describe the first category as “evolved sapient creatures worthy of respect”. Technically I just moved all white-skinned H.s. into a different, third category, leaving the first one empty. But my mind instinctively doesn't think that way: since the categories have the exact same membership, it feels as if I kept the existing category and just improved its description.

(As far as I can tell, human brains by default don't build categories as abstract concepts and then put objects in them, but build categories as collections of objects and then use abstract concepts to describe them, despite using words like “define” instead of “describe”. Brains that have learned how to reason formally, like mine and yours, also do it the other way around, but not all the time. I think that is the source of our semi-disagreement.)

[We're still in case (b).] So moving white H.s. from “God's children” to “Darwin's” isn't felt as a misclassification, rather as an “improved understanding of an existing category”. Then if I change my mind again and also move black H.s. into the “evolved sapients” category, and then rename it “human” for short, this second part feels like remediation of a mis-classification.

In our history the first part of (b), essentially changing our understanding of what it means to be human, changed in many short steps. Since the category is very close (essentially, “us”), it mostly felt like…

Humans can do this too (emulate another entity such that they think they are that entity); I think that is in essence what Stockholm Syndrome causes. Under severe trauma, following dissociation and depersonalization, the self re-forms, but in a pattern that matches, identifies with, and bonds to the perpetrator of the trauma. The traumatized person has attempted to emulate the “green-beard persona” to avoid the death and abuse being perpetrated upon them by the person with the “green beard”.

This doesn't seem to be the natural interpretation. Stockholm Syndrome is more or less the typical outcome of human social politics exaggerated somewhat.

daedalus2u · 14y · score -1
Is there something wrong with my interpretation of Stockholm Syndrome other than it is not the “natural interpretation”? Is it inconsistent with anything known about Stockholm Syndrome, how people interact, or how humans evolved? Would we consider it surprising if humans did have a mechanism to try and emulate a “green beard” if having a green beard became essential for survival? We know that some people find many green-beard-type reasons for attacking and even killing other humans. Race, ethnicity, religion, sexual orientation, gender, and so on are all reasons for hating and even killing other humans. How do the victims prevent themselves from being victimized? Usually by obscuring their identity, by attempting to display the “green beard” the absence of which brings attack. Stockholm Syndrome happens in a short period of time, so it is easier to study than the “poser” habits that occur over a lifetime. Is it fundamentally different, or is it just one point on a spectrum?

bogdanb said a lot of what I wanted to say. So I'll make the smaller point that an "entity" can be a non-living thing (I think you might want to use the term "agent" in its place) so your examples are technically not false positives (unless we are talking about the "spirit" of a thing rather than the thing itself) or even negatives.

I also don't think that dogs, however pampered or tiny-brained they may be, really can't distinguish between humans and dogs.

If I'm following your "logic" correctly, and if you yourself adhere to the conclusions you've set forth, you should have no problem with me murdering your body (if I do it painlessly). After all, there's no such thing as continuity of identity, so you're already dead; the guy in your body is just a guy who thinks he's you.

I think this may safely be taken as a symptom that there is a flaw in your argument.

daedalus2u · 14y · score 0
No, there are useful things I want to accomplish with the remaining lifespan of the body I have. That there is no continuity of personal identity is irrelevant to what I can accomplish. That continuity of personal identity is an illusion simply means that the goal of indefinite extension of personal identity is a useless goal that can never be achieved. I don't doubt that a machine could be programmed to think it was the continuation of a flesh-and-blood entity. People have posited paper clip maximizers too.
jmmcd · 14y · score 0
There might be useful things I want to accomplish with my post-upload body and brain. I agree with inklesspen: this is a fatal inconsistency.
daedalus2u · 14y · score 0
I see this as analogous to what some religious people say when they are unable to conceive of a sense of morality or any code of behavior that does not come from their God. If you are unable to conceive of a sense of purpose that is not attached to a personal sense of continued personal identity, I am not sure I can convince you otherwise. But why you consider that my ability to conceive of a sense of purpose without a personal belief in a continued sense of personal identity is somehow a "flaw" in my reasoning is not something I quite understand. Are you arguing that because some people "need" a personal sense of continued personal identity that reality "has to" be that way? People made (and still make) similar arguments about the existence of God.
jmmcd · 14y · score 0
Your entire reply deals with arguments you wish I had made. Without coming down anywhere on the issue of continued personal identity being an illusion, OR the issue of a sense of purpose in this scenario, I'm trying to point out a purely logical inconsistency: If uploading for personal immortality is "pursuing an illusion", then so is living: so you should allow inklesspen to murder you. The other way around: if you want to accomplish things in the future with your current body, then you should be able to conceive of people wanting to accomplish things in their post-upload future. The continuity with the current self is equally illusory in each case, according to you.
daedalus2u · 14y · score 0
Inklesspen's argument (which you said you agreed with) was that my belief in a lack of personal identity continuity was incompatible with being unwilling to accept a painless death, and that this constitutes a fatal flaw in my argument.

If there are things you want to accomplish, and you believe the most effective way for you to accomplish those things is via uploading what you believe will be a version of your identity into an electronic gizmo, all I can say is good luck with that. You are welcome to your beliefs. In no way does that address Inklesspen's argument that my unwillingness to immediately experience a painless death somehow contradicts or disproves my belief in a lack of personal identity continuity or constitutes a flaw in my argument.

I don't associate my “identity” with my consciousness; I associate my identity with my body, and especially with my brain, but it is coupled to the rest of it. That my consciousness is not the same from day to day is not an issue for me. My body very much is alive and is quite good at doing things. It would be a waste to kill it. That it is not static is actually quite a feature; I can learn and do new things. I have an actual body with which I can do actual things and with which I am doing actual things.

All that can be said about the uploading you want to do is that it is very hypothetical. There might be electronic gizmos in the future that might be able to hold a simulation of an identity that might be able to be extracted from a human brain, and that electronic gizmo might then be able to do things. Your belief that you will accomplish things once a version of your identity is uploaded into an electronic gizmo is about you and your beliefs. It is not in the slightest bit about me or my reasoning that a belief in personal identity continuity is an illusion. People professing a belief in an actual Heaven where they will receive actual rewards doesn't constitute evidence that such beliefs are not illusory eit…

This is my first article on LW, so be gentle.

This is why it's strongly recommended to try out an article idea on the Open Thread first.

You owe it to your readers to have clearly organized and well-explained thoughts before writing a top-level post, and the best way to get there is to discuss your ideas with veterans first. If you say in advance that you want to write a top-level post, we'll respect that; I've never seen anyone here poach a post idea (though of course others may want to write their own ideas on the topic).

thomblake · 14y · score 0
People are welcome to poach my ideas, as I have more of them than time to write.
orthonormal · 14y · score 0
Right; I meant that people don't do so without permission.
thomblake · 14y · score -1
Oh yes, I agree. I was just making a note of that since otherwise, given your observation, people will not poach my ideas; I would thus be decreasing the number of good LW posts by naming them!
[anonymous] · 14y · score 8
There seems to be a problem with the paragraph formatting at the beginning. More line breaks maybe?
Emile · 14y · score 2
Yes, for some reason the top paragraphs have style="margin-bottom: 0in;", which makes them stick together. Some other things that would help make the post more readable:

* Breaking it up into sub-sections with titles
* Adding a short summary at the beginning that tries to whet my appetite.
Oscar_Cunningham · 14y · score 5
EDIT: I realise that you asked us to be gentle, and all I've done is point out flaws. Feel free to ignore me.

You explore many interesting ideas, but none of them are backed up with enough evidence to be convincing. I doubt that anything you've said is correct. The first example of this is this statement: How do you know? What if tomorrow a biologist worked out what caused consciousness and created a simple scan for it? What evidence do you have that would make you surprised if this happened? Why? What is it that actually makes it impossible to have a conscious (has qualia) entity that is not self-aware (knows some stuff about itself)?

Recommended reading: http://lesswrong.com/lw/jl/what_is_evidence/
daedalus2u · 14y · score 0
We can't “know for sure” because consciousness is a subjective experience. The only way you could “know for sure” would be if you simulated an entity, and so knew from how you put the simulation together that the entity you were simulating did experience self-consciousness.

So how does this hypothetical biologist calibrate his consciousness scanner? Calibrate it so that he “knows for sure” that it is reading consciousness correctly? His degree of certainty in the output of his consciousness scanner is limited by his degree of certainty in his calibration standards, even if it worked perfectly.

In order to be aware of something, you need to detect something. To detect something you need to receive sensory data and then process that data via pattern recognition into detection or non-detection. To detect consciousness, your hypothetical biologist needs a “consciousness scanner”. So does any would-be detector of any consciousness. That “consciousness scanner” has to have certain properties whether it is instantiated in electronics or in meat. Those properties include receipt of sufficient data and then pattern recognition on that data to determine a detection or a non-detection. That pattern recognition will be subject to type 1 errors and type 2 errors.
obx · 14y · score -6
wedrifid · 14y · score 3
To be honest you lost me at 'consciousness'. The whole question of computational requirements here seems to be just a function of an arbitrary word definition that was never included.