I think you've hit on an important point in asking what dissociation syndromes show about the way the mind processes "selfhood", and you could expand upon that by considering a whole bunch of interesting altered states that seem to correspond to something in the temporal lobe (I can't remember the exact research).
I didn't completely follow the rest of the article. Is "consciousness" even the right term to use here? It has way too many meanings, and some of them aren't what you're talking about here - for example, I don't see why there can't be an entity that has subjective experience but no personal identity or self-knowledge. Consider calling the concept you're looking for "personal identity" instead.
I also take issue with some of the language around continuity of personal identity being an illusion. I agree with you that it probably doesn't correspond to anything in the universe, but it belongs, along with morality, in the category of "things we're not forced to go along with by natural law, but which are built into our goal system, and whose lack of objective basis doesn't force us to give them up". I don't think aliens would be philosophically rash enough to stop existing just because of a belief that personal identity is an illusion.
Also, paragraph breaks!
The first dubious statement in the post seems to be this:
Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness.
How can you make such a statement about the entire future of science? A couple quotes:
"We may determine their forms, their distances, their bulk and their motions, but we can never know anything about their chemical and mineralogical structure" - Auguste Comte talking about stars in 1835
"Heavier than air flying machines are impossible" - Lord Kelvin, 1895
The second dubious statement comes right after the first:
However, certain computational functions must be accomplished for consciousness to be experienced.
The same question applies: how on Earth do you know that? Where's your evidence? Sharing opinions only gets us so far!
And it just goes downhill from there.
I confess, I am lost. It seems we are in an "arguments as soldiers" situation in which everyone is shooting at everyone else. To recap:
This is either a rhetorical master stroke, or just random lashing out. I can't tell. I am completely lost. WTF is going on?
then perhaps LW is not ready to discuss such things
Uh, what? The post is poorly written along a number of dimensions, and was downvoted because people don't want to see poorly written posts on the front page. The comments are pointing out specific problems with it. To interpret that as a problem with the community is a fairly egregious example of cognitive dissonance.
Good first approximation: don't write posts about consciousness, if you don't want to be downvoted.
Well, I liked the title.
Then in the second paragraph, I was a little disappointed to read:
I am not attempting to discuss all the computational functions that are necessary; this is just a first step at enumerating some of them and considering the implications.
But I think, "That is ok, a partial list is a step in the right direction." The next paragraph begins promisingly with the word "First ...". But then I read and read and ... well, I never found a paragraph which started with the word "Second..." Nor could I find anything much about computation or data. So I skipped to the conclusion. But I couldn't find anything in the last 5 paragraphs or so that was even talking about consciousness.
So, I guess I have changed my mind about liking the title.
This is my first article on LW, so be gentle.
I tried. I really tried.
This may be the solution to Fermi's Paradox. There may be no galaxy-spanning AIs because, by the time civilizations can accomplish such things, they realize that continuity of entity identity is an illusion and have grown beyond wanting to spend effort on illusions.
I don't quite get this part. How would this explain the Fermi Paradox? Why would all civilizations cease growth and expansion just because identity continuity is an illusion? Why would such an evolutionarily unstable pattern be replicated broadly amongst so many alien species?
This might be nit-picking, but I think it’s necessary, given the confusing subjects:
Humans have entity detectors, and those detectors exhibit false positives (detecting an entity or “spirit” in trees, rocks, rivers, and other inanimate objects; pareidolia) and false negatives (not recognizing a particular ethnic group as fully human).
These, particularly the negative, aren't failures of the entity detectors but of the entity classifiers. As an example, slavers do recognize slaves and potential slaves as entities; they simply classify them as non-human...
Humans can do this too (emulate another entity such that they think they are that entity); I think that is in essence what Stockholm Syndrome causes. Under severe trauma, following dissociation and depersonalization, the self reforms, but in a pattern that matches, identifies with, and bonds to the perpetrator of the trauma. The traumatized person has attempted to emulate the “green-beard persona” to avoid the death and abuse being perpetrated upon them by the person with the “green beard”.
This doesn't seem to be the natural interpretation. Stockholm Syndrome is more or less the typical outcome of human social politics, somewhat exaggerated.
bogdanb said a lot of what I wanted to say. So I'll make the smaller point that an "entity" can be a non-living thing (I think you might want to use the term "agent" in its place) so your examples are technically not false positives (unless we are talking about the "spirit" of a thing rather than the thing itself) or even negatives.
I also doubt that dogs, however pampered or tiny-brained they may be, really can't distinguish between humans and dogs.
If I'm following your "logic" correctly, and if you yourself adhere to the conclusions you've set forth, you should have no problem with me murdering your body (if I do it painlessly). After all, there's no such thing as continuity of identity, so you're already dead; the guy in your body is just a guy who thinks he's you.
I think this may safely be taken as a symptom that there is a flaw in your argument.
This is why it's strongly recommended to try out an article idea on the Open Thread first.
You owe it to your readers to have clearly organized and well-explained thoughts before writing a top-level post, and the best way to get there is to discuss your ideas with veterans first. If you say in advance that you want to write a top-level post, we'll respect that; I've never seen anyone here poach a post idea (though of course others may want to write their own ideas on the topic).
Consciousness is a difficult question because it is poorly defined and is the subjective experience of the entity experiencing it. Because an individual experiences their own consciousness directly, that experience is always richer and more compelling than the perception of consciousness in any other entity; your own consciousness always seems more “real” and richer than the would-be consciousness of another entity.
Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness. However, certain computational functions must be accomplished for consciousness to be experienced. I am not attempting to discuss all the computational functions that are necessary; this is just a first step at enumerating some of them and considering the implications.
First, an entity must have a “self detector”: a pattern-recognition computational structure which it uses to recognize its own state of being an entity, and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can't be conscious that it is an entity. To rephrase Descartes, "I perceive myself to be an entity, therefore I am an entity." It is possible to be an entity and not perceive that one is an entity; this happens in humans, but rarely. Other computational structures may be necessary as well, but without the ability to recognize itself as an entity, an entity cannot be conscious.
All pattern recognition is inherently subject to errors, usually of type 1 (false positives) or type 2 (false negatives). In pattern recognition there is an inherent trade-off between false positives and false negatives. Reducing both false positives and false negatives is much more difficult than trading off one for the other.
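To make the trade-off concrete, here is a toy sketch (the Gaussian score distributions and the threshold values below are invented for illustration, not anything measured): a detector that thresholds a noisy “entity-likeness” score. Moving the threshold trades one error type for the other; reducing both at once would require better-separated distributions, i.e., better evidence.

```python
import random

random.seed(0)

# Toy detector: each observation yields a noisy "entity-likeness" score.
# Real entities score higher on average than non-entities, but the two
# distributions overlap, so no threshold can be error-free.
entities     = [random.gauss(1.0, 1.0) for _ in range(10_000)]
non_entities = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def error_rates(threshold):
    """Classify score >= threshold as 'entity'; return (FP rate, FN rate)."""
    fp = sum(s >= threshold for s in non_entities) / len(non_entities)
    fn = sum(s < threshold for s in entities) / len(entities)
    return fp, fn

for t in (-0.5, 0.0, 0.5, 1.0, 1.5):
    fp, fn = error_rates(t)
    print(f"threshold {t:+.1f}: false positives {fp:.1%}, false negatives {fn:.1%}")
```

A low threshold corresponds to the evolutionary bias described below: many false alarms, very few missed predators.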
I suspect that detection of external entities evolved before detection of internal entities, driven by predator avoidance, prey capture, offspring recognition, and mate detection. Social organisms can also recognize other members of their group. The objectives in recognizing other entities are varied, so the fidelity of recognition required varies as well. Entity detection is followed by entity identification and the sorting of the entity into various classes so as to determine how to interact with it: opposite-gender mate, reproduce; offspring, feed and care for; predator, run away.
Humans have entity detectors, and those detectors exhibit false positives (detecting an entity or “spirit” in trees, rocks, rivers, and other inanimate objects; pareidolia) and false negatives (not recognizing a particular ethnic group as fully human). Evolution tends to bias these detectors toward false positives: a false alarm in detecting a predator is a lot better than failing to detect one.
How the human entity detector works is not fully understood, but I suggest that when two humans meet, they unconsciously perform the equivalent of a Turing Test, with the objective of determining whether the other entity is “human enough”. Essentially they try to communicate, and if the error rate is too high (due to non-consilience of communication protocols), then xenophobia is triggered via the uncanny-valley effect. In the context of this discussion, the entity detector defaults to non-human (or non-green-beard; see below).
I think the uncanny-valley effect is an artifact of the human entity detector (which evolved only to facilitate the survival and reproduction of the humans who have it). Killing entities that are close enough to be potential competitors and rivals, but not so close that they might be kin, is a good evolutionary strategy; it is the functional equivalent of the mythic green-beard gene (where the gene causes both the expression of a green beard and the compulsion to kill everyone without a green beard). Because humans evolved a large brain recently, the “non-green-beard detector” can't be instantiated by neural structures directly and completely specified by genes, but must have a learned component. I think this learned component is the mechanism behind cultural xenophobia and religious bigotry.
Back to consciousness. Consciousness implies the continuity of entity identity over time. The individual that is conscious at time=1 is self-perceived to be “the same” individual as is conscious at time=2. What does this actually mean? Is there a one-to-one correspondence between the two entities? No, there is not. The entity at time=1 will evolve into different entities at time=2 depending on the experience-path the entity has taken.
If a snapshot of the entity at time=1 is taken, duplicated into multiple exact copies, and each copy is allowed to have different experiences, then at time=2 there will be multiple different entities derived from the original. Is any one of these entities “more” like the original? No; they are all equally derived from the original, and each is equally like it. Each of them will have the subjective experience of being derived from the original (because they are), the subjective experience of being the original (because as far as each one knows, it is), and the subjective experience that all the others must therefore be imposters.
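The thought experiment can be staged as a toy program (all the details below are invented for illustration): duplicate a state, let the copies diverge, and note that every copy's internal record still says it is the original.

```python
import copy

# Snapshot an entity at t=1 and duplicate it into exact copies.
original = {"memories": ["born", "learned to read"], "is_original": True}
copies = [copy.deepcopy(original) for _ in range(3)]

# Each copy has different experiences between t=1 and t=2.
copies[0]["memories"].append("walked in the rain")
copies[1]["memories"].append("read a book")
copies[2]["memories"].append("argued on a forum")

# From the inside, each copy inspects its own record, finds the original's
# full history plus its own, and concludes it is the original; nothing in
# its state privileges any other copy's claim over its own.
for i, c in enumerate(copies):
    print(f"copy {i}: believes it is the original = {c['is_original']}, "
          f"memories = {c['memories']}")
```

From the outside there is no further fact to check: all three are equally continuous with the t=1 snapshot.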
This seeming paradox arises because of the limited resolution of the human entity detector. Can that detector distinguish between extremely similar versions of an entity? To do pattern recognition, one must have a pattern for comparison, and the more exacting the comparison, the more exacting the reference pattern must be. In the limit, a perfect comparison requires an exact and complete representation: a 100%-fidelity emulation of the entity (so as to be able to do a one-to-one comparison). I think this relates to what Nietzsche was talking about when he said:
“Whoever battles with monsters had better see that it does not turn him into a monster. And if you gaze long into an abyss, the abyss will gaze back into you.”
To be able to perceive anything, one must have data for the pattern recognition necessary to detect whatever is being perceived. If the pattern-recognition computational structures are unable to identify something, that something cannot be perceived. To perceive the abyss, you must have a mapping of the abyss inside of you. Because humans have self-modifying pattern-recognition structures, those structures self-modify to become better at detecting whatever is being observed. As you stare into the abyss, your brain becomes more abyss-like to optimize abyss detection.
Once we understand that detecting an entity requires pattern recognition capable of recognizing that entity, the appearance of continuity of consciousness can be seen as an artifact of human pattern recognition. Human entity detection necessarily compares the observed entity with a reference entity. When that reference entity is the self, there is always a one-to-one correspondence between what is observed to be the self and the reference entity (which is the self), so the observed entity is always identified as the self. It is not that there is actual continuity of a self-entity over time; rather, there is the illusion of continuity, because the reference is changing exactly as the entity is changing.

There are some rare cases where people feel “not themselves” (depersonalization disorder): they think they are a substitute, or dead, or somehow not the actual person they once were. This dissociation sometimes happens under extreme traumatic stress, and there is some thought that it is protective: dissociate the self so that the self is not there to experience the trauma and be irreparably injured by it (this may not be correct). This may also be what happens during anesthesia.
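The core claim, that the comparison always succeeds because the reference tracks the self, can be written down as a toy simulation (the state vector, noise model, and similarity measure are all invented for illustration):

```python
import random

random.seed(1)

# The self-detector compares the current self against a stored reference
# pattern. Because the reference IS the current self, it changes in
# lockstep, so the comparison always succeeds and the entity perceives
# unbroken continuity no matter how far it actually drifts.
self_state  = [random.random() for _ in range(8)]
reference   = list(self_state)   # the reference pattern is the self
t1_snapshot = list(self_state)   # an outside observer's fixed record

def similarity(a, b):
    """1.0 for identical patterns, lower as they diverge."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

for _ in range(1000):
    i = random.randrange(len(self_state))            # an "experience"...
    self_state[i] = min(1.0, max(0.0, self_state[i] + random.gauss(0, 0.05)))
    reference = list(self_state)                     # ...tracked exactly

print("self vs own reference:", similarity(self_state, reference))    # always 1.0
print("self vs t=1 snapshot: ", similarity(self_state, t1_snapshot))  # drifts below 1.0
```

The self-to-reference similarity is 1.0 at every step by construction, while similarity to the fixed t=1 snapshot steadily decays; the felt continuity and the actual drift come apart.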
I think this solves the “problem” of uploading: how can the uploaded entity be identical to the non-uploaded entity? The actual answer is that it can't be, but that doesn't matter: if the uploaded entity “feels” or “believes” it is the same entity as the non-uploaded entity, then as far as the uploaded entity is concerned, it is. I appreciate that this may not be a satisfactory solution to anyone but the uploaded entity, because it implies that continuity of consciousness is merely an illusion, essentially a hallucination caused by a defect in entity-detection pattern recognition. The data from the non-uploaded entity doesn't really matter. All that matters is whether the uploaded entity “feels” it is the same.
I think that in a very real sense, those who seek personal immortality via cryonics or uploading are pursuing an illusion: the perpetuation of the illusion of self-entity continuity. The same illusion those who believe in an immortal soul are pursuing. The same illusion the ancient Egyptians pursued via mummification. If the entity is to be self-identical for perpetuity, then it cannot change. If it cannot change, then it cannot have new experiences. If it has new experiences and changes, then it is not the same entity it was before those changes.
In terms of an AI, the AI can only be conscious if it has an entity detector that detects itself and uses itself as the pattern for that detection. It can only be conscious of aspects of itself that its entity detector has access to. For example, humans are not conscious of the data processing that goes on in the visual cortex. Why? Because the human entity detector does not attempt to map that conceptual space. If the AI's entity detector doesn't map part of its own computational equipment, then the AI won't be conscious of that part of its own data processing either.
A recipe for friendly AI might be to program the AI to use the coherent extrapolated volition of a select group of humans as its reference entity for entity detection. In effect, that may be what some cultures are already trying to accomplish through ancestor and hero worship; attempting to mold future generations by holding up ideals as examples. That may be analogous to what EY was getting at in
If the AI were given a compulsion to become ever more like the CEV of its reference entity, there would be limits to how much it could change. That might be a better use for the data that some humans want to upload to try to achieve personal immortality; they can't achieve personal immortality, because the continuity of entity identity is an illusion. Selecting which individuals to use is the tricky part: anyone who wants to be selected is probably not suitable. But if their coherent extrapolated volition could be “captured”, combined, and then used as the reference entity for the AI, it might be a good idea. The AI would then be no worse (and no better) than the sum of those individuals. The mind-set I think is most appropriate is that of a parent nurturing their child, not to live vicariously through the child, but for the child's benefit. A certain turn-over per generation would keep the AI connected to present humanity but allow for change. We do not want individuals who seek to acquire power by clawing their way to the top of the social power hierarchy by forcing others down (in a zero-sum manner).
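As a toy sketch of the proposed dynamic (the state vector, the pull strength, and the turnover noise are invented numbers, not a design): self-modification that is pulled back toward a slowly refreshed reference stays within bounded distance of that reference.

```python
import random

random.seed(2)

DIM = 6
cev_reference = [0.5] * DIM   # hypothetical "captured" group CEV
ai_state      = [0.5] * DIM

def constrained_update(state, reference, pull=0.3):
    """Random self-modification, then a pull of strength `pull`
    back toward the reference pattern."""
    proposed = [x + random.gauss(0, 0.1) for x in state]
    return [r + (1 - pull) * (p - r) for p, r in zip(proposed, reference)]

for generation in range(5):
    for _ in range(100):
        ai_state = constrained_update(ai_state, cev_reference)
    # generational turnover slowly refreshes the reference, keeping it
    # connected to present humanity while allowing change
    cev_reference = [r + random.gauss(0, 0.02) for r in cev_reference]
    drift = max(abs(a - r) for a, r in zip(ai_state, cev_reference))
    print(f"generation {generation}: max drift from reference = {drift:.3f}")
```

The drift stays bounded because every step is contracted toward the reference; without the pull term, the same random walk would wander without limit.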
I think allowing “wild-type” AI (individuals who upload themselves) is probably too dangerous, and is really just a monument to their egotistical fantasy of entity continuity. Just like the Pyramids, but a pyramid that could change into something fatally destructive (uFAI).
There are some animals that “think” and act like they are people; some dogs have been completely human-acclimated. What has happened is that the dog is using a “human-like” self-representation as the reference for its entity pattern recognition, but because of the limited cognitive capacities of that particular dog, it doesn't recognize that the humans it observes are different from itself. An AI could be designed to think that it was human (once we knew how to design any AI at all, designing it to think it was human would be easy).
Humans can do this too (emulate another entity such that they think they are that entity); I think that is in essence what Stockholm Syndrome causes. Under severe trauma, following dissociation and depersonalization, the self reforms, but in a pattern that matches, identifies with, and bonds to the perpetrator of the trauma. The traumatized person has attempted to emulate the “green-beard persona” to avoid the death and abuse being perpetrated upon them by the person with the “green beard”.
This may be the solution to Fermi's Paradox. There may be no galaxy-spanning AIs because, by the time civilizations can accomplish such things, they realize that continuity of entity identity is an illusion and have grown beyond wanting to spend effort on illusions.