Mitchell_Porter comments on A proposal for a cryogenic grave for cryonics - Less Wrong
My comments in this sub-thread brought out more challenges and queries than I expected. I thought that by now everyone would expect me to periodically say a few things out of line regarding identity, consciousness, and so on, and that only the people I was addressing might respond. I want to reply in a way which provides some context for the answers I'm going to give, but which covers old territory as little as possible. So I would first direct interested parties to my articles here, for the big picture according to me. Those articles are flawed in various ways, but much of what I have to say is there.
Just to review some basics: The problems of consciousness and personal identity are even more severe than is generally acknowledged here. Understanding consciousness, for example, is not just a matter of identifying which part of the brain is the conscious part. From the perspective of physics, any such identification looks like property dualism. Here I want to mention a view due to JanetK, which I never answered, according to which the missing ingredient is "biology": the reason that consciousness looks like a problem from a physical perspective is because one has failed to take into account various biological facts. My reply is that certainly consciousness will not be understood without those facts, but nonetheless, they do nothing to resolve the sort of problems described in my article on consciousness, because they can still be ontologically reduced to elaborate combinations of the same physical basics. Some far more radical ontological novelty will be required if we are going to assemble stuff like "color", "meaning", or "the flow of time" out of what physics gives us.
What we have, in our theories of consciousness, is property dualism that wants to be a monism. We say, here is the physical thing - a brain, or maybe a computer or an upload if we are being futuristic - and that is where the mind-stuff resides, or it is the mind-stuff. But for now, the two sides of the alleged identity are qualitatively distinct. That is why it is really a dualism, but a "property dualism" rather than a "substance dualism". The mind is (some part of) the brain, but the mindlike properties of the mind simply cannot be identified with the physical properties of the brain.
The instinct of people trained in modern science is to collapse the dualism onto the physical side of the equation, because they have been educated to think of reality in those terms. But color, meaning, and time are real, so if they are not really present on the physical side of the identity, then a truly monistic solution has to go the other way. The problem now is that it sounds as if we are rejecting the reality of matter. This is why I talked about monads: a monad is a concept of what is physically elementary which can nonetheless be expanded into something which is actually mindlike in its "interior". It requires a considerable rethink of how the basic degrees of freedom in physics are grouped into things; and it also requires that what we would now call quantum effects be functionally relevant somewhere in conscious cognition, or else this ontological regrouping would make no difference at the level where the problem of consciousness resides. So yes, there are several big inferential leaps there, and a prediction (that there is a quantum neurobiology) for which there is as yet no support. All I can say is that I didn't make those leaps lightly, and that all simpler alternatives appear to be fatally compromised in some way.
One consequence of all this is that I can be a realist about the existence of a conscious self in ways which must sound very retrograde to everyone here who has embraced the brave new ideas of copying, patternist theories of identity, the unreality of time on the physical level, and so on. To my way of thinking, I am a "monad", some subsystem of the brain with many degrees of freedom, which is a genuine ontological unity, and whose state can be directly identified with (and not just associated with) my state in the world as I perceive it subjectively. This is an entity which persists in time, and which interacts with its environment (presumably, simpler monads making up the neighboring subsystems of the brain). If one grants for a moment the possibility of thinking about reality in these terms, clearly it makes these riddles about personal identity a lot simpler. There is a very clear sense in which I am not my copies. At best, they are other monads who start out in the same state. There is no conscious sorites paradox. Whenever you have consciousness, it is because you have a monad big enough to be conscious - it's that simple.
So having set the stage - and apologies to anyone tired of my screeds on these subjects - now we can turn to cryonics. I said to Roko
to which he responded
I posit that, in terms of current physics, the locus of consciousness is some mesoscopic quantum-coherent subsystem of the brain, whose coherence persists even during unconsciousness (which is just a change of its state) but which would not last through the cryonic freezing of the brain. If this persistent biological quantum coherence exists, it will exist because of, and not in spite of, metabolic activity. When that ceases, something must happen to the "monad" (which is just another name for something like "big irreducible tensor factor in the brain's wavefunction") - it comes apart into simpler monads, it sheds degrees of freedom until it becomes just another couple of correlated electrons, I don't have a fixed idea about it. But this is what death is, in the monadic "theory". If the frozen brain is restored to life, and a new conscious condensate (or whatever) forms, that will be a new "big tensor factor", a new "monad", and a new self. That is the idea.
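Written out schematically (this is only meant to unpack the phrase "big irreducible tensor factor", not to offer a worked-out model, and the symbols are just illustrative notation):

\[
|\Psi_{\text{brain}}\rangle \;\approx\; |\chi\rangle_{\text{monad}} \,\otimes\, |\phi_1\rangle \otimes |\phi_2\rangle \otimes \cdots
\]

where the $|\phi_i\rangle$ are many small, nearly independent factors, and $|\chi\rangle_{\text{monad}}$ is one large factor, spanning many degrees of freedom, which cannot itself be written as a product. The monad "coming apart" would then be $|\chi\rangle_{\text{monad}}$ decomposing into further product factors.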
You could accept my proposed metaphysics for the sake of argument and still say, but can't you identify with the successor monad? It will have your memories, and so forth. In other words, this ontology of monadic minds should still allow for something like a copy. I don't really have a fixed opinion about this, largely because how the conscious monad accesses and experiences its memories and identity remains completely untheorized by me. The existence of a monad as a persistent "substance" suggests the possibility that memories in a monad might be somehow internal to it, rather than externally supplied data which pops into its field of consciousness when appropriate. This in turn suggests that a lot of what is written, in futurist speculation about digital minds, transferrable memories, and so forth, would not apply. You might be able to transfer unconscious dispositions but not a certain type of authentic conscious memory; it might be that the only way in which the latter could be induced in a monad would be for it, that particular monad, to "personally" undergo the experience in question. Or, it might really be the case that all forms of memory, knowledge, perception and so forth are externally based and externally induced, so that my recollection of what happened this morning is not ontologically any different from the same "recollection" occurring in a newly created copy which never actually had the experience.
Again, I apologize somewhat for going on at such length with these speculations. But I do think that the philosophies of both mind and matter which are the consensus here - I'm thinking of a sort of blithe computationalism with respect to consciousness, and the splitting multiverse of MWI as a theory of physics - are very likely to be partly or even completely false, and this has to have implications for topics like cryonics, AI, various exotic ethical doctrines based on a future-centric utilitarianism, and so on.
My current leading hypotheses:
There is a long history of diverse speculation by scientists about quantum mechanics and the mind. There was an early phase when biology hardly figured and the speculation was often a type of dualism inspired by the Copenhagen-interpretation emphasis on "observers". But these days the emphasis is very much on applying quantum mechanics to specific neuromolecular structures. There are papers about superpositions of molecular conformation, transient quantum coherence in ionic complexes, phonons in filamentary structures, and so on. To me, this work still doesn't look good enough, but it's a necessary transitional step, in which simple but ambitious models of elementary quantum biophysics are being proposed. The field certainly needs a regular dose of quantitative skepticism such as Tegmark provided. But entanglement in condensed-matter systems is a very subtle thing. There are many situations in which long-range quantum order forms despite local disorder. Like it or not, you can't debunk the idea of a quantum brain in a few pages, because we assuredly have not thought of all the ways in which it might work.
As for the philosophical rationale of the thing, that varies a lot. But since we know that most neural computation is not conscious, I find it remarkably natural to suppose that it's entanglement that makes the difference. Any realistic hypothesis is not going to be fuzzy and just say "the quantum is the answer". It will be more like, special long-lived clathrins found in the porosome complex of astrocytes associated with glutamate-receptor hotspots in neocortical layer V share quantum excitons in a topologically protected way, forming a giant multifractal cluster state which nonlocally regulates glutamatergic excitation in the cortex - etc. And we're just not at that level yet.
What evidence is there that would promote any given quantum-mechanical theory of consciousness to attention?
I mean that sincerely - there ought to be some reason why you, say, had to come up with your monad theory, and I quite frankly don't know of any that would impel me to do so.
How I got here:
Starting point: consciousness is real. This sequence of conscious experiences is part of reality.
Next: The physical world doesn't look like that. (That consciousness is a problem for atomism has been known for more than 2000 years.)
So let us suppose that this is how it feels to be some physical thing "from the inside". Here we face a new problem if we suppose that orthodox computational neuroscience is the whole story. There must then be a mapping from various physical states (e.g. arrangements of elementary particles in space, forming a brain) to the corresponding conscious states. But mappings from physics to causal-functional roles are fuzzy in two ways. We don't have, and don't need, an exact criterion as to whether any particular elementary particle is part of the "thing" whose state we are characterizing functionally. Similarly, we don't have, and don't need, a dividing line in the space of all possible physical configurations providing an exact demarcation between one computational state and another.
All this is just a way of saying that functional and computational properties are not entirely objective from a physical standpoint. There are always borderline cases but we don't really care about not having an exact border, because most of the time the components of a functioning computational device are in physical states which are obviously well in correspondence with the abstract computational states they represent. A device whose components are constantly testing the boundaries of the mapping is a device in danger of deviating from its function.
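A toy illustration of this fuzzy-but-workable kind of mapping (the thresholds and values below are invented purely for the example; nothing here is specific to brains):

```python
# Toy illustration: mapping a physical quantity (a voltage) onto an
# abstract computational state (a bit). The thresholds are engineering
# conventions, not facts of physics; values in the middle band have no
# objectively correct classification.

def logical_state(voltage: float) -> str:
    """Map a measured voltage to a nominal logic level."""
    if voltage < 0.8:       # below this we call it a 0
        return "0"
    if voltage > 2.0:       # above this we call it a 1
        return "1"
    return "undefined"      # borderline: the mapping is simply silent here

for v in (0.1, 0.79, 1.4, 2.5):
    print(f"{v:.2f} V -> {logical_state(v)}")
```

A well-engineered device spends almost no time in the undefined band, which is why the fuzziness normally doesn't matter; but the band never goes away.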
However, when it comes to consciousness, a fuzzy-but-good-enough mapping like this is not good enough, because consciousness (according to our starting point) is an entirely real and "objective" element of reality. It is what it is "exactly", and therefore its counterpart in physical ontology must also have an exact characterization, both with respect to physical parts and with respect to physical states. A coarse-grained many-to-one mapping which is irresolvably fuzzy at the edges is not an option.
But this is a very hard thing to achieve if we persist in thinking of the physical world as a sort of hurricane of trillions of particles in space, with all that matters cognitively being certain mass movements of particles and things made out of them. Fortunately, as it turns out, quantum mechanics suggests the possibility of a rather different physical ontology, and neuroscience still has plenty of room for quantum effects to be cognitively relevant. Thus one is led to consider quantum ontologies in which there is something which can be the exact physical counterpart of consciousness, and theories of mind in which quantum effects are part of the brain's machinery.
I think you grant excessive reliability to your impressions of consciousness. A philosophical argument along the lines proposed is an awfully weak thread to hang a theory on.
Doesn't that mean consciousness is an epiphenomenon? All quantum algorithms can be expressed as equivalent classical algorithms, so we could have an unconscious computer which is functionally equivalent to a human brain.
ETA: I can't see any reason to associate consciousness with some particular kind of physical object or process, as doing so undermines the functional significance of consciousness as the brain's high-level coordination, decision-making, and self-representation system.
No, it would just mean that you can have unconscious simulations of consciousness. Think of it like this. We say that the things in the universe which have causal power are "quantum tensor factors", and consciousness always inhabits a single big tensor factor, but we can simulate it with lots of little ones interacting appropriately. More precisely, consciousness is some sort of structure which is actually present in the big tensor factor, but which is not actually present in any of the small ones. However, its dynamics and interactions can be simulated by the small ones collectively. Also, if you took a small tensor factor and made it individually "big" somehow (evolved it into a big state), it might individually be able to acquire consciousness. But the hypothesis is that consciousness as such is only ever found in one tensor factor, not in sets of them. It's a slightly abstract conception when so many details are lacking, but it should be possible to understand the idea: the world is made of Xs, an individual X can have property Y, a set of Xs cannot, but a set of Xs can imitate the property.
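As a concrete sketch of what "simulated by the small ones collectively" means, here is the standard classical state-vector calculation for an entangled pair (ordinary textbook numerics, offered only as an illustration):

```python
# A classical computer - many small, weakly correlated parts - reproducing
# the dynamics and measurement statistics of one entangled "big" two-qubit
# factor, via standard state-vector simulation.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)        # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])             # controlled-NOT

state = np.array([1, 0, 0, 0], dtype=complex)    # |00>
state = np.kron(H, np.eye(2)) @ state            # Hadamard on the first qubit
state = CNOT @ state                             # entangle the pair

print(np.round(state, 3))     # (|00> + |11>)/sqrt(2): one irreducible factor
print(np.abs(state) ** 2)     # its measurement statistics, obtained classically
```

The program gets every prediction right, but the hypothesis is that the structure which constitutes consciousness is present in the single entangled factor and absent from the many classical parts that imitate it.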
What would really make consciousness epiphenomenal is if we persisted with property dualism, so we have the Xs, their "physical properties", and then their correlated "subjective properties". But the whole point of this exercise is to be able to say that the subjective properties (which we know to exist in ourselves) are the "physical properties" of a "big" X. That way, they can enter directly into cause and effect.
Doesn't this undermine the entire philosophical basis of your argument, which rests on the experience of consciousness being real? If your system allows such an unconscious classical simulation, then why believe you are one of the actual conscious entities? This seems P-zombie-ish.
It's like asking, why do you think you exist, when there are books with fictional characters in them? I don't know exactly what is happening when I confirm by inspection that some reality exists or that I have consciousness. But I don't see any reason to doubt the reality or efficacy of such epistemic processes, just because there should also be unconscious state machines that can mimic their causal structure.
I understand you. Your definition is that "real consciousness" is a quantum tensor factor belonging to a particular class of quantum tensor factors, because we can find them in human brains, and
we know that at least one human brain is conscious, and
consciousness must be a physical entity in order to participate in a causal chain.
All other quantum tensor factors, and sets of them, are not consciousness by definition.
The questions are:
How do we define said class without fuzziness, when it is not yet known what is and is not "real consciousness"? Should we include dolphins' tensor factors, great apes', and so on?
Is it always necessary for something to exist as a physical entity in order to participate in a causal chain? Does temperature exist as a physical entity? Does the "thermostatousness" of a refrigerator exist as a physical entity?
Of course, temperature and "thermostatousness" are our high-level descriptions of physical systems; they don't exist in your sense. So it seems that you see a contradiction between the subjectively apparent existence of consciousness and the apparent nonexistence, as a physical entity, of consciousness understood as a high-level description of brain functions. Don't you see a flaw in that contradiction?
Causality for statistical or functional properties mostly reduces to generalizations about the behavior of exact microstates. (A "microstate" is a physical state completely specified in its microscopic detail; a purely thermodynamic or macroscopic description is a "macrostate".) The entropy goes up because most microstate trajectories go from the small phase-space volume into the large phase-space volume. Macroscopic objects have persistent traits because most microstate trajectories for those objects stay in the same approximate region of state space.
So the second question is about the ontology of macrostate causation. I say it is fundamentally statistical. Cause and effect in its elemental form only operates locally in the microstate, between and within fundamental entities, whatever they are. Macrostate tendencies are like thermodynamic laws or Zipf's law: they are really statements about the statistics of very large and complex chains of exact microscopic causal relations.
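A toy model of what "fundamentally statistical" means here (the setup and numbers are arbitrary, chosen only to make the point):

```python
# Ehrenfest-style toy model: N particles hopping at random between two
# chambers. No individual hop "aims" at equilibrium, yet almost every
# microstate trajectory drifts toward the macrostate with the most
# microstates (roughly equal occupancy). The macro-level "law" is nothing
# but a statement about the statistics of many exact micro-level events.
import random

N, steps = 100, 2000
left = N                               # start with every particle on the left
for _ in range(steps):
    if random.random() < left / N:     # a uniformly chosen particle hops over
        left -= 1
    else:
        left += 1

print("started with", N, "on the left; ended with", left, "; N/2 =", N // 2)
```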
The usual materialist idea of consciousness is that it is also just a macrostate phenomenon and process. But as I explained, the macrostate definition is a little fuzzy, and this runs against the hypothesis that consciousness exists objectively. I will add that because these "monads" or "tensor factors" containing consciousness are necessarily very complex, there should be a sort of internal statistical dynamics. The laws of folk psychology might just be statistical mechanics of exact conscious states. But it is conceptually incoherent to say that consciousness is purely a high-level description if you think it exists objectively; it is the same fallacy as when some Buddhists say "everything only exists in the mind", which then implies that the mind only exists in the mind. A "high-level description" is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.
The first question is a question about how a theory like this would develop in detail. I can't say ahead of time. The physical premise is, the world is a web of tensor factors of various sizes, mostly small but a few of them big; and consciousness inhabits one of these big factors which exists during the lifetime of a brain. If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness. In principle, such a physical theory should itself tell you whether these big factors arise dynamically in a particular physical entity, given a specification of the entity.
Does this answer the final remark about contradiction? Each tensor factor exists completely objectively. The individual tensor factor which is complex enough to have consciousness also exists objectively and has its properties objectively, and such properties include all aspects of its subjectivity. The rest of the brain consists of the small tensor factors (which we would normally call uncorrelated or weakly correlated quantum particles), whose dynamics provide unconscious computation to supplement the conscious dynamics of the big tensor factor. I think it is a self-consistent ontology in which consciousness exists objectively, fundamentally, and exactly, and I think we need such an ontology because of the paradox that results from saying otherwise: "the mind only exists in the mind".
This really sounds to me like a perfect fit for Robin's grandparent post. If, say, nonlocality is important, why achieve it through quantum means?
This is meant to be ontological nonlocality and not just causal coordination of activities throughout a spatial region. That is, we would be talking about entities which do not reduce to a sum of spatially localized parts possessing localized (encapsulated) states. An entangled EPR pair is a paradigm example of such ontological nonlocality, if you think the global quantum state is the actual state, because the wavefunction cannot be factorized into a tensor product of quantum states possessed by the individual particles in the pair. You are left with the impression of a single entity which interfaces with the rest of the universe in two places. (There are other, more esoteric indications that reality has ontological nonlocality.)
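Written out in the standard notation, the singlet state of such a pair is

\[
|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(\,|{\uparrow}\rangle_A\,|{\downarrow}\rangle_B \;-\; |{\downarrow}\rangle_A\,|{\uparrow}\rangle_B\,\bigr),
\]

and there are no single-particle states $|a\rangle_A$, $|b\rangle_B$ for which $|\psi\rangle = |a\rangle_A \otimes |b\rangle_B$; the pair is one irreducible factor that happens to interface with the rest of the world in two places.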
These complex unities glued together by quantum entanglement are of interest (to me) as a way to obtain physical entities which are complex and yet have objective boundaries; see my comment to RobinZ.
Though I agree that this quantum brain idea is against all evidence, I don't think the evolutionary criticism applies. Not every trait has a direct effect on inclusive genetic fitness; some traits are just side effects of other adaptations.
Well, it might be that maintaining the system rather than restarting it when full consciousness resumes is an easier path to the adaptation, or has some advantage we don't understand.
Of course, if the restarted "copy" would seem, both externally and internally, to be a continuation, the natural question is why bother positing such a monad in the first place?
If you want something that flies, the simplest way is for it to have wings that still exist even when it's on the ground. We don't actually know (big understatement there) the relative difficulty of evolving a "persistent quantum mind" versus a "transient quantum mind" versus a "wholly classical mind".
There may also be an anthropic aspect. If consciousness can only exist in a quantum ontological unit (e.g. the irreducible tensor factors I mention here), then you cannot find yourself to be an evolved intelligence based solely on classical computation employing many such entities. Such beings might exist in the universe, but by hypothesis there would be nobody home. This isn't relevant to persistent vs transient, but it's relevant for quantum vs classical.
This is, first of all, an exercise in taking appearances ("phenomenology") seriously. Consciousness comes in intervals with internal continuity. One often comes to waking consciousness out of a dream, suggesting that the same stream of consciousness still existed during sleep, but that with mental and physical relaxation and the dimming of the external senses, it was dominated by fantasy and spontaneous imagery. And the phenomenon of memory is at least consistent with the idea that there is persistent existence, not just throughout one interval of waking consciousness, but throughout the whole biological lifetime.
So if you're going to think about yourself as physically actual and as actually persistent, you should think of yourself as existing at least for the duration of the current period of waking consciousness, and you have every reason to think that you are the same "you" who had those experiences in earlier periods that you can remember. The idea that you are flickering in and out of existence during a single day or during a lifetime is somewhat at odds with the phenomenological perspective.
Cryopreservation is far more disruptive than anything which happens during a biological lifetime. Cells full of liquid water freeze, and the ice crystals that grow inside them burst their membranes. Metabolism ceases entirely. Some, maybe even most, models of persistent biological quantum coherence have it depending on a metabolically maintained throughput of energy. To survive the freezing transition, it seems like the "bio-qubits" would have to exist in molecular capsules that weren't penetrated as the ice formed.
I must have been, at some point, but it was a long time ago and I don't remember.
Clearly there are situations where extra facts would lead you to conclude that the impression of continuity is an illusion. If you woke up as Sherlock Holmes, remembering your struggle with Moriarty as you fell off a cliff moments before, and were then shown convincingly that Holmes was a fictional character from centuries before, and you were just an artificial person provided with false memories in his image, you would have to conclude that in this case, you had erred somehow in judging reality on the basis of subjective appearances.
It seems unlikely that reliable reconstruction of cryonics patients could occur while the problem of consciousness remained unsolved. Reliable reconstruction would require such profound knowledge of brain structure and function that there wouldn't be room for continuing uncertainty about quantum effects in the brain. By then you would know whether it was there or not, so regardless of how the revivee felt, the people(?) doing the reviving should already know the answers regarding identity and the nature of personal existence.
(I add the qualification reliable reconstruction, because there might well be a period in which it's possible to experiment with reconstructive protocols while not really knowing what you're doing. Consider the idea of freezing a C. elegans and then simulating it on the basis of micrometer sections. We could just about do this today, except that we would mostly be guessing how to map the preserved ultrastructure to computational elements of a simulation. One would prefer the revival of human beings not to proceed via similar trial and error.)
In the present, the question is whether subjectively continuous but temporally discontinuous experience, such as you report, is evidence for the self only having an intermittent physical existence. Well, the experience is consistent with the idea that you really did cease to exist during those 3 hours, but it is also consistent with the idea that you existed but your time sense shut down along with your usual senses, or that it stagnated in the absence of external and internal input.
Which is to say, decision theory for algorithms, understanding of how an algorithm controls mathematical structures, and how intuitions about the real world and subjective anticipation map to that formal setting.
I don't agree with this claim. One would simply need an understanding of what brain systems are necessary for consciousness and how to restore those systems to a close approximation of their pre-existing state (presumably using nanotech). This doesn't take much in the way of actually understanding how those systems function. Once one had well-developed nanotech, one could learn this sort of thing simply by trial and error on animals (seeing what was necessary for survival, and what was necessary for training to stay intact) and then move on to progressively larger-brained creatures. This doesn't require a deep understanding of intelligence or consciousness, simply an understanding of what parts of the brain are being used and how to restore them.