The conscious tape
This post comprises one question and no answers. You have been warned.
I was reading "How minds can be computational systems", by William Rapaport, and something caught my attention. He wrote,
Computationalism is - or ought to be - the thesis that cognition is computable ... Note, first, that I have said that computationalism is the thesis that cognition is computable, not that it is computation (as Pylyshyn 1985 p. xiii characterizes it). ... To say that cognition is computable is to say that there is an algorithm - more likely, a collection of interrelated algorithms - that computes it. So, what does it mean to say that something 'computes cognition'? ... cognition is computable if and only if there is an algorithm ... that computes this function (or functions).
Rapaport was talking about cognition, not consciousness. The contention between these hypotheses is, however, only interesting if you are talking about consciousness; if you're talking about "cognition", it's just a choice between two different ways to define cognition.
When it comes to consciousness, I consider myself a computationalist. But I hadn't realized before that my explanation of consciousness as computational "works" by jumping back and forth between those two incompatible positions. Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.
Minimum computation and data requirements for consciousness
Consciousness is a difficult question because it is poorly defined and is the subjective experience of the entity experiencing it. Because an individual experiences their own consciousness directly, that experience is always richer and more compelling than the perception of consciousness in any other entity; your own consciousness always seems more “real” and richer than the would-be consciousness of another entity.
Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness. However, certain computational functions must be accomplished for consciousness to be experienced. I am not attempting to discuss all the computational functions that are necessary; this is just a first step at enumerating some of them and considering their implications.
First, an entity must have a “self detector”: a pattern-recognition computation structure which it uses to recognize its own state of being an entity, and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can't be conscious that it is an entity. To rephrase Descartes, "I perceive myself to be an entity, therefore I am an entity." It is possible to be an entity and not perceive that one is an entity; this happens in humans, but rarely. Other computation structures may be necessary as well, but without the ability to recognize itself as an entity, an entity cannot be conscious.
Consciousness of simulations & uploads: a reductio
Related articles: Nonperson predicates, Zombies! Zombies?, & many more.
ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.
ETA2: I think I may have made a mistake in this post. That mistake was in realizing what ontology functionalism would imply, and deciding that ontology was too weird to be true. An argument from incredulity, essentially. Double oops.
Consciousness belongs to a class of topics I think of as my 'sore teeth.' I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.
Positioning oneself to make a difference
Last weekend, while this year's Singularity Summit took place in San Francisco, I was turning 40 in my Australian obscurity. 40 is old enough to be thinking that I should just pick a SENS research theme and work on it, and also move to wherever in the world is most likely to have the best future biomedicine (that might be Boston). But at least since the late 1990s, when Eliezer first showed up, I have perceived that superintelligence trumps life extension as a futurist issue. And since 2006, when I first grasped how something like CEV could be an answer to the problem of superintelligence, I've had it before me as a model of how the future could and should play out. I have "contrarian" ideas about how consciousness works, but they do not contradict any of the essential notions of seed AI and friendly AI; they only imply that those notions would need to be adjusted and fitted to the true ontology, whatever that may be.
So I think this is what I should be working on - not just the ontological subproblem, but all aspects of the problem. The question is, how to go about this. At the moment, I'm working on a lengthy statement of how I think a Friendly Singularity could be achieved - a much better version of my top-level posts here, along with new material. But the main "methodological" problem is economic and perhaps social - what can I live on while I do this, and where in the world and in society should I situate myself for maximum insight and productivity. That's really what this post is about.
The obvious answer is, apply to SIAI. I'm not averse to the idea, and on occasion I raise the possibility with them, but I have two reasons for hesitation.
The first is the problem of consciousness. I often talk about this in terms of vaguely specified ideas about quantum entanglement in the brain, but the really important part is the radical disjunction between the physical ontology of the natural sciences and the manifest nature of consciousness. I cannot emphasize enough that this is a huge gaping hole in the scientific understanding of the world, the equal of any gap in the scientific worldview that came before it, and that the standard "scientific" way of thinking about it is a form of property dualism, even if people won't admit this to themselves. All the quantum stuff you hear from me is just an idea about how to restore a type of monism. I actually think it's a conservative solution to a very big problem, but to believe that you would have to agree with me that the other solutions on offer can't work (as well as understanding just what it is that I propose instead).
This "reason for not applying to SIAI" leads to two sub-reasons. First, I'm not sure that the SIAI intellectual environment can accommodate my approach. Second, the problem with consciousness is of course not specific to SIAI, it is a symptom of the overall scientific zeitgeist, and maybe I should be working there, in the field of consciousness studies. If expert opinion changes, SIAI will surely notice, and so I should be trying to convince the neuroscientists, not the Friendly AI researchers.
The second top-level reason for hesitation is simply that SIAI doesn't have much money. If I can accomplish part of the shared agenda while supported by other means, that would be better. Mostly I think in terms of doing a PhD. A few years back I almost started one with Ben Goertzel as co-supervisor, which would have looked at implementing a CEV-like process in a toy physical model, but that fell through at my end. Lately I'm looking around again. In Australia we have David Chalmers and Marcus Hutter. I know Chalmers from my quantum-mind days in Arizona ten years ago, and I met with Hutter recently. The strong interdisciplinarity of my real agenda makes it difficult to see where I could work directly on the central task, but also implies that there are many fields (cognitive neuroscience, decision theory, various quantum topics) where I might be able to limp along with partial support from an institution.
So that's the situation. Are there any other ideas? (Private communications can go to mporter at gmail.)
Alien parasite technical guy
Custers & Aarts have a paper in the July 2 Science called "The Unconscious Will: How the pursuit of goals operates outside of conscious awareness". It reviews work indicating that people's brains make decisions and set goals without the brains' "owners" ever being consciously aware of them.
A famous early study is Libet et al. 1983, which claimed to find signals being sent to the fingers before people were aware of deciding to move them. This is a dubious study; it assumes that our perception of time is accurate, whereas in fact our brains shuffle our percept timeline around in our heads before presenting it to us, in order to provide us with a sequence of events that is useful to us (see Dennett's Consciousness Explained). Also, Trevena & Miller repeated the test, additionally looking at cases where people did not move their fingers, and found that the signal measured by Libet et al. could not predict whether the fingers would move.
Fortunately, the flaws of Libet et al. were not discovered before it spawned many studies showing that unconscious priming of concepts related to goals causes people to spend more effort pursuing those goals; and those are what Custers & Aarts review. In brief: if you expose someone, even using subliminal messages, to pictures, words, etc. closely connected to some goals and not to others, they will work harder towards those goals without being aware of it.
Another way to look at consciousness
Edit: First paragraph removed and small changes made to the rest.
I am putting forth a hypothesis about the nature of consciousness. First I will have to tell you how I am using certain words, because they are generally used in a number of ways. 'Brain' is a biological organ, and it has a function: 'mind'. Mind is not an object; it is what brains do. It is not a property of the brain, let alone an emergent property, whatever that is. It is a function - so mind-is-to-brain as circulation-is-to-heart or digestion-is-to-intestine. There is one brain in any head and there is one mind being produced by that brain - not two (assuming sanity). The different parts of the cortex work together; the two hemispheres work together; the fore-brain structures work together with the mid-brain structures. The mind includes at least: perception, cognition, learning, intention, motor control, remembering, and, most importantly, the forming of a model of the environment and of the person in that environment. The division between 'conscious mind' and 'unconscious mind' is meaningless. The brain does its mind-function, which maintains the model. Some, but not all, of this model is made globally accessible to the whole brain and remembered. That edit of the model is what we experience as conscious experience - in other words, it is our 'consciousness'. Consciousness is awareness, not thought. Consciousness is not separate, but part of a single mind-function. Now that the words are straight, I can describe the hypothesis.
How is the model edited for consciousness?
There is an attention focus that is triggered by the on-going work of the mind and the events that happen in the environment. I may concentrate on some task so that I am not conscious of other parts of the model, but a loud noise will cause my attention focus to shift to the source of the noise in the model. The level of attention is variable, from non-existent (coma) to intense. This level depends on the signals coming from the lower parts of the brain, through the thalamus, into the cortex. A common analogy for attention is a searchlight scanning the mind-model of reality. We cannot be aware of the whole of the model at any instant of time.
How is the model formed?
The fragments for the model are forced together into a best-fit global model. The perceptions of the various senses, inborn constraints, our understanding of the world, our memory of the previous seconds, our expectations, etc. together build a cohesive model by constructing a synchronous neural activity. Fragments that cannot be fit into the model are lost from it. This is done by an almost unbelievable number of parallel, slightly overlapping feedback loops, across the cortex and between the cortex and the mid-brain (especially the thalamus). The feedback loops are much more like patch boards than like digital computers. They rattle for an instant until they find a stable synchrony. There is nothing like a step-wise process at this stage of forming a global model.
How is the consciousness edit of the model used?
There is little doubt that consciousness is useful, because it is biologically expensive: evolution will eliminate expensive functions that do not earn their keep. There are three very important processes that are carried on by the consciousness aspect of mind.
1) The working memory that holds the last few frames of consciousness is the source of episodic memory. There is an important link between consciousness and the formation of memory. We could treat working memory as part of consciousness, as part of more permanent memory, or even as the link between them. Consciousness is in effect 'the leading edge of memory'. If there is no conscious experience of something, then there is no memory of it.
2) The working memory allows some cognition and learning that needs to 'juggle' information. I cannot add two digits if I cannot retain one while I perceive the other. So some thought processes are going to be in the edited model so that they can be continued through the use of working memory. This does not constitute a conscious mind that is separate from an unconscious mind. It is only that some types of thinking register bits of their progress in our awareness so they can be retrieved later.
3) Consciousness does mild prediction and therefore can register errors in perception and motor control. It takes a fraction of a second to form the conscious experience of an event, but we do not live our lives a fraction of a second late. Information from time t is used to form a model of what the world will be like at t+x, where x is the time it takes to create the model and its conscious edited version; we will then seem to experience t+x at t+x. The difference between the model of t+x and what actually comes in via our senses at t+x is the error in our perception and motor control, and is used to correct the system.
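The timing scheme in (3) can be sketched as a toy forward model. This is my own illustrative example, not anything from the post: a system compensates for a processing lag x by extrapolating its sensory input x steps ahead, then registers the error when the real input for that moment arrives.

```python
# Toy sketch of lag compensation by prediction (illustrative, not the post's model).
latency = 3  # x: time steps needed to build the conscious edit of the model

def predict(history, steps_ahead):
    """Naive linear extrapolation from the last two samples (an assumed model)."""
    if len(history) < 2:
        return history[-1]
    slope = history[-1] - history[-2]
    return history[-1] + slope * steps_ahead

world = [t * 2.0 for t in range(20)]  # a toy sensory signal, changing linearly
history = []
predictions = {}  # predictions[t] = what we expect to sense at time t
errors = []       # the correction signal: actual input minus prediction

for t, sample in enumerate(world):
    history.append(sample)
    predictions[t + latency] = predict(history, latency)
    if t in predictions:
        errors.append(sample - predictions[t])

# The first prediction (made from a single sample) is off; once the model has
# two samples, extrapolation is exact for this linear signal and errors go to 0.
```

The point of the sketch is only the bookkeeping: the prediction made at time t is compared against reality at t+x, and that difference is what the system can use to correct itself.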
These three functions seem ample to justify the metabolic expense of consciousness and rule out philosophical zombies. The functions also seem to rule out consciousness being uniquely human. 'If it quacks like a duck' logic applies to animal consciousness. If an animal appears to have a good memory of events, learns from its experiences, and has smooth motor control in complex changing situations, then it is hard to imagine how this happens without consciousness, including self-consciousness. There would, of course, be degrees of consciousness and variations in the aspects of environment/self that would be modeled by different animals.
My answers to some problems ahead of their being asked
Most readers of this site are comfortable with the idea of the map and the territory. This post is using a very similar (maybe the same) idea of reality and a model of reality. There is nothing surprising about the difference between the physical tree in the garden and the element that stands for that tree in my model of reality. It is the same idea to think about the difference between the reality-now and the model-now. The difference between my physical leg and my model leg is not difficult. We need to extend that comfort to the difference between the reality-me and the model-me. Introspection gives us awareness of our model; it is not our reality-mind but our model-mind that we are turning our focus of attention on. There is a difference between reality-decisions and model-decisions. We live in our model and have absolutely, positively no direct knowledge of anything else - none ever.
I have given no evidence for the hypotheses here, but for two years I have been collecting evidence on consciousness on my website, Thoughts on thoughts. My hypothesis is not that different from the one that Academian is giving in his series of posts, and I do not mean mine to be in opposition to his, but to a large extent supportive. Treating consciousness as a sense is not that different from treating it as a selective awareness. There is no need to get hung up on the words or analogies we use.
I have side-stepped the 'hard question' of how and why red is experienced as red. I have the feeling that this is a 'wrong question', but I am not sure why. It is certainly not explained by the hypothesis I have given here. All I have to say about the hard question is: “Can you think of a better way to be aware of red than the one you have? Is there something more efficient or more vivid or more biologically functional?” In other words, “What is the alternative?” Even if you go all spiritual, that still does not explain the experience of red. Dualism does not answer the hard question either, and I have not encountered any philosophy that does. If it is answered, I would put my money on a scientific, material answer.
I have not side-stepped the question of how consciousness is reduced to physics. The method is clear: reduce consciousness to biology and biology to physics/chemistry. We accept that biology is in principle reducible to physics/chemistry. We generally assume that the brain is understandable as a biological organ and so if we can assume that consciousness is a function of the brain, it is in principle reducible to physics.
Physicalism: consciousness as the last sense
Follow-up to There just has to be something more, you know? and The two insights of materialism.
I have alluded to the idea that one cause for the common reluctance to consider physicalism — in particular, the claim that our minds can in principle be characterized entirely by physical states — is an asymmetry in how people perceive characterization. This can be alleviated by analogy to how our external senses can supervene on each other, and how abstract manipulations of those senses using recording, playback, and editing technologies have made such characterizations useful and intuitive.
We have numerous external senses, and at least one internal sense that people call "thinking" or "consciousness". In part because you and I can point our external senses at the same objects, collaborative science has done a great job characterizing them in terms of each other. The first thing is to realize the symmetry and non-triviality of this situation.
First, at a personal level: say you've never sensed a musical instrument in any way, and for the first time, in the dark, you hear a cello playing. Then later, you see the actual cello. You probably wouldn't immediately recognize these perceptions as being of the same physical object. But watching and listening to the cello playing at the same time would certainly help, and physically intervening yourself, seeing that you can change the pitch of the note by placing your fingers on the strings, would settle it: you'd start thinking of that sound, that sight, and that tactile sense as all coming from one object, "cello".
Before moving on, note how in these circumstances we don't conclude that "only sight is real" and that sound is merely a derivative of it, but simply that the two senses are related and can characterize each other, at least roughly speaking: when you see a cello, you know what sort of sounds to expect, and conversely.
Next, consider the more precise correspondence that collaborative science has provided, which follows a similar trend: in the theory characterizing sound as longitudinal compression waves, first came recording, then playback, and finally editing. In fact, the first intelligible recording of a human voice, in 1860, was played back for the first time in 2008, using computers. So, suppose it's 1810, well before the invention of the phonautograph, and you've just heard the first movement of Beethoven's 5th. Then later, I unsuggestively show you a high-res version of this picture, with zooming capabilities:
The I-Less Eye
or: How I Learned to Stop Worrying and Love the Anthropic Trilemma
Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)
So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
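The clash between the two ways of counting can be made explicit in a line of arithmetic. This only restates the numbers above; it takes no position on which anticipation rule is right:

```python
# The two counting rules the post contrasts, computed directly.
p_stepwise = 0.5 ** 99   # halve your credence of being the original at each of the 99 steps
p_uniform = 1 / 100      # expect to be any one of the 100 instances with equal probability

# p_stepwise is roughly 1.6e-30 - "as close to zero as makes no difference" -
# while uniform counting gives the hoped-for 1% chance of remaining the original.
```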
It's not like anything to be a bat
...at least not if you accept a certain line of anthropic argument.
Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What is it Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduce all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we still have no idea what it's like to feel a subjective echolocation quale.
Anthropic reasoning is the idea that you can reason by conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans was medium-sized instead of humongous; therefore, since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
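The Doomsday update can be sketched as a toy Bayesian comparison. The population figures here are illustrative round numbers of my own choosing, not figures from the argument's literature:

```python
# Toy Doomsday-style update: how likely is your birth rank under two futures?
r = 60e9  # rough birth rank: about 60 billion humans born before you

totals = {"medium-sized": 100e9, "humongous": 100e15}  # total humans ever, per hypothesis

# If you are a uniform draw from all humans who will ever live,
# the likelihood of having rank r under hypothesis n is 1/n (when r <= n).
likelihoods = {h: (1.0 / n if r <= n else 0.0) for h, n in totals.items()}

# With equal priors, posterior odds equal the likelihood ratio:
odds = likelihoods["medium-sized"] / likelihoods["humongous"]
# the medium-sized future is favored by a factor of a million in this toy setup
```

The controversial step is the "uniform draw" assumption, not the arithmetic; the arithmetic itself is just a likelihood ratio.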
The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.
Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
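The parable's arithmetic can be made concrete with rough order-of-magnitude population figures. These numbers are my own loose assumptions for illustration, not claims from the parable:

```python
# Rough anthropic arithmetic: probability of being human if "I" were a
# uniform draw from all animals (illustrative order-of-magnitude figures).
humans = 8e9          # current human population
insects = 1e19        # an often-quoted order of magnitude for living insects
other_animals = 1e12  # a loose stand-in for the "lesser animals"

p_human = humans / (humans + insects + other_animals)
# p_human comes out below one in a billion - "vanishingly low", as the post says.
```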
Subjective Anticipation and Death
tl;dr: It is incoherent to talk about a "you" which stretches through time. Instead, we should think of a series of similar mind-moments.
Once upon a time, there was a little boy, who answered to the name Lucas Sloan and was scared of dying. I too answer to the name Lucas Sloan, and I remember being afraid of dying. Little Lucas wasn't scared of the present state of affairs, but it is fairly obvious that Little Lucas isn't around anymore. By any practical definition, Little Lucas is dead, he only exists as a memory in my mind and more indirectly in the minds of others. Little Lucas did not care that other people remembered him, he cared that he did not die. So what is this death thing, if Little Lucas was scared of it, but was not scared of the present situation?