Very soon, Eliezer is supposed to start posting a new sequence, on "Open Problems in Friendly AI". After several years in which its activities were dominated by the topic of human rationality, this ought to mark the beginning of a new phase for the Singularity Institute, one in which it is visibly working on artificial intelligence once again. If everything comes together, then it will now be a straight line from here to the end.

I foresee that, once the new sequence gets going, it won't be that easy to question the framework in terms of which the problems are posed. So I consider this my last opportunity for some time, to set out an alternative big picture. It's a framework in which all those rigorous mathematical and computational issues still need to be investigated, so a lot of "orthodox" ideas about Friendly AI should carry across. But the context is different, and it makes a difference.

Begin with the really big picture. What would it take to produce a friendly singularity? You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").
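That division of labor can be made concrete with a toy expected utility maximizer - a minimal sketch with invented names, where `world_model` stands in for the ontology and `utility` for the morality:

```python
def expected_utility(action, world_model, utility):
    """E[U | action]: probability-weighted utility over the outcomes
    the world model predicts for this action."""
    return sum(p * utility(outcome)
               for outcome, p in world_model(action).items())

def choose(actions, world_model, utility):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(a, world_model, utility))

# Hypothetical toy model: "safe" pays 1 for certain;
# "risky" pays 10 with probability 0.2, else 0.
model = lambda a: {"safe": {1: 1.0}, "risky": {10: 0.2, 0: 0.8}}[a]
print(choose(["safe", "risky"], model, lambda o: o))  # -> risky
```

Everything interesting in the FAI problem is hidden inside those two arguments; the maximization step itself is trivial.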

Now let's consider how SI will approach these goals.

The evidence says that the working ontological hypothesis of SI-associated researchers will be timeless many-worlds quantum mechanics, possibly embedded in a "Tegmark Level IV multiverse", with the auxiliary hypothesis that algorithms can "feel like something from inside" and that this is what conscious experience is.

The true morality is to be found by understanding the true decision procedure employed by human beings, and idealizing it according to criteria implicit in that procedure. That is, one would seek to understand conceptually the physical and cognitive causation at work in concrete human choices, both conscious and unconscious, with the expectation that there will be a crisp, complex, and specific answer to the question "why and how do humans make the choices that they do?" Undoubtedly there would be some biological variation, and there would also be significant elements of the "human decision procedure", as instantiated in any specific individual, which are set by experience and by culture, rather than by genetics. Nonetheless one expects that there is something like a specific algorithm or algorithm-template here, which is part of the standard Homo sapiens cognitive package and biological design; just another anatomical feature, particular to our species.

Having reconstructed this algorithm via scientific analysis of the human genome, brain, and behavior, one would then idealize it using its own criteria. This algorithm defines the de-facto value system that human beings employ, but that is not necessarily the value system they would wish to employ; nonetheless, human self-dissatisfaction also arises from the use of this algorithm to judge ourselves. So it contains the seeds of its own improvement. The value system of a Friendly AI is to be obtained from the recursive self-improvement of the natural human decision procedure.

Finally, this is all for naught if seriously unfriendly AI appears first. It isn't good enough just to have the right goals, you must be able to carry them out. In the global race towards artificial general intelligence, SI might hope to "win" either by being the first to achieve AGI, or by having its prescriptions adopted by those who do first achieve AGI. They have some in-house competence regarding models of universal AI like AIXI, and they have many contacts in the world of AGI research, so they're at least engaged with this aspect of the problem.

Upon examining this tentative reconstruction of SI's game-plan, I find I have two major reservations. The big one, and the one most difficult to convey, concerns the ontological assumptions. In second place is what I see as an undue emphasis on the idea of outsourcing the methodological and design problems of FAI research to uploaded researchers and/or a proto-FAI which is simulating or modeling human researchers. This is supposed to be a way to finesse philosophical difficulties like "what is consciousness anyway"; you just simulate some humans until they agree that they have solved the problem. The reasoning goes that if the simulation is good enough, it will be just as good as if ordinary non-simulated humans solved it.

I also used to have a third major criticism, that the big SI focus on rationality outreach was a mistake; but it brought in a lot of new people, and in any case that phase is ending, with the creation of CFAR, a separate organization. So we are down to two basic criticisms.

First, "ontology". I do not think that SI intends to just program its AI with an apriori belief in the Everett multiverse, for two reasons. First, like anyone else, their ventures into AI will surely begin with programs that work within very limited and more down-to-earth ontological domains. Second, at least some of the AI's world-model ought to be obtained rationally. Scientific theories are supposed to be rationally justified, e.g. by their capacity to make successful predictions, and one would prefer that the AI's ontology results from the employment of its epistemology, rather than just being an axiom; not least because we want it to be able to question that ontology, should the evidence begin to count against it.

For this reason, although I have campaigned against many-worlds dogmatism on this site for several years, I'm not especially concerned about the possibility of SI producing an AI that is "dogmatic" in this way. For an AI to independently assess the merits of rival physical theories, the theories would need to be expressed with much more precision than they have been in LW's debates, and the disagreements about which theory is rationally favored would be replaced with objectively resolvable choices among exactly specified models.

The real problem, which is not just SI's problem, but a chronic and worsening problem of intellectual culture in the era of mathematically formalized science, is a dwindling of the ontological options to materialism, platonism, or an unstable combination of the two, and a similar restriction of epistemology to computation.

Any assertion that we need an ontology beyond materialism (or physicalism or naturalism) is liable to be immediately rejected by this audience, so I shall immediately explain what I mean. It's just the usual problem of "qualia". There are qualities which are part of reality - we know this because they are part of experience, and experience is part of reality - but which are not part of our physical description of reality. The problematic "belief in materialism" is actually the belief in the completeness of current materialist ontology, a belief which prevents people from seeing any need to consider radical or exotic solutions to the qualia problem. There is every reason to think that the world-picture arising from a correct solution to that problem will still be one in which you have "things with states" causally interacting with other "things with states", and a sensible materialist shouldn't find that objectionable.

What I mean by platonism is an ontology which reifies mathematical or computational abstractions, and says that they are the stuff of reality. Thus assertions that reality is a computer program, or a Hilbert space. Once again, the qualia are absent; but in this case, instead of the deficient ontology being based on supposing that there is nothing but particles, it's based on supposing that there is nothing but the intellectual constructs used to model the world.

Although the abstract concept of a computer program (the abstractly conceived state machine which it instantiates) does not contain qualia, people often treat programs as having mind-like qualities, especially by imbuing them with semantics - the states of the program are conceived to be "about" something, just like thoughts are. And thus computation has been the way in which materialism has tried to restore the mind to a place in its ontology. This is the unstable combination of materialism and platonism to which I referred. It's unstable because it's not a real solution, though it can live unexamined for a long time in a person's belief system.

An ontology which genuinely contains qualia will nonetheless still contain "things with states" undergoing state transitions, so there will be state machines, and consequently, computational concepts will still be valid; they will still have a place in the description of reality. But the computational description is an abstraction; the ontological essence of the state plays no part in this description; only its causal role in the network of possible states matters for computation. The attempt to make computation the foundation of an ontology of mind is therefore proceeding in the wrong direction.

But here we run up against the hazards of computational epistemology, which is playing such a central role in artificial intelligence. Computational epistemology is good at identifying the minimal state machine which could have produced the data. But it cannot by itself tell you what those states are "like". It can only say that X was probably caused by a Y that was itself caused by Z.
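As a toy illustration of what "identifying the minimal state machine" means (the function names here are my own invention, not anyone's proposal): a brute-force search for the smallest autonomous Moore machine whose output stream reproduces an observed sequence. The search recovers the number of hidden states and their transition structure, but says nothing about what those states are "like":

```python
from itertools import product

def reproduces(seq, trans, out):
    """Check whether the deterministic machine given by transition
    table `trans` and output table `out` emits `seq` from state 0."""
    s = 0
    for symbol in seq:
        if out[s] != symbol:
            return False
        s = trans[s]
    return True

def minimal_machine(seq, max_states=5):
    """Return the smallest (state count, transitions, outputs) whose
    output stream begins with `seq`, searching state counts in order."""
    alphabet = sorted(set(seq))
    for n in range(1, max_states + 1):
        for trans in product(range(n), repeat=n):
            for out in product(alphabet, repeat=n):
                if reproduces(seq, trans, out):
                    return n, trans, out
    return None

# An alternating observation stream needs two hidden states:
print(minimal_machine("ababab"))  # -> (2, (1, 0), ('a', 'b'))
```

The result asserts only a causal skeleton - state 0 causes state 1, which causes state 0 - which is exactly the "X caused by Y caused by Z" limit described above.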

Among the properties of human consciousness are knowledge that something exists, knowledge that consciousness exists, and a long string of other facts about the nature of what we experience. Even if an AI scientist employing a computational epistemology managed to produce a model of the world which correctly identified the causal relations between consciousness, its knowledge, and the objects of its knowledge, the AI scientist would not know that its X, Y, and Z refer to, say, "knowledge of existence", "experience of existence", and "existence". The same might be said of any successful analysis of qualia, knowledge of qualia, and how they fit into neurophysical causality.

It would be up to human beings - for example, the AI's programmers and handlers - to ensure that entities in the AI's causal model were given appropriate significance. And here we approach the second big problem, the enthusiasm for outsourcing the solution of hard problems of FAI design to the AI and/or to simulated human beings. The latter is a somewhat impractical idea anyway, but here I want to highlight the risk that the AI's designers will have false ontological beliefs about the nature of mind, which are then implemented apriori in the AI. That strikes me as far more likely than implanting a wrong apriori about physics; computational epistemology can discriminate usefully between different mathematical models of physics, because it can judge one state machine model as better than another, and current physical ontology is essentially one of interacting state machines. But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

In a phrase: to use computational epistemology is to commit to state-machine materialism as your apriori ontology. And the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can. Something about the ontological constitution of consciousness makes it possible for us to experience existence, to have the concept of existence, to know that we are experiencing existence, and similarly for the experience of color, time, and all those other aspects of being that fit so uncomfortably into our scientific ontology.

It must be that the true epistemology, for a conscious being, is something more than computational epistemology. And maybe an AI can't bootstrap its way to knowing this expanded epistemology - because an AI doesn't really know or experience anything, only a consciousness, whether natural or artificial, does those things - but maybe a human being can.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology. But transcendental phenomenology is very unfashionable now, precisely because of apriori materialism. People don't see what "categorial intuition" or "adumbrations of givenness" or any of the other weird phenomenological concepts could possibly mean for an evolved Bayesian neural network; and they're right, there is no connection. But the idea that a human being is a state machine running on a distributed neural computation is just a hypothesis, and I would argue that it is a hypothesis in contradiction with so much of the phenomenological data that we really ought to look for a more sophisticated refinement of the idea.

Fortunately, 21st-century physics, if not yet neurobiology, can provide alternative hypotheses in which complexity of state originates from something other than concatenation of parts - for example, from entanglement, or from topological structures in a field. In such ideas I believe we see a glimpse of the true ontology of mind: one which from the inside resembles the ontology of transcendental phenomenology; which in its mathematical, formal representation may involve structures like iterated Clifford algebras; and which in its biophysical context would appear to be describing a mass of entangled electrons in that hypothetical sweet spot, somewhere in the brain, where there's a mechanism to protect against decoherence.

Of course this is why I've talked about "monads" in the past, but my objective here is not to promote neo-monadology, that's something I need to take up with neuroscientists and biophysicists and quantum foundations people. What I wish to do here is to argue against the completeness of computational epistemology, and to caution against the rejection of phenomenological data just because it conflicts with state-machine materialism or computational epistemology. This is an argument and a warning that should be meaningful for anyone trying to make sense of their existence in the scientific cosmos, but it has a special significance for this arcane and idealistic enterprise called "friendly AI". My message for friendly AI researchers is not that computational epistemology is invalid, or that it's wrong to think about the mind as a state machine, just that all that isn't the full story. A monadic mind would be a state machine, but ontologically it would be different from the same state machine running on a network of a billion monads. You need to do the impossible one more time, and make your plans bearing in mind that the true ontology is something more than your current intellectual tools allow you to represent.

148 comments

Upvoted for the accurate and concise summary of the big picture according to SI.

There are qualities which are part of reality - we know this because they are part of experience, and experience is part of reality - but which are not part of our physical description of reality.

This continues to strike me as a category error akin to thinking that our knowledge of integrated circuit design is incomplete because we can't use it to account for Java classes.

I have been publicly and repeatedly skeptical of any proposal to make an AI compute the answer to a philosophical question you don't know how to solve yourself, not because it's impossible in principle, but because it seems quite improbable and definitely very unreliable to claim that you know that computation X will output the correct answer to a philosophical problem and yet you've got no idea how to solve it yourself. Philosophical problems are not problems because they are well-specified and yet too computationally intensive for any one human mind. They're problems because we don't know what procedure will output the right answer, and if we had that procedure we would probably be able to compute the answer ourselves using relatively little computing power. Imagine someone telling you they'd written a program requiring a thousand CPU-years of computing time to solve the free will problem.

And once again, I expect that the hardest part of the FAI problem is not "winning the intelligence race" but winning it with an AI design restricted to the much narrower part of the cognitive space that integrates with the F part, i.e., all algorithms must be conducive to clean self-modification. That's the hard part of the work.

Wei Dai · 12y
What do you think the chances are that there is some single procedure that can be used to solve all philosophical problems? That for example the procedure our brains are using to try to solve decision theory is essentially the same as the one we'll use to solve consciousness? (I mean some sort of procedure that we can isolate and not just the human mind as a whole.) If there isn't such a single procedure, I just don't see how we can possibly solve all of the necessary philosophical problems to build an FAI before someone builds an AGI, because we are still at the stage where every step forward we make just lets us see how many more problems there are (see Open Problems Related to Solomonoff Induction for example) and we are making forward steps so slowly, and worse, there's no good way of verifying that each step we take really is a step forward and not some erroneous digression.
Eliezer Yudkowsky · 12y
Very low, of course. (Then again, relative to the perspective of nonscientists, there turned out to be a single procedure that could be used to solve all empirical problems.) But in general, problems always look much more complicated than solutions do; the presence of a host of confusions does not indicate that the set of deep truths underlying all the solutions is noncompact.
Wei Dai · 12y
Do you think it's reasonable to estimate the amount of philosophical confusion we will have at some given time in the future by looking at the amount of philosophical confusions we currently face, and comparing that to the rate at which we are clearing them up minus the rate at which new confusions are popping up? If so, how much of your relative optimism is accounted for by your work on meta-ethics? (Recall that we have a disagreement over how much progress that work represents.) Do you think my pessimism would be reasonable if we assume for the sake of argument that that work does not actually represent much progress?
Mitchell_Porter · 12y
This is why I keep mentioning transcendental phenomenology. It is for philosophy what string theory is for physics, a strong candidate for the final answer. It's epistemologically deeper than natural science or mathematics, which it treats as specialized forms of rational subjective activity. But it's a difficult subject, which is why I mention it more often than I explain it. To truly teach it, I'd first need to understand, reproduce, and verify all its claims and procedures for myself, which I have not done. But I've seen enough to be impressed. Regardless of whether it is the final answer philosophically, I guarantee that mastering its concepts and terminology is a goal that would take a person philosophically deeper than anything else I could recommend.
[anonymous] · 12y
So many questions! Excited for the Open Problems Sequence.

You invoke as granted the assumption that there's anything besides your immediately present self (including your remembered past selves) that has qualia, but then you deny that some anticipatable things will have qualia. Presumably there are some philosophically informed epistemic-ish rules that you have been using, and implicitly endorsing, for the determination of whether any given stimuli you encounter were generated by something with qualia, and there are some other meta-philosophical epistemology-like rules that you are implicitly using and endorsing for determining whether the first set of rules was correct. Can you highlight any suitable past discussion you have given of the epistemology of the problem of other minds?

eta: I guess the discussions here, or here, sort of count, in that they explain how you could think what you do... except they're about something more like priors than like likelihoods.

In retrospect, the rest of your position is like that too, based on sort of metaphysical arguments about what is even coherently postulable, though you treat the conclusions with a certainty I don't see how to justify (e.g. one of your underlying concepts might not be fundamental ...)

the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can.

I may have missed the part where you explained why qualia can't fit into a state-machine model of the universe. Where does the incompatibility come from? I'm aware that it looks like no human-designed mathematical objects have experienced qualia yet, which is some level of evidence for it being impossible, but not so strong that I think you're justified in saying a materialist/mathematical platonist view of reality can never account for conscious experiences.

dbc · 12y
I think Mitchell's point is that we don't know whether state-machines have qualia, and the costs of making assumptions could be large.

Parts of this I think are brilliant, other parts I think are absolute nonsense. Not sure how I want to vote on this.

there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology.

You are anthropomorphizing the universe.

Mitchell_Porter · 12y
Phenomenology is the study of appearances. The only part of the universe that it is directly concerned with is "you experiencing existence". That part of the universe is anthropomorphic by definition.
David_Allen · 12y
It seems possible for an AI to engage in a process of search within the ontological Hilbert space. It may not be efficient, but a random search should make all parts of any particular space accessible, and a random search across a Hilbert space of ontological spaces should make other types of ontological spaces accessible, and a random search across a Hilbert space containing Hilbert spaces of ontological spaces should... and on up the meta-chain. It isn't clear why such a system wouldn't have access to any ontology that is accessible by the human mind.
Mitchell_Porter · 12y
My original formulation is that AI = state-machine materialism = computational epistemology = a closed circle. However, it's true that you could have an AI which axiomatically imputes a particular phenomenology to the physical states, and such an AI could even reason about the mental life associated with transhumanly complex physical states, all while having no mental life of its own. It might be able to tell us that a certain type of state machine is required in order to feel meta-meta-pain, meta-meta-pain being something that no human being has ever felt or imagined, but which can be defined combinatorially as a certain sort of higher-order intentionality. However, an AI cannot go from just an ontology of physical causality, to an ontology which includes something like pain, employing only computational epistemology. It would have to be told that state X is "pain". And even then it doesn't really know that to be in state X is to feel pain. (I am assuming that the AI doesn't possess consciousness; if it does, then it may be capable of feeling pain itself, which I take to be a prerequisite for knowing what pain is.)
David_Allen · 12y
Continuing my argument. It appears to me that you are looking for an ontology that provides a natural explanation for things like "qualia" and "consciousness" (perhaps by way of phenomenology). You would refer to this ontology as the "true ontology". You reject Platonism, "an ontology which reifies mathematical or computational abstractions", because things like "qualia" are absent. From my perspective, your search for the "true ontology"--which privileges the phenomenological perspective of "consciousness"--is indistinguishable from the scientific realism that you reject under the name "Platonism"--which (by some accounts) privileges a materialistic or mathematical perspective of everything. For example, using a form of your argument I could reject both of these approaches to realism because they fail to directly account for the phenomenological existence of SpongeBob SquarePants, and his wacky antics. Much of what you have written roughly matches my perspective, so to be clear I am objecting to the following concepts and many of the conclusions you have drawn from them:

* "true ontology"
* "true epistemology"
* "Consciousness objectively exists"

I claim that variants of antirealism have more to offer than realism. References to "true" and "objective" have implied contexts from which they must be considered, and without those contexts they hold no meaning. There is nothing that we can claim to be universally true or objective that does not have this dependency (including this very claim (meta-recursively...)). Sometimes this concept is stated as "we have no direct access to reality". So from what basis can we evaluate "reality" (whatever that is)? We clearly are evaluating reality from within our dynamic existence, some of which we refer to as consciousness. But consciousness can't be fundamental, because its identification appears to depend upon itself performing the identification; and a description of consciousness appears to be incomplete in that it do...
Mitchell_Porter · 12y
People have noticed circular dependencies among subdisciplines of philosophy before. A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology. Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism. That's not my philosophy; I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn't an endless merry-go-round, it's a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent. Or until you discover the phenomenological counterpart of Gödel's theorem. In what you write I don't see a proof that foundations don't exist or can't be reached. Perhaps they can't, but in the absence of a proof, I see no reason to abandon cognitive optimism.
David_Allen · 12y
I have read many of your comments and I am uncertain how to model your meanings for 'ontology', 'epistemology' and 'methodology', especially in relation to each other. Do you have links to sources that describe these types of cycles, or are you willing to describe the cycles you are referring to--in the process establishing the relationship between these terms? The term "cycles" doesn't really capture my sense of the situation. Perhaps the sense of recurrent hypergraphs is closer. Also, I do not limit my argument only to things we describe as cognitive contexts. My argument allows for any type of context of evaluation. For example an antennae interacting with a photon creates a context of evaluation that generates meaning in terms of the described system. I think that this epistemology actually justifies something more like an ontological perspectivism, but it generalizes the context of evaluation beyond the human centric concepts found in relativism and perspectivism. Essentially it stops privileging human consciousness as the only context of evaluation that can generate meaning. It is this core idea that separates my epistemology from most of the related work I have found in epistemology, philosophy, linguistics and semiotics. I'm glad you don't see those proofs because I can't claim either point from the implied perspective of your statement. Your statement assumes that there exists an objective perspective from which a foundation can be described. The problem with this concept is that we don't have access to any such objective perspective. We can only identify the perspective as "objective" from some perspective... which means that the identified "objective" perspective depends upon the perspective that generated the label, rendering the label subjective. You do provide an algorithm for finding an objective description: Again from this it seems that while you reject some current conclusions of science, you actually embrace scientific realism--that there i
Mitchell_Porter · 12y
Let's say that ontology is the study of that which exists, epistemology the study of knowledge, phenomenology the study of appearances, and methodology the study of technique. There's naturally an interplay between these disciplines. Each discipline has methods, the methods might be employed before you're clear on how they work, so you might perform a phenomenological study of the methods in order to establish what it is that you're doing. Reflection is supposed to be a source of knowledge about consciousness, so it's an epistemological methodology for constructing a phenomenological ontology... I don't have a formula for how it all fits together (but if you do an image search on "hermeneutic circle" you can find various crude flowcharts). If I did, I would be much more advanced. I wouldn't call that meaning, unless you're going to explicitly say that there are meaning-qualia in your antenna-photon system. Otherwise it's just cause and effect. True meaning is an aspect of consciousness. Functionalist "meaning" is based on an analogy with meaning-driven behavior in a conscious being. Does your philosophy have a name? Like "functionalist perspectivism"?
David_Allen · 12y
Thanks for the description. That would place the core of my claims as an ontology, with implications for how to approach epistemology, and phenomenology. I recognize that my use of meaning is not normative. I won't defend this use because my model for it is still sloppy, but I will attempt to explain it. The antenna-photon interaction that you refer to as cause and effect I would refer to as a change in the dynamics of the system, as described from a particular perspective. To refer to this interaction as cause and effect requires that some aspect of the system be considered the baseline; the effect then is how the state of the system is modified by the influencing entity. Such a perspective can be adopted and might even be useful. But the perspective that I am holding is that the antenna and the photon are interacting. This is a process that modifies both systems. The "meaning" that is formed is unique to the system; it depends on the particulars of the systems and their interactions. Within the system that "meaning" exists in terms of the dynamics allowed by the nature of the system. When we describe that "meaning" we do so in the terms generated from an external perspective, but that description will only capture certain aspects of the "meaning" actually generated within the system. How does this description compare with your concept of "meaning-qualia"? I think that both functionalism and perspectivism are poor labels for what I'm attempting to describe; because both philosophies pay too much attention to human consciousness and neither are set to explain the nature of existence generally. For now I'm calling my philosophy the interpretive context hypothesis (ICH), at least until I discover a better name or a better model.
0David_Allen12y
The contexts from which you identify "state-machine materialism" and "pain" appear to be very different from each other, so it is no surprise that you find no room for "pain" within your model of "state-machine materialism". You appear to identify this issue directly in this comment.

Looking for the qualia of "pain" in a state-machine model of a computer is like trying to find out what my favorite color is by using a hammer to examine the contents of my head. You are simply using the wrong interface to the system. If you examine the compressed and encrypted bit sequence stored on a DVD as a series of 0 and 1 characters, you will not be watching the movie. If you don't understand the Russian language, then you will not find the subtle twists of plot in a novel written in Russian compelling. If you choose some perspectives on Searle's Chinese room thought experiment you will not see the Chinese speaker, you will only see the mechanism that generates Chinese symbols.

So stuff like "qualia", "pain", "consciousness", and "electrons" only exist (hold meaning) from perspectives that are capable of identifying them. From other perspectives they are non-existent (have no meaning). If you choose a perspective on "conscious experience" that requires a specific sort of physical entity to be present, then a computer without that will never qualify as "conscious", for you. Others may disagree, perhaps pointing out aspects of its responses to them, or how some aspects of the system are functionally equivalent to the physical entity you require.

So, which is the right way to identify consciousness? To figure that out you need to create a perspective from which you can identify one as right, and the other as wrong.

Your point sounds similar to Wei's point that solving FAI requires metaphilosophy.

Maybe I missed this, but did you ever write up the Monday/Tuesday game with your views on consciousness? On Monday, consciousness is an algorithm running on a brain, and when people say they have consciously experienced something, they are reporting the output of this algorithm. On Tuesday, the true ontology of mind resembles the ontology of transcendental phenomenology. What's different?

I'm also confused about why an algorithm couldn't represent a mass of entangled electrons.

3novalis12y
Oh, also: imagine that SIAI makes an AI. Why should they make it conscious at all? They're just trying to create an intelligence, not a consciousness. Surely, even if consciousness requires whatever it is you think it requires, an intelligence does not.
1David_Gerard12y
Indeed. Is my cat conscious? It's certainly an agent (it appears to have its own drives and motivations), with considerable intelligence (for a cat) and something I'd call creativity (it's an ex-stray with a remarkable ability to work out how to get into places with food it's after).
4David_Gerard12y
And the answer appears to be: yes. “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

That is all very interesting, but what difference does it practically make?

Suppose I were trying to build an AGI out of computation and physical sensors and actuators, and I had what appeared to me to be a wonderful new approach, and I was unconcerned with whether the device would "really" think or have qualia, just with whether it worked to do practical things. Maybe I'm concerned with fooming and Friendliness, but again, only in terms of the practical consequences, i.e. I don't want the world suddenly turned into paperclips. At what point, if any, would I need to ponder these epistemological issues?

4Mitchell_Porter12y
It will be hard for your AGI to be an ethical agent if it doesn't know who is conscious and who is not.
5Richard_Kennaway12y
It's easy enough for us (leaving aside edge cases about animals, the unborn, and the brain dead, which in fact people find hard, or at least persistently disagree on). How do we do it? By any other means than our ordinary senses?
1JQuinton12y
I would argue that humans are not very good at this, if by "good" you mean a high success rate and a low false-positive rate for detecting consciousness. It seems to me that the only reason we have a high success rate for detecting consciousness is that our false-positive rate for detecting consciousness is also high (e.g. religion, ghosts, fear of the dark, etc.)
1haig12y
We have evolved moral intuitions such as empathy and compassion that underlie what we consider to be right or wrong. These intuitions only work because we consciously internalize another agent's subjective experience and identify with it. In other words, without the various quales that we experience we would have no foundation to act ethically. An unconscious AI that does not experience these quales could, in theory, act the way we think it should act by mimicking behaviors from a repertoire of rules (and ways to create further rules) that we give it, but that is a very brittle and complicated route, and is the route the SIAI has been taking because they have discounted qualia, which is what this post is really all about.
-1Mitchell_Porter12y
A human being does it by presuming that observed similarities, between themselves and the other humans around them, extend to the common possession of inner states. You could design an AI to employ a similar heuristic, though perhaps it would be pattern-matching against a designated model human, rather than against itself. But the edge cases show that you need better heuristics than that, and in any case one would expect the AI to seek consistency between its ontology of agents worth caring about and its overall ontology, which will lead it down one of the forking paths in philosophy of mind. If it arrives at the wrong terminus...
3Richard_Kennaway12y
I don't see how this is different from having the AI recognise a teacup. We don't actually know how we do it. That's why it is difficult to make a machine to do it. We also don't know how we recognise people. "Presuming that observed similarities etc." isn't a useful description of how we do it, and I don't think any amount of introspection about our experience of doing it will help, any more than that sort of thinking has helped to develop machine vision, or indeed any of the modest successes that AI has had.
1scav12y
Firstly, I honestly don't see how you came to the conclusion that the qualia you and I (as far as you know) experience are not part of a computational process. It doesn't seem to be a belief that makes testable predictions. Since the qualia of others are not accessible to you, you can't know that any particular arrangement of matter and information doesn't have them, including people, plants, and computers. You also cannot know whether my qualia feel anything like your own when subjected to the same stimuli. If you have any reason to believe they do (for your model of empathy to make sense), what reason do you have to believe it is due to something non-computable? It seems intuitively appealing that someone who is kind to you feels similarly to you and is therefore similar to you. It helps you like them, and reciprocate the kindness, which has advantages of its own. But ultimately, your experience of another's kindness is about the consequences to you, not their intentions or mental model of you. If a computer with unknowable computational qualia is successfully kind to me, I'll take that over a human with unknowable differently-computational qualia doing what they think would be best for me and fucking it up because they aren't very good at evaluating the possible consequences.
-2Mitchell_Porter12y
Qualia are part of some sort of causal process. If it's cognition, maybe it deserves the name of a computational process. It certainly ought to be a computable process, in the sense that it could be simulated by a computer.

My position is that a world described in terms of purely physical properties or purely computational properties does not contain qualia. Such a description itself would contain no reference to qualia. The various attempts of materialist philosophers of mind to define qualia solely in terms of physical or computational properties do not work. The physical and computational descriptions are black-box descriptions of "things with states", and you need to go into more detail about those states in order to be talking about qualia. Those more detailed descriptions will contain terms whose meaning one can only know by having been conscious and thereby being familiar with the relevant phenomenological realities, like pain. Otherwise, these terms will just be formal properties p, q, r... known only by how they enter into causal relations.

Moving one step up in controversialness, I also don't believe that computational simulation of qualia will itself produce qualia. This is because the current theories about what the physical correlates of conscious states are, already require an implausible sort of correspondence between mesoscopic "functional" states (defined e.g. by the motions of large numbers of ions) and the elementary qualia which together make up an overall state of consciousness. The theory that any good enough simulation of this will also have qualia requires that the correspondence be extended in ways that no-one anywhere can specify (thus see the debates about simulations running on giant look-up tables, or the "dust theory" of simulations whose sequential conscious states are scattered across the multiverse, causally disconnected in space and time). The whole situation looks intellectually pathological to me, and it's a lot simpler to supp
1scav12y
How is that simpler? If there is a theory that qualia can only occur in a specific sort of physical entity, then that theory must delineate all the complicated boundary conditions and exceptions as to why similar processes on entities that differ in various ways don't count as qualia. It must be simpler to suppose that qualia are informational processes that have certain (currently unknown) mathematical properties. When you can identify and measure qualia in a person's brain and understand truly what they are, THEN you can say whether they can or can't happen on a semiconductor and WHY. Until then, words are wind.
-1Mitchell_Porter12y
Physically, an "informational process" involves bulk movements of microphysical entities, like electrons within a transistor or ions across a cell membrane. So let's suppose that we want to know the physical conditions under which a particular quale occurs in a human being (something like a flash of red in your visual field), and that the physical correlate is some bulk molecular process, where N copies of a particular biomolecule participate. And let's say that we're confident that the quale does not occur when N=0 or 1, and that it does occur when N=1000. All I have to do is ask: for what magic value of N does the quale start happening?

People characteristically evade such questions; they wave their hands and say, that doesn't matter, there doesn't have to be a definite answer to that question. (Just as most MWI advocates do, when asked exactly when it is that you go from having one world to two worlds.) But let's suppose we have 1000 people, numbered from 1 to 1000, and in each one the potentially quale-inducing process is occurring, with that many copies of the biomolecule participating. We can say that person number 1 definitely doesn't have the quale, and person number 1000 definitely does, but what about the people in between? The handwaving non-answer, "there is no definite threshold", means that for people in the middle, with maybe 234 or 569 molecules taking part, the answer to the question "Are they having this experience or not?" is "none of the above". There's supposed to be no exact fact about whether they have that flash of red or not.

There is absolutely no reason to take that seriously as an intellectual position about the nature of qualia. It's actually a reductio ad absurdum of a commonly held view. The counterargument might be made: what about electrons in a transistor? There doesn't have to be an exact answer to the question, how many electrons is enough for the transistor to really be in the "1" state rather than the "0" state. But the rea
1scav12y
Why? I don't seem to experience qualia as all-or-nothing. I doubt that you do either. I don't see a problem with the amount of qualia experienced being a real number between 0 and 1 in response to varying stimuli of pain or redness. Therefore I don't see a problem with qualia being measurable on a similar scale across different informational processes with more or fewer neurons or other computing elements involved in the structure that generates them.
0Mitchell_Porter12y
Do you think that there is a slightly different quale for each difference in the physical state, no matter how minute that physical difference is?
1scav12y
I don't know. But I don't think so, not in the sense that it would feel like a different kind of experience. More or less intense, more definite or more ambiguous perhaps. And of course there could always be differences too small to be noticeable. As a wild guess based on no evidence, I suppose that different kinds of qualia have different functions (in the sense of uses, not mathematical mappings) in a consciousness, and equivalent functions can be performed by different structures and processes. I am aware of qualia (or they wouldn't be qualia), but I am not aware of the mechanism by which they are generated, so I have no reason to believe that mechanism could not be implemented differently and still have the same outputs, and feel the same to me.
0Mitchell_Porter12y
I have just expanded on the argument that any mapping between "physics" and "phenomenology" must fundamentally be an exact one. This does not mean that a proposed mapping, that would be inexact by microphysical standards, is necessarily false, it just means that it is necessarily incomplete. The argument for exactness still goes through even if you allow for gradations of experience. For any individual gradation, it's still true that it is what it is, and that's enough to imply that the fundamental mapping must be exact, because the alternative would lead to incoherent statements like "an exact physical configuration has a state of consciousness associated with it, but not a particular state of consciousness".

The requirement that any "law" of psychophysical correspondence must be microphysically exact in its complete form, including for physical configurations that we would otherwise regard as edge cases, is problematic for conventional functionalism, precisely because conventional functionalism adopts the practical rough-and-ready philosophy used by circuit designers. Circuit designers don't care if states intermediate between "definitely 0" and "definitely 1" are really 0, 1, or neither; they just want to make sure that these states don't show up during the operation of their machine, because functionally they are unpredictable, that's why their semantics would be unclear.

Scientists and ontologists of consciousness have no such option, because the principle of ontological non-vagueness (mentioned in the other comment) applies to consciousness. Consciousness objectively exists, it's not just a useful heuristic concept, and so any theory of how it relates to physics must admit of a similarly objective completion; and that means there must be a specific answer to the question, exactly what state(s) of consciousness, if any, are present in this physical configuration... there must be a specific answer to that question for every possible physical configuration. B
0scav12y
What predictions does your theory make?
2Mitchell_Porter12y
The idea, more or less, is that there is a big ball of quantum entanglement somewhere in the brain, and that's the locus of consciousness. It might involve phonons in the microfilaments, anyons in the microtubules, both or neither of these; it's presumably tissue-specific, involving particular cell types where the relevant structures are optimized for this role; and it must be causally relevant for conscious cognition, which should do something to pin down its anatomical location. You could say that one major prediction is just that there will be such a thing as respectable quantum neurobiology and cognitive quantum neuroscience. From a quantum-physical and condensed-matter perspective, biomolecules and cells are highly nontrivial objects. By now "quantum biology" has a long history, and it's a topic that is beloved of thinkers who are, shall we say, more poetic than scientific, but we're still at the very beginning of that subject. We basically know nothing about the dynamics of quantum coherence and decoherence in living matter. It's not something that's easily measured, and the handful of models that have been employed in order to calculate this dynamics are "spherical cow" models; they're radically oversimplified for the sake of calculability, and just a first step into the unknown. What I write on this subject is speculative, and it's idiosyncratic even when compared to "well-known" forms of quantum-mind discourse. I am more interested in establishing the possibility of a very alternative view, and also in highlighting implausibilities of the conventional view that go unnoticed, or which are tolerated because the conventional picture of the brain appears to require them.
0torekp12y
If this is an argument with the second sentence as premise, it's a non sequitur. I can give you a description of the 1000 brightest objects in the night sky without mentioning the Evening Star; but that does not mean that the night sky lacked the Evening Star or that my description was incomplete.
0Mitchell_Porter12y
The rest of the paragraph covers the case of indirect reference to qualia. It's sketchy because I was outlining an argument rather than making it, if you know what I mean. I had to convey that this is not about "non-computability".
4David_Gerard12y
Is a human who is dissociating conscious? Or one who spaces out for a couple of seconds then retcons continuous consciousness later (as appears to be what brains actually do)? Or one who is talking and doing complicated things while sleepwalking?
2David_Gerard12y
Indeed. We're after intelligence that behaves in a particular way. At what point do qualia enter our model? What do they do in a model? To answer this question we need to be using an expansion of the term "qualia" which can be observed from the outside.
0thomblake12y
I get the impression that Mitchell_Porter is tentatively accepting Eliezer's assertion that FAI should not be a person, but nonetheless those "epistemological issues" seem relevant to the content of ethics. A machine with the wrong ideas about ontology might make huge mistakes regarding what makes life worth living for humans.

Some brief attempted translation for the last part:

A "monad", in Mitchell Porter's usage, is supposed to be a somewhat isolatable quantum state machine, with states and dynamics factorizable somewhat as if it was a quantum analogue of a classical dynamic graphical model such as a dynamic Bayesian network (e.g., in the linked physics paper, a quantum cellular automaton). (I guess, unlike graphical models, it could also be supposed to not necessarily have a uniquely best natural decomposition of its Hilbert space for all purposes, like how with an ...

This needs further translation.

9Steve_Rayhawk12y
(It's also based on an intuition I don't understand that says that classical states can't evolve toward something like representational equilibrium the way quantum states can -- e.g. you can't have something that tries to come up with an equilibrium of anticipation/decisions, like neural approximate computation of Nash equilibria, but using something more like representations of starting states of motor programs that, once underway, you've learned will predictably try to search combinatorial spaces of options and/or redo a computation like the current one but with different details -- or that, even if you can get this sort of evolution in classical states, it's still knowably irrelevant. Earlier he invoked bafflingly intense intuitions about the obviously compelling ontological significance of the lack of spatial locality cues attached to subjective consciousness, such as "this quale is experienced in my anterior cingulate cortex, and this one in Wernicke's area", to argue that experience is necessarily nonclassically replicable. (As compared with, what, the spatial cues one would expect a classical simulation of the functional core of a conscious quantum state machine to magically become able to report experiencing?) He's now willing to spontaneously talk about non-conscious classical machines that simulate quantum ones (including not magically manifesting p-zombie subjective reports of spatial cues relating to its computational hardware), so I don't know what the causal role of that earlier intuition is in his present beliefs; but his reference to a "sweet spot", rather than a sweet protected quantum subspace of a space of network states or something, is suggestive, unless that's somehow necessary for the imagined tensor products to be able to stack up high enough.)
-1Mitchell_Porter12y
Let's go back to the local paradigm for explaining consciousness: "how it feels from the inside". On one side of the equation, we have a particular configuration of trillions of particles, on the other side we have a conscious being experiencing a particular combination of sensations, feelings, memories, and beliefs. The latter is supposed to be "how it feels to be that configuration".

If I ontologically analyze the configuration of particles, I'll probably do so in terms of nested spatial structures - particles in atoms in molecules in organelles in cells in networks. What if I analyze the other side of the equation, the experience, or even the conscious being having the experience? This is where phenomenology matters. Whenever materialists talk about consciousness, they keep interjecting references to neurons and brain computations even though none of this is evident in the experience itself. Phenomenology is the art of characterizing the experience solely in terms of how it presents itself.

So let's look for the phenomenological "parts" of an experience. One way to divide it up is into the different sensory modalities, e.g. that which is being seen versus that which is being heard. We can also distinguish objects that may be known multimodally, so there can be some cross-classification here, e.g. I see you but I also hear you. This synthesis of a unified perception from distinct sensations seems to be an intellectual activity, so I might say that there are some visual sensations, some auditory sensations, a concept of you, and a belief that the two types of sensations are both caused by the same external entity.

The analysis can keep going in many directions from here. I can focus just on vision and examine the particular qualities that make up a localized visual sensation (e.g. the classic three-dimensional color schemes). I can look at concepts and thoughts and ask how they are generated and compounded. When I listen to my own thinking, what exactly is going
4Mitchell_Porter12y
I don't know where you got the part about representational equilibria from. My conception of a monad is that it is "physically elementary" but can have "mental states". Mental states are complex so there's some sort of structure there, but it's not spatial structure. The monad isn't obtained by physically concatenating simpler objects; its complexity has some other nature.

Consider the Game of Life cellular automaton. The cells are the "physically elementary objects" and they can have one of two states, "on" or "off". Now imagine a cellular automaton in which the state space of each individual cell is a set of binary trees of arbitrary depth. So the sequence of states experienced by a single cell, rather than being like 0, 1, 1, 0, 0, 0,... might be more like (X(XX)), (XX), ((XX)X), (X(XX)), (X(X(XX)))... There's an internal combinatorial structure to the state of the single entity, and ontologically some of these states might even be phenomenal or intentional states.

Finally, if you get this dynamics as a result of something like the changing tensor decomposition of one of those quantum CAs, then you would have a causal system which mathematically is an automaton of "tree-state" cells, ontologically is a causal grid of monads capable of developing internal intentionality, and physically is described by a Hamiltonian built out of Pauli matrices, such as might describe a many-body quantum system. Furthermore, since the states of the individual cell can have great or even arbitrary internal complexity, it may be possible to simulate the dynamics of a single grid-cell in complex states, using a large number of grid-cells in simple states. The simulated complex tree-states would actually be a concatenation of simple tree-states. This is the "network of a billion simple monads simulating a single complex monad".
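The classical half of this idea (a cellular automaton whose cells hold binary trees rather than bits) can be sketched in a few lines of Python. This is purely a toy: the leaf symbol `X`, the depth cap, and the update rule (each cell becomes the pair of its two neighbours' trees, pruned) are my own arbitrary choices for illustration, since the comment deliberately specifies no particular dynamics. Trees like (X(XX)) are written here as nested tuples like `(X, (X, X))`.

```python
# Toy "tree-state" cellular automaton: like the Game of Life, but each cell's
# state is a binary tree over a single leaf symbol X instead of a bit.

X = "X"  # the single leaf symbol

def depth(tree):
    """Depth of a tree; a bare leaf has depth 0."""
    if tree == X:
        return 0
    left, right = tree
    return 1 + max(depth(left), depth(right))

def prune(tree, max_depth):
    """Truncate a tree to max_depth so the state space stays bounded."""
    if max_depth == 0 or tree == X:
        return X
    left, right = tree
    return (prune(left, max_depth - 1), prune(right, max_depth - 1))

def step(cells, max_depth=3):
    """One synchronous update on a ring of cells: each cell's next state is
    the pair (left neighbour's tree, right neighbour's tree), pruned."""
    n = len(cells)
    return [prune((cells[(i - 1) % n], cells[(i + 1) % n]), max_depth)
            for i in range(n)]

cells = [X, (X, X), X, ((X, X), X)]
cells = step(cells)  # each cell now carries combinatorial internal structure
```

The point the toy makes concrete is the contrast with Life: each cell still sits at a fixed grid position ("physically elementary"), but its state has internal combinatorial structure rather than being one of two values.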

Do you think that the outputs of human philosophers of mind, or physicists thinking about consciousness, can't be accurately modeled by computational processes, even with access to humans? If they can be predicted or heard, then they can be deferred to.

CEV is supposed to extrapolate our wishes "if we knew more", and the AI may be so sure that consciousness doesn't really exist in some fundamental ontological sense that it will override human philosophers' conclusions and extrapolate them as if they also thought consciousness doesn't exist in this ontological sense. (ETA: I think Eliezer has talked specifically about fixing people's wrong beliefs before starting to extrapolate them.) I share a similar concern, not so much about this particular philosophical problem, but that the AI will be wrong on some philosophical issue and reach some kind of disastrous or strongly suboptimal conclusion.

1Pentashagon12y
There's a possibility that we are disastrously wrong about our own philosophical conclusions. Consciousness itself may be ethically monstrous in a truly rational moral framework. Especially when you contrast the desire for immortality with the heat death. What is the utility of 3^^^3 people facing an eventual certain death versus even just 2^^^2 or a few trillion? I don't think there's a high probability that consciousness itself will turn out to be the ultimate evil but it's at least a possibility. A more subtle problem may be that allowing consciousness to exist in this universe is evil. It may be far more ethical to only allow consciousness inside simulations with no defined end and just run them as long as possible with the inhabitants blissfully unaware of their eventual eternal pause. They won't cease to exist so much as wait for some random universe to exist that just happens to encode their next valid state...
4moridinamael12y
You could say the same of anyone who has ever died, for some sense of "valid" ... This, and similar waterfall-type arguments lead me to suspect that we haven't satisfactorily defined what it means for something to "happen."
3Pentashagon12y
It depends on the natural laws the person lived under. The next "valid" state of a dead person is decomposition. I don't find the waterfall argument compelling because the information necessary to specify the mappings is more complex than the computed function itself.
4fubarobfusco12y
I'm hearing an invocation of the Anti-Zombie Principle here, i.e.: "If simulations of human philosophers of mind will talk about consciousness, they will do so for the same reasons that human philosophers do, namely, that they actually have consciousness to talk about" ...
1CarlShulman12y
Yes. Not necessarily, in the mystical sense.
5fubarobfusco12y
Okay, to clarify: If 'consciousness' refers to anything, it refers to something possessed both by human philosophers and accurate simulations of human philosophers. So one of the following must be true: ① human philosophers can't be accurately simulated, ② simulated human philosophers have consciousness, or ③ 'consciousness' doesn't refer to anything.
1CarlShulman12y
Dualists needn't grant your first sentence, claiming epiphenomena. I am talking about whether mystical mind features would screw up the ability of an AI to carry out our aims, not arguing for physicalism (here).
1Mitchell_Porter12y
I agree that a totally accurate simulation of a philosopher ought to arrive at the same conclusions as the original. But a totally accurate simulation of a human being is incredibly hard to obtain. I've mentioned that I have a problem with outsourcing FAI design to sim-humans, and that I have a problem with the assumption of "state-machine materialism". These are mostly different concerns. Outsourcing to sim-humans is just wildly impractical, and it distracts real humans from gearing up to tackle the problems of FAI design directly. Adopting state-machine materialism is something you can do, right now, and it will shape your methods and your goals. The proverbial 500-subjective-year congress of sim-philosophers might be able to resolve the problem of state-machine materialism for you, but then so would the discovery of communications from an alien civilization which had solved the problem. I just don't think you can rely on either method, and I also think real humans do have a chance of solving the ontological problem by working on it directly.

The simple solution is to demystify qualia: I don't understand the manner in which ionic transfer within my brain appears to create sensation, but I don't have to make the jump from that to 'sensation and experience are different from brain state'. All of my sense data comes through channels- typically as an ion discharge through a nerve or a chemical in my blood. Those ion discharges and chemicals interact with brain cells in a complicated manner, and "I" "experience" "sensation". The experience and sensation are no more mysterious than the identity.

I find Mitchell_Porter difficult to understand, but I've voted this up just for the well-written summary of the SI's strategy (can an insider tell me whether the summary is accurate?)

Just one thing though - I feel like this isn't the first time I've seen How An Algorithm Feels From Inside linked to as if it was talking about qualia - which it really isn't. It would be a good title for an essay about qualia, but the actual text is more about general dissolving-the-question stuff.

5David_Gerard12y
What is the expansion of your usage of "qualia"? The term used without more specificity is too vague when applied to discussions of reducibility and materialism (and I did just check SEP on the matter and it marked the topic "hotly debated"); there is a meaning in philosophical use which could indeed be reasonably described as something very like what How An Algorithm Feels From Inside describes.
4haig12y
"How an algorithm feels from inside" discusses a particular quale, that of the intuitive feeling of holding a correct answer from inside the cognizing agent. It does not touch upon what types of physically realizable systems can have qualia.
1David_Gerard12y
Um, OK. What types of physically realizable systems can have qualia? Evidently I'm unclear on the concept.
0haig12y
That is the $64,000 question.
David_Gerard (score 1, 12y):
It's not yet clear to me that we're talking about anything that's anything. I suppose I'm asking for something that does make that a bit clearer.
haig (score 0, 12y):
Ok, so we can say with confidence that humans and other organisms with developed neural systems experience the world subjectively; maybe not in exactly similar ways, but conscious experience seems likely for these systems unless you are a radical skeptic or solipsist. Based on our current physical and mathematical laws, we can reductively analyse these systems and see how each subsystem functions, and eventually, with sufficient technology, we'll be able to have a map of the neural correlates that are active in certain environments and which produce certain qualia. Neuroscientists are on that path already.

But are only physical nervous systems capable of producing a subjective experience? If we emulate a brain with enough precision, with sufficient input and output to an environment, computationalists assume that it will behave and experience the same as if it were a physical wetware brain. Given this assumption, we conclude that the simulated brain, which is just some machine code operating on transistors, has qualia. So now qualia are attributed to a software system.

How much can we diverge from this perfect software emulation and still have some system that experiences qualia? From the other end, by building a cognitive agent piecemeal in software without reference to biology, what types of dynamics will cause qualia to arise, if at all? The simulated brain is just data, as is Microsoft Windows, but Windows isn't conscious, or so we think. Looking at the electrons moving through the transistors tells us nothing about which running software has qualia and which does not.

On the other hand, it might be the case that deeper physics beyond the classical must be involved for the system to have qualia. In that case, classical computers will be unable to produce software that experiences qualia, and machines that exploit quantum properties may be needed. This is still speculative, but then the whole question of qualia is still speculative. So now, when designing an AI that will
Giles (score 0, 12y):
Can you explain?
David_Gerard (score 1, 12y):
I'm not sure how to break that phrase down further. Section 3 of that SEP article covers the issue, but the writing is as awful as most of the SEP. It's a word for the redness of red as a phenomenon of the nervous system, which is broadly similar between humans (since complex adaptations have to be universal to evolve). But all this is an attempt to rescue the word "qualia". Again, I suggest expanding the word into whatever it is we're actually talking about, in terms of the problem it's being raised in connection with.

So what should I make of this argument if I happen to know you're actually an upload running on classical computing hardware?

Mitchell_Porter (score -1, 12y):
That someone managed to produce an implausibly successful simulation of a human being. There's no contradiction in saying "zombies are possible" and "zombie-me would say that zombies are possible". (But let me add that I don't mean the sort of zombie which is supposed to be just the physical part of me, with an epiphenomenal consciousness subtracted, because I don't believe that consciousness is epiphenomenal. By a zombie I mean a simulation of a conscious being, in which the causal role of consciousness is being played by a part that isn't actually conscious.)
Risto_Saarelma (score 4, 12y):
So if you accidentally cut the top of your head open while shaving and discovered that someone had gone and replaced your brain with a high-end classical-computing CPU sometime while you were sleeping, you couldn't accept actually being an upload, since the causal structure that produces your thoughts about having qualia is still there? (I suppose you might object to the assumed-to-be-zombie upload being referred to as 'you' as well.)

The reason I'm asking is that I'm a bit confused about exactly where the problems from just the philosophical part would come in with the outsourcing-to-uploaded-researchers scenario. Some kind of more concrete prediction (like that a neuromorphic AI architecturally isomorphic to a real human central nervous system just plain won't ever run as intended until you build a quantum octonion monad CPU to house the qualia bit) would be a lot less confusing a stance, but I don't think I've seen you take that.
Cyan (score 2, 12y):
I'm going to collect some premises that I think you affirm:

* consciousness is something most or all humans have; likewise for the genes that encode this phenotype
* consciousness is a quantum phenomenon
* the input-output relation of the algorithm that the locus of consciousness implements can be simulated to arbitrary accuracy (with difficulty)
* if the simulation isn't implemented with the right kind of quantum system, it won't be conscious

I have some questions about the implications of these assertions.

* Do you think the high penetrance of consciousness is a result of founder effect + neutral drift, or the result of selection (or something else)?
* What do you think is the complexity class of the algorithm that the locus of consciousness implements?
* If you answered "selection" to the first question, what factors do you think contributed to the selection of the phenotype that implements that algorithm in a way that induces consciousness as a "causal side-effect"?
Mitchell_Porter (score 0, 12y):
It's anthropically necessary that the ontology of our universe permits consciousness, but selection just operates on state machines, and I would guess that self-consciousness is adaptive because of its functional implications. So this is like looking for an evolutionary explanation of why magnetite can become magnetized. Magnetite may be in the brain of birds because it helps them to navigate, and it helps them to navigate because it can be magnetized; but the reason that this substance can be magnetized has to do with physics, not evolution. Similarly, the alleged quantum locus may be there because it has a state-machine structure permitting reflective cognition, and it has that state-machine structure because it's conscious; but it's conscious because of some anthropically necessitated ontological traits of our universe, not because of its useful functions. Evolution elsewhere may have produced unconscious intelligences with brains that only perform classical computations.
Cyan (score 0, 12y):
I think you have mistaken the thrust of my questions. I'm not asking for an evolutionary explanation of consciousness per se -- I'm trying to take your view as given and figure out what useful functions one ought to expect to be associated with the locus of consciousness.
Mitchell_Porter (score 0, 12y):
What does conscious cognition do that unconscious cognition doesn't do? The answer to that tells you what consciousness is doing (though not whether these activities are useful...).
[anonymous] (score 0, 12y):
So if you observed such a classical upload passing exceedingly carefully designed and administered Turing tests, you wouldn't change your position on this issue? Is there any observation which would falsify your position?

Although the abstract concept of a computer program (the abstractly conceived state machine which it instantiates) does not contain qualia, people often treat programs as having mind-like qualities, especially by imbuing them with semantics - the states of the program are conceived to be "about" something, just like thoughts are.

So far as I can tell, I am also in the set of programs that are treated as having mind-like qualities by imbuing them with semantics. We go to a good deal of trouble to teach people to treat themselves and others as pe...

David_Gerard (score 6, 12y):
Consciousness is not continuous - it appears to be something we retcon after the fact.

But the idea that a human being is a state machine running on a distributed neural computation is just a hypothesis, and I would argue that it is a hypothesis in contradiction with so much of the phenomenological data that we really ought to look for a more sophisticated refinement of the idea.

I'm not aware of any "phenomenological data" that contradicts computationalism.

You have to factor in the idea that human brains have evolved to believe themselves to be mega-special and valuable. Once you have accounted for this, no phenomenological data contradicts computationalism.


You need to do the impossible one more time, and make your plans bearing in mind that the true ontology [...] something more than your current intellectual tools allow you to represent.

With the "is" removed and replaced by an implied "might be", this seems like a good sentiment...

...well, given scenarios in which there were some other process that could come to represent it, such that there'd be a point in using (necessarily-)current intellectual tools to figure out how to stay out of those processes' way...

...and depending on the re...

In addition to being a great post overall, the first ~half of the post is a really excellent and compact summary of huge and complicated interlocking ideas. So, thanks for writing that, it's very useful to be able to see how all the ideas fit together at a glance, even if one already has a pretty good grasp of the ideas individually.

I've formed a tentative hypothesis that some human beings experience their own subjective consciousness much more strongly than others. An even more tentative explanation for why this might happen is that perhaps the brain re...

Upvoted for clarity.

I think, along with most LWers, that your concerns about qualia and the need for a new ontology are mistaken. But even granting that part of your argument, I don't see why it is problematic to approach the FAI problem through simulation of humans. Yes, you would only be simulating their physical/computational aspects, not the ineffable subjectiveness, but does that loss matter, for the purposes of seeing how the simulations react to different extrapolations and trying to determine CEV? Only if a) the qualia humans experience are relate...

Filipe (score 2, 12y):
This seems essentially the same answer as the most upvoted comment on the thread. Yet, you were at -2 just a while ago. I wonder why.
Alejandro1 (score 7, 12y):
I wondered too, but I don't like the "why the downvotes?" attitude when I see it in others, so I refrained from asking. (Fundamental attribution error lesson of the day: what looks like a legitimate puzzled query from the inside looks like being a whiner from the outside.) My main hypothesis was that the "upvoted for clarity" may have bugged some who saw the original post as obscure. And I must admit that the last paragraphs were much more obscure than the first ones.

I think you're on the right track with your initial criticisms, but qualia are the wrong response. Qualia are a product of making the same mistake as the Platonists: reifying the qualities of objects into mental entities. But if you take the alternative (IMO correct) approach, leaving qualities in the world where they belong, you get a similar sort of critique, because clearly reducing the world to just the aspects that scientists measure is a non-starter (note that it's not even the case that qualitative aspects can't be measured - i.e., you can identi...

I've been trying to find a way to empathically emulate people who talk about quantum consciousness for a while, so far with only moderate success. Mitchell, I'm curious if you're aware of the work of Christof Koch and Giulio Tononi, and if so, could you speak to their approach?

For reference (if people aren't familiar with the work already) Koch's team is mostly doing experiments... and seems to be somewhat close to having mice that have genes knocked out so that they "logically would seem" to lack certain kinds of qualia that normal mice "...

shminux (score 4, 12y):
It bugs me when people talk about "quantum consciousness", given that classical computers can do anything quantum computers can do, only sometimes slower.
Mitchell_Porter (score 3, 12y):
IIT's measure of "information integration", phi, is still insufficiently exact to escape the "functionalist sorites problem". It could be relevant for a state-machine analysis of the brain, but I can't see it being enough to specify the mapping between physical and phenomenological states. Also, Tononi's account of conscious states seems to be just at the level of sensation. But this is an approach which could converge with mine if the right extra details were added.

"We" are a heterogeneous group. Chopra and Penrose: not much in common. Besides, even if you believe consciousness can arise from classical computation, if you also believe in many worlds then quantum concepts do play a role in your theory of mind, in that you say that the mind consists of interactions between distinct states of decohered objects. Figure out how Tononi's "phi" could be calculated for the distinct branches of a quantum computer, and lots of people will want to be your friend.
JenniferRM (score 3, 12y):
If I understand what you're calling the "functionalist sorites problem", it seems to me that Integrated Information Theory is meant to address almost exactly that issue, with its "phi" parameter being a measure of something like the degree (in bits) to which an input is capable of exerting influence over a behavioral outcome.

Moreover, qualia, at least as I seem to experience them, are non-binary. Merely hearing the word "red" causes aspects of my present environment to leap to salience in a way that I associate with those facets of the world being more able to influence my subsequent behavior... or, to put it much more prosaically: reminders can, in fact, bring reminded content to my attention and thereby actually work. Equally, however, I frequently notice my output having probably been influenced by external factors that were in my consciousness to only a very minor degree, such that it would fall under the rubric of priming. Maybe this is ultimately a problem of generalizing from one example? Maybe I have many gradations of conscious awareness and you have binary awareness, and we're each assuming homogeneity where none exists?

Solving a fun problem and having lots of people want to be my friend sounds neat... like a minor goad to working on the problem in my spare time and seeing if I can get a neat paper out of it? But I suspect you're overestimating people's interest, and I still haven't figured out the trick of being paid well to play with ideas, so until then schema inference software probably pays the bills more predictably than trying to rid the world of quantum woo. There are about 1000 things I could spend the next few years on, and I only get to do maybe 2-5 of them, and then only in half-assed ways unless I settle on ONLY one of them. Hobby quantum-consciousness research is ~8 on the list and unlikely to actually get many brain cycles in the next year :-P
Mitchell_Porter (score 2, 12y):
I posed the functionalist sorites problem in the form of existence vs nonexistence of a specific quale, but it can equally be posed in the form of one state of consciousness vs another, where the difference may be as blatant or as subtle as you wish. The question is: what are the exact physical conditions under which a completely specific quale or state of consciousness exists? And we can highlight the need for exactness by asking at the same time what the exact conditions are under which no quale occurs, or under which the other state of consciousness occurs; and then considering edge cases, where the physical conditions are intermediate between one vague specification and another vague specification.

For the argument to work, you must be clear on the principle that any state of consciousness is exactly something, even if we are not totally aware of it or wouldn't know how to completely describe it. This principle, which amounts to saying that there is no such thing as entities which are objectively vague, is one that we already accept when discussing physics, I hope.

Suppose we are discussing what the position of an unmeasured electron is. I might say that it has a particular position; I might say that it has several positions or all positions, in different worlds; I might say that it has no position at all, that it just isn't located in space right now. All of those are meaningful statements. But to say that it has a position, but it doesn't have a particular position, is conceptually incoherent. It doesn't designate a possibility. It most resembles "the electron has no position at all", but then you don't get to talk as if the electron nonetheless has a (nonspecific) position at the same time as not actually having a position.

The same principle applies to conscious experience. The quale is always a particular quale, even if you aren't noticing its particularities. Now let us assume for the moment that this principle of non-vagueness is true for all phy
JenniferRM (score 1, 12y):
Someone downvoted you, but I upvoted you to correct it. I only downvote when I think there is (1) bad-faith communication or (2) an issue above LW's sanity line being discussed tactlessly. Neither seems to apply here.

That said, I think you just made a creationist "no transitional forms" move in your argument? A creationist might deny that 200-million-year-separated organisms, seemingly obviously related by descent, are "the same" magically/essentially distinct "kind". There's a gap between them! When pressed (say by being shown some intermediate forms that have been found given the state of the scientific excavation of the crust), a creationist could point in between each intermediate form to more gaps, which might naively seem to make their "gaps exist" point a stronger point against the general notion of "evolution by natural selection". But it doesn't. It's not a stronger argument thereby, but a weaker one.

Similarly, you seem to have a rhetorical starting point where you verbally deploy the law of the excluded middle to say that a quale either "is or is not" experienced due to a given micro-physical configuration state (notice the similarity of focusing on simplistic verbal/propositional/logical modeling of rigid "kinds" or "sets" with magically perfect inclusion/exclusion criteria). I pushed on that and you backed down. So it seems like you've retreated to a position where each verbally distinguishable level of conscious awareness should probably have a different physical configuration, and in fact this is what we seem to observe with things like fMRI... if you squint your eyes and acknowledge limitations in observation and theory that are being rectified by science even as we write. We haven't nanotechnologically consumed the entire crust of the earth to find every fossil, and we haven't simulated a brain yet, but these things may both be on the long-term path of "the effecting of all things possible".

My hope in trying to empathically emulate people who take
Mitchell_Porter (score 2, 12y):
No, I explicitly mentioned the idea that there might be a continuum of possible quale states; you even quoted the sentence where I brought it up. But it is irrelevant to my argument, which is that for a proposed mapping between physical and phenomenological states to have any chance of being true, it must possess an extension to an exact mapping between fundamental microphysical states and phenomenological states (not necessarily a 1-to-1 mapping), because the alternative is "objective vagueness" about which conscious state is present in certain physical configurations; and this requirement is very problematic for standard functionalism based on vaguely defined mesoscopic states, since any specification of how all the edge cases correspond to the functional states will be highly arbitrary.

Let me ask you this directly: do you think it would be coherent to claim that there are physical configurations in which there is a state of consciousness present, but it's not any particular state of consciousness? It doesn't have to be a state of consciousness that we presently know how to completely characterize, or a state of consciousness that we can subjectively discriminate from all other possible states of consciousness; it just has to be a definite, particular state of consciousness.

If we agree that ontological definiteness of physical state implies ontological definiteness in any accompanying state of consciousness (again I'll emphasize that this is ontological definiteness, not phenomenological definiteness; I must allow for the fact that states of consciousness have details that aren't noticed by the experiencer), then that immediately implies the existence of an exact mapping from microphysically exact states to ontologically definite states of consciousness. Which implies an inverse mapping from ontologically definite states of consciousness to a set of exact microphysical states, which are the physical states (or state, there might only be one) in which that p
JenniferRM (score 2, 12y):
OK, I hope I'm starting to get it. Are you looking for a basis to power a pigeonhole argument about equivalence classes?

If we're going to count things, then a potential source of confusion is that there are probably more ontologically distinct states of "being consciously depressed" than can be detected from the inside, because humans just aren't very good at internal monitoring and stuff, but that doesn't mean they aren't differences that a Martian with Awesome Scanning Equipment couldn't detect. So a mental patient could be phenomenologically depressed in a certain way and say "that feeling I just felt was exactly the same feeling as in the past, modulo some mental trivia about vaguely knowing it is Tuesday rather than Sunday", and the Martian anthropologist might check the scanner logs and might truthfully agree, but more likely the Martian might truthfully say, "Technically no: you were more consciously obsessed about your ex-boyfriend than you were consciously obsessed about your cellulite, which is the opposite ordering of every time in the past, though until I said this you were not aware of this difference in your awareness", and then the patient might introspect based on the statement and say "Huh, yeah, I guess you're right, curious that I didn't notice that from the inside while it was happening... oh well, time for more crying now..."

And in general, absent some crazy sort of phenomenological noise source, there are almost certainly fewer phenomenologically distinct states than ontologically distinct states. So then the question arises as to how the Martian's "ontology monitoring" scanner worked. It might have measured physical brain states via advanced but ultimately prosaic classical-Turing-neuron technology, or it might have used some sort of quantum-chakra-scanner that detects qualia states directly. Perhaps it has both and can run either or both scanners and compare their results over time? One of them can report that a stray serotonin molecule wa
Tyrrell_McAllister (score 4, 12y):
I think that there was a miscommunication here. To be strictly correct, Mitchell should have written "Which implies an inverse mapping from ontologically definite states of consciousness, to sets of exact microphysical states...". His additional text makes it clear that he's talking about a map f sending every qualia state q to a set f(q) of brain states, namely, the set of brain states b such that being in brain state b implies experiencing qualia state q. This is consistent with the ordering B>Q>P that you expect.
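The map structure being described here can be sketched with a toy example (all of the state labels below are hypothetical, invented purely for illustration): a many-to-one map f from brain states to qualia states, together with the induced preimage map sending each qualia state q to the set f(q) of brain states that realize it.

```python
# Toy sketch of a many-to-one map f from (hypothetical) brain states to
# (hypothetical) qualia states, and the induced preimage map: each qualia
# state q is sent to the SET of brain states b with f(b) == q.

f = {
    "b1": "q_red",   # two distinct microphysical states...
    "b2": "q_red",   # ...realizing the same qualia state
    "b3": "q_blue",
}

def preimage(f, q):
    """Return the set of brain states b with f(b) == q."""
    return {b for b, v in f.items() if v == q}

print(sorted(preimage(f, "q_red")))   # ['b1', 'b2']
print(sorted(preimage(f, "q_blue")))  # ['b3']
```

Because f is many-to-one, the preimages partition the brain states into equivalence classes, which is consistent with there being more (ontologically distinct) brain states than qualia states.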
Mitchell_Porter (score 1, 12y):
This is not about counting the number of states. It is about disallowing vagueness at the fundamental level, and then seeing the implications of that for functionalist theories of consciousness.

A functionalist theory of consciousness says that a particular state of consciousness occurs if and only if the physical object is in a particular "functional state". If you classify all the possible physical states into functional states, there will be borderline cases. But if we disallow vagueness, then every one of those borderline cases must correspond to a specific state of consciousness.

Someone with no hair is bald, someone with a head full of hair is not bald, yet we don't have a non-arbitrary criterion for where the exact boundary between bald and not-bald lies. This doesn't matter, because baldness is a rough judgment and not an objective property. But states of consciousness are objective, intrinsic attributes of the conscious being. So objective vagueness isn't allowed, and there must be a definite fact about which conscious state, if any, is present, for every possible physical state.

If we are employing the usual sort of functionalist theory, then the physical variables defining the functional states will be bulk mesoscopic quantities, there will be borderline areas between one functional state and another, and any line drawn through a borderline area, demarcating an exact boundary just for the sake of avoiding vagueness, will be completely arbitrary at the finest level. The difference between experiencing one shade of red and another will be that you have 4000 color neurons firing rather than 4001, and a cell will count as a color neuron if it has 10 of the appropriate receptors but not if it only has 9, and a state of this neuron will count as firing if the action potential manages to traverse the whole length of the axon, but not if it's just a localized fizzle... The arbitrariness of the distinctions that would need to be made, in order t
Tyrrell_McAllister (score 2, 12y):
I see that you already addressed precisely the points that I made here. You wrote

I agree that any final "theory of qualia" should say, for every physically possible state, whether that state bears qualia or not. I take seriously the idea that such a final theory of qualia is possible, meaning that there really is an objective fact of the matter about what the qualia properties of any physically possible state are. I don't have quite the apodeictic certainty that you seem to have, but I take the idea seriously. At any rate, I feel at least some persuasive force in your argument that we shouldn't be drawing arbitrary boundaries around the microphysical states associated with different qualia states.

But even granting the objective nature of qualia properties, I'm still not getting why vagueness or arbitrariness is an inevitable consequence of any assignment of qualia states to microphysical states. Why couldn't the property of bearing qualia be something that can, in general, be present with various degrees of intensity, ranging from intensely present to entirely absent? Perhaps the "isolated islands" normally traversed by our brains are always at one extreme or another of this range. In that case, it would be impossible for us to imagine what it would "be like" to "bear qualia" in only a very attenuated sense. Nonetheless, perhaps a sufficiently powerful nano-manipulator could rearrange the particles in your brain into such a state.

To be clear, I'm not talking about states that experience specific qualia (a patch of red, say) very dimly. I'm talking about states that just barely qualify as bearing qualia at all. I'm trying to understand how you rule out the possibility that "bearing qualia" is a continuous property, like the geometrical property of "being longer than a given unit". Just as a geometrical figure can have a length varying from not exceeding, to just barely exceeding, to greatly exceeding that of a given unit, why might not the property of bearing
Mitchell_Porter (score 2, 12y):
There are two problems here. First, you need to make the idea of "barely having qualia" meaningful. Second, you need to explain how that can solve the arbitrariness problem for a microphysically exact psychophysical correspondence. Are weak qualia a bridge across the gap between having qualia and not having qualia? Or is the axis intense-vs-weak orthogonal to the axis there-vs-not-there-at-all? In the latter case, even though you only have weak qualia, you still have them 100%.

The classic phenomenological proposition regarding the nature of consciousness is that it is essentially about intentionality. According to this, even perception has an intentional structure, and you never find sense-qualia existing outside of intentionality. I guess that according to the later Husserl, all possible states of consciousness would be different forms of a fundamental ontological structure called "transcendental intentionality"; and the fundamental difference between a conscious entity and a non-conscious entity is the existence of that structure "in" the entity.

There are mathematical precedents for qualitative discontinuity. If you consider a circle versus a line interval, there's no topological property such as "almost closed". In the context of physics, you can't have entanglement in a Hilbert space with less than four dimensions. So it's conceivable that there is a discontinuity in nature between states of consciousness and states of non-consciousness.

Twisty distinctions may need to be made. At least verbally, I can distinguish between (1) an entity whose state just is a red quale, (2) an entity whose state is one of awareness of the red quale, and (3) an entity which is aware that it is aware of the red quale. The ontological position I described previously would say that (3) is what we call self-awareness; (2) is what we might just call awareness; there's no such thing as (1); and intentionality is present in (2) as well as in (3). I'm agnostic about the existence of som
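The entanglement claim above is checkable numerically: a bipartite pure state is entangled iff its Schmidt rank exceeds 1, and that requires both factors to have dimension at least 2, hence a total Hilbert-space dimension of at least 4. A minimal numpy sketch (the states chosen are standard textbook examples, not anything from the thread):

```python
import numpy as np

def schmidt_rank(psi, d1, d2, tol=1e-12):
    """Schmidt rank of a pure state psi of a d1 x d2 bipartite system:
    the number of nonzero singular values of psi reshaped into a d1-by-d2
    matrix. Rank 1 means a product state; rank > 1 means entangled."""
    s = np.linalg.svd(np.asarray(psi, dtype=float).reshape(d1, d2),
                      compute_uv=False)
    return int(np.sum(s > tol))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2), 2x2 system
product = np.array([1, 0, 0, 0])             # |00>, 2x2 system

print(schmidt_rank(bell, 2, 2))      # 2 -> entangled
print(schmidt_rank(product, 2, 2))   # 1 -> product state

# In any bipartite space of total dimension < 4, one factor has dimension 1,
# so the reshaped matrix has a single row or column and rank at most 1:
print(schmidt_rank(np.array([1, 1, 1]) / np.sqrt(3), 3, 1))  # 1
```

This is the sharp yes/no structure Mitchell is pointing at: there is no "almost entangled" rank between 1 and 2, even though entanglement, once present, does come in degrees of strength.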
Tyrrell_McAllister (score 0, 12y):
I'm still not sure where this arbitrariness problem comes from. I'm supposing that the bearing of qualia is an objective structural property of certain physical systems. Another mathematical analogy might be the property of connectivity in graphs. A given graph is either connected or not, though connectivity is also something that exists in degrees, so that there is a difference between being highly connected and just barely connected. On this view, how does arbitrariness get in? I'm suggesting something more like your "bridge across the gap" option. Analogously, one might say that the barely connected graphs are a bridge between disconnected graphs and highly connected graphs. Or, to repeat my analogy from the grandparent, the geometrical property of "being barely longer than a given unit" is a bridge across the gap between "being shorter that the given unit" and "being much longer than the given unit". I'm afraid that I'm not seeing the difficulty. I am suggesting that the possession of a given qualia state is a certain structure property of physical systems. I am suggesting that this structure property is of the sort that can be possessed by a variety of different physical systems in a variety of different states. Why couldn't various parts be added or removed from the system while leaving intact the structure property corresponding to the given qualia state?
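The graph analogy can be made concrete: being connected is a sharp yes/no structural property, while edge connectivity (the number of edges whose removal disconnects the graph) measures the degree to which a graph is connected. A small sketch (the example graphs are invented for illustration; the brute-force search is fine only for tiny graphs):

```python
from itertools import combinations

def connected(n, edges):
    """Check that all n vertices are reachable from vertex 0."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def edge_connectivity(n, edges):
    """Smallest number of edges whose removal disconnects the graph
    (brute force over all edge subsets, smallest first)."""
    if not connected(n, edges):
        return 0
    for k in range(1, len(edges) + 1):
        for cut in combinations(edges, k):
            remaining = [e for e in edges if e not in cut]
            if not connected(n, remaining):
                return k
    return len(edges)

path = [(0, 1), (1, 2), (2, 3)]            # connected, but barely: one cut edge
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]   # connected more robustly
print(connected(4, path), edge_connectivity(4, path))    # True 1
print(connected(4, cycle), edge_connectivity(4, cycle))  # True 2
```

Both graphs satisfy the crisp property "connected", yet the cycle possesses it to a higher degree, which is the shape of the "bridge across the gap" option being proposed for qualia.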
Mitchell_Porter (score 0, 12y):
Give me an example of an "objective structural property" of a physical system. I expect that it will either be "vague" or "arbitrary"...
Tyrrell_McAllister (score 0, 12y):
I'm not sure that I understand the question. Would you agree with the following? A given physical system in a given state satisfies certain structural properties, in virtue of which the system is in that state and not some other state.
Mitchell_Porter (score 0, 12y):
I just want a specific example, first. You're "supposing that the bearing of qualia is an objective structural property of certain physical systems". So please give me one entirely concrete example of "an objective structural property".
Tyrrell_McAllister (score 0, 12y):
A sentence giving such a property would have to be in the context of a true and complete theory of physics, which I do not possess. I expect that such a theory will provide a language for describing many such structural properties. I have this expectation because every theory that has been offered in the past, had it been literally true, would have provided such a language.

For example, suppose that the universe were in fact a collection of indivisible particles in Euclidean 3-space governed by Newtonian mechanics. Then the distances separating the centers of mass of the various particles would have determinate ratios, triples of particles would determine line segments meeting at determinate angles, etc. Since Newtonian mechanics isn't an accurate description of physical reality, the properties that I can describe within the framework of Newtonian mechanics don't make sense for actual physical systems. A similar problem bedevils any physical theory that is not literally true. Nonetheless, all of the false theories so far describe structural properties for physical systems. I see no reason to expect that the true theory of physics differs from its predecessors in this regard.
Mitchell_Porter · 12y · 0 points
Let's use this as an example (and let's suppose that the main force in this universe is like Newtonian gravitation). It's certainly relevant to functionalist theories of consciousness, because it ought to be possible to make universal Turing machines in such a universe. A bit might consist in the presence or absence of a medium-sized mass orbiting a massive body at a standard distance, something which is tested for by the passage of very light probe-bodies and which can be rewritten by the insertion of an object into an unoccupied orbit, or by the perturbation of an object out of an occupied orbit.

I claim that any mapping of these physical states onto computational states is going to be vague at the edges, that it can only be made exact by the delineation of arbitrary exact boundaries in physical state space with no functional consequence, and that this already exemplifies all the problems involved in positing an exact mapping between qualia-states and physics as we know it.

Let's say that, functionally, the difference between whether a given planetary system encodes 0 or 1 is whether the light probe-mass returns to its sender or not. We're supposing that all the trajectories are synchronized such that, if the orbit is occupied, the probe will swing around the massive body, do a 180-degree turn, and go back from whence it came - that's a "1"; but otherwise it will just sail straight through. If we allow ourselves to be concerned with the full continuum of possible physical configurations, we will run into edge cases. If the probe does a 90-degree turn, probably that's not "return to sender" and so can't count as a successful "read-out" that the orbit is occupied. What about a 179.999999-degree turn? That's so close to 180 degrees that, if our orrery-computer has any robustness-against-perturbation in its dynamics at all, it still ought to get the job done. But somewhere in between that almost-perfect turn and the 90-degree turn, there's a transition between a fu
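The orbital read-out just described can be put in toy code. The angle cutoffs below (170 and 120 degrees) are invented for illustration; nothing in the physics picks them out, which is exactly the arbitrariness at issue:

```python
def read_bit(deflection_deg: float) -> str:
    """Classify a probe's deflection angle as a computational read-out.

    The thresholds are stipulated, not derived from the dynamics: a
    179.999999-degree turn still counts as "return to sender", a
    90-degree turn doesn't, and the boundary in between is drawn by fiat.
    """
    if deflection_deg >= 170.0:   # near-perfect return: orbit occupied
        return "1"
    if deflection_deg <= 120.0:   # probe (mostly) sails through: orbit empty
        return "0"
    return "undefined"            # edge cases: no principled boundary

print(read_bit(179.999999))  # 1
print(read_bit(90.0))        # 0
print(read_bit(150.0))       # undefined
```

Moving the cutoffs to 160/130 would change which edge cases count, with no functional consequence for the well-separated states the machine is designed to use.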
Tyrrell_McAllister · 12y · 2 points
I don't think that this is why we don't bother ourselves with intermediate states in computers. To say that we can model a physical system as a computer is not to say that we have a many-to-one map sending every possible microphysical state to a computational state. Rather, we are saying that there is a subset Σ′ of the entire space Σ of microstates for the physical system, and a state machine M, such that:

1. as the system evolves according to physical laws under the conditions where we wish to apply our computational model, states in Σ′ will only evolve into other states in Σ′, but never into states in the complement of Σ′;
2. there is a many-to-one map f sending states in Σ′ to computational states of M (i.e., states in Σ′ correspond to unambiguous states of M); and
3. if the laws of physics say that the microphysical state σ ∈ Σ′ evolves into the state σ′ ∈ Σ′, then the definition of the state machine M says that the state f(σ) transitions to the state f(σ′).

But, in general, Σ′ is a proper subset of Σ. If a physical system, under the operating conditions that we care about, could really evolve into any arbitrary state in Σ, then most of the states that the system reached would be homogeneous blobs. In that case, we probably wouldn't be tempted to model the physical system as a computer.

I propose that physical systems are properly modeled as computers only when the proper subset Σ′ is a union of "isolated islands" in the larger state-space Σ, with each isolated island mapping to a distinct computational state. The isolated islands are separated by "broad channels" of states in the complement of Σ′. To the extent that states in the "islands" could evolve into states in the "channels", then, to that extent, the system shouldn't be modeled as a computer. Conversely, insofar as a system is validly modeled as a computer, that system never enters "vague" computational states. The computational theory of mind amounts to the claim that the brain can be modele
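The conditions (1)-(3) above are checkable for finite toy systems. Here is a minimal sketch, with invented microstates, dynamics, and labels: a six-microstate "physical system" whose states 0, 1, 4, 5 form two islands labeled A and B, while 2 and 3 play the role of channel states.

```python
def valid_computational_model(S_prime, phys_step, f, machine_step):
    """Check conditions (1) and (3) for every microstate in S_prime.

    Condition (2) holds by construction: f is defined on all of S_prime.
    """
    for s in S_prime:
        s_next = phys_step(s)
        if s_next not in S_prime:            # (1) fails: leaks into a channel
            return False
        if machine_step(f(s)) != f(s_next):  # (3) fails: f doesn't commute
            return False
    return True

S_prime = {0, 1, 4, 5}                       # the "isolated islands"
f = {0: "A", 1: "A", 4: "B", 5: "B"}.get     # many-to-one labeling map
phys_step = {0: 4, 1: 5, 4: 0, 5: 1}.get     # physical evolution
machine_step = {"A": "B", "B": "A"}.get      # the state machine M

print(valid_computational_model(S_prime, phys_step, f, machine_step))  # True

# Perturb the dynamics so microstate 1 leaks into channel state 2:
bad_step = {0: 4, 1: 2, 4: 0, 5: 1}.get
print(valid_computational_model(S_prime, bad_step, f, machine_step))   # False
```

The second run illustrates the closing point: once island states can evolve into channel states, the system is, to that extent, no longer validly modeled as the computer M.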
JenniferRM · 12y · 2 points
This is not working. I keep trying to get you to think in E-Prime for simplicity's sake, and you keep emitting words that seem to me to lack any implication for what I should expect to experience. I can think of a few ways to proceed from this state of affairs that might work.

One idea is for you to restate the bit I'm about to quote while tabooing the words "attribute", "property", "trait", "state", "intrinsic", "objective", "subjective", and similar words. If I translate this I hear this statement as being confused about the way to properly use abstraction in the course of reasoning, and insisting on pedantic precision whenever logical abstractions come up. Pushing all the squirrelly words into similar form for clarity, it sounds roughly like this: Do you see how this is a plausible interpretation of what you said? Do you see how the heart of our contention seems to me to have nothing to do with consciousness and everything to do with the language and methods of abstract reasoning? We don't have to play taboo.

A second way that we might resolve our lack of linguistic/conceptual agreement is by working with the concepts that we don't seem to use the same way in a much simpler place, where all the trivial facts are settled and only the difficult concepts are at stake. Consider the way that area, width, and height are all "intrinsic properties" of a rectangle in Euclidean geometry. For me, this is another way of saying that if a construct defined in Euclidean geometry lacks one of these features then it is not a rectangle. Consider another property of rectangles, the "tallness" of the rectangle, defined as the ratio of the height to the width. This is not intrinsic; other than zero and infinity it could be anything, and where you put the cutoff is mostly arbitrary. However, I also know that within the intrinsic properties of {width, height, area} any two of them are sufficient for defining a Euclidean rectangle and thereby exactly constraining the third property to
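The rectangle example is small enough to write down directly. A sketch of the "any two determine the third" point, with the derived, cutoff-dependent "tallness" alongside (the function name is just for illustration):

```python
def complete_rectangle(width=None, height=None, area=None):
    """Given any two of {width, height, area}, return all three.

    The three intrinsic properties are mutually constraining:
    fixing any two exactly determines the third.
    """
    if width is not None and height is not None:
        return width, height, width * height
    if width is not None and area is not None:
        return width, area / width, area
    if height is not None and area is not None:
        return area / height, height, area
    raise ValueError("need at least two of width, height, area")

w, h, a = complete_rectangle(width=3.0, area=12.0)
print(w, h, a)        # 3.0 4.0 12.0

# "Tallness" is fully derived from the intrinsic properties, but whether
# a given ratio counts as "tall" is a convention with an arbitrary cutoff.
tallness = h / w
print(tallness > 1.2)  # True, under one made-up cutoff; False under another
```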
Mitchell_Porter · 12y · 3 points
I will try to get across what I mean by calling states of consciousness "intrinsic", "objectively existing", and so forth, by describing what it would mean for them to not have these attributes. It would mean that you only exist by convention or by definition. It would mean that there is no definite fact about whether your life is part of reality. It wouldn't just be that some models of reality acknowledge your existence and others don't; it would mean that you are nothing more than a fuzzy heuristic concept in someone else's model, and that if they switched models, you would no longer exist even in that limited sense.

I would like to think that you personally have a robust enough sense of your own reality to decisively reject such propositions. But by now, nothing would surprise me, coming from a materialist. It's been amply demonstrated that people can be willing to profess disbelief in anything and everything, if they think that's the price of believing in science. So I won't presume that you believe that you exist, I'll just hope that you do, because if you don't, it will be hard to have a sensible conversation about these topics.

But... if you do agree that you definitely exist, independently of any "model" that actual or hypothetical observers have, then it's a short step to saying that you must also have some of your properties intrinsically, rather than through model-dependent attribution. The alternative would be to say that you exist, you're a "thing", but not any particular thing; which is the sort of untenable objective vagueness that I was talking about.

The concept of an intrinsic property is arising somewhat differently here than it does in your discussion of squares and rectangles. The idealized geometrical figures have their intrinsic properties by definition, or by logical implication from the definition. But I can say that you have intrinsic properties, not by definition (or not just by definition), but because you exist, and to be is to be s
JenniferRM · 12y · 3 points
I'm going back and forth on whether to tap out here. On the one hand I feel like I'm making progress in understanding your perspective. On the other hand the progress is clarifying that it would take a large amount of time and energy to derive a vocabulary to converse in a mutually transparent way about material truth claims in this area. It had not occurred to me that pulling on the word "intrinsic" would flip the conversation into a solipsistic zone by way of Cartesian skepticism. Ooof. Perhaps we could schedule a few hours of IM or IRC to try a bit of very low latency mutual vocabulary development, and then maybe post the logs back here for posterity (raw or edited) if that seems worthwhile to us. (See private message for logistics.) If you want to stick to public essays I recommend taking things up with Tyrrell; he's a more careful thinker than I am and I generally agree with what he says. He noticed and extended a more generous and more interesting parsing of your claims than I did when I thought you were trying to make a pigeonhole argument in favor of magical entities, and he seems to be interested. Either public essays with Tyrrell, IM with me, or both, or neither... as you like :-) (And/or Steve of course, but he generally requires a lot of unpacking, and I frequently only really understand why his concepts were better starting places than my own between 6 and 18 months after talking with him.)
Steve_Rayhawk · 12y · 3 points
Or in a cascade of your own successive models, including of the cascade. Or an incentive to keep using that model rather than to switch to another one. The models are made up, but the incentives are real. (To whatever extent the thing subject to the incentives is.) Not that I'm agreeing, but some clever ways to formulate almost your objection could be built around the wording "The mind is in the mind, not in reality".
JenniferRM · 12y · 0 points
Crap. I had not thought of quines in reference to simulationist metaphysics before.
Tyrrell_McAllister · 12y · 3 points
I have some sympathy for the view that my-here-now qualia are determinate and objective. But I don't see why that implies that there must be a determinate, objective, unique collection of particles that is experiencing the qualia. Why not say that there are various different boundaries that I could draw, but, no matter which of these boundaries I draw, the qualia being experienced by the contained system of particles would be the same? For example, adding or removing the table in front of me doesn't change the qualia experienced by the system. (Here I am supposing that I can map the relevant physical systems to qualia in the manner that I describe in this comment.)
Richard_Kennaway · 12y · 1 point
My subjective conscious experience seems no more exact a thing to me than my experience of distinctions of colours. States of consciousness seem to be a continuous space, and there isn't even a hard boundary (again, as I perceive things subjectively) between what is conscious and what is not. But perhaps people vary in this; perhaps it is different for you?

To summarize (mostly for my sake so I know I haven't misunderstood the OP):

  1. Subjective conscious experience or qualia play a non-negligible role in how we behave and how we form our beliefs, especially of the mushy (technical term) variety that ethical reasoning is so bound up in.
  2. The current popular computational flavor of philosophy of mind has inadequately addressed qualia in your eyes because the universality of the extended Church-Turing thesis, though satisfactorily covering the mechanistic descriptions of matter in a way that provides for
…

One question I had reading this is: What does it matter if our model of human consciousness is wrong? If we create FAI that has all of the outward functionality of consciousness I still would consider that a win. Not all eyes that have evolved are human eyes; the same could happen with consciousness. If we manufactured some mechanical "eye" that didn't model exactly the interior bits of a human eye but was still able to do what eyes do, shouldn't we still consider this an eye? It would seem nonsensical to me to question whether this mechanical ey…

which in its biophysical context would appear to be describing a mass of entangled electrons in that hypothetical sweet spot, somewhere in the brain, where there's a mechanism to protect against decoherence.

Which activity, totally inaccessible to state machines, do you think these electrons perform?

Mitchell_Porter · 12y · -2 points
The idea is not that state machines can't have qualia. Something with qualia will still be a state machine. But you couldn't know that something had qualia, if you just had the state machine description and no preexisting concept of qualia. If a certain bunch of electrons are what's conscious in the brain, my point is that the "electrons" are actually qualia and that this isn't part of our physics concept of what an electron is; and that you - or a Friendly AI - couldn't arrive at this "discovery" by reasoning just within physical and computational ontologies.
Manfred · 12y · 2 points
Could an AI just look at the physical causes of humans saying "I think I have qualia"? Why wouldn't these electrons be a central cause, if they're the key to qualia?
David_Gerard · 12y · 1 point
Please expand the word "qualia", and please explain how you see that the presence or absence of these phenomena will make an observable difference in the problem you are addressing.
Mitchell_Porter · 12y · -2 points
See this discussion. Physical theories of human identity must equate the world of appearances, which is the only world that we actually know about, with some part of a posited world of "physical entities". Everything from the world of appearances is a quale, but an AI with a computational-materialist philosophy only "knows" various hypotheses about what the physical entities are. The most it could do is develop a concept like "the type of physical entity which causes a human to talk about appearances", but it still won't spontaneously attach the right significance to such concepts (e.g. to a concept of pain). I have agreed elsewhere that it is - remotely! - possible that an appropriately guided AI could solve the hard problems of consciousness and ethics before humans did, e.g. by establishing a fantastically detailed causal model of human thought, and contemplating the deliberations of a philosophical sim-human. But when even the humans guiding the AI abandon their privileged epistemic access to phenomenological facts, and personally imitate the AI's limitations by restricting themselves to computational epistemology, then the project is doomed.

I might be mistaken, but it seems like you're putting forward a theory of consciousness, as opposed to a theory of intelligence.

Two issues with that - first, that's not necessarily the goal of AI research. Second, you're evaluating consciousness, or possibly intelligence, from the inside, rather than the outside.

dbc · 12y · 2 points
I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.
OrphanWilde · 12y · 1 point
That presumes consciousness can only be understood or recognized from the inside. An AI doesn't have to know what consciousness feels like (or more particularly, what "feels like" even means) in order to recognize it.
torekp · 12y · 0 points
True, but it does need to recognize it, and if it is somehow irreversibly committed to computationalism and that is a mistake, it will fail to promote consciousness correctly. For what it's worth, I strongly doubt Mitchell's argument for the "irreversibly committed" step. Even an AI lacking all human-like sensation and feeling might reject computationalism, I suspect, provided that it's false.

If everything comes together, then it will now be a straight line from here to the end.

To the end of what? The sequence? Or of humanity as we know it?

You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").

Is there one "true…

Mitchell_Porter · 12y · -1 points
The end of SI's mission, in success, failure, or change of paradigm. There's one reality so all "true ontologies" ought to be specializations of the same truth. One true morality is a shakier proposition, given that morality is the judgment of an agent and there's more than one agent. It's not even clear that just picking out the moral component of the human decision procedure is enough for SI's purposes. What FAI research is really after is "decision procedure that a sober-minded and fully-informed human being would prefer to be employed by an AGI".

Phenomenal experience does not give a non-theoretical access to existence claims. Qualia are theoretical tools of theories implemented as (parts of) minds. I do not (go as far as) posit "computational epistemology", just provide a constraint on ontology.

[anonymous] · 12y · 0 points

First, "ontology".

This makes me think you're going to next talk about your second objection, to "outsourcing the methodological and design problems of FAI research to uploaded researchers and/or a proto-FAI which is simulating or modeling human researchers", but you never mention that again. Was that intentional or did you forget? Do you have any new thoughts on that since the discussions in Extrapolating values without outsourcing?

[This comment is no longer endorsed by its author]

OK.

In all seriousness, there's a lot you're saying that seems contradictory at first glance. A few snippets:

My message for friendly AI researchers is not that computational epistemology is invalid, or that it's wrong to think about the mind as a state machine, just that all that isn't the full story.

If computational epistemology is not the full story, if true epistemology for a conscious being is "something more", then you are saying that it is so incomplete as to be invalid. (Doesn't Searle hold similar beliefs, along the lines of "consci…

Mitchell_Porter · 12y · -2 points
It's a valid way to arrive at a state-machine model of something. It just won't tell you what the states are like on the inside, or even whether they have an inside. The true ontology is richer than state-machine ontology, and the true epistemology is richer than computational epistemology.

I do know that there's lots of work to be done. But this is what Eliezer's sequence will be about.

I agree with the Legg-Hutter idea that quantifiable definitions of general intelligence for programs should exist, e.g. by ranking them using some combination of stored mathematical knowledge and quality of general heuristics. You have to worry about no-free-lunch theorems and so forth (i.e. that a program's IQ depends on the domain being tested), but on a practical level, there's no question that the efficiency of algorithms and the quality of heuristics available to an AI are at least semi-independent of what the AI's goals are. Otherwise all chess programs would be equally good.
[anonymous] · 12y · -9 points (collapsed)