Or the converse problem - an agent that contains all the aspects of human value, except the valuation of subjective experience. So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so. This, I admit, I don't quite know to be possible. Consciousness does still confuse me to some extent. But a universe with no one to bear witness to it, might as well not be.
- Eliezer Yudkowsky, "Value is Fragile"
I had meant to try to write a long post for LessWrong on consciousness, but I'm getting stuck on it, partly because I'm not sure how well I know my audience here. So instead, I'm writing a short post, with my main purpose being just to informally poll the LessWrong community on one question: how sure are you that whole brain emulations would be conscious?
There's actually a fair amount of philosophical literature about issues in this vicinity; David Chalmers' paper "The Singularity: A Philosophical Analysis" has a good introduction to the debate in section 9, including some relevant terminology:
Biological theorists of consciousness hold that consciousness is essentially biological and that no nonbiological system can be conscious. Functionalist theorists of consciousness hold that what matters to consciousness is not biological makeup but causal structure and causal role, so that a nonbiological system can be conscious as long as it is organized correctly.
So, on the functionalist view, emulations would be conscious, while on the biological view, they would not be.
Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: "biological" is a fuzzy, high-level category that doesn't seem like it could be of any fundamental importance. So probably emulations will be conscious--but I'm not too sure of that. Consciousness confuses me a great deal, and seems to confuse other people a great deal, and because of that I'd caution against being too sure of much of anything about consciousness. I'm worried not so much that the biological view will turn out to be right as that the truth might be some third option no one has thought of, which might or might not entail that emulations are conscious.
Uncertainty about whether emulations would be conscious is potentially of great practical concern. I don't think it's much of an argument against uploading-as-life-extension; better to probably survive as an upload than to do nothing and die for sure. But it's worrisome if you think about the possibility, say, of an intended-to-be-Friendly AI deciding we'd all be better off if we were forcibly uploaded (or persuaded, using its superhuman intelligence, to "voluntarily" upload...). Uncertainty about whether emulations would be conscious also makes Robin Hanson's "em revolution" scenario less appealing.
For a long time, I've vaguely hoped that advances in neuroscience and cognitive science would lead to unraveling the problem of consciousness. Perhaps working on creating the first emulations would do the trick. But this is only a vague hope; I have no clear idea of how that could possibly happen. Another hope would be that if we can get all the other problems in Friendly AI right, we'll be able to trust the AI to solve consciousness for us. But with our present understanding of consciousness, can we really be sure that would be the case?
That leads me to my second question for the LessWrong community: is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?
This brings up something that has been on my mind for a long time: what are the necessary and sufficient conditions for two computations to be (homeo?)morphic? This could mean a lot of things, but specifically I'd like to capture the notion of being able to contain a consciousness. So what I'm asking is: what would we have to prove in order to conclude that if program A contains a consciousness, then program B contains a consciousness? "Pointwise" isomorphism, if you're saying what I think, seems too strict. On the other hand, allowing any invertible function to count as a ___morphism doesn't seem strict enough: for one thing, we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the first program's initial state and ticks off the natural numbers. Restricting the allowed functions by, say, resource complexity seems to lead to similar issues, as well as unrelated ones...
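To make the worry about "any invertible function" concrete, here's a toy sketch in Python (my own construction, purely for illustration): a reversible computation alongside a trivial "clock" program that only stores the initial state and counts ticks, with a step-for-step correspondence between their states.

```python
# Toy illustration: a reversible computation vs. a "clock" program that only
# stores the initial state and a tick count. At each step t, the clock's
# state can be mapped onto the reversible program's state, and the map is
# invertible (undo t reversible steps to recover the stored initial state),
# so the two programs are in 1-1 correspondence step for step.

def reversible_step(state):
    """One reversible update: rotate the tuple and append (b, a ^ b).
    Invertible: the last element is a ^ b and the one before it is b,
    so a = (a ^ b) ^ b can be recovered."""
    a, b, *rest = state
    return tuple(rest) + (b, a ^ b)

def run_reversible(initial, n_steps):
    """The 'real' computation: apply the reversible step n_steps times."""
    state = initial
    for _ in range(n_steps):
        state = reversible_step(state)
    return state

def clock_step(clock_state):
    """The trivial program: never touches the stored state, just increments."""
    initial, t = clock_state
    return (initial, t + 1)

def decode(clock_state):
    """Map from a clock state to the corresponding reversible-program state:
    replay the computation from the stored initial state."""
    initial, t = clock_state
    return run_reversible(initial, t)

if __name__ == "__main__":
    init = (3, 5, 7, 11)
    clock = (init, 0)
    for t in range(6):
        assert decode(clock) == run_reversible(init, t)
        clock = clock_step(clock)
    print("Clock program and reversible program correspond at every step.")
```

If a step-for-step 1-1 correspondence were sufficient, the clock program would contain whatever the reversible program contains, which seems wrong; the question is what further restriction on the mapping rules this out without ruling out too much.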
Has this been discussed in any other threads?