Ultimately you can proclaim literally anything undefined in this manner, e.g. "a brick". What exactly is a brick? Clay is equally in need of definition, and if you define clay, you'll need to define other things in turn.
I'm doing my best to argue in good faith.
When you say "brick", I have a pretty good idea of what you mean. I could be wrong, I could be surprised, but I do have an assumption I hold with high confidence.
But when you say "consciousness in a WBE", I really honestly don't know what it is you mean. There are several alternatives - different things that different people mean - and also there are some confused people who say such words but don't mean anything consistent by them (e.g. non-materialists). So I'm asking you to clarify what you mean. (Or asking the OP, in this case.)
There is a disparity between the fairly symmetrical objective picture of the world, which contains multiple humans, and the subjective picture (i.e. literally what you see with your own eyes), which needs extra information to locate whose eyes the picture is coming from.
So far I'm with you. Today I can look down and see my own body and say "aha, that's who I am in the objective world". If I were a WBE, I could be connected to very different inputs, and then I would be confused and my sense of self could change. That's a very interesting issue, but it doesn't clarify what "consciousness" is.
some as-yet-unknown mapping from that information to a choice of being, a mapping that may or may not include emulations in its possible outputs.
I've lost you here. What does "a choice of being" mean? What is this mapping that includes some... beings... and not others?
If I were a WBE
And here is the question: does that sentence describe an actual possibility or not?
What if you were a big database that simply stores an answer to every question I can ask you? Can you seriously consider the possibility that you are merely a database that performs this purely mechanical operation? Such a database does not think; it just answers. For all I know, you might be such a database, but I am pretty sure that I am not one, nor would I want to be replaced with one.
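For concreteness, here is a minimal sketch of the contrast I have in mind (in Python; all names and example questions are made up for illustration): a pure lookup table that retrieves stored answers, next to a program that computes its answers. On the stored questions the two are behaviorally indistinguishable; only the internal process differs, and that is exactly what the thought experiment turns on.

```python
import re

# A giant lookup table: every anticipated question mapped to a canned answer.
LOOKUP_TABLE = {
    "What is 2 + 2?": "4",
    "Are you conscious?": "Yes, of course.",
}

def table_answer(question: str) -> str:
    """Answer by pure retrieval: a dictionary lookup and nothing else."""
    return LOOKUP_TABLE.get(question, "I don't know.")

def computed_answer(question: str) -> str:
    """Answer by actually computing: parse simple addition questions."""
    match = re.fullmatch(r"What is (\d+) \+ (\d+)\?", question)
    if match:
        return str(int(match.group(1)) + int(match.group(2)))
    if question == "Are you conscious?":
        return "Yes, of course."
    return "I don't know."

# On the stored questions, the two are behaviorally indistinguishable...
assert table_answer("What is 2 + 2?") == computed_answer("What is 2 + 2?")
# ...but only the computing version generalizes past what was stored:
assert computed_answer("What is 17 + 25?") == "42"
assert table_answer("What is 17 + 25?") == "I don't know."
```

The table is extensionally fixed: it can only ever echo what was put into it, while the program has internal structure that does work. The question is whether that difference in internal process, given identical behavior, is the kind of difference that matters.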
Or let's consider two programs that take a string an...
- Eliezer Yudkowsky, "Value is Fragile"
I had meant to try to write a long post for LessWrong on consciousness, but I'm getting stuck on it, partly because I'm not sure how well I know my audience here. So instead, I'm writing a short post, with my main purpose being just to informally poll the LessWrong community on one question: how sure are you that whole brain emulations would be conscious?
There's actually a fair amount of philosophical literature about issues in this vicinity; David Chalmers' paper "The Singularity: A Philosophical Analysis" has a good introduction to the debate in section 9, including some relevant terminology. Roughly, biological theorists of consciousness hold that consciousness is essentially biological, so that no non-biological system can be conscious, while functionalist theorists hold that what matters to consciousness is causal structure and causal role, so that a non-biological system can be conscious as long as it is organized correctly.
So, on the functionalist view, emulations would be conscious, while on the biological view, they would not be.
Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: "biological" is a fuzzy, high-level category that doesn't seem like it could be of any fundamental importance. So emulations probably would be conscious--but I'm not too sure of that. Consciousness confuses me a great deal, and seems to confuse other people a great deal, and because of that I'd caution against being too sure of much of anything about consciousness. I'm worried not so much that the biological view will turn out to be right, but that the truth might be some third option no one has thought of, which might or might not entail that emulations are conscious.
Uncertainty about whether emulations would be conscious is potentially of great practical concern. I don't think it's much of an argument against uploading-as-life-extension; better to probably survive as an upload than do nothing and die for sure. But it's worrisome if you think about the possibility, say, of an intended-to-be-Friendly AI deciding we'd all be better off if we were forcibly uploaded (or persuaded, using its superhuman intelligence, to "voluntarily" upload...). Uncertainty about whether emulations would be conscious also makes Robin Hanson's "em revolution" scenario less appealing.
For a long time, I've vaguely hoped that advances in neuroscience and cognitive science would lead to unraveling the problem of consciousness. Perhaps working on creating the first emulations would do the trick. But this is only a vague hope; I have no clear idea of how that could possibly happen. Another hope would be that if we can get all the other problems in Friendly AI right, we'll be able to trust the AI to solve consciousness for us. But with our present understanding of consciousness, can we really be sure that would be the case?
That leads me to my second question for the LessWrong community: is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?