In response to the classic Mysterious Answers to Mysterious Questions, I express some skepticism that consciousness can be understood by science. I postulate (with low confidence) that consciousness is “inherently mysterious”, in that it is philosophically and scientifically impenetrable. The mysteriousness is a fact about our state of mind, but that state of mind is due to a fundamental epistemic feature of consciousness and is impossible to resolve.
My issue with understanding the cause of consciousness involves p-zombies. Any experiment with the goal of understanding consciousness would have to be able to detect consciousness, which seems to me to be philosophically impossible. To be more specific, any scientific investigation of the cause of consciousness would have (to simplify) an independent variable that we could manipulate to see how this manipulation affects the dependent variable, the presence or absence of consciousness. We assume that those around us are conscious, and we have good reason to do so, but we can't rely on that assumption in any experiment in which we are investigating consciousness. Before we ask “what is causing x?”, we first have to know that x is present.
As Eliezer points out, the fact that an individual says he's conscious is a pretty good signal of consciousness, but we can't necessarily rely on that signal for non-human minds. A conscious AI may never talk about its internal states, depending on its structure. (Humans have a survival advantage in the social sharing of internal realities; an AI will not be subject to that selection pressure. There's no reason for it to have any sort of emotional need to share its feelings, for example.) On the flip side, a savvy but non-conscious AI may talk about its "internal states", not because it actually has internal states, but because it is “guessing the teacher's password” in the strongest way imaginable: it has no understanding whatsoever of what those states are, but computes that aping internal states will accomplish its goals. I don't know how we could possibly tell whether the AI is aping consciousness for its own ends or whether it actually is conscious. If consciousness is thus undetectable, I can't see how science can investigate it.
That said, I am very well aware that “Throughout history, every mystery ever solved has turned out to be not magic”* and that every single time something has seemed inscrutable to science, a reductionist explanation has eventually surfaced. Knowing this, I seriously downgrade my confidence that "No, really, this time it is different. This phenomenon really is beyond the grasp of science." I look forward to someone coming forward with something clever that dissolves the question, but even so, it does seem inscrutable.
*- Though, to be fair, this is a selection bias. Of course all the solved mysteries weren't magic; all the mysteries that actually are magic remain unsolved, because they're magic! This is NOT to say I believe in magic, just that it's hardly saying much to claim that all the things we've come to understand were in principle understandable. To steelman: I do understand that with each mystery that was once declared magical and later shown not to be, our collective priors for the existence of magical things decrease. (There is a sort of halting problem here: if a question has remained unsolved since the dawn of asking questions, is that because it is unsolvable, or because we're right around the corner from solving it?)
Well, one of the reasons that the Turing Test has lasted so long as a benchmark, despite its problems, is the central genius of holding inorganic machines to the same standards as organic ones. Notwithstanding p-zombies and some of the weirder anime shows, we're actionably and emotionally confident in the consciousness of the humans that surround us every day. We can't experience these consciousnesses directly, but we do care about their states in terms of both instrumental and object-level utility.
An AGI presents new challenges, but we've already demonstrated a basic willingness to treat ambulatory meat sacks as valuable beings with an internal perspective. By assigning the same sort of 'conscious' label to a synthetic being who nonetheless has a similar set of experiential consequences in our lives, we can somewhat comfortably map our previous assumptions onto a new domain. That gives us a beachhead, and a basis for cautious expansion and observation in the much more malleable space of inorganic intelligences.
I'm not sure how comfortably.
I saw a bit of the movie Her, about the love affair between a guy and his operating system. It was horrifying to me, but I think for a different reason than for everyone else in the room. I was thinking, "He might be falling in love with an automaton. How do we know if he is in a relationship with another mind or just an unthinking mechanism of gears and levers that looks like another mind from the outside?" The idea of being emotionally invest...