Risto_Saarelma comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
So what should I make of this argument if I happen to know you're actually an upload running on classical computing hardware?
That someone managed to produce an implausibly successful simulation of a human being.
There's no contradiction in saying "zombies are possible" and "zombie-me would say that zombies are possible". (But let me add that I don't mean the sort of zombie which is supposed to be just the physical part of me, with an epiphenomenal consciousness subtracted, because I don't believe that consciousness is epiphenomenal. By a zombie I mean a simulation of a conscious being, in which the causal role of consciousness is being played by a part that isn't actually conscious.)
So if you accidentally cut the top of your head open while shaving and discovered that someone had gone and replaced your brain with a high-end classical computing CPU sometime while you were sleeping, you couldn't accept actually being an upload, since the causal structure that produces your thoughts about having qualia would still be there? (I suppose you might also object to the assumed-to-be-zombie upload you being referred to as 'you'.)
The reason I'm asking is that I'm a bit confused about exactly where the problems from just the philosophical part would come in under the outsourcing-to-uploaded-researchers scenario. Some kind of more concrete prediction, like that a neuromorphic AI architecturally isomorphic to a real human central nervous system just plain won't ever run as intended until you build a quantum octonion monad CPU to house the qualia bit, would be a much less confusing stance, but I don't think I've seen you take that.
I'm going to collect some premises that I think you affirm:
I have some questions about the implications of these assertions.
It's anthropically necessary that the ontology of our universe permits consciousness, but selection just operates on state machines, and I would guess that self-consciousness is adaptive because of its functional implications. So this is like looking for an evolutionary explanation of why magnetite can become magnetized. Magnetite may be in the brain of birds because it helps them to navigate, and it helps them to navigate because it can be magnetized; but the reason that this substance can be magnetized has to do with physics, not evolution. Similarly, the alleged quantum locus may be there because it has a state-machine structure permitting reflective cognition, and it has that state-machine structure because it's conscious; but it's conscious because of some anthropically necessitated ontological traits of our universe, not because of its useful functions. Evolution elsewhere may have produced unconscious intelligences with brains that only perform classical computations.
I think you have mistaken the thrust of my questions. I'm not asking for an evolutionary explanation of consciousness per se -- I'm trying to take your view as given and figure out what useful functions one ought to expect to be associated with the locus of consciousness.
What does conscious cognition do that unconscious cognition doesn't do? The answer to that tells you what consciousness is doing (though not whether these activities are useful...).
So if you observed such a classical upload passing exceedingly carefully designed and administered Turing tests, you wouldn't change your position on this issue? Is there any observation that would falsify your position?