
cameroncowan comments on [LINK] Could a Quantum Computer Have Subjective Experience? - Less Wrong Discussion

16 points · Post author: shminux 26 August 2014 06:55PM


Comment author: cameroncowan 27 August 2014 09:45:31PM 0 points [-]

I think the question is how you are going to define consciousness and how you are going to prove it a priori. If you use the language test, then yes, an FHE-encrypted sim with a lost key is still conscious (see comment below).

If I untorture a reversible simulation, you have to decide how far the reversibility goes and whether there is any imprint or trauma left behind. Does the computer feel or experience that reversal as a loss? Can you fully reverse the imprint of torture on consciousness in such a manner that running the simulation backwards has an incomplete or complete effect?

I don't think the Vaidman brain is conscious, because it's based on a specific input and a specific output. I still think John Searle is off on this, despite my opinion.

Comment author: gjm 28 August 2014 12:06:09PM 1 point [-]

If you use the language test

What language test? (And, how would a fully-homomorphically-encrypted sim with a lost key be shown to be conscious by anything that requires communicating with it?)

you have to decide how far the reversibility goes

The sort of reversibility Scott Aaronson is talking about goes all the way: after reversal, the thing in question is in exactly the same state as it was in before. No memory, no trauma, no imprint, nothing.
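The kind of reversibility at issue can be made concrete with a toy sketch (my own illustration, not anything from Aaronson's post): reversible gates are bijections on states, so applying their inverses in reverse order restores the starting state exactly, with no residue.

```python
# A minimal sketch of step-by-step reversible computation using CNOT
# gates, each of which is its own inverse (a bijection on bit tuples).

def cnot(bits, control, target):
    # Controlled-NOT: flips `target` iff `control` is 1.
    bits = list(bits)
    if bits[control]:
        bits[target] ^= 1
    return tuple(bits)

def run_forward(state):
    state = cnot(state, 0, 1)
    state = cnot(state, 1, 2)
    return state

def run_backward(state):
    # Apply the inverse gates in reverse order (CNOT is self-inverse).
    state = cnot(state, 1, 2)
    state = cnot(state, 0, 1)
    return state

start = (1, 0, 1)
assert run_backward(run_forward(start)) == start  # no imprint remains
```

After the rewind the system is bit-for-bit identical to its initial state, which is exactly the "no memory, no trauma" condition.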

The Vaidman brain isn't conscious I don't think because it's based on a specific input and a specific output.

I don't understand that at all. Why does that stop it being conscious? If I ask you a specific yes/no question (in the ordinary fashion, no Vaidman tricksiness) and you answer it, does the fact that you were giving a specific answer to a specific question mean that you weren't conscious while you did it?

Comment author: [deleted] 28 August 2014 04:52:14PM 1 point [-]

Giving answers is an irreversible operation. The whole "is a fully reversible computer conscious?" thing doesn't really make sense to me -- for the computer to actually have an effect on the world requires irreversible outputs. So I have trouble imagining scenarios where my expectations are different but the entire process remains reversible...

Comment author: calef 28 August 2014 05:46:05PM 0 points [-]

You could set up a fully quantum whole brain emulation of a person sitting in a room with a piece of paper that says "Prove the Riemann Hypothesis". Once they've finished the proof, you record what's written on their paper, and reverse the entire simulation (as it was fully quantum mechanical, thus, in principle, fully unitarily reversible).

Looking at what they wrote on the paper doesn't mean you have to communicate with them.

Comment author: [deleted] 28 August 2014 06:54:00PM *  3 points [-]

The act of writing on the paper was an irreversible action. And yes, looking at it is communication, in the physical sense. Specifically, the photon interaction with the paper and with your eyes is not reversible. Any act of extracting information from the computational process, in a way where the information (or anything causally dependent on it) is not also reversed when the computation is run backwards, must be an irreversible action.

What does a universe look like where a computation has been run forwards, and then run backwards in a fully reversible way? Like it never happened at all.

Comment author: calef 28 August 2014 10:41:10PM *  0 points [-]

I think the confusion here is about what "fully quantum whole brain emulation" actually means.

The idea is that you have a box (probably large), within which is running a closed system calculation which is equivalent to simulating someone sitting in a room trying to write a theorem (all the way down to the quantum level). You are not interacting with the simulation, you are running the simulation. At every stage of the simulation, you have perfect information about the full density matrix of the system (i.e., the person being simulated, the room, the atoms in the person's brain, the movements of the pencil, etc.)

If you have this level of control, then you are implementing the full unitary time evolution of the system. The time evolution operator is reversible. Thus, you can just run the calculation backwards.
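That last step can be illustrated with a toy sketch (my own, not calef's actual setup; the helper names are just illustrative): applying a unitary U and then its conjugate transpose U† returns a state vector exactly to where it started.

```python
# Toy unitary "time evolution": U followed by its conjugate transpose
# (U-dagger) is the identity, so the evolution can be run backwards.

def mat_vec(m, v):
    # Plain matrix-vector multiplication.
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def dagger(m):
    # Conjugate transpose of a square matrix.
    return [[m[j][i].conjugate() for j in range(len(m))] for i in range(len(m))]

s = 1 / 2 ** 0.5
U = [[s, s], [s, -s]]            # a Hadamard gate: unitary, U†U = I
psi = [1 + 0j, 0 + 0j]           # initial state |0>

evolved = mat_vec(U, psi)                 # run the evolution "forward"
restored = mat_vec(dagger(U), evolved)    # run it "backward"

assert all(abs(a - b) < 1e-12 for a, b in zip(restored, psi))
```

The simulated brain would be a vastly larger version of the same algebra: one big unitary, inverted exactly.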

So, to the person in the room writing the proof, as far as they know, the photon flying from the paper hitting their eye and being registered by their brain is an irreversible interaction--they don't have complete control over their environment. But to you, the simulation runner, this action is perfectly reversible.

Now, the contention may be that this simulated person wasn't actually ever conscious during the course of this ultra-high-fidelity experiment. Answering that question either way seems to have strange philosophical implications.

Comment author: [deleted] 28 August 2014 11:27:42PM *  0 points [-]

What you describe is all true, but useless as described. The earlier poster wanted the simulation to output data (e.g. by writing it on paper -- the paper being outside the simulation), and then reverse the simulation. Sorry, you can't do that. "Reversible" has a very specific meaning in the context of statistical and quantum physics. Even if the computation itself can be reversed, once it has output data that property is lost. We'd no longer be talking about a reversible process, because once the computation is reversed, that output still exists.

Comment author: calef 28 August 2014 11:34:35PM 0 points [-]

I'm not sure who you're talking about because I'm the person above referring to someone writing on paper--and the paper was meant to also be within the simulation. The simulator is "reading the paper" by nature of having perfect information about the system.

"Reversible" in this context is only meant to describe the contents of the simulation. Computation can occur completely reversibly.

Comment author: [deleted] 29 August 2014 12:28:36AM *  0 points [-]

Sorry, got mixed up with cameroncowan. Anyway, to the original point:

You said "Once they've finished the proof, you record what's written on their paper, and reverse the entire simulation... Looking at what they wrote on the paper doesn't mean you have to communicate with them."

My interpretation--which may be wrong--is that you are suggesting that the person running the simulation record the state of the simulation at the moment the problem is solved, or at least the part of the simulator state having to do with the paper. However, the process of extracting information out of the simulation -- saving state -- is irreversible, at least if you want it to survive rewinding the simulation.

To put it differently: if the simulation is fully reversible, then you run it forwards, run it backwards, and at the end you have absolutely zero knowledge about what happened in between. Any preserved state that wasn't there at the beginning would mean that the process wasn't fully reversed.
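That asymmetry can be sketched in a few lines (a hypothetical toy of my own, not anyone's actual proposal): the simulation itself rewinds perfectly, but any record copied to the outside survives the rewind, so the combined process is no longer reversible.

```python
# The "outside world": anything written here is never rewound.
log = []

def step(state):
    return state + 1              # a bijection: reversible

def unstep(state):
    return state - 1              # its exact inverse

def run_without_readout(state):
    # Forward then backward; nothing leaves the system.
    return unstep(step(state))

def run_with_readout(state):
    mid = step(state)
    log.append(mid)               # copying state out: information escapes
    return unstep(mid)

assert run_without_readout(5) == 5 and log == []   # like it never happened
assert run_with_readout(5) == 5 and log == [6]     # the record persists
```

The internal state returns to 5 either way; the difference is entirely in what the outside world retains.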

Looking at the paper is communicating with the simulation. It may be one-way communication, but that is enough.

Comment author: calef 29 August 2014 01:50:27AM *  0 points [-]

I'm suggesting that the person running the simulation knows the state of the simulation at all times. If this bothers you, pretend everything is being done digitally, on a classical computer, with exponential slowdown.

Such a calculation can be done reversibly without ever passing information into the system.

Comment author: cameroncowan 28 August 2014 07:39:18PM 0 points [-]

I would like to know that as well, because I think there is an effect if it is conscious; making it fully reversible, I think, denies a certain consciousness.

Comment author: [deleted] 28 August 2014 08:04:01PM 2 points [-]

That's what Scott's blog is about :)

Comment author: cameroncowan 28 August 2014 07:38:35PM 0 points [-]

But writing the proof and reading it is communication.

Comment author: calef 28 August 2014 10:52:31PM 0 points [-]

"Reading it" is akin to "having perfect information about the full density matrix of the system". You don't have to perturb the system to get information out of it.

Comment author: cameroncowan 28 August 2014 07:41:23PM -1 points [-]

Language Test: "the language test" is my shorthand for the Heideggerian idea of language as a proof of consciousness.

Reversibility: I don't think that kind of reversibility is possible while also maintaining consciousness.

Vaidman Brain: Then that invalidates the idea if you remove the tricksiness. I would of course remain in a certain state of consciousness the entire time.

Comment author: gjm 28 August 2014 08:47:01PM 0 points [-]

How is a simulation of a conscious mind, operating behind a "wall" of fully homomorphic encryption for which no one has the key, going to pass this "language test"?

I don't think that kind of reversibility is possible while also maintaining consciousness.

Then you agree with Scott Aaronson on at least one thing.

Then that invalidates the idea if you remove the tricksiness.

What I am trying to understand is what about the Vaidman procedure makes consciousness not be present, in your opinion. What you said before is "based on a specific input and a specific output", but we seem to be agreed that one can have a normal interaction with a normal conscious brain "based on a specific input and a specific output" so that can't be it. So what is the relevant difference, in your opinion?

Comment author: cameroncowan 28 August 2014 10:27:05PM 0 points [-]

That is my point: it's not, and therefore can't pass the conscious language test, and I think that's quite the problem.

I think the Vaidman procedure doesn't make consciousness present because the specific input and output being only a yes-or-no answer makes it no better than the computers we are using right now. I can ask Siri yes-or-no questions and get something out, but we can agree that Siri is an extremely simple kind of consciousness embodied in computer code, built at Apple to work as an assistant in iPhones. If the Vaidman brain were conscious, I should be able to ask it a question without definable bounds and get any answer between "42" and "I don't know" or "I cannot answer that." So, for example, you can ask me all these questions and I can work to create an answer, as I am now doing, or I could simply say "I don't know" or "my head is a parrot; your post is invalid." The answer would exist as a signpost of my consciousness, although it might be unsatisfying. The Vaidman brain could not work under these conditions because the bounds are set. Any time you have set bounds, saying a priori that it is conscious is impossible.

Comment author: gjm 28 August 2014 11:39:18PM 0 points [-]

That is my point [...]

Then I have no idea what you meant by "If you use the language test, then yes, an FHE-encrypted sim with a lost key is still conscious".

the specific input and output being only a yes or no answer makes it no better than the computers we are using right now.

If I ask you a question and somehow constrain you only to answer yes or no, that doesn't stop you being conscious as you decide your answer. There's a simulation of your whole brain in there, and it arrives at its yes/no answer by doing whatever your brain usually does to decide. All that's unusual is the context. (But the context is very unusual.)

Comment author: cameroncowan 29 August 2014 02:32:29AM 0 points [-]

I would say no, the FHE sim is not, because the most important aspect of language is communication and meaning. The ability to communicate does not matter so long as it cannot have meaning to at least one other person. On the second point we are agreed.