Yet another exceptionally interesting blog post by Scott Aaronson, describing his talk at the Quantum Foundations of a Classical Universe workshop, videos of which should be posted soon. Despite the disclaimer "My talk is for entertainment purposes only; it should not be taken seriously by anyone", it raises several serious and semi-serious points about the nature of conscious experience and related paradoxes, which are generally overlooked by philosophers, including Eliezer, because they lack the relevant CS/QC expertise. For example:

  • Is an FHE-encrypted sim with a lost key conscious?
  • If you "untorture" a reversible simulation, did it happen? What does the untorture feel like?
  • Is a Vaidman brain conscious? (You have to read the blog post to learn what it is; I'm not going to spoil it.)

Scott also suggests a model of consciousness which sort-of resolves the issues of cloning, identity and such, by introducing what he calls a "digital abstraction layer" (again, read the blog post to understand what he means by that). Our brains might be lacking such a layer and so be "fundamentally unclonable". 

Another interesting observation is that you never actually kill the cat in the Schroedinger's cat experiment, for a reasonable definition of "kill".

There are several more mind-blowing insights in this "entertainment purposes" post/talk, related to the existence of p-zombies, consciousness of Boltzmann brains, the observed large-scale structure of the Universe and the "reality" of Tegmark IV.

I certainly had the humbling sense that Scott is at the level above mine, and I would like to know if other people did, too.

Finally, the standard bright dilettante caveat applies: if you think up a quick objection to what an expert in the area argues, and you yourself are not such an expert, the odds are extremely heavy that this objection is either silly or has been considered and addressed by the expert already. 

 

55 comments

I haven't read it yet, but I think the bright dilettante caveat applies less strongly than usual, given that the post is disclaimed with "My talk is for entertainment purposes only; it should not be taken seriously by anyone," and I think it's weird you felt it was necessary to bring it up for this post specifically. Do you want people to take this more seriously than Scott seems to? Anyway, I feel more suspicious going into the post than I would otherwise because of this.

I think Scott is being overly (possibly falsely) modest here. He calls his untestable speculations "entertainment", whereas a philosophy department would call a similarly deep speculation a PhD thesis. He is a complexity theory expert, and from this point of view anything that is not a theorem or at least a mathematical conjecture is "entertainment".

Another interesting comment by Scott on why he is less of a "pure reductionist" than he used to be. One of his many points is related to "singulatarians":

My contacts with the singularity movement, and especially with Robin Hanson and Eliezer Yudkowsky, who I regard as two of the most interesting thinkers now alive (Nick Bostrom is another). I give the singulatarians enormous credit for taking the computational theory of mind to its logical conclusions—for not just (like most scientifically-minded people) paying lip service to it, but trying extremely hard to think through what it will actually be like when and if we all exist as digital uploads, who can make trillions of copies of ourselves, maybe “rewind” anything that happened to us that we didn’t like, etc. What will ethics look like in such a world? What will the simulated beings value, and what should they value? At the same time, the very specificity of the scenarios that the singulatarians obsessed about left a funny taste in my mouth: when I read (for example) the lengthy discourses about the programmer in his basement clicking “Run” on a newly-created AI, which then (because of bad programming) promptly sets about converting the whole observable universe into paperclips, I was less terrified than amused: what were the chances that, out of all the possible futures, ours would so perfectly fit the mold of a dark science-fiction comedy? Whereas the singulatarians reasoned:

“Our starting assumptions are probably right, ergo we can say with some confidence that the future will involve trillions of identical uploaded minds maximizing their utility functions, unless of course the Paperclip-Maximizer ‘clips’ it all in the bud”

I accepted the importance and correctness of their inference, but I ran it in the opposite direction:

“It seems obvious that we can’t say such things with any confidence, ergo the starting assumptions ought to be carefully revisited—even the ones about mind and computation that most scientifically-literate people say they agree with.”

I don't see how Scott's proposed revision of the starting assumptions actually changes the conclusions. Even if he is right that uploads and AIs with a "digital abstraction layer" can't be conscious, that's not going to stop a future involving trillions of uploads, or stop paperclip maximizers.

If these uploads are p-zombies (Scott-zombies?) because they are reversible computations, then their welfare doesn't matter. I don't think he says that it prevents paperclip maximizers.

So Scott meant to argue against "the future should involve trillions of uploads" rather than "the future will involve trillions of uploads"?

He suggests that all those uploads might not be conscious if they are run on a quantum computer reversibly (or have some other "clean digital abstraction layer"). He states that this is a huge speculation, but it is still an alternative not usually considered by the orthodox reductionists.

trying extremely hard to think through what it will actually be like when and if we all exist as digital uploads

The first serious attempt at this that I've seen is Permutation City which came out 20 years ago.

[-][anonymous]10y90

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker. In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry. What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point. So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc. But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going. What if we homomorphically encrypted a simulation of your brain? And what if we hid the only copy of the decryption key, let’s say in another galaxy? Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

Okay, I think my bright dilettante answer to this is the following: The key is what allows you to prove that the FHE-encrypted sim is conscious. It is not, itself, the sim's consciousness, which is probably still silently running (although that can no longer be proven). Proof of consciousness and consciousness are different things, although they clearly are related, and something may have proved its consciousness in the past before losing the ability to do so.

I used the following thought experiment while thinking about this:

Assume Bob, Debra, and Flora work at a company running a number of FHE-encrypted simulations ("FHEs" for short). Everyone at the company has to carry their FHE's decryption key with them at all times.

Alice is an FHE simulation in the middle of calculating a problem for Bob. It will take about 5 minutes to solve. Charlie is a separate FHE simulation in the middle of calculating a separate problem for Debra. It will also take 5 minutes to solve.

Bob and Debra both remove their keys, go to the bathroom, and come back. That takes 4 minutes.

Debra plugs the key back in, and sure enough FHE Charlie reports that it needs 1 more minute to solve the problem. A minute later Charlie solves it, and gives Debra the answer.

Bob comes in and tells Debra that he appears to have gotten water on his key and it is no longer working, so all he can get from Alice is just random gibberish. Bob is going to shut Alice down.

"Wait a minute." Debra tells Bob. "Remember, the problem we were working on was 'Are you conscious?' and the answer Charlie gave me was 'Yes. And here is a novel and convincing proof.' I read the proof and it is novel and convincing. Alice was meant to independently test the same question, because she has the same architecture as Charlie, just different specific information, like how you and I have the same architecture but different information. It doesn't seem plausible that Charlie would be conscious and Alice wouldn't."

"True." Bob says, reading the paper. "But the difference is, Charlie has now PROVED he's conscious, at least to the extent that can be done by this novel and convincing proof. Alice may or may not have had consciousness in the first place. She may have had a misplaced semicolon and outputted a recipe for blueberry pie. I can't tell."

"But she was similar to Charlie in every way prior to you breaking the encryption key. It doesn't make sense that she would lose consciousness when you had a bathroom accident." Debra says.

"Let's rephrase. She didn't LOSE conciousness, but she did lose the ability to PROVE she's conscious." Bob says.

"Hey guys?" Flora, a coworker says. "Speaking of bathroom accidents, I just got water on my key and it stopped working."

"We need to waterproof these! We don't have spares." Debra says shaking her head. "What happened with your FHE, Edward?"

"Well, he proved he was conscious with a novel and convincing proof." Flora says. handing a decrypted printout of it over to Debra. "After I read it, I was going to have a meeting with our boss to share the good news, and I wanted to hit the bathroom first... and then this happened."

Debra and Bob read the proof. "This isn't the same as Charlie's proof. It really is novel." Debra notes.

"Well, clearly Edward is conscious." Bob says. "At least, he was at the time of this proof. If he lost consciousness in the near future, and started outputting random gibberish we wouldn't be able to tell."

FHE Charlie chimes in. "Since I'm working, and you still have a decryption key for me, you can at least test that I don't start producing random gibberish in the near future. Since we're based on similar architecture, the same reasoning should apply to Alice and Edward. Also, Debra, could you please waterproof your key ASAP? I don't want people to take a broken key as an excuse to shut me down."

End thought experiment.

Now that I've come up with that, and I don't see any holes myself, I guess I need to start finding out what I'm missing as someone who only dilettantes this. If I were to guess, it might be somewhere in the statement 'Proof of consciousness and consciousness are different things.' That seems to be a likely weak point. But I'm not sure how to address it immediately.


I used the following thought experiment while thinking about this:

The thought experiment that occurs to me is simply looking at someone's brain while they do something stereotypically consciousness-indicating! An outside observer watching a brain might say, "oh, that just looks like a wet, wobbly lump of meat, I can't even remotely tell how it's supposed to operate just by looking at it, why would I think it's generating consciousness?" The analogue to FHE here would be a lack of knowledge about neuroscience & such.

Hmm, it seems to me that what is missing from this is the definition of consciousness.

Thanks for the link. I think one knows progress is happening when one gets an intense sensation that consciousness is a human word and the universe does not have to oblige our parochial biases by having it as a natural category.

That said, I am happy to take his recommendation to only take it a bit seriously, and in particular I think he's off-base about duplication and the "digital abstraction layer."

Picking up on the "level above mine" comments -- Scott is a very talented and successful researcher. He also has tenure and can work on what he likes. The fact that he considers this sort of philosophical investigation worth his time and attention makes me revise upward my impression of how worthwhile the topic is.

Once you've asked about decoherence and irreversibility, that immediately raises the question of whether these are what we're aiming at, or something usually very closely related - or indeed whether these are the same thing at all! Suppose we have a quantum computer with three parts, each much larger than the previous.

  • Alice is a simulation of a putatively conscious entity. Suppose that the only reason we'd have not to think it's conscious is what we're about to do to it.
  • Alice's Room is an entropy sink Alice will interact with in the process of its being putatively conscious
  • In order to run Alice and Alice's Room, we also have an entropy sink we use for error correction.

We run Alice and Alice's Room forwards in time for a while, and Alice is doing a bunch of locally-irreversible computations, dumping the resulting entropy into Alice's Room instead of outer space.

At some point, we quantum-randomly either: 1) let Alice's Room shed entropy into outer space, causing the local irreversibility to become permanent, or 2) time-reverse the dynamics of Alice and Alice's Room until we reach the initial state.

Was Alice conscious in case 1? In case 2? Since the sequence of events in both cases was in fact the same exact sequence of events - not merely identical, but referring to the exact same physically realized sequence of events - up to our quantum coinflip, it's nonsense to say that one was conscious and the other was not.

So yes, consciousness is connected to the arrow of time, but on a local level, not necessarily on the billion-year scale.
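
(A toy classical sketch of the Alice-and-Room setup above -- my own illustration, not from the comment: Alice's locally irreversible steps are embedded in a globally reversible computation by swapping the would-be-erased bits into the Room's fresh 0 bits. Run forwards, Alice looks like she is erasing information; run backwards, the joint Alice-plus-Room state returns exactly to the start. Only shedding the Room's contents "into outer space", i.e. resetting it, would make the erasure permanent.)

```python
# Toy model: Alice "erases" her bits reversibly by swapping each one into a
# fresh 0 bit of the Room (the entropy sink). All names and sizes are made up.
alice = [1, 0, 1, 1]                 # Alice's working bits
room  = [0, 0, 0, 0]                 # entropy sink, initialized to zeros
initial = (alice[:], room[:])

def swap_step(a, r, i):              # a swap gate is its own inverse
    a[i], r[i] = r[i], a[i]

for i in range(4):                   # run "forwards": locally it looks like erasure
    swap_step(alice, room, i)
assert alice == [0, 0, 0, 0]

for i in reversed(range(4)):         # case 2: time-reverse the whole dynamics
    swap_step(alice, room, i)
assert (alice, room) == initial      # nothing irreversible ever happened

# Case 1 would instead reset room to zeros ("shed into outer space"); after
# that, no reversible evolution of Alice+Room can recover the initial state.
```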

This lets us spit out that bullet about anti-de Sitter space. If you're in an AdS space, you're going to choke on your own waste heat a zillion years before quantum billiards brings you back close to the starting point.

So, I'd say that there's consciousness inside this AdS trap, for a little while, until they die. When quantum billiards has again randomly lowered entropy to the point that a potentially conscious entity might have an entropy sink, then you can again have consciousness.

So, the AdS sphere is 99.999...(insert a lot)..99% not conscious, on account of its being dead, not on account of its being quantum-reversible.

wolfgang proposed a similar example on Scott's blog:

I wonder if we can turn this into a real physics problem:

1) Assume a large-scale quantum computer is possible (thinking deep thoughts, but not really self-conscious as long as its evolution is fully unitary).

2) Assume there is a channel which allows enough photons to escape in such a way to enable consciousness.

3) However, at the end of this channel we place a mirror – if it is in the consciousness-OFF position the photons are reflected back into the machine and unitarity is restored, but in the consciousness-ON position the photons escape into the deSitter universe.

4) As you can guess we use a radioactive device to set the mirror into c-ON or c-OFF position with 50% probability.

Will the quantum computer now experience i) a superposition of consciousness and unconsciousness or ii) will it always have a “normal” conscious experience or iii) will it have a conscious experience in 50% of the cases ?

Scott responded that

I tend to gravitate toward an option that’s not any of the three you listed. Namely: the fact that the system is set up in such a way that we could have restored unitarity, seems like a clue that there’s no consciousness there at all—even if, as it turns out, we don’t restore unitarity.

This answer is consistent with my treatment of other, simpler cases. For example, the view I’m exploring doesn’t assert that, if you make a perfect copy of an AI bot, then your act of copying causes the original to be unconscious. Rather, it says that the fact that you could (consistent with the laws of physics) perfectly copy the bot’s state and thereafter predict all its behavior, is an empirical clue that the bot isn’t conscious—even before you make a copy, and even if you never make a copy.

His example is different in a very particular way:

His conscious entity gets to dump photons into de Sitter space directly, and only if you open the channel. This makes Scott's counter-claim prima facie basically plausible - if your putative consciousness only involves reversible actions, then is it really conscious?

But I specifically drew a line between Alice and Alice's Room, and specified that Alice's normal operations are locally irreversible: they must dump entropy into the Room, taking in one of its 0 bits and returning something that might be 1 or 0. If you feed her a 1 bit instead, she dies of waste heat (maybe she has some degree of tolerance for 1s, but as the density of 1s approaches 50% she cannot survive).

If you were to just leave the Room open all the time, always resetting its qubits to 0, Alice would operate the same, aside from having no risk of heatstroke. (In this case, of course, if you ran the simulation backwards, the result would not be where you started, but catastrophe.)

I think this is a pretty crucial distinction.

...

At least that find explains why the comment disappeared without a ripple. It triggered "I've seen this before".

Was Alice conscious in case 1? In case 2? Since the sequence of events in both cases was in fact the same exact sequence of events - not merely identical, but referring to the exact same physically realized sequence of events - up to our quantum coinflip, it's nonsense to say that one was conscious and the other was not.

Well, Scott disagrees:

that you and I are conscious seems like a pretty clear paradigm-case. On the other hand, that you and I would still be conscious even if there were aliens who could perfectly copy, predict, reverse, and cohere us (very likely by first uploading us into a digital substrate), seems far from a paradigm-case. If anything, it seems to me like a paradigmatic non-paradigm-case.

I disagree with his caveat for consciousness, since I would like to think of myself as conscious even if I am a simulation someone can run backwards, but I am not 100% sure, because reversibility changes the game considerably. Scott alludes to it in the Schrodinger's cat experiment, by noting that death becomes reversible (in the QM-sense, not the cryonic sense), and thus largely loses its meaning:

I claim that there’s no animal cruelty at all in the Schrödinger’s cat experiment. And here’s why: in order to prove that the cat was ever in a coherent superposition of |Alive〉 and |Dead〉, you need to be able to measure it in a basis like {|Alive〉+|Dead〉,|Alive〉-|Dead〉}. But if you can do that, you must have such precise control over all the cat’s degrees of freedom that you can also rotate unitarily between the |Alive〉 and |Dead〉 states. (To see this, let U be the unitary that you applied to the |Alive〉 branch, and V the unitary that you applied to the |Dead〉 branch, to bring them into coherence with each other; then consider applying U⁻¹V.) But if you can do that, then in what sense should we say that the cat in the |Dead〉 state was ever “dead” at all? Normally, when we speak of “killing,” we mean doing something irreversible—not rotating to some point in a Hilbert space that we could just as easily rotate away from.

Since this changes at least one fundamental concept, I am reluctant to state that it cannot apply to another.
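
(A concrete toy instance of the U⁻¹V point in the quote above -- my own example, not Scott's: rotating into the {|Alive〉+|Dead〉, |Alive〉-|Dead〉} basis is a Hadamard rotation, and composing operations you already needed for that measurement gives a unitary that maps |Dead〉 straight back to |Alive〉.)

```python
# If you can rotate into the coherent basis (H) and apply a phase there (Z),
# the composition H Z H maps |Dead> to |Alive>: "killing" was only a rotation.
import numpy as np

alive = np.array([1.0, 0.0])
dead  = np.array([0.0, 1.0])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # rotation into the coherent basis
Z = np.diag([1.0, -1.0])                       # a relative phase in that basis

resurrect = H @ Z @ H
assert np.allclose(resurrect @ dead, alive)    # the cat is "un-killed" unitarily
```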

He was willing to bite a big bullet to defend the definition he used. I just applied the definition he'd used, and plopped a much fatter bullet on his plate.

To recap: he would interpret the same sequence of past physical states as conscious or not depending on which branch of a later quantum split he ended up in.

Meanwhile, I provided an alternate very similar interpretation that maintains all of the benefits I can discern of his formulation and dodges both bullets.

Consider posting your comment on his blog.

Too bad he didn't consider it worth replying to (yet?)

Too bad indeed. In my experience, if he hasn't within a day or so, he won't.

Funny, I just came here to copy it for that purpose.

Regarding fully homomorphic encryption: only a small number of operations can be performed on FHE ciphertexts without the public key, and "bootstrapping" FHE from a somewhat homomorphic scheme requires the public key to be used in every operation, as well as the secret key itself to be encrypted under the scheme, at least with the currently known constructions by Gentry et al. based on lattices and integer arithmetic.

It seems unlikely that FHE could operate without knowledge of at least the public key. If it were possible to continue a simulation indefinitely without the public key, the implication would be that one could evaluate O(2^N) simulations with O(N) work: choose an N-bit scheme such that N >= the number of bits required for the state of the simulation, and run the simulation on arbitrary FHE values. Decryption with any N-bit key would then yield a different, valid simulation history, assuming a mapping from decrypted states to simulated states.
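
(A toy aside, not from the thread: full FHE is too heavy to sketch in a comment, but the weaker "compute on ciphertexts without decrypting" idea is already visible in textbook RSA's multiplicative homomorphism -- and note that even this toy operation needs the public modulus, in line with the point above. All numbers are insecure toy values.)

```python
# Toy illustration (not FHE): textbook RSA is multiplicatively homomorphic,
# so someone holding only ciphertexts and the public key (e, n) can compute
# a ciphertext of a*b without ever seeing a or b.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
c_product = (enc(a) * enc(b)) % n   # computed without the secret key d

assert dec(c_product) == a * b      # only the key holder learns the result, 42
# FHE generalizes this from "one multiplication" to arbitrary computations.
```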

There's this part near the end:

From this extreme, even most scientific rationalists recoil. They say, no, even if we don’t yet know exactly what’s meant by “physical instantiation”, we agree that you only get consciousness if the computer program is physically instantiated somehow.

Why, no, I disagree. To explain why I'm not a Boltzmann brain or a sequence of pi digits or something like that, it seems enough to just use a prior based on description length. There's no need to postulate the existence of physics as something separate from math.

[-][anonymous]10y10

Don't project your priors onto the universe. You might find yourself surprised.

Feel free to elaborate, here or there.

Added some elaboration to the parent comment. I just feel that using a simplicity-based prior might solve many problems that seem otherwise mysterious. 1) I'm not a Boltzmann brain, because locating a Boltzmann brain takes many more bits than deriving my brain from the laws of physics. 2) A mind running under homomorphic encryption is conscious, and its measure depends inverse-exponentially on the size of the decryption key. 3) Multiple or larger computers running the same program contain more consciousness than one small computer, because they take fewer bits to locate. 4) The early universe had low entropy because it had a short description. And so on.
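
(A minimal sketch of the kind of description-length prior being invoked here, with made-up bit counts purely for illustration: each hypothesis is weighted by 2^(-bits needed to specify it), so a mind that can only be located by also specifying a k-bit decryption key loses a factor of 2^-k of measure.)

```python
# Sketch of a description-length prior: weight(h) = 2 ** (-bits to specify h).
# The bit counts below are hypothetical, chosen only to show the scaling.
def prior_weight(description_bits):
    return 2.0 ** -description_bits

physics_bits = 200        # cost of specifying the laws of physics + locating a brain
key_bits = 128            # extra cost of the decryption key for an encrypted mind

plain_mind     = prior_weight(physics_bits)
encrypted_mind = prior_weight(physics_bits + key_bits)

print(encrypted_mind / plain_mind)   # 2**-128: exponentially smaller measure
```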

It's interesting that we had a very similar discussion here, minus the actual quantum mechanics. At least intuitively, it seems like physical change is what leads to consciousness, not simply the possibility or knowledge of change. One possible counter-argument to consciousness being dependent on decoherence is the following: what if we could choose whether or not, and when, to decohere? For example, what if inside Schroedinger's box is a cat embryo that will be grown into a perfectly normal immortal cat if nucleus A decays, and the box will open if nucleus B decays? When the box opens, is there no cat, a conscious cat, or a cat with no previous consciousness? What if B is extremely unlikely to decay but the cat can press a switch that will open the box? It seems non-intuitive that consciousness should depend on what happens in the future, outside your environment.

I think tying physical change to consciousness is dangerous, because that would make things that do not change unconscious, or make things that stay in a permanent state lose their consciousness. Indeed, we know that atoms are always moving, but if we stopped that process, would consciousness cease? If I froze you so you moved very slowly, would that end your consciousness until things sped up again? How does this work within the mind and soul? How could we stop them and end their consciousness? I don't think you can comprehend consciousness without thinking of it as continuous.

That commits you to the position that all instances of human unconsciousness are just failure to form memories and do a bunch of other things (like interact with the environment), but the lights are never off, as it were.

Yes, and people live their whole lives this way, in a state of unconsciousness where they don't do all kinds of things. There have been instances of people freezing up or not remembering very traumatic events. There are people who have terrible things happen to them and have no memory of it until something triggers it. That's why we use EMDR to help those people.

I don't see how this is relevant at all, feel free to explain.

I think the question is how you are going to define consciousness and how you are going to prove that a priori. If you use the language test then yes, an FHE-encrypted sim with a lost key is still conscious (see comment below).

If I untorture a reversible simulation, you have to decide how far the reversibility goes and whether any imprint or trauma is left behind. Does the computer feel or experience that reversal as a loss? Can you fully reverse the imprint of torture on consciousness, or does running the simulation backwards have only an incomplete effect?

The Vaidman brain isn't conscious I don't think because it's based on a specific input and a specific output. I still think John Searle is off on this despite my opinion.

If you use the language test

What language test? (And, how would a fully-homomorphically-encrypted sim with a lost key be shown to be conscious by anything that requires communicating with it?)

you have to decide how far the reversibility goes

The sort of reversibility Scott Aaronson is talking about goes all the way: after reversal, the thing in question is in exactly the same state as it was in before. No memory, no trauma, no imprint, nothing.

The Vaidman brain isn't conscious I don't think because it's based on a specific input and a specific output.

I don't understand that at all. Why does that stop it being conscious? If I ask you a specific yes/no question (in the ordinary fashion, no Vaidman tricksiness) and you answer it, does the fact that you were giving a specific answer to a specific question mean that you weren't conscious while you did it?

[-][anonymous]10y20

Giving answers is an irreversible operation. The whole "is a fully reversible computer conscious?" thing doesn't really make sense to me -- for the computer to actually have an effect on the world requires irreversible outputs. So I have trouble imagining scenarios where my expectations are different but the entire process remains reversible...

You could set up a fully quantum whole brain emulation of a person sitting in a room with a piece of paper that says "Prove the Riemann Hypothesis". Once they've finished the proof, you record what's written on their paper, and reverse the entire simulation (as it was fully quantum mechanical, thus, in principle, fully unitarily reversible).

Looking at what they wrote on the paper doesn't mean you have to communicate with them.

[-][anonymous]10y40

The act of writing on the paper was an irreversible action. And yes, looking at it is communication, in the physical sense. Specifically, the photon interaction with the paper and with your eyes is not reversible. Any act of extracting information from the computational process, in a way where the information or anything causally dependent on that information is not also reversed when the computation is run backwards, must be an irreversible action.

What does a universe look like where a computation has been run forwards, and then run backwards in a fully reversible way? Like it never happened at all.

I think the confusion here is about what "fully quantum whole brain emulation" actually means.

The idea is that you have a box (probably large), within which is running a closed-system calculation equivalent to simulating someone sitting in a room trying to prove a theorem (all the way down to the quantum level). You are not interacting with the simulation; you are running the simulation. At every stage of the simulation, you have perfect information about the full density matrix of the system (i.e., the person being simulated, the room, the atoms in the person's brain, the movements of the pencil, etc.).

If you have this level of control, then you are implementing the full unitary time evolution of the system. The time evolution operator is reversible. Thus, you can just run the calculation backwards.
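
(A toy numerical version of that last point, at laughably small scale and with a random unitary standing in for the "whole brain emulation": evolve forwards with U, then apply U's conjugate transpose and you are back exactly where you started.)

```python
# Toy closed-system "simulation": forward evolution by a unitary U,
# then exact reversal by U^dagger. Eight dimensions, not a brain.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(M)                      # a random unitary

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                               # initial state of the closed system

psi_forward  = U @ psi0                     # run the simulation forwards
psi_reversed = U.conj().T @ psi_forward     # run it backwards

assert np.allclose(psi_reversed, psi0)      # the history is perfectly undone
```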

So, to the person in the room writing the proof, as far as they know, the photon flying from the paper hitting their eye and being registered by their brain is an irreversible interaction--they don't have complete control over their environment. But to you, the simulation runner, this action is perfectly reversible.

Now, the contention may be that this simulated person wasn't actually ever conscious during the course of this ultra-high-fidelity experiment. Answering that question either way seems to have strange philosophical implications.

[-][anonymous]10y00

What you describe is all true, but useless as described. The earlier poster wanted the simulation to output data (e.g. by writing it on paper -- the paper being outside of the simulation), and then reverse the simulation. Sorry, you can't do that. "Reversible" has a very specific meaning in the context of statistical and quantum physics. Even if the computation itself can be reversed, once it has output data that property is lost. We'd no longer be talking about a reversible process, because once the computation is reversed, that output still exists.

I'm not sure who you're talking about because I'm the person above referring to someone writing on paper--and the paper was meant to also be within the simulation. The simulator is "reading the paper" by nature of having perfect information about the system.

"Reversible" in this context is only meant to describe the contents of the simulation. Computation can occur completely reversibly.

[-][anonymous]10y00

Sorry, got mixed up with cameroncowan. Anyway, to the original point:

You said "Once they've finished the proof, you record what's written on their paper, and reverse the entire simulation... Looking at what they wrote on the paper doesn't mean you have to communicate with them."

My interpretation--which may be wrong--is that you are suggesting that the person running the simulation record the state of the simulation at the moment the problem is solved, or at least the part of the simulator state having to do with the paper. However, the process of extracting information out of the simulation -- saving state -- is irreversible, at least if you want it to survive rewinding the simulation.

To put it differently, if the simulation is fully reversible, then you run it forwards, run it backwards, and at the end you have absolutely zero knowledge about what happened in between. Any preserved state that wasn't there at the beginning would mean that the process wasn't fully reversed.

Looking at the paper is communicating with the simulation. It may be one-way communication, but that is enough.

I'm suggesting that the person running the simulation knows the state of the simulation at all times. If this bothers you, pretend everything is being done digitally, on a classical computer, with exponential slowdown.

Such a calculation can be done reversibly without ever passing information into the system.

[-][anonymous]10y40

What do you mean by "knows the state of the simulation"? What is the point of this exercise?

Yes, the machine running the simulation knows the current state of the simulation at any given point (ignoring fully homomorphic encryption). It must, however, forget this intermediate state when the computation is reversed, including any copies/checkpoints it has. Otherwise we're not talking about a reversible process. Do we agree on this point?

My original post was:

Giving answers is an irreversible operation. The whole "is a fully reversible computer conscious?" thing doesn't really make sense to me -- for the computer to actually have an effect on the world requires irreversible outputs. So I have trouble imagining scenarios where my expectations are different but the entire process remains reversible...

How does your setup of a simulated person performing mathematics, then being forgotten as the simulation is run backwards, address this concern?

I disagree that "giving answers is an irreversible operation". My setup explicitly doesn't "forget" the calculation (the calculation being simulating someone proving the Riemann hypothesis, and us extracting that proof from the simulation), and my setup is explicitly reversible (because we have the full density matrix of the system at all times, and can in principle perform unitary time evolution backwards from the final state if we wanted to).

Nothing is ever being forgotten. I'm not sure where that came from, because I've never claimed that anything is being forgotten at any step. I'm not sure why you're insisting that things be forgotten to satisfy reversibility, either.

I would like to know that as well, because I think there is an effect if it is conscious; making it fully reversible, I think, denies it a certain consciousness.

[-][anonymous]10y20

That's what Scott's blog is about :)

But writing the proof and reading it is communication.

"Reading it" is akin to "having perfect information about the full density matrix of the system". You don't have to perturb the system to get information out of it.

Language Test: "the language test" is my shorthand for the Heideggerian idea of language as a proof of consciousness.

Reversibility: I don't think that kind of reversibility is possible while also maintaining consciousness.

Vaidman Brain: Then that invalidates the idea if you remove the tricksiness. I would of course remain in a certain state of consciousness the entire time.

How is a simulation of a conscious mind, operating behind a "wall" of fully homomorphic encryption for which no one has the key, going to pass this "language test"?

I don't think that kind of reversibility is possible while also maintaining consciousness.

Then you agree with Scott Aaronson on at least one thing.

Then that invalidates the idea if you remove the tricksiness.

What I am trying to understand is what about the Vaidman procedure makes consciousness not be present, in your opinion. What you said before is "based on a specific input and a specific output", but we seem to be agreed that one can have a normal interaction with a normal conscious brain "based on a specific input and a specific output" so that can't be it. So what is the relevant difference, in your opinion?

That is my point: it's not, and therefore it can't pass the language test for consciousness, and I think that's quite the problem.

I think the Vaidman procedure doesn't make consciousness present because the specific input and output being only a yes or no answer makes it no better than the computers we are using right now. I can ask Siri yes-or-no questions and get something out, but we can agree that Siri is an extremely simple kind of consciousness embodied in computer code, built at Apple to work as an assistant in iPhones. If the Vaidman brain were conscious, I should be able to ask it a "question" without definable bounds and get any answer between "42" and "I don't know" or "I cannot answer that." So for example, you can ask me all these questions and I can work to create an answer, as I am now doing, or I could simply say "I don't know" or "my head is a parrot, your post is invalid." The answer would exist as a signpost of my consciousness, although it might be unsatisfying. The Vaidman brain could not work under these conditions because the bounds are set. Any time the bounds are set like that, consciousness is, a priori, impossible.

That is my point [...]

Then I have no idea what you meant by "If you use the language test then yes, an FHE-encrypted sim with a lost key is still conscious".

the specific input and output being only a yes or no answer makes it no better than the computers we are using right now.

If I ask you a question and somehow constrain you only to answer yes or no, that doesn't stop you being conscious as you decide your answer. There's a simulation of your whole brain in there, and it arrives at its yes/no answer by doing whatever your brain usually does to decide. All that's unusual is the context. (But the context is very unusual.)

I would say that no, the FHE-encrypted sim is not, because the most important aspect of language is communication and meaning. The ability to communicate doesn't matter if it cannot have meaning to at least one other person. On the second point we are agreed.