Yet another exceptionally interesting blog post by Scott Aaronson, describing his talk at the Quantum Foundations of a Classical Universe workshop; videos of the talks should be posted soon. Despite the disclaimer "My talk is for entertainment purposes only; it should not be taken seriously by anyone", it raises several serious and semi-serious points about the nature of conscious experience and related paradoxes, which are generally overlooked by philosophers, including Eliezer, because they lack the relevant CS/QC expertise. For example:
- Is an FHE-encrypted sim with a lost key conscious?
- If you "untorture" a reversible simulation, did it happen? What does the untorture feel like?
- Is the Vaidman brain conscious? (You have to read the blog post to learn what it is; I'm not going to spoil it.)
Scott also suggests a model of consciousness which sort-of resolves the issues of cloning, identity and such, by introducing what he calls a "digital abstraction layer" (again, read the blog post to understand what he means by that). Our brains might be lacking such a layer and so be "fundamentally unclonable".
Another interesting observation is that you never actually kill the cat in the Schroedinger's cat experiment, for a reasonable definition of "kill".
There are several more mind-blowing insights in this "entertainment purposes" post/talk, related to the existence of p-zombies, consciousness of Boltzmann brains, the observed large-scale structure of the Universe and the "reality" of Tegmark IV.
I certainly got the humbling experience that Scott is the level above mine, and I would like to know if other people did, too.
Finally, the standard bright dilettante caveat applies: if you think up a quick objection to what an expert in the area argues, and you yourself are not such an expert, the odds are extremely heavy that this objection is either silly or has been considered and addressed by the expert already.
I think the question is how you are going to define consciousness, and how you are going to prove it a priori. If you use the language test, then yes, an FHE-encrypted sim with a lost key is still conscious (see comment below).
If you untorture a reversible simulation, you have to decide how far the reversibility goes and whether any imprint or trauma is left behind. Does the computer feel or experience the reversal as a loss? Can you fully reverse the imprint of torture on consciousness, such that running the simulation backwards undoes it completely, or only partially?
I don't think the Vaidman brain is conscious, because it's based on a specific input and a specific output. I still think John Searle is off on this, though.
What language test? (And, how would a fully-homomorphically-encrypted sim with a lost key be shown to be conscious by anything that requires communicating with it?)
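To make the puzzle concrete: the point of homomorphic encryption is that an outsider can carry out a computation on ciphertexts without ever being able to read the intermediate or final states. Here is a toy sketch (a one-time-pad-style additive scheme, nothing like real FHE, and all names are made up for illustration) showing that the computation genuinely proceeds while remaining opaque to anyone without the key:

```python
# Toy additively-homomorphic scheme (NOT real FHE): anyone can add
# ciphertexts, but reading any value requires the key.
import random

MOD = 2**32

def keygen(n):
    # One random pad per plaintext slot.
    return [random.randrange(MOD) for _ in range(n)]

def encrypt(key, xs):
    return [(x + k) % MOD for x, k in zip(xs, key)]

def hom_sum(cts):
    # A keyless party can compute this sum; the result stays encrypted.
    return sum(cts) % MOD

def decrypt_sum(key, ct):
    # Decrypting the homomorphic sum requires the sum of the pads.
    return (ct - sum(key)) % MOD

key = keygen(3)
cts = encrypt(key, [10, 20, 12])
total_ct = hom_sum(cts)       # "the sim runs" -- on ciphertexts only
assert decrypt_sum(key, total_ct) == 42
# Discard `key`, and total_ct (indeed every state of the computation)
# is information-theoretically indistinguishable from random noise.
```

So a language test would have to be run *inside* the encryption: with the key lost, no one outside can ever extract an answer from the sim, which is exactly what makes the question interesting.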
The sort of reversibility Scott Aaronson is talking about goes all the way: after reversal, the thing in question is in exactly the same state it was in before. No memory, no trauma, no imprint, nothing.
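A minimal sketch of that point, with a made-up toy state and update rule: if every step of the simulation is exactly invertible, then applying the inverses in reverse order restores the initial state bit for bit, so by construction no imprint can remain:

```python
# Toy reversible simulation: run forward, then perfectly undo it.

def step(state):
    # An invertible update on a tuple of ints: rotate and XOR.
    a, b, c = state
    return (b, c, a ^ b)

def unstep(state):
    # Exact inverse of step.
    b, c, a_xor_b = state
    return (a_xor_b ^ b, b, c)

initial = (3, 5, 7)
s = initial
steps = 1000
for _ in range(steps):
    s = step(s)        # the simulated episode runs forward...
for _ in range(steps):
    s = unstep(s)      # ...and is then exactly reversed
assert s == initial    # bit-for-bit the pre-episode state: no trace
```

Whether "what it felt like" to be the simulation during the forward run still counts as having happened is, of course, the philosophical question the reversal itself cannot answer.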