wolfgang proposed a similar example on Scott's blog:
I wonder if we can turn this into a real physics problem:
1) Assume a large-scale quantum computer is possible (thinking deep thoughts, but not really self-conscious as long as its evolution is fully unitary).
2) Assume there is a channel which allows enough photons to escape in such a way as to enable consciousness.
3) However, at the end of this channel we place a mirror – if it is in the consciousness-OFF position the photons are reflected back into the machine and unitarity is restored, but in the consciousness-ON position the photons escape into the de Sitter universe.
4) As you can guess we use a radioactive device to set the mirror into c-ON or c-OFF position with 50% probability.
Will the quantum computer now experience i) a superposition of consciousness and unconsciousness, or ii) will it always have a “normal” conscious experience, or iii) will it have a conscious experience in 50% of the cases?
Scott responded:
I tend to gravitate toward an option that’s not any of the three you listed. Namely: the fact that the system is set up in such a way that we could have restored unitarity, seems like a clue that there’s no consciousness there at all—even if, as it turns out, we don’t restore unitarity.
This answer is consistent with my treatment of other, simpler cases. For example, the view I’m exploring doesn’t assert that, if you make a perfect copy of an AI bot, then your act of copying causes the original to be unconscious. Rather, it says that the fact that you could (consistent with the laws of physics) perfectly copy the bot’s state and thereafter predict all its behavior, is an empirical clue that the bot isn’t conscious—even before you make a copy, and even if you never make a copy.
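wolfgang's mirror setup can be sketched as a toy calculation. This is my own illustration, not from either post: the "computer" is a single qubit in superposition, the escaping "photon" is a second qubit that gets entangled with it, and the c-ON/c-OFF positions correspond to tracing the photon out versus undoing the interaction.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Computer in superposition, photon initialized to |0>.
computer = (ket0 + ket1) / np.sqrt(2)
joint = np.kron(computer, ket0)

# A CNOT entangles the photon with the computer (the photon "records" its state).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
entangled = CNOT @ joint

def purity(rho):
    # Tr(rho^2): 1.0 for a pure state, 0.5 for a maximally mixed qubit.
    return np.trace(rho @ rho).real

# c-ON: the photon escapes into the de Sitter universe, so we trace it out.
rho_joint = np.outer(entangled, entangled)
rho_computer = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(round(purity(rho_computer), 3))   # 0.5 -> mixed state: decoherence

# c-OFF: the mirror sends the photon back and the interaction is undone
# (CNOT is its own inverse), so unitarity is restored.
restored = CNOT @ entangled
rho_restored = np.outer(restored, restored).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(round(purity(rho_restored), 3))   # 1.0 -> pure state: unitarity restored
```

The point of the toy model is only that "photons escaping" and "unitarity restored" are mathematically distinct situations for the computer's reduced state, which is what makes the thought experiment well posed.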
His example is different in a very particular way:
His conscious entity gets to dump photons into de Sitter space directly, and only if the channel is opened. This makes Scott's counter-claim prima facie plausible: if your putative consciousness involves only reversible actions, is it really conscious?
But I specifically drew a line between Alice and Alice's Room, and specified that Alice's normal operations are irreversible, which means they must dump entropy into the Room: she takes in one of its 0 bits and returns something that might be 1 or 0, and if...
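The entropy-dumping move can be made concrete. As a minimal sketch (my own gloss on the Alice/Room setup, not from the original), an irreversible operation is one that maps distinct inputs to the same output; it can be embedded in a reversible one only by handing the erased information to a fresh ancilla bit, here the Room's 0 bit:

```python
def irreversible_and(a, b):
    # Not injective: (0,0), (0,1), (1,0) all map to 0 -- information is erased.
    return a & b

def reversible_and(a, b, room_bit):
    # Toffoli-style embedding: the inputs are kept and the Room's fresh 0 bit
    # absorbs the result, so distinct inputs always give distinct outputs.
    assert room_bit == 0, "Alice consumes one of the Room's 0 bits"
    return (a, b, room_bit ^ (a & b))

# The irreversible map collapses distinct inputs:
assert irreversible_and(0, 1) == irreversible_and(1, 0)

# The reversible embedding keeps them distinct -- but only by returning a bit
# to the Room that may now be 0 or 1, i.e. by dumping entropy into the Room.
assert reversible_and(0, 1, 0) != reversible_and(1, 0, 0)
outputs = {reversible_and(a, b, 0) for a in (0, 1) for b in (0, 1)}
assert len(outputs) == 4  # injective on all four inputs
```

This is the standard reversible-computing trick: Alice's operations stay unitary overall, but only because the Room's supply of 0 bits keeps absorbing the garbage.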
Yet another exceptionally interesting blog post by Scott Aaronson, describing his talk at the Quantum Foundations of a Classical Universe workshop, videos of which should be posted soon. Despite the disclaimer "My talk is for entertainment purposes only; it should not be taken seriously by anyone", it raises several serious and semi-serious points about the nature of conscious experience and related paradoxes, which are generally overlooked by philosophers, including Eliezer, because they lack the relevant CS/QC expertise. For example:
Scott also suggests a model of consciousness which sort-of resolves the issues of cloning, identity and such, by introducing what he calls a "digital abstraction layer" (again, read the blog post to understand what he means by that). Our brains might be lacking such a layer and so be "fundamentally unclonable".
Another interesting observation is that you never actually kill the cat in the Schrödinger's cat experiment, for a reasonable definition of "kill".
There are several more mind-blowing insights in this "entertainment purposes" post/talk, related to the existence of p-zombies, consciousness of Boltzmann brains, the observed large-scale structure of the Universe and the "reality" of Tegmark IV.
I certainly got the humbling sense that Scott is a level above mine, and I would like to know if other people did, too.
Finally, the standard bright-dilettante caveat applies: if you think up a quick objection to what an expert in the area argues, and you yourself are not such an expert, the odds are overwhelming that the objection is either silly or has already been considered and addressed by the expert.