Yet another exceptionally interesting blog post by Scott Aaronson, describing his talk at the Quantum Foundations of a Classical Universe workshop; videos of the talks should be posted soon. Despite the disclaimer "My talk is for entertainment purposes only; it should not be taken seriously by anyone", it raises several serious and semi-serious points about the nature of conscious experience and related paradoxes, points generally overlooked by philosophers, Eliezer included, for lack of the relevant CS/QC expertise. For example:
- Is an FHE-encrypted sim with a lost key conscious?
- If you "untorture" a reversible simulation, did the torture ever happen? What does the untorture feel like from the inside?
- Is a Vaidman brain conscious? (You will have to read the blog post to learn what that is; I'm not going to spoil it.)
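The reversible-simulation puzzle above can be made concrete with a toy sketch: every step of a reversible computation has an exact inverse, so the whole run can be uncomputed bit-for-bit, leaving no trace that it ever happened. This is purely my own illustration, not anything from Scott's talk; the Feistel-style round function and all names are made up.

```python
# Toy reversible "simulation": each step has an exact inverse, so the
# entire run can be undone, as in the "untorture" thought experiment.

def round_fn(x: int) -> int:
    """Arbitrary (non-invertible) mixing function; reversibility comes
    from how it is used in the Feistel step, not from this function."""
    return (x * 2654435761 + 12345) & 0xFFFFFFFF

def step(a: int, b: int) -> tuple[int, int]:
    """One reversible (Feistel-style) update of the two-word state."""
    return b, a ^ round_fn(b)

def unstep(a: int, b: int) -> tuple[int, int]:
    """Exact inverse of step: unstep(*step(a, b)) == (a, b)."""
    return b ^ round_fn(a), a

initial = (0xDEADBEEF, 0x01234567)
state = initial
for _ in range(1000):      # run the "simulation" forward
    state = step(*state)
for _ in range(1000):      # then uncompute every step
    state = unstep(*state)

print(state == initial)    # True: the run leaves no trace at all
```

The puzzle, of course, is whether anything was experienced during the forward half of a run that is later erased so thoroughly.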
Scott also suggests a model of consciousness which sort of resolves the puzzles of cloning, identity and the like, by introducing what he calls a "digital abstraction layer" (again, read the blog post to understand what he means by that). Our brains might lack such a layer and so be "fundamentally unclonable".
Another interesting observation: for any reasonable definition of "kill", you never actually kill the cat in the Schrödinger's cat experiment.
There are several more mind-blowing insights in this "entertainment purposes" post/talk, related to the existence of p-zombies, consciousness of Boltzmann brains, the observed large-scale structure of the Universe and the "reality" of Tegmark IV.
I certainly came away with the humbling sense that Scott operates a level above mine, and I would like to know whether other people felt the same.
Finally, the standard bright-dilettante caveat applies: if you think up a quick objection to what an expert in the area argues, and you yourself are not such an expert, the odds are overwhelming that the objection is either silly or has already been considered and addressed by the expert.
Feel free to elaborate, here or there.
Added some elaboration to the parent comment. I just feel that a simplicity-based prior might solve many problems that otherwise seem mysterious:
1. I'm not a Boltzmann brain, because locating a Boltzmann brain takes many more bits than deriving my brain from the laws of physics.
2. A mind running under homomorphic encryption is conscious, and its measure depends inverse-exponentially on the size of the decryption key.
3. Multiple or larger computers running the same program contain more consciousness than one small computer, because they take fewer bits to locate.
4. The early universe had low entropy because it had a short description.
And so on.
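The "takes more bits to locate" intuition can be sketched numerically by weighting each hypothesis by 2^(-description length). True Kolmogorov complexity is uncomputable, so this toy sketch (my own illustration, with made-up names and data) uses zlib-compressed size as a crude, computable proxy:

```python
# Toy numerical sketch of a simplicity prior: weight each "world" by
# 2^(-description length), using compressed size in bits as a rough
# upper bound on description length.
import random
import zlib

def description_bits(s: bytes) -> int:
    """Crude upper bound on description length: compressed size in bits."""
    return 8 * len(zlib.compress(s, 9))

def weight(s: bytes) -> float:
    """Simplicity-prior weight 2^(-description length)."""
    return 2.0 ** -description_bits(s)

random.seed(0)
lawful = bytes(i % 7 for i in range(10_000))                  # highly regular "world"
noise = bytes(random.randrange(256) for _ in range(10_000))   # incompressible "world"

# The lawful world has a far shorter description, so the prior
# favors it over the random one by an astronomically large factor.
print(description_bits(lawful) < description_bits(noise))  # True
```

On this picture, a Boltzmann brain is like the noise string: specifying where in a thermal sea it sits costs so many bits that its prior weight is negligible next to a brain derivable from compact physical laws.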