Manfred comments on an ethical puzzle about brain emulation - Less Wrong Discussion
By the time we get to E, to a neutral observer it's just as likely that we're writing the state of a happy brain as a sad one. See the waterfall argument: since we can map the motion of a waterfall to different computations, a waterfall encodes every possible brain at once.
This probably reflects something about a simplicity or pattern-matching criterion in how we make ethical judgments.
Yes. I agree with that. The problem is that the same argument goes through for D -- no real computationally-limited observer can distinguish an encryption of a happy brain from the encryption of a brain in pain. But they are really different: with high probability there's no possible encryption key under which we have a happy brain. (Edited original to clarify this.)
And to make it worse, there's a continuum between C and D as we shrink the size of the key; computationally-limited observers can gradually tell that it's a brain-in-pain.
And there's a continuum from D to E as we increase the size of the key - a one-time pad is basically a key the size of the data. The bigger the key, the more possible brains an encrypted data set maps onto, and at some point it becomes quite likely that a happy brain is also contained within the possible brains.
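The one-time-pad point can be made concrete with a toy sketch in Python (the "happy" and "pain" byte strings here are invented stand-ins for brain states): given any fixed ciphertext, there is a key that decrypts it to the pain state and another key that decrypts it to the happy state, so a key the size of the data pins down nothing at all.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR; serves as both one-time-pad encrypt and decrypt."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical stand-ins for two brain states of equal length.
pain_state = b"brain in pain"
happy_state = b"a happy brain"

# Encrypt the pain state with a random key the size of the data.
key = os.urandom(len(pain_state))
ciphertext = xor(pain_state, key)

# The very same ciphertext also "contains" the happy brain:
# the key that reveals it is just ciphertext XOR happy_state.
key2 = xor(ciphertext, happy_state)

assert xor(ciphertext, key) == pain_state
assert xor(ciphertext, key2) == happy_state
```

Nothing about the ciphertext privileges one decryption over the other; only the choice of key does.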
But anyhow, I'd start caring less as early as B (for Nozick's Experience Machine reasons) - since my caring is on a continuum, it doesn't even raise any edge-case issues that the reality is on a continuum as well.
So it is a brain in pain. The complexity of the key just hides the fact.
Except "it" refers to the combination of the key and the "random" bits, not to either alone. Both the bits and the key contain information about the mind; deleting either the pseudorandom bits or the key deletes the mind.
If you delete only the key, then there is a continuum of how thoroughly you've deleted the mind, depending on how feasible it is to recover the key. How much information was lost? How easy is it to recover? As the key becomes more complex, more and more of the information that makes it a mind rather than a random computation resides in the key.
In the case where only one possible key in the space of keys leads to a mind, we haven't actually lost any information about the mind by deleting the key - doing a search through the space of all keys will eventually lead us to find the correct one.
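That search is easy to sketch. A minimal toy version in Python, assuming the mind-state is a short byte string and using a crude printable-ASCII check as a stand-in for "recognizing a mind" (several keys may pass such a weak test, but the true key is guaranteed to be among them):

```python
from itertools import product

def xor(a: bytes, b: bytes) -> bytes:
    """XOR data against a short repeating key."""
    return bytes(x ^ y for x, y in zip(a, b * (len(a) // len(b) + 1)))

# Toy stand-in for the mind-state, plus a crude recognizer that
# "pins down" mind-like decryptions (printable ASCII only).
plaintext = b"happy brain"
def looks_like_a_mind(data: bytes) -> bool:
    return all(32 <= c < 127 for c in data)

# Encrypt with a deliberately tiny 2-byte repeating key.
secret_key = b"\x5a\xc3"
ciphertext = xor(plaintext, secret_key)

# Deleting the key loses nothing irrecoverable: exhaustively try
# all 2**16 possible keys and keep those that decrypt to something
# mind-like.  The correct key is always in the surviving set.
candidates = [
    bytes(k) for k in product(range(256), repeat=2)
    if looks_like_a_mind(xor(ciphertext, bytes(k)))
]
assert secret_key in candidates
```

With a 2-byte key the search is 65,536 trials; each added key byte multiplies the cost by 256, which is the continuum from C to D the parent comment describes.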
I think the moral dimension lies in whatever pins down a mind from the space of possible computations.
Can't find it. Link?
Also, this is a strange coincidence...my roommate and I once talked about the exact same scenario, and I also used the example of a "rock, waterfall, or other object" to illustrate this point.
My friend concluded that the ethically relevant portion of the computation was in the mapping and the waterfall, not simply in the waterfall itself, and I agree. It's the specific mapping that pins down the mind out of all the other possible computations you might map to.
So in asr's case, the "torture" is occurring with respect to the random bits and the encryption used to turn them into sensible bits. If you erase either one, you kill the mind.
A search on LW turns up this: http://lesswrong.com/lw/9nn/waterfall_ethics/ I'm pretty sure the original example is due to John Searle, I just can't find it.
On pages 208-210 of The Rediscovery of the Mind, Searle writes: