Manfred comments on an ethical puzzle about brain emulation

Post author: asr 13 December 2013 09:53PM


Comment author: Manfred 13 December 2013 10:21:06PM 2 points

By the time we get to E, to a neutral observer it's just as likely that we're writing the state of a happy brain as a sad one. See the waterfall argument: we can map the motion of a waterfall onto arbitrarily many different computations, and thus a waterfall encodes every possible brain at once.

This probably reflects something about a simplicity or pattern-matching criterion in how we make ethical judgments.
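
A minimal sketch of the mapping trick the waterfall argument turns on (all names and states below are invented for illustration): given any sequence of distinct physical states, a lookup table can "decode" them into the trace of whatever computation we choose.

```python
# Sketch of the mapping trick behind the waterfall argument.
# All names and states are made up for illustration.

def make_interpretation(physical_states, target_trace):
    """Build a lookup table sending each physical state to one step
    of the target computation's trace."""
    assert len(physical_states) >= len(target_trace)
    return dict(zip(physical_states, target_trace))

# "Waterfall": arbitrary, meaningless state snapshots.
waterfall = ["splash-17", "eddy-3", "ripple-42"]

# Trace of a computation we claim the waterfall "implements".
adder_trace = [("load", 2, 3), ("add", 5), ("halt", 5)]

decode = make_interpretation(waterfall, adder_trace)
for state in waterfall:
    print(state, "->", decode[state])

# All the computational content lives in `decode`, not in the waterfall:
# pick a different table and the same states "implement" a different
# program -- which is why the mapping, not the water, carries the weight.
```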

Comment author: asr 13 December 2013 10:24:40PM 2 points

Yes. I agree with that. The problem is that the same argument goes through for D -- no real computationally-limited observer can distinguish an encryption of a happy brain from the encryption of a brain in pain. But they are really different: with high probability there's no possible encryption key under which we have a happy brain. (Edited original to clarify this.)

And to make it worse, there's a continuum between C and D as we shrink the size of the key; computationally-limited observers can gradually tell that it's a brain-in-pain.
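
A toy illustration of this counting point, assuming an invented stream cipher and marker string: with a deliberately tiny 16-bit key, brute force over the whole key space is easy, and with overwhelming probability only the true key decrypts to anything brain-like.

```python
# Toy illustration (cipher, key size, and marker string all invented):
# with a key far shorter than the data, almost surely no key decrypts
# a "brain in pain" ciphertext into anything else brain-like.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy stream cipher: expand a short key into n pseudorandom bytes."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"BRAIN-STATE: pain=1 " * 4   # stand-in for an emulated brain
key = (12345).to_bytes(2, "big")          # deliberately tiny 16-bit key
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# Brute-force the whole 16-bit key space -- the search that becomes
# feasible for a computationally limited observer as the key shrinks.
hits = [k for k in range(2**16)
        if b"BRAIN-STATE" in xor(ciphertext,
                                 keystream(k.to_bytes(2, "big"),
                                           len(ciphertext)))]
print(hits)  # with overwhelming probability, just the true key: [12345]
```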

Comment author: Manfred 13 December 2013 11:09:15PM 2 points

And there's a continuum from D to E as we increase the size of the key - a one-time pad is basically a key the size of the data. The bigger the key, the more possible brains an encrypted data set maps onto, and at some point it becomes quite likely that a happy brain is also contained within the possible brains.
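
A minimal sketch of the one-time-pad fact (the byte strings are placeholders): for any ciphertext and any equal-length plaintext, the connecting key always exists, so the ciphertext alone pins down nothing.

```python
# The standard one-time-pad fact: for a key as long as the data, every
# equal-length plaintext is reachable from any given ciphertext, because
# the required key always exists (key = ciphertext XOR plaintext).
# Byte strings below are placeholders.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

ciphertext = bytes(range(16))        # any fixed ciphertext whatsoever

sad_brain = b"brain: pain=1 :("
happy_brain = b"brain: pain=0 :)"

key_sad = xor(ciphertext, sad_brain)
key_happy = xor(ciphertext, happy_brain)

assert xor(ciphertext, key_sad) == sad_brain
assert xor(ciphertext, key_happy) == happy_brain

# Taken alone, the ciphertext pins down nothing: it "contains" the sad
# brain, the happy brain, and every other 16-byte pattern at once.
```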

But anyhow, I'd start caring less as early as B (for Nozick's Experience Machine reasons). Since my caring is already on a continuum, it raises no edge-case issues that the reality is on a continuum as well.

Comment author: Ishaan 13 December 2013 11:20:01PM 0 points

> And to make it worse, there's a continuum between C and D as we shrink the size of the key; computationally-limited observers can gradually tell that it's a brain-in-pain.

So it is a brain in pain. The complexity of the key just hides the fact.

Except "it" refers to the key and the "random" bits...not just the random bits, and not just the key. Both the bits and the key contain information about the mind. Deleting either the pseudo random bits or the key deletes the mind.

If you only delete the key, then there is a continuum of how much of the mind you've deleted, as a function of how feasible it is to recover the key. How much information was lost? How easy is it to recover? As the key becomes more complex, more and more of the information that makes it a mind rather than a random computation resides in the key.

> But they are really different: with high probability there's no possible encryption key under which we have a happy brain.

In the case where only one key in the space of keys leads to a mind, we haven't actually lost any information about the mind by deleting the key: a search through the space of all keys will eventually find the correct one.
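
A compact sketch of that claim, with a hypothetical mind-recognizer standing in for the hard part: if exactly one key decrypts to something mind-like, exhaustive search recovers it, so deleting the key destroyed no information; it only made the information expensive to extract.

```python
# Sketch of the recovery claim (recognizer and key size hypothetical):
# if exactly one key decrypts to something mind-like, deleting the key
# loses no information -- exhaustive search gets it back, at a cost
# that grows as 2 ** (key bits).
from itertools import product

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def looks_like_a_mind(plaintext: bytes) -> bool:
    # Stand-in recognizer; a real one would check for brain structure.
    return plaintext.startswith(b"MIND")

secret_key = b"\x2a\x07"                           # later "deleted"
data = b"MIND: happy"
ciphertext = xor(data, (secret_key * 6)[:len(data)])

recovered = [bytes(k) for k in product(range(256), repeat=2)
             if looks_like_a_mind(xor(ciphertext,
                                      (bytes(k) * 6)[:len(ciphertext)]))]
print(recovered)  # [b'*\x07']: the deleted key, hence the mind, is back
```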

I think the moral dimension lies in whatever pins a mind down out of the space of possible computations.

Comment author: Ishaan 13 December 2013 10:52:22PM 0 points

> See the waterfall argument

Can't find it. Link?

Also, this is a strange coincidence: my roommate and I once talked about the exact same scenario, and I also used the example of a "rock, waterfall, or other object" to illustrate this point.

My friend concluded that the ethically relevant portion of the computation was in the mapping and the waterfall, not simply in the waterfall itself, and I agree. It's the specific mapping that pins down the mind out of all the other possible computations you might map to.

So in asr's case, the "torture" is occurring with respect to the random bits and the encryption used to turn them into sensible bits. If you erase either one, you kill the mind.

Comment author: Manfred 13 December 2013 10:56:53PM 0 points

A search on LW turns up this: http://lesswrong.com/lw/9nn/waterfall_ethics/ I'm pretty sure the original example is due to John Searle; I just can't find it.

Comment author: Kaj_Sotala 14 December 2013 11:09:07AM 3 points

On pages 208-210 of The Rediscovery of the Mind, Searle writes:

> On the standard textbook definition of computation, it is hard to see how to avoid the following results:
>
>   1. For any object there is some description of that object such that under that description the object is a digital computer.
>
>   2. For any program and for any sufficiently complex object, there is some description of the object under which it is implementing the program. Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar, then if it is a big enough wall it is implementing any program, including any program implemented in the brain. [...]
>
> I do not think that the problem of universal realizability is a serious one. I think it is possible to block the result of universal realizability by tightening up our definition of computation. Certainly we ought to respect the fact that programmers and engineers regard it as a quirk of Turing's original definitions and not as a real feature of computation. Unpublished works by Brian Smith, Vinod Goel, and John Batali all suggest that a more realistic definition of computation will emphasize such features as the causal relations among program states, programmability and controllability of the mechanism, and situatedness in the real world. All these will produce the result that the pattern is not enough. There must be a causal structure sufficient to warrant counterfactuals. But these further restrictions on the definition of computation are no help in the present discussion because the really deep problem is that syntax is essentially an observer-relative notion. The multiple realizability of computationally equivalent processes in different physical media is not just a sign that the processes are abstract, but that they are not intrinsic to the system at all. They depend on an interpretation from outside. We were looking for some facts of the matter that would make brain processes computational; but given the way we have defined computation, there never could be any such facts of the matter. We can't, on the one hand, say that anything is a digital computer if we can assign a syntax to it, and then suppose there is a factual question intrinsic to its physical operation whether or not a natural system such as the brain is a digital computer.
>
> And if the word "syntax" seems puzzling, the same point can be stated without it. That is, someone might claim that the notions of "syntax" and "symbols" are just a manner of speaking and that what we are really interested in is the existence of systems with discrete physical phenomena and state transitions between them. On this view, we don't really need 0's and 1's; they are just a convenient shorthand. But, I believe, this move is no help. A physical state of a system is a computational state only relative to the assignment to that state of some computational role, function, or interpretation. The same problem arises without 0's and 1's because notions such as computation, algorithm, and program do not name intrinsic physical features of systems. Computational states are not discovered within the physics, they are assigned to the physics.