I've been thinking about ethics and brain emulations for a while, and I have now realized I am confused. Here are five scenarios. I am pretty sure the first is morally problematic, and pretty sure the last is completely innocuous. But I can't find a clean way to partition the intermediate cases.

 

A) We grab John Smith off the street, scan his brain, torture him, and then by some means, restore him to a mental and physical state as though the torture never happened.

 

B) We scan John Smith's brain, and then run a detailed simulation of the brain being tortured for ten seconds, over and over again. If we attached appropriate hardware to the appropriate simulated neurons, we would hear the simulation screaming.

 

C) We store, on disk, each timestep of the simulation in scenario B. Then we sequentially load each timestep into memory, and overwrite it. 

 

D) The same as C, except that each timestep is encrypted with a secure symmetric cipher, say, AES. The key used for encryption has been lost. (Edit: The key length is much smaller than the size of the stored state and there's only one possible valid decryption.)

 

E) The same as D, except we have encrypted each timestep with a one time pad.

 

I take for granted that scenario A is bad: one oughtn't be inflicting pain, even if there's no permanent record or consequence of the pain.  And I can't think of any moral reason to distinguish a supercomputer simulation of a brain from the traditional implementation made of neurons and synapses. So that says that B should be equally immoral.

 

Scenario C is just B with an implementation tweak -- instead of _calculating_ each subsequent step, we're just playing it back from storage. The simulated brain has the same sequence of states as in B and the same outputs.
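To make the B/C distinction concrete, here is a minimal sketch (Python); `step` is a trivial stand-in of my own invention for the neuron-level update rule, not anything resembling a real emulator:

```python
def step(state):
    # Stand-in for the detailed neuron-level update rule of scenario B.
    return (state * 31 + 7) % 1000

def scenario_b(initial_state, n_steps):
    """Compute each timestep from the previous one, recording them all."""
    state, history = initial_state, []
    for _ in range(n_steps):
        state = step(state)      # each state is *calculated* from its predecessor
        history.append(state)    # ...and written out, which is what scenario C's setup requires
    return history

def scenario_c(history):
    """Replay: load each stored timestep into memory and overwrite it."""
    buffer = None
    for state in history:        # the same sequence of states, the same outputs,
        buffer = state           # but nothing is computed; each state is just read back
    return buffer

scenario_c(scenario_b(initial_state=42, n_steps=10))
```

The question is whether anything morally relevant distinguishes the loop in `scenario_b` from the loop in `scenario_c`.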

 

Scenario D is just C with a different data format.  

 

Scenario E is just D with a different encryption.

 

Now here I am confused. Scenario E is just repeatedly writing random bytes to memory. This cannot possibly have any moral significance!  D and E are indistinguishable to any practical algorithm. (By definition, secure encryption produces bytes that "look random" to any adversary that doesn't know the key). 
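For concreteness, here is a minimal sketch of D versus E. The post only specifies "AES" and "one-time pad"; the CTR mode, the Python `cryptography` package, and the 1 KB buffer standing in for a 120TB timestep are my assumptions for illustration:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

state = os.urandom(1024)                  # stand-in for one serialized brain-state timestep

# Scenario D: AES-256 in CTR mode; the 32-byte key is far smaller than the state.
key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
scenario_d = encryptor.update(state) + encryptor.finalize()

# Scenario E: one-time pad; the key is exactly as large as the state.
pad = os.urandom(len(state))
scenario_e = bytes(a ^ b for a, b in zip(state, pad))

# Once key, nonce, and pad are discarded, both outputs pass any practical randomness
# test. The difference is that D's short key, if ever recovered, pins down a unique
# plaintext, while for E every plaintext of the right length remains equally possible.
```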

 

Either the torture in case A is actually not immoral, or some pair of adjacent scenarios is morally distinct. Neither option seems appealing. I don't see a simple, clean way to resolve the paradox here. Thoughts?

 

As an aside: scenarios C, D, and E aren't as far beyond current technology as you might expect. Wikipedia tells me that the brain has ~120 trillion synapses. Most of the storage cost will be the per-timestep data, not the underlying topology. If we need one byte per synapse per timestep, that's 120TB/timestep. If we have a timestep every millisecond, that's 120 PB/second. That's a lot of data, but it's not unthinkably beyond what's commercially available today. So this isn't a Chinese-Room case where the premise can't possibly be realized physically.
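The arithmetic behind that estimate, spelled out (decimal units; the one-byte-per-synapse figure is, as above, just my assumption):

```python
synapses    = 120e12    # ~120 trillion synapses (the Wikipedia figure above)
bytes_per   = 1         # assumed storage per synapse per timestep
steps_per_s = 1000      # one timestep per millisecond

per_step   = synapses * bytes_per        # 1.2e14 bytes ~ 120 TB per timestep
per_second = per_step * steps_per_s      # 1.2e17 bytes ~ 120 PB per second
print(f"{per_step / 1e12:.0f} TB/timestep, {per_second / 1e15:.0f} PB/second")
```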

 

 

[-]TsviBT

Even in C, D, and E, the torture was still computed at some point. This will still feel like torture. It just happened before the part of the scenario you stated, where you load up the pre-computed torture. That part doesn't seem bad. See The Generalized Anti-Zombie Principle.

Suppose we discovered a gigantic truth-table in space such that we could point a telescope at some part of it and discover the result of S(x,t) where x is the initial state of a brain and t is the amount of time S simulates the brain being tortured. Is John Smith tortured if we point the telescope at the location where S(John Smith, 10 minutes) is found? Now suppose that instead there are several truth tables, one for each major region of John Smith's brain and having enough inputs and outputs such that we can look up the results of torturing parts of John Smith's brain and match them together at appropriately small intervals to give us the same output as S(x,t). Is John Smith tortured by using this method? What about truth tables for neuron groups or a truth table for a sufficiently generic individual neuron? How about a truth table that gives us the four results of AND NOT for two boolean variables, and we have to interpret S as a logical circuit to look up the results using many, many iterations?
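A toy version of the whole-brain truth table, to fix what "pointing the telescope at S(x,t)" means mechanically; the integer state and update rule are obvious stand-ins of my own:

```python
def S(x, t):
    """Stand-in dynamics: evolve a 'brain' with integer state x for t steps."""
    for _ in range(t):
        x = (x * 31 + 7) % 1000
    return x

# The 'gigantic truth table': every (x, t) entry written down in advance.
# Note that building it required actually running the dynamics once per entry.
TABLE = {(x, t): S(x, t) for x in range(1000) for t in range(20)}

# 'Pointing the telescope': no dynamics are run, the answer is simply looked up.
result = TABLE[(42, 10)]
```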

Is John Smith tortured just by the existence of a particular level of truth table (perhaps the one for S(x,t) ) if no one computed it? If so, does it matter if someone re-computes that truth table since John Smith is tortured by it anyway? If John Smith would only be tortured by computing the tables then suppose P=NP and we can somehow calculate the truth tables directly without all the intermediate calculations. Would that torture John Smith or does the act of each individual computation contribute to the experience?

For the largest truth table that causes the lookup procedure to torture John Smith, does it matter how many times the lookup is done? For instance, if looking at S(John Smith,t) tortures John Smith, does it matter how many times we look or does John Smith experience each of the S(John Smith,t) only once? The smallest truth table that allows torture-free lookups corresponds to the post's C - E options of doing unlimited lookups without increasing torture.

Is there a level of truth tables that doesn't torture John Smith and won't cause his torture if someone looks up the results of S(x,t) using those tables? This seems the most unlikely, but perhaps there is a way to compute truth tables where a generic person is tortured which still returns accurate S(x,t) without torturing John Smith.

I don't have good answers to all those questions but I think that "doing computation = torture" is too simple an answer. I am on the fence about mathematical realism and that has a large impact on what the truth tables would mean. If mathematical realism is true then the truth tables already exist and correspond to real experience. If it's false then it's just a convenient thought experiment to determine where, how, and when (and how often) experience actually occurs: If truth tables at the whole brain or large brain region level constitute torture then I would assume that the experience happens once when (or if) the tables are generated and that multiple lookups probably don't cause further experience. Once neural groups or neuron lookups are being used to run a simulation I think some experience probably exists each time. By the time everything is computed I think it's almost certainly causing experience during each simulation. But suppose we find the mechanism of conscious awareness and it's possible to figure out what a conscious person feels while being tortured using truth tables for conscious thoughts and a full simulation of the rest of their brain. Is that as bad as physically torturing them? I don't think so, but it would probably still be morally wrong.

If you've read Nick Bostrom's paper on Unification vs. Duplication I think I find myself somewhere in the middle; using truth tables to find the result of a simulation seems a lot like Unification while direct computation fits with Duplication.

For my own part, I'm pretty confident labeling as "torturing John Smith" any process that computes all and only the states of John Smith's brain during torture, regardless of how those states are represented and stored, and regardless of how the computation is performed.

More generally (since I have no idea why we're using torture in this example and I find it distasteful to keep doing so) I'm pretty confident saying that any process that computes all and only the states of John Smith's brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.

I certainly agree that if we describe the computation as being performed "off-camera" (by whatever unimaginable process created it), or being performed by a combination of that ineffable process and manual lookups, or distract attention from the process altogether, our intuitions are led to conclude that X is not experienced... for example, that Searle's Chinese Room is not actually experiencing the human-level Chinese conversations it's involved in.

But I don't find that such intuitions are stable under reflection.

Is John Smith tortured just by the existence of a particular level of truth table (perhaps the one for S(x,t) ) if no one computed it?

Wait, what? You mean, if the states are somehow brought into existence ex nihilo without any process having computed them? I have no idea. I'm not sure the question makes sense.

I think what I want to say about such things is that moral judgments are about actions and events. In this scenario I have no idea what action is being performed and no idea what event occurred, so I don't know how to make a moral judgment about it.

If so, does it matter if someone re-computes that truth table since John Smith is tortured by it anyway?

Well, as above, I'm pretty confident that re-computing the table causes John to experience X (in addition to causing there to have been a John to experience it). I'm not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.

That said, at the moment I'm inclined to say that all the computations have equivalent moral status, and their moral statuses add in the ordinary way for two discrete events, whatever that is.

perhaps there is a way to compute truth tables where a generic person is tortured which still returns accurate S(x,t) without torturing John Smith

If A is sufficiently similar to B, performing a process P on A will allow me to sufficiently accurately predict the results of P(B) without actually performing P(B)... sure. And sure, perhaps there's a way to do this if A is a "generic person" (whatever that means) and B is John Smith.

[-]asr

More generally (since I have no idea why we're using torture in this example and I find it distasteful to keep doing so) I'm pretty confident saying that any process that computes all and only the states of John Smith's brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.

I picked the torture example because I'm not sure what "John experiences X" really means, once you taboo all the confusing terms about personal identity and consciousness -- but I think the moral question is a "territory" question, not a "map" question.

The "all states and only the states of the brain" part confuses me. Suppose we do time-slicing; the computer takes turns simulating John and simulating Richard. That can't be a moral distinction. I suspect it will take some very careful phrasing to find a definition for "all states and only those states" that isn't obviously wrong.

Well, as above, I'm pretty confident that re-computing the table causes John to experience X (in addition to causing there to have been a John to experience it). I'm not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.

Yah. After thinking about this for a couple of days the only firm conclusion I have is that moral intuition doesn't work in these cases. I have a slight worry that thinking too hard about these sorts of hypotheticals will damage my moral intuition for the real-world cases -- but I don't think this is anything more than a baby basilisk at most.

I picked the torture example because I'm not sure what "John experiences X" really means, once you taboo all the confusing terms about personal identity and consciousness -- but I think the moral question is a "territory" question, not a "map" question.

I don't quite understand this. If a given event is not an example of John experiencing torture, then how is the moral status of John experiencing torture relevant?

The "all states and only the states of the brain" part confuses me.

I wasn't trying to argue that if this condition is not met, then there is no moral difficulty, I was just trying to narrow my initial claim to one I could make with confidence.

If I remove the "and only" clause I open myself up to a wide range of rabbit holes that confuse my intuitions, such as "we generate the GLUT of all possible future experiences John might have, including both torture and a wildly wonderful life".

the only firm conclusion I have is that moral intuition doesn't work in these cases.

IME moral intuitions do work in these cases, but they conflict, so it becomes necessary to think carefully about tradeoffs and boundary conditions to come up with a more precise and consistent formulation of those intuitions. That said, changing the intuitions themselves is certainly simpler, but has obvious difficulties.

More generally (since I have no idea why we're using torture in this example and I find it distasteful to keep doing so) I'm pretty confident saying that any process that computes all and only the states of John Smith's brain during some experience X involves John experiencing X, regardless of how those states are represented and stored, and regardless of how the computation is performed.

I was primarily interested in whether there is a continuum of experience ranging from full physical simulation to reading values from disk or a lookup/truth table, or if there is a hard line between the shortest program that computes John Smith's brain states over time and the shortest program that reads the pre-existing history of John Smith's brain states into memory, with all other programs falling on either side of that line. Agreed regarding torture.

I'm not confident what I want to say about the moral implications of identical recomputations of an event that has a certain moral character. My intuitions conflict.

Suppose that recomputations do not cause additional experience. In that case the waterfall argument is basically true if any computation causes experience regardless of how the states are represented or stored; all possible representations can be mapped to a single computation and therefore all possible experience happens. If recomputations do cause additional experience then how much additional experience occurs for varying complexity of representation and computation?

If A is sufficiently similar to B, performing a process P on A will allow me to sufficiently accurately predict the results of P(B) without actually performing P(B)... sure. And sure, perhaps there's a way to do this if A is a "generic person" (whatever that means) and B is John Smith.

By a generic person I mean a person who, for whatever reason, is lacking much of what we would consider identity. No name, no definite loved ones, no clear memories of moments in their life. A human person with recognizable emotional and intellectual and physical responses but without much else. Dementia patients might be a close analogue.

If such a generic person experiences joy or sadness then I think that it is real experience, and I care about it morally. However, if that model of a generic person was used to look up the reaction that I would have to similar experiences, I am not convinced that "I" would experience the same joy or sadness, at least not to the same extent as the generic person did. This has implications if an AGI is going to upload us and (presumably) try to simulate us as efficiently as possible. If it aggressively memoizes its computations of our brain states such that eventually nearly all human activity is reduced to the equivalent of truth-table lookups then I am not sure if that would be as morally desirable as computing an accurate physical simulation of everyone, even given the increased number of awesome-person-years possible with increased efficiency.

Others have argued that it doesn't matter how "thick" neurons are or how much redundant computation is done to simulate humans, but I haven't yet run across a moral examination of dramatically thinning out neurons or brain regions or simplifying computations by abstracting the details of physical behavior away almost entirely while still simulating accurately. The standard argument for neuron replacement goes something like "if you replace all the neurons in your brain with fully functional simulacrums, you will not notice the difference" but what I am conceiving of is "if you replace all the neurons in your brain with lookup tables, do you notice?"

So, I'm sorry, but I've read this comment several times and I simply don't follow your train of thought here. There are pieces here I agree with, and pieces I disagree with, but I don't understand how they connect to each other or to what they purport to respond to, and I don't know how to begin responding to it.

So it's probably best to leave the discussion here.

Could you explain the relevance of the GAZP? I'm not sure I'm following.

Also, would it be fair to characterize your argument as saying that C, D, and E are bad only because they include B as a prerequisite, and that the additional steps beyond just B are innocuous?

I think the relevance of the GAZP was supposed to be reasoning along the lines of:
1) Either (A1) consciousness is solely the result of brain-states being computed, or (A2) it involves some kind of epiphenomenal property.
2) The GAZP precludes epiphenomenal properties being responsible for consciousness.
3) Therefore A1.

The difficulty with this reasoning, of course, is that there's a huge excluded middle between A1 and A2.

C, D, and E are bad only because they include B

For my own part I would not quite agree with this, though it's close.
I would agree that if a scenario includes (B,C,D,E) the vast bulk of the badness in that scenario is on account of B.

There might be some badness that follows from (C,D,E) alone... I certainly have a strong intuitive aversion to them, and while I suspect that that preference would not be stable under reflection I'm not strongly confident of that.

I would say that, by the time you get to C, there probably isn't any problem anymore. You're not actually computing the torture; or, rather, you already did that.

Scenario C is actually this:

You scan John Smith's brain, run a detailed simulation of his being tortured while streaming the intermediate stages to disk, and then stream the disk state back to memory (for no good reason).

There is torture there, to be sure; it's in the "detailed simulation" step. I find it hard to believe that streaming, without doing any serious computation, is sufficient to produce consciousness. Scenario D and E are the same. Now, if you manage to construct scenario B in a homomorphic encryption system, then I'd have to admit to some real uncertainty.

I find it hard to believe that streaming, without doing any serious computation, is sufficient to produce consciousness.

That's the key observation here, I think. There's a good case to be made that scenario B has consciousness. But does scenario C have it? It's not so obvious anymore.

Now, if you manage to construct scenario B in a homomorphic encryption system, then I'd have to admit to some real uncertainty.

I don't think that's different even if we threw away the private key before beginning the simulation. It's akin to sending spaceships beyond the observable edge of the universe or otherwise hiding parts of reality from ourselves. In fact, I think it may be beneficial to live in a homomorphically encrypted environment that is essentially immune to outside manipulation. It could be made to either work flawlessly or acquire near-maximum entropy at every time step with very high probability and with nearly as much measure in the "works flawlessly" region as a traditional simulation.

I've been thinking a lot about this issue (and the broader issue that this is a special case of) recently. My two cents:

Under most views, this isn't just an ethical problem. It can be reformulated as a problem about what we ought to expect. Suppose you are John Smith. Do you anticipate different experiences depending on how far down the sequence your enemies will go? This makes the problem more problematic, because while there is nothing wrong with valuing a system less and less as it gets less and less biological and more and more encrypted, there is something strange about thinking that a system is less and less... of a contributor to your expectations about the future? Perhaps this could be made to make sense, but it would take a bit of work. Alternatively, we could reject the notion of expectations and use some different model entirely. This "kicking away the ladder" approach raises worries of its own though.

I think the problem generalizes even further, actually. Like others have said, this is basically one facet of an issue that includes terms like "dust theory" and "computationalism."

Personally, I'm starting to seriously doubt the computationalist theory of mind I've held since high school. Not sure what else to believe though.

[-]asr

Yes. I picked the ethical formulation as a way to make clear that this isn't just a terminological problem.

I like the framing in terms of expectation.

And I agree that this line of thought makes me skeptical about the computationalist theory of mind. The conventional formulations of computation seem to abstract away enough stuff about identity that you just can't hang a theory of mind and future expectation on what's left.

I think that arguments like this are a good reason to doubt computationalism. That means accepting that two systems performing the same computations can have different experiences, even though they behave in exactly the same way. But we already should have suspected this: it's just like the inverted spectrum problem, where you and I both call the same flower "red," but the subjective experience I have is what you would call "green" if you had it. We know that most computations even in our brains are not accompanied by conscious perceptual experience, so it shouldn't be surprising if we can make a system that does whatever we want, but does it unconsciously.

Could the relevant moral change happen going from B to C, perhaps? i.e. maybe a mind needs to actually be physically/causally computed in order to experience things. Then the torture would have occurred whenever John's mind was first simulated, but not for subsequent "replays," where you're just reloading data.

Check out "Counterfactuals Can't Count" for a response to this. Basically, if a recording is different in what it experiences than running a computation, then two computations that calculate the same thing in the same way, but one has bits of code that never run, experience things differently.

[-]asr

The reference is a good one -- thanks! But I don't quite understand the rest of your comments. Can you rephrase more clearly?

Sorry, I was just trying to paraphrase the paper in one sentence. The point of the paper is that there is something wrong with computationalism. It attempts to prove that two systems with the same sequence of computational states must have different conscious experiences. It does this by taking a robot brain that calculates the same way as a conscious human brain, and transforms it, always using computationally equivalent steps, to a system that is computationally equivalent to a digital clock. This means that either we accept that a clock is at every moment experiencing everything that can be experienced, or that something is wrong with computationalism. If we take the second option, it means that two systems with the exact same behavior and computational structure can have different perceptual consciousness.


By the time we get to E, to a neutral observer it's just as likely we're writing the state of a happy brain rather than a sad one. See the waterfall argument, where we can map the motion of a waterfall to different computations, and thus a waterfall encodes every possible brain at once.

This probably reflects something about a simplicity or pattern-matching criterion in how we make ethical judgments.

[-]asr

Yes. I agree with that. The problem is that the same argument goes through for D -- no real computationally-limited observer can distinguish an encryption of a happy brain from the encryption of a brain in pain. But they are really different: with high probability there's no possible encryption key under which we have a happy brain. (Edited original to clarify this.)

And to make it worse, there's a continuum between C and D as we shrink the size of the key; computationally-limited observers can gradually tell that it's a brain-in-pain.

And there's a continuum from D to E as we increase the size of the key - a one-time pad is basically a key the size of the data. The bigger the key, the more possible brains an encrypted data set maps onto, and at some point it becomes quite likely that a happy brain is also contained within the possible brains.
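A toy illustration of that continuum (the four-byte "state" and the word CALM are obviously just stand-ins): with a one-byte XOR key there are only 256 candidate plaintexts, so the original state is still effectively pinned down; with a pad as long as the data, every plaintext of the right length, happy or otherwise, is equally consistent with the ciphertext.

```python
import os

data = b"PAIN"                                        # toy stand-in for a stored brain state

# Tiny key: only 256 candidate decryptions exist, and (for real data) essentially
# only one of them will look like a valid brain state.
ct_short = bytes(b ^ 0x5A for b in data)
candidates = {bytes(b ^ k for b in ct_short) for k in range(256)}

# One-time pad: for *any* target plaintext there is a pad that decrypts to it.
pad = os.urandom(len(data))
ct_otp = bytes(a ^ b for a, b in zip(data, pad))
target = b"CALM"
pad_for_target = bytes(a ^ b for a, b in zip(ct_otp, target))
assert bytes(a ^ b for a, b in zip(ct_otp, pad_for_target)) == target
```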

But anyhow, I'd start caring less as early as B (for Nozick's Experience Machine reasons) - since my caring is on a continuum, it doesn't even raise any edge-case issues that the reality is on a continuum as well.

And to make it worse, there's a continuum between C and D as we shrink the size of the key; computationally-limited observers can gradually tell that it's a brain-in-pain.

So it is a brain in pain. The complexity of the key just hides the fact.

Except "it" refers to the key and the "random" bits...not just the random bits, and not just the key. Both the bits and the key contain information about the mind. Deleting either the pseudo random bits or the key deletes the mind.

If you only delete the key, then there is a continuum of how much you've deleted the mind, as a function of how possible it is to recover the key. How much information was lost? How easy is it to recover? As the key becomes more complex, more and more of the information which makes it a mind rather than a random computation is in the key.

But they are really different: with high probability there's no possible encryption key under which we have a happy brain.

In the case where only one possible key in the space of keys leads to a mind, we haven't actually lost any information about the mind by deleting the key - doing a search through the space of all keys will eventually lead us to find the correct one.

I think the moral dimension lies in stuff that pin down a mind from the space of possible computations.

See the waterfall argument

Can't find it. Link?

Also, this is a strange coincidence...my roommate and I once talked about the exact same scenario, and I also used the example of a "rock, waterfall, or other object" to illustrate this point.

My friend concluded that the ethically relevant portion of the computation was in the mapping and the waterfall, not simply in the waterfall itself, and I agree. It's the specific mapping that pins down the mind out of all the other possible computations you might map to.

So in asr's case, the "torture" is occurring with respect to the random bits and the encryption used to turn them into sensible bits. If you erase either one, you kill the mind.

A search on LW turns up this: http://lesswrong.com/lw/9nn/waterfall_ethics/ I'm pretty sure the original example is due to John Searle, I just can't find it.

On page 208-210 of The Rediscovery of the Mind, Searle writes:

On the standard textbook definition of computation, it is hard to see how to avoid the following results:

  1. For any object there is some description of that object such that under that description the object is a digital computer.

  2. For any program and for any sufficiently complex object, there is some description of the object under which it is implementing the program. Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar, then if it is a big enough wall it is implementing any program, including any program implemented in the brain. [...]

I do not think that the problem of universal realizability is a serious one. I think it is possible to block the result of universal realizability by tightening up our definition of computation. Certainly we ought to respect the fact that programmers and engineers regard it as a quirk of Turing's original definitions and not as a real feature of computation. Unpublished works by Brian Smith, Vinod Goel, and John Batali all suggest that a more realistic definition of computation will emphasize such features as the causal relations among program states, programmability and controllability of the mechanism, and situatedness in the real world. All these will produce the result that the pattern is not enough. There must be a causal structure sufficient to warrant counterfactuals. But these further restrictions on the definition of computation are no help in the present discussion because the really deep problem is that syntax is essentially an observer-relative notion. The multiple realizability of computationally equivalent processes in different physical media is not just a sign that the processes are abstract, but that they are not intrinsic to the system at all. They depend on an interpretation from outside. We were looking for some facts of the matter that would make brain processes computational; but given the way we have defined computation, there never could be any such facts of the matter. We can't, on the one hand, say that anything is a digital computer if we can assign a syntax to it, and then suppose there is a factual question intrinsic to its physical operation whether or not a natural system such as the brain is a digital computer.

And if the word "syntax" seems puzzling, the same point can be stated without it. That is, someone might claim that the notions of "syntax" and "symbols" are just a manner of speaking and that what we are really interested in is the existence of systems with discrete physical phenomena and state transitions between them. On this view, we don't really need 0's and 1's; they are just a convenient shorthand. But, I believe, this move is no help. A physical state of a system is a computational state only relative to the assignment to that state of some computational role, function, or interpretation. The same problem arises without 0's and 1's because notions such as computation, algorithm, and program do not name intrinsic physical features of systems. Computational states are not discovered within the physics, they are assigned to the physics.

Yeah, add me to the "If anything morally interesting is happening here, it is happening when the mind-states are computed. Copying those computed-and-recorded states into various media, including 'into memory,' doesn't have moral significance."

More generally: any interesting computational properties of a system are interesting only during computation; the stored results of those computations lack those interesting properties.

[-]asr

More generally: any interesting computational properties of a system are interesting only during computation; the stored results of those computations lack those interesting properties.

So what happens if I do a mix? The computer can, at each step, choose randomly between reading a cached copy of brain-state t, and computing state(t) based on state(t-1). No outside observer can tell which option the machine chose at each step, and the internal states are ALSO the same. You can also imagine caching parts of the brain-state at every step, and recomputing other parts.

In any simulation, "compute state(t)" and "read a cached copy of state(t)" can blur into each other. And this is a problem philosophically, because they blur into each other in ways that don't have externally-visible consequences. And this means we'll be drawing moral distinctions based on an implementation choice with no physical consequences -- and that seems like a problem, from a consequentialist point of view.
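A minimal sketch of that mixed implementation (again with a trivial stand-in `step` rule of my own): at every timestep a coin flip decides between recomputing and reading the cache, and no test on the resulting states can reveal which branch was taken.

```python
import random

def step(state):
    return (state * 31 + 7) % 1000          # stand-in update rule

# Precompute and cache every state once (this is where scenario B happens).
initial, n_steps = 42, 100
cache, state = [initial], initial
for _ in range(n_steps):
    state = step(state)
    cache.append(state)

# Mixed replay: at each step, randomly either recompute or load the cached copy.
state = initial
for t in range(1, n_steps + 1):
    if random.random() < 0.5:
        state = step(state)                  # computed from the previous state
    else:
        state = cache[t]                     # read back from storage
    assert state == cache[t]                 # the resulting states are identical either way
```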

because they blur into each other in ways that don't have externally-visible consequences.

Not true; look at polarix's top-level comment.

A generalization of that idea is that torture represents, at minimum, a causal chain with the torturer as the cause and the victim as the effect. Therefore changing some parameter of the torturer should result in some change of parameter of the victim. But if you're just loading frames from memory, that does not occur. The causal chain is broken.

OK.

So a system S is computed in such a way that some interesting computational property (C) arises, and all the interim S-states are cached. I then execute a process P that at every step might be recomputing S or might be looking up the cached S-state, in such a way that no outside observer can tell the difference via any conceivable test. Yes?

So, sure, P might or might not cause C to arise, and we have no way of telling which is the case.

I'm not quite sure why this is particularly a problem for moral consequentialism. If C arising is a consequence we prefer to avoid, executing P in the first place is morally problematic in the same way that playing Russian Roulette is... it creates the possibility of a future state we prefer to avoid. And creating the cached S-states was definitely an act we prefer to avoid, and therefore immoral on moral-consequentialist grounds.

[-][anonymous]

I can make a physical argument for this: if I can have subjective experiences while my computation is frozen, why do all available external observations of my subjective experience process (ie: my life) seem to show that I require time and calories in order to experience things?

My instinct is that each mental state only happens once. If you redo the calculations, he doesn't experience it twice. It's similar to the idea that doubling the size of each neuron doesn't seem like it would change anything.

On the other hand, the Born rule suggests that the universes involved having twice the value makes them four times as likely to be experienced. That suggests that running a one-second calculation ten times would be ten times worse than just running a ten-second calculation.

In any case, I would definitely say that C isn't as bad as A. Moving written values around is not the same as running calculations.

Upvoted for the construction of this hierarchy. As the other comments show, this can be made finer, probably arbitrarily fine. And it shows that ethics is, in the end, a vague concept standing on the aggregate of our complex values. You can't find a hard border because there is no simple value to observe changing.

[-][anonymous]

What if we fully homomorphically encrypt John Smith's brain simulation before we start torturing it? We can even throw away our copy of the private keys so that no one, ever, outside the simulation will be able to see the results of the torture even though we can calculate it perfectly for as long as we want.

Does encrypted John Smith have qualia? If so, then isn't the waterfall argument true? We could choose initial keys with enough entropy such that every possible state of the simulation has a corresponding key that would decrypt it. Our physics simulation just reads the encrypted state of every particle/wave in the simulation, applies the results of physics, writes the encrypted results back and repeats. Our choice of decryption key effectively picks a random universe whose evolution is fully determined by the single series of homomorphically encrypted computations, and it's no longer the computation that's creating specific qualia, but our choice of key revealing it. Another interpretation is that a single series of computations is producing all possible qualia. This seems to be a very strong case for the waterfall argument being true; the physical evolution of the universe is producing states that can be interpreted in many different ways, and the interpretation is what matters to us.

In complexity terms, fully homomorphic encryption is expensive but polynomial in the size of the key, and simulations are polynomial in the number of elements being simulated. If N is the number of elements and M is the number of bits necessary to represent the state of one element, then the key would be NM bits. Current fully homomorphic ciphers encrypt/decrypt one bit at a time, meaning that each bit of the encrypted state would require NM bits of storage, and computations are implemented as boolean circuits using homomorphic operations on encrypted bits. Calculating one step of the simulation would require O( ( N + C(N) ) H(N M) ) time and O( ( N + C(N) ) (N M) ) space, where C(N) is the boolean circuit size of a physics simulation for N elements and H(x) is the cost of homomorphic encryption operations on x bits. C(N) is polynomial if the simulation is polynomial in N. H(x) is polynomial. The result is polynomial, and far smaller than the 2^(NM) universes that could be decrypted. That implies either that John Smith and his exponentially many parallel selves have qualia and we can create exponential qualia with only polynomial work, or that encrypted John Smith does not have qualia. If only some of the John Smiths have qualia, that would imply the existence of philosophical zombies. If we can create exponential qualia from polynomial work then I think the waterfall argument is basically true.

Nick Bostrom's paper on Duplication vs. Unification claims that we should accept Duplication because otherwise we lose the ability to compare the morality of different actions. If the waterfall argument is true then we probably have to accept Unification. There may still be room for some kind of moral optimization under Unification by finding the simplest perspective/mapping such that we can observe the most good given our computational constraints, but if not then we are left with being morally neutral in cases A-E.

[This comment is no longer endorsed by its author]

A is not bad, because torturing a person and then restoring their initial state has precisely the same consequences as forging your own memory of torturing a person and restoring their initial state.

[-]asr

From a virtue-ethics point of view, it seems reasonable to judge anybody who would do this.

Good people would not want to remember committing torture, even if they didn't do it, because this would result in their future selves being wracked with unearned guilt, which is doing an injustice to that future self.

Put the other way: Anybody who would want to falsely remember being a torturer would probably be the sort of person who enjoys the idea of being a torturer -- which is to say, a bad person.

It seems the disconnect is between B & C for most people.

But why is the generative simulation (B) not morally equivalent to the replay simulation (C)?

Perhaps because the failure modes are different. Imagine the case of a system sensitive to cosmic rays. In the replay simulation, the Everett bundle is locally stable; isolated blips are largely irrelevant. When each frame causally determines the subsequent steps, the system exhibits a very different signature.

IMO, even E is problematic: where did the torture-information come from in the first place?

Reminds me of the error -- on a charitable reading, of the characters, but perhaps of the author -- in "Permutation City". There's no such thing as out-of-order simulation.

It all comes down to the "nature of consciousness", which isn't fully understood yet, even though we have a lot of insight.

In my view, the "nature of consciousness" lies somewhere in the computing, in the dynamics, in the currents and neurotransmitters flowing through the brain (or its simulation), more than in the actual configuration of neurons. So scenarios A and B are unethical, because the process is there, so the consciousness is there, but scenarios C to E aren't, because there is no one to feel it.

Even without uploading, if using nanotech you synthesize many different sets of brains, each representing one "step" of a person being tortured, but with each brain frozen in time (kept at extremely low temperature, or accelerated to near the speed of light, or whatever), there is no one feeling the torture.

[-]Shmi

I take for granted that scenario A is bad: one oughtn't be inflicting pain, even if there's no permanent record or consequence of the pain.

Is this deontologically bad, virtue ethics bad or consequentialism bad? Why?

[-]asr

Yes. I think all three, but deliberately phrased things to avoid committing to a meta-ethical framework.

The virtue-ethics and deontological bits are straightforward: "good people don't hurt people without a very good reason", and "don't hurt people [without a good reason]."

The utilitarian case depends on how you define utility. But most utilitarians I talk to believe that suffering is bad per se, even if it doesn't have long-term consequences. It's wrong to abuse somebody who's dying, even if they'll be dead soon anyway.


Just for reference, this has been discussed on Less Wrong before in my post Waterfall ethics.

[-]asr

And I see I even commented on it, but hadn't remembered the discussion. Thanks for the backpointer.

Though I notice that the discussion then and now seems somewhat divergent. I feel like I learned something here that I don't get reading the previous discussion.

[-][anonymous]

Perhaps a way to consider the question would be "You're John Smith. If told in advance about this, before it happens, how much do you pay to avoid having A happen? How much do you pay to avoid having B happen? C, D, E, etc."

Of course, a great deal of this depends on context and not everyone has the same answers. Some people might say "But John Smith doesn't even know anything happened, and experiences no physical or mental consequences so he can't be a victim in any usual sense of the word. If I was John Smith I wouldn't pay a penny to avoid even A."

Which I guess brings up the question of 'Does anyone else know and how much do they know, and what does John Smith know about them?'

I mean, there are at least three potential levels of attention I'm envisioning:

1: Unknown entities are doing A-E and they never ask us about it and we can't tell when it happens in the slightest.

2: John Smith's buddy Bob says "John, I think unknown entities may have done something from A-E to you last night, but I'm not sure what. I haven't told anyone else, but I thought you should know, since they did it to you."

3: John Smith finds out anyone can download John_Smith_Pain.Sim online and that 1 million people have done so in the 24 hours since it happened, even though John Smith is personally unaware of anything that happened in John_Smith_Pain.Sim.

It seems reasonable that John Smith would be willing to pay different amounts of money to avoid 1-3 (and other knowledge combinations that aren't listed)

Of course, 3 presumably has mental consequences, which may mean that it was implied to be screened off from A.

The key used for encryption has been lost.

Did you mean the key used for decryption? And could you elaborate on the significance of this?

[-]asr

I was thinking of a symmetric cipher, so the same key would be for both encryption and decryption. The significance of "the key is lost" is that it's no longer feasible to distinguish the resulting data from random bits.

Gotcha, thanks.

We can 'answer' this one from the recent Ethics of brain emulation post:

[...] the nature of the emulation does not matter: if it is cruel [in real life] the same cruel impulse is present in [the simulation]. It is like damaging an effigy: it is the intention behind doing damage that is morally bad, not the damage. Conversely, treating emulations well might be like treating dolls well: it might not be morally obligatory but it's compassionate.

Note: this reasoning depends on your ethics (in this case, Kantian).

Wikipedia tells me that the brain has ~120 trillion synapses. Most of the storage cost will be the per-timestep data, not the underlying topology. If we need one byte per synapse per timestep, that's 120TB/timestep.

I don't think there's any reason to assume that one byte is enough to fully represent the state of a synapse.

[-]asr

I don't think there's any reason to assume that one byte is enough to fully represent the state of a synapse.

I have no particular background in neurology -- my goal was an order-of-magnitude estimate for how much storage it would take to describe a brain accurately enough for simulation. I'd enjoy hearing better numbers if anybody has them.

I have no particular background in neurology -- my goal was an order-of-magnitude estimate for how much storage it would take to describe a brain accurately enough for simulation.

"Accurate enough for simulation" is a tricky term. It depends on the purpose of the simulation. If you just want to simulate what happens in a time window of 1 second, you can ignore changes in myelin sheath thickness. But the resulting system isn't equivalent to a human brain, and it's complicated to argue that it's conscious.

When it comes to long-term learning, neurons are capable of expressing genes in response to neurotransmitters. We don't know the functions of all genes in the human genome.

This is just a special case of a larger question -- "dust theory."