
The ethics of randomized computation in the multiverse

Post author: lukeprog 22 November 2011 04:31PM 8 points

From David Deutsch's The Beginning of Infinity:

Take a powerful computer and set each bit randomly to 0 or 1 using a quantum randomizer. (That means that 0 and 1 occur in histories of equal measure.) At that point all possible contents of the computer’s memory exist in the multiverse. So there are necessarily histories present in which the computer contains an AI program – indeed, all possible AI programs in all possible states, up to the size that the computer’s memory can hold. Some of them are fairly accurate representations of you, living in a virtual-reality environment crudely resembling your actual environment. (Present-day computers do not have enough memory to simulate a realistic environment accurately, but, as I said in Chapter 7, I am sure that they have more than enough to simulate a person.) There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?

I'm not so sure we have the computing power to "simulate a person," but suppose we did. (Perhaps we will soon.) How would you respond to this worry?
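To put rough numbers on the worry, here is a sketch with assumed figures (n and k below are invented, not claims about any real machine): each of the 2^n memory configurations gets a branch of measure 2^-n, so everything turns on how many configurations encode a suffering program.

```python
# Back-of-the-envelope sketch; n and k are invented numbers, not claims about real hardware.
n = 10**12   # bits randomized in the computer's memory
k = 500      # assumed minimum number of bits needed to encode a suffering mind

log2_measure_per_history = -n        # each memory configuration sits in a branch of measure 2**-n
log2_suffering_histories = n - k     # at most 2**(n - k) configurations encode such a program
log2_combined_measure = log2_suffering_histories + log2_measure_per_history

print(log2_combined_measure)  # -500, i.e. a combined measure of at most 2**-500
```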

Comments (36)

Comment author: [deleted] 22 November 2011 05:25:41PM 18 points [-]

Shouldn't I be equally worried about stirring my coffee?

Comment author: [deleted] 22 November 2011 08:34:10PM *  4 points [-]

That depends on whether you think stirring your coffee increases the probability of a Boltzmann brain coming into existence by a greater factor than randomizing the computer's memory does.

I'm sure the coffee contains enough atoms for a Boltzmann brain to form. However, the entropy of the coffee is already high from your point of view before you stir the coffee, i.e. the probability of the coffee containing a Boltzmann brain is changed little by your stirring it.

It also depends to some extent on the size of the computer memory in question. We can infer that it is vast, since it is capable of simulating a human. However, is it just sufficient to do that or much bigger still?

Comment author: lessdazed 22 November 2011 06:47:21PM 4 points [-]

You monster!

Comment author: Louie 24 November 2011 06:11:16PM 0 points [-]

I'm willing to consider that this quantum computer could be a novel situation that demands real consideration.

The vector space of your coffee is basically flat whereas the vector space of the algorithm Deutsch is describing is unimaginably vast. The fact that both are embedded in quantum physics is somewhat beside the point.

Comment author: Baughn 22 November 2011 05:02:19PM 7 points [-]

The measure of the branches where the computer contains any such program, as opposed to total nonsense, is so small as to be ignorable. There's no point in worrying about it, because it practically doesn't happen.

Comment author: hairyfigment 24 November 2011 07:36:43AM 0 points [-]

Maybe I'm confused here. For background, I thought that even in MWI some 'worlds' might not have conscious observers. Normally we can comfort ourselves with the thought that extremely low-amplitude configurations (like those in which ravenous pink teddy-bears spontaneously destroy all that we hold dear) might not cause anyone pain because they might lack the ability to support consciousness. (Obviously I'm ignoring Tegmark IV here.)

But surely every configuration of ones and zeros in the computer has equal amplitude. That would mean that if we 'observe' each bit, the world we then live in has the same amplitude as each of the horribly-suffering-simulations. On what grounds can we say that the latter don't happen?

Comment author: Baughn 24 November 2011 07:19:00PM 0 points [-]

In this construction every configuration of ones and zeros has equal amplitude, yes. However, most of them are nonsensical; the sum of the measures of meaningful worlds is very, very close to zero.

Meanwhile, the sum of measures in this scenario where you exist is, well, 1.

That you see each of the nonsensical numbers with equally low probability doesn't matter. If you roll a d1000 and get 687, the chance of that was the same as the chance of rolling a 1; you still wouldn't expect to get 1. In the same way, you wouldn't expect to get any particular configuration, but you're effectively summing over all the nonsensical ones, and that sum is pretty close to 1.
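A toy numeric version of the same point (a sketch with made-up numbers; the set of "meaningful" configurations is purely hypothetical): each configuration is equally improbable, but nearly all the probability mass sits in the nonsense bucket.

```python
import random

random.seed(0)

# Toy model: a 30-bit "memory", where we arbitrarily declare a handful of
# configurations (the set MEANINGFUL) to be the only non-nonsense contents.
N_BITS = 30
MEANINGFUL = {0, 1, 2, 3}        # hypothetical "meaningful" configurations

p_each = 2 ** -N_BITS            # every configuration is equally improbable...
p_meaningful = len(MEANINGFUL) * p_each
p_nonsense = 1 - p_meaningful    # ...but the nonsense bucket holds nearly all the mass

print(p_each)        # ~9.3e-10
print(p_meaningful)  # ~3.7e-09
print(p_nonsense)    # ~0.9999999963

# Sampling configurations almost never lands in the meaningful set, just as a
# d1000 almost never lands on any one face you name in advance.
samples = [random.getrandbits(N_BITS) for _ in range(100_000)]
print(sum(s in MEANINGFUL for s in samples))  # almost certainly 0
```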

Comment author: hairyfigment 28 November 2011 07:09:09AM 0 points [-]

The part I don't get is why we should care if we observe the person suffering or not.

This conversation is confusing me; possibly this comment will help us understand each other.

Comment author: Baughn 28 November 2011 08:20:31PM *  0 points [-]

Does it help if I say I completely agree with Manfred?

Not all people have the same "degree of existence" (warning: I don't understand what this really is!).

You may gain an improved intuition for what's going on if you read about Mangled Worlds. It may not be true, but it's the best one yet.

Comment author: Manfred 22 November 2011 10:06:35PM 0 points [-]

More specifically, I'm pretty sure we humans don't have any negative parts of our utility function that grow exponentially with "badness," so there's no bad outcome that can overcome the exponential decrease in probability with program size to actually be a significant factor.

Comment author: hairyfigment 28 November 2011 07:04:42AM 0 points [-]

Are you going with Torture v Dust Specks here? Or do you just reject Many Worlds? (Or have I missed something?)

It seems to this layman that using quantum randomization would give us no increase or a tiny increase in utility per world, relative to overwriting each bit with 0 or a piece of Lorem Ipsum. And as with Dust Specks, if we actually know we might have prevented torture then I'd get a warm feeling, which should count towards the total.

Comment author: Manfred 28 November 2011 07:32:08AM *  0 points [-]

Are you going with Torture v Dust Specks here? Or do you just reject Many Worlds?

Neither is relevant in this case. My claim is that it's not worth spending even a second of time, even a teensy bit of thought, on changing which kind of randomization you use.

Why? Exponential functions drop off really, really quickly. Really quickly. The proportion of random bit strings that, when booted up, are minds in horrible agony drops roughly as the exponential of the complexity of the idea "minds in horrible agony." It would look approximately like 2^-(complexity).

To turn this exponentially small chance into something I'd care about, we'd need the consequence to be of exponential magnitude. But it's not. It's just a regular number like 1 billion dollars or so. That's 2^30. It's nothing. You aren't going to write a computer program that detects minds in horrible agony using 30 bits. You aren't going to write one with 500 bits, either (a proportion of roughly one part in 10^151). It's simply not worth worrying about things that are worth less than 10^-140 cents.
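Spelling out the arithmetic behind those figures (my own check, using the same assumed numbers):

```python
from math import log10

# Checking the orders of magnitude used above.
print(2 ** 30)                # 1073741824, so "a billion dollars" is roughly 2**30 dollars
print(500 * log10(2))         # ~150.5, so a 2**-500 chance is roughly 10**-151
print((30 - 500) * log10(2))  # ~-141.5: a billion-dollar harm times 2**-500 is ~10**-141 dollars
```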

Comment author: hairyfigment 28 November 2011 07:54:04AM 0 points [-]

I'm saying I don't understand what you're measuring. Does a world with a suffering simulation exist, given the OP's scenario, or not?

If it does, then the proliferation of other worlds doesn't matter unless they contain something that might offset the pain. If they're morally neutral they can number Aleph-1 and it won't make any difference.

Comment author: Manfred 28 November 2011 09:35:10AM *  0 points [-]

Decision-making in many-worlds is exactly identical to ordinary decision-making. You weight the utility of possible outcomes by their measure, and add them up into an expected utility. The bad stuff in one of those outcomes only feels more important when you phrase it in terms of many-worlds, because a certainty of small bad stuff often feels worse than a chance of big bad stuff, even when the expected utility is the same.
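A minimal sketch of that bookkeeping, with invented measures and utilities:

```python
# Hypothetical branches as (measure, utility) pairs; the numbers are invented.
branches = [
    (0.999999, 0.0),         # nearly all the measure: nothing notable happens
    (0.000001, -1_000_000),  # a tiny-measure branch containing something very bad
]

# Many-worlds decision-making: weight each outcome's utility by its measure and sum.
expected_utility = sum(measure * utility for measure, utility in branches)
print(expected_utility)  # -1.0, the same number ordinary probabilistic reasoning gives
```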

Comment author: Vladimir_Nesov 22 November 2011 11:57:19PM 0 points [-]

The more competent AIs will be conquering the universe, so it's the value of the universe being optimized in each of the possible ways that's playing against the low measure.

Comment author: Nisan 23 November 2011 12:52:12AM *  0 points [-]

If that's what we're worried about, then we might as well ask whether it's risky to randomly program a classical computer and then run it.

Comment author: Vladimir_Nesov 23 November 2011 01:07:43AM 0 points [-]

My argument is about utility, even though the probability is low. On the other hand, with enough computational power a sufficiently clever evolutionary dynamic might well blow up the universe.

Comment author: cousin_it 22 November 2011 11:21:31PM *  3 points [-]

If the MWI is correct, then our reality already does something similar: there's always a very low but nonzero chance of a quantum fluctuation that will flip your brain into a suffering state. If you don't worry about that, you probably shouldn't worry about the computer.

Comment author: Vladimir_Nesov 23 November 2011 12:43:16AM *  4 points [-]

You have control over what happens with the computer, and the measure of consequences is immensely greater with the computer, even if very low in both cases.

Comment author: cousin_it 23 November 2011 12:52:27AM *  0 points [-]

the measure of consequences is immensely greater with the computer

Why? It seems to me that the reverse might well be true. Measure of random unhappiness inside the computer depends on the number of bits in a brain. Measure of random unhappiness in reality (given that humans already exist) depends on the number of bits in a "diff" between a happy brain and an unhappy one, which is probably smaller.

ETA: this comment is wrong because neurons in reality are macroscopic, so you need a lot of correlated quantum randomness to flip one of them. Please disregard.

Comment author: Vladimir_Nesov 23 November 2011 01:04:23AM 1 point [-]

I'm assuming that the expected value of running the computer is dominated by the universe-optimizing AGIs it generates, which would have much better conditions for bootstrapping from a well-defined program in a fully functional computer than if they had to do it Boltzmann-brain-style.

Comment author: cousin_it 23 November 2011 02:25:30AM *  0 points [-]

Our world already contains many computers that are subject to quantum fluctuations. Some of them even use quantum noise random number generators, so you just need a small glitch to accidentally execute that data, thus creating all the universe-optimizing AGIs you can imagine.

Comment author: Vladimir_Nesov 23 November 2011 02:33:06AM 1 point [-]

It's still less probable, and still not under your control.

Comment author: DanielLC 24 November 2011 02:04:28AM 0 points [-]

so you need a lot of correlated quantum randomness to flip one of them.

If it happens by quantum vibrations that's true, but our brains aren't perfect, and the state they go into is somewhat random. There is a reasonable chance of becoming depressed, to the point that it's actually happened in this universe many times over.

Comment author: DanielLC 24 November 2011 02:03:07AM -1 points [-]

But I also have control over stuff with a high probability. I can donate to a good charity and have a high probability of taking someone out of a suffering state.

Comment author: steven0461 22 November 2011 09:15:08PM 3 points [-]

as I said in Chapter 7, I am sure that they have more than enough to simulate a person

What's his argument, if any?

Comment author: steven0461 22 November 2011 09:08:30PM *  3 points [-]
Comment author: XiXiDu 22 November 2011 05:26:11PM 0 points [-]

Reminds me of this.

Comment author: DavidPlumpton 23 November 2011 04:45:25AM 1 point [-]

Free the Everett Branches!

Comment author: [deleted] 23 November 2011 06:06:44PM 0 points [-]

This seems complicated. Here's what I've worked out after about an hour of thinking about it:

If we are considering this from a many-worlds perspective, then am I correct that I have to multiply all entities? As an example, there isn't 1 person considering the switch, there are A people. There isn't 1 computer, there are B computers. In essence, there are A people deciding for B computers with all states, representing C simulations that may or may not be experiencing suffering, and based on my decision, there will either be D suffering (on) or E suffering (off).

Now, if my primary goal is to minimize suffering, then I should pick the smaller of D and E. If D=E, then my decision is irrelevant for my primary goal.

So the real question is, is D=E, or is D!=E?

The initial problem seems to assume that there will be less suffering with it off. But it doesn't actually lay out an argument for the size of D and E.

It seems like the sizes of D and E are an important consideration. My current understanding of the relevant math is:

1: There is a difference between 1 quadrillion units of suffering and 999 trillion units of suffering. Pick 999 trillion, it's smaller.

2: There is not a difference between the infinity of the natural numbers and the infinity of the natural numbers-1 trillion. Your choice doesn't matter.

3: There is a difference between the infinity of the natural numbers and the infinity of the real numbers. Pick the infinity of the natural numbers, it's smaller.

4: There is not a difference between the infinity of the real numbers and the infinity of the real numbers minus the infinity of the natural numbers. Your choice doesn't matter.
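Restating those four claims in standard cardinal arithmetic (my gloss, reading "the infinity of the real numbers" as the cardinality of the continuum):

```latex
% Items 1-4 restated (my gloss):
\begin{align*}
&(1)\quad 999 \times 10^{12} < 10^{15} \\
&(2)\quad |\mathbb{N} \setminus F| = \aleph_0 \ \text{ for any finite } F \subset \mathbb{N} \\
&(3)\quad \aleph_0 < 2^{\aleph_0} = |\mathbb{R}| \\
&(4)\quad |\mathbb{R} \setminus \mathbb{N}| = 2^{\aleph_0} = |\mathbb{R}|
\end{align*}
```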

And despite all of that explanation, I haven't even yet taken into account the possibility of being wrong about my mathematical judgement of the sizes of D and E, or the possibility of being wrong about many worlds (note, not necessarily in general, but about the specifics I use to attempt to calculate D and E).

Does it sound like I'm on the right track for considering this problem?

Comment author: Logos01 23 November 2011 11:32:33AM -2 points [-]

There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?

I'm not so sure we have the computing power to "simulate a person," but suppose we did. (Perhaps we will soon.) How would you respond to this worry?

Pascal's Mugging rang. It wants tree-fiddy.

  1. Assuming we have a sufficiently dense register to provide for a human consciousness within a quantum randomizer's memory bank.

  2. Assuming many-worlds.

Every available mental state would occur infinitely many times, despite each having an infinitesimal likelihood within the device. Those mental states in which the suffering is sufficiently great as to cause the sentience to prefer not existing at all are necessarily a minor portion of the total of those who suffer. Those who neither suffer nor prosper likely also prefer existing, in the main. Those who prosper also overwhelmingly (likely) prefer to exist.

If we were, hypothetically, to allow those entities to vote on whether they should be brought into existence at all, it is my belief that as a group they would vote "yes".

Of course, I'm something of a heretic here at LW in that I do not accept postulate #2. (Note: I do not accept the "Copenhagen Interpretation" either.)

Comment author: ArisKatsaris 23 November 2011 01:59:42PM *  0 points [-]
  1. Assuming we have sufficiently dense register as to provide for a human consciousness within a quantum randomizer's memory bank.
  2. Assuming many-worlds.

Also:
3. Assuming simulations of people are people.

Comment author: Logos01 23 November 2011 02:50:20PM 0 points [-]

I understand "perfect copy" to mean that it is the thing it is a copy of -- functionally and observationally indistinguishable.

Comment author: ArisKatsaris 23 November 2011 04:26:36PM 0 points [-]

I don't see the words "perfect copy" or even just "copy" used anywhere in the article, only simulation and representation. That consciousness can be produced in a traditional silicon computer via an algorithm merely isomorphic to the processes in the human brain is an assumption I don't yet grant.

Comment author: Logos01 23 November 2011 04:51:55PM *  1 point [-]

I don't see the words "perfect copy"

Correct, but I did in item one postulate "a human consciousness".

Is a human consciousness not a person, merely because it is a simulated human consciousness?

That consciousness can be produced in a traditional silicon computer via an algorithm merely isomorphic to the processes in the human brain is an assumption I don't yet grant.

I think you and I are using very different understandings of what postulated item #1 meant.

Comment author: DanielLC 24 November 2011 02:12:58AM 0 points [-]

Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny?

I'm going to go with this one.

If you decide that, for some reason, all that matters is that there is a nonzero probability, then there's nothing you can do to stop it. The amplitude will only be zero at isolated points in configuration space. Move a photon a Planck length to the left, and it will now have non-zero amplitude.