From David Deutsch's The Beginning of Infinity:

Take a powerful computer and set each bit randomly to 0 or 1 using a quantum randomizer. (That means that 0 and 1 occur in histories of equal measure.) At that point all possible contents of the computer’s memory exist in the multiverse. So there are necessarily histories present in which the computer contains an AI program – indeed, all possible AI programs in all possible states, up to the size that the computer’s memory can hold. Some of them are fairly accurate representations of you, living in a virtual-reality environment crudely resembling your actual environment. (Present-day computers do not have enough memory to simulate a realistic environment accurately, but, as I said in Chapter 7, I am sure that they have more than enough to simulate a person.) There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?

I'm not so sure we have the computing power to "simulate a person," but suppose we did. (Perhaps we will soon.) How would you respond to this worry?

[-][anonymous]

Shouldn't I be equally worried about stirring my coffee?

[-][anonymous]

That depends on whether you think that you are increasing the probability of a Boltzmann brain coming into existence by a greater factor when you stir your coffee than when the memory of the computer is randomized.

I'm sure the coffee contains enough atoms for a Boltzmann brain to form. However, the entropy of the coffee is already high from your point of view before you stir the coffee, i.e. the probability of the coffee containing a Boltzmann brain is changed little by your stirring it.

It also depends to some extent on the size of the computer memory in question. We can infer that it is vast, since it is capable of simulating a human. However, is it just sufficient to do that or much bigger still?

You monster!

[-]Louie

I'm willing to consider that this quantum computer could be a novel situation that demands real consideration.

The vector space of your coffee is basically flat whereas the vector space of the algorithm Deutsch is describing is unimaginably vast. The fact that both are embedded in quantum physics is somewhat beside the point.

[-]Baughn

The measure of the branches where the computer contains any such program, as opposed to total nonsense, is so small as to be ignorable. There's no point in worrying about it, because it practically doesn't happen.

Maybe I'm confused here. For background, I thought that even in MWI some 'worlds' might not have conscious observers. Normally we can comfort ourselves with the thought that extremely low-amplitude configurations (like those in which ravenous pink teddy-bears spontaneously destroy all that we hold dear) might not cause anyone pain because they might lack the ability to support consciousness. (Obviously I'm ignoring Tegmark IV here.)

But surely every configuration of ones and zeros in the computer has equal amplitude. That would mean that if we 'observe' each bit, the world we then live in has the same amplitude as each of the horribly-suffering-simulations. On what grounds can we say that the latter don't happen?

In this construction every configuration of ones and zeros has equal amplitude, yes. However, most of them are nonsensical; the sum of the measures of meaningful worlds is very, very close to zero.

Meanwhile, the sum of the measures of the worlds in this scenario where you exist is, well, 1.

That you see each of the nonsensical configurations with equally low probability doesn't matter. If you roll a d1000 and get 687, the chance of that was the same as the chance of rolling a 1; you still wouldn't expect to get a 1. In the same way, you wouldn't expect to get any particular configuration, but you're effectively summing over all the nonsensical ones, and that sum is pretty close to 1.
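To make that concrete, here's a toy calculation; the bit counts are made up purely for illustration:

```python
# Toy illustration: with n random bits, every particular memory configuration
# has the same tiny measure, but almost all of the total measure sits in
# "nonsense" configurations rather than in anything mind-like.
from fractions import Fraction

n_bits = 100   # assumed memory size, purely for illustration
k_bits = 30    # assume "meaningful" configurations are describable in ~30 bits

per_string = Fraction(1, 2**n_bits)           # measure of any single configuration
meaningful = Fraction(2**k_bits, 2**n_bits)   # generous bound: at most 2**k meaningful strings
nonsense = 1 - meaningful

print(float(per_string))   # ~7.9e-31  -- same for every string, sensible or not
print(float(meaningful))   # ~8.5e-22  -- combined measure of all "meaningful" strings
print(float(nonsense))     # ~1.0      -- the nonsense bulk you should actually expect
```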

The part I don't get is why we should care if we observe the person suffering or not.

This conversation is confusing me; possibly this comment will help us understand each other.

Does it help if I say I completely agree with Manfred?

Not all people have the same "degree of existence" (warning: I don't understand what this really is!).

You may gain an improved intuition for what's going on if you read about Mangled Worlds. It may not be true, but it's the best picture yet.

More specifically, I'm pretty sure us humans don't have any negative parts of our utility function that grow exponentially with "badness," so there's no bad outcome that can overcome the exponential decrease in probability with program size to actually be a significant factor.

Are you going with Torture v Dust Specks here? Or do you just reject Many Worlds? (Or have I missed something?)

It seems to this layman that using quantum randomization would give us no increase, or only a tiny increase, in utility per world, relative to overwriting each bit with 0 or a piece of Lorem Ipsum. And as with Dust Specks, if we actually know we might have prevented torture then I'd get a warm feeling which should count towards the total.

Are you going with Torture v Dust Specks here? Or do you just reject Many Worlds?

Neither is relevant in this case. My claim is that it's not worth spending even a second of time, even a teensy bit of thought, on changing which kind of randomization you use.

Why? Exponential functions drop off really, really quickly. Really quickly. The proportion of random bit strings that, when booted up, are minds in horrible agony drops roughly as the exponential of the complexity of the idea "minds in horrible agony." It would look approximately like 2^-(complexity).

To turn this exponentially small chance into something I'd care about, we'd need the consequence to be of exponential magnitude. But it's not. It's just a regular number like 1 billion dollars or so. That's about 2^30. It's nothing. You aren't going to write a computer program that detects minds in horrible agony using 30 bits. You aren't going to write one with 500 bits, either (a concentration of about one part in 10^151). It's simply not worth worrying about things that are worth less than 10^-140 cents.
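For what it's worth, the arithmetic can be checked directly; the 500 bits and the billion dollars are just the illustrative numbers above:

```python
# Checking the arithmetic: an exponentially small proportion times an
# ordinary-sized stake is still a negligible expected value.
complexity_bits = 500                    # illustrative complexity of "minds in horrible agony"
proportion = 2.0 ** -complexity_bits     # ~3.05e-151, about one part in 10**151
stake_dollars = 1e9                      # "a regular number like 1 billion dollars", i.e. ~2**30

expected_cents = proportion * stake_dollars * 100
print(proportion)       # ~3.05e-151
print(expected_cents)   # ~3.05e-140 cents, the order of magnitude quoted above
```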

I'm saying I don't understand what you're measuring. Does a world with a suffering simulation exist, given the OP's scenario, or not?

If it does, then the proliferation of other worlds doesn't matter unless they contain something that might offset the pain. If they're morally neutral they can number Aleph-1 and it won't make any difference.

Decision-making in many-worlds is exactly identical to ordinary decision-making. You weight the utility of possible outcomes by their measure, and add them up into an expected utility. The bad stuff in one of those outcomes only feels more important when you phrase it in terms of many-worlds, because a certainty of small bad stuff often feels worse than a chance of big bad stuff, even when the expected utility is the same.
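A minimal sketch of that bookkeeping, with placeholder numbers rather than anything real:

```python
# Expected utility over branches: weight each outcome's utility by its measure
# and sum, exactly as you would with ordinary probabilities.
outcomes = [
    {"measure": 1.0 - 1e-150, "utility": 0.0},   # nonsense memory contents: nothing of note happens
    {"measure": 1e-150,       "utility": -1e9},  # placeholder: branches containing suffering minds
]

expected_utility = sum(o["measure"] * o["utility"] for o in outcomes)
print(expected_utility)   # ~-1e-141: dominated by the tiny measure, not by the vividness of the bad branch
```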

The more competent AIs will be conquering the universe, so it's the value of the universe being optimized in each of the possible ways that is playing against the low measure.

If that's what we're worried about, then we might as well ask whether it's risky to randomly program a classical computer and then run it.

My argument is about the utility; the probability is low, granted. On the other hand, with enough computational power a sufficiently clever evolutionary dynamic might well blow up the universe.

If the MWI is correct, then our reality already does something similar: there's always a very low but nonzero chance of a quantum fluctuation that will flip your brain into a suffering state. If you don't worry about that, you probably shouldn't worry about the computer.

You have control over what happens with the computer, and the measure of consequences is immensely greater with the computer, even if very low in both cases.

the measure of consequences is immensely greater with the computer

Why? It seems to me that the reverse might well be true. Measure of random unhappiness inside the computer depends on the number of bits in a brain. Measure of random unhappiness in reality (given that humans already exist) depends on the number of bits in a "diff" between a happy brain and an unhappy one, which is probably smaller.

ETA: this comment is wrong because neurons in reality are macroscopic, so you need a lot of correlated quantum randomness to flip one of them. Please disregard.

I'm assuming that the expected value of running the computer is dominated by the universe-optimizing AGIs it generates, which would have much better conditions for bootstrapping from a well-defined program in a fully functional computer than if they had to do it Boltzmann-brain-style.

Our world already contains many computers that are subject to quantum fluctuations. Some of them even use quantum noise random number generators, so you just need a small glitch to accidentally execute that data, thus creating all the universe-optimizing AGIs you can imagine.

It's still less probable, and still not under your control.

so you need a lot of correlated quantum randomness to flip one of them.

If it happens by quantum vibrations that's true, but our brains aren't perfect, and the state they go into is somewhat random. There is a reasonable chance of becoming depressed, to the point that it's actually happened in this universe many times over.

But I also have control over stuff with a high probability. I can donate to a good charity and have a high probability of taking someone out of a suffering state.

as I said in Chapter 7, I am sure that they have more than enough to simulate a person

What's his argument, if any?

Free the Everett Branches!

Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny?

I'm going to go with this one.

If you decided that, for some reason, all that matters is that there is a nonzero probability, then there's nothing you can do to stop it. The amplitude will only be zero at isolated points in configuration space. Move a photon a Planck length to the left, and it will now have non-zero amplitude.

[-][anonymous]

This seems complicated. Here's what I've worked out after about an hour of thinking about it:

If we are considering this from a many-worlds perspective, am I correct in saying that I have to multiply all entities? As an example, there isn't 1 person considering the switch, there are A people. There isn't 1 computer, there are B computers. In essence, there are A people deciding for B computers in all states, representing C simulations that may or may not be experiencing suffering, and based on my decision, there will be either D suffering (on) or E suffering (off).

Now, if my primary goal is to minimize suffering, then I should pick the smaller of D and E. If D=E, then my decision is irrelevant for my primary goal.

So the real question is, is D=E, or is D!=E?

The initial problem seems to assume that there will be less suffering with it off. But it doesn't actually lay out an argument for the size of D and E.

It seems like the sizes of D and E are an important consideration. My current understanding of the relevant mathematics is:

1: There is a difference between 1 quadrillion units of suffering and 999 trillion units of suffering. Pick 999 trillion, it's smaller.

2: There is not a difference between the infinity of the natural numbers and the infinity of the natural numbers minus a trillion elements. Your choice doesn't matter.

3: There is a difference between the infinity of the natural numbers and the infinity of the real numbers. Pick the infinity of the natural numbers, it's smaller.

4: There is not a difference between the infinity of the real numbers and the infinity of the real numbers minus the infinity of the natural numbers. Your choice doesn't matter.
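Put into standard cardinal-arithmetic notation, points 1-4 above amount to:

```latex
% Cardinal arithmetic behind points 1-4
\begin{align*}
  9.99 \times 10^{14} &< 10^{15} && \text{(finite quantities compare in the usual way)} \\
  |\mathbb{N} \setminus F| &= |\mathbb{N}| = \aleph_0 && \text{for any finite set } F \text{ (removing a trillion elements changes nothing)} \\
  \aleph_0 &< 2^{\aleph_0} = |\mathbb{R}| \\
  |\mathbb{R} \setminus \mathbb{N}| &= |\mathbb{R}| = 2^{\aleph_0}
\end{align*}
```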

And despite all of that explanation, I haven't even yet taken into account the possibility of being wrong about my mathematical judgement of the sizes of D and E, or the possibility of being wrong about many worlds (note: not necessarily in general, but about the specifics I use to attempt to calculate D and E).

Does it sound like I'm on the right track for considering this problem?

[-][anonymous]

Deutsch poses the dilemma as though doing this would necessarily be either evil or trivial; he omits the possibility that such an action might create a net positive utility for humans. That seems worth pointing out as a potential source of bias.

Anyhow, I suppose we have to take into account what is in the computer's memory in the first place. If it is running one or more sentient simulations of a human being tortured, then it is good to randomise it. If it is running a sentient simulation of a happy human, then it would be evil to randomise it. If the memory is already random with an equal measure of 0 and 1, then it is neutral (ignoring the cost of performing the randomisation itself).

Given these results, it seems that the implicit assumption in this dilemma is that the computer's memory is initially set to all 0s or all 1s. It could be that the memory was already random with a different proportion of 0s and 1s, but as far as I can see that has no substantive bearing on the fundamental problem (are Boltzmann brain simulations more likely with a 55/45 ratio of 1s and 0s than with 50/50?); what the dilemma is really probing is whether it is good to increase the probability of (simulated) Boltzmann brains coming into existence.
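On the parenthetical question, a rough calculation; the pattern length, and the assumption that a brain-encoding pattern is roughly balanced between 0s and 1s, are illustrative guesses only:

```python
# Probability of one specific n-bit pattern under independent biased bits:
# P = p**k * (1 - p)**(n - k), where k is the number of 1s in the pattern.
import math

def log2_pattern_prob(n, k, p):
    """log2 of the probability of a fixed pattern with k ones out of n bits, P(bit = 1) = p."""
    return k * math.log2(p) + (n - k) * math.log2(1 - p)

n, k = 1_000_000, 500_000  # assumed pattern size, assumed roughly balanced
print(log2_pattern_prob(n, k, 0.50))  # -1000000.0
print(log2_pattern_prob(n, k, 0.55))  # ~-1007250: a balanced pattern is slightly *less* likely at 55/45
```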

I would answer: humans primarily attach value to the experiences of qualia-rich – or sentient as some would have it – beings (including themselves). We understand very little of the “rules of qualia” – what is the maximum pain and maximum pleasure, how do they come about, how do they relate to everything else in the universe – but we have to form our best judgement given what little we know.

Based on introspection and on what others have said about their qualia (or subjective experiences), negative-valued qualia (i.e. various kinds of physical and emotional pain) generally feel more intense than positive-valued qualia, and negative qualia appear to be more motivating to humans than positive qualia. Humans experience life-long trauma after intense negative qualia and memories can be very painful; positive qualia are comparatively feeble. In the immortal words of Rush, “They shout about love but when push comes to shove, They live for the things they’re afraid of”.

Therefore I would assign a somewhat greater moral weight to the negative qualia experienced by (simulated) Boltzmann brains, assuming that negative and positive qualia are otherwise equally likely to be generated in random computations (I don’t see any basis for assuming otherwise).

Of course this is a simplified view of human values; most of us don’t consider orgasmium, a pleasure centre containing an extremely large integer, to be the most desirable form of being to bring into existence. But if we try to include that moral complexity in our decision, this would seem to reduce the expected utility of randomising the computer memory still further. The vast majority of sentient Boltzmann brains, regardless of whether (if at all) they are experiencing what we would recognise as "pleasure" or "pain", are chaotic beings with chaotic experiences. The vast majority are also primitive – just complex enough to possess qualia. Anything that we would consider to be more or less a simulated human is likely to be torn apart in a fraction of a second by its hostile environment, and (setting aside pleasure/pain) if miraculously he is able to persist he will almost certainly live an aesthetically displeasing life from our perspective.

I conclude that, ceteris paribus, the computer memory should not be randomized.

I agree with those people who have pointed out that of course in reality, the conditions of bounded rationality and the fact that there are tangible expected costs and benefits of randomizing a computer memory (e.g. time wasted, electricity used) render this problem insignificant – although then again the unresolved problem of tiny probabilities of vast utilities is not to be taken lightly. Nonetheless I would point out that this is a thought experiment, intended to illuminate a point of philosophical interest rather than necessarily pose a problem of direct practical importance.

[This comment is no longer endorsed by its author]
[-]XiXiDu

Reminds me of this.

There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?

I'm not so sure we have the computing power to "simulate a person," but suppose we did. (Perhaps we will soon.) How would you respond to this worry?

Pascal's Mugging rang. It wants tree-fiddy.

  1. Assuming we have a sufficiently dense register to provide for a human consciousness within a quantum randomizer's memory bank.

  2. Assuming many-worlds.

Every available mental state would occur infinitely many times, despite each having only an infinitesimal likelihood within the device. Those mental states where the suffering is sufficiently great as to cause the sentience to prefer not existing at all are necessarily a minor portion of the total of those who suffer. Those who neither suffer nor prosper likely also prefer existing, in the main. Those who prosper also overwhelmingly (likely) prefer to exist.

If we were, hypothetically, to allow those entities to vote on whether they should be brought into existence at all, it is my belief that as a group they would vote "yes".

Of course, I'm something of a heretic here at LW in that I do not accept postulate #2. (Note: I do not accept the "Copenhagen Interpretation" either.)

  1. Assuming we have a sufficiently dense register to provide for a human consciousness within a quantum randomizer's memory bank.
  2. Assuming many-worlds.

Also:

  1. Assuming simulations of people are people.

I understand "perfect copy" to mean that it is the thing it is a copy of -- functionally and observationally indistinguishable.

I don't see the words "perfect copy" or even just "copy" used anywhere in the article, only simulation and representation. That consciousness can be produced in a traditional silicon computer via an algorithm merely isomorphic to the processes in the human brain is an assumption I don't yet grant.

I don't see the words "perfect copy"

Correct, but I did in item one postulate "a human consciousness".

Is a human consciousness not a person, merely because it is a simulated human consciousness?

That consciousness can be produced in a traditional silicon computer via an algorithm merely isomorphic to the processes in the human brain is an assumption I don't yet grant.

I think you and I are using very different understandings of what postulated item #1 meant.