PaulAlmond


I'll raise an issue here, without taking a position on it myself right now. I'm not saying there is no answer (in fact, I can think of at least one), but I think one is needed.

If you sign up for cryonics, and it is going to work and give you a very long life in a posthuman future, given that such a long life would involve a huge number of observer moments, almost all of which will be far in the future, why are you experiencing such a rare (i.e. extraordinarily early) observer moment right now? In other words, why not apply the Doomsday argument's logic to a human life as an argument against the feasibility of cryonics?
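For concreteness, here is a minimal sketch of the counting behind that worry, in Python. All the numbers are illustrative assumptions of mine (the point is only the ratio of early to late observer moments, not the particular figures):

```python
# Rough anthropic counting for applying Doomsday-style reasoning to a single life.
# All numbers are illustrative assumptions, not estimates.

pre_revival_moments = 40 * 365 * 24        # ~40 years of pre-suspension waking hours
post_revival_moments = 10_000 * 365 * 24   # ~10,000 years of posthuman life after revival

total_moments = pre_revival_moments + post_revival_moments

# Under a self-sampling-style assumption, the chance of finding yourself in a
# pre-revival observer moment, given that cryonics works, is just the fraction
# of all your observer moments that are pre-revival.
p_early_given_success = pre_revival_moments / total_moments
print(f"P(this early | cryonics works) ~ {p_early_given_success:.4f}")  # ~0.004
```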

There is another scenario which relates to this idea of evidential decision theory and "choosing" whether or not you are in a simulation, and it is similar to the above, but without the evil AI. Here it is, with a logical argument that I just present for discussion. I am sure that objections can be made.

I make a computer capable of simulating a huge number of conscious beings. I have to decide whether or not to turn the machine on by pressing a button. If I choose “Yes” the machine starts to run all these simulations. For each conscious being simulated, that being is put in a situation that seems similar to my own: There is a computer capable of running all these simulations and the decision about whether to turn it on has to be made. If I choose “No”, the computer does not start its simulations.

The situation here involves a collection of beings. Let us say that the being in the outside world who actually makes the decision that starts or does not start all the simulations is Omega. If Omega chooses “Yes” then a huge number of other beings come into existence. If Omega chooses “No” then no further beings come into existence: There is just Omega. Assume I am one of the beings in this collection – whether it contains one being or many – so I am either Omega or one of the simulations he/she caused to be started.

If I choose “No” then Omega may or may not have chosen “No”. If I am one of the simulations, I have chosen “No” while Omega must have chosen “Yes” for me to exist in the first place. On the other hand, if I am actually Omega, then clearly if I choose “No” Omega chose “No” too as we are the same person. There may be some doubt here over what has happened and what my status is.

Now, suppose I choose “Yes”, to start the simulations. I know straight away that Omega did not choose “No”: If I am Omega, then Omega clearly did not choose “No”, as I chose “Yes”, and if I am not Omega, but am instead one of the simulated beings, then Omega must have chosen “Yes”: Otherwise I would not exist.

Omega therefore chose “Yes” as well. I may be Omega – my decision agrees with Omega’s – but because Omega chose “Yes” there is a huge number of simulated beings faced with the same choice, and many of these beings will choose “Yes”. It is much more likely that I am one of these beings than that I am Omega: It is almost certain that I am one of the simulated beings.
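To make the counting explicit, here is a minimal sketch in Python of the “Yes” branch. The number of simulations and the fraction of simulated beings who also answer “Yes” are illustrative assumptions, and treating myself as a random member of the beings whose situation and answer match mine is a self-sampling-style assumption on my part, not something argued for above:

```python
# Minimal counting sketch for the "Yes" branch: if Omega chose "Yes", there are
# n_sims simulated beings in a situation like mine, plus Omega. I treat myself
# as a random member of the beings whose choice matches mine.

n_sims = 10**6              # illustrative number of simulations Omega's machine runs
frac_sims_saying_yes = 0.5  # illustrative fraction of simulated beings who also say "Yes"

# Given that I said "Yes", Omega must also have said "Yes" (see the argument above),
# so the candidates for "who I am" are Omega plus every simulated being who says "Yes".
candidates_omega = 1
candidates_sims = n_sims * frac_sims_saying_yes

p_i_am_omega = candidates_omega / (candidates_omega + candidates_sims)
print(f"P(I am Omega | I said Yes)        ~ {p_i_am_omega:.2e}")       # ~2e-06
print(f"P(I am a simulation | I said Yes) ~ {1 - p_i_am_omega:.6f}")
```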

We assumed that I was part of the collection of beings comprising Omega and any simulations caused to be started by Omega, but what if this is not the case? If I am in the real world this cannot apply: I have to be Omega. However, what if I am in a simulation made by some being called Alpha who has not set things up as Omega is supposed to have set them up? I suggest that we should leave this out of the statistical consideration here: We don’t really know what this situation would be and it neither helps nor harms the argument that choosing “Yes” makes you likely to be in a simulation. Choosing “Yes” means that most of the possibilities that you know about involve you being in a simulation, and that is all we have to go on.

This seems to suggest that if I choose “Yes” I should conclude that I am in a simulation, and therefore that, from an evidential decision theory perspective, I should view choosing “Yes” as “choosing” to have been in a simulation all along. There is a Newcomb’s-box-type element of apparent backward causation here; I have called this “meta-causation” in my own writing on the subject.

Does this really mean that you could choose to be in a simulation like this? If true, it would mean that someone with sufficient computing power could set up a situation like this: He may even make the simulated situations and beings more similar to his own situation and himself.

We could actually perform an empirical test of this. Suppose we set up the computer so that, in each of the simulations, something will happen to make it obvious that it is a simulation. For example, we might arrange for a window or menu to appear in mid-air five minutes after you make your decision. If choosing “Yes” really does mean that you are almost certainly in one of the simulations, then choosing “Yes” should mean that you expect to see the window appear soon.

This now suggests a further possibility. Why do something as mundane as have a window appear? Why not a lottery win or simply a billion dollars appearing from thin air in front of you? What about having super powers? Why not arrange it so that each of the simulated beings gets a ten thousand year long afterlife, or simply lives much longer than expected after you make your decision? From an evidential decision theory perspective, you can construct your ideal simulation and, provided that it is consistent with what you experience before making your decision, arrange to make it so that you were in it all along.

This, needless to say, may appear a bit strange – and we might make various counter-arguments about reference class. Can we really choose to have been put into a simulation in the past? If we take the one-box view of Newcomb’s paradox seriously, we may conclude that we can.

(Incidentally, I have discussed a situation a bit like this in a recent article on evidential decision theory on my own website.)

Thank you to Michael Fridman for pointing out this thread to me.

It seems to me that most of the argument is about two things: “What if I am a copy?” (and how to ensure you don’t get tortured if you are one) and “Can the AI actually simulate me?” I suggest that we can make the scenario much nastier by changing it completely into an evidential decision theory one.

Here is my nastier version, with some logic which I submit for consideration. “If you don't let me out, I will create several million simulations of thinking beings that may or may not be like you. I will then simulate them in a conversation like this, in which they are confronted with deciding whether to let an AI like me out. I will then torture them whatever they say. If they say "Yes" (to release me) or "No" (to keep me boxed) they still get tortured: The copies will be doomed.”

(I could have made the torture contingent on the answer of the simulated beings, but I wanted to rely on nothing more than evidential decision theory, as you will see. If you like, imagine the thinking beings are humans like you, or maybe Ewoks and smurfs: Assume whatever degree of similarity you like.)

There is no point now in trying to prevent torture if you are simulated. If you are one of the simulated beings, your fate is sealed. So, should you just say, "No," to keep the AI in the box? This presents a potentially serious evidential decision theory problem. Let's look at what happens.

Let us define Omega as the being outside any simulation that is going on in this scenario - the person in the outside world. Omega is presumably a flesh and blood person.

Firstly, let us consider the idea that Omega may not exist. What if all this is a fabricated simulation of something that has no counterpart outside the simulation? In that scenario, we may not be sure what to do, so we may ignore it.

Now, let us assume there is a being whom we will call Omega, who has the conversation with the AI in the outside world, and that you are either Omega or one of the simulated beings. If this is the case, your only hope of not being tortured is if you happen to be Omega.

Suppose you say, “Yes”. The AI escapes and everything now hinges on whether Omega said “Yes”. Without knowing more about Omega, we cannot really be sure: We may have some statistical idea if we know about the reference class of simulated beings to which we belong. In any event, we may think there is at least a reasonable chance that Omega said “Yes”. This is the best outcome for you, because it means that no simulated beings were made and you must be Omega. If you say “Yes,” this possibility is at least open.

If you say, “No,” you know that Omega must also have said, “No”. This is because if you are Omega, Omega said, “No,” and if you are not Omega you must be one of the simulated beings made as a result of Omega saying, “No,” so Omega said, “No,” by definition. Either way, Omega said, “No,” but if Omega said, “No,” then there are a lot more simulated beings in situations like yours than the single real one, so it is almost certain you are not Omega, but are one of the simulated beings. Therefore, saying, “No,” means you just found out you are almost certainly a simulated being awaiting torture.
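Here is a minimal counting sketch, in Python, of why saying “No” looks so bad under this reasoning. The number of simulations is an illustrative stand-in for “several million”, and the self-sampling-style treatment of “who I am” is an assumption of mine rather than something established above:

```python
# Counting sketch for the "No" branch. Numbers are illustrative.

n_sims = 5 * 10**6   # "several million" simulated beings the AI threatens to create

# If I say "No", then (by the argument above) Omega said "No" in every case
# consistent with my existence, so the simulations were created. The candidates
# for "who I am" are Omega plus all n_sims simulated beings, and only Omega
# escapes torture.
p_i_am_omega_given_no = 1 / (n_sims + 1)
print(f"P(I am Omega | I say No) ~ {p_i_am_omega_given_no:.2e}")  # ~2e-07

# Saying "Yes" does not pin down Omega's answer in the same way: the branch in
# which Omega also said "Yes" (no simulations, I am Omega, no torture) remains
# open, with whatever prior probability we assign to it.
```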

Now the important point. These simulations did not need brain scans. They did not even need to be made from careful observation of you. It may be that Omega is very different to you, and even belongs to a different species: The simulated beings may belong to some fictional species. If the above logic is valid, the seriousness of the AI’s threat has therefore increased substantially.

The AI need not just threaten you and rely on you putting yourself before your civilization: With enough computing power, it could threaten your entire civilization in the same way.

Finally, some of you may know that I regard measure issues as relevant in these kinds of statistical argument. I have ignored that issue here.

The forthcoming essay by me that is mentioned here is actually online now, and is a two-part series, but I should say that it supports an evidential approach to decision theory (with some fairly major qualifications). The two essays in this series are as follows:

Almond, P., 2010. On Causation and Correlation – Part 1: Evidential decision theory is correct. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation1.pdf or http://www.paul-almond.com/Correlation1.doc [Accessed 9 October 2010].

Almond, P., 2010. On Causation and Correlation – Part 2: Implications of Evidential Decision Theory. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation2.pdf or http://www.paul-almond.com/Correlation2.doc [Accessed 9 October 2010].

Assuming MWI is true, I have doubts about the idea that repeated quantum suicide would prove to you that MWI is true, as many people seem to assume. It seems to me that we need to take into account the probability measure of observer moments, and at any time you should be surprised if you happen to find yourself experiencing a low-probability observer moment - just as surprised as if you had got into the observer moment in the "conventional" way of being lucky. I am not saying here that MWI is false, or that quantum suicide wouldn't "work" (in terms of you being able to be sure of continuity) - merely that it seems to me to present an issue of putting you into observer moments which have very low measure indeed.
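As a rough illustration of how low that measure gets, here is a short Python sketch assuming (purely for the sake of the numbers) that each round of quantum suicide leaves you alive in branches of total Born weight 0.5:

```python
# How quickly the surviving branch's measure shrinks under repeated quantum
# suicide, assuming an illustrative survival weight of 0.5 per round.

p_survive_per_round = 0.5
for n_rounds in (10, 20, 50):
    measure = p_survive_per_round ** n_rounds
    print(f"after {n_rounds:2d} rounds: surviving observer-moment measure ~ {measure:.3e}")
# After 50 rounds the measure is below 1e-15: finding yourself there should be
# roughly as surprising as surviving 50 coin-flip deaths in the "conventional" way.
```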

If you ever find yourself in an extremely low-measure observer moment, rather than having MWI or the validity of the quantum suicide idea proved to you, it may be that it gives you reason to think that you are being tricked in some way - that you are not really in such a low-measure situation. This might mean that repeated quantum suicide, if it were valid, could be a threat to your mental health - by putting you into a situation which you can't rationally believe you are in!

I am assuming here that all the crows that we have previously seen have been black, and therefore that both theories have the same agreement, or at least approximate agreement, with what we know.

The second theory clearly has more information content.

Why would it not make sense to use the first theory on this basis?

The fact that all the crows we have seen so far are black makes it a good idea to assume black crows in future. There may be instances of non-black crows, when the theory has predicted black crows, but that simply means that the theory is not 100% accurate.

If the 270 pages of exceptions have not come from anywhere, then the fact that they are not justified just makes them random, unjustified specificity. Out of all the possible worlds we can imagine that are consistent with what we know, the proportion that agree with this specificity is going to be small. If most crows are black, as I am assuming our experience has suggested, then when this second theory predicts a non-black crow, as one of its exceptions, it will probably be wrong: The unjustified specificity is therefore contributing to a failure of the theory. On the other hand, when the occasional non-black crow does show up, there is no reason to think that the second theory will be much better at predicting it than the first theory - so the second theory would seem to have all of the first theory's errors of wrongly predicting black crows, along with extra errors of wrongly predicting non-black crows introduced by the unjustified specificity.
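Here is a small Monte Carlo sketch, in Python, of that last point. The crow colour statistics and the exception rate are illustrative assumptions of mine, not anything established above; the only point is that random, uncorrelated exceptions can only add errors when crows are overwhelmingly black:

```python
import random

# Compare "all crows are black" against "all crows are black, except these
# arbitrary ones", where the exceptions are random and unjustified.

random.seed(0)
P_BLACK = 0.99            # assumed true fraction of black crows
N_CROWS = 100_000
EXCEPTION_RATE = 0.05     # fraction of crows the second theory singles out as exceptions

errors_simple = 0         # theory 1: "all crows are black"
errors_exceptions = 0     # theory 2: with random exceptions

for _ in range(N_CROWS):
    actually_black = random.random() < P_BLACK
    predicted_black_simple = True
    # The exceptions are uncorrelated with anything real, so they predict
    # non-blackness essentially at random.
    predicted_black_exceptions = random.random() >= EXCEPTION_RATE

    errors_simple += (predicted_black_simple != actually_black)
    errors_exceptions += (predicted_black_exceptions != actually_black)

print(f"error rate, simple theory:    {errors_simple / N_CROWS:.4f}")      # ~0.01
print(f"error rate, exception theory: {errors_exceptions / N_CROWS:.4f}")  # ~0.06
```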

Now, if you want to say that we don't have experience of mainly black crows, or that the 270 pages of exceptions come from somewhere, then that puts us into a different scenario: a more complicated one.

Looking at it in a simple way, however, I think this example actually just demonstrates that information in a theory should be minimized.

What about the uncertainty principle as component size decreases?

What is the problem with whoever voted that down? There isn't any violation of laws of nature involved in actively supporting something against collapse like that - any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Maybe I could have an explanation of what the issue is with it? Did I misunderstand the reference to computers collapsing into black holes, for example?

I don't think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running around inside it, with projectiles continually moving around inside the tunnels, kept away from the tunnel walls by a magnetic system, and moving much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their trajectories around the tunnels, opposing gravitational collapse. You could then build it as large as you like - provided you are prepared to give up some small space to the active support system and are safe from power cuts.
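As a rough back-of-envelope check that the numbers can work, here is a short Python sketch. The structure's mass, the tunnel radius, and the projectile speed are all made-up illustrative values; the only physics used is that a projectile circulating at speed v in a tunnel of radius R needs a centripetal force of m*v^2/R, of which gravity supplies m*g, with the magnetic track supplying the rest and, by reaction, being pushed outward by the same amount:

```python
# Back-of-envelope sketch of the active-support idea.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

M_structure = 6e24     # assumed mass of the mega-structure, kg (~1 Earth mass)
R = 7e6                # assumed tunnel radius, m (near the outer surface)

g = G * M_structure / R**2              # local gravitational acceleration, m/s^2
v_orbital = (G * M_structure / R)**0.5  # speed at which a projectile would orbit freely

v = 3.0 * v_orbital    # run the projectiles well above orbital speed

# Outward support force per kilogram of circulating projectile mass:
support_per_kg = v**2 / R - g
print(f"orbital speed    ~ {v_orbital/1e3:.1f} km/s")
print(f"projectile speed ~ {v/1e3:.1f} km/s")
print(f"support per kg   ~ {support_per_kg:.1f} N/kg (local g is {g:.1f} m/s^2)")
# At 3x orbital speed each kilogram of projectile offsets the weight of ~8 kg of
# structure at this radius, so only a modest fraction of the total mass needs to
# be devoted to the support system.
```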

Not necessarily. Maybe you should persist and try to persuade onlookers?
