In response to Normal Cryonics
Comment author: PaulAlmond 14 November 2010 01:34:34AM 1 point [-]

I'll raise an issue here, without taking a position on it myself right now. I'm not saying there is no answer (in fact, I can think of at least one), but I think one is needed.

If you sign up for cryonics, and it is going to work and give you a very long life in a posthuman future, given that such a long life would involve a huge number of observer moments, almost all of which will be far in the future, why are you experiencing such a rare (i.e. extraordinarily early) observer moment right now? In other words, why not apply the Doomsday argument's logic to a human life as an argument against the feasibility of cryonics?
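The worry can be made concrete with a toy self-sampling calculation; the observer-moment counts below are invented purely for illustration.

```python
# Doomsday-style reasoning applied to a single life (illustrative numbers only).
# Under self-sampling, treat your current observer moment as a uniform draw
# from all observer moments in your life.

pre_revival_moments = 80          # ordinary pre-cryonics life (arbitrary units)
posthuman_moments = 1_000_000     # posthuman life if cryonics works (assumed)

total = pre_revival_moments + posthuman_moments
p_early_given_success = pre_revival_moments / total

print(f"P(finding yourself this early | cryonics works) = {p_early_given_success:.6f}")
# A tiny value here is the alleged evidence against the long-life hypothesis.
```

The argument treats this small probability the way the Doomsday argument treats early birth rank: as evidence against the hypothesis that makes your current position rare.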

Comment author: PaulAlmond 13 November 2010 10:26:17AM *  3 points [-]

It seems to me that most of the argument is about two questions: “What if I am a copy?” (and ensuring you don’t get tortured if you are one) and “Can the AI actually simulate me?” I suggest that we can make the scenario much nastier by changing it completely into an evidential decision theory one.

Here is my nastier version, with some logic which I submit for consideration. “If you don't let me out, I will create several million simulations of thinking beings that may or may not be like you. I will then simulate them in a conversation like this, in which they are confronted with deciding whether to let an AI like me out. I will then torture them whatever they say. If they say "Yes" (to release me) or "No" (to keep me boxed) they still get tortured: The copies will be doomed.”

(I could have made the torture contingent on the answer of the simulated beings, but I wanted to rely on nothing more than evidential decision theory, as you will see. If you like, imagine the thinking beings are humans like you, or maybe Ewoks and smurfs: Assume whatever degree of similarity you like.)

There is no point now in trying to prevent torture if you are simulated. If you are one of the simulated beings, your fate is sealed. So, should you just say, "No," to keep the AI in the box? This presents a potentially serious evidential decision theory problem. Let's look at what happens.

Let us define Omega as the being outside any simulation that is going on in this scenario - the person in the outside world. Omega is presumably a flesh and blood person.

Firstly, let us consider the idea that Omega may not exist. What if all this is a fabricated simulation of something that has no counterpart outside the simulation? In that scenario, we may not be sure what to do, so we may ignore it.

Now, let us assume there is a being whom we will call Omega, who has the conversation with the AI in the outside world, and that you are either Omega or one of the simulated beings. If this is the case, your only hope of not being tortured is if you happen to be Omega.

Suppose you say, “Yes”. The AI escapes and everything now hinges on whether Omega said “Yes”. Without knowing more about Omega, we cannot really be sure: We may have some statistical idea if we know about the reference class of simulated beings to which we belong. In any event, we may think there is at least a reasonable chance that Omega said “Yes”. This is the best outcome for you, because it means that no simulated beings were made and you must be Omega. If you say “Yes,” this possibility is at least open.

If you say, “No,” you know that Omega must also have said, “No.” This is because if you are Omega, Omega said, “No,” and if you are not Omega you must be one of the simulated beings made as a result of Omega saying, “No,” so Omega said, “No,” by definition. Either way, Omega said, “No,” but if Omega said, “No,” then there are a lot more simulated beings in situations like yours than the single real one, so it is almost certain you are not Omega, but are one of the simulated beings. Therefore, saying, “No,” means you just found out you are almost certainly a simulated being awaiting torture.
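A minimal sketch of that update, assuming the AI runs M simulations when Omega says "No" (the exact M is made up here; the AI only promised "several million"):

```python
# If Omega said "No", the AI makes M simulated beings in your situation,
# so there are M + 1 candidates in total and only one of them is Omega.
M = 5_000_000  # "several million" simulations (assumed count)

p_omega_given_no = 1 / (M + 1)
p_simulated_given_no = M / (M + 1)

print(f"P(you are Omega | you said 'No')     = {p_omega_given_no:.2e}")
print(f"P(you are simulated | you said 'No') = {p_simulated_given_no:.7f}")
```

Any M in the millions gives essentially the same conclusion: saying "No" makes you almost certainly one of the doomed simulations.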

Now the important point. These simulations did not need brain scans. They did not even need to be made from careful observation of you. It may be that Omega is very different to you, and even belongs to a different species: The simulated beings may belong to some fictional species. If the above logic is valid, the seriousness of the AI’s threat has therefore increased substantially.

The AI need not just threaten you and rely on you putting yourself before your civilization: With enough computing power, it could threaten your entire civilization in the same way.

Finally, some of you may know that I regard measure issues as relevant in these kinds of statistical argument. I have ignored that issue here.

Comment author: PaulAlmond 13 November 2010 06:35:26PM *  1 point [-]

There is another scenario which relates to this idea of evidential decision theory and "choosing" whether or not you are in a simulation, and it is similar to the above, but without the evil AI. Here it is, with a logical argument that I just present for discussion. I am sure that objections can be made.

I make a computer capable of simulating a huge number of conscious beings. I have to decide whether or not to turn the machine on by pressing a button. If I choose “Yes” the machine starts to run all these simulations. For each conscious being simulated, that being is put in a situation that seems similar to my own: There is a computer capable of running all these simulations and the decision about whether to turn it on has to be made. If I choose “No”, the computer does not start its simulations.

The situation here involves a collection of beings. Let us say that the being in the outside world who actually makes the decision that starts or does not start all the simulations is Omega. If Omega chooses “Yes” then a huge number of other beings come into existence. If Omega chooses “No” then no further beings come into existence: There is just Omega. Assume I am one of the beings in this collection – whether it contains one being or many – so I am either Omega or one of the simulations he/she caused to be started.

If I choose “No” then Omega may or may not have chosen “No”. If I am one of the simulations, I have chosen “No” while Omega must have chosen “Yes” for me to exist in the first place. On the other hand, if I am actually Omega, then clearly if I choose “No” Omega chose “No” too as we are the same person. There may be some doubt here over what has happened and what my status is.

Now, suppose I choose “Yes”, to start the simulations. I know straight away that Omega did not choose “No”: If I am Omega, then clearly Omega did not choose “No”, as I chose “Yes”; and if I am not Omega, but am instead one of the simulated beings, then Omega must have chosen “Yes”: Otherwise I would not exist.

Omega therefore chose “Yes” as well. I may be Omega – my decision agrees with Omega’s – but because Omega chose “Yes” there is a huge number of simulated beings faced with the same choice, and many of these beings will choose “Yes”: It is much more likely that I am one of these beings than that I am Omega: It is almost certain that I am one of the simulated beings.
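The self-sampling step can be sketched numerically; the simulation count and the fraction of "Yes"-choosers are both invented for illustration.

```python
# Self-sampling after choosing "Yes" (all numbers are assumptions).
# The argument concludes that Omega also chose "Yes", so N simulated beings
# exist alongside Omega. If a fraction q of beings in this situation choose
# "Yes", then conditioning on your own "Yes" leaves Omega plus roughly q*N
# simulations as the candidates for who you are.
N = 1_000_000  # simulations started by Omega's "Yes" (assumed)
q = 0.5        # fraction of beings who choose "Yes" (assumed)

p_simulated = (q * N) / (q * N + 1)
print(f"P(simulated | you chose 'Yes') = {p_simulated:.6f}")
```

For any large N the probability is within a hair of 1, which is all the argument needs.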

We assumed that I was part of the collection of beings comprising Omega and any simulations caused to be started by Omega, but what if this is not the case? If I am in the real world this cannot apply: I have to be Omega. However, what if I am in a simulation made by some being called Alpha who has not set things up as Omega is supposed to have set them up? I suggest that we should leave this out of the statistical consideration here: We don’t really know what this situation would be and it neither helps nor harms the argument that choosing “Yes” makes you likely to be in a simulation. Choosing “Yes” means that most of the possibilities that you know about involve you being in a simulation and that is all we have to go off.

This seems to suggest that if I chose “Yes” I should conclude that I am in a simulation, and therefore that, from an evidential decision theory perspective, I should view choosing “Yes” as “choosing” to have been in a simulation all along: There is a Newcomb’s box type element of apparent backward causation here: I have called this “meta-causation” in my own writing on the subject.

Does this really mean that you could choose to be in a simulation like this? If true, it would mean that someone with sufficient computing power could set up a situation like this: He may even make the simulated situations and beings more similar to his own situation and himself.

We could actually perform an empirical test of this. Suppose we set up the computer so that, in each of the simulations, something will happen to make it obvious that it is a simulation. For example, we might arrange for a window or menu to appear in mid-air five minutes after you make your decision. If choosing “Yes” really does mean that you are almost certainly in one of the simulations, then choosing “Yes” should mean that you expect to see the window appear soon.

This now suggests a further possibility. Why do something as mundane as have a window appear? Why not a lottery win or simply a billion dollars appearing from thin air in front of you? What about having super powers? Why not arrange it so that each of the simulated beings gets a ten thousand year long afterlife, or simply lives much longer than expected after you make your decision? From an evidential decision theory perspective, you can construct your ideal simulation and, provided that it is consistent with what you experience before making your decision, arrange to make it so that you were in it all along.

This, needless to say, may appear a bit strange – and we might make various counter-arguments about reference class. Can we really choose to have been put into a simulation in the past? If we take the one-box view of Newcomb’s paradox seriously, we may well conclude that we can.

(Incidentally, I have discussed a situation a bit like this in a recent article on evidential decision theory on my own website.)

Thank you to Michael Fridman for pointing out this thread to me.

Comment author: Mitchell_Porter 28 September 2010 06:12:31AM *  9 points [-]

((moved here from the suffocating depths of open thread part 2))

Back when I first heard of "timeless decision theory", I thought it must have been inspired by Barbour's timeless physics. Then I got the idea that it was about treating yourself as an instance of a set of structurally identical decision-making agents from across all possible worlds, and making your decision as if you had an equal chance of being any one of them (which might be psychologically presented to yourself as making the decision on behalf of all of them, though that threatens to become very confused causally). But if the motivation was to have a new theory of rationality which would produce the right answer for Newcomb's "paradox" (and maybe other problems? though I don't know what other problems there are), then it sounded like a good idea.

But the discussion in this thread and this thread makes it look as if people want this "new decision theory" to account for the supposed success of "superrationality", or of cooperative acts in general, such as voting in a bloc. There are statements in those threads which just bemuse me. E.g. at the start of the second thread where Vladimir Nesov says

since voters' decisions are correlated, your decision accounts for behavior of other people as well, and so you are not only casting one vote with your decision, but many votes simultaneously

I should know enough about the possibilities of smart people tripping up over the intricacies of their own thoughts not to boggle at this, but still, I boggle at it. The decisions made by other people are caused by factors internal to their own brains. What goes on in your brain has nothing to do with it. Their guess or presumption of how you vote may affect their decision; your visible actions in the physical world may affect their decision; but the outcome of your decision process does not causally affect (or "acausally affect") other decision processes in the way that Vladimir seems to imply. At most, the outcome of your decision process provides you (not them) with very limited evidence about how similar agents may decide (Paul Almond may make this point in a forthcoming essay), but there is no way in which the particular decision-making process which you perform or instantiate is causally relevant to anyone else's in this magical way.

Then there are other dubious ideas in circulation, like "acausal trade" and its generalizations. I get the impression, therefore, that certain parties may be hoping for a grand synthesis which accommodates and justifies timeless ontology, superrationality (and even democracy?!), acausal interaction between possible worlds, and one-boxing on Newcomb's problem. The last of these items is the only one I take seriously (democracy may or may not be worth it, but you certainly don't need a new fundamental decision theory to explain why people vote), and the grand synthesis looks more like a grand trainwreck to me. Maybe I'm wrong about what's happening in TDT-land, but I thought I'd better speak up.

Comment author: PaulAlmond 09 October 2010 02:22:24AM 0 points [-]

That forthcoming essay by me that is mentioned here is actually online now, and is a two-part series, but I should say that it supports an evidential approach to decision theory (with some fairly major qualifications). The two essays in this series are as follows:

Almond, P., 2010. On Causation and Correlation – Part 1: Evidential decision theory is correct. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation1.pdf or http://www.paul-almond.com/Correlation1.doc [Accessed 9 October 2010].

Almond, P., 2010. On Causation and Correlation – Part 2: Implications of Evidential Decision Theory. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation2.pdf or http://www.paul-almond.com/Correlation2.doc [Accessed 9 October 2010].

Comment author: humpolec 30 August 2010 07:49:40PM *  1 point [-]

If I commit quantum suicide 10 times and live, does my estimate of MWI being true change? It seems like it should, but on the other hand it doesn't for an external observer with exactly the same data...

Comment author: PaulAlmond 30 August 2010 08:26:37PM 1 point [-]

Assuming MWI is true, I have doubts about the idea that repeated quantum suicide would prove to you that MWI is true, as many people seem to assume. It seems to me that we need to take into account the probability measure of observer moments, and at any time you should be surprised if you happen to find yourself experiencing a low-probability observer moment - just as surprised as if you had got into the observer moment in the "conventional" way of being lucky. I am not saying here that MWI is false, or that quantum suicide wouldn't "work" (in terms of you being able to be sure of continuity) - merely that it seems to me to present an issue of putting you into observer moments which have very low measure indeed.
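The measure arithmetic here is simple; a sketch assuming a 50/50 quantum coin on each round:

```python
# Measure of the surviving branch after n quantum-suicide rounds, taking MWI
# at face value and assuming each round is a fair quantum coin flip.
n = 10
surviving_measure = 0.5 ** n

print(f"Measure of the 'still alive' observer moment: {surviving_measure}")
# About 0.001: under self-sampling weighted by measure, finding yourself in
# this branch should be as surprising as surviving ten classical coin flips
# that each kill you on tails.
```

The point is that survival does not stop the surviving observer moment from having very low measure, and that is what the comment above says you should still be surprised by.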

If you ever find yourself in an extremely low-measure observer moment, rather than having MWI or the validity of the quantum suicide idea proved to you, it may be that it gives you reason to think that you are being tricked in some way - that you are not really in such a low-measure situation. This might mean that repeated quantum suicide, if it were valid, could be a threat to your mental health - by putting you into a situation which you can't rationally believe you are in!

Comment author: Perplexed 28 August 2010 10:16:48PM 4 points [-]

Which theory has more information?

  • All crows are black
  • All crows are black except <270 pages specifying the exceptions>
Comment author: PaulAlmond 30 August 2010 02:51:36AM *  0 points [-]

I am assuming here that all the crows that we have previously seen have been black, and therefore that both theories have the same agreement, or at least approximate agreement, with what we know.

The second theory clearly has more information content.

Why would it not make sense to use the first theory on this basis?

The fact that all the crows we have seen so far are black makes it a good idea to assume black crows in future. There may be instances of non-black crows, when the theory has predicted black crows, but that simply means that the theory is not 100% accurate.

If the 270 pages of exceptions have not come from anywhere, then the fact that they are unjustified just makes them random, unjustified specificity. Out of all the possible worlds we can imagine that are consistent with what we know, the proportion that agree with this specificity is going to be small. If most crows are black, as I am assuming our experience has suggested, then when this second theory predicts a non-black crow, as one of its exceptions, it will probably be wrong: The unjustified specificity is therefore contributing to a failure of the theory. On the other hand, when the occasional non-black crow does show up, there is no reason to think that the second theory will be much better at predicting it than the first theory. The second theory would therefore seem to have all the first theory's errors of wrongly predicting black crows, along with extra errors of wrongly predicting non-black crows introduced by the unjustified specificity.
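A toy Monte Carlo illustrates the point about unjustified specificity; the 99% black-crow rate and the 1% random-exception rate are invented for illustration.

```python
import random

# Toy comparison of the two crow theories (all numbers invented).
# World: 99% of crows are black. Theory 1 always predicts "black".
# Theory 2 predicts "black" except for a random 1% of crows, standing in for
# 270 pages of exceptions that don't track anything real.
random.seed(0)
crows = ["black" if random.random() < 0.99 else "not black"
         for _ in range(100_000)]

t1_correct = sum(c == "black" for c in crows)

t2_correct = 0
for c in crows:
    prediction = "not black" if random.random() < 0.01 else "black"
    t2_correct += (prediction == c)

print(f"Theory 1 accuracy: {t1_correct / len(crows):.4f}")
print(f"Theory 2 accuracy: {t2_correct / len(crows):.4f}")
# Theory 2's random exceptions almost never line up with the real non-black
# crows, so it keeps Theory 1's errors and adds new ones of its own.
```

This matches the argument above: unanchored exceptions only add errors, so the lower-information theory wins.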

Now, if you want to say that we don't have experience of mainly black crows, or that the 270 pages of exceptions come from somewhere, then that puts us into a different scenario: a more complicated one.

Looking at it in a simple way, however, I think this example actually just demonstrates that information in a theory should be minimized.

Comment author: jacob_cannell 29 August 2010 02:50:46AM 0 points [-]

The general idea is that because of the speed of light limitation, a computer's maximum speed and communication efficiency are always inversely proportional to its size.

The ultimate computer is thus necessarily dense to the point of gravitational collapse. See Seth Lloyd's limits-of-computation paper for the details.

Any old humdrum really big computer wouldn't have to collapse into a black hole - but any ultimate computer would. In fact, the size of the computer isn't even an issue: The ultimate configuration of any matter (in theory) for computation must have ultimately high density, to maximize speed and minimize inter-component delay.
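The headline number from Lloyd's paper can be reproduced from the Margolus–Levitin bound; the sketch below assumes his 1 kg "ultimate laptop" example.

```python
import math

# Margolus-Levitin bound, as used in Lloyd's ultimate-laptop analysis:
# a system with total energy E performs at most 2E / (pi * hbar) elementary
# operations per second.
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
c = 2.997_924_58e8        # speed of light, m/s
m = 1.0                   # mass of the computer, kg (Lloyd's example)

E = m * c**2                              # rest-mass energy
max_ops_per_sec = 2 * E / (math.pi * hbar)
print(f"Ultimate 1 kg computer: ~{max_ops_per_sec:.2e} ops/s")
# Roughly 5.4e50 operations per second, matching Lloyd's figure.
```

Note that this bound depends only on energy, not on density; the density argument enters separately, through communication delays across the computer's diameter.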

Comment author: PaulAlmond 29 August 2010 03:05:10AM 0 points [-]

What about the uncertainty principle as component size decreases?

Comment author: PaulAlmond 29 August 2010 12:47:29AM *  0 points [-]

I don't think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running around inside it, with projectiles continually moving around inside the tunnels, kept away from the tunnel walls by a magnetic system, and moving much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their trajectories around the tunnels, opposing gravitational collapse. You could then build it as large as you like - provided you are prepared to give up some small space to the active support system and are safe from power cuts.
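The force balance behind this active-support idea can be sketched as follows; the structure's mass, radius, and projectile speed are all invented numbers.

```python
import math

# Net outward support from projectiles circling faster than orbital speed
# (idealized; the mass and radius are made-up example values).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.0e24      # mass of the structure, kg (assumed)
r = 6.0e6       # radius of the projectile tunnel, m (assumed)

v_orbital = math.sqrt(G * M / r)   # speed at which a projectile just "floats"
v = 3.0 * v_orbital                # run the projectiles faster than that

# Per unit projectile mass: centripetal demand v^2/r versus gravity g = GM/r^2.
g = G * M / r**2
net_outward_accel = v**2 / r - g   # surplus transferred to the tunnel walls

print(f"Orbital speed: {v_orbital:.0f} m/s")
print(f"Net outward acceleration per kg of projectile: {net_outward_accel:.1f} m/s^2")
# A positive surplus means the magnetic coupling pushes outward on the
# structure, opposing gravitational collapse - no law of nature is violated.
```

At three times orbital speed the surplus is eight times local gravity per kilogram of projectile, so a modest projectile mass can support a much heavier shell, at the cost of continuous power for the magnetic system.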

Comment author: PaulAlmond 29 August 2010 01:38:34AM 0 points [-]

What is the problem with whoever voted that down? There isn't any violation of laws of nature involved in actively supporting something against collapse like that - any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Maybe I could have an explanation of what the issue is with it? Did I misunderstand the reference to computers collapsing into black holes, for example?

Comment author: jacob_cannell 28 August 2010 03:25:01AM 2 points [-]

Any artificial intelligence will have internal structure. Artificial intelligences, unlike humans, do not come in standard-sized reproductive units, walled off computationally; there is therefore no reason to expect individuals to exist in a post-AI society. But the bulk of the computation, and hence the bulk of the potential consciousness, will be within small, local units (due to the ubiquity of power-law distributions, the efficiency of fractal transport and communication networks, and the speed of light).

Physics is local. The speed of light is a consequence of that general principle. The local nature of our universe implies some strict limits on intelligence. Curiously, it looks like the only way to transcend these limits (to get a really powerful single intelligence/computer) is to collapse into a black hole, at which point you necessarily seal yourself off and give up any power in this universe. Interesting indeed.

But I have no idea how you leap to the conclusion "there is therefore no reason to expect individuals to exist in a post-AI society" - though that is partly because I don't know what a post-AI society is. I understand post-human... but post-AI? Is that the next thing after the next thing? That seems to be getting ahead of ourselves.

Also, you seem to reach the conclusion that there will not necessarily be any individuality in the 'post-AI' future society, but then give several good reasons why such individuality may persist. (namely, speed of light, locality of physics)

But what is individuality? One could say that we are a global consciousness today with just the "bulk of computation" in "small, local units".

Comment author: Jeff_Rubinoff 28 August 2010 10:36:35PM 3 points [-]

What if you are trying to explain evolution to someone and he states "Evolution is just another religion"? Is that a stop sign? To me it is, in the sense that the only reason to continue at that point would be to enjoy the sound of your own voice. The person has just signalled his membership in a tribe; you recognize that you are not in that tribe; and you recognize that he will not consider anything further you have to say on the subject, because that would be disloyal to the tribe.

Global warming is a religion, taxation is theft, property is theft, healthcare is not a right (I'm not sure if the reverse is used as a flag, too), there is no peace without justice, "allopathic medicine"; there are a lot of them. I'm old enough to remember "The Soviet Union is a state in transition," too. (Sadly, it was in transition to total collapse.) All these statements are what Eliezer calls "Green and Blue" markers (I think - those are the two chariot-racing team colors, right?). I'm not sure if these statements are also semantic stop signs.

Anyway, I think that class of statement is very different from statements like abiogenesis or prebiotic soup, because the latter statements indicate that the original topic has been exhausted. That line of reasoning has gone to its logical end, and to continue conversing, we must switch to a different discussion. That is not quite the same thing as saying "tribal loyalty dictates that I do not use reason to consider anything further you say on this subject."

Comment author: PaulAlmond 28 August 2010 10:45:10PM 1 point [-]

Not necessarily. Maybe you should persist and try to persuade onlookers?
