There is another scenario which relates to this idea of evidential decision theory and "choosing" whether or not you are in a simulation, and it is similar to the above, but without the evil AI. Here it is, with a logical argument that I just present for discussion. I am sure that objections can be made.
I make a computer capable of simulating a huge number of conscious beings. I have to decide whether or not to turn the machine on by pressing a button. If I choose “Yes” the machine starts to run all these simulations. For each conscious being sim...
It seems to me that most of the argument is about “What if I am a copy?” – and ensuring you don’t get tortured if you are one and “Can the AI actually simulate me?” I suggest that we can make the scenario much nastier by changing it completely into an evidential decision theory one.
Here is my nastier version, with some logic which I submit for consideration. “If you don't let me out, I will create several million simulations of thinking beings that may or may not be like you. I will then simulate them in a conversation like this, in which they are confronted w...
The forthcoming essay of mine that is mentioned here is actually online now, and is a two-part series, but I should say that it supports an evidential approach to decision theory (with some fairly major qualifications). The two essays in this series are as follows:
Almond, P., 2010. On Causation and Correlation – Part 1: Evidential decision theory is correct. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation1.pdf or http://www.paul-almond.com/Correlation1.doc [Accessed 9 October 2010].
Almond, P., 2010. On Causation and Correlati...
Assuming MWI is true, I have doubts about the idea that repeated quantum suicide would prove to you that MWI is true, as many people seem to assume. It seems to me that we need to take into account the probability measure of observer moments, and at any time you should be surprised if you happen to find yourself experiencing a low-probability observer moment - just as surprised as if you had got into the observer moment in the "conventional" way of being lucky. I am not saying here that MWI is false, or that quantum suicide wouldn't "work"...
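Here is a minimal sketch of the arithmetic I have in mind. The numbers are purely illustrative assumptions (a 50/50 survival split each round), not anything derived from quantum mechanics itself:

```python
# Illustrative assumption: each quantum-suicide round kills the experimenter
# in half of the branches, so the surviving observer moment's probability
# measure halves each round.

def surviving_measure(rounds: int, survival_fraction: float = 0.5) -> float:
    """Measure of the observer moment that has survived every round."""
    return survival_fraction ** rounds

for n in (1, 10, 30):
    print(f"after {n} rounds: measure = {surviving_measure(n):.3e}")

# The point: 0.5**30 is the same tiny number whether you read it as the
# probability of surviving 30 rounds by ordinary luck or as the measure of
# the surviving observer moment under MWI - so finding yourself there should
# be equally surprising either way.
```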
I am assuming here that all the crows that we have previously seen have been black, and therefore that both theories have the same agreement, or at least approximate agreement, with what we know.
The second theory clearly has more information content.
Why would it not make sense to use the first theory on this basis?
The fact that all the crows we have seen so far are black makes it a good idea to assume black crows in future. There may be instances of non-black crows, when the theory has predicted black crows, but that simply means that the theory is not 100...
What about the uncertainty principle as component size decreases?
What is the problem with whoever voted that down? There isn't any violation of laws of nature involved in actively supporting something against collapse like that - any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Could I have an explanation of what the issue is? Did I misunderstand the reference to computers collapsing into black holes, for example?
I don't think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running around inside it, with projectiles continually moving around inside the tunnels, kept away from the tunnel walls by a magnetic system, and moving much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their traj...
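To give a rough, hedged sketch of the force balance I have in mind (the numbers below are made up for illustration, not a design): a projectile circulating at radius r needs a centripetal acceleration of v²/r; gravity supplies GM/r², and the magnetic system, and hence the tunnel wall, must supply the rest - which the wall feels as an outward push.

```python
# Rough sketch, with illustrative numbers only.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

def outward_push_per_kg(M: float, r: float, v: float) -> float:
    """Outward force (N) on the tunnel wall per kg of projectile mass."""
    g_local = G * M / r**2          # gravitational acceleration at radius r
    centripetal = v**2 / r          # acceleration needed to hold the circle
    return centripetal - g_local    # positive => net outward support

# Assumed example: an Earth-mass structure, a tunnel at Earth's radius,
# projectiles at three times orbital velocity.
M, r = 5.97e24, 6.37e6
v_orbital = (G * M / r) ** 0.5
print(outward_push_per_kg(M, r, 3 * v_orbital))   # roughly 78 N of support per kg
```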
Not necessarily. Maybe you should persist and try to persuade onlookers?
I didn't say you ignored previous correspondence with reality, though.
In general, I would think that the more information is in a theory, the more specific it is, and the more specific it is, the smaller is the proportion of possible worlds which happen to comply with it.
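A toy illustration of that idea, under an admittedly crude model in which a "possible world" is just an n-bit string and a theory is a constraint that pins down k of those bits (more information content meaning a larger k):

```python
from itertools import product

def complying_fraction(n_bits: int, constrained: dict) -> float:
    """Fraction of all n-bit worlds that satisfy the given bit constraints."""
    worlds = product((0, 1), repeat=n_bits)
    hits = sum(all(w[i] == v for i, v in constrained.items()) for w in worlds)
    return hits / 2 ** n_bits

print(complying_fraction(10, {0: 1}))              # 1 bit specified  -> 0.5
print(complying_fraction(10, {0: 1, 3: 0, 7: 1}))  # 3 bits specified -> 0.125
# Each extra bit of specificity halves the proportion of worlds that comply.
```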
Regarding how much emphasis we should place on it: I would say "a lot", but there are complications. Theories aren't used in isolation, but tend to provide a kind of informally put together world view, and then there is the issue of degree of matching.
Just curious (and not being 100% serious here): Would you have any concerns about the following argument (and I am not saying I accept it)?
Surely, this is dealt with by considering the amount of information in the hypothesis? If we consider each hypothesis that can be represented with 1,000 bits of information, there will only be a maximum of 2^1,000 such hypotheses, and if we consider each hypothesis that can be represented with n bits of information, there will only be a maximum of 2^n such hypotheses - and that is before we even start eliminating hypotheses that are inconsistent with what we already know. If we favor hypotheses with less information content, then we end up with a small number of hypothese...
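A small numeric illustration of the counting point, under the assumption that each hypothesis is encoded as a distinct bitstring:

```python
def hypotheses_of_length(n: int) -> int:
    """Maximum number of distinct hypotheses expressible in exactly n bits."""
    return 2 ** n

def hypotheses_up_to(n: int) -> int:
    """Maximum number expressible in n bits or fewer (lengths 1..n)."""
    return sum(2 ** k for k in range(1, n + 1))   # = 2**(n+1) - 2

print(hypotheses_of_length(10))   # 1024
print(hypotheses_up_to(10))       # 2046 - still a small, finite pool
# However many observations we make, only a limited number of short
# hypotheses can exist at all, and fewer still will fit the data.
```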
As a further comment, regarding the idea that you can "unplug" a simulation: You can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts - the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there - the computer itself - but the higher level organization is gone - just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a "...
All those things can only be done with simulations because the way that we use computers has caused us to build features like malleability, predictability etc into them.
The fact that we can easily time reverse some simulations means little: You haven't shown that having the capability to time reverse something detracts from other properties that it might have. It would be easy to make simulations based on analogue computers where we could never get the same simulation twice, but there wouldn't be much of a market for those computers - and, importantly, it...
There isn't a clear way in which you can say that something is a "simulation", and I think that isn't obvious when we draw a line in a simplistic way based on our experiences of using computers to "simulate things".
Real things are arrangements of matter, but what we call "simulations" of things are also arrangements of matter. Two things or processes of the same type (such as two real cats or processes of digestion) will have physical arrangements of matter that have some property in common, but we could say the same about a b...
I say that your claim depends on an assumption about the degree of substrate specificity associated with consciousness, and the safety of this assumption is far from obvious.
What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?
Do you think that is persuasive?
I'll give a reworded version of this, to take it out of the context of a belief system with which we are familiar. I'm not intending any mockery by this: It is to make a point about the claims and the evidence:
"Let us stipulate that, on Paris Hilton's birthday, a prominent Paris Hilton admirer claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts that ...
Yes - I would ask this question:
"Mr Prophet, are you claiming that there is no other theory to account for all this that has less intrinsic information content than a theory which assumes the existence of a fundamental, non-contingent mind - a mind which apparently cannot be accounted for by some theory containing less information, given that the mind is supposed to be non-contingent?"
He had better have a good answer to that: Otherwise I don't care how many true predictions he has made or NP problems he has solved. None of that comes close to fixing the ultra-high information loading in his theory.
But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way "in return" by those simulating you - using a rather strange meaning of "in return"?
Some people interpret the Newcomb's boxes paradox as meaning that, when you make decisions, you should act is if you are influencing the decisions of other entities when there is some relationship between the behavior of those entities and your behavior - even if there is no obvious causal relationship, and even if the other entiti...
Okay, I may have misunderstood you. It looks like there is some common ground between us on the issue of inefficiency. I think the brain would probably be inefficient as well, since it has to be thrown together by the very specific kind of process that is evolution - which is optimized for building things without needing look-ahead intelligence, rather than for achieving the most efficient results.
Are you saying that you are counting every copy of the DNA as information that contributes to the total amount? If so, I say that's invalid. What if each cell were remotely controlled from a central server containing the DNA information? I can't see that we'd count the DNA for each cell then - yet it is no different really.
I agree that the number of cells is relevant, because there will be a lot of information in the structure of an adult brain that has come from the environment, rather than just from the DNA, and more cells would seem to imply more machinery in which to put it.
If we do that, should we even call that "less complex earlier version of God" God? Would it deserve the title?
Do you mean it doesn't seem so unreasonable to you, or to other people?
The really big problem with such a reality is that it contains a fundamental, non-contingent mind (God's/Allah's, etc) - and we all know how much describing one of those takes - and the requirement that God is non-contingent means we can't use any simpler, underlying ideas like Darwinian evolution. Non-contingency, in theory selection terms, is a god killer: It forces God to incur a huge information penalty - unless the theist refuses even to play by these rules and thinks God is above all that - in which case they aren't even playing the theory selection game.
Just that the scenario could really be considered as adding an extra component onto a being - one that has a lot of influence on his behavior.
Similarly, we might imagine surgically removing a piece of your brain, connecting the neurons at the edges of the removed piece to the ones left in your brain by radio control, and taking the removed piece to another location, from which it still plays a full part in your thought processes. We would probably still consider that composite system "you".
What if you had a brain disorder and some electronic...
"Except that that's not the person the question is being directed at."
Does that mean that you accept that it might at least be conceivable that the scenario implies the existence of a compound being who is less constrained than the person being controlled by Omega?
The point, here, is that in the scenario in which Omega is actively manipulating your brain "you" might mean something in a more extended sense and "some part of you" might mean "some part of Omega's brain".
Okay, so I got the scenario wrong, but I will give another reply. Omega is going to force you to act in a certain way. However, you will still experience what seem, to you, to be cognitive processes, and anyone watching your behavior will see what looks like cognitive processes going on.
Suppose Omega wrote a computer program and he used it to work out how to control your behavior. Suppose he put this in a microchip and implanted it in your brain. You might say your brain is controlled by the chip, but you might also say that the chip and your brain form a c...
EDIT - I had missed the full context as follows: "In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts etc."
for the comment below, so I accept Kingreaper's reply here. BUT I will give another answer, below.
If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational. You are being inconsistent here.
Would you actually go as far as maintaining that, if a change were to happen tomorrow to the 1,000th decimal place of a physical constant, it would be likely to stop brains from working, or are you just saying that a similar change to a physical constant, if it happened in the past, would have been likely to stop the sequence of events which has caused brains to come into existence?
I think that the "ABSOLUTELY IRRESISTIBLE" and "ABSOLUTELY UNTHINKABLE" language can be a bit misleading here. Yes, someone with the lesion is compelled to smoke, but his experience of this may be experience of spending days deliberating about whether to smoke - even though, all along, he was just running along preprepared rails and the end-result was inevitable.
If we assume determinism, however, we might say this about any decision. If someone makes a decision, it is because his brain was in such a state that it was compelled to make t...
with a lot of steps.
No, I think you are misunderstanding me here. I wasn't claiming that proliferation of worlds CAUSES average energy per world to go down. It wouldn't make much sense to claim that, because it is far from certain that the concept of a world is absolutely defined (a point you seem to have been arguing). I was saying that the total energy of the wavefunction remains constant (which isn't really unreasonable, because it is merely a wave developing over time - we should expect that), and I was saying that a CONSEQUENCE of this is that we should expect, on average, ...
I do admit to over-generalizing in saying that when a world splits, the split-off worlds each HAVE to have lower energy than the "original world". If we measure the energy associated with the wavefunction for individual worlds, on average, of course, this would have to be the case, due to the proliferation of worlds: However, I do understand, and should have stated, that all that matters is that the total energy for the system remains constant over time, and that probabilities matter.
Regarding the second issue, defining what a world is, I actuall...
I will add something more to this.
Firstly, I should have made it clear that the reference class should only contain worlds which are not clearly inconsistent with ours - we remove the ones where the sun never rose before, for example.
Secondly, some people won't like how I built the reference class, but I maintain that way has least assumptions. If you want to build the reference class "bit by bit", as if you are going through each world as if it were an image in a graphics program, adding a pixel at a time, you are actually imposing a very specif...
The issue is too involved to give a full justification of induction here, but I will try to give a very general idea. (This was on my mind a while back as I got asked about it in an interview.)
Even if we don't assume that we can apply statistics in the sense of using past observations to tell us about future observations, or observations about some of the members of a group to tell us about other members of a group, I suggest we are justified in doing the following.
Given a reference class of possible worlds in which we could be, in the absence of any reaso...
I disagree with that. The being in Newcomb's problem wouldn't have to be all-knowing. He would just have to know what everyone else is going to do conditional on his own actions. This would mean that any act of prediction would also cause the being to be faced with a choice about the outcome.
For example:
Suppose I am all-knowing, with the exception that I do not have full knowledge about myself. I am about to make a prediction, and then have a conversation with you, and then I am going to sit in a locked metal box for an hour. (Theoretically, you could argu...
It sounds like you might have issues with what looks like a violation of conservation of energy over a single universe's history. If a world splits, the energy of each split-off world would have to be less than the original world. That doesn't change the fact that conservation of energy appears to apply in each world: Observers in a world aren't directly measuring the energy of the wavefunction, but instead they are measuring the energy of things like particles which appear to exist as a result of the wavefunction.
Advocates of MWI generally say that a spli...
Well, it isn't really about what I think, but about what MWI is understood to say.
According to MWI, the worlds are being "sliced more thinly" in the sense that the total energy of each depends on its probability measure, and when a world splits its probability measure, and therefore energy, is shared out among the worlds into which it splits. The answer to your question is a "sort of yes" but I will qualify that shortly.
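A toy bookkeeping sketch of what I mean, with the simplifying assumption that "the energy attributed to a branch" is just the (constant) total energy of the wavefunction weighted by that branch's probability measure:

```python
TOTAL_ENERGY = 1.0   # arbitrary units; assumed constant for the whole wavefunction

def split(branch_measure: float, weights: list) -> list:
    """Share a branch's measure among its decoherent successor branches."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return [branch_measure * w for w in weights]

measures = [1.0]                                            # one branch, full measure
measures = split(measures[0], [0.5, 0.5])                   # first split
measures = split(measures[0], [0.3, 0.7]) + measures[1:]    # one branch splits again

energies = [m * TOTAL_ENERGY for m in measures]
print(energies)       # each branch gets a smaller share...
print(sum(energies))  # ...but the total stays 1.0
```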
For practical purposes, it is a definite and objective fact. When two parts of the wavefunction have become decoherent...
That seems quite close to Searle to me, in that you are both imposing specific requirements for the substrate - which is really all that Searle does. There is the possible difference that you might be more generous than Searle about what constitutes a valid substrate (though Searle isn't really too clear on that issue anyway).
I started a series of articles, which got some criticism on LW in the past, dealing with this issue (among others) and this kind of ontology. In short, if an ontology like this applies, it does not mean that all computations are equal: There would be issues of measure associated with the number (I'm simplifying here) of interpretations that can find any particular computation. I expect to be posting Part 4 of this series, which has been delayed for a long time and which will answer many objections, in a while, but the previous articles are as follows:
Minds...
This seems like pretty much Professor John Searle's argument, to me. Your argument about the algorithm being subject to interpretation and observer-dependent has been made by Searle, who refers to it as "universal realizability".
See:
Searle, J. R., 1997. The Mystery of Consciousness. London: Granta Books. Chapter 1, pp.14-17. (Originally Published: 1997. New York: The New York Review of Books. Also published by Granta Books in 1997.)
Searle, J. R., 2002. The Rediscovery of the Mind. Cambridge, Massachusetts: The MIT Press. 9th Edition. Chapter 9, pp.207-212. (Originally Published: 1992. Cambridge, Massachusetts: The MIT Press.)
These worlds aren't being "created out of nowhere" as people imagine it. They are only called worlds because they are regions of the wavefunction which don't interact with other regions. It is the same wavefunction, and it is just being "sliced more thinly". To an observer, able to look at this from outside, there would just be the wavefunction, with parts that have decohered from each other, and that is it. To put it another way, when a world "splits" into two worlds, it makes sense to think of it as meaning that the "st...
Agreed - MWI (many-worlds interpretation) does not have any "collapse": Instead parts of the wavefunction merely become decoherent with each other which might have the appearance of a collapse locally to observers. I know this is controversial, but I think the evidence is overwhelmingly in favor of MWI because it is much more parsimonious than competing models in the sense that really matters - and the only sense in which the parsimony of a model could really be coherently described. (It is kind of funny that both sides of the MWI or !MWI debate ...
I think I know what you are asking here, but I want to be sure. Could you elaborate, maybe with an example?
I think this can be dealt with in terms of measure. In a series of articles, "Minds, Measure, Substrate and Value" I have been arguing that copies cannot be considered equally, without regard to substrate: We need to take account of measure for a mind, and the way in which the mind is implemented will affect its measure. (Incidentally, some of you argued against the series: After a long delay [years!], I will be releasing Part 4, in a while, which will deal with a lot of these objections.)
Without trying to present the full argument here, the mini...
I'll raise an issue here, without taking a position on it myself right now. I'm not saying there is no answer (in fact, I can think of at least one), but I think one is needed.
If you sign up for cryonics, and it is going to work and give you a very long life in a posthuman future, given that such a long life would involve a huge number of observer moments, almost all of which will be far in the future, why are you experiencing such a rare (i.e. extraordinarily early) observer moment right now? In other words, why not apply the Doomsday argument's logic to a human life as an argument against the feasibility of cryonics?
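The arithmetic behind that worry, with purely illustrative numbers (the observer-moment rate and lifespans below are assumptions, not claims):

```python
# Assumption: one "observer moment" per second, a pre-revival life of ~80
# years, and a posthuman life after successful cryonics of a million years.
SECONDS_PER_YEAR = 3.156e7

pre_revival  = 80 * SECONDS_PER_YEAR            # observer moments so far
post_revival = 1_000_000 * SECONDS_PER_YEAR     # observer moments if cryonics works

p_this_early = pre_revival / (pre_revival + post_revival)
print(p_this_early)   # ~8e-5: if cryonics works, finding yourself in the
                      # pre-revival sliver of your life is this improbable,
                      # which is the Doomsday-style evidence against it.
```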