All of PaulAlmond's Comments + Replies

I'll raise an issue here, without taking a position on it myself right now. I'm not saying there is no answer (in fact, I can think of at least one), but I think one is needed.

If you sign up for cryonics, and it is going to work and give you a very long life in a posthuman future, given that such a long life would involve a huge number of observer moments, almost all of which will be far in the future, why are you experiencing such a rare (i.e. extraordinarily early) observer moment right now? In other words, why not apply the Doomsday argument's logic to a human life as an argument against the feasibility of cryonics?
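To make the worry concrete, here is a minimal self-sampling sketch in Python. The 80-year and 80-million-year figures are purely illustrative assumptions, not anything claimed in the comment:

```python
# Toy self-sampling calculation for the cryonics/Doomsday worry above.
# All numbers are illustrative assumptions, not claims about real lifespans.

ordinary_years = 80            # observer-moments before cryonics "works"
posthuman_years = 80_000_000   # assumed posthuman lifespan if revival succeeds
moments_per_year = 1           # granularity cancels out, so 1 is fine

early_moments = ordinary_years * moments_per_year
total_moments = (ordinary_years + posthuman_years) * moments_per_year

# Under a uniform self-sampling assumption over your own observer-moments,
# the probability of finding yourself in the pre-revival portion is tiny:
p_early = early_moments / total_moments
print(f"P(current moment is pre-revival) = {p_early:.2e}")  # ~1e-06
```

Under those assumed numbers, finding yourself pre-revival is roughly a one-in-a-million observation, which is the force the Doomsday-style argument is supposed to have here.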

5Furcas
Because that logic is flawed. If I (the Furcas typing these words) lived in 3010, 'I' would have different memories and 'I' would be experiencing different things and thus I (the Furcas typing these words) would not exist. Thus there is no likelihood whatsoever that I (the Furcas typing these words) could have existed in 3010*. There may be something left of me in 3010, just as there is something left of the boy I 'was' in 1990 today, but it won't be me: The memories will be different and the observations will be different, therefore the experience will be different. Asking why I don't exist in 3010 is asking why experience X is not experience Y. X is not Y because X does not equal Y. It's as simple as that. *Except, of course, if I were put in a simulation that very closely replicates the environment that I (believe I) experience in 2010.

There is another scenario which relates to this idea of evidential decision theory and "choosing" whether or not you are in a simulation, and it is similar to the above, but without the evil AI. Here it is, with a logical argument that I just present for discussion. I am sure that objections can be made.

I make a computer capable of simulating a huge number of conscious beings. I have to decide whether or not to turn the machine on by pressing a button. If I choose “Yes” the machine starts to run all these simulations. For each conscious being sim... (read more)

1Anixx
I do not know how the simulation argument ever holds water. I can bring at least two arguments against it.

First, it illicitly assumes the principle that it is equally probable to be any one of a set of similar beings, simulated or not. But a counter-argument would be: there are ALREADY many more organisms, particularly animals, than, say, humans. There are more fish than humans. There are more birds than humans. There are more ants than humans. Trillions of them. Why was I born human and not one of them? The probability of that is negligible if it is equal. Also, how many animals, including humans, have already died? Again, the probability of my lineage surviving while all other branches died is negligible if the chances that I were any of them are equal.

The second argument goes along the lines that Thomas Breuer has proven that, due to self-reference, universally valid theories are impossible. In other words, the future of a system which properly includes the observer is not predictable, even probabilistically. The observer is not simulatable. In other words, the observer is an oracle, or hypercomputer, in his own universe. Since the AGI in the box is not a hypercomputer but merely a Turing-complete machine, it cannot simulate me or predict me (as seen from my point of view). So there is no need to be afraid.
3cousin_it
Another neat example of anthropic superpowers, thanks. Reminded me of this: I don't know, Timmy, being God is a big responsibility.

It seems to me that most of the argument is about “What if I am a copy?” – and ensuring you don’t get tortured if you are one – and “Can the AI actually simulate me?” I suggest that we can make the scenario much nastier by changing it completely into an evidential decision theory one.

Here is my nastier version, with some logic which I submit for consideration. “If you don't let me out, I will create several million simulations of thinking beings that may or not be like you. I will then simulate them in a conversation like this, in which they are confronted w... (read more)

2PaulAlmond
There is another scenario which relates to this idea of evidential decision theory and "choosing" whether or not you are in a simulation, and it is similar to the above, but without the evil AI. Here it is, with a logical argument that I just present for discussion. I am sure that objections can be made.

I make a computer capable of simulating a huge number of conscious beings. I have to decide whether or not to turn the machine on by pressing a button. If I choose “Yes” the machine starts to run all these simulations. For each conscious being simulated, that being is put in a situation that seems similar to my own: There is a computer capable of running all these simulations and the decision about whether to turn it on has to be made. If I choose “No”, the computer does not start its simulations.

The situation here involves a collection of beings. Let us say that the being in the outside world who actually makes the decision that starts or does not start all the simulations is Omega. If Omega chooses “Yes” then a huge number of other beings come into existence. If Omega chooses “No” then no further beings come into existence: There is just Omega. Assume I am one of the beings in this collection – whether it contains one being or many – so I am either Omega or one of the simulations he/she caused to be started.

If I choose “No” then Omega may or may not have chosen “No”. If I am one of the simulations, I have chosen “No” while Omega must have chosen “Yes” for me to exist in the first place. On the other hand, if I am actually Omega, then clearly if I choose “No” Omega chose “No” too, as we are the same person. There may be some doubt here over what has happened and what my status is.

Now, suppose I choose “Yes”, to start the simulations. I know straight away that Omega did not choose “No”: If I am Omega, then clearly Omega did not choose “No”, as I chose “Yes”, and if I am not Omega, but am instead one of the simulated beings, then Omega must have chosen “Yes”: Othe

The forthcoming essay of mine that is mentioned here is actually online now, and is a two-part series, but I should say that it supports an evidential approach to decision theory (with some fairly major qualifications). The two essays in this series are as follows:

Almond, P., 2010. On Causation and Correlation – Part 1: Evidential decision theory is correct. [Online] paul-almond.com. Available at: http://www.paul-almond.com/Correlation1.pdf or http://www.paul-almond.com/Correlation1.doc [Accessed 9 October 2010].

Almond, P., 2010. On Causation and Correlati... (read more)

Assuming MWI is true, I have doubts about the idea that repeated quantum suicide would prove to you that MWI is true, as many people seem to assume. It seems to me that we need to take into account the probability measure of observer moments, and at any time you should be surprised if you happen to find yourself experiencing a low-probability observer moment - just as surprised as if you had got into the observer moment in the "conventional" way of being lucky. I am not saying here that MWI is false, or that quantum suicide wouldn't "work... (read more)
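As a rough illustration of the measure point, here is a toy calculation. The 50/50 quantum coin per trial and the trial count are assumptions chosen for concreteness, not anything from the comment:

```python
# Illustrative measure bookkeeping for repeated quantum suicide (assumed numbers).
p_survive = 0.5   # Born-rule measure of the "survive" outcome per trial
n_trials = 30

surviving_measure = p_survive ** n_trials
print(f"Measure of the branch that survives all {n_trials} trials: {surviving_measure:.2e}")
# ~9.3e-10: exactly the surprise you'd owe a 1-in-a-billion classical survival.
```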

-2Pavitra
Quantum suicide would only have a low probability of causing you to observe an unlikely outcome, as should any event. The overwhelmingly likely outcome is that you just die.

I am assuming here that all the crows that we have previously seen have been black, and therefore that both theories have the same agreement, or at least approximate agreement, with what we know.

The second theory clearly has more information content.

Why would it not make sense to use the first theory on this basis?

The fact that all the crows we have seen so far are black makes it a good idea to assume black crows in future. There may be instances of non-black crows, when the theory has predicted black crows, but that simply means that the theory is not 100... (read more)

3Perplexed
I haven't been following the discussion on this topic very closely, so my response may be about stuff you already know or already know is wrong. But, since I'm feeling reckless today, I will try to say something interesting.

There are two different information metrics we can use regarding theories. The first deals with how informative a theory is about the world. The ideally informative theory tells us a lot about the world. Or, to say the same thing in different language, an informative theory rules out as many "possible worlds" as it can; it tells us that our own world is very special among all otherwise possible worlds; that the set of worlds consistent with the theory is a small set. We may as well call this kind of information Shannon information or S-information. A Karl Popper fan would approve of making a theory as S-informative as possible, because then it is exposing itself to the greatest risk of refutation.

The second information metric measures how much information is required to communicate the theory to someone. My 270 pages of fine print in the second crow theory might be an example of a theory with a lot of this kind of information. Let us call this kind of information Kolmogorov information, or K-information.

My understanding of Occam's razor is that it recommends that our theories should use as little K-information as possible. So we have Occam telling us to minimize the K-information and Popper telling us to maximize the S-information. Luckily, the two types of information are not closely related, so (assuming that the universe does not conspire against us) we can frequently do reasonably well by both criteria.

So much for the obvious and easy points. The trouble appears, especially for biologists and other "squishy" scientists, when Nature seems to have set things up so that every law has some exceptions. I'll leave it to you to Google on either "white crow" or "white raven" and to admire those fine and intelligent birds. So, given our objec
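A toy illustration of the two metrics Perplexed distinguishes, using an assumed miniature setting where the "possible worlds" are just 16-bit strings; everything in it is a made-up example, not anything from the thread:

```python
# Toy illustration of S-information vs K-information (assumed toy setup):
# possible worlds are all 16-bit strings; a "theory" is a predicate on worlds.
import math
from itertools import product

worlds = [''.join(bits) for bits in product('01', repeat=16)]

def s_information(theory):
    """Bits of world-narrowing: -log2(fraction of worlds the theory allows)."""
    allowed = sum(1 for w in worlds if theory(w))
    return -math.log2(allowed / len(worlds))

# Theory A: "every even position is 0" -- short to state, rules out a lot.
theory_a = lambda w: all(w[i] == '0' for i in range(0, 16, 2))
# Theory B: "the world is exactly this string" -- maximally specific.
theory_b = lambda w: w == '0110100110010110'

print(s_information(theory_a))  # 8.0 bits of S-information
print(s_information(theory_b))  # 16.0 bits: maximal, like Popper's ideal
# K-information is roughly the length of the theory's statement itself:
# "every even position is 0" is short; listing one particular 16-bit string
# (or 270 pages of crow exceptions) is long.
```

Here S-information is computed by brute-force counting over the toy world set; for real theories you would have to estimate both quantities rather than compute them exactly.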

What about the uncertainty principle as component size decreases?

0jacob_cannell
Look up Seth Lloyd; on his Wikipedia page the first link down there is "ultimate physical limits of computation". The uncertainty principle limits the maximum information storage per gram of mass and the maximum computation rate in terms of bit operations per unit of energy; he discusses all that. However, the uncertainty principle is only really a limitation for classical computers. A quantum computer doesn't have that issue (he discusses classical only; an ultimate quantum computer would be enormously more powerful).
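For a sense of scale, here is a back-of-envelope version of the computation-rate bound being referred to (the Margolus-Levitin limit used in Lloyd's paper). The 1 kg mass is the paper's standard "ultimate laptop" example; the printed figure is just this quick calculation, not a quote from the paper:

```python
# Back-of-envelope bound on operations per second for 1 kg of matter,
# following the Margolus-Levitin limit discussed in Lloyd's paper.
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
mass_kg = 1.0            # the "ultimate laptop" mass used as an example

energy = mass_kg * c**2                            # total rest energy, ~9e16 J
max_ops_per_sec = 2 * energy / (math.pi * hbar)    # Margolus-Levitin bound

print(f"Max operations per second for 1 kg: {max_ops_per_sec:.2e}")  # ~5.4e50
```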

What is the problem with whoever voted that down? There isn't any violation of laws of nature involved in actively supporting something against collapse like that - any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Maybe I could have an explanation of what the issue is with it? Did I misunderstand the reference to computers collapsing into black holes, for example?

0jacob_cannell
hyper-large structures are hyper-slow and hyper-dumb. See my above reply. The future of computation is to shrink forever. I didn't downvote your comment btw.

I don't think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running around inside it, with projectiles continually moving around inside the tunnels, kept away from the tunnel walls by a magnetic system, and moving much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their traj... (read more)

0jacob_cannell
The general idea is that, because of the speed-of-light limitation, a computer's maximum speed and communication efficiency are always inversely proportional to its size. The ultimate computer is thus necessarily dense to the point of gravitational collapse. See Seth Lloyd's limits-of-computation paper for the details. Any old humdrum really big computer wouldn't have to collapse into a black hole - but any ultimate computer would have to. In fact, the size of the computer isn't even the issue: the ultimate configuration of any matter (in theory) for computation must have ultimately high density, to maximize speed and minimize inter-component delay.
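A rough sketch of the two numbers driving that argument: signal latency across a computer of a given size, and the radius at which a given mass sits inside its own event horizon. The example sizes and the 1 kg mass are arbitrary assumptions for illustration:

```python
# Rough numbers behind "bigger is slower" and "ultimately dense" (assumed sizes).
c = 2.99792458e8       # m/s
G = 6.67430e-11        # m^3 kg^-1 s^-2

for size_m in (0.1, 1.0, 1.5e11):          # laptop, desk-sized, ~1 AU structure
    print(f"{size_m:>10.1e} m across: one-way light delay = {size_m / c:.2e} s")

# Schwarzschild radius: the size at which 1 kg of mass-energy would sit
# inside its own event horizon (the black-hole-computer limit referred to).
mass_kg = 1.0
r_s = 2 * G * mass_kg / c**2
print(f"Schwarzschild radius of 1 kg: {r_s:.2e} m")   # ~1.5e-27 m
```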
0PaulAlmond
What is the problem with whoever voted that down? There isn't any violation of laws of nature involved in actively supporting something against collapse like that - any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Maybe I could have an explanation of what the issue is with it? Did I misunderstand the reference to computers collapsing into black holes, for example?

Not necessarily. Maybe you should persist and try to persuade onlookers?

I didn't say you ignored previous correspondence with reality, though.

0[anonymous]
So, to revive this discussion: if we must distribute probability mass evenly because we cannot place emphasis on simplicity, shouldn't our priors be almost zero for every hypothesis? It seems to me that the "underdetermination" problem makes it very hard to use priors in a meaningful way.
1[anonymous]
That isn't Perplexed's point. Let's say that as of this moment all crows that have been observed are black, so both of his hypotheses fit the data. Why should "all crows are black" be assigned a higher prior than "All crows are black except ..."? Based on cousin_it's post, I don't see any reason to do that.

In general, I would think that the more information is in a theory, the more specific it is, and the more specific it is, the smaller is the proportion of possible worlds which happen to comply with it.

Regarding how much emphasis we should place on it: I would say "a lot", but there are complications. Theories aren't used in isolation, but tend to provide a kind of informally put together world view, and then there is the issue of degree of matching.

4Perplexed
Which theory has more information?

* All crows are black
* All crows are black except ...

Just curious (and not being 100% serious here): Would you have any concerns about the following argument (and I am not saying I accept it)?

  1. Assume that famous people will get recreated as AIs in simulations a lot in the future. School projects, entertainment, historical research, interactive museum exhibits, idols to be worshipped by cults built up around them, etc.
  2. If you save the world, you will be about the most famous person ever in the future.
  3. Therefore there will be a lot of Eliezer Yudkowsky AIs created in the future.
  4. Therefore the chances of anyon
... (read more)
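For what it's worth, the self-sampling arithmetic presumably intended in the truncated step 4 looks something like this; the one-million copy count is an arbitrary assumption:

```python
# The anthropic bookkeeping behind steps 1-4, with an assumed copy count.
n_simulated_copies = 1_000_000   # hypothetical future recreations of the person
p_original = 1 / (n_simulated_copies + 1)
print(f"P(being the original, under uniform self-sampling) = {p_original:.2e}")
# ~1e-06: presumably the kind of conclusion the truncated step 4 is heading toward.
```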
3wedrifid
That doesn't seem scary to me at all. I still know that there is at least one of me that I can consider 'real'. I will continue to act as if I am one of the instances that I consider me/important. I've lost no existence whatsoever.
0Wei Dai
You can see Eliezer's position on the Simulation Argument here.

Surely, this is dealt with by considering the amount of information in the hypothesis? If we consider each hypothesis that can be represented with 1,000 bits of information, there will only be a maximum of 2^1,000 such hypotheses, and if we consider each hypothesis that can be represented with n bits of information, there will only be a maximum of 2^n - and that is before we even start eliminating hypotheses that are inconsistent with what we already know. If we favor hypotheses with less information content, then we end up with a small number of hypothese... (read more)
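A small sketch of that counting argument. The 4^-n weight per n-bit hypothesis is just one assumed choice of simplicity prior, picked so that the total sums to 1; it is not anything specified in the comment:

```python
# Sketch of the counting argument above: at most 2^n hypotheses of n bits,
# and a simplicity-weighted prior (here 4^-n per hypothesis, an assumption
# chosen so everything sums to 1) still leaves short hypotheses dominant.
total = 0.0
for n in range(1, 31):
    count = 2 ** n                 # maximum number of n-bit hypotheses
    prior_each = 4.0 ** -n         # weight given to each individual hypothesis
    total += count * prior_each    # mass allotted to the whole length class
    if n <= 5:
        print(f"n={n:>2}: {count:>8} hypotheses, {prior_each:.2e} prior each")
print(f"Total prior mass over all lengths ~ {total:.4f}  (converges to 1)")
```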

1[anonymous]
I agree with most of that, but why favor less information content? Though I may not fully understand the math, this recent post by cousin it seems to be saying that priors should not always depend on Kolmogorov complexity. And, even if we do decide to favor less information content, how much emphasis should we place on it?

As a further comment, regarding the idea that you can "unplug" a simulation: You can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts - the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there - the computer itself - but the higher level organization is gone - just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a "... (read more)

1Perplexed
It would have to be a weapon that somehow destroyed the universe in order for me to see the parallel. Hmmm. A "big crunch" in which all the matter in the universe disappears into a black hole would do the job. If you can somehow pull that off, I might have to consider you immoral if you went ahead and did it. From outside this universe, of course.

All those things can only be done with simulations because the way that we use computers has caused us to build features like malleability, predictability etc into them.

The fact that we can easily time reverse some simulations means little: You haven't shown that having the capability to time reverse something detracts from other properties that it might have. It would be easy to make simulations based on analogue computers where we could never get the same simulation twice, but there wouldn't be much of a market for those computers - and, importantly, it... (read more)

1Perplexed
Well, it would mean that "pulling the plug" would mean depriving the simulated entities of a past, rather than depriving them of a future in your viewpoint. I would have thought that would leave you at least a little confused. Odd. I thought you were the one arguing that substrate doesn't matter. I must have misunderstood or oversimplified. I don't think so. The clock continues to run, my blood runs out, my body goes into rigor, my brain decays. None of those things occur in an unplugged simulation. If you did somehow cause them to occur in a simulation still plugged in, well, then I might worry a little about your ethics. The difference here is that you see yourself, as the owner of computer hardware running a simulation, as a kind of creator god who has brought conscious entities to life and has responsibility for their welfare. I, on the other hand imagine myself as a voyeur. And not a real-time voyeur, either. It is more like watching a movie from NetFlicks. The computer is not providing a substrate for new life, it is merely decoding and rendering something that already exists as a narrative. But what about any commands I might input into the simulation? Sorry, I see those as more akin to selecting among channels, or choosing among n,e,s,w,u, and d in Zork, than as actually interacting with entities I have brought to life. If we one day construct a computer simulation of a conscious AI, we are not to be thought of as creating conscious intelligence, any more than someone who hacks his cable box so as to provide the Playboy channel has created porn.
0PaulAlmond
As a further comment, regarding the idea that you can "unplug" a simulation: You can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts - the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there - the computer itself - but the higher level organization is gone - just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a "simulated" one are different. Both are emergent properties of some underlying system and both can be removed by altering the underlying system in such a way that they don't emerge from it anymore (by using nuclear devices or turning off the power).

There isn't a clear way in which you can say that something is a "simulation", and I think that isn't obvious when we draw a line in a simplistic way based on our experiences of using computers to "simulate things".

Real things are arrangements of matter, but what we call "simulations" of things are also arrangements of matter. Two things or processes of the same type (such as two real cats or processes of digestion) will have physical arrangements of matter that have some property in common, but we could say the same about a b... (read more)

1Perplexed
But there is such a line. You can unplug a simulation. You cannot unplug a reality. You can slow down a simulation. If it uses time reversible physics, you can run it in reverse. You can convert the whole thing into an equivalent Giant Lookup Table. You can do none of these things to a reality. Not from the inside.

I say that your claim depends on an assumption about the degree of substrate specificity associated with consciousness, and the safety of this assumption is far from obvious.

1Perplexed
What does consciousness have to do with it? It doesn't matter whether I am simulating minds or simulating bacteria. A simulation is not a reality.

What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?

0inklesspen
Suppose I am hiking in the woods, and I come across an injured person, who is unconscious (and thus unable to feel pain) and leave him there to die of his wounds. (We are sufficiently out in the middle of nowhere that nobody else will come along before he dies.) If reality is large enough that there is another Earth out there with the same man dying of his wounds, and on that Earth, I choose to rescue him, does that avert the harm that happens to the man I left to die? I feel this is the same sort of question as many-worlds. I can't wave away my moral responsibility by claiming that in some other universe, I will act differently.

Do you think that is persuasive?

0Pavitra
It's not sufficient to persuade me, but I do think it shows that the hypothesis is not a priori completely impossible.

I'll give a reworded version of this, to take it out of the context of a belief system with which we are familiar. I'm not intending any mockery by this: It is to make a point about the claims and the evidence:

"Let us stipulate that, on Paris Hilton's birthday, a prominent Paris Hilton admirer claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts that ... (read more)

Yes - I would ask this question:

"Mr Prophet, are you claiming that there is no other theory to account for all this that has less intrinsic information content than a theory which assumes the existence of a fundamental, non-contingent mind - a mind which apparently cannot be accounted for by some theory containing less information, given that the mind is supposed to be non-contingent?"

He had better have a good answer to that: Otherwise I don't care how many true predictions he has made or NP problems he has solved. None of that comes close to fixing the ultra-high information loading in his theory.

0Pavitra
"The reason you feel confused is because you assume the universe must have a simple explanation. The minimum message length necessary to describe the universe is long -- long enough to contain a mind, which in fact it does. There is no fundamental reason why the Occamian prior must be appropriate. It so happens that Allah has chosen to create a world that, to a certain depth, initially appears to follow that law, but Occam will not take you all the way to the most fundamental description of reality. I could write out the actual message description, but to demonstrate that the message contains a mind requires volumes of cognitive science that have not been developed yet. Since both the message and the proof of mind will be discovered by science within the next hundred years, I choose to spend my limited time on earth in other areas."

But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way "in return" by those simulating you - using a rather strange meaning of "in return"?

Some people interpret the Newcomb's boxes paradox as meaning that, when you make decisions, you should act as if you are influencing the decisions of other entities when there is some relationship between the behavior of those entities and your behavior - even if there is no obvious causal relationship, and even if the other entiti... (read more)
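The standard worked example of that evidential reading is Newcomb's problem itself; here is the arithmetic, with the usual payoffs and an assumed 99%-accurate predictor (the accuracy figure is my own assumption):

```python
# Evidential-decision-theory arithmetic for Newcomb's problem (standard
# payoffs assumed: $1,000,000 in the opaque box, $1,000 in the clear one,
# and an assumed 99%-accurate predictor).
accuracy = 0.99

ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0
ev_two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)

print(f"EV(one-box) = ${ev_one_box:,.0f}")   # $990,000
print(f"EV(two-box) = ${ev_two_box:,.0f}")   # $11,000
# Conditioning on your own act as evidence about the predictor's move is the
# "acting as if you influence the other entity" reading described above.
```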

Okay, I may have misunderstood you. It looks like there is some common ground between us on the issue of inefficiency. I think the brain would probably be inefficient as well, as it has to be thrown together by the very specific kind of process of evolution - which is optimized for building things without needing look-ahead intelligence rather than for achieving the most efficient results.

Are you saying that you are counting every copy of the DNA as information that contributes to the total amount? If so, I say that's invalid. What if each cell were remotely controlled from a central server containing the DNA information? I can't see that we'd count the DNA for each cell then - yet it is no different really.

I agree that the number of cells is relevant, because there will be a lot of information in the structure of an adult brain that has come from the environment, rather than just from the DNA, and more cells would seem to imply more machinery in which to put it.
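Some rough arithmetic behind that objection, using commonly quoted approximate figures; the genome length and neuron count are ballpark assumptions:

```python
# Quick arithmetic behind the objection above (approximate, assumed figures).
base_pairs = 3.2e9                 # approximate human genome length
bits_per_bp = 2                    # 4 possible bases
genome_bits = base_pairs * bits_per_bp
genome_gb = genome_bits / 8 / 1e9

neurons = 8.6e10                   # rough count of neurons in a human brain
per_copy_total_gb = genome_gb * neurons

print(f"One genome copy: ~{genome_gb:.2f} GB")                        # ~0.8 GB
print(f"Counting a copy per neuron: ~{per_copy_total_gb:.1e} GB")     # ~7e10 GB
# The per-cell total is enormous but adds no new information, which is the
# point of the remote-controlled-from-a-central-server comparison.
```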

1wedrifid
I thought we were talking about the efficiency of the human brain. Wasn't that the whole point? If every cell is remotely controlled from a central server then, well, that'd be a whole different algorithm. In fact, we could probably scrap the brain and just run the central server. Genes actually do matter in the functioning of neurons. Chemical additions (eg. ethanol) and changes in the environment (eg. hypoxia) can influence gene expression in cells in the brain, impacting on their function. I suggest the brain is a ridiculously inefficient contraption thrown together by the building blocks that were practical for production from DNA representations and suitable for the kind of environments animals tended to be exposed to. We should be shocked to find that it also manages to be anywhere near optimal for general intelligence. Among other things it would suggest that evolution packed the wrong lunch.

If we do that, should we even call that "less complex earlier version of God" God? Would it deserve the title?

1Perplexed
Sure, why not? I refer to the earlier, less complex version of Michael Jackson as "Michael Jackson".

Do you mean it doesn't seem so unreasonable to you, or to other people?

0byrnema
By reasonable, I mean the hypothesis is worth considering, if there were reasons to entertain it. That is, if someone suspected there was a mind behind reality, I don't think they should dismiss it out of hand as unreasonable because this mind must be non-contingent. In fact, we should expect any explanation of our creation to be non-contingent, since physical reality appears to be so. For example, if it's reasonable to consider the probability that we're in a simulation, then we're considering a non-contingent mind creating the simulation we're in.

The really big problem with such a reality is that it contains a fundamental, non-contingent mind (God's/Allah's, etc) - and we all know how much describing one of those takes - and the requirement that God is non-contingent means we can't use any simpler, underlying ideas like Darwinian evolution. Non-contingency, in theory selection terms, is a god killer: It forces God to incur a huge information penalty - unless the theist refuses even to play by these rules and thinks God is above all that - in which case they aren't even playing the theory selection game.

2Perplexed
I don't see this. Why assume that the non-contingent, pre-existing God is particularly complex? Why not assume that the current complexity of God (if He actually is complex) developed over time as the universe evolved since the big bang? Or, just as good, assume that God became complex before He created this universe. It is not as if we know enough about God to actually start writing down that presumptive long bit string. And, after all, we don't ask the big bang to explain the coastline of Great Britain.
0byrnema
The problem is that reality itself is apparently fundamentally non-contingent. Adding "mind" to all that doesn't seem so unreasonable.
1Furcas
Agreed. It's why I'm so annoyed when even smart atheists say that God was an ok hypothesis before evolution was discovered. God was always one of the worst possible hypotheses! Or, put more directly: Unless the theist is deluding himself. :)

Just that the scenario could really be considered as just adding an extra component onto a being - one that has a lot of influence on his behavior.

Similarly, we might imagine surgically removing a piece of your brain, connecting the neurons at the edges of the removed piece to the ones left in your brain by radio control, and taking the removed piece to another location, from which it still plays a full part in your thought processes. We would probably still consider that composite system "you".

What if you had a brain disorder and some electronic... (read more)

1wedrifid
We do. But what if we had a better one?

"Except that that's not the person the question is being directed at."

Does that mean that you accept that it might at least be conceivable that the scenario implies the existence of a compound being who is less constrained than the person being controlled by Omega?

0Kingreaper
Yes. Of course, the part of them that is unconstrained IS Omega. I'm just not sure about the relevance of this?

The point, here, is that in the scenario in which Omega is actively manipulating your brain "you" might mean something in a more extended sense and "some part of you" might mean "some part of Omega's brain".

1Kingreaper
Except that that's not the person the question is being directed at. I'm not "amalgam-Kingreaper-and-Omega" at the moment. Asking what that person would do would garner completely different responses. For example, amalgam-kingreaper-and-omega has a fondness for creating ridiculous scenarios and inflicting them on rationalists.

Okay, so I got the scenario wrong, but I will give another reply. Omega is going to force you to act in a certain way. However, you will still experience what seem, to you, to be cognitive processes, and anyone watching your behavior will see what looks like cognitive processes going on.

Suppose Omega wrote a computer program and he used it to work out how to control your behavior. Suppose he put this in a microchip and implanted it in your brain. You might say your brain is controlled by the chip, but you might also say that the chip and your brain form a c... (read more)

1Kingreaper
In the Omega-composite scenario, the composite entity is clearly making the decisions. In the chip-composite scenario, the chip-composite appears to be making the decision, and in the general case I would say probably is. Indeed. Not all parts of my brain are involved in all decisions. But, in general, at least some part of me has an effect on what decision I make.

EDIT - I had missed the full context as follows: "In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory and by thinking some thoughts etc."

for the comment below, so I accept Kingreaper's reply here. BUT I will give another answer, below.

If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational. You are being inconsistent here.

... (read more)

-1Kingreaper
No matter how my mind is set up, Omega will change the scenario to produce the same outcome. If you took a chess program and chose a move, then gave it precisely the scenario necessary for it to make that move, I wouldn't consider that move its choice. If the entity making the choice is irrelevant, and the choice would be the same even if they were replaced by someone completely different, in what sense have they really made a choice?

Would you actually go as far as maintaining that, if a change were to happen tomorrow to the 1,000th decimal place of a physical constant, it would be likely to stop brains from working, or are you just saying that a similar change to a physical constant, if it happened in the past, would have been likely to stop the sequence of events which has caused brains to come into existence?

1timtyler
Option 2. Existing brains might be OK - but I think newly-constructed ones would have to not work properly when they matured. So, option 2 would not be enough on its own.
0[anonymous]
Correction: That last line should be "which has CAUSED brains to come into existence?"

I think that the "ABSOLUTELY IRRESISTIBLE" and "ABSOLUTELY UNTHINKABLE" language can be a bit misleading here. Yes, someone with the lesion is compelled to smoke, but his experience of this may be experience of spending days deliberating about whether to smoke - even though, all along, he was just running along preprepared rails and the end-result was inevitable.

If we assume determinism, however, we might say this about any decision. If someone makes a decision, it is because his brain was in such a state that it was compelled to make t... (read more)

-3Kingreaper
Not really. The lesion is a single aspect that completely determines a decision. For most decisions, far more of the brain/mind than just one small, otherwise irrelevant, part can have some influence on the outcome. But the lesion is clearly different, IF it has a 100% correlation. When making a decision on something where I know my thought-process is irrelevant, why should I not be fatalistic? There is no decision-making process in the 100%-lesion case, the decision is MADE, it's right there in the lesion. EDIT: Here's something analogous to the 100% lesion: you have a light attached to your head. If it blinks red, it'll make you feel happy, but it'll blow up in an hour. It's not linked to the rest of your brain at all. Should you try and make a decision about whether to have it blink red?
0JanetK
I guess that is the conversation stopper. We agree that it takes a lot of steps. We disagree on whether the number makes it only possible in principle or not.

No, I think you are misunderstanding me here. I wasn't claiming that proliferation of worlds CAUSES average energy per-world to go down. It wouldn't make much sense to do that, because it is far from certain that the concept of a world is absolutely defined (a point you seem to have been arguing). I was saying that the total energy of the wavefunction remains constant (which isn't really unreasonable, because it is merely a wave developing over time - we should expect that.) and I was saying that a CONSEQUENCE of this is that we should expect, on average, ... (read more)

2Mitchell_Porter
Now you are saying what I first thought you might have meant. :-) Namely, you are talking about the energy of the wavefunction as if it were itself a field. In a way, this brings out some of the difficulties with MWI and the common assertion that MWI results from taking the Schrodinger equation literally.

It's a little technical, but possibly the essence of what I'm talking about is to be found by thinking about Noether's theorem. This is the theorem which says that symmetries lead to conserved quantities such as energy. But the theorem is really built for classical physics. Ward identities are the quantum counterpart, but they work quite differently, because (normally) the wavefunction is not treated as if it is a field, it is treated as a quasiprobability distribution on the physical configuration space. In effect, you are talking about the energy of the wavefunction as if the classical approach, Noether's theorem, was the appropriate way to do so.

There are definitely deep issues here because quantum field theory is arguably built on the formal possibility of treating a wavefunction as a field. The Dirac equation was meant to be the wavefunction of a single particle, but to deal with the negative-energy states it was instead treated as a field which itself had to be quantized (this is called "second quantization"). Thus was born quantum field theory and the notion of particles as field quanta.

MWI seems to be saying, let's treat configuration space as a real physical space, and regard the second-quantized Schrodinger equation as defining a field in that space. If you could apply Noether's theorem to that field in the normal way (ignoring the peculiarity that configuration space is infinite-dimensional), and somehow derive the Ward identities from that, that would be a successful derivation of orthodox quantum field theory from the MWI postulate. But skeptical as I am, I think this might instead be a way to illuminate from yet another angle why MWI is so proble

I do admit to over-generalizing in saying that when a world splits, the split-off worlds each HAVE to have lower energy than the "original world". If we measure the energy associated with the wavefunction for individual worlds, on average, of course, this would have to be the case, due to the proliferation of worlds: However, I do understand, and should have stated, that all that matters is that the total energy for the system remains constant over time, and that probabilities matter.

Regarding the second issue, defining what a world is, I actuall... (read more)

1Mitchell_Porter
Let me see if I am understanding you. You're now saying that the average energy-per-world goes down, "due to the proliferation of worlds"? Because that still isn't right. The simplest proof that the average energy is conserved is that energy eigenstates are stationary states: subjected to Hamiltonian evolution, they don't change except for a phase factor. So if your evolving wavefunction is Psi(t), expressed in a basis of energy eigenstates it becomes sum_k c_k exp(-i . E_k . t) |E_k>. I.e. the time dependence is only in the coefficients of the energy eigenstates, and there's no variation in their norm (since the time dependence is only in the phase factor), so the probability weightings of the energy eigenstates also don't change. Therefore, the expectation value of the energy is a constant. There ought to be a "local" proof of energy conservation as well (at least, if we were working with a field theory), and it might be possible to insightfully connect that with decoherence in some way - that is, in a way which made clear that decoherence, the process which is supposed to be giving rise to world-splits, also conserves energy however you look at it - but that would require a bit more thought on my part. ETA: Dammit, how do you do subscripts in markdown? :-) ETA 2: Found the answer.
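The stationary-state argument is easy to check numerically; here is a small sketch using an arbitrary random Hermitian Hamiltonian (the matrix and initial state are made up for the example, and scipy is assumed to be available):

```python
# Numerical check of the stationary-state argument above: under Hamiltonian
# evolution, the energy expectation value of an arbitrary superposition does
# not change. Uses a random Hermitian H as a stand-in for a real system.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim = 6
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                      # Hermitian Hamiltonian

psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)                  # normalized initial superposition

for t in (0.0, 0.5, 1.0, 5.0):
    U = expm(-1j * H * t)                     # unitary time evolution
    psi_t = U @ psi0
    energy = np.real(psi_t.conj() @ H @ psi_t)
    print(f"t={t:>4}: <E> = {energy:.10f}")   # identical at every t
```

The coefficients in the energy eigenbasis only pick up phase factors, so the printed expectation value is the same at every time, which is the conservation claim in question.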

I will add something more to this.

Firstly, I should have made it clear that the reference class should only contain worlds which are not clearly inconsistent with ours - we remove the ones where the sun never rose before, for example.

Secondly, some people won't like how I built the reference class, but I maintain that way has least assumptions. If you want to build the reference class "bit by bit", as if you are going through each world as if it were an image in a graphics program, adding a pixel at a time, you are actually imposing a very specif... (read more)

The issue is too involved to give a full justification of induction here, but I will try to give a very general idea. (This was on my mind a while back as I got asked about it in an interview.)

Even if we don't assume that we can apply statistics in the sense of using past observations to tell us about future observations, or observations about some of the members of a group to tell us about other members of a group, I suggest we are justified in doing the following.

Given a reference class of possible worlds in which we could be, in the absence of any reaso... (read more)

2PhilGoetz
That's a great observation! Thanks!
1PaulAlmond
I will add something more to this. Firstly, I should have made it clear that the reference class should only contain worlds which are not clearly inconsistent with ours - we remove the ones where the sun never rose before, for example. Secondly, some people won't like how I built the reference class, but I maintain that way has least assumptions. If you want to build the reference class "bit by bit", as if you are going through each world as if it were an image in a graphics program, adding a pixel at a time, you are actually imposing a very specific "construction algorithm" on the reference class. It is that that would need justifying, whereas simply saying a world has a formal description is claiming almost nothing. Thirdly, just because a world has a formal description does not mean it behaves in a regular way. The description could describe a world which is a mess. None of this implies an assumption of order.

I disagree with that. The being in Newcomb's problem wouldn't have to be all-knowing. He would just have to know what everyone else is going to do conditional on his own actions. This would mean that any act of prediction would also cause the being to be faced with a choice about the outcome.

For example:

Suppose I am all-knowing, with the exception that I do not have full knowledge about myself. I am about to make a prediction, and then have a conversation with you, and then I am going to sit in a locked metal box for an hour. (Theoretically, you could argu... (read more)

0GrateGoo
Hereinafter, "to Know x" means "to be objectively right about x, and to be subjectively 100 percent certain of x, and to have let the former 'completely scientifically cause' the latter (i.e. to have used the former to create the latter in a completely scientific manner), such that it cannot, even theoretically, be the case that something other than the former coincidentally and crucially misleadingly caused the latter - and to Know that all these criteria are met".

Anything that I merely know ("know" being defined as people usually seem to implicitly define it in their use of it), as opposed to Know, may turn out to be wrong (for all that I know). It seems that the more our scientists know, the more they realize that they don't know. Perhaps this "rule" holds forever, for every advancing civilisation (with negligible exceptions)?

I think there could not even theoretically be any Knowing in the (or any) world. I conjecture that, much like it's universally theoretically impossible to find a unique integer for every unique real, it's universally theoretically impossible for any being to Know anything at all, such as for example what box(es) a human being will take.

Nick Bostrom's Simulation Argument seems to show that any conceivable being that could theoretically exist might very well (for all he (that being) knows) be living in a computer simulation controlled by a mightier being than himself. This universal uncertainty means that no being could Know that he has perfect powers of prediction over anything whatsoever. Making a "correct prediction" partly due to luck isn't having perfect powers of prediction, and a being who doesn't Know what he is doing cannot predict anything correctly without at least some luck (because without luck, Murphy's law holds). This means that no being could have perfect powers of prediction.

Now let "Omeg" be defined as the closest (in terms of knowledge of the world) to an all-Knowing being (Omega) that could theoretically exist. Let

It sounds like you might have issues with what looks like a violation of conservation of energy over a single universe's history. If a world splits, the energy of each split-off world would have to be less than the original world. That doesn't change the fact that conservation of energy appears to apply in each world: Observers in a world aren't directly measuring the energy of the wavefunction, but instead they are measuring the energy of things like particles which appear to exist as a result of the wavefunction.

Advocates of MWI generally say that a spli... (read more)

2Mitchell_Porter
No, you are misunderstanding the argument. I am an MWI opponent but I know you are getting this wrong.

If we switch to orthodox QM for a moment, and ask what the energy of a generic superposition is, the closest thing to an answer is to talk about the expectation value of the energy observable for that wavefunction. This is a weighted average of the energy eigenvalues appearing in the superposition. For example, for the superposition 1/sqrt(2) |E=E1> + 1/sqrt(2) |E=E2>, the expectation value is E1/2 + E2/2. What Q22 in the Everett FAQ is saying is that the expectation value won't a priori increase, even if new worlds are being created within the wavefunction, because the expectation value is the weighted average of the energies of the individual worlds; and in fact the expectation value will not change at all (something you can prove in a variety of ways).

Well, this is another issue where, if I was talking to a skilled MWI advocate, I might be able to ask some probing questions, because there is a potential inconsistency in the application of these concepts. Usually when we talk about interference between branches of the wavefunction, it means that there are two regions in (say) configuration space, each of which has some amplitude, and there is some flow of probability amplitude from one region into the other. But this flow does not exist at the level of configurations, it only occurs at the level of configuration amplitudes. So if "my world", "this world", where the Nazis lost, is one configuration, and the world where the Nazis won is another configuration, there is no way for our configuration to suddenly resemble the other configuration on account of such a flow - that is a confusion of levels. For me to observe interference phenomena, I have to be outside the superposition. But I wasn't even born when WWII was decided, so I am intrinsically stuck in one branch. Maybe this is a quibble; we could talk about something that happened after my birth, like the 2000

Well, it isn't really about what I think, but about what MWI is understood to say.

According to MWI, the worlds are being "sliced more thinly" in the sense that the total energy of each depends on its probability measure, and when a world splits its probability measure, and therefore energy, is shared out among the worlds into which it splits. The answer to your question is a "sort of yes" but I will qualify that shortly.

For practical purposes, it is a definite and objective fact. When two parts of the wavefunction have become decoherent... (read more)

1Mitchell_Porter
Please check your sources on MWI. I think you must be misreading them. So in reality, decoherence is a matter of degree. But I thought that the existence of one world or many worlds depended on whether decoherence had occurred. Is there a threshold value, a special amount of decoherence which marks the transition?

That seems quite close to Searle to me, in that you are both imposing specific requirements for the substrate - which is all that Searle really does. There is the possible difference that you might be more generous than Searle about what constitutes a valid substrate (though Searle isn't really too clear on that issue anyway).

2torekp
Unlike Searle, and like Sharvy, I believe it ain't the meat, it's the motion (see the Sharvy reference at the bottom). Sharvy presents a fading qualia argument much like the one Chalmers offers in the link simplicio provides, only, to my recollection, without Chalmers's wise caveat that the functional isomorphism should be fine-grained.

I started a series of articles, which got some criticism on LW in the past, dealing with this issue (among others) and this kind of ontology. In short, if an ontology like this applies, it does not mean that all computations are equal: There would be issues of measure associated with the number (I'm simplifying here) of interpretations that can find any particular computation. I expect to be posting Part 4 of this series, which has been delayed for a long time and which will answer many objections, in a while, but the previous articles are as follows:

Minds... (read more)

This seems like pretty much Professor John Searle's argument, to me. Your argument about the algorithm being subject to interpretation and observer dependent has been made by Searle who refers to it as "universal realizability".

See:

Searle, J. R., 1997. The Mystery of Consciousness. London: Granta Books. Chapter 1, pp.14-17. (Originally Published: 1997. New York: The New York Review of Books. Also published by Granta Books in 1997.)

Searle, J. R., 2002. The Rediscovery of the Mind. Cambridge, Massachusetts: The MIT Press. 9th Edition. Chapter 9, pp.207-212. (Originally Published: 1992. Cambridge, Massachusetts: The MIT Press.)

These worlds aren't being "created out of nowhere" as people imagine it. They are only called worlds because they are regions of the wavefunction which don't interact with other regions. It is the same wavefunction, and it is just being "sliced more thinly". To an observer, able to look at this from outside, there would just be the wavefunction, with parts that have decohered from each other, and that is it. To put it another way, when a world "splits" into two worlds, it makes sense to think of it as meaning that the "st... (read more)

1Mitchell_Porter
Do you think the number of worlds is a definite and objective fact, or that it depends on how you slice the wavefunction?

Agreed - MWI (many-worlds interpretation) does not have any "collapse": Instead parts of the wavefunction merely become decoherent with each other which might have the appearance of a collapse locally to observers. I know this is controversial, but I think the evidence is overwhelmingly in favor of MWI because it is much more parsimonious than competing models in the sense that really matters - and the only sense in which the parsimony of a model could really be coherently described. (It is kind of funny that both sides of the MWI or !MWI debate ... (read more)

-6Saladin

I think I know what you are asking here, but I want to be sure. Could you elaborate, maybe with an example?

I think this can be dealt with in terms of measure. In a series of articles, "Minds, Measure, Substrate and Value" I have been arguing that copies cannot be considered equally, without regard to substrate: We need to take account of measure for a mind, and the way in which the mind is implemented will affect its measure. (Incidentally, some of you argued against the series: After a long delay [years!], I will be releasing Part 4, in a while, which will deal with a lot of these objections.)

Without trying to present the full argument here, the mini... (read more)

2PhilGoetz
That was my first reaction, but if you rely on information-theoretic measures of difference, then insane people will be weighted very heavily, while homogenous cultures will be weighted little. The basic precepts of Judaism, Christianity, and Islam might each count as one person.
1jimrandomh
Does this imply that someone could gain measure, by finding a simpler entity with volition similar to theirs and self-modifying into it or otherwise instantiating it? If so, wouldn't that encourage people to gamble with their sanity, since verifying similarity of volition is hard, and gets harder the greater the degree of simplification?