Eitan_Zohar comments on A resolution to the Doomsday Argument. - Less Wrong

-2 Post author: Eitan_Zohar 24 May 2015 05:58PM


Comment author: Eitan_Zohar 24 May 2015 08:03:07PM *  2 points [-]

So, we have a choice: to deny a very counterintuitive statement or to deny causality.

I'm not 'denying causality', I'm pointing out a way around it.

Comment author: estimator 24 May 2015 08:15:01PM *  0 points [-]

You say that one can change A by changing B, while there is no causal mechanism by which B can influence A. That's denying causality.

Well, if you don't like the term 'denying causality', feel free to replace it, but the point holds anyway.

In my system of prior probabilities, finding a way around causality sits somewhere near finding a way around the law of conservation of energy. No way, unless there is overwhelming evidence.

Comment author: ike 26 May 2015 02:45:50PM 2 points [-]

You say that one can change A by changing B, while there is no causal mechanism by which B can influence A. That's denying causality.

Do you accept in theory that, provided MWI (the many-worlds interpretation) is true, one can win a quantum lottery by committing suicide if one does not win? If yes, is that not a similar violation of causality? If no, why not? What's your model of what would happen?

Comment author: Luke_A_Somers 17 July 2015 09:06:04PM 0 points [-]

Under MWI, you can win a lottery just by entering it; committing suicide is not necessary. Of course, almost all of you will lose.

All you're doing in quantum lotteries is deciding you really, REALLY don't care about the case where you lose, to the point that you want to not experience those branches at all, to the point that you'd kill yourself if you find yourself stuck in them.

That's the causality involved. You haven't gone out and changed the universe in any way (other than almost certainly killing yourself).

Comment author: ike 19 July 2015 03:31:54PM 0 points [-]

Under MWI, you can win a lottery just by entering it; committing suicide is not necessary. Of course, almost all of you will lose.

Replace "win a lottery" with "have a subjective probability of ~1 of winning a lottery".

All you're doing in quantum lotteries is deciding you really, REALLY don't care about the case where you lose, to the point that you want to not experience those branches at all, to the point that you'd kill yourself if you find yourself stuck in them.

That's wrong. If I found myself stuck in one, I would prefer to live; that's why I need a very strong precommitment, enforced by something I can't turn off.

You haven't gone out and changed the universe in any way (other than almost certainly killing yourself).

Here's where we differ; I identify every copy of me as "me", and deny any meaningful sense in which I can talk about which one "I" am before anything has diverged (or, in fact, before I have knowledge that excludes some of me). So there's no sense in which I "might" die: some of me certainly will, and some won't, and the end state of affairs is better given some conditions (like selfishness, no pain on death, and lots of other technicalities).

Comment author: Luke_A_Somers 21 July 2015 08:17:44PM *  0 points [-]

That's wrong. If I found myself stuck in one, I would prefer to live; that's why I need a very strong precommitment, enforced by something I can't turn off.

I mean, you-now would prefer to kill you-then.

As for your last paragraph, the framing was from a global point of view, and probability in this case is the deterministic, Quantum-Measure-based sort.

Comment author: ike 21 July 2015 09:07:05PM 0 points [-]

I mean, you-now would prefer to kill you-then

Not really. I prefer to kill my future self only because I anticipate living on in other selves; this can't accurately be described as "you really, REALLY don't care about the case where you lose, to the point that you want to not experience those branches at all, to the point that you'd kill yourself if you find yourself stuck in them."

I do care; what I don't care about is my measure between two measures of the same cardinality. If there was a chance of my being stuck in one world and not living on anywhere else, I wouldn't (now) want to kill myself in that future.

As for your last paragraph, the framing was from a global point of view, and probability in this case is the deterministic, Quantum-Measure-based sort.

Ok, we sort of agree, then; but then your claim of "You haven't gone out and changed the universe in any way" seems weak. If I can change my subjective probability of experiencing X, and the state of the universe that's not me doesn't factor into my utility except insofar as it affects me, why should I care whether I'm "changing the universe"?

(To clarify the "I care" claim further; I'm basically being paid in one branch to kill myself in another branch. I value that payment more than I disvalue killing myself in the second branch; that does not necessarily mean that I don't value the second branch at all, just less than the reward in branch 1)

Comment author: Jiro 26 May 2015 04:27:30PM 0 points [-]

Saying that "one can" do something in MWI is misleading because there are many "ones". If you don't commit suicide, there is a "one" who won and other "ones" who lost; if you do commit suicide, there is a "one" who won and the others are dead. Committing suicide doesn't cause you to win because you would have won in one of the branches in either situation.

Comment author: ike 26 May 2015 05:26:41PM 0 points [-]

Well, OP's argument would have many "ones" as well. Every simulated copy of me should count at least as much as it counts in MWI.

Comment author: estimator 26 May 2015 06:35:43PM -1 points [-]

I don't have a model which I believe with certainty, and I think it is a mistake to have one, unless you know sufficiently more than modern physics knows.

Why do you think that your consciousness always moves to the branch where you live, rather than at random? Quantum lotteries, quantum immortality and the like require not just MWI, but MWI with a bunch of additional assumptions. And if some flavor of QM interpretation violates causality, that is more an argument against such an interpretation than against causality.

The thing I don't like about this way of winning quantum lotteries is that it requires non-local physical laws. Imagine that a machine shoots you iff some condition is not fulfilled; you say that you will therefore find yourself in the branch where the condition is fulfilled. But the machine won't kill you instantly, so the choice of branch at time t must be made based on what happens at time t + dt.

Comment author: ike 26 May 2015 08:47:28PM 2 points [-]

I don't have a model which I believe with certainty, and I think it is a mistake to have one, unless you know sufficiently more than modern physics knows.

Note that I said provided MWI is true.

Why do you think that your consciousness always moves to the branch where you live, but not at random?

I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in the worlds where you didn't kill yourself. I'm not sure what else could happen; obviously you can't exist in the worlds you're dead in.

The thing I don't like about this way of winning quantum lotteries is that it requires non-local physical laws.

I don't see why; MWI doesn't violate locality.

Imagine that a machine shoots you iff some condition is not fulfilled; you say that you will therefore find yourself in the branch where the condition is fulfilled. But the machine won't kill you instantly, so the choice of branch at time t must be made based on what happens at time t + dt.

You have a point; my scenario is different from that, but I guess it isn't obvious. So let me restate my quantum suicide lottery in more detail. The general case I imagine is as follows: I go to sleep at time t. My computer checks some quantum data, and compares it to n. If it doesn't equal n, it kills me. Say I die at time t+dt in that case. If I don't die, it wakes me.

So at time t, the data is already determined from the computer's perspective, but not from mine. At t+dt, the data is determined from my perspective, as I've awoken. In the time between t and t+dt, it's meaningless to ask what "branch" I'm in; there's no test I can do to determine that in theory, as I only awaken if I'm in the data=n branch. It's meaningful to other people, but not to me. I don't see anywhere that requires non-local laws in this scenario.
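The setup just described can be sketched as a toy Monte Carlo simulation (a hypothetical illustration only: the branch count, the range of the quantum data, and the winning value n below are all made up):

```python
import random

# Toy sketch of the quantum-lottery setup: each branch gets independent
# quantum data, the sleeper is killed in every branch where data != n,
# so every observer who wakes up sees data == n.
random.seed(0)
N_BRANCHES = 100_000   # hypothetical number of branches
WINNING_N = 7          # hypothetical winning value of the quantum data

branches = [random.randrange(10) for _ in range(N_BRANCHES)]
survivors = [d for d in branches if d == WINNING_N]  # branches where the sleeper wakes

# Objective fraction of branches in which the sleeper survives: about 1/10.
print(len(survivors) / N_BRANCHES)
# Subjective view, conditioned on waking at all: every survivor sees a win.
print(all(d == WINNING_N for d in survivors))  # True
```

Note that nothing in the sketch depends on the order of events: which branches contain a waking observer is fixed by the data alone, with no reference to later times.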

Comment author: estimator 26 May 2015 09:53:26PM *  -1 points [-]

I don't have a model which I believe with certainty even provided MWI is true.

I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in the worlds where you didn't kill yourself. I'm not sure what else could happen; obviously you can't exist in the worlds you're dead in.

What happens if you die in a non-MWI world? Pretty much the same happens in the case of MWI with random branch choice: if your random branch happens to be a bad one, you cease to exist, and maybe some of your clones in other branches are still alive.

So at time t, the data is already determined from the computer's perspective, but not from mine. At t+dt, the data is determined from my perspective, as I've awoken. In the time between t and t+dt, it's meaningless to ask what "branch" I'm in; there's no test I can do to determine that in theory, as I only awaken if I'm in the data=n branch. It's meaningful to other people, but not to me. I don't see anywhere that requires non-local laws in this scenario.

Non-locality is required if you claim that you (that copy of you which has your consciousness) will always wake up. Otherwise, it's just a twisted version of Russian roulette and has nothing to do with quantum mechanics.

At time t, the computer either shoots you, or not. At time t + dt, its bullet kills you (or not). So you say that at time t you will go to the branch where the computer doesn't kill you. But such a choice of a branch requires information at time t + dt (whether you are alive or not in that branch). So, physical laws have to perform a look-ahead in time to decide in which Everett branch they should put your consciousness.

Now, imagine that your (quantum) computer generates a random number n from the Poisson distribution. Then, it will kill you after n days. Now n = ... what? Well, thanks to thermodynamics, your (and the computer's) lifespan is limited, so hopefully it will be a finite number -- but if the universe allowed an unbounded lifespan, there would be a logical contradiction in the physical laws. Anyway, you see that the look-ahead in time required after the random number generation can be arbitrarily large. That's what I mean by non-locality here.
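The unbounded-delay point can be illustrated with a small sketch (illustrative only; the rate lam = 5 and the sampling routine are arbitrary choices, using Knuth's multiplication algorithm for Poisson draws):

```python
import math
import random

# Illustration of the unbounded look-ahead: if the kill delay n is drawn
# from a Poisson distribution, its support is unbounded, so any branch
# selection keyed to survival would need to look arbitrarily far ahead.
random.seed(1)

def poisson_sample(lam):
    """One Poisson draw via Knuth's multiplication algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

delays = [poisson_sample(5) for _ in range(10_000)]  # n in days, one per branch
print(max(delays))  # the sample maximum keeps growing with sample size
```

However large a delay you have seen so far, a larger one has nonzero probability, which is the sense in which the required look-ahead has no finite bound.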

Comment author: ike 27 May 2015 01:56:18PM 2 points [-]

Non-locality is required if you claim that you (that copy of you which has your consciousness)

I deny that this is meaningful. If there are two copies of me, both "have my consciousness". I fail to see any sense in which my consciousness must move to only one copy.

So you say that at time t you will go to the branch where the computer doesn't kill you.

I do not claim that. I claim that I exist in both branches, up until one of them no longer contains my consciousness, because I'm dead, and then I only exist in one branch. (In fact, I can consider my sleeping self unconscious, in which case no branches contained my consciousness until I woke up.)

Now, imagine that your (quantum) computer generates a random number n from the Poisson distribution. Then, it will kill you after n days.

Then many copies of my consciousness will exist, some slowly dying each day.

So, physical laws have to perform a look-ahead in time to decide in which Everett branch they should put your consciousness.

I don't have any look-ahead required in my model at all.

Can you dissolve consciousness? What test can be performed to see which branch my consciousness has moved to, that doesn't require me to be awake, nor have knowledge of the random data?

Comment author: estimator 27 May 2015 02:05:52PM *  0 points [-]

OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers? I don't see how simultaneously being in different branches makes sense from the qualia viewpoint.

Also, let's remove sleeping from the thought experiment. It is an unnecessary complication; by the way, I don't think that consciousness flow is interrupted while sleeping.

And no, I'm currently unable to dissolve the hard problem of consciousness.

Comment author: ike 27 May 2015 02:51:59PM 2 points [-]

OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers?

No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout. Until my brain has any info about what the data is, my consciousness hasn't forked yet. The fact that the info is "out there" in this world is irrelevant; the opposite data is also out there "in this world". As long as I don't know, and both actually exist (although that requirement is arguably also irrelevant to the anthropic math), I exist in both worlds. In other words, both copies will be "continuations" of me. If one suddenly disappears, then only the other "continues" me.

Also, let's remove sleeping from the thought experiment. It is an unnecessary complication; by the way, I don't think that consciousness flow is interrupted while sleeping.

There's a reason I included it. I'm more confident that the outcome will be good with it than without. In particular, if I'm not sleeping when killed, I expect to experience death.

But the fact that you think it's not interrupted when sleeping suggests we're using different definitions. If it's because of dreaming, then specify that the person isn't dreaming. The main point is that I won't feel pain upon dying (or in fact, won't feel anything before dying), so putting me under general anesthesia and ensuring the death would be before I begin to feel anything should be enough, in that case.

And no, I'm currently unable to dissolve the hard problem of consciousness.

I meant just enough that I could understand what you mean when you claim that consciousness must go down only one path.

Comment author: estimator 27 May 2015 03:27:14PM -1 points [-]

I think the problem with consciousness/qualia discussions is that we don't have a good set of terms to describe such phenomena, while we are unable to reduce them to other terms.

No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout.

I mean, one of the copies would be you (and share your qualia), while the others are forks of you. That's because I think that a) your consciousness is preserved by the branching process and b) you don't experience living in different branches, at least after you have observed their difference. So, if the quantum lottery works while you're awake, it requires look-ahead in time.

Now about sleeping. My best guess about consciousness is that we are sort-of conscious even while in non-REM sleep phases and under anesthesia; and halting (almost) all electric activity in the brain doesn't preserve consciousness. That's derived from the requirement of continuity of experience, which I find plausible. But that's probably irrelevant to our discussion.

As far as I understand, in your model one's conscious experience is halted during the quantum lottery (i.e. sleep is a kind of temporary death), and then it continues in one of the surviving copies. Is this a correct description of your model?

Comment author: Eitan_Zohar 24 May 2015 08:31:57PM *  1 point [-]

You say that one can change A by changing B, while there is no causal mechanism by which B can influence A. That's denying causality.

That's finding a loophole in causality, and the distinction is certainly worth making. The DA is only a product of perspective; it isn't a 'real' thing that exists.

Comment author: estimator 24 May 2015 08:47:42PM *  1 point [-]

Whether the distinction is worth making or not, it is irrelevant to my point, since both are very unlikely and therefore require much more evidence than we have now.

I assume that your idea is to prevent doomsday or make it less likely. If not, why bother with all these simulations?

Comment author: Eitan_Zohar 25 May 2015 09:30:50AM *  -1 points [-]

Whether the distinction is worth making or not, it is irrelevant to my point, since both are very unlikely and therefore require much more evidence than we have now.

Look, does this seem like solid reasoning to you? Because your arguments are beginning to sound quite like it.

I am not the first LessWronger to think of a causality-evading idea, btw.

Comment author: estimator 25 May 2015 10:14:14AM 2 points [-]

Nope: there is sufficient evidence that the Earth is not flat, but there isn't sufficient evidence that causality doesn't exist. That is the difference. There are some counterintuitive theories, like QM or relativity or, maybe, the round Earth, but all of them have been supported by a lot of evidence; there were actual experiments to prove them, etc. And these theories appeared because old theories failed to explain existing evidence.

Can you name a single real-world example where causality doesn't work?

And you're not the first LessWronger to think that if your idea sounds clever enough, you don't actually need any evidence to prove it.

Comment author: Eitan_Zohar 25 May 2015 10:47:52AM *  1 point [-]

"Species can't evolve, that violates thermodynamics! We have too much evidence for thermodynamics to just toss it out the window."

Just realized how closely your argument mirrors this.

Comment author: estimator 25 May 2015 11:00:20AM *  1 point [-]

Er.. what? Evolution doesn't violate thermodynamics.

Bad analogies don't count as solid arguments, either. The difference between the evolution/thermodynamics example and your case is that the relation between thermodynamics and evolution is complicated, and in fact there is no contradiction; while it's evident that your idea works only if you can acausally influence something. That's much closer to a perpetual motion machine (a direct contradiction) than to evolution (an indirect, questionable contradiction which turns out to be false).

Comment author: Eitan_Zohar 25 May 2015 11:23:53AM *  1 point [-]

Look, I explained the details in the OP. Create a lot of Earths and hope that yours turns out to be one of them. That already violates causality, according to your standards. I don't see much of a way to make it clearer.

Comment author: bortels 01 June 2015 09:34:53PM 0 points [-]

Ah - that's much clearer than your OP.

FWIW - I suspect it violates causality under nearly everyone's standards.

You asked if your proposal was plausible. Unless you can postulate some means to handle that causality issue, I would have to say the answer is "no".

So - you are suggesting that if the AI generates enough simulations of the "prime" reality with enough fidelity, then the chances that a given observer is in a sim approach 1, because of the sheer quantity of them. Correct?

If so - the flaw lies in orders of infinity. For every way you can simulate a world, you can incorrectly simulate it an infinite number of other ways. So - if you are in a sim, the chance approaches unity that you are NOT in a simulation of the higher-level reality simulating you. And if it's not the same, you have no causality violation, because the first sim is not actually the same as reality; it just seems to be from the POV of an inhabitant.

The whole thing seems a bit silly anyway - not your argument, but the sim argument - from a physics POV. Unless we are actually in a sim right now, and our understanding of physics is fundamentally broken, doing what's suggested would take more time and energy than has ever existed or will ever exist, and it is still mathematically impossible (another orders-of-infinity thing).

Comment author: ThisSpaceAvailable 29 May 2015 05:10:30AM -1 points [-]

Look, does this seem like solid reasoning to you? Because your arguments are beginning to sound quite like it.

"Species can't evolve, that violates thermodynamics! We have too much evidence for thermodynamics to just toss it out the window."

Listing arguments that you find unconvincing, and simply declaring that you find your opponent's argument to be similar, is not a valid line of reasoning, isn't going to make anyone change their mind, and is kind of a dick move. This is, at its heart, simply begging the question: the similarity that you think exists is that you think all of these arguments are invalid. Saying "this argument is similar to another one because they're both invalid, and because it's so similar to an invalid argument, it's invalid" is just silly.

"My argument shares some similarities to an argument made by someone respected in this community" isn't much of an argument, either.

Comment author: Eitan_Zohar 29 May 2015 11:51:50AM *  1 point [-]

Sure, but I found the analogy useful because it is literally the exact same thing. Both draw a line between a certain mechanism and a broader principle with which it appears to clash if the mechanism were applied universally. Both then claim that the principle is very well established and that they do not need to condescend to address my theory unless I completely debunk the principle, even though the theory is very straightforward.

I was sort of hoping that he would see it for himself, and do better. This is a rationality site after all; I don't think that's a lot to ask.

Comment author: ThisSpaceAvailable 29 May 2015 06:13:50PM 1 point [-]

You clearly expect estimator to agree that the other arguments are fallacious. And yet estimator clearly believes that zir argument is not fallacious. To assert that they are literally the same thing, that they are similar in all respects, is to assert that estimator's argument is fallacious, which is exactly the matter under dispute. This is begging the question. I have already explained this, and you have simply ignored my explanation.

All the similarities that you cite are entirely irrelevant. Simply noting similarities between an argument, and a different, fallacious argument, does nothing to show that the argument in question is fallacious as well, and the fact that you insist on pretending otherwise does not speak well to your rationality.

Estimator clearly believes that there is no way that creating simulations can affect whether we are in a simulation. You have presented absolutely no argument for why it can. Instead, you've simply declared that your "theory" is "straightforward", and that disagreeing is unacceptable arrogance. Arguing that your "theory" violates a well-established principle is addressing your "theory". So apparently, when you write "do not need to condescend to address my theory", what you really mean is "have failed to present a counterargument that I have deigned to recognize as legitimate".