MichaelHoward comments on Solve Psy-Kosh's non-anthropic problem - Less Wrong

34 Post author: cousin_it 20 December 2010 09:24PM


Comment author: MichaelHoward 21 December 2010 02:00:49AM 0 points [-]

I'm assuming all deciders arrive at the same best decision, so there's no worry about deciders disagreeing if you change your mind.

I'm going to be the odd-one-out here and say that both answers are correct at the time they are made… if you care far more (which I don't think you should) about African kids in your own Everett branch (or live in a hypothetical crazy universe where many worlds is false).

(Chapter 1 of Permutation City spoiler, please click here first if not read it yet, you'll be glad you did...): Jura lbh punatr lbhe zvaq nsgre orvat gbyq, lbh jvyy or yvxr Cnhy Qheunz, qvfnoyvat gur cnenpuhgr nsgre gur pbcl jnf znqr.

If you care about African kids in other branches equally, then the first decision is always correct, because although the second choice would make it more likely that kids in your branch will be better off, it will cost the kids in other branches more.

Comment author: Manfred 21 December 2010 03:53:00AM *  3 points [-]

Why drag quantum mechanics into this? Taking the expected value gives you exactly the same thing as it does classically, and the answer is still the same. Nay is right and yea is wrong. You seem to be invoking "everett branch" as a mysterious, not-so-useful answer.

Comment author: MichaelHoward 21 December 2010 03:11:20PM 1 point [-]

I'm not trying to be mysterious. As far as I can see, there is a distinction. The expected value of switching to Yea from your point of view is affected by whether or not you care about the kids in the branches you are not yourself in.

After being told your status, you're split:

  • 1/20 are Decider, Heads. Yea is very bad here.

  • 9/20 are Decider, Tails. Yea is good here.

  • 9/20 are Passenger, Heads. Yea is very bad here.

  • 1/20 are Passenger, Tails. Yea is good here.

After being told your status, the new information changes the expected values across the set of branches you could now be in, because that set has changed. It is now only the first 2 lines, above, and is heavily weighted towards Yea = good, so for the kids in your own branches, Yea wins.

But the other branches still exist. If all deciders must come to the same decision (see above), then the expected value of Yea is lower than that of Nay as long as you care about the kids in branches you're not in yourself - Nay wins. In fact, this expected value is exactly what it was before you had the new information about which branches you can now be in yourself.
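Numerically, the update looks like this. The dollar payoffs below (a unanimous "yea" donates $1000 on Tails and $100 on Heads, anything else donates $700) are my reading of cousin_it's original post - treat them as an assumption:

```python
# Expected value of "yea" vs "nay", before and after learning you're a decider.
# Payoffs assumed from the original post: unanimous yea -> $1000 on tails,
# $100 on heads; otherwise $700 regardless of the coin.
P_TAILS = 0.5
YEA_TAILS, YEA_HEADS, NAY = 1000, 100, 700

# Before learning your role: "always yea" vs "always nay"
ev_yea = P_TAILS * YEA_TAILS + (1 - P_TAILS) * YEA_HEADS  # 550
ev_nay = NAY                                              # 700

# After learning you're a decider, only the first two lines of the
# branch list above remain: P(tails | decider) = (9/20) / (1/20 + 9/20)
p_tails_given_decider = (9 / 20) / (1 / 20 + 9 / 20)      # 0.9
ev_yea_decider = (p_tails_given_decider * YEA_TAILS
                  + (1 - p_tails_given_decider) * YEA_HEADS)  # 910
```

Conditioned on being a decider, "yea" looks worth $910 against $700 for "nay"; summed over all four branches it is back to $550 against $700.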

Comment author: Manfred 21 December 2010 04:13:18PM *  0 points [-]

Okay. You're bringing up quantum mechanics needlessly, though. This is exactly the same reasoning as cousin_it went through in the post, and it leads to exactly the same problem, since everyone can be expected to reason like you. If yea is only said because it generates better results, and everyone always switches to yea, then, QED, always saying yea should have better results. But it doesn't!

Comment author: MichaelHoward 21 December 2010 10:02:17PM 0 points [-]

But my whole point has been that yea can yield better results, iff you don't care about kids in other branches, which would make branches relevant.

To show that branches are not relevant, tell me why that argument (that Yea wins in this case) is wrong, don't just assert that it's wrong.

Comment author: Manfred 21 December 2010 11:39:19PM 0 points [-]

Since, as I've been saying, it's identical to the original problem, if I knew how to resolve it I'd already have posted the resolution. :)

What can be shown is that it's contradictory. If yea is better for your "branch" when you vote yea, and everyone always follows this reasoning and votes yea, and the whole point of "branches" is that they're no longer causally linked, all the branches should do better. More simply, if yea is the right choice for every decider, it's because "always yea" actually does better than "always nay."

But always yea is not better than always nay.

If you would like to argue that there is no contradiction, you could try and find a way to resolve it by showing how a vote can be better every single time without being better all the time.
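That tension shows up even in a quick Monte Carlo sketch (again assuming the $1000/$100/$700 payoffs from the original post): conditioned on the runs where you find yourself a decider, the coin is Tails about 90% of the time and yea looks better, yet the policy "always yea" still loses to "always nay" averaged over all runs.

```python
import random

# Monte Carlo sketch of the decider setup. Assumed payoffs (from the original
# post): unanimous yea -> $1000 on tails, $100 on heads; nay -> $700 always.
def simulate(trials=200_000, seed=1):
    rng = random.Random(seed)
    yea_total = nay_total = 0          # total payoff of each policy over all runs
    decider_runs = decider_tails = 0   # runs in which *you* are a decider
    for _ in range(trials):
        tails = rng.random() < 0.5
        # 9 of 10 people decide on tails, 1 of 10 on heads
        you_decide = rng.random() < (0.9 if tails else 0.1)
        yea_total += 1000 if tails else 100
        nay_total += 700
        if you_decide:
            decider_runs += 1
            decider_tails += tails
    return (yea_total / trials,            # ~550: "always yea", overall
            nay_total / trials,            # 700: "always nay", overall
            decider_tails / decider_runs)  # ~0.9: P(tails | you decide)

avg_yea, avg_nay, p_tails = simulate()
```

So the conditional view (p_tails near 0.9) and the overall view (yea averaging ~$550 against $700) are both correct as far as they go, which is exactly the puzzle.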

Comment author: MichaelHoward 22 December 2010 12:47:08AM 0 points [-]

if yea is the right choice for every decider, it's because "always yea" actually does better than "always nay."

It is the best choice for every decider who only cares about the kids in their Everett branches.

It's not the best choice for deciders (or non-deciders, though they don't get a say) who care equally about kids across all the branches. Their preferences are as before.

It's a really lousy choice for any non-deciders who only care about the kids in their Everett branches. Their expected outcome for "yea" just got worse by the same amount that the first lot of deciders (who only care about their own kids) got better. Unfortunately for them, their sole decider thinks he's probably in the Tails group, and that his kids will gain by saying "yea", as he is perfectly rational to think given the information he has at that time.

There is no contradiction.

Comment author: GuySrinivasan 22 December 2010 01:44:53AM 2 points [-]

What does an entity that only cares about the kids in its Everett branches even look like? I am confused. Usually things have preferences about lotteries over outcomes, and an outcome is an entire multiverse, and these things are physically realized and their preferences change when the coinflip happens? How does that even work? I guess if you want you can implement an entity that works like that, but I'm not certain why we'd even call it the same entity at any two times. This sort of entity would do very well to cut out its eyes and ears so it never learns it's a decider and begin chanting "nay, nay, nay!" wouldn't it?

Comment author: MichaelHoward 22 December 2010 02:36:34AM 0 points [-]

What does an entity that only cares about the kids in its Everett branches even look like?

Example 1: Someone who doesn't know about or believe in many worlds. They don't care about kids in alternate Everett branches, because to their mind those kids don't exist, and so have zero value. In their mind, all value is in this single universe, with a coin that they are 90% sure landed Tails. By their beliefs, "yea" wins. Most people just don't think about entire multiverses.

Example 2: Someone who gets many worlds, but is inclined to be overwhelmingly more charitable to those who feel Near rather than Far, and to those who feel like Their Responsibility rather than Someone Else's Problem. I hear this isn't too uncommon :-)

This sort of entity would do very well to cut out its eyes and ears so it never learns it's a decider and begin chanting "nay, nay, nay!" wouldn't it?

Actual cutting aside, this is an excellent strategy. Upvoted :)

Comment author: Manfred 22 December 2010 02:44:56AM *  -1 points [-]

I suppose I'll avoid repeating myself and try to say new things.

You seem to be saying that when you vote yea, it's right, but when other people vote yea, it's wrong. Hmm, I guess you could resolve it by allowing the validity of logic to vary depending on who used it. But that would be bad.

(Edited for clarity)

Comment author: MichaelHoward 22 December 2010 02:59:53AM 2 points [-]

I think we may be misunderstanding each other, and possibly even arguing about different things. I'm finding it increasingly hard to see how your comments could be a logical response to those you're replying to, and I suspect you're feeling the same.

Serves me right, of course.

So when you do it, it's right, but when other people do it, it's wrong?

When I do what? What are you even talking about?

Comment author: Manfred 22 December 2010 04:42:32AM *  0 points [-]

So when you do it, it's right, but when other people do it, it's wrong?

When I do what? What are you even talking about?

Ah, sorry, that does look odd. I meant "when you vote 'yea,' it's okay, but when they vote 'yea' for exactly the same reasons, it's bad."

Comment author: mwengler 23 December 2010 01:37:57AM -2 points [-]

I think your reasoning here is correct and that it is as good an argument against the many worlds interpretation as any that I have seen.

Comment author: wedrifid 23 December 2010 01:46:29AM 2 points [-]

I think your reasoning here is correct and that it is as good an argument against the many worlds interpretation as any that I have seen.

The best argument against the many worlds interpretation that you have seen is somewhat muddled thinking about ethical considerations with respect to normal coin tosses?

Comment author: mwengler 23 December 2010 03:33:16PM 0 points [-]

Yup, that's the best. I'd be happy to hear about the best you've seen, especially if you've seen better.

Comment author: wedrifid 23 December 2010 04:47:21PM 0 points [-]

Yup, that's the best. I'd be happy to hear about the best you've seen, especially if you've seen better.

Why do you assume I would be inclined to one-up the argument? The more natural interpretation of my implied inference is in approximately the reverse direction.

If the best argument against MWI that a self professed physicist and MWI critic has ever seen has absolutely zero persuasive power then that is rather strong evidence in favor.

Comment author: mwengler 23 December 2010 07:45:08PM 3 points [-]

I am new to this board and come in with a "prior" of rejecting MWI beyond the tiniest amount on the basis of, among other things, conservation of energy and mass. (Where do these constantly forming new worlds come from?) MWI seems more like a mapmaker's mistake than a description of the territory, which manifestly has only one universe in it every time I look.

I was inviting you to show me with links or description whatever you find most compelling, if you could be bothered to. I am reading main sequence stuff and this is one of the more interesting puzzles among Less Wrong's idiosyncratic consensi.

Comment author: XiXiDu 23 December 2010 08:23:22PM *  1 point [-]

Here is a subsequent discussion about some experimental tests of MWI. Also, here is a video discussion between Scott Aaronson and Yudkowsky (starting at 38:11). More links on the topic can be found here.

ETA Sorry, I wanted to reply to another of your comments, wrong tab. Anyway.

Comment author: GuySrinivasan 23 December 2010 07:58:07PM 0 points [-]

Wikipedia points to a site that says conservation of energy is not violated. Do you know if it's factually wrong or what's going on here? (if so can you update wikipedia? :D)

Q22 Does many-worlds violate conservation of energy?

First, the law of conservation of energy is based on observations within each world. All observations within each world are consistent with conservation of energy, therefore energy is conserved. Second, and more precisely, conservation of energy, in QM, is formulated in terms of weighted averages or expectation values. Conservation of energy is expressed by saying that the time derivative of the expected energy of a closed system vanishes. This statement can be scaled up to include the whole universe. Each world has an approximate energy, but the energy of the total wavefunction, or any subset of it, involves summing over each world, weighted with its probability measure. This weighted sum is a constant. So energy is conserved within each world and also across the totality of worlds.

One way of viewing this result - that observed conserved quantities are conserved across the totality of worlds - is to note that new worlds are not created by the action of the wave equation, rather existing worlds are split into successively "thinner" and "thinner" slices, if we view the probability densities as "thickness".
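The FAQ's bookkeeping claim - that the measure-weighted energy is unchanged as worlds split into thinner slices - can be illustrated with a toy, entirely non-physical sketch (the numbers here are arbitrary):

```python
# Toy illustration of the "thinner slices" picture: each world is a
# (measure, energy) pair; splitting divides the measure, not the energy,
# so the measure-weighted total energy stays constant.
worlds = [(1.0, 42.0)]  # one world: measure 1, energy 42 (arbitrary units)

def split(worlds, index, p):
    """Split the world at `index` into two slices with measures p and 1-p.
    Each slice inherits the parent's energy value."""
    m, e = worlds.pop(index)
    worlds += [(m * p, e), (m * (1 - p), e)]
    return worlds

def expected_energy(worlds):
    # Sum of energies weighted by each world's probability measure.
    return sum(m * e for m, e in worlds)

worlds = split(worlds, 0, 0.3)   # first branching
worlds = split(worlds, 1, 0.5)   # one slice branches again
# expected_energy(worlds) is still 42.0 after every split
```

No new measure is created by splitting, so the weighted sum is conserved by construction - which is the shape of the FAQ's argument, stripped of the physics.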

Comment author: MichaelHoward 23 December 2010 12:46:19PM 0 points [-]

I don't understand. How is my argument an argument against the many worlds interpretation? (Without falling into the logical fallacy of Appeal to Consequences).

Comment author: mwengler 23 December 2010 03:32:32PM 0 points [-]

It would seem to suggest that if I want to be rich I should buy a bunch of lottery tickets and then kill myself when I don't win.

I have not seen the local discussion of MWI and everett branches, but my "conclusion" in the past has been that MWI is a defect of the map maker and not a feature of the territory. I'd be happy to be pointed to something that would change my mind or at least rock it a bit, but for now it looks like angels dancing on the heads of pins. Has somebody provided an experiment that would rule MWI in or out? If so, what was the result? If not, then how is a consideration of MWI anything other than confusing the map with the territory?

If I have fallen into Appeal to Consequences with my original post, then my bad.

Comment author: MichaelHoward 23 December 2010 04:11:53PM 0 points [-]

It would seem to suggest that if I want to be rich I should buy a bunch of lottery tickets and then kill myself when I don't win.

I don't think that's the case, but even if it were, using that to argue against the likelihood of MWI would be Appeal to Consequences.

I have not seen the local discussion of MWI and everett branches, but my "conclusion" in the past has been that MWI is a defect of the map maker and not a feature of the territory.

That's what I used to think :)

I'd be happy to be pointed to something that would change my mind or at least rock it a bit

If you're prepared for a long but rewarding read, Eliezer's Quantum Physics Sequence is a non-mysterious introduction to quantum mechanics, intended to be accessible to anyone who can grok algebra and complex numbers. Cleaning up the old confusion about QM is used to introduce basic issues in rationality (such as the technical version of Occam's Razor), epistemology, reductionism, naturalism, and philosophy of science.

For a shorter sequence that concentrates on why MWI wins, see And the Winner is... Many-Worlds!

Has somebody provided an experiment that would rule MWI in or out? If so, what was the result? If not, then how is a consideration of MWI anything other than confusing the map with the territory?

The idea is that MWI is the simplest explanation that fits the data, by the definition of simplest that has proven to be most useful when predicting which of different theories that match the same data is actually correct.