
Comment author: Nisan 07 April 2014 02:06:24PM 2 points [-]

I might attend (probability 50%). If I do, I can give a lightning talk about Modal Combat.

Comment author: Armok_GoB 05 April 2014 02:14:03AM 1 point [-]

Induction. You have uncertainty about the extent to which you care about different universes. If it turns out you don't care about the Born rule for one reason or another, the universe you observe is an absurdly (as in probably-a-Boltzmann-brain absurd) tiny sliver of the multiverse; but if you do, it's still an absurdly tiny sliver, just immensely less so. You should anticipate as if the Born rule is true, because if you don't almost exclusively care about worlds where it is true, then you care almost nothing about the current world, and being wrong in it hardly matters, relative to the alternative.

Hmm, I'm terrible at explaining this stuff. But the tl;dr is basically that there's a long, complicated reason why you should anticipate and act this way, and thus it's true in the "the simple truth" sense; whether it's "true" in some specific philosophy-paper sense is mostly tangential.

Comment author: Nisan 05 April 2014 03:31:12AM 1 point [-]

Oh, interesting. So just as one should act as if one is Jesus if one seems to be Jesus, then one should act as if one cares about world-histories in proportion to their L2 measure if one seems to care about world-histories in proportion to their L2 measure and one happens to be in a world-history with relatively high L2 measure. And if probability is degree of caring, then the fact that one's world history obeys the Born rule is evidence that one cares about world-histories in proportion to their L2 measure.

I take it you would prefer option 2 in my original comment, reduce anticipation to UDT, and explain away continuity of experience.

Have I correctly characterized your point of view?

Comment author: Armok_GoB 04 April 2014 02:00:30AM 0 points [-]

You're overextending a hack intuition. "Existence", "measure", "probability density", "what you should anticipate", etc. aren't actually all the exact same thing once you get this technical. Specifically, I suspect you're trying to set the latter based on one of the former, without knowing which one, since you assume they are identical. I recommend learning UDT and deciding what you want agents with your input history to anticipate, or if that's not feasible, just do the math and stop bothering to make the intuition fit.

Comment author: Nisan 04 April 2014 03:57:11PM 0 points [-]

Hm, so you're saying that anticipation isn't a primitive, it's just part of one's decision-making process. But isn't there a sense in which I ought to expect the Born rule to hold in ordinary circumstances? Call it a set of preferences that all humans share — we care about futures in proportion to the square of the modulus of their amplitude (in the universal wavefunction? in the successor state to our Everett branch?). Do you have an opinion on exactly how that preference works, and what sorts of decision problems it applies to?

Comment author: Benito 03 April 2014 08:16:52PM 2 points [-]

Amusing, although I'll point out that there are some subtle differences between a physics classroom and the MOR!universe. Or at least, I think there are...

Comment author: Nisan 03 April 2014 08:42:22PM 19 points [-]

I will only say that when I was a physics major, there were negative course numbers in some copies of the course catalog. And the students who, it was rumored, attended those classes were... somewhat off, ever after.

And concerning how I got my math PhD, and the price I paid for it, and the reason I left the world of pure math research afterwards, I will say not one word.

Comment author: Benito 01 April 2014 07:35:28PM 21 points [-]

Actually understanding what the equations describe is something I'm always trying to do in school, but I find my teachers positively trained in the art of superficiality and dark-side teaching. Allow me to share two actual conversations with my Maths and Physics teachers from school:

(Teacher derives an equation, then suddenly makes it into an iterative formula, with no explanation of why)

Me: Woah, why has it suddenly become an iterative formula? What's that got to do with anything?

Teacher: Well, do you agree with the equation when it's not an iterative formula?

Me: Yes.

Teacher: And how about if I make it an iterative formula?

Me: But why do you do that?

Friend: Oh, I see.

Me: Do you see why it works?

Friend: Yes. Well, no. But I see it gets the right answer.

Me: But sir, can you explain why it gets the right answer?

Teacher: Ooh Ben, you're asking one of your tough questions again.

(Physics class)

Me: Can you explain that sir?

Teacher: Look, Ben, sometimes not understanding things is a good thing.

And yet to most people, I can't even vent the ridiculousness of a teacher actually saying this; they just think it's the norm!

Comment author: Nisan 03 April 2014 08:07:30PM 4 points [-]

Teacher: Look, Ben, sometimes not understanding things is a good thing.

Ahem:

"Headmaster!" said Professor Quirrell, sounding genuinely shocked. "Mr. Potter has told you that this spell is not spoken of with those who cannot cast it! You do not press a wizard on such matters!"

Comment author: Oscar_Cunningham 02 April 2014 11:49:04AM 2 points [-]

But theory 2 predicts that Bob will probably vanish!

I don't think it does. The probability current is locally conserved. So |u'> has to give a high probability to some world very close to Bob's, i.e. one with a continuous evolution of him in it.

Comment author: Nisan 02 April 2014 04:10:34PM 0 points [-]

Hm, so you're saying that if |u> has high probability density in the subspace that contains Bob, then in the near future there must still be high probability density there, or at least nearby. But in fact |u> has very low probability density in Bob's Everett branch. Consider all the accidents of weather and history that led to Bob's birth, not to mention the quantum fluctuations that led to Bob's galaxy being created.

Comment author: Nisan 02 April 2014 05:36:13AM 2 points [-]

I have a question about quantum physics. Suppose Bob is in state |Bob>, the rest of Bob's Everett branch is in state |rest>, and the universe is in state |u>, one of whose summands is |Bob>|rest>. How should Bob make predictions?

  1. Determine |b'>, the successor state to |Bob>|rest>. Then the expectation of observable o is <b'|o|b'>.

  2. Determine |u'>, the successor state to |u>. Then the expectation of observable o is <u'|o|u'>.

Theory 1 leads to the paradox I described in last week's open thread. Two users helpfully informed me that theory 1 is not what MWI says; MWI is more like theory 2. But theory 2 predicts that Bob will probably vanish! One could restrict to worlds that contain Bob, but that would imply quantum immortality.
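The difference between the two theories can be made concrete with a toy two-branch universe. This is only an illustrative sketch (the amplitudes, the observable, and the branch decomposition are all invented, not anything from quantum mechanics proper): theory 1 computes the expectation in Bob's renormalized branch, while theory 2 computes it in the full state |u>.

```python
import numpy as np

# Toy universe with two orthogonal "branches":
#   |u> = a|Bob,rest> + b|other>,  with |a|^2 + |b|^2 = 1.
# The amplitudes are arbitrary illustrative numbers.
a, b = np.sqrt(0.3), np.sqrt(0.7)
u = np.array([a, b])

# An observable o that equals 1 on Bob's branch and 0 elsewhere,
# i.e. the projector onto the branch containing Bob.
o = np.diag([1.0, 0.0])

# Theory 2: take the expectation in the full universal state |u>.
# <u|o|u> = |a|^2, i.e. Bob's branch carries only measure 0.3.
exp_theory2 = u @ o @ u

# Theory 1: take the expectation in the renormalized successor of
# |Bob>|rest> alone, where Bob's branch has all the measure.
bob = np.array([1.0, 0.0])
exp_theory1 = bob @ o @ bob

print(exp_theory2)  # 0.3 -- "Bob will probably vanish"
print(exp_theory1)  # 1.0 -- Bob persists with certainty
```

This is the sense in which theory 2 "predicts that Bob will probably vanish": the projector onto Bob's branch has small expectation in |u>, even though it has expectation 1 in Bob's own renormalized branch.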

Am I hopelessly confused? Does MWI imply that there is no continuity of experience? Has anyone ever proposed theory 1?

Comment author: bramflakes 31 March 2014 01:34:05PM 9 points [-]

Can someone explain to me the significance of problems like Sleeping Beauty? I see a lot of digital ink being spilled over them, and I can kind of see how they call into question what we mean by "probability" and "expected utility", but I can't quite pin down the thread that connects all of them. Someone will pose a solution to a paradox X, and then another will reply with a modified version X' that the previous solution fails on, and I tend to have trouble seeing what the exact thing is people are trying to solve.

Comment author: Nisan 31 March 2014 02:01:44PM *  1 point [-]

I don't know about academic philosophy, but on Less Wrong there is the hope of one day coming up with an algorithm that calculates the "best", "most rational" way to act.

That's a bit of a simplification, though. The hope is that we can separate the questions of how to learn (epistemology) and what is right (moral philosophy) from the question: given one's knowledge and values, what is the "best", "most rational" way to behave? (That last question is decision theory.)

The von Neumann–Morgenstern theorem is the paradigmatic result here. It suggests (but does not prove) that given one's beliefs and values, one "should" act so as to maximize a probability-weighted sum of utilities, i.e. expected utility. But as the various paradoxes show, this is far from the last word on the matter.
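The vNM prescription can be sketched in a few lines. This is a toy illustration, not anything from the theorem itself: the acts, probabilities, and utilities below are invented, and the only point is that "rational" choice reduces to comparing probability-weighted sums.

```python
# Expected utility in the vNM spirit: beliefs are a probability for
# each outcome, values are a utility for each outcome, and the
# recommended act is the one maximizing the weighted sum.

def expected_utility(probs, utils):
    return sum(p * u for p, u in zip(probs, utils))

# Two hypothetical acts over the same three outcomes (numbers invented).
acts = {
    "act_A": ([0.5, 0.3, 0.2], [10.0, 0.0, -5.0]),  # EU = 4.0
    "act_B": ([0.2, 0.6, 0.2], [10.0, 2.0, -5.0]),  # EU = 2.2
}

best = max(acts, key=lambda name: expected_utility(*acts[name]))
print(best)  # act_A
```

The paradoxes discussed in this thread (Sleeping Beauty, anthropic puzzles, and so on) are precisely the cases where it becomes unclear what to plug in for the probabilities, or even whose probabilities they should be.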

Comment author: Strilanc 26 March 2014 03:17:47AM 3 points [-]

I disagree that Bob's expected value drops to -$0.50 during the experiment. If Bob is aware that he will be "super-duper quantum memory erased", then he should appropriately expect to receive $1.

There may be more existential dread during the experiment, but the expectations about the outcome should stay the same throughout.

Comment author: Nisan 28 March 2014 07:21:34PM 0 points [-]

Ok, User:Manfred makes the same point here. It implies that at any point, heretofore invisible worlds could collide with ours, skewing the results of experiments and even leaving us with no future whatsoever (although admittedly with probability 0). Would you agree with that?

Comment author: VAuroch 28 March 2014 06:45:18PM -1 points [-]

on whether the person's experiences are veridical.

Is this different from whether their perception of their experiences is correct, or is it jargon?

Comment author: Nisan 28 March 2014 07:03:36PM 1 point [-]

Yes, I mean (for example) that if a person believes they're married to someone, their life's welfare could depend on whether their spouse is a real person or a simple chatbot. Also, if a person feels that they've discovered a deep insight, their life's welfare could depend on whether they have actually discovered such an insight.
