
Comment author: raisin 21 April 2014 03:54:08PM 0 points

Any guides on how to do that?

Comment author: Nisan 22 April 2014 12:56:45AM 0 points

Psychological theories like IFS would recommend charitably interpreting the inclination to embarrassment as a friendly impulse to protect oneself by protecting one's reputation. For example, some people are embarrassed to eat out alone; a charitable interpretation is that part of their mind wants to avoid the scenario where an acquaintance sees the lonely diner, concludes that they have no friends, and then concludes that they are unlikable and ostracizes them, or some milder version of that scenario.

Then one can assess just how many assets are at stake: realistically, nothing bad will happen if one eats out alone. Or one might decide that distant restaurants are safe. The anticipation of embarrassment might respond with further concerns, and by iterating one might arrive at a more coherent mental state.

Comment author: apeterson 21 April 2014 12:19:16PM 7 points

I've been struggling with how to improve my running all last year, and now again this spring. I finally realized (after reading a lot of articles on lesswrong.com, and specifically the Martial Art of Rationality posts) that I've been rationalizing that Couch to 5k and other recommended methods aren't for me. So I continue to train in the wrong way, with rationalizations like: "It doesn't matter how I train as long as I get out there."

I've continued to run intensely and in short bursts, with little success, because I felt embarrassed to have to walk at all; yet I keep finding more and more people who report success with programs where you start slowly and gradually add in more running.

Last year, I experimented with everything except that approach, and ended up hurting myself by running too far and too intensely several days in a row.

It's time to stop rationalizing, and instead try the approach that's overwhelmingly recommended. I just thought it would be interesting to share that recognition.

Comment author: Nisan 21 April 2014 02:33:47PM 3 points

You might also want to work on eliminating embarrassment.

Comment author: Nisan 07 April 2014 02:06:24PM 2 points

I might attend (probability 50%). If I do, I can give a lightning talk about Modal Combat.

Comment author: Armok_GoB 05 April 2014 02:14:03AM 1 point

Induction. You have uncertainty about the extent to which you care about different universes. If it turns out that, for one reason or another, you don't care in proportion to the Born rule, then the universe you observe is an absurdly (as in probably-a-Boltzmann-brain absurd) tiny sliver of the multiverse; if you do, it's still an absurdly tiny sliver, but immensely less so. You should anticipate as if the Born rule is true, because if you don't almost exclusively care about worlds where it is true, then you care almost nothing about the current world, and being wrong in it matters little compared to the alternative.

Hmm, I'm terrible at explaining this stuff. But the tl;dr is basically that there's a long, complicated reason why you should anticipate and act this way, and thus it's true in the "the simple truth" sense, which is mostly tangential to whether it's "true" in some specific philosophy-paper sense.
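A compressed way to put the weighting argument, as a sketch (the caring measure m and world-utilities U_w are notation introduced here, nothing standard): if each world w gets weight m(w), actions a get ranked by

    V(a) = \sum_w m(w)\, U_w(a) .

If almost all of m sits on Born-rule worlds, the terms from non-Born worlds are negligible, so the ranking, and hence what you should anticipate, comes out the same as if the Born rule were simply true.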

Comment author: Nisan 05 April 2014 03:31:12AM 1 point

Oh, interesting. Just as one should act as if one is Jesus if one seems to be Jesus, so one should act as if one cares about world-histories in proportion to their L2 measure if one seems to care about world-histories in proportion to their L2 measure and one happens to be in a world-history with relatively high L2 measure. And if probability is degree of caring, then the fact that one's world-history obeys the Born rule is evidence that one cares about world-histories in proportion to their L2 measure.
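For concreteness, the L2 measure here is just the Born weight (standard quantum mechanics, stated for reference): if a state decomposes over orthonormal world-histories as

    |\psi\rangle = \sum_i c_i\, |i\rangle ,

then the measure of world-history i is

    \mu(i) = |c_i|^2 = |\langle i|\psi\rangle|^2 ,

so "caring about world-histories in proportion to their L2 measure" and "expecting the Born rule to hold" amount to the same weighting.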

I take it you would prefer option 2 in my original comment: reduce anticipation to UDT, and explain away continuity of experience.

Have I correctly characterized your point of view?

Comment author: Armok_GoB 04 April 2014 02:00:30AM 0 points

You're overextending a hacky intuition. "Existence", "measure", "probability density", "what you should anticipate", etc. aren't actually all the exact same thing once you get this technical. Specifically, I suspect you're trying to set the latter based on one of the former, without knowing which one, since you assume they are identical. I recommend learning UDT and deciding what you want agents with your input history to anticipate, or if that's not feasible, just do the math and stop bothering to make the intuition fit.
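In miniature, "deciding what you want agents with your input history to anticipate" looks something like choosing a whole policy at once rather than updating on an observation first. Here is a toy sketch (the world list, payoffs, and caring weights are all invented for illustration; real UDT reasons over programs, not lookup tables):

    # Toy policy selection in a UDT-ish style. Everything here (worlds,
    # payoffs, caring weights) is invented for illustration.
    from itertools import product

    observations = ["obs_A", "obs_B"]
    actions = ["act_1", "act_2"]
    worlds = ["world_X", "world_Y"]
    care = {"world_X": 0.9, "world_Y": 0.1}   # how much the agent cares about each world
    obs_in = {"world_X": "obs_A", "world_Y": "obs_B"}  # what the agent sees in each world

    # Payoff for taking a given action in a given world.
    payoff = {("world_X", "act_1"): 1.0, ("world_X", "act_2"): 0.0,
              ("world_Y", "act_1"): 0.0, ("world_Y", "act_2"): 5.0}

    def value(policy):
        # Score a whole policy (observation -> action map) across all worlds,
        # weighted by caring measure; no updating on the observation.
        return sum(care[w] * payoff[(w, policy[obs_in[w]])] for w in worlds)

    # Enumerate every policy and pick the best one.
    policies = [dict(zip(observations, acts))
                for acts in product(actions, repeat=len(observations))]
    best = max(policies, key=value)
    print(best)  # {'obs_A': 'act_1', 'obs_B': 'act_2'}

The UDT-flavored point is just that the policy is graded by its performance across every world the agent cares about, rather than after conditioning on what was observed.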

Comment author: Nisan 04 April 2014 03:57:11PM 0 points

Hm, so you're saying that anticipation isn't a primitive, it's just part of one's decision-making process. But isn't there a sense in which I ought to expect the Born rule to hold in ordinary circumstances? Call it a set of preferences that all humans share — we care about futures in proportion to the square of the modulus of their amplitude (in the universal wavefunction? in the successor state to our Everett branch?). Do you have an opinion on exactly how that preference works, and what sorts of decision problems it applies to?

Comment author: Benito 03 April 2014 08:16:52PM 2 points

Amusing, although I'll point out that there are some subtle differences between a physics classroom and the MOR!universe. Or at least, I think there are...

Comment author: Nisan 03 April 2014 08:42:22PM 22 points

I will only say that when I was a physics major, there were negative course numbers in some copies of the course catalog. And the students who, it was rumored, attended those classes were... somewhat off, ever after.

And concerning how I got my math PhD, and the price I paid for it, and the reason I left the world of pure math research afterwards, I will say not one word.

Comment author: Benito 01 April 2014 07:35:28PM 22 points

Trying to actually understand what equations describe is something I'm always trying to do in school, but I find my teachers positively trained in the art of superficiality and dark-side teaching. Allow me to share two actual conversations with my Maths and Physics teachers from school:

(Teacher derives an equation, then suddenly makes it into an iterative formula, with no explanation of why)

Me: Woah, why has it suddenly become an iterative formula? What's that got to do with anything?

Teacher: Well, do you agree with the equation when it's not an iterative formula?

Me: Yes.

Teacher: And how about if I make it an iterative formula?

Me: But why do you do that?

Friend: Oh, I see.

Me: Do you see why it works?

Friend: Yes. Well, no. But I see it gets the right answer.

Me: But sir, can you explain why it gets the right answer?

Teacher: Ooh Ben, you're asking one of your tough questions again.

(Physics class)

Me: Can you explain that sir?

Teacher: Look, Ben, sometimes not understanding things is a good thing.

And yet I can't even vent to most people about the ridiculousness of a teacher actually saying this; they just think it's the norm!

Comment author: Nisan 03 April 2014 08:07:30PM 5 points

Teacher: Look, Ben, sometimes not understanding things is a good thing.

Ahem:

"Headmaster! " said Professor Quirrell, sounding genuinely shocked. "Mr. Potter has told you that this spell is not spoken of with those who cannot cast it! You do not press a wizard on such matters!"

Comment author: Oscar_Cunningham 02 April 2014 11:49:04AM 2 points

But theory 2 predicts that Bob will probably vanish!

I don't think it does. The probability current is locally conserved. So |u'> has to give a high probability to some world very close to Bob's, i.e. one with a continuous evolution of him in it.
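For reference, "locally conserved" is the standard continuity equation for probability density in nonrelativistic quantum mechanics (textbook material, not specific to this thread):

    \frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0 , \qquad \rho = |\psi|^2 , \qquad \mathbf{j} = \frac{\hbar}{m}\, \mathrm{Im}\!\left( \psi^* \nabla \psi \right) ,

which is why probability density can flow to nearby configurations but cannot jump discontinuously to distant ones.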

Comment author: Nisan 02 April 2014 04:10:34PM 0 points

Hm, so you're saying that if |u> has high probability density in the subspace that contains Bob, then in the near future there must still be high probability density there, or at least nearby. But in fact |u> has very low probability density in Bob's Everett branch. Consider all the accidents of weather and history that led to Bob's birth, not to mention the quantum fluctuations that led to Bob's galaxy being created.

Comment author: Nisan 02 April 2014 05:36:13AM 2 points

I have a question about quantum physics. Suppose Bob is in state |Bob>, the rest of Bob's Everett branch is in state |rest>, and the universe is in state |u>, one of whose summands is |Bob>|rest>. How should Bob make predictions?

  1. Determine |b'>, the successor state to |Bob>|rest>. Then the expectation of observable o is <b'|o|b'>.

  2. Determine |u'>, the successor state to |u>. Then the expectation of observable o is <u'|o|u'>.

Theory 1 leads to the paradox I described in last week's open thread. Two users helpfully informed me that theory 1 is not what MWI says; MWI is more like theory 2. But theory 2 predicts that Bob will probably vanish! One could restrict to worlds that contain Bob, but that would imply quantum immortality.
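To make the "vanishing" worry concrete, a sketch (the projector P_Bob is notation introduced here): let P_Bob project onto states containing Bob. Since |Bob>|rest> is only one small-amplitude summand of |u>, theory 2 gives

    \langle u' | P_{\mathrm{Bob}} | u' \rangle \ll 1 ,

so <u'|o|u'> is dominated by Bob-free branches. Restricting to worlds that contain Bob means renormalizing,

    \langle o \rangle_{\mathrm{Bob}} = \frac{ \langle u' | P_{\mathrm{Bob}}\, o\, P_{\mathrm{Bob}} | u' \rangle }{ \langle u' | P_{\mathrm{Bob}} | u' \rangle } ,

which is exactly conditioning on Bob's survival, the move that leads to quantum immortality.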

Am I hopelessly confused? Does MWI imply that there is no continuity of experience? Has anyone ever proposed theory 1?

Comment author: bramflakes 31 March 2014 01:34:05PM 9 points

Can someone explain to me the significance of problems like Sleeping Beauty? I see a lot of digital ink being spilled over them, and I can kind of see how they call into question what we mean by "probability" and "expected utility", but I can't quite pin down the thread that connects them all. Someone will pose a solution to a paradox X, and then someone else will reply with a modified version X' that the previous solution fails on, and I have trouble seeing exactly what it is people are trying to solve.

Comment author: Nisan 31 March 2014 02:01:44PM 1 point

I don't know about academic philosophy, but on Less Wrong there is the hope of one day coming up with an algorithm that calculates the "best", "most rational" way to act.

That's a bit of a simplification, though. The hope is that we can separate the questions of how to learn (epistemology) and what is right (moral philosophy) from the question of, given one's knowledge and values, what is the "best", "most rational" way to behave (decision theory).

The von Neumann–Morgenstern theorem is the paradigmatic result here. It suggests (but does not prove) that, given one's beliefs and values, one "should" act so as to maximize expected utility: a sum of the utilities of possible outcomes, weighted by their probabilities. But as the various paradoxes show, this is far from the last word on the matter.
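Concretely (the standard statement, with notation added for reference): if a lottery L yields outcome x_i with probability p_i, the theorem says preferences satisfying the VNM axioms are represented by some utility function u with

    U(L) = \sum_i p_i\, u(x_i) ,

and the "rational" act is the one whose lottery maximizes U. Much of the paradox literature, Sleeping Beauty included, is about what the p_i should even be when copying, amnesia, or anthropic reasoning is in play.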
