
Comment author: eli_sennesh 18 September 2015 11:26:52PM 1 point [-]

Perhaps he will, if you agree to also post your computational theory of how the brain works.

That was several months ago.

Comment author: PhilGoetz 24 September 2015 09:26:25PM 0 points [-]

Nice! I'll bookmark that.

Comment author: Stuart_Armstrong 18 September 2015 10:13:26PM 3 points [-]

In my video here I look at a lot of the ramifications of SB decisions: https://www.youtube.com/watch?v=aiGOGkBiWEo

What's relevant here is the frequentist position. Imagine you run the SB experiment a thousand times in a row. If you tell SB "be correct as often as possible, counting every time you are asked", she will behave as a thirder. If you tell SB "be correct in as many experiments as possible", she will behave as a halfer. So frequentism no longer converges to a unique subjective probability in the long run.
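The two scoring rules above can be tallied directly. Here's a minimal simulation sketch (the function name and the one-point-per-correct-guess scoring are my own assumptions, not from the comment), comparing "always guess tails" against "always guess heads" in the standard two-awakening version:

```python
import random

def score_policies(trials=10_000):
    """Tally 'always guess tails' vs 'always guess heads' under two scoring rules."""
    per_awakening = {"tails": 0, "heads": 0}   # correct guesses, one per awakening
    per_experiment = {"tails": 0, "heads": 0}  # experiments guessed correctly throughout
    for _ in range(trials):
        coin_tails = random.random() < 0.5
        awakenings = 2 if coin_tails else 1    # standard SB: two awakenings on tails
        if coin_tails:
            per_awakening["tails"] += awakenings  # the tails-guesser is right every time
            per_experiment["tails"] += 1
        else:
            per_awakening["heads"] += awakenings  # the heads-guesser is right once
            per_experiment["heads"] += 1
    return per_awakening, per_experiment

pa, pe = score_policies()
# Counted per awakening, guessing tails wins about 2:1 (thirder odds);
# counted per experiment, the two policies tie (halfer odds).
```

The same fair coin yields 2:1 odds under one tally and even odds under the other, which is the divergence the comment describes.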

Comment author: PhilGoetz 19 September 2015 01:07:51AM 5 points [-]

No; you are asking her two different questions, so it is correct for frequentism to give different answers to the different questions.

Comment author: eli_sennesh 07 September 2015 02:22:04AM -2 points [-]

But normal human beings are extremely good at compartmentalization. In other words, they are extremely good at knowing when knowing the truth is going to be useful for their goals, and when it is not. This means that they are better than Less Wrongers at attaining their goals, because the truth does not get in the way.

If you really believe this, I'd love to see a post on a computational theory of compartmentalization, so you can explain for us all how the brain performs this magical trick.

Comment author: PhilGoetz 18 September 2015 09:58:20PM 0 points [-]

Perhaps he will, if you agree to also post your computational theory of how the brain works.

If you don't have one, then it's unreasonable to demand one.

Comment author: lahwran 15 September 2015 09:54:33PM 2 points [-]

What you Less Wrong folks call "rationality" is not what everyone else calls "rationality". You can't say "I also think that rationality is doing a great job in helping people"; that either doesn't make sense or is a tautology, depending on your interpretation. Please stop saying "rationality" when you mean your own in-group thing; it's ridiculously off-putting.

Also, my experience has been that CFAR-trained folks do sit down and do hard things, and that people who are only familiar with LW just don't. It has also been my experience that they don't do enough hard things to simply "win", in the sense defined here, and that the difference between "winning" and not winning is not easily exploitable with slightly more intelligent macro-scale behavior. The branching points that separate the winning paths from the losing ones are the exploitable points: things like deciding whether or not to go to college, or whether to switch jobs. CFAR-trained folks are all right at choosing among those, but so are other people. They are typically somewhat better than equivalently intelligent folks with similar experience, but not dramatically so.

Comment author: PhilGoetz 18 September 2015 09:54:18PM 1 point [-]

you can't say "I also think that rationality is doing a great job in helping people", that either doesn't make sense or is a tautology,

He may have meant that he thinks rationality is effective for altruists.

Comment author: DavidPlumpton 18 September 2015 09:40:28PM 1 point [-]

Usually "Monty Hall"?

Comment author: PhilGoetz 18 September 2015 09:48:07PM *  0 points [-]

Oh, yeah. Too much D&D.

Monty Hall Sleeping Beauty

1 PhilGoetz 18 September 2015 09:18PM

A friend referred me to another paper on the Sleeping Beauty problem. It comes down on the side of the halfers.

I didn't have the patience to finish it, because I think SB is a pointless argument about what "belief" means. If, instead of asking Sleeping Beauty for her "subjective probability", you asked her to place a bet, or take some action, everyone could agree on the best answer. That the problem perplexes people is a sign that they're talking nonsense, using words without agreeing on their meanings.
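To illustrate the betting point: suppose at each awakening Beauty may buy, for price p, a ticket paying 1 unit if the coin landed tails (a hypothetical bet of my own construction, not from the post). Her expected profit per experiment in the standard version is:

```python
def bet_ev(price, p_tails=0.5, awakenings_tails=2, awakenings_heads=1):
    """Expected profit per experiment from buying, at every awakening,
    a ticket that pays 1 unit if the coin landed tails."""
    ev_heads = awakenings_heads * (0.0 - price)  # on heads, each ticket is a loss
    ev_tails = awakenings_tails * (1.0 - price)  # on tails, each ticket pays off
    return (1.0 - p_tails) * ev_heads + p_tails * ev_tails

# Break-even price is 2/3, the thirder's number, even though the coin
# itself is fair. Once the question is operationalized as a bet, the
# right action is uncontroversial; only the word "belief" is disputed.
```

At any price below 2/3 she should buy and above it she should pass, regardless of whether one calls 2/3 or 1/2 her "credence".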

But, we can make it more obvious what the argument is about by using a trick that works with the Monty Hall problem: Add more doors. By doors I mean days.

The Monty Hall Sleeping Beauty Problem is then:

  • On Sunday she's given a drug that sends her to sleep for a thousand years, and a coin is tossed.
  • If the coin lands heads, Beauty is awakened and interviewed once.
  • If the coin comes up tails, she is awakened and interviewed 1,000,000 times.
  • After each interview, she's given a drug that makes her fall asleep again and forget she was woken.
  • Each time she's woken up, she's asked, "With what probability do you believe that the coin landed tails?"

The halfer position implies that she should still say 1/2 in this scenario; the thirder position implies she should say 1,000,000/1,000,001.

Does stating it this way make it clearer what the argument is about?
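As a sanity check on the extreme version above, here's a minimal simulation sketch (the function name and trial counts are my own; it scores the per-awakening tally, i.e. the thirder's frequency):

```python
import random

def tails_awakening_fraction(trials=100_000, tails_awakenings=1_000_000):
    """Fraction of all awakenings at which the coin had landed tails,
    in the million-interview version of Sleeping Beauty."""
    heads_total = 0
    tails_total = 0
    for _ in range(trials):
        if random.random() < 0.5:
            heads_total += 1              # heads: a single interview
        else:
            tails_total += tails_awakenings  # tails: a million interviews
    return tails_total / (heads_total + tails_total)

# Per awakening, tails dominates overwhelmingly, approaching
# 1,000,000/1,000,001; per experiment, the coin is of course still fair.
print(tails_awakening_fraction())
```

The extreme numbers make the gap between the two readings impossible to miss, which is the point of adding "more doors".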

Comment author: Vladimir_Nesov 02 April 2009 05:11:24PM *  1 point [-]

It's a combination of Sleeping Beauty and Counterfactual Mugging, with the decision depending on the resolution of both problems. It doesn't look like the problems interact, but if you are a 1/3-er, you don't give away the money, and if you don't care about the counterfactual, you don't give it away either. You factored out the Sleeping Beauty in your example, and equivalently the Counterfactual Mugging can be factored out by asking the question before the coin toss.

Comment author: PhilGoetz 18 September 2015 08:50:12PM *  1 point [-]

I think it's not quite the Sleeping Beauty problem. That's about the semantics of belief; this is about the semantics of what a "decision" is.

Making a decision to give or not to give means making the decision for both days, and you're aware of that in the scenario. Since the problem requires that Omega can simulate you and predict your answer, you can't be a being that can say yes on one day and no on another day. It would be the same problem if there were no amnesia and he asked you to give him 200 pounds once.

In other words, you don't get to make 2 independent decisions on the two days, so it is incorrect to say you are making decisions on those days. The scenario is incoherent.

Comment author: Lumifer 01 September 2015 05:16:36PM 2 points [-]

It's not a memory error, it's a hasty pattern-match error.

Comment author: PhilGoetz 06 September 2015 03:11:30PM 0 points [-]

I think it's a strong-prior error. There are many different spellings, one or two letters apart, and I pick the one I've seen most often.

Comment author: NancyLebovitz 31 August 2015 05:50:44PM 4 points [-]

Thanks for mentioning that I'd already brought up the paper. I've got three quotes here.

My last name is Lebovitz.

I think of the way people tend to get it wrong as a rationality warning. I know about those errors because I have an interest in my name, but the commonness of the errors suggests that people get a tremendous amount wrong. How much of it matters? How could we even start to find out?

Comment author: PhilGoetz 01 September 2015 04:18:17AM *  1 point [-]

Sorry for misspelling your name. I don't think memory errors are rationality errors.

Comment author: Valentine 31 August 2015 10:55:49PM 1 point [-]

I really like this post.


Can the (physical or mental) posture that's appropriate for avoiding mistakes be opposed to the posture appropriate for focusing power on one point?

Sorry, I'm not sure what you mean.

I don't do a lot of brick-breaking with my fists, so I might not know much about doing that well. But my impression is that the principles that transfer force well through the body in aikido will also transfer force well when trying to deliver a sharp blow to exactly one spot on a brick. In aikido at least, there's no opposition between posture that helps make you do the right thing and posture that helps you avoid doing the wrong thing.

…but of course, it's possible to compromise posture and still deliver a lot of power to one point, just like it's possible to avoid falling over when throwing someone while you have a weak spine posture.

I think the analog in the mind is something like focus or concentration. I think it's certainly possible to concentrate really hard in a way that violates good mental posture in other situations, but I intuitively wouldn't anticipate very good results from that compared to the counterfactual where the focus is done while maintaining good mental posture.

But I really don't know.

And I might have totally misunderstood what you were gesturing at. Please feel free to clarify if needed!

Are there multiple styles of posture or thought that are equally effective local maxima, while hybrids of them are less effective?

I don't know. I can't think of any for the body. Some people claim that you should never round your lower back outward, but as far as I can tell the real rule is to brace your spine so that it can transfer force well, which is much harder to do when it's rounded but not impossible. There are some situations where the rule of thumb of "curve in your lower back" just isn't workable, so you have to go back to the reason the rule exists. At that point you start getting things that look like violations of "good posture" but are actually quite good uses of body mechanics. (In this case you brace your spine with your abs while "lengthening" it.)

I'm less sure about mental posture. But that's because I don't have a very good reductionistic model of what "mental posture" is yet.

Comment author: PhilGoetz 01 September 2015 04:16:28AM 0 points [-]

For my second question, I was thinking for example of different fighting styles, and whether it's just that some are more effective in certain circumstances, or that each style is a local maximum.
