Bayesian Flame

37 cousin_it 26 July 2009 04:49PM

There once lived a great man named E.T. Jaynes. He knew that Bayesian inference is the only way to do statistics logically and consistently, standing on the shoulders of misunderstood giants Laplace and Gibbs. On numerous occasions he vanquished traditional "frequentist" statisticians with his superior math, demonstrating to anyone with half a brain how the Bayesian way gives faster and more correct results in each example. The weight of evidence falls so heavily on one side that it makes no sense to argue anymore. The fight is over. Bayes wins. The universe runs on Bayes-structure.

Or at least that's what you believe if you learned this stuff from Overcoming Bias.

Like I was until two days ago, when Cyan hit me over the head with something utterly incomprehensible. I suddenly had to go out and understand this stuff, not just believe it. (The original intention, if I remember it correctly, was to impress you all by pulling a Jaynes.) Now I've come back and intend to provoke a full-on flame war on the topic. Because if we can have thoughtful flame wars about gender but not math, we're a bad community. Bad, bad community.

If you're like me two days ago, you kinda "understand" what Bayesians do: assume a prior probability distribution over hypotheses, use evidence to morph it into a posterior distribution over same, and bless the resulting numbers as your "degrees of belief". But chances are that you have a very vague idea of what frequentists do, apart from deriving half-assed results with their ad hoc tools.
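The prior-to-posterior recipe just described can be made concrete in a few lines. Here is a minimal sketch (my own toy example, not anything from the post), updating a discrete prior over three candidate coin biases as evidence comes in:

```python
# Minimal Bayesian update: a discrete prior over a coin's bias,
# morphed into a posterior by observed evidence.
hypotheses = [0.25, 0.5, 0.75]            # candidate values for P(heads)
prior = {h: 1 / 3 for h in hypotheses}    # uniform prior over hypotheses

def update(prior, heads):
    """Bayes' rule: posterior is proportional to likelihood times prior."""
    likelihood = {h: h if heads else 1 - h for h in prior}
    unnorm = {h: likelihood[h] * prior[h] for h in prior}
    z = sum(unnorm.values())              # normalizing constant
    return {h: p / z for h, p in unnorm.items()}

posterior = prior
for flip in [True, True, True, False]:    # three heads, one tail
    posterior = update(posterior, flip)
# After 3 heads and 1 tail, the 0.75-bias hypothesis carries the most weight.
```

The resulting numbers are then blessed as "degrees of belief" in each hypothesis, exactly as the paragraph above describes.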


Shut Up And Guess

79 Yvain 21 July 2009 04:04AM

Related to: Extreme Rationality: It's Not That Great

A while back, I said provocatively that the rarefied sorts of rationality we study at Less Wrong hadn't helped me in my everyday life and probably hadn't helped you either. I got a lot of controversy but not a whole lot of good clear examples of getting some use out of rationality.

Today I can share one such example.

Consider a set of final examinations based around tests with the following characteristics:

* Each test has one hundred fifty true-or-false questions.
* The test is taken on a scan-tron which allows answers of "true", "false", and "don't know".
* Students get one point for each correct answer, zero points for each "don't know", and minus one half point for each incorrect answer.
* A score of >50% is "pass", >60% is "honors", >70% is "high honors".
* The questions are correspondingly difficult, so that even a very intelligent student is not expected to get much above 70. All students are expected to encounter at least a few dozen questions which they can answer only with very low confidence, or which they can't answer at all.

At what confidence level do you guess? At what confidence level do you answer "don't know"?
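The scoring rule above pins the answer down with a one-line expected-value calculation. The arithmetic below is mine, applied to the stated rules, not quoted from the post:

```python
# Expected score of answering at confidence p, under the stated rule:
# +1 for a correct answer, -1/2 for an incorrect one, 0 for "don't know".
def expected_score(p):
    return p * 1.0 + (1 - p) * (-0.5)

# Setting the expectation to zero gives the break-even point:
#   p - 0.5 * (1 - p) = 0   =>   p = 1/3
break_even = 1 / 3
```

At any confidence above one third, marking an answer beats "don't know" in expectation, a far lower guessing threshold than most test-takers' intuitions suggest.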


Fourth London Rationalist Meeting?

3 RichardKennaway 02 July 2009 09:56AM

It's been the first Sunday of the month so far, but I haven't seen any announcement for this month yet. There was a discussion, but no conclusion. Is anything happening?

ETA: This would have appeared a day and a half ago, but I did not notice that it had only been stored as a draft and not published. When logged in, it was impossible to notice that I was the only person seeing this. Feature request for this site: add a visual indication that something is only a draft, e.g. a "Publish" link, or the words "Unpublished draft" somewhere on the page.

Bad reasons for a rationalist to lose

30 matt 18 May 2009 10:57PM

Reply to: Practical Advice Backed By Deep Theories

Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might work for only some of us and are not backed by Deep Theories. Written in support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.

I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.
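That expected-utility comparison can be written down directly. A toy sketch, in which every number is a hypothetical placeholder rather than an estimate from the post:

```python
# A toy expected-utility check for trying a brain hack.
# All numbers are hypothetical placeholders for illustration only.
p_works = 0.15            # chance the hack works for you
value_if_works = 200.0    # hours of akrasia-cost saved if it works
trial_cost = 5.0          # hours spent running the trial itself
alternative_value = 10.0  # value of spending those hours on something else

ev_trial = p_works * value_if_works - trial_cost
# Try the hack iff its expected value beats the alternative use of the time.
worth_trying = ev_trial > alternative_value
```

The point is only the shape of the decision: the trial doesn't need a guaranteed or fully general hack to be worth running, just a better expected return than the alternative use of the time.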

So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

  • We need a goal: Eliezer has suggested "I want to hear how I can overcome akrasia - how I can have more willpower, or get more done with less mental pain". I'd push cost in with something like "to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it plus the expected profit on other activities I could undertake with that time".
  • We need some likelihood estimates:
    • Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?
    • Chance of a random brain hack working on subsequent trials (after the third - the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0
    • Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many, many people; your brother-in-law's homebrew brain hack is less well tried)
    • Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ?
      (can these books be judged by their covers? how does this chance vary with the type of exposure? what would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)
    • Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)
    • Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?
    • Chance that someone else will read up "on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas", all soon: ? (pretty small?)
    • What else do we need to know?
  • We need some time/cost estimates (these will vary greatly by proposed brain hack):
    • Time required to stage a personal experiment on the hack: ?
    • Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment?
    • What else do we need?

… and, what don't we need?

  • A way to reject the placebo effect - if it wins, use it. If it wins for you but wouldn't win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task - it's irrelevant to our goal.


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?

Religion, Mystery, and Warm, Soft Fuzzies

17 Psychohistorian 14 May 2009 11:41PM

Reaction to: Yudkowsky and Frank on Religious Experience, Yudkowsky and Frank On Religious Experience Pt 2, A Parable On Obsolete Ideologies

Frank's point got rather lost in all this. It seems to be quite simple: there's a warm fuzziness to life that science just doesn't seem to get, and some religious artwork touches on and stimulates this warm fuzziness, and hence is of value.1 Moreover, understanding this point seems rather important to being able to spread an ideology.

The main problem is viewing this warm fuzziness as a "mystery." This warm fuzziness, as an experience, is a reality. It's part of that set of things that doesn't go away no matter what you say or think about them. Women (or men) will still be alluring, food will still be delicious, and Michelangelo's David will still be beautiful, no matter how well you describe these phenomena. The view that shattering mysteries reduces their value is very much a result of religion trying to protect itself. EY is probably correct that science will one day destroy this mystery as it has so many others, but because it is an "experience we can't clearly describe" rather than an actual "mystery," the experience will remain. The argument is with the description, not the experience; the experience is real, and experiences of its nature are totally desirable.

The second, sub-point: Frank thinks that certain religious stories and artwork may be of artistic value. The selection of the story of Job is unfortunate, but both speakers value it for the same reason: its truth. One sees it as true (and inspiring) and likes it, the other sees it as false (and insidious) and hates it. I think both agree that if you put it on the shelf next to Tolkien, and rational atheists still buy it and enjoy it, hey, good for Job. And if not, well, throw it out with the rest of the trash.


Hardened Problems Make Brittle Models

51 cousin_it 06 May 2009 06:31PM

Consider a simple decision problem: you arrange a date with someone, you arrive on time, your partner isn't there. How long do you wait before giving up?

Humans naturally respond to this problem by acting outside the box. Wait a little then send a text message. If that option is unavailable, pluck a reasonable waiting time from cultural context, e.g. 15 minutes. If that option is unavailable...

Wait, what?

The toy problem was initially supposed to help us improve ourselves - to serve as a reasonable model of something in the real world. The natural human solution seemed too messy and unformalizable, so we progressively removed nuances to make the model more extreme. We introduced Omegas, billions of lives at stake, total informational isolation, and perfect predictors, finally arriving at some sadistic contraption that any normal human would run away from. But did the model stay useful and instructive? Or did we lose important detail along the way?

Many physical models, like gravity, have the nice property of stably approximating reality. Perturbing the positions of planets by one millimeter doesn't explode the Solar System the next second. Unfortunately, many of the models we're discussing here don't have this property. The worst offender yet seems to be Eliezer's "True PD", which requires the whole package of hostile psychopathic AIs, nuclear-scale payoffs and informational isolation; any natural out-of-the-box solution like giving the damn thing some paperclips or bargaining with it would ruin the game. The same pattern has recurred in discussions of Newcomb's Problem, where people have stated that any minuscule amount of introspection into Omega makes the problem "no longer Newcomb's". That naturally led to more ridiculous use of superpowers, like Alicorn's bead jar game where (AFAIU) the mention of Omega is only required to enforce a certain assumption about its thought mechanism that's wildly unrealistic for a human.

Artificially hardened logic problems make brittle models of reality.

So I'm making a modest proposal. If you invent an interesting decision problem, please, first model it as a parlor game between normal people with stakes of around ten dollars. If the attempt fails, you have acquired a bit of information about your concoction; don't ignore it outright.

Generalizing From One Example

259 Yvain 28 April 2009 10:00PM

Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective

"Everyone generalizes from one example. At least, I do."

   -- Vlad Taltos (Issola, Steven Brust)

My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example:

There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?

Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.

The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery1 to three percent of people completely unable to form mental images2.

Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.


Evangelical Rationality

36 CannibalSmith 20 April 2009 04:51AM

Spreading the Word prompted me to report back as promised.

I have two sisters aged 17 and 14, and mom and dad aged 40-something. I'm 22, male. We're all white and Latvian. I translated the articles as I read them.

I read Never Leave Your Room to the oldest sister and she expressed great interest in it.

I read Cached Selves to them all. When I got to the part about Greenskyers the older sister asserted "the sky is green" for fun. Later in the conversation I asked her, "Is the sky blue?", and her answer was "No. I mean, yes! Gah!" They all found real life examples of this quickly - it turns out this is how the older sister schmoozes money and stuff out of dad ("Can I have this discount cereal?" followed by "Can I have this expensive yogurt to go with my cereal?").

I started reading The Apologist and the Revolutionary to them but halfway through the article they asked "what's the practical application for us?", and I realized that I couldn't answer that question - it's just a piece of trivia. So I moved on.

I tried reading about near-far thing to them, but couldn't find a single good article that describes it concisely. Thus I stumbled around, and failed to convey the idea properly.

In the end I asked whether they'd like to hear similar stuff in the future, and the reply was a unanimous yes. I asked them why, in their opinion, they hadn't found this stuff by themselves, and the reason seems to be that they have no paths that lead to rationality stuff in their lives. Indeed, I found OB through Dresden Codak, which I found through Minus, which I found through some other webcomic forum. Nobody in my family reads webcomics, not to mention frequenting their forums.

The takeaway, I think, is this: We must establish non-geeky paths to rationality. Go and tell people how to not be suckers. Start with people who would listen to you. You don't have to advertise LW - just be +5 informative. Rationality stuff must enter the mass media: radio, TV, newspapers. If you are in a position to make that happen, act!

I would also like to see more articles like this one on LW - go, do something, report back.

My Way

31 Eliezer_Yudkowsky 17 April 2009 01:25AM

Previously in series: Bayesians vs. Barbarians
Followup to: Of Gender and Rationality, Beware of Other-Optimizing

There is no such thing as masculine probability theory or feminine decision theory.  In their pure form, the maths probably aren't even human.  But the human practice of rationality—the arts associated with, for example, motivating yourself, or compensating factors applied to overcome your own biases—these things can in principle differ from gender to gender, or from person to person.

My attention was first drawn to this possibility of individual differences in optimization (in general) by thinking about rationality and gender (in particular).  I've written rather more fiction than I've ever finished and published, including a story in which the main character, who happens to be the most rational person around, happens to be female.  I experienced no particular difficulty in writing a female character who happened to be a rationalist.  But she was not an obtrusive, explicit rationalist.  She was not Jeffreyssai.

And it occurred to me that I could not imagine how to write Jeffreyssai as a woman; his way of teaching is paternal, not maternal.  Even more, it occurred to me that in my writing there are women who are highly rational (on their way to other goals) but not women who are rationalists (as their primary, explicit role in the story).

It was at this point that I realized how much of my own take on rationality was specifically male, which hinted in turn that even more of it might be specifically Eliezer Yudkowsky.


Welcome to Less Wrong!

48 MBlume 16 April 2009 09:06AM

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, or how you found us. Tell us how you came to identify as a rationalist, or describe what it is you value and work to achieve.

If you'd like to meet other LWers in real life, there's a meetup thread and a Facebook group. If you've your own blog or other online presence, please feel free to link it. If you're confused about any of the terms used on this site, you might want to pay a visit to the LW Wiki, or simply ask a question in this thread.  Some of us have been having this conversation for a few years now, and we've developed a fairly specialized way of talking about some things. Don't worry -- you'll pick it up pretty quickly.

You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this; it happened to many of us. If you've any questions about karma or voting, please feel free to ask here.

If you've come to Less Wrong to teach us about a particular topic, this thread would be a great place to start the conversation, especially until you've worked up enough karma for a top level post. By posting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

A note for theists: you will find LW overtly atheist. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false. If you'd like to know how we came to this conclusion you may find these related posts a good starting point.

A couple technical notes: when leaving comments, you may notice a 'help' link below and to the right of the text box.  This will explain how to italicize, linkify, or quote bits of text. You'll also want to check your inbox, where you can always see whether people have left responses to your comments.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site.

(Note from MBlume: though my name is at the top of this page, the wording in various parts of the welcome message owes a debt to other LWers who've helped me considerably in working the kinks out)
