endoself comments on A History of Bayes' Theorem - Less Wrong

Post author: lukeprog 29 August 2011 07:04AM




Comment author: Kaj_Sotala 25 August 2011 11:09:12AM 26 points

I shared the link to this post on an IRC channel populated mostly by mathematically inclined CS majors. It provoked a bunch of discussion about the way frequentism/bayesianism is generally discussed on LW. Here are a few snippets from the conversation (nicknames other than my own left out; less relevant lines edited out):

11:03 < Person A> For fucks sake "And so at age 62, Laplace — the world's first Bayesian — converted to frequentism, which he used for the remaining 16 years of his life."
11:04 <@Guy B> well he believed that the results were the same
11:04 <@Guy B> counterexamples were invented only later
11:05 < Person A> Guy B: Still, I just hate the way that lesswrong talks about "bayesians" and "frequentists"
11:05 <@Guy B> Person A: oh, I misinterpreted you
11:06 < Person A> Every time yudkowsky writes "The Way of Bayes" i get a sudden urge to throw my laptop out of the window.
11:08 < Person A> Yudkowsky is a really good popular writer, but I hate the way he tries to create strange conflicts even where they don't exist.
11:10 <@Xuenay> I guess I should point out that the article in question wasn't written by Yudkowsky :P
11:10 <@Dude C> Xuenay: it was posted on lesswrong
11:11 <@Dude C> so obv we will talk about Yudkowski

11:13 <@Dude C> it's just that there is no conflict, there are just several ways to do that.
11:13 <@Dude C> several models
11:16 <@Dude C> uh, several modes
11:17 <@Dude C> or I guess several schools. w/e.
11:17 <@Entity D> it's like this stupid philosophical conflict over two mathematically valid ways of doing statistical inference, a conflict some people seem to take all too seriously
11:17 <@Guy B> IME self-described bayesians are always going on about this "conflict"
11:17 <@Guy B> while most people just concentrate on science
11:18 <@Entity D> Guy B: exactly
11:18 <@Dude C> and use appropriate methods where they are appropriate

Summing up, the general consensus on the channel is that the whole frequentist/bayesian conflict gets seriously and annoyingly exaggerated on LW, and that most people doing science are happy to use either methodology if it suits the task at hand. Those who really do care and could reasonably be described as 'frequentist' or 'bayesian' are a small minority, and LW's habit of constantly bringing the conflict up comes across as a way for posters to feel smugly superior to "those clueless frequentists". This consensus has persisted over an extended time, and has contributed to LW suffering from a lack of credibility in the eyes of many of the channel regulars.

Does anybody better versed in the debate have a comment?

Comment author: endoself 26 August 2011 01:25:13AM 8 points

I think this is due to Yudkowsky's focus on AI theory; an AI can't use discretion to choose the right method unless we formalize this discretion. Bayes' theorem is applicable to all inference problems, while frequentist methods have domains of applicability. This may seem philosophical to working statisticians - after all, Bayes' theorem is rather inefficient for many problems, so it may still be considered inapplicable in this sense - but programming an AI to use a frequentist method without a complete understanding of its domain of applicability could be disastrous, while that problem just does not exist for Bayesianism. There is the problem of choosing a prior, but that can be dealt with by using objective priors or Solomonoff induction.
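To make the "applicable to all inference problems" point concrete, here is a toy illustration (my own sketch, with made-up numbers, not anything from the comment above): Bayes' theorem is the same one-line update rule regardless of what the hypotheses are.

```python
# Toy illustration (hypothetical numbers): Bayes' theorem as a single
# general-purpose update rule over any finite set of hypotheses.

def bayes_update(prior, likelihood):
    """Posterior P(H|D) from a prior P(H) and likelihoods P(D|H)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}  # P(H) * P(D|H)
    evidence = sum(joint.values())                        # P(D), normalizer
    return {h: p / evidence for h, p in joint.items()}

# Two hypotheses about a coin: fair, or biased with P(heads) = 0.8.
prior = {"fair": 0.5, "biased": 0.5}
likelihood_heads = {"fair": 0.5, "biased": 0.8}

# After observing one head, the biased hypothesis is favoured:
# P(biased | heads) = 0.4 / 0.65, about 0.615.
posterior = bayes_update(prior, likelihood_heads)
```

The same function works unchanged whatever the hypotheses happen to be; the frequentist toolbox, by contrast, supplies different procedures for different problem classes.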

Comment author: lessdazed 26 August 2011 07:37:57AM 2 points

programming an AI to use a frequentist method without a complete understanding of its domain of applicability could be disastrous

I'm not sure what you meant by that, but as far as I can tell not explicitly using Bayesian reasoning makes AIs less functional, not unfriendly.

Comment author: endoself 26 August 2011 06:03:41PM 1 point

Yes, mostly that lesser meaning of disastrous, though an AI that almost works but has a few very wrong beliefs could be unfriendly. If I misunderstood your comment and you were actually asking for an example of a frequentist method failing, one of the simplest examples is a mistaken assumption of linearity.
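A minimal sketch of the linearity failure mentioned above (my own made-up example, not endoself's): an ordinary-least-squares line fit to data generated by a quadratic process extrapolates confidently and wrongly outside the observed range.

```python
# Sketch (hypothetical data): a method that assumes linearity gives
# confidently wrong extrapolations when the true process is nonlinear.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# True process is quadratic, y = x**2, but the model assumes a line.
xs = [0, 1, 2, 3, 4]
ys = [x ** 2 for x in xs]
a, b = fit_line(xs, ys)          # a = -2, b = 4 on this data

prediction_at_10 = a + b * 10    # linear extrapolation gives 38
truth_at_10 = 10 ** 2            # the process actually yields 100
```

Within the observed range the line looks adequate; the mistaken structural assumption only becomes disastrous when the method is applied outside the domain where it happens to work.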

Comment author: fool 30 August 2011 03:40:47AM 2 points

"There is the problem of choosing a prior, but that can be dealt with by using objective priors or Solomonoff induction."

Yeah, well. That of course is the core of what is dubious and disputed here. Really, Bayes' theorem itself is hardly controversial, and talking about it this way is pointless.

There's sort of a continuum here. A weak claim is that these priors can be an adequate model of uncertainty in many situations. Stronger and stronger claims will assert that this works in more and more situations, and the strongest claim is that these cover all forms of uncertainty in all situations. Lukeprog makes the strongest claim, by means of examples which I find rather sketchy relative to the strength of the claim.

Regarding Kaj Sotala's conversation, adherents of the weaker claim would be fine with the "use either methodology if it suits the task" attitude. This is less acceptable to those who think priors should be broadly applicable. And it is utterly unacceptable from the perspective of the strongest claim.

For that matter, "either" is incorrect (note that in the original conversation one participant actually talks about several approaches rather than two). There is lots of work on modeling uncertainty in non-frequentist and non-bayesian ways.

Comment author: endoself 30 August 2011 04:31:26AM 1 point

Anyone who bases decisions on a non-Bayesian model of uncertainty that is not equivalent to Bayesianism with some prior is vulnerable to Dutch books.
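To spell out the Dutch book argument with a toy example (my numbers, not endoself's): an agent whose credences in an event and its negation sum to more than 1 will accept a pair of bets that together guarantee a loss however the world turns out.

```python
# Toy Dutch book (hypothetical credences): incoherent beliefs about A and
# not-A are exploitable by a bookie, whatever actually happens.

credence_A = 0.6
credence_not_A = 0.6   # incoherent: 0.6 + 0.6 > 1

# The agent treats a bet paying 1 unit if the event occurs as fair at a
# price equal to its credence, so it willingly buys both bets.
cost = credence_A + credence_not_A   # pays 1.2 in total

# Exactly one bet pays off, whichever way the world goes.
payoff_if_A = 1.0
payoff_if_not_A = 1.0

loss_if_A = cost - payoff_if_A           # 0.2 lost if A occurs
loss_if_not_A = cost - payoff_if_not_A   # 0.2 lost if A does not occur
```

The guaranteed loss vanishes exactly when the credences are coherent, i.e. sum to 1 — which is the sense in which the betting argument pushes toward probabilistically coherent beliefs.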

Comment author: fool 30 August 2011 04:46:37AM 1 point

It seems not. Sniffnoy's recent thread asked precisely whether Savage's axioms can really be justified by Dutch book arguments.

Comment author: endoself 31 August 2011 01:06:25AM 1 point

I was thinking of the simpler case of someone who has already assigned utilities as required by the VNM axioms for the noncontroversial case of gambling with probabilities that are relative frequencies, but refuses on philosophical grounds to apply the expected utility decision procedure to other kinds of uncertainty.

(I do think the statement still stands in general. I don't have a complete proof but Savage's axioms get most of the way there.)

Comment author: fool 31 August 2011 02:03:10AM 2 points

On the thread cited I gave a three-state, two-outcome counterexample to P2 which does just that. With only two outcomes, a utility function is obviously not an issue. (It can be extended with an arbitrary number of "fair coins", for example, to satisfy P6, which covers your relative-frequency requirement here.)

My weak claim is that it is not vulnerable to "Dutch-book-type" arguments. My strong claim is that this behaviour is reasonable, even rational. The strong claim is being disputed on that thread. And of course we haven't agreed on any prior definition of reasonable or rational. But nobody has attempted to Dutch book me, and the weak claim is all that is needed to contradict your claim here.

Comment author: endoself 31 August 2011 06:10:26AM 0 points

Sorry, I didn't check that thread for posts by you. I replied there.