
lmm comments on Open Thread, April 27-May 4, 2014 - Less Wrong Discussion

Post author: NancyLebovitz 27 April 2014 08:34PM




Comment author: ChristianKl 01 May 2014 10:42:44PM 0 points

There are a bunch of issues involved. It's hard to talk about them because the term "Bayesianism" encompasses a wide array of ideas, and every time it's used it may refer to a different subset of that cluster of ideas.

Part of LW's purpose is to be a place to discuss how an AGI could be structured. As such, we care about the philosophical question of how you come to know that something is true, and there is an interest in going as basic as possible when looking at epistemology. There are issues about objective knowledge versus "subjective" Bayesian priors that are worth thinking about.

We live at a time when up to 70% of scientific research can't be replicated. Frequentism might not be to blame for all of that, but it plays its part. There are cases such as the Bem paper, where frequentist techniques suggested that porno-precognition is real but reanalysing Bem's data with Bayesian methods suggested it's not.

A further issue is that a lot of additional assumptions get loaded into the word "Bayesianism" when it's used on LessWrong. "What Bayesianism taught me" discusses a bunch of issues that are only indirectly related to Bayesian tools vs. frequentist tools.

Let's say I want to decide how much salt I should eat. I follow the consensus that salt is bad, and therefore have some prior that salt is bad. Then a new study comes along and says that low-salt diets are unhealthy. If I want to make good decisions I have to ask: how much should I update? There is no good formal way to make such decisions; we lack a framework for doing this. Bayes' rule is the answer that promises a solution to that problem. The alternative solution, waiting a few years and then reading a meta-review, is unsatisfying.
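The odds form of Bayes' rule makes the salt example concrete. A minimal sketch, with a made-up prior and likelihood ratio; in practice the hard part is exactly the step the numbers here gloss over, namely justifying the likelihood ratio:

```python
# Odds-form Bayesian updating (all numbers hypothetical).
def update_odds(prior_prob, likelihood_ratio):
    """Posterior probability after one piece of evidence.
    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothesis: "a high-salt diet is harmful"; prior taken from the consensus.
prior = 0.9
# Suppose the new study reporting harm from *low*-salt diets is three times
# as likely if salt is not harmful as if it is (LR = 1/3, a made-up number).
posterior = update_odds(prior, 1 / 3)
# posterior = 0.75: the update is mechanical once the likelihood ratio is
# fixed, but nothing formal tells you the ratio should be 1/3 and not 1/30.
```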

In the absence of a formal way to do the reasoning, many people use informal ways of updating on new evidence. Cognitive bias research suggests that the average person isn't good at this.

Just understand the usage of each tool, and the fact that virtually any model of something that happens in the real world is going to be misspecified.

That sentence is easy to say, but it effectively means there is no such thing as pure, absolute, objective truth: if you use tools A you get truth X, and if you use tools B you get truth Y, and neither X nor Y is "more true". That's not an appealing conclusion for many people.

Comment author: lmm 05 May 2014 06:20:57PM 3 points

We live at a time when up to 70% of scientific research can't be replicated. Frequentism might not be to blame for all of that, but it plays its part. There are cases such as the Bem paper, where frequentist techniques suggested that porno-precognition is real but reanalysing Bem's data with Bayesian methods suggested it's not.

It seems to me that there's a bigger risk from Bayesian methods. They're more sensitive to small effect sizes (doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against; doing a Bayesian one it might be evidence for). If the prior isn't swamped then it's important, and we don't have good best practices for choosing priors; if the prior is swamped then the Bayesianism isn't terribly relevant. And simply having more statistical tools available and giving researchers more choices makes it easier for bias to creep in.
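The sensitivity to small effects can be illustrated with a toy Bayes factor: a point null against a hypothetical small-effect alternative, for a study that just misses significance. All numbers here are illustrative, not from any real analysis:

```python
import math

def normal_pdf(x, mu=0.0, sd=1.0):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Observed z-statistic of 1.645: two-sided p = 0.10, a "failed" study at alpha=0.05.
z = 1.645
# Point alternative: a small true effect of one standard error (hypothetical).
bf_10 = normal_pdf(z, mu=1.0) / normal_pdf(z, mu=0.0)
# bf_10 ~ 3.1 > 1: the same data that count as a null result under the
# 0.05 threshold favour the small-effect hypothesis over the null here.
```

How strongly it favours the effect depends on the alternative you chose, which is the prior-choice problem restated.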

Bayes' theorem is true (duh), and I'd accept that there are situations where Bayesian analysis is more effective than frequentist, but I think it would do more harm than good in formal science.

Comment author: gwern 06 May 2014 02:44:26AM 3 points

doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against

Why would you do that? If I got a p=0.1 result doing a meta-analysis, I wouldn't be surprised at all, since things like random-effects models mean it takes a lot of data to turn in a positive result at the arbitrary threshold of 0.05. And as it happens, in some areas an alpha of 0.1 is acceptable: for example, because of the poor power of tests for publication bias, you can find respected people like Ioannidis using that particular threshold (I believe I last saw it in his paper on the binomial test for publication bias).
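The way random effects raise the bar can be sketched with toy numbers: an inverse-variance fixed-effect pool against a DerSimonian-Laird random-effects pool over the same hypothetical studies:

```python
import math

def meta_analysis(effects, variances):
    """Return two-sided p-values for the fixed-effect (inverse-variance) and
    DerSimonian-Laird random-effects pooled estimates."""
    k = len(effects)
    w = [1 / v for v in variances]
    sw = sum(w)
    pooled_fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (y - pooled_fixed) ** 2 for wi, y in zip(w, effects))
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    wr = [1 / (v + tau2) for v in variances]
    pooled_random = sum(wi * y for wi, y in zip(wr, effects)) / sum(wr)

    def p_value(estimate, weight_sum):
        z = abs(estimate) * math.sqrt(weight_sum)  # estimate / its SE
        return math.erfc(z / math.sqrt(2))         # two-sided normal p

    return p_value(pooled_fixed, sw), p_value(pooled_random, sum(wr))

# Hypothetical heterogeneous studies (effect estimates and their variances).
p_fixed, p_random = meta_analysis([0.2, -0.1, 0.3, 0.0], [0.01] * 4)
# With tau^2 > 0 the random-effects p-value is larger: here the same data
# clear the 0.05 bar under fixed effects but land well above it under random.
```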

If people really acted that way, we'd see an odd phenomenon as people watched successive meta-analyses on whether grapes cure cancer: p=0.15 (decreases belief that grapes cure cancer), p=0.10 (decreases), p=0.07 (decreases); then someone points out that random effects is inappropriate because the studies show very low heterogeneity, and the better fixed-effects analysis suddenly reveals that the true p-value is now at 0.05 (everyone's beliefs radically flip as they go from 'grapes have been refuted and are quack alt medicine!' to 'grapes cure cancer! quick, let's apply to the FDA under a fast track'). Instead, we see people acting more like Bayesians...

And simply having more statistical tools available and giving researchers more choices makes it easier for bias to creep in.

Is that a guess, or a fact based on meta-studies showing that papers using Bayesian methods cook the books more than NHST users do with p-hacking etc.?

Comment author: gwern 10 October 2014 02:10:38AM * 0 points

everyone's beliefs radically flip as they go from 'grapes have been refuted and are quack alt medicine!' to 'grapes cure cancer! quick, let's apply to the FDA under a fast track'

Turns out I was overoptimistic, and in some cases people have done just that: interpreted a failure to reject the null (due to insufficient power, despite the data being evidence for an effect) as disproving the alternative, across a series of studies which all pointed the same way, only changing their minds when an individually big enough study came out. Hauer says this is exactly what happened with a series of studies on traffic mortalities.

(As if driving didn't terrify me enough, now I realize traffic laws and road safety designs are being engineered by vulgarized NHST practitioners who apparently don't know how to patch the paradigm up with emphasis on power or meta-analysis.)

Comment author: Douglas_Knight 13 May 2014 06:53:50AM 0 points

doing a frequentist meta-analysis you'd count a study that got a p=0.1 result as evidence against

No. The most basic version of meta-analysis is, roughly, that if you have two p=0.1 studies, the combined conclusion is p=0.01.
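The calibrated version of this product rule is Fisher's method for combining independent p-values. A sketch (the bare product 0.01 somewhat overstates the combined evidence, but the direction of the point stands: two weak results reinforce rather than cancel):

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method: -2 * sum(ln p) ~ chi-square with 2k df under the null.
    Uses the closed-form chi-square survival function for even df."""
    k = len(p_values)
    stat = -2.0 * sum(math.log(p) for p in p_values)
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (stat / 2) / i
        total += term
    return math.exp(-stat / 2) * total

combined = fisher_combined_p([0.1, 0.1])
# combined ~ 0.056: stronger than either p=0.1 study alone, though weaker
# than the naive product of 0.01.
```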