
IlyaShpitser comments on The Logic of the Hypothesis Test: A Steel Man - Less Wrong Discussion

Post author: Matt_Simpson | 21 February 2013 06:19AM | 5 points


Comment author: IlyaShpitser | 21 February 2013 09:07:55AM | 8 points

The real bullet that gets bitten to avoid induction is in step 1 (which is almost always a false dilemma). I see that lots of other commenters noticed this too.

Comment author: Matt_Simpson | 21 February 2013 07:05:07PM | 0 points

I don't see how this is any different from, say, Bayesian inference. Ultimately your inferences depend on the model being true. You might add a bunch of complications to the model to take many possibilities into account, so that this is less of a problem, but ultimately your inferences rely on what the model says, and if your model isn't (approximately) true, you're in trouble whether you're doing Bayesian inference, NHST, or anything else.

(Though I suppose you could bite the bullet and say "you're right, Bayesian inference isn't attempting to do induction either." That would honestly surprise me.)

Edit: This is to say that I think you (and others) have a good argument for building better models - and maybe NHST practitioners are particularly bad about this - but I'm not talking about any specific model or the details of what NHST practitioners actually do. I'm talking about the general idea of hypothesis testing.
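To make the shared model-dependence concrete, here is a minimal sketch (the data, the assumed normal model, and all of the numbers are hypothetical, chosen only for illustration): a NHST p-value and a Bayesian posterior probability are both computed under the same assumed normal model, while the data are actually drawn from a heavier-tailed distribution, so both conclusions lean on the same modelling assumption.

```python
# Minimal illustrative sketch: both the frequentist p-value and the Bayesian
# posterior probability below are conditional on the same (here, wrong)
# assumption that the data are i.i.d. Normal. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.standard_t(df=2, size=50) + 0.5       # true process: shifted t(2), heavy-tailed

# Frequentist / NHST under the assumed normal model: test H0: mu = 0
t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)

# Bayesian under the same normal model, with known sigma and a flat prior on mu,
# so the posterior for mu is Normal(xbar, sigma^2 / n)
sigma_assumed = 1.0                              # an assumption baked into the model
n, xbar = len(data), data.mean()
post_sd = sigma_assumed / np.sqrt(n)
prob_mu_positive = 1 - stats.norm.cdf(0.0, loc=xbar, scale=post_sd)

print(f"p-value under the assumed normal model:  {p_value:.3f}")
print(f"P(mu > 0 | data, assumed normal model):  {prob_mu_positive:.3f}")
# Both numbers are only as trustworthy as the modelling assumption they share.
```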

Comment author: IlyaShpitser | 22 February 2013 12:11:16PM | 0 points

Just to make sure we are using the same terminology, what do you mean by "model" (a statistical model, e.g. a set of densities?) and "induction"?

Comment author: Matt_Simpson | 22 February 2013 05:13:42PM | 0 points

By model I do mean a statistical model. I'm not being terribly precise with the term "induction" but I mean something like "drawing conclusions from observation or data."

Comment author: IlyaShpitser | 23 February 2013 12:53:35PM | 1 point

Ok. If a Bayesian picks among a set of models, then it is true that (s)he assumes the disjunctive model is true (that is, the set of densities that came from either H0 or H1 or H2 or ...), but I suppose any procedure for "drawing conclusions from data" must assume something like that.
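As an illustrative sketch of the disjunctive-model point (the coin-flip data, candidate biases, and uniform prior here are hypothetical): posterior model probabilities are renormalized over the candidate set alone, so they implicitly condition on one of H0, H1, H2 having generated the data.

```python
# Illustrative sketch (hypothetical data and hypotheses): posterior probabilities
# over a finite set {H0, H1, H2} are normalized within that set, so the inference
# assumes the disjunction "H0 or H1 or H2" is true.
from scipy import stats

heads, n = 70, 100                                  # hypothetical coin-flip data
candidate_p = {"H0": 0.5, "H1": 0.6, "H2": 0.9}     # hypothetical candidate biases
prior = {h: 1.0 / 3.0 for h in candidate_p}         # uniform prior over the three models

# Likelihood of the data under each candidate model
lik = {h: stats.binom.pmf(heads, n, p) for h, p in candidate_p.items()}

# Posterior over the candidate set (normalizing constant sums only over H0..H2)
evidence = sum(prior[h] * lik[h] for h in candidate_p)
posterior = {h: prior[h] * lik[h] / evidence for h in candidate_p}

print(posterior)  # sums to 1 by construction, even if the true bias (say 0.7)
                  # is outside the candidate set: the conclusion is conditional
                  # on the disjunctive model.
```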

I don't think there is a substantial difference between how Bayesians and frequentists deal with induction, so in that sense I am biting the bullet you mention. The real difference is that frequentists make universally quantified statements, while Bayesians make statements about functions of the posterior.
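One standard way to write that contrast down (my gloss, in textbook notation): the frequentist guarantee is quantified over every parameter value, while the Bayesian statement is a probability computed from the posterior given the observed data.

```latex
% Frequentist coverage: a statement quantified over all parameter values theta.
% Bayesian credibility: a statement about the posterior, given the observed data x.
\[
  \text{Frequentist: } \forall\,\theta,\quad P_\theta\bigl(\theta \in C(X)\bigr) \ge 0.95
  \qquad\qquad
  \text{Bayesian: } P\bigl(\theta \in C \mid X = x\bigr) = 0.95
\]
```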