
Matt_Simpson comments on The Logic of the Hypothesis Test: A Steel Man - Less Wrong Discussion

Post author: Matt_Simpson 21 February 2013 06:19AM




Comment author: Matt_Simpson 21 February 2013 07:45:28AM 4 points

I think that's a bad assumption, and if you're trying to steelman, you should avoid relying on bad assumptions.

In any given problem the model is almost certainly false, but whether you use frequentist or Bayesian inference you have to implicitly assume that it's (approximately) true in order to actually conduct inference. Saying "don't assume the model is true because it isn't" is unhelpful and a nonstarter. If you actually want to get an answer, you have to assume something even if you know it isn't quite right.
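A quick numerical sketch of that point (illustrative only; the distribution, sample size, and seed are my own arbitrary assumptions, not anything from the thread): normal-theory confidence intervals implicitly assume the model, yet they remain usable when the model is only approximately true. Here the data come from a skewed exponential distribution, but the normal-theory interval's coverage stays close to its nominal 95%.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_ci_covers(n=50, true_mean=1.0, trials=2000):
    """Fraction of normal-theory 95% CIs that cover the true mean."""
    hits = 0
    for _ in range(trials):
        # Truth: skewed exponential data, not normal
        x = rng.exponential(scale=true_mean, size=n)
        m, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
        # Interval built under the (false) normality assumption
        lo, hi = m - 1.96 * se, m + 1.96 * se
        hits += (lo <= true_mean <= hi)
    return hits / trials

coverage = normal_ci_covers()  # close to, though not exactly, 0.95
```

The point of the sketch: inference proceeds by assuming the model anyway, and when the model is approximately right, the answers are approximately right.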

Going from 4 to 5 looks dependent on an inductive theory to me.

Why yes it does. Did you read what I wrote about that?

Comment author: buybuydandavis 26 February 2013 12:27:13AM 0 points

Saying "don't assume the model is true because it isn't" is unhelpful and a nonstarter.

It starts fine for me.

Testing just the null hypothesis is the least one can do. Then one can test the alternative; that way you at least get a likelihood ratio. You can add priors or not. Then one can build in terms modeling your ignorance.
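The null-vs-alternative likelihood ratio described here can be sketched as follows. The specific hypotheses (normal means of 0 and 1), the unit variance, and the simulated data are illustrative assumptions of mine, not anything specified in the thread.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated data, generated near the alternative for illustration
x = rng.normal(loc=1.0, scale=1.0, size=50)

def log_likelihood(data, mu, sigma=1.0):
    # Normal log-likelihood of the data for a given mean
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (data - mu) ** 2 / (2 * sigma**2))

# Compare a specified alternative (mu=1) against the null (mu=0)
log_lr = log_likelihood(x, mu=1.0) - log_likelihood(x, mu=0.0)
# log_lr > 0 favors the alternative; adding log-prior odds to this
# quantity is the step that would make the comparison Bayesian.
```

This is the move from modeling only the null's likelihood to comparing two modeled likelihoods; priors are an optional further step, as the comment says.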

See previous comment: http://lesswrong.com/lw/gqt/the_logic_of_the_hypothesis_test_a_steel_man/8ioc

One could keep going and going on modeling ignorance, but few even get that far, and I suspect it isn't helpful to go further.

Why yes it does. Did you read what I wrote about that?

Yes. It conflicted with what you subsequently wrote:

I'm avoiding claiming any inductive theory is correct

Comment author: Matt_Simpson 26 February 2013 09:01:49PM 1 point

Testing just the null hypothesis is the least one can do. Then one can test the alternative; that way you at least get a likelihood ratio. You can add priors or not. Then one can build in terms modeling your ignorance.

This doesn't address the problem that the truth isn't in your hypothesis space (which is what I thought you were criticizing me for). If your model assumes constant variance, for example, when in truth there's nonconstant variance, the truth is outside your hypothesis space. You're not even considering it as a possibility. What does considering likelihood ratios of the hypotheses in your hypothesis space do to help you out here?
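The constant-variance example above can be made concrete with a small simulation (the model, data-generating process, and seed are illustrative assumptions): every member of the assumed family has constant residual variance, while the truth has variance growing with x, so no parameter setting reaches the truth.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1, 10, 200)
# Truth: noise standard deviation grows with x (nonconstant variance)
y = 2.0 * x + rng.normal(scale=0.5 * x)

# Fit a line through the origin by least squares, which is the MLE
# under the (false) constant-variance normal model
slope = np.sum(x * y) / np.sum(x * x)
resid = y - slope * x

# Under the constant-variance assumption these spreads should match
sd_low = resid[x < 5].std(ddof=1)    # residual spread at small x
sd_high = resid[x >= 5].std(ddof=1)  # residual spread at large x
```

No likelihood ratio among constant-variance hypotheses compares against the heteroscedastic truth, because that truth was never a candidate.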

See previous comment: http://lesswrong.com/lw/gqt/the_logic_of_the_hypothesis_test_a_steel_man/8ioc

Reading that thread, I think jsteinart is right: if the truth is outside your hypothesis space, you're screwed whether you're a Bayesian or a frequentist (which is a much more succinct way of putting my response to you). Setting up an "everything else" hypothesis doesn't really help, because you can't compute a likelihood for it without assumptions that, in all probability, expose you to the very problem you're trying to avoid.

Yes. It conflicted with what you subsequently wrote:

Are you happier if I say that Bayes is a "thick" inductive theory and that NHST can be viewed as induction with a "thin" theory which therefore keeps you from committing yourself to as much? (I do acknowledge that others treat NHST as a "thick" theory and that this difference seems like it should result in differences in the details of actually doing hypothesis tests.)

Comment author: buybuydandavis 26 February 2013 10:08:23PM 0 points

What does considering likelihood ratios of the hypotheses in your hypothesis space do to help you out here?

The likelihood ratio was for comparing the hypotheses under consideration, the null and the alternative. My point is that the likelihood of the alternative isn't taken into consideration at all. Prior to anything Bayesian, hypothesis testing moved from modeling only the likelihood of the null to also modeling the likelihood of a specified alternative, and comparing the two.

if the truth is outside of your hypothesis space, you're screwed no matter if you're a Bayesian or a frequentist

Therefore, you put an error placeholder of appropriate magnitude on "it's out of my hypothesis space," so that unreasonable results have some systematic check.
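One way to read that "error placeholder" idea is as a catch-all hypothesis whose likelihood is a deliberately vague density. The sketch below is my own hedged interpretation; the uniform bounds, the prior weights, and the two point hypotheses are all illustrative assumptions, not anything stated in the thread. When the data are unreasonable under every modeled hypothesis, posterior mass flows to the catch-all, which is the systematic check.

```python
import numpy as np

def posterior_over_hypotheses(x, prior_other=0.01):
    """Posterior over [H0, H1, H_other] given data x."""
    def norm_loglik(mu):
        # Unit-variance normal log-likelihood
        return np.sum(-0.5 * np.log(2 * np.pi) - (x - mu) ** 2 / 2)
    # Catch-all placeholder: each point uniform on [-50, 50], density 1/100
    loglik = np.array([norm_loglik(0.0), norm_loglik(1.0),
                       len(x) * np.log(1 / 100)])
    log_prior = np.log([(1 - prior_other) / 2,
                        (1 - prior_other) / 2,
                        prior_other])
    w = loglik + log_prior
    w -= w.max()          # stabilize before exponentiating
    p = np.exp(w)
    return p / p.sum()

# Ordinary data: a modeled hypothesis wins; the catch-all stays negligible
ordinary = posterior_over_hypotheses(np.array([0.1, -0.3, 0.2]))
# Bizarre data: both modeled hypotheses are implausible, so the
# catch-all absorbs nearly all the posterior mass
bizarre = posterior_over_hypotheses(np.array([30.0, 31.0, 29.5]))
```

The design choice is that the placeholder's vague density costs little when the modeled hypotheses fit, but dominates when they fail badly.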

And the difference between Bayesian inference and NHST isn't primarily how many assumptions you've committed to (enormous in both cases), but how many of those assumptions you've identified, and how you've specified them.