kim0 comments on Even if you have a nail, not all hammers are the same - Less Wrong

95 Post author: PhilGoetz 29 March 2010 06:09PM


Comments (125)


Comment author: Matt_Simpson 29 March 2010 09:22:24PM *  4 points

A better methodology would have been to use piecewise (or "hockey-stick") regression, which assumes the data is broken into two sections (typically one sloping downward and one sloping upward), tries to find the right breakpoint, and performs a separate linear regression on each side of the break, with the two lines meeting at the break. (I almost called this "The case of the missing hockey stick," but thought that would give the answer away.)
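As a rough sketch (my own illustration, not from the post), the hockey-stick idea can be implemented by scanning candidate breakpoints and fitting the continuous two-piece model y = b0 + b1*x + b2*max(x - c, 0) at each one; the function name and candidate grid here are arbitrary choices:

```python
# Piecewise ("hockey-stick") regression sketch: for each candidate
# breakpoint c, fit y = b0 + b1*x + b2*max(x - c, 0), which is linear
# on each side of c and continuous at c; keep the c that minimizes
# squared error.
import numpy as np

def hockey_stick_fit(x, y):
    best = None
    for c in np.linspace(x.min(), x.max(), 50)[1:-1]:  # candidate breaks
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, c, coef)
    return best[1], best[2]  # breakpoint and (b0, b1, b2)

# U-shaped data: slopes down, then up, with a kink near x = 5
x = np.linspace(0, 10, 200)
y = np.where(x < 5, 10 - 2 * x, 2 * x - 10)
c, coef = hockey_stick_fit(x, y)
# c lands near the true kink at 5; slope is about -2 on the left
# and -2 + b2 = +2 on the right.
```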

An even better methodology would be to allow for higher-order terms in the regression model. Adding squared terms, the model would look like this:

y = b0 + b1*x + b2*x^2 + error

or, for the mean response,

E(y) = b0 + b1*x + b2*x^2
This would allow for those nice-looking curves you were talking about. And it can be combined with logistic regression. Really, regression is very flexible; there's no excuse for what they did.
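The point that this is still "linear" regression can be made concrete: the model is linear in the coefficients even though it curves in x, so adding an x² column to the design matrix is all it takes (an illustrative sketch of my own):

```python
# Quadratic regression is still linear regression: linear in the
# coefficients (b0, b1, b2), curved in x. Fit a U-shaped response by
# adding an x^2 column to the design matrix.
import numpy as np

x = np.linspace(-3, 3, 100)
y = 1.0 - 0.5 * x + 2.0 * x ** 2          # U-shaped "true" response
X = np.column_stack([np.ones_like(x), x, x ** 2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
# With noiseless data the least-squares fit recovers the
# coefficients (1.0, -0.5, 2.0) essentially exactly.
```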

Also, the scientists could have done a little model checking. If what Phil says about the U/J-shaped response curve is true, the first-order model would have been rejected by some sensible model selection criterion (AIC, BIC, stepwise selection, a lack-of-fit F test, etc.).
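To illustrate the model-checking point (a toy example of my own, using the Gaussian AIC formula n·log(SSE/n) + 2k up to a constant): when the true response is U-shaped, the straight-line fit loses badly to the quadratic fit under AIC.

```python
# Toy model check: if the true response is U-shaped, a first-order
# (straight-line) fit should lose to a quadratic fit under AIC.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = (x - 1) ** 2 + rng.normal(0, 0.5, x.size)   # U-shape plus noise

def aic(deg):
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    sse = float(np.sum(resid ** 2))
    k = deg + 2          # polynomial coefficients + error variance
    return x.size * np.log(sse / x.size) + 2 * k

aic_linear, aic_quadratic = aic(1), aic(2)
# The quadratic model has a much lower AIC, so a routine model
# comparison would have flagged the straight-line fit.
```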

related side note: In my grad stat classes, "linear regression" usually includes things like my example above - i.e., linear functions of the (potentially transformed) explanatory variables, including higher-order terms. Is this different from how the term is widely used?

unrelated side note: is there a way to type pretty math in the comments?

followup question: are scientists outside of the field of statistics really this dumb when it comes to statistics? It seems like they see their standard methods (i.e., regression) as black boxes that take data as an input and then output answers. Maybe my impression is skewed by the examples popping up here on LW.

Comment author: kim0 30 March 2010 09:54:54AM 1 point

Yes. Quadratic regression is often better. The problem is that the number of coefficients to adjust in the model gets squared, which goes against Ockham's razor. This is precisely the problem I am working on these days, though in the context of the oil industry.
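To put that growth in numbers (my own illustration, with a hypothetical helper name): a full second-order model in d predictors needs an intercept, d linear terms, d squares, and d·(d-1)/2 cross-products, so the coefficient count grows like d².

```python
# Coefficient count for a full second-order model in d predictors:
# 1 intercept + d linear terms + d squared terms + d*(d-1)/2
# cross-product terms. The count grows quadratically in d.
def n_quadratic_terms(d):
    return 1 + 2 * d + d * (d - 1) // 2

for d in (2, 5, 10, 20):
    print(d, n_quadratic_terms(d))
```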

Comment author: Matt_Simpson 30 March 2010 07:11:36PM 0 points

It's not too difficult to check whether adding the extra terms improves the regression; in my original comment I listed AIC and BIC, among others. On the other hand, different diagnostics will give different answers, so there's the question of which diagnostic to trust when they disagree. I haven't learned much about regression diagnostics yet, but at the moment they all seem ad hoc (maybe because I haven't seen the theory behind them).