A better methodology would have been piecewise (or "hockey-stick") regression, which assumes the data falls into two sections (typically one sloping downward and one sloping upward), finds the best breakpoint, and performs a separate linear regression on each side, with the two fits meeting at the break. (I almost called this "The case of the missing hockey-stick", but thought that would give the answer away.)
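To make the idea concrete, here's a minimal numpy sketch of one common way to do this (a grid search over breakpoints with a hinge term, on made-up toy data — not the scientists' actual data or method):

```python
import numpy as np

def hockey_stick_fit(x, y, candidates=None):
    """Continuous piecewise-linear ("hockey-stick") regression.

    Fits y ~ b0 + b1*x + b2*max(0, x - c) for each candidate
    breakpoint c and keeps the one with the lowest residual sum
    of squares. The hinge term forces the two segments to meet
    at the break, so the fitted curve is continuous.
    """
    if candidates is None:
        candidates = np.quantile(x, np.linspace(0.1, 0.9, 50))
    best = None
    for c in candidates:
        # Design matrix: intercept, x, and the hinge max(0, x - c)
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        rss = float(resid @ resid)
        if best is None or rss < best[0]:
            best = (rss, c, beta)
    return best  # (rss, breakpoint, coefficients)

# Toy U-shaped data: slope -1 below x = 5, slope +1 above
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.abs(x - 5) + rng.normal(scale=0.1, size=x.size)

rss, c, beta = hockey_stick_fit(x, y)
# c lands near the true break at 5; beta[1] is the left slope
# and beta[1] + beta[2] is the right slope.
```

A naive straight-line fit to that same data would report a slope near zero, which is exactly the "no effect" mistake.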
An even better methodology would be to allow for higher-order terms in the regression model. Adding squared terms, the model would look like this:

y = b0 + b1*x + b2*x^2 + error

or, with more explanatory variables,

y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + error
This would allow for those nice-looking curves you were talking about. And it can be combined with logistic regression. Really, regression is very flexible; there's no excuse for what they did.
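The key point is that this is still "linear" regression: the model is linear in the coefficients, even though it is quadratic in x, so ordinary least squares handles it directly. A quick sketch on toy data (invented here for illustration):

```python
import numpy as np

# Toy data generated from a known quadratic: y = 2 - 0.5x + 1.5x^2 + noise
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 100)
y = 2.0 - 0.5 * x + 1.5 * x**2 + rng.normal(scale=0.2, size=x.size)

# Augment the design matrix with a squared column; the fit is
# still plain OLS because the model is linear in the betas.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta recovers approximately [2.0, -0.5, 1.5]
```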
Also, the scientists could have done a little model checking. If what Phil says about the U/J-shaped response curve is true, the first-order model would have been rejected by any sensible model selection criterion (AIC, BIC, stepwise selection, lack-of-fit F test, etc.).
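For instance, a bare-bones AIC comparison (using the standard Gaussian-OLS formula, on toy U-shaped data I made up) would flag the problem immediately:

```python
import numpy as np

def ols_aic(X, y):
    """AIC for Gaussian OLS: n*log(RSS/n) + 2k (up to an additive constant)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

# Toy U-shaped truth: y = (x - 1)^2 + noise
rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 120)
y = (x - 1) ** 2 + rng.normal(scale=0.3, size=x.size)

linear = np.column_stack([np.ones_like(x), x])
quad = np.column_stack([np.ones_like(x), x, x**2])

# The quadratic model wins (much lower AIC) on U-shaped data,
# so the first-order model would be rejected.
```

BIC or a lack-of-fit F test would tell the same story here.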
related side note: In my grad stats classes, "linear regression" usually includes things like my example above — that is, linear functions of the (potentially transformed) explanatory variables, including higher-order terms. Is this different from how the term is widely used?
unrelated side note: is there a way to type pretty math in the comments?
follow-up question: Are scientists outside the field of statistics really this clueless when it comes to statistics? It seems like they treat their standard methods (i.e., regression) as black boxes that take data as input and output answers. Maybe my impression is skewed by the examples popping up here on LW.
Good point, I hadn't even thought of that implication.
You all are quite good at picking up the implications, which means my post worked.