Unnamed comments on [QUESTION]: Academic social science and machine learning - Less Wrong Discussion

11 Post author: VipulNaik 19 July 2014 03:13PM

Comment author: Unnamed 19 July 2014 07:42:04PM 10 points

(6) Another possibility (related to (2) and (4)) is that academic social scientists are primarily in the business of sharing information with other academics in their field, so they tend to rely on tools that are already standard within that field. A paper that uses complicated statistics that its audience doesn't understand will have less impact than a paper that makes the same point using the field's standard tools. It may also have trouble making it through peer review, if the reviewers lack the technical knowledge to evaluate it.

So academics have little incentive to stay at the cutting edge of complicated new techniques. People who are in the business of building things to get results have a stronger incentive to adopt any new method that offers a technical advantage.

Comment author: Algernoq 24 July 2014 01:22:52AM 1 point

I agree the above is true in nearly all cases. In some fields (economics, for instance), however, some papers try to signal value by using needlessly complicated statistics borrowed from other fields.