mkf comments on Stupid Questions May 2015 - Less Wrong

10 Post author: Gondolinian 01 May 2015 05:28PM


Comment author: IlyaShpitser 01 May 2015 08:38:41PM 7 points [-]

Causal inference research.

Comment author: mkf 03 May 2015 04:46:33PM 2 points [-]

Could you elaborate?

Comment author: IlyaShpitser 03 May 2015 06:05:30PM *  4 points [-]

Sure. I am not an MD, but this is my view as an outsider: the medical profession is quite conservative, and people who publish medical papers have an incentive to publish "fast and loose" rather than be very careful (because, hey, you can always write another paper later if a better method comes along!)


Because medicine deals with people, you often can't randomize treatment assignment in your studies (for ethical reasons), so much of your evidence takes the form of observational data, where you have a ton of confounding of various types. Causal inference can help people make proper use of observational data. This is important -- most data we have is observational, and if you are not careful, you are going to get garbage out of it. For example, a fairly recent paper by Robins et al. basically showed that their way of adjusting for confounding in observational data was correct by reproducing a result found in an RCT.
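A minimal simulation sketches the point (all numbers here are made up): a confounder like disease severity drives both who gets treated and who has bad outcomes, so the naive treated-vs-untreated comparison is badly biased even when the true treatment effect is zero, while stratifying on the confounder (backdoor adjustment) recovers it.

```python
import random

random.seed(0)

# Hypothetical setup: severity Z raises both the chance of treatment T
# and the chance of a bad outcome Y. The true effect of T on Y is zero.
n = 100_000
data = []
for _ in range(n):
    z = random.random() < 0.5                   # severity: True = sick
    t = random.random() < (0.8 if z else 0.2)   # sicker people get treated more
    y = random.random() < (0.6 if z else 0.1)   # outcome depends only on severity
    data.append((z, t, y))

def mean_y(rows):
    return sum(y for _, _, y in rows) / len(rows)

treated = [r for r in data if r[1]]
control = [r for r in data if not r[1]]
# Naive comparison: about 0.3 in this setup, despite a true effect of zero.
print("naive difference:   ", mean_y(treated) - mean_y(control))

# Backdoor adjustment: compare within strata of Z, then average over P(Z).
adj = 0.0
for z_val in (True, False):
    stratum = [r for r in data if r[0] == z_val]
    t1 = [r for r in stratum if r[1]]
    t0 = [r for r in stratum if not r[1]]
    adj += (len(stratum) / n) * (mean_y(t1) - mean_y(t0))
print("adjusted difference:", adj)  # close to the true effect, zero
```

The adjustment is only valid because Z here blocks every backdoor path by construction; with real observational data, knowing *which* variables to adjust for is exactly the hard part that causal inference methods address.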

There is room both for people to go to graduate school and work on new methods for properly dealing with observational data for drawing causal conclusions, and for popularizers.

Popularizing this stuff is a lot of what Pearl does, and is also partly the purpose of Hernán and Robins's new book:

https://www.facebook.com/causalinference


Full disclosure: obviously this is my area, so I am going to say that. So don't take my word for it :).

Comment author: Kazuo_Thow 03 May 2015 09:31:58PM 2 points [-]

Here on Less Wrong there are a significant number of mathematically inclined software engineers who know some probability theory, meaning they've read/worked through at least one of Jaynes and Pearl but may not have gone to graduate school. How could someone with this background contribute to making causal inference more accessible to researchers? Any tools that are particularly under-developed or missing?

Comment author: IlyaShpitser 04 May 2015 12:55:47PM *  3 points [-]

I am not sure what the most impactful thing to do is at each education level. Let me think about it.


My intuition is that the best thing for "raising the sanity waterline" is what the LW community would do with any other bias: just preach the association-is-not-causation lesson to the masses who would otherwise read bad science reporting and conclude garbage about e.g. nutrition. Scientists will generally not outright lie, but they are incentivized to overstate a bit, and reporters are incentivized to overstate a bit more. In general, we trust scientific output too much, since so much of it is contingent on modeling assumptions, etc.

Explaining clear examples of gotchas in observational data also helps: e.g. doctors give sicker people a pill, so it might look like the pill is making people sick. It's the causality version of the "rare cancer => a positive test is likely a false positive, by Bayes' theorem" example. Unlike Bayes' theorem, this is the kind of thing people immediately grasp if you point it out, because our native causal processing is good, unlike our native probability processing, which is terrible. Association/causation is just another type of bias to be aware of; it just happens to come up a lot when we read scientific literature.
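For reference, the Bayes' theorem side of that comparison is a short calculation (the prevalence and test-accuracy numbers below are illustrative, not from any real test):

```python
# Hypothetical numbers: a rare disease with 0.1% prevalence, and a test
# that is 99% sensitive with a 5% false-positive rate.
prevalence = 0.001
sensitivity = 0.99        # P(positive | disease)
false_positive = 0.05     # P(positive | no disease)

# Bayes' theorem: P(disease | positive) = P(positive | disease) P(disease) / P(positive)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"{p_disease_given_positive:.3f}")  # prints 0.019
```

Even with a fairly accurate test, a positive result means under a 2% chance of disease, because false positives from the huge healthy population swamp the true positives. Most people find this counterintuitive until they do the arithmetic, whereas "sicker people got the pill" clicks immediately.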


If you are looking for some specific stuff to do as a programmer, email me :). There is plenty to do.