VAuroch comments on Beautiful Probability - Less Wrong

34 Post author: Eliezer_Yudkowsky 14 January 2008 07:19AM


Comment author: [deleted] 14 December 2013 05:08:24PM 4 points [-]

Just a note here: the fact that a dataset has the same likelihood function regardless of the procedure that produced it is actually NOT a trivial statement - the way I see it, it is a somewhat deep result which follows from the optional stopping theorem and the fact that the likelihood function is bounded. Not trying to nitpick, just pointing out that there is something to think about here. According to my initial intuitions, this was actually rather surprising - I didn't expect experimental results constructed using biased data (in the sense of a non-fixed stopping time) to end up yielding unbiased results, even with full disclosure of all data.
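A minimal sketch of the claim (my own illustration, not from the original post): take Eliezer's classic example of 9 cures out of 12 patients. One experimenter fixed n = 12 in advance (binomial sampling); another kept going until the 3rd failure (negative binomial sampling). The two sampling distributions differ, but as functions of the cure rate theta they differ only by a constant factor, so any Bayesian prior yields the same posterior under either design.

```python
from math import comb

def binomial_lik(theta, n=12, k=9):
    # Fixed-n design: stop after n trials, observe k successes.
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def neg_binomial_lik(theta, r=3, k=9):
    # Optional-stopping design: stop at the r-th failure, having seen k successes.
    # The last trial is a failure, so the k successes fall among the first k+r-1 trials.
    return comb(k + r - 1, k) * theta**k * (1 - theta)**r

# The ratio of the two likelihoods is constant in theta (here comb(12,9)/comb(11,9) = 4),
# so they are proportional and carry identical evidence about theta.
ratios = [binomial_lik(t) / neg_binomial_lik(t) for t in (0.3, 0.5, 0.7, 0.9)]
```

The constant factor cancels in Bayes' theorem, which is exactly why the stopping rule drops out of the posterior.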

Comment author: Eliezer_Yudkowsky 15 December 2013 03:14:56PM *  -1 points [-]

It's worth revising your intuitions if you found it surprising that a fixed physical act assigns the same likelihood to data regardless of researcher thoughts. It is indeed possible to see the mathematical result as "obvious at a glance".

Comment author: VAuroch 30 August 2015 08:21:21PM 0 points [-]

You can claim that it should have the same likelihood either way, but you have to put the discrepancy somewhere. Knowing the choice of stopping rule is evidence about the experimenter's state of knowledge about the efficacy. You can say that it should be treated as a separate piece of evidence, or that knowing about the stopping rule should change your prior, but if you don't bring it in somewhere, you're ignoring critical information.
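To make this concrete (my own sketch, with made-up numbers, not from the thread): the stopping rule leaves the likelihood untouched, so if knowledge of the rule is to change your conclusions at all, it has to enter through the prior. With a conjugate Beta prior on the cure rate, the same 9 successes and 3 failures update a neutral prior and a deliberately skeptical prior to different posteriors:

```python
# Beta(a, b) prior + k successes and f failures -> Beta(a+k, b+f) posterior.
# The data's contribution (k, f) is identical in both cases; only the prior differs.

def posterior_mean(a, b, k=9, f=3):
    return (a + k) / (a + b + k + f)

neutral = posterior_mean(1, 1)    # flat prior: no information about the experimenter
skeptical = posterior_mean(1, 3)  # prior shaded downward after learning the
                                  # experimenter chose a directionally-slanted stopping rule
```

Here the skeptical posterior mean is lower even though both analyses used exactly the same likelihood, which is one way to formalize "put the discrepancy somewhere".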

Comment author: Cyan 31 August 2015 02:22:41AM *  1 point [-]

you're ignoring critical information

No, in practical terms it's negligible. There's a reason that double-blind trials are the gold standard -- it's because doctors are as prone to cognitive biases as anyone else.

Let me put it this way: recently a pair of doctors looked at the available evidence and concluded (foolishly!) that putting fecal bacteria in the brains of brain cancer patients was such a promising experimental treatment that they did an end-run around the ethics review process -- and after leaving that job under a cloud, one of them was still considered a "star free agent". Well, perhaps so -- but I think this little episode illustrates very well that a doctor's unsupported opinion about the efficacy of his or her novel experimental treatment isn't worth the shit s/he wants to place inside your skull.

Comment author: EHeller 31 August 2015 05:41:06AM 2 points [-]

Hold on -- aren't you saying the choice of experimental rule is VERY important (i.e. double-blind vs. not double-blind, etc.)?

If so you are agreeing with VAuroch. You have to include the details of the experiment somewhere. The data does not speak for itself.

Comment author: Cyan 31 August 2015 05:28:39PM *  1 point [-]

Of course experimental design is very important in general. But VAuroch and I agree that when two designs give rise to the same likelihood function, the information that comes in from the data is equivalent. We disagree about the weight to give to the information that comes in from what the choice of experimental design tells us about the experimenter's prior state of knowledge.

Comment author: VAuroch 02 September 2015 12:27:40AM 1 point [-]

Double-blind trials aren't the gold standard, they're the best available standard. They still fail to replicate far too often, because they don't remove bias (and I'm not just referring to publication bias). Which is why, when considering how to interpret a study, you look at the history of what scientific positions the experimenter has supported in the past, and then update away from that to compensate for bias which you have good reason to think will show up in their data.

In the example, past results suggest that, even if the trial was double-blind, someone who is committed to achieving a good result for the treatment will get more favorable data than some other experimenter with no involvement.

And that's on top of the trivial fact that someone with an interest in getting a successful trial is more likely to use a directionally-slanted stopping rule if they have doubts about the efficacy than if they are confident it will work, which is not explicitly relevant in Eliezer's example.

Comment author: Cyan 02 September 2015 09:28:16PM 0 points [-]

I can't say I disagree.