faul_sname comments on Outside the Laboratory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (336)
I'm sorry, that seems just wrong. The statistics work if there's an unbiased process that determines which events you observe. If Alice conducts trials until 3 successes are achieved, that's a biased process that's sure to ensure that the data ends with at least one success.
Surely you accept that if Alice conducts 100 trials and only gives you the successes, you'll get the wrong result no matter the statistical procedure used, so you can't say that biased data collection is irrelevant. You have to either claim that continuing until 3 successes are achieved is an unbiased process, or retreat from the claim that the procedure for collecting the data does not influence the correct interpretation of the results.
I thought the exact same thing, and wrote a program to test it. Program is below:
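The program itself didn't survive in this copy of the thread. A minimal reconstruction of the kind of test described (my own sketch, with an arbitrary success probability of 0.3, 12 trials, and a threshold of 3 successes, not the original parameters) might look like:

```python
import random

def fixed_trials(p, n, k, runs=50_000, seed=0):
    """Run all n trials; report success if at least k are positive."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        successes = sum(rng.random() < p for _ in range(n))
        if successes >= k:
            hits += 1
    return hits / runs

def stop_at_threshold(p, n, k, runs=50_000, seed=1):
    """Stop as soon as k successes are reached (or n trials elapse)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        successes = 0
        for _ in range(n):
            if rng.random() < p:
                successes += 1
                if successes >= k:
                    hits += 1
                    break
    return hits / runs

print(fixed_trials(0.3, 12, 3))
print(stop_at_threshold(0.3, 12, 3))
```

The two fractions come out the same (up to sampling noise), since reaching k successes early guarantees at least k successes after all n trials.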
Turns out they actually are equivalent. I tested with all manner of success probabilities. Obviously, if what you're actually doing is running a set number of trials in one case and running trials until you reach significance or give up in the other, you will come up with different results. However, if you fix both the number of trials and the success threshold beforehand, it doesn't matter whether you run all the trials or just run until you hit the success threshold (which actually seems fairly obvious in retrospect). Edit: formatting sucks
Upvoted for actually testing the theory :)
I don't believe this is true. Every individual trial is a separate piece of Bayesian evidence, unrelated to the rest of the trials except in that your priors differ. If you run until significance you will have updated to a certain probability, and if you run until you're bored you'll also have updated to a certain probability.
Sure, if you run a different number of trials, you may end up with a different probability. At worst, if you keep going until you're bored, you may end up with results insignificant under the strict rules of "proof" in Science. But as long as you use Bayesian updating, neither method produces invalid results.
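One way to make this concrete: under a uniform prior, the posterior over the success probability depends only on the counts of successes and failures, because the stopping rule contributes only a constant factor to the likelihood. A small sketch (my own illustration, with arbitrary counts s = 3 successes in n = 12 trials):

```python
from math import comb

GRID = [i / 100 for i in range(1, 100)]  # candidate success probabilities
s, n = 3, 12  # arbitrary example counts

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

# Fixed-n design: likelihood C(n, s) * p^s * (1-p)^(n-s)
binom = normalize(
    [comb(n, s) * p**s * (1 - p)**(n - s) for p in GRID])

# Stop-after-s-successes design: likelihood C(n-1, s-1) * p^s * (1-p)^(n-s)
negbinom = normalize(
    [comb(n - 1, s - 1) * p**s * (1 - p)**(n - s) for p in GRID])

# The stopping rule's combinatorial factor cancels on normalization,
# so the two posteriors are identical.
print(max(abs(a - b) for a, b in zip(binom, negbinom)))
```

This is the likelihood principle in miniature: 3 successes in 12 trials carries the same evidence about p whichever stopping rule produced it.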
Ding ding ding! That's my hindsight-bias-reminder-heuristic going off. It tells me when I need to check myself for hindsight bias, and goes off on thoughts like "That seems obvious in retrospect" and "I knew that all along." At the risk of doing your thinking for you, I'd say this is a case of hindsight bias: It wasn't obvious beforehand, since otherwise you wouldn't have felt the need to do the test. This means it's not an obvious concept in the first place, and only becomes clear when you consider it more closely, which you did. Then saying that "it's obvious in retrospect" has no value, and actually devalues the time you put in.
Try this:
(From the Comment Formatting Help)
You have to be very careful you're actually asking the same question in both cases. In the case I tested above, I was asking exactly the same question (my intuition said very strongly that I wasn't, but that's because I was thinking of the very similar but subtly different question below). The "fairly obvious in retrospect" refers to that particular phrasing of the problem (I would have immediately understood that the probabilities had to be equal if I had phrased it that way, but since I didn't, that insight was a little harder-earned).
The question I was actually thinking of is as follows.
Scenario A: You run 12 trials, then check whether your odds ratio reaches significance and report your results.
Scenario B: You run trials until either your odds ratio reaches significance or you hit 12 trials, then report your results.
I think scenario A is different from scenario B, and that's the one I was thinking of (it's the "run subjects until you hit significance or run out of funding" model).
A new program confirms my intuition about the question I had been thinking of when I decided to test it. I agree with Eliezer that it shouldn't matter whether the researcher goes to a certain number of trials or a certain number of positive results, but I disagree with the implication that the same dataset always gives you the same information.
The program is here, you can fiddle with the parameters if you want to look at the result yourself.
I did. It didn't indent properly. I tried again, and it still doesn't.
Actually, it's quite interesting what happens if you run trials until you reach significance. Turns out that if you want a fraction p of all trials you do to end up positive, but each trial only ends up positive with probability q<p, then with some positive probability (a function of p and q) you will have to keep going forever.
(This is a well-known result if p=1/2. Then you can think of the trials as a biased random walk on the number line, in which you go left with probability q<1/2 and right otherwise, and you want to return to the place you started. The probability that you'll ever return to the origin is 2q, which is less than 1.)
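A quick simulation (my own sketch, for q = 0.3) agrees with the 2q figure:

```python
import random

def return_probability(q, walks=20_000, max_steps=500, seed=0):
    """Estimate the chance that a walk stepping +1 with probability q
    and -1 otherwise ever returns to its starting point.  Walks still
    away from the origin after max_steps are counted as never returning
    (for q well below 1/2 the resulting bias is negligible, since
    non-returning walks drift away quickly)."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(walks):
        pos = 0
        for _ in range(max_steps):
            pos += 1 if rng.random() < q else -1
            if pos == 0:
                returned += 1
                break
    return returned / walks

print(return_probability(0.3))  # theory predicts 2q = 0.6
```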
Ah, but that's not what it means to run until significance -- in my interpretation, at any rate. A significant result would mean that you run until you have either p < 0.005 that your hypothesis is correct, or p < 0.005 that it's incorrect. Doing the experiment in this way would actually validate it for "proof" in conventional Science.
Since he mentions "running until you're bored", his interpretation may be closer to yours though.