ike comments on Don't You Care If It Works? - Part 1 - Less Wrong
No, there's a limit on that as well. See http://www.ejwagenmakers.com/2007/StoppingRuleAppendix.pdf
If you demand a Bayes factor high enough to push your posterior in the true hypothesis down to .01, that outcome can occur at most 1 in 100 times. You can never have a guarantee of getting strong evidence for a false hypothesis. If you find a case where you can, it will be new to me and will probably change my mind.
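A quick sketch of this bound (my own illustration, not from the thread): under a true null, the likelihood ratio in favor of any alternative is a mean-1 martingale, so the probability of it *ever* reaching k is at most 1/k, no matter how long you keep sampling with an optional stopping rule. The hypotheses and threshold below are arbitrary choices for the demo.

```python
import random

random.seed(0)

def ever_hits_bf(threshold=19.0, max_flips=1000):
    """Optional stopping: flip a genuinely fair coin (H0 true) and stop
    the moment the Bayes factor for a false H1 (p_heads = 0.75) hits
    the threshold. Returns whether the threshold was ever reached."""
    bf = 1.0  # likelihood ratio P(data | H1) / P(data | H0)
    for _ in range(max_flips):
        heads = random.random() < 0.5  # data really comes from H0
        bf *= (0.75 / 0.5) if heads else (0.25 / 0.5)
        if bf >= threshold:
            return True
    return False

runs = 2000
frac = sum(ever_hits_bf() for _ in range(runs)) / runs
print(frac)  # empirically stays under the 1/19 ~ 0.053 bound
```

Even with an adversarial stopping rule, the fraction of runs that ever produce a Bayes factor of 19 against the true hypothesis respects the 1/19 ceiling.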
That doesn't concern me. I'm not going to argue for why, I'll just point out that if it is a problem, it has absolutely nothing to do with optional stopping. The exact same behavior (probability 1/3 of generating a Bayes factor of 3 in favor of a false hypothesis) shows up in the following case: a coin either always lands on heads, or lands on heads 1/3 of the time and tails 2/3 of the time. I flip the coin a single time. Let's say the coin is the second coin. There's a 33% chance of getting heads, which would produce a Bayes factor of 3 in favor of the 100%H coin.
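The arithmetic in that single-flip example, worked out explicitly (exact fractions, no optional stopping anywhere):

```python
from fractions import Fraction

# Coin A always lands heads; coin B lands heads 1/3 of the time.
# One flip, and it comes up heads.
p_heads_A = Fraction(1)      # P(heads | always-heads coin)
p_heads_B = Fraction(1, 3)   # P(heads | 1/3-heads coin)

bf = p_heads_A / p_heads_B   # Bayes factor for A over B, given heads
print(bf)                    # 3

# If coin B is the true coin, this BF-of-3 outcome happens with
# probability P(heads | B) = 1/3 -- the 33% chance from the comment.
```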
If there's something wrong with that, it's a problem with classic Bayes, not optional stopping.
It is my thesis that every optional stopping so-called paradox can be converted into a form without optional stopping, and those will be clearer as to whether the problem is real or not.
I can check my simulation for bugs. I don't have the referenced textbook to check the result being suggested.
The first part of this is trivially true. Replace the original distribution with the sampling distribution from the stopped problem, and it's no longer a stopped problem; it's ordinary draws from that sampling distribution.
I'm not sure it's clearer; in fact, I think it is not. Your "remapped" problem makes it look like the issue is low data volume rather than how the sampling distribution was actually constructed.
You can see http://projecteuclid.org/euclid.aoms/1177704038, which proves the result.
How would this affect a frequentist?
I'm giving low-data examples because those are the simplest kinds of cases to think of. If you had lots of data with the same distribution/likelihood, the result would be the same. I leave it as an exercise to find a case with lots of data and the same underlying distribution ...
I was mainly trying to convince you that nothing's actually wrong with having 33% false positive rate in contrived cases.
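One way to see that nothing is wrong (my own check, not from the thread): the posterior stays calibrated even though the false positive rate is 33%. With equal priors on the two coins, Bayes gives P(always-heads coin | heads) = (1/2 * 1) / (1/2 * 1 + 1/2 * 1/3) = 3/4, and a simulation confirms that among heads outcomes, the always-heads coin really is the true one about 75% of the time.

```python
import random

random.seed(1)
heads_trials = 0
coin_A_given_heads = 0
for _ in range(30000):
    coin_is_A = random.random() < 0.5        # equal prior on the two coins
    p_heads = 1.0 if coin_is_A else 1.0 / 3.0
    if random.random() < p_heads:            # one flip; keep only heads outcomes
        heads_trials += 1
        coin_A_given_heads += coin_is_A

calibration = coin_A_given_heads / heads_trials
print(calibration)  # ~0.75, matching the posterior P(A | heads)
```

So the 33% of cases where coin B produces misleading heads are exactly balanced in the posterior: a reported 75% confidence is right 75% of the time.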
It doesn't; the frequentist is already measuring with the sampling distribution. That is how frequentism works.
I mean it's not "wrong," but if you care about false positive rates and there is a method that has a 5% false positive rate, wouldn't you want to use that instead?
If for some reason low false positive rates were important, sure. If it's important enough to give up consistency.