# ike comments on Don't You Care If It Works? - Part 1 - Less Wrong


Comment author: 29 July 2015 10:51:29PM 1 point

> You seem to be saying that a rule based on posteriors would not be susceptible to such hacking?

I'm saying that all inferences are still correct. So if your prior is correct/well calibrated, then your posterior is as well. If you end up with 100 studies that all found an effect for different things at a posterior of 95%, 5% of them should be wrong.

> The goal of the p-hacker is to increase the probability of type 1 error.

So what I should say is that the Bayesian doesn't care about the frequency of type 1 errors. If you're going to criticise that, you can do so without regard to stopping rules. I gave an example in a different reply of hacking bayes factors, now I'll give one with hacking posteriors:

Two kinds of coins: one fair, one 10%H/90%T. There are 1 billion of the fair ones, and 1 of the other kind. You take a coin, flip it 10 times, then say which coin you think it is. The Bayesian gets the biased coin, and no matter what he flips, will conclude that the coin is fair with overwhelming probability. The frequentist gets the coin, gets ~9 tails, and says "no way is this fair". There, the frequentist does better because the Bayesian's prior is bad (I said there are a billion fair ones and only one biased one, but we only looked at the biased one).
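(A minimal sketch of the arithmetic in this example, assuming a binomial likelihood for the 10 flips; the numbers simply restate the setup above:)

```python
from math import comb

# Setup from the example: 1e9 fair coins, 1 biased coin with P(heads) = 0.1.
prior_fair = 1e9 / (1e9 + 1)
prior_biased = 1 / (1e9 + 1)

def likelihood(p_heads, heads, flips=10):
    """Probability of `heads` heads in `flips` flips for a coin with P(H) = p_heads."""
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

# The Bayesian happens to draw the biased coin and sees 1 head, 9 tails.
heads = 1
num = likelihood(0.1, heads) * prior_biased
den = num + likelihood(0.5, heads) * prior_fair
print(num / den)  # ~4e-8: he still calls the coin fair with overwhelming probability
```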

It doesn't matter if you always conclude with 95% posterior that the null is false when it is in fact true, as long as there are roughly 19 times as many cases where the null actually is false (so only 1 in 20 of those conclusions is wrong). Yes, this opens you up to being tricked; but if you're worried about deliberate deception, you should include a prior over that. If you're worried about publication bias when reading other studies, include a prior over that, etc.

Comment author: 29 July 2015 11:28:21PM * 0 points

> I'm saying that all inferences are still correct. So if your prior is correct/well calibrated, then your posterior is as well. If you end up with 100 studies that all found an effect for different things at a posterior of 95%, 5% of them should be wrong.

But that is based on the posterior.

When I ask for clarification, you seem to be doing two things: (1) changing the subject to posteriors, and (2) asserting that a perfect prior leads to a perfect posterior.

I think 2 is uncontroversial, other than if you have a perfect prior why do any experiment at all? But it is also not what is being discussed. The issue is that with optional stopping you bias the Bayes factor.

As another poster mentioned, expected evidence is conserved. So let's think of this like a frequentist who has a laboratory full of bayesians in cages. Each Bayesian gets one set of data collected via a standard protocol. Without optional stopping, most of the Bayesians get similar evidence, and they all do roughly the same updates.

With optional stopping, you'll create either short sets of stopped data that support the favored hypothesis or very long sets of data that fail to support the favored hypothesis. So you might be able to create a rule that fools 99 out of the 100 Bayesians, but the remaining Bayesian is going to be very strongly convinced of the disfavored hypothesis.

Where the Bayesian wins over the frequentist is that if you let the Bayesians out of the cages to talk, and they share their likelihood ratios, they can coherently combine evidence and the 1 correct Bayesian will convince all the incorrect Bayesians of the proper update. With frequentists, fewer will be fooled, but there isn't a coherent way to combine the confidence intervals.
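(A minimal sketch of what "coherently combine" amounts to: for independent data sets and point hypotheses, the likelihood ratios simply multiply, so one strongly contrary run can outweigh many mildly misleading ones. The numbers are made up purely for illustration:)

```python
# Nine short, optionally-stopped runs each report a 3:1 likelihood ratio for the
# false hypothesis; one long, unstopped run reports overwhelming evidence against it.
misleading_runs = [3.0] * 9
contrary_run = [1e-12]

combined = 1.0
for ratio in misleading_runs + contrary_run:
    combined *= ratio  # independent evidence: the ratios multiply

print(combined)  # ~2e-8, so the pooled evidence strongly favors the true hypothesis
```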

So the issue for scientists writing papers is that if you are a Bayesian and adopt the second, optional-stopping experimental protocol (let's say it really can fool 99 out of 100 Bayesians), then at least 99 out of 100 of the experiments you run will be a success (some of the effects really will be real). The 1/100 that fails miserably doesn't have to be published.

Even if it is published, if two experimentalists both average to the truth, the one who paints most of his results as experimental successes probably goes further in his career.

Comment author: 30 July 2015 02:42:23PM 0 points

> With frequentists, fewer will be fooled, but there isn't a coherent way to combine the confidence intervals.

Can't frequentists just pool their data and then generate a new confidence interval from the supersized sample?

Comment author: 29 July 2015 11:36:56PM 0 points

> I think 2 is uncontroversial, other than if you have a perfect prior why do any experiment at all?

By perfect I mean well calibrated. I don't see why knowing that your priors in general are well calibrated implies that more information doesn't have positive expected utility.

> The issue is that with optional stopping you bias the Bayes factor.

Only in some cases, and only with regard to someone who knows more than the Bayesian. The Bayesian himself can't predict that the factor will be biased; the expected factor should be 1. It's only someone who knows better that can predict this.

> So let's think of this like a frequentist who has a laboratory full of bayesians in cages.

Before I analyse this case, can you clarify whether the hypothesis happens to be true, false, or chosen at random? Also give these Bayesians' priors, and perhaps an example of the rule you'd use.

Comment author: 29 July 2015 11:47:39PM * 0 points

> Before I analyse this case, can you clarify whether the hypothesis happens to be true, false, or chosen at random? Also give these Bayesians' priors, and perhaps an example of the rule you'd use.

Again, the prior doesn't matter, they are computing Bayes factors. We are talking about Bayes factors. Bayes factors. Prior doesn't matter. Bayes factors. Prior.Doesn't.Matter. Bayes factors. Prior.Doesn't.Matter. Bayes.factor.

Let's say the null is true, but the frequentist mastermind has devised some data-generating process (let's say he has infinite data at his disposal) that can produce evidence in favor of a competing hypothesis at a Bayes factor of 3, 99% of the time.

Comment author: 30 July 2015 01:03:57AM 1 point

> Again, the prior doesn't matter, they are computing Bayes factors.

It matters here, because you said "So you might be able to create a rule that fools 99 out of the 100 Bayesians". The probability of getting data given a certain rule depends on which hypothesis is true, and if we're assuming the true hypothesis is distributed according to the prior, then we need to know the prior to calculate those numbers.

> Let's say the null is true, but the frequentist mastermind has devised some data-generating process (let's say he has infinite data at his disposal) that can produce evidence in favor of a competing hypothesis at a Bayes factor of 3, 99% of the time.

That's impossible. http://doingbayesiandataanalysis.blogspot.com/2013/11/optional-stopping-in-data-collection-p.html goes through the math.

> Using either Bayesian HDI with ROPE, or a Bayes factor, the false alarm rate asymptotes at a level far less than 100% (e.g., 20-25%). In other words, using Bayesian methods, the null hypothesis is accepted when it is true, even with sequential testing of every datum, perhaps 75-80% of the time.

In fact, you can easily show that this can succeed at most 33% of the time. By definition, the Bayes factor is how likely the data is given one hypothesis, divided by how likely the data is given the other. The data in the class "results in a Bayes factor of 3 against the null" has a certain chance of happening given that the null is true, say p. This class of course contains many individual mutually exclusive sets of data, each with a far lower probability, but they sum to p. Now, the chance of this class of possible data sets happening given that the null is not true has an upper bound of 1. Each individual probability under the alternative (and these collectively sum to at most 1) must be 3 times as much as the corresponding probability in the group that sums to p. Ergo, p is upper bounded by 33%.
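(The same argument in symbols, writing $H_0$ for the null, $H_1$ for the competing hypothesis, and $S$ for the class of data sets that yield a Bayes factor of at least 3 against the null:)

$$
p \;=\; \sum_{d \in S} P(d \mid H_0) \;\le\; \sum_{d \in S} \frac{P(d \mid H_1)}{3} \;\le\; \frac{1}{3}
$$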

Comment author: 30 July 2015 02:13:18AM * 0 points

I think this is problem-dependent.

In simulation, with a coin flip I start to asymptote to around 20%, but estimating the mean of a normal distribution with fixed variance (with the null being 0), I keep climbing indefinitely. If you are willing to sample literally forever it seems like you'd be able to convince the Bayesian that the mean is not 0 with arbitrary Bayes factor. So for large enough N in a sample, I expect you can get a factor of 3 for 99/100 of the Bayesians in cages (so long as that last Bayesian is really, really sure the value is 0).
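(For concreteness, a minimal sketch of the coin-flip version of such a simulation; the Uniform(0,1) prior on the alternative and the BF ≥ 3 stopping threshold are illustrative choices, not necessarily the ones used in the simulation described above:)

```python
import random
from math import lgamma, log

def log_bf(heads, n):
    """Log Bayes factor for H1: p ~ Uniform(0,1) vs H0: p = 0.5,
    for one particular sequence with `heads` heads in `n` flips."""
    log_marg_alt = lgamma(heads + 1) + lgamma(n - heads + 1) - lgamma(n + 2)  # integral of p^h (1-p)^(n-h) dp
    log_marg_null = n * log(0.5)
    return log_marg_alt - log_marg_null

def stops_early(max_flips=1000, threshold=3.0):
    """Flip a fair coin (so the null is true), test after every flip,
    and stop as soon as the Bayes factor reaches `threshold` against the null."""
    heads = 0
    for n in range(1, max_flips + 1):
        heads += random.random() < 0.5
        if log_bf(heads, n) >= log(threshold):
            return True
    return False

runs = 2000
print(sum(stops_early() for _ in range(runs)) / runs)  # stays well below 1/3 however long you flip
```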

But it doesn't change the results if we switch and say we fool 33% of the Bayesians with Bayes factor of 3. We are still fooling them.

Comment author: 30 July 2015 02:55:05AM 2 points

> If you are willing to sample literally forever it seems like you'd be able to convince the Bayesian that the mean is not 0 with arbitrary Bayes factor.

No, there's a limit on that as well. See http://www.ejwagenmakers.com/2007/StoppingRuleAppendix.pdf

> Instead, as pointed out by Edwards et al. (1963, p. 239): "(...) if you set out to collect data until your posterior probability for a hypothesis which unknown to you is true has been reduced to .01, then 99 times out of 100 you will never make it, no matter how many data you, or your children after you, may collect (...)".

If you can generate arbitrarily high Bayes factors, then you can reduce your posterior to .01, which means that it can only happen 1 in 100 times. You can never have a guarantee of always getting strong evidence for a false hypothesis. If you find a case that does, it will be new to me and probably change my mind.

> But it doesn't change the results if we switch and say we fool 33% of the Bayesians with Bayes factor of 3. We are still fooling them.

That doesn't concern me. I'm not going to argue for why, I'll just point out that if it is a problem, it has absolutely nothing to do with optional stopping. The exact same behavior (probability 1/3 of generating a Bayes factor of 3 in favor of a false hypothesis) shows up in the following case: a coin either always lands on heads, or lands on heads 1/3 of the time and tails 2/3 of the time. I flip the coin a single time. Let's say the coin is the second coin. There's a 33% chance of getting heads, which would produce a Bayes factor of 3 in favor of the 100%H coin.

If there's something wrong with that, it's a problem with classic Bayes, not optional stopping.

It is my thesis that every optional stopping so-called paradox can be converted into a form without optional stopping, and those will be clearer as to whether the problem is real or not.

Comment author: 30 July 2015 03:16:45AM * 0 points

> No, there's a limit on that as well. See http://www.ejwagenmakers.com/2007/StoppingRuleAppendix.pdf

I can check my simulation for bugs. I don't have the referenced textbook to check the result being suggested.

> It is my thesis that every optional stopping so-called paradox can be converted into a form without optional stopping, and those will be clearer as to whether the problem is real or not.

The first part of this is trivially true. Replace the original distribution with the sampling distribution from the stopped problem, and it's no longer a stopped problem, it's normal pulls from that sampling distribution.

I'm not sure it's more clear; I think it is not. Your "remapped" problem makes it look like it's a result of low data volume and not a problem of how the sampling distribution was actually constructed.

Comment author: 30 July 2015 03:27:53AM 0 points

> I don't have the referenced textbook to check the result being suggested.

You can see http://projecteuclid.org/euclid.aoms/1177704038, which proves the result.

> Replace the original distribution with the sampling distribution from the stopped problem, and it's no longer a stopped problem, it's normal pulls from that sampling distribution.

How would this affect a frequentist?

> I'm not sure it's more clear; I think it is not. Your "remapped" problem makes it look like it's a result of low data volume and not a problem of how the sampling distribution was actually constructed.

I'm giving low data because those are the simplest kinds of cases to think of. If you had lots of data with the same distribution/likelihood, it would be the same. I leave it as an exercise to find a case with lots of data and the same underlying distribution ...

I was mainly trying to convince you that nothing's actually wrong with having 33% false positive rate in contrived cases.

Comment author: 30 July 2015 04:51:38AM 0 points

> How would this affect a frequentist?

It doesn't; the frequentist is already measuring with the sampling distribution. That is how frequentism works.

> I was mainly trying to convince you that nothing's actually wrong with having 33% false positive rate in contrived cases.

I mean, it's not "wrong", but if you care about false positive rates and there is a method that has a 5% false positive rate, wouldn't you want to use that instead?