Comment author:Cyan
27 July 2009 02:42:58PM
3 points

Here's one. There is one data point, distributed according to 0.5*N(0,1) + 0.5*N(mu,1).

Bayes: any improper prior for mu yields an improper posterior (because there's a 50% chance that the data are not informative about mu). Any proper prior has no calibration guarantee.

Frequentist: Neyman's confidence belt construction guarantees valid confidence coverage of the resulting interval. If the datum is close to 0, the interval may be the whole real line. This is just what we want [claims the frequentist, not me!]; after all, when the datum is close to 0, mu really could be anything.
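[Editor's sketch, not Cyan's code.] The generative process behind the mixture 0.5*N(0,1) + 0.5*N(mu,1) can be written out explicitly; the function name `draw_datum` and the coin-flip framing are illustrative:

```python
import random

def draw_datum(mu):
    """Draw one datum from the mixture 0.5*N(0,1) + 0.5*N(mu,1):
    an unobserved fair coin picks which component generates x."""
    if random.random() < 0.5:        # heads: component centred at 0 (uninformative about mu)
        return random.gauss(0.0, 1.0)
    else:                            # tails: component centred at mu
        return random.gauss(mu, 1.0)
```

Half the draws carry no information about mu at all, which is the source of both the improper-posterior problem and the whole-real-line confidence intervals discussed below.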

Comment author:PhilGoetz
04 August 2009 05:30:16PM
0 points

Can you explain the terms "calibration guarantee", and what "the resulting interval" is?
Also, I don't understand why you say there is a 50% chance the data is not informative about mu. This is not a multi-modal distribution; it is blended from N(0,1) and N(mu,1). If mu can be any positive or negative number, then the one data point will tell you whether mu is positive or negative with probability 1.

Comment author:Cyan
04 August 2009 07:55:02PM
2 points

Can you explain the terms "calibration guarantee"...

By "calibration guarantee" I mean valid confidence coverage: if I give a number of intervals at a stated confidence level, then the relative frequency with which the estimated quantities fall within their intervals is guaranteed to approach the stated confidence as the number of estimated quantities grows. Here we might imagine a large number of mu parameters and one datum per parameter.
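[Editor's sketch.] The coverage property is easiest to see in the plainest setting, x ~ N(mu, 1), rather than the mixture: the interval x ± 1.96 covers mu about 95% of the time even when every repetition uses a different mu. The function name and the uniform way of generating fresh parameters are illustrative choices:

```python
import random

def coverage_frequency(n_trials=20000, z=1.96, seed=1):
    """Fraction of trials in which the interval x +/- z covers mu,
    for x ~ N(mu, 1), with a fresh mu and one datum per trial."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        mu = rng.uniform(-10.0, 10.0)   # a different parameter each repetition
        x = rng.gauss(mu, 1.0)          # one datum per parameter
        if x - z <= mu <= x + z:
            hits += 1
    return hits / n_trials
```

The returned frequency approaches 0.95 as the number of trials grows, regardless of how the mu values are chosen; that is the guarantee at issue.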

... and what "the resulting interval" is?

Not easily. The second cousin of this post (a reply to wedrifid) contains a link to a paper on arXiv that gives a bare-bones overview of how confidence intervals can be constructed on page 3. When you've got that far I can tell you what interval I have in mind.

Also, I don't understand why you say there is a 50% chance the data is not informative about mu. This is not a multi-modal distribution; it is blended from N(0,1) and N(mu,1).

I think there's been a misunderstanding somewhere. Let Z be a fair coin toss. If it comes up heads the datum is generated from N(0,1); if it comes up tails, the datum is generated from N(mu,1). Z is unobserved and mu is unknown. The probability distribution of the datum is as stated above. It will be multimodal if the absolute value of mu is greater than 2 (according to some quick plots I made; I did not do a mathematical proof).
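[Editor's sketch.] The "quick plots" claim can be checked numerically: write down the mixture density and count its local maxima on a fine grid. Function names are illustrative:

```python
import math

def mixture_pdf(x, mu):
    """Density of 0.5*N(0,1) + 0.5*N(mu,1)."""
    phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    return 0.5 * phi(x) + 0.5 * phi(x - mu)

def n_modes(mu, lo=-5.0, hi=10.0, steps=3000):
    """Count local maxima of the mixture density on a grid."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ys = [mixture_pdf(x, mu) for x in xs]
    return sum(1 for i in range(1, steps)
               if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])
```

For an equal-weight mixture of two unit normals, this confirms the threshold: one mode when |mu| is at most 2, two modes beyond it.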

If mu can be any positive or negative number, then the one data point will tell you whether mu is positive or negative with probability 1.

If I observe the datum 0.1, is mu greater than or less than 0?

Comment author:wedrifid
29 July 2009 07:37:31PM
0 points

Thanks Cyan.

I'll get back to you when (and if) I've had time to get my head around Neyman's confidence belt construction, with which I've never had cause to acquaint myself.

Comment author:Cyan
29 July 2009 08:46:59PM
0 points

This paper has a good explanation. Note that I've left one of the steps (the "ordering" that determines inclusion into the confidence belt) undetermined. I'll tell you the ordering I have in mind if you get to the point of wanting to ask me.

Comment author:Cyan
30 July 2009 12:05:02AM
0 points

All you need is page 3 (especially the figure). If you understand that in depth, then I can tell you what the confidence belt for my problem above looks like. Then I can give you a simulation algorithm and you can play around and see exactly how confidence intervals work and what they can give you.
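[Editor's sketch, not the simulation algorithm Cyan offers.] One way to play with the belt construction by Monte Carlo: for each candidate mu, estimate an acceptance region for x, then invert — the confidence set for an observed x is every mu whose acceptance region contains it. Cyan deliberately leaves the ordering unspecified; the central (equal-tailed) ordering used here is one simple stand-in choice, and the function names are mine:

```python
import random

def acceptance_region(mu, alpha=0.05, n_sim=4000, seed=0):
    """Monte Carlo central acceptance region for the mixture model:
    the equal-tailed interval containing 95% of draws of x given mu.
    (The ordering is an illustrative choice, not Cyan's.)"""
    rng = random.Random(seed)
    xs = sorted(
        rng.gauss(0.0, 1.0) if rng.random() < 0.5 else rng.gauss(mu, 1.0)
        for _ in range(n_sim)
    )
    lo = xs[int(n_sim * alpha / 2)]
    hi = xs[int(n_sim * (1 - alpha / 2))]
    return lo, hi

def confidence_set(x_obs, mu_grid):
    """Invert the belt: collect all mu whose acceptance region contains x_obs."""
    return [mu for mu in mu_grid
            if acceptance_region(mu)[0] <= x_obs <= acceptance_region(mu)[1]]
```

With this ordering, a datum near 0 is accepted by every candidate mu (the 50% chance of the N(0,1) component keeps 0 inside each acceptance region), so the confidence set is the whole grid — matching the "interval may be the whole real line" behaviour described above.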

Comment author:wedrifid

That's a lot of integration to get my head around.