
Eliezer_Yudkowsky comments on Bayesian Flame - Less Wrong

Post author: cousin_it 26 July 2009 04:49PM 37 points


Comment author: Eliezer_Yudkowsky 26 July 2009 05:35:47PM 20 points

> Hypothesis testing: I give you a black-box random distribution and claim it obeys a specified formula. You sample some data from the box and inspect it. Frequentism often allows you to call me a liar and be wrong no more than 10% of the time, guaranteed, no priors in sight.

Wrong. If all black boxes do obey their specified formulas, then every single time you call the other person a liar, you will be wrong. P(wrong|"false") ~ 1.
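[A toy simulation, with assumed numbers of my own (standard-normal boxes, a two-sided z-test at level 0.10), makes the two quantities come apart: when every box honestly obeys its formula, the test still cries "false" about 10% of the time, and every one of those verdicts is wrong.]

```python
# Toy simulation (my own construction, not from the thread): every box
# really is a standard normal, and we test that claim at level 0.10 with a
# two-sided z-test on the sample mean.
import math
import random

random.seed(0)

Z_CRIT = 1.6449     # two-sided critical value for alpha = 0.10
N_TRIALS = 100_000  # number of black boxes examined
N_SAMPLES = 25      # data points drawn from each box

accusations = 0        # times we say "false" (call the claimant a liar)
wrong_accusations = 0  # accusations against a box that obeyed its formula

for _ in range(N_TRIALS):
    data = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
    # z-statistic for "mean 0, sd 1": sample mean scaled by sqrt(n)
    z = (sum(data) / N_SAMPLES) * math.sqrt(N_SAMPLES)
    if abs(z) > Z_CRIT:
        accusations += 1        # verdict: "false"
        wrong_accusations += 1  # every box obeys its formula, so this is wrong

print(f"P('false')         ~ {accusations / N_TRIALS:.3f}")                   # ~ 0.100
print(f"P(wrong | 'false') = {wrong_accusations / max(accusations, 1):.3f}")  # = 1.000
```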

I'm thinking you still haven't quite understood here what frequentist statistics do.

Frequentist guarantees don't come for free: frequentists assume they have perfect information about experimental setups and likelihood ratios. (Where does this perfect knowledge come from? Can Bayesians get their priors from the same source?)

A Bayesian who wants to report something at least as reliable as a frequentist statistic simply reports a likelihood ratio between two or more hypotheses from the evidence; and in that moment has told another Bayesian just what frequentists think they have perfect knowledge of, but simply, with far less confusion and error and mathematical chicanery and opportunity for distortion, and greater ability to combine the results of multiple experiments.

And more importantly, we understand what likelihood ratios are, and that they do not become posteriors without adding a prior somewhere.
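[A minimal sketch of that reporting convention, again with a toy example of my own (the coin biases, the data, and the even prior odds are all assumptions): each experiment is summarized by a likelihood ratio, experiments combine by multiplication, and only a prior turns the combined ratio into a posterior.]

```python
# Toy sketch (my own example; biases, data, and prior are all assumptions).

def likelihood(heads: int, flips: int, p: float) -> float:
    """Binomial likelihood kernel; the combinatorial factor cancels in ratios."""
    return p ** heads * (1.0 - p) ** (flips - heads)

P_H1, P_H2 = 0.5, 0.7  # H1: fair coin; H2: coin biased toward heads

# Each experiment is reported as (heads, flips); its likelihood ratio is the
# whole summary another Bayesian needs from it.
experiments = [(14, 20), (11, 20), (16, 20)]
ratios = [likelihood(h, n, P_H2) / likelihood(h, n, P_H1) for h, n in experiments]

combined_lr = 1.0
for r in ratios:
    combined_lr *= r  # combining experiments is just multiplication

# A likelihood ratio does not become a posterior without a prior.
prior_odds = 1.0  # H2:H1, assuming even prior odds for illustration
posterior_odds = prior_odds * combined_lr
posterior_p_h2 = posterior_odds / (1.0 + posterior_odds)

print("per-experiment LRs (H2:H1):", [round(r, 2) for r in ratios])
print("combined LR:", round(combined_lr, 2))
print("posterior P(H2) at even prior odds:", round(posterior_p_h2, 3))
```

[Note that the likelihood function here is exactly what the frequentist setup assumed perfect knowledge of; the only new ingredient on the Bayesian side is the prior odds.]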

Comment author: cousin_it 26 July 2009 05:45:50PM 2 points

Thanks for the catch, struck out that part.

As for your parenthetical: the source frequentists get their experimental setups from is the world, and unfortunately that source doesn't provide priors.

ETA: likelihood ratios don't seem to me to communicate the same information about the world as confidence intervals do. Can you clarify?

Comment author: conchis 26 July 2009 07:54:57PM 1 point

> Wrong. If all black boxes do obey their specified formulas, then every single time you call the other person a liar, you will be wrong. P(wrong|"false") ~ 1.

Ok, bear with me. cousin_it's claim was that P(wrong|boxes-obey-formulas) <= 0.1, am I right? I get that P(wrong|"false" & boxes-obey-formulas) ~ 1, so the denial of cousin_it's claim seems to require P("false"|boxes-obey-formulas) > 0.1? I assumed that the point was precisely that the frequentist procedure will give you P("false"|boxes-obey-formulas) <= 0.1. Is that wrong?
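[For concreteness, a toy check with my own numbers: run the level-0.1 test on 100 honest boxes. You expect about 10 "false" verdicts, so P("false"|boxes-obey-formulas) ~ 0.1, which is exactly the guarantee the procedure offers; but every one of those verdicts accuses an honest box, so P(wrong|"false" & boxes-obey-formulas) ~ 1. Both statements hold at once; the struck-out claim read the first as a bound on the second.]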

Comment author: cousin_it 26 July 2009 09:58:57PM 2 points

My claim was the one Eliezer quoted, and it was incorrect. Other than that, your comment is correct.

Comment author: conchis 26 July 2009 10:17:36PM 0 points

Ah, I parsed it wrongly. Whoops. Would it be worth replacing it with a corrected claim rather than just striking it?

Comment author: cousin_it 26 July 2009 10:42:06PM 0 points

Done. Thanks for the help!