Manfred comments on Beyond Bayesians and Frequentists - Less Wrong

36 Post author: jsteinhardt 31 October 2012 07:03AM


Comment author: Manfred 31 October 2012 05:31:40PM 2 points

There is a difference between "guaranteed performance" and "optimizing for the worst case". Guaranteed performance means that we can be confident, before the algorithm gets run, that it will hit some performance threshold.

Ah, okay. Whoops.

I don't see how you can do that with a Bayesian method, except by performing a frequentist analysis on it.

How about a deliberate approximation to an ideal use of the evidence? Or do any approximations with limited ranges of validity (i.e. all approximations) count as "frequentist"? Though then we might have to divide computer-programming frequentists into "bayesian frequentists" and "frequentist frequentists" depending on whether they made approximations or applied a toolbox of methods.

Comment author: jsteinhardt 31 October 2012 07:13:45PM *  0 points

How about a deliberate approximation to an ideal use of the evidence?

I'm confused by what you are suggesting here. Even a Bayesian method making no approximations at all doesn't necessarily have guaranteed performance (see my response to Oscar_Cunningham).

Comment author: Manfred 01 November 2012 02:46:00AM 1 point

I'm referring to using an approximation in order to guarantee performance. E.g. replacing the sum of a bunch of independent, well-behaved random variables with a Gaussian, and using Monte Carlo methods to get approximate properties of the individual random variables with known resources if necessary.
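The idea above can be sketched concretely. This is a minimal illustration, not anything from the thread: it assumes hypothetical Uniform(0, 1) variables (`sample_x`), a fixed Monte Carlo budget `m` for estimating per-variable properties, and then replaces the sum of `n` such variables with a Gaussian via the central limit theorem.

```python
import math
import random

random.seed(0)

# A hypothetical well-behaved random variable we can only sample from.
# Uniform(0, 1) is chosen purely for illustration.
def sample_x():
    return random.random()

# Monte Carlo with known, fixed resources: estimate the mean and variance
# of a single X_i from a bounded number of samples.
m = 10_000
samples = [sample_x() for _ in range(m)]
mu_hat = sum(samples) / m
var_hat = sum((s - mu_hat) ** 2 for s in samples) / (m - 1)

# CLT-style approximation: replace S = X_1 + ... + X_n with a Gaussian
# of mean n * mu_hat and variance n * var_hat.
n = 100
s_mean = n * mu_hat
s_std = math.sqrt(n * var_hat)

def gaussian_cdf(x, mean, std):
    """P(S <= x) under the Gaussian approximation."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

# Approximate tail probability P(S <= 55) for the sum of 100 variables.
p = gaussian_cdf(55.0, s_mean, s_std)
print(round(s_mean, 2), round(s_std, 2), round(p, 3))
```

Because Uniform(0, 1) has known mean 1/2 and variance 1/12, the quality of both approximation steps (the Monte Carlo estimates and the Gaussian replacement) can be checked against exact values, which is what makes this the "deliberate approximation with known resources" flavor of guarantee rather than an exact Bayesian computation.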