# Phlebas comments on (Subjective Bayesianism vs. Frequentism) VS. Formalism - Less Wrong

27 points | 26 November 2011 05:05AM



Comment author: [deleted] 26 November 2011 01:20:36PM *  -1 points [-]

.

Comment author: 26 November 2011 01:22:07PM *  6 points [-]

What is this comment supposed to add? Is it an ad hominem, or are you asking for clarification? If you don't understand that comment, perhaps you should try rereading my original post. I have updated it a bit since you first commented, so perhaps it is clearer.

(edit) clarification:

The reason that probabilities model frequency is not that our data about some phenomena are dominated by facts of frequency. Take 10 chips: 6 of them red, 4 of them blue, with 5 red ones and 1 blue one on the table and the rest off it. You'll find that Bayes can be used to talk about the frequencies of these predicates in the population. You only need to start with theorems that, when interpreted, produce the assumptions just given, e.g., P(red and on the table) = 1/2, P(~red and on the table) = 1/10, P(red and ~on the table) = 1/10. From those basic statements we can infer, using Bayes, all the following results: P(red | on the table) = 5/6, P(~red | on the table) = 1/6, P((red and on the table) or blue) = 9/10, P(red) = P(red | on the table) P(on the table) + P(red | ~on the table) P(~on the table) = 6/10, etc.

These are all facts about the *frequency* distributions of these chips' predicates, reached using Bayes and the assumptions above. We can interpret P(red) as the frequency of red chips out of all the chips, and P(red | on the table) as the frequency of red chips among the chips on the table. Anything you prove about these frequencies using Bayesian inference will be a true claim about the frequencies of these predicates within the chips. Hence, Bayes models frequency. This is all I meant by "Bayes models frequency." You'll also find that it works just as well with volume or area. (I am sorry I wasn't that concrete to begin with.)
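The chip arithmetic above can be checked mechanically. Here is a minimal Python sketch (the encoding of chips as tuples and the helper names `P` and `P_given` are mine, not from the original comment) that treats probability as finite frequency over the ten chips:

```python
from fractions import Fraction

# The ten chips from the example: 6 red (5 on the table), 4 blue (1 on the table).
chips = [("red", True)] * 5 + [("red", False)] * 1 + \
        [("blue", True)] * 1 + [("blue", False)] * 3

def P(pred):
    """Probability as finite frequency: fraction of chips satisfying pred."""
    return Fraction(sum(1 for c in chips if pred(c)), len(chips))

def P_given(pred, cond):
    """Conditional probability P(pred | cond) over the same finite population."""
    matching = [c for c in chips if cond(c)]
    return Fraction(sum(1 for c in matching if pred(c)), len(matching))

red = lambda c: c[0] == "red"
on_table = lambda c: c[1]
off_table = lambda c: not c[1]

assert P(lambda c: red(c) and on_table(c)) == Fraction(1, 2)
assert P_given(red, on_table) == Fraction(5, 6)
# Law of total probability: P(red) = P(red|table)P(table) + P(red|~table)P(~table)
assert P(red) == P_given(red, on_table) * P(on_table) + \
                 P_given(red, off_table) * P(off_table)
```

Every probability identity interpreted over this population comes out as a true statement about finite frequencies, which is the sense of "models" at issue.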

In exactly the same way, you can interpret probability theorems as talking about degrees of belief, and if you ask a Bayesian, all of those interpreted theorems will come out as true statements about rational degrees of belief. In this way Bayes models rational belief. You can also interpret probability theory as talking about Boston's nightlife, but not every one of those interpreted theorems will be true, so probability theory does not model Boston's nightlife under that interpretation. To model something means to produce only true statements about that something under a given interpretation.

Frequentists may not treat their toolbox as a set of mostly unrelated approximations to perfect learning, or treat Bayes as the optimal laws of inference, but as far as I can tell they should. And if they did, they would not cease to be frequentists: they would still use the same methods, use "probability" the same way, and still focus on long-run frequency over evidential support. The only difference is that rather than saying probability *is* frequency and *is not* subjective degree of belief, they would say that probability *models* both frequency and subjective degree of belief. Subjective Bayesians should make a similar update, though I am sure they don't swing the copula around as liberally as frequentists do. This is what I meant when I said that frequentists could and should believe that frequentism is just a useful approximation, and that Bayes is in some sense optimal. I was never really arguing about the practical advantages of Bayesianism over frequentism, but about how both seem to make a similar philosophical mistake in using identity, or the copula, where the relation of modeling is more applicable. A properly Hofstadterish formalism seems like the best way to deal with all of this comprehensively.

Do you understand what I was saying now? I really want to know. That you are confused by what seem to me to be my most basic claims, while being as familiar with E. T. Jaynes as your comments suggest, is worrying to me. Does this clarification make you less confused?

Comment author: [deleted] 26 November 2011 02:49:34PM *  1 point [-]

.

Comment author: 26 November 2011 03:06:08PM *  5 points [-]

Fine, let's make up a new frequentism, which probably already exists: finite frequentism. Bayes still models finite frequencies, like the chip example I gave.

Where a normal frequentist would say "as the number of trials goes to infinity," the finite frequentist can say "on average" or "the expectation of." Rather than saying that as the number of die rolls goes to infinity the fraction of sixes is 1/6, we can just say that as the number of rolls rises, the fraction stabilizes around and gets closer to 1/6. That is a fact which is finitely verifiable. If we saw that the more die rolls we added, the closer the fraction of sixes approached and hovered around 1/2, the frequentist claim would be falsified.
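The stabilization claim is easy to illustrate with a short simulation (a sketch; the seed and the number of rolls are arbitrary choices of mine, not from the comment):

```python
import random

def running_fraction_of_sixes(n_rolls, seed=0):
    """Return the running fraction of sixes after each of n_rolls fair-die rolls."""
    rng = random.Random(seed)
    sixes = 0
    fractions = []
    for i in range(1, n_rolls + 1):
        if rng.randint(1, 6) == 6:
            sixes += 1
        fractions.append(sixes / i)
    return fractions

fracs = running_fraction_of_sixes(100_000)
# As rolls accumulate, the running fraction hovers ever closer to 1/6;
# a die that drifted toward 1/2 instead would falsify the frequentist claim.
print(fracs[99], fracs[9_999], fracs[99_999])
```

Nothing infinite is invoked: each printed value is a finite frequency, and the claim being tested is just that the later values cluster near 1/6.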

There may be no infinite populations. But the frequentist can still make do with finite frequencies and expected frequencies, and I am not sure what he would lose. There are certainly finite frequencies in the world, and average frequencies are at least empirically testable. What can the frequentist do with infinite populations or trials that he/she can't do with expected/average frequencies?

Also, are you a finitist when it comes to calculus? The differential calculus requires much more commitment to the ideas of a limit, infinity, and the infinitesimal than frequentism does, if frequentists require those concepts at all. Would you find a finitist interpretation of the calculus more philosophically sound than the classical approach?

Comment author: 26 November 2011 03:10:45PM 0 points [-]

potato,

I don't think there's much value in replying to Phlebas' latest reply.