Jach comments on Case study: abuse of frequentist statistics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Non-Bayesianism for Bayesians (based on a poor understanding of Andrew Gelman and Cosma Shalizi)
Lakatos (and Kuhn) are philosophers of science who studied science as scientists actually do it, as opposed to how scientists (at the time) claimed scientists do it. This is in contrast to taking the "scientific method" that we learned in grade school literally. Theories are not rejected at the first evidence that they have failed; they are patched, and so on.
Gelman and Shalizi's criticism of Bayesian rhetoric (as far as I can make out from their blog posts and the slides of Gelman's talk) is (explicitly) similar - what Bayesians do is different from what Bayesians say Bayesians do.
In particular, humans (as opposed to ideal, which is to say nonexistent, Bayesians) do not SIMPLY update on the evidence. There are other important steps in the process, such as checking whether, given the new data, your original model still looks reasonable. (This is "posterior predictive model checking"). This step looks a lot like computing a p-value, though Gelman recommends a graphical presentation, rather than condensing to a single number. In general, the notion of doing research on which priors are decent ones for scientific practice - strong enough to capture knowledge that we really do have, and weak enough to adapt to the evidence, given sufficient evidence - is a non-Bayesian notion; a perfect Bayesian only chooses their prior once, and never changes it. Note that historically, Jaynes worked on heuristics for how to choose a good prior, making him a non-Bayesian.
I saw an example that impressed me (and I can't find the paper now to cite it!). Suppose you have an urn A, with many balls in it, labeled A, and one ball labeled Z. Also, an urn B, with many (but fewer) balls in it labeled B and one ball labeled C, et cetera, until you finally have an urn Z with the fewest balls in it, labeled Z. If we mix the urns and draw a ball from the mixture, which urn did it probably originally come from?
Suppose (because you're a computationally-limited Bayesian) that you only include in your model the N highest-probability hypotheses. That is, you include A, B, and C in your model, but you neglect Z - you put zero probability on it. (We can make Z's pre-evidence probability arbitrarily small, to make this seem reasonable at the time.) When one, or even N, balls turn out to be labeled Z, the model (due to the initial zero probability on Z) continues insisting that the balls came from one of the initially-specified hypotheses.
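Here's a toy sketch of that lockout effect. The priors and per-urn likelihoods are made-up numbers I chose for illustration, not anything from the urn paper - the only point is that a hypothesis assigned prior probability exactly zero can never recover, no matter how many Z-labeled balls you see:

```python
import numpy as np

# Hypotheses: which urn the balls came from. The computationally-limited
# Bayesian keeps only the top-3 hypotheses (A, B, C) and truncates Z to zero.
labels = ["A", "B", "C", "Z"]
prior = np.array([0.6, 0.3, 0.1, 0.0])  # Z assigned exactly zero

# Likelihood of drawing a ball labeled "Z" from each urn: small but nonzero
# for A, B, C (each holds a stray Z ball), near-certain for urn Z.
# These numbers are illustrative assumptions.
lik_z = np.array([0.01, 0.02, 0.05, 0.99])

posterior = prior.copy()
for _ in range(10):  # observe ten balls labeled "Z" in a row
    posterior = posterior * lik_z      # Bayes: prior times likelihood...
    posterior = posterior / posterior.sum()  # ...renormalized

print(posterior)
# posterior[3] (urn Z) is still exactly 0.0: updating alone can't fix
# a hypothesis the model excluded at the start.
```

No amount of evidence moves Z off zero, because multiplication by a likelihood can never turn a zero prior into a nonzero posterior.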
Of course, you could (and should) do a posterior predictive check, computing the probability that your model assigns to the observed data, and revise your model if the probability says your model is wack. However, that step "looks frequentist", and isn't explicitly included in the rhetoric of "Bayesian Statistics = Science". Bayesians update on the evidence, they don't revise their models!
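A minimal version of such a check for the urn model, again with made-up numbers (this is a sketch of the idea, not Gelman's actual graphical procedure): compute how probable the observed data is under the truncated model, or simulate replicated datasets from it and ask how extreme the real data looks.

```python
import numpy as np

# Truncated model: balls come only from urns A, B, or C.
urn_probs = np.array([0.6, 0.3, 0.1])         # prior over A, B, C
p_z_given_urn = np.array([0.01, 0.02, 0.05])  # illustrative stray-Z chances

# Marginal probability the model assigns to a single "Z"-labeled draw.
p_z = float(urn_probs @ p_z_given_urn)

n_draws, n_z_observed = 10, 10
# Probability the model assigns to the actual data: ten Zs in ten draws.
p_data = p_z ** n_z_observed  # astronomically small -> model misfit

# Simulation flavor of the same check (a posterior predictive p-value):
# draw replicated datasets and see how often they have >= 10 Z labels.
rng = np.random.default_rng(0)
reps = rng.binomial(n_draws, p_z, size=100_000)
p_value = float(np.mean(reps >= n_z_observed))

print(p_data, p_value)  # both near zero: the check flags the model
```

The check doesn't tell you *which* hypothesis is missing; it just tells you the model as specified is badly surprised by the data, which is the cue to go back and revise the model rather than keep updating within it.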
Anyway, don't get caught up in factionalism and tribal us vs. them thinking!
Tangent: I was a huge fan of Proofs and Refutations, which is about mathematics; is there a book of Lakatos's on the philosophy of science you would recommend?
I liked Proofs and Refutations a lot too. However, I'm ashamed to admit I have no special knowledge of Lakatos. All I know about his philosophy of science stuff (which I believe is closely related) is from his Wikipedia page (and Feyerabend's). Gelman's slides made the analogy with Lakatos explicitly.