All of Mayo's Comments + Replies

Mayo10

Just a couple of points on this discussion, which I'm sure I walked in at the middle of: (1) One thing it illustrates is the important difference between what one "should" believe in the sense of it being prudential in some way, versus a very different notion: what has or has not been sufficiently well probed to regard as warranted (e.g., as a solution to a problem, broadly conceived). Of course, if the problem happens to be "to promote luckiness", a well-tested solution could turn out to be "don't demand well-testedness, but thin... (read more)

Mayo50

I realize Eliezer holds great sway on this blog, but I think people here ought to question a bit more closely some of his most winning arguments in favor of casting out frequentism for Bayesianism. I've only read this blog around 4 times, and each time I've found a howler apparently accepted. But putting those aside, I find it curious that the results on psychological biases that are given so much weight on this blog are arrived at and affirmed by means of error statistical methodology. errorstatistics.com

8gwern
Speaking as one of the LWers who has spent a fair bit of time reading up on both the heuristics & biases literature and also the problems & misuse of NHST (although I certainly couldn't compare to your general statistical expertise), my position is basically that there's no available literature which has examined the H&B topic with a superior methodology (so there's no alternative we could use) and that on the whole H&B has found real effects despite the serious weaknesses in the methodology - for example, of the Reproducibility Project's 13 targets, the ones which failed to replicate were priming effects and not the tested H&B effects (eg. sunk costs, anchoring, framing). The problems are not so bad as to drain the H&B results of all validity, just some. So while the H&B research program is no doubt undermined and hampered by the statistical tools and practices of the researchers involved, there seems little reason to think that the most-discussed biases are pure statistical mirages; and so they are entirely relevant to our discussions here.

(From my perspective, the real question about the utility of the H&B literature to our practical discussions here on LW is not whether the biases exist in the lab settings they are studied in - it's clear that they are not artifacts of p-value hacking or anything like that - but whether they operate in the real world to a meaningful extent and shape opinions & actions on a wide scale and on the topics we care about. This is, unfortunately, something which is very difficult to study no matter what methodology one might choose to use, and for this concern, criticizing the use of error statistical methodology is largely irrelevant.)
Mayo100

Frequentism is as abused as "orthodox statistics", and in any event, tends to evoke a conception of people interested in direct inference: assigning a probability (based on observed relative frequencies) to outcomes. Frequentism in statistical inference, instead, refers to the use of error probabilities--based on sampling distributions-- in order to assess and control a method's capability to probe a given discrepancy or inferential flaw of interest. Thus, a more suitable name would be error probability statistics, or just error statistics. One i... (read more)
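To make the definition above concrete, here is a minimal sketch (not from the comment; the sample size, alpha level, and discrepancy are made-up illustration values) of an error probability computed from a sampling distribution: the power of a one-sided z-test, i.e. the method's capability to detect a given discrepancy from the null.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the complementary error function.
    return 0.5 * math.erfc(-x / math.sqrt(2))

# One-sided z-test of H0: mu = 0 vs mu > 0 with known sd = 1, n = 25,
# alpha = 0.05 (critical value 1.645). The "capability to probe" a
# discrepancy delta is the power: P(reject H0 | mu = delta).
n, z_crit = 25, 1.645

def power(delta):
    return 1 - norm_cdf(z_crit - delta * math.sqrt(n))

print(round(power(0.0), 3))  # ~0.05: the type I error probability
print(round(power(0.5), 3))  # high capability to detect delta = 0.5
```

Both numbers are error probabilities in the sense described: properties of the test procedure under the sampling distribution, not of any particular data set.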

0Cyan
Hah! Those first few sentences also made me wonder if it was you. But then I got to the part about the "pro-natural" agenda and decided it was unlikely.
Mayo70

I'm sorry to see such wrongheaded views of frequentism here. Frequentists also assign probabilities to events where the probabilistic introduction is entirely based on limited information rather than a literal randomly generated phenomenon. If Fisher or Neyman were ever actually read by people purporting to understand frequentist/Bayesian issues, they'd have a radically different idea. Readers of this blog should take it upon themselves to check out some of the vast oversimplifications... And I'm sorry but Reichenbach's frequentism has very little to do wit... (read more)

1Cyan
Do you intend to be replying to me or to Tyrrell McAllister?
Mayo30

If there were a genuine philosophy of science illumination it would be clear that, despite the shortcomings of the logical empiricist setting in which Popper found himself, there is much more of value in a sophisticated Popperian methodological falsificationism than in Bayesianism. If scientists were interested in the most probable hypotheses, they would stay as close to the data as possible. But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal. Moreover, you cannot falsif... (read more)

1Cyan
Strictly speaking, one can't falsify with any method outside of deductive logic -- even your own Severity Principle only claims to warrant hypotheses, not falsify their negations. Bayesian statistical analysis is just the same in this regard. A Bayesian analysis doesn't need to start with an exhaustive set of hypotheses to justify discarding some of them. Suppose we have a set of mutually exclusive but not exhaustive hypotheses. The posterior probability of a hypothesis under the assumption that the set is exhaustive is an upper bound for its posterior probability in an analysis with an expanded set of hypotheses. A more complete set can only make a hypothesis less likely, so if its posterior probability is already so low that it would have a negligible effect on subsequent calculations, it can safely be discarded. I'm a Bayesian probabilist, and it doesn't go against my ideal. I think you're attacking philosophical subjective Bayesianism, but I don't think that's the kind of Bayesianism to which lukeprog is referring.
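The upper-bound claim in this comment can be checked numerically. The sketch below (illustrative weights and likelihoods, not from the thread) keeps the relative prior weights of the original hypotheses fixed and adds a third hypothesis; the first hypothesis's posterior can only shrink, since the new hypothesis only adds mass to the evidence term in the denominator.

```python
def posteriors(weights, likelihoods):
    # Bayes' theorem over whatever hypothesis set we are given; weights are
    # unnormalized priors, normalization happens in the evidence term.
    joint = [w * l for w, l in zip(weights, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

# Two mutually exclusive hypotheses, analyzed as if the set were exhaustive.
w, like = [3.0, 2.0], [0.2, 0.7]
p_exhaustive = posteriors(w, like)[0]   # 0.3

# Same relative weights plus a third, previously ignored hypothesis.
w2, like2 = [3.0, 2.0, 1.0], [0.2, 0.7, 0.5]
p_expanded = posteriors(w2, like2)[0]   # 0.24

assert p_expanded <= p_exhaustive  # expanding the set can only lower it
```

So if the posterior computed under the (false) exhaustiveness assumption is already negligible, the exact posterior under any expanded set is negligible too, which is what licenses discarding the hypothesis.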
Mayo40

No, the multiple comparisons problem, like optional stopping and other selection effects that alter error probabilities, is a much greater problem in Bayesian statistics because Bayesians regard error probabilities, and the sampling distributions on which they are based, as irrelevant to inference once the data are in hand. That is a consequence of the likelihood principle (which follows from inference by Bayes' theorem). I find it interesting that this blog takes a great interest in human biases, but guess what methodology is relied upon to provide evidence of those biases? Frequentist methods.
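The optional-stopping point can be illustrated with a small simulation (a sketch with made-up settings, not anything from the thread): testing a true null after every new observation and stopping at the first p < 0.05 inflates the frequentist type I error rate well past the nominal 5%. This inflated error probability is exactly the quantity that an analysis obeying the likelihood principle treats as irrelevant once the data are in hand.

```python
import math
import random

def two_sided_p(xs):
    # z-test of H0: mu = 0 with known sd = 1.
    z = abs(sum(xs)) / math.sqrt(len(xs))
    return math.erfc(z / math.sqrt(2))

random.seed(0)
trials, rejections = 2000, 0
for _ in range(trials):
    xs = []
    for n in range(1, 51):                  # peek after every observation
        xs.append(random.gauss(0.0, 1.0))   # H0 is true: mu really is 0
        if n >= 10 and two_sided_p(xs) < 0.05:
            rejections += 1                 # "significant" via stopping rule
            break

print(rejections / trials)  # well above the nominal 0.05
```

The likelihood function for the final sample is the same whether n was fixed in advance or chosen by this stopping rule, which is why the two camps assess the same data differently.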

2lukeprog
Deborah, what do you think of jsteinhardt's Beyond Bayesians and Frequentists?
Mayo60

Y'all are/were having a better discussion here than we've had on my blog for a while....came across by chance. Corey understands error statistics.

0Cyan
"Corey" = me.