Strictly confused

Discuss the wikitag on this page. Here is the place to ask questions and propose changes.

In the 4th-from-last paragraph, the page says the sequence HHHHHT "is assigned 1/30 probability by the Rule of Succession". Where does this number come from? The page doesn't explain. I do understand the part about that same sequence being assigned 1/64 by the fair-coin hypothesis, but the part about the Rule of Succession isn't so clear to me.
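For what it's worth, here is how I'd compute a Rule of Succession probability (assuming the standard Laplace rule, where the next flip comes up heads with probability (heads so far + 1)/(flips so far + 2)); this is my own sketch, not taken from the page:

```python
from fractions import Fraction

def rule_of_succession_prob(sequence):
    """Probability Laplace's Rule of Succession assigns to a full flip
    sequence: heads is predicted with (heads so far + 1)/(flips so far + 2),
    tails with (tails so far + 1)/(flips so far + 2)."""
    prob = Fraction(1)
    heads = tails = 0
    for flip in sequence:
        n = heads + tails
        if flip == "H":
            prob *= Fraction(heads + 1, n + 2)
            heads += 1
        else:
            prob *= Fraction(tails + 1, n + 2)
            tails += 1
    return prob

print(rule_of_succession_prob("HHHHHT"))  # -> 1/42
```

Under that convention I get 1/42 for HHHHHT rather than 1/30, so perhaps the page has a different rule in mind.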

The second example, in the 2nd-from-last paragraph, is also confusing to me: the part that says the sequence HHHHH HTHHH HHTHH gives the Bayesian 19.5 : 1 odds of the coin being biased versus it being fair.
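Here is one reading (a guess on my part) that reproduces the figure: if "biased" means a uniform prior over the coin's bias, the marginal likelihood of a 13-heads, 2-tails sequence is a Beta integral, and the odds against the fair coin come out near 19.5 : 1.

```python
from fractions import Fraction
from math import factorial

# Sequence HHHHH HTHHH HHTHH: 13 heads, 2 tails in 15 flips.
heads, tails, n = 13, 2, 15

# Marginal likelihood under a uniform prior on the bias p:
# integral of p^13 * (1-p)^2 dp = 13! * 2! / 16!  (a Beta integral).
biased = Fraction(factorial(heads) * factorial(tails), factorial(n + 1))

# Likelihood under the fair coin: (1/2)^15.
fair = Fraction(1, 2 ** n)

print(float(biased / fair))  # approximately 19.5
```

If this is what the page intends, stating the biased hypothesis explicitly would help.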

  1. I propose that this concept be called "unexpected surprise" rather than "strictly confused":

    • "Strictly confused" suggests logical incoherence.
    • "Unexpected surprise" can be motivated the following way: let be how surprising data is on hypothesis . Then one is "strictly confused" if the observed is larger than than one would expect assuming a holds.

    This terminology is nice because the average of $s(x \mid h)$ under $P(\cdot \mid h)$ is the entropy, or expected surprise, in $h$. It also connects with Bayes, since $-s(x \mid h) = \log P(x \mid h)$ is the evidential support $x$ gives $h$.
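A toy illustration of these definitions (my own sketch, working in bits): for a fair coin, every length-6 sequence is exactly as surprising as the entropy predicts, so a fair-coin believer can never be strictly confused by six flips.

```python
from math import log2

def surprise(p):
    """s(x|h) = -log2 P(x|h), the surprise of an outcome, in bits."""
    return -log2(p)

# Fair coin, 6 flips: every one of the 64 sequences has probability 1/64,
# so every observation has surprise exactly 6 bits.
fair_surprise = surprise(1 / 64)

# Expected surprise (entropy): average of s over the 64 equiprobable sequences.
fair_entropy = sum((1 / 64) * surprise(1 / 64) for _ in range(64))

print(fair_surprise, fair_entropy)  # both are 6.0 bits
```

Observed surprise never exceeds expected surprise here, matching the claim that the fair-coin hypothesis cannot be "strictly confused" by data of fixed length.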

  2. The section on "Distinction from frequentist p-values" is, I think, both technically incorrect and a bit uncharitable.

    • It's technically incorrect because the following isn't true:

      The classical frequentist test for rejecting the null hypothesis involves considering the probability assigned to particular 'obvious'-seeming partitions of the data, and asking if we ended up inside a low-probability partition.

      Actually, the classical frequentist test involves specifying an obvious-seeming statistic $t(x)$ measuring surprise, and seeing whether the observed $t(x)$ is higher than expected on $h$. This is even more arbitrary than the above.

    • On the other hand, it's uncharitable because it's widely acknowledged that one should try to choose $t$ to be sufficient, which is exactly the condition that the partition induced by $t$ is "compatible" with $P(\cdot \mid h)$ for different $h$, in the sense that $P(x \mid t(x), h) = P(x \mid t(x))$ for all the considered $h$.

      Clearly $x$ itself is sufficient in this sense. But there might be simpler functions of $x$ that do the job too ("minimal sufficient statistics").

      Note that $t$ being sufficient doesn't make it non-arbitrary, as it may not be a monotone function of the surprise $s(x \mid h)$.
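For example (my own sketch, taking $t(x)$ to be the number of heads in an i.i.d. coin model): conditional on $t(x)$, every sequence is equally likely whatever the bias, which is the compatibility condition above.

```python
from fractions import Fraction
from itertools import product

def seq_prob(seq, p):
    """P(x | h) for an i.i.d. coin with heads-probability p."""
    heads = seq.count("H")
    return p ** heads * (1 - p) ** (len(seq) - heads)

n, k = 4, 2  # look at all length-4 sequences with t(x) = 2 heads
for bias in (Fraction(1, 2), Fraction(3, 4)):
    seqs = ["".join(s) for s in product("HT", repeat=n) if s.count("H") == k]
    total = sum(seq_prob(s, bias) for s in seqs)
    # P(x | t(x) = k, h): the same for every sequence and every bias.
    conditionals = {seq_prob(s, bias) / total for s in seqs}
    print(bias, conditionals)  # always the single value 1/6
```

Each bias yields the same conditional distribution (uniform over the 6 sequences), so the statistic "number of heads" carries all the information the coin's bias can affect.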

  3. Finally, I think that this concept is clearly "extra-Bayesian", in the sense that it's about non-probabilistic ("Knightian") uncertainty over $h$, and one is considering probabilities attached to unobserved $x$'s (i.e., not conditioning on the observed $x$).

    I don't think being "extra-Bayesian" in this sense is problematic. But I think it should be owned-up to.

    Actually, "unexpected surprise" reveals a nice connection between Bayesian and sampling-based uncertainty intervals:

    • To get an (HPD) credible interval, exclude those $h$ that are relatively surprised by the observed $x$ (or which are a priori surprising).
    • To get a (nice) confidence interval, exclude those $h$ that are "unexpectedly surprised" by $x$.
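To make the parallel concrete, here is a rough numerical sketch (entirely my own construction: binomial data, a grid of candidate biases, a uniform prior, and 95% cutoffs), contrasting a discrete HPD credible set with a confidence-style set built from "unexpected surprise":

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, k_obs = 20, 15                        # observed: 15 heads in 20 flips
grid = [i / 100 for i in range(1, 100)]  # candidate biases h

# Credible set (uniform prior): keep the highest-posterior h's until
# they cover 95% of the posterior mass (a discrete HPD set).
post = [binom_pmf(k_obs, n, p) for p in grid]
total = sum(post)
ranked = sorted(zip(post, grid), reverse=True)
credible, mass = set(), 0.0
for w, p in ranked:
    if mass >= 0.95 * total:
        break
    credible.add(p)
    mass += w

# Confidence-style set: keep h unless the observed outcome lands in the
# 5% of outcomes that h itself considers most surprising.
confidence = set()
for p in grid:
    probs = [binom_pmf(k, n, p) for k in range(n + 1)]
    p_obs = probs[k_obs]
    # probability of an outcome at least as surprising as the observed one:
    tail = sum(q for q in probs if q <= p_obs)
    if tail > 0.05:
        confidence.add(p)

print(min(credible), max(credible))
print(min(confidence), max(confidence))
```

Since $-\log$ is monotone, comparing raw probabilities is equivalent to comparing surprises, which the confidence-set loop exploits; both sets end up as neighborhoods of the observed frequency 15/20.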