Leon D
  1. I propose that this concept be called "unexpected surprise" rather than "strictly confused":

    • "Strictly confused" suggests logical incoherence.
    • "Unexpected surprise" can be motivated the following way: let be how surprising data is on hypothesis . Then one is "strictly confused" if the observed is larger than than one would expect assuming a holds.

    This terminology is nice because the average of $S_H$ under $P(\cdot \mid H)$ is the entropy, or expected surprise, in $H$. It also connects with Bayes, since $-S_H(x) = \log P(x \mid H)$ is the evidential support $x$ gives $H$. (A small numerical sketch of this check follows below.)
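    As a concrete illustration of the check (my own sketch, not from the original post: I assume $H$ is "a coin with bias 0.5" and the datum $x$ is the head-count of $n = 100$ flips; the numbers and the `numpy`/`scipy` usage are just for illustration):

```python
# Sketch of the "unexpected surprise" check.  H = "coin with bias 0.5",
# the observed datum is the head-count of n flips.  Numbers are made up.
import numpy as np
from scipy import stats

n, p = 100, 0.5          # flips, bias under H
k_obs = 72               # observed head-count

def surprise(k):
    """S_H(x) = -log P(x | H), with x the head-count k."""
    return -stats.binom.logpmf(k, n, p)

# Expected surprise under H is the entropy of the sampling distribution.
ks = np.arange(n + 1)
pmf = stats.binom.pmf(ks, n, p)
expected = np.sum(pmf * surprise(ks))

print(f"observed surprise : {surprise(k_obs):.2f} nats")
print(f"expected surprise : {expected:.2f} nats")
# "Unexpectedly surprised" / "strictly confused" if the first is (much) larger
# than the second; a tail probability P(S_H >= observed | H) gives a graded
# version of the same check.
print("unexpectedly surprised?", surprise(k_obs) > expected)
```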

  2. The section on "Distinction from frequentist p-values" is, I think, both technically incorrect and a bit uncharitable.

    • It's technically incorrect because the following isn't true:

      The classical frequentist test for rejecting the null hypothesis involves considering the probability assigned to particular 'obvious'-seeming partitions of the data, and asking if we ended up inside a low-probability partition.

      Actually, the classical frequentist test involves specifying an obvious-seeming measure of surprise $t(x)$ (a test statistic), and seeing whether $t(x)$ is higher than expected on $H$. This is even more arbitrary than the above.

    • On the other hand, it's uncharitable because it's widely acknowledged that one should try to choose $t$ to be sufficient, which is exactly the condition that the partition induced by $t$ is "compatible" with $P(\cdot \mid H)$ for the different $H$, in the sense that $P(x \mid t(x), H) = P(x \mid t(x))$ for all the considered $H$.

      Clearly $t(x) = x$ is sufficient in this sense. But there might be simpler functions of $x$ that do the job too ("minimal sufficient statistics").

      Note that $t$ being sufficient doesn't make it non-arbitrary, as it may not be a monotone function of $S_H$ (the sketch below gives a small example).
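      A small sketch of how this can play out (my own illustration, with made-up numbers): for $n$ i.i.d. flips of a coin with bias $p$, the head-count $t(x) = k$ is a sufficient statistic, but it is not a monotone function of the surprise $S_H$, so a one-sided tail test based on $t$ and the "unexpected surprise" tail based on $S_H$ can give very different answers.

```python
# Sketch: the head-count t(x) = k is sufficient for i.i.d. coin flips, but it
# is not a monotone function of the surprise S_H, so tail tests based on the
# two orderings can disagree.  Numbers are made up for illustration.
import numpy as np
from scipy import stats

n, p = 20, 0.5
k_obs = 1                 # observed head-count

ks = np.arange(n + 1)
pmf = stats.binom.pmf(ks, n, p)
s = -stats.binom.logpmf(ks, n, p)      # surprise S_H as a function of k

# One-sided tail based on the statistic t(x) = k:
p_tail_t = pmf[ks >= k_obs].sum()
# "Unexpected surprise" tail: probability of being at least as surprised as observed.
p_tail_s = pmf[s >= s[k_obs]].sum()

print(f"P(t >= t_obs | H)      = {p_tail_t:.5f}")   # ~1: k >= 1 is almost certain
print(f"P(S_H >= S_H(obs) | H) = {p_tail_s:.5f}")   # tiny: only k near 0 or n qualify
```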

  3. Finally, I think that this concept is clearly "extra-Bayesian", in the sense that it's about non-probabilistic ("Knightian") uncertainty over $H$, and one is considering probabilities attached to unobserved data $x$ (i.e., not conditioning on the observed $x$).

    I don't think being "extra-Bayesian" in this sense is problematic. But I think it should be owned up to.

    Actually, "unexpected surprise" reveals a nice connection between Bayesian and sampling-based uncertainty intervals:

    • To get a (HPD) credible interval, exclude those $H$ that are relatively surprised by the observed $x$ (or which are a priori surprising).
    • To get a (nice) confidence interval, exclude those $H$ that are "unexpectedly surprised" by $x$.
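    A rough sketch of the two exclusion rules (my own illustration; the coin model, grid, flat prior, and threshold are made-up choices, and the HPD construction is only a discrete approximation):

```python
# Sketch: credible vs. confidence set for the bias of a coin, after observing
# k heads in n flips, via the two exclusion rules above.  Made-up numbers.
import numpy as np
from scipy import stats

n, k_obs = 30, 21
alpha = 0.05
grid = np.linspace(0.01, 0.99, 99)           # candidate hypotheses H_p

# Credible set (HPD-style, flat prior over the grid): exclude p whose posterior
# weight is low, i.e. p that are a priori surprising or surprised by the data.
post = stats.binom.pmf(k_obs, n, grid)       # flat prior => posterior ∝ likelihood
post /= post.sum()
order = np.argsort(post)[::-1]               # most plausible p first
n_keep = np.searchsorted(np.cumsum(post[order]), 1 - alpha) + 1
credible = np.sort(grid[order[:n_keep]])

# Confidence set: exclude p that are "unexpectedly surprised" by the data,
# i.e. for which P(S_p(X) >= S_p(k_obs) | H_p) < alpha.
def surprise_tail(p):
    ks = np.arange(n + 1)
    pmf = stats.binom.pmf(ks, n, p)
    s = -stats.binom.logpmf(ks, n, p)
    return pmf[s >= s[k_obs]].sum()

confidence = grid[np.array([surprise_tail(p) for p in grid]) >= alpha]

print(f"credible set   : [{credible.min():.2f}, {credible.max():.2f}]")
print(f"confidence set : [{confidence.min():.2f}, {confidence.max():.2f}]")
```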