torekp comments on Putting in the Numbers - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Whoa, you think the only correct interpretation of "there's a die that returns 1, 2, or 3" is to be absolutely certain that it's fair? Or what do you think a delta function in the distribution space means?
(This will have effects, and they will not be subtle.)
One of the classic examples of this is three interpretations of "randomly select a point from a circle." You could do this by selecting an angle for a radius uniformly, then selecting a point on that radius uniformly along its length. Or you could do those two steps, and then select a point along the associated chord uniformly at random. Or you could select x and y uniformly at random in a square bounding the circle, and reject any point outside the circle. Only the last one will make all areas in the circle equally likely: the first method will make areas near the center more likely, and the second method will make areas near the edge more likely (if I remember correctly).
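A quick simulation makes the non-uniformity concrete. The sketch below (method names are mine) contrasts the first method, where the distance along the radius is uniform, with the rejection method; under area-uniform sampling the inner disk of half the radius should contain about 1/4 of the points, while the radius method puts about 1/2 of them there.

```python
import random

def sample_radius_method():
    # Method 1: the angle doesn't affect the distance from the center,
    # so the radial distance is simply uniform on [0, 1].
    return random.uniform(0.0, 1.0)

def sample_rejection_method():
    # Method 3: pick (x, y) uniformly in the bounding square and
    # reject points outside the unit disk; return the radial distance.
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            return (x * x + y * y) ** 0.5

random.seed(0)
n = 100_000
frac_radius = sum(sample_radius_method() <= 0.5 for _ in range(n)) / n
frac_reject = sum(sample_rejection_method() <= 0.5 for _ in range(n)) / n
# The inner half-radius disk has 1/4 of the area, so frac_reject is
# near 0.25 while frac_radius is near 0.5 (centers over-weighted).
```

The second method (uniform along the associated chord) is left out because the comment doesn't pin down which chord is meant, which is itself part of the ambiguity being illustrated.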
But I think that it generally is possible to reach consensus on what criterion you want (such as "pick a method such that any area of equal size has equal probability of containing the point you select"), and then it's obvious what sort of method you want to use. (There's a non-rejection-sampling way to get the equal-area method for the circle, by the way.) And so you probably need to be clever about how you parameterize your distributions, and what priors you put on those parameters, and eventually you do have hyperparameters that functionally have no uncertainty. (This is, for example, seeing a uniform as a beta(1,1), where you don't have a distribution on the 1s.) But I think this is a reasonable way to go about things.
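The fixed-hyperparameter point can be checked directly: Beta(1, 1) has constant density 1 on [0, 1], so with those hyperparameters held fixed (no distribution over them) it just is the uniform distribution. A minimal stdlib check:

```python
import random

random.seed(0)
n = 100_000
# Draw from Beta(1, 1) with the hyperparameters treated as known constants.
samples = [random.betavariate(1.0, 1.0) for _ in range(n)]
mean = sum(samples) / n
below_03 = sum(s < 0.3 for s in samples) / n
# Uniform(0, 1) has mean 0.5 and P(X < 0.3) = 0.3; Beta(1, 1) matches.
```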
In a separate comment, Kurros worries about cases with "no preferred parameterisation of the problem". I have the same worry as both of you, I think. I guess I'm less optimistic about the resolution. The parameterization seems like an empirical rabbit that Jaynes and other descendants of the Principle of Insufficient Reason are trying to pull out of an a priori hat. (See also Seidenfeld <pdf> section 3 on re-partitioning the sample space.)
I'd appreciate it if someone could assuage - or aggravate - this concern. Preferably without presuming quite as much probability and statistics knowledge as Seidenfeld does (that one went somewhat over my head, toward the end).