
Comment author: cumulant-nimbus 11 January 2008 05:09:32AM 0 points

I'd say that the ball is a sphere and consider the first point of impact (i.e. the tangency point of the plane to the sphere). Otherwise, you need to know a lot about the ball and the field where it lands.

You can compare infinite sets. Take the sets A and B, A={1,2,3,...} and B={2,3,4,...}. B is, by construction, a subset of A. There's your comparison; yet, both are infinite sets.
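The subset comparison above can be sketched in code. This is my own illustrative example, not from the comment: B is a proper subset of A, and yet the pairing n → n + 1 matches A onto B one-to-one, so the two infinite sets also have the same cardinality. We can only inspect finite prefixes here.

```python
# Illustrative sketch (finite prefixes only) of comparing
# A = {1, 2, 3, ...} and B = {2, 3, 4, ...}.

def a(n):
    """First n elements of A = {1, 2, 3, ...}."""
    return list(range(1, n + 1))

def b(n):
    """First n elements of B = {2, 3, 4, ...}."""
    return list(range(2, n + 2))

def bijection(x):
    """The pairing n -> n + 1 maps A onto B one-to-one."""
    return x + 1

# B's elements all appear in A (B is a proper subset)...
assert set(b(10)) < set(a(11))
# ...and yet A and B pair off exactly under the bijection.
assert [bijection(x) for x in a(10)] == b(10)
```

Both assertions hold at once: containment compares the sets as subsets, the bijection compares them by cardinality, and the two comparisons give different answers without contradiction.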

What assumptions would you make for the golf ball and the field? (To keep things clear, can we define events and probabilities separately?)

Comment author: cumulant-nimbus 11 January 2008 03:40:32AM 2 points

Caledonian: Not wrong. Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the uncountability of the reals.

Now say there is some probability density of landing spots; and, let's say no one spot is special in that it attracts golf balls more than points immediately nearby (i.e. our pdf is continuous and non-atomic). Right there, you need every point (as a singleton) to have measure 0.
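Here is a hedged numerical sketch of that point, using an example of my own choosing: a ball landing uniformly on the unit square. The probability of landing within distance r of a fixed interior point is the disk's area, π r², which shrinks to 0 as r → 0, so the singleton {exact point} gets measure 0 even though the ball certainly lands somewhere.

```python
import math

def prob_within(r):
    """P(landing within r of a fixed interior point) under a uniform
    density on the unit square; valid while the disk stays inside."""
    return math.pi * r ** 2

# The probabilities shrink like r^2; the measure of the single point
# is the limit, 0.
for r in [0.1, 0.01, 0.001]:
    print(r, prob_within(r))
```

No one radius is special; the singleton's measure is just the common limit of these shrinking disks, which is exactly the continuity (non-atomicity) assumption above.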

Go pick up Billingsley: measure 0 is not the same as impossible nor does it cause any problems.

In response to Infinite Certainty
Comment author: cumulant-nimbus 11 January 2008 02:31:05AM 0 points

de Finetti assumes conditioning. If I am taking conditional expectations, then iterated expectations (with different conditionings) is very useful.

But iterated expectations, all with the same conditioning, is superfluous. That's why I took care not to put any conditioning into my expectations.
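The useful version of iterated expectations (the tower property, E[E[X | Y]] = E[X], with an inner conditioning the outer expectation averages out) can be checked by simulation. The toy model here is my own: Y uniform on {1, 2, 3} and X | Y uniform on (0, Y).

```python
import random

# Monte Carlo check of the tower property E[E[X | Y]] = E[X]
# for a toy model: Y ~ Uniform{1,2,3}, X | Y ~ Uniform(0, Y).
random.seed(0)

N = 200_000
xs, slices = [], {1: [], 2: [], 3: []}
for _ in range(N):
    y = random.choice([1, 2, 3])
    x = random.uniform(0, y)
    xs.append(x)
    slices[y].append(x)

e_x = sum(xs) / N  # E[X] estimated directly

# E[X | Y = y] estimated within each slice, then averaged over
# Y's (uniform) distribution:
e_e_x_given_y = sum(sum(v) / len(v) for v in slices.values()) / 3

print(e_x, e_e_x_given_y)  # both near (0.5 + 1.0 + 1.5) / 3 = 1.0
```

With different conditionings the two sides carry different information and the identity does real work; with the *same* conditioning inside and out, the inner expectation is already a constant and the outer one adds nothing, which is the superfluity noted above.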

Or we can criticize the probability-of-a-probability musings another way as having undefined filtrations for each of the stated probabilities.

Comment author: cumulant-nimbus 11 January 2008 02:24:12AM 0 points

What do you mean by "infinite set atheism"? You are essentially stating that you don't believe in mathematical limits -- because that is one of the major consequences of infinite sets (or sequences).

If you don't believe in those... well, you lose calculus, you lose the density of the real numbers, you lose any need for (or understanding of) the many events with probability 0 or 1, and you lose the point of Zeno's Paradox. -- Janos is spot on about measure zero not implying impossibility. What is the probability of a golf ball landing at any exact point? Zero. But it has to land somewhere, so no one point is impossible.

Impossibility would mean absence from your sigma algebra. What's that you ask? Without making this painful, you need three things for probability: an idea of what constitutes "the space of everything", an idea of what constitutes possible events out of that space which we can confirm or deny, and an assignment of numbers to those events. (This is often LaTeX'ed as (\Omega, \mathcal{F}, P).) The conversation here seems to be confusing the filtration/sigma-algebra F with the numbers assigned to those events by P.
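For a finite sample space the triple (Ω, F, P) can be written out completely, which makes the events-versus-numbers distinction concrete. This is a minimal sketch of my own, for a fair six-sided die, taking F to be the full power set:

```python
from itertools import chain, combinations

# A finite probability triple (Omega, F, P) for a fair six-sided die.
# Events (members of F) are sets of outcomes; P assigns numbers to events.
omega = frozenset(range(1, 7))

def power_set(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, k) for k in range(len(s) + 1))]

F = power_set(omega)  # the sigma-algebra: all 2^6 = 64 events

def P(event):
    """Uniform measure: each outcome gets weight 1/6."""
    # Anything absent from F is not even an event -- that is what
    # "impossible" means here, not "probability zero".
    assert event in F, "not an event"
    return len(event) / len(omega)

print(P(frozenset({2, 4, 6})))  # P(even) = 0.5
print(P(frozenset()))           # the empty event gets probability 0
```

Note the empty set is still *in* F -- it is an event, assigned the number 0 by P. Confusing membership in F with the value P assigns is exactly the confusion described above.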

Can we choose which we're talking about: events or numbers?

In response to Infinite Certainty
Comment author: cumulant-nimbus 10 January 2008 04:01:23PM 1 point

No, no, no. Three problems, one in the analogy and two in the probabilities.

First, the analogy is off: nothing exceeds c in vacuum, but a particle can exceed the *phase velocity* of light in a medium. Go read up on Cherenkov radiation: it's the blue glow created by charged particles (typically electrons -- not neutrons, which are neutral) travelling faster than light does in that medium. The energy they shed registers as emitted blue light.

Second: conditional probabilities are not necessarily given by a ratio of densities. You're conditioning on (or working with) events of measure zero. These puzzlers are why measure theory exists -- to step around the seeming 'inconsistencies'.

Third: The probability of a probability is superfluous. Probabilities are (thanks to Kolmogorov) just the expectation of indicator variables. Thus P(P(*)=1) = E(I(E(I(*))=1)) = 0 or 1; the randomness is all eliminated by the inside expectation.
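The indicator identity P(A) = E[1_A] is easy to check numerically. The event here is my own toy choice, A = {U < 0.3} for U uniform on (0, 1):

```python
import random

# Numeric check that a probability is the expectation of an indicator:
# P(A) = E[1_A], for the toy event A = {U < 0.3}, U ~ Uniform(0, 1).
random.seed(1)

N = 100_000
indicator = [1 if random.random() < 0.3 else 0 for _ in range(N)]
p_hat = sum(indicator) / N  # Monte Carlo estimate of E[1_A]
print(p_hat)                # near 0.3

# The "probability of a probability" then collapses: E[1_A] is just a
# number (here 0.3), so P(P(A) = 0.3) is 1 and nothing is left random.
```

Once the inner expectation is taken, there is no randomness left for the outer probability to act on, which is the point of the E(I(E(I(*))=1)) computation above.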

Leave the musings on probabilities to the statisticians; they've already thought about these supposed paradoxes.

Comment author: cumulant-nimbus 10 January 2008 04:01:13PM 4 points

You seem to think probabilities of 0 and 1 are mysterious or contradictory when discussing randomness; they aren't. When you're talking about randomness, you need to define your support. That mere action gives you places where the probability is zero. For example: Can the time T to run 100m ever be negative? No? Then P(T<0) = 0. And, by extension, P(T>=0) = 1.

No puzzle there. But your transformation to log-odds has some regularity conditions you're violating in those cases: the transform is only defined for probabilities in (0,1). But that doesn't mean log-odds or probabilities are flawed. Probabilities of 0 and 1 -- like log-odds of plus-and-minus infinity -- are just filling in the boundaries of the system you've created. Mathematically, you want to be able to handle limits; that means handling limits as a probability approaches 0 or 1. That's it.
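The domain restriction is visible in a few lines of code; this sketch just spells out the standard logit formula and watches the boundary behaviour:

```python
import math

def logit(p):
    """Log-odds transform, defined only on the open interval (0, 1)."""
    if not 0 < p < 1:
        raise ValueError("log-odds defined only for p in (0, 1)")
    return math.log(p / (1 - p))

# As p -> 1 the log-odds grow without bound; p = 0 and p = 1 are the
# limit points plus-and-minus infinity, not values the map itself takes.
for p in [0.5, 0.9, 0.99, 0.999999]:
    print(p, logit(p))
```

Nothing breaks at the endpoints except the transform's own domain; the probabilities 0 and 1 sit at the boundary exactly as described above.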

This shouldn't be some huge philosophical puzzle; it's merely the need to have any mathematical system you use be complete. Sir David Cox would be the first to tell you that.