Comment author:Epictetus
20 August 2015 02:30:48PM
1 point
[-]

Situations where an event will definitely or definitely not occur don't seem consistent with the idea of randomness that I've understood probability to revolve around.

"Event" is a very broad notion. Let's say, for example, that I roll two dice. The sample space is just a collection of pairs (a, b) where "a" is what die 1 shows and "b" is what die 2 shows. An event is any sub-collection of the sample space. So, the event that the numbers sum to 7 is the collection of all such pairs where a + b = 7. The probability of this event is simply the fraction of the sample space it occupies.

If I rolled eight dice, they'll never sum to seven, and I say that event occurs with probability 0. If I secretly rolled an unknown number of dice, you could reasonably ask me the probability that they sum to seven. If I answer "0", that just means I rolled either a single die or more than seven dice. It doesn't make the process any less random, nor the question any less reasonable.
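That claim is easy to check by brute force. A sketch that enumerates every outcome for one through eight dice and reports the fraction summing to seven (the n = 8 case has 6^8 ≈ 1.7M outcomes, so it takes a moment):

```python
from itertools import product

def prob_sum_is_7(n_dice):
    """Fraction of all 6**n_dice outcomes whose faces sum to 7."""
    hits = sum(1 for roll in product(range(1, 7), repeat=n_dice)
               if sum(roll) == 7)
    return hits / 6 ** n_dice

for n in range(1, 9):
    print(n, prob_sum_is_7(n))
```

The probability comes out 0 exactly when n is 1 (one die tops out at 6) or 8 (eight dice sum to at least 8), and positive for two through seven dice.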

If you treat an event as some question you can ask about the result of a random process, then 1 and 0 make a lot more sense as probabilities.

For the mathematical theory of probability, there are plenty of technical reasons why you want to retain 1 and 0 as probabilities (and once you get into continuous distributions, it turns out that probability 1 just means "almost certain").

Comment author:Regex
21 August 2015 08:36:41AM
0 points
[-]

This is what I meant by something being a proven truth: within the set of rules, one can find outcomes that are axiomatically impossible or necessary. The process itself may be random, but calling it random when something impossible didn't happen seems odd to me. The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect it is the heart of the issue.

Comment author:Epictetus
21 August 2015 02:01:03PM
1 point
[-]

> The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect it is the heart of the issue.

If 1 isn't quite certain then neither is 0 (if something happens with probability 1, then the probability of it not happening is 0). It's one of those things that pops up when dealing with infinity.

It's best illustrated with an example. Let's say we play a game where we flip a coin and I pay you $1 if it's heads and you pay me $1 if it's tails. With probability 1, one of us will eventually go broke (see Gambler's ruin). It's easy to think of a sequence of coin flips where this never happens; for example, if heads and tails alternated. The theory holds that such a sequence occurs with probability 0. Yet this does not make it impossible.
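A small simulation makes this concrete (the $5 bankrolls and seeds are my own choices, not from the thread): every simulated game terminates, because the never-ending sequences like strict alternation collectively carry probability 0.

```python
import random

def flips_until_ruin(bankroll, rng):
    """Flip a fair coin, $1 per round, until one player is broke; return round count."""
    a, b = bankroll, bankroll
    flips = 0
    while a > 0 and b > 0:
        if rng.random() < 0.5:
            a, b = a + 1, b - 1  # heads: I pay you $1
        else:
            a, b = a - 1, b + 1  # tails: you pay me $1
        flips += 1
    return flips

# 100 seeded games; each one ends in finitely many flips.
results = [flips_until_ruin(5, random.Random(seed)) for seed in range(100)]
print(min(results), max(results))
```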

It can be thought of as the result of a limiting process. If I looked at sequences of N coin flips, counted the ones where no one went broke and divided this by the total number of possible sequences, then as I let N go to infinity this ratio would go to zero. This event occupies a region with area 0 in the sample space.
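The limiting process can be computed exactly for small N. This sketch (using a hypothetical bankroll of $3 each to keep the enumeration small) counts, over all 2^N flip sequences, the fraction where neither player goes broke; the ratio shrinks as N grows:

```python
from itertools import product

def survival_ratio(n_flips, bankroll=3):
    """Fraction of the 2**n_flips coin-flip sequences in which neither
    player goes broke (each starts with `bankroll` dollars, $1 per flip)."""
    survived = 0
    for seq in product((+1, -1), repeat=n_flips):
        a = bankroll
        for step in seq:
            a += step
            if a == 0 or a == 2 * bankroll:  # one of us is broke
                break
        else:
            survived += 1
    return survived / 2 ** n_flips

for n in (4, 8, 12, 16):
    print(n, survival_ratio(n))
```

Each printed ratio is strictly smaller than the last, consistent with the limit being 0 even though, for every finite N, some surviving sequences remain.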

If the limit converges, then it can hit 0 or 1. Got it. Thank you.