It means your model was inapplicable to the event. Careful Bayesian reasoners don't put any 0s or 1s in predictions of observations. They may keep an explicit separation between model and observation, such as giving probability 1 to "all circles in Euclidean planes have pi as the ratio of circumference to diameter", with the non-1 probability falling on "is that thing I see actually, truly a circle in a flat plane?"
Likewise, it's fine to give probability 1 to "a fair die will roll integers between 1 and 6 inclusive with equal probability", and then, when a 7 comes up, say "that's evidence that it's not a fair die".
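To make that concrete, here's a minimal sketch (my toy setup, not part of the original point: a fair d6 versus a hypothetical "weird die" that can also show a 7):

```python
# Bayesian model comparison: observing a 7 has likelihood 0 under the
# fair-d6 model, so the posterior on "fair" collapses to 0 in one step.
hypotheses = {
    "fair d6":   {face: 1 / 6 for face in range(1, 7)},  # faces 1..6
    "weird die": {face: 1 / 7 for face in range(1, 8)},  # faces 1..7
}
prior = {"fair d6": 0.99, "weird die": 0.01}

def posterior(observation):
    # Unnormalized posterior = prior * likelihood of the observation.
    unnorm = {h: prior[h] * hypotheses[h].get(observation, 0.0)
              for h in hypotheses}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

print(posterior(7))  # {'fair d6': 0.0, 'weird die': 1.0}
```

Note the asymmetry: the 7 is only survivable here because some hypothesis in the space assigned it nonzero probability. With the fair d6 as the only hypothesis, the update would be undefined.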
Anyone who assigns a probability of 0 or 1 to a future experience is wrong. There's always some infinitesimal chance that the simulation ends, or your Boltzmann brain glitches, or aliens are messing with gravity, or whatever. In casual use we often round these off, which is convenient but not strictly correct.
Note that there's absolutely no way to GET a 0 or 1 probability out of a Bayesian calculation unless it was already in the prior. Any sane prior can move arbitrarily close to 0 or 1 with sufficient observations, but it can't actually get all the way there – update size is proportional to surprise, so it takes a LOT of evidence to shift even a tiny bit closer once you're already near 0 or 1.
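Here's a sketch of that dynamic (my example, assuming a coin that's either fair or biased 90 % toward heads, with an unbroken stream of observed heads):

```python
import math

# Each observed head adds a constant amount of log-odds evidence for
# "biased", but the posterior probability only creeps toward 1.
log_odds = 0.0  # prior odds 1:1 between "biased" and "fair"
for n in range(1, 101):
    log_odds += math.log(0.9 / 0.5)  # likelihood ratio of one more head
    p_fair = 1 / (1 + math.exp(log_odds))  # posterior that the coin is fair
    if n in (1, 10, 100):
        print(f"after {n} heads, P(fair) = {p_fair:.3e}")
# P(fair) shrinks geometrically but remains strictly positive after any
# finite number of observations: close to 0, never 0.
```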
For real, computation-limited agents and humans, one can also model a meta-credence about "are my model and the probability assignments I have even vaguely close to correct?", which ALSO is not 1.
I'll give 2 examples:
First: what's the probability that the halting problem is decidable for a randomly chosen Turing machine? The answer is that it's probability 1, but that doesn't mean that we can extend the decider of the halting problem to cover all cases:
https://arxiv.org/abs/math/0504351
Another example: what's the probability that our physical constants are what they are, especially the constants that seem fine-tuned for life?
The answer is that if the constants are arbitrary real numbers, the probability is 0, and this holds no matter what value is actually picked – yet some value is picked.
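To put that in measure-theoretic terms (a sketch, assuming a constant modeled as a draw $X$ from any distribution with a density $f$):

$$P(X = c) = \int_c^c f(x)\,dx = 0 \quad \text{for every } c,$$

so every individual value has probability 0, even though the draw always yields some value.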
Many of the existing answers seem to confuse model and reality.
In terms of practical prediction of reality, it would always be a mistake to emit a 0 or 1, because there's always that one-in-a-billion chance that our information is wrong – however vivid it seemed at the time. Even if you have secretly looked at the hidden coin and seen clearly that it landed on heads, 99.999 % is a more accurate forecast than 100 %. However unlikely, the coin could have landed on aardvarks and merely masqueraded as heads – that is a possibility. Or you confabulated the memory of seeing the coin from a different coin you saw a week ago – not so likely either, but it happens. Or you mistook tails for heads – which presumably happens every now and then.
When it comes to models, though, probabilities of 0 and 1 show up all the time. Getting a 7 when tossing a d6 with the standard dice model simply does not happen, by construction. Adding two and three and getting five under regular field arithmetic happens every time. We can argue whether the language of probability is really the right tool for those types of questions, but taking a non-normative stance, it is reasonable for someone to ask those questions phrased in terms of probabilities, and then the answers would be 0 % and 100 % respectively.
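To make the model-level view concrete, here's a minimal sketch (mine, just one way to encode the standard d6 model as an explicit probability table):

```python
from fractions import Fraction

# The standard d6 model: each face 1..6 has probability 1/6 by construction.
d6 = {face: Fraction(1, 6) for face in range(1, 7)}

def prob(event):
    # Probability of an event (a set of outcomes) under the model.
    return sum(p for face, p in d6.items() if face in event)

print(prob({7}))                 # 0 -- a 7 simply isn't in the model
print(prob({1, 2, 3, 4, 5, 6}))  # 1 -- some face comes up every time
```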
These probabilities also show up in limits and arguments of general tendency. When a coin is tossed repeatedly, the probability of getting only tails forever is 0 %. In a simple symmetric random walk, the probability of eventually crossing the origin is 100 %. When throwing a d6 for long enough, the running mean will, with probability 100 %, eventually settle within the range 3–4.
These model-level probabilities apply only to our models, not to reality, but they can serve as a useful mental shortcut as long as one is careful not to apply them blindly.
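For the d6 mean claim, a quick simulation sketch (illustration only: a finite run can suggest, but never establish, an almost-sure limit):

```python
import random

random.seed(0)  # reproducible illustration

# Running mean of d6 rolls: by the strong law of large numbers it
# converges to 3.5 almost surely, so with probability 1 it eventually
# stays inside [3, 4].
total = 0
for n in range(1, 100_001):
    total += random.randint(1, 6)  # one fair d6 roll
    if n in (10, 1_000, 100_000):
        print(n, total / n)
```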
Rolling a standard 6-sided die and getting a 7 has probability zero.
Tossing an ordinary coin and having it come down aardvarks has probability zero.
Every random value drawn from the uniform distribution on the real interval [0,1] has probability zero.
2=3 with probability zero.
2=2 with probability 1.
For any value in the real interval [0,1], the probability of picking some other value from the uniform distribution is 1.
In a mathematical problem, when a coin is tossed, coming down either heads or tails has probability 1.
In practice, 0 and 1 are limiting cases that from one point of view can be said not to exist, but from another point of view, sufficiently low or high probabilities may as well be rounded off to 0 or 1. The test is, is the event of such low probability that its possibility will not play a role in any decision? In mathematics, probabilities of 0 and 1 exist, and if you try to pretend they don't, all you end up doing is contorting your language to avoid mentioning them.
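For the mathematical half of that claim, the axioms hand you 0 and 1 directly: any probability measure satisfies

$$P(\emptyset) = 0, \qquad P(\Omega) = 1,$$

and "almost surely" is the standard name for probability 1 events that are not the whole sample space $\Omega$.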
It seems to me that, in fact, it's entirely possible for a coin to come up aardvarks. Imagine, for a second, that unbeknownst to you a secret society of gnomes, concealed from you (or from society as a whole), occasionally decides to turn coins into aardvarks (or to fulfill whatever condition you have for a coin to come up aardvarks). Now, this is nonsense (obviously). But it's technically possible, in the sense that this race of gnomes could exist without contradicting your previous observations (only perhaps your conclusions based on them). Or, if you don't accep...
Okay, that's a nice answer, but to ask a related question: in Bayesianism, if we declare that an event has probability 0 or 1, does that mean the event never happens or always happens, respectively?
Good answer for the most part though.
I am not sure whether this is the answer you're looking for, but I think it's true and could be de-confusing, and others have given the standard/practical answer already.
You can try running a program which computes Bayesian updates and see what happens when this program is passed as input an 'observation' to which it assigns probability 0. Two possible outcomes (of many, dependent on the exact program) that come to mind: the program crashes, because the normalizing constant P(observation) is 0 and the update divides by zero; or it silently emits undefined values (NaNs) that corrupt every subsequent update.
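A toy version of such a program (my sketch, assuming a small finite hypothesis space: a fair d6 and a fair d4, neither of which can produce a 7):

```python
# A Bayesian updater whose entire hypothesis space assigns probability 0
# to the observation "7", so the normalizer P(observation) is 0.
hypotheses = {
    "fair d6": {f: 1 / 6 for f in range(1, 7)},
    "fair d4": {f: 1 / 4 for f in range(1, 5)},
}
belief = {"fair d6": 0.5, "fair d4": 0.5}

def update(belief, observation):
    # Unnormalized posterior = prior * likelihood.
    unnorm = {h: belief[h] * hypotheses[h].get(observation, 0.0)
              for h in belief}
    total = sum(unnorm.values())  # P(observation) under the mixture
    return {h: p / total for h, p in unnorm.items()}

print(update(belief, 3))  # fine: both models allow a 3
print(update(belief, 7))  # ZeroDivisionError: the model space ruled this out
```

(With numpy arrays in place of plain floats, the same 0/0 would come out as NaN instead of raising an exception, which is the second failure mode.)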
Bayes' theorem is an algorithm which is used because it happens to help predict the world, rather than something with metaphysical status.
We could also imagine very different (mathematical) worlds where prediction is not needed or useful, or, maybe, where the world is so differently structured that Bayes' theorem is not predictive.
(Epistemic status: I know basic probability theory but am otherwise just applying common sense here)
This seems to be mostly a philosophical question. I believe the answer is that you're then hitting the limits of your model, and Bayesianism doesn't necessarily apply. In practical terms, I'd say it's most likely that you were mistaken about the probability of the event in fact being 0. (Probability 1 events occurring should be fine.)
Re probability 0 events, I'd say a good example is the question "What was the probability of living in a universe with our specific fundamental constants?"
Our current theory relies on 20+ real-number constants, and critically, the probability of getting the exact constants we do have is always 0, no matter what numbers are picked – yet some set of numbers is picked.
Another example: the set of Turing machines for which we can't decide halting or non-halting is a probability 0 set, but that doesn't allow us to construct a Turing machine that decides whether an arbitrary Turing machine halts.
Okay, this one is a simple probability question/puzzle:
What does it actually mean for a probability 0 or 1 event to occur? Or, for those who prefer subjective credences: what does it mean to have a probability 0 or 1 observation in Bayesian terms?
Part of my motivation here is to address the limiting cases of beliefs, where the probabilities are as extreme as they can get, and to see what results from taking the probability to the extremes.