I was thinking about what happens when you run independent Bernoulli trials forever, with probability $p_i$ of success on trial $i$, but you lose the first time you fail. If all the $p_i$ are equal, then with probability 1 we will eventually fail[1]. Similarly, if the supremum of the sequence of probabilities is some $s < 1$, then we're also guaranteed to fail, since $P(\text{never fail}) = \prod_i p_i \le \prod_i s = 0$. The condition $\lim_{i\to\infty} p_i = 1$ is necessary but not sufficient to guarantee that failure isn't certain[2]. That seems to say that not only do we need to improve, but we also can't improve too slowly, or else we're sure to fail.

So just how quickly do we need to improve? Well, we have that $\sum_i (1-p_i) < \infty \implies \prod_i p_i > 0$[3], which is much easier to work with (though since we're only using this direction of the implication, it can only show that a sequence improves quickly enough). Let $q_i = 1 - p_i$, which can be interpreted as our probability of failure on trial $i$. We know right away that any geometric progression of failure reduction, e.g. $q = \tfrac12, \tfrac14, \tfrac18, \ldots$, will give us a better-than-zero chance of never failing, since the $q_i$ sum to a finite value. That says that for any constant $c > 1$, it's sufficient that $q_i \ge c\,q_{i+1}$ to not guarantee failure! But surely we can do better than this, i.e. there's no need to improve at a constant rate. We know that $\sum_n 1/n^2 = \pi^2/6$ converges, so taking $q_i = 1/i^2$, we need only $q_i \ge q_{i+1}\left(1 + \frac{2i+1}{i^2}\right)$, a ratio of roughly $1 + 2/i$. So the further out we go, the less we need to improve relative to our previous probability of failure. This analysis was rather quick, and I'm curious if we can do better than this.
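As a numerical sanity check of these three regimes, here's a small sketch (the `survival` helper and the specific sequences are my own choices for illustration): it multiplies out partial products $\prod_i p_i$ for a constant failure probability, a geometrically shrinking one, and the $1/n^2$-style one.

```python
def survival(q, n):
    """Partial product of success probabilities p_i = 1 - q(i), for i = 1..n.

    Approximates P(no failure in the first n trials).
    """
    prod = 1.0
    for i in range(1, n + 1):
        prod *= 1.0 - q(i)
    return prod

# Constant failure probability q_i = 0.01: the product heads to 0.
const = survival(lambda i: 0.01, 10_000)

# Geometric improvement q_i = 2^-i: converges to a positive limit (~0.289).
geo = survival(lambda i: 2.0 ** -i, 60)

# Polynomial improvement q_i = 1/(i+1)^2: the product telescopes to 1/2.
poly = survival(lambda i: 1.0 / (i + 1) ** 2, 100_000)

print(const, geo, poly)
```

The third case can be checked by hand: $\prod_{m=2}^{N}\frac{(m-1)(m+1)}{m^2}$ telescopes to $\frac{N+1}{2N} \to \tfrac12$.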
I thought about this problem after thinking about processes where even a single failure would be catastrophic. While in the real world enough zeros in front of an event's probability makes it negligible, things are different if you live forever and you're dealing with something whose chance of failure can never be driven all the way to 0%. You need to, at least on average, improve your odds between trials, or failure is eventually certain.
Technically this isn't guaranteed; there's nothing stopping a fair coin from landing on heads forever other than our finite lifespans preventing us from observing it. But I think it's fair to call a 0% chance of success a failure.
For example, take $p = 0.9$ (10 times), then $0.99$ (100 times), then $0.999$ (1000 times), and so on. Here $p_i \to 1$, but each block's product $(1 - 10^{-k})^{10^k}$ is an increasingly good approximation of $1/e \approx 0.37$, so after $k$ blocks the overall product is roughly $e^{-k}$, which tends to 0.
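A quick numerical check of this block construction (the variable names below are mine): each block's product approaches $1/e$, and the running product over $k$ blocks decays roughly like $e^{-k}$.

```python
import math

# Product of the k-th block: (1 - 10^-k) repeated 10^k times.
blocks = [(1 - 10.0 ** -k) ** (10 ** k) for k in range(1, 7)]

overall = 1.0
for k, b in enumerate(blocks, start=1):
    overall *= b
    print(f"block {k}: {b:.6f}  running product: {overall:.3e}")
```

The block values climb toward $1/e \approx 0.367879$, while the running product keeps shrinking toward 0.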
https://math.stackexchange.com/questions/209108/infinite-product-problem-sum-p-n-infty-implies-prod-1-p-n0
Note that we don't care about the case $p_i = 0$, since this means we're guaranteed to fail anyhow.