Edit: the title was misleading; I didn't ask about a rational agent, but about what comes out of certain inputs to Bayes' theorem, so it has been changed to reflect that.
Eliezer and others have talked about how a Bayesian with a 100% prior cannot change their confidence level, whatever evidence they encounter, because it's like having infinite certainty. I am not sure if they meant that literally (is it really mathematically equal to infinity?), but I assumed they did.
I asked myself: well, what if they got evidence that was somehow also assigned 100% certainty, but against the hypothesis? Wouldn't that be enough to get them to change their mind? In other words -
If P(H) = 100%
And P(E|H) = 0%
then what does P(H|E) equal?
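Writing Bayes' theorem out with those numbers, just to make explicit where the trouble shows up:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{0 \cdot 1}{0 \cdot 1 + P(E \mid \neg H) \cdot 0} = \frac{0}{0}$$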
I thought: well, if both of those amount to infinities, what happens when you subtract one infinity from the other? The internet answered that it's indeterminate*, meaning (from what I understand) that it can be anything, and you have absolutely no way to know what exactly.
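To spell out where the infinities live: in log-odds form the update is

$$\log \frac{P(H \mid E)}{P(\neg H \mid E)} = \log \frac{P(H)}{P(\neg H)} + \log \frac{P(E \mid H)}{P(E \mid \neg H)}$$

and with P(H) = 1 the prior term is log(1/0), which blows up to +∞, while P(E|H) = 0 makes the evidence term a log of 0, which goes to −∞ (assuming P(E|¬H) isn't also 0). So the update really is an ∞ − ∞ situation.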
So I concluded that, if I understood everything correctly, such a situation would leave the Bayesian infinitely confused: in a state where he has no idea where he stands between 0% and 100%, and no amount of evidence in any direction can ground him anywhere.
Am I right? Or have I missed something entirely?
*I also found out about Riemann's rearrangement theorem, which, in a way, lets you rearrange certain infinite series so they sum to whatever you want. Damn, that's cool!
I see. So -
If P(H) = 1.0 - ϵ1
And P(E|H) = 0 + ϵ2
Then it equals "infinite confusion".
Am I correct?
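(Just to spell out what I'm plugging in, here is Bayes' theorem with those values, leaving P(E|¬H) as an unknown:)

$$P(H \mid E) = \frac{\epsilon_2\,(1 - \epsilon_1)}{\epsilon_2\,(1 - \epsilon_1) + P(E \mid \neg H)\,\epsilon_1}$$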
And also, when you use epsilons, does that mean you get out of the "dogma" of 100%? Or can you still not update down from it?
And what I did in my post may just be another example of why you don't put an actual 1.0 in your prior, because then, even if you get evidence of the same strength in the other direction, the update would demand that you divide zero by zero. Right?
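Here's a tiny numerical sketch of that last point (the `bayes_update` function is just something hypothetical I wrote for illustration, not anyone's actual code):

```python
# Minimal sketch: compute P(H|E) via Bayes' theorem, with P(E|not-H) supplied explicitly.
def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    numerator = likelihood_e_given_h * prior_h
    denominator = numerator + likelihood_e_given_not_h * (1.0 - prior_h)
    if denominator == 0.0:
        return float("nan")  # the 0/0 case: the posterior is simply undefined
    return numerator / denominator

# An actual 1.0 prior plus evidence that is impossible under H gives 0/0:
print(bayes_update(1.0, 0.0, 0.5))           # nan

# With epsilons instead, the update is well defined (here it comes out around 0.67,
# since the answer depends on the ratio of the epsilons and on P(E|not-H)):
eps1, eps2 = 1e-9, 1e-9
print(bayes_update(1.0 - eps1, eps2, 0.5))
```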