It seems that he makes the same mistake in that post (though he makes it clear in the rest of the essay that alternatives matter). You paraphrased him right.
Incidentally, Popper also thought that one couldn't falsify a theory unless there is a non-ad hoc alternative that explains the data better.
> If there is some evidence E that the assertion A can't explain, then the likelihood P(E|A) will be tiny. Thus, the numerator P(E|A)P(A) will also be tiny, and likewise the posterior probability P(A|E). Updating on the near impossibility of evidence E has driven the probability of the assertion A [...]
This isn't quite right. A tiny probability of the observation given the hypothesis does not imply that the posterior of the hypothesis will be low. Suppose there's a lottery with 10 million tickets, and we have very good reasons to believe the lottery is fair. Still, whoever the winner X is, P(X is the winner|The lottery is fair) = 1/10,000,000.

The reason P(The lottery is fair|X is the winner) is not low is that the alternative hypothesis "The lottery is not fair" also does a poor job of predicting the result (why rigged in favor of X specifically and not one of the other 9,999,999 people?), and the prior P(The lottery is not fair) is very low.

OK, but what about the hypothesis "The lottery is 100% rigged in favor of X"? The probability that X is the winner given this alternative is 1. But the prior on that hypothesis is basically zero, so it doesn't matter. (Things are different if we have independent reasons to think X is suspicious. Then the fact that X won is a good reason to suspect the lottery isn't fair.)
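To make the arithmetic concrete, here's a minimal sketch of the update, with illustrative priors I'm assuming for the sake of the example (0.999 on "fair", with the remaining 0.001 on "rigged" spread evenly over who it's rigged for):

```python
# Bayesian update for the lottery example: why P(fair | X won) stays high
# even though P(X won | fair) is tiny. All numbers are illustrative.
N = 10_000_000  # tickets

# Assumed priors: overwhelmingly likely the lottery is fair; a tiny
# chance it's rigged, with no prior reason to single out X.
p_fair = 0.999
p_rigged = 1 - p_fair              # rigged in favor of *someone*
p_rigged_for_x = p_rigged / N      # rigged specifically for X
p_rigged_other = p_rigged - p_rigged_for_x

# Likelihoods of the evidence "X won" under each hypothesis.
lik_fair = 1 / N        # fair draw: X is one of N equally likely winners
lik_rigged_for_x = 1.0  # rigged for X: X wins for sure
lik_rigged_other = 0.0  # rigged for someone else: X can't win

# Unnormalized posteriors P(H) * P(E|H), then normalize.
joints = [
    p_fair * lik_fair,
    p_rigged_for_x * lik_rigged_for_x,
    p_rigged_other * lik_rigged_other,
]
post_fair = joints[0] / sum(joints)
print(f"P(fair | X won) = {post_fair:.4f}")
```

Even though the "rigged for X" hypothesis predicts the evidence perfectly, its near-zero prior means the posterior on "fair" barely moves from 0.999.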
tl;dr: The posterior P(H1|E) is tiny iff P(H1)P(E|H1) is tiny relative to all other P(Hi)P(E|Hi).
There's a nice paper on this "informal fallacies as Bayesian reasoning" idea: https://ojs.uwindsor.ca/index.php/informal_logic/article/view/2132
(But that doesn't mean informal fallacies are always good arguments. It just means they can't be dismissed a priori; you have to analyze each argument individually.)