The category "prophecies Jesus is said to have fulfilled" is not a natural category inside the category "all prophecies ever made", or even inside "all prophecies in the Hebrew Old Testament". It's a subcategory that could not have been, and had not been, identified before Jesus. Nor does it include a majority of "all prophecies ever made by Jews", and so on. Nor does it include any prophecies of such high specificity that fulfilling even a few of them would constitute significant evidence.
And that's why the category carries no evidential weight, even if we assume he actually did everything Christians believe about him and allow arbitrary reinterpretation of prophecies to match his life.
And that category also includes a few made-up prophecies! I think particularly of the 'almah'/young-woman/virgin one, and the 'seamless robe'.
(Note: This is essentially a rehash/summarization of Jordan Sobel's Lotteries and Miracles - you may prefer the original.)
George Mavrodes offered an interesting analogy. Scenario 1: Suppose you read a newspaper report claiming that a particular individual (say, Henry Plushbottom of Topeka, Kansas) has won a very large lottery. Before reading the newspaper, you would have given quite low odds that Henry in particular had won the lottery. However, the newspaper report flips your beliefs quite drastically: afterward, you would give quite high odds that Henry in particular had won. Scenario 2: You have read various claims that a particular individual (Jesus of Nazareth) arose from the dead. Before hearing those claims, you would have given quite low odds of anything so unlikely happening. However (since you are reading LessWrong) you presumably do not now give quite high odds that Jesus arose from the dead.
What is it about the second scenario which makes it different from the first?
Let's model Scenario 1 as a simple Bayes net. There are two nodes, one representing whether Henry wins and one representing whether Henry is reported to have won, and one arrow, from the first to the second.
What are the parameters of the conditional probability tables? Before any information came in, it seemed very unlikely that Henry was the winner - perhaps he had a one in a million chance. Given that Henry did win, what is the chance that he would be reported to have won? Pretty likely - newspapers do err, but it's reasonable to believe that 9 times out of 10, they get the name of the lottery winner correct. Now suppose that Henry didn't win. What is the chance that he would be reported to have won by mistake? There's nothing in particular to single him out from the other non-winners - being misreported is just as unlikely as winning, maybe even more unlikely.
So we have (using w to abbreviate "Henry wins" and r to abbreviate "Henry is reported to have won"), for example:

P(w) = 10^-6, P(r|w) = 0.9, P(r|!w) = 10^-7

With a simple computation, we can verify that this model replicates the phenomenon in question. After reading the report, one's estimated probability should be:

P(w|r) = P(r|w)P(w) / [P(r|w)P(w) + P(r|!w)P(!w)] = (0.9 × 10^-6) / (0.9 × 10^-6 + 10^-7 × (1 − 10^-6)) ≈ 0.9
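A few lines of Python make the computation concrete. The specific parameter values are assumptions chosen to match the story above: a one-in-a-million prior, a nine-in-ten chance the paper names the winner correctly, and a false report being even rarer than a win.

```python
# Posterior P(w | r) via Bayes' theorem, with illustrative parameters.
p_w = 1e-6              # prior: Henry wins the lottery
p_r_given_w = 0.9       # the paper reports him, given that he won
p_r_given_not_w = 1e-7  # a false report naming Henry, given that he lost

posterior = (p_r_given_w * p_w) / (
    p_r_given_w * p_w + p_r_given_not_w * (1 - p_w)
)
print(f"P(w | r) = {posterior:.3f}")  # prints P(w | r) = 0.900
```

So a single, mildly reliable report is enough to flip a one-in-a-million prior into a near certainty, exactly because false reports naming Henry in particular are so rare.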
Of course, Scenario 2 could be modeled with two nodes and one arrow in exactly the same way. If it is rational to come to a different conclusion, then the parameters must be different. How would you justify setting the parameters differently in the second case?
Somewhat relatedly, Douglas Walton has an "argumentation scheme" for Argument from Witness Testimony. An argumentation scheme is (roughly) a useful pattern of "presumptive" reasoning - that is, uncertain reasoning. In general, the argumentation/defeasible reasoning/non-monotonic logic community seems strangely isolated from the Bayesian inference community, though nominally they're both associated with artificial intelligence. Despite how odd each approach seems from the other side, there is a possibility of cross-fertilization here. Here are the so-called "premises" of the scheme (from Argumentation Schemes, p. 310):
Here are the so-called "critical questions" associated with the argument from witness testimony:
As I understand it, argumentation schemes are something like inference rules for plausible reasoning, but the actual premises (including both the scheme's "premises" and its "critical questions") are treated differently. I have not yet been able to unpack Walton's description of how they ought to be treated differently into the language of single-agent reasoning. Usually argumentation theory is phrased in terms of dialogue between differing agents (for example, legal advocates), but it certainly can be applied to single-agent reasoning. For example, Pollock's OSCAR is based on defeasible reasoning.
(Spoiler)
Jordan Sobel's answer is that the key parameter behind the sudden flip is P(r|!w), the probability of observing a false report. In Scenario 1, a false report of Henry's having won is even less likely than Henry winning. Given that humans are known to self-deceive regarding things that seem miraculous and wonderful, you should not carry that parameter through the analogy unchanged. Small increases in P(r|!w) lead to large reductions in P(w|r). For example, if P(r|!w) were equal to P(w), the posterior probability that Henry won would drop below 0.5. If P(r|!w) were one in a hundred thousand, the posterior probability would drop below 0.1.
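Sobel's sensitivity claim is easy to check numerically. A short sketch, using the same assumed Scenario 1 parameters as before (one-in-a-million prior, 0.9 report reliability), sweeps P(r|!w) upward and watches the posterior collapse:

```python
def posterior(p_w, p_r_given_w, p_r_given_not_w):
    """P(w | r): probability Henry won given the report, by Bayes' theorem."""
    true_report = p_r_given_w * p_w
    false_report = p_r_given_not_w * (1 - p_w)
    return true_report / (true_report + false_report)

p_w, p_r_given_w = 1e-6, 0.9  # assumed prior and report reliability

for p_false in (1e-7, 1e-6, 1e-5):
    print(f"P(r|!w) = {p_false:.0e}  ->  P(w|r) = "
          f"{posterior(p_w, p_r_given_w, p_false):.3f}")
```

Raising P(r|!w) by a factor of ten takes the posterior from about 0.9 down to about 0.47; another factor of ten takes it below 0.1. A claim's evidential force depends less on how reliable reports of true events are than on how rare false reports of that particular event would be.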