Even for an ideal reasoner, successful retrospective predictions clearly do not play the same role as prospective predictions. The former must inevitably be part of locating the hypothesis, so they play a weaker role in confirming it. The Eliezer story you link to is about how the "traditional science" dictum against using retrospective predictions can be just reversed stupidity; but reversing young Eliezer's stupidity in the story one more time doesn't yield intelligence.
Edit: this comment has been downvoted, and in considering why that may be, I think there are ambiguities in both "ideal reasoner" and "play the same role". Yes, the value of evidence does not change depending on when a hypothesis was first articulated, so some limitless entity capable of simultaneously evaluating all possible hypotheses would not care. However, a perfectly rational but finite reasoner could reasonably consider some amount of old evidence to have been "used up" in selecting the hypothesis from an implicit background of alternative hypotheses, without having to enumerate all of those alternatives, and thus habitually avoid recounting a certain amount of retrospective evidence. Any "successful prediction" would presumably be made by a hypothesis that had already passed this threshold (otherwise it's just called a "lucky wild-ass guess"). I'm speaking in simple heuristic terms here, but this could be made more rigorous and numeric, up to and including a superhuman level I'd consider "ideal".
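One way to see the "used up" heuristic numerically: once old evidence has already done its work in raising a hypothesis to attention, recounting that same evidence as fresh confirmation amounts to applying Bayes' rule to one datum twice. A minimal sketch (all numbers are illustrative assumptions, not taken from the comment above):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' theorem, for a binary H vs not-H."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.01              # illustrative P(H) before any evidence
p_e_h, p_e_nh = 0.9, 0.1  # illustrative likelihoods of the old evidence E

# Legitimate: E is counted once, as part of locating the hypothesis.
posterior = bayes_update(prior, p_e_h, p_e_nh)

# Illegitimate: treating the very same E as a second, independent confirmation.
double_counted = bayes_update(posterior, p_e_h, p_e_nh)

print(round(posterior, 4))       # 0.0833 — correct credence after E
print(round(double_counted, 4))  # 0.45   — inflated credence from recounting E
```

The gap between the two numbers is the sense in which a finite reasoner who declines to recount retrospective evidence is approximating, not violating, the Bayesian ideal.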
Recently, I completed my first systematic read-through of the sequences. One of the biggest effects this had on me was to considerably warm my attitude towards Bayesianism. Not long ago, if you'd asked me my opinion of Bayesianism, I'd probably have said something like, "Bayes' theorem is all well and good when you know what numbers to plug in, but all too often you don't."
Now I realize that that objection is based on a misunderstanding of Bayesianism, or at least Bayesianism-as-advocated-by-Eliezer-Yudkowsky. "When (Not) To Use Probabilities" is all about this issue, but a cleaner expression of Eliezer's true view may be this quote from "Beautiful Probability":
The practical upshot of seeing Bayesianism as an ideal to be approximated, I think, is this: you should avoid engaging in any reasoning that's demonstrably nonsensical in Bayesian terms. Furthermore, Bayesian reasoning can be fruitfully mined for heuristics that are useful in the real world. That's an idea that actually has real-world applications for human beings, hence the title of this post, "Bayesianism for Humans."
Here's my attempt at an initial list of more directly applicable corollaries to Bayesianism. Many of these corollaries are non-obvious, yet eminently sensible once you think about them, which I think makes for a far better argument for Bayesianism than Dutch Book-type arguments with little real-world relevance. Most (but not all) of the links are to posts within the sequences, which will hopefully allow this post to double as a decent introductory guide to the parts of the sequences that explain Bayesianism.