- This thread has run its course. You will find newer threads in the discussion section.
Another discussion thread - the fourth - has reached the (arbitrary?) 500-comment threshold, so it's time for a new thread for Eliezer Yudkowsky's widely praised Harry Potter fanfic.
Most of the paratext and fan-made resources are listed on Eliezer Yudkowsky's author page. There is also AdeleneDawner's collection of most of the previously published Author's Notes.
Older threads: one, two, three, four. By tag.
Newer threads are in the Discussion section, starting from Part 6.
Spoiler policy as suggested by Unnamed and approved by Eliezer, me, and at least three other upmodders:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
It would also be quite sensible and welcome to continue the practice of declaring at the top of your post which chapters you are about to discuss, especially for newly-published ones, so that people who haven't yet seen them can stop reading in time.
My short take: your decision algorithm, the one that outputs saving or not saving the murderer, is instantiated multiple times. Anyone who tries to predict your output also runs a more or less precise simulation of your algorithm. Suppose a perfect predictor made its prediction in the past. In that case, whatever your decision turns out to be, the past prediction already matches it.
So you can reason this way: "Although I don't know my final decision yet, I know that it correlates perfectly with the prediction. Therefore I also have to consider the consequences and resulting utilities of the prediction when making my decision. Shouldn't I just act as if I were controlling the outputs of both my current algorithm and the predictor's, weighing the utilities together? I should output the decision that maximizes utility over present and past, because the past prediction mirrors the current me perfectly."
And if there are imperfect predictors involved (or algorithms with imperfectly correlated outputs), you reason as if you had imperfect control over their outputs. As far as I understand it, this is TDT. Note the interesting self-referentiality: the TDT algorithm computes the expected utility of its own "possible" outputs, and then outputs the one with maximum expected utility.
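The imperfect-control case can be sketched the same way. The payoffs and the accuracy p are assumptions of mine for illustration: the predictor matches your actual decision with probability p, so each option is scored by an expectation over the predictor's possible outputs rather than by the diagonal entry alone.

```python
# Sketch with made-up payoffs: an imperfect predictor matches your decision
# with probability p, giving you only "imperfect control" over its output.
PAYOFFS = {
    ("save", "save"): 10,
    ("save", "no_save"): -5,
    ("no_save", "save"): 0,
    ("no_save", "no_save"): 2,
}

def expected_utility(decision, p):
    """Expectation over the predictor's output, correlated with the decision."""
    other = "no_save" if decision == "save" else "save"
    return p * PAYOFFS[(decision, decision)] + (1 - p) * PAYOFFS[(decision, other)]

def tdt_decide(options, p):
    # Self-referential step: the algorithm scores each of its own "possible"
    # outputs, then outputs the one with maximum expected utility.
    return max(options, key=lambda d: expected_utility(d, p))

print(tdt_decide(["save", "no_save"], p=0.9))  # "save": 8.5 beats 1.8
```

At p = 1 this reduces to the perfect-predictor rule; as p drops toward chance, the correlation term washes out and the options are scored almost independently of the prediction.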