- This thread has run its course. You will find newer threads in the discussion section.
Another discussion thread - the fourth - has reached the (arbitrary?) 500-comment threshold, so it's time for a new thread for Eliezer Yudkowsky's widely praised Harry Potter fanfic.
Most of the paratext and fan-made resources are listed on Mr. LessWrong's author page. There is also AdeleneDawner's collection of most of the previously-published Author's Notes.
Older threads: one, two, three, four. By tag.
Newer threads are in the Discussion section, starting from Part 6.
Spoiler policy as suggested by Unnamed and approved by Eliezer, me, and at least three other upmodders:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have since been retracted).
If there is evidence for X in HP:MoR and/or canon, then it's fine to post about X without rot13, even if you have also heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
It would also be sensible and welcome to continue the practice of declaring at the top of your post which chapters you are about to discuss, especially newly published ones, so that people who haven't yet read them can stop reading in time.
Nominally, decision theory is all about giving good advice to people who make decisions.
Now I am willing to entertain the idea that the free will of the decision maker is an illusion.
But my 'willing suspension of disbelief' goes all to hell when I ask myself: "Why does an illusion need good advice?"
Decision theory absolutely requires an assumption that the will is free in some sense. However, it does seem reasonable to consider the possibility that free decision making can be spread out in time.
Traditional game theory assumes that an agent freely chooses a set of preferences over states-of-the-world well in advance. Then, at decision time, the agent chooses an action so as to maximize the probability of reaching a desirable state of the world. Classical game theory and decision theory offer advice on that second free decision, but they don't advise on those earlier free decisions which created the preference schedule. Perhaps they should.
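A minimal sketch of the two-stage picture described above, with entirely hypothetical states, actions, and numbers: preferences are fixed first as a utility function over world-states, and only at decision time does classical decision theory step in to pick the action that maximizes expected utility under the agent's beliefs.

```python
# Stage 1 (settled "well in advance"): preferences over states-of-the-world.
# These values are illustrative assumptions, not anything from the thread.
utility = {"peace": 10.0, "stalemate": 3.0, "war": -20.0}

# The agent's beliefs: P(state | action), again hypothetical numbers.
beliefs = {
    "cooperate": {"peace": 0.7, "stalemate": 0.2, "war": 0.1},
    "defect":    {"peace": 0.1, "stalemate": 0.5, "war": 0.4},
}

def expected_utility(action):
    """Expected utility of an action under the fixed preference schedule."""
    return sum(p * utility[state] for state, p in beliefs[action].items())

# Stage 2 (decision time): the only step classical decision theory advises on.
best_action = max(beliefs, key=expected_utility)
```

The point of the sketch is that the `utility` table is simply handed to stage 2 as given; nothing in the formalism says anything about how it should have been chosen.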
Or, perhaps we need an additional theory, over and above game theory and decision theory, which will advise agents on how to set their preferences so as to take into account some of the side effects of those preferences. What do we call this new kind of normative theory? 'Moral theory', perhaps?
Um. Let me taboo some words ("free will," "prediction", "decision") here and try again.
Let us suppose that at time T1 someone either commits murder (event E1a) or doesn't (E1b), and at T2 I either spare the murderer (E2a) or don't (E2b). (I don't mean to suggest here that all combinations are possible.)
The original scenario seemed to presuppose that at T1 there is a fact of the matter about whether, given E1a, T2 contains E2a or E2b, and that some potential murderers are able to use that fact in their reasoning.