Update: Please post new comments in the latest HPMOR discussion thread, now in the discussion section, since this thread and its first few successors have grown unwieldy (direct links: two, three, four, five, six, seven).
As many of you already know, Eliezer Yudkowsky is writing a Harry Potter fanfic, Harry Potter and the Methods of Rationality, starring a rationalist Harry Potter with ambitions to transform the world by bringing the rationalist/scientific method to magic. But of course a more powerful Potter requires a more challenging wizarding world, and ... well, you can see for yourself how that plays out.
This thread is for discussion of anything related to the story, including insights, confusions, questions, speculation, jokes, discussion of rationality issues raised in the story, attempts at fanfic spinoffs, comments about related fanfictions, and meta-discussion about the fact that Eliezer Yudkowsky is writing Harry Potter fan-fiction (presumably as a means of raising the sanity waterline).
I'm making this a top-level post to create a centralized location for that discussion, since I'm guessing people have things to say (I know I do) and there isn't a great place to put them. fanfiction.net has a different set of users (plus no threading or karma); the main discussion here has been in an old open thread, which has petered out and is already near the unwieldy size that would call for a top-level post; and we've had discussions come up in a few other places. So let's have that discussion here.
Comments here will obviously be full of spoilers, and I don't think it makes sense to rot13 the whole thread, so consider this a spoiler warning: this thread contains unrot13'd spoilers for Harry Potter and the Methods of Rationality up to the current chapter and for the original Harry Potter series. Please continue to use rot13 for spoilers to other works of fiction, or if you have insider knowledge of future chapters of Harry Potter and the Methods of Rationality.
A suggestion: mention at the top of your comment which chapter you're commenting on, or what chapter you're up to, so that people can understand the context of your comment even after more chapters have been posted. This can also help people avoid reading spoilers for a new chapter before they realize that there is a new chapter.
Definitely weird. A related consideration is that I would always give reasons for any advice I give my former self. That cuts off a large swath of potential stable loops that consist of me giving myself advice for absolutely no reason at all except that it happens to be stable. The better the reasons I have given myself, the less likely it is that the self-perpetuating cycle is a completely arbitrary one.
For example, I wouldn't have sent back "Don't mess with time". I would have sent "The universe doesn't particularly care about your rules and plans, you arrogant little git! What's more likely: guessing your way through 128-bit encryption, or something seriously nasty that distracts you from your games, such as ? That's right. Think." (Yes, I'd include the 'arrogant git' part. That is information I would clearly need to be reminded of!)
Now, not all scary situations give me the chance to write an explanation, but a large swath of the probability mass does. While I would still follow the hastily written directive, I would also know that for me to write that particular message, something really bad must be happening. Without a predetermined policy of giving details, I would have no idea whether the message meant something bad had almost happened or not. (It also means that I am far less likely to get such a message - I'll probably get one of the many possible detailed messages instead.)
The problem is that you aren't the source of the advice; you are one of the constraints to be satisfied. Any message that you will reproduce with picometer precision, and that creates a stable state, will do. Precision isn't a problem in a deterministic world, and maybe not in a quantum one either (if our neurons are sufficiently classical), but I'm hesitant to estimate the influence of one's preferences on which stable state gets selected.
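To make the "constraint to be satisfied" framing concrete, here's a toy Python sketch. Everything in it is my own illustration, not anything from the story: `respond` is a made-up deterministic model of how you react to a note from your future self, and the sample messages are invented. The point is just that a stable loop is any fixed point of the received-message-to-sent-message map, and many arbitrary, uninformative messages can satisfy that constraint alongside the one you would actually prefer to send.

```python
# Toy sketch: a stable time loop as a fixed point of respond().
# `respond` is a hypothetical model of your behavior, chosen only for illustration.

def respond(received: str) -> str:
    """Made-up model: you copy any shouted (all-caps) warning back verbatim out of
    caution; otherwise you send the detailed, reasoned note you would prefer."""
    if received and received == received.upper():
        return received                      # comply and reproduce the warning exactly
    return "Reasons attached: ..."           # your preferred informative message

def is_stable(msg: str) -> bool:
    """A loop is self-consistent exactly when the message reproduces itself."""
    return respond(msg) == msg

# Many messages satisfy the constraint, not just the one you would choose:
print(is_stable("DO NOT MESS WITH TIME"))     # True  - arbitrary but stable
print(is_stable("BEWARE THE IDES OF MARCH"))  # True  - also stable, also uninformative
print(is_stable("Reasons attached: ..."))     # True  - the loop you'd prefer
print(is_stable("please elaborate"))          # False - you wouldn't send this back
```

Under this (invented) model there are infinitely many fixed points, and nothing in the consistency condition itself privileges the informative one - which is the worry in the comment above about how much your preferences actually influence which stable state obtains.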