After having read this chapter, I now believe that Eliezer intended Dumbledore to represent a failed attempt at the creation of Friendly AI.
Eliezer would never portray a failed creation of an FAI as someone so impotent and comparatively benign.
Maybe he wouldn't, but that is a fact about him, not about AI. There's a narrow slice of concept space that includes uFAI that is almost benign. Not that I think it's likely that we could intentionally build such an entity. And we shouldn't want to, for basically the same reasons that we shouldn't want to build uFAI generally.
The new discussion thread (part 15) is here.
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 82. The previous thread had passed 1000 comments as of this writing, well beyond the usual 500-comment mark. Comment in the 13th thread until you have read chapter 82.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 onward (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13.
As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically: