Surely not, if the story is nearer its end than its beginning, given its pacing so far. Given Eliezer's beliefs about FAI, and that the story is not supposed to lie, Harry attempting to create a godlike AI without years of careful research should result in a Bad End.
I admit I had not considered that when making my ridiculous proposal.
However, EY has suggested that both a good end and a bad end are already written. The bad end of Three Worlds Collide was "happily ever after," so my ridiculous proposal is no less valid for failing to meet your entirely reasonable criterion for a good end, namely that it not teach a poor moral.
The new discussion thread (part 15) is here.
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 82. The previous thread passed 1000 comments as of the time of this writing, and so has long passed the 500-comment mark. Comment in the 13th thread until you have read chapter 82.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the Author’s Notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The Author’s Notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13.
As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically: