But it does not serve as a solution to say, for example, "Harry should persuade Voldemort to let him out of the box" if you can't yourself figure out how.
It's a shame that nobody's pursuing this line of thought. It would be cool to see a full, successful AI-Box experiment written up as fanfiction.
(I'd do it myself, but my previous attempts at such have been.... eheh. Less than successful.)
Actually, this isn't anywhere near as hard as the AI-Box problem. Harry can honestly say he is the best option for eliminating the unfriendly AGI / Atlantis problem: 1) Harry just swore the oath that binds him, 2) Harry understands modern science and its associated risks, 3) Harry is 'good', 4) technological advancement will certainly result in either AGI or the Atlantis problem (probably sooner rather than later), and 5) Voldemort is already worried that the prophecy is immutable, so killing Harry at this stage means the stars still get ripped apart, just without any of the ways in which Harry's involvement could make the result 'good'.
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 113.
There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76 and is no longer updating; the author's notes from chapter 77 onwards are on hpmor.com.)
IMPORTANT -- From the end of chapter 113: