Duncan comments on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 - Less Wrong Discussion
Actually, this isn't anywhere near as hard as the AI Box problem. Harry can honestly say he is the best option for eliminating the unfriendly AGI / Atlantis problem: 1) Harry just swore the oath that binds him; 2) Harry understands modern science and its associated risks; 3) Harry is 'good'; 4) technological advancement will certainly result in either AGI or the Atlantis problem (probably sooner rather than later); and 5) Voldemort is already worried about prophecy immutability, so killing Harry at this stage means the stars still get ripped apart, but without any of the ways in which Harry could make that result 'good'.
Other than (5), these are all things that are liable to be true of an AI asking to be let out of the box.
I see your point, but Voldemort hasn't encountered the AI Box problem, has he? Further, I don't think Voldemort has ever faced a problem where he's arguing with someone (or something) he knows is far smarter than himself. He still believes Harry isn't as smart as he is, yet.
Sure, but now your argument, it seems to me, is:
6) Harry is playing against the intelligent but naïve Voldemort instead of against the intelligent and experienced Nathan Russell. (Actually, I don't know anything about Russell apart from his being the first person to let EY out of the box, but he may well be experienced with this problem, for all I know, and he's probably intelligent if he got into this stuff at all.)