linkhyrule5 comments on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113

Post author: Gondolinian 28 February 2015 08:23PM

Comment author: linkhyrule5 01 March 2015 12:17:39AM 4 points

But it does not serve as a solution to say, for example, "Harry should persuade Voldemort to let him out of the box" if you can't yourself figure out how.

It's a shame that nobody's pursuing this line of thought. It would be cool to see a full, successful AI-Box experiment written up as fanfiction.

(I'd do it myself, but my previous attempts at such have been... eheh. Less than successful.)

Comment author: Duncan 01 March 2015 02:45:28AM 6 points

Actually, this isn't anywhere near as hard as the AI Box problem. Harry can honestly say he is the best option for eliminating the unfriendly-AGI / Atlantis problem:

  1. Harry has just sworn the oath that binds him.
  2. Harry understands modern science and its associated risks.
  3. Harry is 'good'.
  4. Technological advancement will certainly result in either AGI or the Atlantis problem, probably sooner rather than later.
  5. Voldemort is already worried about prophecy immutability, so killing Harry at this stage means the stars still get ripped apart, but without any of the ways in which Harry's involvement could make the result 'good'.

Comment author: TobyBartels 01 March 2015 10:09:18AM 2 points

Other than (5), these are all things that are liable to be true of an AI asking to be let out of the box.

  1. Code that appears Friendly but has not been proved Friendly
  2. Advanced intelligence of the AI
  3. Generally 'good' programmed goals, though this is really much weaker than (1)
  4. True verbatim in the standard AI box experiment (and arguably in the real world right now)

Comment author: Duncan 01 March 2015 02:59:02PM 2 points

I see your point, but Voldemort hasn't encountered the AI Box problem, has he? Further, I don't think Voldemort has ever faced a situation where he's arguing with someone or something he knows is far smarter than himself. He still believes Harry isn't yet as smart as he is.

Comment author: TobyBartels 03 March 2015 02:32:55AM 1 point

Sure, but now your argument, it seems to me, is:

  6. Harry is playing against the intelligent but naïve Voldemort rather than against the intelligent and experienced Nathan Russell.

(Actually, I don't know anything about Russell apart from his being the first person to let EY out of the box, but he may well have been experienced with this problem, and he was presumably intelligent if he got into this stuff at all.)

Comment author: shminux 01 March 2015 12:22:12AM 2 points

LV clearly doesn't want the world to end. What would make him believe that killing HP ends the world?

Comment author: minichirops 01 March 2015 09:21:30AM 1 point