
orthonormal comments on Yet another safe oracle AI proposal - Less Wrong Discussion

Post author: jacobt 26 February 2012 11:45PM



Comment author: orthonormal 27 February 2012 04:28:37AM, 6 points

Style suggestion: give an informal overview of the idea, like your original comment, before going into the details. New readers need to see the basic idea before they'll be willing to wade into code.

Content suggestion: The main reason I find your idea intriguing is something you barely mention above: because you're giving the AI an optimization target that cares only about its immediate progeny, it won't start cooperating with its later descendants (which would pretty clearly lead to it un-boxing itself), nor upgrade to a decision theory that would cooperate further down the line. That part deserves more discussion.

Comment author: jacobt 27 February 2012 05:12:19AM, 2 points

Thanks, I've added a small overview section. I might edit this a little more later.