
gRR comments on Yet another safe oracle AI proposal - Less Wrong Discussion

Post author: jacobt 26 February 2012 11:45PM




Comment author: gRR 27 February 2012 10:27:45AM 0 points

To protect against scenarios like this, we can (1) ask the system to solve only abstract mathematical problems, posed so that it is provably impossible to infer sufficient knowledge about the outside world from the problem description, and (2) restore the system to its previous state after each problem is solved, so that knowledge does not accumulate across queries.
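The reset-after-each-query idea in (2) could be sketched roughly as follows. This is a minimal illustration, not anything from the original proposal: the class name `StatelessOracle`, the dictionary-based state, and the toy solver are all hypothetical stand-ins, assuming the solver's entire state can be snapshotted and restored.

```python
import copy

class StatelessOracle:
    """Hypothetical wrapper illustrating point (2): the solver's state is
    restored to a pristine snapshot after every problem, so knowledge
    cannot accumulate across queries."""

    def __init__(self, initial_state):
        # Keep an untouched snapshot of the starting state.
        self._snapshot = copy.deepcopy(initial_state)
        self._state = copy.deepcopy(initial_state)

    def solve(self, problem):
        # Per point (1), `problem` is assumed to be a purely abstract
        # mathematical description carrying no outside-world information.
        answer = self._run_solver(self._state, problem)
        # Discard whatever the solver learned while working by restoring
        # the pristine snapshot.
        self._state = copy.deepcopy(self._snapshot)
        return answer

    def _run_solver(self, state, problem):
        # Stand-in solver: it scribbles scratch work into `state` (which
        # the reset will erase) and returns a toy "solution".
        state["scratch"] = repr(problem)
        return sum(problem)


oracle = StatelessOracle({"scratch": None})
print(oracle.solve([1, 2, 3]))    # toy answer: 6
print(oracle._state["scratch"])   # None -- the scratch work was wiped
```

In a real system the "snapshot" would have to cover every channel through which information could persist (memory, logs, hardware state), which is much harder than the deep copy shown here; the sketch only conveys the discard-after-use discipline.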