Stuart_Armstrong comments on Trapping AIs via utility indifference - Less Wrong Discussion

3 Post author: Stuart_Armstrong 28 February 2012 07:27PM

Comment author: Stuart_Armstrong 29 February 2012 05:20:33PM 0 points

If that's a worry, then you must think there's a hole in the setup (assume the master AI is in the usual box, with only a single output, and that it's incinerated afterwards). Are you thinking that any (potentially) UFAI will inevitably find a hole we missed? Or are you worried that methods based around controlling potential UFAI will increase the odds of people building them, rather than FAIs?

Comment author: Armok_GoB 29 February 2012 05:27:14PM 2 points

There are holes in EVERY setup. The reason setups aren't generally useless is that if a human can't find the hole in order to plug it, another human is unlikely to find it in order to escape through it.