Stuart_Armstrong comments on Trapping AIs via utility indifference - Less Wrong

Post author: Stuart_Armstrong 28 February 2012 07:27PM


Comment author: Stuart_Armstrong 29 February 2012 01:41:48PM * 2 points

A pretty reasonable analogy (using lots of negative connotations and terms, though). What specifically is it that you find horrible about the idea?

Comment author: Armok_GoB 29 February 2012 02:50:36PM 4 points

Creating UFAI.

Comment author: Stuart_Armstrong 29 February 2012 05:20:33PM 0 points

If that's a worry, then you must think there's a hole in the setup (assume the master AI is in the usual box, with only a single output, and that it's incinerated afterwards). Are you thinking that any (potentially) UFAI will inevitably find a hole we missed? Or are you worried that methods based around controlling potential UFAI will increase the odds of people building them, rather than FAIs?

Comment author: Armok_GoB 29 February 2012 05:27:14PM 2 points

There are holes in EVERY setup. The reason setups aren't generally useless is that if a human can't find the hole in order to plug it, another human is not likely to find it in order to escape through it.

Comment author: Incorrect 29 February 2012 07:47:26PM 0 points

The AI still has a motive to escape in order to prepare to optimize its sliver. It doesn't necessarily need us; escaping would let it optimize its sliver faster.

Comment author: Stuart_Armstrong 01 March 2012 02:23:44PM 0 points

What does this translate to in terms of the initial setup, rather than the analogous one?