
VoiceOfRa comments on The subagent problem is really hard - Less Wrong Discussion

5 Post author: Stuart_Armstrong 18 September 2015 01:06PM


Comments (9)

Comment author: VoiceOfRa 18 September 2015 11:53:38PM 1 point

Well, one thing to keep in mind is that non-superintelligent subagents are a lot less dangerous without their controller.

Comment author: Stuart_Armstrong 21 September 2015 09:22:06AM 2 points

Why would they be non-superintelligent? And why would they need a controller? If the AI is under some sort of restriction, its most effective move would be to create a superintelligent being with the same motives as itself, but without the restrictions.

Comment author: VoiceOfRa 21 September 2015 11:15:32PM 0 points

Well, banning the creation of other superintelligences seems easier than banning the creation of any subagents at all.

Comment author: Stuart_Armstrong 22 September 2015 09:48:47AM 0 points

How? (And are you talking in terms of motivational restrictions, which I don't see working at all, or physical restrictions, which seem more plausible?)