
Stuart_Armstrong comments on The subagent problem is really hard - Less Wrong Discussion

5 points · Post author: Stuart_Armstrong · 18 September 2015 01:06PM



Comment author: Stuart_Armstrong 21 September 2015 09:24:25AM 2 points

We'd be considerably better at subagent creation if we could copy our brains and modify them at will...

Comment author: Houshalter 21 September 2015 09:45:48AM 0 points

Well, it's not impossible to restrict AIs from accessing their own source code, especially if they are implemented in specialized hardware, as we are.

Comment author: Stuart_Armstrong 21 September 2015 10:42:17AM 2 points

It's not impossible, no. But it's another failure point. The AI might also deduce things about itself by observing how it's run. And a world that has built an AI is a world where plenty of tools for building AIs will be lying around...