
Stuart_Armstrong comments on Tackling the subagent problem: preliminary analysis - Less Wrong Discussion

Post author: Stuart_Armstrong 12 January 2016 12:26PM



Comment author: Stuart_Armstrong 13 January 2016 10:39:54AM 1 point

There are some informal suggestions (which I don't think much of, so I didn't go into deep analysis) that use a sense of identity as the basis for controlling subagents. I didn't want to get into the weeds of that in this post.

Comment author: Gunnar_Zarncke 13 January 2016 11:43:27AM 0 points

Yes. Some notion of identity is needed for the AI in any case. It has to encompass its executive functions at least. Identity distinguishes the AI from what is not the AI, and I see no reason why this couldn't include subagents. It is more a question of where the line is drawn, not whether it is drawn. I'm looking forward to a future post of yours on identity.