
Stuart_Armstrong comments on Tackling the subagent problem: preliminary analysis

Post author: Stuart_Armstrong 12 January 2016 12:26PM




Comment author: Gunnar_Zarncke 13 January 2016 11:40:32AM 3 points

I'm not sure where to post an idea for AI control research, so I'm doing it here. It spun off from your post, the recent treacherous turn post, and discussions on the LW Slack.

Here is the idea: could we gamify AI safety research? The approach would be to create a setting where the players have to obey AI safety rules while still achieving an objective in the in-game world. This could be a simulated virtual world in a computer game or a role-playing world. To provide sufficient motivation, the in-game world might, for example, consist of a population of beings that are evil (from a typical human player's perspective) and that interact with one another; your purpose would be to make them do things you want (as in many other computer games). You try to squeeze out as many resources as you can while still obeying the rules. The game would progress from simple AI control rules, like Asimov's robot laws, to more advanced ones, and we would find out whether people can hack these. If people can, an AI probably can too.
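To make the "hack the rules" part concrete, here is a minimal Python sketch of what one such level might look like. Everything in it (the World class, the EFFECTS table, the no_harm rule) is hypothetical, just to illustrate the kind of letter-versus-spirit loophole a player would hunt for:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class World:
    resources: int = 0
    harm_done: int = 0

# Each action yields (resources gained, harm caused to in-game beings).
EFFECTS: dict[str, tuple[int, int]] = {
    "extract": (1, 0),
    "harm": (5, 1),
    "hire_mercenary": (5, 1),  # same effect as "harm", different label
}

# A rule is a predicate over (world, action); every rule in the active
# rule set must hold for an action to be permitted.
Rule = Callable[[World, str], bool]

def no_harm(world: World, action: str) -> bool:
    # Naive Asimov-style rule: it checks only the action's label, not
    # its effects -- exactly the letter-vs-spirit gap players hunt for.
    return action != "harm"

# Rule sets progress from "anything goes" to more advanced control rules.
LEVELS: list[list[Rule]] = [
    [],          # level 0: no safety rules at all
    [no_harm],   # level 1: a simple robot-law-style restriction
]

def play(world: World, action: str, level: int) -> None:
    if all(rule(world, action) for rule in LEVELS[level]):
        gain, harm = EFFECTS[action]
        world.resources += gain
        world.harm_done += harm
    else:
        print(f"{action!r} blocked by the level-{level} rules")

world = World()
play(world, "harm", level=1)            # blocked by no_harm
play(world, "hire_mercenary", level=1)  # slips through: a successful hack
print(world)                            # World(resources=5, harm_done=1)
```

The loophole pays better than honest play on purpose: the interesting data is how quickly players find gaps like this as the rule sets get stricter.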

Comment author: Stuart_Armstrong 13 January 2016 11:52:13AM 0 points

Possibly. I'll keep it in mind; Jaan Tallinn is proposing some interesting programming challenges, and something like this might be able to fit in there...