
gjm comments on Tackling the subagent problem: preliminary analysis - Less Wrong Discussion

Post author: Stuart_Armstrong 12 January 2016 12:26PM


Comments (16)


Comment author: gjm 13 January 2016 04:35:41PM 1 point

There are already hacking games of this sort (the usual term is "CTF", for "capture the flag") but they don't capture any of what's alleged to be different about AI safety compared with computer security more generally.

Comment author: Lumifer 13 January 2016 04:40:26PM 0 points

True. I suspect gamification of AI safety research might be fun but is unlikely to be actually useful.

Comment author: Gunnar_Zarncke 13 January 2016 08:48:58PM 1 point

I think that in the absence of actual AI, using humans is the best approximation you can get. And games with in-game rewards seem to work well as a motivator. Men die for points.

But yes, putting this to real use (though we may need all we can get) may require some more work.

Comment author: Lumifer 13 January 2016 09:38:20PM * 1 point

in the absence of actual AI, using humans is the best approximation you can get

Humanity has been practicing controlling and restraining humans (and, vice versa, humans have been practicing escaping and subverting control) for thousands of years.

And games with in-game rewards seem to work well as a motivator.

Real life provides better motivation. No save points, y'know :-/

Comment author: Gunnar_Zarncke 13 January 2016 10:37:45PM 0 points

Only, real life is not structured in a way that makes AI safety research natural for humans...