Mark_Friedenbach comments on MIRI strategy - Less Wrong

5 Post author: ColonelMustard 28 October 2013 03:33PM




Comment author: [deleted] 30 October 2013 12:33:57AM *  3 points [-]

In other words, all AGI researchers are already well aware of this problem and take precautions according to their best understanding?

s/all/most/ - you will never get them all. But yes, that's an accurate statement. Friendliness is taught in university artificial intelligence classes, and gets a mention in most recent AI books I've seen. Pull up the AGI conference proceedings and search for "friendly" or "safe" - you'll find a couple of invited talks and presented papers each year. Many project roadmaps include significant human oversight of the developing AGI, and/or boxing mechanisms, for the purpose of ensuring friendliness.