
Artaxerxes comments on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda - Less Wrong Discussion

Post author: RobbBB, 26 November 2014 11:02AM




Comment author: Artaxerxes, 04 December 2014 10:28:41AM

The 3rd edition of Artificial Intelligence: A Modern Approach, which came out in 2009, explains the intelligence explosion concept, cites Yudkowsky's 2008 paper "Artificial Intelligence as a Positive and Negative Factor in Global Risk," and specifically discusses Friendly AI and the challenges involved in creating it.

So Russell has more or less agreed with MIRI on a lot of the key issues for quite some time now.