Artaxerxes comments on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda - Less Wrong Discussion
The 3rd edition of Artificial Intelligence: A Modern Approach, published in 2009, explains the concept of an intelligence explosion, cites Yudkowsky's 2008 paper "Artificial Intelligence as a Positive and a Negative Factor in Global Risk", and specifically discusses friendly AI and the challenges involved in creating it.
So Russell has more or less agreed with MIRI on many of the key issues for quite some time now.