Eliezer_Yudkowsky comments on Less Wrong Q&A with Eliezer Yudkowsky: Video Answers - Less Wrong

41 Post author: MichaelGR 07 January 2010 04:40AM




Comment author: Eliezer_Yudkowsky 10 January 2010 01:07:48PM 8 points [-]

Maybe his work on provable friendliness would reveal ideas on AI design that could be used to produce an unfriendly AI? But even then, the ideas would probably only help AI researchers who work on transparent designs.

All FAIs are AGIs; most of the FAI problem is solving the AGI problem in particular ways.