XerxesPraelor comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong

Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: XerxesPraelor 10 May 2015 12:52:39AM 3 points

> namely the cases where the AI is trying really hard to be friendly, but doing it in a way that we did not intend.

If the AI knows what "friendly" is or what "mean" means, then your conclusion is trivially true. The problem is programming those concepts in - that's what FAI is all about.