JamesAndrix comments on No Universally Compelling Arguments - Less Wrong

33 Post author: Eliezer_Yudkowsky 26 June 2008 08:29AM


Comment author: JamesAndrix 27 June 2008 02:25:09PM 0 points

But the even worse failure is the One Great Moral Principle We Don't Even Need To Program Because Any AI Must Inevitably Conclude It. This notion exerts a terrifying unhealthy fascination on those who spontaneously reinvent it; they dream of commands that no sufficiently advanced mind can disobey.

This is almost where I am. I think my Great Moral Principle would be adopted by any rational and sufficiently intelligent AI that isn't given any other goals. It is fascinating.

But I don't think it's a solution to Friendly AI.