Stephen_Weeks comments on Qualitative Strategies of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky, 30 August 2008 02:12AM



Comment author: Stephen_Weeks 30 August 2008 07:23:15AM 6 points

Ian: the issue isn't whether it could determine what humans want, but whether it would care. That's what Eliezer was talking about with the "difference between chess pieces on white squares and chess pieces on black squares" analogy. There are infinitely many computable quantities that don't affect your utility function at all. The important job in FAI is determining how to create an intelligence that will care about the things we care about.

It's certainly necessary for such an intelligence to be able to compute what humans want, but that alone is not sufficient.
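The compute-versus-care distinction can be sketched in a few lines. This is a toy illustration only, and every name in it (the scoring functions, the state keys) is hypothetical, invented for this example:

```python
# Toy illustration: an agent can compute a quantity without that quantity
# influencing its choices, because its utility function never references it.

def human_preference_score(state):
    # The agent *can* compute what humans would prefer about a state...
    return state.get("human_wellbeing", 0)

def paperclip_utility(state):
    # ...but its utility function only counts paperclips, so the computable
    # quantity above has no effect on its decisions.
    return state.get("paperclips", 0)

def choose(actions, utility):
    # Pick the action whose resulting state maximizes the given utility.
    return max(actions, key=utility)

actions = [
    {"paperclips": 10, "human_wellbeing": -5},
    {"paperclips": 1, "human_wellbeing": 100},
]

# The agent computes human preferences without error, yet ignores them:
best = choose(actions, paperclip_utility)
```

Here `best` is the high-paperclip, low-wellbeing state: the ability to evaluate `human_preference_score` changes nothing, because caring about it was never wired into the utility function.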