hairyfigment comments on What should a friendly AI do, in this situation? - Less Wrong
I don't understand this part. If the AI wants something from the programmers, such as information about their values that it can extrapolate, won't it always be committing "optimization within a prediction involving programmer reactions"? How would one distinguish this case without an adult FAI already in hand? Are we counting on the young AI's understanding of transparency?