hairyfigment comments on What should a friendly AI do, in this situation? - Less Wrong Discussion

8 Post author: Douglas_Reay 08 August 2014 10:19AM

Comment author: hairyfigment 24 August 2014 06:20:56AM -1 points [-]

> the goal was detected by internal systems

I don't understand this part. If the AI wants something from the programmers, such as information about their values that it can extrapolate, won't it always be committing "optimization within a prediction involving programmer reactions"? How would one distinguish this case without an adult FAI already in hand? Are we counting on the young AI's understanding of transparency?