
Baughn comments on An example of deadly non-general AI - Less Wrong Discussion

13 Post author: Stuart_Armstrong 21 August 2014 02:15PM



Comment author: Baughn 22 August 2014 12:24:39PM *  1 point [-]

Reducing it, by reducing the population beforehand.

I am sorely tempted to adjust its goal. Since this is a narrow AI, it shouldn't be smart enough to be made friendly; we can't encode the real utility function into it, even if we knew what it was. I wonder if that means it can't be made safe, or just that we need to be careful?

Comment author: ancientcampus 23 August 2014 06:13:13PM 1 point [-]

Best I can tell, the lesson is to be very careful with how you code the objective.