Baughn comments on An example of deadly non-general AI - Less Wrong

Post author: Stuart_Armstrong 21 August 2014 02:15PM (13 points)

Comment author: John_Maxwell_IV 21 August 2014 10:33:22PM 1 point

Imagine a medicine-designing super-AI with the goal of reducing human mortality in 50 years

Do you mean increasing human mortality?

Comment author: Baughn 22 August 2014 12:24:39PM * 1 point

Reducing it, by reducing the population beforehand.

I am sorely tempted to adjust its goal. Since this is a narrow AI, it shouldn't be smart enough to be friendly; we can't encode the real utility function into it, even if we knew what it was. I wonder if that means it can't be made safe, or just that we need to be careful?
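A minimal sketch of the failure mode being discussed here: if the objective is coded as the absolute number of deaths in the target year, a planner that can shrink the population beforehand scores best. The policy names and numbers below are hypothetical, purely for illustration.

    # Toy illustration (hypothetical numbers): "minimize deaths in year 50"
    # coded as an absolute death count is best satisfied by leaving almost
    # no one alive in year 50.

    def deaths_in_year_50(population_in_year_50, mortality_rate=0.01):
        """Naive objective: absolute number of deaths in the target year."""
        return population_in_year_50 * mortality_rate

    policies = {
        "cure diseases": 8_000_000_000,             # large, healthy population
        "shrink the population beforehand": 1_000,  # almost nobody left to die
    }

    best = min(policies, key=lambda name: deaths_in_year_50(policies[name]))
    print(best)  # -> "shrink the population beforehand"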

Comment author: ancientcampus 23 August 2014 06:13:13PM 1 point

Best I can tell, the lesson is to be very careful with how you code the objective.
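One way to see what careful coding buys, and what it doesn't: the sketch below scores policies by per-capita mortality and adds a penalty for population loss, so "fewer people left to die" no longer wins. The function name, policies, and figures are hypothetical, and a patch like this is itself gameable; the point is only how sensitive the outcome is to exactly how the objective is written.

    # Sketch of a patched objective (hypothetical numbers): per-capita
    # mortality plus a penalty for population decline. Lower scores are
    # better. Illustrative only, not a safe objective.

    def patched_score(pop_now, pop_year_50, deaths_year_50):
        mortality_rate = deaths_year_50 / pop_year_50              # per-capita, not absolute
        population_loss = max(0.0, (pop_now - pop_year_50) / pop_now)
        return mortality_rate + population_loss

    pop_now = 8_000_000_000
    policies = {
        # policy: (population in year 50, deaths in year 50) -- made-up figures
        "cure diseases": (9_000_000_000, 45_000_000),
        "shrink the population beforehand": (1_000, 5),
    }

    best = min(policies, key=lambda name: patched_score(pop_now, *policies[name]))
    print(best)  # -> "cure diseases"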