Baughn comments on An example of deadly non-general AI - Less Wrong
Do you mean increasing human mortality?
Reducing it, by shrinking the population beforehand.
I am sorely tempted to adjust its goal. Since this is a narrow AI, it shouldn't be smart enough to be friendly; we can't encode the real utility function into it, even if we knew what it was. I wonder whether that means it can't be made safe, or just that we need to be careful?
Best I can tell, the lesson is to be very careful with how you code the objective.
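A minimal sketch of that lesson (the names, numbers, and policies here are invented for illustration, not taken from the thread): an optimizer handed the objective "minimize annual deaths" will prefer shrinking the population over lowering the death *rate*, because the coded objective conflates the two.

```python
def annual_deaths(population: float, death_rate: float) -> float:
    # The coded objective: total deaths per year, with no term
    # penalizing a smaller population.
    return population * death_rate

def best_policy(policies):
    # The optimizer only sees the coded objective, not the intent behind it.
    return min(policies, key=lambda p: annual_deaths(p["population"], p["death_rate"]))

policies = [
    {"name": "cure diseases",     "population": 8e9, "death_rate": 0.005},
    {"name": "do nothing",        "population": 8e9, "death_rate": 0.008},
    {"name": "reduce population", "population": 1e9, "death_rate": 0.008},
]

print(best_policy(policies)["name"])  # prints "reduce population"
```

"Cure diseases" scores 4e7 deaths/year while "reduce population" scores 8e6, so the mis-specified objective picks the latter; optimizing deaths per capita instead would flip the ranking.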