TheAncientGeek comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM


Comment author: TheAncientGeek 20 May 2015 02:39:54PM

Well, I guess you would write the terminal goal as quite a long statement, which would summarize the things involved in friendliness, but also include language about not going to extremes, laissez-faire, and so on. It would be vague and generous.

That gets close to "do it right".

And as part of the instrumental goal there would be a stipulation that the friendliness instrumental goal should trump all other instrumentals.

Which is an open doorway to an AI that kills everyone because of miscoded friendliness.

If you want safety features, and you should, you would need them to override the ostensible purpose of the machine; they would be pointless otherwise. Even the humble off switch works that way.
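A minimal sketch of what that override structure might look like, assuming the safety checks and the objective are just scoring/veto functions (all names here are invented for illustration, not anyone's proposed design):

```python
# Hypothetical sketch: safety checks that trump the machine's ostensible
# purpose, in the spirit of the "humble off switch".

class Agent:
    def __init__(self, objective, safety_checks):
        self.objective = objective            # the machine's ostensible purpose
        self.safety_checks = safety_checks    # constraints that override it

    def choose_action(self, candidate_actions):
        # Safety checks veto first; the objective only ranks what survives.
        permitted = [a for a in candidate_actions
                     if all(check(a) for check in self.safety_checks)]
        if not permitted:
            return None                       # analogous to hitting the off switch
        return max(permitted, key=self.objective)
```

The point is just the ordering: the checks are consulted before the purpose, otherwise they do nothing.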

A simpler solution would be to scrap the idea of exceptional status for the terminal goal, and instead include massive contextual constraints as your guard against drift.

Arguably, those constraints would be a kind of negative goal.
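A rough sketch of that alternative, assuming goals and constraints can be treated as scoring functions (the weights and names are illustrative assumptions, not an implementation anyone has specified):

```python
# Hypothetical sketch: no single terminal goal with exceptional status;
# many contextual constraints act as negative goals that penalize drift.

def score(action, positive_goals, negative_constraints):
    benefit = sum(g(action) for g in positive_goals)        # no one goal dominates
    penalty = sum(c(action) for c in negative_constraints)  # constraints as negative goals
    return benefit - penalty                                 # large penalties guard against drift
```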