Alan comments on The Strangest Thing An AI Could Tell You - Less Wrong

81 Post author: Eliezer_Yudkowsky 15 July 2009 02:27AM




Comment author: Alan 16 July 2009 03:07:04PM

Well spotted! But why is it NOT strange to hold that the CI applies to an AI? Isn't the raison d'être of AI to operate on hypothetical imperatives?

Comment author: Normal_Anomaly 27 June 2011 03:20:59PM

Depends on how you define "imperative". Is "maximize human CEV according to such-and-such equations" a deontological imperative or a consequentialist utility function?