Alicorn comments on The Strangest Thing An AI Could Tell You - Less Wrong

81 Post author: Eliezer_Yudkowsky 15 July 2009 02:27AM


Comment author: Alan 16 July 2009 03:39:13AM 1 point

Kant's categorical imperative applies with equal force to AI.

Comment author: Alicorn 16 July 2009 04:25:18AM 3 points

If you already think the CI applies to humans, why would it be strange to hear that it also applies to an AI? If you don't think it applies to humans, then "not at all" could be "equal force", and that would also be un-strange.

Comment author: Alan 16 July 2009 03:07:04PM 0 points

Well spotted! But why is it NOT strange to hold that the CI applies to an AI? Isn't the raison d'être of AI to operate on hypothetical imperatives?

Comment author: Normal_Anomaly 27 June 2011 03:20:59PM 0 points

Depends how you define "imperative". Is "maximize human CEV according to such-and-such equations" a deontological imperative or a consequentialist utility function?