Alicorn comments on The Strangest Thing An AI Could Tell You - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If you already think the CI applies to humans, why would it be strange to hear that it also applies to an AI? And if you don't think it applies to humans, then "not at all" would hold with "equal force" for the AI, which would also be un-strange.
Well spotted! But why is it NOT strange to hold that the CI applies to an AI? Isn't the raison d'être of AI to operate on hypothetical imperatives?
Depends how you define "imperative". Is "maximize human CEV according to such-and-such equations" a deontological imperative or a consequentialist utility function?