Gunnar_Zarncke comments on Autonomy, utility, and desire; against consequentialism in AI design - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (5)
I have some difficulty mapping the terms you use (and roughly define) to the usual definitions of these terms. For example:
Nonetheless, I see some interesting distinctions in your elaboration. It makes a difference whether a utility function is explicitly coded as part of the system or whether the system's implicit utility function is inferred from its overall behavior. And both differ from the utility function inferred for the composition of the system with its environment.
I also like how you relate these concepts to compassion and consequentialism, even though the connections appear vague to me. Some more elaboration, or rather more precisely stated relationships, would help.