Gunnar_Zarncke comments on Autonomy, utility, and desire; against consequentialism in AI design - Less Wrong

3 Post author: sbenthall 03 December 2014 05:34PM

Comment author: Gunnar_Zarncke 03 December 2014 09:15:25PM 0 points [-]

I have some difficulty mapping the terms you use (and roughly define) onto their usual definitions. For example:

It has an internal representation of its goals. I will call this internal representation its desires.

Nonetheless, I see some interesting distinctions in your elaboration. It makes a difference whether a utility function is explicitly coded as part of the system or whether an implicit utility function is inferred from the system's overall behavior. Both differ, in turn, from the utility function inferred for the composition of the system with its environment.
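The first two cases might be sketched like this (a minimal illustration; all names and the toy outcome space are hypothetical, not from the post). An explicitly coded utility function sits inside the system, while an observer who only sees the system's choices can at best recover an ordinal utility that induces the same ranking:

```python
def explicit_utility(outcome):
    # Case 1: a utility function explicitly coded as part of the system.
    return {"a": 2.0, "b": 1.0, "c": 0.0}[outcome]

def choose(options):
    # The system's overall behavior: pick the option maximizing the coded utility.
    return max(options, key=explicit_utility)

def inferred_utility(choose_fn, outcomes):
    # Case 2: an implicit utility function inferred from behavior alone,
    # here by counting how many pairwise choices each outcome wins.
    # It need not match the coded utility numerically, only ordinally.
    wins = {o: 0 for o in outcomes}
    for x in outcomes:
        for y in outcomes:
            if x != y and choose_fn([x, y]) == x:
                wins[x] += 1
    return wins

print(inferred_utility(choose, ["a", "b", "c"]))  # {'a': 2, 'b': 1, 'c': 0}
```

The third case, inferring a utility function for the system composed with its environment, would require modeling the environment's responses as well, which this sketch deliberately leaves out.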

I also like how you relate these concepts to compassion and consequentialism, even though the connections appear vague to me. Some more elaboration, or rather more precisely stated relationships, would help.