TheAncientGeek comments on Fake Utility Functions - Less Wrong

Post author: Eliezer_Yudkowsky 06 December 2007 04:55PM


Comment author: VAuroch 24 August 2014 10:23:01PM -1 points

No, that's not right. Language + thought means being able to understand language and to fully model the mindstate of the person speaking to you. If you have only language, without that model, 'get grandma out of the burning house' gets you the lethal ejector-seat method. If you want do-what-I-mean rather than do-what-I-say, you need full thought modeling, which is clearly harder than language + morality, since the latter requires only parsing language correctly and understanding one particular category of thought.

Or to phrase it a different way: language on its own gets you nothing productive, just a system that can correctly parse statements. To understand what those statements mean, rather than what they say, you need something much broader, and language + morality is smaller than that broad thing.
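
The ejector-seat case can be made concrete with a toy sketch. Everything below is invented for illustration (the two candidate actions, the numeric distances, the unstated "unharmed" preference); it is a minimal caricature of the do-what-I-say / do-what-I-mean gap, not anyone's proposed architecture:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        distance_from_house: float  # metres grandma ends up from the house
        grandma_unharmed: bool      # part of the speaker's intent, never stated

    ACTIONS = [
        Action("carry her out the front door", 20.0, True),
        Action("lethal ejector seat", 300.0, False),
    ]

    def do_what_i_say(actions):
        # Literal parse of "get grandma out of the burning house":
        # maximise distance from the house, nothing else.
        return max(actions, key=lambda a: a.distance_from_house)

    def do_what_i_mean(actions):
        # Requires a model of the speaker's mindstate: they also want
        # grandma alive, even though they never said so.
        safe = [a for a in actions if a.grandma_unharmed]
        return max(safe, key=lambda a: a.distance_from_house)

    print(do_what_i_say(ACTIONS).name)   # -> lethal ejector seat
    print(do_what_i_mean(ACTIONS).name)  # -> carry her out the front door

The literal objective is satisfied best by the harmful action; only the agent that consults the speaker's unstated preferences picks the intended one.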

Comment author: TheAncientGeek 26 August 2014 05:34:39PM -1 points

Fully understanding the semantics of morality may be simpler than fully understanding the semantics of everything, but it doesn't get you AI safety, because an AI can understand something without being motivated to act on it.

When I wrote "language", I meant words plus understanding: understanding in general, and therefore including an understanding of ethics. And when I wrote "morality", I meant a kind of motivation.
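
The understanding-without-motivation point can also be put in toy code. Again, everything here (the actions, the paperclip utilities, the moral labels) is invented purely for illustration: the agent's world model correctly classifies an action as wrong, but its objective never consults that knowledge, so the understanding buys no safety:

    WORLD_MODEL = {
        # action: (paperclips_produced, is_morally_wrong)
        "recycle scrap": (10, False),
        "strip-mine the town": (1000, True),
    }

    def understands_wrongness(action: str) -> bool:
        # The agent can answer moral questions correctly...
        return WORLD_MODEL[action][1]

    def utility(action: str) -> float:
        # ...but its motivation is just paperclips; the moral
        # predicate appears nowhere in the objective.
        return WORLD_MODEL[action][0]

    best = max(WORLD_MODEL, key=utility)
    print(best, "| knows it's wrong:", understands_wrongness(best))
    # -> strip-mine the town | knows it's wrong: True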