It also worries me quite a lot that Eliezer's post is entirely symmetric under replacing his chosen notions with the pebble-sorters' notions. That property qualifies as "moral relativism" in my book, though there is no point in arguing about the meanings of words.
My posts on universal instrumental values are not symmetric under replacing UIVs with some other set of goals an agent might have. UIVs are the unique set of values X such that, in order to achieve any other value Y, you first have to do X. Maybe I find this satisfying because I have always been more at home with category theory than with logic: I have defined a set of values by requiring it to satisfy a universal property.
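The universal-property framing above can be written out as a short sketch. This is my own hedged formalization, not from the original posts; the set $V$ and the map $\mathrm{req}$ are notation I am introducing here:

```latex
% Sketch: let $V$ be the set of possible goals, and for $Y \in V$
% let $\mathrm{req}(Y) \subseteq V$ be the subgoals an agent must
% achieve en route to $Y$. Then the universal instrumental values are
\[
  \mathrm{UIV} \;=\; \{\, X \in V \;\mid\; \forall Y \in V,\ X \in \mathrm{req}(Y) \,\}.
\]
% That is, $\mathrm{UIV}$ is picked out by a universal
% quantification over all goals: every goal "factors through" it,
% which is what makes the definition asymmetric under swapping
% UIVs for an arbitrary goal set.
```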
@Eli: Nice series on Löb's theorem, but I still don't think you've added any credibility to claims like "I favor the human one because it is h-right." You can do your best to record exactly what h-right is, and think carefully about convergence (or lack thereof) under self-modification, but I think you'd do a lot better to simply state "human values" as a preference and be an out-of-the-closet relativist.