Hariant comments on Value Deathism - Less Wrong

Post author: Vladimir_Nesov 30 October 2010 06:20PM

You are viewing a single comment's thread.

Comment author: Emile 30 October 2010 07:21:26PM *  13 points

Change in values of the future agents, however sudden or gradual, means that the Future (the whole freakin' Future!) won't be optimized according to our values, won't be anywhere as good as it could've been otherwise.

That really depends on what you mean by "our values":

1) The values of modern, western, educated humans? (as opposed to those of the ancient Greeks, or of Confucius, or of medieval Islam), or

2) The "core" human values common to all human civilizations so far? ("stabbing someone who just saved your life is a bit of a dick move", "It would be a shame if humanity was exterminated in order to pave the universe with paperclips", etc.)

Both of those are quite fuzzy and I would find it hard to describe either of them precisely enough that even a computer could understand them.

When Eliezer talks of Friendly AI having human values, I think he's mostly talking about the second set (in The Psychological Unity of Mankind). But when Ben or Robin talk about how it isn't such a big deal if values change, because they've already changed in the past, they seem to be referring to the first kind of value.

I would agree with Ben and Robin that it isn't a big deal if our descendants (or Ems or AIs) have values that are at odds with our current, western values (because those values might be "wrong", some might be instrumental values we mistake for terminal values, etc.); but I wouldn't extend that to changes in "fundamental human values".

So I don't think "Ben and Robin are OK with a future without our values" is a good way of phrasing it. The question is more whether there is such a thing as fundamental human values (or is everything cultural?), whether it's easy to hit those in mind-space, etc.

Counterpoints: The Psychological Diversity of Mankind, Human values differ as much as values can differ.

Comment author: [deleted] 31 October 2010 10:37:09PM *  1 point

Spelling notice (bold added):

When Eliezer talks of Friendly AI having human value, I think he's mostly talking about the second set (in The Psychological Unity of Manking.

Comment author: Emile 01 November 2010 08:52:58AM 0 points

Fixed, thanks.