Academian comments on Human values differ as much as values can differ - Less Wrong

13 points | Post author: PhilGoetz 03 May 2010 07:35PM




Comment author: PhilGoetz 04 May 2010 01:47:41AM 3 points

Anyway, your analysis here (as with many others on LW) conflates feelings of status with some sort of actual position in some kind of dominance hierarchy. But this is a classification error. There are people who feel quite respectable, important, and proud, without needing to outwardly be "superior" in some fashion.

Those aren't the people I'm talking about.

Truth is, if you're worried about your place in the dominance hierarchy (by which I mean you have feelings about it, not that you're merely curious about it or tracking it for tactical or strategic reasons), that's prima facie evidence of something that needs immediate fixing, without waiting for an AI to modify your brain or to convince you of anything. Identify and eliminate the irrational perceived threat from your belief system.

You're not dealing with the actual values the people I described have; you're saying they should have different values. Which is unFriendly!

Comment author: Academian 04 May 2010 06:20:45PM 4 points

Phil, you're right that there's a difference between actually satisfying people's mutually unsatisfiable values and merely giving them the feeling that those values have been satisfied. But there's a mechanism missing from this picture:

Even if I wouldn't want to run an AI that holds conversations with humans worldwide to convert them to more mutually satisfiable value systems, and even though I don't want a machine to wirehead everybody into a state of illusory high status, I certainly trust humans to convince other humans to adopt mutually satisfiable values. In fact, I do it all the time; I consider it one of the most proselytism-worthy ideas ever.

So I see your post as describing a very important initiative we should all be taking, as people: convince others to find happiness in positive-sum games :)

(If I were an AI, or even just an I, perhaps you would hence define me as "unFriendly". If so, okay then. I'm still going to go around convincing people to be better at happiness, rational-human-style.)
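The zero-sum vs. positive-sum distinction above can be made concrete with toy payoff tables. This is a minimal sketch with illustrative numbers (not anything from the thread): in a zero-sum game the pie is fixed, while in a positive-sum game mutual cooperation grows it.

```python
# Hypothetical payoff tables; entries are (row player's payoff, column player's payoff).

zero_sum = {                       # fixed pie: one side's gain is the other's loss
    ("fight", "fight"): (0, 0),
    ("fight", "yield"): (1, -1),
    ("yield", "fight"): (-1, 1),
    ("yield", "yield"): (0, 0),
}

positive_sum = {                   # trade: both sides can come out ahead
    ("trade", "trade"): (2, 2),
    ("trade", "hoard"): (-1, 1),
    ("hoard", "trade"): (1, -1),
    ("hoard", "hoard"): (0, 0),
}

def total(game, moves):
    """Sum of both players' payoffs for a given pair of moves."""
    a, b = game[moves]
    return a + b

# Every outcome of the zero-sum game sums to zero...
assert all(total(zero_sum, moves) == 0 for moves in zero_sum)
# ...while mutual cooperation in the positive-sum game creates surplus.
assert total(positive_sum, ("trade", "trade")) == 4
```

Finding happiness in positive-sum games, in these terms, means preferring to play (and build) games of the second kind.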

Comment author: pjeby 04 May 2010 07:20:58PM 0 points

So I see your post as describing a very important initiative we should all be taking, as people: convince others to find happiness in positive-sum games

It's an error to assume that human brains are actually wired for zero- or negative-sum games in the first place, as opposed to having adaptations that merely tend to produce such games. Humans aren't true maximizers; they're satisficers. E.g., people don't seek the best possible mate; they seek the best mate they think they can get.

(Ironically, the greater mobility and choices in our current era often lead to decreased happiness, as our perceptions of what we ought to be able to "get" have increased.)

Anyway, ISTM that any sort of monomaniacal maximizing behavior (OCD, paranoia, etc.) is indicative of an unhealthy brain. Simple game theory suggests that weighting one value so much more heavily than all the others is unlikely to be an evolutionarily stable strategy.
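The classic hawk-dove game illustrates why a monomaniacal strategy fails to be evolutionarily stable. A minimal sketch, with illustrative numbers (resource value V=2, fight cost C=4) rather than anything from the thread: in a population of pure hawks ("always escalate"), a lone dove out-earns the hawks, so the extreme strategy can always be invaded when fighting costs more than the prize.

```python
# Hawk-dove payoffs for the row player (standard textbook form, toy numbers).
V, C = 2.0, 4.0
payoff = {
    ("hawk", "hawk"): (V - C) / 2,   # -1: escalate against an escalator, risk injury
    ("hawk", "dove"): V,             #  2: take the whole resource
    ("dove", "hawk"): 0.0,           #  0: back down, lose nothing
    ("dove", "dove"): V / 2,         #  1: share
}

def expected(strategy, hawk_frac):
    """Average payoff of `strategy` in a population playing hawk with frequency `hawk_frac`."""
    return (hawk_frac * payoff[(strategy, "hawk")]
            + (1 - hawk_frac) * payoff[(strategy, "dove")])

# In an all-hawk population, doves do strictly better than hawks,
# so "always escalate" is not evolutionarily stable when C > V.
assert expected("dove", 1.0) > expected("hawk", 1.0)
```

The stable outcome in this game is a mix of strategies (hawk with probability V/C), which is the game-theoretic version of the point above: putting one value far above all others invites invasion by more balanced strategies.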