hairyfigment comments on Stupid Questions Open Thread Round 2 - Less Wrong

15 Post author: OpenThreadGuy 20 April 2012 07:38PM




Comment author: [deleted] 22 April 2012 10:14:42PM *  2 points

"We have nothing to argue about [on this subject], we are only different optimization processes."

Calling something a terminal value is the default behavior when humans look for a justification and don't find anything. This happens because we perceive little of our own mental processes and in the absence of that information we form post-hoc rationalizations. In short, we know very little about our own values. But that lack of retrieved / constructed justification doesn't mean it's impossible to unpack moral intuitions into algorithms so that we can more fully debate which factors we recognize and find relevant.

A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy.

Your friends can understand why humans have positive personality descriptors for people who don't get angry in various situations: descriptors like reflective, charming, polite, solemn, respecting, humble, tranquil, agreeable, open-minded, approachable, cooperative, curious, hospitable, sensitive, sympathetic, trusting, merciful, gracious.

You can understand why we have positive personality descriptors for people who get angry in various situations: descriptors like impartial, loyal, decent, passionate, courageous, bold, strong, resilient, candid, vigilant, independent, and dignified.

Both you and your friends can see how either group could pattern match their behavioral bias as being friendly, supportive, mature, disciplined, or prudent.

These are not deep variations; they are differences in relative reliance on the exact same intuitions.

You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se, and there's no particular reason to expect it to converge on a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe.

Stories strengthen our associations of different emotions in response to analogous situations, which doesn't have much of a converging effect (Edit: unless, you know, it's something like the Bible, which a billion people read. That certainly pushes humanity in some direction), but they can also create associations to moral evaluative machinery that previously wasn't doing its job. There's nothing arational about this: neurons firing in the inferior frontal gyrus are evidence relevant to a certain useful categorizing inference, "things which are sentient".

Because generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes

I'm not in a mood to argue definitions, but "optimization process" is a very new concept, so I'd lean toward "less".

Comment author: Jadagul 22 April 2012 11:22:26PM 1 point

You're...very certain of what I understand. And of the implications of that understanding.

More generally, you're correct that people don't have a lot of direct access to their moral intuitions. But I don't actually see any evidence for the proposition that they should converge sufficiently, other than a lot of handwaving about the fundamental psychological similarity of humankind, which is more-or-less true but probably not true enough. In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these are all attributable to computational error.

I'm not disputing that we share a lot of mental circuitry, or that we can basically understand each other. But we can understand without agreeing, and be similar without being the same.

As for the last bit--I don't want to argue definitions either. It's a stupid pastime. But to the extent that Eliezer claims not to be a meta-ethical relativist, he's doing so purely through a definitional argument.

Comment author: hairyfigment 24 April 2012 05:35:21PM -1 points

In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these all are attributable to computational error.

Do you know anyone who never makes computational errors? If 'mistakes' happen at all, we would expect to see them in cases involving tribal loyalties. See von Neumann and those who trusted him on hidden variables.

Comment author: Jadagul 25 April 2012 02:37:40AM 0 points

The claim wasn't that this happens too often to attribute to computational error, but that the types of differences seem unlikely to stem from computational errors.