Vladimir_M comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM


Comment author: Vladimir_M 13 August 2010 02:37:44AM *  1 point [-]

Psy-Kosh:

How's that, does that make sense?

It makes sense in its own terms, but it leaves the unpleasant implication that morality differs greatly between humans, at both individual and group level -- and if this leads to a conflict, asking who is right is meaningless (except insofar as everyone can reach an answer that's valid only for himself, in terms of his own morality).

So if I live in the same society with people whose morality differs from mine, and the good-fences-make-good-neighbors solution is not an option, as it often isn't, then who gets to decide whose morality gets imposed on the other side? As far as I see, the position espoused in the above comment leaves no other answer than "might is right." (Where "might" also includes more subtle ways of exercising power than sheer physical coercion, of course.)

Comment author: Psy-Kosh 13 August 2010 03:28:09AM 0 points [-]

*blinks* How did I imply that morality varies? I thought (and was trying to imply) that morality is an absolute standard, and that humans simply happen to be the sort of beings who care about the particular standard we call "morality". (Well, with various caveats, like not being sufficiently reflective to fully and explicitly state our "morality algorithm", nor fully knowing all its consequences.)

However, when humans and paperclippers interact, well, there will probably be some sort of fight unless it ends up with some sort of PD cooperation or whatever. It's not that paperclippers and humans disagree on anything; it's simply that, well, they value paperclips a whole lot more than lives. We're sort of stuck with having to act in a way that prevents the hypothetical them from acting on that.

(of course, the notion that most humans seem to have the same underlying core "morality algorithm", just disagreeing on the implications or such, is something to discuss, but that gets us out of executive summary territory, no?)

Comment author: Vladimir_M 13 August 2010 03:55:53AM 3 points [-]

Psy-Kosh:

(of course, the notion that most humans seem to have the same underlying core "morality algorithm", just disagreeing on the implications or such, is something to discuss, but that gets us out of executive summary territory, no?)

I would say that it's a crucial assumption, which should be emphasized clearly even in the briefest summary of this viewpoint. It is certainly not obvious, to say the least. (And, for full disclosure, I don't believe that it's a sufficiently close approximation of reality to avoid the problem I emphasized above.)

Comment author: Psy-Kosh 13 August 2010 02:56:00PM 0 points [-]

Hrm, fair enough. I thought I'd effectively implied it, but apparently not sufficiently.

(Incidentally... you don't think it's a close approximation to reality? Most humans seem to value (to various extents) happiness, love, (at least some) lives, etc... right?)

Comment author: CronoDAS 14 August 2010 09:19:09AM 3 points [-]

Different people (and cultures) seem to put very different weights on these things.

Here's an example:

You're a government minister who has to decide who to hire to do a specific task. There are two applicants. One is your brother, who is marginally competent at the task. The other is a stranger with better qualifications who will probably be much better at the task.

The answer is "obvious."

In some places, "obviously" you hire your brother. What kind of heartless bastard won't help out his own brother by giving him a job?

In others, "obviously" you should hire the stranger. What kind of corrupt scoundrel abuses his position by hiring his good-for-nothing brother instead of the obviously superior candidate?