DanArmak comments on Open thread, October 2011 - LessWrong

Post author: MarkusRamikin, 02 October 2011 09:05AM

Comment author: DanArmak, 12 October 2016 02:02:14PM, 1 point

I've been told that people use the word "morals" to mean different things. Please answer this poll or add comments to help me understand better.

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?

Comment author: CCC, 13 October 2016 01:49:46PM, 2 points

"Morals" and "goals" are very different things. I might make it a goal to (say) steal an apple from a shop; this would be an example of an immoral goal. Or I might make a goal to (say) give some money to charity; this would be a moral goal. Or I might make a goal to buy a book; this would (usually) be a goal with little if any moral weight one way or another.

Morality cannot be the same as terminal goals, because a terminal goal can also be immoral, and someone can pursue a terminal goal while knowing it's immoral.

AI morals are not a category error; if an AI deliberately kills someone, then that carries the same moral weight as if a person deliberately kills someone.

Comment author: TheOtherDave, 13 October 2016 04:05:12AM, 2 points

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?

Depends on context.

When I use it, it means something kind of like "what we want to happen." More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.
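
To make the "sort key" image concrete, here is a minimal sketch (not from the original comment; the world representation and the key function are hypothetical) of ordering possible worlds by how much X they contain:

```python
# Toy illustration of "moral principles as sort keys": given a list of
# possible worlds, prefer those containing more X (all else being equal).
# The world representation and the key function are made up for illustration.

worlds = [
    {"name": "A", "X": 3, "Y": 5},
    {"name": "B", "X": 7, "Y": 5},
    {"name": "C", "X": 7, "Y": 2},
]

def moral_key(world):
    # Sort primarily by how much X a world contains; break ties by less Y.
    return (world["X"], -world["Y"])

# Most-preferred world first.
preference_order = sorted(worlds, key=moral_key, reverse=True)
print([w["name"] for w in preference_order])  # ['C', 'B', 'A']
```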

I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.

I know people who, when they use it, mean something more like "complying with the rules tagged 'moral' in the social structure I'm embedded in." I know people who, when they use it, mean something more like "complying with the rules implicit in the nonsocial structure of the world." In both cases, I try to understand by it what I expect them to mean.

Comment author: username2, 14 October 2016 12:04:43AM, 1 point

Survey assumed a consequentialist utilitarian moral framework. My moral philosophy is neither, so there was no adequate answer.

Comment author: TheAncientGeek, 13 October 2016 01:32:08PM, 1 point

I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can't answer questions posed in terms of "our values", because I don't know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values.

One implication of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another implication is that my view approximates to "morality is society's rules", but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society's morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.

Comment author: DanArmak, 14 October 2016 06:51:54PM, 0 points

Some people think that any value, if it is the only value, naturally tries to consume all available resources. Even if you explicitly make a satisficing, non-maximizing value (e.g. "make 1000 paperclips", not just "make paperclips"), a rational agent pursuing that value may consume infinite resources making more paperclips just in case it's somehow wrong about already having made 1000 of them, or in case some of the ones it has made are destroyed.

On this view, all values need to be able to trade off against one another (which implies a common quantitative utility measure). Even if it seems obvious that the chance you're wrong about having made 1000 paperclips is very small, and that you shouldn't invest more resources in checking instead of working on your next value, this needs to be made explicit and quantified.
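
As a toy illustration of what "explicit and quantified" could look like (not from the original comment; all numbers and names are made up), the tradeoff amounts to comparing expected utilities:

```python
# Toy sketch: should the agent re-verify its paperclip count, or spend the
# same resources on its next value? All quantities are hypothetical.

p_wrong = 1e-6            # assumed probability the count of 1000 is wrong
u_fix_shortfall = 100.0   # utility of catching and fixing a shortfall
u_next_value = 5.0        # utility of spending the resources elsewhere

eu_verify = p_wrong * u_fix_shortfall   # expected gain from re-checking
eu_move_on = u_next_value               # gain from pursuing the next value

if eu_verify > eu_move_on:
    print("Re-verify the paperclips")
else:
    print("Work on the next value")     # here 1e-4 < 5.0, so move on
```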

In this case, since all values inherently conflict with one another, all decisions (between actions that would serve different values) are moral decisions in your terms. I think this is a good intuition pump for why some people think all actions and all decisions are necessarily moral.

Comment author: hairyfigment, 13 October 2016 08:02:28PM, 0 points

Were the Babyeaters immoral before meeting humans?

If not, what would you like to call the thing we actually care about?

Comment author: TheAncientGeek, 13 October 2016 09:03:47PM, 1 point

If I don't use "moral" as a rubber stamp for any and all human values, I don't run into CCC's problem of labeling theft and murder as moral because some people value them. That's the upside. What's the downside?

Comment author: CCC, 14 October 2016 10:30:22AM, 0 points

What they did was clearly wrong... but, at the same time, they did not know it, and that has relevance.

Consider; you are given a device with a single button. You push the button and a hamburger appears. This is repeatable; every time you push the button, a hamburger appears. To the best of your knowledge, this is the only effect of pushing the button. Pushing the button therefore does not make you an immoral person; pushing the button several times to produce enough hamburgers to feed the hungry would, in fact, be the action of a moral person.

The above paragraph holds even if the device also causes lightning to strike a different person in China every time you press the button. (Although, in this case, creating the device was presumably an immoral act.)

So, back to the Babyeaters: some of their actions were immoral, but they themselves were not immoral, due to their ignorance.

Comment author: hairyfigment, 14 October 2016 10:37:41AM, 2 points

Clearly I should have asked about actions rather than people. But the Babyeaters were not ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but none of the human characters, IIRC, suggested this information might change their minds, because those aliens had a genetic tendency towards non-human preferences, and the (working) society they built strongly reinforced this.