nshepperd comments on Wanting to Want - Less Wrong

Post author: Alicorn 16 May 2009 03:08AM


Comment author: CCC 26 October 2012 10:59:28AM 4 points [-]

Or are you trying to explain how a religious fundamentalist would use the Pebblesorter metaphor if they were making the argument?

Yes, exactly. Larry's parents do not believe that they are mistaken, and are not easily proved mistaken.

I define morality as a catch-all term for what are commonly referred to as the "good things in life": love, fairness, happiness, creativity, people achieving what they want in life, etc. So something is morally good if it tends to increase those things.

That's a good definition, and it avoids most of the obvious traps. A bit vague, though. Unfortunately, there is a non-obvious trap: this definition leads to the city of Omelas, where everyone is happy, fulfilled, creative... except for one child, locked in the dark in a cellar, starved; one child on whose suffering the glory of Omelas rests. Saving the child decreases overall happiness, health, achievement of goals, and so on. Despite all this, I'd still think that leaving the child locked away in the dark is wrong. (As an edge case, this definition can also lead to Pascal's Mugging.)

I think the reason that people have such a problem with the idea of objective morality is that they subscribe, knowingly or not, to motivational internalism.

In my case, it's because every attempt I've seen at defining an objective morality has potential problems. Given to you by an external source? But that presumes that the external source is not Darkseid. Written in the human psyche? There are some bad things in the dark corners of the human psyche. Take whatever action is most likely to transform the world into a paradise? Doesn't usually work, because we don't know enough to always select the correct actions. Do unto others as you would have them do unto you? That's a very nice one - but not if Bob the Masochist tries to apply it.

Of course, subjective morality is no better - and is often worse (mainly because a society in general can reap certain benefits from a shared idea of morality).

What does seem to work is to pick a society whose inhabitants seem happy and fulfilled, and try to use whatever rules they use. The trouble with that is that it's kludgy, uncertain, and could often do with improvement (though it's been improved often enough in human history that many - not all, but many - obvious 'improvements' aren't improvements in practice).

Comment author: nshepperd 26 October 2012 11:24:25AM 1 point [-]

What does seem to work is to pick a society whose inhabitants seem happy and fulfilled, and try to use whatever rules they use.

If you're going to do that, why not just directly use happiness and fulfillment?

Comment author: CCC 26 October 2012 12:15:44PM 1 point [-]

If you're going to do that, why not just directly use happiness and fulfillment?

I cannot create an entire ethical framework, for everyone to follow, on any basis, and expect it to hold up for the next thousand years. If I try, I will fail, and this is why: because people cheat. Many intelligent agents will poke at the rules, seeking a possible exploit that enhances their success at the possible expense of their neighbours' success. Over the next thousand years, there will be thousands, probably millions, of such intelligent agents hunting for, and attempting to exploit, flaws in the system; people who stick to the letter of the rule and avoid the spirit of the rule. I cannot create an entire ethical framework, because I cannot outwit thousands or millions of future people's attempts to find and exploit gaps and loopholes in my framework.

Hence, the best that I can do is to find a system that has already endured a period of field testing and that hasn't broken yet; and perhaps attempt a small, incremental improvement (no more) in order to test that improvement.

Comment author: nshepperd 26 October 2012 01:03:22PM 0 points [-]

What does that have to do with the situation at hand? Morality is an abstract division of actions into right and wrong, not some set of laws laid down by philosophers on the rest of the population. If you're trying to work out what you mean by "morality" and use some criteria (such as something including happiness and fulfillment of populations which adopt that definition) to choose from a bunch of alternatives, then probably those criteria themselves are the most accurate definition of "morality" you could hope to find. I might add, in [almost] exactly the same way that a program which writes and then executes a program to add two numbers is, in fact, itself a program that adds two numbers.
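That program analogy can be made concrete. A minimal Python sketch (the function names here are mine, purely illustrative): a function that writes the source of an adder and then executes it is, taken as a whole, itself just an adder.

```python
def make_adder():
    """Write a tiny program that adds two numbers, then execute it."""
    source = "def add(a, b):\n    return a + b\n"
    namespace = {}
    exec(source, namespace)  # run the generated program
    return namespace["add"]

# The generator-plus-executor, viewed from outside, simply adds numbers.
add = make_adder()
print(add(2, 3))  # 5
```

From the outside, the indirection is invisible: the composite behaves exactly as the inner program does, which is the point of the analogy.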

You can write out your final definition in legalese later, if the situation calls for it.

Comment author: CCC 26 October 2012 01:55:45PM 1 point [-]

What does that have to do with the situation at hand? Morality is an abstract division of actions into right and wrong, not some set of laws laid down by philosophers on the rest of the population.

Morality comes with an implicit rule: when it says that "this action is the right action to take in this situation", the implicit rule is "if you find yourself in this situation, take this action". There is usually no Morality Policeman ready to administer punishment if the rule is not followed, and the choice of whether to follow the rule remains; but the rule is there.

If you're trying to work out what you mean by "morality" and use some criteria (such as something including happiness and fulfillment of populations which adopt that definition) to choose from a bunch of alternatives, then probably those criteria themselves are the most accurate definition of "morality" you could hope to find.

The difficulty is that I know that the algorithm that I am following is very likely not to fulfil the criteria in the very best possible way; merely in (more or less) the best possible way that they have been fulfilled in the past. If I simply list the criteria, then I falsely imply that the chosen system of morality is the best fit for those criteria; and I am trying to avoid that implication.