Esar comments on Rationality Quotes September 2012 - Less Wrong

Post author: Jayson_Virissimo 03 September 2012 05:18AM


Comment author: TimS 28 September 2012 03:15:17PM 1 point [-]

Morality is contextual.

If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food. Suppose that decision is made, and then Omega magically provides sufficient food for all: morality hasn't changed, only the decision that morality calls for.


Technological advancement has certainly caused moral change (consider society after introduction of the Pill). But having more resources does not, in itself, change what we think is right, only what we can actually achieve.

Comment author: [deleted] 28 September 2012 03:17:01PM 2 points [-]

If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food.

That's an interesting claim. Are you saying that true moral dilemmas (i.e. a situation where there is no right answer) are impossible? If so, how would you argue for that?

Comment author: [deleted] 29 September 2012 08:02:18AM 2 points [-]

My view is that a more meaningful question than ‘is this choice good or bad’ is ‘is this choice better or worse than other choices I could make’.

Comment author: [deleted] 29 September 2012 12:56:10PM 0 points [-]

Would you say that there are true practical dilemmas? Is there ever a situation where, knowing everything you could know about a decision, there isn't a better choice?

Comment author: pengvado 29 September 2012 07:46:35PM 2 points [-]

There are plenty of situations where two choices are equally good or equally bad. This is called "indifference", not "dilemma".

Comment author: [deleted] 29 September 2012 10:01:06PM 0 points [-]

There are plenty of situations where two choices are equally good or equally bad.

Those aren't the situations I'm talking about.

Comment author: [deleted] 30 September 2012 12:31:03AM *  1 point [-]

If I know there isn't a better choice, I just follow my decision. Duh. (Having to choose between losing $500 and losing $490 is equivalent to losing $500 and then having to choose between gaining nothing and gaining $10: yes, the loss will sadden me, but that had better have no effect on my decision, and if it does it's because of emotional hang-ups I'd rather not have. And replacing dollars with utilons wouldn't change much.)

Comment author: [deleted] 30 September 2012 02:52:31PM 0 points [-]

So you're saying that there are no true moral dilemmas (no undecidable moral problems)?

Comment author: [deleted] 30 September 2012 10:04:27PM *  2 points [-]

Depends on what you mean by “undecidable”. There may be situations in which it's hard in practice to decide whether it's better to do A or to do B, sure, but in principle either A is better, B is better, or the choice doesn't matter.

Comment author: [deleted] 30 September 2012 10:44:12PM 0 points [-]

Depends on what you mean by “undecidable”.

So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible. Those examples have a pretty deontological air to them... could we come up with examples of such dilemmas within consequentialism?

Comment author: TheOtherDave 01 October 2012 12:53:16AM *  2 points [-]

could we come up with examples of such dilemmas within consequentialism?

Well, the consequentialist version of a situation that demands A and B is one in which A and B provide equally positive expected consequences and no other option provides consequences that are as good. If A and B are incompossible, I suppose we can call this a moral dilemma if we like.

And, sure, consequentialism provides no tools for choosing between A and B; it merely endorses (A OR B). Which makes it undecidable using just consequentialism.

There are a number of mechanisms for resolving the dilemma that are compatible with a consequentialist perspective, though (e.g., picking one at random).

Comment author: [deleted] 01 October 2012 01:55:17AM 0 points [-]

Thanks, that was helpful. I'd been having a hard time coming up with a consequentialist example.

Comment author: [deleted] 01 October 2012 12:02:26AM 1 point [-]

So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible.

Then, either the demand/forbiddance is not absolute or the moral system is broken.

Comment author: faul_sname 01 October 2012 08:58:46PM *  1 point [-]

That one thing a couple years ago qualifies.

But unless you get into self-referencing moral problems, no. I can't think of one off the top of my head, but I suspect that you can find ones among decisions that affect your decision algorithm and decisions where your decision-making algorithm affects the possible outcomes. Probably like Newcomb's problem, only twistier.

(Warning: this may be basilisk territory.)

Comment author: Legolan 30 September 2012 03:24:00PM *  1 point [-]

How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any "true moral dilemmas" would be a critique of whatever moral system failed to provide an answer, not proof that "true moral dilemmas" existed.

We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system.

ETA: For example, if I have a completely strict sense of ethics based upon deontology, I may feel an absolute prohibition on lying and an absolute prohibition on allowing humans to die. That would create a moral dilemma for that system in the classical case of Nazis seeking Jews that I'm hiding in my house. So I'd have to switch to a different ethical system. If I switched to a system of deontology with a value hierarchy, I could conclude that human life has a higher value than telling the truth to governmental authorities under the circumstances and then decide to lie, solving the dilemma.

I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se. Since I am skeptical of moral realism, that is all the more the case; if morality can't tell us how to act, it's literally useless. We have to have some process for deciding on our actions.

Comment author: Legolan 30 September 2012 03:23:36PM *  0 points [-]

(Double-post, sorry)

Comment author: MixedNuts 28 September 2012 03:31:43PM 3 points [-]

I think they are impossible. Morality can say "no option is right" all it wants, but we still must pick an option, unless the universe segfaults and time freezes upon encountering a dilemma. Whichever decision procedure we use to make that choice (flip a coin?) can count as part of morality.

Comment author: [deleted] 28 September 2012 03:39:51PM 2 points [-]

I take it for granted that faced with a dilemma we must do something, so long as doing nothing counts as doing something. But the question is whether or not there is always a morally right answer. In cases where there isn't, I suppose we can just pick randomly, but that doesn't mean we've therefore made the right moral decision.

Are we ever damned if we do, and damned if we don't?

Comment author: Strange7 30 September 2012 05:24:49AM 3 points [-]

When someone is in a situation like that, they lower their standard for "morally right" and try again. Functional societies avoid putting people in those situations because it's hard to raise that standard back to its previous level.

Comment author: CronoDAS 30 September 2012 01:53:15AM 0 points [-]

Well, if all available options are indeed morally wrong, we can still try to see if any are less wrong than others.

Comment author: [deleted] 30 September 2012 02:57:09PM 1 point [-]

Right, but choosing the lesser of two evils is simple enough. That's not the kind of dilemma I'm talking about. I'm asking whether or not there are wholly undecidable moral problems. Choosing between one evil and a lesser evil is no more difficult than choosing between an evil and a good.

But if you're saying that in any hypothetical choice, we could always find something significant and decisive, then this is good evidence for the impossibility of moral dilemmas.

Comment author: CronoDAS 01 October 2012 06:08:44AM 6 points [-]

It's hard to say, really.

Suppose we define a "moral dilemma for system X" as a situation in which, under system X, all possible actions are forbidden.

Consider the systems that say "Actions that maximize this (unbounded) utility function are permissible, all others are forbidden." Then the situation "Name a positive integer, and you get that much utility" is a moral dilemma for those systems: there is no utility-maximizing action, so all actions are forbidden and the system cracks.

It doesn't help much if we require the utility function to be bounded; it's still vulnerable to situations like "Name a real number less than 30, and you get that much utility", because there isn't a largest real number less than 30.

The only way to get around this kind of attack by restricting the utility function is by requiring the range of the function to be a finite set. For example, if you're a C++ program, your utility might be represented by a 32-bit unsigned integer, so when asked "How much utility do you want?" you just answer "2^32 - 1", and when asked "How much utility less than 30.5 do you want?" you just answer "30".

(Ugh, that paragraph was a mess...)

Comment author: [deleted] 01 October 2012 03:11:47PM *  2 points [-]

That is an awesome example. I'm absolutely serious about stealing that from you (with your permission).

Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn't come up all that often.

ETA: Here's a thought on a reply. Given restrictions like time and knowledge of the names of large numbers, isn't there in fact a largest number you can name? Something like Graham's number won't work (way too small) because you can always add one to it. But transfinite numbers aren't made larger by adding one. And likewise with the largest real number under thirty, maybe you can use a function to specify the number? Or if not, just say '29.999...' and keep saying nine as many times as you can before the time runs out (or until you calculate that the utility benefit reaches equilibrium with the costs of saying 'nine' over and over for a long time).

Comment author: [deleted] 01 October 2012 03:39:11PM 2 points [-]

But transfinite numbers aren't made larger by adding one.

Transfinite cardinals aren't, but transfinite ordinals are. And anyway transfinite cardinals can be made larger by exponentiating them.
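To spell that out, these are the standard set-theoretic facts being invoked:

```latex
\omega + 1 > \omega
    \qquad \text{(ordinal successor is strictly larger)} \\
\aleph_0 + 1 = \aleph_0
    \qquad \text{(cardinal addition absorbs finite increments)} \\
2^{\aleph_0} > \aleph_0
    \qquad \text{(Cantor's theorem: exponentiation yields a strictly larger cardinal)}
```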

Comment author: [deleted] 01 October 2012 04:16:26PM 0 points [-]

Good point. What do you think of Chrono's dilemma?

Comment author: CronoDAS 02 October 2012 09:00:04AM 0 points [-]

That is an awesome example. I'm absolutely serious about stealing that from you (with your permission).

Sure, be my guest.

Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn't come up all that often.

Honestly, I don't know. Infinities are already a problem, anyway.

Comment author: TimS 28 September 2012 03:33:38PM -1 points [-]

I would make the more limited claim that the existence of irreconcilable moral conflicts is evidence for moral anti-realism.

In short, if you have a decision process (aka moral system) that can't resolve a particular problem that is strictly within its scope, you don't really have a moral system.

Which makes figuring out what we mean by moral change / moral progress incredibly difficult.

Comment author: [deleted] 28 September 2012 03:45:22PM 1 point [-]

In short, if you have a decision process (aka moral system) that can't resolve a particular problem that is strictly within its scope, you don't really have a moral system.

This seems to me to be a rephrasing and clarifying of your original claim, which I read as saying something like 'no true moral theory can allow moral conflicts'. But it's not yet an argument for this claim.

Comment author: TimS 28 September 2012 06:14:50PM 0 points [-]

I'm suddenly concerned that we're arguing over a definition. It's very possible to construct a decision procedure that tells one how to decide some, but not all moral questions. It might be that this is the best a moral decision procedure can do. Is it clearer to avoid using the label "moral system" for such a decision procedure?

This is a distraction from my main point, which was that asserting our morality changes when our economic resources change is an atypical way of using the label "morality."

Comment author: [deleted] 28 September 2012 06:23:01PM 1 point [-]

Is it clearer to avoid using the label "moral system" for such a decision procedure?

No, but if I understand what you've said, a true moral theory can allow for moral conflict, just because there are moral questions it cannot decide (the fact that you called them 'moral questions' leads me to think you think that these questions are moral ones even if a true moral theory can't decide them).

This is a distraction from my main point, which was that asserting our morality changes when our economic resources change is an atypical way of using the label "morality."

You're certainly right, this isn't relevant to your main point. I was just interested in what I took to be the claim that moral conflicts (i.e. moral problems that are undecidable in a true moral theory) are impossible:

If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food.

This is a distraction from your main point in at least one other sense: this claim is orthogonal to the claim that morality is not relative to economic conditions.

Comment author: TimS 28 September 2012 06:30:55PM 0 points [-]

If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food.
This is a distraction from your main point in at least one other sense: this claim is orthogonal to the claim that morality is not relative to economic conditions.

Yes, you are correct that this was not an argument, simply my attempt to gesture at what I meant by the label "morality." The general issue is that human societies are not rigorous about the use of the label "morality." I like my usage because I think it is neutral and specific in meta-ethical disputes like the one we are having. For example, moral realists must determine whether they think "incomplete" moral systems can exist.

But beyond that, I should bow out, because I'm an anti-realist and this debate is between schools of moral realists.