OrphanWilde comments on Wild Moral Dilemmas - Less Wrong Discussion

17 Post author: sixes_and_sevens 12 May 2015 12:56PM

Comment author: OrphanWilde 12 May 2015 03:51:31PM * 4 points

What kind of moral dilemmas do you actually encounter?

  • None. I'm a virtue ethicist, more or less, of an Objectivist bent. A "dilemma", to me, is a choice between two equally good things (which virtue I want to emphasize), rather than two equally bad things.

Do you have any thoughts on how much moral judgement you have to exercise in your daily life? Do you think this is a typical amount?

  • It feels like "None."

Do you have any examples of pedestrian moral dilemmas to which you've applied abstract moral reasoning? How did that work out?

  • No.

Do you have any examples of personal moral dilemmas on a Trolley Problem scale that nonetheless happened?

  • No.

"Trolley Problems" are less about describing genuinely difficult situations, and more about trying to find faults with ethical systems or decision theories by describing edge scenarios. To me, they're about as applicable as "Imagine there's an evil alien god who will kill everyone if you're a utilitarian. What is the most utilitarian thing to do?"

ETA: In fairness, though, I don't see any ethical issue in the Trolley Problem to begin with, unless you tied all the people to the tracks in the first place. I regard as fatally flawed any ethical system which makes a rich man who walks through a rich neighborhood, completely ignorant of any misery, -more ethical- than a rich man who is aware of misery but does nothing about it. Whether or not you qualify as a "good" person shouldn't depend on your environment, and any ethical system which rewards deliberate ignorance is fatally flawed.

Comment author: Jiro 12 May 2015 06:01:56PM 1 point

Failing to reward deliberate ignorance has its own problems: all ignorance is "deliberate" in the sense that you could always spend just a bit more time reducing it. How do you avoid requiring people to spend all their waking moments reducing their ignorance?

Comment author: OrphanWilde 12 May 2015 06:27:25PM 1 point

"Failing to reward deliberate ignorance" doesn't equal "Punishing deliberate ignorance." The issue here is not the ignorance, the issue is in making ignorance a superior moral state to knowledge.

Take ethics out of it: Suppose you were the server admin for the Universe Server Company, where all existing universes are simulated for profit. Suppose that happy universes cost more resources to run than unhappy universes, and cost our imaginary company more money than they make, while "lukewarm" universes, which are neither happy nor unhappy, make just as much money as unhappy universes. If the USC were required by law to make Happy any universe it discovered to be less than Happy, what do you suppose company policy would be about investigating the happiness levels of its simulated universes?

How do you suppose people who feel obligations to those worse-off than they are cope with this sense of obligation?

Comment author: Jiro 12 May 2015 06:43:34PM * 1 point

"Failing to reward deliberate ignorance" doesn't equal "Punishing deliberate ignorance."

The practical effect of this system is to punish ignorance. Someone who remains ignorant runs the risk of being unknowingly immoral, and therefore punished, and he can only reduce that risk by becoming less ignorant.

In your analogy, we would "fail to reward deliberate ignorance" by requiring the Universe Server Company to make all its universes happy whether or not it had discovered their unhappiness. That would indeed impose an obligation upon it to do nothing but check universes all the time (until it ran out of universes; but if the analogy fits, that isn't possible).

Comment author: OrphanWilde 12 May 2015 06:56:21PM 0 points

Ah! You're assuming you have the moral obligation with or without the knowledge.

No, I take the moral obligation away entirely. For the USC, this will generally result in universes systematically becoming lukewarm: Happy universes get downgraded, since that saves money; unhappy universes get upgraded, since that costs the company nothing; the search itself is funded by the savings from downgrading; and I'm assuming the searchers prefer more happiness in the universes, all else being equal.

A law which required universal "Happiness" would just result in USC going bankrupt, and all the universes being turned off, once USC started losing more money than they could make. A law which required all universes -discovered- to be less than Happy to be made into Happy universes just results in company policy prohibiting looking in the first place.

Comment author: Jiro 12 May 2015 07:05:52PM * 1 point

So in your original example, both the rich man aware of misery and the rich man ignorant of it have no moral obligation?

If that's what you mean, I would describe the old system as "punishing knowledge" rather than "rewarding ignorance", since the baseline under your new system corresponds to lack of knowledge under the old one.

I also suspect not many people would agree with this system.

Comment author: OrphanWilde 12 May 2015 07:12:27PM 0 points

So in your original example, both the rich man aware of misery and the rich man ignorant of it have no moral obligation?

  • Correct.

If that's what you mean, I would describe the old system as "punishing knowledge" rather than "rewarding ignorance" since the baseline under your new system is like lack of knowledge under the old system.

  • That's how I attempted to describe it; my apologies if I wasn't clear.

I also suspect not many people would agree with this system.

  • We are in agreement here.