
entirelyuseless comments on My Kind of Moral Responsibility - Less Wrong Discussion

3 Post author: Gram_Stone 02 May 2016 05:54AM


Comments (116)


Comment author: entirelyuseless 02 May 2016 12:54:02PM *  1 point [-]

On the object level, I think you are almost completely wrong.

You say, "There is not one culpable atom in the universe." This is true, but your implied conclusion, that there are no culpable persons in the universe, is false. Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.

But if there are agents in the universe, and there are, then there can be good and bad agents there, just as there are good and bad apples in the universe.

Richard Chappell, I think, has used Singer's own argument against him. Suppose you are jogging somewhere in order to make a donation to a foreign charity. The number of expected lives saved from your donation is 3. On the way, you witness a young child drowning in a river. You have a choice: continue on, expecting to save 2 lives overall. Or save the child, expecting to lose 2 lives overall.
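The "overall" figures in the hypothetical are relative to the alternative choice. A minimal sketch of the arithmetic, using only the numbers stipulated above:

```python
# Expected lives saved under each option, using the hypothetical's numbers.
donation_lives = 3  # expected lives saved if the donation is made
child_lives = 1     # the one drowning child

continue_on = donation_lives  # donation made, child lost: 3 lives saved
save_child = child_lives      # child saved, donation lost: 1 life saved

# The "overall" figures compare each option against the alternative:
assert continue_on - save_child == 2   # continue on: save 2 lives overall
assert save_child - continue_on == -2  # save the child: lose 2 lives overall
```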

Everyone knows that the right choice here is to save the child, and that the utilitarian choice is wrong.

The utilitarian error is this: it is asking, "what actions will have the most beneficial effects?" But that is the wrong question. The right question is, "What is the right thing to do?"

(Edit: there is another inconsistency in your way of thinking. If you assume there is no culpability in the universe because atoms are not culpable, neither is it worthwhile to save human lives, because there are no atoms in the universe that are worth bothering about.)

Comment author: TheAncientGeek 02 May 2016 06:55:39PM *  2 points [-]

The utilitarian error is this: it is asking, "what actions will have the most beneficial effects?" But that is the wrong question. The right question is, "What is the right thing to do?"

Yes, morality has a cluster of concerns, including obligation, praise, blame and rightness of action. That's the deontological cluster. If you are concerned about culpability, you need to think about what responsibilities you are under. You have an obligation to pay your taxes, but not one to spend your disposable income in any particular way.

There's another cluster to do with voluntary action, outcomes and making the world better. That's the consequentialist cluster. Utilitarianism is a good tool for spending money optimally, but if you try to use it as a theory of obligation, it breaks.

The third cluster is virtue theoretic, concerned with self-cultivation. I don't know why Pigliucci thinks you can tell whether you are obligated by examining subjective feelings. You are obligated to do something if you are likely to be blamed for not doing it. Self-blame is secondary to that. You have to look outward, not inward, to find the objective fact.

One way of fixing emotional problems is to run off the right theory.

Comment author: gjm 02 May 2016 02:39:17PM 0 points [-]

Everyone knows that the right choice here is to save the child, and that the utilitarian choice is wrong.

[citation needed]

Saving the child is the choice that feels better, the choice that will make other people think better of us, the choice that all else being equal gives most evidence of being a good person. For all those reasons, I expect many of us would choose to save the child. But is that the right choice? I am very very unconvinced.

A more reputable reason to prefer saving the child: we may reasonably doubt our impact estimates for very indirect charitable activity like donating money to help people far away, and suspect that they may be inflated (because pretty much everyone involved has an incentive to inflate them). So if our "number of expected lives" was estimated without taking that into account, we might want to reduce the estimate substantially. But all that would mean is that one of the things we're comparing against one another is wrong, and that has nothing to do with deficiencies in utilitarianism.

Of course the scenario is ridiculous anyway; it seems to require that arriving ten minutes later and damp will stop us ever making the donation (how??), or else that the donation is so time-critical that every 10 minutes of delay means three more lives lost (in which case we probably shouldn't merely be jogging).

Comment author: Lumifer 02 May 2016 02:58:56PM *  1 point [-]

But is that the right choice?

Whether it's the right choice is a function of your moral system. Under some moral systems it is, and under some it isn't. However notice the "everyone knows" part. Everyone does know. Which percentage of the population do you expect to agree that letting the child drown was the right thing to do?

Of course the scenario is ridiculous anyway

Any more than the trolley one? Hypotheticals aren't known for their realism.

Comment author: Gram_Stone 02 May 2016 04:13:51PM 1 point [-]

Whether it's the right choice is a function of your moral system. Under some moral systems it is, and under some it isn't. However notice the "everyone knows" part. Everyone does know. Which percentage of the population do you expect to agree that letting the child drown was the right thing to do?

A while back, a lot of people would have agreed that setting cats on fire for entertainment was totally cool.

Any more than the trolley one? Hypotheticals aren't known for their realism.

The idea is that the argument sneaks in intuitions about the situation that have been explicitly stipulated away.

Comment author: Lumifer 02 May 2016 04:38:54PM *  0 points [-]

A while back, a lot of people would have agreed that setting cats on fire for entertainment was totally cool.

Yes, and which conclusion do you draw from this observation?

The idea is that the argument sneaks in intuitions about the situation that have been explicitly stipulated away.

I am not sure I understand. Which intuitions have been explicitly stipulated away and where?

Comment author: Gram_Stone 02 May 2016 04:58:04PM *  1 point [-]

Yes, and which conclusion do you draw from this observation?

I don't see how defining morality as the popular vote doesn't entail moral progress being a random walk, and don't think that that definition provides any kind of answer to most of the questions that we pose within the cultural category 'moral philosophy'.

I am not sure I understand. Which intuitions have been explicitly stipulated away and where?

There's implicit uncertainty about how to compare the moral weight of children and adults. Is there not always some number of adults that would be better to save than a fixed number of children? Would you sacrifice ten million adults for one child? There's some number. People have unique intuitions about the moral weight of children, as opposed to adults, and most utilitarians don't make any kind of concrete judgments about what the weights should be. If you throw in something like this, then you're not countering a claim that anyone has actually made.

There are other intuitions that implicitly affect the judgment, like pleasure, social reputation, uncertainty about the assumptions themselves. In particular, it's hard to suspend your disbelief in a thought experiment. If it really were the case that you knew with certainty that you could live and save two people instead of dying trying to save someone else and failing, then yes, you should pick the action that leads to the outcome with the greatest number of people safe. And finally, these things never actually happen. You seem to champion pragmatism constantly; I don't see how being able to save a life for $4,000 instead of $100,000 and ignoring quirks about my ability to perceive large scopes and distant localities to come to the conclusion that, yes, in fact, I should save twenty-five lives instead of one life, is counterintuitive, unpragmatic, or morally indefensible. I see thought experiments against utilitarianism as counterintuition porn, pitting a jury-rigged human brain against the most alien, unrealistic situation that you possibly can.

Comment author: Lumifer 02 May 2016 05:33:29PM *  1 point [-]

I don't see how defining morality as the popular vote doesn't entail moral progress being a random walk

You imply that the empirically observed ("popular") morality of different societies at different times is a random walk. Is that a bullet you wish to bite?

The point I had in mind, though, wasn't defining morality through democracy. If you think that your moral opinions about cats on fire are better than those of some fellows a century or two ago, you have a couple of ways to argue for this.

One would be to claim that moral progress exists and is largely monotonic and inescapable, thus your morality is better just because it comes later in time. Another would be to claim that you are in some way exceptional (in terms of your position in space and/or time), for example you can see the Truth better than those other folks because they were deficient in some way.

As you are probably well aware, such claims tend to be controversial and have issues. I was wondering which path you want to take. I'm guessing the moral progress path, am I right?

There's implicit uncertainty about ... other intuitions that implicitly affect the judgment, like pleasure ...

Sure, but what has been explicitly stipulated away?

I don't see how being able to save a life for $4,000 instead of $100,000 ... is counterintuitive, unpragmatic, or morally indefensible.

That's not what we are talking about, is it? We are talking more about immediate, visceral-reaction kinds of actions versus far-off, unconnected, and statistical-averages kinds. In some way it's an emotion vs intellect sort of a conflict, or, put in different terms, hardwired biological imperatives vs abstract calculations.

You are saying that abstract calculations provide the right answer, but I don't see it as self-evident: see my post above about putting all your trust into a single maximization.

Comment author: gjm 02 May 2016 09:57:50PM -1 points [-]

Under some moral systems it is, and under some it isn't.

Right. And provided some of the latter moral systems are ones endorsed by actual people, it cannot be true that "Everyone knows ...".

Which percentage of the population [...]

Oh, I'm sorry. I'd thought we were having a discussion about ethics, not a popularity contest. What percentage of the population has even heard of utilitarianism? What proportion has heard of it and has a reasonably accurate idea what it is?

Any more than the trolley one?

Nope, ridiculous to a similar extent and in similar ways. This is relevant not because there's anything wrong with using unrealistic hypothetical questions to explore moral systems, but because there's something wrong with making a naked appeal to intuition when addressing an unrealistic hypothetical question (that being what entirelyuseless just did). Because our intuitions are not calibrated for weird hypothetical situations and we shouldn't expect what they tell us about such situations to be very enlightening.

Comment author: torekp 06 May 2016 01:28:14AM 1 point [-]

Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.

This. I call the inference "no X at the microlevel, therefore, no such thing as X" the Cherry Pion fallacy. (As in, no cherry pions implies no cherry pie.) Of course more broadly speaking it's an instance of the fallacy of composition, but this variety seems to be more tempting than most, so it merits its own moniker.

It's a shame. The OP begins with some great questions, and goes on to consider relevant observations like

When we are sad, we haven't attributed the cause of the inciting event to an agent; the cause is situational, beyond human control. When we are angry, we've attributed the cause of the event to the actions of another agent.

But from there, the obvious move is one of charitable interpretation, saying, Hey! Responsibility is declared in these sorts of situations, when an agent has caused an event that wouldn't have happened without her, so maybe, "responsibility" means something like "the agent caused an event that wouldn't have happened without her". Then one could find counterexamples to this first formulation, and come up with a new formulation that got the new (and old) examples right ... and so on.

Comment author: gjm 06 May 2016 02:06:33AM 0 points [-]

The OP has explicitly denied committing the cherry pion fallacy here. I confess, though, that I'm not sure what point the OP is making by observing that grinding the universe to dust would not produce agenty dust. I can see two non-cherry-pion-fallacy-y things they might be saying -- "agency doesn't live at the microlevel, so stop looking at the microlevel for things you need to look further up for" and "agency doesn't live at the microlevel, but it's produced by the microlevel, so let's understand that and build up from there" -- but I don't see how to fit either of them into what comes before and after what the OP says about agenty dust. Gram_Stone, would you care to do some inferential-gap bridging?

Comment author: DanArmak 02 May 2016 09:18:38PM *  1 point [-]

Suppose you are jogging somewhere in order to make a donation to a foreign charity. The number of expected lives saved from your donation is 3. On the way, you witness a young child drowning in a river. You have a choice: continue on, expecting to save 2 lives overall. Or save the child, expecting to lose 2 lives overall.

Suppose you know there are three people being held hostage across the street, who will be killed unless the ransom money is delivered in the next ten minutes. You're running there with the money in hand; there's no-one else who can make it in time. On the way, you witness a young child drowning in a river. Do you abandon your mission to save the child?

I claim that many (most?) people would be much more understanding if I ignored the child in my example, than if I did so in yours. Do you agree?

The only difference between the two scenarios is that the hostages are concrete, nearby and the danger immediate, while the people you're donating to are far away in time and space and probably aren't three specific individuals anyway. And this engages lots of well known biases - or useful heuristics, depending on your point of view.

How would one argue that it's right to save the child in your example, and right to abandon it in mine? I think most people would (intuitively) try to deny the hypothetical: they would question how you can be so sure that your donation would save exactly three lives, and why making it later wouldn't work, and so on. But if they accept the hypothetical that you have a clear choice between the two, then what difference can motivate them, other than the near-far or specific people vs. statistic distinctions? What other rule can be guiding 'what is the right thing to do'? And do you accept this rule?

Comment author: entirelyuseless 02 May 2016 10:01:18PM *  1 point [-]

I agree that the differences are more or less what you say they are, and I think those differences can be enough to determine what is right and what is not. I do not think it has anything to do with being biased.

Comment author: DanArmak 04 May 2016 08:33:04AM 0 points [-]

Certainly, you can assign moral weight to strangers according to their distance from you, their concreteness, and their familiarity or similarity to you. That is what many people do, and probably everyone instinctively does it to some degree. Modern utilitarians, EAers, etc. don't pretend to be perfect; most of them just deviate a little bit from this default behavior.

One problem with this is that, in historically recent times, a very few people are sometimes placed in positions where they can (or must) decide the lives of billions. And then most people agree we would not want them to follow this rule. We don't want the only thing stopping nuclear first strikes to be the fear of retaliation; if Reagan had had a button which would instantly wipe out all USSR citizens with no fear of revenge strikes, we would want him to not press it for moral reasons.

Another problem is that it creates moral incentives not to cooperate. If two groups are contesting a vital resource, we'd rather they share it; we don't want them to each have moral incentives to go to war over it, because it's morally more important to have a vital resource for yourself than it is not to kill some strangers or deprive them of it.

A related problem is that the precise function with which moral weight falls off with distance has to be very finely tuned. Should it fall off with distance squared, or cubed, or what? Is there any way for two friends to convince one another whose moral rule is more exactly correct?

Comment author: entirelyuseless 05 May 2016 10:04:58PM 0 points [-]

I started to write a response to this and then deleted it because it grew to over a page and I wasn't close to being finished. Basically you are looking at things from a utilitarian point of view and would like a description of my position in terms of a utility function. But I don't accept that point of view, even if I understand it, and the most natural description of my way of acting isn't a utility function at all.

(I accept that to the degree that my actions are consistent, it is mathematically possible to describe those actions with a utility function -- but there is no necessary reason why that utility function would look very sensible, given that the agent is not actually using a utility function, but some other method, to make its choices.)

The simple answer (the full answer isn't simple) to your questions is that I should do the right thing in my life, which might involve giving money to strangers, but which probably does not involve giving 50% of it to strangers, and those few people who are in positions of power should do the right thing in their lives, which definitely does not normally involve wiping out countries.