torekp comments on My Kind of Moral Responsibility - Less Wrong Discussion

Post author: Gram_Stone, 02 May 2016 05:54AM

Comment author: entirelyuseless, 02 May 2016 12:54:02PM (1 point)

On the object level, I think you are almost completely wrong.

You say, "There is not one culpable atom in the universe." This is true, but your implied conclusion, that there are no culpable persons in the universe, is false. Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.

But if there are agents in the universe, and there are, then there can be good and bad agents, just as there are good and bad apples in the universe.

Richard Chappell, I think, has used Singer's own argument against him. Suppose you are jogging somewhere in order to make a donation to a foreign charity, and the expected number of lives saved by your donation is 3. On the way, you witness a young child drowning in a river. You have a choice: continue on, so that your donation saves 3 lives while the child drowns (a net of 2 lives saved); or save the child and forfeit the donation (a net of 2 lives lost, counting the 3 who go unsaved).

Everyone knows that the right choice here is to save the child, and that the utilitarian choice is wrong.

The utilitarian error is this: it asks, "What actions will have the most beneficial effects?" But that is the wrong question. The right question is, "What is the right thing to do?"

(Edit: there is another inconsistency in your way of thinking. If you assume there is no culpability in the universe because no atoms are culpable, then neither is it worthwhile to save human lives, because no atoms in the universe are worth bothering about.)

Comment author: torekp, 06 May 2016 01:28:14AM (1 point)

Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.

This. I call the inference "no X at the microlevel, therefore no such thing as X" the Cherry Pion fallacy. (As in: no cherry pions implies no cherry pie.) More broadly speaking, it's an instance of the fallacy of composition, but this variety seems more tempting than most, so it merits its own moniker.

It's a shame. The OP begins with some great questions, and goes on to consider relevant observations like

When we are sad, we haven't attributed the cause of the inciting event to an agent; the cause is situational, beyond human control. When we are angry, we've attributed the cause of the event to the actions of another agent.

But from there, the obvious move is one of charitable interpretation: Hey! Responsibility is ascribed in these sorts of situations, when an agent has caused an event that wouldn't have happened without her, so maybe "responsibility" means something like "the agent caused an event that wouldn't have happened without her." Then one could find counterexamples to this first formulation, come up with a new formulation that gets the new (and old) examples right ... and so on.

Comment author: gjm, 06 May 2016 02:06:33AM (0 points)

The OP has explicitly denied committing the cherry pion fallacy here. I confess, though, that I'm not sure what point the OP is making by observing that grinding the universe to dust would not produce agenty dust. I can see two non-cherry-pion-fallacy-y things they might be saying -- "agency doesn't live at the microlevel, so stop looking at the microlevel for things you need to look further up for" and "agency doesn't live at the microlevel, but it's produced by the microlevel, so let's understand that and build up from there" -- but I don't see how to fit either of them into what comes before and after what the OP says about agenty dust. Gram_Stone, would you care to do some inferential-gap bridging?