These are some intuitions people often have:

  • You are not required to save a random person, but you are definitely not allowed to kill one
  • You are not required to create a person, but you are definitely not allowed to kill one
  • You are not required to create a happy person, but you are definitely not allowed to create a miserable one
  • You are not required to help a random person who will be in a dire situation otherwise, but you are definitely not allowed to put someone in a dire situation
  • You are not required to save a person in front of a runaway train, but you are definitely not allowed to push someone in front of a train. By extension, you are not required to save five people in front of a runaway train, and if you have to push someone in front of the train to do it, then you are not allowed.

Here are some more:

  • You are not strongly required to give me your bread, but you are not allowed to take mine
  • You are not strongly required to lend me your car, but you are not allowed to unilaterally borrow mine
  • You are not strongly required to send me money, but you are not allowed to take mine

The former are ethical intuitions. The latter are implications of a basic system of property rights. Yet they seem very similar. The ethical intuitions seem to just be property rights as applied to lives and welfare. Your life is your property. I’m not allowed to take it, but I’m not obliged to give it to you if you don’t by default have it. Your welfare is your property. I’m not allowed to lessen what you have, but I don’t have to give you more of it.

My guess is that these ethical asymmetries—which are confusing, because they defy consequentialism—are part of the mental equipment we have for upholding property rights.

In particular these well-known asymmetries seem to be explained well by property rights:

  • The act-omission distinction naturally arises where an act would involve taking someone else’s property (broadly construed—e.g. their life, their welfare), while an omission would merely fail to give them additional property (e.g. life that they are not by default going to have, additional welfare).
  • ‘The asymmetry’ between creating happy and miserable people is because to create a miserable person is to give that person something negative, which is to take away what they have, while creating a happy person is giving that person something extra.
  • Person-affecting views arise because birth gives someone a thing they don’t have, whereas death takes a thing from them.

Further evidence that these intuitive asymmetries are based on upholding property rights: we also have moral-feeling intuitions about more straightforward property rights. Stealing is wrong.

If I am right that we have these asymmetrical ethical intuitions as part of a scheme to uphold property rights, what would that imply?

It might imply something about when we want to uphold them, or consider them part of ethics, beyond their instrumental value. Property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources—which is to say, to somehow use the resources with low levels of conflict and destruction. They do not appear to be a system for people to achieve specific goals, e.g. whatever is actually good. Unless what is good is exactly the smooth sharing of resources.

I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons and just reason about the consequentialist value of upholding property rights? If we have the moral intuition, does that make the thing of moral value, regardless of its origins? Are pragmatic rules for social cohesion all that ethics is anyway? Questions for another time perhaps (when we are sorting out meta-ethics anyway).

A more straightforward implication is for how we try to explain these ethical asymmetries. If we have an intuition about an asymmetry which stems from upholding property rights, it would seem to be a mistake to treat it as evidence about an asymmetry in consequences, e.g. in value accruing to a person. For instance, perhaps I feel that I am not obliged to create a life, by having a child. Then—if I suppose that my intuitions are about producing goodness—I might think that creating a life is of neutral value, or is of no value to the created child. When in fact the intuition exists because allocating things to owners is a useful way to avoid social conflict. That intuition is part of a structure that is known to be agnostic about benefits to people from me giving them my stuff. If I’m right that these intuitions come from upholding property rights, this seems like an error that is actually happening.

37 comments

An alternative explanation for the act-omission distinction, from Joshua Greene's Moral Tribes (emphasis added):

Forget, for a moment, about morality. Why would an animal’s brain distinguish between things that it actively causes to happen and things that it merely allows to happen? Right now, as you read this book, you are actively causing your eyes to move across the page, actively causing the pages to turn, and so on. That’s what you’re doing. But think of all the things that you are not doing. You are not teaching a poodle to dance, not writing a fan letter to Rod Stewart, not juggling flaming torches, and not installing a hot tub in your basement. And that’s just the beginning. At any given moment, there are infinitely many things that you are not doing, and it would be impossible for your brain to represent all of them, or even a significant fraction of them. (Sound familiar?) What this means is that an agent’s brain must, in some sense, privilege actions over omissions. We have to represent actions in order to perform them, in order to make sure they go as planned, and to understand the actions of others. But we simply can’t keep track of all the things that we and others don’t do. This doesn’t mean that we can’t think about omissions, but it does mean that our brains have to represent actions and omissions in fundamentally different ways, such that representations of actions are more basic and accessible. [...]
...representing a specific goal-directed action, such as choosing a blue mug, is a fairly basic cognitive ability, an ability that six-month-old infants have. But representing an omission, a failure to do some specific thing, is, for humans, a less basic and more sophisticated ability. Note that this is not because representing an omission necessarily requires substantially more complex information processing. If there are only two possibilities—choosing A and not choosing A—then representing what is not done is not much harder than representing what is done. If you were programming a computer to monitor and predict someone’s two-alternative mug selections, you could program the computer to represent “didn’t choose the blue mug” almost as easily as “chose the blue mug.” (All you’d need is a little “not” operator to turn the latter representation into the former.) Nevertheless, it appears that humans find it much easier to represent what one does rather than what one doesn’t do. And that makes sense, given that in real life, it’s more important to keep track of the relatively few things that people do, compared with the millions of things that people could do but don’t.
The fact that babies represent doings more easily than non-doings makes a prediction about adults: When human adults distinguish between harmful actions and omissions (non-doings) in their moral judgments, it’s the result of automatic [intuitions], not the [conscious application of a formal principle making them different]. Cushman and I tested this prediction in a brain-imaging study in which people evaluated both active and passive harmful actions. As predicted, we found that ignoring the action/omission distinction—treating passive harm as morally equivalent to active harm—requires more [effortful dorsolateral prefrontal cortex] activity than abiding by the action/omission distinction. This makes sense, given that representations of omissions are inherently abstract. An action, unlike an omission, can be represented in a basic sensory way. It’s easy, for example, to draw a picture of someone running. But how do you draw a picture of someone not running? You can draw a picture of someone standing still, but this will convey something like “person” or “woman” or “standing” rather than “not running.” The conventional way to represent what something is not is to use an abstract symbol, such as a circle with a slash through it, conjoined with a conventional image. But a conventional image can’t do the job by itself. You need an abstract symbol.
Actions, in addition to having natural sensory representations, also have natural motor representations. Reading words like “lick,” “pick,” or “kick” automatically increases activation in the subregions of the motor cortex that control, respectively, the tongue, fingers, and feet. But there is no part of the brain that ramps up when people think about actions that do not involve the tongue (etc.) because there is no part of the brain specifically devoted to performing actions that do not involve the tongue.
As we saw earlier, our emotions, and ultimately our moral judgments, seem to be sensitive to the sensory and motor properties of actions, to things like pushing. (And to visual imagery of pushing; see pages 46–48.) Omissions, unlike actions, have no distinctive sensory and motor properties, and must therefore lack at least one kind of emotional trigger. Moreover, this basic sensory/motor distinction between actions and omissions may carry over into the realm of more physically amorphous behaviors, depending on how they are conceptualized. For example, the idea of “firing” someone (active) feels worse than “letting someone go” (passive). This parallels the results of a study by Neeru Paharia, Karim Kassam, Max Bazerman, and myself, showing that jacking up the price of cancer drugs feels less bad if it’s done indirectly through another agent, even if the physical action itself is no more indirect.
The hypothesis, then, is that harmful omissions don’t push our emotional moral buttons in the same way that harmful actions do. We represent actions in a basic motor and sensory way, but omissions are represented more abstractly. Moreover, this difference in how we represent actions and omissions has nothing to do with morality; it has to do simply with the more general cognitive constraints placed on our brains—brains that couldn’t possibly keep track of all the actions we fail to perform and that originally evolved as sensory and motor devices, not as abstract thinking devices. Once again, it seems that a hallowed moral distinction may simply be a cognitive by-product. (But, as I’ll explain shortly, there is room for some utilitarian accommodation of the action/omission distinction.)
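A purely illustrative sketch of Greene's blue-mug point, with invented class names (nothing here is from the book): the representation of an omission is just the representation of the action wrapped in a "not" operator.

```python
# Representing an omission as "action plus a little 'not' operator".
# Illustrative only; the names and structure are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class Chose:
    item: str

@dataclass(frozen=True)
class Not:            # the little "not" operator
    event: Chose

did = Chose("blue mug")            # "chose the blue mug"
did_not = Not(Chose("blue mug"))   # "didn't choose the blue mug": same content, one extra wrapper
```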

Crossposted from Katja's blog:

The root problem here is that the category “moral” lumps together (a) intuitions about what’s intrinsically valuable, (b) intuitions about what the correct coordination protocols are, and (c) intuitions about what’s healthy for a human.

Kantian morality, like the property intuitions you’ve identified, is about (b) (“don’t lie” doesn’t fail gracefully in a mixed world, but makes sense and is coherent as a proposed operating protocol), while Rawlsian morality and the sort of utilitarian calculus people are trying to derive from weird thought experiments about trolleys is about (a) (questions about things like distribution presuppose that we already have decent operating protocols to enable a shared deliberative mechanism, rather than a state of constant epistemic war).

I mean, yes, but I'm not sure this much impacts Katja's analysis which is mostly about moral intuitions that are in conflict with moral reasoning. That the category of things we consider when talking about morals, ethics, and axiology is not clean cut (other than perhaps along the lines of being about "things we care about/value") doesn't really change the dissonance between intuition and reasoning in particular instances.

I think that the sort of division I'm proposing offers a way to decompose apparently incoherent "moral intuitions" into much more well-defined and coherent subcategories. I think that if someone practiced making this sort of distinction, they'd find this type of dissonance substantially reduced.

In other words, I'm interpreting the dissonance as evidence that we're missing an important distinction, and then proposing a distinction. In particular I think this is a good alternative to Katja's proposed writeoff of intuitions that can be explained away by e.g. property rights.

That's flattering to Rawls, but is it actually what he meant?

Or did he just assume that you don't need a mutually acceptable protocol for deciding how to allocate resources, and you can just skip right to enforcing the desirable outcome?

A couple of guesses for why we might see this, which don't seem to depend on property:

  • An obligation to act is much more freedom-constraining than a prohibition on an action. The more one considers all possible actions with the obligation to take the most ethically optimal one, the less room one has for exploration, contemplation, or pursuing one's own selfish values. A prohibition on an action does not have this effect.
  • The environment we evolved in had roughly the same level of opportunity to commit harmful acts, but far less opportunity to take positive consequentialist action (and far less complicated situations to deal with). It was always possible to hurt your friends and suffer consequences, but it was rare to have to think about the long-term consequences of every action.
  • The consequences of killing, stealing, and hurting people are easier to predict than those of altruistic actions. Resources are finite, so sharing them can be harmful or beneficial, depending on the circumstances and who they are shared with. Other people can defect or refuse to reciprocate. If you hurt someone, they are almost guaranteed to retaliate. If you help someone, there is no guarantee there will be a payoff for you.

I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons ... ?

All moral intuitions evolved for not-actually-moral reasons, because evolution is an amoral process. That is not a reason to write any of them off, though. Or perhaps I should say, it is only a reason to "write them off" to the extent that it feels like it is, and the fact that it sometimes does, to some people, is as fine an example as any of the inescapable irrationality of moral intuitions.

If we have the moral intuition, does that make the thing of moral value, regardless of its origins?

Why would one ever regard anything as having moral value, except as a consequence of some moral intuition? And if one has a second moral intuition to the effect that the first moral intuition is invalid on account of its "origins," what is one to do, except reflect on the matter, and heed whichever of these conflicting intuitions is stronger?

This actually gets at a deeper issue, which I might as well lay out now, having to do with my reasons for rejecting the idea that utilitarianism, consequentialism, or really any abstract principle or system of ethics, can be correct in a normative sense. (I think I would be called a moral noncognitivist, but my knowledge of the relevant literature is tissue-thin.) On a purely descriptive level, I agree with Kaj's take on the "parliamentary model" of ethics: I feel (as I assume most humans do) a lot of distinct and often conflicting sentiments about what is right and wrong, good and bad, just and unjust. (I could say the same about non-ethical value judgements, e.g. beautiful vs. ugly, yummy vs. yucky, etc.) I also have sentiments about what I want and don't want that I regard as being purely motivated by self-interest. It's not always easy, of course, to mark the boundary between selfishly and morally motivated sentiments, but to the extent that I can, I try to disregard the former when deciding what I endorse as morally correct, even though selfishness sometimes (often, tbh) prevails over morality in guiding my actions.

On a prescriptive level, on the other hand, I think it would be incoherent for me to endorse any abstract ethical principle, except as a rule of thumb which is liable to admit any number of exceptions, because, in my experience, trying to deduce ethical judgements from first principles invariably leads to conclusions that feel wrong to me. And whereas I can honestly say something like "Many of my current beliefs are incorrect, I just don't know which ones," because I believe that there is an objective physical reality to which my descriptive beliefs could be compared, I don't think there is any analogous objective moral reality against which my moral opinions could be held up and judged correct or incorrect. The best I can say is that, based on past experience, I anticipate that my future self is likely to regard my present self as morally misguided about some things.

Obviously, this isn't much help if one is looking to encode human preferences in a way that would be legible to AI systems. I do think it's useful, for that purpose, to study what moral intuitions humans tend to have, and how individuals resolve internal conflicts between them. So in that sense, it is useful to notice patterns like the resemblance between our intuitions about property rights and the act-omission distinction, and try to figure out why we think that way.

My guess is that these ethical asymmetries—which are confusing, because they defy consequentialism

Ponens or tollens? A standard criticism of consequentialism is that it defies these asymmetries.

I recently heard the issue put to Peter Singer, that his ethics appear to require caring not a fraction more about your own children than about any other child in the world, and I did not hear a straight answer. He just said that that would be another valid way of acting. The radio programme containing this interview is downloadable.

You're talking about altruism. Trolley problems are about consequentialism proper: they are problematic even if you're pretty selfish, as long as we can find some five people whose importance to you is about equal.

The OP draws this tension between consequentialism and ethical asymmetry, and mentions the trolley problem in that context. Therefore, the particular consequentialism under discussion is one which does enjoin symmetry; that is, altruism. We are not talking about consequentialism with respect to arbitrary utility functions here.

Indeed, the large number of people who switch but don't push seem to care enough about strangers to demonstrate the issue.

In the usual hypothetical, the people on the trolley tracks are strangers and thus their importance to you is already about equal. Shouldn't you be asking that we find people whose importance to you is large? Kurzban-DeScioli-Fein (summary table) ask the question in terms of pushing a friend to save five friends and find that this reduces the discrepancy between switch and push. (Well, how do you define the discrepancy? It reduces the additive discrepancy, but not the logit discrepancy.) In a different attempt to isolate consequences from rules, they ask whether they would want someone else in the trolley problem to take the action. They find a discrepancy with pushing, but substantially smaller than the push/switch discrepancy.
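To make the additive-versus-logit distinction concrete, here is a small sketch with made-up approval rates (not the Kurzban-DeScioli-Fein numbers), chosen so that the additive gap shrinks in the friend condition while the logit gap barely moves:

```python
# Hypothetical switch/push approval rates; these are NOT the study's data,
# they only illustrate how an additive gap can shrink while a logit gap doesn't.
import math

def logit(p):
    return math.log(p / (1 - p))

p_switch, p_push = 0.90, 0.55            # strangers condition (made up)
p_switch_f, p_push_f = 0.97, 0.80        # friends condition (made up)

print(p_switch - p_push, p_switch_f - p_push_f)      # additive gap: 0.35 -> 0.17
print(logit(p_switch) - logit(p_push),
      logit(p_switch_f) - logit(p_push_f))           # logit gap: ~2.0 -> ~2.1
```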

At least where I live, two out of three of those property rights are wrong.

Property rights explicitly give way to more basic human rights. For instance, you are allowed to steal a car if it's the only way that you can get an injured person to a hospital. And of course, you're allowed to steal bread if that's the only way you can get to eat.

I suspect property rights are just a subset of moral intuitions, both coming from the same cognitive and social causes rather than one coming from the other. http://www.daviddfriedman.com/Academic/Property/Property.html doesn't need much modification to apply to many moral questions.

The basic asymmetry you're pointing out (not forced to give, but not allowed to take; not forced to act for good, but prevented from acting for bad) could be framed as simple humility - we don't know enough to be sure, so bias toward doing nothing. Or it could be a way to create a ratchet effect - never act to make it worse, but sometimes act to make it better. Or it could be an evolved way to maintain power structures.

On deeper reflection, it's clear that moral intuitions aren't always what we'd choose as a rational moral framework. It seems likely that this distinction between action and inaction is an artifact, not a truth. Inaction _is_ action, and you're responsible for all the harm you fail to prevent.

I'm attracted to viewing these moral intuitions as stemming from intuitions about property because the psychological notion of property biologically predates the notion of morality. Territorial behaviors are found in all kinds of different mammals, and prima facie the notion of property seems to be derived from such behaviors. The claim, then, is that during human evolution, moral psychology developed in part by coopting the psychology of territory.

I'm skeptical that anything normative follows from this though.

That means FAI might want to give us territoriality or some extrapolation of it, if that's part of what we enjoy and want. Not sure there's any deeper meaning to "normativity".

I would guess this is something someone has already explored somewhere, but the act-omission distinction seems a natural consequence of the intractability of "actions not taken"?

The model is this: the moral agent takes a sample from some intractably huge action space, evaluates each sampled action by some moral function M (for example, by rejection sampling based on utility), and does something.

From an external perspective, morality likely is about the moral function M (and evaluating agents based on that), in contrast to evaluating them based on the sampling procedure.
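A minimal sketch of that model, with placeholder names and thresholds (nothing here is meant as a real moral function):

```python
# Toy version of the sampling model: candidate actions are drawn from a huge
# action space, filtered by a moral function M, and one survivor is performed.
import random

def M(action):
    """Stand-in moral evaluation: a score in [0, 1]."""
    return action["utility_to_others"]

def act(sample_action, n_samples=100, threshold=0.5):
    candidates = [sample_action() for _ in range(n_samples)]
    permitted = [a for a in candidates if M(a) >= threshold]   # rejection step
    return random.choice(permitted) if permitted else None     # "does something"

sample_action = lambda: {"utility_to_others": random.random()}
chosen = act(sample_action)

# Actions never drawn by the sampler are never evaluated by M at all, so an
# act/omission asymmetry falls out of the sampling procedure rather than of M.
```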

Something in this view feels a bit circular to me, correct me if I'm way off mark.

Question: why assume that moral intuitions are derived from pre-existing intuitions for property rights, and not the other way around?

Reply: because property rights work ("property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources"), and if they are based on some completely unrelated set of intuitions (morality) then that would be a huge coincidence.

Re-reply: yeah, but it can also be argued that morality 'at least appears to be a system for people with diverse goals to coordinate use of scarce resources', those resources being life and welfare. More "moral" societies seem to face less chaos and destruction, after all. It works too. It could be that these came first, and property rights followed. It even makes more evolutionary/historical sense.

So in other words, we may be able to reduce the entire comparison to just saying that moral intuitions are based on a set of rules of thumb that helped societies survive (much like property rights helped societies prosper), which is basically what every evolutionist would say when asked what's the deal with morality.

And this issue is totally explored already, the general answers ranging from consequentialism - our intuitions, whatever their source, are just suggestions that need to be optimized on the basis of the outcomes of each action - to trolley-problem-morals - we ought to explore the bounds and specifics of our moral intuitions and build our ethics on top of that.

The OP is basically the fairly standard basis of American-style libertarianism.

It doesn't particularly "defy consequentialism" any more than listing the primary precepts of utilitarian consequentialist groups defies deontology.

But I don't think the moral intuitions you list are terribly universal.

The closest parallel I can think of is someone taking contemporary American copyright law and listing its norms as if they're some kind of universally accepted system of morals.

"but you are definitely not allowed to kill one"

Johnny thousand livers is of course an exception.

Or put another way, if you say to most people,

"ok, so you're in a scenario a little bit like the films Armageddon or deep impact. Things have gone wrong but it's a smaller rock and and all you can do at this point is divert it or not, it's on course for new york city, ten million+ will die, you have the choice to divert it to a sparsely populated area of the rocky mountains... but there's at least one person living there"

Most of the people who would normally declare that the trolley problem with 1 vs 5 makes it unethical to throw that one person in front of the trolley... will change their view once the difference in the trade is large enough.

1 vs 5 isn't big enough for them, but the idea of tens of millions will suddenly turn them into consequentialists.

"You are not required to save a random person"

Also, this is a very non-universal viewpoint. Show people that video of the Chinese kid being run over repeatedly while people walk past ignoring her cries, and many will declare that the passers-by who ignored the child committed a very clear moral infraction.

"Duty of care" is not popular in american libertarianism but it and variations is a common concept in many countries.

The deliberate failure to provide assistance in the event of an accident is a criminal offence in France.

In many countries if you become aware of a child suffering sexual abuse there are explicit duties to report.

And once you accept the fairly commonly held concept of "duty of care", the idea that you actually do have a duty to others, the absolutist property stuff largely falls apart, and it becomes entirely reasonable to require some people to give up some fraction of their property to provide care for those around them, just as it's reasonable to expect them to help an injured toddler out of the street, or to help the victim of a car accident, or to let the authorities know if they find out that a kid is being raped.

"Duty" or similar "social contract" precepts that imply that you have some positive duties purely by dint of being a human with the capacity to intervene tend to be rejected by the american libertarian viewpoint but it's a very very common aspect of the moral intuitions of a large fraction of the worlds population.

It's not unlimited and it tends towards Newtonian Ethics but moral intuitions aren't known for being perfectly fair.

An interesting observation. A somewhat weird source of information on this may come from societies with slavery (that is, most human societies), where people's lives could be the property of someone else.

‘The asymmetry’ between creating happy and miserable people is because to create a miserable person is to give that person something negative, which is to take away what they have, while creating a happy person is giving that person something extra.

This seems to me to be an extremely strained interpretation.

There is another interpretation, which is that strong property rights *are* moral. I am currently 80% through Atlas Shrugged, which makes a very strong case for this interpretation. Basically, when you take away property rights, whether the material kind, one's own labor, or the spiritual kind, you give power to those who are best at taking. Ayn Rand presents the results of this kind of thinking, the actions that follow from it, and the society it creates. I strongly recommend reading it.

I was confused about the title until I realized it means the same thing as "Do ethical asymmetries come from property rights?"

So these ‘moral’ intuitions

The sentence seems to be cut off in the middle

Testable implication: communities that strongly emphasize upholding property conventions will contain more individuals who share these intuitions, while communities that do not will contain fewer.

Don't you agree?

I think this is absolutely right. I believe property rights — by which I mean the more general thing of having things (controllables/observables/responsibilities) that are yours and don't belong to anyone else — are the drama/conflict-minimizing strategy.

Violating property rights is attempting to control another person, i.e., to violate their sovereignty. 1) It is impossible to 100% control other people anyway; 2) people usually fight back when you try to control them.

I'm drafting a sequence that is very closely related to this for minimizing social conflict, and also explaining why it is my favorite near-term AI safety agenda.


Perhaps I'm being a bit dense here, but I have some difficulty seeing a real asymmetry here -- though I do see that these are commonly understood views. They seem to be something along the lines of a logical fallacy ("if p then q; q, therefore p"), and then noting that we have q but cannot find p and calling that an asymmetry.

From the PR view you are suggesting, I'm wondering about the concept of killing and owning oneself. I would think the moral/ethical asymmetry is related to killing others versus killing oneself. If we "own" ourselves then we can make the PR argument about not killing others -- just like not taking their car or money. But that view implies we can kill ourselves (or sell ourselves into slavery, for that matter). That we have constraints on killing ourselves seems to be the asymmetric setting from a PR view.

• You are not required to create a happy person, but you are definitely not allowed to create a miserable one

Who's going around enforcing this rule? There's certainly a stigma attached to people having children when those children will predictably be unhappy, but most people aren't willing to resort to, e.g., nonconsensual sterilization to enforce it, and AFAIK we haven't passed laws to the effect that people can be convicted, under penalty of fine or imprisonment, of having children despite knowing that those children would be at high risk of inheriting a severe genetic disorder, for example. Maybe this is just because it's hard to predict who will have kids, when they will have them, and how happy those kids will be, thereby making enforcement efforts unreasonably costly and invasive? I don't know, just commenting because this supposed norm struck me as much weaker than the other ones you name. Very interesting post overall though, this isn't meant as criticism.


It is not even a norm.

If I marry my true love, someone else who loves my spouse may feel miserable as a result. No one is obligated to avoid creating this sort of misery in another person. We might quibble that such a person is immature and taking the wrong attitude, but the "norm" does not make exceptions where the victims are complicit in their own misery; it just prohibits anyone from causing it.

We might be able to construct a similar thought experiment for "dire situations". If I invent a new process that puts you out of business by attracting all your customers, your situation may become dire, due to your sudden loss of income. Am I obligated in any way to avoid this? I think not.

Those two norms (don't cause misery or dire situations) only work as local norms, within your local sphere of intimate knowledge. In a large-scale society, there is no way to assure that a particular decision won't change something that someone depends upon emotionally or economically. This is just a challenge of cosmopolitan life, that I have the ultimate responsibility for my emotional and economic dependencies, in the literal sense that I am the one who will suffer if I make an unwise or unlucky choice. I can't count on the system (any system) to rectify my errors (though different systems may make my job harder or easier).


Oops, I misinterpreted "create", didn't I?

My quibble still works. I couldn't know for sure while trying to conceive a child that my situation would necessarily continue to be sufficient to care for that child (shit can happen to anyone). Even if my circumstances continue as expected, my children may develop physical or mental problems that could make them miserable. It's not a yes/no question, it's a "how much risk" question. Where do we draw the line between too much risk and a reasonable risk?

… these ethical asymmetries—which are confusing, because they defy consequentialism …

What? No they don’t. Why do you say this?

I think they sometimes do, or at least it is eminently plausible that they sometimes do. The classic trolley problem (especially in its bridge formulation) is widely considered an example of a way in which the act-omission distinction is at odds with consequentialism. I'm sure you're aware of the trolley problem, so I'm not bringing it up as an example I think you're not aware of, but more to note that I'm confused as to why, given that you're aware of it, you think it doesn't defy consequentialism.

For another example, on one plausible theory in population ethics (the total view), creating a happy person at happiness level x adds to the total amount of happiness in the world, and is therefore just as valuable as increasing an existing person's level of happiness by x. Thus, not creating this person when you could goes against consequentialism.
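A toy version of the total-view bookkeeping described above, with made-up welfare numbers:

```python
# Total-view arithmetic from the paragraph above; all welfare numbers are made up.
# Creating a new person at welfare x raises total welfare exactly as much as
# giving an existing person x more welfare, so the total view ranks them equally.
existing = [5.0, 5.0]                 # welfare of existing people
x = 3.0

raise_existing = sum(existing) + x    # 13.0
create_new = sum(existing + [x])      # 13.0

assert raise_existing == create_new
```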

There are ways to argue that these asymmetries are actually optimal from a consequentialist perspective, but it seems to me the default view would be that they aren't, so I'm confused why you think that they so obviously are. (I'm not sure that the fact that these asymmetries defy consequentialism would make them confusing--I don't think (most) humans are intuitive consequentialists, at least not about all cases, so it seems to me not at all confusing that some of our intuitions would prescribe actions that aren't optimal from a consequentialist perspective.)

The classic trolley problem (especially in its bridge formulation) is widely considered an example of a way in which the act-omission distinction is at odds with consequentialism.

It is no such thing. Anyone who considers it thus, is wrong.

A world where a bystander has murdered a specific fat person by pushing him off a bridge to prevent a trolley from hitting five other specific people, and a world where a trolley was speeding toward a specific person, and a bystander has done nothing at all (when he could, at his option, have flipped a switch to make the trolley crush five other specific people instead), are very different worlds. That means that the action in question, and the omission in question, have different consequences.

For another example, on one plausible theory in population ethics (the total view), creating a happy person at happiness level x adds to the total amount of happiness in the world, and is therefore just as valuable as increasing an existing person’s level of happiness by x.

Valuable to whom?

Thus, not creating this person when you could goes against consequentialism.

No, it doesn’t. This scenario is nonsensical for various reasons (incomparability of “level of happiness” and general implausibility of treating “level of happiness” as a ratio scale is one big one), but from a person-centered view (which is the only kind of view that isn’t absurd), these are vastly different consequences.

… one plausible theory in population ethics (the total view) …

The total view (construed in the way that is implied by your comments) is not a plausible theory.

A world where a bystander has murdered a specific fat person by pushing him off a bridge to prevent a trolley from hitting five other specific people, and a world where a trolley was speeding toward a specific person, and a bystander has done nothing at all (when he could, at his option, have flipped a switch to make the trolley crush five other specific people instead), are very different worlds. That means that the action in question, and the omission in question, have different consequences.

Technically it is true that there are different consequences, but a) most consequentialists don't think that the differences are very morally relevant, and b) you can construct examples where these differences are minimised without changing people's responses very much. For instance, by specifying that you would be given amnesic drugs after the trolley problem, so that there's no difference in your memories.

The total view (construed in the way that is implied by your comments) is not a plausible theory.

Yet many people seem to find it plausible, including me. Have you written up a justification of your view that you could point me to?

a) most consequentialists don’t think that the differences are very morally relevant

That may very well be, but if—for instance—the “most consequentialists” to whom you refer are utilitarians, then the claim that their opinion on this is manifestly nonsensical is exactly the claim I am making in the first place… so any such majoritarian arguments are unconvincing.

For instance, by specifying that you would be given amnesic drugs after the trolley problem, so that there’s no difference in your memories.

The more outlandish you have to make a scenario to elicit a given moral intuition, the less plausible that moral intuition is, and the less weight we should assign to it. In any case, even if the consequences are the same in the modified scenario, that in no way at all means that they’re also the same in the original, unmodified, scenario.

The total view (construed in the way that is implied by your comments) is not a plausible theory.

Yet many people seem to find it plausible, including me. Have you written up a justification of your view that you could point me to?

Criticisms of utilitarianism (or even of total utilitarianism in particular, or of other similarly aggregative views) are not at all difficult to find. I don’t, in principle, object to providing references for some of my favorite ones, but I won’t put in the effort to do so if the request to provide them is made only as a rhetorical move. So, are you asking because you haven’t encountered such criticisms? or because you have, but found them unconvincing (and if so, which sort have you encountered)? or because you have, and are aware of convincing counterarguments?

(To be clear: for my part, I have never encountered convincing responses to any of [what I consider to be] the standard criticisms. At most, there are certain evasions[1], or handwaving, etc.)

Everyone's being silly. Consequentialism maximizes the expected utility of the world. Said understands "world" to mean "universe configuration history". The others understand "world" to mean "universe configuration".

Said, your "[1]" is not a link.

Consequentialism maximizes the expected utility of the world.

Consequentialist moral frameworks do not require the agent to have[1] a utility function. Without a utility function, there is no “expected utility”.

In general, I would advise avoiding such “technical-language” rephrasings of standard definitions; they often (such as here) create inaccuracies where there were none.

Said understands “world” to mean “universe configuration history”. The others understand “world” to mean “universe configuration”.

Unless you’re positing a last-Thursdayist sort of scenario where we arrive at some universe configuration “synthetically” (i.e., by divine fiat, rather than by the universe evolving into the configuration “naturally”), this distinction is illusory. Barring such bizarre, wholly hypothetical scenarios, you cannot get to a state where, for instance, people remember an event happening, there’s records and other evidence of the event happening, etc., without that event actually having happened.

Said, your “[1]” is not a link.

It wasn’t meant to be a link, it was meant to be a footnote reference (as in this comment); however, I seem to have forgotten to add the actual footnote, and now I don’t remember what it was supposed to be… perhaps something about so-called “normalizing assumptions”? Well, it’s not critical.


[1] Here “have” should be taken to mean “have preferences that, due to obeying certain axioms, may be transformed into”.

I only meant to unpack consequentialism's definition in order to get a handle on the "world" term. I'm fine with "Consequentialism chooses actions based on their consequences on the world.".

The distinction is relevant for, for example, whether to care about an AI simulating humans in detail in order to figure out their preferences.

Quantum physics combines amplitudes of equal universe configurations regardless of their history. A quantum computer could arrive in the same state through different paths, some of which had it run morally relevant algorithms.

Even if the distinction is illusory, it seems to be the crux of everyone's disagreement.