I think consequentialism describes only a subset of my wishes. For example, maximizing money is well modeled by it. But when I'm playing with something, it's mostly about the process, not the end result. Or when I want to respect the wishes of other people, I don't really know what end result I'm aiming for, but I can say what I'm willing or unwilling to do.

If I try to shoehorn everything into consequentialism, then I end up looking for "consequentialist permission" to do stuff. Like climbing a mountain: consequentialism says "I can put you on top of the mountain! Oh, that's not what you want? Then I can give you the feeling of having climbed it! You don't want that either? Then this is tricky..." This seems like a lot of work, just to do something I already want to do. There are many reasons to do things - not everything has to be justified by consequences.

There are of course objections. Objection one is that non-consequentialist wishes can make you go in circles, like that Greg Egan character who spent thousands of hours carving table legs, making himself forget the last time so he could enjoy the next. But when pushed to such extremes, a consequentialist goal like maximizing happiness can also lead to weird results (vats of happiness goo...). And if we don't push quite so hard, then I can imagine utopia containing both consequentialist and non-consequentialist stuff, doing things for their own sake and such. So there's no difference here.

Objection two is that our wishes come from evolution, which wants us to actually achieve things, not go in circles. But our wishes aren't all perfectly aligned with evolution's wish (procreate more). They are a bunch of heuristics that evolution came up with, and a bunch of culturally determined stuff on top of that. So there's no difference here either - both our consequentialist and non-consequentialist wishes come from an equally messy process, so they're equally legitimate.

14 comments:

For this argument, consequentialism is like the kinetic theory of gases. The point is not that it's wrong and doesn't work (where it should), but that it's not a relevant tool for many purposes.

I started giving up on consequentialism when thinking about concepts of alignment like corrigibility and then membranes (respect for autonomy). They could in principle be framed as particular preferences, but that doesn't appear to be a natural way of thinking about them, or of formulating them more clearly. Even in decision theory, with the aim of bringing certain outcomes to pass, my current preferred ontology of simulation-structure of things points more towards convincing other computations to move the world in certain ways than towards anticipating their behavior before they decide what it should be themselves. It's still a sort of "consequentialism", but the property of preferences being unchanging is not a centerpiece, and the updateless manipulation of everything else is more of a technical error (like two-boxing in ASP) than a methodology.

In human thinking, issues with consequentialism seem to be about losing sight of chasing the void. Reflectively endorsed hedonistic goals (in a broad sense, which could include enjoyment of achievement) are a bit of a dead end: they deny the process of looking for different kinds of aims, and sometimes shade into cynical reveling in knowing the secrets of human nature.

Yeah, I've been thinking along similar lines. Consequentialism stumbles on the richness of other creatures, and ourselves. Stumbles in the sense that many of our wishes are natively expressed in our internal "creature language", not the language of consequences in the world.

We often imagine a "consequence" as a state of the world at a particular time. But we could also include processes that stretch out in time under the label "consequence". More generally, we could allow the truth of any proposition as a potential consequence. This wouldn't be restricted to a state, or even to a single process.

I think this is intuitive. Generally, when we want something, we do wish for something to be true. E.g. I want to climb a mountain: I want it to be true that I climb a mountain.
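To sketch this formally (notation mine, as one possible reading of the comment): let $S$ be the set of world-states and $\Omega$ the set of complete world-histories. The generalization is the move from scoring end states to scoring whole histories,

\[
u_{\text{state}} : S \to \mathbb{R}
\qquad \text{vs.} \qquad
u_{\text{hist}} : \Omega \to \mathbb{R},
\]

where a "consequence" is any proposition, i.e. any set of histories $A \subseteq \Omega$ that could turn out to be true. "I want it to be true that I climb the mountain" then asks only that the realized history land in the set of histories containing a climb; no particular end state or single process is picked out.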

Yeah, you can say something like "I want the world to be such that I follow deontology" and then consequentialism includes deontology. Or you could say "it's right to follow consequentialism" and then deontology includes consequentialism. Understood this way, the systems become vacuous and don't mean anything at all. When people say "I'm a consequentialist", they usually mean something more: that their wishes are naturally expressed in terms of consequences. That's what my post is arguing against. I think some wishes are naturally consequentialist, but there are other equally valid wishes that aren't, and expressing all wishes in terms of consequences isn't especially useful.

This reminds me of the puzzle: why is death bad? After all, when you are dead, you won't be around to suffer from it. Or why worry about not being alive in the future when you weren't alive before birth either? Simple response: We just don't want to be dead in the future for evolutionary reasons. Organisms who hate death had higher rates of reproduction. What matters for us is not a fact about the consequence of dying, but what we happen to want or not want. (Related: this, but also this.)

I think consequentialism is the robust framework for achieving goals, and I think my top goal is the flourishing of human values (most of them, the ones compatible with me).

That uses consequentialism as the ultimate lever to move the world but refers to consequences that are (almost) entirely the results of our biology-driven thinking and desiring and existing, at least for now.

This may be a complaint about legibilism, not specifically consequentialism. Gödel was pretty clear: any sufficiently powerful formal system is either incomplete or inconsistent. Any moral or decision system that demands that everything important about a decision be clear and well-understood is going to have similar problems. Your TRUE reasons for a lot of things are not accessible, so you will look for legible reasons to do what you want, and you will find yourself a rationalizing agent, rather than a rational one.

That said, consequentialism is still a useful framework for evaluating how closely your analytic self matches with your acting self.  It's not going to be perfect, but you can choose to get closer, and you can get better at understanding which consequences actually matter to you.

Climbing a mountain has a lot of consequences that you didn't mention, but probably should consider.  It connects you to people in new ways.  It gives you interesting stories to tell at parties.  It's a framework for improving your body in various ways.  If you die, it lets you serve as a warning to others.  It changes your self-image (honestly, this one may be the most important impact).  

Maybe. Or maybe the wish itself is about climbing the mountain, just like it says, and the other benefits (which you can unwind all the way back to evolutionary ones) are more like part of the history of the wish.

Quite possibly, but without SOME framework for evaluating wishes, it's hard to know which wishes (even of oneself) to support and which to fight/deprioritize.

Humans (or at least this one) often have desires or ideas that aren't, when considered, actually good ideas.  Also, humans (again, at least this one) have conflicting desires, only a subset of which CAN be pursued.  

It's not perfect, and it doesn't work when extended too far into the tails (because nothing does), but consequentialism is one of the better options for judging one's desires and picking which to pursue.

This is tricky. In the post I mentioned "playing", where you do stuff without caring about any goal, and most play doesn't lead to anything interesting. But it's amazing how many of humanity's advances were made in this non-goal-directed, playing mode. This is mentioned, for example, in Feynman's Surely You're Joking, Mr. Feynman! (the bit about the wobbling plate).

Doesn’t rule consequentialism (as opposed to act consequentialism) solve all of these problems (and also all[1] other problems that people sometimes bring up as alleged “arguments against consequentialism”)?


  1. Approximately all.

This post strikes me as saying something extremely obvious and uncontroversial, like “I care about what happens in the future, but I also care about other things, e.g. not getting tortured right now”. OK, yeah duh, was anyone disputing that??

"non-consequentialist wishes can make you go in circles"

I feel like you’re responding to an objection that doesn’t make sense in the first place for more basic reasons. Why is “going around in circles” bad? Well, it’s bad by consequentialist lights: if your preferences exclusively involve the state of the world in the distant future, then going around in circles is bad according to your preferences. But that’s begging the question. If you care about other things too, then there isn’t necessarily any problem with “going around in circles”. See my silly “restaurant customer” example here.

To use a physics analogy, utility often isn't a potential function over states of affairs; for many people, it depends on the path taken.

However, the state of affairs is but a projection; the state of the world also includes mind states, and you might be indifferent between any quantum paths to worlds involving the same mind state (including memory and beliefs) for you. (As a matter of values, I am not indifferent between paths myself; rather, I endorse some integrated utility up to an unspecified pinning point in the future.)
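Spelling out the analogy (my formulation of the commenter's point, not a standard result): a state-based utility behaves like a potential, so the value of a trajectory $\gamma$ from $s_0$ to $s_1$ depends only on its endpoints, while a path-dependent utility behaves like the work done by a non-conservative force field,

\[
V(\gamma) = U(s_1) - U(s_0)
\qquad \text{vs.} \qquad
V(\gamma) = \int_{\gamma} F \cdot \mathrm{d}s
\quad \text{with, in general,} \quad
\oint F \cdot \mathrm{d}s \neq 0.
\]

The nonzero loop integral is exactly the "going in circles" case from the post: a closed path can carry positive value, which no potential function over states can represent. The "integrated utility up to a pinning point" above is the line-integral form with an explicit upper limit.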

Broadly, consequentialism requires us to ignore many of the consequences of choosing consequentialism. And since consequences are what matter in consequentialism, it is to that exact degree self-refuting. Other ethical systems like Deontology and Virtue Ethics are not self-refuting in this way, and thus should be preferred to the degree that we can't show similar fatal weaknesses in them. (Virtue Ethics is the most flexible system to consider, as you can simply include other systems as virtues! Considering the consequences is virtuous, just not the only virtue! Coming up with broadly applicable rules that you follow even when they aren't what you most prefer is a combination of honor and duty, both virtues.)