I think consequentialism describes only a subset of my wishes. For example, maximizing money is well modeled by it. But when I'm playing with something, it's mostly about the process, not the end result. Or when I want to respect the wishes of other people, I don't really know what end result I'm aiming for, but I can say what I'm willing or unwilling to do.
If I try to shoehorn everything into consequentialism, then I end up looking for "consequentialist permission" to do stuff. Like climbing a mountain: consequentialism says "I can put you on top of the mountain! Oh, that's not what you want? Then I can give you the feeling of having climbed it! You don't want that either? Then this is tricky..." This seems like a lot of work, just to do something I already want to do. There are many reasons to do things - not everything has to be justified by consequences.
There are of course objections. Objection one is that non-consequentialist wishes can make you go in circles, like that Greg Egan character who spent thousands of hours carving table legs, making himself forget each one so he could enjoy carving the next. But when pushed to such extremes, a consequentialist goal like maximizing happiness can also lead to weird results (vats of happiness goo...). And if we don't push quite so hard, then I can imagine utopia containing both consequentialist and non-consequentialist stuff, doing things for their own sake and such. So there's no difference here.
Objection two is that our wishes come from evolution, which wants us to actually achieve things, not go in circles. But our wishes aren't all perfectly aligned with evolution's wish (procreate more). They are a bunch of heuristics that evolution came up with, and a bunch of culturally determined stuff on top of that. So there's no difference here either - both our consequentialist and non-consequentialist wishes come from an equally messy process, so they're equally legitimate.
This post strikes me as saying something extremely obvious and uncontroversial, like “I care about what happens in the future, but I also care about other things, e.g. not getting tortured right now”. OK, yeah duh, was anyone disputing that??
I feel like you’re responding to an objection that doesn’t make sense in the first place for more basic reasons. Why is “going around in circles” bad? Well, it’s bad by consequentialist lights - if your preferences exclusively involve the state of the world in the distant future, then going around in circles is bad according to your preferences. But that’s begging the question. If you care about other things too, then there isn’t necessarily any problem with “going around in circles”. See my silly “restaurant customer” example here.
I'm thinking about cases where you want to do something, and it's a simple action, but the consequences are complex and you don't explicitly analyze them - you just want to do the thing. In such cases, I argue, reducing the action to its (more complex) consequences feels like shoehorning.
For example: maybe you wa...