I think consequentialism describes only a subset of my wishes. For example, maximizing money is well modeled by it. But when I'm playing with something, it's mostly about the process, not the end result. Or when I want to respect the wishes of other people, I don't really know what end result I'm aiming for, but I can say what I'm willing or unwilling to do.
If I try to shoehorn everything into consequentialism, then I end up looking for "consequentialist permission" to do stuff. Like climbing a mountain: consequentialism says "I can put you on top of the mountain! Oh, that's not what you want? Then I can give you the feeling of having climbed it! You don't want that either? Then this is tricky..." This seems like a lot of work, just to do something I already want to do. There are many reasons to do things - not everything has to be justified by consequences.
There are of course objections. Objection one is that non-consequentialist wishes can make you go in circles, like that Greg Egan character who spent thousands of hours carving table legs, making himself forget each one so he could enjoy carving the next. But when pushed to such extremes, a consequentialist goal like maximizing happiness can also lead to weird results (vats of happiness goo...). And if we don't push quite so hard, then I can imagine utopia containing both consequentialist and non-consequentialist stuff, doing things for their own sake and such. So there's no difference here.
Objection two is that our wishes come from evolution, which wants us to actually achieve things, not go in circles. But our wishes aren't all perfectly aligned with evolution's wish (procreate more). They are a bunch of heuristics that evolution came up with, and a bunch of culturally determined stuff on top of that. So there's no difference here either - both our consequentialist and non-consequentialist wishes come from an equally messy process, so they're equally legitimate.
I'm thinking about cases where you want to do something, and it's a simple action, but the consequences are complex and you don't explicitly analyze them - you just want to do the thing. In such cases, I argue, reducing the action to its (more complex) consequences feels like shoehorning.
For example: maybe you want to climb a mountain because that's the way your heuristics play out, which came from evolution. So we can "back-chain" the desire to genetic fitness; or we can back-chain to some worldly consequences, like having good stories to tell at parties as another commenter said; or we can back-chain those to fitness as well, and so on. It's arbitrary. The only "bedrock" is that when you want to climb the mountain, you're not analyzing those consequences. The mountain calls you, it doesn't need to be any more complex than that. So why should we say it's about consequences? We could just say it's about the action.
And once we allow ourselves to do actions that are just about the action, it seems that calling ourselves "consequentialists" is somewhere between wrong and vacuous. Which is the point I was making in the post.