I think consequentialism describes only a subset of my wishes. For example, maximizing money is well modeled by it. But when I'm playing with something, it's mostly about the process, not the end result. Or when I want to respect the wishes of other people, I don't really know what end result I'm aiming for, but I can say what I'm willing or unwilling to do.
If I try to shoehorn everything into consequentialism, then I end up looking for "consequentialist permission" to do stuff. Like climbing a mountain: consequentialism says "I can put you on top of the mountain! Oh, that's not what you want? Then I can give you the feeling of having climbed it! You don't want that either? Then this is tricky..." This seems like a lot of work just to do something I already want to do. There are many reasons to do things - not everything has to be justified by consequences.
There are of course objections. Objection one is that non-consequentialist wishes can make you go in circles, like that Greg Egan character who spent thousands of hours carving table legs, making himself forget the last time so he could enjoy the next. But when pushed to such extremes, a consequentialist goal like maximizing happiness can also lead to weird results (vats of happiness goo...). And if we don't push quite so hard, then I can imagine a utopia containing both consequentialist and non-consequentialist stuff, doing things for their own sake and such. So there's no difference here.
Objection two is that our wishes come from evolution, which wants us to actually achieve things, not go in circles. But our wishes aren't all perfectly aligned with evolution's wish (procreate more). They are a bunch of heuristics that evolution came up with, and a bunch of culturally determined stuff on top of that. So there's no difference here either - both our consequentialist and non-consequentialist wishes come from an equally messy process, so they're equally legitimate.
Yeah. I think consequentialism is a great framing that has done a lot of good in EA, where the desired state of the world is easy to describe (remove X amount of disease and such). But it also created a bit of a blind spot, where people started thinking that goals not natively formulated in terms of end states ("play with this toy", "respect this person's wishes" and such) should be reformulated in terms of end states anyway, in more complex ways. To be honest, I still go back and forth on whether that works - my post was a bit polemical. But it feels like there's something to the idea of keeping some goals in our "internal language", not rewriting them into the language of consequences.