Most concern about AI comes down to the scariness of goal-oriented behavior. A common response to such concerns is “why would we give an AI goals anyway?” I think there are good reasons to expect goal-oriented behavior, and I’ve been on that side of a lot of arguments. But I don’t think the issue is settled, and it might be possible to get better outcomes without them. I flesh out one possible alternative here, based on the dictum "take the action I would like best" rather than "achieve the outcome I would like best."
(As an experiment I wrote the post on Medium, so that it is easier to provide sentence-level feedback, especially feedback on the writing and other low-level comments.)
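To make the dictum concrete, here is a minimal sketch of the two decision rules as I read them. The scoring functions `expected_outcome_utility` and `predicted_approval` are hypothetical names of my own, not anything proposed in the post:

```python
def goal_directed_choice(actions, expected_outcome_utility):
    """'Achieve the outcome I would like best': score each action by the
    value of the outcome it is expected to produce, then maximize."""
    return max(actions, key=expected_outcome_utility)


def act_based_choice(actions, predicted_approval):
    """'Take the action I would like best': score each action by how much
    the overseer would approve of taking it, not by where it leads."""
    return max(actions, key=predicted_approval)
```

The two rules can come apart: an action whose predicted outcome scores well may still be one the overseer would not endorse taking.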
I think maximizing versus satisficing is a question orthogonal to whether you pay attention to consequences, to the actions that produce them, or to the character from which the actions flow. One could make a satisficing consequentialist agent, for instance. (Bostrom, IIRC, remarks that this wouldn't necessarily avoid the dangers of overzealous optimization: instead of making unboundedly many paperclips because it wants as many as possible, our agent might make unboundedly many paperclips in order to be as sure as possible that it really did make at least 10.)
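Bostrom's worry can be made concrete with a toy model (my own illustration, not his): suppose each planned paperclip independently succeeds with some probability. Then a "satisficer" that maximizes its confidence of hitting the "at least 10" target still prefers unboundedly many attempts, because that confidence only ever creeps closer to 1:

```python
from math import comb

def p_at_least(n_planned, threshold=10, p=0.99):
    """Toy model: each of n_planned paperclip attempts independently
    succeeds with probability p. Returns P(#successes >= threshold)."""
    if n_planned < threshold:
        return 0.0
    # 1 - P(fewer than threshold successes), binomial tail
    return 1.0 - sum(comb(n_planned, k) * p**k * (1 - p)**(n_planned - k)
                     for k in range(threshold))

# The rule "maximize my certainty of having made at least 10" never says
# "that's enough": the probability is strictly increasing in n_planned.
for n in (10, 20, 50, 200):
    print(n, p_at_least(n))
```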
Bostrom's point is valid in the absence of other goals. A clippy that also values some slightly non-orthogonal goal would stop making paperclips once the excess of paperclips started to interfere with that other goal.
In virtue ethics you don't maximize anything: you are free to pick any action compatible with the virtues, so there is no utility function to speak of.
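As a structural contrast with the maximizing rules above, here is how I would sketch that; representing virtues as boolean predicates over actions is my own simplification:

```python
import random

def virtuous_choice(actions, virtues):
    """No score is maximized: any action compatible with all the virtues is
    acceptable. `virtues` is a list of hypothetical predicates, action -> bool."""
    permissible = [a for a in actions if all(virtue(a) for virtue in virtues)]
    return random.choice(permissible) if permissible else None
```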