(Edited to add: See also this addendum.)
I commented on Facebook that I think our ethics is three-tiered. There are the things we imagine we consider right, the things we consider right, and the things we actually do. I was then asked to elaborate on the difference between the first two.
For the first one, I was primarily thinking about people who follow some idealized, formal ethical theory - people who consider themselves act utilitarians, for instance. Yet when presented with real-life situations, they may often reply that the right course of action is different from what a purely act-utilitarian framework would imply, taking into account things such as keeping promises and so on. Of course, a rule utilitarian would avoid that particular trap, but in general nobody is a pure follower of any formal ethical theory.
Now, people who don't even try to follow any formal ethical system probably have a closer match between their first and second categories. But I recently came to view our moral intuitions as a function that takes the circumstances of a situation as an input and gives a moral judgement as an output. We do not have access to the inner workings of that function, though we can and do try to build models that attempt to capture them. Still, because our understanding of the function is incomplete, our models are bound to sometimes produce mistaken predictions.
Based on our model, we imagine (if we don't think about the situations too closely) that in certain kinds of situations we would arrive at a specific judgement, but a closer examination reveals that the function outputs the opposite value. For instance, we might think that maximizing total welfare is always for the best, but then realize that we don't actually want to maximize total welfare if the people we consider our friends would be hurt. This might happen even if we weren't explicitly following any formal theory of ethics. And if *actually* faced with that situation, we might end up acting selfishly instead.
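The function-and-model metaphor above can be made concrete with a toy sketch. Everything here is invented for illustration - real intuitions obviously aren't two-argument functions - but it shows the structure of the claim: the model agrees with the opaque intuition function on ordinary cases and mispredicts when a hidden clause (here, the friend-protecting one) fires.

```python
def intuition(welfare_gain: int, friends_harmed: bool) -> str:
    """The opaque function we can't inspect directly: it vetoes
    actions that hurt our friends, regardless of total welfare."""
    if friends_harmed:
        return "wrong"
    return "right" if welfare_gain > 0 else "wrong"


def naive_model(welfare_gain: int, friends_harmed: bool) -> str:
    """Our conscious model: 'maximizing total welfare is always best'."""
    return "right" if welfare_gain > 0 else "wrong"


# The model predicts the function correctly for ordinary cases...
assert naive_model(10, False) == intuition(10, False)

# ...but mispredicts once the hidden friend-clause fires:
print(naive_model(10, True), "vs actual", intuition(10, True))
```

The point of the sketch is just that a model can be locally accurate while still being the wrong theory of the underlying function, which we only discover on inputs we hadn't examined closely.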
This implies that people pick the moral frameworks that are best at justifying the ethical intuitions they already hold. Of course, we knew that much already, even if we sometimes fail to apply it - I was previously puzzled over why so many smart people reject all forms of utilitarianism, since ultimately everyone has to perform some sort of expected utility calculation in order to make moral decisions at all, but then realized that their rejection had little to do with utilitarianism's merits as such. Some of us attempt to reprogram our moral intuitions by taking those models and following them even where they conflict with the moral function's actual output. With enough practice, our intuitions may shift towards the consciously held stance, which may be a good or a bad thing.
I think you're right not to see it. Valuing happiness is a relatively recent development in human thought. Much of ethics prior to the Enlightenment dealt more with duties and following rules. In fact, seeking pleasure or happiness (particularly from food, sex, etc.) was generally looked down on or actively disapproved of. People may generally do what they calculate to be best, but "best" need not mean maximizing anything related to happiness.
Ultra-orthodox adherence to religion is probably the most obvious example of this principle, particularly Judaism, since there's no infinitely-good-heaven to obfuscate the matter. You don't follow the rules because they'll make you or others happy, you follow them because you believe it's the right thing to do.
My reading of that sentence was that Kaj_Sotala was focusing not on the happiness part of utilitarianism, but on the expected utility calculation part - that is, that everyone needs to make an expected utility calculation in order to make moral decisions. I don't think any particular type of utility was meant to be implied as necessary.