(Written for Arbital in 2017.)
So we're talking about how to make good decisions, or the idea of 'bounded rationality', or what sufficiently advanced Artificial Intelligences might be like; and somebody starts dragging up the concepts of 'expected utility' or 'utility functions'.
And before we even ask what those are, we might first ask, Why?
There's a mathematical formalism, 'expected utility', that some people invented to talk about making decisions. This formalism is very academically popular, and appears in all the textbooks.
But so what? Why is that necessarily the best way of making decisions under every kind of circumstance? Why would an Artificial Intelligence care what's academically popular? Maybe there's some better way of thinking about rational agency? Heck, why is this...
Nitpick: He does link to VNM once, when giving the name for Independence, so that might technically count as a mention. I otherwise agree; however, I think VNM gets too much of a bad rap. The finite case is simple, gives a conceptual understanding of why expected utility arises from the assumptions, and uses only a few axioms, some of which have supporting arguments (like semi-formal Dutch books). If we had no other coherence theorems, I think it would be correct to update a lot on VNM, as I did when I first learned of it. Expected utility really does seem a lot less ad hoc after understanding VNM.
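For readers who want the statement the nitpick alludes to, here is a standard rendering of the finite-case VNM theorem (my paraphrase, not quoted from the post): let $X$ be a finite set of outcomes and $\Delta(X)$ the set of lotteries (probability distributions) over $X$. If a preference relation $\succeq$ on $\Delta(X)$ satisfies Completeness, Transitivity, Continuity, and Independence, then there exists a utility function $u : X \to \mathbb{R}$ such that

$$p \succeq q \iff \sum_{x \in X} p(x)\,u(x) \;\ge\; \sum_{x \in X} q(x)\,u(x),$$

and $u$ is unique up to positive affine transformation ($u' = a\,u + b$ with $a > 0$). This is the sense in which expected utility "arises given the assumptions": the axioms constrain only raw preferences over lotteries, and the summation form falls out of them.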