I'm not sure where this comes from, when the VNM theorem gets so many mentions on LW.
I understand the VNM theorem. I'm objecting to it.
A utility function is, by definition, that which the corresponding rational agent maximizes the expectation of
If you want to argue "by definition", then yes, according to your definition utility functions can't be used for anything other than expectation maximization. I'm saying that's silly.
simply an encoding of the actions which a rational agent would take in hypothetical scenarios
Not all rational agents, as my post demonstrates. An agent that maximizes the median outcome would not be describable by any utility function whose expectation is maximized. I showed how to generalize this to describe more kinds of rational agents; ordinary expected utility becomes a special case of this system. I think generalizing existing ideas and mathematics is sometimes desirable.
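To make this concrete, here is a minimal sketch (the two gambles are invented for illustration) of how an agent ranking gambles by median can disagree with one ranking them by expectation over the very same values, so no expectation over those values reproduces the median maximizer's choice:

```python
import statistics

# Two illustrative gambles, each a list of equally likely outcome values.
# Gamble A: a near-certain modest payoff with a small chance of heavy loss.
gamble_a = [10, 10, 10, 10, -1000]
# Gamble B: a modest, spread-out payoff.
gamble_b = [0, 0, 5, 5, 5]

def expectation(outcomes):
    """Mean payoff, assuming equally likely outcomes."""
    return sum(outcomes) / len(outcomes)

# An expected-value maximizer prefers B (3.0 > -192.0),
# while a median maximizer prefers A (10 > 5).
print(expectation(gamble_a), statistics.median(gamble_a))  # -192.0 10
print(expectation(gamble_b), statistics.median(gamble_b))  # 3.0 5
```

This only shows the disagreement for one particular value scale; the fuller argument is that the median, unlike the mean, is invariant under any monotone relabeling of the values, so no choice of utility function closes the gap.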
It is not "optimal as the number of bets you take approaches infinity"
Yes, it is. If you assign some subjective "value" to different outcomes, then maximizing expected ~~utility~~ value will maximize total value as the number of decisions approaches infinity. For every bet I lose at given odds, I gain more from others a predictable fraction of the time; on average it cancels out.
This might not be the standard way of explaining expected utility, but it's very simple and intuitive, and shows exactly where the problem is. It's certainly sufficient for the explanation in my post.
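The long-run claim above is just the law of large numbers, and a quick simulation makes it visible. This is a hedged sketch with a made-up bet (win 2 or lose 1 at even odds, expected value 0.5):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def bet():
    """One bet: win 2 with probability 0.5, otherwise lose 1."""
    return 2 if random.random() < 0.5 else -1

expected_value = 0.5 * 2 + 0.5 * (-1)  # = 0.5 per bet

# As the number of bets grows, the average payoff per bet
# approaches the expectation, so maximizing expected value
# maximizes total value in the long run.
for n in [10, 1000, 100000]:
    avg = sum(bet() for _ in range(n)) / n
    print(n, avg)
```

The point of the post is precisely that this guarantee only bites "as the number of bets approaches infinity"; for a small number of high-stakes decisions the averaging never happens.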
Humans do not have utility functions. We do not exhibit the level of counterfactual self-consistency that is required by a utility function.
That's quite irrelevant. Sure, humans are irrational and make inconsistent choices and errors in counterfactual situations. We should strive to be more consistent, though. We should strive to figure out the utility function that best represents what we want. And if we program an AI, we certainly want it to behave consistently.
Yes, it is common, especially on LW and in discussions of utilitarianism, to use the term "utility" loosely, but don't conflate that with utility functions by creating a chimera with properties from each. If the "utility" that you want to talk about is vaguely defined (e.g., if it depends on some account of subjective preferences, rather than on definite actions under counterfactual scenarios), then it probably lacks all of the useful mathematical properties of utility functions, and its expectation is no longer meaningful.
Again, back to arguing by definition. I don't care what the definition of "utility" is. If it would please you to use a different word, then we can do so. Maybe "value function" or something. I'm trying to come up with a system that will tell us what decisions we should make, or program an AI to make. One that fits our behavior and preferences the best. One that is consistent and converges to some answer given a reasonable prior.
You haven't made any arguments against my idea or my criticisms of expected utility. It's just pedantry about the definition of a word, when its meaning in this context is pretty clear.
I'm interested in that research. Can you link to it?
Not sure if this is what KevinGrant was referring to, but this article discusses the same phenomenon:
http://rosettaproject.org/blog/02012/mar/1/language-speed-vs-density/