janos comments on [MIRIx Cambridge MA] Limiting resource allocation with bounded utility functions and conceptual uncertainty - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
But that's a situation in which we have a vast number of things that might somewhat-plausibly turn out to be chocolate and severely limited resources. It's not obvious that we can do better.
"But we do OK if we use one sigmoid utility function and not if we use another!"
No, we do different things depending on our utility function. That isn't a problem; it's what utility functions are for. And what's "OK" depends on what the probabilities are, what your resources are, and how much you value different amounts of chocolate. Which, again, is not a problem but exactly how things should be.
Certainly, given a utility function and a model, the best thing to do is whatever that utility function says is best. The point was to show that some utility functions (e.g. those using the exponential-decay sigmoid) have counterintuitive properties that don't match what we'd actually want.
Every response to this post that takes the utility function for granted and remarks that the optimum is the optimum is missing the point: we don't know what kind of utility function is reasonable, and we're showing evidence that some of them give optima that aren't what we'd actually want if we were turning the world into chocolate/hedonium.
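To make the counterintuitive property concrete: with the exponential-decay sigmoid U(x) = 1 - e^(-x), maximizing the expected utility sum p_i * (1 - e^(-x_i)) under a fixed resource budget gives allocations that differ between candidates only by the log of their probability ratio. So once the budget is large, a 1-in-10,000 "maybe chocolate" gets nearly as much as a near-certain one. A minimal sketch (the `optimal_allocation` helper and the specific numbers are my own illustration, not from the post):

```python
import math

def optimal_allocation(probs, budget):
    """Split `budget` units of resource across candidates so as to
    maximize sum_i p_i * (1 - exp(-x_i)) subject to sum_i x_i = budget.

    The Lagrange conditions give x_i = max(0, log(p_i) - t) for a shared
    threshold t; we find t by bisection so the allocations sum to budget."""
    lo = min(math.log(p) for p in probs) - budget - 1.0  # t this low -> total too big
    hi = max(math.log(p) for p in probs)                 # t this high -> total is 0
    for _ in range(200):
        t = (lo + hi) / 2.0
        total = sum(max(0.0, math.log(p) - t) for p in probs)
        if total > budget:
            lo = t  # allocations too large: raise the threshold
        else:
            hi = t  # allocations too small: lower the threshold
    return [max(0.0, math.log(p) - t) for p in probs]

# Candidates ranging from near-certain chocolate to a 1-in-10,000 long shot.
probs = [0.9, 0.5, 0.01, 0.0001]
alloc = optimal_allocation(probs, budget=40.0)
```

With these (made-up) numbers, the 0.0001-probability candidate still receives more than 10% of the budget, and the gap between the best and worst candidates is exactly log(0.9 / 0.0001) ≈ 9.1 units, no matter how large the budget grows. That is the sense in which this utility function spreads resources over "a vast number of things that might somewhat-plausibly turn out to be chocolate."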
If it seems strange to you to consider representing what you want by a bounded utility function, a post about that will be forthcoming.
No, it doesn't seem strange to me to consider representing what I want by a bounded utility function. It seems strange to consider representing what I want by a utility function that converges exponentially fast towards its bound.
I'll repeat something I said in another comment:
(Remark 1: the above is a comment that remarks that the optimum is the optimum, yet it is visibly not missing the point, since it does appreciate that we might be constructing a utility function and trying to make it do good-looking things, rather than approximating a utility function we already have.)
(Remark 2: I think I can imagine situations in which we might consider making the relationship between chocolate and utility converge very fast -- in fact, taking "chocolate" literally rather than metaphorically might yield such a situation. But in those situations, I also think the results you get from your exponentially-converging utility function aren't obviously unreasonable.)
Cool. Regarding bounded utility functions, I didn't mean you personally, I meant the generic you; as you can see elsewhere in the thread, some people do find it rather strange to think of modelling what you actually want as a bounded utility function.
This is where I thought you were missing the point:
Sometimes we (seem to) have stronger intuitions about allocations than about the utility function itself, and leveraging those intuitions to identify what the utility function should be is what this post is about. This may seem like a non-step to you; in that case you've already got it. Cheers! I admit it's not a difficult point. Or if you always have stronger intuitions about the utility function than about resource allocation, then maybe this is useless to you.
I agree with you that there are some situations where the sublinear allocation (and exponentially-converging utility function) seems wrong and some where it seems fine; perhaps the post should initially have said "person-enjoying-chocolate-tronium" rather than chocolate.
You still haven't answered my question of why we don't want those properties. To me, they don't seem counter-intuitive at all.