
gjm comments on [MIRIx Cambridge MA] Limiting resource allocation with bounded utility functions and conceptual uncertainty - Less Wrong Discussion

4 points · Post author: Vika · 02 October 2014 10:48PM


Comments (28)


Comment author: gjm 05 October 2014 04:19:45PM 0 points [-]

Intuitively, this does seem to be the right sort of approach

It's provably the right approach.

Let the allocation I described (with whatever choice of k optimizes the result) be R. Suppose it isn't globally optimal, and let R' be strictly better. R' may have infinitely many nonzero r_j, but it can in any case be approximated arbitrarily closely by an R'' with only finitely many nonzero r_j; do so, closely enough that R'' is still strictly better than R. Well, having only finitely many nonzero r_j, R'' is no better than one of my candidates and so in particular isn't better than R, contradiction.
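The truncation step can be written out explicitly. This is a sketch, and it assumes the utility function U is continuous in the allocation (which is what licenses "approximated arbitrarily closely"):

```latex
\begin{align*}
&\text{Suppose } R' = (r'_1, r'_2, \dots) \text{ satisfies } U(R') > U(R). \\
&\text{Define the truncations } R''_n = (r'_1, \dots, r'_n, 0, 0, \dots). \\
&\text{By continuity of } U, \quad \lim_{n \to \infty} U(R''_n) = U(R'), \\
&\text{so for some finite } n, \quad U(R''_n) > U(R). \\
&\text{But } R''_n \text{ has only finitely many nonzero components,} \\
&\text{so } U(R''_n) \le U(R) \text{ by construction of } R \text{ — contradiction.}
\end{align*}
```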

Comment author: skeptical_lurker 05 October 2014 06:01:18PM 0 points [-]

It's provably the right approach.

I wasn't doubting your math; I was doubting the underlying assumption of a bounded utility function.

Of course, if we want to get technical, a finite computer can't store an infinite number of models of chocolate anyway.

Comment author: AlexMennen 08 October 2014 04:12:26AM 0 points [-]

I was doubting the underlying assumption of a bounded utility function.

I can defend that assumption: It is impossible for an expected utility maximizer to have an unbounded utility function, given only the assumption that the space of lotteries is complete. http://lesswrong.com/lw/gr6/vnm_agents_and_lotteries_involving_an_infinite/
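A sketch of the argument (the linked post gives the full version, and this presentation is a paraphrase): an unbounded utility function lets you construct a St. Petersburg-style lottery whose expected utility is infinite, which is incompatible with the agent ranking all lotteries.

```latex
\begin{align*}
&\text{If } u \text{ is unbounded, choose outcomes } x_n \text{ with } u(x_n) \ge 2^n. \\
&\text{Let } L \text{ be the lottery yielding } x_n \text{ with probability } 2^{-n},\ n = 1, 2, \dots \\
&\text{Then } \mathbb{E}[u(L)] \ge \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \infty, \\
&\text{so } L \text{ admits no finite expected utility, and an expected utility} \\
&\text{maximizer over a space of lotteries including } L \text{ cannot rank it.}
\end{align*}
```

The "complete space of lotteries" assumption is what forces L to be in the space the agent must rank.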

Comment author: gjm 05 October 2014 06:15:52PM 0 points [-]

I was doubting the underlying assumption

Oh, I see. OK.

a finite computer can't store an infinite number of models [...]

For sure. Nor, indeed, can our finite brains. (This is one reason why our actual utility functions, in so far as we have them, probably are bounded. Of course that isn't a good reason to use bounded utility functions in theoretical analyses unless all we're hoping to do is to understand the behaviour of a single human brain.)