casebash comments on The Number Choosing Game: Against the existence of perfect theoretical rationality - Less Wrong

-1 Post author: casebash 29 January 2016 01:04AM

Comment author: casebash 06 January 2016 11:42:50AM 0 points [-]

Exactly. If you accept the definition of a perfectly rational agent as a perfect utility maximiser, then no such agent exists: for any agent there is always another agent that obtains more utility, so there is no perfect utility maximiser and hence no perfectly rational agent. I don't think this is a particularly unusual way of using the term "perfectly rational agent".

Comment author: kithpendragon 06 January 2016 11:51:28AM 0 points [-]

In this context, I do not accept that definition: you cannot maximize an unbounded function. A Perfectly Rational Agent would know that.

Comment author: casebash 06 January 2016 11:55:14AM 0 points [-]

And it would still get beaten by a more rational agent, which would in turn be beaten by a still more rational agent, and so on without end. There is a non-terminating sequence of increasingly rational agents, but no final "most rational" agent.
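The regress can be made concrete. In the number-choosing game discussed in the post, an agent's strategy is just the number it names and its utility is that number, so whatever an agent picks, an agent picking a larger number does strictly better. A minimal sketch (the `beats` helper is illustrative, not from the original post):

```python
# In the number-choosing game, a strategy is the number an agent names,
# and the agent's utility is that number.
def beats(x: float) -> float:
    """Return a strategy that strictly outperforms strategy x."""
    return x + 1

agent = 1.0
# However many times we improve, a strictly better agent still exists,
# so there is no utility-maximal ("most rational") agent.
improved = beats(beats(beats(agent)))  # 4.0
```

The point of the sketch is that `beats` is total: it succeeds for every input, so no strategy is a fixed point of "do strictly better".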

Comment author: kithpendragon 06 January 2016 12:19:05PM 0 points [-]

If the PRA isn't trying to "maximize" an unbounded function, it can't very well get "beaten" by another agent who chooses x+n, because the two didn't share the same goal. I therefore reject that an agent that obeys its stopping function in an unbounded scenario can be called any more or less "rational" on that ground alone than any other agent that does the same, regardless of the utility it left uncollected.

By removing all constraints, you have made comparing results meaningless.

Comment author: casebash 06 January 2016 12:23:20PM 0 points [-]

So an agent that chooses only 1 utility could still be a perfectly rational agent, in your book?

Comment author: kithpendragon 06 January 2016 12:49:43PM *  0 points [-]

Might be. Maybe that agent's utility function is actually bounded at 1 (it's not trying to maximize, after all). Perhaps it wants 100 utility but already has firm plans to get the other 99. Maybe it chose a value at random from the range of all positive real numbers (distributed such that the probability of choosing X grows in proportion to X) and pre-committed to the result, thus guaranteeing a stopping condition with unbounded expected return. Since it was missing out on unbounded utility in any case, getting literally any utility is better than none, and the difference between x and y is not really interesting.
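A stopping rule that always terminates yet has unbounded expected return can be sketched concretely. The example below is an illustrative variant, not the exact distribution described above (a density growing with X over all positive reals cannot be normalised): it samples a positive integer n with P(n) proportional to 1/n², which is a genuine probability distribution, while the expected payoff sum n·P(n) diverges like a harmonic series.

```python
import math

# Hypothetical stopping rule: name a positive integer n drawn with
# P(n) = (6 / pi^2) / n^2, a normalisable distribution over all
# positive integers, then stop and collect n utility.
def prob(n: int) -> float:
    return (6 / math.pi**2) / n**2

# The probabilities sum to 1, so the rule stops with certainty...
total_prob = sum(prob(n) for n in range(1, 200000))

# ...but the expected utility sum(n * P(n)) diverges: each partial sum
# grows like (6/pi^2) * ln(N), without bound.
partial_expectations = [sum(n * prob(n) for n in range(1, N))
                        for N in (10, 1000, 100000)]
```

So "guaranteed to stop" and "unbounded expected return" are compatible, which is all the pre-commitment argument above needs.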

(humorously) Maybe it just has better things to do than measuring its *ahem* stopping function against the other agents.