
casebash comments on The Number Choosing Game: Against the existence of perfect theoretical rationality - Less Wrong Discussion

-1 Post author: casebash 29 January 2016 01:04AM




Comment author: casebash 05 January 2016 11:30:26PM 1 point [-]

I'm kind of defining perfect rationality as the ability to maximise utility (more or less). If there are multiple optimal solutions, then picking any one maximises utility. If there is no optimal solution, then picking none maximises utility. So this is problematic for perfect rationality as defined as utility maximisation, but if you disagree with the definition, we can just taboo "perfect rationality" and talk about utility maximisation instead. In either case, this is something people often assume exists without even realising that they are making an assumption.

Comment author: Silver_Swift 06 January 2016 02:46:37PM 1 point [-]

That's fair. I tried to formulate a better definition, but couldn't immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).

When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don't have an answer. Intuitive answers to questions like "What would I do if I actually found myself in this situation?" and "What would the average intelligent person do?" are unsatisfying because they seem to rely on implicit costs to computational power/time.

On the other hand, I also can't generalize this problem to more practical situations (or find a similar problem without an optimal solution that would be applicable to reality), so there might not be any practical difference between a perfectly rational agent and an agent that takes the optimal solution if there is one and explodes violently if there isn't. Maybe the solution is simply to exclude problems like this when talking about rationality, unsatisfying as that may be.

In any case, it is an interesting problem.

Comment author: Jiro 07 January 2016 03:39:16PM *  0 points [-]

"If there is no optimal solution, then picking none maximises utility."

This statement is not necessarily true when the reason there is no optimal solution is that the set of solutions is infinite. That is, it fails in exactly the situation described in your problem.
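Jiro's point can be made concrete with a minimal sketch (my illustration, not part of the original thread; the payoff rule is the one from the Number Choosing Game, where naming a number n yields utility n): every candidate choice is strictly dominated by a larger one, so the infinite set of choices contains no maximum and no choice "maximises utility".

```python
def utility(n):
    """Hypothetical payoff rule: naming the number n yields utility n."""
    return n

def better_response(n):
    """For any named number n, naming n + 1 strictly improves utility,
    so no choice is optimal."""
    return n + 1

# Whatever number an agent names, there is always a strictly better one.
for n in [0, 1, 10**6]:
    assert utility(better_response(n)) > utility(n)
```

Since this holds for every n, the supremum of attainable utility is never attained, which is exactly why "pick the optimal solution" gives no guidance here.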

Comment author: casebash 08 January 2016 12:07:32AM 0 points [-]

Sorry, that was badly phrased. It should have been: "If there is no optimal solution, then no matter what solution you pick you won't be able to maximise utility"