
casebash comments on The Number Choosing Game: Against the existence of perfect theoretical rationality - Less Wrong Discussion

-1 Post author: casebash 29 January 2016 01:04AM


Comments (151)


Comment author: casebash 05 January 2016 04:22:35AM *  2 points [-]

An update to this post

It appears that this issue has been discussed before in the thread Naturalism versus unbounded (or unmaximisable) utility options. The discussion there didn't end up drawing the conclusion that perfect rationality doesn't exist, so I believe this current thread adds something new.

Instead, the earlier thread considers the Heaven and Hell scenario, where you can spend X days in Hell to earn the opportunity to spend 2X days in Heaven. Most of the discussion on that thread concerned how high an agent can count while still exiting at some point. Stuart Armstrong also arrives at the same argument demonstrating that this problem isn't caused by unbounded utility.
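A minimal sketch (illustrative only, with an assumed utility of plus or minus one utilon per day, not a model from the original thread) of why the Heaven and Hell scenario has no optimal exit day: exiting after x days in Hell buys 2x days in Heaven, so the net payoff grows without bound in x.

```python
def net_utility(x):
    """Net utility of exiting Hell after x days: 2*x days in Heaven
    minus x days in Hell, valuing each day at +/-1 utilon."""
    return 2 * x - x  # = x, strictly increasing in x

# Whatever exit day you pick, waiting one more day is strictly better,
# yet the agent who never exits spends eternity in Hell.
for x in [1, 10, 100]:
    assert net_utility(x + 1) > net_utility(x)
```

Every finite exit day is dominated by a later one, while "never exit" is worst of all: that is the structure the thread was wrestling with.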

Qiaochu Yuan summarises one of the key takeaways: "This isn't a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions. Because of the possible failure of the ability to exchange limits and integrals, the expected utility of a sequence of infinitely many decisions can't in general be computed by summing up the expected utility of each decision separately."

Kudos to Andreas Giger for noticing what most of the commenters seemed to miss: "How can utility be maximised when there is no maximum utility? The answer of course is that it can't." This is incredibly close to stating that perfect rationality doesn't exist, but it wasn't explicitly stated, only implied.

Further, Wei Dai's comment on a randomised strategy that obtains infinite expected utility is an interesting problem that will be addressed in my next post.

Comment author: Viliam 05 January 2016 09:25:23AM 4 points [-]

Okay, so if by 'perfect rationality' we mean "ability to solve problems that don't have a solution", then I agree, perfect rationality is not possible. Not sure if that was your point.

Comment author: casebash 05 January 2016 10:36:02AM -1 points [-]

I'm not asking you, for example, to make a word out of the two letters Q and K, or to write a program that will determine if an arbitrary program halts.

Where rationality fails is that there is always another person who scores higher than you, even though there was nothing stopping you from scoring the same or higher. Such a program is more rational than you in that situation, and there is another program more rational than them, and so on to infinity. That there is no maximally rational program, only successively more rational programs, is a completely accurate way of characterising the situation.

Comment author: Viliam 05 January 2016 11:31:18AM 3 points [-]

Seems like you are asking me to (or at least judging me as irrational for failing to) say a finite number such that I could not have said a higher number despite having unlimited time and resources. That is an impossible task.

Comment author: casebash 05 January 2016 11:53:09AM *  0 points [-]

I'm arguing against perfect rationality, defined as the ability to choose the option that maximises the agent's utility. I don't believe that this is at all an unusual way of using the term. But regardless, let's taboo "perfect rationality" and talk about utility maximisation. There is no utility maximiser for this scenario because there is no maximum utility that can be obtained. That's all that I'm saying, nothing more, nothing less. Yet people often assume that such a perfect maximiser (aka a perfectly rational agent) exists without even realising that they are making an assumption.
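A sketch of the non-existence claim, assuming the number choosing game's simplest form (utility equals the number named, which is my reading of the scenario rather than a quotation from the post): every candidate choice is strictly dominated, so no choice is utility-maximising.

```python
def utility(n):
    # Assumed payoff: naming n yields n utilons.
    return n

def better_response(n):
    """For any proposed 'rational' choice, return a strictly better one."""
    return n + 1

# No matter which number an agent names, another agent beats it.
choice = 1_000_000
assert utility(better_response(choice)) > utility(choice)
```

The function `better_response` never fails, which is exactly the sense in which no utility maximiser exists for this game.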

Comment author: Viliam 05 January 2016 12:20:45PM *  2 points [-]

Oh. In that case, I guess I agree.

For some scenarios that have unbounded utility, there is no such thing as a utility maximizer.

Comment author: Dagon 05 January 2016 02:59:21PM 0 points [-]

I think the scenario requires unbounded utility and unlimited resources to acquire it.

Comment author: sullyj3 06 January 2016 06:13:57AM 0 points [-]

Kudos to Andreas Giger for noticing what most of the commenters seemed to miss: "How can utility be maximised when there is no maximum utility? The answer of course is that it can't." This is incredibly close to stating that perfect rationality doesn't exist, but it wasn't explicitly stated, only implied.

I think the key is infinite vs finite universes. Any conceivable finite universe can be arranged in a finite number of states, one (or perhaps several) of which could be assigned maximum utility. You can't do this in universes involving infinity. So if you want perfect rationality, you need to reduce your infinite universe to just the stuff you care about. This is doable in some universes, but not in the ones you posit.

In our universe, we can shave off the infinity, since we presumably only care about our light cone.