There is some confusion in the comments over what utility is. One commenter writes:
"the maximum utility that it could conceivably expect to use"
and Usul writes:
"goes out to spend his utility on blackjack and hookers"
Utility is not a resource. It is not something that you can acquire and then use, or save up and then spend. It is not that sort of thing. It is nothing more than a numerical measure of the value you ascribe to some outcome or state of affairs. The blackjack and hookers, if that's what you're into, are the things that you would be specifically seeking by seeking the highest utility, not something you would afterwards get in exchange for some acquired quantity of utility.
So, when we solve linear programming problems (say, with the simplex method), there are three possible outcomes: the problem is infeasible (no solution satisfies the constraints), the problem has an optimal solution (which the method finds), or the problem is unbounded.
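As a concrete illustration of the third outcome (a minimal sketch assuming SciPy's linprog is available; the particular toy LP is mine, not the commenter's):

```python
# Maximise x + y subject to x - y <= 1, x >= 0, y >= 0.
# y can grow without limit while keeping the constraint satisfied,
# so the objective is unbounded above and no optimal solution exists.
from scipy.optimize import linprog

# linprog minimises, so we negate the objective to maximise x + y.
res = linprog(c=[-1, -1], A_ub=[[1, -1]], b_ub=[1],
              bounds=[(0, None), (0, None)])

print(res.status)   # a status of 3 indicates an unbounded problem
print(res.message)
```

The solver terminates cleanly and simply reports unboundedness; it doesn't break, it just declines to name an "optimal" solution, which is the analogy being drawn here.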
That is, if your "perfect theoretical rationality" requires that unbounded solutions be impossible, then your perfect theoretical rationality won't work: it cannot even handle things as simple as LP problems. So I'm not sure why you think this version of perfect theoretical rationality is interesting, and I am mildly surprised and disappointed that this was your impression of rationality.
From my perspective, there's no contradiction here--or at least, the contradiction is contained within a hidden assumption, much in the same way that the "unstoppable force versus immovable object" paradox assumes the contradiction. An "unstoppable force" cannot logically exist in the same universe as an "immovable object", because the existence of one contradicts the existence of the other by definition. Likewise, you cannot have a "utility maximizer" in a universe where there is no "maximum utility"--and since you basically equate "being rational" with "maximizing utility" in your post, your argument begs the question.
The issue here isn't that rationality is impossible. The issue here is that you're letting an undefined abstract concept do all your heavy lifting, and taking it places it cannot meaningfully be.
Utilitarianism: Defining "Good" is hard. Math is easy: let X stand in for "Good", and we'll maximize X, thereby maximizing "Good".
So let's do some substitution. Let's say apples are good. Would you wait forever for an apple? No? What if we make it so that you live forever? No, you'd get bored? What if we make it so that you don't get b...
Very closely related: Stuart Armstrong's Naturalism versus unbounded (or unmaximisable) utility options from about three years ago.
I think all this amounts to is: there can be situations in which there is no optimal action, and therefore if we insist on defining "rational" to mean "always taking the optimal action" then no agent can be perfectly "rational" in that sense. But I don't know of any reason to adopt that definition. We can still say, e.g., that one course of action is more rational than another, even in situations where no course of action is most rational.
I'm not convinced. It takes massive amounts of evidence to convince me that the offers in each of your games are sincere and accurate. In particular it takes an infinite amount of evidence to prove that your agents can keep handing out increasing utility/tripling/whatever. When something incredible seems to happen, follow the probability.
I'm reminded of the two-envelope game, where seemingly the player can get more and more money(/utility) by swapping envelopes back and forth. Of course, the solution is clear once you assume (any!) prior on the money in the envelopes, and the same thing happens once we start thinking about the powers of your game hosts.
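For readers who don't know the two-envelope game: the seemingly free gain comes from the naive calculation (the standard presentation of the paradox, not anything specific to this thread) that if your envelope holds $x$, then

$$E[\text{other envelope}] = \tfrac{1}{2}\cdot 2x + \tfrac{1}{2}\cdot\tfrac{x}{2} = \tfrac{5}{4}x > x,$$

which appears to justify swapping no matter which envelope you hold. Once any genuine prior over the amounts is fixed, the two conditional probabilities are no longer both $\tfrac{1}{2}$ for every observed $x$, and the apparent advantage of endless swapping disappears.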
Ok, let's say you are right that perfect theoretical rationality does not exist in your hypothetical game context, with all the assumptions that help to keep the whole game standing. Nice. So what?
This seems like another in a long line of problems that come from assuming unbounded utility functions.
Edit: The second game sounds a lot like the St. Petersburg paradox.
An update to this post:
It appears that this issue has been discussed before in the thread Naturalism versus unbounded (or unmaximisable) utility options. The discussion there didn't end up drawing the conclusion that perfect rationality doesn't exist, so I believe this current thread adds something new.
Instead, the earlier thread considers the Heaven and Hell scenario where you can spend X days in Hell to get the opportunity to spend 2X days in Heaven. Most of the discussion on that thread was related to the limit of how many days an agent can count so as to ex...
Why not just postulate a universe where A>B>C>A and ask the decision maker to pick the letter with the highest value? What we think of as rational doesn't necessarily work in other universes.
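Spelling that out (my formalisation, not the commenter's): with cyclic preferences

$$A \succ B, \quad B \succ C, \quad C \succ A \;\Longrightarrow\; \forall x \in \{A, B, C\}\ \exists y:\ y \succ x,$$

so no option is maximal and "pick the letter with the highest value" has no answer, for much the same structural reason that "name the largest finite number" has none.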
This seems like such an obvious result, I imagine that there's extensive discussion of it within the game theory literature somewhere. If anyone has a good paper, that would be appreciated.
This appears to be strongly related to the St. Petersburg Paradox - except that the prize is in utility instead of cash, and the player gets to control the coin (this second point significantly changes the situation).
To summarise the paradox - imagine a pot containing $2 and a perfectly fair coin. The coin is tossed repeatedly. Every time it lands tails, the pot is doub...
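For reference, in the standard formulation (the pot doubles each round and is paid out at the first heads, so a game lasting $k$ tosses pays $2^k$ with probability $2^{-k}$), the expected payout diverges:

$$E[\text{payout}] = \sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k} = \sum_{k=1}^{\infty} 1 = \infty.$$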
Define "dominant decision" as an action that no other option would result in bigger utility.
Then we could define an agent to be perfect if it chooses the dominant decision out of its options whenever it exists.
We could also define a dominant agent whose choice is always the dominant decision.
A dominant agent can't play the number naming game, whereas a perfect agent isn't constrained to pick a unique one.
You might be assuming that when options have utility values that are not equal, then there is a dominant decision. For finite option palettes this might be the case.
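Putting those definitions in symbols (my notation): for a set of options $D$ with utility function $u$,

$$d^{*} \in D \text{ is dominant} \iff \forall d \in D:\ u(d) \le u(d^{*}).$$

In the number-naming game $D$ is the set of finite numbers and $u(d) = d$, so no dominant decision exists; that is why a dominant agent can't play it, while a perfect agent, which only has to pick the dominant decision when one exists, is left unconstrained.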
The problem goes away if you add finiteness in any of a bunch of different places: restrict agents to only output decisions of bounded length, or to only follow strategies of bounded length, or constrain expected utilities to finitely many distinct levels. (Making utility a bounded real number doesn't work, but only because there are infinitely many distinct levels close to the bound.)
The problem also goes away if you allow agents to output a countable sequence of successively better decisions, and define an optimal sequence as one such that for any possible decision, a decision at least that good appears somewhere in the sequence. This seems like the most promising approach.
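One way to write the second proposal down (my formalisation of the comment's idea): a sequence of decisions $(d_n)_{n \in \mathbb{N}}$ is optimal iff

$$\forall d\ \exists n:\ u(d_n) \ge u(d).$$

In the number-choosing game the sequence $d_n = n$ qualifies, even though no single decision within it is best.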
I would like to extract the meaning of your thought experiment, but it's difficult because the concepts therein are problematic, or at least I don't think they have quite the effect you imagine.
We will define the number choosing game as follows. You name any single finite number x. You then gain x utility and the game then ends. You can only name a finite number, naming infinity is not allowed.
If I were asked (by whom?) to play this game, in the first place I would only be able to attach some probability less than 1 to the idea that the master of the g...
Let's assume that the being that is supposed to find a strategy for this scenario operates in a universe whose laws of physics can be specified mathematically. Given this scenario, it will try to maximize the number it outputs. Its output cannot possibly surpass the maximum finite number that can be specified using a string no longer than its universe's specification, so it need not try to surpass it, but it might come pretty close. Therefore, for each such universe, there is a best rational actor.
Edit: No, wait. Umm, you might want to find the error in the...
For the Unlimited Swap game, are you implicitly assuming that the time spent swapping back and forth has some small negative utility?
You are right, theory is overrated. Just because you don't have a theoretical justification for commencing an action doesn't mean that the action isn't the right action to take if you want to try to "win." Of course, it is very possible to be in a situation where "winning" is inherently impossible, in which case you could still (rationally) attempt various strategies that seem likely to make you better off than you would otherwise be...
As a practicing attorney, I've frequently encountered real-life problems similar to the above. For exa...
My gut response to the unbounded questions is that a perfectly rational agent would already know (or have a good guess as to) the maximum utility that it could conceivably expect to use within the limit of the expected lifespan of the universe.
There is also an economic objection; at some point it seems right to expect the value of every utilon to decrease in response to the addition of more utilons into the system.
In both objections I'm approaching the same thing from different angles: the upper limit on the "unbounded" utility in this case depen...
You're doing infinity wrong: always specify it as a limit, as in "as X approaches zero, Y grows to infinity". In your case, X is the cost of calculating a bigger number. The "more rational" agent is simply the one that can identify and communicate the bigger number in time to play the game. Taken that way, it doesn't disprove perfect rationality, just perfect calculation.
Another way to look at it is "always include costs". Even theoretical perfect rationality is about tradeoffs, not about the results of an impossible calculation.
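One toy way to cash this out (my model, not the commenter's exact formulation): if speaking each digit takes time $\tau > 0$ and the game must finish within time $T$, the largest nameable number is roughly

$$x_{\max}(\tau) \approx 10^{\,T/\tau}, \qquad \lim_{\tau \to 0^{+}} x_{\max}(\tau) = \infty,$$

so the unboundedness only reappears in the limit where the cost of communication vanishes, which is precisely the limit the post's hypothetical universe stipulates.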
In order to ensure that this post delivers what it promises, I have added the following content warnings:
Content Notes:
Pure Hypothetical Situation: The claim that perfect theoretical rationality doesn't exist is restricted to a purely hypothetical situation. No claim is being made that this applies to the real world. If you are only interested in how things apply to the real world, then you may be disappointed to find out that this is an exercise left to the reader.
Technicality Only Post: This post argues that perfect theoretical rationality doesn't exist due to a technicality. If you were hoping for this post to deliver more, well, you'll probably be disappointed.
Contentious Definition: This post (roughly) defines perfect rationality as the ability to maximise utility. This is based on Wikipedia, which defines a rational agent as an agent that "always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions".
We will define the number choosing game as follows. You name any single finite number x. You then gain x utility and the game then ends. You can only name a finite number, naming infinity is not allowed.
Clearly, the agent that names x+1 is more rational than the agent that names x (and behaves the same in every other situation). However, there does not exist a completely rational agent, because there does not exist a number that is higher than every other number. Instead, the agent who picks 1 is less rational than the agent who picks 2 who is less rational than the agent who picks 3 and so on until infinity. There exists an infinite series of increasingly rational agents, but no agent who is perfectly rational within this scenario.
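Stated compactly: with $u(x) = x$ over the finite numbers,

$$\forall x:\ u(x+1) > u(x), \qquad \sup_{x} u(x) = \infty \text{ is not attained},$$

so "more rational" is always well-defined pairwise, but "most rational" picks out no agent.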
Furthermore, this hypothetical doesn't take place in our universe, but in a hypothetical universe where we are all celestial beings with the ability to choose any number however large without any additional time or effort no matter how long it would take a human to say that number. Since this statement doesn't appear to have been clear enough (judging from the comments), we are explicitly considering a theoretical scenario and no claims are being made about how this might or might not carry over to the real world. In other words, I am claiming that the existence of perfect rationality does not follow purely from the laws of logic. If you are going to be difficult and argue that this isn't possible and that even hypothetical beings can only communicate a finite amount of information, we can imagine that there is a device that provides you with utility the longer that you speak, and that the utility it provides is exactly equal to the utility you lose by having to go to the effort of speaking, so that overall you are indifferent to the required speaking time.
In the comments, MattG suggested that the issue was that this problem assumed unbounded utility. That's not quite the problem. Instead, we can imagine that you can name any number less than 100, but not 100 itself. Further, as above, saying a long number either doesn't cost you utility or you are compensated for it. Regardless of whether you name 99 or 99.9 or 99.9999999, you are still choosing a suboptimal decision. But if you never stop speaking, you don't receive any utility at all.
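The bounded variant makes the same point without unbounded utilities: the achievable utilities form the set $\{x : x < 100\}$, whose supremum of 100 is not attained, and any particular choice can be strictly improved, since

$$x < 100 \;\Longrightarrow\; x < \tfrac{x + 100}{2} < 100.$$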
I'll admit that in our universe there is a perfectly rational option which balances speaking time against the utility you gain, given that we only have a finite lifetime and that you want to avoid dying in the middle of speaking the number, which would result in no utility gained. However, it is still notable that a perfectly rational being cannot exist within this hypothetical universe. How exactly this result applies to our universe isn't clear, but that's the challenge I'll set for the comments. Are there any realistic scenarios where the non-existence of perfect rationality has important practical applications?
Furthermore, there isn't an objective line between rational and irrational. You or I might consider someone who chose the number 2 to be stupid. Why not at least go for a million or a billion? But, such a person could have easily gained a billion, billion, billion utility. No matter how high a number they choose, they could have always gained much, much more without any difference in effort.
I'll finish by providing some examples of other games. I'll call the first game the Exploding Exponential Coin Game. We can imagine a game where you can choose to flip a coin any number of times. Initially you have 100 utility. Every time it comes up heads, your utility triples, but if it comes up tails, you lose all your utility. Furthermore, let's assume that this agent isn't going to raise the Pascal's Mugging objection. We can see that the agent's expected utility will increase the more times they flip the coin, but if they commit to flipping it unlimited times, they can't possibly gain any utility. Just as before, they have to pick a finite number of times to flip the coin, but again there is no objective justification for stopping at any particular point.
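To spell out the expected-utility claim: each flip multiplies your holdings by 3 with probability $\tfrac{1}{2}$ and by 0 with probability $\tfrac{1}{2}$, so committing in advance to $n$ flips gives

$$E[U_n] = 100 \cdot \left(\tfrac{3}{2}\right)^{n} \to \infty, \qquad P(U_n > 0) = 2^{-n} \to 0,$$

the same shape as the number-choosing game: every extra flip raises expected utility, yet "flip forever" guarantees nothing.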
Another example, I'll call the Unlimited Swap game. At the start, one agent has an item worth 1 utility and another has an item worth 2 utility. At each step, the agent with the item worth 1 utility can choose to accept the situation and end the game or can swap items with the other player. If they choose to swap, then the player who now has the 1 utility item has an opportunity to make the same choice. In this game, waiting forever is actually an option. If your opponents all have finite patience, then this is the best option. However, there is a chance that your opponent has infinite patience too. In this case you'll both miss out on the 1 utility as you will wait forever. I suspect that an agent could do well by having a chance of waiting forever, but also a chance of stopping after a high finite number. Increasing this finite number will always make you do better, but again, there is no maximum waiting time.
(This seems like such an obvious result, I imagine that there's extensive discussion of it within the game theory literature somewhere. If anyone has a good paper, that would be appreciated).
Link to part 2: Consequences of the Non-Existence of Rationality