You are at a casino. You have $1. A table offers you a game: you have to bet all your money; a fair coin will be tossed; if it lands heads, you triple your money; if it lands tails, you lose everything.
In the first round, it is rational to take the bet: the expected value is (1/2) · $3 + (1/2) · $0 = $1.50, which is greater than the $1 you started with.
If you win the first round, you'll have $3. In the next round, it is rational to take the bet again, since the expected value is $4.50, which is larger than $3.
If you win the second round, you'll have $9. In the next round, it is rational to take the bet again, since the expected value is $13.50, which is larger than $9.
You get the idea. At every round, if you won the previous round, it is rational to take the next bet.
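The inductive argument above can be sketched in a few lines. The function name `bet_ev` is just an illustrative label; the arithmetic is exactly the one from the rounds above:

```python
def bet_ev(bankroll):
    """Expected value of betting the whole bankroll on one round:
    1/2 chance of tripling it, 1/2 chance of losing it all."""
    return 0.5 * 3 * bankroll + 0.5 * 0

# After winning k rounds starting from $1, the bankroll is 3**k.
for k in range(4):
    bankroll = 3 ** k
    print(f"bankroll ${bankroll}: EV of betting = ${bet_ev(bankroll)}")
```

At every state, the EV of betting is 1.5× the current bankroll, so a naive expected-value maximizer always bets.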
But if you follow this strategy, you will lose everything with probability 1: the chance of surviving n rounds is (1/2)^n, which goes to zero. You will go home with nothing. And that seems irrational.
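A quick Monte Carlo sketch of the always-bet strategy (the round cap of 10,000 is just a practical safeguard; surviving that long has probability 2^-10000):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def play_until_broke(max_rounds=10_000):
    """Always take the bet; return the final bankroll (0 unless we hit the cap)."""
    bankroll = 1
    for _ in range(max_rounds):
        if random.random() < 0.5:  # heads: triple the money
            bankroll *= 3
        else:                      # tails: lose everything
            return 0
    return bankroll  # astronomically unlikely to reach here

results = [play_until_broke() for _ in range(100_000)]
print(sum(results))  # every simulated gambler goes broke
```

Despite the positive expected value at every single step, no simulated gambler ever takes money home.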
Intuitively, it feels that the rational thing to do is to quit while you are ahead, but how do you get that prediction out of the maximization of expected utility? Or does the above analysis only feel irrational because humans are loss-averse? Or is loss-aversion somehow optimal here?
Anyway, please dissolve my confusion.
Suppose that at the beginning of the game, you decide to play no more than N turns. If you lose all your money by then, oh well; if you don't, you call it a day and go home.
With this strategy, your expected value is (1/2)^N · 3^N = (3/2)^N, so the longer you decide to play, the higher your expected value. But is a 1/2^100 chance of winning 3^100 dollars really better than a 1/2 chance of winning $3? Just because the expected value is higher doesn't mean that you should keep playing. It doesn't matter how high the expected value is if the payoff hinges on a 1/2^100 probability event that is unlikely to happen in the entire lifetime of the Universe.
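The tension can be made concrete by tabulating the "play at most N rounds" strategy: you win 3^N dollars with probability (1/2)^N and otherwise walk away with nothing, so the EV is (3/2)^N:

```python
# EV and win probability of committing to exactly N all-in rounds.
for n in (1, 10, 100):
    ev = 1.5 ** n      # (1/2)**n * 3**n
    p_win = 0.5 ** n
    print(f"N={n:>3}: EV = ${ev:.4g}, P(win) = {p_win:.4g}")
```

The EV grows without bound in N, while the probability of ever seeing a payout vanishes; at N=100 the win probability is on the order of 10^-31.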