Imagine that I'm offering a bet that costs 1 dollar to accept. The prize is X + 5 dollars, and the odds of winning are 1 in X. Accepting this bet therefore has an expected value of 5 dollars (a positive expected value), and offering it has an expected value of -5 dollars. It seems like a good idea to accept the bet, and a bad idea for me to offer it, for any reasonably sized value of X.
Does this still hold for unreasonably sized values of X? Specifically, what if I make X really, really big? If X is big enough, I can reasonably assume that, basically, nobody's ever going to win. I could offer a bet with odds of 1 in 10^100 once every second until the Sun goes out, and still expect, with near certainty, that I'll never have to make good on my promise to pay. So I can offer the bet without caring about its negative expected value, and take free money from all the expected value maximizers out there.
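To put a rough number on "basically nobody's ever going to win", here's a back-of-the-envelope sketch (the ~5-billion-year figure for the Sun's remaining lifetime is my own assumption):

```python
# Sketch: probability that a 1-in-10^100 bet, offered once per second until
# the Sun goes out, ever pays off. The ~5-billion-year lifetime is an assumption.
seconds_remaining = 5e9 * 365.25 * 24 * 3600   # roughly 1.6e17 seconds
p_win = 1e-100                                  # odds of 1 in 10^100 per bet

# The exact formula 1 - (1 - p)**n is useless here: 1 - p rounds to exactly 1.0
# in double precision. Since n * p << 1, the linear approximation n * p is fine.
p_ever_pays_out = seconds_remaining * p_win
print(p_ever_pays_out)   # ~1.6e-83: effectively never
```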
What's wrong with this picture?
See also: Taleb Distribution, Nick Bostrom's version of Pascal's Mugging
(Now, in the real world, I obviously don't have 10^100 + 5 dollars to cover my end of the bet, but does that really matter?)
Edit: I should have actually done the math. :(
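For completeness, here is the arithmetic I skipped, as a quick sketch: the expected value of accepting works out to 5/X dollars, not 5 dollars.

```python
from fractions import Fraction

def ev_accept(X):
    """Expected value, in dollars, of paying $1 for a 1-in-X chance at $(X + 5)."""
    p_win = Fraction(1, X)
    prize = X + 5
    return p_win * prize - 1   # = 5/X, which shrinks as X grows

print(ev_accept(10))         # 1/2
print(ev_accept(10**100))    # equals 5/10^100: vanishingly small, not $5
```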
I don't think it is. The cause of the confusion is just that the sums are wrong (and the conclusion is wrong). Replace the opening statement with "I'm Bill Gates, I'm offering you a bet - the cost to take the bet is $1, the prize for winning is $58 billion. The odds of winning are 1 in 57.99999 billion".
Now we're no longer talking about unrealistic amounts of money, but it still isn't a good bet for Bill to offer, because its expected value is negative. You do need to invoke the fact that wealth is finite to explain why martingales don't work, but this "system" isn't nearly as complicated as a martingale.
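Checking those numbers (a quick sketch, using exact fractions to avoid rounding):

```python
from fractions import Fraction

cost  = 1
prize = 58 * 10**9                    # $58 billion
p_win = Fraction(1, 57_999_990_000)   # 1 in 57.99999 billion

ev_taker = p_win * prize - cost       # ~ +$1.7e-07: barely worth taking
ev_bill  = -ev_taker                  # ~ -$1.7e-07: negative, so a bad bet to offer
print(float(ev_taker), float(ev_bill))
```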