Ohh, that's easily the area where you guys can do the most harm by associating the safety concern with crankery, if you look like cranks without realizing it.
Speaking of which, using complicated things you poorly understand is a surefire way to make it clear you don't understand what you are talking about. It is awesome for impressing people who understand those things even more poorly, or who are very unconfident in their understanding, but it won't work on competent experts.
Simple example [of how not to promote beliefs]: the idea that Kolmogorov complexity or Solomonoff probability favours the many-worlds interpretation because it is 'more compact' [without having any 'observer']. Why it's wrong: if you are seeking the lowest-complexity description of your input, your theory also needs to somehow locate you within whatever stuff it generates (hence an appropriate discount for something really huge like MWI). Why it's stupid: because if you don't require that, then an iterator through all possible physical theories is the lowest-complexity 'explanation', and we're back to square one. How it affects other people's opinion of your relevance: very negatively, for me. edit: To clarify, the argument is bad, and I'm not even getting into details such as non-computability, our inability to represent theories in the most compact manner (so we are likely to pick not the most probable theory but the one we can compress more easily), machine/language dependence, etc.
edit: Another issue: there was the mistake in the phases in the interferometer. A minor mistake, maybe (or maybe the factor of i was confused with a phase of 180 degrees, in which case it is a major misunderstanding). But it is the kind of mistake that people who refrain from talking about topics they don't understand are exceedingly unlikely to make (it's precisely the thing you double-check). Not being sloppy with MWI, Kolmogorov complexity, etc. is easy: you just need to study what others have concluded. Not being sloppy with AI is a lot harder. Being less biased won't in itself make you significantly less sloppy.
The Kelly criterion doesn't maximize expected wealth; it maximizes expected log wealth, as the article you linked mentions:
Suppose that I can make n bets, each time wagering any proportion of my bankroll that I choose and then getting three times the wagered amount if a fair coin comes out Heads, and losing the wager on Tails. Expected wealth is maximized if I always bet the entire bankroll: I end with 3^n times the initial bankroll with probability 2^-n (all Heads) and zero otherwise, for an expected wealth of (initial bankroll) × 3^n × 2^-n = (initial bankroll) × 1.5^n. The Kelly criterion trades off from that maximum expected wealth in favor of log wealth.
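The trade-off is easy to see numerically. A minimal sketch, using the payoffs from the example above (the Kelly fraction (b·p − q)/b is the standard formula for a bet paying b:1 with win probability p):

```python
import math

# Payoffs from the example: wager a fraction f of the bankroll;
# Heads (p = 1/2) returns three times the wager (net gain 2f),
# Tails loses the wager.
P = 0.5
B = 2.0  # net payout multiple on a win

def wealth_growth(f):
    """Expected multiplicative wealth growth per round."""
    return P * (1 + B * f) + (1 - P) * (1 - f)

def log_growth(f):
    """Expected growth of log wealth per round."""
    return P * math.log(1 + B * f) + (1 - P) * math.log(1 - f)

fs = [i / 100 for i in range(100)]  # stop short of f = 1: log(0) on a loss
best_wealth = max(fs, key=wealth_growth)
best_log = max(fs, key=log_growth)

print(best_wealth)  # 0.99: expected wealth just wants the largest possible stake
print(best_log)     # 0.25: the Kelly fraction (B*P - (1-P))/B
```

Expected wealth growth is linear and increasing in f, so it is maximized at the edge of the grid, while expected log growth peaks at the interior Kelly fraction.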
A log-wealth utility function values gains less, but it also values losses much more, with insane implications at the extremes. With log utility, multiplying wealth by 1,000,000 has the same marginal utility whatever your wealth, and dividing wealth by 1,000,000 has the negative of that utility. Consider these two gambles:
Gamble 1) Wealth of $1 with certainty.
Gamble 2) Wealth of $0.00000001 with 50% probability, wealth of $1,000,000 with 50% probability.
Log utility would favor the certain $1, but for humans Gamble 2 is clearly better; there is very little difference for us between total wealth levels of $1 and a millionth of a cent.
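For concreteness, the comparison under pure log-wealth utility (a minimal sketch; the dollar amounts are the ones above):

```python
import math

# Expected log-wealth utility of each gamble (wealth in dollars)
gamble1 = math.log(1.0)                               # certain $1 -> 0.0
gamble2 = 0.5 * math.log(1e-8) + 0.5 * math.log(1e6)  # ~ -2.3

print(gamble1 > gamble2)  # True: log utility takes the certain $1
```

The fifty-fifty shot at $1,000,000 loses because the log of a millionth of a cent is more negative than the log of a million dollars is positive.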
Worse, consider these gambles:
Gamble 3) Wealth of $0.000000000000000000000000001 with certainty.
Gamble 4) Wealth of $1,000,000,000 with probability (1-1/3^^^3) and wealth of $0 with probability 1/3^^^3.
Log utility favors Gamble 3, since it assigns $0 wealth infinite negative utility, and will sacrifice any finite gain to avoid it. But for humans Gamble 4 is vastly better, and a 1/3^^^3 chance of bankruptcy is negligibly worse than wealth of $1. Every day humans drive to engage in leisure activities, eat pleasant but not maximally healthful foods, go white-water rafting, and otherwise accept small (1 in 1,000,000, not 1 in 3^^^3) probabilities of death for local pleasure and consumption.
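The infinite penalty at zero is easy to exhibit directly. A sketch, using a representable stand-in probability, since 1/3^^^3 underflows any floating-point type:

```python
import math

def log_u(w):
    """Log utility; zero wealth gets utility negative infinity."""
    return math.log(w) if w > 0 else float("-inf")

p_ruin = 1e-300  # stand-in for 1/3^^^3, which no float can represent

gamble3 = log_u(1e-27)                                   # ~ -62.2, but finite
gamble4 = (1 - p_ruin) * log_u(1e9) + p_ruin * log_u(0)  # -inf

print(gamble3 > gamble4)  # True: log utility takes the near-zero certainty
```

Any nonzero probability of zero wealth, however tiny, drags the expected log utility to negative infinity, so the near-worthless sure thing wins.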
This is not my utility function. I have diminishing utility over a range of wealth levels, which log utility can represent, but log utility weights losses around zero too highly, and it still buys a 1 in 10^100 chance of $3^^^3 in exchange for half my current wealth if no higher-EV bets are available, as in Pascal's Mugging.
Abuse of a log utility function (chosen originally for analytical convenience) is what led Martin Weitzman astray in his "Dismal Theorem" analysis of catastrophic risk, suggesting that we should pay any amount to avoid zero world consumption (and not on astronomical waste grounds or the possibility of infinite computation or the like, just considering the limited populations Earth can support using known physics).
The original justification for the Kelly criterion isn't that it maximizes a utility function that's logarithmic in wealth, but that it provides a strategy that, in the infinite limit, does better than any other strategy with probability 1. This doesn't mean that it maximizes expected utility (as your examples for linear utility show), but it's not obvious to me that the attractiveness of this property comes mainly from assigning infinite negative value to zero wealth, or that using the Kelly criterion is a similar error to the one Weitzman made.
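That almost-sure dominance shows up quickly by simulation in the coin-flip example above (a sketch: the all-in bettor survives n rounds only with probability 2^-n, while the Kelly bettor never hits zero):

```python
import random

random.seed(0)
ROUNDS, TRIALS = 100, 10_000
KELLY = 0.25  # (b*p - q)/b for net payout b = 2, win probability p = 1/2

def final_wealth(fraction):
    """Wealth after ROUNDS bets of a fixed fraction, starting from 1."""
    w = 1.0
    for _ in range(ROUNDS):
        stake = fraction * w
        w += 2 * stake if random.random() < 0.5 else -stake
    return w

wins = sum(final_wealth(KELLY) > final_wealth(1.0) for _ in range(TRIALS))
print(wins / TRIALS)  # 1.0 here: the all-in bettor is ruined in every trial
```

Of course, beating another strategy with probability approaching 1 is not the same as maximizing expected utility, which is exactly the distinction at issue in this thread.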