Comment author: CarlShulman 04 June 2012 07:24:11PM *  3 points [-]

Kelly asked a question: given that you have finite wealth, how do you decide how much to bet on a given offered bet in order to maximize the rate at which your expected wealth grows?

The Kelly criterion doesn't maximize expected wealth, it maximizes expected log wealth, as the article you linked mentions:

The conventional alternative is utility theory which says bets should be sized to maximize the expected utility of the outcome (to an individual with logarithmic utility, the Kelly bet maximizes utility, so there is no conflict)

Suppose that I can make n bets, each time wagering any proportion of my bankroll that I choose, and then getting three times the wagered amount if a fair coin comes up Heads, and losing the wager on Tails. Expected wealth is maximized if I always bet the entire bankroll, for an expected wealth of (initial bankroll) x 3^n x 2^-n (the probability of all Heads), i.e. (3/2)^n times the initial bankroll. The Kelly criterion trades away from that maximum expected wealth in favor of log wealth.
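
To make this concrete, here's a minimal sketch of the tradeoff (the payout structure is the one above; the function names are mine):

```python
import math

p = 0.5  # probability of Heads
b = 2.0  # net odds: wagering w returns 3w on Heads, i.e. a net gain of 2w

# Kelly fraction for a binary bet: f* = (b*p - (1-p)) / b
f_star = (b * p - (1 - p)) / b  # = 0.25 for this bet

def expected_wealth(f, n, w0=1.0):
    """Expected wealth after n independent bets at fixed fraction f."""
    # Each bet multiplies wealth by (1 + b*f) w.p. p, and by (1 - f) w.p. 1-p.
    return w0 * (p * (1 + b * f) + (1 - p) * (1 - f)) ** n

def expected_log_growth(f):
    """Per-bet expected growth rate of log wealth (what Kelly maximizes)."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)
```

Betting the whole bankroll (f = 1) beats the Kelly fraction on expected wealth at every n, but the Kelly fraction maximizes the expected log growth rate, which is the sense in which it "wins" in the long run.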

A log-wealth utility function values gains less, but it also penalizes losses much more, with insane implications at the extremes. With log utility, multiplying wealth by 1,000,000 has the same marginal utility whatever your wealth, and dividing wealth by 1,000,000 has the negative of that utility. Consider these two gambles:

Gamble 1) Wealth of $1 with certainty.

Gamble 2) Wealth of $0.00000001 with 50% probability, wealth of $1,000,000 with 50% probability.

Log utility would favor $1, but for humans Gamble 2 is clearly better; there is very little difference for us between total wealth levels of $1 and a millionth of a cent.
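
The mismatch is easy to check numerically; a quick sketch, with the dollar amounts taken directly from the two gambles:

```python
import math

# Gamble 1: $1 with certainty.
u1 = math.log(1.0)  # = 0.0

# Gamble 2: $0.00000001 with probability 1/2, $1,000,000 with probability 1/2.
u2 = 0.5 * math.log(1e-8) + 0.5 * math.log(1e6)  # = -ln(10), about -2.3

# Log utility prefers the certain $1 (u1 > u2), even though for a human
# Gamble 2 is obviously better.
```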

Worse, consider these gambles:

Gamble 3) Wealth of $0.000000000000000000000000001 with certainty.

Gamble 4) Wealth of $1,000,000,000 with probability (1-1/3^^^3) and wealth of $0 with probability 1/3^^^3

Log utility favors Gamble 3, since it assigns $0 wealth infinite negative utility, and will sacrifice any finite gain to avoid it. But for humans Gamble 4 is vastly better, and a 1/3^^^3 chance of bankruptcy is negligibly worse than getting the $1,000,000,000 with certainty. Every day humans drive to engage in leisure activities, eat pleasant but not maximally healthful foods, go white-water rafting, and otherwise accept small (1 in 1,000,000, not 1 in 3^^^3) probabilities of death for local pleasure and consumption.
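
The same check for Gambles 3 and 4, using 1e-100 as a (far too large) stand-in for 1/3^^^3, which no float can represent:

```python
import math

# Gamble 3: a certain wealth of $1e-27.
u_gamble3 = math.log(1e-27)  # about -62, finite

# Gamble 4: $1e9 with probability (1 - p_ruin), $0 with probability p_ruin.
# Log utility assigns wealth $0 a utility of log(0) = -infinity, so any
# nonzero chance of ruin drags the expectation to -infinity.
p_ruin = 1e-100
u_gamble4 = (1 - p_ruin) * math.log(1e9) + p_ruin * float('-inf')

# u_gamble4 is -inf, so log utility prefers Gamble 3 no matter how
# small p_ruin is or how large the upside.
```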

This is not my utility function. I have diminishing utility over a range of wealth levels, which log utility can represent, but it weights losses around zero too highly, and still buys a 1 in 10^100 chance of $3^^^3 in exchange for half my current wealth if no higher EV bets are available, as in Pascal's Mugging.

Abuse of a log utility function (chosen originally for analytical convenience) is what led Martin Weitzman astray in his "Dismal Theorem" analysis of catastrophic risk, suggesting that we should pay any amount to avoid zero world consumption (and not on astronomical waste grounds or the possibility of infinite computation or the like, just considering the limited populations Earth can support using known physics).

Comment author: albeola 04 June 2012 10:25:23PM 0 points [-]

The original justification for the Kelly criterion isn't that it maximizes a utility function that's logarithmic in wealth, but that it provides a strategy that, in the infinite limit, does better than any other strategy with probability 1. This doesn't mean that it maximizes expected utility (as your examples for linear utility show), but it's not obvious to me that the attractiveness of this property comes mainly from assigning infinite negative value to zero wealth, or that using the Kelly criterion is a similar error to the one Weitzman made.
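
That almost-sure property can be illustrated with a quick simulation; a sketch, reusing the 2:1 coin from the parent comment (the rival fraction 0.6 is an arbitrary over-bet of my choosing):

```python
import math
import random

def final_log_wealth(f, flips):
    """Log of final wealth (starting from 1) after betting fraction f on
    a coin that triples the wager on Heads and loses it on Tails."""
    return sum(math.log(1 + 2 * f) if heads else math.log(1 - f)
               for heads in flips)

rng = random.Random(0)  # fixed seed for reproducibility
n_bets, trials = 1000, 200
wins = 0
for _ in range(trials):
    flips = [rng.random() < 0.5 for _ in range(n_bets)]
    # Kelly bettor (f = 0.25) vs an over-bettor (f = 0.6) on the SAME flips.
    if final_log_wealth(0.25, flips) > final_log_wealth(0.6, flips):
        wins += 1
# Over long horizons the Kelly bettor finishes ahead in essentially every
# trial, which is the sense in which Kelly dominates other fixed fractions.
```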

Comment author: private_messaging 03 June 2012 03:24:43PM *  5 points [-]

Ohh, that's easily the one on which you guys can do most harm by associating the safety concern with crankery, as long as you look like cranks but do not realize it.

Speaking of which, use of complicated things you poorly understand is a surefire way to make it clear you don't understand what you are talking about. It is awesome for impressing people who understand those things even more poorly, or are very unconfident in their understanding, but it won't work on competent experts.

Simple example [of how not to promote beliefs]: the idea that Kolmogorov complexity or Solomonoff probability favours the many-worlds interpretation because it is 'more compact' [without having any 'observer']. Why wrong: if you are seeking the lowest-complexity description of your input, your theory needs to also locate yourself within whatever stuff it generates somehow (hence an appropriate discount for something really huge like MWI). Why stupid: because if you don't require that, then the iterator through all possible physical theories is the lowest-complexity 'explanation' and we're back to square one. How it affects other people's opinion of your relevance: very negatively for me. edit: To clarify, the argument is bad, and I'm not even getting into details such as non-computability, our inability to represent theories in the most compact manner (so we are likely to pick not the most probable theory but the one we can compactify more easily), machine/language dependence, etc.

edit: Another issue: there was the mistake in phases in the interferometer. A minor mistake, maybe (or maybe i was confused with a phase of 180, in which case it is a major misunderstanding). But it is the kind of mistake that people who refrain from talking about topics they don't understand are exceedingly unlikely to make (it's precisely the thing you double-check). Not being sloppy with MWI, Kolmogorov complexity, etc. is easy: you just need to study what others have concluded. Not being sloppy with AI is a lot harder. Being less biased won't in itself make you significantly less sloppy.

Comment author: albeola 04 June 2012 07:16:00PM 3 points [-]

if you are seeking the lowest-complexity description of your input, your theory needs to also locate yourself within whatever stuff it generates somehow (hence an appropriate discount for something really huge like MWI)

It seems to me that such a discount exists in all interpretations (at least those that don't successfully predict measurement outcomes beyond predicting their QM probability distributions). In Copenhagen, locating yourself corresponds to specifying random outcomes for all collapse events. In hidden variables theories, locating yourself corresponds to picking arbitrary boundary conditions for the hidden variables. Since MWI doesn't need to specify the mechanism for the collapse or hidden variables, it's still strictly simpler.

In response to comment by albeola on "Progress"
Comment author: Oligopsony 04 June 2012 05:42:21AM 7 points [-]

Nobody, stated explicitly, but the word "progress" links a lot of those dimensions together, so it's easy to think, functionally, as if they are. Wiggins and all that.

In response to comment by Oligopsony on "Progress"
Comment author: albeola 04 June 2012 06:59:55PM 3 points [-]

There's a difference between thinking as if dimensions are linked together, and thinking as if there's "some cosmic niceness built into the universe that makes everything improve monotonically along every dimension at once" (emphasis mine). Switching between attacking moderate and extreme versions of the same claim is classic logical rudeness.

In response to "Progress"
Comment author: albeola 04 June 2012 04:56:44AM 3 points [-]

But there isn't some cosmic niceness built into the universe that makes everything improve monotonically along every dimension at once.

Who believes this?

Comment author: RolfAndreassen 09 May 2012 01:42:51AM 0 points [-]

I really don't see why. A zebra crossing is a sequence of black and white stripes. Exchanging the colours just means you start with white instead of black, or vice-versa. It's the stripiness that's important, not the ordering.

Comment author: albeola 09 May 2012 01:52:12AM *  1 point [-]

I was assuming you'd see both colors as the same. Then a zebra crossing would just look like an ordinary stretch of road. That wouldn't kill you. What would kill you is to see an ordinary stretch of road as a zebra crossing. If that were to happen, though, it definitely wouldn't be at the next zebra crossing.

Comment author: fubarobfusco 08 May 2012 11:00:50PM 0 points [-]

One concern is whether the newly-minted atheist will subsequently prove to emself that black is white, and be killed in the next zebra crossing, as Douglas Adams put it.

For instance, if Alice is so taken with her newfound freedom from faith that she boasts loudly about it and gets herself disowned, expelled, and otherwise disadvantaged, that would kind of suck.

Comment author: albeola 08 May 2012 11:08:31PM 0 points [-]

prove to emself that black is white, and be killed in the next zebra crossing

You wouldn't be killed, you'd just fail to cross the street.

Comment author: prase 08 May 2012 04:08:51PM *  0 points [-]

This is an example of a bias, we call it "expecting short inferential distances".

"Inferential distance" is LW jargon. Does the bias have a standard name?

Comment author: albeola 08 May 2012 09:55:24PM 0 points [-]
Comment author: MichaelGR 03 May 2012 05:33:52PM *  21 points [-]

If you want to build a ship, don't drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea...

  • Antoine de Saint-Exupéry
Comment author: albeola 06 May 2012 11:11:45PM 3 points [-]
Comment author: shminux 06 May 2012 08:45:28PM -1 points [-]

I'm pretty sure I expressed my opinion on this topic precisely ("no, it's not compatible"). It's up to you how you choose to misunderstand it; I have no control over that.

Comment author: albeola 06 May 2012 09:24:51PM *  0 points [-]

spending their life complaining about how they would do this and that if only they didn't have akrasia

Do you agree the quoted property differs from the property of "having akrasia" (which is the property we're interested in); that one might have akrasia without spending one's life complaining about it, and that one might spend one's life complaining about akrasia without having (the stated amount of) akrasia (e.g. with the deliberate intent to evade obligations)? If this inaccuracy were fixed, would your original response retain all its rhetorical force?

(It's worth keeping in mind that "akrasia" is more a problem description saying someone's brain doesn't produce the right output, and not an actual specific mechanism sitting there impeding an otherwise-functioning brain from doing its thing, but I don't think that affects any of the reasoning here.)

Comment author: William_Kasper 06 May 2012 08:10:15PM *  25 points [-]

[Political "gaffe" stories] are completely information-free news events, and they absolutely dominate political news coverage and analysis. It's like asking your doctor if the X-rays show a tumor, and all he'll talk about is how stupid the radiologist's haircut looks. . . . ["Blast"] stories are. . . just as content-free as the "gaffe" stories. But they are popular for the same reason: There's a petty, tribal satisfaction in seeing a member of our team really put the other team in their place. And there's a rush of outrage adrenaline when the other team says something mean about us. So, instead of covering pending legislation or the impact it could have on your life, the news media covers the dick-measuring contest.

-David Wong, 5 Ways to Spot a B.S. Political Story in Under 10 Seconds

Comment author: albeola 06 May 2012 09:07:11PM 7 points [-]

instead of covering pending legislation or the impact it could have on your life

If "impact on your life" is the relevant criterion, then it seems to me Wong should be focusing on the broader mistake of watching the news in the first place. If the average American spent ten minutes caring about e.g. the Trayvon Martin case, then by my calculations that represents roughly a hundred lifetimes lost.

View more: Prev | Next