
Comment author: Lumifer 16 December 2014 01:22:17AM 3 points [-]

Yes, Krugman correctly predicted that the post-2008 flood of money would not lead to quick inflation. That's the example I've seen literally dozens of times as the "proof" that Krugman is right and everyone else is wrong.

Can I see any other pieces of evidence?

Comment author: Larks 17 December 2014 03:15:13AM 5 points [-]

There was that time when he predicted that fiscal tightening in 2013 would refute his ideological opponents, and then ... totally failed to admit he was wrong when the evidence came out against him.

Comment author: bramflakes 03 December 2014 03:56:16PM *  27 points [-]

When you hear an economist on TV "explain" the decline in stock prices by citing a slump in the market (and I have heard this pseudo-explanation more than once) it is time to turn off the television.

Thomas J. McKay, Reasons, Explanations and Decisions

Comment author: Larks 11 December 2014 02:24:37AM 1 point [-]

I guess technically if a lot of stocks paid their dividend on the same day (went ex-divvie) you could get a 0.5-1% fall in stock prices (depending on the dividend yield at the time) without there being a slump - the value of those dividends, which have now been paid out, is simply no longer part of the market. But I agree wholeheartedly with the sentiment.
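A toy calculation makes the size of that mechanical drop concrete (the index level, yield, and payout schedule below are made-up illustrative numbers):

```python
# Toy sketch (assumed numbers): if every stock in an index went
# ex-dividend on the same day, the index would mechanically drop
# by the cash paid out, with no change in underlying value.
index_level = 2000.0
annual_div_yield = 0.02      # assumed 2% aggregate dividend yield
payments_per_year = 2        # assumed semiannual payouts

drop = index_level * annual_div_yield / payments_per_year
pct_fall = drop / index_level * 100

print(drop, pct_fall)        # 20.0 index points, a 1.0% "fall" with no slump
```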

Comment author: SilentCal 09 December 2014 06:05:24PM 21 points [-]

My view, and a lot of other people here seem to also be getting at this, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things 'permissible' and 'impermissible', and utilitarianism doesn't natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum 'true' and everything else false, but that doesn't give a realistically human-followable result. Some philosophers have worked on 'satisficing consequentialism', which is a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.

There's some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
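The two real-to-bool conversions can be sketched in a few lines (purely illustrative; the utilities and threshold below are made up):

```python
def permissible_maximizing(utility, options, choice):
    """Maximizing conversion: only the utility-maximizing option(s) count as permissible."""
    return utility(choice) == max(utility(o) for o in options)

def permissible_satisficing(utility, choice, threshold):
    """Satisficing conversion: anything at or above a 'good enough' bar is permissible."""
    return utility(choice) >= threshold

# Toy example: donating 10% is "good enough" but not utility-maximal.
utility = {"donate_nothing": 0, "donate_10pct": 7, "donate_everything": 10}.get
options = ["donate_nothing", "donate_10pct", "donate_everything"]

print(permissible_maximizing(utility, options, "donate_10pct"))        # False
print(permissible_satisficing(utility, "donate_10pct", threshold=5))   # True
```

The demandingness objection lives in the first function: it labels everything short of the maximum impermissible. The satisficing project is about choosing a defensible threshold instead.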

Comment author: Larks 10 December 2014 03:37:02AM 1 point [-]

I'm not sure you can really say it's a 'misuse' if it's how Bentham used it. He is essentially the founder of modern utilitarianism. If any use is a misuse, it is scalar utilitarianism. (I do not think that is a misuse either).

Comment author: Stuart_Armstrong 04 December 2014 12:27:11PM 0 points [-]

I'm not implicitly assuming it - the market models were the ones explicitly assuming it.

Comment author: Larks 05 December 2014 01:08:21AM 1 point [-]

Up to this point in the post you haven't mentioned any models. If you give a probability without first mentioning a model for it to be relative to, the implication is that you are endorsing the implicit model. But this is just nit-picking.

More importantly, there are many models used by people in the market. Plenty of people use far more sophisticated models. You can't just say "the market models" without qualification or citation.

Comment author: Vaniver 03 December 2014 08:00:54PM 1 point [-]

For this particular example, this basically means that you can predict that LTCM will fail spectacularly when rare negative events happen. But could you reliably make money knowing that LTCM will fail eventually? If you buy their options that pay off when terrible things happen, you're trusting that they'll be able to pay the debts you're betting they can't pay. If you short them, you're betting that the failure happens before you run out of money.

Comment author: Larks 04 December 2014 04:02:10AM 1 point [-]

you're trusting that they'll be able to pay the debts you're betting they can't pay.

LTCM should not be your counter-party! Also, using a clearinghouse eliminates much of the risk.

Comment author: Lumifer 01 December 2014 07:00:10PM *  2 points [-]

BS is not just an equation, it is also a model

Yes. Or, rather, there is a Black-Scholes options pricing model which gives rise to the Black-Scholes equation.

It predicts the relationships

No, it does not predict, it specifies this relationship.

In as much as you can estimate the volatility (the rest is pretty clear) you can see whether the model is correct.

Heh. And how are you going to disambiguate between your volatility estimate being wrong and the model being wrong?

Let me repeat again: Black-Scholes does not price options in the real world in the sense that it does not tell you what the option price should be. Black-Scholes is two things.

First, it's a model (a map in local terms) which describes a particular simple world. In the Black-Scholes world, for example, prices are continuous. As usual, this model resembles the real world in certain aspects and does not match it on other aspects. Within the Black-Scholes world, the Black-Scholes option price holds by arbitrage -- that is, if someone offers a market in options at non-BS prices you would be able to make riskless profits off them. However the real world is not the Black-Scholes world.

Second, it's a converter between price and implied volatility. In the options markets it's common to treat these two terms interchangeably in the recognition that given all other inputs (which are observable and so known) the Black-Scholes formula gives you a specific price for each volatility input and vice versa, gives you a specific implied volatility for each price input.
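That price↔implied-volatility conversion is just the Black-Scholes formula plus a numerical inverse. A minimal sketch for a European call (bisection inversion; the parameter values in the test below are arbitrary illustrative choices):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (spot S, strike K, maturity T, rate r, vol sigma)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert bs_call in sigma by bisection (bs_call is monotone increasing in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Round-tripping a price through `implied_vol` recovers the volatility that produced it, which is exactly the sense in which traders treat price and implied vol as interchangeable quotes.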

Comment author: Larks 04 December 2014 04:01:02AM 0 points [-]

Yes, there's a reason we look at options-implied vol - it's because B-S and the like are where we get our estimates of vol from!

Comment author: Larks 04 December 2014 03:54:46AM 1 point [-]

Such events have a probability of around 10^-50 of happening

No, you cannot infer a probability just from a SD. You also need to know what type of distribution it is. You're implicitly assuming a normal distribution, but everyone knows asset price returns have negative skew and excess kurtosis.

You could easily correct this by adding "If you use a normal distribution...".
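For concreteness, here is where numbers like that come from under the normal assumption (the 15-sigma figure is my illustrative choice, not from the original post):

```python
import math

def normal_tail(k):
    """P(Z > k) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

# Under a normal distribution, a 15-sigma move is a ~1-in-10^50 event.
# With the negative skew and excess kurtosis of real return distributions,
# the true probability is vastly higher, so the "10^-50" framing misleads.
print(normal_tail(15.0))   # ~3.7e-51
```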

Comment author: Stuart_Armstrong 01 December 2014 11:05:54AM 0 points [-]

The model used is the Black-Scholes model with, as you point out, a normal distribution. It endures, despite being clearly wrong, because there don't seem to be any good alternatives.

Comment author: Larks 04 December 2014 03:50:03AM 3 points [-]

Why were you using an options-pricing model to predict stock returns? Black-Scholes is not used to model equity market returns.

Comment author: pcm 02 December 2014 03:38:00AM 0 points [-]

I don't see anything about access to code on p82. Are you inferring that from "closely monitor"?

Comment author: Larks 02 December 2014 04:44:45AM 0 points [-]

Yes, and good (implicit) point - perhaps Nick had in mind something slightly less close than access to their codebase.

Comment author: Lumifer 28 October 2014 04:33:25PM 3 points [-]

Alternate history is not falsifiable, of course, but that scenario doesn't look all that likely to me. Russia successfully recovered from losing a very large chunk of its territory, a great deal of its army, and most of its manufacturing capacity to the Germans in 1941-1942. Losing a few cities (even assuming the bombers could get through -- there were no ICBMs, and Russia in 1945 had a pretty good air force and AA capabilities) would not cripple Russia. I would guess that it would just incentivize it to roll over the remainder of Europe. It's not like Stalin ever cared about casualties.

Comment author: Larks 01 December 2014 02:53:20AM 0 points [-]

Good point, but I think Bostrom's point about risk aversion does much to ameliorate it. If the US had had a 50% chance of securing global hegemony, and a 50% chance of destruction from such a move, it probably would not have done it. A non-risk-averse, non-deontological AI, on the other hand, with its eye on the light cone, might consider the gamble worthwhile.
