MattG comments on Open thread, 11-17 March 2014 - Less Wrong Discussion
I'm curious what others' thoughts are on Black Swan Theory/Knightian uncertainty vs. pure Bayesian reasoning. Do you think there are cases where Bayesian prediction will tend to do more harm than good?
BrienneStrohl posted something on her Facebook saying she thought the phrase "Knightian uncertainty" had negative information value, and an interesting conversation ensued between Brienne, myself, Eliezer, Kevin Carlson, and a few others. It's of particular interest to me because Black Swan Theory is so central to how I view the world. If it turns out I've been assuming unpredictability when I should be trusting my predictions, I'll have to reevaluate some of the choices I've made.
Here's the original conversation for context, if you're interested: https://www.facebook.com/strohl89/posts/10152237491864598
I think Knightian uncertainty is a very useful concept. Sometimes "I don't know" is the right answer. I can't estimate the probabilities, I have no evidence, no decent priors -- I just do not know. It's much better to accept that than to start inventing fictional probabilities.
Black Swan isn't a theory, it's basically a correct observation that statistical models of the world are limited in many important ways and depend on many implicit and explicit assumptions (a typical assumption is the stability of the underlying process). When an assumption turns out to be wrong the model breaks, sometimes in a spectacular way.
Nassim Taleb tried to make a philosophy out of that observation. I am not particularly impressed by it.
The trouble, of course, is that "I don't know" is not an action. If "I don't know" means "don't deviate from the status quo," that can be a bad plan if the status quo is bad.
Yes, and why is this "trouble"?
The only point of probabilities is to have them guide actions. How does the concept of Knightian uncertainty help in guiding actions?
More concretely than Lumifer's answer, it would encourage you to diversify your plans and try not to rely on leveraging any one model or enterprise. It also encourages you to play the odds instead of playing it safe, because safe is rarely as safe as you think it is. Try new things regularly, since the cost of doing them is generally linear while the payoff could easily be exponential.
That's what I got out of it, anyways.
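The "try new things: linear cost, possibly huge payoff" advice above can be illustrated with a toy simulation. All the numbers here (cost, win probability, payoff size) are made up for illustration:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def try_something(cost=1.0, p_big_win=0.01, payoff=500.0):
    """One cheap experiment: usually a small loss, rarely a large win."""
    return (payoff if random.random() < p_big_win else 0.0) - cost

n_trials = 10_000
outcomes = [try_something() for _ in range(n_trials)]

total = sum(outcomes)
wins = sum(1 for o in outcomes if o > 0)

print(f"wins: {wins}/{n_trials} ({wins / n_trials:.1%})")
print(f"total profit: {total:.0f}")
# Almost every individual trial loses, but the expected value per trial
# is 0.01 * 500 - 1 = +4, so a long run of cheap trials comes out ahead.
```

The point is that when payoffs are convex, most attempts failing is compatible with the overall strategy doing very well.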
I'm not actually sure the concept can do all that work, mostly because we don't have plausible theories for making decisions from imprecise probabilities (with probability we have expected utility maximization). See e.g. this very readable paper.
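One decision rule that has been proposed for imprecise probabilities is Gamma-maximin: pick the act whose worst-case expected utility over the set of admissible probabilities is highest. A minimal sketch, with a made-up probability interval and made-up utilities:

```python
# Gamma-maximin: choose the act maximizing worst-case expected utility
# over an interval of probabilities for a binary event.
# The interval [0.2, 0.7] and the utilities are illustrative, not from the thread.

acts = {
    "risky": (100.0, -50.0),  # (utility if event occurs, utility if not)
    "safe":  (10.0, 10.0),
}

p_low, p_high = 0.2, 0.7  # Knightian: we only know p lies in this interval

def worst_case_eu(u_event, u_no_event):
    # Expected utility is linear in p, so its minimum over the interval
    # is attained at one of the endpoints.
    return min(p * u_event + (1 - p) * u_no_event for p in (p_low, p_high))

choice = max(acts, key=lambda a: worst_case_eu(*acts[a]))
print(choice)  # -> "safe": risky's worst case is 0.2*100 + 0.8*(-50) = -20
```

Note how this differs from expected utility maximization with a point probability: any p above about 0.4 would make "risky" the EU-maximizing choice, but the worst-case rule refuses to commit to a single p.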
I don't agree with that (a quick example is that speculating about the Big Bang is entirely pointless under this approach), but that's a separate discussion.
It allows you to not invent fake probabilities and suffer from believing you have a handle on something when in reality you don't.
Such speculation may help guide actions regarding future investments in telescopes, decisions on whether to try to look for aliens, etc.
OK, I'll give you that we might non-instrumentally value the accuracy of our beliefs (even so, I don't know how to unpack 'accuracy' in a way that can handle both probabilities and uncertainty, but I agree this is another discussion). I still suspect that the concept of uncertainty doesn't help with instrumental rationality, bracketing the supposed immorality of assigning probabilities from sparse information. (Recall that you claimed Knightian uncertainty was 'useful'.)
When I mention black swan theory, I guess I'm talking more about Taleb's thoughts on the consequences of the fact you mentioned above (mostly covered on [this Wikipedia page](http://en.wikipedia.org/wiki/Black_swan_theory)).
Basic Tenets as I understand them:
I would like to see evidence for (1) which goes beyond "future is uncertain and large-impact events are important".
(2) is just part of the definition of what a black swan is.
(3a) is Taleb's idea of antifragility. I am not sure it's practical. For any system that you can build I can imagine an improbable event which will smash it.
As to (3b) Taleb ran a hedge fund for a while, if I recall correctly. It did badly. Taleb doesn't like to mention it.
(3c) is just good risk management and again, see (3a). I don't know what the practical suggestions are beyond diversification. Hedging against disaster (typically by buying volatility or selling short) implies losses if the disaster does not happen.
Have you read the book?
I have read The Black Swan, I have not read Antifragile.
I think anti-fragility makes sense if you think of it as existing over a range of stressors rather than being an absolute quality.
Wikipedia says:
Without having the numbers for 2001 to 2004, it's hard to say how badly it ran.
Universa, the hedge fund Taleb is currently advising, seems to be doing well enough to have $6 billion in assets under management, but I can't easily find numbers on its returns.
Scott Aaronson has a concrete example of Knightian uncertainty in his paper The Ghost in the Quantum Turing Machine.
I associate Knightian Uncertainty with Eliezer's description of Expected Creative Surprises.
That is to say, I am uncertain what a rival company will do, but I know they will try to achieve a goal. When achieving that goal involves surprising me, I should expect them to surprise me even though I'm using the best model I can to model them.
There's only one way to find out. Write your predictions down and calibrate yourself.
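A minimal sketch of what "calibrate yourself" can look like in practice: bucket your recorded predictions by stated confidence and compare against how often they came true. The (confidence, outcome) pairs below are invented for illustration:

```python
from collections import defaultdict

# Made-up prediction log: (stated confidence, whether it came true).
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, False), (0.6, True),
]

# Group outcomes by the confidence level that was stated in advance.
buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}, observed {hit_rate:.0%} "
          f"over {len(outcomes)} predictions")
```

If the observed frequencies sit consistently below the stated confidences, you're overconfident in those buckets; with only a handful of predictions per bucket, though, the estimates are very noisy.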
This actually assumes that the Bayesian model is accurate.
Under Black Swan Theory, you can't use past correct predictions to predict future correct predictions.
For example, most of the variance in the stock market is concentrated in a handful of days in its history. I could have calibrated on every day leading up to one of those days and felt confident in my ability to predict the stock market... but just one of those days could have wiped out my portfolio.
On the Black Swan view of the world, calibration necessarily makes you overconfident.
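The "variance concentrated in a few days" point can be shown with synthetic data: compare how much of the total variance the top 1% of days carry under a thin-tailed (normal) model versus a heavy-tailed one (Student-t with 3 degrees of freedom). All data here is simulated, not real market returns:

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility

def student_t(df):
    # Sample from a Student-t distribution: normal / sqrt(chi-squared / df).
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

def top_share(samples, top_frac=0.01):
    # Fraction of total squared deviation carried by the largest 1% of days.
    sq = sorted((x * x for x in samples), reverse=True)
    k = max(1, int(len(sq) * top_frac))
    return sum(sq[:k]) / sum(sq)

n = 20_000
normal_days = [random.gauss(0, 1) for _ in range(n)]
heavy_days = [student_t(3) for _ in range(n)]

print(f"top 1% share of variance, normal:       {top_share(normal_days):.0%}")
print(f"top 1% share of variance, heavy-tailed: {top_share(heavy_days):.0%}")
```

Under the heavy-tailed model a much larger fraction of total variance lives in the top 1% of days, which is exactly the regime where day-to-day calibration tells you little about the outcomes that matter.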
You don't need to assume that things are normally distributed to be a Bayesian.
People who calibrate themselves usually don't get more confident through the process but less confident.
Don't calibrate on a single variable.
But you do need to assume that somehow you can predict novel events based on previous data.
Going back to my stock market example, what variables would I have calibrated on to predict 9/11 and its effects on the stock market?
I'm not arguing that you can predict the stock market. What you can do is calibrate yourself enough to see that it's frequently doing things that you didn't predict.