Manfred comments on Open thread, 11-17 March 2014 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think Knightian uncertainty is a very useful concept. Sometimes "I don't know" is the right answer. I can't estimate the probabilities, I have no evidence, no decent priors -- I just do not know. It's much better to accept that than to start inventing fictional probabilities.
Black Swan isn't a theory, it's basically a correct observation that statistical models of the world are limited in many important ways and depend on many implicit and explicit assumptions (a typical assumption is the stability of the underlying process). When an assumption turns out to be wrong the model breaks, sometimes in a spectacular way.
Nassim Taleb tried to make a philosophy out of that observation. I am not particularly impressed by it.
The trouble, of course, is that "I don't know" is not an action. If "I don't know" means "don't deviate from the status quo," that can be a bad plan if the status quo is bad.
Yes, and why is this "trouble"?
The only point of probabilities is to have them guide actions. How does the concept of Knightian uncertainty help in guiding actions?
More concretely than Lumifer's answer, it would encourage you to diversify your plans and try not to rely on leveraging any one model or enterprise. It also encourages you to play the odds instead of playing it safe, because safe is rarely as safe as you think it is. Try new things regularly, since the cost of doing them is generally linear while the pay-off could easily be exponential.
That's what I got out of it, anyways.
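To make the "linear cost, occasionally huge pay-off" intuition concrete, here's a toy simulation (the numbers are my own illustrative assumptions, not anything from the thread): each attempt has a fixed cost, while pay-offs are heavy-tailed, so a strategy of trying many cheap things can come out ahead even though most individual tries lose money.

```python
import random

random.seed(0)  # reproducible for the sake of the example

# Toy assumption: a 1-in-50 chance of a large pay-off, otherwise nothing.
def try_new_thing():
    return 100.0 if random.random() < 0.02 else 0.0

cost_per_try = 1.0
n_tries = 1000

total_payoff = sum(try_new_thing() for _ in range(n_tries))
total_cost = cost_per_try * n_tries  # cost grows linearly with the number of tries

# Expected pay-off per try is 0.02 * 100 = 2.0, i.e. double the cost,
# even though 98% of individual tries are pure loss.
net = total_payoff - total_cost
```

The point of the sketch is only the asymmetry: the downside of each try is bounded by its fixed cost, while the upside is not, which is the standard argument for cheap experimentation under uncertainty.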
I'm not actually sure the concept can do all that work, mostly because we don't have plausible theories for making decisions from imprecise probabilities (with probability we have expected utility maximization). See e.g. this very readable paper.
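One candidate decision rule for imprecise probabilities that does exist in the literature is Γ-maximin: instead of a single probability, you carry an interval (or set) of probabilities and pick the act whose worst-case expected utility over that set is highest. A minimal sketch, with pay-offs and the interval entirely made up for illustration:

```python
# Compare ordinary expected-utility maximization (precise probability)
# with Gamma-maximin over a probability interval (Knightian-style uncertainty).

def expected_utility(p, payoff_if_event, payoff_otherwise):
    return p * payoff_if_event + (1 - p) * payoff_otherwise

# Two acts: (pay-off if the event occurs, pay-off if it doesn't).
acts = {
    "risky": (10.0, -5.0),
    "safe": (1.0, 1.0),
}

# Precise probability p = 0.5: standard expected-utility maximization.
p = 0.5
eu = {name: expected_utility(p, *payoffs) for name, payoffs in acts.items()}
best_precise = max(eu, key=eu.get)  # the risky bet wins (EU 2.5 vs 1.0)

# Interval [0.2, 0.8]: Gamma-maximin maximizes the *worst-case* expected
# utility. Since EU is linear in p, the minimum sits at an endpoint,
# so checking the two endpoints suffices.
endpoints = (0.2, 0.8)
worst_case = {
    name: min(expected_utility(q, *payoffs) for q in endpoints)
    for name, payoffs in acts.items()
}
best_maximin = max(worst_case, key=worst_case.get)  # the safe act wins
```

This also illustrates the worry in the comment above: Γ-maximin is well-defined, but it's extremely conservative (it can ignore enormous upside because of one pessimistic probability in the set), which is part of why there's no consensus replacement for expected-utility maximization here.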
I don't agree with that (a quick example is that speculating about the Big Bang is entirely pointless under this approach), but that's a separate discussion.
It allows you to not invent fake probabilities and suffer from believing you have a handle on something when in reality you don't.
Such speculation may help guide actions regarding future investments in telescopes, decisions on whether to try to look for aliens, etc.
OK, I'll give you that we might non-instrumentally value the accuracy of our beliefs (even so, I don't know how to unpack 'accuracy' in a way that can handle both probabilities and uncertainty, but I agree this is another discussion). I still suspect that the concept of uncertainty doesn't help with instrumental rationality, bracketing the supposed immorality of assigning probabilities from sparse information. (Recall that you claimed Knightian uncertainty was 'useful'.)