"The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning."
This doctrine still leaves me wondering why this meta-level hermeneutic of suspicion should be exempt from its own rule. Or, if it is somehow not exempt, how is it a superior basis for knowledge when it obscures its own suspect status even as it discounts other modes of knowing? At least the blind-faith camp is transparent about its assumptions ("you just have to believe!"), whereas the rule outlined above seems more like a risk manager hawking the methodological rigor of his CDO hedging strategy.
Aren't Nassim Taleb's observations regarding our evolution in Mediocristan (where inferences from small data sets were a recipe for success) and our current existence in Extremistan (where the all-important fat tail means that even enormous data sets can lead you right into the jaws of disaster) a serious critique of the "playing to win" strategy? If winning all the small early battles makes it more difficult to win the pivotal battle, it might be better to take the losses and win where it counts. That is, playing to lose most of the time might be the best way to win big. Or, perhaps better said, exposing yourself to probable defeat might be the only way to win. Indeed, wasn't this Copernicus's method for finding truth? See Ch. 1 of Michael Polanyi's Personal Knowledge.