I guess really the only conclusion you can draw from all of this is not to put any important decisions in the hands of people from top business schools.

I used to strongly agree with this point, but now I worry that the dismissal of MBA skills has more to do with our inability to scientifically explain their mechanism than with MBAs actually, generally, sucking.

The problem space of probabilistic predictions is quite distinct: it lends itself to a combination of applied math and heuristics, which Tetlock lays out in Superforecasting and Expert Political Judgment.

The MBA problem space is much more chaotic. I was on a team working on new initiatives at [Mega Tech Company], and the problem space was sparse: little or no historical data, and no way for me to use my scientific or programming skills to reach clear conclusions. The challenge required more of a simulation style of thinking, which is how I now think of what MBAs do.

The MBAs would discuss random high-dimensional, unstructured, and piecemeal data. They would then use their incredibly powerful human brains to simulate whole sets of actions and how those actions would interact with the company, the customers, and the goals, and filter through an estimated probability space to choose what they thought was the right business decision. Sure, some of them sucked. But it was humbling to see the good ones.

The reality is that good MBAs (and the people who self-select into MBAs) aren't the type who care about careful analysis of one-shot predictions. They are, as far as I can tell, people with high IQs, a solid set of training data from Real Business Cases, and a 'gut feeling' capable of generating a probability distribution over a high-dimensional outcome space.

You don't want your president saying "I estimate there is an 80% chance of nuclear war." You want them thinking "What is the probability space given this overwhelming amount of information, and how do I steer my country through the right conditions in this n-dimensional choice set?" (I base this on John Cochrane's response to Tetlock in the Cato Unbound issue "What's Wrong with Expert Predictions?": https://www.cato-unbound.org/2011/07/15/john-h-cochrane/defense-hedgehogs)

Thanks for the comment. It's humbling to get a glimpse of vastly different modes of thought, optimized for radically different types of problems.

Like, I feel like I have this cognitive toolbox I've been filling up with tools for carpentry. If a mode of thought looks useful, I add it. But then I learn that there's such a thing as a shipyard, and that they use an entirely different set of tools, and I wonder how many such tools people have tried to explain to me only for me to roll my eyes and imagine how poorly it would work to drive a nail. When all you have is a hammer, everything is seen through the lens of the nail-driving paradigm.

My hindsight bias thinks it's obvious that Tetlock's questions might bias the sample toward foxes over hedgehogs. He's asking people to make predictions about a wide range of domains. I predict that, if you asked leading experts in various subfields to make predictions about the results of experiments being conducted within their subfield, they would trounce outsiders, especially if the questions were open-ended. (By open-ended, I mean "how will the economy change if X happens" rather than "will housing prices go up or down if X happens".)

It would also be interesting to see whether they make better predictions on similar types of problems. For example, are there "soft" problems, with thousands of variables and thousands of unknowns, which experts in the "soft" sciences are systematically better at solving?

It seems plausible that the "hard sciences" mental models necessary to solve problems with a couple of variables and one or two unknowns might not work well for soft problems. After all, the two mental architectures useful for these look radically different. The point of associative reasoning isn't to evaluate which of the first couple of hypotheses is most likely. It's to make sure that the biggest factors are even on your list of potentially influential variables. Simply evaluating a couple of random items on a long-tailed Pareto chart of solutions will radically underperform even the briefest search through hypothesis space. The point is to be able to narrow down the space of hypotheses from trillions to a few dozen. It's the "explore" side of the explore/exploit tradeoff.
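Here's a minimal toy sketch of that last point (my own illustration, not anything from the thread): it assumes hypothesis payoffs follow a heavy-tailed Pareto distribution, with the shape parameter and sample sizes picked arbitrarily, and compares carefully evaluating a handful of random hypotheses against briefly scanning a much larger slice of the hypothesis space.

```python
# Toy sketch (illustrative assumptions only): hypothesis payoffs are drawn from
# a heavy-tailed Pareto distribution, so a handful of randomly chosen hypotheses
# almost never includes one of the rare great ones, while even a quick scan of a
# broad sample usually does.
import numpy as np

rng = np.random.default_rng(0)

N_HYPOTHESES = 100_000   # size of the hypothesis space (arbitrary)
PARETO_SHAPE = 1.2       # heavy tail: a few hypotheses are vastly better
N_TRIALS = 200

def best_of_sample(payoffs, k):
    """Sample k hypotheses without replacement and keep the best payoff found."""
    return rng.choice(payoffs, size=k, replace=False).max()

narrow, broad = [], []
for _ in range(N_TRIALS):
    payoffs = rng.pareto(PARETO_SHAPE, size=N_HYPOTHESES) + 1.0
    narrow.append(best_of_sample(payoffs, k=3))      # "exploit": 3 careful evaluations
    broad.append(best_of_sample(payoffs, k=1_000))   # "explore": quick scan of 1,000

print(f"median best payoff, 3 hypotheses:     {np.median(narrow):8.1f}")
print(f"median best payoff, 1,000 hypotheses: {np.median(broad):8.1f}")
```

With a shape parameter near 1, the broad scan's best find is typically orders of magnitude better than the narrow one's, which is the sense in which breadth of search dominates depth of evaluation on this kind of distribution.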

It would be interesting to test this, however. What happens if you take a Nobel laureate with a prize in medicine, and another with a prize in physics, give them enough funding to conduct a large number of experiments, and ask them both to spend a couple of years trying to solve a wicked problem in an unrelated field? Does the physicist overestimate how applicable his or her knowledge is to the problem? Might the medical researcher take a more high-level, holistic approach, and have higher odds of success? Perhaps a less thorough but easier experiment would be to see how the two types of minds perform on candle problems.

(Also, on my second read through of your comment, I noticed an open parenthesis (which creates an unresolved tension which will stay with me all day).

especially if the questions were open-ended. (By open-ended, I mean "how will the economy change if X happens" rather than "will housing prices go up or down if X happens".)

This seems consistent with Tetlock's guess in Superforecasting that hedgehogs are better for knowing what questions to ask.

Thanks again. I haven't actually read the book, just Yvain's review, but maybe I should make the time investment after all.