The best laid schemes of mice and men
Go often askew,
And leave us nothing but grief and pain,
For promised joy!
- Robert Burns (translated)
Consider the following question:
A team of decision analysts has just presented the results of a complex analysis to the executive responsible for making the decision. The analysts recommend making an innovative investment and claim that, although the investment is not without risks, it has a large positive expected net present value... While the analysis seems fair and unbiased, she can’t help but feel a bit skeptical. Is her skepticism justified?1
Or, suppose Holden Karnofsky of charity-evaluator GiveWell has been presented with a complex analysis of why an intervention that reduces existential risks from artificial intelligence has astronomical expected value and is therefore the type of intervention that should receive marginal philanthropic dollars. Holden feels skeptical about this 'explicit estimated expected value' approach; is his skepticism justified?
Suppose you're a business executive considering n alternatives whose 'true' expected values are μ1, ..., μn. By 'true' expected value I mean the expected value you would calculate if you could devote unlimited time, money, and computational resources to making the expected value calculation.2 But you only have three months and $50,000 with which to produce the estimates, and this limited study produces estimated expected values V1, ..., Vn for the alternatives.
Of course, you choose the alternative i* that has the highest estimated expected value Vi*. You implement the chosen alternative, and get the realized value xi*.
Let's call the difference xi* - Vi* the 'postdecision surprise'.3 A positive surprise means your option brought about more value than your analysis predicted; a negative surprise means you were disappointed.
Assume, too kindly, that your estimates are unbiased, and suppose you use this decision procedure many times, for many different decisions. It seems reasonable to expect that on average you will receive the estimated expected value of each decision made this way: sometimes you'll be positively surprised, sometimes negatively surprised, but on average you should get what your analysis predicted.
Alas, this is not so; your outcome will usually be worse than what you predicted, even if your estimate was unbiased!
Why?
...consider a decision problem in which there are k choices, each of which has true estimated [expected value] of 0. Suppose that the error in each [expected value] estimate has zero mean and standard deviation of 1, shown as the bold curve [below]. Now, as we actually start to generate the estimates, some of the errors will be negative (pessimistic) and some will be positive (optimistic). Because we select the action with the highest [expected value] estimate, we are obviously favoring overly optimistic estimates, and that is the source of the bias... The curve in [the figure below] for k = 3 has a mean around 0.85, so the average disappointment will be about 85% of the standard deviation in [expected value] estimates. With more choices, extremely optimistic estimates are more likely to arise: for k = 30, the disappointment will be around twice the standard deviation in the estimates.4
This is "the optimizer's curse." See Smith & Winkler (2006) for the proof.
The Solution
The solution to the optimizer's curse is rather straightforward.
...[we] model the uncertainty in the value estimates explicitly and use Bayesian methods to interpret these value estimates. Specifically, we assign a prior distribution on the vector of true values μ = (μ1, ..., μn) and describe the accuracy of the value estimates V = (V1, ..., Vn) by a conditional distribution V|μ. Then, rather than ranking alternatives based on the value estimates, after we have done the decision analysis and observed the value estimates V, we use Bayes’ rule to determine the posterior distribution for μ|V and rank and choose among alternatives based on the posterior means...
The key to overcoming the optimizer’s curse is conceptually very simple: treat the results of the analysis as uncertain and combine these results with prior estimates of value using Bayes’ rule before choosing an alternative. This process formally recognizes the uncertainty in value estimates and corrects for the bias that is built into the optimization process by adjusting high estimated values downward. To adjust values properly, we need to understand the degree of uncertainty in these estimates and in the true values...5
To return to our original question: Yes, some skepticism is justified when considering the option before you with the highest expected value. To minimize your prediction error, treat the results of your decision analysis as uncertain and use Bayes' Theorem to combine its results with an appropriate prior.
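As a concrete illustration of that correction, here is a minimal sketch of the simplest conjugate case: independent normal priors on the true values and unbiased normal error on the estimates, so the posterior mean is a precision-weighted average of the prior mean and the raw estimate. The function name posterior_means and all of the numerical parameters below are illustrative assumptions, not values taken from Smith & Winkler.

```python
import numpy as np

def posterior_means(estimates, prior_mean, prior_sd, estimate_sd):
    """Shrink noisy value estimates toward the prior mean.

    Assumes a normal prior on each true value, mu_i ~ N(prior_mean, prior_sd^2),
    and an unbiased normal estimate, V_i | mu_i ~ N(mu_i, estimate_sd^2).
    The posterior mean is then prior_mean + w * (V_i - prior_mean),
    where w = prior_sd^2 / (prior_sd^2 + estimate_sd^2).
    """
    estimates = np.asarray(estimates, dtype=float)
    w = prior_sd**2 / (prior_sd**2 + estimate_sd**2)  # weight on the estimate
    return prior_mean + w * (estimates - prior_mean)

# Three alternatives with raw estimated expected values (made-up numbers).
raw_estimates = [1.0, 2.5, 4.0]
adjusted = posterior_means(raw_estimates, prior_mean=0.0, prior_sd=1.0, estimate_sd=2.0)

print(adjusted)                   # [0.2, 0.5, 0.8] -- high estimates are pulled down the most
print(int(np.argmax(adjusted)))   # choose the alternative with the highest posterior mean
```

The ranking can change once the estimates are shrunk, because the noisier and more extreme an estimate is, the more of it the posterior attributes to estimation error rather than to true value.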
Notes
1 Smith & Winkler (2006).
2 Lindley et al. (1979) and Lindley (1986) talk about 'true' expected values in this way.
3 Following Harrison & March (1984).
4 Quote and (adapted) image from Russell & Norvig (2009), pp. 618-619.
5 Smith & Winkler (2006).
References
Harrison & March (1984). Decision making and postdecision surprises. Administrative Science Quarterly, 29: 26–42.
Lindley, Tversky, & Brown (1979). On the reconciliation of probability assessments. Journal of the Royal Statistical Society, Series A, 142: 146–180.
Lindley (1986). The reconciliation of decision analyses. Operations Research, 34: 289–295.
Russell & Norvig (2009). Artificial Intelligence: A Modern Approach, Third Edition. Prentice Hall.
Smith & Winkler (2006). The optimizer's curse: Skepticism and postdecision surprise in decision analysis. Management Science, 52: 311–322.