The Flaws of Fermi Estimates

Why don’t we use more Fermi estimates?[1] Many of us want to become more rational, and we have plenty of numbers we could put to use and important variables to consider. There are a few reasons.

Fermi calculations get messy. Once a few variables are introduced, they quickly become difficult to hold in one's head and to lay out on paper. Many people, especially those not used to writing academic papers, have little practice formalizing inputs and outputs, and it can be tedious even for those who do.

Fermi models typically do not include estimates of certainty. Uncertainty propagates and creates bottlenecks; as a Fermi model grows, a few highly uncertain assumptions can undermine the result. Certainty estimates are typically not recorded, and when they are, they require formalization and significant calculation.

Fermi calculations are not fun to share. Most are pretty simple, involving only multiplication, addition, and 3–5 variables. However, to write one down you must formalize it as a few lines of math or as a few long paragraphs that really should be math.

Graphical Assumption Modeling

We propose the use of simple graphical models to represent estimates and Fermi models. We think these can solve the issues mentioned above and make complex estimations simpler, more sharable, and more calculable. A formal and rigorous graphical model could not only improve on existing Fermi calculations, but also extend them to uses they have not yet served.

Multiplication

Let’s say we are trying to estimate the number of smiles per day in a park. A first attempt might be to guess the number of people in the park and to estimate the average number of smiles per person per day.

This is easy to calculate directly: 100 people × 10 smiles/(person · day) = 1,000 smiles/day.

As a model, we can represent the variables as lines and the function as a box in between them. This fits nicely with similar diagramming standards. The function of multiplication acts as an object with inputs and outputs.

Independent variables, or user-selected variables, are shown in black, and dependent variables are shown in blue.

We can condense this diagram by moving the number of smiles per day per person into the multiplication block.

Say we wanted to find the total smiles per year in the park. We can simply extend the model as follows.
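The same dataflow can be sketched in code (a minimal sketch; the variable names are only illustrative, and the blocks are plain multiplication):

```python
# Sketch of the multiplication chain as plain arithmetic; the diagrams
# express the same dataflow graphically. Variable names are illustrative.
people_in_park            = 100   # independent (user-selected) variable
smiles_per_person_per_day = 10    # smiles / (person * day)
days_per_year             = 365

smiles_per_day  = people_in_park * smiles_per_person_per_day   # 1,000 smiles/day
smiles_per_year = smiles_per_day * days_per_year               # 365,000 smiles/year
print(smiles_per_day, smiles_per_year)
```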

 

Addition

Perhaps we think that kids and adults smile at different rates and would like to split our model accordingly. We estimate the number of kids in the park, the number of adults in the park, and the corresponding smiling rates. Then we add the two results with a block similar to the one we used for multiplication.
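A sketch of the split in code, with illustrative numbers that are not taken from the diagrams:

```python
# Kids/adults split: two multiplication blocks feeding one addition block.
# All numbers are illustrative.
kids_in_park,   kid_smiles_per_day   = 30, 15   # kids, smiles / (kid * day)
adults_in_park, adult_smiles_per_day = 70, 8    # adults, smiles / (adult * day)

smiles_per_day = (kids_in_park * kid_smiles_per_day
                  + adults_in_park * adult_smiles_per_day)     # 1,010 smiles/day
print(smiles_per_day)
```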

Uncertainty

If we have uncertainty estimates we can make them explicit. Estimates of certainty typically get left out of Fermi calculations, but become essential when making large models.

It is not clear what the best way is to annotate an uncertainty interval. In this case, the intervals shown are meant as 90% Gaussian confidence intervals, but this convention could vary. They need not be Gaussian-like intervals; they could be arbitrary probability distributions, which may require graphical representations and additional software. However, for many estimations, even simple models of uncertainty would be advantageous.
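One simple way to propagate such intervals, offered only as a sketch that assumes the 90% intervals are Gaussian and uses made-up endpoints, is Monte Carlo sampling:

```python
# Monte Carlo propagation of 90% intervals, treated here as Gaussian.
# The interval endpoints are illustrative.
import random
import statistics

random.seed(0)
N = 100_000

def gaussian_from_90ci(low, high):
    """Mean and standard deviation of a Gaussian whose 90% interval is [low, high]."""
    return (low + high) / 2, (high - low) / (2 * 1.645)

people = gaussian_from_90ci(80, 120)   # people in the park
smiles = gaussian_from_90ci(5, 15)     # smiles per person per day

samples = sorted(random.gauss(*people) * random.gauss(*smiles) for _ in range(N))
print("median:", round(statistics.median(samples)))
print("90% interval:", round(samples[int(0.05 * N)]), "to", round(samples[int(0.95 * N)]))
```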

Estimate Combination

If two people give two estimates for a number, those estimates can be combined to find a resulting probability distribution.

Uncertainty distributions are valuable for this. If two agents both state their uncertainty distributions, we can find a weighted average of their estimates along with a calculated resulting uncertainty distribution.
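One standard approach, offered here only as a sketch (it treats each ± value as one standard deviation of an independent Gaussian, which is not necessarily how the diagrams intend it), is inverse-variance weighting:

```python
# Inverse-variance weighted combination of two independent Gaussian estimates.
def combine(mean1, sd1, mean2, sd2):
    w1, w2 = 1 / sd1 ** 2, 1 / sd2 ** 2
    mean = (w1 * mean1 + w2 * mean2) / (w1 + w2)
    sd = (1 / (w1 + w2)) ** 0.5
    return mean, sd

print(combine(100, 30, 115, 5))   # roughly (114.6, 4.9)
```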

Model Combination

We can combine models by combining their resulting estimates. So far we have shown two distinct attempts at modeling the number of smiles in a park. They produce outputs in the same units, so they can be combined.

Both still have predictive power, and a combination could produce a more accurate estimate than either alone. The model with greater certainty, in this case the adult/child split model, will have more influence on the final calculation, but it will still be moderated by the other. Combining many well-calibrated models should generally give a more accurate result.

Abstraction

Large sections can be combined into black boxes.[2] Black boxes summarize large models into simple objects with specified inputs and outputs. This means that one can work on a very large model in small, manageable pieces.
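In code, a black box is simply a function with named inputs and a single output (a sketch; the names and numbers are illustrative):

```python
# A sub-model wrapped as a "black box": the larger model sees only the
# interface, not the internals. Names and numbers are illustrative.
def park_smiles_per_year(kids, adults, kid_rate, adult_rate, days_per_year=365):
    return (kids * kid_rate + adults * adult_rate) * days_per_year

total_smiles = park_smiles_per_year(30, 70, 15, 8) + park_smiles_per_year(50, 200, 12, 6)
print(total_smiles)
```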

Decision Making

Say we must decide between two options. One common way to do so is to estimate a value for each, and choose the one with a higher (or lower) value.

In this case we decide which lemonade will sell better. We use a decision ‘block’, which could hold any arbitrary decision function; in this case, it simply passes through the highest input value.
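A sketch of that decision block (the sales figures are hypothetical):

```python
# Decision block sketch: pass through the highest input value.
def decision_block(*option_values):
    return max(option_values)

regular_lemonade_sales = 40.0   # hypothetical $/day
pink_lemonade_sales    = 55.0   # hypothetical $/day
best_expected_sales = decision_block(regular_lemonade_sales, pink_lemonade_sales)  # 55.0
```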

This can be useful if one can assume that the best available option will be chosen. In a larger model, there may be many decisions determined by the model, and the outputs of these decisions can feed into later estimations or decisions.

Larger Models

These techniques can be combined to produce large and intricate models. As these increase in size they can become more valuable.

In the model above, a person is trying to find the best use of their time for making money. There are several options for selling lemonade, and there is also the opportunity to work overtime. The estimator makes an estimate for each and uses the model to understand them in relation to one another.

This larger model demonstrates how flexible the configuration of these models can be. The profit percentage of lemonade sales was expected to be similar for different kinds of lemonade in different locations. It could have been applied to each option individually, but it was simpler to apply it once, after the decision block that chooses between them.

In this case it may have been reasonable to use a table instead of a graphical model. However, a table would not necessarily capture the distinct constraints and considerations of each type of input. For instance, lemonade sales carry a profit margin, while overtime work is already a net income figure. In tables, many of the important calculations are difficult to read alongside the data. We believe this form of modeling helps make the numbers understandable, along with the assumptions and certainties that go into them.
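A rough sketch of that structure in code; every number is invented, and only ‘Regular Lemonade at Dolores Park’ is a name taken from the model above:

```python
# Hypothetical sketch: lemonade options feed a decision block, the profit
# margin is applied once after it, and the result is compared with overtime
# work, which is already a net figure. All numbers are invented.
lemonade_revenue_per_hour = {
    "Regular Lemonade at Dolores Park": 30.0,
    "Pink Lemonade at Dolores Park":    35.0,
    "Regular Lemonade at Mission Park": 25.0,
}
profit_margin = 0.4                  # applied once, after the decision block
overtime_net_per_hour = 18.0         # no margin applied; already net income

best_lemonade_profit = max(lemonade_revenue_per_hour.values()) * profit_margin  # 14.0
best_use_of_time = max(best_lemonade_profit, overtime_net_per_hour)             # 18.0
print(best_lemonade_profit, best_use_of_time)
```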

Possible Automated Analysis

Once we arrive at the model above, we would have enough information to calculate the value of information (VOI) of additional certainty for each metric. For instance, reducing the uncertainty of the variable ‘Regular Lemonade at Dolores Park’ to zero could be worth an expected few dollars per hour, assuming the resulting decisions would be made using the model.
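A sketch of such an automated value-of-information calculation (the distributions and dollar figures are illustrative assumptions, not values from the model above):

```python
# Expected value of perfect information (EVPI) for one uncertain option,
# estimated by Monte Carlo. Distributions and numbers are illustrative.
import random
import statistics

random.seed(0)
N = 100_000

mean_a, sd_a = 20.0, 6.0   # uncertain option, e.g. a lemonade stand ($/hour)
mean_b       = 22.0        # alternative treated as certain, e.g. overtime pay

# With current information we pick the option with the higher expected value.
value_now = max(mean_a, mean_b)

# With perfect information about A we would, for each possible true value of A,
# pick whichever option is actually better; average over A's distribution.
value_with_info = statistics.mean(
    max(random.gauss(mean_a, sd_a), mean_b) for _ in range(N)
)

print(f"EVPI for the uncertain option: {value_with_info - value_now:.2f} $/hour")
```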

The value of new options could also be calculated easily if one could come up with a probability distribution of their expected earnings per hour.

While these kinds of analyses are well established in academia, they are currently difficult to use. If estimations could be simply mapped, that might make such analyses significantly more accessible.

 

Similar Work

This work can be seen as similar to the Unified Modeling Language (UML) in that it attempts to graphically specify a complex system of knowledge. UML was an attempt to define a graphical language for software architecture. There were claims that tools producing UML diagrams could generate the corresponding programs; this hasn't really happened. The UML spec went through several versions and became so specific and complex that few programmers now bother with it. However, it did encourage the use of whiteboard modeling among programmers, and it still sees some popularity on larger projects.

Graphical computer software is challenging. Most attempts have failed, but a few companies have had success with it. LabVIEW is a popular visual programming tool used by scientists and engineers. It uses a dataflow programming paradigm, which would also be appropriate for Graphical Assumption Modeling.

The theory of this work is similar to that of Probabilistic Graphical Models. These are typically more formal models aimed at computer input and output rather than direct human interaction.

Future Work

This research is very young. The diagrams could use more experimentation and exploration; we have not included a method for subtraction or division, for example. Even once they are better established, it could take a long time for them to be accepted by other communities.

If these models prove useful, it would clearly be valuable to have a computer program for building them. Ozzie Gooen has made a simple attempt called Fermihub. Fermihub is functional, free, and open source. However, it applies only a few simple analytic approximations and does not incorporate Monte Carlo simulations; for accurate or large models, Monte Carlo simulations will be necessary.

More research could be done on this kind of estimation. While much of the math has already been solved, the art of efficiently creating large models and collaborating with others still has a lot of work left. There is also some debate on the proper way to combine estimates, which is crucial for large models.


 

Note: I realize that the math in the models above, specifically in the combinations of estimates, is incorrect.  I'm currently investigating how to do it correctly.   

References

  1. LukeProg, "Fermi Estimates," LessWrong, 2013. http://lesswrong.com/lw/h5e/fermi_estimates/
  2. See Wikipedia for a high-level understanding of black boxes. They are a fundamental unit of systems research, which in part has led to many of the diagrams we see today.
Comments

It would be cool to share models via URLs pointing to web services that output calculations.

And even cooler if (web) discussions of models included embedded diagrams like what you've produced.

Good point

I would use this

Thanks, I think this may be a valuable direction to pursue.

The error-tracking for multiplication in Fermihub seems like it's probably wrong. But I don't think there's an easy fix, since products of Gaussian distributions aren't Gaussian. Since multiplication is more common than addition in Fermi estimates, you might replace your distributions with log-normals (this is what I do when tracking uncertainty in back-of-the-envelope calculations), but I agree that Monte Carlo simulations are really the way to go.

tog:

Do you think you'd use this, out of interest, Owen?

Maybe, if it had good enough UI and enough features?

I feel like it's quite a narrow target/high bar: you're competing with back-of-the-envelope/whiteboard at one end (for ease of use), and with a software package that does Monte Carlo properly at the other end.

From a collaborative perspective, it would be neat to be able to fork a given graph, change some assumptions or other pieces of the model, and then have a tree showing how the versions evolved.

Shmi:

Re your combined and larger models:

If your Fermi estimate does not fit on the back of an envelope, it's no longer a Fermi estimate.

Perhaps 'Fermi estimate' was not the best term to use, but I couldn't think of an equally understandable but better one. It could be called simply an 'estimate', but I think the important thing here is that it's used very similarly to how a Fermi estimate would be (with very high uncertainty in the inputs, and done in a very simple manner). What would you call it? (http://lesswrong.com/lw/h5e/fermi_estimates/)

I vouch for Ozzie Estimate.

I take shminux's point to be primarily one of ease, or maybe portability. The need to understand sensitivity in heuristic estimation is a real one, and I also believe that your tools here may be the right approach for a different level of scale than Fermi originally conceived. It might be worth clarifying the kinds of decisions that require this level of analysis, to prevent confusion.

Have you seen the work of Sanjoy Mahajan? Street-Fighting Mathematics, or The Art of Insight in Science and Engineering?

I actually watched his TED talk last night. Will look more into his stuff.

The main issues I'm facing right now are understanding the math behind combining estimates and actually building the program. However, he definitely seems to be one of the world's top experts on actually making these kinds of models.

Gust:

If you keep the project open source, I might be able to help with the programming (although I don't know much about Rails, I could help with the client side). The math is a mystery to me too, but can't you charge ahead with a simple geometric mean for combining estimates while you figure it out?

tog:

I like the name it sounds like you may be moving to - "guesstimate".

Thanks!

'Guesstimate' as a term isn't very specific, and what I am proposing is at least a lot more involved than what has typically been considered a guesstimate. That said, very few people seem familiar with the old word, so it seems like it could be extended easily.

Pictures are broken for me on this page !! /dan

If you have not yet learned lisp, or Wolfram, you may find them very interesting.

This is quite interesting. I strongly suspect there are already software packages aimed at non-programmers that can do things much like this, but I don't know anything about the field. Does anybody know of anything like this?

Hubbard recommends a few commercial Monte Carlo tools for risk analysis that seem very related: Oracle Crystal Ball, @Risk, XLSim, Risk Solver Engine, Analytica.

Note: I realize that the math in the models above, specifically in the combinations of estimates, is incorrect. I'm currently investigating how to do it correctly.

I was curious about the combinations of estimates problem, so here is what I came up with.

Of course this will depend on some assumptions about how the estimates should be interpreted. For example, in the unlikely case that the model says "the true value is 100% in this range", you simply take the overlap of the ranges: 100±10 and 110±5 together give you [105..110] or 107.5±2.5.

Much more interesting is the assumption that the ± value is a multiple of the standard deviation. (This covers the Gaussian interval case, but is more general; they could be using some other distribution parametrized by mean and variance, or simply appealing to Chebyshev's inequality.)

Let us say that 100±30 and 115±5 are such estimates, generated via sample means. Thus, an average of n samples gave us a mean of 100 and a variance of 900. (Thus, the total of the n samples had variance 900n^2, and each sample had variance 900n.) To get a variance of 25 by sampling from the same distribution, we would need 36n samples.

Now we can combine the averages. All 37n samples together add up to 100(n)+115(36n)=4240n, which yields a mean of roughly 114.6. Each sample still has variance 900n, so the total variance is (900n)x(37n)=33300n^2. Dividing by (37n)^2, we get a variance of 24.3, or a standard deviation of 4.93; thus, the correct combined estimate is 114.6±4.93.

It doesn't matter if ±30 and ±5 are, say, 2 or 3 standard deviations, as long as they're the same multiple.

Quick comment: I'm still having a lot of questions about the problem of combining estimate probability distributions. If any of you know of good research on how to combine large groups of estimates / probability distributions, I would be very interested. I realize that the field of 'decision research' and similar is quite significant, but the specific math for combining probabilistic estimates is something I'm having a hard time finding literature on. (Much of this may be because a lot of it is behind academic paywalls.)