Comment author: CronoDAS 03 August 2013 06:40:09PM 3 points [-]

So... how do you measure programmer productivity? ;)

Comment author: Heka 31 December 2013 12:32:34AM 2 points [-]

Hubbard writes about performance measurement in chapter 11. He notes that management typically knows which performance metrics are relevant but has trouble prioritizing among them. Hubbard's proposal is to let the management create utility charts of the required trade-offs. For instance, a curve for a programmer could have on-time completion rate on one axis and error-free rate on the other (page 214). Management is thus required to document how much one metric must increase to compensate for a drop in the other. The end product of the charts should be a single index for measuring employee performance.
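A minimal sketch of that single-index idea, assuming (purely for illustration) a linear trade-off with a made-up substitution rate; Hubbard's actual charts let management draw the curve freely:

```python
# Hypothetical linear version of Hubbard's utility chart. The constant
# below is an assumed example value, not a figure from the book: it says
# a 1-point drop in error-free rate must be offset by a 2-point rise in
# on-time completion rate.
ERROR_FREE_WEIGHT = 2.0

def performance_index(on_time_pct, error_free_pct):
    """Collapse the two metrics into a single index, in 'on-time points'."""
    return on_time_pct + ERROR_FREE_WEIGHT * error_free_pct

# A programmer who ships 80% on time with a 95% error-free rate:
print(performance_index(80, 95))  # 270.0
```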

Comment author: ygert 28 December 2013 08:55:04PM *  13 points [-]

That's a good rationalist success story. You remind me of my own story with the tooth fairy: I will not relate it in detail here, as it is similar to yours, just less dramatic. At a certain point, I doubted the existence of the tooth fairy, so the next time a tooth fell out I put it under my pillow without telling anyone, and it was still there the next day. I confronted my parents, and they readily admitted the non-existence of the tooth fairy.

In fact, it went off as a perfect experiment, which kind of ruins its value as a story, at least when compared with yours. I did an experiment, got a result, and that was that. The one thing I'm still kind of bitter about is my parents' reaction when I confronted them: rather than praising me for my discovery and correct use of the scientific method, their reaction was along the lines of "If you suspected, why didn't you just tell us? We would have just admitted it. There was no need for that test to find proof to confront us with."

Comment author: Heka 28 December 2013 11:22:39PM 5 points [-]

Too bad you didn't get positive feedback. The promise of praise for discoveries is part of what keeps scientists going. In terms of money, the smart decision would have been to hide the results from your parents to keep the dollars flowing in.

Comment author: ksvanhorn 24 December 2011 07:19:34PM 3 points [-]

"But this example also helps show the limits of VoI: VoI is best suited to situations where you've done the background research and are now considering further experiments."

Do you mean this in the sense that there is usually some low-hanging fruit (e.g. the background research itself) where the VOI is obviously so high that there's no need to calculate it -- you obviously should obtain the information?

I think Douglas Hubbard, author of How to Measure Anything, makes a good case for making VOI calculations the default in important decisions. When he acts as a consultant, he first spends a couple of days training the decision makers in calibrating their probability assessments, and then they do a VOI calculation for all the important unknowns, based on those subjective probabilities. It's often precisely those questions for which they can't get much from background research and haven't even considered measuring -- because they don't know how to -- that have the highest VOI.

Maybe these cases are atypical, as they are biased towards difficult decisions that warrant hiring a consultant. But difficult decisions are the raison d'être of the field of decision analysis.

Comment author: Heka 26 December 2013 11:11:53PM 0 points [-]

Hubbard talks about measurement inversion: "In a business case, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it usually gets." This thread contains discussion of possible reasons for it. The easiness/familiarity aspect that you imply is probably one of them. Others include the declining marginal value of information on a given subject and the destabilizing effect new measurements might have on an organization.

It's easy to imagine that measurement inversion also applies to common measurements in personal life.

Comment author: Heka 25 December 2013 01:19:12PM *  2 points [-]

I read the chapter on diet. Adams claims that "Science has demonstrated that humans have a limited supply of willpower." This idea is important for at least this chapter. However, as Robert Kurzban has noted, it is a weak theory that cannot be falsified. I would prefer Adams to treat the concept of willpower as another self-delusion that helps optimize one's systems.

Comment author: owencb 05 August 2013 02:08:14PM 2 points [-]

Thanks, I liked this post.

However, I was initially a bit confused by the section on EVPI. I think it is important, but it could be a lot clearer.

The expected opportunity loss (EOL) for a choice is the probability of the choice being “wrong” times the cost of it being wrong. So for example the EOL if the campaign is approved is $5M × 40% = $2M, and the EOL if the campaign is rejected is $40M × 60% = $24M.

The difference between EOL before and after a measurement is called the “expected value of information” (EVI).

It seems quite unclear what's meant by "the difference between EOL before and after a measurement" (EOL of which option? is this in expectation?).

I think what must be intended is: your definition is for the EOL of an option. Now the EOL of a choice is the EOL of the option we choose given current beliefs. Then EVI is the expected reduction in EOL upon measurement.

Even this is more confusing than it often needs to be. At heart it's the expected amount better you'll do with the information. Sometimes you can factor out the EOL calculation entirely. For example say you're betting $10 at even odds on a biased coin. You currently think there's a 70% chance of it landing heads; more precisely you know it was either from a batch which lands heads 60% of the time, or from a batch which lands heads 80% of the time, but these are equiprobable. You could take a measurement to find out which batch it was from. Then you are certain that this measurement will change the EOL, but if you do it carefully the expected gain is equal to the expected loss, so there is no EVI. We could spot this directly because we know that whatever the answer is, we'll bet on heads.
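The coin example can be worked through in code (same numbers as above; `best_bet_ev` is just an illustrative helper, and exact fractions are used to keep the arithmetic clean):

```python
from fractions import Fraction

# The coin-betting example from the comment: a $10 bet at even odds on a
# coin drawn from one of two equiprobable batches (60% or 80% heads).
STAKE = 10
batches = [Fraction(6, 10), Fraction(8, 10)]   # P(heads) for each batch

def best_bet_ev(p_heads):
    """Expected profit of the better bet (heads or tails) at even odds."""
    ev_heads = STAKE * p_heads - STAKE * (1 - p_heads)
    return max(ev_heads, -ev_heads)

# Before measuring: act on the mixture belief P(heads) = 0.7 -> bet heads.
ev_before = best_bet_ev(sum(batches) / len(batches))

# After measuring: learn the batch, then bet optimally for it. Either way
# the best bet is still heads, so on average nothing is gained.
ev_after = sum(best_bet_ev(p) for p in batches) / len(batches)

print(ev_before, ev_after, ev_after - ev_before)  # 4 4 0
```

The EOL under each batch differs (the measurement does change it), but the expected gain and loss cancel, so the EVI is zero.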

I think it might be useful to complete your simple example for EVPI (as in, this would have helped me to understand it faster, so may help others too): Currently you'll run the campaign, with EOL of $2M. With perfect information, you always choose the right option, so you expect the EOL to go down to 0. Hence the EVPI is $2M (this comes from the 40% of the time that the information stops you running the campaign, saving you $5M).
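Putting the numbers from the post into code (a sketch of the EOL/EVPI arithmetic only, not the book's full model):

```python
# Campaign example: 60% chance it earns $40M; 40% chance the $5M is lost.
p_success = 0.6
upside, cost = 40_000_000, 5_000_000

eol_approve = (1 - p_success) * cost   # wrong 40% of the time: lose the $5M
eol_reject = p_success * upside        # wrong 60% of the time: forgo the $40M

# You pick the option with the smaller EOL (approve: $2M vs $24M). Perfect
# information drives the chosen option's EOL to zero, so EVPI equals it.
evpi = min(eol_approve, eol_reject)
print(evpi)  # 2000000.0
```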

Then in the section on the more advanced model:

In this case, the EVPI turns out to be about $337,000. This means that we shouldn’t spend more than $337,000 to reduce our uncertainty about how many units will be sold as a result of the campaign.

Does this figure come from the book? It doesn't come from the spreadsheet you linked to. By the way, there's a mistake in the spreadsheet: when it assumes a uniform distribution it uses different bounds for two different parts of the calculation.

Comment author: Heka 03 September 2013 07:57:50PM 0 points [-]

I like the coin example. In my experience the situation with a clear choice is typical in small businesses. It often isn't worth honing the valuation models for projects very long when it is very improbable that the presumed second-best choice would turn out to be the best.

I guess the author is used to working with bigger companies that do everything at a larger scale and thus generally have more options to choose from. Nothing in the chapter is untrue, but this point could have been made explicit.

In response to Fermi Estimates
Comment author: Morendil 06 April 2013 11:12:03PM *  6 points [-]

Tip: frame your estimates in terms of intervals with confidence levels, i.e. "90% probability that the answer is between <low end> and <high end>". Try to work out both a 90% and a 50% interval.

I've found interval estimates to be much more useful than point estimates, and they combine very well with Fermi techniques if you keep track of how much rounding you've introduced overall.

In addition, you can compute a Brier score when/if you find out the correct answer, which gives you a target for improvement.
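For interval estimates, the Brier score can be applied per interval: the "forecast" is the stated confidence level, and the "event" is whether the true answer fell inside the interval. A small sketch (the two outcomes below are invented for illustration):

```python
def brier(forecast_p, event_occurred):
    """Brier score for one binary forecast: 0 is perfect, 1 is worst."""
    return (forecast_p - (1.0 if event_occurred else 0.0)) ** 2

# Suppose the 90% interval contained the true answer, but the 50% one missed.
scores = [brier(0.9, True), brier(0.5, False)]
mean_score = sum(scores) / len(scores)
print(round(mean_score, 2))  # 0.13
```

A well-calibrated forecaster's 90% intervals should contain the answer about 90% of the time, which keeps the long-run mean score low.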

In response to comment by Morendil on Fermi Estimates
Comment author: Heka 27 April 2013 07:04:36PM *  0 points [-]

Douglas W. Hubbard has a book titled How to Measure Anything in which he states that half a day of confidence-interval calibration exercises makes most people nearly perfectly calibrated. As you noted, and as is said here, that method fits nicely with Fermi estimates.

This combination seems to have a great ratio between training time and usefulness.

In response to Fermi Estimates
Comment author: lukeprog 06 April 2013 07:53:14PM 3 points [-]

Write down your own Fermi estimation attempts here. One Fermi estimate per comment, please!

In response to comment by lukeprog on Fermi Estimates
Comment author: Heka 11 April 2013 10:53:50PM 1 point [-]

I estimated how much the population of Helsinki (capital of Finland) grew in 2012. I knew from the news that the growth rate is considered to be steep.

I knew there are currently about 500,000 inhabitants in Helsinki. I set the upper bound to a 3% growth rate, or 15,000 new residents. At that rate the city would grow twentyfold in 100 years, which is too much, but the rate might be steeper right now. For the lower bound I chose 1,000 new residents; I felt that anything less couldn't really produce any news. The AGM (approximate geometric mean) is 3,750.
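The AGM here is the "approximate geometric mean" shortcut from the Fermi Estimates post: average the coefficients and the exponents of the two bounds, counting a leftover half exponent as a factor of about 3 (since 10^0.5 ≈ 3.16). A sketch:

```python
import math

def approx_geometric_mean(lo, hi):
    """AGM shortcut: average the coefficients and exponents of the bounds."""
    def split(x):  # write x as coeff * 10**exp with 1 <= coeff < 10
        exp = math.floor(math.log10(x) + 1e-12)  # epsilon guards fp rounding
        return x / 10 ** exp, exp
    (c1, e1), (c2, e2) = split(lo), split(hi)
    coeff, exp = (c1 + c2) / 2, (e1 + e2) / 2
    if exp != int(exp):            # half exponent left over -> factor of ~3
        coeff *= 3
    return coeff * 10 ** int(exp)

# Bounds of 1,000 and 15,000: coefficients average to 1.25, exponents to
# 3.5, so the result is 1.25 * 3 * 10**3.
print(approx_geometric_mean(1000, 15000))  # 3750.0
```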

My second method was to go through the number of new apartments. Here I just checked that in recent years about 3,000 apartments have been built yearly. Guessing that the household size could be 2 persons, I got 6,000 new residents.

It turned out that the population grew by 8,300 residents, the highest in 17 years; otherwise it has recently been around 6,000. So both methods worked well. Both have the benefit that one doesn't need to care whether the growth comes from births/deaths or from migration. They also didn't require considering how many people move out and how many move in.

Obviously I was much more confident in the second method, which makes me think that applying confidence intervals to Fermi estimates would be useful.