I wonder whether what the researchers observed was not what the test subjects think, but what they think they think. This is because they did not observe actual behavior; they only asked the subjects how they would behave.
For example, consider those who said that the odds of the bus arriving do not depend on the time remaining till midnight and stay at 50/50: at what odds would they actually bet on the bus arriving at 11:59, if they had to place a bet? My suspicion is that it would not be 50/50.
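For concreteness, here is a minimal sketch of the Bayesian answer, under assumptions I'm adding purely for illustration: a 50% prior that the bus comes at all tonight, with its arrival time uniform over the hour before midnight:

```python
# Posterior probability that the bus will still come, given that it hasn't
# yet. Assumes a prior p that the bus comes at all tonight, with its arrival
# time uniform over the 60 minutes before midnight (illustrative assumptions
# chosen to mirror the 50/50 claim).

def hope(p, minutes_elapsed, window=60):
    """P(bus still coming | no bus in the first `minutes_elapsed` minutes)."""
    p_no_bus_yet = 1 - p * minutes_elapsed / window
    p_still_coming = p * (window - minutes_elapsed) / window
    return p_still_coming / p_no_bus_yet

print(hope(0.5, 0))   # 0.5    -- at 11:00, the stated 50/50 holds
print(hope(0.5, 59))  # ~0.016 -- at 11:59: roughly 60:1 against, not 50/50
```

So a subject who truly believed 50/50 would take even odds at 11:59, while this model puts the fair odds at roughly 60:1 against.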
One has to wonder about the ethics of such an experiment - when you know many of the subjects won't get even close to the right answer and thus would accept unfair bets!
You can certainly set it up in an ethical way. For example, tell the subject that they have to find something as fast as they can, hidden somewhere in a set of drawers or in a large bin nearby. One could deduce their (admittedly sunk-cost-biased) intuitive probabilities from where they start looking and when/whether they switch from the drawers to the bin. As described, this would not be easy or clean, but you can certainly modify the experiment to be both.
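As a rough sketch of how that inference might go (the greedy probability-per-second search rule, the drawer probabilities, and the timings are all my illustrative assumptions, not anything from the paper): a subject who searches optimally checks whichever spot offers the highest chance of a find per second, so the drawer at which they give up and turn to the bin brackets the probability they implicitly assign to the bin:

```python
# Toy inversion of the drawers-vs-bin setup. Assumes the subject searches
# greedily by probability-per-second (the classic optimal-search rule),
# that drawers are quick to check and the bin is slow, and that their
# drawer probabilities decline geometrically. All numbers are illustrative.

def implied_bin_prob(k_switch, drawer_probs, t_drawer=5.0, t_bin=20.0):
    """Bracket the bin probability implied by switching to the bin
    after k_switch fruitless drawers."""
    rate = lambda p, t: p / t  # chance of a find per second of searching
    hi = min(1.0, rate(drawer_probs[k_switch - 1], t_drawer) * t_bin) if k_switch > 0 else 1.0
    lo = rate(drawer_probs[k_switch], t_drawer) * t_bin if k_switch < len(drawer_probs) else 0.0
    return lo, hi

# e.g. 8 drawers with geometrically declining probabilities:
probs = [0.3 * 0.6 ** i for i in range(8)]
print(implied_bin_prob(3, probs))  # -> (0.259..., 0.432): bracket on P(bin)
```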
I don't think you'd have to go so far as to bet. If people actually experience waiting until 11:59, they'll probably realise that the bus isn't likely to come.
I didn't see enough graphs, so I put together a spreadsheet for computing the hope function given a distribution of the likelihood of finding what you're looking for in each drawer (and a prior for finding it at all); see the code sketch after this comment. I think it's right, but I'd appreciate someone sanity-checking it.
I found it nice to be able to change the probability distribution over the drawers and the prior probability and see what that does to the long-term hope function.
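For anyone who prefers code to a spreadsheet, here is a minimal Python sketch of the same computation, assuming the setup from the paper: a prior p that the item is in the desk at all, and a distribution over the drawers:

```python
# Hope function for the drawer search: given a prior p that the item is in
# the desk at all and a probability distribution over the drawers, track
# after each fruitless drawer (a) the posterior that it's in the desk at
# all and (b) the "hope" that it's in the very next drawer.

def hope_function(p, drawer_dist):
    """Yield (drawers_searched, P(in desk | misses), P(next drawer | misses))."""
    searched = 0.0  # total prior drawer mass already searched
    for k, mass in enumerate(drawer_dist):
        p_no_find = 1 - p * searched              # P(all searches so far fruitless)
        in_desk = p * (1 - searched) / p_no_find  # posterior it's in the desk at all
        next_drawer = p * mass / p_no_find        # hope for the next drawer
        yield k, in_desk, next_drawer
        searched += mass

uniform = [1 / 8] * 8
for k, desk, nxt in hope_function(0.8, uniform):
    print(f"after {k} fruitless drawers: P(desk)={desk:.3f}, P(next)={nxt:.3f}")
```

With a 0.8 prior and eight equal drawers (illustrative numbers), this reproduces the signature pattern: the hope of finding it in the very next drawer climbs from 0.10 toward 1/3 even as the posterior that it's in the desk at all falls from 0.80 toward 1/3.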
Yesterday I finished transcribing "The Ups and Downs of the Hope Function In a Fruitless Search". This is a statistics & psychology paper describing a simple probabilistic search problem and the sheer difficulty subjects have in producing the correct Bayesian answer. Besides providing a great but simple illustration of the mind projection fallacy in action, the search problem maps onto a number of forecasting problems: the original version is looking in a desk for a letter that may not be there, but we could equally check every year for the creation of AI and ask how our beliefs should change over time - which turns out to defuse a common scoffing criticism of past technological forecasting. (This last application was why I went back to the chapter after I first read of it.)
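To make that mapping concrete, a one-function sketch under assumptions I'm supplying purely for illustration (a 90% prior that AI is achievable at all, spread uniformly over a 50-year window):

```python
# The forecasting version of the same update: each fruitless year is one
# more empty "drawer". The prior, window, and uniformity are illustrative
# assumptions, not claims from the paper.

def p_ever(prior, years_elapsed, window=50):
    """P(AI ever arrives | nothing in the first `years_elapsed` years)."""
    return (prior * (window - years_elapsed) / window
            / (1 - prior * years_elapsed / window))

for t in (0, 10, 25, 40):
    print(f"after {t} fruitless years: {p_ever(0.9, t):.2f}")
# 0.90, 0.88, 0.82, 0.64 -- decades of failure only modestly dent the belief
```

Under these numbers, decades of fruitless checking barely move the probability that it ever happens, which is why early forecasters' misses are weak evidence against the technology itself.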
The math is all simple - arithmetic and one application of Bayes's law - so I think all LWers can enjoy it, and it has amusing examples to analyze. I have also taken the trouble to annotate it with Wikipedia links, relevant materials, and many PDF links (some jailbroken just for this transcript). I hope everyone finds it as interesting as I did.
I thank John Salvatier for doing the interlibrary loan (ILL) request that got me a scan of this book chapter.