Just giving a short table-summary of an article by James Shanteau on which areas and tasks experts develop good intuitions in - and which ones they don't. Though the article is old, its results seem to be in agreement with more recent summaries, such as Kahneman and Klein's. The heart of the article is a decomposition of characteristics (of professions, and of tasks within those professions) where we would expect experts to develop good performance:
| Good performance | Poor performance |
|---|---|
| Static stimuli | Dynamic (changeable) stimuli |
| Decisions about things | Decisions about behavior |
| Experts agree on stimuli | Experts disagree on stimuli |
| More predictable problems | Less predictable problems |
| Some errors expected | Few errors expected |
| Repetitive tasks | Unique tasks |
| Feedback available | Feedback unavailable |
| Objective analysis available | Subjective analysis only |
| Problem decomposable | Problem not decomposable |
| Decision aids common | Decision aids rare |
I do feel that this may go some way to explaining the expert's performance here.
The lower entries in the table seem susceptible to being moved from right to left. As for the top ones - well, they proclaim that you should widen your error bars.
In practice, simple algorithms have often been found to perform better than experts on problems in the right-hand column. We can't really have an algorithm for designing AI, but maybe one could be useful for timeline work?
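The classic version of this finding is that even an "improper" linear model - one that just adds up the relevant cues with equal weights - can beat an inconsistent human judge. A toy simulation can show the effect; everything in it is invented for illustration (the cues, the noise levels, the model of the "expert" as someone who uses roughly the right cues but weights them inconsistently from case to case):

```python
import random

random.seed(1)
N_CUES, N_CASES = 4, 5000

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Each case is a vector of cues; the criterion is their sum plus noise.
cases = [[random.gauss(0, 1) for _ in range(N_CUES)] for _ in range(N_CASES)]
truth = [sum(c) + random.gauss(0, 1) for c in cases]

# "Expert": right cues, but case-to-case inconsistent weights plus noise.
expert = [
    sum(w * x for w, x in zip([random.uniform(0, 2) for _ in range(N_CUES)], c))
    + random.gauss(0, 1.5)
    for c in cases
]

# Improper linear model: just add the cues with equal (unit) weights.
unit_model = [sum(c) for c in cases]

print(f"expert vs truth     : r = {corr(expert, truth):.2f}")
print(f"unit-weight vs truth: r = {corr(unit_model, truth):.2f}")
```

On this toy setup the unit-weight model correlates with the criterion noticeably better than the simulated expert does, simply because it applies the same weights every time.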
I strongly suspect that the primary result of such an algorithm would be very wide error bars on the timeline, and that it would indeed outperform most experts for this reason. You can't get water from a stone, nor narrow estimates out of ignorance and difficult problems, no matter what simple algorithm you use. Though I would be quite intrigued to be proven wrong about this, and I have seen Fermi estimates for quantities such as the mass of the Earth apparently extract narrow and correct estimates out of the combination of multiple widely erroneous steps.
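There is a simple arithmetic reason Fermi estimates can come out narrower than their steps deserve: multiplicative errors add in log-space, and independent errors partially cancel, so the typical error of the product grows roughly like the square root of the number of steps rather than linearly. A toy Monte Carlo (the number of steps and error sizes are invented, and each step's error is modelled as uniform in log-space) illustrates this:

```python
import math
import random
import statistics

random.seed(0)
N_FACTORS = 8    # steps in a hypothetical Fermi decomposition
MAX_OFF = 3.0    # each step's guess is wrong by up to this factor
TRIALS = 100_000

def step_error():
    # Multiplicative error of one guessed factor, in orders of magnitude,
    # drawn uniformly in log-space.
    return random.uniform(-math.log10(MAX_OFF), math.log10(MAX_OFF))

# Total error (in orders of magnitude) of the product of all the steps:
# the per-step log-errors simply add, and often point in opposite directions.
totals = [abs(sum(step_error() for _ in range(N_FACTORS)))
          for _ in range(TRIALS)]

worst = N_FACTORS * math.log10(MAX_OFF)  # if every error pointed one way
print(f"worst possible error: {worst:.2f} orders of magnitude")
print(f"median actual error : {statistics.median(totals):.2f} orders of magnitude")
```

The median error of the combined estimate comes out far below the worst case, which is the cancellation effect at work - though note this only helps when the step errors are genuinely independent and unbiased, which is exactly what you can't assume for something like AI timelines.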