Suppose that your current estimate of the probability of an AI takeoff coming in the next 10 years is some x. As technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that, some z > y. My question is: does there come a point in the future where, assuming an AI takeoff has still not happened in spite of much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
This sounds like a probability search problem in which you don't know for sure there exists anything to find - the hope function.
I worked through this in #lesswrong with nialo. It's interesting to work with various versions of this. For example, suppose you had a uniform distribution for AI's creation over 2000-2100, and you believe its creation is 90% possible. It is of course now 2011, so how much do you believe it is possible now, given its failure to appear between 2000 and the present? We could write that in Haskell as

    let fai x = (100 - x) / ((100 / 0.9) - x) in fai 11

which evaluates to ~0.889, so one's faith hasn't been much damaged.
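To spell out the Bayes step behind that expression (just a sketch; the notation A and D_x is introduced here for convenience): write A for "AI is possible, and if so arrives uniformly in 2000-2100", with prior 0.9, and D_x for "no AI by the year 2000+x". Then

    P(D_x \mid A) = \frac{100 - x}{100}, \qquad P(D_x \mid \neg A) = 1

    P(A \mid D_x) = \frac{0.9 \cdot \frac{100 - x}{100}}{0.9 \cdot \frac{100 - x}{100} + 0.1}
                  = \frac{100 - x}{(100 - x) + 10/0.9}
                  = \frac{100 - x}{100/0.9 - x}

which is exactly the fai above.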
One of the interesting things is how slowly one's credence in AI being possible declines. If you run the function fai 50*, it's ~82%; fai 90** = 47%! But then by fai 98 it has suddenly shrunk to 15%, and so on: fai 99 = 8%, and fai 100 is of course 0% (since now one has disproven the possibility).

* no AI by 2050
** no AI by 2090, etc.
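To make it easy to play with other assumptions, here is a minimal self-contained Haskell sketch (the name hope and the parameters prior/total/elapsed are illustrative, not from the original IRC exchange) which generalizes fai to any prior confidence and any uniform time window:

    -- Posterior credence that AI is possible, given that elapsed years of a
    -- total-year window have passed with no AI, starting from prior
    -- confidence and assuming a uniform arrival distribution over the window.
    hope :: Double -> Double -> Double -> Double
    hope prior total elapsed =
        (prior * remaining) / (prior * remaining + (1 - prior))
      where
        remaining = (total - elapsed) / total  -- P(no AI yet | AI is possible)

    -- The original one-liner is the special case prior = 0.9, total = 100:
    fai :: Double -> Double
    fai x = (100 - x) / ((100 / 0.9) - x)

    main :: IO ()
    main = mapM_ report [11, 50, 90, 98, 99, 100]
      where
        report x = putStrLn $ "no AI by " ++ show (2000 + round x :: Int)
                           ++ ": " ++ show (hope 0.9 100 x)

Running it prints roughly 0.89, 0.82, 0.47, 0.15, 0.08, and 0.0, matching the figures above.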
EDIT: Part of what makes this interesting is that one of the common criticisms of AI is 'look at them, they were wrong about AI being possible in 19xx, how sad and pathetic that they still think it's possible!' The hope function shows that unless one was highly confident about AI showing up in the early part of a time range, the failure of AI to show up ought to damage one's belief only a little bit.
That blog post is also interesting from a mind projection fallacy viewpoint.
Incidentally, I've tried to apply the hope function to my recent essay on Folding@home: http://www.gwern.net/Charity%20is%20not%20about%20helping#updating-on-evidence