gwern comments on An inflection point for probability estimates of the AI takeoff? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This sounds like a probability search problem in which you don't know for sure there exists anything to find - the hope function.
I worked through this in #lesswrong with nialo. It's interesting to work with various versions of the problem. For example, suppose you had a uniform distribution for AI's creation over 2000-2100, and you believe its creation 90% possible. It is of course now 2011, so how much do you believe it is possible now, given its failure to appear between 2000 and now? We could write that in Haskell as

    let fai x = (100 - x) / ((100 / 0.9) - x) in fai 11

which evaluates to ~0.889 - so one's faith hasn't been much damaged.

One of the interesting things is how slowly one's credence in AI being possible declines. If you run the function as fai 50*, it's 81%; fai 90** = 47%! But then by fai 98 it has suddenly shrunk to 15%, and so on: fai 99 = 8%, and fai 100 is of course 0% (since now one has disproven the possibility).

* no AI by 2050
** no AI by 2090, etc.
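The closed form used there falls out of a straight Bayesian update; a minimal sketch, with a generalized `hope` helper of my own naming (not from the original IRC discussion):

```haskell
-- With prior p that AI is possible at all, and a uniform timetable over
-- 2000-2100, after a fraction 'elapsed' of the window passes with no AI:
--   P(possible | no AI yet) = p*(1 - elapsed) / (p*(1 - elapsed) + (1 - p))
-- For elapsed = x/100 this simplifies to (100 - x) / ((100 / p) - x),
-- which is the 'fai' expression above with p = 0.9.
hope :: Double -> Double -> Double
hope p elapsed = num / (num + (1 - p))
  where num = p * (1 - elapsed)

-- Specialized to the example: 90% prior, x years since 2000.
fai :: Double -> Double
fai x = hope 0.9 (x / 100)

main :: IO ()
main = mapM_ (print . fai) [11, 50, 90, 98, 99, 100]
```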
EDIT: Part of the interestingness is that one of the common criticisms of AI is 'look at them, they were wrong about AI being possible in 19xx, how sad and pathetic that they still think it's possible!' The hope function shows that unless one is highly confident about AI showing up in the early part of a time range, the failure of AI to show up ought to damage one's belief only a little bit.
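That dependence on the early part of the time range can be checked numerically. A hedged sketch, generalizing the update to an arbitrary discrete timetable; the front-loaded weights (90% of the mass in 2000-2009) are an illustrative invention of mine, not anything from the comment:

```haskell
-- hopeDist: posterior that AI is possible, given a prior, a list of
-- weights over years 2000..2100, and n years elapsed with no AI.
hopeDist :: Double -> [Double] -> Int -> Double
hopeDist prior weights n = num / (num + (1 - prior))
  where
    total     = sum weights
    remaining = sum (drop n weights) / total  -- mass not yet ruled out
    num       = prior * remaining

-- Uniform over the century vs. front-loaded: 90% of the probability
-- mass placed in 2000-2009 (illustrative numbers only).
uniform, frontLoaded :: [Double]
uniform     = replicate 100 1
frontLoaded = replicate 10 9 ++ replicate 90 (1 / 9)

main :: IO ()
main = do
  print (hopeDist 0.9 uniform 11)      -- little damage by 2011: ~0.89
  print (hopeDist 0.9 frontLoaded 11)  -- heavy damage by 2011: ~0.47
```

Under the uniform timetable the 2011 posterior barely moves, but a believer who expected AI mostly in the first decade has already lost nearly half their credence - exactly the asymmetry the EDIT describes.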
That blog post is also interesting from a mind projection fallacy viewpoint.
Incidentally, I've tried to apply the hope function to my recent essay on Folding@home: http://www.gwern.net/Charity%20is%20not%20about%20helping#updating-on-evidence