gwern comments on An inflection point for probability estimates of the AI takeoff? - Less Wrong

11 Post author: Prismattic 29 April 2011 11:37PM


Comment author: gwern 30 April 2011 05:50:27PM * 15 points

This sounds like a probabilistic search problem in which you don't know for sure that there is anything to find - the hope function.

I worked through this in #lesswrong with nialo. It's interesting to play with various versions of it. For example, suppose you had a uniform prior for AI's creation over 2000-2100, and you initially believe its creation to be 90% possible. It is of course now 2011, so how much should you believe it is possible, given its failure to appear between 2000 and now? We could write that in Haskell as let fai x = (100 - x) / ((100 / 0.9) - x) in fai 11, which evaluates to ~0.889 - so one's faith hasn't been much damaged.

One of the interesting things is how slowly one's credence that AI is possible declines. fai 50* yields 81%, and fai 90** still yields 47%! But by fai 98 it has suddenly shrunk to 15%, fai 99 = 8%, and fai 100 is of course 0% (since the possibility has now been disproven).

* no AI by 2050

** no AI by 2090, etc.
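Spelled out as a small runnable definition (the derivation in the comments is just Bayes' theorem applied to the uniform prior described above):

```haskell
-- The one-liner above, as a named function. With a uniform prior over
-- 2000-2100 and initial credence p = 0.9 that AI is possible at all,
-- Bayes gives, after x years without AI:
--   P(possible | no AI yet) = p*(100-x)/100 / (1 - p*x/100)
--                           = (100-x) / (100/p - x)
fai :: Double -> Double
fai x = (100 - x) / ((100 / 0.9) - x)

-- fai 11  ~ 0.889
-- fai 50  ~ 0.818
-- fai 90  ~ 0.474
-- fai 98  ~ 0.153
-- fai 100 == 0.0
```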

EDIT: Part of what makes this interesting is that a common criticism of AI runs 'look at them, they were wrong about AI being possible in 19xx, how sad and pathetic that they still think it's possible!' The hope function shows that unless one is highly confident that AI will show up in the early part of a time range, the failure of AI to show up ought to damage one's belief only a little.
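That caveat about the early part of the range can be made precise by generalizing the update to an arbitrary arrival prior. This is a sketch of mine, not from the original discussion: cdf x is the prior probability that AI arrives within the first x years, conditional on it being possible at all, and p is the initial credence that it is possible.

```haskell
-- Generalized hope function (names and the front-loaded example are
-- hypothetical illustrations, not from the comment).
hope :: (Double -> Double) -> Double -> Double -> Double
hope cdf p x = p * (1 - cdf x) / (1 - p * cdf x)

-- Uniform prior over 2000-2100: 11 empty years barely matter.
-- hope (/ 100) 0.9 11 ~ 0.889
--
-- A front-loaded prior putting 80% of the arrival mass before 2011:
-- the same 11 empty years cut one's credence to ~0.643.
-- hope (const 0.8) 0.9 11 ~ 0.643
```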


That blog post is also interesting from a mind projection fallacy viewpoint:

"What I found most interesting was, the study provides evidence that people seem to reason as though probabilities were physical properties of matter. In the example with the desk with the eight drawers and an 80% chance a letter is in the desk, many people reasoned as though “80% chance-of-letter” was a fundamental property of the furniture, up there with properties like weight, mass, and density.

Many reasoned that the odds the desk has the letter, stay 80% throughout the fruitless search. Thus, they reasoned, it would still be 80%, even if they searched seven drawers and found no letter. And these were people with some education about probability! One problem is people were tending to overcompensate to avoid falling into the Gambler’s Fallacy. They were educated, well-learned people, and they knew that the probability of a fair coin falling heads remains 50%, no matter how many times in a row heads have already been rolled. They seemed to generalize this to the letter search. There’s an important difference, though: the coin flips are independent of each other. The drawer searches are not.

In a followup study, when the modified questions were posed, with two extra “locked” drawers and a 100% initial probability of a letter, miraculously the respondents’ answers showed dramatic improvement. Even though, formally, the exercises were isomorphic."
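The desk problem is the same kind of update. A sketch (the function name and parameters are mine): p is the prior that the letter is in the desk at all, n the number of equally likely drawers, and k the number searched and found empty.

```haskell
-- Posterior probability the letter is in the desk after k of n
-- equally likely drawers turn up empty, starting from prior p:
--   p*(n-k)/n / (1 - p*k/n) = p*(n-k) / (n - p*k)
letterOdds :: Double -> Double -> Double -> Double
letterOdds p n k = p * (n - k) / (n - p * k)

-- letterOdds 0.8 8 7 ~ 0.333  -- after 7 empty drawers, not 80%
-- letterOdds 1.0 8 7 == 1.0   -- certainty: empty drawers teach nothing
```

The p = 1 case mirrors the modified questions in the followup study: when the letter is certainly present somewhere, fruitless searching of the open drawers leaves that certainty untouched, which is presumably why respondents found it easier to reason about.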

Comment author: gwern 27 September 2011 05:46:51PM 1 point

Incidentally, I've tried to apply the hope function to my recent essay on Folding@home: http://www.gwern.net/Charity%20is%20not%20about%20helping#updating-on-evidence