gwern comments on An inflection point for probability estimates of the AI takeoff? - Less Wrong

Post author: Prismattic 29 April 2011 11:37PM


Comment author: nshepperd 30 April 2011 06:41:13PM, 9 points

For a non-uniform distribution we can use the analogous formula (1.0 - p(before 2011)) / (1.0/0.9 - p(before 2011)). This amounts to adding an extra blob of uncounted probability density, sized so that if the AI is "actually built" anywhere within the distribution (including the uncounted part), the prior probability 0.9 equals the ratio counted / (counted + uncounted) — and then cutting off the region where we know the AI has not been built.

For a normal(mu = 2050, sigma=10) distribution, in Haskell this is let ai year = (let p = cumulative (normalDistr 2050 (10^2)) year in (1.0 - p) / (1.0/0.9 - p))¹. Evaluating on a few different years:

  • P(AI|not by 2011) = 0.899996
  • P(AI|not by 2030) = 0.8979
  • P(AI|not by 2050) = 0.8181...
  • P(AI|not by 2070) = 0.16995
  • P(AI|not by 2080) = 0.012
  • P(AI|not by 2099) = 0.00028
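For anyone without the statistics package handy, the same numbers can be reproduced self-containedly — a sketch, not the exact code above, with my own helper names, approximating erf with the Abramowitz & Stegun 7.1.26 formula (absolute error below ~1.5e-7):

```haskell
-- Abramowitz & Stegun 7.1.26 approximation to erf
erfApprox :: Double -> Double
erfApprox x
  | x < 0     = negate (erfApprox (negate x))
  | otherwise = 1 - poly * exp (negate (x * x))
  where
    t    = 1 / (1 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
             + t * (-1.453152027 + t * 1.061405429))))

-- CDF of Normal(mu, sigma) at x
normalCdf :: Double -> Double -> Double -> Double
normalCdf mu sigma x = 0.5 * (1 + erfApprox ((x - mu) / (sigma * sqrt 2)))

-- P(AI is eventually built | not built by `year`), prior 0.9,
-- with the counted mass spread as Normal(2050, 10)
ai :: Double -> Double
ai year = (1.0 - p) / (1.0 / 0.9 - p)
  where p = normalCdf 2050 10 year

main :: IO ()
main = mapM_ (\y -> putStrLn (show y ++ ": " ++ show (ai y)))
             [2011, 2030, 2050, 2070, 2080, 2099]
```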

This drops off far faster than the uniform case once 2050 is reached. We can also use this survey as an interesting source for a distribution: the median year for P=0.5 is 2050, which gives us the same mu, and the median year for P=0.1 was 2028, which fits sigma ≈ 17 years². The survey also has P=0.9 by 2150, suggesting our prior of 0.9 is in the ballpark. Plugging the same years into the new distribution:

  • P(AI|not by 2011) = 0.899
  • P(AI|not by 2030) = 0.888
  • P(AI|not by 2050) = 0.8181...
  • P(AI|not by 2070) = 0.52
  • P(AI|not by 2080) = 0.26
  • P(AI|not by 2099) = 0.017

Even by 2030 our confidence will have changed little.

¹Using Statistics.Distribution.Normal from Hackage. (Note: in versions 0.10 and later of the statistics package, the second parameter of normalDistr is the standard deviation rather than the variance, so the call would be normalDistr 2050 10.)

²Technically, the survey seems to have asked about unconditional probabilities, not probabilities conditional on AI being possible, and the latter is what we want. We may then want to actually fit a normal distribution so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9, which would be a bit harder (we can't just use 2050 as mu).
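That harder fit is actually straightforward, since two quantile constraints pin down both parameters. A sketch (helper names erfApprox and invPhi are my own; erf again via Abramowitz & Stegun 7.1.26):

```haskell
-- Fit mu and sigma so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9.
erfApprox :: Double -> Double
erfApprox x
  | x < 0     = negate (erfApprox (negate x))
  | otherwise = 1 - poly * exp (negate (x * x))
  where
    t    = 1 / (1 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
             + t * (-1.453152027 + t * 1.061405429))))

-- standard normal CDF
phi :: Double -> Double
phi z = 0.5 * (1 + erfApprox (z / sqrt 2))

-- inverse standard normal CDF, by bisection on [-10, 10]
invPhi :: Double -> Double
invPhi p = go (-10) 10 (200 :: Int)
  where
    go lo hi 0 = (lo + hi) / 2
    go lo hi n
      | phi mid < p = go mid hi (n - 1)
      | otherwise   = go lo mid (n - 1)
      where mid = (lo + hi) / 2

-- The two constraints in standardized form:
--   (2028 - mu) / sigma = invPhi (0.1/0.9)
--   (2050 - mu) / sigma = invPhi (0.5/0.9)
z1, z2, sigma', mu' :: Double
z1 = invPhi (0.1 / 0.9)
z2 = invPhi (0.5 / 0.9)
sigma' = (2050 - 2028) / (z2 - z1)
mu'    = 2050 - z2 * sigma'

main :: IO ()
main = putStrLn ("mu = " ++ show mu' ++ ", sigma = " ++ show sigma')
```

This gives mu ≈ 2047.7 and sigma ≈ 16.2 — close to, but not the same as, the eyeballed (2050, 17).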

Comment author: gwern 30 April 2011 08:05:16PM, 2 points

This drops off far faster than the uniform case, once 2050 is reached.

The intuitive explanation for why the normal distribution drops off faster is that it makes such strong predictions about the region around 2050: once you've reached 2070 with no AI, you've 'wasted' most of your possible drawers, to continue the original blog post's metaphor.

To get a visual analogue of the probability mass, you could map the normal curve onto a uniform distribution: something like 'if each year at the peak corresponds to 30 years of the uniform version, then it's as if we were looking at the period 1500-2100 AD, so 2070 is very late in the game indeed!' To give a crude ASCII diagram, the mapped normal curve would look like this, where every space/column is 1 equal chance to make AI:

2000/2001/2002/... 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 etc.
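The mapping can be made concrete: carve the Normal(2050, 10) mass into 100 equal-chance "drawers" and count how many land in each decade. A self-contained sketch (helper names are my own; erf via Abramowitz & Stegun 7.1.26):

```haskell
erfApprox :: Double -> Double
erfApprox x
  | x < 0     = negate (erfApprox (negate x))
  | otherwise = 1 - poly * exp (negate (x * x))
  where
    t    = 1 / (1 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
             + t * (-1.453152027 + t * 1.061405429))))

-- CDF of Normal(mu, sigma) at x
normalCdf :: Double -> Double -> Double -> Double
normalCdf mu sigma x = 0.5 * (1 + erfApprox ((x - mu) / (sigma * sqrt 2)))

-- how many of the 100 equal-chance drawers fall between years lo and hi
drawersIn :: Double -> Double -> Double
drawersIn lo hi = 100 * (normalCdf 2050 10 hi - normalCdf 2050 10 lo)

main :: IO ()
main = mapM_ report
  [(2030, 2040), (2040, 2050), (2050, 2060), (2060, 2070), (2070, 2080)]
  where
    report (lo, hi) =
      putStrLn (show (lo :: Int) ++ "-" ++ show hi ++ ": "
                 ++ show (round (drawersIn (fromIntegral lo) (fromIntegral hi))
                            :: Integer)
                 ++ " drawers")
```

Roughly 14 drawers sit in the 2030s, 34 each in the 2040s and 2050s, 14 in the 2060s, and only about 2 in the 2070s — so by 2070 nearly every drawer has already been opened.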