A few months back, somebody posted an article by a scientist giving lower and upper bounds on the probability of superintelligence. He broke the estimate into a three-part Fermi calculation (EDIT: see LocustBeanGum's answer). Does anybody remember this article, and if so, can you provide a link?

[anonymous] · 13y · 80

The original article; the LW link post.

Perhaps you mean this, though the probabilities involved in the argument are different: (1) human-level AI will be built within 100 years; (2) the AI will be able to undergo recursive self-improvement in intelligence; (3) this intelligence explosion will unpredictably transform our world.
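
A minimal sketch of how a three-part Fermi calculation like this yields lower and upper bounds: assign a lower and upper probability to each step and multiply them. The step names mirror the list above; the numbers are hypothetical placeholders, not the article's values.

```python
# Hypothetical lower/upper probabilities for each step of the argument.
steps = {
    "human-level AI within 100 years": (0.1, 0.9),
    "recursive self-improvement follows": (0.1, 0.9),
    "intelligence explosion transforms the world": (0.1, 0.9),
}

# Multiply the per-step bounds to bound the overall probability.
lower, upper = 1.0, 1.0
for name, (lo, hi) in steps.items():
    lower *= lo
    upper *= hi

print(f"lower bound: {lower:.3f}, upper bound: {upper:.3f}")
# With these placeholder numbers: 0.001 and 0.729.
```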

Thanks, this is what I was looking for.