The arrival of Strong AI is hard to predict: see this recent study. Thus, my own position on Strong AI timelines is one of normative agnosticism: "I don't know, and neither does anyone else!"
Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it's damn hard to predict those.
In 1900, David Hilbert posed 23 unsolved problems in mathematics. Imagine trying to predict when those would be solved. His 3rd problem was solved that same year. His 7th problem was solved in 1935. His 8th problem still hasn't been solved.
Or imagine trying to predict, back in 1990, when we'd have self-driving cars. Even in 2003 it wasn't obvious we were close. Now it's 2013 and they totally work; they're just not legal yet.
Same problem with Strong AI. We can't be confident AI will come in the next 30 years, and we can't be confident it'll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.
We can still try. As it happens, a perfectly relevant paper was just released: "On the distribution of time-to-proof of mathematical conjectures"
...What is the productivity of Science? Can we measure an evolution of the production of mathematicians over history? Can we predict the waiting time till the proof of a challenging conjecture such as the P-versus-NP problem? Motivated by these questions, we revisit a suggestion published recently and debated in the...
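To make the idea concrete, here's a minimal sketch of the kind of model such a paper might fit, assuming (purely for illustration, not as the paper's actual method) that time-to-proof is memoryless, i.e. exponentially distributed. Under that assumption, the chance a conjecture gets proved in the next t years doesn't depend on how long it has already been open. The mean of 75 years is an arbitrary placeholder.

```python
import math
import random

def simulate_time_to_proof(mean_years, n=100_000, seed=0):
    """Draw n hypothetical proof waiting times (in years) from an
    exponential (memoryless) model with the given mean."""
    rng = random.Random(seed)
    return [rng.expovariate(1 / mean_years) for _ in range(n)]

def prob_proved_within(t, mean_years):
    """Analytic probability of a proof within t years under the same
    model: 1 - exp(-t/mean), regardless of how long it's been open."""
    return 1 - math.exp(-t / mean_years)

# Compare the simulated fraction proved within 30 years to the
# closed-form answer.
times = simulate_time_to_proof(mean_years=75)
empirical = sum(1 for x in times if x <= 30) / len(times)
analytic = prob_proved_within(30, 75)
```

The memoryless assumption is exactly what makes such forecasts so unsatisfying: a conjecture open for a century is, under this model, no closer to being solved than one posed yesterday.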