lukeprog comments on Open Thread, April 1-15, 2013 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Strong AI is hard to predict: see this recent study. Thus, my own position on Strong AI timelines is one of normative agnosticism: "I don't know, and neither does anyone else!"
Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it's damn hard to predict those.
In 1900, David Hilbert posed 23 unsolved problems in mathematics. Imagine trying to predict when those would be solved. His 3rd problem was solved that same year. His 7th problem was solved in 1935. His 8th problem still hasn't been solved.
Or imagine trying to predict, back in 1990, when we'd have self-driving cars. Even in 2003 it wasn't obvious we were very close. Now it's 2013 and they totally work; they're just not legal yet.
Same problem with Strong AI. We can't be confident AI will come in the next 30 years, and we can't be confident it'll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.
We can still try. As it happens, a perfectly relevant paper was just released: "On the distribution of time-to-proof of mathematical conjectures"
They took the 144 conjectures from the Wikipedia list of conjectures; their population covariate is just an exponential growth equation they borrowed from somewhere. Regardless, they turn in the result one would basically expect: a roughly constant chance of solving a problem in each time period. (In turn, this and the correlation with population suggests to me that solving conjectures is more parallel than serial: delays are related more to how much mathematical effort is being devoted to each problem.)
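A constant per-period chance of proof is just the memoryless (geometric/exponential) model, and it's easy to see what that implies. Here's a minimal simulation sketch; the 1%-per-year hazard rate is made up for illustration, not taken from the paper:

```python
import random

# Constant-hazard ("memoryless") model of time-to-proof.
# HAZARD is an assumed, illustrative value -- not from the paper.
HAZARD = 0.01  # 1% chance a given open conjecture is proved each year

def years_to_proof(rng: random.Random) -> int:
    """Simulate the number of years until proof under a constant per-year hazard."""
    years = 1
    while rng.random() >= HAZARD:
        years += 1
    return years

rng = random.Random(0)
samples = [years_to_proof(rng) for _ in range(100_000)]

# With constant hazard h, expected time-to-proof is 1/h (here, ~100 years).
mean = sum(samples) / len(samples)
print(f"mean time-to-proof: {mean:.1f} years")

# Memorylessness: among conjectures still open after 50 years, the remaining
# wait has the same distribution as the original wait -- being open a long
# time tells you nothing about how soon a proof will arrive.
survivors = [t - 50 for t in samples if t > 50]
mean_remaining = sum(survivors) / len(survivors)
print(f"mean remaining wait after 50 open years: {mean_remaining:.1f} years")
```

The second print is the punchline: under this model a conjecture that has resisted proof for decades is no "closer" to being solved, which is exactly why track-record-free timeline confidence is suspect.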
Nice.