lukeprog comments on Open Thread, April 1-15, 2013 - Less Wrong

Post author: Vaniver, 01 April 2013 03:00PM

Comment author: lukeprog, 04 April 2013 02:19:19AM (3 points)

Strong AI is hard to predict: see this recent study. Thus, my own position on Strong AI timelines is one of normative agnosticism: "I don't know, and neither does anyone else!"

Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it's damn hard to predict those.

In 1900, David Hilbert posed 23 unsolved problems in mathematics. Imagine trying to predict when those would be solved. His 3rd problem was solved that same year. His 7th problem was solved in 1935. His 8th problem still hasn't been solved.

Or imagine trying to predict, back in 1990, when we'd have self-driving cars. Even in 2003 it wasn't obvious we were very close. Now it's 2013 and they totally work, they're just not legal yet.

Same problem with Strong AI. We can't be confident AI will come in the next 30 years, and we can't be confident it'll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.

Comment author: gwern, 04 April 2013 03:40:49AM (10 points)

"you probably need fundamental mathematical insights, and it's damn hard to predict those."

We can still try. As it happens, a perfectly relevant paper was just released: "On the distribution of time-to-proof of mathematical conjectures"

What is the productivity of Science? Can we measure an evolution of the production of mathematicians over history? Can we predict the waiting time till the proof of a challenging conjecture such as the P-versus-NP problem? Motivated by these questions, we revisit a suggestion published recently and debated in the "New Scientist" that the historical distribution of time-to-proof's, i.e., of waiting times between formulation of a mathematical conjecture and its proof, can be quantified and gives meaningful insights in the future development of still open conjectures. We find however evidence that the mathematical process of creation is too much non-stationary, with too little data and constraints, to allow for a meaningful conclusion. In particular, the approximate unsteady exponential growth of human population, and arguably that of mathematicians, essentially hides the true distribution. Another issue is the incompleteness of the dataset available. In conclusion we cannot really reject the simplest model of an exponential rate of conjecture proof with a rate of 0.01/year for the dataset that we have studied, translating into an average waiting time to proof of 100 years. We hope that the presented methodology, combining the mathematics of recurrent processes, linking proved and still open conjectures, with different empirical constraints, will be useful for other similar investigations probing the productivity associated with mankind growth and creativity.

They took the 144 conjectures from the Wikipedia list of conjectures; their population covariate is just an exponential equation they borrowed from somewhere. Regardless, they turn in the result one would basically expect: a constant chance of solving a problem in each time period. (In turn, this and the correlation with population suggest to me that solving conjectures is more parallel than serial: delays are driven more by how much mathematical effort is being devoted to each problem.)
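A constant chance of proof per time period is just an exponential (memoryless) waiting-time model. As a minimal sketch of what the paper's headline numbers imply, assuming the 0.01/year rate from the quoted abstract (the function name and structure here are illustrative, not from the paper):

```python
import math

# Constant-hazard model of conjecture-proving.
# RATE is the paper's estimated 0.01 proofs/year, which implies a
# mean waiting time of 1/RATE = 100 years.
RATE = 0.01  # proofs per year

def p_solved_within(years, rate=RATE):
    """P(conjecture is proved within `years`) under an exponential model."""
    return 1.0 - math.exp(-rate * years)

mean_wait = 1.0 / RATE       # 100.0 years
p30 = p_solved_within(30)    # ~0.26: a ~26% chance within 30 years
p100 = p_solved_within(100)  # ~0.63: even at the mean, only ~63% resolved

print(mean_wait, round(p30, 3), round(p100, 3))
```

Memorylessness is the notable feature: under this model, a conjecture that has resisted proof for 50 years has the same per-year chance of falling as a fresh one, which fits the "parallel rather than serial" reading.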

Comment author: lukeprog, 04 April 2013 04:14:02AM (1 point)

Nice.