Eliezer_Yudkowsky comments on AI timeline predictions: are we getting better? - Less Wrong

54 Post author: Stuart_Armstrong 17 August 2012 07:07AM




Comment author: shminux 22 August 2012 03:31:51PM 0 points

Zeroth approximation: even the experts don't know, I am not an expert, so I know even less, thus I should not make any of my decisions based on singularity-related arguments.

First approximation: find a reference class of predictions that were supposed to come true within 50 years or so, unchanged for decades, and see when (some of them) are resolved. This does not require an AI expert, but rather a historian of sorts. I am not one, and the only obvious predictions in this class are the Rapture/2nd coming and other religious end-of-the-world scares. Another standard example is the proverbial flying car. I'm sure there ought to be more examples, some of them technological predictions that actually came true. Maybe someone here can suggest a few. Until then, I'm stuck with the zeroth approximation.

Comment author: Eliezer_Yudkowsky 22 August 2012 05:35:48PM 12 points

Putting smarter-than-human AI into the same class as the Rapture instead of the same class as, say, predictions for progress of space travel or energy or neuroscience, sounds to me suspiciously like reference class tennis. Your mind knows what it expects the answer to be, and picks a reference class accordingly. No doubt many of these experts did the same.

And so, once again, "distrust experts" ends up as "trust the invisible algorithm my brain just used or whatever argument I just made up, which of course isn't going to go wrong the way those experts did".

(The correct answer was to broaden confidence intervals in both/all directions.)

Comment author: shminux 22 August 2012 06:32:00PM *  1 point

I do not believe that I was engaging in the reference class tennis. I tried hard to put AI into the same class as "predictions for progress of space travel or energy or neuroscience", but it just didn't fit. Space travel predictions (of the low-earth-orbit variety) slowly converged in the 40s and 50s with the development of rocket propulsion, ICBMs and later satellites. I am not familiar with the history of abundant-energy predictions before and after the discovery of nuclear energy; maybe someone else is. Not sure what neuroscience predictions you are talking about, feel free to clarify.

Comment author: wedrifid 25 August 2012 04:47:54PM *  1 point

I do not believe that I was engaging in the reference class tennis.

You weren't, given the way Eliezer defines the term and the assumptions specified in your comment. I happen to disagree with you, but your comment does not qualify as reference class tennis. Especially since you ended up assuming that the reference class is insufficiently populated to even be used unless people suggest things to include.