Eliezer_Yudkowsky comments on In defense of the outside view - Less Wrong

Post author: cousin_it 15 January 2010 11:01AM



Comment author: Eliezer_Yudkowsky 15 January 2010 02:28:04PM 11 points

You forgot to subscript; I think you meant Eliezer_1998, who had just turned old enough to vote, believed in ontologically basic human-external morality, and was still babbling about Moore's Law in unquestioning imitation of his elders. I really get offended when people compare the two of us.

Growing up on the Internet is like walking around with your baby pictures stapled to your forehead.

I also consider it an extremely basic fallacy, and extremely annoying, to lump "people who predict AI arriving in 10 years" and "people who predict AI arriving at some unknown point in the future" into the same reference class, so that the past failure of the former class of predictions becomes an argument for the failure of the latter class: that is, since some AI scientists have overpromised in the short run, AI must be physically impossible in the long run. After all, it's the same charge of negative affect in both cases, right?

Comment author: ciphergoth 15 January 2010 02:34:06PM 0 points

Which reference to you calls for that subscript?

Comment author: RobinZ 15 January 2010 02:55:20PM 2 points

Every reader is encouraged to Google on their own for past announcements by Doug Lenat, Ben Goertzel, Eliezer Yudkowsky (those are actually the heroes of the bunch), or other people that I'm afraid to name at the moment.

Presumably.