Eliezer_Yudkowsky comments on In defense of the outside view - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You forgot to subscript; I think you meant Eliezer_1998, who had just turned old enough to vote, believed in ontologically basic human-external morality, and was still babbling about Moore's Law in unquestioning imitation of his elders. I really get offended when people compare the two of us.
Growing up on the Internet is like walking around with your baby pictures stapled to your forehead.
I also consider it an extremely basic fallacy, and extremely annoying, to lump "people who predict AI arriving in 10 years" together with "people who predict AI arriving at some unknown point in the future" into the same reference class, so that the previous failure of the former class of predictions becomes an argument for the failure of the latter class; that is, since some AI scientists have overpromised in the short run, AI must be physically impossible in the long run. After all, it's the same charge of negative affect in both cases, right?
Which reference to you calls for that subscript?
Presumably.