Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: Eliezer_Yudkowsky 18 May 2012 10:11:15PM 12 points

I agree that top mainstream AI guy Peter Norvig was way the heck more sensible than the reference class of declared "AGI researchers" when I talked to him about FAI and CEV, and that estimates should be substantially adjusted accordingly.

Comment author: thomblake 20 May 2012 07:45:18PM 1 point

Yes. I wonder if there's a good explanation for why narrow AI folks are so much more sensible than AGI folks on those subjects.

Comment author: DanArmak 27 May 2012 10:10:12PM 5 points

Because they have experience with their products actually working, they know that 1) these things can be really powerful, even though narrow, and 2) there are always bugs.