jacob_cannell comments on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning - Less Wrong Discussion

14 Post author: ESRogs 27 January 2016 09:04PM

Comment author: jacob_cannell 29 January 2016 11:37:55PM *  0 points [-]

It's actually much worse than that, because huge breakthroughs themselves are what create new experts. So on the eve of a huge breakthrough, the currently recognized experts invariably predict that it is far off, simply because they can't see the novel path towards the solution.

In this sense, everyone who is currently an AI expert is, trivially, someone who has failed to create AGI. The only experts with any clear understanding of how far away AGI is are either not currently recognized or do not yet exist.

Comment author: IlyaShpitser 30 January 2016 05:20:41AM *  1 point [-]

Btw, I don't consider myself an AI expert. I am not sure what "AI expertise" entails; I guess it means knowing a lot about many things, including stuff like stats/ML but also a ton of engineering. I think an "AI expert" is sort of like "an airplane expert." Airplanes are too big for one person -- you might be an expert on modeling fluids or an expert on jet engines, but not an expert on airplanes.