
Houshalter comments on [Link]: The Unreasonable Effectiveness of Recurrent Neural Networks - Less Wrong Discussion

Post author: turchin, 04 June 2015 08:47PM




Comment author: Houshalter, 06 June 2015 01:44:33AM, 6 points

But a year before, the author made this prediction:

My impression from this exercise is that it will be hard to go above 80%, but I suspect improvements might be possible up to range of about 85-90%, depending on how wrong I am about the lack of training data.

And then 4 years later:

2015 update: Obviously this prediction was way off, with state of the art now in 95%, as seen in this Kaggle competition leaderboard. I'm impressed!

A few percent is a huge deal on a machine learning benchmark, because each additional percentage point is exponentially harder to gain than the previous one.
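One way to see why a few percentage points matter so much (a sketch of my own, not from the comment): what shrinks with each accuracy gain is the *error rate*, and going from 80% to 95% accuracy cuts the error rate by a factor of four.

```python
# Illustration: small accuracy gains correspond to large error-rate reductions.

def error_reduction(acc_old, acc_new):
    """Factor by which the error rate shrinks when accuracy improves."""
    return (1 - acc_old) / (1 - acc_new)

# 80% -> 95% accuracy means the error rate drops from 20% to 5%,
# i.e. three out of every four remaining mistakes are eliminated.
print(round(error_reduction(0.80, 0.95), 2))
```

Under this framing, the predicted 85-90% versus the achieved 95% is not a 5-10 point miss but a 2-3x difference in error rate.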

I'm not saying I think strong AI is really close, and certainly not because RNNs are becoming more popular. But it's worth noting that experts can underestimate progress just as easily as they overestimate it.