DeepMind's Go AI, called AlphaGo, has beaten the European champion with a score of 5-0. A match against the top-ranked human player, Lee Se-dol, is scheduled for March.
Games are a great testing ground for developing smarter, more flexible algorithms that can tackle problems in ways similar to humans. Creating programs that play games better than the best humans has a long history.
[...]
But one game has thwarted A.I. research thus far: the ancient game of Go.
I actually think self-driving cars are more interesting than strong Go-playing programs (but they don't worry me much either).
I guess I am not sure why I should pay attention to EY's opinion on this. I do ML-type stuff for a living. Does EY have an unusual track record for predicting anything? All I see is a long tail of vaguely silly things he says online that he later renounces (e.g. "ignore stuff EY_2004 said"). To be clear: moving away from bad opinions is great! That is not the issue.
edit: In general I think LW really, really doesn't listen to experts enough (I don't even mean myself; I just mean that the sensible Bayesian thing to do is to take expert opinion as your prior on almost everything). EY et al. take great pains to move people away from that behavior, talking about how the world is mad, about civilizational inadequacy, etc. In other words: don't trust experts, they are crazy anyway.
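To make the "expert opinion as prior" point concrete, here is a minimal sketch in Python of what deferring to expert consensus looks like in Bayesian terms. All the numbers are hypothetical, chosen only for illustration:

```python
# Minimal sketch of "use expert opinion as your prior" in Bayesian terms.
# All numbers here are hypothetical, chosen only for illustration.

def posterior(prior: float, p_evidence_if_true: float,
              p_evidence_if_false: float) -> float:
    """Bayes' rule for a binary hypothesis given one piece of evidence."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

# Suppose experts assign 90% to some claim, and my inside-view argument
# against it is only weakly diagnostic (I'd expect to find it 60% of the
# time if the claim is false, 40% if true). The posterior barely moves
# off the expert consensus.
p = posterior(prior=0.90, p_evidence_if_true=0.40, p_evidence_if_false=0.60)
print(f"posterior: {p:.2f}")  # ~0.86 -- still close to the expert prior
```

The point of the toy numbers: unless your private evidence is strongly diagnostic, the posterior stays near the expert prior, which is what "just go with expert opinion" cashes out to.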
I don't have a source for this, but I remember an anecdote from Kurzweil that scientists who worked on early transistors were extremely skeptical about the future of the technology. They were so focused on solving specific technical problems that they didn't see the big picture, whereas an outsider could have just looked at the general trend, predicted a doubling every 18 months, and been accurate for at least 50 years.
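As a back-of-the-envelope illustration of how far that outside-view trend carries, here is a small sketch using the 18-month doubling period from the anecdote (the specific year spans are my own assumption, not from the source):

```python
# Rough sketch of the outside-view extrapolation: a fixed doubling
# every 18 months, compounded over time. The year spans below are
# illustrative assumptions, not figures from the anecdote.

def doublings(years: float, months_per_doubling: float = 18.0) -> float:
    """Number of doublings over a span, given a fixed doubling period."""
    return years * 12.0 / months_per_doubling

for years in (5, 20, 45):
    d = doublings(years)
    print(f"{years:2d} years -> {d:4.1f} doublings -> {2 ** d:.1e}x growth")
# 5 years -> ~10x; 20 years -> ~10,000x; 45 years -> ~1e9x
```

A naive compounding rule like this would have beaten the insiders' pessimism for decades, which is exactly the anecdote's point.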
So that's why I wouldn't trust various ML experts like Ng who have said not to worry about AGI. No, the specific algorith...