The above-mentioned researchers are skeptical in different ways. Andrew Ng thinks that human-level AI is ridiculously far away, and that trying to predict the future more than 5 years out is useless. Yann LeCun and Yoshua Bengio believe that advanced AI is far from imminent, but approve of people thinking about long-term AI safety.
Okay, but surely it’s still important to think now about the eventual consequences of AI. - Absolutely. We ought to be talking about these things.
+1 To go even further, I would add that it's unproductive to think of these researchers as being on anyone's "side". These are smart, nuanced people and rounding their comments down to a specific agenda is a recipe for misunderstanding.
Comparing with articles from a year ago, e.g. http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better, this represents significant progress.
I'm a PhD student in Yoshua's lab. I've spoken with him about this issue several times, and he has moved on this issue, as have Yann and Andrew. From my perspective following this issue, there has been tremendous progress in the ML community's attitude towards Xrisk.
I'm quite optimistic that such progress will continue, although pessimistic that it will be fast enough or that the ML community's attitude will be anything like sufficient for a positive outcome.
I am curious whether this has changed over the past 6 years since you posted this comment. Do you get the feeling that high-profile researchers have shifted even further towards Xrisk concern, or do they hold the same views as in 2016? Thanks!
There has been continued progress at about the rate I would've expected -- maybe a bit faster. I think GPT-3 has helped change people's views somewhat, as has further appreciation of other social issues of AI.
I'm a PhD student in Yoshua's lab. I've spoken with him about this issue several times, and he has moved on this issue,
Thank you!
skepticism towards imminent human-extinction-level AI.
Got around to reading the actual interview. The 'imminent' part is well and thoroughly skepted, but as has been talked to death around here, non-imminent human extinction still seems important. And that part just seems to get totally passed over, which leaves me feeling like there's some disconnect somewhere.
It's almost like a viewpoint got some celebrity endorsements, which had some idiosyncrasies and were necessarily brief, and then members of the media formed their own opinions based largely on just those celebrity statements, plus their own preconceptions and interests.
But people underestimate how much more science needs to be done.
The big thing that is missing is meta-cognitive self-reflection. It might turn out that even today's RNN structures are sufficient, and the only missing piece is how to interconnect multi-columnar networks with meta-cognition networks.
it’s probably not going to be useful to build a product tomorrow.
Yes. If the architecture is right and capable, little further science is needed to train this AGI. It will learn on its own.
The amount of safety-related research needed is surely underestimated. The evolution of biological brains never required extra constraints; society needed and created constraints, and it had time to do so. If scientists get the architecture right, will they really know what is going on inside their networks? How can developers integrate safety? There will not be a society of similarly capable AIs that can constrain its members. These are critical research questions, especially because we have little to copy from.
I'm afraid we will never know whether someone is "close" to (super)human AGI unless that entity reveals it. Now think nuclear bomb... and superAGI is supposed to be orders of magnitude more powerful/dangerous.
So, not unlike the wartime disappearance of scientific articles on nuclear topics, a certain (sudden?) absence of progress reports in the press could be an indicator.
Yoshua Bengio, one of the world's leading experts on machine learning, and on neural networks in particular, explains his views on these issues in an interview. Relevant quotes:
I think it's fair to say that Bengio has joined the ranks of AI researchers like his colleagues Andrew Ng and Yann LeCun who publicly express skepticism towards imminent human-extinction-level AI.