Most AI researchers have not done any research into AI risk itself, so their opinions on it carry little weight. It's like citing the opinions of sailors on global warming: global warming involves oceans, and sailors spend a lot of time on oceans, so surely they're the experts.
I think AI researchers are slowly warming up to AI risk. A few years ago it was a niche topic that almost no one had heard of. Now it has gotten some media attention and there is a popular book about it. Slate Star Codex has compiled a list of notable AI researchers who take AI risk seriously.
Personally my favorite name on there is Schmidhuber, who is very well known and has, I think, been ahead of his time in many areas of AI, with a particular focus on general intelligence and on more general methods like reinforcement learning and recurrent nets rather than standard machine learning fare. His views on AI risk are nuanced: I believe he expects AIs to leave Earth and expand into space, but he accepts most of the premises of AI risk.
Bostrom did a survey back in 2014 that found AI researchers assign at least a 30% probability to AI turning out "bad" or "extremely bad" for humanity. I imagine that figure has shifted since then as AI risk has become more widely known, and I expect it will only increase with time.
Lastly, this is not an outlier or 'extremist' view on this website. It is the majority opinion here, has been discussed to death in the past, and is about as settled as can be expected. If you have any new points to make, please feel free. Otherwise you aren't adding anything: there is literally no argument in your comment, just an appeal to authority.