You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.
Comment author: siIver
20 October 2016 01:41:10AM
3 points
This may be a naive and over-simplified stance, so educate me if I'm being ignorant--
but isn't promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn't we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and for accelerating research.
Feel free to just provide a link if the argument has been discussed before.
Comment author: username2
20 October 2016 02:02:55PM
0 points
What you've expressed is the outlier, extremist view. Most AI researchers are of the opinion, if they have expressed a thought at all, that a long series of things must happen in conjunction for a Skynet-like failure to occur. It is hardly obvious that AI should be a highly regulated research field like, say, nuclear weapons research.
I highly suggest expanding your reading beyond the Yudkowsky, Bostrom et al. clique.
Comment author: Houshalter
20 October 2016 08:48:36PM
2 points
Most AI researchers have not done any research into the topic of AI risk, so their opinions are irrelevant. That's like pointing to the opinions of sailors on global warming, as if spending time on the ocean made them climate experts.
I think AI researchers are slowly warming up to AI risk. A few years ago it was a niche thing that no one had ever heard of. Now it's gotten some media attention and there is a popular book about it. Slate Star Codex has compiled a list of notable AI researchers that take AI risk seriously.
Personally my favorite name on there is Schmidhuber, who is very well known and has, I think, been ahead of his time in many areas of AI, with a particular focus on general intelligence and on more general methods like reinforcement learning and recurrent nets, rather than the standard machine learning stuff. His opinions on AI risk are nuanced, though: I think he expects AIs to leave Earth and go into space, but he accepts most of the premises of AI risk.
Bostrom did a survey back in 2014 that found AI researchers think there is at least a 30% probability that AI will be "bad" or "extremely bad" for humanity. I imagine that opinion has changed since then as AI risk has become more well known. And it will only increase with time.
Lastly, this is not an outlier or 'extremist' view on this website. It is the majority opinion here, it has been discussed to death in the past, and I think it's as settled as can be expected. If you have any new points to make or share, please feel free. Otherwise you aren't adding anything at all. There is literally no argument in your comment, just an appeal to authority.
EDIT: fixed "research into the topic of AI research"
Comment author: username2
20 October 2016 11:41:19PM
-1 points
Most AI researchers have not done any research into the topic of AI [safety], so their opinions are irrelevant.
(I assume my edit is correct?)
One could also say: most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?
Lastly this is not an outlier or 'extremist' view on this website. This is the majority opinion here and has been discussed to death in the past, and I think it's as settled as it can be expected. If you have any new points to make or share, please feel free. Otherwise you aren't adding anything at all. There is literally no argument in your comment at all, just an appeal to authority.
Really? There are a lot of frequent posters here who don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Comment author: Houshalter
21 October 2016 12:40:33AM
3 points
most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?
Because that statement is simply false. Researchers do deal with real-world problems and datasets; there is a huge overlap between research and practice. By contrast, there is little or no overlap between AI risk/safety research and current machine learning research. The only connection I can think of is that people familiar with reinforcement learning might have a better understanding of AI motivation.
Really? There's a lot of frequent posters here that don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.
I didn't say there wasn't dissent. I said it wasn't an outlier view, and seems to be the majority opinion.
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Look, I'm sorry if I came across as overly hostile. I certainly welcome debate and discussion on this issue, and if you have anything to say, feel free to say it. But your above comment didn't really add anything: there was no argument, just an appeal to authority, and calling the GP "extremist" for what is a common view on this site. At the very least, read some of the previous discussions first. You don't need to read everything, but there is a list of posts here.
Comment author: dxu
23 October 2016 10:46:24PM
-1 points
But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.
Considering that you're using an anonymous account to post this comment, the above is a statement that carries much less weight than it normally would.