This is a linkpost for http://ai-on.org/


This may be a naive and over-simplified stance, so educate me if I'm being ignorant--

but isn't promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn't we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.

Feel free to just provide a link if the argument has been discussed before.

I agree. But it's worthwhile to try to get AI researchers on our side and get them researching things relevant to FAI. Perhaps LessWrong could have some influence on this group. If nothing else, it's interesting to keep an eye on how AI is progressing.

First of all, it's hard to imagine anyone slowing down AI research. Even a ban by any number of large governments would only encourage research in places beyond government control.

Second, there is quite a lot of uncertainty about the difficulty of these tasks. Both human-comparable software and AI value alignment probably involve multiple difficult subproblems that have barely been researched so far.

One could slow it down by convincing people who would otherwise speed it up.

What you've expressed is the outlier, extremist view. Most AI researchers are of the opinion, if they have expressed a thought at all, that there is a long series of things that must happen in conjunction for a Skynet-like failure to occur. It is hardly obvious at all that AI should be a highly regulated research field, like, say, nuclear weapons research.

I highly suggest expanding your reading beyond the Yudkowsky, Bostrom, et al. clique.

Most AI researchers have not done any research into the topic of AI risk, so their opinions are irrelevant. That's like pointing to the opinions of sailors on global warming, because global warming is about oceans and sailors should be experts on that kind of thing.

I think AI researchers are slowly warming up to AI risk. A few years ago it was a niche thing that no one had ever heard of. Now it's gotten some media attention and there is a popular book about it. Slate Star Codex has compiled a list of notable AI researchers that take AI risk seriously.

Personally, my favorite name on there is Schmidhuber, who is very well known and, I think, has been ahead of his time in many areas of AI, with a particular focus on general intelligence and on more general methods like reinforcement learning and recurrent nets, rather than the standard machine learning stuff. His opinions on AI risk are nuanced, though: I think he expects AIs to leave Earth and go into space, but he does accept most of the premises of AI risk.

Bostrom did a survey back in 2014 that found AI researchers think there is at least a 30% probability that AI will be "bad" or "extremely bad" for humanity. I imagine that opinion has changed since then as AI risk has become more well known. And it will only increase with time.

Lastly, this is not an outlier or 'extremist' view on this website. This is the majority opinion here and has been discussed to death in the past, and I think it's as settled as can be expected. If you have any new points to make or share, please feel free. Otherwise you aren't adding anything at all. There is literally no argument in your comment at all, just an appeal to authority.

EDIT: fixed "research into the topic of AI research"

This is the majority opinion here and has been discussed to death in the past,

If a discussion involves dissenters being insulted and asked to leave, then it doesn't count.

http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care

Most AI researchers are of the opinion, if they have expressed a thought at all, that there is a long series of things that must happen in conjunction for a Skynet-like failure to occur.

Most AI researchers have not done any research into the topic of AI risk, so their opinions are irrelevant.

Go back to the object level: is it true or false that a Skynet scenario requires the conjunction of a string of unlikely events?

False. It requires only a few events, like smarter-than-human AI being invented, and the control problem not being solved. I don't think any of these things is very unlikely.

Not solving the control problem isn't a sufficient condition for AI danger: the AI also needs inimical motivations. So that is a third premise. Also, fast takeoff of a singleton AI is being assumed.

ETA: The last two assumptions are so frequently made in AI risk circles that they lack salience -- people seem to have ceased to regard them as assumptions at all.

Well the control problem is all about making AIs without "inimical motivations", so that covers the same thing IMO. And fast takeoff is not at all necessary for AI risk. AI is just as dangerous if it takes its time to grow to superintelligence. I guess it gives us somewhat more time to react, at best.

Well the control problem is all about making AIs without "inimical motivations",

Only if you use language very loosely. If you don't, the Value Alignment problem is about making an AI without inimical motivations, and the Control Problem is about making an AI you can steer irrespective of its motivations.

And fast takeoff is not at all necessary for AI risk.

This is about Skynet scenarios specifically. If you have multipolar slow development of ASI, then you can fix the problems as you go along.

I guess it gives us somewhat more time to react, at best.

Which is to say that in order to definitely have a Skynet scenario, you definitely do need things to develop at more than a certain rate. So speed of takeoff is an assumption, however dismissively you phrase it.

Most AI researchers have not done any research into the topic of AI [safety], so their opinions are irrelevant.

(I assume my edit is correct?)

One could also say: most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?

Lastly, this is not an outlier or 'extremist' view on this website. This is the majority opinion here and has been discussed to death in the past, and I think it's as settled as can be expected. If you have any new points to make or share, please feel free. Otherwise you aren't adding anything at all. There is literally no argument in your comment at all, just an appeal to authority.

Really? There are a lot of frequent posters here who don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.

But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.


But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.

Considering that you're using an anonymous account to post this comment, the above is a statement that carries much less weight than it normally would.

most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?

Because that statement is simply false. Researchers do deal with real-world problems and datasets; there is a huge overlap between research and practice. There is little or no overlap between AI risk/safety research and current machine learning research. The only connection I can think of is that people familiar with reinforcement learning might have a better understanding of AI motivation.

Really? There are a lot of frequent posters here who don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.

I didn't say there wasn't dissent. I said it wasn't an outlier view, and seems to be the majority opinion.

But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.

Look, I'm sorry if I came across as overly hostile. I certainly welcome any debate and discussion on this issue. If you have anything to say, feel free to say it. But your above comment didn't really add anything. There was no argument, just an appeal to authority, and calling GP "extremist" for something that's a common view on this site. At the very least, read some of the previous discussions first. You don't need to read everything, but there is a list of posts here.

There was no argument, just an appeal to authority, and calling GP "extremist" for something that's a common view on this site.

A view can be extreme within the wider AI community, and normal within LessWrong. The disconnection between LW and everyone else is part of the problem.

So. How do we contact these people and find out their stance on how important the alignment problem is?