Mirzhan_Irkegulov comments on Top 9+2 myths about AI risk - Less Wrong Discussion

44 points · Post author: Stuart_Armstrong 29 June 2015 08:41PM

Comment author: Mirzhan_Irkegulov 30 June 2015 09:01:08PM 1 point

Can you expand on point #7, if possible? There are some people who honestly think that Friendliness researchers at MIRI and elsewhere actually discourage AI research. That sounds ridiculous to me; I've never seen such an attitude from Friendliness researchers, nor can I even imagine it. But this was the primary reason for Mark Friedenbach's leaving LW: he said there's a massive tendency on LW against solving world problems, specifically because actual AI research is supposedly dangerous. He considered LW a memetic hazard he didn't want to participate in. Although I completely disagree with his evaluation of the current memes of LW and MIRI, he claimed he received two separate death threats on the #lesswrong IRC channel when he mentioned that he wanted to do actual AI research.

So if there is somebody who is actually against ongoing AI research, I want to know. And if that's not an isolated event but a tendency, even a small one, MIRI or somebody should make a statement. I mean, people are getting ridiculously distorted ideas about MIRI and LW, and little effort is being made to correct them.

Comment author: Wei_Dai 01 July 2015 08:16:02AM 3 points

I'm not a Friendliness researcher, but I did once consider whether trying to slow down AI research might be a good idea. My current thinking is that it probably isn't, but only because we're forced to live in a third-best world:

First best: Do AI research until just before we're ready to create an AGI. Either Friendliness is already solved by then, or else everyone stops and waits until Friendliness is solved.

Second best: Friendliness looks a lot harder than AGI, and we can't expect everyone to resist the temptation of fame and fortune when the possibility of creating AGI is staring them in the face. So stop or slow down AI research now.

Third best: Don't try to stop or slow down AI research because we don't know how to do it effectively, and doing it ineffectively will just antagonize AI researchers and create PR problems.

> There are some people who honestly think that Friendliness researchers at MIRI and elsewhere actually discourage AI research. That sounds ridiculous to me; I've never seen such an attitude from Friendliness researchers, nor can I even imagine it.

Why is this so ridiculous as to be unimaginable? Isn't the second-best world above actually better than the third-best, if only it were feasible?

Comment author: Mirzhan_Irkegulov 01 July 2015 08:26:51AM *  1 point

I meant that I can't imagine Friendliness researchers seriously taking that stance, for the same reason you subscribe to the third-best option.

Comment author: Stuart_Armstrong 01 July 2015 05:12:21AM 3 points

I can only speak for those I've interacted with, and I haven't seen blocking AI research discussed as a viable option.