Andy_McKenzie comments on Top 9+2 myths about AI risk - Less Wrong
Comments (45)
Is it really the case that nobody interested in AI risk/safety wants to stop or slow down progress in AI research? It seemed to me there was at least a substantial minority that wanted to do this, to buy time.
I remember that we were joking at the NYC Singularity Summit workshop a few years back that maybe we should provide AI researchers with heroin and philosophers to slow them down.
As far as I have noticed, there are few if any voices in the academic/nearby AI safety community that promote slowing AI research as the best (or even a good) option. People talking about relinquishment or slowing seem to be far outside the main discourse, typically people who have only a passing acquaintance with the topic or a broader technology scepticism.
The best antidote is to start thinking about the details of how one would actually go about it: that generally shows why differential development is sensible.
I think differential technological development - prioritising some areas over others - is the current approach. It achieves the same result but has a higher chance of working.
Thanks for your response and not to be argumentative, but honest question: doesn't that mean that you want some forms of AI research to slow down, at least on a relative scale?
I personally don't see anything wrong with this stance, but it seems to me like you're trying to suggest that this trade-off doesn't exist, and that's not at all what I took from reading Bostrom's Superintelligence.
The trade-off exists. Some ways of resolving it are better than others, and some ways of phrasing it are better than others.
An important distinction jumps out at me: if we slowed down all technological progress equally, that wouldn't actually "buy time" for anything in particular. I can't think of much we'd want to be doing with that time besides either 1. researching other technologies that might help us avoid dangerous AI (few come to mind, though one candidate is technology for uploading or simulating a human mind before we build an AI from scratch, which sounds at least somewhat less dangerous from a human perspective than building an AI from scratch), or 2. thinking about AI value systems.
Option 2 is presumably the reason anyone would suggest slowing down AI research specifically, but a notable obstacle to it at present is the large number of people who aren't concerned about AI risk because it seems so far away. If we get to the point where people actually expect an AGI very soon, then slowing down while we discuss it might make sense.
Can you expand on point #7, if possible? Some people honestly think Friendliness researchers at MIRI and elsewhere actively discourage AI research. That sounds ridiculous to me: I've never seen such an attitude from Friendliness researchers, nor can I even imagine it. But this was the primary reason for Mark Friedenbach's leaving LW: he said there's a massive tendency on LW against solving world problems, specifically because actual AI research is supposedly dangerous. He considered LW a memetic hazard that he didn't want to participate in. Although I completely disagree with his evaluation of the current memes of LW and MIRI, he claimed he received two separate death threats on the #lesswrong IRC channel when he mentioned that he wants to do actual AI research.
So if there's somebody who is actually against ongoing AI research, I want to know that. And if that's not an isolated event but a tendency, even a small one, MIRI or somebody should make a statement. People are getting ridiculously distorted ideas of MIRI and LW, and little effort is being made to correct them.
I'm not a Friendliness researcher, but I did once consider whether trying to slow down AI research might be a good idea. My current thinking is that it's probably not, but only because we're forced to live in a third-best world:
First best: Do AI research until just before we're ready to create an AGI. Either Friendliness is already solved by then, or else everyone stops and waits until it is solved.
Second best: Friendliness looks a lot harder than AGI, and we can't expect everyone to resist the temptation of fame and fortune when the possibility of creating AGI is staring them in the face. So stop or slow down AI research now.
Third best: Don't try to stop or slow down AI research because we don't know how to do it effectively, and doing it ineffectively will just antagonize AI researchers and create PR problems.
Why is this so ridiculous as to be unimaginable? Isn't the second-best world above actually better than the third-best, if only it were feasible?
I meant that I can't imagine Friendliness researchers seriously taking that stance, for the same reason you subscribe to the third-best choice.
I can only speak for those I've interacted with, and I haven't seen blocking AI research discussed as a viable option.
I am one of those proponents of stopping all AI research and I will explain why.
(1) Don't stand too close to the cliff. We don't know how AGI will emerge and by the time we are close enough to know, it's probably too late. Either human error or malfeasance will bring us over the edge.
(2) Friendly AGI might be impossible. Computer scientists cannot, in general, predict the behavior of even simple programs. The halting problem, one specific kind of prediction, is provably undecidable for non-trivial code. I doubt we'll even grasp why the first AGI we build works.
Neither of these statements seems controversial, so if we are determined not to produce unfriendly AGI, the only safe approach is to stop AI research well before it becomes dangerous. It's playing with fire in a straw cabin, our only shelter on a deserted island. Things would be different if we someday solve the Friendliness problem, build a provably secure "box", or are well distributed across the galaxy.
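The point about unpredictability can be made concrete. The sketch below (my illustration, not from the comment above) implements the Collatz iteration: whether this four-line loop terminates for every positive starting value is a famous open problem, a vivid reminder that even tiny programs resist behavioral prediction, let alone an AGI.

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz map until n reaches 1.

    Whether this loop halts for *every* positive integer n is an open
    problem (the Collatz conjecture), despite the code being trivial.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# The trajectory length is wildly irregular: nearby inputs behave very
# differently, which is why no one has proved the loop always halts.
print(collatz_steps(6))   # short trajectory
print(collatz_steps(27))  # famously long trajectory for such a small input
```

Nothing about this example depends on AI specifics; it simply grounds the claim that "predicting the behavior of simple programs" is already beyond current mathematics in some cases.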