In this essay I argue the following:
Brain emulation requires enormous computing power; enormous computing power requires further progression of Moore's law; further Moore's law relies on large-scale production of cheap processors in ever more advanced chip fabs; cutting-edge chip fabs are both expensive and vulnerable to state actors (but not non-state actors such as terrorists). Therefore: the advent of brain emulation can be delayed by global regulation of chip fabs.
Full essay: http://www.gwern.net/Slowing%20Moore%27s%20Law
I think it might be a hard sell to convince governments to intentionally retard their own technological progress. Any country that willingly did this would put itself at a competitive disadvantage, both economically and militarily.
Nukes are probably an easier sell because they are specific to war - there's no other good use for them.
I think this might be more like Eliezer's "let it out of the box" experiments: The prospect of using the technology is too appealing to restrain it.
Another problem is that this is abstract. Nuclear weapons are a very tangible threat - they go boom, people die. Pretty much everyone can understand that.
With AI, the problems aren't so easy to understand. First, people might not even believe AI is possible, let alone that it is a risk. Second, people regard IT workers practically the way they'd regard a real-life wizard. I am called a genius at work for doing trivial tasks and thanked up and down for accomplishing small things that took five minutes, simply because others don't know how to do them. At the same time, it is assumed that no matter what kind of IT problem I am given, I will be able to solve it - they assume a web developer can fix their computer, for instance. I can fix some problems, but I'm no computer tech.
I wonder whether people understand the risks of AI well enough to realize that the IT people won't be able to fix it.
And then there's optimism bias. I can't think of a potentially useful technology we've passed up because it was dangerous. Can you think of an example where that has actually happened? Or where a large number of people understood an abstract problem, believed in its feasibility, and took appropriate measures to counteract it?
I'll be thinking about this now...
Yes, I've pointed out most of those as reasons effective regulation would not be done (especially in China).