Epiphany comments on Slowing Moore's Law: Why You Might Want To and How You Would Do It - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (90)
I think it might be a hard sell to convince governments to intentionally retard their own technological progress. Any country that willingly does this puts itself at a competitive disadvantage, both economically and militarily.
Nukes are probably an easier sell because they are specific to war - there's no other good use for them.
I think this might be more like Eliezer's "let it out of the box" experiments: The prospect of using the technology is too appealing to restrain it.
Another problem is that this risk is abstract. Nuclear weapons are a very tangible problem - they go boom, people die. Pretty much everyone can understand that.
With AI, the problems aren't so easy to understand. First of all, people might not even believe AI is possible, let alone believe it is a risk. Secondly, people regard IT people practically the way they'd regard a real-life wizard. I am called a genius at work for doing simple tasks and thanked up and down for accomplishing small things that took five minutes, simply because others don't know how to do them. At the same time, it is assumed that no matter what type of IT problem I am given, I will be able to solve it. They assume a web developer can fix their computer, for instance. I can fix some problems, but I'm no computer tech.
I wonder if they don't understand the risks of AI well enough to realize that the IT people can't fix it.
And then there's optimism bias. I can't think of a potentially useful technology we've passed up because it was dangerous. Can you think of an example where that has actually happened? Or where a large number of people understood an abstract problem, believed in its feasibility, and took appropriate measures to counteract it?
I'll be thinking about this now...
Yes, I've pointed out most of those as reasons effective regulation would not be done (especially in China).
Oh, sorry about that! After this dawned on me, I just kind of skimmed the rest and the subtitle "The China question" did not trigger a blip on my "you must read this before posting that idea" radar.
What did you think of my ideas for slowing Moore's law?
I wish this were on the idea comment rather than over here... I'm sorry, but I think I will have to relocate my response to the other thread where my comment is. Discussing it here would draw a bunch of people into the conversation on this thread when the comment we're talking about is elsewhere. So, for the sake of organization, my response to you regarding the feasibility of convincing programmers to refuse risky AI jobs is on the other thread.