
Artaxerxes comments on Musk on AGI Timeframes - Less Wrong Discussion

19 Post author: Artaxerxes 17 November 2014 01:36AM


Comments (70)


Comment author: Artaxerxes 17 November 2014 01:12:30PM *  2 points

What are you worried he might do?

If he believes what he's said, he should really throw lots of money at FHI and MIRI. Such an action would be helpful at best, harmless at worst.

Comment author: XiXiDu 17 November 2014 03:44:31PM *  1 point

What are you worried he might do?

Start a witch hunt against the field of AI? Oh wait...he's kind of doing this already.

If he believes what he's said, he should really throw lots of money at FHI and MIRI.

Seriously? How much money do they need to solve "friendly AI" within 5-10 years? Or else, what are their plans? If what MIRI imagines happens within at most 10 years, then I strongly doubt that throwing money at MIRI will make a difference. You'd need people like Musk who can directly contact and convince politicians, or stir up the fears of the general public, in order to force politicians to take notice and act.

Comment author: Artaxerxes 17 November 2014 04:21:14PM 3 points

I mean more that it seems that his views line up a lot closer to MIRI/FHI than most AI researchers. Hell, his views are closer to MIRI's than Thiel's are at this point.

How much money do they need to solve "friendly AI" within 5-10 years?

Good question. I'd like to see what they could do with 10x what they have now, for a start.

If what MIRI imagines happens within at most 10 years, then I strongly doubt that throwing money at MIRI will make a difference.

I don't even think many of those at MIRI think that they would have much chance if they were only given 10 years, so you're in good company there.

Comment author: ArisKatsaris 19 November 2014 03:28:58AM *  1 point

Start a witch hunt against the field of AI? Oh wait...he's kind of doing this already.

You believe he's calling for the execution, imprisonment or other punishment of AI researchers? I doubt it.

So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?

Comment author: XiXiDu 19 November 2014 04:09:42PM *  1 point

So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?

What I meant is that he and others will cause the general public to adopt a perception of the field of AI comparable to the public perception of GMOs, vaccination, nuclear power, etc.: a non-evidence-backed fear of something that is generally benign and positive.

He could have used his influence and reputation to directly contact AI researchers, or e.g. hold a quarterly conference about risks from AI. He could have talked to policy makers about how to ensure safety while promoting the positive aspects. There is a lot he could do. But making crazy public statements about summoning demons and comparing AI to nukes is completely unwarranted given the current state of evidence about AI risks, and will probably upset lots of AI people.

You believe he's calling for the execution, imprisonment or other punishment of AI researchers?

I doubt that he is that stupid. But I do believe that certain people, if they seriously believed in doom by AI, would consider violence an option. John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann were still around and thought that Google would launch a doomsday device within 5-10 years, he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration were highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.

The problem here is not that it would be wrong to forcefully deactivate a doomsday device if necessary, but rather that there are people out there who are stupid enough to use force unnecessarily, or to decide to use force based on insufficient evidence (evidence such as claims made by Musk).

ETA: Just take those people who destroy GMO test fields. Musk won't do something like that. But other people, who would commit such acts, might be inspired by his remarks.

Comment author: artemium 24 November 2014 09:37:00PM *  0 points

John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann were still around and thought that Google would launch a doomsday device within 5-10 years, he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration were highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.

There is some truth to that, especially regarding how crazy von Neumann was. But I'm not sure anyone would launch a pre-emptive nuclear attack on another country because of AGI research. These countries already have nukes, a pretty solid doomsday weapon, so I don't think adding another superweapon to their arsenals would change the situation. Whether you are blown to bits by a Chinese nuke or turned into paperclips by a Chinese-built AGI doesn't make much difference.

Comment author: artemium 24 November 2014 09:42:01PM *  0 points

He will probably try to buy influence in every AI company he can find. There are limits to this strategy, though. I think raising public awareness about this problem and donating money to MIRI and FHI would also help.

BTW someone should make a movie where Elon Musk becomes Iron Man and then accidentally develops uFAI... oh wait