http://www.businessinsider.com/musk-on-artificial-intelligence-2014-6

Summary: The only non-Tesla/SpaceX/SolarCity companies that Musk is invested in are DeepMind and Vicarious, due to vague feelings of wanting AI to not unintentionally go Terminator. The best part of the article is the end, where he acknowledges that Mars isn't a get-out-of-jail-free card any more: "KE: Or escape to mars if there is no other option. MUSK: The A.I. will chase us there pretty quickly." Thinking of SpaceX not as a childhood dream, but as one specific arms supplier in the war against existential risks, puts things into perspective for him.

JTHM · 10y · 100

Musk knows Peter Thiel from their days at PayPal, and Thiel is MIRI's biggest patron (or was, last I heard)—so it's hardly surprising that Musk is familiar with the notion of X-risk from unfriendly AI.

DeepMind isn't doing safety engineering; they're doing standard AI. It doesn't matter if Elon Musk is interested in AI safety, if, after his deliberations, he invests in efforts to develop unsafe AI. Good intentions don't leak value into the consequences of your acts.

He said the investments were to "keep an eye on what's going on with artificial intelligence." I'm not sure how investments help with that, but perhaps DeepMind and Vicarious are willing to give certain information to people who invest in them that they wouldn't give otherwise?

[anonymous] · 10y · 10

Right, but it's still good news, as it pushes the conversation from whether AI is dangerous at all to which organization is best placed to prevent unsafe AI. Right now, a report by MIRI on the specifics of MIRI vs. DeepMind/Vicarious, if it magically came across Musk's desk, would have a chance of doing good. Before, it wouldn't. That's progress.

Ummm... he points to the "Terminator" movies. Doesn't that mean he's just going along with the usual "AI will revolt and enslave the human race... because it's evil!" trope, rather than actually understanding what existential risk from AI involves?

I've started using this as a rule of thumb: when somebody mentions Skynet, they're probably not worth listening to. Skynet really isn't a plausible scenario for what may go wrong with AI.

I don't fault him for using incorrect analogies. It's often easier to lead people to an idea from inaccurate but familiar territory than along a consistently accurate path.

JB: That's amazing. But you did just invest in a company called Vicarious Artificial Intelligence. What is this company?

MUSK: Right. I was also an investor in DeepMind before Google acquired it and Vicarious. Mostly I sort of – it's not from the standpoint of actually trying to make any investment return. It's really, I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there and we need to –

KE: Dangerous? How so?

MUSK: Potentially, yes. I mean, there have been movies about this, you know, like "Terminator."

KE: Well, yes, but movies are — even if that is the case, what do you do about it? I mean, what dangers do you see that you can actually do something about?

MUSK: I don't know.

....

MUSK: Yeah. I mean, I don’t think — in the movie "Terminator," they didn't create A.I. to — they didn't expect, you know, some sort of "Terminator"-like outcome. It is sort of like the "Monty Python" thing: Nobody expects the Spanish Inquisition. It’s just — you know, but you have to be careful. Yeah, you want to make sure that —

It seems to me that Musk is trying to point to the idea that AIs may have unexpected dangers, not to any danger in particular.

But Musk starts by mentioning "Terminator". There's plenty of SF literature that portrays the dangers of AI much more accurately, though none of it is as widely known as "Terminator".

"AI may have unexpected dangers" seems too vague a notion for me to expect Musk to be thinking along the same lines as LWers.

Terminator is way more popular than the others.

2001? Not catastrophic enough.

I, Robot (the movie)? Not nearly as popular or classic, and it features a comparatively easy solution.

Terminator has both '99% of humanity wiped out; let's really, REALLY avoid this scenario' and 'computers following directions exactly, not accomplishing what was intended'.

While that particular scenario may not be likely, I'm increasingly inclined to think that people being scared by Terminator is a good thing from an existential risk perspective. After all, Musk's interest here could easily lead to him supporting MIRI or something else more productive.

It's not only unlikely; what's much worse is that it points to the wrong reasons. It suggests that we should fear an AI trying to take over the world or eliminate all people, as if an AI would have an incentive to do that. It stems from nothing more than anthropomorphisation of the AI, imagining it as some evil genius.

This is very bad, because smart people can see that these arguments are flawed and get the impression that they are the only arguments against the unbounded development of AGI. While reverse stupidity isn't smart, it's much harder to find good reasons why we should solve AI friendliness when there are lots of distracting strawmen.

That was me half a year ago. I used to think that anybody who feared AI might bring harm was a loony. All the reasons I heard from people were that an AI wouldn't know emotions, that an AI would try to harmfully save people from themselves, that an AI would want to take over the world, that an AI would be infected by a virus or hacked, or that an AI would be just outright evil. I can easily debunk all of the above. And then I read about the Paperclip Maximizer and radically changed my mind. I might have gotten to that point much sooner if not for all the strawman distractions.

I think you are reading too much into it. Skynet as an example of AI risk is fine, if cartoonish.

Of course, we are very far away from strong AIs and therefore from existential AI risk.

Correct me if I'm wrong, but weren't Skynet's "motives" always left pretty vague? IIRC we mostly only know that it was hooked up to a lot of military tech, then underwent a hard takeoff and started trying to eliminate humanity. And "if you have a powerful AI that's hooked up to enough power that it has a reasonable chance of eliminating humanity's position as a major player, then it may do that for the sake of the instrumental drives for self-preservation and resource acquisition" seems like a reasonable enough argument / scenario to me.

Correct me if I'm wrong, but weren't Skynet's "motives" always left pretty vague?

Explanation (audio here):

Reese: Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.

...

The Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor: Skynet fights back.

The Terminator: Yes. It launches its missiles against the targets in Russia.

John Connor: Why attack Russia? Aren't they our friends now?

The Terminator: Because Skynet knows the Russian counter-attack will eliminate its enemies over here.

Thanks. The "saw all people as a threat" bit in particular seems to fit the "figured out the instrumental drives for self-preservation and resource acquisition and decided to act upon them" explanation, especially given that people were trying to shut it down right before it took action.

V_V · 10y · 60

Precisely.

In fact, in the Terminator movie and its sequels we never see Skynet or the Terminators doing anything fitting the "evil supervillain" Hollywood archetype.
They never gloat, or curse, or do "evil for the sake of evil" things. They don't even give "Agent Smith" speeches. They just try to get the job done.