http://www.businessinsider.com/musk-on-artificial-intelligence-2014-6
Summary: The only non-Tesla/SpaceX/SolarCity companies Musk is invested in are DeepMind and Vicarious, out of a vague desire to keep AI from unintentionally going Terminator. The best part of the article is the end, where he acknowledges that Mars isn't a get-out-of-jail-free card anymore: "KE: Or escape to mars if there is no other option. MUSK: The A.I. will chase us there pretty quickly." Thinking of SpaceX not as a childhood dream but as one specific arms supplier in the war against existential risks puts things into perspective for him.
It's not only unlikely; what's much worse is that it points to the wrong reasons. It suggests that we should fear AI trying to take over the world or eliminate all people, as if AI would have an incentive to do that. That stems from nothing more than anthropomorphisation of AI, imagining it as some evil genius.
This is very bad, because smart people can see that those arguments are flawed and get the impression that they are the only arguments against unbounded development of AGI. While reverse stupidity isn't smart, it's much harder to find good reasons why we should solve AI friendliness when there are lots of distracting strawmen.
That was me half a year ago. I used to think that anybody who feared AI might bring harm was a loony. All the reasons I heard from people were that AI wouldn't know emotions, that AI would try to harmfully save people from themselves, that AI would want to take over the world, that AI would be infected by a virus or hacked, or that AI would be just outright evil. I could easily debunk all of the above. And then I read about the Paperclip Maximizer and radically changed my mind. I might have gotten to that point much sooner if not for all the strawman distractions.
I think you're reading too much into it. Skynet as an example of AI risk is fine, if cartoonish.
Of course, we are still very far from strong AI, and therefore from existential AI risk.