When talking about Elon Musk's impact on the world, I mostly look at "how has he influenced extinction risk?".
This forces a stark ordering of priorities: If he created a "backup" human civilization on Mars, that would (by consequentialist reasoning) do enough good to probably outweigh even some historically bad Twitter policies... but it wouldn't matter if an OpenAI AGI killed everyone on Earth and then sent itself to Mars.
(And, of course, he didn't have to do the smaller-scale-than-Mars-yet-bad things! It's not like those were "necessary" on the path to Mars! If the guy can make decisions, it doesn't make sense for him to get to say "My good impact has bought me the right to do not-even-remotely-related bad things, and I really feel like doing bad things!". But I'm getting ahead of myself...)
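To make that stark ordering concrete, here's a toy expected-value sketch in Python (every number and variable below is a made-up placeholder for illustration, not an actual probability estimate of anything):

```python
# Toy model of the "Mars backup vs. AGI doom" ordering described above.
# All probabilities and values are invented placeholders, purely illustrative.

P_AGI_DOOM = 0.2           # hypothetical chance an unaligned AGI kills everyone on Earth
P_DOOM_REACHES_MARS = 0.9  # hypothetical chance such an AGI also wipes out a Mars colony
MARS_BACKUP_VALUE = 1.0    # value of a surviving backup civilization (arbitrary units)
TWITTER_POLICY_COST = 0.001  # placeholder cost of the smaller-scale bad decisions

# A Mars backup only pays off in worlds where doom happens on Earth
# AND the doom doesn't follow the backup to Mars.
ev_mars_backup = P_AGI_DOOM * (1 - P_DOOM_REACHES_MARS) * MARS_BACKUP_VALUE

print(f"EV of Mars backup: {ev_mars_backup:.4f}")          # 0.0200
print(f"Cost of smaller bad stuff: {TWITTER_POLICY_COST}")  # dwarfed by the above
```

The point is just the gating: the backup's value gets multiplied by the chance that doom doesn't follow it to Mars, which is exactly why accelerating AGI can swamp everything else on the ledger.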
So far, Musk has probably accelerated AI capabilities, which I'd consider overall pretty bad w.r.t. extinction risk originating from AGI. This includes cofounding OpenAI, founding X.ai, and (far more indirectly/debatably) funding DeepMind early on. However, Musk also brought more publicity to "hardcore" AI alignment research by recommending Bostrom's original Superintelligence book. Then again, Bill Gates and plenty of AI domain experts were already recommending that book back when it was released, so Musk's counterfactual impact here probably isn't even that good.
Sadly, this post is not about AI or X.ai. (But please, if you have thoughts on that, turn them into posts!) Instead, this post is one of those "how I changed my mind about a thing" rationality example posts.
It's also a reference post, by which I mean it summarizes and links to both Wikipedia and non-Wikipedia sources.
My Belief Timeline
A few days ago: A friend told me that Musk had "turned off" Starlink to prevent a Ukraine military action against Russia. I chalked it up as one more piece of evidence for Elon Musk having a net-bad impact on the world.
However, that wording makes it sound more reasoned and mechanical than it was. In actuality, an extra factor was lurking in the background: preexisting social (mumblemumble) feelings about Elon Musk.
Like a lot of people on this forum, I was a huge fan of Elon Musk back in the "doing hardtech startups that could benefit humanity" phase of his career. Today, of course, Musk is in a later stage of his career, the "literally why would you make these decisions, you could've literally just not made these exact fucking decisions and things would be much better for everyone including yourself" phase.
(See: Pretty much anything Musk has been in the news for doing in the past 3ish years, plus the badness of some fraction of the many many allegations being true, plus things that were covered earlier but we shrugged off as the cost of progress (not always unreasonably, see "stark ordering" discussion above). Although this is of course partly media overcorrection compared to their oft-fawning past coverage of Musk, he has in real life done a lot of horrible things that he had zero need to do.)
In addition, as you may notice from my belabored, repetitive, and uncommonly-prioritizing writing on this topic, people think imprecisely about Musk's impact on the world. This has led me to heatedly discuss this topic with many friends (even defending(!) some of Musk's impacts). It also led me to the next event in the timeline...
I posted this comment discussing Musk's impact on the world. I still stand by the gist of the comment (that people focus on the wrong things about Musk), but in it I offhandedly mentioned Musk not activating Starlink for a Ukraine military operation. (This was after I skimmed the relevant-looking parts of the Wikipedia article on this topic.)
Feels weird talking about Musk, since his biggest impacts are fuzzier ones on x-risk (cofounding OpenAI, and the Ukraine Starlink non-activation event): AI risk and global geopolitical/nuclear risk. So far, what he's done in those areas is questionable at best and unusually terrible at worst.
Taking near-term extinction risk seriously, even getting to Mars wouldn't necessarily outweigh nudging the AGI field in a more dangerous direction (e.g. if OpenAI has contributed more to capabilities than to alignment, or if X.ai does anything big).
IMHO these are the 3 things (X.ai, OpenAI, and Ukraine) that matter most about Musk, and so far he seems net negative. The other massive things are rounding errors in the face of that, yet get more attention. (The extreme case: Twitter/X is a rounding error on those other rounding errors, and ofc that gets discussed 1000x more than everything else.)
My comment got highlighted by Scott Alexander as the "serious EA perspective"! Score!
Likely due to the new attention from being highlighted, another commenter read the comment and asked for clarification about how I was doing my ethical weighting for the Starlink non-activation event. This led to me researching some of the empirical facts I was discussing. The back-and-forth resulted in me updating a few times and eventually writing this very post. (I highly recommend reading that entire thread if you have a complaint or counterpoint about anything in this post.)
Things I Learned
About Starlink
Starlink "activation" (via the "pizza box" transmitters that can interface with the satellites) was initially gifted to Ukraine. Then, the following events happened, in an order that I still don't quite understand:
The providing-Starlink-in-Ukraine initiative got funded by a US Department of Defense contract.
Musk refused to activate Starlink in Russian-occupied-since-2014 Crimea, even though the Ukrainian military was attempting an attack there. This was the event everyone talked about during the week I was learning about this, which I called the "Starlink non-activation event". From Wikipedia:
In September 2022, Ukrainian submarine drones strapped with explosives were attempting a sneak attack on the Russian fleet in Sevastopol, using Starlink to guide them to target.[35][13][68] Musk's biographer Isaacson had claimed Musk told his engineers to turn off Starlink coverage within 100 kilometers of the Crimean coast,[35] though Isaacson has since retracted and corrected the claim.[69] According to Isaacson's clarification, Ukraine thought the coverage was activated up to Crimea, but it was not.[69][70] Ukraine requested that Musk enable Starlink up to Crimea.[69][13] Musk declined the request but did not disable any existing coverage.[69] Some drones lost connectivity and washed ashore without exploding; others returned to Ukraine undamaged.[71] Ukrainian presidential adviser Podolyak responded that civilians and children were being killed as a result,[28] adding that this was "the price of a cocktail of ignorance and big ego".[28]
Musk/SpaceX claim Starlink is not a weapon of warfare, although I'm not entirely sure how strictly they enforced this even for defensive uses.
This may all become moot anyway, as SpaceX has unveiled Starshield, which may already be (or may soon be) used by Ukraine for military operations.
Most, probably all, of the above events happened well before the recent Elon Musk biography by Walter Isaacson was released! The public didn't learn of the non-activation event (and thus couldn't comment on it) at the time it was probably most relevant.
The real-life order of the above events is one of the main factors in what impact Musk actually had on nuclear-war/geopolitical risk.
About researching and writing
If a Wikipedia page on such a geopolitical topic has lots of citations and no top-of-page warnings(!), I should read more of it, and read through the relevant citations, all in one long go before writing about it, so I can update all-at-once on the available information.
Thinking precisely about impacts and ethics is still good. But I also need to think more precisely about the empirical facts underpinning such impacts in the real world. "Mere details" are not always "mere".
If I'm doing anything resembling a "calculation of what to briefly mention vs what to deeply research, as weighted by mental-energy costs vs benefits to discourse vs wanting to express my opinions"... well, I should at least double-check and/or hedge my statements, unless I plan to do more research on the topic.
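As a toy illustration of that last heuristic, here's a sketch in Python (the function, weights, and thresholds are all invented for this example; nothing rigorous is being claimed):

```python
# A deliberately rough sketch of the "briefly mention vs. deeply research"
# tradeoff described above. Every input and threshold is a made-up placeholder.

def choose_action(energy_cost: float,
                  benefit_to_discourse: float,
                  desire_to_opine: float) -> str:
    """Return a rough action for a claim I'm tempted to make in passing."""
    if benefit_to_discourse > energy_cost:
        return "research deeply, then post"
    elif desire_to_opine > energy_cost:
        # Going to say it anyway, so at least hedge and double-check.
        return "mention briefly, but hedge and double-check first"
    else:
        return "leave it out"

# E.g., the original Starlink aside: research felt expensive, the urge to
# opine was strong -- which is exactly the "hedge it" case.
print(choose_action(energy_cost=0.8, benefit_to_discourse=0.5, desire_to_opine=0.9))
# -> "mention briefly, but hedge and double-check first"
```

(Obviously the real lesson isn't the arithmetic; it's that the middle branch exists at all.)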
So... X.ai. Let's get back to discussing that, yeah? Those GPUs sure sound capabilities-accelerating, and their plan sure seems pretty bad.
Actually, dare I say, it seems... offhand? Impulsive? Poorly thought out?