It's not acceptable to him, so he's trying to manipulate people into thinking existential risk is approaching 100% when it clearly isn't. He pretends there aren't obvious reasons AI would keep us alive, treats the Grabby Aliens hypothesis as settled fact (so people think alien intervention is basically impossible), and pretends there aren't sun-sized unknown-unknowns probably in play here.
If it weren't so transparent, I'd appreciate that it could actually trick the world into caring more about AI safety. But if it's so transparent that even I can see through it, it's not going to trick anyone smart enough to matter.
Pascal's wager is Pascal's wager, no matter what box you put it in. You could try to rescue it by arguing directly that we should expect a greater measure of "entities with resources that they are willing to acausally trade for things like humanity continuing to exist" than of entities with the opposite preferences; I haven't seen a rigorous case for that, but it seems possible. Even that isn't sufficient, though: you need the expected measure of entities with that preference to be large enough that bearing the transaction costs and uncertainty of acausally trading at all makes sense. And that seems like a much harder case to make.
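A rough way to formalize that condition (my own sketch; the symbols are assumed, not from the comment): let $m_+$ and $m_-$ be the expected measures of entities willing to trade for and against humanity's continued existence, $G$ the value at stake in the trade, and $C$ the combined transaction costs and uncertainty discount of acausal trading at all. The trade is only worth engaging in when

$$(m_+ - m_-)\,G - C > 0,$$

so it isn't enough to argue $m_+ > m_-$; you need the gap, weighted by the stakes, to actually clear $C$.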
I'm sorry, I read its tone as ruder than it was intended to be.
[Rogan pivots to talking about aliens for a while. I have no interest in this and don't believe the hypothesis is worth privileging; I point you to (and endorse) the bets of up to $150k that many LessWrongers have made against it.]
This reeks of soldier mindset. Instead of just ignoring that part of the transcript, you felt the need to seek validation for your opposing opinion by telling us what to think in an unrelated section. The readers can think for themselves and do not need your help to do so.
This is why I'm expecting an international project for safe AI. The US government isn't going to leave powerful AI in the hands of Altman or Google, and the rest of the world isn't going to sit idly by while the US becomes the sole AGI powerhouse.
An international project to create utopian AI is the only path I can imagine which avoids MAD. If there's a better plan, I haven't heard it.
What are you specifically planning to accomplish?
In a post-ASI world, the assumption that returns on investment capital will be honored by society is basically gone. As in the last round of a very long iterated prisoner's dilemma, there's no longer an incentive to Cooperate. There's still time between now and then to invest, but the generic "more long-term capital = good" mindset seems insufficient without an exit strategy or final use case.
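To make the "last round" analogy concrete, here's a toy sketch of the backward-induction logic (my own illustration with standard payoff values, not anything from the comment): once the final round is known, defection dominates there, and the incentive to cooperate unravels backward through every earlier round.

```python
# Toy backward induction over a finitely repeated prisoner's dilemma.
# Standard one-shot payoffs: T (temptation) > R (reward) > P (punishment) > S (sucker).
T, R, P, S = 5, 3, 1, 0

def one_shot_best_response(opponent_cooperates: bool) -> str:
    """With no future rounds to influence, Defect strictly dominates."""
    if opponent_cooperates:
        return "D" if T > R else "C"   # T=5 > R=3, so Defect
    return "D" if P > S else "C"       # P=1 > S=0, so Defect

def backward_induct(rounds: int) -> list[str]:
    """The final round is effectively one-shot, so Defect is played there.
    That removes any future reward for cooperating in the round before it,
    so the same one-shot logic applies all the way back to round one."""
    return [one_shot_best_response(opponent_cooperates=True)
            for _ in range(rounds)]

print(backward_induct(5))  # ['D', 'D', 'D', 'D', 'D'] — cooperation never pays
```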
Personally, I'm trying to balance the various risks of the choppy years right before ASI, and to maximize charitable outcomes while I still have some agency in this world.