They say Kimi K2 is good at writing fiction (Chinese web novels, originally). I wonder if it is specifically good at plot, or narrative causality? And if Eliezer and his crew had serious backing from billionaires, with the correspondingly enhanced ability to develop big plans and carry them out, I wonder if they really would do something like this on the side, in addition to the increasingly political work of stopping frontier AI?
In physics, it is sometimes asked why there should be just three (large) space dimensions. No one really knows, but there are various mathematical properties unique to three or four dimensions, to which appeal is sometimes made.
I would also consider the recent (last few decades) interest in the emergence of spatial dimensions from entanglement. It may be that your question can be answered by considering these two things together.
not the worst outcome
Are you imagining a basically transhumanist future where people have radical longevity and other such boons, but they happen to be trapped within a particular culture (whether that happens to be Christian homeschooling or Bay Area rationalism)? Or could this also be a world where people live lives with a brevity and hazardousness comparable to historic human experience, and in which, in addition, their culture has an unnatural stability maintained by AI working in the background?
It would be interesting to know the extent to which the distribution of beliefs in society is already the result of persuasion. We could then model the immediate future in similar terms, but with the persuasive "pressures" amplified by human-directed AI.
One way to think about it is that progress in AI capabilities means ever bigger and nastier surprises. You find that your AIs can produce realistic but false prose in abundance, you find that they have an inner monologue capable of deciding whether to lie, you find that there are whole communities of people doing what their AIs tell them to do... And humanity has failed if this escalation produces a surprise big enough to be fatal for human civilization before we reach a transhuman world that is nonetheless safe even for mere humans (e.g. Ilya Sutskever's "plurality of humanity-loving AGIs").
What are the groups?
Meta is not on that list of "frontier AI" companies because it hasn't kept up. As far as I know its most advanced model is Llama 4, and that's not on the same level as GPT-5, Gemini, Grok, or Claude. Not only has it been left behind by the pivot to reasoning models; even in open source, which was supposed to be Meta's special strength, Chinese models from Moonshot (Kimi K2) and DeepSeek (r2, v3) seem to be ahead. Of course Meta is now trying to get back in the game, but for now they have slipped out of contention.
The remaining question I have concerns the true strength of Chinese AI models, relative to each other and to their American rivals. You could turn my previous paragraph into a thesis about the state of the world: it's the era of reasoning models, and in the lead are four closed-weight American models and two open-weight Chinese models. But what about Baidu's Ernie, Alibaba's Qwen, Zhipu's ChatGLM? Should they be placed in the first tier as well?
You could be a longtermist and still regard a singleton as the most likely outcome. It would just mean that a human-aligned singleton is the only real chance for a human-aligned long-term future, and so you'd better make that your priority, however unlikely it may be. It's apparent that a lot of the old-school (pre-LLM) AI-safety people think this way, when they talk about the fate of Earth's future lightcone and so forth.
However, I'm not familiar with the balance of priorities espoused by actual self-identified longtermists. Do they typically treat a singleton as just a possibility rather than an inevitability?
If I understand correctly, your chief proposition is that liberal rationalists who are shocked and appalled by Trump 2.0 should check out the leftists who actually predicted that Trump 2.0 would be shocking and appalling, rather than just being a new flavor of business as usual. And you hope for adversarial collaboration with a "right-of-center rationalist" who will take the other side of the argument.
The way it's set up, you seem to want your counterpart to defend the idea that Trump 2.0 is still more business-as-usual than a disastrous departure from norms. However, there is actually a third point of view, one that I believe is held by many of those who voted for Trump 2.0.
It was often said of those who voted for Trump 1.0 that they wanted a wrecking ball - not out of nihilism, but because "desperate times call for desperate measures". For such people, America was in decline, and the American political class and the elite institutions had become a hermetic world of incompetence and impunity.
For such people - a mix of conservatives and alienated ex-liberals, perhaps - business as usual is the last thing they want. For them, your double crux and forward predictions won't have the intended diagnostic meaning, because they want comprehensive change, and expect churn and struggle and false starts. They may have very mixed feelings towards Trump and his people, but still prefer the populist and/or nationalist agenda to anything else that's on offer.
I don't know if anyone like that will step forward to debate you, but if they do, I'm not sure what the protocol would be.
edit: Maybe the most interesting position would be an e/acc Trump 2.0 supporter - someone from the tech side of Trump's coalition, rather than the populist side. But such people avoid Less Wrong, I think.
I have three paradigms for how something like this might "work" or at least be popular: