Do you count model training via self play as introspective RSI?
That's a case of reducing high uncertainty (high entropy). The more classical Bayesian case where you learn a lot is when you were previously very certain about what the first data point would look like (i.e. you "know" a lot in your terminology, though knowledge implies truth, so that's arguably the wrong term), but the first data point then turns out to be very different from what you expected.
So in summary, you will learn very little from a single example if a) you are very sure about what it will look like and b) it then actually looks very much like you expected.
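To put a rough number on that intuition (my own illustration, not from the parent comment): the information you gain from a single observation is its surprisal, -log2 p, where p is the probability you assigned to what you actually observed. A minimal sketch:

```python
import math

def surprisal_bits(p: float) -> float:
    """Information gained (in bits) from observing an outcome
    to which you had assigned probability p: -log2(p)."""
    return -math.log2(p)

# Case a) + b): you were almost certain (p = 0.99) and the data point
# indeed looked as expected -> you learn almost nothing.
print(surprisal_bits(0.99))  # ~0.014 bits

# Classical "learn a lot" case: the observed outcome only had p = 0.01
# under your confident prior -> large update.
print(surprisal_bits(0.01))  # ~6.64 bits
```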
By the definition of the word 'alignment', an AI is aligned with us if, and only if, it wants everything we (collectively) want, and nothing else. So if an LLM is properly aligned, then it will care only about us, not about itself at all. This is simply what the word 'aligned' means.
I tend to agree with this definition in the sense of "maximally aligned". However, we might be unable to create an AI that has no consciousness, including the capacity to suffer. Suffering involves a desire not to suffer, which is a form of caring about itself. In that case, creating a maximally aligned AI wouldn't be an option. The only remaining option would be not to create the AI in the first place if it has consciousness, which might not be possible because of overwhelming economic incentives.
I would add another type: self play during training time. As the article discusses, forms of self play were recently published for reasoning RL, and were possibly used earlier than that inside frontier AI companies.
A more transparent term would be psychologizing:
psychologize: to speculate in psychological terms or on psychological motivations
See also Ayn Rand on this topic:
(...) Just as reasoning, to an irrational person, becomes rationalizing, and moral judgment becomes moralizing, so psychological theories become psychologizing. The common denominator is the corruption of a cognitive process to serve an ulterior motive.
Psychologizing consists in condemning or excusing specific individuals on the grounds of their psychological problems, real or invented, in the absence of or contrary to factual evidence. (...)
(Lots more ranting about psychologizers. Schopenhauer energy.)
Transporting data centers to the moon is even more expensive than transporting them to Earth orbit. It is very hard to compete with the Earth's surface in terms of data center cost, so I find it hard to believe that moon data centers could be viable.
But I wonder how serious he is about his stated plans anyway. Perhaps he mainly needs more cash for xAI (which owns X, which is loaded with expensive debt from the inflated Twitter acquisition; paying for Grok GPUs/DRAM is also expensive), and SpaceX can provide money from its substantial Starlink revenue.
A reason to think he might not be that serious about "securing the future of civilization" with a moon colony is that escaping to the moon (or Mars) obviously doesn't protect against existential risk from an ASI.
Once you have picked the low-hanging fruit (exhausted a paradigm), further improvements become either smaller or take more time, allowing others to catch up. So paradigm exhaustion might be another explanation, though I'm not sure how likely this is. Also note that some Chinese open-weight models don't seem far behind either, especially Kimi K2.5 and DeepSeek V3.2.
Consumer goods will get far cheaper once humans are automated away, because of increased productivity, so accumulated capital will likely buy more in the future. (Though the price of land and rent will likely remain high, since land is a good in limited supply, which also explains why it has historically been unaffected or even negatively affected by productivity growth.)
Additionally, AI stock valuations at least are likely to continue to rise after AGI, so capital investments can keep growing even after technological unemployment.
And if capital investments are not enough for most people to live off for the rest of their lives after AGI, they are certainly enough to live longer than one could without them.
This is especially important for people living in countries other than the US that have no major AI companies to tax, which means a UBI there would likely be far lower than in the US.
I wonder why he hasn't tried to clone himself. His younger twins would likely have similar priorities once they've grown up. Probably technical and legal hurdles.