Yes, this is nothing like e/acc arguments. e/acc don't argue in favour of AI takeover; they refuse to even think about AI takeover. e/acc was "we need more AI for everyone now, or else the EAs will trap us all in stagnant woke dystopia indefinitely". Now it's "American AI must win or China will trap us in totalitarian communist dystopia indefinitely".
"If you get enough meditative insight you'll transcend the concept of a self."
What is the notion of self that you transcend, what does it mean to transcend it, and how does meditation cause this to happen?
Is there some way to use LLMs to efficiently simulate different kinds of AI futures, including extremely posthuman scenarios? I mean "simulation" in a primarily literary sense - via fictional vignettes and what-if essays - though if that can usefully be supplemented with other tools, all the better.
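For concreteness, here is one minimal sketch of what that kind of "literary simulation" could look like, assuming the OpenAI Python SDK; the model name, scenario list, and prompts are placeholder assumptions of mine, not a worked-out method:

```python
# Minimal sketch: using an LLM API to generate short what-if vignettes
# about different AI futures. Model name, prompts, and the scenario list
# below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

scenarios = [
    "a multipolar world of competing superintelligences",
    "a singleton AI that preserves humanity as a protected species",
    "an extremely posthuman world in which uploaded minds predominate",
]

def vignette(scenario: str) -> str:
    """Ask the model for a short fictional vignette set in one scenario."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write short, concrete fictional vignettes "
                        "exploring possible AI futures."},
            {"role": "user",
             "content": f"Write a 300-word vignette set in {scenario}."},
        ],
    )
    return response.choices[0].message.content

for s in scenarios:
    print(f"--- {s} ---")
    print(vignette(s))
```

Whether anything like this counts as "efficient simulation", or just as prompted fiction, is part of what I'm asking.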
But this isn't an all-or-nothing choice. If you hurt your fingers getting them caught in a door, you suffer, but you don't want to die because of it, do you? Any ideas on where to draw the line?
Knowledge is power, and superintelligence means access to scientific and engineering knowledge at a new level. Your analysis seems to overlook this explanatory factor. We expect that superintelligence generically grants a level of technological capability that includes engineering on astronomical scales; the ability to read, copy, and modify human minds, or simply to make humanlike minds with arbitrary dispositions; and a transformative control over material things (including living beings) whose limits are hard to identify. In other words, any superintelligence should have the capacity to transform the world with "godlike" power and precision. This is why the advent of superintelligence, whether good, bad, or weird in its values, has this apocalyptic undercurrent in all scenarios.
In "The Autodidactic Universe", the authors try to import concepts from machine learning into physics. In particular, they want to construct physical models in which "the Universe learns its own physical laws". In my opinion they're not very successful, but one might wish to see whether their physicalized ML concepts can be put to work back in ML, in the context of a program like yours.
This comment has been on my mind a lot the past week - not because I'm not ambitious, but because I've always been ambitious (intellectually at least) and frustrated in my ambitions. I've always had goals that I thought were important and neglected, and I always pursued them directly, from a socially marginal position, rather than trying to make money first (or whatever people do when they put off their real ambitions), but I can't say I ever had a decisive breakthrough, certainly not one that brought recognition. So I only have partial progress on a scattered smorgasbord of unfulfilled agendas, and meanwhile, after OpenAI's "o3 Christmas" and the imminent inauguration of an e/acc administration in the USA, it looks more than ever as if we are out of time. I would be deeply unsurprised if it's all over by the end of the year.
I'm left with choices like (1) concentrate on family in the final months, (2) patch together what I have and use AI to quickly make the best of it, or (3) throw myself into AI safety. In practice they overlap and I'm doing all three, but there are tensions between them, and I feel the frustration of being badly positioned while also thinking I have no time for the meta-task of improving my position.
You might first want to distinguish between national AI projects that are just about boosting the AI economy or managing the use of AI within government, and government-backed research aimed specifically at the AGI frontier. Presumably it's the latter that you're talking about.
There is also the question of what a government would think it was doing, in embarking on such a project. The commercial enterprise of creating AI is already haunted by the idea that it would be bad for business if your creation wiped out the human race. That hasn't stopped anyone, but the fear is there, overcome only by greed.
Now, what about politicians and public servants, generals and spymasters? How would they feel about leading a race to create AI? What would they think they were doing? Creating artificial super-scientists, super-soldiers, super-strategists? Compared to Silicon Valley, these people are more about the power motive than the profit motive. What, apart from the arms race, do they have to lure them along the AI path, comparable to the dream of uber-wealth that drives the tech oligarchs? (In dictatorships, I suppose there is also the dream of absolute personal power to motivate them.)
Apart from the arms race, the vision that seems to animate pro-AI western elites is economic and strategic competition among nations. If China takes the lead in AI, it will have the best products and the best technologies, and it will conquer the world that way. So I guess the thinking of Trump 2.0's AI czar David Sacks (a friend of Thiel and Musk), and the people around him, is going to be some mixture of these themes - the US must lead because AI is the key to economic, technological, and military superiority in the 21st century.
Now I think that even the most self-confident, gung-ho, born-to-rule man-of-destiny who gets involved in the AI race is surely going to have a moment when they think: am I just creating my own replacement here? Can even my intellect, and my charisma, and my billions, and my social capital, really compete with something smarter than me, and a thousand times faster than me, and capable of putting any kind of human face on its activities?
I'm not saying they're going to have a come-to-Yudkowsky moment and realize, holy crap, we'd better shut this down after all. Their Darwinist instincts will tell them that if they don't create AI first, someone else will. But perhaps they will want to be reassured. And this may be one area where techies like Ilya Sutskever, Yann LeCun, and Alec Radford - i.e. the technical leads in these frontier AI research programs - have a role in addition to their official role as chiefs of R&D.
The technical people have their own dreams about what a world of AGI and ASI could look like too. They may have a story about prosperity and human flourishing with AI friends and partners. Or maybe they have a story just for their CEO masters, that even the most powerful AI, if properly trained, will just be 100% an extension of their own existing will. And who knows what kind of transhuman dreams they entertain privately, as well?
These days, there's even the possibility that the AI itself is whispering to the corporate, political, and military leadership, telling them what they want to hear...
I am very much speculating here; I have no personal experience of, or access to, these highest levels of power. But the psychology and ideology of the "decision-makers" - who really just seem to be riding the tiger of technical progress at this point - are surely an important feature of any such AGI Manhattan Project, too.
Regarding Musk and Thiel, they are foremost billionaire capitalists, individuals who built enormous business empires. Even if we assume your thinking about the future is correct, we shouldn't assume that they have reproduced every step of it. You may simply be more advanced in your thinking about the future than they are. Their thought about the future crystallized in the 1980s, when they were young. Since then they have been preoccupied with building their empires.
This raises the question: how do they see the future, and their relationship to it? I think Musk's life purpose is the colonization of Mars, so that humanity's fate isn't tied to what happens on Earth. Everything else is subordinate to that, and even robots and AI are just servants and companions for humanity in its quest for other worlds. As for Thiel, I have less sense of the gestalt of his business activities, but philosophically, the culture war seems very important to him. He may have a European sense of how self-absorbed cultural elites can narrow a nation's horizons, which drives his sponsorship of "heterodox" intellectuals outside the academy.
If I'm right, the core of Musk's futurism is space colonization, and the core of Thiel's futurism is preserving an open society. They don't have the idea of an intelligence singularity whose outcome determines everything afterwards. In this regard, they're closer to e/acc than to singularity thinking, because e/acc believes in a future that always remains open, uncertain, and pluralist, whereas singularity thinking tends towards a single apocalyptic moment in which superintelligence is achieved and irreversibly shapes the world.
There are other reasons I can see why they would involve themselves in the culture war. They don't want a socialism that would interfere with their empires; they think (or may have thought until the last few years) that superintelligence is decades away; they see their culture war opponents as a threat to a free future (whether that is seen in e/acc or singularity terms), or even to the very existence of any kind of technological future society.
But if I were to reduce it to one thing: they don't believe in models of the future according to which you get one thing right and then utopia follows, and they believe such thinking actually leads to totalitarian outcomes (where their definition of totalitarian may be: a techno-political order capable of preventing the building of a personal empire). Musk co-founded OpenAI so that Google wouldn't be the sole AI superpower; he was worried about centralization as such, not about whether they would get the value system right. Thiel gave up on MIRI's version of AI futurology years ago as a salvationist cult; I think he would actually prefer no AI to aligned AI, if the latter means alignment with a particular value system rather than alignment with what the user wants.
I can't bring myself to read it properly. The author has an axe to grind: he wants interplanetary civilization and technological progress for humanity, and it's inconvenient to that vision if progress in one form of technology (AI) has the natural consequence of replacing humanity, or at the very least removing it from the driver's seat. So he simply declares "There is No Reason to Think Superintelligence is Coming Soon", and the one doomer strategy he does approve of - the enhancement of human biological intelligence - happens to be one that once again involves promoting a form of technological progress.
If there is a single significant failure behind getting to where we are now, perhaps it is the dissociation that has occurred between "progress in AI" and "humanity being surpassed and replaced by AI". It should be common sense that the latter is the natural outcome of creating superhuman AI.