I have tried looking at this from the perspective that we have had AGI since 2022 and ChatGPT. Creating ChatGPT didn't require an ecosystem, did it? Just a well-resourced nonprofit/startup with good researchers.
So by my reckoning, we've already had AGI for 2.5 years. We are still in a period of relative parity between humans and AI, in which we're still different enough, and it is still weak enough, that humans have the upper hand, and we're focused on exploring all possible variations on the theme of AI and of the human-AI relationship.
The real question is when and how it will escape human control. That will be the real sign that we have "achieved" ASI. Will that result from an ecosystem and not a single firm?
There seems to be an assumption that ASI will be achieved by continuing to scale up. All these analyses revolve around the economics of ever larger data centers and training runs, whether there's enough data and whether synthetic data is a good enough substitute, and so on.
But surely that is just the dumbest way to achieve ASI. I'm sure some actors will keep pursuing that path. But we're also now in a world of aggressive experimentation with AI on every front. I have been saying that the birthplace of ASI will not be ever larger foundation models, but rather research swarms of existing and very-near-future AIs. Think of something that combines China's Absolute Zero Reasoner with the Darwinism of Google DeepMind's AlphaEvolve. Once they figure out how to implement John-von-Neumann-level intelligence in a box, I think we're done - and I don't think the economics of that requires an ecosystem, though it may be carried out by a Big Tech company that has an ecosystem (such as Google).
This has never happened at OpenAI.
"This prompt (sometimes) makes ChatGPT think about terrorist organisations"
One is not philosophically obliged to regard the nature of reality as ineffable or inescapably uncertain.
Quarks are a good place to explore this point. The human race once had no concept of quarks. Now it does. You say that inevitably, one day, we'll have some other concept. Maybe we will. But why is that inevitable? Why can't quarks just turn out to be part of how reality actually is?
You cite Nagarjuna and talk about emptiness, so that gives me some idea of where you are coming from. This is a philosophy which emphasizes the role of concepts in constituting experience, and the role of the mind in constituting concepts, and typically concludes that reality has no essence, no nature that can be affirmed, because all such affirmations involve concepts that are introduced by the mind, rather than being inherent to anything.
This conclusion, I think, is overreaching. I actually consider direct experience to be the ultimate proof that reality is not just formlessness carved by mind. Consciousness is not just raw being; it is filled with form. My words and concepts may not capture it properly, and I may not even notice everything that is implied by what I see or what I am. But I do see that complexity and multiplicity are there in reality - at the very least, they are there in my own consciousness.
Non-attachment to theories and concepts is a good thing if you're interested in truth and know that you don't know the truth. It also has some pragmatic value if reality changes around you and you need to adapt. But in fundamental matters, one does not have to regard every concept and hypothesis we have as necessarily temporary. In some of them we may have latched onto the actual objective truth.
P.S. Having criticized the philosophy of emptiness, let me add, ironically, that just a few hours ago I investigated a proposal for AI alignment that someone had posted here a few months ago, and found it to be very good - and in its model of the mind, a nondual viewpoint is the highest form. So your philosophy may actually put you in a good position to appreciate the nuances of this potentially important work.
You could say there are two conflicting scenarios here: superintelligent AI taking over the world, and open-source AI taking over daily life. In the works that you mention, superintelligence comes so quickly that AI mostly remains a service offered by a few big companies, and open-source AI is just somewhere in the background. In an extreme opposite scenario, superintelligence might take so long to arrive that the human race gets completely replaced by human-level AI before superintelligent AI ever exists.
It would be healthy to have all kinds of combinations of these scenarios being explored. For example, you focus a bit on open-source AI as a bioterror risk. I don't think a supervirus is going to wipe out the human race or even end civilization, because (as the Covid experience showed) we are capable of extreme measures in order to contain truly deadly disease. But a supervirus could certainly bring the world to a halt again, and if it were known to have been designed with open-source AI, that would surely have a huge impact on AI's trajectory. (I suspect that in such a scenario, AI for civilian purposes would suffer, but deep states worldwide would insist on pressing forward, and there would also be a lobby arguing for AI as a defense against superviruses. Also, it's very plausible that a supervirus might be designed by AI but that there would be no proof of it, in which case there wouldn't even be a backlash.)
Another area where futurology about open-source AI might be valuable is the gradual disempowerment and replacement of humanity. We have societies with a division of roles; humans presently fill those roles, but AI and robots will be capable of filling more and more of them; eventually every role in the economic, cultural, and political structure could be filled by AIs rather than by humans. The story of how that could happen certainly deserves to be explored.
Still another area where open-source AI scenarios deserve to be studied is the highly concrete realm of near-future economics and culture. What does an AI economy look like if o4-level models are just freely available? This really is an urgent question for anyone concerned with concrete matters like who will lead the AI industry and how it will be structured, because there seem to be factions in both China and America who are thinking in this direction. One should want to understand what they envision, and what kind of competitive landscape they are likely to create in the short term.
My own belief is that this would be such an upheaval that it would inevitably end up invalidating many conventional political and economic premises. The current world order of billionaires and venture capitalists, stock markets and human democracies - I just don't see it surviving such a transition, even without superintelligence appearing. There are just too many explosive possibilities, too many new symbioses of AI with the human mind, for the map of the world and the solar system not to be redrawn.
However, in the end I believe in short timelines to superintelligence, and that makes all the above something of a secondary concern, because something is going to emerge that will overshadow humans and human-level AI equally. It's a little monotonous to keep referring back to Iain Banks's Culture universe, but it really is the outstanding depiction of a humanly tolerable world in which superintelligence has emerged. His starfaring society is really run by the "Minds", which are superintelligent AIs characteristically inhabiting giant spaceships or whole artificial worlds, and the societies over which they invisibly preside include both biological intelligences (such as humans) and human-level AIs (e.g. the drones). The Culture is a highly permissive anarchy which mostly regulates itself via culture, i.e. shared values among human-level intelligences, but it has its own deep state, in the form of special agencies and the Minds behind them, who step in when there's a crisis that has escaped the Minds' preemptive strategic foresight.
This is one model of what relations between superintelligence and lesser intelligences might be like. There are others. You could have an outcome in which there are no human-level intelligences at all, just one or more superintelligences. You could have superintelligences that have a far more utilitarian attitude to lower intelligences, creating them for temporary purposes and then retiring them when they are no longer needed. I'm sure there are other possibilities.
The point is that from the perspective of a governing superintelligence, open-source AIs are just another form of lower intelligence that may be useful or destabilizing depending on circumstance, and I would expect a superintelligence to decide how things should be on this front, and then to make it so, just as it would with every other aspect of the world that it cared about. The period in which open-source AI was governed only by corporate decisions, user communities, and human law would be merely transitory.
So if you're focused on superintelligence, the real question is whether open-source AI matters in the development of superintelligence. I think it potentially does - for example, open source is both a world of resources that Big Tech can tap into and a source of destabilizing advances that Big Tech has to keep up with. But in the end, superintelligence - not just reasoning models, but models that reason and solve problems with strongly superhuman effectiveness - looks like something that is going to emerge in a context that is well-resourced and very focused on algorithmic progress. And by definition, it's not something that emerges incrementally and gets passed back and forth and perfected by the work of many independent hands. At best, that would describe a precursor of superintelligence.
Superintelligence is necessarily based on some kind of incredibly powerful algorithm or architecture that gets maximum leverage out of minimum information, and bootstraps its way to overwhelming advantage in all domains at high speed. To me, that doesn't sound like something invented by hobbyists or tinkerers or user communities. It's something that is created by highly focused teams of geniuses, using the most advanced tools, who are also a bit lucky in their initial assumptions and strategies. That is something you're going to find in an AI think tank, or a startup like Ilya Sutskever's, or a rich Big Tech company that has set aside serious resources for the creation of superintelligence.
I recently posted that superintelligence is likely to emerge from the work of an "AI hive mind" or "research swarm" of reasoning models. Those could be open-source models, or they could be proprietary. What matters is that the human administrators of the research swarm (and ultimately, the AIs in the swarm itself) have access to their source code and their own specs and weights, so that they can engage in informed self-modification. From a perspective that cares most about superintelligence, this is the main application of open source that matters.
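To make the shape of that idea slightly more concrete, here is a deliberately crude toy sketch of a swarm loop in Python. Every name in it (the spec fields, the scoring function, the random "proposals") is a hypothetical stand-in for what would really be model-generated, informed self-modification; it does not describe any actual system:

```python
# Toy illustration of a "research swarm" loop: a population of
# self-describing specs, each proposing an edit to itself, with the
# better-scoring variant kept. All names here are hypothetical.

import random

def evaluate(spec):
    # Stand-in for a real benchmark score of a candidate spec.
    return sum(spec.values())

def propose_modification(spec):
    # Stand-in for an informed, model-generated self-edit; here it is
    # just a random perturbation of one field.
    new_spec = dict(spec)
    key = random.choice(list(new_spec))
    new_spec[key] += random.uniform(-0.1, 0.2)
    return new_spec

# A small swarm of identical self-describing specs.
swarm = [{"depth": 1.0, "search": 1.0, "memory": 1.0} for _ in range(8)]

for generation in range(100):
    candidates = [propose_modification(spec) for spec in swarm]
    # Keep whichever of (old, proposed) scores better - crude selection.
    swarm = [new if evaluate(new) > evaluate(old) else old
             for old, new in zip(swarm, candidates)]

print(max(evaluate(spec) for spec in swarm))
```

The only point of the sketch is the loop structure - propose a self-modification, evaluate it, keep it if it helps. Everything interesting (informed proposals, meaningful evaluation) is exactly what the sketch leaves out.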
Somehow this has escaped comment, so I'll have a go. I write from the perspective of whether it's suitable as the value system of a superintelligence. If PRISM became the ethical operating system of a posthuman civilization born on Earth, for as long as that civilization managed to survive in the cosmos - would that be a satisfactory outcome?
My immediate thoughts are: It has a robustness, due to its multi-perspective design, that gives it some plausibility. At the same time, it's not clear to me where the seven basis worldviews come from. Why those seven, and no others? Is there some argument that these seven form a necessary and sufficient basis for ethical behavior by human-like beings and their descendants?
Digging a little deeper into the paper, I find that the justification is actually in part 2. Specifically, on page 12, six brain regions and their functions are singled out as contributing to human decision-making at increasingly abstract levels (for the hierarchy, see page 15). The seven basis worldviews correspond to increasing levels of mastery of this hierarchy.
I have to say I'm impressed. I figured that the choice of worldviews would just be a product of the author's intuition, but they are actually grounded in a theory of the brain. One of the old dreams associated with CEV was that the decision procedure for a human-friendly AI would be extrapolated in a principled way from biological facts about human cognition, rather than just from a philosophical system, hallowed tradition, or set of community principles. June Ku's MetaEthical AI, for example, is an attempt to define an algorithm for doing this. Well, this is a paper written by a human being, but the principles in part 2 are sufficiently specific that one could actually imagine an automated process following them and producing a form of PRISM as its candidate for CEV! I'd like @Steven Byrnes to have a look at this.
We had Golden Gate Claude, now we have White Genocide Grok...
This seems like a Chinese model for superintelligence! (All the authors are Chinese, though a few are working in the West.) Not in the AIXI sense of something which is optimal from the beginning, but rather something that could bootstrap its way to superintelligence. One could compare it to Schmidhuber's Gödel machine concept, but more concrete, and native to the deep learning era.
(If anyone has an argument as to why this isn't a model that can become arbitrarily intelligent, I'm interested.)
If I have understood correctly, you're saying that OpenAI should be forecasting greater revenue than this, if they truly think they will have AIs capable of replacing entire industries. But maybe they're just being cautious in their forecasts?
Suppose I have a 3D printing / nanotechnology company, and I think that a year from now I'll have an unlimited supply of infinity boxes capable of making any material artefact. World manufacturing is worth over US$10 trillion. If I thought I could put it all out of business by charging just 10% of what current manufacturers charge, I could claim expected revenue of $1 trillion.
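(Spelling out the back-of-envelope arithmetic above: 10% x $10 trillion = $1 trillion per year in claimed revenue.)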
Such a prediction would certainly be attention-grabbing, but maybe it would be reckless to make it? Maybe my technology won't be ready. Maybe my products will be blocked from most markets. Maybe someone will reverse-engineer and open-source the infinity boxes, and prices will crash to $0. Maybe I don't want the competition or the government to grasp just how big my plans are. Maybe the investors I want wouldn't believe such a scenario. There are a lot of reasons why a company that thinks it might be able to take over the economy or even the world would nonetheless not put that in its prospectus.
Presumably you wouldn't say this of actual physicists who believe in MWI?