I've been researching coordination explosions for three years now. The rabbit hole goes deep, and there's much more empirical research on near-term applications than there appears to be at first glance, and surprisingly little of it has anything to do with LLMs or soft takeoffs.
You can do research by yourself, in a group, or even go in bizarre and dangerous and terrible directions like Janus, where you dump your entire existence into various LLMs and see what happens. There's so much low-hanging fruit that it doesn't even matter where you go or what you do. This domain is basically the wild west of AI safety.
I'm not really comfortable talking about the details publicly on the internet, but what I can say is that there's so much uncharted territory with coordination explosions that almost anyone who goes in this general direction is bound to collide with game-changing discoveries.
Are you comfortable talking about the details privately on the internet? I'd appreciate a DM. You've piqued my curiosity with the whole "juicy secrets" aura. Also, I'm intending to do that "dump my entire existence into an LLM" thing at some point...
I think this is worth exploring to see what the risks are here.
That said, I also take a bit of an outside view that coordination problems are unusually hard, and that humans have been trying hard to solve them for a long time. Though perhaps naive on the inside view, the outside view says coordination problems are probably the last thing to be solved, if they are solved at all. In fact, if we die from AI, it's arguably because even with AI assistance we still couldn't figure out how to solve the coordination problems we cared about.
This is somewhat of a tangent, but if better communication is one effect of more powerful AI, that suggests another way to measure AI capability gain: changes in the volume of (textual) information exchanged between people, the number of messages exchanged, or the number of contacts maintained.
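To make the suggestion concrete, here is a minimal sketch of how those three quantities could be computed from a message log. The schema, field names, and quarterly granularity are all assumptions for illustration, not a proposed standard.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    recipient: str
    text: str
    period: str  # e.g. "2024-Q1"; the granularity here is an arbitrary choice

def communication_metrics(messages):
    """Return (text volume, message count, distinct contact pairs) per period."""
    volume = defaultdict(int)      # characters of text exchanged per period
    count = defaultdict(int)       # messages exchanged per period
    contacts = defaultdict(set)    # distinct sender-recipient pairs per period
    for m in messages:
        volume[m.period] += len(m.text)
        count[m.period] += 1
        contacts[m.period].add(frozenset((m.sender, m.recipient)))
    return {p: (volume[p], count[p], len(contacts[p])) for p in volume}
```

Tracking how these numbers shift over time would then serve as a rough proxy for communication-mediated capability gain.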
Related, I have an old post named "Intelligence Explosion vs. Co-operative Explosion", though it's more about the argument that AGIs might overpower humanity with a superhuman ability to cooperate even if they can't become superhumanly intelligent.
or hacking
Hacking can probably be done to a superhuman level using self-play, since code is ultimately something like chess: it can all be simulated.
I was considering coordination improvement from the other angle: making a flexible network that any user can shape to their will.
Imagine the Semantic Web, but where anyone can add any triples and documents at will, and anyone (or anything, if it's done by an algorithm) can sign any triple or document, vouching for its validity with their own reputation.
It's up to the reader to decide which signatures they recognize as credible, and in which contexts.
The server(s) just accept new data and allow running arbitrary queries on the data they have, with some protection from spam and DDoS, of course. So clients can interpret and filter the data however they like, without needing changes to the server architecture.
This network has unlimited flexibility, at the cost of clients having to query more data and process it in more complex ways. So it's possible to reproduce any mode of communication on it (forum, wiki, blog, chat, etc.) with just a few triples like "tr:inReplyTo".
Or to make something that was not possible before. Imagine a chatroom where a million people are talking at once, but with every client seeing only the posts that are important enough for them: because a post has enough upvotes (from people the client trusts), is from someone the client respects or knows, or is even a compound comment distilled from many similar comments.
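Here is a rough sketch of the core data model being described: anyone can add triples, anyone can sign them, and each client decides whose signatures count. All class and method names are invented for illustration, and the "signature" is just an attached identity; a real system would use cryptographic signatures tied to a reputation or key infrastructure.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

@dataclass
class Store:
    # Maps each triple to the set of identities that have signed it.
    triples: dict = field(default_factory=dict)

    def add(self, triple: Triple, signer: str = None):
        signers = self.triples.setdefault(triple, set())
        if signer:
            signers.add(signer)

    def sign(self, triple: Triple, signer: str):
        # The server stores everything; it never judges credibility itself.
        self.triples.setdefault(triple, set()).add(signer)

    def query(self, predicate: str, trusted: set):
        # Trust filtering happens purely on the client side.
        return [t for t, signers in self.triples.items()
                if t.predicate == predicate and signers & trusted]

# Usage: reproduce a forum-style "reply" relation with a single predicate.
store = Store()
reply = Triple("post:42", "tr:inReplyTo", "post:7")
store.add(reply, signer="alice")
store.sign(reply, signer="bob")
print(store.query("tr:inReplyTo", trusted={"bob"}))  # visible to a client trusting bob
```

The million-person chatroom falls out of the same pattern: the store holds every post, and each client's query simply intersects the signatures (upvotes, endorsements, distillations) with its own trust set.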
Epistemic status: musing that I wanted to throw out there.
A traditional AI risk worry has been the notion of an intelligence explosion: An AI system will rapidly grow in intelligence and become able to make huge changes using small[1] subtle[2] tricks such as bioengineering or hacking. Since small actions are not that tightly regulated, these huge changes would be made in a relatively unregulated way, probably destroying a lot of things, maybe even the entire world or human civilization.
Modern AI systems such as LLMs seem to be making rapid progress in turning sensory data into useful information, aggregating information from messy sources, processing information in commonsense ways, and delivering information to people. These abilities do not seem likely to generalize to bioengineering or hacking (which involve generating novel capabilities), but they do seem plausibly useful for some things.
Two scenarios of interest:
Coordination implosion: Some people suggest that because modern AI systems are extremely error-prone, they will not be useful except for stuff like spam, which degrades our coordination abilities. I'm not sure this scenario is realistic, because there seem to be a lot of people working on making AI work for useful stuff.
Coordination explosion: Because AI can automatically do basic information processing, it seems like we might be able to coordinate better. We are already seeing this with chatbots that work as assistants, sometimes giving useful advice based on their mountains of integrated knowledge. But we could imagine going further, e.g. by automatically registering people's experiences and actions, then aggregating this information and routing it to relevant places.
(For instance, maybe a software company installs AI-based surveillance, and this surveillance notices when developers encounter bugs, and takes note of how they solve the bugs so that it can advise future developers who encounter similar bugs about what to do.)
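As a toy sketch of that parenthetical example, imagine a registry that records how past bugs were resolved and surfaces the most similar past case when a new bug appears. Everything here is hypothetical; the crude word-overlap matching stands in for whatever an LLM-based system would actually do.

```python
from dataclasses import dataclass

@dataclass
class Resolution:
    error_text: str   # the error message a developer hit
    fix_note: str     # how they resolved it

class ExperienceRegistry:
    def __init__(self):
        self.resolutions = []

    def record(self, error_text: str, fix_note: str):
        self.resolutions.append(Resolution(error_text, fix_note))

    def advise(self, new_error: str):
        # Pick the past resolution whose error text shares the most words
        # with the new error; return its fix note, or None if nothing matches.
        new_words = set(new_error.lower().split())
        def overlap(r):
            return len(new_words & set(r.error_text.lower().split()))
        best = max(self.resolutions, key=overlap, default=None)
        return best.fix_note if best and overlap(best) > 0 else None

registry = ExperienceRegistry()
registry.record("NullPointerException in PaymentService.charge",
                "charge() was called before the client was initialized; reorder setup")
print(registry.advise("NullPointerException thrown by PaymentService"))
```

The point is not the matching mechanism but the loop: experiences get registered as a side effect of normal work, aggregated, and routed to the next person who needs them.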
This might revolutionize the way we act. Rather than having to create, spread, and collect information, maybe we would end up always having relevant information at hand, ready for our decisions. With a bit of rationing, we might even be able to keep spam down to a workable level.
I'm not particularly sure this is what things are going to look like. However, I think the possibility is useful to keep in mind: there may be an intermediate phase between now and "full AGI" where we have a sort of transformative artificial intelligence, but not in the sense of leading to an intelligence explosion. There may still be an intelligence explosion afterwards. Or not, if you don't believe in intelligence explosions.
I foresee privacy being one counteracting force. These sorts of systems seem to work better the more they invade your privacy, so people will resist that.
Small = Involving relatively minor changes in terms of e.g. matter manually moved.
Subtle = Dependent on getting many "bits" right at a distance.