What are some research directions for "improving coordination"?
In light of a recent post and comment, and several months of thinking, I have come to the position that one of humanity's biggest problems is that we suck at precise coordination at every level.
This is not very specifically defined, but I am trying to gesture at a problem area I think is super important. Some thoughts to convey my intuition here:
* If the extreme risk of the AI development trajectory is as real and obvious as many believe (everyone's life is at risk), humanity's thinking about it should look a lot more sophisticated than it currently does. That it doesn't is itself evidence of a coordination failure.
* For the last few years, Eliezer has basically been throwing his hands up in exasperation at the incompetence of the world, and many others have shifted to public-facing communication, presumably believing that trying to convince AI insiders is hopeless.
Broadly, I think coordination problems fall into two classes:
1. Two people or groups genuinely agree to an honest, rigorous exchange of information, but still can't coordinate effectively.
2. Someone is withholding information or doesn't really want to coordinate in the first place.
I think the first problem is tractable, and solving it well enough would also make progress on the second by clearly exposing parties that avoid productive exchange.
* Specifically, I think there is a lot of progress to be made in augmenting the exchange of information between people. LessWrong, the knowledge commons arguably at the frontier of ensuring humanity's survival, lacks features for this purpose, perhaps because most users here are already conscientious and strongly value truth-seeking, which makes improvement seem less necessary.
Hopefully I'm making this line of thought clear enough. Key points:
* Trustworthy, robust, and future-proof governance is the ultimate problem for humanity; anything else is a band-aid on a bullet hole.
* Highly effective coordination is a core part of that problem.