We talk about a wide variety of topics on LW, but we don't spend much time trying to identify the very highest-utility ones and promote further discussion of them. This thread is a stab at that. Since it's just comments, you can feel more comfortable bringing up ideas that might be wrong or unoriginal (but that nevertheless have relatively high expected value, since existential risk is such an important topic).
A question Katja Grace posed at a CFAR minicamp (wording mine):
Are there things we can do that aren't targeted to specific x-risks but mitigate a great many x-risks at once?
In discussions of AI risk, the possibility of a dangerous arms race between the US and China sometimes comes up. A similar arms race could plausibly happen with other dangerous technologies like nanotech and biotech. Pushing for more democratic governments in states like Russia and China might also decrease the chances of nuclear war, among other benefits.
This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk perspective.