Ludwig
Ludwig has not written any posts yet.

Interesting, yes, I am interested in coordination problems. Let me follow this framework to make a better case. There are three considerations I would like to point out.
I understand and appreciate your discussion. I wonder whether it may be more morally imperative to work on AI safety for the hugely impactful problems AI is contributing to right now, if we assume that in finding solutions to these current and near-term AI problems we would also be lowering AGI x-risk (albeit indirectly).
Given that the likelihood of narrow AI risk is 1 and the likelihood of AGI in the next 10 years is (as in your example) <0.1, it seems obvious we should focus on addressing the former, as not only will it reduce suffering that we know with certainty is already...
Why should we throw immense resources at AGI x-risk when the world faces enormous issues with narrow AI right now (e.g. destabilised democracy, a mental health crisis, worsening inequality)?
Is it simply a matter of how imminent you think AGI is? Surely the opportunity cost is enormous, given the money and brainpower we are spending on AGI (something many don't even think is possible) versus something that is happening right now.
Interesting, I see what you mean regarding probability, and it makes sense. I guess what is perhaps missing is that when it comes to questions of people's lives we may have a stronger imperative to be more risk-averse.
I completely agree with you about effect size. I guess what I would say is that, given my point 1 from earlier about the variety of x-risks coordination would contribute to solving, the effect size will always be greater. If we want to maximise utility, it's the best chance we have. The added bonuses are that it is comparatively tractable and immediate, avoiding the recent criticisms of longtermism while simultaneously being a longtermist solution.
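To make the probability-versus-effect-size trade-off explicit, here is a rough expected-value sketch; the numbers are only illustrative placeholders taken from the earlier exchange, not estimates anyone in the thread has defended:

$$\mathbb{E}[\text{impact}] = p(\text{risk materialises}) \times s(\text{harm averted if the work succeeds})$$

$$\mathbb{E}_{\text{narrow}} = 1 \times s_{\text{narrow}}, \qquad \mathbb{E}_{\text{AGI}} < 0.1 \times s_{\text{AGI}}$$

On these numbers, narrow-AI work wins whenever $s_{\text{AGI}} < 10\,s_{\text{narrow}}$; the effect-size objection is that the harm averted by preventing an existential catastrophe could be many orders of magnitude larger than that, which is why the lower probability alone does not settle the question.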
Regardless, it does seem that coordination problems are underdiscussed in the community; I will try to make a decent main post once my academic commitments clear up a bit.