In this post you can make several predictions about how different factors affect the probability that the creation of AGI leads to an extinction-level catastrophe. This might be useful for planning.
Please let me know if you have other ideas for questions that could be valuable to ask.
Predictions based on who develops AGI:
Predictions based on the technology used to develop AGI:
Predictions based on the approach used to create AGI:
Predictions on how money affects the probability of AGI X-risk:
I think more money spent right now, even with the best of intentions, is likely to increase capabilities much faster than it reduces risk. I think OpenAI, and the capability races it has set off, are turning out to be an example of this.
There are hypothetical worlds where spending an extra ten billion (or a trillion) dollars on AI research with good intentions doesn't do this, but I don't think ours is likely to be one of them. Nor do I think that directing who gets the money is likely to prevent it, without pretty major non-monetary controls in addition.