(I am currently writing up a post for my personal blog where I list all the requirements that need to hold jointly for SIAI to be the best choice for charitable giving.)
Be careful, it's very common for people to gerrymander such probability estimates by unjustifiably assuming complete independence or complete dependence of certain terms. (This is true even if the "probability estimate" is only implicit in the qualitative structure of the argument.) If people think that's what you're doing then they're likely to disregard your conclusions even if the conclusions could have been supported by a weaker argument.
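For concreteness, here is a toy sketch (with made-up premise probabilities, not anyone's actual estimates) of how far apart the two extreme assumptions can land for the same conjunction:

```python
# Toy illustration of how independence assumptions drive a
# conjunctive probability estimate. The premise probabilities
# below are hypothetical placeholders.
premises = [0.9, 0.8, 0.7, 0.6]  # hypothetical P(premise_i)

# Assuming complete independence: the conjunction is the product,
# so it shrinks quickly as premises are added.
p_independent = 1.0
for p in premises:
    p_independent *= p

# Assuming complete dependence (the premises stand or fall
# together): the conjunction is only as improbable as the
# weakest single premise.
p_dependent = min(premises)

print(round(p_independent, 4))  # 0.3024
print(p_dependent)              # 0.6
```

With just four fairly plausible premises, the two assumptions already give answers that differ by a factor of two; which one is justified depends on the actual dependence structure, and silently picking whichever favors your conclusion is the gerrymandering in question.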
Many people complain that the Singularity Institute's "Big Scary Idea" (AGI leads to catastrophe by default) has not been argued for with the clarity of, say, Chalmers' argument for the singularity. The remedy would be to make the premise-and-inference structure of the argument explicit, and then argue about the strength of those premises and inferences.
Here is one way you could construe one version of the argument for the Singularity Institute's "Big Scary Idea":
My questions are: