What methodology will be used to produce SI's strategic recommendations (and FHI's, if you know the answer)? As far as I can tell, we currently don't have a way to make the many known strategic considerations/arguments commensurable (i.e., suitable for integration into a quantitative strategic framework) except by using our intuitions, which seem especially unreliable on matters related to Singularity strategy. The fact that you think the AI risk wiki can be finished in 2 years seems to indicate that you either disagree with this assessment of the current state of affairs, or think we can make very rapid progress in strategic reasoning. Can you explain?
We certainly could integrate known strategic arguments into a quantitative framework like this, but I'm worried that, for example, "putting so many made-up probabilities into a probability tree like this is not actually that helpful."
I think that for now, both SI and FHI are still in the qualitative stage that normally precedes quantitative analysis. Big projects like Nick's monograph and SI's AI risk wiki will indeed constitute "rapid progress" in strategic reasoning, but it will be rapid progress toward more quantitative analyses, not rapid...