We certainly could integrate known strategic arguments into a quantitative framework like this, but I worry that, for example, putting so many made-up probabilities into a probability tree like this is not actually that helpful.
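To illustrate the worry (this is a made-up toy, not an actual model either of us has proposed): when a conclusion sits at the leaf of a probability tree, its probability is a product of the branch probabilities, so modest uncertainty in each made-up input compounds into a large spread in the output.

```python
import itertools

# Hypothetical 5-branch probability tree: the outcome of interest requires
# every branch to go a particular way, so its probability is a product.
base = [0.7, 0.5, 0.6, 0.4, 0.8]

def leaf_probability(ps):
    """Probability of the leaf outcome: product of branch probabilities."""
    out = 1.0
    for p in ps:
        out *= p
    return out

print(leaf_probability(base))  # point estimate from the made-up inputs

# Now perturb each made-up input by just +/-0.1 and look at the spread.
outcomes = []
for deltas in itertools.product([-0.1, 0.0, 0.1], repeat=len(base)):
    ps = [min(1.0, max(0.0, p + d)) for p, d in zip(base, deltas)]
    outcomes.append(leaf_probability(ps))

# The extremes differ by roughly a factor of six, even though no single
# input moved by more than 0.1.
print(min(outcomes), max(outcomes))
```

The point of the sketch: the tree's output is dominated by the arbitrariness of its inputs, which is why filling one in with invented numbers adds little beyond the qualitative argument it encodes.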
I think for now both SI and FHI are still in the qualitative stage that normally precedes quantitative analysis. Big projects like Nick's monograph and SI's AI risk wiki will indeed constitute "rapid progress" in strategic reasoning, but it will be rapid progress toward more quantitative analyses, not rapid progress within a quantitative framework that we have already built.
Of course, some of the work on strategic sub-problems is already at the quantitative/formal stage, so quantitative/formal progress can be made on them immediately if SI/FHI can raise the resources to find and hire the right people to work on them. Two examples: (1) What do reasonable economic models of past jumps in optimization power imply about what would happen once we get self-improving AGI? (2) If we add lots more AI-related performance curve data to Nagy's Performance Curve Database and use his improved tech forecasting methods, what does it all imply about AI and WBE timelines?
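As a minimal sketch of what the second question involves (the data points below are invented placeholders, not entries from Nagy's actual database, and his methods are considerably more sophisticated): fit a log-linear trend to a performance series, then extrapolate to the year a target capability level would be reached.

```python
import math

# Illustrative only: made-up (year, performance) points standing in for one
# series of the kind collected in a performance curve database.
data = [(2000, 1.0), (2004, 3.2), (2008, 9.8), (2012, 33.0)]

def fit_loglinear(points):
    """Least-squares fit of log(performance) = a + b * year."""
    xs = [x for x, _ in points]
    ys = [math.log(y) for _, y in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def year_reaching(points, target):
    """Extrapolate the fitted exponential trend to a target performance."""
    a, b = fit_loglinear(points)
    return (math.log(target) - a) / b

# With these invented numbers, performance grows ~10x per 8 years, so the
# 1000x level is extrapolated to land in the mid-2020s.
print(round(year_reaching(data, 1000.0)))
```

The real methodological work, of course, is in choosing functional forms, quantifying extrapolation error across many historical curves, and deciding which AI-related series are informative about AGI or WBE timelines at all; this sketch only shows the skeleton such an analysis would flesh out.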
There are many strategic considerations that greatly differ in nature from one another. It seems to me that at best they will require diverse novel methods to analyze quantitatively, and at worst a large fraction may resist attempts at quantitative analysis until the Singularity occurs.
For example, we can see that there is an upper bound on how confident a small FAI team, working in secret and with limited time, can be (assuming it's rational)...
Previously: round 1, round 2, round 3