Previously: round 1, round 2, round 3
From the original thread:
This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high baseline understanding of science and technology; relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If you are not sure whether a question is relevant, ask it, and also ask whether it is relevant.
Ask away!
We certainly could integrate known strategic arguments into a quantitative framework like this, but I'm worried that, for example, putting so many made-up probabilities into a probability tree like this is not actually that helpful.
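To make that worry concrete, here is a toy sketch (all numbers invented for illustration, not taken from any actual SI or FHI analysis) of how sensitive a conjunctive probability tree is to its made-up inputs: shifting each of three estimates by a plausible ±0.2 moves the bottom line by more than a factor of ten.

```python
import itertools

# Toy illustration (all numbers invented): a small "probability tree"
# where a strategic outcome requires three conjunctive events, each
# assigned a made-up point estimate. Perturbing each estimate within a
# plausibility interval shows how much the final answer can move.

estimates = [0.7, 0.4, 0.5]   # hypothetical branch probabilities
uncertainty = 0.2             # each made-up number could easily be off by this much

point_answer = 1.0
for p in estimates:
    point_answer *= p

# Evaluate the product at every corner of the uncertainty box.
corners = []
for signs in itertools.product((-1, 1), repeat=len(estimates)):
    prod = 1.0
    for p, s in zip(estimates, signs):
        prod *= min(max(p + s * uncertainty, 0.0), 1.0)
    corners.append(prod)

print(f"point estimate: {point_answer:.3f}")                               # 0.140
print(f"range under perturbation: {min(corners):.3f}-{max(corners):.3f}")  # 0.030-0.378
```

The point is not that such trees are useless, only that the output inherits the arbitrariness of its inputs.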
I think for now both SI and FHI are still in the qualitative stage that normally precedes quantitative analysis. Big projects like Nick's monograph and SI's AI risk wiki will indeed constitute "rapid progress" in strategic reasoning, but it will be rapid progress toward more quantitative analyses, not rapid progress within a quantitative framework that we have already built.
Of course, some of the work on strategic sub-problems is already at the quantitative/formal stage, so quantitative/formal progress can be made on them immediately if SI/FHI can raise the resources to find and hire the right people to work on them. Two examples: (1) What do reasonable economic models of past jumps in optimization power imply about what would happen once we get self-improving AGI? (2) If we add lots more AI-related performance curve data to Nagy's Performance Curve Database and use his improved tech forecasting methods, what does it all imply about AI and WBE timelines?
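As an illustration of the second example, here is a minimal sketch, under stated assumptions, of the log-linear trend extrapolation that performance-curve forecasting builds on. The data series, growth rate, noise model, and threshold are all hypothetical; Nagy's actual methods are considerably more sophisticated (e.g., comparing functional forms across many technologies). The only point is to show the shape of the computation.

```python
import numpy as np

# Fit an exponential trend (linear in log10 space) to a synthetic
# performance series, then read off when the trend crosses a target.

rng = np.random.default_rng(0)
years = np.arange(1990, 2013)
# Hypothetical metric doubling roughly every two years (0.15 log10-units/year).
log10_perf = 0.15 * (years - 1990) + rng.normal(0, 0.1, len(years))

# Ordinary least squares in log space: log10_perf ~ a * year + b.
a, b = np.polyfit(years, log10_perf, 1)

target = 6.0  # hypothetical threshold, in log10 units above the 1990 level
print(f"fitted growth rate: {a:.3f} log10-units/year")
print(f"extrapolated crossing year: {(target - b) / a:.0f}")
```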
There are many strategic considerations that differ greatly in nature from one another. It seems to me that at best they will require diverse novel methods to analyze quantitatively, and at worst a large fraction may resist attempts at quantitative analysis until the Singularity occurs.
For example, we can see that there is an upper bound on how confident a small FAI team, working in secret and with limited time, can be (assuming it's rational)...