Many people complain that the Singularity Institute's "Big Scary Idea" (that AGI leads to catastrophe by default) has not been argued for with the clarity of, say, Chalmers' argument for the singularity. The remedy would be to make the argument's premise-and-inference structure explicit, and then argue about the strength of those premises and inferences.
Here is one way to construe one version of the argument for the Singularity Institute's "Big Scary Idea" (a minimal formal sketch of the inference structure follows the list):
- At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving from a non-dangerous level to a level of superintelligence. (Fast takeoff)
- This AI will maximize a goal function.
- Given fast takeoff and maximizing a goal function, the superintelligent AI will have a decisive advantage unless adequate controls are used.
- Adequate controls will not be used. (E.g., the developers won't box the AI, or boxing won't work.)
- Therefore, the superintelligent AI will have a decisive advantage.
- Unless that AI is designed with goals that stably align with ours, if the superintelligent AI has a decisive advantage, civilization will be ruined. (Friendliness is necessary)
- Unless the first team that develops the superintelligent AI makes adequate preparations, the superintelligent AI will not have goals that stably align with ours.
- Therefore, unless the first team that develops the superintelligent AI makes adequate preparations, civilization will be ruined shortly after fast takeoff.
- The first team that develops the superintelligent AI will fail to make adequate preparations.
- Therefore, civilization will be ruined shortly after fast takeoff.
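To make the inference structure easier to check, here is a minimal propositional sketch of it in Lean 4. The proposition and hypothesis names are placeholders I've chosen for this sketch; it only checks that the conclusion follows from the premises, not that any premise is true.

```lean
-- A minimal propositional sketch of the argument above.
-- All names are placeholders; nothing here is doing more than
-- checking that the conclusion follows from the stated premises.
theorem big_scary_idea
    (FastTakeoff GoalMaximizer Controls DecisiveAdvantage
     Aligned Preparations Ruin : Prop)
    -- Premise: there will be a fast takeoff.
    (fast_takeoff : FastTakeoff)
    -- Premise: the AI will maximize a goal function.
    (goal_maximizer : GoalMaximizer)
    -- Premise: fast takeoff + goal maximization + no adequate controls
    -- gives the AI a decisive advantage.
    (advantage_unless_controls :
      FastTakeoff → GoalMaximizer → ¬Controls → DecisiveAdvantage)
    -- Premise: adequate controls will not be used.
    (no_controls : ¬Controls)
    -- Premise: without stably aligned goals, a decisive advantage ruins civilization.
    (friendliness_necessary : ¬Aligned → DecisiveAdvantage → Ruin)
    -- Premise: without adequate preparations, goals will not be stably aligned.
    (alignment_needs_preparation : ¬Preparations → ¬Aligned)
    -- Premise: the first team will fail to make adequate preparations.
    (no_preparation : ¬Preparations) :
    Ruin :=
  friendliness_necessary
    (alignment_needs_preparation no_preparation)
    (advantage_unless_controls fast_takeoff goal_maximizer no_controls)
```

Written this way, the inference itself is trivially valid, so the remaining work is arguing over the premises, which is what the questions below are about.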
My questions are:
- Have I made any errors in the argument structure?
- Can anyone suggest an alternative argument structure?
- Which of these premises seem the weakest to you?
I created a new page on the wiki to collect links like the Big Scary Idea one: Criticism of the sequences. If anyone knows of more links to intelligent disagreement with ideas prevailing on Less Wrong, please add them!
Wasn't there a professional physicist who criticized something specific in Eliezer's QM sequence? I can't remember where...
Also, not sure if these count, but: