Hi folks,
My supervisor and I co-authored a philosophy paper on the argument that AI represents an existential risk. That paper has just been published in Ratio. We figured LessWrong would be able to catch things in it which we might have missed and, either way, hope it might provoke a conversation.
We reconstructed what we take to be the argument for why AI poses an xrisk as follows:
- The "Singularity" Claim: Artificial Superintelligence is possible and would be out of human control.
- The Orthogonality Thesis: More or less any level of intelligence is compatible with more or less any final goal (as per Bostrom's 2014 definition).
From the conjunction of these two premises, we can conclude that ASI is possible, that it might have a goal, instrumental or final, which is at odds with human existence, and, given that the ASI would be out of our control, that the ASI is an xrisk.
We then suggested that each premise seems to assume a different interpretation of "intelligence", namely:
- The "Singularity" claim assumes general intelligence
- The Orthogonality Thesis assumes instrumental intelligence
If this is the case, then the premises cannot be joined together in the original argument; that is, the argument is invalid.
We note that this does not mean that AI or ASI is not an xrisk, only that the current argument to that end, as we have reconstructed it, is invalid.
Eagerly, earnestly, and gratefully looking forward to any responses.
We tried to find the strongest argument in the literature. This is how we came up with our version:
"
Premise 1: Superintelligent AI is a realistic prospect, and it would be out of human control. (Singularity claim)
Premise 2: Any level of intelligence can go with any goals. (Orthogonality thesis)
Conclusion: Superintelligent AI poses an existential risk for humanity
"
====
A more formal version with the same propositions might be this:
1. IF there is a realistic prospect that there will be a superintelligent AI system that is a) out of human control and b) can have any goals, THEN there is existential risk for humanity from AI
2. There is a realistic prospect that there will be a superintelligent AI system that is a) out of human control and b) can have any goals
->
3. There is existential risk for humanity from AI
====
And now our concern is whether a superintelligence can be both a) and b), given that a) must be understood in a way strong enough to generate existential risk, including "widening the frame", while b) must be understood as strong enough to exclude reflection on goals. Perhaps both can hold only if "intelligent" is understood in two different ways? If so, Premise 2 is doubtful.
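
To make the worry concrete, here is a minimal propositional sketch in Lean 4 (an illustration only, not the paper's formalism; all predicate names are placeholders). On a univocal reading of "superintelligent", the argument is a straightforward conjunction introduction plus modus ponens; on the equivocal reading, the conclusion only follows if an extra premise linking the two readings of intelligence is added.

```lean
-- Univocal reading: one predicate `SI` for "superintelligent" in both premises.
-- The argument is then valid.
example (SI OutOfControl AnyGoals XRisk : Prop)
    (singularity : SI → OutOfControl)        -- Premise 1 (singularity claim)
    (orthogonality : SI → AnyGoals)          -- Premise 2 (orthogonality thesis)
    (bridge : OutOfControl ∧ AnyGoals → XRisk) :
    SI → XRisk :=
  fun h => bridge ⟨singularity h, orthogonality h⟩

-- Equivocal reading: "intelligence" is general in Premise 1 and instrumental
-- in Premise 2. The conclusion now needs an extra linking premise `link`;
-- delete it and the proof no longer goes through, which is the sense in which
-- the original argument would be invalid.
example (GeneralSI InstrumentalSI OutOfControl AnyGoals XRisk : Prop)
    (singularity : GeneralSI → OutOfControl)
    (orthogonality : InstrumentalSI → AnyGoals)
    (bridge : OutOfControl ∧ AnyGoals → XRisk)
    (link : GeneralSI → InstrumentalSI) :
    GeneralSI → XRisk :=
  fun h => bridge ⟨singularity h, orthogonality (link h)⟩
```

Whether something like the `link` premise is available without collapsing the two readings of "intelligence", and without undermining the orthogonality thesis, is exactly what seems to be at issue.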