Hi folks,
My supervisor and I co-authored a philosophy paper on the argument that AI represents an existential risk. That paper has just been published in Ratio. We figured LessWrong readers would be able to catch things in it that we might have missed and, either way, hope it might provoke a conversation.
We reconstructed what we take to be the argument for how AI becomes an xrisk as follows:
- The "Singularity" Claim: Artificial Superintelligence is possible and would be out of human control.
- The Orthogonality Thesis: More or less any level of intelligence is compatible with more or less any final goal (as per Bostrom's 2014 definition).
From the conjunction of these two premises, we conclude that ASI is possible, that it might have a goal, instrumental or final, which is at odds with human existence, and, given that the ASI would be out of our control, that the ASI is an xrisk.
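To make the structure explicit, here is a rough schematization of the reconstructed argument (the predicate letters are just shorthand for this post, not notation from the paper: $\mathrm{SI}$ for superintelligent, $\mathrm{Ctrl}$ for under human control, $\mathrm{Int}$ for intelligent, $\mathrm{Goal}(x,g)$ for "$x$ has $g$ as a goal"):

$$\text{P1 (Singularity): } \Diamond\,\exists x\,[\mathrm{SI}(x) \wedge \neg\mathrm{Ctrl}(x)]$$

$$\text{P2 (Orthogonality): } \forall x\,\forall g\,[\mathrm{Int}(x) \rightarrow \Diamond\,\mathrm{Goal}(x,g)]$$

$$\text{C: } \Diamond\,\exists x\,[\mathrm{SI}(x) \wedge \neg\mathrm{Ctrl}(x) \wedge \mathrm{Goal}(x,g^{*})]\text{, where } g^{*} \text{ is a goal at odds with human existence.}$$

Since anything superintelligent is a fortiori intelligent, P2 is meant to apply to the agent whose possibility P1 asserts.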
We then suggested that each premise seems to assume a different interpretation of "intelligence", namely:
- The "Singularity" claim assumes general intelligence
- The Orthogonality Thesis assumes instrumental intelligence
If this is the case, then the premises cannot be joined together in the original argument: it equivocates on "intelligence", which is to say the argument is invalid.
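Schematically, the worry is that the argument has four terms rather than three. Subscripting the two readings (again, our shorthand), P1 concerns general intelligence while P2 concerns instrumental intelligence:

$$\text{P1: } \Diamond\,\exists x\,[\mathrm{SI}_{\mathrm{gen}}(x) \wedge \neg\mathrm{Ctrl}(x)] \qquad \text{P2: } \forall x\,\forall g\,[\mathrm{Int}_{\mathrm{instr}}(x) \rightarrow \Diamond\,\mathrm{Goal}(x,g)]$$

Absent a bridging premise of the form $\forall x\,[\mathrm{SI}_{\mathrm{gen}}(x) \rightarrow \mathrm{Int}_{\mathrm{instr}}(x)]$, P2 cannot be applied to the agent whose possibility P1 asserts, and the conclusion does not follow.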
We note that this does not mean that AI or ASI is not an xrisk, only that the current argument to that end, as we have reconstructed it, is invalid.
Eagerly, earnestly, and gratefully looking forward to any responses.
Cutting away all the word games, this paper appears to claim that if an agent is intelligent in a way that isn't limited to some narrow part of the world, then it can't stably have a narrow goal, because reasoning about its goals will destabilize them. This is incorrect. I think AIXI-tl is a straightforward counterexample.
(AIXI-tl is an AI that is mathematically simple to describe, but which can't be instantiated in this universe because it uses too much computation. Because it is mathematically simple, its properties are easy to reason about. It is unambiguously superintelligent, and does not exhibit the unstable-goal behavior you predict.)
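For concreteness, AIXI's action rule (of which AIXI-tl is the time- and length-bounded variant) can be written, roughly following Hutter's (2005) notation:

$$a_t := \arg\max_{a_t}\sum_{o_t r_t}\,\cdots\,\max_{a_m}\sum_{o_m r_m}\,(r_t + \cdots + r_m)\sum_{q\,:\,U(q,\,a_{1:m})\,=\,o_{1:m}r_{1:m}} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine, $q$ ranges over candidate environment programs, and $\ell(q)$ is the length of $q$. The goal, maximizing summed reward, sits inside the $\arg\max$ itself; it is not represented as a revisable belief or preference the agent reasons about. However general its world-modeling becomes, there is no step at which the goal could destabilize.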
... plus we say that in the paper :)