Given one AI, why not more?
Suppose there is a capability threshold beyond which an AI may pose a non-negligible existential risk to humans. What is the argument against the following reasoning: if one AI passes, or seems likely to pass, this threshold, then humans, to lower x-risk, ought to push other AIs past this threshold...
Mar 11, 2023