Let’s think about slowing down AI
Averting doom by not building the doom machine

If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine's conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.

The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also, in its favor, it fits nicely in the genre 'stuff that it isn't that hard to imagine happening in the real world'. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading 'actively slow down AI progress' have historically been dismissed and ignored (though 'don't actively speed up AI progress' is popular).

The conversation near me over the years has felt a bit like this:

> Some people: AI might kill everyone. We should design a godlike super-AI of perfect goodness to prevent that.
>
> Others: wow that sounds extremely ambitious
>
> Some people: yeah but it's very important and also we are extremely smart so idk it could work
>
> [Work on it for a decade and a half]
>
> Some people: ok that's pretty hard, we give up
>
> Others: oh huh shouldn't we maybe try to stop the building of this dangerous AI?
>
> Some people: hmm, that would involve coordinating numerous people. We may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren't delusional.

This seems like an error to me. (And lately, to a bunch of other people.) I don't have a strong view on whether anything in the space of 'try to slow down some AI research' should be done. But I think a) the naive first-pass guess should be a strong 'probably'