Foreword
Inspired by someone who asked about "representative" AGI x-risk arguments, I wondered: how might an AGI takeover and catastrophe (not necessarily extinction) actually play out in detail? It's extremely tough to build a realistic mental picture of every possibility, but one detailed story might give us a feel for part of the probability space, despite the unknowability of how an AGI will actually behave.
So I set out to write such a story.
My attempt is limited by several factors:
- My limited knowledge of ML/DL/AI, and my merely above-average intelligence
- My limited knowledge of the tactics of ruthless beings
- The inherent unpredictability of the subject: as the complexity of a situation or the intelligence of an AGI increases, predictability drops