Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, they live their separate lives in various environmental niches.
One way to look at it is this: Suppose a dominant AGI emerged which was largely running the planet and expanding into the galaxy.
Would it then be impossible to engineer another AGI which survived modestly in some niche or flew off at near the speed of light in a new direction? No.
For the first AGI to be the only AGI, all other AGI development would have to cease without such "niche AGIs" ever being created.
An AGI could be extremely unobtrusive for tens of thousands of years at a time, and even be engaging in some form of self-improvement or replication.
"Sterilizing" matter of all of the "niche AGI" it contains could be quite an involved process.
That AGI does not need to remain the only one in order to stay solidly in power. Since it has been playing the game for longer, it could reasonably keep tabs on other intelligent entities and interfere with their development only if they became too powerful. Other entities can still do their own thing; there just has to be a predictable ceiling on how much power they can acquire - indeed, th...
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the seventh section in the reading guide: Decisive strategic advantage. This corresponds to Chapter 5.
This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related - not necessarily the part being cited for the specific claim.
Reading: Chapter 5 (p78-91)
Summary
5. Disagreement. Note that though few people believe a single AI project will get to dictate the future, this is often because they disagree with claims in the previous chapter - e.g. that a single AI project could plausibly become more capable than the rest of the world in less than a month.
If you are particularly interested in these topics and want to do further research, here are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group, though, is the discussion in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about Cognitive superpowers (section 8). To prepare, read Chapter 6. The discussion will go live at 6pm Pacific time next Monday 3 November. Sign up to be notified here.