If you continuously improve a system's speed, then the time required to accomplish each fixed task falls continuously. But if you continuously improve a system's quality, you may see discontinuous jumps in the time required to accomplish certain tasks. So if we think of these dimensions as kinds of improvement rather than types of superintelligence, there does seem to be a distinction.
This is something we see often. For example, I might improve an approximation algorithm either by speeding it up or by improving its approximation ratio (and the literature contains both kinds of improvement). In the former case, every problem gets 10% faster with each 10% speedup. In the latter case, there are certain problems (such as "find a cut in this graph which is within 15% of the maximum possible size") for which the time required drops discontinuously overnight, from infeasible to feasible.
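To make the "approximation ratio" side of this contrast concrete, here is a minimal sketch (the example graph, trial count, and function name are my own illustrative choices, not from the comment above) of the classic randomized 0.5-approximation for max-cut: assigning each vertex to a random side cuts half the edges in expectation. Speeding this algorithm up improves every instance smoothly; only a better approximation ratio changes which cut-quality guarantees are reachable at all.

```python
import random

def random_cut(edges, n, trials=200, seed=0):
    """Randomized 0.5-approximation for max-cut: put each of the n
    vertices on a random side of the cut; in expectation half of the
    edges cross the cut. Repeating and keeping the best assignment
    only sharpens the constant, not the ratio's order of growth."""
    rng = random.Random(seed)
    best = 0
    for _ in range(trials):
        side = [rng.randint(0, 1) for _ in range(n)]
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best

# A 4-cycle: the maximum cut has size 4 (alternate the sides).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(random_cut(edges, 4))  # best cut found; the optimum here is 4
```

Running more trials or faster trials is a pure speed improvement; getting within, say, 15% of the optimum on every graph would instead require a better algorithm (e.g. a semidefinite-programming relaxation), which is a quality improvement of the kind discussed above.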
You see a similar tradeoff in machine learning, where some changes improve the quality of the solution you can achieve (e.g. reducing the classification error) and others let you reach a similar-quality solution faster.
This seems like a really important distinction for evaluating the plausibility of a fast takeoff. One question I'd love to see more work on is exactly what is going on in ordinary machine learning progress. In particular, to what extent are we really seeing quality improvements, versus speed improvements plus an unwillingness to do fine-tuning for very expensive algorithms? The latter model is consistent with my knowledge of the field, but has very different implications for forecasts.
If we push a bit further, I think we can establish the plausibility of a fast takeoff; to do so, however, we have to delve deeply into the individual components of intelligence.
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.
This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter most related to the material (not necessarily the part being cited for a specific claim).
Reading: Chapter 3 (p52-61)
Summary
Notes
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group, though, is the discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.