What leads you to be confident that these are the bottlenecks?
One important piece of data is the distribution of citations within fields. There have been many studies of this. What you find, generally, is that a field of study has a finite amount of attention available: if it has N researchers, they collectively perform cN paper-readings per year. The distribution of these paper-readings follows a power law (a Zipf distribution), so the number of researchers whose papers get widely read grows much more slowly than N. No model based on the expected distribution of the merits of the papers or the scientists makes sense, particularly given how the Zipf distribution changes with the size of the field. The models that make sense say that the odds of somebody reading a paper by person X are proportional to the odds that someone else cited X. That is, if you break your model of the citation distribution into a component modeling researchers randomly trawling the literature for citations, and a component modeling the quality of the papers, you find the random component explains nearly 100% of the data.
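The "odds of reading X are proportional to the odds someone else cited X" model can be sketched as a preferential-attachment simulation. This is a toy illustration, not the methodology of any of the studies mentioned; all names and parameters are invented:

```python
import random

def simulate_citations(n_papers=10_000, seed=0):
    """Toy preferential-attachment model: each new paper cites one
    earlier paper, chosen with probability proportional to that
    paper's citations so far (plus one, so uncited papers can still
    be found by randomly trawling the literature)."""
    rng = random.Random(seed)
    citations = [0]   # citation count per paper; paper 0 exists first
    tickets = [0]     # paper i appears here (citations[i] + 1) times
    for new in range(1, n_papers):
        target = rng.choice(tickets)  # random trawl, weighted by visibility
        citations[target] += 1
        tickets.append(target)        # a cited paper becomes easier to find
        citations.append(0)
        tickets.append(new)
    return citations

counts = sorted(simulate_citations(), reverse=True)
# A handful of papers accumulate most of the citations -- a heavy-tailed,
# Zipf-like distribution -- even though the model has no quality term at all.
```

Plotting `counts` against rank on log-log axes would show the roughly straight line characteristic of a power law; the point is that this shape emerges with no notion of paper merit anywhere in the model.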
Interesting. Is your research up online?
No, but check your email.
You mean, we would have a lot more effective research, quickly? Or something more specific?
If we achieved a linear relationship between input and output, we would have maybe 6 orders of magnitude more important scientific and technological advances per year. If we actually achieved "synergy", that oft-theorized state where the accumulation of knowledge grows at a rate proportional to accumulated knowledge, we would have a fast take-off scenario, just without AI. dk/dt = k, so dk/k = dt, ln(k) = t + C, and k = e^C e^t = k_0 e^t.
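The "synergy" claim is just the statement that knowledge k satisfies dk/dt = k, whose solution is exponential growth. A quick numerical sanity check (a sketch; the step size and horizon are arbitrary):

```python
import math

def euler_solve(k0=1.0, t_end=5.0, dt=1e-4):
    """Euler-integrate dk/dt = k from k(0) = k0 out to t = t_end."""
    k = k0
    steps = int(round(t_end / dt))
    for _ in range(steps):
        k += k * dt  # dk = k dt
    return k

approx = euler_solve()
exact = math.exp(5.0)  # closed-form solution k(t) = k0 * e^t, at t = 5
# approx agrees with exact to within roughly 0.03% at this step size
```

Shrinking `dt` drives the Euler estimate toward the closed-form exponential, which is the fast-take-off curve being described.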
How much should the fact that we do not have a fast take-off of organizations make us more pessimistic about one with AIs being likely?
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.
This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Chapter 3 (p52-61)
Summary
Notes
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.