Bostrom flies by an issue that's very important:
Suppose that a scientific genius of the caliber of a Newton or an Einstein arises at least once for every 10 billion people: then on MegaEarth there would be 700,000 such geniuses living contemporaneously, alongside proportionally vast multitudes of slightly lesser talents. New ideas and technologies would be developed at a furious pace...
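For scale: the quote's own numbers fix MegaEarth's size. One genius per 10 billion people and 700,000 contemporaneous geniuses implies a population of 7×10^15, a million times Earth's roughly 7 billion. A minimal sketch of that arithmetic (the only input beyond the quote is the ~7 billion figure for 2014-era Earth):

```python
# Back-of-envelope check of the quoted MegaEarth figure.
earth_population = 7e9                         # people; ~2014 estimate (assumption)
megaearth_population = 1e6 * earth_population  # a million Earths -> 7e15 people
geniuses = megaearth_population / 10e9         # one genius per 10 billion (from the quote)
print(geniuses)                                # -> 700000.0, matching Bostrom's figure
```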
Back up. The population of Europe was around 125 million in 1700, roughly a sixth of what it is today. The number of intellectuals was a tiny fraction of the number it is today. And the number of intellectuals in Athens in the 4th century BC was probably a few hundred. Yet we had Newton and Aristotle. Similarly, the greatest composers of the 18th and 19th centuries were trained in Vienna, one city. Today we may have 1000 or 10,000 times as many composers, with much better musical training than people could have in the days before recorded music, yet we do not have 1000 Mozarts or 1000 Beethovens.
Unless you believe human intelligence has been steadily declining, there is one Einstein per generation, regardless of population. The limiting factor is not the number of geniuses. The number of geniuses, a...
The limiting factor is organizational. Scientific activity can scale; recognition or propagation of it doesn't.
While failure to recognize & propagate new scientific discoveries probably explains some of our apparent deficit of current scientific geniuses, I think a bigger factor is just that earlier scientists ate the low-hanging fruit.
(I have no idea whether a similar effect would kick in for superintelligences and throttle them.)
This seems an important issue to me.
Back up. The population of Europe was around 125 million in 1700, roughly a sixth of what it is today. The number of intellectuals was a tiny fraction of the number it is today. And the number of intellectuals in Athens in the 4th century BC was probably a few hundred. Yet we had Newton and Aristotle.
Those places were selected for having Newton and Aristotle though.
The limiting factor is organizational. Scientific activity can scale; recognition or propagation of it doesn't.
What leads you to be confident that these are the bottlenecks?
I measured science and technology output per scientist using four different lists of significant advances, and found that significant advances per scientist declined by 3 to 4 orders of magnitude from 1800 to 2000.
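To make the orders-of-magnitude arithmetic concrete, here is a purely hypothetical sketch of the kind of calculation involved; the headcounts and advance counts below are invented for illustration and are not the commenter's four lists:

```python
# Illustrative (made-up) numbers: if the number of scientists grows ~10,000x
# while significant advances per year grow only ~10x, the per-scientist rate
# falls ~1,000x, i.e. 3 orders of magnitude.
scientists_1800, scientists_2000 = 1e4, 1e8   # assumed headcounts
advances_1800, advances_2000 = 50, 500        # assumed advances per year
rate_1800 = advances_1800 / scientists_1800   # 5e-3 advances per scientist-year
rate_2000 = advances_2000 / scientists_2000   # 5e-6 advances per scientist-year
print(rate_1800 / rate_2000)                  # -> 1000.0
```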
Interesting. Is your research up online?
On the other hand, merely recognizing and solving the organizational problems of science that we currently have would produce results similar to a fast singularity.
You mean, we would have a lot more effective research, quickly? Or something more specific?
yet we do not have 1000 Mozarts or 1000 Beethovens
What do you mean by this? We have plenty of composers and musicians today, and I'd bet that many modern prodigies can do the same kinds of technical tricks that Mozart could at a young age.
I'm confused about Bostrom's definition of superintelligence for collectives. The following quotes suggest that it is not the same as the usual definition of superintelligence (greatly outperforming a human in virtually all domains), but instead means something like 'greatly outperforming current collective intelligences', which have been improving for a long time:
...To obtain a collective superintelligence from any present-day collective intelligence would require a very great degree of enhancement. The resulting system would need to be capable of vastly outperforming...
Present-day humanity is a collective intelligence that is clearly 'superintelligent' relative to individual humans; yet Bostrom expresses little to no interest in this power disparity, and he clearly doesn't think his book is about the 2014 human race.
So I think his definitions of 'superintelligence' are rough, and Bostrom is primarily interested in the invincible inhuman singleton scenario: the possibility of humans building something other than humanity itself that can vastly outperform the entire human race in arbitrary tasks. He's also mainly interested in sudden, short-term singletons (the prototype being seed AI). Things like AGI and ems mainly interest him because they might produce an invincible singleton of that sort.
Wal-Mart and South Korea have a lot more generality and optimization power than any living human, but they're not likely to become invincibly superior to rival collectives anytime soon, in the manner of a paperclipper, and they're also unlikely to explosively self-improve. That matters more to Bostrom than whether they technically get defined as 'superintelligences'. I get the impression Bostrom ignores that kind of optimizer more because it doesn't fit his pr...
Bostrom says that machines can clearly have much better working memory than ours, which can remember a puny 4-5 chunks of information (p60). I'm not sure why this is so clear, except that it seems likely that everything can be much better for machine intelligences given the hardware advantages already mentioned, and given the much broader range of possible machine intelligences than biological ones.
To the extent that working memory is just like having a sheet of paper to one side where you can write things, we more or less already have that, though I agree...
And one can speculate that the tardiness and wobbliness of humanity’s progress on many of the “eternal problems” of philosophy are due to the unsuitability of the human cortex for philosophical work. On this view, our most celebrated philosophers are like dogs walking on their hind legs—just barely attaining the threshold level of performance required for engaging in the activity at all.
I think there are two better explanations.
First, assuming that philosophical questions have answers, the tools needed to find those answers will be things like evolution...
What are some possible but non-realized cognitive talents that an artificial intelligence could have, analogous to our talent for interpreting visual scenes? (p57)
I recommend Goertzel's "Kinds of minds", Chapter 2 (pp. 14 ff.) in The Hidden Pattern, on this topic.
As pointed out in note 14, humans can solve all computable problems, because they can carry out the steps of running a Turing machine (very slowly), which we know/suspect can do everything computable. It would seem then that a quality superintelligence is just radically faster than a human at these problems. Is it different to a speed superintelligence?
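To make "carrying out the steps" concrete: each step of a Turing machine is a mechanical table lookup, which a person with paper and enough patience could do by hand. A minimal sketch, using a made-up machine that just flips every bit on the tape:

```python
# Minimal Turing machine simulator. Each iteration is one table lookup:
# (state, symbol) -> (new state, symbol to write, head movement).
rules = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),   # blank cell: stop
}

def run(tape, state="scan", pos=0):
    cells = dict(enumerate(tape))              # sparse tape
    while state != "halt":
        symbol = cells.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

print(run("1011"))  # -> 0100
```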
Bostrom offers the skills of isolated hunter-gatherer bands as support for the claim that the achievements of humans are substantially due to our improved cognitive architecture over that of other sophisticated animals, rather than due to our participation in a giant collective intelligence (p57). However as he notes in footnote 13, this is fairly hard to interpret because isolated hunter-gatherer tribes are still part of substantially larger groups - at a minimum, including many earlier generations, who passed down information to them via language. If hum...
How strongly does the fact that neurons fire ten million times less frequently than modern microprocessors cycle suggest that biological brains are radically less efficient than artificial minds could be? (p59)
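For reference, the "ten million" factor comes from comparing representative rates; both numbers below are assumptions for illustration, not measurements from the book:

```python
# Rough source of the "ten million times" figure.
neuron_hz = 200       # ~peak firing rate of a cortical neuron (assumption)
cpu_hz = 2e9          # ~clock rate of a 2014-era microprocessor (assumption)
print(cpu_hz / neuron_hz)   # -> 10000000.0, i.e. 1e7
```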
How much have 'collective intelligences' been improved by communication channels speeding up, from letters and telegrams to instant messaging?
I was quite interested in the distinction that Bostrom made in passing between intelligence and wisdom. What does everyone think about it?
Did you change your mind about anything as a result of this week's reading? Did you learn anything interesting or surprising?
...This chapter is about different kinds of superintelligent entities that could exist. I like to think about the closely related question, 'what kinds of better can intelligence be?' You can be a better baker if you can bake a cake faster, or bake more cakes, or bake better cakes. Similarly, a system can become more intelligent if it can do the same intelligent things faster, or if it does things that are qualitatively more intelligent. (Collective intelligence seems somewhat different, in that it appears to be a means to be faster or able...
Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
We should not underestimate slow superintelligences. Our judiciary is also slow, and some of the actions available to us are very slow.
Humanity could also be overtaken by a slow (and alien) superintelligence.
It would not matter if you could quickly see that things were going wrong. You could still lose your rights and your power to act, step by step...
We can make progress if we break down "Quality Intelligence" into component parts. I started working on it, but before I go first, does anyone care to take a try?
Bostrom argues that the existence of people who are generally functional but have specific deficits - e.g. in social cognition or in the ability to recognize or hum simple tunes (congenital amusia) - demonstrates that these cognitive skills are performed with specialized neural circuitry, not just using general intelligence (p57). Do you agree? What are other cognitive skills that are revealed to have dedicated neural circuitry in this way?
Three types of information in the brain (and perhaps other platforms), and (coming soon) why we should care
Before I make some remarks, I would recommend Leonard Susskind's very accessible 55-minute YouTube presentation, "The World as Hologram" (for those who don't know him already, though most folks in here probably do: he is a physicist at the Stanford Institute for Theoretical Physics). It is not as corny as it might sound, but is a lecture on the indestructibility of information and black holes (which are a convenient lodestone for him to discuss the ...
Two of those types are about what kind of "better" an intelligence can be, and the third is concerned with implementation details, so it's a bit confusing to read. Though one could replace "collective intelligence" with "highly parallel intelligence" and end up with three types of better.
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.
This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable, and where I remembered, page numbers indicate the rough part of the chapter that is most related (not necessarily the place where the specific claim is cited).
Reading: Chapter 3 (p52-61)
Summary
Notes
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.