I really liked Bostrom's unfinished fable of the sparrows. And endnote #1 from the Preface is cute.
I would say one of the key strengths of the fable of the sparrows is that it provides a very clean introduction to the idea of AI risk. Even someone who has never read a word on the subject, given the title of the book and the story, gets a good idea of where the book is going. It doesn't communicate all the important insights, but it points in the right direction.
EDIT: So I actually went to the trouble of testing this by having a bunch of acquaintances read the fable, and, even given the title of the book, most of them didn't come anywhere near getting the intended message. They were much more likely to interpret it as about the "futility of subjugating nature to humanity's whims". This is worrying for our ability to make the case to laypeople.
Bostrom's wonderful book lays out many important issues and frames a lot of research questions which it is up to all of us to answer.
Thanks to Katja for her introduction and all of these good links.
One issue that I would like to highlight: The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss.
For this reason, in the next phase of this work, we have to understand what specific future technologies could lead us to what specific outcomes.
Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous.
Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources actually is not all that dangerous. People become dangerous when they form groups, access the existing corpus of human knowledge, coordinate among each other to deploy resources and find ways to augment their abilities.
"Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.
If we want to prevent disaster, we have to be able to distinguish dangerous systems. Unfortunately, checking whether a machine can do all of the things a person can is not the correct test.
I like Bostrom's book so far. I think Bostrom's statement near the beginning that much of the book is probably wrong is commendable. If anything, I think I would have taken this statement even further... it seems like Bostrom holds a position of such eminence in the transhumanist community that many will be liable to instinctively treat what he says as quite likely to be correct, forgetting that predicting the future is extremely difficult and even a single very well educated individual is only familiar with a fraction of human knowledge.
I'm envisioning an alternative book, Superintelligence: Gonzo Edition, that has a single bad argument deliberately inserted at random in each chapter that the reader is tasked with finding. Maybe we could get a similar effect by having a contest among LWers to find the weakest argument in each chapter. (Even if we don't have a contest, I'm going to try to keep track of the weakest arguments I see on my own. This chapter it was gnxvat gur abgvba bs nv pbzcyrgrarff npghnyyl orvat n guvat sbe tenagrq.)
Also, being critical is supposedly a good way to generate new ideas.
I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than might be gleaned from the summary here.
The computer scientist Donald Knuth was struck that "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking' - that, somehow, is so much harder!" (p14) There are some activities we think of as involving substantial thinking that we haven't tried to automate much, presumably because they require some of the 'not thinking' skills as precursors. For instance, theorizing about the world, making up grand schemes, winning political struggles, and starting s...
The chapter gives us a reasonable qualitative summary of what has happened in AI so far. It would be interesting to have a more quantitative picture, though this is hard to get. e.g. How much better are the new approaches than the old ones, on some metric? How much was the area funded at different times? How much time has been spent on different things? How has the economic value of the outputs grown over time?
The 'optimistic' quote from the Dartmouth Conference seems ambiguous in its optimism to me. They say 'a significant advance can be made in one or more of these problems', rather than that any of them can be solved (as they are often quoted as saying). What constitutes a 'significant advance' varies with optimism, so their statement seems consistent with them believing they can make an arbitrarily small step. The whole proposal is here, if anyone is curious about the rest.
How much smarter than a human could a thing be? (p4) How about the same question, but using no more energy than a human? What evidence do we have about this?
What is the relationship between economic growth and AI? (Why does a book about AI begin with economic growth?)
If you don't know what causes growth mode shifts, but there have been two or three of them and they seem kind of regular (see Hanson 2000, p14), how likely do you think another one is? (p2) How much evidence do you think history gives us about the timing and new growth rate of a new growth mode?
"The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by." (p4)
There isn't much justification for this claim near where it's made. I could imagine it causing a reader to think that the author is prone to believing important things without much evidence -- or that he expects his readers to do so.
(It might help if the author noted that the topic is discussed in Chapter 4)
How large a leap in cognitive ability do you think occurred between our last common ancestor with the great apes, and us? (p1) Was it mostly a change in personal intelligence, or could human success be explained by our greater ability to accumulate knowledge from others in society? How can we tell how much smarter, in the relevant sense, a human is than a chimp? This chapter claims Koko the Gorilla has a tested IQ of about 80 (see table 2).
What can we infer from answers to these questions?
Comments:
It would be nice for at least one futurist who shows a graph of GDP to describe some of the many, many difficulties in comparing GDP across years, and to talk about the distribution of wealth. The power-law distribution of wealth means that population growth without a shift in the wealth distribution can look like an exponential increase in wealth, while actually the wealth of all but the very wealthy must decrease to preserve the same distribution. Arguably, this has happened repeatedly in American history.
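To make the distributional point concrete, here is a minimal toy sketch (my own construction, not taken from the book or any particular GDP chart; the parameters alpha and w_min are assumptions): per-capita wealth is drawn from a fixed Pareto distribution, so the shape of the distribution never changes, and only the population grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: per-capita wealth ~ classical Pareto(alpha) with minimum
# w_min, and the distribution's shape stays fixed while population doubles
# each period.
alpha, w_min = 1.5, 1.0

for period, population in enumerate([1_000, 2_000, 4_000, 8_000, 16_000]):
    wealth = (rng.pareto(alpha, size=population) + 1) * w_min
    print(f"period {period}: population={population:6d}  "
          f"total wealth={wealth.sum():9.0f}  "
          f"median wealth={np.median(wealth):5.2f}")
```

In this toy model the total ("GDP-like") wealth roughly doubles each period because it tracks population, while the median person's wealth stays flat, so an aggregate growth curve alone cannot distinguish broad enrichment from mere population growth under a fixed distribution.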
I was very glad Nick mentioned that:
In fact for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th Century! The time since 1950 has been strange apparently.
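For concreteness, here is a minimal sketch of why that kind of extrapolation hits infinity in finite time (the specific functional form is my own assumption, not the book's). If world output \(W\) grows super-exponentially, say
\[
\frac{dW}{dt} = k\,W^{1+\epsilon}, \qquad \epsilon > 0,
\]
then
\[
W(t) = \frac{W_0}{\bigl(1 - \epsilon k W_0^{\epsilon}\,(t - t_0)\bigr)^{1/\epsilon}},
\]
which diverges at the finite time \(t^{*} = t_0 + 1/(\epsilon k W_0^{\epsilon})\). Fitting such a curve to the accelerating data up to 1950 is the sense in which a naive extrapolation would place an 'infinite economy' only a few decades later.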
(Forgive me for playing with anthropics, a tool I do not understand) - maybe something happened to all the observers in worlds that didn't see stagnation set in during the '70s. I guess this is similar to the common joke 'explanation' for the bursting of the tech bubble.
With respect to "Growth of Growth", wouldn't the chart ALWAYS look like that, with the near end trailing downwards? The sampling time is decreasing logarithmically, so unless you are sitting right on top of the singularity/production revolution, it should always look like that.
Just got thinking about what happened around 1950 specifically and couldn't find any real reason for it to drop off right there. WWII was well over, and the gold exchange standard remained for another 21 years, and those are the two primary framing events for that timeframe, so far as I can tell.
Without the benefit of hindsight, which past technologies would you have expected to make a big difference to human productivity? For example, if you think that humans' tendency to share information through language is hugely important to their success, then you might expect the printing press, or the internet, to help a lot.
Relatedly, if you hadn't already been told, would you have expected agriculture to be a bigger deal than almost anything else?
I'm curious whether any of you feel that future widespread use of commercial-scale quantum computing (here I am thinking of at least thousands of quantum computers in the private domain, with a multitude of programs already written, tested, available, economical and functionally useful) will have any impact on the development of strong AI. Has anyone read or written any literature on the potential windfalls this could bring to AI's advancement (or lack thereof)?
I'm also curious whether other paradigm-shifting computing technologies could rapidly accelerate the path toward superintelligence.
Have you seen any demonstrations of AI which made a big impact on your expectations, or were particularly impressive?
Are there foreseeable developments other than human-level AI which might produce much faster economic growth? (p2)
Bostrom says that it is hard to imagine the world economy having a doubling time as short as weeks, without minds being created that are much faster and more efficient than those of humans (p2-3). Do you think humans could maintain control of an economy that grew so fast? How fast could it grow while humans maintained control?
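For a sense of scale (my own back-of-the-envelope, not a figure from the book): a doubling time of two weeks would mean roughly 26 doublings per year, i.e. annual growth by a factor of
\[
2^{52/2} = 2^{26} \approx 6.7 \times 10^{7},
\]
compared with the few percent per year of recent history. The question is whether human institutions could even track, let alone steer, change on that timescale.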
Common sense and natural language understanding are suspected to be 'AI complete'. (p14) (Recall that 'AI complete' means 'basically equivalent to solving the whole problem of making a human-level AI')
Do you think they are? Why?
Was there anything in particular in this week's reading that you would like to learn more about, or think more about?
Whatever the nature, cause, and robustness of growth modes, the important observation, it seems to me, is that the past behavior of the economy suggests that very much faster growth is plausible.
Are there any ongoing efforts to model the intelligent behaviour of organisms other than humans?
How should someone familiar with past work in AI use that knowledge to judge how much work is left to be done before reaching human-level AI, or human-level ability at a particular kind of task?
This question of thresholds for 'comprehension' (to use the scare quotes Katja judiciously applied to that word; I'll have more to say about them in coming posts, as many contributors here doubtless will), that is, thresholds for discerning features of reality, particularly abstract features of "reality", across species both existing and future, biological and non-biological alike, is one that I too have thought about seriously, and in several guises, over the years.
First, though, about the scare quotes. Comprehension vs di...
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.
This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)
Economic growth:
The history of AI:
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.