I really liked Bostrom's unfinished fable of the sparrows. And endnote #1 from the Preface is cute.
I would say one of the key strong points about the fable of the sparrows is that it provides a very clean intro to the idea of AI risk. Even someone who's never read a word on the subject, when given the title of the book and the story, gets a good idea of where the book is going to go. It doesn't communicate all the important insights, but it points in the right direction.
EDIT: So I actually went to the trouble of testing this by having a bunch of acquaintances read the fable, and, even given the title of the book, most of them didn't come anywhere near getting the intended message. They were much more likely to interpret it as about the "futility of subjugating nature to humanity's whims". This is worrying for our ability to make the case to laypeople.
Bostrom's wonderful book lays out many important issues and frames a lot of research questions which it is up to all of us to answer.
Thanks to Katja for her introduction and all of these good links.
One issue that I would like to highlight: The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss.
For this reason, in the next phase of this work, we have to understand what specific future technologies could lead us to what specific outcomes.
Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous.
Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources actually is not all that dangerous. People become dangerous when they form groups, access the existing corpus of human knowledge, coordinate among each other to deploy resources and find ways to augment their abilities.
"Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.
If we want to prevent disaster, we have to be able to distinguish dangerous systems. Unfortunately, checking whether a machine can do all of the things a person can is not the correct test.
I like Bostrom's book so far. I think Bostrom's statement near the beginning that much of the book is probably wrong is commendable. If anything, I think I would have taken this statement even further... it seems like Bostrom holds a position of such eminence in the transhumanist community that many will be liable to instinctively treat what he says as quite likely to be correct, forgetting that predicting the future is extremely difficult and even a single very well educated individual is only familiar with a fraction of human knowledge.
I'm envisioning an alternative book, Superintelligence: Gonzo Edition, that has a single bad argument deliberately inserted at random in each chapter that the reader is tasked with finding. Maybe we could get a similar effect by having a contest among LWers to find the weakest argument in each chapter. (Even if we don't have a contest, I'm going to try to keep track of the weakest arguments I see on my own. This chapter it was gnxvat gur abgvba bs nv pbzcyrgrarff npghnyyl orvat n guvat sbe tenagrq.)
Also, supposedly being critical is a good way to generate new ideas.
I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than what might be gleaned from the summary here.
The computer scientist Donald Knuth was struck that "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking' - that, somehow, is so much harder!" (p14) There are some activities we think of as involving substantial thinking that we haven't tried to automate much, presumably because they require some of the 'not thinking' skills as precursors. For instance, theorizing about the world, making up grand schemes, winning political struggles, and starting s...
The chapter gives us a reasonable qualitative summary of what has happened in AI so far. It would be interesting to have a more quantitative picture, though this is hard to get. e.g. How much better are the new approaches than the old ones, on some metric? How much was the area funded at different times? How much time has been spent on different things? How has the economic value of the outputs grown over time?
The 'optimistic' quote from the Dartmouth Conference seems ambiguous in its optimism to me. They say 'a significant advance can be made in one or more of these problems', rather than that any of them can be solved (as they are often quoted as saying). What constitutes a 'significant advance' varies with optimism, so their statement seems consistent with them believing they can make an arbitrarily small step. The whole proposal is here, if anyone is curious about the rest.
How much smarter than a human could a thing be? (p4) How about the same question, but using no more energy than a human? What evidence do we have about this?
What is the relationship between economic growth and AI? (Why does a book about AI begin with economic growth?)
If you don't know what causes growth mode shifts, but there have been two or three of them and they seem kind of regular (see Hanson 2000, p14), how likely do you think another one is? (p2) How much evidence do you think history gives us about the timing and new growth rate of a new growth mode?
"The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by." (p4)
There isn't much justification for this claim near where it's made. I could imagine it causing a reader to think that the author is prone to believing important things without much evidence -- or that he expects his readers to do so.
(It might help if the author noted that the topic is discussed in Chapter 4)
How large a leap in cognitive ability do you think occurred between our last common ancestor with the great apes, and us? (p1) Was it mostly a change in personal intelligence, or could human success be explained by our greater ability to accumulate knowledge from others in society? How can we tell how much smarter, in the relevant sense, a chimp is than a human? This chapter claims Koko the Gorilla has a tested IQ of about 80 (see table 2).
What can we infer from answers to these questions?
Comments:
It would be nice for at least one futurist who shows a graph of GDP to describe some of the many, many difficulties in comparing GDP across years, and to talk about the distribution of wealth. The power-law distribution of wealth means that population growth without a shift in the wealth distribution can look like an exponential increase in wealth, while actually the wealth of all but the very wealthy must decrease to preserve the same distribution. Arguably, this has happened repeatedly in American history.
I was very glad Nick mentioned that
In fact for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th Century! The time since 1950 has been strange apparently.
(Forgive me for playing with anthropics, a tool I do not understand) - maybe something happened to all the observers in worlds that didn't see stagnation set in during the '70s. I guess this is similar to the common joke 'explanation' for the bursting of the tech bubble.
With respect to "Growth of Growth", wouldn't the chart ALWAYS look like that, with the near end trailing downwards? The sampling intervals shrink toward the present on the logarithmic time axis, so unless you are sitting right on top of the singularity/production revolution, it should always look like that.
Just got thinking about what happened around 1950 specifically and couldn't find any real reason for it to drop off right there. WWII was well over, and the gold exchange standard remained for another 21 years, and those are the two primary framing events for that timeframe, so far as I can tell.
Without the benefit of hindsight, which past technologies would you have expected to make a big difference to human productivity? For an example, if you think that humans' tendency to share information through language is hugely important to their success, then you might expect the printing press to help a lot, or the internet.
Relatedly, if you hadn't already been told, would you have expected agriculture to be a bigger deal than almost anything else?
I'm curious if any of you feel that future widespread use of commercial scale quantum computing (here I am thinking of at least thousands of quantum computers in the private domain with a multitude of programs already written, tested, available, economic and functionally useful) will have any impact on the development of strong A.I.? Has anyone read or written any literature with regards to potential windfalls this could bring to A.I.'s advancement (or lack thereof)?
I'm also curious if other paradigm shifting computing technologies could rapidly accelerate the path toward superintelligence?
Have you seen any demonstrations of AI which made a big impact on your expectations, or were particularly impressive?
Are there foreseeable developments other than human-level AI which might produce much faster economic growth? (p2)
Bostrom says that it is hard to imagine the world economy having a doubling time as short as weeks, without minds being created that are much faster and more efficient than those of humans (p2-3). Do you think humans could maintain control of an economy that grew so fast? How fast could it grow while humans maintained control?
Common sense and natural language understanding are suspected to be 'AI complete'. (p14) (Recall that 'AI complete' means 'basically equivalent to solving the whole problem of making a human-level AI')
Do you think they are? Why?
Was there anything in particular in this week's reading that you would like to learn more about, or think more about?
Whatever the nature, cause, and robustness of growth modes, the important observation seems to me to be that the past behavior of the economy suggests very much faster growth is plausible.
Are there any ongoing efforts to model the intelligent behaviour of other organisms besides the human model?
How should someone familiar with past work in AI use that knowledge to judge how much work is left to be done before reaching human-level AI, or human-level ability at a particular kind of task?
This question of thresholds for 'comprehension' -- to use the judiciously applied scare quotes Katja used about comprehension (I'll have more to say about that in coming posts, as many contributors in here doubtless will) -- i.e. thresholds for discernment of features of reality, particularly abstract features of "reality", be it across species (existent ones and future ones included, biological and nonbiological included), is one I, too, have thought about seriously and in several guises over the years.
First though, about the scare quotes. Comprehension vs discovery is worth distinguishing. When I was a math major, back in the day (I was a double major at UCB, math and philosophy, and wrote my honors thesis on the mind-body problem), I, like most math majors, frequently experienced the distinction between grasping, in a full intuitive sense, some concept or theorem, and technically understanding that it was true, by step-wise going through a proof, seeing the validity of each step, and thus accepting the conclusion.
But what I was always after ... and lacking this I never was satisfied with myself that I had really understood the concept, even though I accepted the demonstration of its truth ... was the "ah-ha" moment of seeing that it was "conceptually necessary", as I used to think of it to myself. In fact, I wouldn't quit trying to intuit the thing until I finally achieved this full understanding.
It’s well known in math that frequently an intuitively penetrable (by human math people) first demonstration of a theorem, is later replaced in some book by a more compact, but intuitively opaque proof. Math students often hate these more “efficient” and compact proofs, logically valid though they be.
Hence I bring up the conundrum of “theorem proving programs”. They can “discover” a new piece of mathematical “knowledge”, but do they experience these intuitions? Hardly. These intuitions are a form of what I call conceptual qualia.
The question is, if a machine OR human stumbles upon a proof of a new theorem, has anything been “comprehended”, until or unless some conscious agent capable of conceptual qualia (live intuitive “ah-ha’s”) has been able to understand the meaning of the proof, not just walk through each step and say, “yes, logically valid; yes, logically valid….. yes, logically valid.”
The million dollar question, one of them, is whether we have yet accepted the distinction between intelligence and consciousness that was treated so dismissively and derisively in the positivistic and behavioristic era, providing the intellectual climate which made the Turing test so palatable and replaced any talk of comprehension with talk about behavior.
Do we, now, want superintelligence, or supercomprehension?
If we learn how to use Big Data to take the output from the iconic "million monkeys at a million typewriters", and filter it with sophisticated statistical methods based on mining Big Data, and in the aggregate of these two processes develop machines that "discover" but do not "comprehend", will we consider ourselves better off?
Well, for some purposes, sure. Drug "discovery" that we do not "understand", but which we can use to reverse Alzheimer's, is fine.
Program trading that makes money, makes money.
But for other purposes... I think we ought to have people also pursuing supercomprehension: machines that really feel, imagine (not just "search" and combinatorially combine, then filter), feel the joys and ironies of life, and give companionship, devotion, loyalty, altruism, maybe even moral and aesthetic inspiration.
Further, I think our best chance at "taming" superintelligence is to give it conceptual qualia, emotion, experience, and conditions that allow it to have empathy and develop moral intuition. For me, I have wanted my whole life to build a companion race of AIs that truly is sentient, and can be full partners in the experience and perfection of life, the pursuit of "meaning", and so on.
Building such minds requires that we understand and delve into problems we have been, on the whole, too collectively lazy to solve on our own behalf, like developing a decent theory of meta-ethics, so that we know what traits (if any) in the overall space of possible minds promote the independent discovery or evolution of "ethics".
I actually think an independently grounded theory that does all this, and solves the mind-body problem in general, is within reach.
One of the things I like about the possibility -- and the inherent risk -- of imminent superintelligence, is that it will force us to develop answers to these neglected "philosophical" issues, because a mind and intelligence that becomes arbitrarily smart is, as many contemporary authors (Bostrom included) point out, ultimately far too dangerous a power to play with, unless it is given the ability to control itself voluntarily, and "ethically."
It wasn't airplanes and physics that brought down the world trade center, it was philosophical stupidity and intellectual immaturity.
If we go down the path toward superintelligence, I think we must give it sentience, so that it is more than a mindless, electromechanical apparatus that will steamroller over us, not with malice, but the same way a poorly controlled nuclear power plant will kill us: it is a thing that doesn't have any clue what it is "doing".
We need to build brilliant machines with conscious agency, not just behavior. We need to take on the task of building sentient machines.
I think we can do it if we think really, really hard about the problems. We have all the intellectual pieces, the "data", in hand now. We just need to give up this legacy positivism, and stop equivocating about intelligence and "understanding".
Phenomenal experience is a necessary (though not sufficient) condition for moral agency. I think we can figure out, with a decent chance of being right, what the sufficient conditions are, too. But we cannot (and AI lags far behind neurobiology and neuroscience on this one) drag our feet and continue to default to the legacy positivism of the Turing test era (because we are too lazy to think harder and aim higher) when it comes to discussing, not just information processing behavior, but awareness.
Well, a little preachy, but we are in here to make each other think. I have wanted to build a mind since I was a teenager, but for these reasons. I don't want just a souped up, Big Data, calculating machine. Does anyone believe Watson "understood" anything?
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.
This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)
Summary
Economic growth:
The history of AI:
Notes on a few things
In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later.
In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear.
One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.
Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.
Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.
We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.
Example of how the first 'human-level' AI may surpass humans in many ways.
Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in these interviews.
Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints - you need an explanation that didn't happen at all of the other times in history.
It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th Century! The time since 1950 has been strange apparently.
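To see why this kind of extrapolation produces an infinite economy at a finite date, here is a toy model (the constants are mine and purely illustrative, not fit to the historical data): if the proportional growth rate itself scales with the economy's size, then dy/dt = k·y², whose solution y(t) = y0/(1 − k·y0·t) diverges at the finite time t* = 1/(k·y0). Each doubling takes half as long as the previous one, so infinitely many doublings fit in before t*:

```python
# Toy model of "growth rate proportional to size": dy/dt = k * y**2.
# The constants below are illustrative, not fit to real GDP data.
k, y0 = 0.01, 1.0

def time_to_reach(y):
    """Closed-form time at which y(t) = y0 / (1 - k*y0*t) reaches size y."""
    return (1 / k) * (1 / y0 - 1 / y)

# The n-th doubling arrives at t = (1/k) * (1 - 2**-n): the gaps between
# doublings halve each time, so the entire infinite sequence of doublings
# fits before the blow-up date t* = 1/(k*y0) = 100.
for n in range(1, 6):
    print(n, time_to_reach(2 ** n))  # 50.0, 75.0, 87.5, 93.75, 96.875
```

No economy can actually follow this curve forever, which is the point: extrapolating the pre-1950 trend runs into the singularity rather than past it.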
(Figure from here)
You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).
Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
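For readers who would like something runnable, here is a minimal sketch of maximum likelihood estimation (the coin-flipping example is mine, not Bostrom's): given some observations, pick the parameter value under which those observations were most probable.

```python
def likelihood(p, heads, tails):
    """Probability of a particular sequence of coin flips, given bias p."""
    return (p ** heads) * ((1 - p) ** tails)

# Observed data: 9 heads and 1 tail.
heads, tails = 9, 1

# Search a grid of candidate biases for the one maximizing the likelihood.
candidates = [i / 100 for i in range(1, 100)]
mle = max(candidates, key=lambda p: likelihood(p, heads, tails))

print(mle)  # 0.9 -- the observed frequency of heads
```

Note the connection to the car example above: maximum likelihood answers "which hypothesis makes the data most probable?" without weighing how common each hypothesis is, which is exactly why it would favour "you won a car" over "spam" even when spam is the better overall bet.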
The second large class of algorithms Bostrom mentions are hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
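A minimal sketch of the idea (the objective function and step size here are my own toy choices): start somewhere, repeatedly propose a small random move, and keep the move only if it improves the objective.

```python
import random

def hill_climb(f, x, step=0.1, iterations=1000):
    """Propose small random moves; accept each one only if it increases f."""
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

random.seed(0)  # make the run repeatable

# A single smooth hill with its peak at x = 2; the climber finds it from x = 0.
peak = hill_climb(lambda x: -(x - 2) ** 2, x=0.0)
```

On a function with several hills, the same procedure can get stuck on whichever local peak it happens to climb first, which is why practical variants restart from many starting points or occasionally accept downhill moves.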
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.