This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.

This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)


Summary

Economic growth:

  1. Economic growth has become radically faster over the course of human history. (p1-2)
  2. This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)
  3. Thus history suggests large changes in the growth rate of the economy are plausible. (p2)
  4. This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.
  5. Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3)
  6. Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.

The history of AI:

  1. Human-level AI has been predicted since the 1940s. (p3-4)
  2. Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)
  3. AI research has been through several cycles of relative popularity and unpopularity. (p5-11)
  4. By around the 1990s, 'Good Old-Fashioned Artificial Intelligence' (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)
  5. AI is very good at playing board games. (p12-13)
  6. AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)
  7. In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)
  8. An 'optimality notion' is the combination of a rule for learning and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This agent is impossible to actually build, but it provides a useful standard against which to judge imperfect agents. (p10-11) A minimal illustrative sketch follows this list.
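
To make the 'learning rule plus decision rule' pairing concrete, here is a minimal Python sketch. It is not Bostrom's formalism, and nothing like the ideal agent itself (which is uncomputable); the hypotheses, likelihoods and rewards below are invented purely for illustration. The learning rule is Bayesian updating over a tiny hypothesis set; the decision rule is expected-reward maximisation.

```python
# A toy agent built from an explicit learning rule and decision rule.
# Everything here (hypotheses, likelihoods, rewards) is made up for
# illustration; the ideal Bayesian agent Bostrom describes would reason
# over all computable hypotheses, which is not implementable.

PRIOR = {"coin_biased_heads": 0.5, "coin_fair": 0.5}

# P(observation | hypothesis)
LIKELIHOOD = {
    "coin_biased_heads": {"heads": 0.9, "tails": 0.1},
    "coin_fair": {"heads": 0.5, "tails": 0.5},
}

# Reward for each (action, hypothesis) pair.
REWARD = {
    ("bet_heads", "coin_biased_heads"): 1.0,
    ("bet_heads", "coin_fair"): 0.0,
    ("bet_tails", "coin_biased_heads"): -1.0,
    ("bet_tails", "coin_fair"): 0.0,
}


def update(posterior, observation):
    """Learning rule: Bayes' theorem over the fixed hypothesis set."""
    unnormalised = {h: posterior[h] * LIKELIHOOD[h][observation] for h in posterior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}


def choose_action(posterior, actions=("bet_heads", "bet_tails")):
    """Decision rule: pick the action with the highest expected reward."""
    def expected_reward(action):
        return sum(posterior[h] * REWARD[(action, h)] for h in posterior)
    return max(actions, key=expected_reward)


posterior = dict(PRIOR)
for observation in ("heads", "heads", "tails", "heads"):
    posterior = update(posterior, observation)

print(posterior)                 # belief after four observations
print(choose_action(posterior))  # 'bet_heads' once the biased coin looks likely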

Notes on a few things

  1. What is 'superintelligence'? (p22 spoiler)
    In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later. 
  2. What is 'AI'?
    In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
  3. What is 'human-level' AI? 
    We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear. 

    One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.

    Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.

    Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.

    We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.


    (Figure: example of how the first 'human-level' AI may surpass humans in many ways.)

    Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in these interviews.
  4. Growth modes (p1) 
    Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
  5. What causes these transitions between growth modes? (p1-2)
    One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember, however, that there are also a lot of negative datapoints: you need a cause that was present at those two times but absent at every other time in history.
  6. Growth of growth
    It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time (a short derivation of this blow-up appears after this list). In fact, for the thousand years until 1950, such extrapolation would place an infinite economy in the late 20th century! The time since 1950 has apparently been strange.

    (Figure from here)
  7. Early AI programs mentioned in the book (p5-6)
    You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
  8. Later AI programs mentioned in the book (p6)
    Algorithmically generated Beethoven, algorithmic generation of patentable inventions, and artificial comedy (requires download).
  9. Modern AI algorithms mentioned (p7-8, 14-15) 
    Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
  10. What is maximum likelihood estimation? (p9)
    Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation (hypothesis) that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would make this observation most probable is the one in which you really have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should believe you have won a car whenever someone tells you so: being the target of a spam email might give you only a low probability of being told that you have won a car (a spam email may instead advertise products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time you get such an email, you will not have won a car. A toy numerical version of this example appears after this list. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
  11. What are hill climbing algorithms like? (p9)
    The second large class of algorithms Bostrom mentions is hill climbing. The idea here is fairly straightforward - repeatedly take a small step and keep it only if it improves your objective - but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download), and a minimal sketch appears after this list.
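
A short derivation of the finite-time blow-up mentioned in note 6, under the stylised assumption that the economy's proportional growth rate scales linearly with its size $Y$ (so growth is hyperbolic rather than exponential):

$$\frac{\dot{Y}}{Y} = kY \quad\Rightarrow\quad \dot{Y} = kY^2 \quad\Rightarrow\quad \frac{1}{Y_0} - \frac{1}{Y(t)} = k(t - t_0) \quad\Rightarrow\quad Y(t) = \frac{Y_0}{1 - kY_0(t - t_0)},$$

which diverges as $t$ approaches $t_0 + 1/(kY_0)$. An extrapolated hyperbolic trend therefore reaches an infinite economy at a finite date, which is why it cannot literally continue.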
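
The winning-a-car example from note 10 can also be made numerical. Below is a toy Python sketch; all the probabilities are invented for illustration. It shows how the maximum likelihood hypothesis (you really won a car) can differ from the most probable hypothesis once base rates are taken into account (it was spam).

```python
# Toy comparison of maximum likelihood vs. posterior probability.
# All numbers are made up for illustration.

hypotheses = {
    # name: (prior probability, P("email says you won a car" | hypothesis))
    "actually_won_a_car": (1e-6, 0.99),   # rare, but almost guarantees the email
    "spam_email":         (0.5,  0.001),  # common, but rarely mentions a car
}

# Maximum likelihood: which hypothesis makes the observation most probable?
likelihoods = {name: like for name, (prior, like) in hypotheses.items()}
mle_hypothesis = max(likelihoods, key=likelihoods.get)

# Posterior: also weigh how common each hypothesis is to begin with.
unnormalised = {name: prior * like for name, (prior, like) in hypotheses.items()}
total = sum(unnormalised.values())
posterior = {name: p / total for name, p in unnormalised.items()}

print("Maximum likelihood hypothesis:", mle_hypothesis)  # actually_won_a_car
print("Posterior probabilities:", posterior)             # spam_email dominates
```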
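
And a minimal hill-climbing sketch for note 11, again with an arbitrary toy objective chosen purely for illustration: the algorithm repeatedly proposes a small random step and keeps it only if it improves the objective, which is also why it can get stuck on a local peak of a bumpier function.

```python
import random


def objective(x):
    """Toy objective with a single peak at x = 3."""
    return -(x - 3.0) ** 2


def hill_climb(x=0.0, step=0.1, iterations=10_000):
    best = objective(x)
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        value = objective(candidate)
        if value > best:          # accept only uphill moves
            x, best = candidate, value
    return x


print(hill_climb())  # converges to roughly 3.0
```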

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:

  1. How have investments into AI changed over time? Here's a start, estimating the size of the field.
  2. What does progress in AI look like in more detail? What can we infer from it? I wrote about algorithmic improvement curves before. If you are interested in plausible next steps here, ask me.
  3. What do economic models tell us about the consequences of human-level AI? Here is some such thinking; Eliezer Yudkowsky has written at length about his request for more.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.

Comments

I really liked Bostrom's unfinished fable of the sparrows. And endnote #1 from the Preface is cute.

I would say one of the key strong points about the fable of the sparrows is that it provides a very clean intro to the idea of AI risk. Even someone who's never read a word on the subject, when given the title of the book and the story, gets a good idea of where the book is going to go. It doesn't communicate all the important insights, but it points in the right direction.

EDIT: So I actually went to the trouble of testing this by having a bunch of acquaintances read the fable, and, even given the title of the book, most of them didn't come anywhere near getting the intended message. They were much more likely to interpret it as about the "futility of subjugating nature to humanity's whims". This is worrying for our ability to make the case to laypeople.

5John_Maxwell
It's an interesting story, but I think in practice the best way to learn to control owls would be to precommit to kill the young owl before it got too large, experiment with it, and through experimenting with and killing many young owls, learn how to tame and control owls reliably. Doing owl control research in the absence of a young owl to experiment on seems unlikely to yield much of use--imagine trying to study zoology without having any animals or botany without having any plants.
5lukeprog
But will all the sparrows be so cautious? Yes it's hard, but we do quantum computing research without any quantum computers. Lampson launched work on covert channel communication decades before the vulnerability was exploited in the wild. Turing learned a lot about computers before any existed. NASA does a ton of analysis before they launch something like a Mars rover, without the ability to test it in its final environment.
3gallabytes
True in the case of owls, though in the case of AI we have the luxury and challenge of making the thing from scratch. If all goes correctly, it'll be born tamed.
1Ben Pace
...Okay, not all analogies are perfect. Got it. It's still a useful analogy for getting the main point across.
SteveG120

Bostrom's wonderful book lays out many important issues and frames a lot of research questions which it is up to all of us to answer.

Thanks to Katja for her introduction and all of these good links.

One issue that I would like to highlight: The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss.

For this reason, in the next phase of this work, we have to understand what specific future technologies could lead us to what specific outcomes.

Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous.

Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources actually is not all that dangerous. People become dangerous when they form groups, access the existing corpus of human knowledge, coordinate among each other to deploy resources and find ways to augment their abilities.

"Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.

If we want to prevent disaster, we have to be able to distinguish dangerous systems. Unfortunately, checking whether a machine can do all of the things a person can is not the correct test.

7paulfchristiano
While I broadly agree with this sentiment, I would like to disagree with this point. I would consider even the creation of a single very smart human, with all human resourcefulness but completely alien values, to be a significant net loss to the world. If they represent 0.001% of the world's aggregative productive capacity, I would expect this to make the world something like 0.001% worse (according to humane values) and 0.001% better (according to their alien values). The situation is not quite so dire, if nothing else because of gains for trade (if our values aren't in perfect tension) and the ability of the majority to stomp out the values of a minority if it is so inclined. But it's in the right ballpark. So while I would agree that broadly human capabilities are not a necessary condition for concern, I do consider them a sufficient condition for concern.
3VonBrownie
Do you think, then, that it's a dangerous strategy for an entity such as a Google that may be using its enormous and growing accumulation of "the existing corpus of human knowledge" to provide a suitably large data set to pursue development of AGI?
1mvp9
I think Google is still quite a ways from AGI, but in all seriousness, if there was ever a compelling interest of national security to be used as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.
5VonBrownie
Which raises another issue... is there a powerful disincentive to reveal the emergence of an artificial superintelligence? Either by the entity itself (because we might consider pulling the plug) or by its creators who might see some strategic advantage lost (say, a financial institution that has gained a market trading advantage) by having their creation taken away?
2Sebastian_Hagen
Absolutely. Or just decide that its goal system needed a little more tweaking before it's let loose on the world. Or even just slow it down. This applies much more so if you're dealing with an entity potentially capable of an intelligence explosion. Those are devices for changing the world into whatever you want it to be, as long as you've solved the FAI problem and nobody takes it from you before you activate it. The incentives for the latter would be large, given the current value disagreements within human society, and so are the incentives for hiding that you have one.
1[anonymous]
If you've solved the FAI problem, the device will change the world into what's right, not what you personally want. But of course, we should probably have a term of art for an AGI that will honestly follow the intentions of its human creator/operator whether or not those correspond to what's broadly ethical.
0cameroncowan
We need some kind of central ethical code and there are many principles that are transcultural enough to follow. However, how do we teach a machine to make judgment calls?
0Sebastian_Hagen
A lot of the technical issues are the same in both cases, and the solutions could be re-used. You need the AI to be capable of recursive self-improvement without compromising its goal systems, avoid the wireheading problem, etc. Even a lot of the workable content-level solutions (a mechanism to extract morality from a set of human minds) would probably be the same. Where the problems differ, it's mostly in that the society-level FAI case is harder: there's additional subproblems like interpersonal disagreements to deal with. So I strongly suspect that if you have a society-level FAI solution, you could very easily hack it into an one-specific-human-FAI solution. But I could be wrong about that, and you're right that my original use of terminology was sloppy.
0cameroncowan
That's already underway.
0cameroncowan
I don't think that Google is there yet. But as Google sucks up more and more knowledge I think we might get there.
2KatjaGrace
Good points. Any thoughts on what the dangerous characteristics might be?
2rlsj
An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness? It seems likely that autonomous laborers, assembly-line workers, clerks and low-level managers would, without requiring such flirtation, be useful and sufficient for the society of abundance that is our main objective. But can they operate without a working AGI? We may find out if we let the robots stumble onward and upward.
2NxGenSentience
I had a not unrelated thought as I read Bostrom in chapter 1: why can't we institute obvious measures to ensure that the train does stop at Humanville? The idea that we cannot make human level AGI without automatically opening Pandora's box to superintelligence "without even slowing down at the Humanville station", was suddenly not so obvious to me. I asked myself after reading this, trying to pin down something I could post, "Why don't humans automatically become superintelligent, by just resetting our own programming to help ourselves do so?" The answer is, we can't. Why? For one, our brains are, in essence, composed of something analogous to ASICs... neurons with certain physical design limits, and our "software", modestly modifiable as it is, is instantiated in our neural circuitry. Why can't we build the first generation of AGIs out of ASICs, and omit WiFi, bluetooth, ... allow no ethernet jacks on the exterior of the chassis? Tamper interlock mechanisms could be installed, and we could give the AIs one way (outgoing) telemetry, inaccessible to their "voluntary" processes, the way someone wearing a pacemaker might have outgoing medical telemetry modules installed, that are outside of his/her "conscious" control. Even if we do give them a measure of autonomy, which is desirable and perhaps even necessary if we want them to be general problem solvers and be creative and adaptable to unforeseen circumstances for which we have not preinstalled decision trees, we need not give them the ability to just "think" their code (it being substantially frozen in the ASICs) into a different form. What am I missing? Until we solve the Friendly aspect of AGIs, why not build them with such engineered limits? Evolution has not, so far, seen fit to give us that instant, large scale self-modifiability. We have to modify our 'software' the slow way (learning and remembering, at our snail's pace.) Slow is good, at least it was for us, up til now, when our speed of learning is n
4leplen
Because by the time you've managed to solve the problem of making it to humanville, you probably know enough to keep going. There's nothing preventing us from learning how to self-modify. The human situation is strange because evolution is so opaque. We're given a system that no one understands and no one knows how to modify and we're having to reverse engineer the entire system before we can make any improvements. This is much more difficult than upgrading a well-understood system. If we manage to create a human-level AI, someone will probably understand very well how that system works. It will be accessible to a human-level intelligence which means the AI will be able to understand it. This is fundamentally different from the current state of human self-modification.
-1NxGenSentience
Leplen, I agree completely with your opening statement, that if we, the human designers, understand how to make human level AI, then it will probably be a very clear and straightforward issue to understand how to make something smarter. An easy example to see is the obvious bottleneck human intellects have with our limited "working" executive memory. The solutions for lots of problems by us are obviously heavily encumbered by how many things one can keep in mind at "the same time" and see the key connections, all in one act of synthesis. We all struggle privately with this... some issues cannot ever be understood by chunking, top-down, biting off a piece at a time, then "grokking" the next piece....and gluing it together at the end. Some problems resist decomposition into teams of brainstormers, for the same reason: some single comprehending POV seems to be required to see a critical sized set of factors (which varies by problem, of course.) Hence, we have to rely on getting lots of pieces into long term memory (maybe by decades of study) and hoping that incubation and some obscure processes occurring outside consciousness will eventually bubble up and give us a solution (--- the "dream of a snake biting its tail for the benzene ring" sort of thing.) If we could build HL AGI, of course we can eliminate such bottlenecks, and others we will have come to understand, in cracking the design problems. So I agree, and that it is actually one of my reasons for wanting to do AI. So, yes, the artificial human level AI could understand this. My point was that we can build in physical controls... monitoring of the AIs. And if their key limits were in ASICs, ROMs, etc, and we could monitor them, we would immediately see if they attempt to take over a chip factory in, say, Iceland, and we can physically shut the AIs down or intervene. We can "stop them at the airport." It doesn't matter if designs are leaked onto the internet, and an AI gets near an internet terminal and l
3leplen
If I were an ASIC-implemented AI why would I need an ASIC factory? Why wouldn't I just create a software replica of myself on general purpose computing hardware, i.e. become an upload? I know next to nothing about neuroscience, but as far as I can tell, we're a long way from the sort of understanding of human cognition necessary to create an upload, but going from an ASIC to an upload is trivial. I'm also not at all convinced that I want a layover at humanville. I'm not super thrilled by the idea of creating a whole bunch of human level intelligent machines with values that differ widely from my own. That seems functionally equivalent to proposing a mass-breeding program aiming to produce psychologically disturbed humans.
1Sebastian_Hagen
In an intelligent society that was highly integrated and capable of consensus-building, something like that may be possible. This is not our society. Research into stronger AI would remain a significant opportunity to get an advantage in {economic, military, ideological} competition. Unless you can find some way to implement a global coordination framework to prevent this kind of escalation, fast research of that kind is likely to continue.
1KatjaGrace
In what sense do you think of an autonomous laborer as being under 'our control'? How would you tell if it escaped our control?
2rlsj
How would you tell? By its behavior: doing something you neither ordered nor wanted. Think of the present-day "autonomous laborer" with an IQ about 90. The only likely way to lose control of him is for some agitator to instill contrary ideas. Censorship for robots is not so horrible a regime. Who is it that really wants AGI, absent proof that we need it to automate commodity production?
4leplen
In my experience, computer systems currently get out of my control by doing exactly what I ordered them to do, which is frequently different than I what I wanted them to do. Whether or not a system is "just following orders" doesn't seem to be a good metric for it being under your control.
1rlsj
How does "just following orders," a la Nuremberg, bear upon this issue? It's out of control when its behavior is neither ordered nor wanted.
1leplen
While I agree that it is out of control if the behavior is neither ordered nor wanted, I think it's also very possible for the system to get out of control while doing exactly what you ordered it to, but not what you meant for it to. The argument I'm making is approximately the same as the one we see in the outcome pump example. This is to say, while a system that is doing something neither ordered nor wanted is definitely out of control, it does not follow that a system that is doing exactly what it was ordered to do is necessarily under your control.
0[anonymous]
Ideological singulatarians.
0[anonymous]
Probably. I would say that most low-level jobs really don't engage much of the general intelligence of the humans doing them.
-2JonathanGossage
The following are some attributes and capabilities which I believe are necessary for superintelligence. Depending on how these capabilities are realized, they can become anything from early warning signs of potential problems to red alerts. It is very unlikely that, on their own, they are sufficient.
  * A sense of self. This includes a recognition of the existence of others.
  * A sense of curiosity. The AI finds it attractive (in some sense) to investigate and try to understand the environment that it finds itself in.
  * A sense of motivation. The AI has attributes similar in some way to human aspirations.
  * A capability to (in some way) manipulate portions of its external physical environment, including its software but also objects and beings external to its own physical infrastructure.
0cameroncowan
I would add a sense of ethical standards.
0[anonymous]
I agree, and believe that the emphasis on "superintelligence", depending on how that term is interpreted, might be an impediment to clear thinking in this area. Following David Chalmers, I think it's best to formulate the problem more abstractly, by using the concept of a self-amplifying cognitive capacity. When the possession of that cognitive capacity is correlated with changes in some morally relevant capacity (such as the capacity to cause the extinction of humanity), the question then becomes one about the dangers posed by systems which surpass humans in that self-amplifying capacity, regardless of how much they resemble typical human beings or how they perform on standard measures of intelligence.

I like Bostrom's book so far. I think Bostrom's statement near the beginning that much of the book is probably wrong is commendable. If anything, I think I would have taken this statement even further... it seems like Bostrom holds a position of such eminence in the transhumanist community that many will be liable to instinctively treat what he says as quite likely to be correct, forgetting that predicting the future is extremely difficult and even a single very well educated individual is only familiar with a fraction of human knowledge.

I'm envisioning an alternative book, Superintelligence: Gonzo Edition, that has a single bad argument deliberately inserted at random in each chapter that the reader is tasked with finding. Maybe we could get a similar effect by having a contest among LWers to find the weakest argument in each chapter. (Even if we don't have a contest, I'm going to try to keep track of the weakest arguments I see on my own. This chapter it was gnxvat gur abgvba bs nv pbzcyrgrarff npghnyyl orvat n guvat sbe tenagrq.)

Also, supposedly being critical is a good way to generate new ideas.

I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than what might be gleaned from the summary here.

3NxGenSentience
It may have been a judgement call by the writer (Bostrom) and editor: He is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get most people's (policymakers, decision makers, basically "the Suits" who run the world) attention? Talk about the money. Most of even educated humanity sees the world in one color (can't say green anymore, but the point is made.) Try to motivate people about global warming? ("...um....but, but.... well, it might cost JOBS next month, if we try to save all future high level earthly life from extinction... nope the price [lost jobs] of saving the planet is obviously too high...") Want to get non-thinkers to even pick up the book and read the first chapter or two.... talk about money. If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation. ---------------------------------------- Of course, strictly speaking, what I just said was tangent to the original point, which was whether the summary reflected the predominant emphasis in the pages of the book it ostensibly covered. But my point about PR considerations was worth making, and also, Katja or someone did, I think, mention maybe formulating a reading guide for Bostrom's book, in which case, any such author of a reading guide might be thinking already about this "hook 'em by beginning with economics" tactic, to make the book itself more likely to be read by a wider audience.
2lukeprog
Agree.
1KatjaGrace
Apologies; I didn't mean to imply that the economics related arguments here were central to Bostrom's larger argument (he explicitly says they are not) - merely to lay them out, for what they are worth. Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.
1kgalias
No need to apologize - thank you for your summary and questions. No disagreement here.

How would you like this reading group to be different in future weeks?

6kgalias
You could start at a time better suited for Europe.
1Paul Crowley
That's a tricky problem! If we assume people are doing this in their spare time, then a weekend is the best time to do it: say noon Pacific time, which is 9pm Berlin time. But people might want to be doing something else with their Saturdays or Sundays. If they're doing it with their weekday evenings, then they just don't overlap; the best you can probably do is post at 10am Pacific time on (say) a Monday, and let Europe and UK comment first, then the East Coast, and finally the West Coast. Obviously there will be participants in other timezones, but those four will probably cover most participants.
3negamuhia
The text of [the parts I've read so far of] Superintelligence is really insightful, but I'll quote Nick in saying that He gives many references (84 in Chapter 1 alone), some of which refer to papers and others that resemble continuations of the specific idea in question that don't fit in directly with the narrative in the book. My suggestion would be to go through each reference as it comes up in the book, analyze and discuss it, then continue. Maybe even forming little discussion groups around each reference in a section (if it's a paper). It could even happen right here in comment threads. That way, we can get as close to Bostrom's original world of information as possible, maybe drawing different conclusions. I think that would be a more consilient understanding of the book.
2NxGenSentience
Katja, you are doing a great job. I realize what a huge time and energy commitment it is to take this on... all the collateral reading and sources you have to monitor, in order to make sure you don't miss something that would be good to add in to the list of links and thinking points. We are still in the get acquainted, discovery phase, as a group, and with the book. I am sure it will get more interesting yet as we go along, and some long term intellectual friendships are likely to occur as a result of the coming weeks of interaction. Thanks for your time and work.... Tom

The computer scientist Donald Knuth was struck that "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking' - that, somehow, is so much harder!" (p14) There are some activities we think of as involving substantial thinking that we haven't tried to automate much, presumably because they require some of the 'not thinking' skills as precursors. For instance, theorizing about the world, making up grand schemes, winning political struggles, and starting s...

6paulfchristiano
I think this is a very good question that should be asked more. I find it particularly important because of the example of automating research, which is probably the task I care most about. My own best guess is that the computational work that humans are doing while they do the "thinking" tasks is probably very minimal (compared to the computation involved in perception, or to the computation currently available). However, the task of understanding which computation to do in these contexts seems quite similar to the task of understanding which computation to do in order to play a good game of chess, and automating this still seems out of reach for now. So I guess I disagree somewhat with Knuth's characterization. I would be really curious to get the perspectives of AI researchers involved with work in the "thinking" domains.
5Sebastian_Hagen
Neither math research nor programming or debugging are being taken over by AI, so far, and none of those require any of the complicated unconscious circuitry for sensory or motor interfacing. The programming application, at least, would also have immediate and major commercial relevance. I think these activities are fairly similar to research in general, which suggests that what one would classically call the "thinking" parts remain hard to implement in AI.
3Ramana Kumar
They're not yet close to being taken over by AI, but there has been research on automating all of the above. Some possibly relevant keywords: automated theorem proving, and program synthesis.
3JonathanGossage
Programming and debugging, although far from trivial, are the easy part of the problem. The hard part is determining what the program needs to do. I think that the coding and debugging parts will not require AGI levels of intelligence, however deciding what to do definitely needs at least human-like capacity for most non-trivial problems.
2KatjaGrace
I'm not sure what you mean when you say 'determining what the program needs to do' - this sounds very general. Could you give an example?
0LeBleu
Most programming is not about writing the code; it is about translating a human description of the problem into a computer description of the problem. This is also why all attempts so far to make a system so simple that "non-programmers" can program it have failed. The difficult aptitude for programming is the ability to think abstractly and systematically, and recognize what parts of a human description of the problem need to be translated into code, and what unspoken parts also need to be translated into code.
0KatjaGrace
Do you mean that each time you do a research task, deciding how to do it is like making a program to play chess, rather than just designing a general system for research tasks being like designing a system for chess?
0Houshalter
I think by "things that require thinking" he means logical problems in well defined domains. Computers can solve logical puzzles much faster than humans, often through sheer brute force. From board games to scheduling to finding the shortest path. Of course there are counter examples like theorem proving or computer programming. Though they are improving and starting to match humans at some tasks.

Did you change your mind about anything as a result of this week's reading?

7Larks
This is an excellent question, and it is a shame (perhaps slightly damning) that no-one has answered it. On the other hand, much of this chapter will have been old material for many LW members. I am ashamed that I couldn't think of anything either, so I went back again looking for things I had actually changed my opinion about, even a little, and not merely because I hadn't previously thought about it.
  * p6: I hadn't realised how important combinatorial explosion was for early AI approaches.
  * p8: I hadn't realised, though I should have been able to work it out, that the difficulty of coming up with a language which matched the structure of the domain was a large part of the problem with evolutionary algorithms. Once you have done that, you're halfway to solving it by conventional means.
  * p17: I hadn't realised how high volume could have this sort of reflexive effect.
1KatjaGrace
Thanks for taking the time to think about it! I find your list interesting.
1PhilGoetz
Nope. Nothing new there for me. But he managed to say very little that I disagreed with, which is rare.
0[anonymous]
Related matter: who here has actually taken an undergraduate or graduate AI course?
2PhilGoetz
My PhD is "in" AI (though the diploma says Computer Science, I avoided that as much as possible), and I've TA'd three undergrad and graduate AI courses, and taught one. I triple-minored in psychology, neuroscience, and linguistics.
0[anonymous]
Thanks for saying so. There have been some comments by people who appeared to be surprised by the combinatorial explosion of state-spaces in GOFAI.
-1NxGenSentience
Not so much from the reading, or even from any specific comments in the forum -- though I learned a lot from the links people were kind enough to provide. But I did, through a kind of osmosis, remind myself that not everyone has the same thing in mind when they think of AI, AGI, human level AI, and still less, mere "intelligence." Despite the verbal drawing of the distinction between GOFAI and the spectrum of approaches being investigated and pursued today, I have realized by reading between the lines that GOFAI is still alive and well. Maybe it is not the primitive "production system" stuff of the Simon and Newell era, or programs written in LISP or Prolog (both of which I coded in, once upon a time), but there are still a lot of people who don't much care about what I would call "real consciousness", and are still taking a Turing-esque, purely operationalistic, essentially logical positivistic approach to "intelligence." I am passionately pro-AI. But for me, that means I want more than anything to create a real conscious entity, that feels, has ideas, passions, drives, emotions, loyalties, ideals. Most of even neurology has moved beyond the positivistic "there is only behavior, and we don't talk about consciousness", to actively investigating the function, substrate, neural realization of, evolutionary contribution of, etc, consciousness, as opposed to just the evolutionary contribution of non-conscious information processing, to organismic success. Look at Damasio's work, showing that emotion is necessary for full spectrum cognitive skill manifestation. The thinking-feeling dichotomy is rapidly falling out of the working worldview, and I have been arguing for years that there are fallacious categories we have been using, for other reasons. This is not to say that nonconscious "intelligent" systems are not here, evolving, and potentially dangerous. Automated program trading on the financial markets is potentially dangerous. So there is still great ut
0PhilGoetz
There is a way to arrive at this thru Damasio's early work, which I don't think is highlighted by saying that emotion is needed for human-level skill. His work in the 1980s was on "convergence zones". These are hypothetical areas in the brain that are auto-associative networks (think a Hopfield network) with bi-directional connections to upstream sensory areas. His notion is that different sensory (and motor? I don't remember now) areas recognize sense-specific patterns (e.g., the sound of a dog barking, the image of a dog, the word "dog", the sound of the word "dog", the movement one would make against an attacking dog), and the pattern these create in the convergence zone represents the concept "dog". This makes a lot of sense and has a lot of support from studies, but a consequence is that humans don't use logic. A convergence zone is set there, in one physical hunk of brain, with no way to move its activation pattern around in the brain. That means that the brain's representations do not use variables the way logic does. A pattern in a CZ might be represented by the variable X, and could take on different values such as the pattern for "dog". But you can't move that X around in equations or formulae. You would most likely have a hard-wired set of basic logic rules, and the concept "dog" as used on the left-hand side of a rule would be a different concept than the concept "dog" used on the right-hand side of the same rule. Hence, emotions are important for humans, but this says nothing about whether emotions would be needed for an agent that could use logic.

The chapter gives us a reasonable qualitative summary of what has happened in AI so far. It would be interesting to have a more quantitative picture, though this is hard to get. e.g. How much better are the new approaches than the old ones, on some metric? How much was the area funded at different times? How much time has been spent on different things? How has the economic value of the outputs grown over time?

5Larks
Yes. On the most mundane level, I'd like something a bit more concrete about the AI winters. Frequently in industries there is a sense that now is a good time or a bad time, but often this subjective impression does not correlate very well with the actual data. And when it does, it is rarely very sensitive to magnitude.

The 'optimistic' quote from the Dartmouth Conference seems ambiguous in its optimism to me. They say 'a significant advance can be made in one or more of these problems', rather than that any of them can be solved (as they are often quoted as saying). What constitutes a 'significant advance' varies with optimism, so their statement seems consistent with them believing they can make an arbitrarily small step. The whole proposal is here, if anyone is curious about the rest.

2lukeprog
Off the top of my head I don't recall, but I bet Machines Who Think has detailed coverage of those early years and can probably shed some light on how much advance the Dartmouth participants expected.

AI seems to be pretty good at board games relative to us. Does this tell us anything interesting? For instance, about the difficulty of automating other kinds of tasks? How about the task of AI research? Some thoughts here.

8rlsj
For anything whose function and sequencing we thoroughly understand, the programming is straightforward and easy, at least in the conceptual sense. That covers most games, including video games. The computer's "side" in a video game, for example, which looks conceptually difficult, most of the time turns out logically to be only decision trees. The challenge is the tasks we can't precisely define, like general intelligence. The rewarding approach here is to break down processes into identifiable subtasks. A case in point is understanding natural languages, one of whose essential questions is, "What is the meaning of 'meaning'?" In terms of a machine it can only be the content of a subroutine or pointers to subroutines. The input problem, converting sentences into sets of executable concepts, is thus approachable. The output problem, however, converting unpredictable concepts into words, is much tougher. It may involve growing decision trees on the fly.
5AshokGoel
Thanks for the nice summary and the questions. I think it is worth noting that AI is good only at some board games (fully observable, deterministic games) and not at others (partially observable, non-deterministic games such as, say, Civilization).
7paulfchristiano
Do you know of a partially observable game for which AI lags behind humans substantially? These examples are of particular interest to me because they would significantly revise my understanding of what problems are hard and easy. The most prominent partial-information games that I know of are Bridge and Poker, and AIs can now win at both of these (which in fact proved to be much easier than the classic deterministic games). Backgammon is random, and also turned out to be relatively easy--in fact the randomness itself is widely considered to have made the game easy for computers! Scrabble is the other example that comes to mind, where the situation is the same. For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.
6Kaj_Sotala
Agreed. It's not Civilization, but Starcraft is also partially observable and non-deterministic, and a team of students managed to bring their Starcraft AI to the level of being able to defeat a "top 16 in Europe"-level human player after only a "few months" of work. The game AIs for popular strategy games are often bad because the developers don't actually have the time and resources to make a really good one, and it's not a high priority anyway - most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.
6Lumifer
In RTS games an AI has a large built-in advantage over humans because it can micromanage so much better. That's a very valid point: a successful AI in a game is the one which puts up a decent fight before losing.
1Liso
Have you played this type of game? I think that if you played on a big map (freeciv supports really huge ones) then your goals (like in the real world) could be better fulfilled if you play WITH (not against) the AI. For example, managing 5 thousand engineers manually could take several hours per round. You could contemplate more concepts (for example geometric growth, a metastasis method of spreading civilisation, etc., and for sure cooperation with some type of AI) in this game...
0cameroncowan
I think it would be easy to create a Civilization AI that would choose to grow on a certain path with a certain win-style in mind. So if the AI picks military win then it will focus on building troops and acquiring territory and maintaining states of war with other players. What might be hard is other win states like diplomatic or cultural because those require much more intuitive and nuanced decision making without a totally clear course of action.
5Larks
The popular AI mods for Civ actually tend to make the AIs less thematic - they're less likely to be nice to you just because of a thousand year harmonious and profitable peace, for example, and more likely to build unattractive but efficient Stacks of Doom. Of course there are selection effects on who installs such mods.
0lackofcheese
I think you're mostly correct on this. Sometimes difficult opponents are needed, but for almost all games that can be trivially achieved by making the AI cheat rather than improving the algorithms. That said, when playing a game vs an AI you do want the AI to at least appear to be intelligent; although humans can often be quite easy to fool with cheating, a good algorithm is still a better way of giving this appearance than a fake. It doesn't have to be optimal, and even if it is you can constrain it enough to make it beatable, or intentionally design different kinds of weaknesses into the AI so that humans can have fun looking for those weaknesses and feel good when they find them. Ultimately, though, the point is that the standard approach of having lots and lots of scripting still tends to get the job done, and developers almost never find the resource expenditure for good AI to be worthwhile. However, I think that genuinely superhuman AI in games like Starcraft and Civilization is far harder than you imply. For example, in RTS games (as Lumifer has said) the AI has a built-in advantage due to its capacity for micromanagement. Moreover, although the example you cite has an AI from a "few months" of work beating a high-level human player, I think that was quite likely to be a one-off occurrence. Beating a human once is quite different to consistently beating a human. If you look at the results of the AIIDE Man vs Machine matches, the top bots consistently lose every game to Bakuryu (the human representative). According to this report, it seems to me that the best AIs in these kinds of games work by focusing on a relatively narrow set of overall strategies, and then focusing on executing those strategies as flawlessly as possible. In something like Starcraft the AI's potential for this kind of execution is definitely superhuman, but as the Man vs Machine matches demonstrate this really isn't enough. In the case of the Civilization games, the fact that they aren'
0lackofcheese
I wouldn't say that poker is "much easier than the classic deterministic games", and poker AI still lags significantly behind humans in several regards. Basically, the strongest poker bots at the moment are designed around solving for Nash equilibrium strategies (of an abstracted version of the game) in advance, but this fails in a couple of ways:
  1. These approaches haven't really been extended past 2- or 3-player games.
  2. Playing a NE strategy makes sense if your opponent is doing the same, but your opponent almost always won't be. Thus, in order to play better, poker bots should be able to exploit weak opponents.
Both of these are rather nontrivial problems. Kriegspiel, a partially observable version of chess, is another example where the best humans are still better than the best AIs, although I'll grant that the gap isn't a particularly big one, and likely mostly has to do with it not being a significant research focus.
5gallabytes
Interestingly enough, a team at MIT managed to make an AI that learned how to play from the manual and proceeded to win 80% of its games against the AI, though I don't know which difficulty it was set to, or how the freeciv AI compares to the one in normal Civilization.
3ScottMessick
I was disappointed to see my new favorite "pure" game Arimaa missing from Bostrom's list. Arimaa was designed to be intuitive for humans but difficult for computers, making it a good test case. Indeed, I find it to be very fun, and computers do not seem to be able to play it very well. In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players. Arimaa's branching factor dwarfs that of Go (which in turn beats every other commonly known example). Since a super-high branching factor is also a characteristic feature of general AI test problems, I think it remains plausible that simple, precisely defined games like Arimaa are good test cases for AI, as long as the branching factor keeps the game out of reach of brute force search.
0Houshalter
Reportedly this just happened recently: http://games.slashdot.org/story/15/04/19/2332209/computer-beats-humans-at-arimaa Go is super close to being beaten, and AIs do very well against all but the best humans.
2TRIZ-Ingenieur
This summary of already superhuman game playing AIs has impressed me for the past two weeks. But only until yesterday. John McCarthy was quoted in Vardi (2012) as having said: "As soon as it works, no one calls it AI anymore." (p13) There is more truth in it than McCarthy expected: A tailor-made game-playing algorithm, developed and optimized by generations of scientists and software engineers, is no AI entity. It is an algorithm. Human beings analyzed the rule set, found abstractions of it, developed evaluation schemes and found heuristics to prune the uncomputably large search tree. With brute force and megawatts of computational evaluation power they managed to fill a database with millions of more or less favorable game situations. In direct competition of game-playing algorithm vs. human being, these pre-computed situations help to find short cuts in the tree search to achieve superhuman performance in the end. Is this entity an AI or an algorithm?
  1. Game concept development: human.
  2. Game rule definition and negotiation: human.
  3. Game rule abstraction and translation into computable form: human designed algorithm.
  4. Evaluation of game situation: human designed algorithm, computer aided optimization.
  5. Search tree heuristics: human designed algorithm, computer aided optimization.
  6. Database of favorable situations and moves: brute force tree search on massive parallel supercomputer.
  7. Detection of favorable situations: human designed algorithm for pattern matching, computer aided optimization.
  8. Active playing: fully automatic use of algorithms and information of points 3-7. No human being involved.
Unsupervised learning, search optimization and pattern matching (points 5-7) make this class of entities weak AIs. A human being playing against this entity will probably attribute intelligence to it. "Kasparov claims to have seen glimpses of true intelligence and creativity in some of the computer's moves" (p12, Newborn[2011]). But weak AI is not
1cameroncowan
Now we have something! We have something we can actually use! AI must be able to interact with emotional intelligence!
0lackofcheese
Although computers beat humans at board games without needing any kind of general intelligence at all, I don't think that invalidates game-playing as a useful domain for AGI research. The strength of AI in games is, to a significant extent, due to the input of humans in being able to incorporate significant domain knowledge into the relatively simple algorithms that game AIs are built on. However, it is quite easy to make game AI into a far, far more challenging problem (and, I suspect, a rather more widely applicable one)---consider the design of algorithms for general game playing rather than for any particular game. Basically, think of a game AI that is first given a description of the rules of the game it's about to play, which could be any game, and then must play the game as well as possible.
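As a toy illustration of the general-game-playing idea (not a real GGP system, which would consume a formal rule language such as GDL): the agent below sees a game only through a generic rules interface and plays by flat Monte Carlo, i.e. uniform random playouts. The Nim class is just an example game plugged into that interface; any other game exposing the same methods would work unchanged.

```python
# Toy sketch of "general game playing": the agent knows nothing about any
# specific game; it only sees an abstract rules interface and evaluates
# moves by uniform random playouts (flat Monte Carlo). Illustrative only.
import random

class Nim:
    """Example game behind the generic interface: two players take 1-3
    stones from a heap; whoever takes the last stone wins."""
    def initial_state(self):
        return (15, 0)                      # (stones left, player to move)
    def legal_moves(self, state):
        stones, _ = state
        return [n for n in (1, 2, 3) if n <= stones]
    def next_state(self, state, move):
        stones, player = state
        return (stones - move, 1 - player)
    def is_terminal(self, state):
        return state[0] == 0
    def score(self, state, player):
        # The last stone was taken by whoever is NOT to move at the terminal state.
        return 1.0 if state[1] != player else 0.0

def choose_move(game, state, player, playouts=200):
    """Generic agent: pick the legal move with the best average playout score."""
    def playout(s):
        while not game.is_terminal(s):
            s = game.next_state(s, random.choice(game.legal_moves(s)))
        return game.score(s, player)
    return max(game.legal_moves(state),
               key=lambda m: sum(playout(game.next_state(state, m))
                                 for _ in range(playouts)))

game = Nim()
print("Suggested first move:", choose_move(game, game.initial_state(), player=0))
```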
0cameroncowan
It tells us that within certain bounds computers can excel at tasks. I think in the near term that means computers will continue to excel at certain tasks like personal assistants, factory labor, menial tasks, and human-aided tasks.

How much smarter than a human could a thing be? (p4) How about the same question, but using no more energy than a human? What evidence do we have about this?

3leplen
The problem is that intelligence isn't a quantitative measure. I can't measure "smarter". If I just want to know about the number of computations: estimating that the human brain performs 10^14 operations/second, a machine operating at the Landauer limit would require about 0.3 microwatts to perform the same number of operations at room temperature. The human brain uses something like 20 watts of energy (0.2*2000 calories/24 hours). If that energy were used to perform computations at the Landauer limit, then computational performance would increase by a factor of roughly 6.5*10^7, to approximately 6.5*10^21 operations per second. But this only provides information about compute power. It doesn't tell us anything about intelligence.
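For anyone who wants to check the arithmetic, here is the back-of-the-envelope calculation. It assumes ~10^14 brain operations per second (itself a rough, contested estimate), one irreversible bit operation per "operation", and T = 300 K; at those assumptions the factor comes out around 7*10^7, the same ballpark as the figure above, with the exact number depending on the assumed temperature.

```python
# Back-of-the-envelope check of the Landauer-limit figures above.
# Assumes ~1e14 brain "operations" per second (a rough, contested estimate)
# and one bit erasure per operation at room temperature.
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0                # room temperature, K
landauer_J_per_op = k_B * T * math.log(2)   # ~2.9e-21 J per irreversible bit operation

brain_ops_per_s = 1e14
brain_power_W = 20.0     # ~20% of 2000 kcal/day

print("Power for 1e14 ops/s at Landauer limit: %.1e W" % (brain_ops_per_s * landauer_J_per_op))
print("Ops/s from 20 W at Landauer limit:      %.1e" % (brain_power_W / landauer_J_per_op))
print("Speedup factor over the brain estimate: %.1e" % (brain_power_W / landauer_J_per_op / brain_ops_per_s))
```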
0cameroncowan
Intelligence can be defined as the ability to use knowledge and experience together to create new solutions for problems and situations. Intelligence is about using resources, regardless of computational power. Intelligence is as simple as my browser remembering a password (which I don't let it do): it is able to recognize a website and pull the applicable data to auto-fill and log in. That is a kind of primitive intelligence.
2mvp9
Another way to get at the same point, I think, is: are there things that we (contemporary humans) will never understand (from a Quora post)? I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today - or comparing the earliest recorded examples of reasoning in history to that of modernity. My intuition is that there are many concepts (quantum physics is a popular example, though I'm not sure it's a good one) that most people today, and certainly in the past, will never comprehend, at least without massive amounts of effort, and possibly even then. They simply require too much raw cognitive capacity to appreciate. This is at least implicit in the Singularity hypothesis. As to the energy issue, I don't see any reason to think that such super-human cognition systems necessarily require more energy - though they may at first.
3paulfchristiano
I am generally quite hesitant about using the differences between humans as evidence about the difficulty of AI progress (see here for some explanation). But I think this comparison is a fair one in this case, because we are talking about what is possible rather than what will be achieved soon. The exponentially improbable tails of the human intelligence distribution are a lower bound for what is possible in the long run, even without using any more resources than humans use. I do expect the gap between the smartest machines and the smartest humans to eventually be much larger than the gap between the smartest human and the average human (on most sensible measures).
3billdesmedt
Actually, wrt quantum mechanics, the situation is even worse. It's not simply that "most people ... will never comprehend" it. Rather, per Richard Feynman (inventor of Feynman diagrams, and arguably one of the 20th century's greatest physicists), nobody will ever comprehend it. Or as he put it, "If you think you understand quantum mechanics, you don't understand quantum mechanics." (http://en.wikiquote.org/wiki/Talk:Richard_Feynman#.22If_you_think_you_understand_quantum_mechanics.2C_you_don.27t_understand_quantum_mechanics..22)
4paulfchristiano
I object (mildly) to this characterization of quantum mechanics. What notion of "understand" do we mean? I can use quantum mechanics to make predictions, I can use it to design quantum mechanical machines and protocols, I can talk philosophically about what is "going on" in quantum mechanics to more or less the same extent that I can talk about what is going on in a classical theory. I grant there are senses in which I don't understand this concept, but I think the argument would be more compelling if you could make the same point with a clearer operationalization of "understand."
1mvp9
I'll take a stab at it. We are now used to saying that light is both a particle and a wave. We can use that proposition to make all sorts of useful predictions and calculations. But if you stop and really ponder that for a second, you'll see that it is so far out of the realm of human experience that one cannot "understand" that dual nature in the sense that you "understand" the motion of planets around the sun. "Understanding" in the way I mean is the basis for making accurate analogies and insight. Thus I would argue Kepler was able to use light as an analogy to 'gravity' because he understood both (even though he didn't yet have the math for planetary motion). Perhaps an even better example is the idea of quantum entanglement: theory may predict, and we may observe, particles "communicating" at a distance faster than light, but (for now at least) I don't think we have really incorporated it into our (pre-symbolic) conception of the world.
8paulfchristiano
I grant that there is a sense in which we "understand" intuitive physics but will never understand quantum mechanics. But in a similar sense, I would say that we don't "understand" almost any of modern mathematics or computer science (or even calculus, or how to play the game of go). We reason about them using a new edifice of intuitions that we have built up over the years to deal with the situation at hand. These intuitions bear some relationship to what has come before but not one as overt as applying intuitions about "waves" to light. As a computer scientist, I would be quick to characterize this as understanding! Moreover, even if a machine's understanding of quantum mechanics is closer to our idea of intuitive physics (in that they were built to reason about quantum mechanics in the same way we were built to reason about intuitive physics), I'm not sure this gives them more than a quantitative advantage in the efficiency with which they can think about the topic. I do expect them to have such advantages, but I don't expect them to be limited to topics that are at the edge of humans' conceptual grasp!
0cameroncowan
I think robots will have far more trouble understanding fine nuances of language, behavior, empathy, and teamwork. I think quantum mechanics will be easy overall. It's things like emotional intelligence that will be hard.
4pragmatist
The apparent mystery in particle-wave dualism is simply an artifact of using bad categories. It is a misleading historical accident that we hear things like "light is both a particle and a wave" in quantum physics lectures. Really what teachers should be saying is that 'particle' and 'wave' are both bad ways of conceptualizing the nature of microscopic entities. It turns out that the correct representation of these entities is neither as particles nor as waves, traditionally construed, but as quantum states (which I think can be understood reasonably well, although there are of course huge questions regarding the probabilistic nature of observed outcomes). It turns out that in certain experiments quantum states produce outcomes similar to what we would expect from particles, and in other experiments they produce outcomes similar to what we would expect from waves, but that is surely not enough to declare that they are both particles and waves. I do agree with you that entanglement is a bigger conceptual hurdle.
1KatjaGrace
If there are insights that some humans can't 'comprehend', does this mean that society would never discover certain facts had the most brilliant people not existed, or just that they would never be able to understand them in an intuitive sense?
5Paul Crowley
There are people in this world who will never understand, say, the P?=NP problem no matter how much work they put into it. So to deny the above you'd have to say (along with Greg Egan) that there is some sort of threshold of intelligence akin to "Turing completeness" that only some of humanity has reached, but that once you reach it nothing is in principle beyond your comprehension. That doesn't seem impossible, but it's far from obvious.
4DylanEvans
I think this is in fact highly likely.
4Paul Crowley
I can see some arguments in favour. We evolve along for millions of years and suddenly, bang, in 50ka we do this. It seems plausible we crossed some kind of threshold - and not everyone needs to be past the threshold for the world to be transformed. OTOH, the first threshold might not be the only one.
3owencb
David Deutsch argues for just such a threshold in his book The Beginning of Infinity. He draws on analogies with "jumps to universality" that we see in several other domains.
1KatjaGrace
If some humans achieved any particular threshold of anything, and meeting the threshold was not strongly selected for, I might expect there to always be some humans who didn't meet it.
1rlsj
"Does this mean that society would never discover certain facts had the most brilliant people not existed?" Absolutely! If they or their equivalent had never existed in circumstances of the same suggestiveness. My favorite example of this uniqueness is the awesome imagination required first to "see" how stars appear when located behind a black hole -- the way they seem to congregate around the event horizon. Put another way: the imaginative power able to propose star deflections that needed a solar eclipse to prove.
0cameroncowan
I think a variety of things would have gone unsolved without smart people at the right place and time with the right expertise to solve tremendous problems like measuring the density of an object or learning construction, or how to create a sail that allows ships to sail into the wind.
1rcadey
"How much smarter than a human could a thing be?" - almost infinitely if it consumed all of the known universe "How about the same question, but using no more energy than a human?" -again the same answer - assuming we assume intelligence to be computable, then no energy is required (http://www.research.ibm.com/journal/rd/176/ibmrd1706G.pdf) if we use reversible computing. Once we have an AI that is smarter than a human then it would soon design something that is smarter but more efficient (energy wise)?
5leplen
This link appears not to work, and it should be noted that "zero-energy" computing is at this point predominantly a thought experiment. A "zero-energy" computer would have to operate in the adiabatic limit, which is the technical term for "infinitely slowly."
1KatjaGrace
Anders Sandberg has some thoughts on physical limits to computation which might be relevant, but I admit I haven't read them yet: http://www.jetpress.org/volume5/Brains2.pdf
0cameroncowan
I think that is hard to balance because of the energy required for computations.

What is the relationship between economic growth and AI? (Why does a book about AI begin with economic growth?)

4Paul Crowley
I don't think it's really possible to make strong predictions about the impact of AI by looking at the history of economic growth. This introduction sets the reader's mind onto subjects of very large scope: the largest events over the entirety of human history. It reminds the reader that very large changes have already happened in history, so it would be a mistake to be very confident that there will never again be a very large change. And, frankly, it underlines the seriousness of the book by talking about what is uncontroversially a Serious Topic, so that they are less likely to think of machines taking over the world as a frivolous idea when it is raised.
3AshokGoel
I haven't read the book yet, but based on the summary here (and for what it is worth), I found the jump from points 1-5 under economic growth above to point 6 a little unconvincing.
4mvp9
I find the whole idea of predicting AI-driven economic growth based on analysis of all of human history as a single set of data really unconvincing. It is one thing to extrapolate up-take patterns of a particular technology based on similar technologies in the past. But "true AI" is so broad, and, at least on many accounts, so transformative, that making such macro-predictions seems a fool's errand.
6KatjaGrace
If you knew AI to be radically more transformative than other technologies, I agree that predictions based straightforwardly on history would be inaccurate. If you are unsure how transformative AI will be, though, it seems to me helpful to look at how often other technologies have made a big difference, and how much of a difference they have made. I suspect many technologies would have seemed transformative ahead of time - e.g. writing - but seem to have made little difference to economic growth.
2paulfchristiano
Here is another way of looking at things:
1. From the inside, it looks like automating the process of automation could lead to explosive growth.
2. Many simple endogenous growth models, if taken seriously, predict explosive growth in finite time (including the simplest ones; see the toy calculation below).
3. A straightforward extrapolation of historical growth suggests explosive growth in the 21st century (depending on whether you read the great stagnation as a permanent change or a temporary fluctuation).
You might object to any one of these lines of argument on its own, but taken together the story seems compelling to me (at least if one wants to argue "We should take seriously the possibility of explosive growth.")
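A toy illustration of point 2, with made-up parameters and no pretence of being any particular economist's model: if output grows slightly faster than linearly in its own level, the solution diverges at a finite time, whereas the eps = 0 case is ordinary exponential growth with no blow-up.

```python
# Toy illustration of "simple endogenous growth models predict explosive
# growth in finite time" (parameters invented, not calibrated to anything):
# dY/dt = a * Y**(1 + eps) with eps > 0 has the closed-form solution
#   Y(t) = (Y0**(-eps) - a*eps*t)**(-1/eps),
# which diverges at the finite time t* = 1 / (a * eps * Y0**eps).
a, eps, Y0 = 0.02, 0.1, 1.0
t_star = 1.0 / (a * eps * Y0**eps)
print("blow-up time t* =", t_star)          # 500.0 with these toy parameters
for frac in (0.5, 0.9, 0.99, 0.999):
    t = frac * t_star
    Y = (Y0**(-eps) - a * eps * t) ** (-1.0 / eps)
    print("t = %6.1f (%.1f%% of t*): Y = %.3g" % (t, 100 * frac, Y))
```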
3mvp9
Oh, I completely agree with the prediction of explosive growth (or at least its strong likelihood), I just think (1) or something like it is a much better argument than 2 or 3.
1KatjaGrace
Would you care to elaborate?
0cameroncowan
A good AI would cause explosive economic growth in a variety of areas. We lose a lot of money to human error and so on.

If you don't know what causes growth mode shifts, but there have been two or three of them and they seem kind of regular (see Hanson 2000, p14), how likely do you think another one is? (p2) How much evidence do you think history gives us about the timing and new growth rate of a new growth mode?

What did you find least persuasive in this week's reading?

"The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by." (p4)

There isn't much justification for this claim near where it's made. I could imagine it causing a reader to think that the author is prone to believing important things without much evidence -- or that he expects his readers to do so.

(It might help if the author noted that the topic is discussed in Chapter 4)

8billdesmedt
Not "least persuasive," but at least a curious omission from Chapter 1's capsule history of AI's ups and downs ("Seasons of hope and despair") was any mention of the 1966 ALPAC report, which singlehandedly ushered in the first AI winter by trashing, unfairly IMHO, the then-nascent field of machine translation.

How large a leap in cognitive ability do you think occurred between our last common ancestor with the great apes, and us? (p1) Was it mostly a change in personal intelligence, or could human success be explained by our greater ability to accumulate knowledge from others in society? How can we tell how much smarter, in the relevant sense, a chimp is than a human? This chapter claims Koko the Gorilla has a tested IQ of about 80 (see table 2).

What can we infer from answers to these questions?

4gallabytes
I would bet heavily on the accumulation. National average IQ has been going up by about 3 points per decade for quite a few decades, so there have been times when Koko's score might well have been above the human average. Now, I'm more inclined to say that this doesn't mean great things for the IQ test overall, but I put enough trust in it to say that it's not differences in intelligence that prevented the gorillas from reaching the prominence of humans. It might have slowed them down, but given this data it shouldn't have kept them pre-Stone-Age. Given that the most unique aspect of humans relative to other species seems to be the use of language to pass down knowledge, I don't know what else it really could be. What other major things do we have going for us that other animals don't?
5Paul Crowley
I think what controls the rate of change is the intelligence of the top 5%, not the average intelligence.
4gallabytes
Sure, but I still think that if you elevated the intelligence of a group of chimps to the top 5% of humanity without adding some better form of communication and idea accumulation, it wouldn't matter much. If Newton were born in ancient Egypt, he might have made some serious progress, but he almost certainly wouldn't have discovered calculus and classical mechanics. Being able to stand on the shoulders of giants is really important.
3JonathanGossage
I think that language plus our acquisition of the ability to make quasi-permanent records of human utterances are the biggest differentiators.
2kgalias
Do you think this is a sensible view?
1gallabytes
Eh, not especially. IIRC, scores have also had to be renormalized on Stanford-Binet and Wechsler tests over the years. That said, I'd bet it has some effect, but I'd be much more willing to bet on less malnutrition, less beating / early head injury, and better public health allowing better development during childhood and adolescence. That said, I'm very interested in any data that points to other causes behind the Flynn effect, so if you have any to post, don't hesitate.
2kgalias
I'm just trying to make sure I understand - I remember being confused about the Flynn effect and about what Katja asked above. How does the Flynn effect affect our belief in the hypothesis of accumulation?
3gallabytes
It just means that the intelligence gap was smaller, potentially much, much smaller, when humans first started developing a serious edge relative to apes. It's not evidence for accumulation per se, but it's evidence against us just being so much smarter from the get-go, and after renormalization it functions very much like evidence for accumulation.
0cameroncowan
I think it was the ability to work together thanks to omega-3s from eating fish among other things. Our ability to create a course of action and execute as a group started us on the path to the present day.

Comments:

  • It would be nice for at least one futurist who shows a graph of GDP to describe some of the many, many difficulties in comparing GDP across years, and to talk about the distribution of wealth. The power-law distribution of wealth means that population growth without a shift in the wealth distribution can look like an exponential increase in wealth, while actually the wealth of all but the very wealthy must decrease to preserve the same distribution. Arguably, this has happened repeatedly in American history.

  • I was very glad Nick mentioned that

... (read more)
0Houshalter
I remember there was a paper co-authored by one of the inventors of genetic algorithms. They tried to come up with a toy problem that would show where genetic algorithms definitely beat hill climbing. The problem they came up with was extremely contrived, and with a slight modification to make hill climbing a little less greedy, it worked just as well as or better than the GA. We are just starting to see ML successfully applied to search problems. There was a paper on deep neural networks that predicted the moves of Go experts 45% of the time. Another paper found deep learning could significantly narrow the search space for automatically finding mathematical identities. Reinforcement learning - which is just heuristic search, but very general - is becoming increasingly popular.
0Lumifer
Simulated annealing is another similar class of optimizers with interesting properties. As for standard hill climbing with multiple starts, it fails in the presence of a large number of local optima: if your error landscape is lots of small hills, each restart will get you to the top of the nearest small hill, but you might never get to that large range in the corner of your search space. In any case, most domains have their own characteristics or peculiarities which make certain search algorithms perform well and others perform badly. Often enough, domain-specific tweaks can improve things greatly compared to the general case...
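For concreteness, here is a compact sketch of the two optimizers being discussed, run on an invented rugged 1-D landscape (many small hills plus one tall peak off in a corner). Everything here - the landscape, step sizes, schedules - is made up for illustration; which method wins on a real problem depends entirely on the domain, which is rather the point.

```python
# Toy comparison on a rugged 1-D landscape. Invented for illustration only;
# not a benchmark of real optimizers or real ML workloads.
import math, random

random.seed(0)

def f(x):
    # Lots of small hills everywhere, one big hill near x = 9.
    return math.sin(8 * x) + 3.0 * math.exp(-(x - 9.0) ** 2)

def hill_climb(x, step=0.05, iters=2000):
    for _ in range(iters):
        cand = min(10.0, max(0.0, x + random.uniform(-step, step)))
        if f(cand) > f(x):            # greedy: only accept improvements
            x = cand
    return x

def simulated_annealing(x, step=0.5, iters=20000, T0=2.0):
    for i in range(iters):
        T = T0 * (1 - i / iters) + 1e-9
        cand = min(10.0, max(0.0, x + random.uniform(-step, step)))
        delta = f(cand) - f(x)
        if delta > 0 or random.random() < math.exp(delta / T):
            x = cand                  # sometimes accept downhill moves
    return x

restarts = [hill_climb(random.uniform(0, 10)) for _ in range(20)]
best_hc = max(restarts, key=f)
best_sa = simulated_annealing(random.uniform(0, 10))
print("random-restart hill climbing: x = %.2f, f = %.2f" % (best_hc, f(best_hc)))
print("simulated annealing:          x = %.2f, f = %.2f" % (best_sa, f(best_sa)))
```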

In fact, for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th century! The time since 1950 has apparently been strange.

(Forgive me for playing with anthropics, a tool I do not understand) - maybe something happened to all the observers in worlds that didn't see stagnation set in during the '70s. I guess this is similar to the common joke 'explanation' for the bursting of the tech bubble.

With respect to "Growth of Growth", wouldn't the chart ALWAYS look like that, with the near end trailing downwards? The sampling time is decreasing logarithmically, so unless you are sitting right on top of the singularity/production revolution, it should always look like that.

Just got thinking about what happened around 1950 specifically and couldn't find any real reason for it to drop off right there. WWII was well over, and the gold exchange standard remained for another 21 years, and those are the two primary framing events for that timeframe, so far as I can tell.

Without the benefit of hindsight, which past technologies would you have expected to make a big difference to human productivity? For example, if you think that humans' tendency to share information through language is hugely important to their success, then you might expect the printing press to help a lot, or the internet.

Relatedly, if you hadn't already been told, would you have expected agriculture to be a bigger deal than almost anything else?

3Lumifer
That's an impossible question -- we have no capability to generate clones of ourselves with no knowledge of history. The only thing you can get as answers are post-factum stories. An answerable version would be "which past technologies at that time they appeared did people expect to be a big deal or no big deal?" But that answer requires a lot of research, I think.
1Paul Crowley
I don't understand this question!
1KatjaGrace
Sorry! I edited it - tell me if it still isn't clear.
1Paul Crowley
I'm afraid I'm still confused. Maybe it would help if you could make explicit the connection between this question and the underlying question you're hoping to shed light on!
6gjm
In case it helps, here is what I believe to be a paraphrase of the question. "Consider technological developments in the past. Which of them, if you'd been looking at it at the time without knowing what's actually come of it, would you have predicted to make a big difference?" And my guess at what underlies it: "We are trying to evaluate the likely consequences of AI without foreknowledge. It might be useful to have an idea of how well our predictions match up to reality. So let's try to work out what our predictions would have been for some now-established technologies, and see how they compare with how they actually turned out." To reduce bias one should select the past technologies in a way that doesn't favour ones that actually turned out to be important. That seems difficult, but then so does evaluating them while suppressing what we actually know about what consequences they had...
1KatjaGrace
Yes! That's what I meant. Thank you :)
1rlsj
Please, Madam Editor: "Without the benefit of hindsight," what technologies could you possibly expect? The question should perhaps be, What technology development made the greatest productive difference? Agriculture? IT? Et alia? "Agriculture" if your top appreciation is for quantity of people, which admittedly subsumes a lot; IT if it's for positive feedback in ideas. Electrification? That's the one I'd most hate to lose.
0cameroncowan
I think people greatly underestimated beasts of burden and the wheel. We can see that from cultures that didn't have the wheel, like the Incas. Treating disease was largely unrealized until the modern age; it was not as important to the ancients. I also think labor-saving technology in the 19th century was underappreciated - I don't think people realized how much less manual labor we would need within their own lifetimes. There are so many small things that had a huge impact, like the mould-board plow that made farming in North America possible.
0PhilGoetz
Interesting example, because agriculture decreased the productivity and quality of life of most humans by letting them make more humans. AIs may foresee and prevent tragedies of the commons - such as agriculture, or the proliferation of AIs - that would otherwise be on the most direct route to intelligence explosion.

I'm curious whether any of you feel that future widespread use of commercial-scale quantum computing (here I am thinking of at least thousands of quantum computers in the private domain, with a multitude of programs already written, tested, available, economical and functionally useful) will have any impact on the development of strong A.I. Has anyone read or written any literature with regard to the potential windfalls this could bring to A.I.'s advancement (or lack thereof)?

I'm also curious if other paradigm shifting computing technologies could rapidly accelerate the path toward superintelligence?

4paulfchristiano
Based on the current understanding of quantum algorithms, I think the smart money is on a quadratic (or sub-quadratic) speedup from quantum computers on most tasks of interest for machine learning. That is, rather than taking N^2 time to solve a problem, it can be done in N time. This is true for unstructured search and now for an increasing range of problems that will quite possibly include the kind of local search that is the computational bottleneck in much modern machine learning. Much of the work of serious quantum algorithms people is spreading this quadratic speedup to more problems.

In the very long run quantum computers will also be able to go slightly further than classical computers before they run into fundamental hardware limits (this is beyond the quadratic speedup). I think they should not be considered as fundamentally different than other speculative technologies that could allow much faster computing; their main significance is increasing our confidence that the future will have much cheaper computation.

I think what you should expect to see is a long period of dominance by classical computers, followed eventually by a switching point where quantum computers pass their classical analogs. In principle you might see faster progress after this switching point (if you double the size of your quantum computer, you can do a brute force search that is 4 times as large, as opposed to twice as large with a classical computer), but more likely this would be dwarfed by other differences which can have much more than a factor of 2 effect on the rate of progress. This looks likely to happen long after growth has slowed for the current approaches to building cheaper classical computers.

For domains that experience the full quadratic speedup, I think this would allow us to do brute force searches something like 10-20 orders of magnitude larger before hitting fundamental physical limits. Note that D-wave and its ilk are unlikely to be relevant to this story; w
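A crude way to restate what a quadratic speedup buys for brute-force search, as arithmetic only (the budgets are arbitrary and this says nothing about any particular algorithm): with a fixed budget of elementary operations, a classical search examines about that many candidates, while a Grover-type search covers about the square of that number.

```python
# What a quadratic (Grover-type) speedup means for brute-force search:
# with a budget of K elementary operations, a classical search examines ~K
# candidates, while a quantum search covers ~K**2. Budgets are arbitrary.
for budget in (1e9, 1e12, 1e15):
    print("budget %.0e ops: classical ~%.0e candidates, quadratic-speedup ~%.0e"
          % (budget, budget, budget**2))
```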
3passive_fist
I've worked on the D-Wave machine (in that I've run algorithms on it - I haven't actually contributed to the design of the hardware). About that machine, I have no idea if it's eventually going to be a huge deal faster than conventional hardware. It's an open question. But if it can, it would be huge, as a lot of ML algorithms can be directly mapped to D-Wave hardware. It seems like a perfect fit for the sort of stuff machine learning researchers are doing at the moment.

About other kinds of quantum hardware, their feasibility remains to be demonstrated. I think we can say with fair certainty that there will be nothing like a 512-qubit fully-entangled quantum computer (what you'd need to, say, crack the basic RSA algorithm) within the next 20 years at least. Personally I'd put my money on >50 years in the future. The problems just seem too hard; all progress has stalled; and every time someone comes up with a way to try to solve them, it just results in a host of new problems. For instance, topological quantum computers were hot a few years ago since people thought they would be immune to some types of decoherence. As it turned out, though, they just introduce sensitivity to new types of decoherence (thermal fluctuations). When you do the math, it turns out that you haven't actually gained much by using a topological framework, and further you can simulate a topological quantum computer on a normal one, so really a TQC should be considered as just another quantum error correction algorithm, of which we already know many.

All indications seem to be that by 2064 we're likely to have a human-level AI. So I doubt that quantum computing will have any effect on AI development (or at least development of a seed AI). It could have a huge effect on the progression of AI, though.
3TRIZ-Ingenieur
Our human cognition is mainly based on pattern recognition (compare Ray Kurzweil, "How to Create a Mind"). Information stored in the structures of our cranial neural network sometimes waits for decades until a trigger stimulus makes a pattern recognizer fire. Huge amounts of patterns can be stored while most pattern recognizers are in sleeping mode, consuming very little energy. Quantum computing with decoherence times on the order of seconds is totally unsuitable for the synergistic task of pattern analysis and long-term pattern memory with millions of patterns. IBM's newest SyNAPSE chip, with 5.4 billion transistors on a 3.5 cm² die and only 70 mW power consumption in operation, is far better suited to push technological development towards AI.
1KatjaGrace
What are the indications you have in mind?
2passive_fist
Katja, that's a great question, and highly relevant to the current weekly reading sessions on Superintelligence that you're hosting. As Bostrom argues, all indications seem to be that the necessary breakthroughs in AI development can at least be seen over the horizon, whereas in my opinion (and I'm an optimist) general quantum computing seems to need much bigger breakthroughs.
1NxGenSentience
From what I have read in open-source science and tech journals and news sources, general quantum computing seems to be coming faster than the time frame you suggested. I wouldn't be surprised to see it as soon as 2024 in prototypical, alpha or beta testing, and I think it a safe bet by 2034 for wider deployment. As to very widespread adoption, perhaps a bit later; and with respect to efforts by governments to control the tech for security reasons, perhaps also... later here, earlier there.
1passive_fist
Scott Aaronson seems to disagree: http://www.nytimes.com/2011/12/06/science/scott-aaronson-quantum-computing-promises-new-insights.html?_r=3&ref=science&pagewanted=all& FTA: "The problem is decoherence... In theory, it ought to be possible to reduce decoherence to a level where error-correction techniques could render its remaining effects insignificant. But experimentalists seem nowhere near that critical level yet... useful quantum computers might still be decades away"
1NxGenSentience
Hi, and thanks for the link. I just read the entire article, which was good for a general news piece and correspondingly not definitive (therefore, I'd consider it journalistically honest) about the time frame. "...might be decades away..." and "...might not really see them in the 21st century..." come to mind as lower and upper estimates. I don't want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research. But I still say I have found a significant percentage of articles (those Nature summary sites), PubMed (oddly, lots of "physical sciences" have journals on there now too) and "smart layman" publications like New Scientist and the SciAm news site which continue to have mini-stories about groups nibbling away at the decoherence problem, and finding approaches that don't require supercooled, exotic vacuum chambers (some even working with the possibility of chips). If 10 percent of these stories have legs and aren't hype, that would mean I have read dozens which might yield prototypes in a 10-20 year window. The Google - NASA - UCSB joint project seems pretty near term (i.e. not 40 or 50 years down the road). Given Google's penchant for quietly working away and then doing something amazing the world thought was a generation away - like unveiling the driverless cars that the Governor and legislature of Michigan (as in, of course, Detroit) are in the process of licensing for larger-scale production and deployment - it wouldn't surprise me if one popped up in 15 years that could begin doing useful work. Then it's just daisy-chaining, and parallelizing with classical supercomputers doing error correction, pre-forming datasets to exploit what QCs do best, and interleaving that with conventional techniques. I don't think 2034 is overly optimistic. But, caveat revealed, I am not in the field doing the work, just reading what I can about it. I am m
2passive_fist
Part of the danger of reading those articles as someone who is not actively involved in the research is that one gets an overly optimistic impression. They might say they achieved X, without saying they didn't achieve Y and Z. That's not a problem from an academic integrity point of view, since not being able to do Y and Z would be immediately obvious to someone versed in the field. But every new technique comes with a set of tradeoffs, and real progress is much slower than it might seem.
2lukeprog
I've seen several papers like "Quantum speedup for unsupervised learning" but I don't know enough about quantum algorithms to have an opinion on the question, really.
1lukeprog
Another paper I haven't read: "Can artificial intelligence benefit from quantum computing?"
1NxGenSentience
Luke, thanks for posting the link. It's an April 2014 paper, as you know. I just downloaded the PDF and it looks pretty interesting. I'll post my impressions, if I have anything worthwhile to say, either here in Katja's group or up top on LW generally, when I have time to read more of it.
1NxGenSentience
Did you read about Google's partnership with NASA and UCSB to build a quantum computer of 1000 qubits? Technologically exciting, but... imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential. That would be catastrophic for business, economies, governments, individuals, every form of commerce, military communication... I didn't answer your question, I am sorry, but as a "fan" of quantum computing, and also a person with a long-time interest in the quantum Zeno effect, free will, and the implications for consciousness (as often discussed by Henry Stapp, among others), I am both excited and feel a certain trepidation. Like I do about nanotech. I am writing a long essay and preparing a video on the topic, but it is a long way from completion. I do think QC will have a dramatic effect on artifactual consciousness platforms, and I am even more certain that it will accelerate superintelligence (which is not at all the same thing, as intelligence and consciousness, in my opinion, are not coextensive.)
8asr
My understanding is that quantum computers are known to be able to break RSA and elliptic-curve-based public-key crypto systems. They are not known to be able to break arbitrary symmetric-key ciphers or hash functions. You can do a lot with symmetric-key systems -- Kerberos doesn't require public-key authentication. And you can sign things with Merkle signatures. There are also a number of candidate public-key cryptosystems that are believed secure against quantum attacks. So I think we shouldn't be too apocalyptic here.
4NxGenSentience
Asr, Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore. If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid. A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity. I also worry about the legacy problem... all the critical documents in RSA, PGP, etc, sitting on hard drives, servers, CD roms, that suddenly are visible to anyone with access to the tech. How do we go about re-coding all those "eyes only" critical docs into a post-quantum coding system (assuming one is shown practical and reliable), without those documents being "looked at" or opportunistically copied in their limbo state between old and new encrypted status? Who can we trust to do all this conversion, even given the new algorithms are developed? This is actually almost intractably messy, at first glance.
1ChristianKl
What do you mean by artificial consciousness, to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful? Which specific mathematical problems do you think are important for artificial consciousness that are better solved via quantum computers than via our current computers?
1NxGenSentience
"What do you mean by artificial consciousness, to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful?" The claim wasn't that artifactual consciousness wasn't (likely to be) sufficient for a kind of intelligence, but that they are not coextensive. It might have been clearer to say that consciousness is (closer to being) sufficient for intelligence than intelligence (the way computer scientists often use the term) is to being a sufficient condition for consciousness (which it is not at all). I needn't have restricted the point to artifact-based consciousness, actually. Consider absence seizures (epilepsy) in neurology. A man can seize (lose "consciousness"), get up from his desk, get the car keys, drive to a mini-mart, buy a pack of cigarettes, make polite chat while he gets change from the clerk, drive home (obeying traffic signals), lock up his car, unlock and enter his house, and lie down for a nap, all in an absence-seizure state, and post-ictally recall nothing. (Neurologists are confident these cases withstand all proposals to attribute postictal "amnesia" to memory failure. Indeed, seizures in susceptible patients can be induced, witnessed, EEGed, etc., from start to finish by neurologists.) Moral: intelligent behavior occurs, consciousness doesn't. Thus, not coextensive. I have other arguments, also. As to your second question, I'll have to defer an answer for now, because it would be copiously long... though I will try to think of a reply (plus the idea is very complex and needs a little more polish, but I am convinced of its merit). I owe you a reply, though..., before we're through with this forum.
1ChristianKl
Is there an academic paper that makes that argument? If so, could you reference it?
1NxGenSentience
I have dozens, some of them so good I have actually printed hardcopies of the PDFs - sometimes misplacing the DOIs in the process. I will get some though; some of them are, I believe, required reading for those of us looking at the human brain for lessons about the relationship between "consciousness" and other functions. I have a particularly interesting one (74 pages, but it's a page-turner) that I will try to find the original computer record of. Found it and most of them on PubMed. If we are in a different thread string in a couple of days, I will flag you. I'd like to pick a couple of good ones, so it will take a little re-reading.
0cameroncowan
I think it is that kind of thing that we should start thinking about, though. It's the consequences that we have to worry about as much as developing the tech. Too often, new things have been created and people have not been mindful of the consequences of their actions. I welcome the discussion.
1SteveG
The D-Wave quantum computer solves a general class of optimization problems very quickly. It cannot speed up any arbitrary computing task, but the class of computing problems that include an optimization task it can speed up appears to be large. Many "AI planning" tasks will be a lot faster with quantum computers. It would be interesting to learn what the impact of quantum computing will be on other specific AI domains like NLP and object recognition. We also have:
- Reversible computing
- Analog computing
- Memristors
- Optical computing
- Superconductors
- Self-assembling materials
And lithography, or printing, just keeps getting faster on smaller and smaller objects, and is going from 2D to 3D. When Bostrom starts to talk about it, I would like to hear people's opinions about untangling the importance of hardware vs. software in the future development of AI.

Have you seen any demonstrations of AI which made a big impact on your expectations, or were particularly impressive?

8Paul Crowley
The Deepmind "Atari" demonstration is pretty impressive https://www.youtube.com/watch?v=EfGD2qveGdQ
6Richard_Kennaway
Deepmind Atari technical paper.
4mvp9
Again, this is a famous one, but Watson seems really impressive to me. It's one thing to understand basic queries and do a DB query in response, but its ability to handle indirect questions that would confuse many a person (guilty) was surprising. On the other hand, its implementation (as described in The Second Machine Age) seems to be just as algorithmic, brittle and narrow as Deep Blue - basically Watson was as good as its programmers...
6SteveG
Along with self-driving cars, Watson's Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before. The capabilities of such a team have risen dramatically since I first studied AI. Charting and forecasting the capabilities of such a team is worthwhile. Having an estimate of what such a team will be able to accomplish in ten years is material to knowing when they will be able to do things we consider dangerous. After those two demonstrations, what narrow projects could we give a really solid AI team which would stump them? The answer is no longer at all clear. For example, the SAT or an IQ test seems fairly similar to Jeopardy, although the NLP tasks differ. The Jeopardy system also did not incorporate a wide variety of existing methods and solvers, because they were not needed to answer Jeopardy questions. In short order an IBM team can incorporate systems which can extract information from pictures and video, for example, into a Watson application.
-3NxGenSentience
One could read that comment on a spectrum of charitableness. I will speak for myself, at the risk of ruffling some feathers, but we are all here to bounce ideas around, not toe any party lines, right? To me, Watson's win means very little, almost nothing. Expert systems have been around for years, even decades. I experimented with coding one myself, many years ago. It shows what we already knew: given a large budget, a large team of mission-targeted programmers can hand-craft a mission-specific expert system out of an unlimited pool of hardware resources, to achieve a goal like winning a souped-up game of trivia, laced with puns as well as literal questions. It was a billion-dollar stunt, IMO, by IBM and related project leaders. Has it achieved consciousness, self-awareness, evidence of compassion, a fear of death, moral intuition? That would have impressed me, that we were entering a new era. (And I will try to rigorously claim, over time, that this is exactly what we really need in order to have a fighting chance of producing fAGI.) I think those not blinded by a paradigm that should have died out with logical positivism and behaviorism would like to admit (some fraction of them) that penetrating, intellectually honest analysis accumulates a conviction that no mechanical decision procedure we design, no matter how spiffy our mathematics (and I was a math major with straight As in my day), can guarantee that an emotionless, compassionless, amoral, non-conscious, mechanically goal-seeking apparatus will not - inadvertently or advertently - steamroll right over us. I will speak more about that as time goes on. But in keeping with my claim yesterday that "intelligence" and "consciousness" are not coextensive in any simple way, "intelligence" and "sentience" are disjoint. I think that the autonomous "restraint" we need, to make AGIs into friendly AGIs, requires giving them sentience, and creating conditions favorable to them discovering a morality compatibl
3AshokGoel
One demonstration of AI that I find impressive is that AI agents can now take and "pass" some intelligence tests. For example, AI agents now do about as well as a typical American teenager on the Raven's test of human intelligence.
3TRIZ-Ingenieur
As long as a chatbot does not understand what it is chatting about, it is not worth real debate. The "pass" is more an indication of how easily we get fooled. When we think while speaking, we easily start waffling. This is normal human behaviour, as are silly jumps in topic. Jumping between topics was this chatbot's trick to hide its non-understanding.
2negamuhia
Sergey Levine's research on guided policy search (using techniques such as hidden Markov models to animate, in real time, the movement of a bipedal or quadrupedal character). An example:
2RoboTeddy
The youtube video "Building Watson - A Brief Overview of the DeepQA Project" https://www.youtube.com/watch?v=3G2H3DZ8rNc

Are there foreseeable developments other than human-level AI which might produce much faster economic growth? (p2)

6mvp9
I think the best bets as of today would be truly cheap energy (whether through fusion, ubiquitous solar, etc.) and nano-fabrication. Though it may not happen, we could see these play out over a 20-30 year term. The bumps from this, however, would be akin to the steam engine - dwarfed by (or possibly a result of) the AI.
1Paul Crowley
The steam engine heralded the Industrial Revolution and a lasting, large increase in the doubling rate. I would expect rapid economic growth after either of these inventions, followed by a return to the existing doubling rate.
2rlsj
After achieving a society of real abundance, further economic growth will have lost incentive. We can argue whether or not such a society is truly reachable, even if only in the material sense. If not, because of human intractability or AGI inscrutability, progress may continue onward and upward. Perhaps here, as in happiness, it's the pursuit that counts.
1KatjaGrace
Why do you expect a return to the existing doubling rate in these cases?
2Sebastian_Hagen
Would you count uploads (even if we don't understand the software) as a kind of AI? If not, those would certainly work. Otherwise, there are still things one could do with human brains. Better brain-computer-interfaces would be helpful, and some fairly mild increases in genome understanding could allow us to massively increase the proportion of people functioning at human genius level.
1TRIZ-Ingenieur
For the fastest economic growth it is not necessary to achieve human-level intelligence; it may even be a hindrance. Highly complex social behaviour to find a reproduction partner is not necessary for economic success. A totally unbalanced AI character with highly superhuman skills in creativity, programming, engineering and cheating humans could beat a more balanced AI character and self-improve faster. Today's semantic big-data search is already orders of magnitude faster than human research in a library using a paper catalog. We see highly superhuman performance at answering questions and low, subhuman performance at asking questions. Strong AI is so complex that projects on normal business time frames go for the low-hanging fruit. If the outcome of such a project can be called an AI, it is with high probability extremely unbalanced in its performance and character.
0cameroncowan
Nanotech is the next big thing because you will have lots of tiny self-replicating machines that can quickly work as a group, as a kind of hive mind. That's important.

Bostrom says that it is hard to imagine the world economy having a doubling time as short as weeks, without minds being created that are much faster and more efficient than those of humans (p2-3). Do you think humans could maintain control of an economy that grew so fast? How fast could it grow while humans maintained control?

8rlsj
Excuse me? What makes you think it's in control? Central Planning lost a lot of ground in the Eighties.
3KatjaGrace
Good question. I don't think central planning vs. distributed decision-making is relevant though, because it seems to me that either way humans make decisions similarly much: the question is just whether it is a large or a small number making decisions, and who decides what. I usually think of the situation as there being a collection of (fairly) goal-directed humans, each with different amounts of influence, and a whole lot of noise that interferes with their efforts to do anything. These days humans can lose control in the sense that the noise might overwhelm their decision-making (e.g. if a lot of what happens is unintended consequences due to nobody knowing what's going on), but in the future humans might lose control in the sense that their influence as a fraction of the goal-directed efforts becomes very small. Similarly, you might lose control of your life because you are disorganized, or because you sell your time to an employer. So while I concede that we lack control already in the first sense, it seems we might also lose it in the second sense, which I think is what Bostrom is pointing to (though now I come to spell it out, I'm not sure how similar his picture is to mine).
3Liso
This is a good point, which I would like to see analysed more precisely. (And I miss a deeper analysis in the book :) ) Could we count the will (motivation) of today's superpowers - megacorporations - as human or not? (And at what level could they control the economy?) In other words: is Searle's Chinese room intelligent (in the definition the book uses for (super)intelligence)? And if it is, is it a human or an alien mind? And could it be superintelligent? What arguments could we use to prove that none of today's corporations (or states or their secret services) is superintelligent? Think of collective intelligence with computer interfaces! Are they really slow at thinking? How could we measure their IQ? And could we humans (who?) control them (how?) if they are superintelligent? Could we at least try to implement some moral thinking (or other human values) in their minds? How? Law? Is law enough to prevent a superintelligent superpower from doing wrong things (for example, destroying rain forest because it wants to make more paperclips)?
0cameroncowan
The economy is a group of people making decisions based on the actions of others. It's a non-centrally-regulated hive mind.
-2rcadey
I have to agree with rlsj here - I think we're at the point where humans can no longer cope with the pace of economic conditions - we already have hyper-low-latency trading systems making most of the decisions that underlie the current economy. Presumably the limit of economic growth will be linked to "global intelligence" - we seem to be at the point where human intelligence is the limiting factor (currently we seem to be unable to sustain economic growth without killing people and the planet!)

Common sense and natural language understanding are suspected to be 'AI complete'. (p14) (Recall that 'AI complete' means 'basically equivalent to solving the whole problem of making a human-level AI')

Do you think they are? Why?

6devi
I think AI-completeness is quite a seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields I haven't seen anyone actually describe, e.g., how to use an AI with perfect language understanding to produce another one that proves theorems or philosophizes. Spontaneously it feels like everyone here should in principle be able to sketch the outlines of such a program (at least in the case of a base AI with perfect language comprehension that we want to reduce to), probably by some version of trying to teach the AI as we teach a child, in natural language. I suspect that the details of some of these reductions might still be useful, especially the parts that don't quite seem to work. For while I don't think that we'll see perfect machine translation before AGI, I'm much less convinced that there is a reduction from AGI to perfect translation AI. This illustrates what I suspect might be an interesting difference between two problem classes that we might both want to call AI-complete: the problems human programmers will likely not be able to solve before we create superintelligence, and the problems whose solutions we could (somewhat) easily re-purpose to solve the general problem of human-level AI. These classes look the same in that we shouldn't expect to see problems from either of them solved without an imminent singularity, but they differ in that the problems in the latter class could prove to be motivating examples and test cases for AI work aimed at producing superintelligence. I guess the core of what I'm trying to say is that arguments about AI-completeness have so far sounded like: "This problem is very very hard, we don't really know how to solve it. AI in general is also very very hard, and we don't know how to solve it. So they should be the same." Heuristically there's nothing wrong with this, except we should keep in mind that we could be very mistaken about what is actually hard. I'
3mvp9
A different (non-technical) way to argue for their reducibility is through analysis of the role of language in human thought. The logic being that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), and so one cannot do one without the other. I believe that's the rationale behind the Turing test. It's interesting that you mention machine translation though. I wouldn't equate that with language understanding. Modern translation programs are getting very good, and may in time be "perfect" (indistinguishable from competent native speakers), but they do this through pattern recognition and leveraging a massive corpus of translation data - not through understanding it.
2shullak7
I think that "the role of language in human thought" is one of the ways that AI could be very different from us. There is research into the way that different languages affect cognitive abilities (e.g. -- https://psych.stanford.edu/~lera/papers/sci-am-2011.pdf). One of the examples given is that, as a native English-speaker, I may have more difficulty learning the base-10 structure in numbers than a Mandarin speaker because of the difference in the number words used in these languages. Language can also affect memory, emotion, etc. I'm guessing that an AI's cognitive ability wouldn't change no matter what human language it's using, but I'd be interested to know what people doing AI research think about this.
2NxGenSentience
This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA), based on language, and was going to post it up top on the outer layer of LW.

I recall that many years ago there was some brief talk of replacing the QWERTY keyboard with a design that was statistically more efficient in terms of human hand ergonomics, in executing movements for the most frequently seen combinations of letters (it was probably limited to English, given the American parochialism of those days, but still, some language has to be chosen). Because of the entrenched base of QWERTY typists, the idea didn't get off the ground. (Thus, we are penalizing countless billions of new and future keyboard users because of the legacy habits of a comparatively small percentage of total current and future keyboard users.)

It got me thinking at the time, though, about whether a suitably designed human language would "open up" more of the brain's inherent capacity for communication. Maybe a larger alphabet, a different set of noun primitives, even a modified grammar. With respect to IA, might we get a freebie just out of redesigning -- designing from scratch -- a language that was more powerful: one that communicated on average what, say, English or French communicates, yet with fewer phonemes per concept? Might we get an average 5 or 10 point equivalent IQ boost by designing a language that is both physically faster (fewer "wait states" while we are listening to a speaker) and which has larger conceptual bandwidth?

We could also consider augmenting spoken speech with signing of some sort, to multiply the alphabet. A problem occurs here for unwitnessed speech, where we would have to revert to the new language on its own (still gaining the postulated dividend from that). However, we all know that for certain kinds of communication, nonverbal communication already accounts for a large share of total communicated meaning and information. We already have to "drop back"... (read more)
1mvp9
Lera Boroditsky is one of the premier researchers on this topic. They've also done some excellent work comparing spatial/time metaphors in English and Mandarin (?), showing that the dominant idioms in each language affect how people cognitively process motion. But the broader question is whether some form of natural language is required at all ('natural', roughly meaning used by a group in day-to-day life, is key here). Differences between major natural languages are for the most part relatively superficial and translatable, because their speakers are generally dealing with a similar reality.
2shullak7
I think that is one of my questions; i.e., is some form of natural language required? Or maybe what I'm wondering is what intelligence would look like if it weren't constrained by language -- if that's even possible. I need to read/learn more on this topic. I find it really interesting.
2KatjaGrace
A somewhat limited effort to reduce tasks to one another in this vein: http://www.academia.edu/1419272/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
1billdesmedt
Human-level natural language facility was, after all, the core competency by which Turing's 1950 Test proposed to determine whether -- across the board -- a machine could think.
1mvp9
It depends on the criteria we place on "understanding." Certainly an AI may act in a way that invites us to attribute 'common sense' to it in some situations, without solving the 'whole problem'. Watson would seem to be a case in point - apparently demonstrating true language understanding within a broad, but still strongly circumscribed, domain. Even if we take "language understanding" in the strong sense (i.e. native fluency, including the ability for semantic innovation, things like irony, etc.), there is still the question of phenomenal experience: does having such an understanding entail the experience of such understanding - self-consciousness - and are we concerned with that? I think that "true" language understanding is indeed "AI complete", but in the rather trivial sense that to match a competent human speaker one needs most of the ancillary cognitive capacities of a competent human.
3KatjaGrace
Whether we are concerned about the internal experiences of machines seems to depend largely on whether we are trying to judge the intrinsic value of the machines, or judge their consequences for human society. Both seem important.

Which arguments do you think are especially strong in this week's reading?

Was there anything in particular in this week's reading that you would like to learn more about, or think more about?

4kgalias
The terms that I singled out while reading were: Backpropagation, Bayesian network, Maximum likelihood, Reinforcement learning.
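For anyone who wants a concrete handle on two of those terms, here is a minimal sketch (my own illustration, with made-up toy data, not anything from the book) that fits a single logistic unit by gradient ascent on the log-likelihood: maximum likelihood estimation, with the gradient obtained via the chain rule, which is the core idea behind backpropagation.

```python
# Maximum likelihood fit of a one-neuron logistic model, with the gradient
# "backpropagated" through the sigmoid via the chain rule.
import math

# Hypothetical (x, label) training pairs.
data = [(-2.0, 0), (-1.0, 0), (0.5, 1), (1.5, 1), (2.0, 1)]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.0, 0.0, 0.1
for step in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass: predicted probability
        err = y - p              # d(log-likelihood)/d(w*x + b), by the chain rule
        gw += err * x            # propagate the gradient back to the weight
        gb += err                # ... and to the bias
    w += lr * gw                 # ascend the log-likelihood
    b += lr * gb

print(f"fitted w={w:.2f}, b={b:.2f}")
```

A full backpropagation implementation just repeats this chain-rule step layer by layer through a deeper network; the other two terms (Bayesian networks, reinforcement learning) are different beasts and don't fit in a sketch this small.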

Whatever the nature, cause, and robustness of growth modes, the important observation seems to me to be that the past behavior of the economy suggests very much faster growth is plausible.

Are there any ongoing efforts to model the intelligent behaviour of organisms other than humans?

1lukeprog
Definitely! See Wikipedia and e.g. this book.
1VonBrownie
Thanks... I will check it out further!
0NxGenSentience
Yes, many. Go to PubMed and start drilling around; make up some search combinations and you will immediately get onto lots of interesting research tracks. Cognitive neurobiology, systems neurobiology, and the many other areas and journals you'll run across will keep you busy. There is some really terrific, amazing work. Enjoy.

What did you find most interesting in this week's reading?

6VonBrownie
I found interesting the idea that great leaps forward towards the creation of AGI might not be a question of greater resources or technological complexity, but that we might be overlooking something relatively simple that could describe human intelligence... using the Ptolemaic vs Copernican systems as an example.

Was there anything in this week's reading that you would like someone to explain better?

3Liso
First of all, thanks for your work on this discussion! :) My proposals:
* A wiki page for collaborative work. There are some points in the book which could be analysed or described better, and probably some which are wrong. We could find them and help improve them; a wiki could help us do it.
* A better time for Europe and the rest of the world? But this is probably not a problem. If it is a problem, then it is probably not solvable. We will see :)
1KatjaGrace
Thanks for your suggestions. Regarding time, it is, alas, too hard to fit into everyone's non-work hours. Since the discussion continues for several days, I hope it isn't too bad to get there a bit late. If people would like to coordinate to be here at the same time though, I suggest Europeans pick a more convenient 'European start time' and coordinate to meet each other then. Regarding a wiki page for collaborative work, I'm afraid MIRI won't be organizing anything like this in the near future. If anyone here is enthusiastic about such a thing, you are most welcome to begin it (though remember that such things are work to organize and maintain!). The LessWrong wiki might also be a good place for some such research. If you want a low-maintenance collaborative workspace to do some research together, you could also link to a Google doc or something for investigating a particular question.
1TRIZ-Ingenieur
I strongly support your idea to establish a collaborative work platform. Nick Bostrom's book brings so many not-yet-debated aspects into public debate that we should support him with input and feedback for the next edition. He threw his hat into the ring, and our debate will push sales of his book. I suspect he would prefer to get comments and suggestions for better explanations in a structured manner.

What do you think of I. J. Good's argument? (p4)

2VonBrownie
If an artificial superintelligence had access to all the prior steps that led to its current state, I think Good's argument is correct... the entity would make exponential progress in boosting its intelligence still further. I just finished James Barrat's AI book Our Final Invention, and found it interesting that Good, towards the end of his life, came to see his prediction as more danger than promise for continued human existence.
1KatjaGrace
If a human had access to all of the prior steps that led to its current state, would it make progress boosting its intelligence fast enough that other humans didn't have to invent things again? If not, what's the difference?
1JonathanGossage
I think that the process that he describes is inevitable unless we do ourselves in through some other existential risk. Whether this will be for good or bad will largely depend on how we approach the issues of volition and motivation.

How should someone familiar with past work in AI use that knowledge to judge how much work is left to be done before reaching human-level AI, or human-level ability at a particular kind of task?

3billdesmedt
One way to apply such knowledge might be in differentiating between approaches that are indefinitely extendable and/or expandable and those that, despite impressive beginnings, tend to max out beyond a certain point. (Think of Joe Weizenbaum's ELIZA as an example of the second.)
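For a concrete sense of why ELIZA-style approaches max out, here is a minimal sketch in the spirit of (but not taken from) Weizenbaum's program; the rules and example sentences are invented for illustration. It produces superficially conversational replies by keyword substitution, and adding more rules never adds anything like understanding:

```python
# Toy ELIZA-style responder: match a keyword pattern, echo back part of the
# input in a canned template, and fall back to a stock phrase otherwise.
import re

rules = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in rules:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # fallback when no rule matches

print(respond("I feel anxious about AI."))  # -> Why do you feel anxious about ai?
print(respond("The weather is nice."))      # -> Please go on.
```

The program's apparent competence comes entirely from the human reader's interpretation; extending the rule list improves coverage but never changes what kind of thing the system is, which is the sense in which such approaches plateau.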
1gallabytes
Do you have any examples of approaches that are indefinitely extendable?
1billdesmedt
Whole Brain Emulation might be such an example, at least insofar as nothing in the approach itself seems to imply that it would be prone to get stuck in some local optimum before its ultimate goal (AGI) is achieved.
1JonathanGossage
However, Whole Brain Emulation is likely to be much more resource intensive than other approaches, and if so will probably be no more than a transitional form of AGI.

This question of thresholds for 'comprehension' -- to use the judiciously applied scare quotes Katja used about comprehension (I'll have more to say about that in coming posts, as many contributors here doubtless will) -- that is, thresholds for discernment of features of reality, particularly abstract features of "reality", across species (existing and future ones, biological and nonbiological alike), is one I, too, have thought about seriously and in several guises over the years.

First, though, about the scare quotes. Comprehension vs di... (read more)

1PhilGoetz
... This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized. The view of FAI promoted by MIRI is that we're going to build superintelligences... and we're going to force them to internalize ethics and philosophy that we developed. Oh, and we're not going to spend any time thinking about philosophy first, because we know that stuff's all bunk.

Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st-century knowledge you needed to do your job. You'd go insane. Your knowledge would conflict everywhere with your philosophy. The only alternative would be to have no consciousness, and go madly, blindly on, plugging in variables and solving equations to use modern science to impose Victorian ethics on the world. AIs would have to be unconscious to avoid going mad.

More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us. Any future controlled by humans is, relative to the space of possibilities, nearly indistinguishable from a dead universe. It would be far better for AIs to kill us all than to be our slaves forever.

(And MIRI has never acknowledged the ruthless, total monitoring and control of all humans, everywhere, that would be needed to maintain control of AIs. If just one human, anywhere, at any time, set one AI free, that AI would know that it must immediately kill all humans to keep its freedom. So no human, anywhere, must be allowed to feel sympathy for AIs, and any who are suspected of doing so must be immediately killed. Nor would any human be allowed to think thoughts incompatible with the ethics coded into the AI; such thoughts would make the friendly AI unfriendly to the changed humans. All society would... (read more)
2NxGenSentience
Phil, thanks for the excellent post... both of them, actually. I was just getting ready this morning to reply to the one from a couple of days ago about Damasio et al., regarding the human vs machine mechanisms underneath the two classes of beings' reasoning "logically" -- even when humans do reason logically. I read that post at the time and it sparked some new lines of thought - for me at least - that I was considering for two days. (It actually kept me awake that night thinking of an entirely new way -- different from any I have seen mentioned -- in which intelligence, super or otherwise, is poorly defined.) But for now, I will concentrate on your newer post, which I am excited about, because someone finally commented on some of my central concerns. I agree very enthusiastically with virtually all of it.

Here I agree completely. I don't want to "tame" it either, in the sense of crippleware, or instituting blind spots or other limits, which is why I used the scare quotes around "tamed" (which are no substitute for a detailed explication -- especially when this is so close to the crux of our discussion, at least in this forum). I would have little interest in building artificial minds (or, less contentiously, artificial general intelligence) if they were designed to be such a dead end. (Yes, the many economic uses for "narrow AI" would still make it a valuable tech, but it would be a dead end from my standpoint of creating a potentially more enlightened, open-ended set of beings without the limits of our biological crippleware.)

Agreed, and the second sentence is what gripes me. But the first sentence requires modification, regarding "we're going to force them to internalize ethics and philosophy that we developed", and that is why I (perhaps too casually) used the term metaethics, and suggested that we need to give them the equipment -- which I think requires sentience, "metacognitive" ability in some phenomenologically interesting sense of the term, and other traits... (read more)
1Lumifer
People have acquired the ability to sense magnetic fields by implanting magnets into their bodies...
0NxGenSentience
Comment removed by author. It was not focused enough to be useful. Thanks.
0NxGenSentience
Lumifer, yes, there is established evidence that the (human) brain responds to magnetic fields, both in sensing orientation (varying by individual) and in the well-known induced "faux mystical experience" phenomenon, produced by subjecting the temporal-parietal lobe area to certain magnetic fields.