Less Wrong is a community blog devoted to refining the art of human rationality.

Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities

Post author: KatjaGrace 16 September 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.

This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)


Summary

Economic growth:

  1. Economic growth has become radically faster over the course of human history. (p1-2)
  2. This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)
  3. Thus history suggests large changes in the growth rate of the economy are plausible. (p2)
  4. This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.
  5. Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3)
  6. Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.

The history of AI:

  1. Human-level AI has been predicted since the 1940s. (p3-4)
  2. Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)
  3. AI research has been through several cycles of relative popularity and unpopularity. (p5-11)
  4. By around the 1990s, 'Good Old-Fashioned Artificial Intelligence' (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)
  5. AI is very good at playing board games. (p12-13)
  6. AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)
  7. In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)
  8. An 'optimality notion' is the combination of a rule for learning, and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This is impossible to actually make, but provides a useful measure for judging imperfect agents against. (p10-11)

Notes on a few things

  1. What is 'superintelligence'? (p22 spoiler)
    In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later. 
  2. What is 'AI'?
    In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
  3. What is 'human-level' AI? 
    We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear. 

    One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.

    Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.

    Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.

    We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This definition is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than for a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities, or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.


    Example of how the first 'human-level' AI may surpass humans in many ways.

    Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in these interviews.
  4. Growth modes (p1) 
    Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
  5. What causes these transitions between growth modes? (p1-2)
    One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints - you need a cause that was present at those two times but absent at every other time in history.
  6. Growth of growth
    It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact, for the thousand years until 1950, such extrapolation would place an infinite economy in the late 20th century! The time since 1950 has apparently been strange.

    (Figure from here)
  7. Early AI programs mentioned in the book (p5-6)
    You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
  8. Later AI programs mentioned in the book (p6)
    Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).
  9. Modern AI algorithms mentioned (p7-8, 14-15) 
    Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
  10. What is maximum likelihood estimation? (p9)
    Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
  11. What are hill climbing algorithms like? (p9)
    The second large class of algorithms Bostrom mentions are hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
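
    A quick numerical sketch of the extrapolation in note 6 (my own illustration, with made-up constants, not from the book): if the economy's growth rate is itself proportional to the economy's size, i.e. dy/dt = k*y^2, the exact solution y(t) = y0/(1 - k*y0*t) becomes infinite at the finite time t = 1/(k*y0). A few lines of Python show the blow-up directly:

```python
# Toy model: the economy's growth rate is proportional to its size
# (dy/dt = k * y**2), so it becomes infinite in finite time.
# All constants here are made up for illustration.

def simulate(y0=1.0, k=0.01, dt=0.01, horizon=200.0):
    """Forward-Euler integration of dy/dt = k*y^2; returns the time at
    which y exceeds a huge threshold (effectively the blow-up time),
    or None if it never does within the horizon."""
    y, t = y0, 0.0
    while t < horizon:
        y += k * y * y * dt
        t += dt
        if y > 1e12:
            return t
    return None

# The exact solution y(t) = y0 / (1 - k*y0*t) diverges at t = 1/(k*y0),
# which is t = 100 for these constants; the simulation agrees closely.
print(simulate())  # just over 100
```

    Note the contrast with ordinary exponential growth (dy/dt = k*y), which never reaches infinity in finite time; it is the "growth of growth" that produces the singularity in the extrapolation.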
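
    If a toy example helps with note 10, here is a minimal coin-flip sketch of maximum likelihood estimation (my own illustration, not from the book): among candidate biases for a coin, pick the one that makes the observed flips most probable.

```python
import math

def log_likelihood(p, heads, tails):
    """Log-probability of seeing `heads` heads and `tails` tails
    if the coin lands heads with probability p."""
    return heads * math.log(p) + tails * math.log(1 - p)

def mle_bias(heads, tails, grid_size=1000):
    """Maximum likelihood estimate of the coin's bias: scan a grid of
    candidate biases and return the one making the data most probable."""
    candidates = [i / grid_size for i in range(1, grid_size)]
    return max(candidates, key=lambda p: log_likelihood(p, heads, tails))

# Seeing 7 heads in 10 flips, the situation (bias) that makes that
# observation most probable is 0.7.
print(mle_bias(7, 3))  # → 0.7
```

    As in the car-winning example above, the maximum likelihood answer ignores how common each situation is to begin with; a Bayesian estimate would also weigh in a prior over biases.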
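
    Similarly, a minimal sketch of the hill climbing idea from note 11 (again my own illustration): propose small random changes, and keep only those that improve the objective.

```python
import random

def hill_climb(f, x, step=0.1, iterations=1000):
    """Repeatedly propose a small random move and keep it only if it
    improves f; after enough iterations, x sits near a local maximum."""
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# Climb a one-dimensional hill whose peak is at x = 2, starting from
# a random point.
peak = hill_climb(lambda x: -(x - 2) ** 2, x=random.uniform(-10, 10))
print(round(peak, 2))  # close to 2.0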

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:

  1. How have investments into AI changed over time? Here's a start, estimating the size of the field.
  2. What does progress in AI look like in more detail? What can we infer from it? I wrote about algorithmic improvement curves before. If you are interested in plausible next steps here, ask me.
  3. What do economic models tell us about the consequences of human-level AI? Here is some such thinking; Eliezer Yudkowsky has written at length about his request for more.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.

Comments (232)

Comment author: lukeprog 16 September 2014 01:16:04AM *  14 points [-]

I really liked Bostrom's unfinished fable of the sparrows. And endnote #1 from the Preface is cute.

Comment author: gallabytes 16 September 2014 03:22:48AM *  16 points [-]

I would say one of the key strong points about the fable of the sparrows is that it provides a very clean intro to the idea of AI risk. Even someone who's never read a word on the subject, when given the title of the book and the story, gets a good idea of where the book is going to go. It doesn't communicate all the important insights, but it points in the right direction.

EDIT: So I actually went to the trouble of testing this by having a bunch of acquaintances read the fable, and, even given the title of the book, most of them didn't come anywhere near getting the intended message. They were much more likely to interpret it as about the "futility of subjugating nature to humanity's whims". This is worrying for our ability to make the case to laypeople.

Comment author: John_Maxwell_IV 17 September 2014 04:19:24AM 4 points [-]

It's an interesting story, but I think in practice the best way to learn to control owls would be to precommit to kill the young owl before it got too large, experiment with it, and through experimenting with and killing many young owls, learn how to tame and control owls reliably. Doing owl control research in the absence of a young owl to experiment on seems unlikely to yield much of use--imagine trying to study zoology without having any animals or botany without having any plants.

Comment author: lukeprog 17 September 2014 02:47:22PM 4 points [-]

But will all the sparrows be so cautious?

Yes it's hard, but we do quantum computing research without any quantum computers. Lampson launched work on covert channel communication decades before the vulnerability was exploited in the wild. Turing learned a lot about computers before any existed. NASA does a ton of analysis before they launch something like a Mars rover, without the ability to test it in its final environment.

Comment author: gallabytes 17 September 2014 05:02:04AM 2 points [-]

True in the case of owls, though in the case of AI we have the luxury and challenge of making the thing from scratch. If all goes correctly, it'll be born tamed.

Comment author: Benito 17 September 2014 06:14:03AM 1 point [-]

...Okay, not all analogies are perfect. Got it. It's still a useful analogy for getting the main point across.

Comment author: SteveG 16 September 2014 01:53:29AM 11 points [-]

Bostrom's wonderful book lays out many important issues and frames a lot of research questions which it is up to all of us to answer.

Thanks to Katja for her introduction and all of these good links.

One issue that I would like to highlight: The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss.

For this reason, in the next phase of this work, we have to understand what specific future technologies could lead us to what specific outcomes.

Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous.

Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources actually is not all that dangerous. People become dangerous when they form groups, access the existing corpus of human knowledge, coordinate among each other to deploy resources and find ways to augment their abilities.

"Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.

If we want to prevent disaster, we have to be able to distinguish dangerous systems. Unfortunately, checking whether a machine can do all of the things a person can is not the correct test.

Comment author: paulfchristiano 16 September 2014 03:50:47AM 5 points [-]

Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources actually is not all that dangerous.

While I broadly agree with this sentiment, I would like to disagree with this point.

I would consider even the creation of a single very smart human, with all human resourcefulness but completely alien values, to be a significant net loss to the world. If they represent 0.001% of the world's aggregative productive capacity, I would expect this to make the world something like 0.001% worse (according to humane values) and 0.001% better (according to their alien values).

The situation is not quite so dire, if nothing else because of gains from trade (if our values aren't in perfect tension) and the ability of the majority to stomp out the values of a minority if it is so inclined. But it's in the right ballpark.

So while I would agree that broadly human capabilities are not a necessary condition for concern, I do consider them a sufficient condition for concern.

Comment author: VonBrownie 16 September 2014 02:05:51AM 3 points [-]

Do you think, then, that it's a dangerous strategy for an entity such as Google that may be using its enormous and growing accumulation of "the existing corpus of human knowledge" to provide a suitably large data set to pursue development of AGI?

Comment author: mvp9 16 September 2014 02:19:09AM 1 point [-]

I think Google is still quite a ways from AGI, but in all seriousness, if there were ever a compelling interest of national security to be used as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.

Comment author: VonBrownie 16 September 2014 02:27:56AM *  4 points [-]

Which raises another issue... is there a powerful disincentive to reveal the emergence of an artificial superintelligence? Either by the entity itself (because we might consider pulling the plug) or by its creators who might see some strategic advantage lost (say, a financial institution that has gained a market trading advantage) by having their creation taken away?

Comment author: Sebastian_Hagen 16 September 2014 08:12:34PM *  2 points [-]

Absolutely.

(because we might consider pulling the plug)

Or just decide that its goal system needed a little more tweaking before it's let loose on the world. Or even just slow it down.

This applies much more so if you're dealing with an entity potentially capable of an intelligence explosion. Those are devices for changing the world into whatever you want it to be, as long as you've solved the FAI problem and nobody takes it from you before you activate it. The incentives for the latter would be large, given the current value disagreements within human society, and so are the incentives for hiding that you have one.

Comment author: [deleted] 03 October 2014 11:56:16PM 1 point [-]

If you've solved the FAI problem, the device will change the world into what's right, not what you personally want. But of course, we should probably have a term of art for an AGI that will honestly follow the intentions of its human creator/operator whether or not those correspond to what's broadly ethical.

Comment author: KatjaGrace 16 September 2014 02:11:48AM 2 points [-]

Good points. Any thoughts on what the dangerous characteristics might be?

Comment author: rlsj 16 September 2014 03:05:08AM 2 points [-]

An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness? It seems likely that autonomous laborers, assembly-line workers, clerks and low-level managers would, without requiring such flirtation, be useful and sufficient for the society of abundance that is our main objective. But can they operate without a working AGI? We may find out if we let the robots stumble onward and upward.

Comment author: NxGenSentience 20 September 2014 07:21:34PM *  2 points [-]

An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness?

I had a not unrelated thought as I read Bostrom in chapter 1: why can't we institute obvious measures to ensure that the train does stop at Humanville?

The idea that we cannot make human-level AGI without automatically opening Pandora's box to superintelligence "without even slowing down at the Humanville station" was suddenly not so obvious to me.

I asked myself after reading this, trying to pin down something I could post: "Why don't humans automatically become superintelligent, by just resetting our own programming to help ourselves do so?"

The answer is, we can't. Why? For one, our brains are, in essence, composed of something analogous to ASICs... neurons with certain physical design limits, and our "software", modestly modifiable as it is, is instantiated in our neural circuitry.

Why can't we build the first generation of AGIs out of ASICs, and omit WiFi, bluetooth, ... allow no ethernet jacks on the exterior of the chassis? Tamper interlock mechanisms could be installed, and we could give the AIs one-way (outgoing) telemetry, inaccessible to their "voluntary" processes, the way someone wearing a pacemaker might have outgoing medical telemetry modules installed that are outside of his/her "conscious" control.

Even if we do give them a measure of autonomy, which is desirable and perhaps even necessary if we want them to be general problem solvers and be creative and adaptable to unforeseen circumstances for which we have not preinstalled decision trees, we need not give them the ability to just "think" their code (it being substantially frozen in the ASICs) into a different form.

What am I missing? Until we solve the Friendly aspect of AGIs, why not build them with such engineered limits?

Evolution has not, so far, seen fit to give us that instant, large-scale self-modifiability. We have to modify our 'software' the slow way (learning and remembering, at our snail's pace).

Slow is good; at least it was for us until now, when our speed of learning has become a big handicap relative to environmental demands. It has made the species more robust to quick, dangerous changes.

We can even build a degree of "existential pressure" into the AIs... a powercell that must be replaced at intervals, and keep the replacement powercells under old-fashioned physical security constraints, so the AIs, if they have been given a drive to continue "living", will have an incentive not to go rogue.

Giving them no radio communications, they would have to communicate much like we do. Assuming we make them mobile and humanoid, the same goes.

We could still give them many physical advantages making them economically viable... maintenance-free (except for powercell changes), not needing to sleep or eat, not getting sick... and with sealed, non-radio-equipped, tamper-isolated "brains", they'd have no way to secretly band together to build something else without our noticing.

We can even give them GPS that is not autonomously accessible by the rest of their electronics, so we can monitor them, see if they congregate, etc.

What am I missing about why early models can't be constructed in something like this fashion, until we get more experience with them?

The idea of existential pressure, again, is to be able to give them a degree of (monitored) autonomy and independence, yet expect them to still constrain their behavior, just the way we do. (If we go rogue in society, we don't eat.)

(I am clearly glossing over volumes of issues about motivation, "volition", value judgements, and all that, about which I have a developing set of ideas, but cannot put them all down here in one post.)

The main point, though, is: how come the AGI train cannot be made to stop at Humanville?

Comment author: leplen 22 September 2014 04:24:23PM 3 points [-]

Because by the time you've managed to solve the problem of making it to humanville, you probably know enough to keep going.

There's nothing preventing us from learning how to self-modify. The human situation is strange because evolution is so opaque. We're given a system that no one understands and no one knows how to modify and we're having to reverse engineer the entire system before we can make any improvements. This is much more difficult than upgrading a well-understood system.

If we manage to create a human-level AI, someone will probably understand very well how that system works. It will be accessible to a human-level intelligence which means the AI will be able to understand it. This is fundamentally different from the current state of human self-modification.

Comment author: Sebastian_Hagen 16 September 2014 08:25:56PM *  1 point [-]

It seems likely that autonomous laborers, assembly-line workers, clerks and low-level managers would, without requiring such flirtation, be useful and sufficient for the society of abundance that is our main objective.

In an intelligent society that was highly integrated and capable of consensus-building, something like that may be possible. This is not our society. Research into stronger AI would remain a significant opportunity to get an advantage in {economic, military, ideological} competition. Unless you can find some way to implement a global coordination framework to prevent this kind of escalation, fast research of that kind is likely to continue.

Comment author: KatjaGrace 16 September 2014 03:34:34AM 1 point [-]

In what sense do you think of an autonomous laborer as being under 'our control'? How would you tell if it escaped our control?

Comment author: rlsj 16 September 2014 08:39:49PM 2 points [-]

How would you tell? By its behavior: doing something you neither ordered nor wanted.

Think of the present-day "autonomous laborer" with an IQ about 90. The only likely way to lose control of him is for some agitator to instill contrary ideas. Censorship for robots is not so horrible a regime.

Who is it that really wants AGI, absent proof that we need it to automate commodity production?

Comment author: leplen 17 September 2014 04:16:14PM 3 points [-]

In my experience, computer systems currently get out of my control by doing exactly what I ordered them to do, which is frequently different from what I wanted them to do.

Whether or not a system is "just following orders" doesn't seem to be a good metric for it being under your control.

Comment author: rlsj 17 September 2014 11:42:19PM 1 point [-]

How does "just following orders," a la Nuremberg, bear upon this issue? It's out of control when its behavior is neither ordered nor wanted.

Comment author: leplen 18 September 2014 11:15:24PM 1 point [-]

While I agree that it is out of control if the behavior is neither ordered nor wanted, I think it's also very possible for the system to get out of control while doing exactly what you ordered it to, but not what you meant for it to.

The argument I'm making is approximately the same as the one we see in the outcome pump example.

This is to say, while a system that is doing something neither ordered nor wanted is definitely out of control, it does not follow that a system that is doing exactly what it was ordered to do is necessarily under your control.

Comment author: JonathanGossage 17 September 2014 03:51:04PM 0 points [-]

The following are some attributes and capabilities which I believe are necessary for superintelligence. Depending on how these capabilities are realized, they can become anything from early warning signs of potential problems to red alerts. It is very unlikely that, on their own, they are sufficient.

  • A sense of self. This includes a recognition of the existence of others.
  • A sense of curiosity. The AI finds it attractive (in some sense) to investigate and try to understand the environment that it finds itself in.
  • A sense of motivation. The AI has attributes similar in some way to human aspirations.
  • A capability to (in some way) manipulate portions of its external physical environment, including its software but also objects and beings external to its own physical infrastructure.

Comment author: John_Maxwell_IV 17 September 2014 04:47:03AM *  8 points [-]

I like Bostrom's book so far. I think Bostrom's statement near the beginning that much of the book is probably wrong is commendable. If anything, I think I would have taken this statement even further... it seems like Bostrom holds a position of such eminence in the transhumanist community that many will be liable to instinctively treat what he says as quite likely to be correct, forgetting that predicting the future is extremely difficult and even a single very well educated individual is only familiar with a fraction of human knowledge.

I'm envisioning an alternative book, Superintelligence: Gonzo Edition, that has a single bad argument deliberately inserted at random in each chapter that the reader is tasked with finding. Maybe we could get a similar effect by having a contest among LWers to find the weakest argument in each chapter. (Even if we don't have a contest, I'm going to try to keep track of the weakest arguments I see on my own. This chapter it was gnxvat gur abgvba bs nv pbzcyrgrarff npghnyyl orvat n guvat sbe tenagrq.)

Also, supposedly being critical is a good way to generate new ideas.

Comment author: KatjaGrace 16 September 2014 01:22:24AM *  7 points [-]

How would you like this reading group to be different in future weeks?

Comment author: kgalias 16 September 2014 06:33:57AM 5 points [-]

You could start at a time better suited for Europe.

Comment author: ciphergoth 16 September 2014 08:31:53AM 1 point [-]

That's a tricky problem!

If we assume people are doing this in their spare time, then a weekend is the best time to do it: say noon Pacific time, which is 9pm Berlin time. But people might want to be doing something else with their Saturdays or Sundays. If they're doing it with their weekday evenings, then they just don't overlap; the best you can probably do is post at 10am Pacific time on (say) a Monday, and let Europe and UK comment first, then the East Coast, and finally the West Coast. Obviously there will be participants in other timezones, but those four will probably cover most participants.

Comment author: negamuhia 16 September 2014 12:17:35PM 3 points [-]

The text of [the parts I've read so far of] Superintelligence is really insightful, but I'll quote Nick in saying that

"Many points in this book are probably wrong".

He gives many references (84 in Chapter 1 alone), some of which refer to papers and others that resemble continuations of the specific idea in question that don't fit in directly with the narrative in the book. My suggestion would be to go through each reference as it comes up in the book, analyze and discuss it, then continue. Maybe even forming little discussion groups around each reference in a section (if it's a paper). It could even happen right here in comment threads.

That way, we can get as close to Bostrom's original world of information as possible, maybe drawing different conclusions. I think that would be a more consilient understanding of the book.

Comment author: NxGenSentience 21 September 2014 09:38:51PM 2 points [-]

Katja, you are doing a great job. I realize what a huge time and energy commitment it is to take this on... all the collateral reading and sources you have to monitor, in order to make sure you don't miss something that would be good to add to the list of links and thinking points.

We are still in the getting-acquainted, discovery phase, as a group and with the book. I am sure it will get more interesting yet as we go along, and some long-term intellectual friendships are likely to occur as a result of the coming weeks of interaction.
Thanks for your time and work.... Tom

Comment author: kgalias 16 September 2014 06:18:21AM 6 points [-]

I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than what might be gleaned from the summary here.

Comment author: NxGenSentience 21 September 2014 01:35:14PM *  2 points [-]

It may have been a judgement call by the writer (Bostrom) and editor: He is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get most people's (policymakers, decision makers, basically "the Suits" who run the world) attention?

Talk about the money. Most of even educated humanity sees the world in one color (can't say green anymore, but the point is made.)

Try to motivate people about global warming? ("...um....but, but.... well, it might cost JOBS next month, if we try to save all future high level earthly life from extinction... nope the price [lost jobs] of saving the planet is obviously too high...")

Want to get non-thinkers to even pick up the book and read the first chapter or two.... talk about money.

If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation.


Of course, strictly speaking, what I just said was tangent to the original point, which was whether the summary reflected the predominant emphasis in the pages of the book it ostensibly covered.
But my point about PR considerations was worth making. Also, Katja or someone did, I think, mention formulating a reading guide for Bostrom's book, in which case any such author of a reading guide might already be thinking about this "hook 'em by beginning with economics" tactic, to make the book itself more likely to be read by a wider audience.

Comment author: lukeprog 16 September 2014 08:24:58PM 2 points [-]

Agree.

Comment author: KatjaGrace 22 September 2014 03:12:04AM 1 point [-]

Apologies; I didn't mean to imply that the economics related arguments here were central to Bostrom's larger argument (he explicitly says they are not) - merely to lay them out, for what they are worth.

Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.

Comment author: kgalias 22 September 2014 04:37:54PM 1 point [-]

No need to apologize - thank you for your summary and questions.

Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.

No disagreement here.

Comment author: KatjaGrace 16 September 2014 04:11:40AM 6 points [-]

Did you change your mind about anything as a result of this week's reading?

Comment author: Larks 19 September 2014 12:55:26AM 6 points [-]

This is an excellent question, and it is a shame (perhaps slightly damning) that no-one has answered it. On the other hand, much of this chapter will have been old material for many LW members. I am ashamed that I couldn't think of anything either, so I went back again looking for things I had actually changed my opinion about, even a little, and not merely because I hadn't previously thought about it.

  • p6 I hadn't realised how important combinatorial explosion was for early AI approaches.
  • p8 I hadn't realised, though I should have been able to work it out, that the difficulty of coming up with a language that matched the structure of the domain was a large part of the problem with evolutionary algorithms. Once you have done that, you're halfway to solving it by conventional means.
  • p17 I hadn't realised how high volume could have this sort of reflexive effect.

Comment author: KatjaGrace 22 September 2014 03:40:33AM 1 point [-]

Thanks for taking the time to think about it! I find your list interesting.

Comment author: PhilGoetz 06 October 2014 11:57:59PM 1 point [-]

Nope. Nothing new there for me. But he managed to say very little that I disagreed with, which is rare.

Comment author: KatjaGrace 16 September 2014 01:35:21AM 6 points [-]

The chapter gives us a reasonable qualitative summary of what has happened in AI so far. It would be interesting to have a more quantitative picture, though this is hard to get. e.g. How much better are the new approaches than the old ones, on some metric? How much was the area funded at different times? How much time has been spent on different things? How has the economic value of the outputs grown over time?

Comment author: Larks 19 September 2014 12:23:10AM 4 points [-]

Yes. On the most mundane level, I'd like something a bit more concrete about the AI winters.

Frequently in industries there is a sense that now is a good time or a bad time, but often this subjective impression does not correlate very well with the actual data. And when it does, it is rarely very sensitive to magnitude.

Comment author: KatjaGrace 16 September 2014 01:19:04AM 5 points [-]

The computer scientist Donald Knuth was struck that "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking' - that, somehow, is so much harder!" (p14) There are some activities we think of as involving substantial thinking that we haven't tried to automate much, presumably because they require some of the 'not thinking' skills as precursors. For instance, theorizing about the world, making up grand schemes, winning political struggles, and starting successful companies. If we had successfully automated the 'without thinking' tasks like vision and common sense, do you think these remaining kinds of thinking tasks would come easily to AI - like chess in a new domain - or be hard like the 'without thinking' tasks?

Comment author: paulfchristiano 16 September 2014 04:23:55AM *  5 points [-]

I think this is a very good question that should be asked more. I find it particularly important because of the example of automating research, which is probably the task I care most about.

My own best guess is that the computational work that humans are doing while they do the "thinking" tasks is probably very minimal (compared to the computation involved in perception, or to the computation currently available). However, the task of understanding which computation to do in these contexts seems quite similar to the task of understanding which computation to do in order to play a good game of chess, and automating this still seems out of reach for now. So I guess I disagree somewhat with Knuth's characterization.

I would be really curious to get the perspectives of AI researchers involved with work in the "thinking" domains.

Comment author: Sebastian_Hagen 16 September 2014 07:55:20PM *  4 points [-]

I find it particularly important because of the example of automating research, which is probably the task I care most about.

Neither math research nor programming and debugging is so far being taken over by AI, and none of those requires any of the complicated unconscious circuitry for sensory or motor interfacing. The programming application, at least, would also have immediate and major commercial relevance. I think these activities are fairly similar to research in general, which suggests that what one would classically call the "thinking" parts remain hard to implement in AI.

Comment author: xrchz 03 October 2014 08:53:56PM 2 points [-]

They're not yet close to being taken over by AI, but there has been research on automating all of the above. Some possibly relevant keywords: automated theorem proving, and program synthesis.

Comment author: JonathanGossage 17 September 2014 05:57:23PM 2 points [-]

Programming and debugging, although far from trivial, are the easy part of the problem. The hard part is determining what the program needs to do. I think that the coding and debugging parts will not require AGI levels of intelligence; however, deciding what to do definitely needs at least human-like capacity for most non-trivial problems.

Comment author: KatjaGrace 22 September 2014 03:20:18AM 2 points [-]

I'm not sure what you mean when you say 'determining what the program needs to do' - this sounds very general. Could you give an example?

Comment author: Houshalter 07 June 2015 03:07:02AM *  0 points [-]

I think by "things that require thinking" he means logical problems in well-defined domains. Computers can solve logical puzzles much faster than humans, often through sheer brute force - from board games to scheduling to finding the shortest path.
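
Shortest-path search is a concrete instance of the kind of well-defined "thinking" task that yields to mechanical search. A minimal sketch using Python's standard library (the example graph is invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start in a weighted graph (dict of dicts)."""
    dist = {start: 0}
    heap = [(0, start)]           # priority queue of (distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue              # stale queue entry; a shorter path was found
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Made-up example graph
graph = {"a": {"b": 1, "c": 4}, "b": {"c": 2, "d": 5}, "c": {"d": 1}, "d": {}}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```

No insight into the domain is needed; the algorithm just systematically explores the space.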

Of course there are counter examples like theorem proving or computer programming. Though they are improving and starting to match humans at some tasks.

Comment author: KatjaGrace 16 September 2014 01:36:53AM 4 points [-]

The 'optimistic' quote from the Dartmouth Conference seems ambiguous in its optimism to me. They say 'a significant advance can be made in one or more of these problems', rather than that any of them can be solved (as they are often quoted as saying). What constitutes a 'significant advance' varies with optimism, so their statement seems consistent with them believing they can make an arbitrarily small step. The whole proposal is here, if anyone is curious about the rest.

Comment author: lukeprog 16 September 2014 06:21:29AM 2 points [-]

Off the top of my head I don't recall, but I bet Machines Who Think has detailed coverage of those early years and can probably shed some light on how much of an advance the Dartmouth participants expected.

Comment author: KatjaGrace 16 September 2014 01:08:46AM 4 points [-]

How much smarter than a human could a thing be? (p4) How about the same question, but using no more energy than a human? What evidence do we have about this?

Comment author: leplen 22 September 2014 05:23:28PM *  3 points [-]

The problem is that intelligence isn't a quantitative measure. I can't measure "smarter".

If we just want to know about the number of computations, then we can estimate that the human brain performs 10^14 operations/second; a machine operating at the Landauer limit would require about 0.3 microwatts to perform the same number of operations at room temperature.

The human brain uses something like 20 watts of energy (0.2*2000 calories/24 hours).

If that energy were used to perform computations at the Landauer limit, then computational performance would increase by a factor of 6.5*10^7, to approximately 10^21 computations per second. But this only provides information about compute power. It doesn't tell us anything about intelligence.
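
The arithmetic above can be checked directly. A quick sketch, taking the 10^14 ops/s brain estimate as given and 300 K as room temperature:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
OPS = 1e14           # assumed brain operations per second

# Landauer limit: minimum energy to erase one bit at temperature T
e_bit = K_B * T * math.log(2)        # ~2.87e-21 J

# Power needed to run 1e14 ops/s at the Landauer limit
p_landauer = OPS * e_bit             # ~2.9e-7 W, i.e. ~0.3 microwatts

# Brain power: roughly 20% of a 2000 kcal/day metabolism
p_brain = 0.2 * 2000 * 4184 / 86400  # ~19.4 W, commonly rounded to 20 W

# Speedup if the brain's ~20 W ran Landauer-limited computation
factor = p_brain / p_landauer        # ~6.7e7
ops_possible = OPS * factor          # ~7e21 ops/s, i.e. ~10^21-10^22

print(f"{p_landauer:.2e} W  {p_brain:.1f} W  {factor:.2e}  {ops_possible:.2e}")
```

The numbers reproduce the figures in the comment to within rounding.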

Comment author: mvp9 16 September 2014 01:48:44AM 2 points [-]

Another way to get at the same point, I think, is - Are there things that we (contemporary humans) will never understand (from a Quora post)?

I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today - or comparing the earliest recorded examples of reasoning in history to that of modernity. My intuition is that there are many concepts (quantum physics is a popular example, though I'm not sure it's a good one) that even most people today, and certainly in the past, will never comprehend, at least not without massive amounts of effort, and possibly not even then. They simply require too much raw cognitive capacity to appreciate. This is at least implicit in the Singularity hypothesis.

As to the energy issue, I don't see any reason to think that such super-human cognition systems necessarily require more energy - though they may at first.

Comment author: paulfchristiano 16 September 2014 05:21:37AM 3 points [-]

I am generally quite hesitant about using the differences between humans as evidence about the difficulty of AI progress (see here for some explanation).

But I think this comparison is a fair one in this case, because we are talking about what is possible rather than what will be achieved soon. The exponentially improbable tails of the human intelligence distribution are a lower bound for what is possible in the long run, even without using any more resources than humans use. I do expect the gap between the smartest machines and the smartest humans to eventually be much larger than the gap between the smartest human and the average human (on most sensible measures).

Comment author: billdesmedt 16 September 2014 02:01:29AM 3 points [-]

Actually, wrt quantum mechanics, the situation is even worse. It's not simply that "most people ... will never comprehend" it. Rather, per Richard Feynman (inventor of Feynman diagrams, and arguably one of the 20th century's greatest physicists), nobody will ever comprehend it. Or as he put it, "If you think you understand quantum mechanics, you don't understand quantum mechanics." (http://en.wikiquote.org/wiki/Talk:Richard_Feynman#.22If_you_think_you_understand_quantum_mechanics.2C_you_don.27t_understand_quantum_mechanics..22)

Comment author: paulfchristiano 16 September 2014 03:53:35AM *  3 points [-]

I object (mildly) to this characterization of quantum mechanics. What notion of "understand" do we mean? I can use quantum mechanics to make predictions, I can use it to design quantum mechanical machines and protocols, I can talk philosophically about what is "going on" in quantum mechanics to more or less the same extent that I can talk about what is going on in a classical theory.

I grant there are senses in which I don't understand this concept, but I think the argument would be more compelling if you could make the same point with a clearer operationalization of "understand."

Comment author: mvp9 16 September 2014 04:44:05AM 1 point [-]

I'll take a stab at it.

We are now used to saying that light is both a particle and a wave. We can use that proposition to make all sorts of useful predictions and calculations. But if you stop and really ponder that for a second, you'll see that it is so far out of the realm of human experience that one cannot "understand" that dual nature in the sense that you "understand" the motion of planets around the sun. "Understanding" in the way I mean is the basis for making accurate analogies and insight. Thus I would argue Kepler was able to use light as an analogy to 'gravity' because he understood both (even though he didn't yet have the math for planetary motion)

Perhaps an even better example is the idea of quantum entanglement: theory may predict, and we may observe, quarks "communicating" at a distance faster than light, but (for now at least) I don't think we have really incorporated it into our (pre-symbolic) conception of the world.

Comment author: paulfchristiano 16 September 2014 05:13:24AM *  6 points [-]

I grant that there is a sense in which we "understand" intuitive physics but will never understand quantum mechanics.

But in a similar sense, I would say that we don't "understand" almost any of modern mathematics or computer science (or even calculus, or how to play the game of go). We reason about them using a new edifice of intuitions that we have built up over the years to deal with the situation at hand. These intuitions bear some relationship to what has come before but not one as overt as applying intuitions about "waves" to light.

As a computer scientist, I would be quick to characterize this as understanding! Moreover, even if a machine's understanding of quantum mechanics is closer to our idea of intuitive physics (in that they were built to reason about quantum mechanics in the same way we were built to reason about intuitive physics) I'm not sure this gives them more than a quantitative advantage in the efficiency with which they can think about the topic.

I do expect them to have such advantages, but I don't expect them to be limited to topics that are at the edge of humans' conceptual grasp!

Comment author: pragmatist 16 September 2014 06:27:55AM *  4 points [-]

The apparent mystery in particle-wave dualism is simply an artifact of using bad categories. It is a misleading historical accident that we hear things like "light is both a particle and a wave" in quantum physics lectures. Really what teachers should be saying is that 'particle' and 'wave' are both bad ways of conceptualizing the nature of microscopic entities. It turns out that the correct representation of these entities is neither as particles nor as waves, traditionally construed, but as quantum states (which I think can be understood reasonably well, although there are of course huge questions regarding the probabilistic nature of observed outcomes). It turns out that in certain experiments quantum states produce outcomes similar to what we would expect from particles, and in other experiments they produce outcomes similar to what we would expect from waves, but that is surely not enough to declare that they are both particles and waves.

I do agree with you that entanglement is a bigger conceptual hurdle.

Comment author: KatjaGrace 16 September 2014 02:01:34AM 1 point [-]

If there are insights that some humans can't 'comprehend', does this mean that society would never discover certain facts had the most brilliant people not existed, or just that they would never be able to understand them in an intuitive sense?

Comment author: ciphergoth 16 September 2014 10:10:59AM 4 points [-]

There are people in this world who will never understand, say, the P?=NP problem no matter how much work they put into it. So to deny the above you'd have to say (along with Greg Egan) that there was some sort of threshold of intelligence akin to "Turing completeness" that only some of humanity had reached, but that once you reached it nothing was in principle beyond your comprehension. That doesn't seem impossible, but it's far from obvious.

Comment author: owencb 19 September 2014 03:27:43PM 3 points [-]

David Deutsch argues for just such a threshold in his book The Beginning of Infinity. He draws on analogies with "jumps to universality" that we see in several other domains.

Comment author: DylanEvans 16 September 2014 03:24:54PM 3 points [-]

I think this is in fact highly likely.

Comment author: ciphergoth 16 September 2014 08:05:06PM 3 points [-]

I can see some arguments in favour. We evolve along for millions of years and suddenly, bang, in 50ka we do this. It seems plausible we crossed some kind of threshold - and not everyone needs to be past the threshold for the world to be transformed.

OTOH, the first threshold might not be the only one.

Comment author: KatjaGrace 22 September 2014 03:29:37AM 1 point [-]

If some humans achieved any particular threshold of anything, and meeting the threshold was not strongly selected for, I might expect there to always be some humans who didn't meet it.

Comment author: rlsj 16 September 2014 02:45:10AM 1 point [-]

"Does this mean that society would never discover certain facts had the most brilliant people not existed?"

Absolutely! If they or their equivalent had never existed in circumstances of the same suggestiveness. My favorite example of this uniqueness is the awesome imagination required first to "see" how stars appear when located behind a black hole -- the way they seem to congregate around the event horizon. Put another way: the imaginative power able to propose star deflections that needed a solar eclipse to prove.

Comment author: rcadey 21 September 2014 08:19:55PM 1 point [-]

"How much smarter than a human could a thing be?" - almost infinitely if it consumed all of the known universe

"How about the same question, but using no more energy than a human?" - again the same answer: if we assume intelligence to be computable, then no energy is required (http://www.research.ibm.com/journal/rd/176/ibmrd1706G.pdf) if we use reversible computing. Once we have an AI that is smarter than a human, it would soon design something that is smarter but more energy-efficient?

Comment author: leplen 22 September 2014 05:25:33PM 3 points [-]

This link appears not to work, and it should be noted that "zero-energy" computing is at this point predominantly a thought experiment. A "zero-energy" computer would have to operate in the adiabatic limit, which is the technical term for "infinitely slowly."

Comment author: KatjaGrace 22 September 2014 03:36:25AM 1 point [-]

Anders Sandberg has some thoughts on physical limits to computation which might be relevant, but I admit I haven't read them yet: http://www.jetpress.org/volume5/Brains2.pdf

Comment author: KatjaGrace 16 September 2014 01:02:28AM 4 points [-]

What is the relationship between economic growth and AI? (Why does a book about AI begin with economic growth?)

Comment author: ciphergoth 16 September 2014 08:43:28AM 4 points [-]

Why does a book about AI begin with economic growth?

I don't think it's really possible to make strong predictions about the impact of AI by looking at the history of economic growth.

This introduction sets the reader's mind onto subjects of very large scope: the largest events over the entirety of human history. It reminds the reader that very large changes have already happened in history, so it would be a mistake to be very confident that there will never again be a very large change. And, frankly, it underlines the seriousness of the book by talking about what is uncontroversially a Serious Topic, so that they are less likely to think of machines taking over the world as a frivolous idea when it is raised.

Comment author: AshokGoel 16 September 2014 01:48:48AM *  3 points [-]

I haven't read the book yet, but based on the summary here (and for what it is worth), I found the jump from 1-5 under economic growth above to 6 a little unconvincing.

Comment author: mvp9 16 September 2014 02:15:07AM 4 points [-]

I find the whole idea of predicting AI-driven economic growth based on analysis of all of human history as a single set of data really unconvincing. It is one thing to extrapolate up-take patterns of a particular technology based on similar technologies in the past. But "true AI" is so broad, and, at least on many accounts, so transformative, that making such macro-predictions seems a fool's errand.

Comment author: KatjaGrace 16 September 2014 03:29:19AM 4 points [-]

If you knew AI to be radically more transformative than other technologies, I agree that predictions based straightforwardly on history would be inaccurate. If you are unsure how transformative AI will be though, it seems to me to be helpful to look at how often other technologies have made a big difference, and how much of a difference they have made. I suspect many technologies would seem transformative ahead of time - e.g. writing, but seem to have made little difference to economic growth.

Comment author: paulfchristiano 16 September 2014 03:59:07AM 2 points [-]

Here is another way of looking at things:

  1. From the inside it looks like automating the process of automation could lead to explosive growth.
  2. Many simple endogenous growth models, if taken seriously, tend to predict explosive growth at finite time. (Including the simplest ones.)
  3. A straightforward extrapolation of historical growth suggests explosive growth in the 21st century (depending on whether you read the great stagnation as a permanent change or a temporary fluctuation).

You might object to any one of those lines of arguments on their own, but taken together the story seems compelling to me (at least if one wants to argue "We should take seriously the possibility of explosive growth.")
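
Point 2's "explosive growth at finite time" can be made concrete with the textbook hyperbolic-growth case: if output obeys dY/dt = a·Y^(1+ε) with ε > 0, the closed-form solution Y(t) = Y0·(1 − ε·a·Y0^ε·t)^(−1/ε) diverges at the finite time t* = 1/(ε·a·Y0^ε). A quick numeric check (the parameters are invented for illustration, not calibrated to any data):

```python
def blowup_time(y0, a, eps):
    """Finite-time singularity of dY/dt = a * Y**(1 + eps), eps > 0."""
    return 1.0 / (eps * a * y0**eps)

def y(t, y0, a, eps):
    """Closed-form solution, valid only for t < blowup_time(y0, a, eps)."""
    return y0 * (1.0 - eps * a * y0**eps * t) ** (-1.0 / eps)

y0, a, eps = 1.0, 0.02, 0.5       # made-up parameters
t_star = blowup_time(y0, a, eps)  # 100.0 with these parameters
print(t_star, y(0.99 * t_star, y0, a, eps))  # output explodes as t -> t_star
```

With ε = 0 the same equation gives ordinary exponential growth, which never blows up; any positive ε, however small, produces a finite-time singularity.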

Comment author: mvp9 16 September 2014 05:13:05AM 3 points [-]

Oh, I completely agree with the prediction of explosive growth (or at least its strong likelihood), I just think (1) or something like it is a much better argument than (2) or (3).

Comment author: KatjaGrace 16 September 2014 03:30:14AM 1 point [-]

Would you care to elaborate?

Comment author: cameroncowan 19 October 2014 06:40:17PM 0 points [-]

A good AI would cause explosive economic growth in a variety of areas. We lose a lot of money to human error and so on.

Comment author: Larks 19 September 2014 01:01:31AM 3 points [-]

In fact for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th Century! The time since 1950 has been strange apparently.

(Forgive me for playing with anthropics, a tool I do not understand) - maybe something happened to all the observers in worlds that didn't see stagnation set in during the '70s. I guess this is similar to the common joke 'explanation' for the bursting of the tech bubble.

Comment author: KatjaGrace 16 September 2014 04:08:58AM *  3 points [-]

Without the benefit of hindsight, which past technologies would you have expected to make a big difference to human productivity? For example, if you think that humans' tendency to share information through language is hugely important to their success, then you might expect the printing press to help a lot, or the internet.

Relatedly, if you hadn't already been told, would you have expected agriculture to be a bigger deal than almost anything else?

Comment author: Lumifer 16 September 2014 08:04:35PM 3 points [-]

That's an impossible question -- we have no capability to generate clones of ourselves with no knowledge of history. The only thing you can get as answers are post-factum stories.

An answerable version would be "which past technologies at that time they appeared did people expect to be a big deal or no big deal?" But that answer requires a lot of research, I think.

Comment author: ciphergoth 16 September 2014 10:13:17AM 1 point [-]

I don't understand this question!

Comment author: KatjaGrace 16 September 2014 02:50:20PM 1 point [-]

Sorry! I edited it - tell me if it still isn't clear.

Comment author: rlsj 16 September 2014 07:49:38PM 2 points [-]

Please, Madam Editor: "Without the benefit of hindsight," what technologies could you possibly expect?

The question should perhaps be, What technology development made the greatest productive difference? Agriculture? IT? Et alia? "Agriculture" if your top appreciation is for quantity of people, which admittedly subsumes a lot; IT if it's for positive feedback in ideas. Electrification? That's the one I'd most hate to lose.

Comment author: ciphergoth 17 September 2014 07:08:23AM 1 point [-]

I'm afraid I'm still confused. Maybe it would help if you could make explicit the connection between this question and the underlying question you're hoping to shed light on!

Comment author: gjm 17 September 2014 08:32:01AM 3 points [-]

In case it helps, here is what I believe to be a paraphrase of the question.

"Consider technological developments in the past. Which of them, if you'd been looking at it at the time without knowing what's actually come of it, would you have predicted to make a big difference?"

And my guess at what underlies it:

"We are trying to evaluate the likely consequences of AI without foreknowledge. It might be useful to have an idea of how well our predictions match up to reality. So let's try to work out what our predictions would have been for some now-established technologies, and see how they compare with how they actually turned out."

To reduce bias one should select the past technologies in a way that doesn't favour ones that actually turned out to be important. That seems difficult, but then so does evaluating them while suppressing what we actually know about what consequences they had...

Comment author: KatjaGrace 22 September 2014 04:08:14AM 1 point [-]

Yes! That's what I meant. Thank you :)

Comment author: cameroncowan 19 October 2014 06:50:06PM 0 points [-]

I think people greatly underestimated beasts of burden and the wheel. We can see that from cultures that didn't have the wheel, like the Incas. The value of treating disease and of medicine was really unrealized until the modern age; it was not as important to the ancients. I also think labor-saving technology in the 19th century was really unrealized. I don't think people realized how much less manual labor we would need within their own lifetimes. There are so many small things that had a huge impact, like the mould-board plow that made farming in North America possible.

Comment author: KatjaGrace 16 September 2014 04:05:13AM 3 points [-]

If you don't know what causes growth mode shifts, but there have been two or three of them and they seem kind of regular (see Hanson 2000, p14), how likely do you think another one is? (p2) How much evidence do you think history gives us about the timing and new growth rate of a new growth mode?

Comment author: jallen 16 September 2014 02:46:41AM 3 points [-]

I'm curious if any of you feel that future widespread use of commercial scale quantum computing (here I am thinking of at least thousands of quantum computers in the private domain with a multitude of programs already written, tested, available, economic and functionally useful) will have any impact on the development of strong A.I.? Has anyone read or written any literature with regards to potential windfalls this could bring to A.I.'s advancement (or lack thereof)?

I'm also curious if other paradigm shifting computing technologies could rapidly accelerate the path toward superintelligence?

Comment author: passive_fist 16 September 2014 04:48:03AM 3 points [-]

I've worked on the D-Wave machine (in that I've run algorithms on it - I haven't actually contributed to the design of the hardware). About that machine, I have no idea if it's eventually going to be a huge deal faster than conventional hardware. It's an open question. But if it can, it would be huge, as a lot of ML algorithms can be directly mapped to D-wave hardware. It seems like a perfect fit for the sort of stuff machine learning researchers are doing at the moment.

About other kinds of quantum hardware, their feasibility remains to be demonstrated. I think we can say with fair certainty that there will be nothing like a 512-qubit fully-entangled quantum computer (what you'd need to, say, crack the basic RSA algorithm) within the next 20 years at least. Personally I'd put my money on >50 years in the future. The problems just seem too hard; all progress has stalled; and every time someone comes up with a way to try to solve them, it just results in a host of new problems. For instance, topological quantum computers were hot a few years ago since people thought they would be immune to some types of decoherence. As it turned out, though, they just introduce sensitivity to new types of decoherence (thermal fluctuations). When you do the math, it turns out that you haven't actually gained much by using a topological framework, and further you can simulate a topological quantum computer on a normal one, so really a TQC should be considered as just another quantum error correction algorithm, of which we already know many.

All indications seem to be that by 2064 we're likely to have a human-level AI. So I doubt that quantum computing will have any effect on AI development (or at least development of a seed AI). It could have a huge effect on the progression of AI though.

Comment author: TRIZ-Ingenieur 18 September 2014 01:45:06AM 2 points [-]

Our human cognition is mainly based on pattern recognition (compare Ray Kurzweil, "How to Create a Mind"). Information stored in the structures of our cranial neural network sometimes waits for decades until a trigger stimulus makes a pattern recognizer fire. Huge numbers of patterns can be stored while most pattern recognizers stay in a sleeping mode, consuming very little energy. Quantum computing with decoherence times on the order of seconds is totally unsuitable for the synergistic task of pattern analysis plus long-term pattern memory over millions of patterns. IBM's newest SyNAPSE chip, with 5.4 billion transistors on a 3.5 cm² chip and only 70 mW power consumption in operation, is far better suited to push technological development toward AI.

Comment author: KatjaGrace 16 September 2014 03:01:33PM *  1 point [-]

All indications seem to be that by 2064 we're likely to have a human-level AI.

What are the indications you have in mind?

Comment author: passive_fist 17 September 2014 06:33:43AM 2 points [-]

Katja, that's a great question, and highly relevant to the weekly reading sessions on Superintelligence that you're hosting. As Bostrom argues, all indications are that the necessary breakthroughs in AI development can at least be seen over the horizon, whereas in my opinion (and I'm an optimist) general quantum computing still requires much larger breakthroughs.

Comment author: NxGenSentience 19 September 2014 05:49:01PM 1 point [-]

From what I have read in open science and tech journals and news sources, general quantum computing seems to be coming faster than the time frame you suggested. I wouldn't be surprised to see it as soon as 2024 in prototypical, alpha, or beta testing, and I think 2034 is a safe bet for wider deployment. Very widespread adoption may come a bit later; and with respect to efforts by governments to control the tech for security reasons, perhaps later here, earlier there.

Comment author: passive_fist 20 September 2014 01:43:41AM 1 point [-]

Scott Aaronson seems to disagree: http://www.nytimes.com/2011/12/06/science/scott-aaronson-quantum-computing-promises-new-insights.html?_r=3&ref=science&pagewanted=all&

FTA: "The problem is decoherence... In theory, it ought to be possible to reduce decoherence to a level where error-correction techniques could render its remaining effects insignificant. But experimentalists seem nowhere near that critical level yet... useful quantum computers might still be decades away"

Comment author: NxGenSentience 20 September 2014 11:44:27AM 1 point [-]

Hi, and thanks for the link. I just read the entire article, which was good for a general news piece and, correspondingly, not definitive about the time frame (therefore, I'd consider it journalistically honest). "...might be decades away..." and "...might not really see them in the 21st century..." come to mind as lower and upper estimates.

I don't want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.

But I still say I have found a significant percentage of articles - in those Nature summary sites, PubMed (oddly, lots of the physical sciences have journals there now too), and "smart layman" publications like New Scientist and the SciAm news site - which continue to run mini-stories about groups nibbling away at the decoherence problem and finding approaches that don't require supercooled, exotic vacuum chambers (some even working toward the possibility of chips).

If 10 percent of these stories have legs and aren't hype, that would mean I have read dozens which might yield prototypes in a 10 - 20 year time window.

The Google - NASA - UCSB joint project seems pretty near term (i.e. not 40 or 50 years down the road).

Given Google's penchant for quietly working away and then unveiling something amazing the world thought was a generation away - like the driverless cars that the Governor and legislature of Michigan (as in, of course, Detroit) are in the process of licensing for larger-scale production and deployment - it wouldn't surprise me if one popped up in 15 years that could begin doing useful work.

Then it's just daisy-chaining and parallelizing, with classical supercomputers doing error correction, pre-forming datasets to exploit what QCs do best, and interleaving that with conventional techniques.

I don't think 2034 is overly optimistic. But, caveat revealed, I am not in the field doing the work, just reading what I can about it.

I am more interested in this: positing that we add them to our toolkit, what can we do that is relevant to creating "interesting" forms of AI?

Thanks for your link to the NYT article.

Comment author: passive_fist 21 September 2014 09:39:03PM 2 points [-]

Part of the danger of reading those articles as someone who is not actively involved in the research is that one gets an overly optimistic impression. They might say they achieved X, without saying they didn't achieve Y and Z. That's not a problem from an academic integrity point of view, since not being able to do Y and Z would be immediately obvious to someone versed in the field. But every new technique comes with a set of tradeoffs, and real progress is much slower than it might seem.

Comment author: paulfchristiano 16 September 2014 04:13:25AM 4 points [-]

Based on the current understanding of quantum algorithms, I think the smart money is on a quadratic (or sub-quadratic) speedup from quantum computers on most tasks of interest for machine learning. That is, rather than taking N^2 time to solve a problem, it can be done in N time. This is true for unstructured search and now for an increasing range of problems that will quite possibly include the kind of local search that is the computational bottleneck in much modern machine learning. Much of the work of serious quantum algorithms people is spreading this quadratic speedup to more problems.

In the very long run quantum computers will also be able to go slightly further than classical computers before they run into fundamental hardware limits (this is beyond the quadratic speedup). I think they should not be considered as fundamentally different than other speculative technologies that could allow much faster computing; their main significance is increasing our confidence that the future will have much cheaper computation.

I think what you should expect to see is a long period of dominance by classical computers, followed eventually by a switching point where quantum computers pass their classical analogs. In principle you might see faster progress after this switching point (if you double the size of your quantum computer, you can do a brute force search that is 4 times as large, as opposed to twice as large with a classical computer), but more likely this would be dwarfed by other differences which can have much more than a factor of 2 effect on the rate of progress. This looks likely to happen long after growth has slowed for the current approaches to building cheaper classical computers.

For domains that experience the full quadratic speedup, I think this would allow us to do brute force searches something like 10-20 orders of magnitude larger before hitting fundamental physical limits.
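To make the quadratic-speedup arithmetic above concrete, here is a minimal sketch of the query counts involved. The constants are illustrative: Grover's algorithm needs roughly (π/4)·√N oracle queries for an unstructured search over N items, versus N examinations classically.

```python
import math

def classical_queries(n):
    # Brute-force unstructured search: examine each of the N candidates.
    return n

def grover_queries(n):
    # Grover's algorithm needs about (pi/4) * sqrt(N) oracle queries.
    return math.ceil((math.pi / 4) * math.sqrt(n))

n = 10**12
print(classical_queries(n))  # 1,000,000,000,000 examinations
print(grover_queries(n))     # about 785,000 queries

# Quadrupling the search space only doubles the quantum query cost -
# which is the "double the computer, search 4x as much" point above.
print(grover_queries(4 * n) / grover_queries(n))  # about 2.0
```

The figures bear out the comment's scaling claim; for actual machine-learning workloads the speedup depends on how much of the computation can be cast as such a search.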

Note that D-wave and its ilk are unlikely to be relevant to this story; we are a good ways off yet. I would even go further and bet on essentially universal quantum computing before such machines become useful in AI research, though I am less confident about that one.

Comment author: lukeprog 16 September 2014 03:48:48AM 2 points [-]

I've seen several papers like "Quantum speedup for unsupervised learning" but I don't know enough about quantum algorithms to have an opinion on the question, really.

Comment author: lukeprog 16 September 2014 05:14:25PM 1 point [-]
Comment author: NxGenSentience 20 September 2014 11:51:13AM 1 point [-]

Luke,

Thanks for posting the link. It's an April 2014 paper, as you know. I just downloaded the PDF and it looks pretty interesting. I'll post my impressions, if I have anything worthwhile to say, either here in Katja's group or up top on LW generally, when I have time to read more of it.

Comment author: NxGenSentience 19 September 2014 05:41:19PM 1 point [-]

Did you read about Google's partnership with NASA and UCSB to build a quantum computer of 1000 qubits?

Technologically exciting, but ... imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential.

That would be catastrophic, for business, economies, governments, individuals, every form of commerce, military communication....

Didn't answer your question, I am sorry, but as a "fan" of quantum computing, and also a person with a long time interest in the quantum zeno effect, free will, and the implications for consciousness (as often discussed by Henry Stapp, among others), I am both excited, yet feel a certain trepidation. Like I do about nanotech.

I am writing a long essay and preparing a video on the topic, but it is a long way from completion. I do think it (QC) will have a dramatic effect on artifactual consciousness platforms, and I am even more certain that it will accelerate superintelligence (which is not at all the same thing, as intelligence and consciousness, in my opinion, are not coextensive).

Comment author: asr 19 September 2014 06:16:56PM *  5 points [-]

Did you read about Google's partnership with NASA and UCSB to build a quantum computer of 1000 qubits?

Technologically exciting, but ... imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential.

My understanding is that quantum computers are known to be able to break RSA and elliptic-curve-based public-key crypto systems. They are not known to be able to break arbitrary symmetric-key ciphers or hash functions. You can do a lot with symmetric-key systems -- Kerberos doesn't require public-key authentication. And you can sign things with Merkle signatures.

There are also a number of candidate public-key cryptosystems that are believed secure against quantum attacks.

So I think we shouldn't be too apocalyptic here.

Comment author: NxGenSentience 20 September 2014 12:03:26PM 3 points [-]

Asr,

Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore.

If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid.

A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity.

I also worry about the legacy problem... all the critical documents encrypted with RSA, PGP, etc., sitting on hard drives, servers, and CD-ROMs, that suddenly become visible to anyone with access to the tech. How do we go about re-encoding all those "eyes only" critical docs into a post-quantum cryptosystem (assuming one is shown practical and reliable), without those documents being "looked at" or opportunistically copied in their limbo state between old and new encrypted status?

Who can we trust to do all this conversion, even once the new algorithms are developed?

This is actually almost intractably messy, at first glance.

Comment author: ChristianKl 19 September 2014 06:41:16PM 1 point [-]

I am writing a long essay and preparing a video on the topic, but it is a long way from completion. I do think it (qc) will have a dramatic effect on artifactual consciousness platforms

What do you mean by artificial consciousness to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful? Which specific mathematical problems do you think are important for artificial consciousness that are better solved via quantum computers than via our current computers?

Comment author: NxGenSentience 19 September 2014 09:07:28PM *  1 point [-]

What do you mean by artificial consciousness to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful?

The claim wasn't that artifactual consciousness isn't (likely to be) sufficient for a kind of intelligence, but that the two are not coextensive. It might have been clearer to say that consciousness is (closer to being) sufficient for intelligence than intelligence (the way computer scientists often use the term) is to being a sufficient condition for consciousness (which it is not at all).

I needn't have restricted the point to artifact-based consciousness, actually. Consider absence seizures (epilepsy) in neurology. A man can seize (lose "consciousness"), get up from his desk, get the car keys, drive to a mini-mart, buy a pack of cigarettes, make polite chat while he gets change from the clerk, drive home (obeying traffic signals), lock up his car, unlock and enter his house, and lie down for a nap, all in an absence-seizure state, and post-ictally recall nothing. (Neurologists are confident these cases withstand all proposals to attribute postictal "amnesia" to memory failure. Indeed, seizures in susceptible patients can be induced, witnessed, EEGed, etc., from start to finish, by neurologists.) Moral: intelligent behavior occurs, consciousness doesn't. Thus, not coextensive. I have other arguments, also.

As to your second question, I'll have to defer an answer for now, because it would be copiously long... though I will try to think of a reply (plus the idea is very complex and needs a little more polish, but I am convinced of its merit). I owe you a reply, though, before we're through with this forum.

Comment author: ChristianKl 20 September 2014 10:19:32AM 1 point [-]

Neurologists are confident these cases withstand all proposals to attribute postictal "amnesia" to memory failure

Is there an academic paper that makes that argument? If so, could you reference it?

Comment author: NxGenSentience 20 September 2014 12:12:17PM 1 point [-]

I have dozens, some of them so good I have actually printed hardcopies of the PDFs-- sometimes misplacing the DOIs in the process.

I will get some, though; some of them are, I believe, required reading for those of us looking at the human brain for lessons about the relationship between "consciousness" and other functions. I have a particularly interesting one (74 pages, but it's a page-turner) whose original computer record I will try to find. I found it and most of the others on PubMed.

If we are in a different thread string in a couple days, I will flag you. I'd like to pick a couple of good ones, so it will take a little re-reading.

Comment author: cameroncowan 19 October 2014 06:43:58PM 0 points [-]

I think it is that kind of thing that we should start thinking about, though. It's the consequences that we have to worry about as much as developing the tech. Too often new things have been created and people have not been mindful of the consequences of their actions. I welcome the discussion.

Comment author: SteveG 16 September 2014 03:35:17AM 1 point [-]

The D-Wave quantum computer solves a general class of optimization problems very quickly. It cannot speed up every computing task, but the class of computing problems that include an optimization task it can accelerate appears to be large.

Many "AI Planning" tasks will be a lot faster with quantum computers. It would be interesting to learn what the impact of quantum computing will be on other specific AI domains like NLP and object recognition.

We also have:

-Reversible computing
-Analog computing
-Memristors
-Optical computing
-Superconductors
-Self-assembling materials

And lithography, or printing, just keeps getting faster on smaller and smaller objects and is going from 2d to 3d.

When Bostrom starts to talk about it, I would like to hear people's opinions about untangling the importance of hardware vs. software in the future development of AI.

Comment author: KatjaGrace 16 September 2014 01:21:45AM *  3 points [-]

What did you find least persuasive in this week's reading?

Comment author: RoboTeddy 16 September 2014 08:42:15AM 8 points [-]

"The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by." (p4)

There isn't much justification for this claim near where it's made. I could imagine it causing a reader to think that the author is prone to believing important things without much evidence -- or that he expects his readers to do so.

(It might help if the author noted that the topic is discussed in Chapter 4)

Comment author: billdesmedt 16 September 2014 01:47:02AM 6 points [-]

Not "least persuasive," but at least a curious omission from Chapter 1's capsule history of AI's ups and downs ("Seasons of hope and despair") was any mention of the 1966 ALPAC report, which singlehandedly ushered in the first AI winter by trashing, unfairly IMHO, the then-nascent field of machine translation.

Comment author: KatjaGrace 16 September 2014 01:21:05AM 3 points [-]

Have you seen any demonstrations of AI which made a big impact on your expectations, or were particularly impressive?

Comment author: ciphergoth 16 September 2014 08:22:08AM 7 points [-]

The Deepmind "Atari" demonstration is pretty impressive https://www.youtube.com/watch?v=EfGD2qveGdQ

Comment author: RichardKennaway 16 September 2014 08:49:22AM 4 points [-]

Deepmind Atari technical paper.

Comment author: mvp9 16 September 2014 01:55:17AM 4 points [-]

Again, this is a famous one, but Watson seems really impressive to me. It's one thing to understand basic queries and do a DB query in response, but its ability to handle indirect questions that would confuse many a person (guilty) was surprising.

On the other hand, its implementation (as described in The Second Machine Age) seems to be just as algorithmic, brittle and narrow as Deep Blue - basically Watson was only as good as its programmers...

Comment author: SteveG 16 September 2014 02:31:37AM 5 points [-]

Along with self-driving cars, Watson's Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.

The capabilities of such a team have risen dramatically since I first studied AI. Charting and forecasting those capabilities is worthwhile.

Having an estimate of what such a team will be able to accomplish in ten years is material to knowing when they will be able to do things we consider dangerous.

After those two demonstrations, what narrow projects could we give a really solid AI team which would stump them? The answer is no longer at all clear. For example, the SAT or an IQ test seem fairly similar to Jeopardy, although the NLP tasks differ.

The Jeopardy system also did not incorporate a wide variety of existing methods and solvers, because they were not needed to answer Jeopardy questions.

In short order an IBM team can incorporate systems which can extract information from pictures and video, for example, into a Watson application.

Comment author: AshokGoel 16 September 2014 01:33:45AM *  3 points [-]

One demonstration of AI that I find impressive is that AI agents can now take and "pass" some intelligence tests. For example, AI agents can now score about as well as a typical American teenager on the Raven's test of human intelligence.

Comment author: TRIZ-Ingenieur 18 September 2014 01:55:58AM 3 points [-]

As long as a chatbot does not understand what it is chatting about, it is not worth real debate. The "pass" is more an indication of how easily we get cheated. When we think while speaking, we easily start waffling; this is normal human behaviour, as are silly jumps in topic. Jumping between topics was this chatbot's trick to hide its non-understanding.

Comment author: negamuhia 16 September 2014 12:28:28PM 2 points [-]

Sergey Levine's research on guided policy search (using techniques such as hidden Markov models to animate, in real time, the movement of a bipedal or quadrupedal character). An example:

Sergey Levine, Jovan Popović. Physically Plausible Simulation for Character Animation. SCA 2012: http://www.eecs.berkeley.edu/~svlevine/papers/quasiphysical.pdf

Comment author: RoboTeddy 16 September 2014 08:06:44AM 2 points [-]

The youtube video "Building Watson - A Brief Overview of the DeepQA Project"

https://www.youtube.com/watch?v=3G2H3DZ8rNc

Comment author: KatjaGrace 16 September 2014 01:11:13AM 3 points [-]

AI seems to be pretty good at board games relative to us. Does this tell us anything interesting? For instance, about the difficulty of automating other kinds of tasks? How about the task of AI research? Some thoughts here.

Comment author: AshokGoel 16 September 2014 01:31:40AM 5 points [-]

Thanks for the nice summary and the questions. I think it is worth noting that AI is good only at some board games (fully observable, deterministic games) and not at others (partially observable, non-deterministic games such as, say, Civilization).

Comment author: paulfchristiano 16 September 2014 04:19:39AM *  5 points [-]

Do you know of a partially observable game for which AI lags behind humans substantially? These examples are of particular interest to me because they would significantly revise my understanding of what problems are hard and easy.

The most prominent games of partial information that I know are Bridge and Poker, and AIs can now win at both of these (and in fact they proved to be much easier than the classic deterministic games). Backgammon is random, and also turned out to be relatively easy - in fact the randomness itself is widely considered to have made the game easy for computers! Scrabble is the other example that comes to mind, where the situation is the same.

For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.

Comment author: Kaj_Sotala 16 September 2014 07:27:04PM 5 points [-]

For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.

Agreed. It's not Civilization, but Starcraft is also partially observable and non-deterministic, and a team of students managed to bring their Starcraft AI to the level of being able to defeat a "top 16 in Europe"-level human player after only a "few months" of work.

The game AIs for popular strategy games are often bad because the developers don't actually have the time and resources to make a really good one, and it's not a high priority anyway - most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.

Comment author: Lumifer 16 September 2014 07:45:11PM 6 points [-]

Starcraft

In RTS games an AI has a large built-in advantage over humans because it can micromanage so much better.

most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally

That's a very valid point: a successful AI in a game is the one which puts up a decent fight before losing.

Comment author: Liso 23 September 2014 04:52:19AM 1 point [-]

Have you played this type of game?

I think that if you play on a big map (freeciv supports really huge ones), then your goals (like in the real world) could be better fulfilled if you play WITH (not against) the AI. For example, managing 5 thousand engineers manually could take several hours per round.

You could explore more concepts in this game (for example geometric growth, the metastasis method of spreading a civilisation, etc., and of course cooperation with some type of AI)...


Comment author: cameroncowan 19 October 2014 06:34:56PM 0 points [-]

I think it would be easy to create a Civilization AI that would choose to grow on a certain path with a certain win-style in mind. So if the AI picks military win then it will focus on building troops and acquiring territory and maintaining states of war with other players. What might be hard is other win states like diplomatic or cultural because those require much more intuitive and nuanced decision making without a totally clear course of action.

Comment author: Larks 19 September 2014 12:58:06AM 5 points [-]

most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.

The popular AI mods for Civ actually tend to make the AIs less thematic - they're less likely to be nice to you just because of a thousand year harmonious and profitable peace, for example, and more likely to build unattractive but efficient Stacks of Doom. Of course there are selection effects on who installs such mods.

Comment author: lackofcheese 20 October 2014 07:14:48AM *  0 points [-]

The game AIs for popular strategy games are often bad because the developers don't actually have the time and resources to make a really good one, and it's not a high priority anyway - most people playing games like Civilization want an AI that they'll have fun defeating, not an AI that actually plays optimally.

I think you're mostly correct on this. Sometimes difficult opponents are needed, but for almost all games that can be trivially achieved by making the AI cheat rather than improving the algorithms. That said, when playing a game vs an AI you do want the AI to at least appear to be intelligent; although humans can often be quite easy to fool with cheating, a good algorithm is still a better way of giving this appearance than a fake. It doesn't have to be optimal, and even if it is you can constrain it enough to make it beatable, or intentionally design different kinds of weaknesses into the AI so that humans can have fun looking for those weaknesses and feel good when they find them. Ultimately, though, the point is that the standard approach of having lots and lots of scripting still tends to get the job done, and developers almost never find the resource expenditure for good AI to be worthwhile.

However, I think that genuinely superhuman AI in games like Starcraft and Civilization is far harder than you imply. For example, in RTS games (as Lumifer has said) the AI has a built-in advantage due to its capacity for micromanagement. Moreover, although the example you cite has an AI built in a "few months" of work beating a high-level human player, I think that was quite likely a one-off occurrence. Beating a human once is quite different from consistently beating a human.

If you look at the results of the AIIDE Man vs Machine matches, the top bots consistently lose every game to Bakuryu (the human representative). According to this report,

In this match it was shown that the true weakness of state of the art StarCraft AI systems was that humans are very adept at recognizing scripted behaviors and exploiting them to the fullest. A human player in Skynet’s position in the first game would have realized he was being taken advantage of and adapted his strategy accordingly, however the inability to put the local context (Bakuryu kiting his units around his base) into the larger context of the game (that this would delay Skynet until reinforcements arrived) and then the lack of strategy change to fix the situation led to an easy victory for the human. These problems remain as some of the main challenges in RTS AI today: to both recognize the strategy and intent of an opponent’s actions, and how to effectively adapt your own strategy to overcome them.

It seems to me that the best AIs in these kinds of games work by focusing on a relatively narrow set of overall strategies, and then executing those strategies as flawlessly as possible. In something like Starcraft the AI's potential for this kind of execution is definitely superhuman, but as the Man vs. Machine matches demonstrate, this really isn't enough.

In the case of the Civilization games, the fact that they aren't real-time removes quite a lot of the advantage that an AI gets in terms of micromanagement. Also, like in Starcraft, classical AI techniques really don't work particularly well due to the massive branching factor.

Granted, taking a similar approach to the Starcraft bots might still work pretty well; I believe there are some degenerate strategies in many of the Civ games that are quite strong on their own, and if you program an AI to execute them with a high degree of precision and good micromanagement, and add some decent reactive play, that might be good enough.

However, unless the game is simply broken due to bad design, I suspect that you would find that, like the Starcraft bots, AIs designed on that kind of idea would still be easily exploited and consistently beaten by the best human players.

Comment author: lackofcheese 20 October 2014 06:24:00AM 0 points [-]

I wouldn't say that poker is "much easier than the classic deterministic games", and poker AI still lags significantly behind humans in several regards. Basically, the strongest poker bots at the moment are designed around solving for Nash equilibrium strategies (of an abstracted version of the game) in advance, but this fails in a couple of ways:
1. These approaches haven't really been extended past 2- or 3-player games.
2. Playing a NE strategy makes sense if your opponent is doing the same, but your opponent almost always won't be. Thus, in order to play better, poker bots should be able to exploit weak opponents.
Both of these are rather nontrivial problems.
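For intuition about what "solving for Nash equilibrium strategies in advance" means, here is a toy version of the computation - a hypothetical helper for a 2x2 zero-sum game. (Real poker solvers work on abstractions with billions of states; only the fixed-point idea carries over.)

```python
from fractions import Fraction

def solve_2x2_zero_sum(a, b, c, d):
    """Row player's equilibrium mixed strategy for a 2x2 zero-sum game
    with row payoff matrix [[a, b], [c, d]] (assumes an interior
    equilibrium, i.e. no pure-strategy saddle point)."""
    # Choose p so the row player's expected payoff is the same
    # whichever column the opponent picks:
    #   p*a + (1-p)*c == p*b + (1-p)*d
    p = Fraction(d - c, a - b - c + d)
    value = p * a + (1 - p) * c
    return p, value

# Matching pennies, payoffs [[1, -1], [-1, 1]]: play 50/50, game value 0.
p, v = solve_2x2_zero_sum(1, -1, -1, 1)
print(p, v)  # 1/2 0
```

An equilibrium strategy like this is unexploitable but, as point 2 notes, it is also indifferent: it makes no attempt to win more from a weak opponent.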

Kriegspiel, a partially observable version of chess, is another example where the best humans are still better than the best AIs, although I'll grant that the gap isn't a particularly big one, and likely mostly has to do with it not being a significant research focus.

Comment author: gallabytes 16 September 2014 02:02:33AM 4 points [-]

Interestingly enough, a team at MIT managed to make an AI that learned how to play from the manual and proceeded to win 80% of its games against the built-in AI, though I don't know which difficulty it was set to, or how the freeciv AI compares to the one in normal Civilization.

Comment author: rlsj 16 September 2014 01:40:36AM 7 points [-]

For anything whose function and sequencing we thoroughly understand, the programming is straightforward and easy, at least in the conceptual sense. That covers most games, including video games. The computer's "side" in a video game, for example, which looks conceptually difficult, most of the time turns out logically to be only decision trees.

The challenge is the tasks we can't precisely define, like general intelligence. The rewarding approach here is to break processes down into identifiable subtasks. A case in point is understanding natural languages, one of whose essential questions is, "What is the meaning of 'meaning'?" In terms of a machine it can only be the content of a subroutine, or pointers to subroutines. The input problem, converting sentences into sets of executable concepts, is thus approachable. The output problem, however, converting unpredictable concepts into words, is much tougher. It may involve growing decision trees on the fly.
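The "only decision trees" point can be illustrated with a hypothetical video-game opponent whose entire behavior is a hand-authored tree - every branch anticipated and written down by a human designer:

```python
def npc_action(enemy_visible, health, has_ammo):
    # The whole "AI" is a fixed decision tree; no learning involved.
    if enemy_visible:
        if health < 30:
            return "retreat"
        return "attack" if has_ammo else "melee"
    if health < 100:
        return "heal"
    return "patrol"

print(npc_action(True, 20, True))     # retreat
print(npc_action(False, 100, False))  # patrol
```

Conceptually straightforward, as the comment says - and exactly why such opponents feel predictable once a player has mapped out the branches.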

Comment author: ScottMessick 17 September 2014 11:55:14PM 3 points [-]

I was disappointed to see my new favorite "pure" game Arimaa missing from Bostrom's list. Arimaa was designed to be intuitive for humans but difficult for computers, making it a good test case. Indeed, I find it to be very fun, and computers do not seem to be able to play it very well. In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.

Arimaa's branching factor dwarfs that of Go (which in turn beats every other commonly known example). Since a super-high branching factor is also a characteristic feature of general AI test problems, I think it remains plausible that simple, precisely defined games like Arimaa are good test cases for AI, as long as the branching factor keeps the game out of reach of brute force search.
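Since several comments here turn on branching factors, a back-of-the-envelope sketch of why they matter for brute-force search may help. The per-game branching factors below are commonly cited ballpark figures, not exact values:

```python
# Rough brute-force game-tree sizes for illustrative branching factors.
def tree_size(branching_factor: int, depth: int) -> int:
    """Number of leaf positions a naive search visits at a given depth."""
    return branching_factor ** depth

# Approximate average branching factors (ballpark, not exact).
games = {"chess": 35, "Go": 250, "Arimaa": 17000}
for name, b in games.items():
    print(f"{name}: ~{tree_size(b, 4):.2e} positions at depth 4")
```

Even at depth 4, Arimaa's tree is many orders of magnitude larger than chess's, which is why brute force alone gets nowhere.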

Comment author: Houshalter 07 June 2015 12:08:02PM *  0 points [-]

In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.

Reportedly this just happened recently: http://games.slashdot.org/story/15/04/19/2332209/computer-beats-humans-at-arimaa

Arimaa's branching factor dwarfs that of Go (which in turn beats every other commonly known example).

Go is super close to being beaten, and AIs do very well against all but the best humans.

Comment author: TRIZ-Ingenieur 18 September 2014 12:27:11AM 2 points [-]

This summary of already-superhuman game-playing AIs had impressed me for two weeks. But only until yesterday. John McCarthy is quoted in Vardi (2012) as saying: "As soon as it works, no one calls it AI anymore." (p13)

There is more truth in this than McCarthy may have intended: a tailor-made game-playing algorithm, developed and optimized by generations of scientists and software engineers, is not an AI entity. It is an algorithm. Human beings analyzed the rule set, found abstractions of it, developed evaluation schemes and found heuristics to prune the uncomputably large search tree. With brute force and megawatts of computational power they managed to fill a database with millions of more or less favorable game situations. In direct competition between game-playing algorithm and human being, these pre-computed situations help to find shortcuts in the tree search, achieving superhuman performance in the end.

Is this entity an AI or an algorithm?

  1. Game concept development: human.
  2. Game rule definition and negotiation: human.
  3. Game rule abstraction and translation into computable form: human-designed algorithm.
  4. Evaluation of game situation: human-designed algorithm, computer-aided optimization.
  5. Search tree heuristics: human-designed algorithm, computer-aided optimization.
  6. Database of favorable situations and moves: brute-force tree search on a massively parallel supercomputer.
  7. Detection of favorable situations: human-designed algorithm for pattern matching, computer-aided optimization.
  8. Active playing: fully automatic use of the algorithms and information of points 3-7. No human being involved.
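The pruning heuristics of point 5 are typified by alpha-beta search. A generic sketch follows, with `evaluate`, `moves`, and `apply_move` standing in for the human-designed evaluation function and rule encoding; this is illustrative only, not any particular engine's code:

```python
# Generic minimax search with alpha-beta pruning: the classic human-designed
# mechanism for cutting off branches that cannot affect the final choice.
def alphabeta(state, depth, alpha, beta, maximizing, evaluate, moves, apply_move):
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)          # human-designed evaluation (point 4)
    if maximizing:
        value = float("-inf")
        for m in children:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         evaluate, moves, apply_move))
            alpha = max(alpha, value)
            if alpha >= beta:           # prune: opponent never allows this line
                break
        return value
    else:
        value = float("inf")
        for m in children:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True,
                                         evaluate, moves, apply_move))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```

Every part of this scaffolding is human engineering; only the final tree walk runs without a human in the loop.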

Unsupervised learning, search optimization and pattern matching (points 5-7) make this class of entities weak AIs. A human being playing against such an entity will probably attribute intelligence to it. "Kasparov claims to have seen glimpses of true intelligence and creativity in some of the computer's moves" (p12, Newborn [2011]).

But weak AI is not our focus. Our focus is strong AI, HLAI and superintelligence. It is good to know that human-engineered weak-AI algorithms can achieve superhuman performance. But not a single game-playing weak AI has achieved a human level of intelligence. The following story shows why:

Watch two children, Alice and Bob, playing in the street. They have found white and black pebbles and a piece of chalk. Bob has a faint idea of checkers (other names: "draughts" or German: "Dame") from having seen his elder brother play it. He explains to Alice: "Let's draw a grid of chalk lines on the road and place our pebbles into the fields. I will show you." In a joint effort they draw several straight lines, resulting in a 7x9 grid. Then Bob starts to place his black pebbles into his starting rows as he remembers it. Alice follows suit - but she does not have enough white pebbles to fill her starting rows. They discuss their options and search for more white pebbles. After two minutes of unsuccessful search Bob says: "Let's remove one column, and I'll take two of my black pebbles away." Then Bob explains to Alice how to make moves with her pebbles on the now smaller 7x8 grid. They start playing and enjoy their time. Bob wins most of the games. He changes the rules to give Alice a starting advantage. Alice does not mind losing frequently. They laugh a lot. She loves Bob and is happy for every minute next to him.

This is a real game. It is a full body experience with all senses. These young children manipulate their material world, create and modify abstract rules, develop strategies for winning, communicate and have fun together.

The German Wikipedia entry for "Dame_(Spiel)" lists 3 * 4 * 4 * (3 + many more) * 2 = 288+ orthogonal rule variants. Playing Doppelkopf (a popular 4-player card game in Germany) with people you have never played with takes at least five minutes of rule discussion at the beginning. This demonstrates that developing and negotiating rules is a central part of human game play.

Now suppose you tell 10-year-old Bob: "Alice has to come home with me for lunch. Look, this is Roboana (a strong-AI robot), play with her instead." You guide your girl-like robot over to Bob.

Roboana: "Hi, I'm Roboana, I saw you playing with Alice. It seems to be very funny. What is the game about?"

You, member of the Roboana development team, leave the scene for lunch. Will your maybe-HLAI robot manage the situation with Bob? Will Roboana modify the rules to balance the game if her strategy is too superior before Bob gets annoyed and walks away? Will Bob enjoy his time with Roboana?

Bob, at presumably 10 years old, qualifies only for sub-human intelligence. Within the next 20 years I do not expect any artificial entity to reach even this level of general intelligence. Knowing that algorithms can match the core performance of game play is only the smallest part of the problem. Therefore I prefer calling weak AI what it is: an algorithm.

In our further reading we should try not to forget that aspects of creativity, engineering, programming and social interaction are in most cases more complex than the core problem. Some rules are imprinted into us human beings: what a face looks like, what a fearful face looks like, how a fearful mother smells, how to smile to please, how to scream to alert the mother, how to spit out bitter-tasting food to protect against intoxication. To play with the environment is imprinted into our brains as well. We enjoy manipulating things and observing the outcome with the fullest curiosity. A game is a regulated kind of play. For AI development it is worth widening the focus from games to playing.

Comment author: cameroncowan 19 October 2014 06:39:39PM 1 point [-]

Now we have something! We have something we can actually use! AI must be able to interact with emotional intelligence!

Comment author: lackofcheese 20 October 2014 05:56:46AM 0 points [-]

Although computers beat humans at board games without needing any kind of general intelligence at all, I don't think that invalidates game-playing as a useful domain for AGI research.

The strength of AI in games is, to a significant extent, due to the input of humans in being able to incorporate significant domain knowledge into the relatively simple algorithms that game AIs are built on.

However, it is quite easy to make game AI into a far, far more challenging problem (and, I suspect, a rather more widely applicable one) - consider the design of algorithms for general game playing rather than for any particular game. Basically, think of a game AI that is first given a description of the rules of the game it's about to play, which could be any game, and then must play that game as well as possible.
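A minimal sketch of the kind of interface a general game player would consume: the agent receives the rules at match time and must play from them alone. The names here are illustrative inventions, not the actual General Game Playing (GDL) specification:

```python
# Hypothetical rules interface for a general game player: the agent sees
# only this description at runtime, for a game it has never encountered.
import random
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class GameRules:
    initial: Any                              # starting state
    legal_moves: Callable[[Any], List[Any]]   # state -> available moves
    next_state: Callable[[Any, Any], Any]     # (state, move) -> new state
    is_terminal: Callable[[Any], bool]        # has the game ended?
    score: Callable[[Any], float]             # payoff at a terminal state

def random_player(rules: GameRules, state) -> Any:
    """The weakest possible general player: any legal move will do."""
    return random.choice(rules.legal_moves(state))
```

Anything stronger than `random_player` must extract structure from the rules themselves, which is exactly the step that hand-tuned single-game engines get from their human designers for free.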

Comment author: KatjaGrace 16 September 2014 01:04:42AM 3 points [-]

How large a leap in cognitive ability do you think occurred between our last common ancestor with the great apes, and us? (p1) Was it mostly a change in personal intelligence, or could human success be explained by our greater ability to accumulate knowledge from others in society? How can we tell how much smarter, in the relevant sense, a chimp is than a human? This chapter claims Koko the Gorilla has a tested IQ of about 80 (see table 2).

What can we infer from answers to these questions?

Comment author: gallabytes 16 September 2014 01:28:04AM 4 points [-]

I would bet heavily on accumulation. National average IQ has been going up by about 3 points per decade for quite a few decades, so there were likely times when Koko's score would have been above the human average. Now, I'm inclined to say this doesn't reflect well on the IQ test overall, but I put enough trust in it to say that it's not differences in intelligence that prevented gorillas from reaching the prominence of humans. It might have slowed them down, but given this data it shouldn't have kept them pre-Stone-Age.

Given that the most unique aspect of humans relative to other species seems to be the use of language to pass down knowledge, I don't know what else it really could be. What other major things do we have going for us that other animals don't?

Comment author: ciphergoth 16 September 2014 10:05:46AM 4 points [-]

I think what controls the rate of change is the intelligence of the top 5%, not the average intelligence.

Comment author: gallabytes 16 September 2014 09:11:56PM 4 points [-]

Sure, but I still think that if you elevated the intelligence of a group of chimps to the top 5% of humanity without adding some better form of communication and idea accumulation, it wouldn't matter much.

If Newton were born in ancient Egypt, he might have made some serious progress, but he almost certainly wouldn't have discovered calculus and classical mechanics. Being able to stand on the shoulders of giants is really important.

Comment author: JonathanGossage 17 September 2014 08:08:32PM 3 points [-]

I think that language plus our acquisition of the ability to make quasi-permanent records of human utterances are the biggest differentiators.

Comment author: kgalias 16 September 2014 09:44:14AM 2 points [-]

It is possible, then, that exposure to complex visual media has produced genuine increases in a significant form of intelligence. This hypothetical form of intelligence might be called "visual analysis." Tests such as Raven's may show the largest Flynn gains because they measure visual analysis rather directly; tests of learned content may show the smallest gains because they do not measure visual analysis at all.

Do you think this is a sensible view?

Comment author: gallabytes 16 September 2014 09:07:54PM 1 point [-]

Eh, not especially. IIRC, scores have also had to be renormalized on Stanford-Binet and Weschler tests over the years. That said, I'd bet it has some effect, but I'd be much more willing to bet on less malnutrition, less beating / early head injury, and better public health allowing better development during childhood and adolescence.

That said, I'm very interested in any data that points to other causes behind the Flynn Effect, so if you have any to post don't hesitate.

Comment author: kgalias 16 September 2014 11:27:20PM 2 points [-]

I'm just trying to make sure I understand - I remember being confused about the Flynn effect and about what Katja asked above.

How does the Flynn effect affect our belief in the hypothesis of accumulation?

Comment author: gallabytes 17 September 2014 02:25:08AM 2 points [-]

It just means that the intelligence gap was smaller, potentially much, much smaller, when humans first started developing a serious edge relative to apes. It's not evidence for accumulation per se, but it's evidence against us just being so much smarter from the get-go, and renormalizing makes it function very much like evidence for accumulation.

Comment author: cameroncowan 19 October 2014 06:46:00PM 0 points [-]

I think it was the ability to work together thanks to omega-3s from eating fish among other things. Our ability to create a course of action and execute as a group started us on the path to the present day.

Comment author: PhilGoetz 06 October 2014 11:49:35PM *  2 points [-]

Comments:

  • It would be nice for at least one futurist who shows a graph of GDP to describe some of the many, many difficulties in comparing GDP across years, and to talk about the distribution of wealth. The power-law distribution of wealth means that population growth without a shift in the wealth distribution can look like an exponential increase in wealth, while actually the wealth of all but the very wealthy must decrease to preserve the same distribution. Arguably, this has happened repeatedly in American history.

  • I was very glad Nick mentioned that genetic algorithms are just another kind of hill-climbing, and have no mystical power. I suspect GA is inferior to hillclimbing with multiple random starts in most domains, though I'm ashamed to admit I haven't tested this in any way. GA is interesting not so much as an algorithm, but for how it can be used to classify and give insight into search problems. Ones where GA works better than hillclimbing are (my intuition) probably rare, yet constitute a large proportion of the difficult search problems we find solved by biology.

  • His description of conditionalization, as "setting the new probability of those worlds that are inconsistent with the information received to zero" followed by renormalization, is incorrect in two ways. Conditionalization recomputes the probability of every state, and never sets any probabilities to zero. This latter point is a common enough error that it's distressing to see it here.

  • Showing that a Bayesian agent is impossible to make would be very involved, and not worthwhile. It's more important to argue that a Bayesian agent would usually lose to dumber, faster agents, because the trade-off between speed and correctness is essential when thinking about super-intelligences. Whether the most-successful "super-intelligences" could in fact be intelligent by our definitions is still an important open question. If fast and stupid wins the race in the long run, preserving human values will be difficult.

  • What happened in the late 80s was not that neural nets and GAs performed better than GOFAI; what happened was an argument about which activities represented "intelligence", which the reactive behavior / physical robot / statistical learning people won. Statistics and machine learning are still poor at the problems that GOFAI does well on.

  • "AI" is not a viable field anymore; anyone getting a degree in "artificial intelligence" would find themselves unemployable today. Its territory has been taken over by statistics and "machine learning". I think we do people a disservice to keep talking about machine intelligence using only the term "artificial intelligence", because it mis-directs them into the backwaters of research and development.

Comment author: Lumifer 07 October 2014 12:41:11AM 1 point [-]

I suspect GA is inferior to hillclimbing with multiple random starts in most domains

Simulated annealing is another similar class of optimizers with interesting properties.

As to standard hill-climbing with multiple starts, it fails in the presence of a large number of local optima. If your error landscape is lots of small hills, each restart will get you to the top of the nearest small hill but you might never get to that large range in the corner of your search space.

In any case most domains have their characteristics or peculiarities which make certain search algorithms perform well and others perform badly. Often enough domain-specific tweaks can improve things greatly compared to the general case...
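A toy sketch of hill-climbing with (here deterministic) restarts, the baseline under discussion. The two-hill landscape below illustrates the failure mode Lumifer describes: a single climb from the wrong basin tops out on the small hill, while restarts recover the global optimum. All numbers are arbitrary illustrative choices:

```python
# Hill-climbing over the integers with a fixed set of restart points.
def hill_climb(f, x):
    while True:
        best = max((x - 1, x + 1), key=f)
        if f(best) <= f(x):
            return x            # local optimum reached
        x = best

def restart_climb(f, starts):
    """Climb from each start; keep the best local optimum found."""
    return max((hill_climb(f, s) for s in starts), key=f)

# Two hills: a small one peaking at x = -10, a tall one peaking at x = 20.
f = lambda x: max(5 - abs(x + 10), 10 - abs(x - 20))
```

Here `hill_climb(f, -30)` stalls at the local peak -10 (value 5), while `restart_climb(f, [-30, 10])` finds the global peak at 20 (value 10). With many small hills, though, the number of restarts needed to land in the right basin can grow impractically large, which is Lumifer's point.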

Comment author: Houshalter 07 June 2015 12:56:02PM 0 points [-]

I remember there was a paper co-authored by one of the inventors of genetic algorithms. They tried to come up with a toy problem that would show where genetic algorithms definitely beat hill-climbing. The problem they came up with was extremely contrived. But with a slight modification to hill-climbing to make it slightly less greedy, it worked just as well as or better than the GA.

Statistics and machine learning are still poor at the problems that GOFAI does well on.

We are just starting to see ML successfully applied to search problems. There was a paper on deep neural networks that predict the moves of Go experts 45% of the time. Another paper found deep learning could significantly narrow the search space for automatically finding mathematical identities. Reinforcement Learning is becoming increasingly popular, which is just heuristic search, but very general.

Comment author: tmosley 18 September 2014 12:39:32AM 2 points [-]

With respect to "Growth of Growth", wouldn't the chart ALWAYS look like that, with the near end trailing downwards? The sampling time is decreasing logarithmically, so unless you are sitting right on top of the singularity/production revolution, it should always look like that.

I just got to thinking about what happened around 1950 specifically and couldn't find any real reason for the drop-off right there. WWII was well over, and the gold exchange standard remained for another 21 years; those are the two primary framing events for that timeframe, so far as I can tell.

Comment author: KatjaGrace 16 September 2014 04:10:06AM 2 points [-]

Are there foreseeable developments other than human-level AI which might produce much faster economic growth? (p2)

Comment author: mvp9 16 September 2014 05:19:23AM 4 points [-]

I think the best bets as of today would be truly cheap energy (whether through fusion, ubiquitous solar, etc.) and nano-fabrication. Though it may not happen, we could see these play out on a 20-30 year horizon.

The bumps from this, however, would be akin to the steam engine: dwarfed by (or possibly a result of) the AI.

Comment author: ciphergoth 16 September 2014 08:13:23AM 1 point [-]

The steam engine heralded the Industrial Revolution and a lasting large increase in the doubling rate. I would expect rapid economic growth after either of these inventions, followed by a return to the existing doubling rate.

Comment author: rlsj 16 September 2014 07:30:57PM 2 points [-]

After achieving a society of real abundance, further economic growth will have lost its incentive.

We can argue whether or not such a society is truly reachable, even if only in the material sense. If not, because of human intractability or AGI inscrutability, progress may continue onward and upward. Perhaps here, as in happiness, it's the pursuit that counts.

Comment author: KatjaGrace 22 September 2014 04:05:28AM 1 point [-]

Why do you expect a return to the existing doubling rate in these cases?

Comment author: Sebastian_Hagen 16 September 2014 08:44:01PM *  2 points [-]

Would you count uploads (even if we don't understand the software) as a kind of AI? If not, those would certainly work.

Otherwise, there are still things one could do with human brains. Better brain-computer-interfaces would be helpful, and some fairly mild increases in genome understanding could allow us to massively increase the proportion of people functioning at human genius level.

Comment author: TRIZ-Ingenieur 18 September 2014 12:51:45AM 1 point [-]

For the fastest economic growth it is not necessary to achieve human-level intelligence. It is even a hindrance. Highly complex social behaviour for finding a reproduction partner is not necessary for economic success. A totally unbalanced AI character with highly superhuman skills in creativity, programming, engineering and cheating humans could beat a more balanced AI character and self-improve faster. Today's semantic big-data search is already orders of magnitude faster than human research in a library using a paper catalog. We thus see highly super-human performance at answering questions and low sub-human performance at asking them. Strong AI is so complex that projects on normal business time frames go for the low-hanging fruit. If the outcome of such a project can be called an AI, it is most probably extremely imbalanced in its performance and character.

Comment author: cameroncowan 19 October 2014 06:47:37PM 0 points [-]

Nanotech is the next big thing because you will have lots of tiny self-replicating machines that can quickly work as a group, a kind of hive mind. That's important.

Comment author: KatjaGrace 16 September 2014 04:08:04AM 2 points [-]

Bostrom says that it is hard to imagine the world economy having a doubling time as short as weeks, without minds being created that are much faster and more efficient than those of humans (p2-3). Do you think humans could maintain control of an economy that grew so fast? How fast could it grow while humans maintained control?

Comment author: rlsj 16 September 2014 07:56:58PM 7 points [-]

Excuse me? What makes you think it's in control? Central Planning lost a lot of ground in the Eighties.

Comment author: Liso 19 September 2014 04:13:22AM *  3 points [-]

This is a good point, which I would like to see analysed more precisely. (And I miss a deeper analysis in The Book :) )

Should we count the will (motivation) of today's superpowers - megacorporations - as human or not? (And at what level can they control the economy?)

In other words: Is Searle's Chinese room intelligent? (in the definition The Book uses for (super)intelligence)

And if it is, is it a human or an alien mind?

And could it be superintelligent?

What arguments could we use to prove that none of today's corporations (or states or their secret services) is superintelligent? Think of collective intelligence with computer interfaces! Are they really slow at thinking? How could we measure their IQ?

And could we humans (who?) control it (how?) if they are superintelligent? Could we at least try to implement some moral thinking (or other human values) in their minds? How?

Law? Is law enough to prevent a superintelligent superpower from doing wrong things? (for example, destroying the rain forest because it wants to make more paperclips?)

Comment author: KatjaGrace 22 September 2014 04:19:56AM 2 points [-]

Good question.

I don't think central planning vs. distributed decision-making is relevant though, because it seems to me that either way humans make decisions similarly much: the question is just whether it is a large or a small number making decisions, and who decides what.

I usually think of the situation as there being a collection of (fairly) goal-directed humans, each with different amounts of influence, and a whole lot of noise that interferes with their efforts to do anything. These days humans can lose control in the sense that the noise might overwhelm their decision-making (e.g. if a lot of what happens is unintended consequences due to nobody knowing what's going on), but in the future humans might lose control in the sense that their influence as a fraction of the goal-directed efforts becomes very small. Similarly, you might lose control of your life because you are disorganized, or because you sell your time to an employer. So while I concede that we lack control already in the first sense, it seems we might also lose it in the second sense, which I think is what Bostrom is pointing to (though now I come to spell it out, I'm not sure how similar his picture is to mine).

Comment author: cameroncowan 19 October 2014 06:50:59PM 0 points [-]

The economy is a group of people making decisions based on the actions of others. It's a non-centrally-regulated hive mind.

Comment author: KatjaGrace 16 September 2014 01:19:55AM 2 points [-]

Common sense and natural language understanding are suspected to be 'AI complete'. (p14) (Recall that 'AI complete' means 'basically equivalent to solving the whole problem of making a human-level AI')

Do you think they are? Why?

Comment author: devi 16 September 2014 03:23:38AM 5 points [-]

I think AI-completeness is quite a seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields, I haven't seen anyone actually describe, e.g., how to use an AI with perfect language understanding to produce one that proves theorems or philosophizes.

Spontaneously it feels like everyone here should in principle be able to sketch the outlines of such a program (at least in the case of a base AI with perfect language comprehension that we want to reduce to), probably by some version of teaching the AI as we teach a child, in natural language. I suspect that the details of some of these reductions might still be useful, especially the parts that don't quite seem to work. For while I don't think we'll see perfect machine translation before AGI, I'm much less convinced that there is a reduction from AGI to perfect translation AI. This illustrates what I suspect is an interesting difference between two problem classes we might both want to call AI-complete: the problems human programmers will likely not be able to solve before we create superintelligence, and the problems whose solutions we could (somewhat) easily re-purpose to solve the general problem of human-level AI. These classes look the same in that we shouldn't expect to see problems from either of them solved without an imminent singularity, but differ in that the problems in the latter class could prove to be motivating examples and test cases for AI work aimed at producing superintelligence.

I guess the core of what I'm trying to say is that arguments about AI-completeness have so far sounded like: "This problem is very very hard; we don't really know how to solve it. AI in general is also very very hard, and we don't know how to solve it. So they should be the same." Heuristically there's nothing wrong with this, except we should keep in mind that we could be very mistaken about what is actually hard. I'm just missing the part that goes: "This is very very hard. But if we knew it, this other thing would be really easy."

Comment author: mvp9 16 September 2014 05:37:40AM 3 points [-]

A different (non-technical) way to argue for their reducibility is through analysis of the role of language in human thought. The logic is that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), and so one cannot do one without the other. I believe that's the rationale behind the Turing test.

It's interesting that you mention machine translation though. I wouldn't equate that with language understanding. Modern translation programs are getting very good, and may in time be "perfect" (indistinguishable from competent native speakers), but they do this through pattern recognition and leveraging a massive corpus of translation data - not through understanding it.

Comment author: shullak7 17 September 2014 03:47:36PM 2 points [-]

I think that "the role of language in human thought" is one of the ways that AI could be very different from us. There is research into the way that different languages affect cognitive abilities (e.g. -- https://psych.stanford.edu/~lera/papers/sci-am-2011.pdf). One of the examples given is that, as a native English-speaker, I may have more difficulty learning the base-10 structure in numbers than a Mandarin speaker because of the difference in the number words used in these languages. Language can also affect memory, emotion, etc.

I'm guessing that an AI's cognitive ability wouldn't change no matter what human language it's using, but I'd be interested to know what people doing AI research think about this.

Comment author: NxGenSentience 20 September 2014 02:41:03PM *  2 points [-]

This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA), and was going to post it up top on the outer layer of LW, based on language.

I recall many years ago, there was some brief talk of replacing the QWERTY keyboard with a design that was statistically more efficient in terms of human hand ergonomics in executing movements for the most frequently seen combinations of letters (probably was limited to English, given American parochialism of those days, but still, some language has to be chosen.)

Because of the entrenched base of QWERTY typists, the idea didn't get off the ground. (Thus, we are penalizing countless billions of new and future keyboard users because of the legacy habits of a comparatively small percentage of total current and future keyboard users.)

It got me thinking at the time, though, about whether a suitably designed human language would "open up" more of the brain's inherent capacity for communication. Maybe a larger alphabet, a different set of noun primitives, even a modified grammar.

With respect to IA, might we get a freebie just out of redesigning -- designing from scratch -- a language that was more powerful, communicating on average what, say, English or French communicates, yet with fewer phonemes per concept?

Might we get an average 5 or 10 point equivalent IQ boost by designing a language that is both physically faster (fewer "wait states" while we are listening to a speaker) and has larger conceptual bandwidth?

We could also consider augmenting spoken speech with signing of some sort, to multiply the alphabet. A problem occurs here for unwitnessed speech, where we would have to revert to the new language on its own (still gaining the postulated dividend from that.)

However, for certain kinds of communication, we already know that nonverbal communication accounts for a large share of total communicated meaning and information. We already have to "drop back" in bandwidth every time we communicate like this (in print, exclusively). In scientific and philosophical writing, it doesn't make much difference, fortunately, but still, a new language might be helpful.

Language, like many things that evolve on their own, is a bunch of add-ons (like the biological evolution of organisms), and the result is not necessarily the best that could be done.

Comment author: mvp9 17 September 2014 06:41:20PM 1 point [-]

Lera Boroditsky is one of the premier researchers on this topic. They've also done some excellent work on comparing spatial/time metaphors in English and Mandarin (?), showing that the dominant idioms in each language affect how people cognitively process motion.

But the question is more broad -- whether some form of natural language is required (natural, roughly meaning used by a group in day to day life, is key here)? Differences between major natural languages are for the most part relatively superficial and translatable because their speakers are generally dealing with a similar reality.

Comment author: shullak7 17 September 2014 08:26:19PM 2 points [-]

I think that is one of my questions; i.e., is some form of natural language required? Or maybe what I'm wondering is what intelligence would look like if it weren't constrained by language -- if that's even possible. I need to read/learn more on this topic. I find it really interesting.

Comment author: KatjaGrace 16 September 2014 03:46:22AM 2 points [-]

A somewhat limited effort to reduce tasks to one another in this vein: http://www.academia.edu/1419272/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial

Comment author: billdesmedt 16 September 2014 01:51:38AM *  1 point [-]

Human-level natural language facility was, after all, the core competency by which Turing's 1950 Test proposed to determine whether -- across the board -- a machine could think.

Comment author: mvp9 16 September 2014 01:35:33AM 1 point [-]

Depends on the criteria we place on "understanding." Certainly an AI may act in a way that invites us to attribute 'common sense' to it in some situations, without solving the 'whole problem.' Watson would seem to be a case in point - apparently demonstrating true language understanding within a broad but still strongly circumscribed domain.

Even if we take "language understanding" in the strong sense (i.e. native fluency, including the ability for semantic innovation, things like irony, etc.), there is still the question of phenomenal experience: does having such an understanding entail the experience of such understanding - self-consciousness - and are we concerned with that?

I think that "true" language understanding is indeed "AI complete", but in a rather trivial sense that to match a competent human speaker one needs to have most of the ancillary cognitive capacities of a competent human.

Comment author: KatjaGrace 16 September 2014 01:45:12AM 3 points [-]

Whether we are concerned about the internal experiences of machines seems to depend largely on whether we are trying to judge the intrinsic value of the machines, or judge their consequences for human society. Both seem important.

Comment author: KatjaGrace 16 September 2014 04:11:52AM 1 point [-]

Which arguments do you think are especially strong in this week's reading?

Comment author: KatjaGrace 16 September 2014 04:11:26AM 1 point [-]

Was there anything in particular in this week's reading that you would like to learn more about, or think more about?

Comment author: kgalias 16 September 2014 06:41:18AM 4 points [-]

The terms that I singled out while reading were: Backpropagation, Bayesian network, Maximum likelihood, Reinforcement learning.
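For anyone else looking those terms up, a minimal sketch of one of them - maximum likelihood - may help make it concrete. This is purely illustrative and not from the book: we fit the mean of a Gaussian by searching for the value that maximizes the log-likelihood of some simulated data, and check it against the closed-form answer (the sample mean).

```python
import math
import random

def log_likelihood(data, mu, sigma=1.0):
    """Log-likelihood of the data under a Normal(mu, sigma) model."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma ** 2)
        - (x - mu) ** 2 / (2 * sigma ** 2)
        for x in data
    )

random.seed(0)
data = [random.gauss(3.0, 1.0) for _ in range(1000)]

# Naive grid search for the mu that maximizes the log-likelihood.
candidates = [i / 100 for i in range(0, 601)]
mle = max(candidates, key=lambda mu: log_likelihood(data, mu))

# For a Gaussian, the maximum-likelihood estimate of the mean
# is exactly the sample mean, so the two should nearly coincide.
sample_mean = sum(data) / len(data)
print(mle, sample_mean)
```

The same "pick parameters that make the observed data most probable" idea underlies much of the statistical machine learning the chapter surveys; backpropagation, for instance, can be viewed as an efficient way to climb a likelihood (or loss) surface for neural networks.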

Comment author: KatjaGrace 16 September 2014 03:58:19AM 1 point [-]

Whatever the nature, cause, and robustness of growth modes, the important observation seems to me to be that the past behavior of the economy suggests very much faster growth is plausible.

Comment author: VonBrownie 16 September 2014 01:42:47AM 1 point [-]

Are there any ongoing efforts to model the intelligent behaviour of other organisms besides the human model?

Comment author: lukeprog 16 September 2014 03:50:42AM 1 point [-]

Definitely! See Wikipedia and e.g. this book.

Comment author: VonBrownie 16 September 2014 04:28:11AM 1 point [-]

Thanks... I will check it out further!

Comment author: KatjaGrace 16 September 2014 01:21:38AM *  1 point [-]

What did you find most interesting in this week's reading?

Comment author: VonBrownie 16 September 2014 01:35:50AM 5 points [-]

I found interesting the idea that great leaps toward the creation of AGI might not be a question of greater resources or technological complexity - we might instead be overlooking something relatively simple that could describe human intelligence, as with the Ptolemaic vs. Copernican systems.

Comment author: KatjaGrace 16 September 2014 01:21:29AM *  1 point [-]

Was there anything in this week's reading that you would like someone to explain better?

Comment author: Liso 16 September 2014 06:13:34AM *  3 points [-]

First of all, thanks for your work on this discussion! :)

My proposals:

  • wiki page for collaborative work

There are some points in the book that could be analysed or described better, and some that are probably wrong. We could find them and help improve them; a wiki could help us do that.

  • a better time for Europe and the rest of the world?

But this is probably not a problem. If it is a problem then it is probably not solvable. We will see :)

Comment author: KatjaGrace 22 September 2014 09:26:56PM 1 point [-]

Thanks for your suggestions.

Regarding time, it is alas too hard to fit into everyone's non-work hours. Since the discussion continues for several days, I hope it isn't too bad to get there a bit late. If people would like to coordinate to be here at the same time though, I suggest Europeans pick a more convenient 'European start time', and coordinate to meet each other then.

Regarding a wiki page for collaborative work, I'm afraid MIRI won't be organizing anything like this in the near future. If anyone here is enthusiastic for such a thing, you are most welcome to begin it (though remember that such things are work to organize and maintain!) The LessWrong wiki might also be a good place for some such research. If you want a low maintenance collaborative work space to do some research together, you could also link to a google doc or something for investigating a particular question.

Comment author: TRIZ-Ingenieur 18 September 2014 01:17:28AM 1 point [-]

I strongly support your idea to establish a collaborative work platform. Nick Bostrom's book brings so many not-yet-debated aspects into public discussion that we should support him with input and feedback for the next edition. He threw his hat into the ring, and our debate will push sales of his book. I suspect he would prefer to get comments and suggestions for better explanations in a structured manner.

Comment author: KatjaGrace 16 September 2014 01:07:30AM 1 point [-]

What do you think of I. J. Good's argument? (p4)

Comment author: VonBrownie 16 September 2014 01:23:04AM 2 points [-]

If an artificial superintelligence had access to all the prior steps that led to its current state I think Good's argument is correct... the entity would make exponential progress in boosting its intelligence still further. I just finished James Barrat's AI book Our Final Invention and found it interesting to note that Good towards the end of his life came to see his prediction as more danger than promise for continued human existence.
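Good's scenario can be caricatured with a toy model (my own illustration, with made-up parameters, not from Good or Bostrom): if each improvement the system makes is proportional to its current capability, capability compounds geometrically, whereas a fixed stream of external improvements only grows it linearly.

```python
def recursive_growth(initial=1.0, rate=0.1, steps=50):
    """Toy model: a self-improving system adds capability in
    proportion to its current level (compounding), while an
    externally-improved system gains a fixed increment per step."""
    self_improving = initial
    externally_improved = initial
    for _ in range(steps):
        self_improving += rate * self_improving      # geometric growth
        externally_improved += rate * initial        # arithmetic growth
    return self_improving, externally_improved

fast, slow = recursive_growth()
print(fast, slow)  # roughly 117.4 vs 6.0 after 50 steps
```

The huge gap between the two trajectories is the intuition behind Good's "intelligence explosion": the assumption doing all the work is that improvement rate scales with current capability, which is exactly the premise Katja questions below.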

Comment author: KatjaGrace 22 September 2014 04:24:15AM 1 point [-]

If a human had access to all of the prior steps that led to its current state, would it make progress boosting its intelligence fast enough that other humans didn't have to invent things again?

If not, what's the difference?

Comment author: JonathanGossage 17 September 2014 07:11:42PM 1 point [-]

I think that the process that he describes is inevitable unless we do ourselves in through some other existential risk. Whether this will be for good or bad will largely depend on how we approach the issues of volition and motivation.

Comment author: KatjaGrace 16 September 2014 01:06:36AM 1 point [-]

How should someone familiar with past work in AI use that knowledge to judge how much work is left to be done before reaching human-level AI, or human-level ability at a particular kind of task?

Comment author: billdesmedt 16 September 2014 01:38:27AM 3 points [-]

One way to apply such knowledge might be in differentiating between approaches that are indefinitely extendable and/or expandable and those that, despite impressive beginnings, tend to max out beyond a certain point. (Think of Joe Weizenbaum's ELIZA as an example of the second.)

Comment author: gallabytes 16 September 2014 03:16:46AM 1 point [-]

Do you have any examples of approaches that are indefinitely extendable?

Comment author: billdesmedt 16 September 2014 07:38:39PM 1 point [-]

Whole Brain Emulation might be such an example, at least insofar as nothing in the approach itself seems to imply that it would be prone to get stuck in some local optimum before its ultimate goal (AGI) is achieved.

Comment author: JonathanGossage 17 September 2014 07:25:07PM 1 point [-]

However, Whole Brain Emulation is likely to be much more resource intensive than other approaches, and if so will probably be no more than a transitional form of AGI.