This summary of already-superhuman game-playing AIs had impressed me for two weeks. But only until yesterday. John McCarthy is quoted in Vardi (2012) as having said: "As soon as it works, no one calls it AI anymore." (p13)
There is more truth in this than McCarthy may have intended: a tailor-made game-playing algorithm, developed and optimized by generations of scientists and software engineers, is no AI entity. It is an algorithm. Human beings analyzed the rule set, found abstractions of it, developed evaluation schemes, and devised heuristics to prune the uncomputably large search tree. With brute force and megawatts of computational power they managed to fill databases with millions of more or less favorable game positions. In direct competition between game-playing algorithm and human being, these pre-computed positions provide shortcuts in the tree search that yield superhuman performance in the end.
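The kind of hand-engineered tree search described above can be sketched in a few lines. This is a minimal, generic alpha-beta pruning illustration, not the code of any particular chess or checkers engine; the toy game tree and its leaf scores are invented for the example.

```python
# Minimal alpha-beta pruning sketch. Leaves are integers standing in
# for a hand-crafted static evaluation of a game position.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):             # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:             # prune: opponent avoids this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                 # prune: we would never choose this
            break
    return value

# A two-ply toy game: the maximizing player picks a branch, then the
# minimizing opponent picks the leaf worst for the maximizer.
tree = [[3, 5], [2, 9], [1, 7]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

The pruning condition is what lets real engines skip huge parts of the search tree: once a branch is provably worse than one already found, none of its remaining children need to be evaluated.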
Is this entity an AI or an algorithm?
Unsupervised learning, search optimization and pattern matching (points 5-7) make this class of entities weak AIs. A human being playing against such an entity will probably attribute intelligence to it. "Kasparov claims to have seen glimpses of true intelligence and creativity in some of the computer's moves" (p12, Newborn [2011]).
But weak AI is not our focus. Our focus is strong AI, HLAI and superintelligence. It is good to know that human-engineered weak-AI algorithms can achieve superhuman performance. Yet not a single game-playing weak AI has achieved a human level of general intelligence. The following story shows why:
Watch two children, Alice and Bob, playing in the street. They have found white and black pebbles and a piece of chalk. Bob has a faint idea of checkers (other names: "draughts", or German: "Dame") from having seen his elder brother play it. He explains to Alice: "Let's draw a grid of chalk lines on the road and place our pebbles into the fields. I will show you." In a joint effort they draw several straight lines, resulting in a 7x9 grid. Then Bob starts to place his black pebbles into his starting rows as he remembers it. Alice follows suit - but she does not have enough white pebbles to fill her starting rows. They discuss their options and search for more white pebbles. After two minutes of unsuccessful searching Bob says: "Let's remove one column, and I will take two of my black pebbles away." Then Bob explains to Alice how to move her pebbles on the now smaller 7x8 board-game grid. They start playing and enjoy their time. Bob wins most of the games, so he changes the rules to give Alice a starting advantage. Alice does not mind losing frequently. They laugh a lot. She loves Bob and is happy for every minute spent next to him.
This is a real game. It is a full-body experience engaging all the senses. These young children manipulate their material world, create and modify abstract rules, develop strategies for winning, communicate, and have fun together.
The German Wikipedia entry for "Dame_(Spiel)" lists 3 · 4 · 4 · (3 + many more) · 2 = 288+ orthogonal rule variants. Playing Doppelkopf (a popular 4-player card game in Germany) with people you have never played with before begins with at least five minutes of discussion about the rules. This demonstrates that developing and negotiating rules is a central part of human game play.
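The variant count is just the product of the independent rule choices listed in that Wikipedia entry, taking the "(3 + many more)" factor at its minimum of 3:

```python
# Lower bound on draughts rule variants: product of independent
# rule choices (factors as given in the text above).
variants = 3 * 4 * 4 * 3 * 2
print(variants)  # 288
```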
Suppose you told ten-year-old Bob: "Alice has to come home with me for lunch. Look, this is Roboana (a strong-AI robot); play with her instead." You guide your girl-like robot over to Bob.
Roboana: "Hi, I'm Roboana. I saw you playing with Alice. It looked like a lot of fun. What is the game about?"
You, a member of the Roboana development team, leave the scene for lunch. Will your maybe-HLAI robot manage the situation with Bob? If her strategy is too superior, will Roboana modify the rules to balance the game before Bob gets annoyed and walks away? Will Bob enjoy his time with Roboana?
Bob is presumably 10 years old and thus qualifies only as sub-human-level intelligence. Within the next 20 years I do not expect any artificial entity to reach even this level of general intelligence. Knowing that algorithms can meet the core performance requirements of game play is only the smallest part of the problem. I therefore prefer to call weak AI what it is: an algorithm.
In our further reading we should try not to forget that aspects of creativity, engineering, programming and social interaction are in most cases more complex than the core problem. Some rules are imprinted into us human beings: what a face looks like, what a fearful face looks like, how a fearful mother smells, how to smile to please, how to scream to alert one's mother, how to spit out bitter-tasting food to protect against poisoning. The urge to play with our environment is imprinted into our brains as well. We enjoy manipulating things and observing the outcomes with the fullest curiosity. A game is a regulated kind of play. For AI development it is worth widening the focus from games to playing.
Now we have something we can actually use! AI must be able to interact with emotional intelligence!
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.
This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)
Summary
Economic growth:
The history of AI:
Notes on a few things
In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later.
In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear.
One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.
Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.
Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.
We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.
Example of how the first 'human-level' AI may surpass humans in many ways.
Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in these interviews.
Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints - you need a cause that was present then but absent at all of the other times in history.
It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact, for the thousand years until 1950, such extrapolation would place an infinite economy in the late 20th century! The time since 1950 has apparently been strange.
(Figure from here)
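The finite-time blow-up in the extrapolation above can be made concrete. If the economy's proportional growth rate is itself proportional to its size, then dx/dt = k·x², whose closed-form solution x(t) = x0 / (1 − k·x0·t) reaches infinity at the finite time t* = 1/(k·x0). The constants below are arbitrary illustrative values, not fitted to historical data:

```python
# Hyperbolic growth: dx/dt = k * x**2 has solution
# x(t) = x0 / (1 - k*x0*t), singular at t* = 1/(k*x0).

k = 0.001   # hypothetical growth constant
x0 = 1.0    # hypothetical initial economy size

t_star = 1.0 / (k * x0)   # the finite-time singularity

def x(t):
    """Economy size at time t < t_star."""
    return x0 / (1.0 - k * x0 * t)

for frac in (0.5, 0.9, 0.99, 0.999):
    print(f"at {frac:.1%} of t*: economy = {x(frac * t_star):.0f}")
```

Each successive doubling takes half as long as the one before, so the total time to infinitely many doublings is a convergent series; that is why the naive extrapolation "predicts" an infinite economy at a finite date.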
You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).
Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
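The won-a-car example above can be written out as a toy calculation. All probabilities here are invented for illustration; the point is only the contrast between maximizing likelihood and weighting by how common each situation is:

```python
# Likelihood: P(email says "you won a car" | situation)
likelihood = {
    "actually_won_car": 0.99,  # winners are almost guaranteed to be told
    "spam":             0.05,  # spam only sometimes uses this line
}

# Prior: how common each situation is before seeing any email
prior = {
    "actually_won_car": 1e-7,  # winning a car is very rare
    "spam":             0.5,   # spam is everywhere
}

# Maximum likelihood estimation picks the situation that makes the
# observation most probable, ignoring how common that situation is:
mle = max(likelihood, key=likelihood.get)

# Bayes' rule additionally weights each likelihood by its prior;
# spam wins because it is vastly more common than winning cars:
posterior = {s: likelihood[s] * prior[s] for s in likelihood}
map_estimate = max(posterior, key=posterior.get)

print(mle)           # actually_won_car
print(map_estimate)  # spam
```

This mirrors the text: the maximum likelihood answer is "you won a car", but once the rarity of actually winning is taken into account, "spam" is the better bet.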
The second large class of algorithms Bostrom mentions are hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
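For readers without the Wolfram download, here is a minimal generic hill-climbing sketch (my own illustration, not the specific demonstration linked above): take a small random step and keep it only if the objective improves.

```python
import random

def f(x):
    return -(x - 3.0) ** 2   # a single smooth hill, peak at x = 3

random.seed(0)               # seeded so the run is reproducible
x = 0.0
for _ in range(10_000):
    candidate = x + random.uniform(-0.1, 0.1)
    if f(candidate) > f(x):  # greedy: accept only improvements
        x = candidate

print(round(x, 2))  # close to 3.0
```

Because acceptance is purely greedy, this climber would get stuck on a local peak if f had several hills, which is why practical variants add random restarts or occasional downhill moves.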
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.