Michael L. Littman is a computer scientist. He works mainly in reinforcement learning, but has also done work in machine learning, game theory, computer networking, partially observable Markov decision process (POMDP) solving, computer solving of analogy problems, and other areas. He is currently a professor of computer science and department chair at Rutgers University.
Homepage: cs.rutgers.edu/~mlittman/
Google Scholar: scholar.google.com/scholar?q=Michael+Littman
The Interview:
Michael Littman: A little background on me. I've been an academic in AI for not-quite 25 years. I work mainly on reinforcement learning, which I think is a key technology for human-level AI---understanding the algorithms behind motivated behavior. I've also worked a bit on topics in statistical natural language processing (like the first human-level crossword solving program). I carried out a similar sort of survey when I taught AI at Princeton in 2001 and got some interesting answers from my colleagues. I think the survey says more about the mental state of researchers than it does about the reality of the predictions.
In my case, my answers are colored by the fact that my group sometimes uses robots to demonstrate the learning algorithms we develop. We do that because we find that non-technical people find it easier to understand and appreciate the idea of a learning robot than pages of equations and graphs. But, after every demo, we get the same question: "Is this the first step toward Skynet?" It's a "have you stopped beating your wife" type of question, and I find that it stops all useful and interesting discussion about the research.
Anyhow, here goes:
Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?
Michael Littman:
10%: 2050 (I also think P=NP in that year.)
50%: 2062
90%: 2112
Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?
Michael Littman: epsilon, assuming you mean: P(human extinction caused by badly done AI | badly done AI)
I think complete human extinction is unlikely, but, if society as we know it collapses, it'll be because people are being stupid (not because machines are being smart).
Q3: What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
Michael Littman: epsilon (essentially zero). I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection. It involves experimenting with the world and seeing what works and what doesn't. The world, as they say, is its best model. Anything short of the real world is an approximation that is excellent for proposing possible solutions but not sufficient to evaluate them.
Q3-sub: P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: Ditto.
Q3-sub: P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: 1%. At least 5 years is enough for some experimentation.
Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
Michael Littman: No, I don't think it's possible. I mean, seriously, humans aren't even provably friendly to us and we have thousands of years of practice negotiating with them.
Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?
Michael Littman: In terms of science risks (outside of human fundamentalism, which is the only non-negligible risk I am aware of), I'm most afraid of high energy physics experiments, then biological agents, then, much lower, information technology related work like AI.
Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?
Michael Littman: I think people are currently hypersensitive. As I said, every time I do a demo of any AI ideas, no matter how innocuous, I am asked whether it is the first step toward Skynet. It's ridiculous. Given the current state of AI, these questions come from a simple lack of knowledge about what the systems are doing and what they are capable of. What society lacks is not awareness of risks but the technical understanding to *evaluate* risks. It shouldn't just be the scientists assuring people everything is OK. People should have enough background to ask intelligent questions about the dangers and promise of new ideas.
Q7: Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?
Michael Littman: Slightly subhuman intelligence? What we think of as human intelligence is layer upon layer of interacting subsystems. Most of these subsystems are complex and hard to get right. Getting any one of them right will show very little improvement in the overall system, but will take us a step closer. The last 5 years before human intelligence is demonstrated by a machine will be pretty boring, akin to the 5 years between the ages of 12 and 17 in a human's development. Yes, there are milestones, but they will seem minor compared to the first few years of rapid improvement.
I don't know what that means. It's always possible to assign probabilities, even if you don't have a clue. And assigning utilities seems trivial, too. Let's say the AI thinks: "Hm. Improving my intelligence will lead to world dominion if a) vast intelligence improvement doesn't cost too many resources, b) it doesn't take too long, and c) intelligence really is as useful as it seems to be, i.e. is more efficient at discovering 'unknown unknowns in design space' than other processes (which seems to me tautological, since intelligence is by definition optimization power divided by resources used; but I could be wrong). Let me assign a 50% probability to each of these claims." (Or less, but it can always assign a probability and can therefore compute an expected utility.)
And so even if P(world dominion | successful, vast intelligence improvement) * P(successful, vast intelligence improvement) is small (and I think it's easily larger than 0.05), the expected utility could be great nonetheless. If the AI is a maximizer, not a satisficer, it will try to take over the world.
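To make that arithmetic concrete, here's a minimal sketch in Python. The three 50% claim probabilities and the 0.05 figure are the ones mentioned above; the payoff and cost numbers are made-up placeholders, not anything derived from the discussion.

```python
# Illustrative only: a toy expected-utility calculation for the argument above.
# The three 50% claim probabilities and the 0.05 threshold come from the text;
# the payoff and cost values are made-up placeholders.

p_cheap_enough = 0.5   # (a) vast improvement doesn't cost too many resources
p_fast_enough = 0.5    # (b) it doesn't take too long
p_useful = 0.5         # (c) intelligence really is as useful as it seems

p_success = p_cheap_enough * p_fast_enough * p_useful  # 0.125, already > 0.05

U_DOMINION = 1e9  # hypothetical payoff of world dominion (arbitrary units)
COST = 1e3        # hypothetical cost of attempting the self-improvement

eu_attempt = p_success * U_DOMINION - COST
print(f"P(success) = {p_success}, EU(attempt) = {eu_attempt:,.0f}")
# A maximizer compares EU(attempt) against EU(do nothing) ~ 0 and therefore tries.
```

Even if you shrink the probabilities a lot, a large enough payoff keeps the expected utility of attempting positive, which is the point of the maximizer-vs-satisficer remark.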
The biggest problem I have with recursive self-improvement is that it's not at all clear that intelligence is an easily "scalable" process. Some folks seem to think that intelligence is a relatively easy algorithm, that with a few mathematical insights it will be possible to "grok" intelligence. But maybe you need many different modules and heuristics for general intelligence (just like the human brain), and there is no "one true and easy path". But I'm just guessing....
I agree that the expected utility of small improvements is easier to compute and that they are easier to implement. But if intelligence is a "scalable" process, you can make those small improvements fairly rapidly, and after 100 of them you should be pretty friggin' powerful.
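To illustrate how quickly small improvements compound if the process really is scalable, here's a tiny sketch; the per-step gains of 1% and 5% are arbitrary assumptions on my part.

```python
# Illustrative only: compounding of 100 small self-improvements, assuming the
# process is "scalable". The per-step gains of 1% and 5% are made-up numbers.

for gain in (0.01, 0.05):
    total = (1 + gain) ** 100
    print(f"{gain:.0%} per step, 100 steps -> {total:.1f}x starting capability")
# 1% per step -> about 2.7x; 5% per step -> about 131.5x
```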
Do you think the discovery of General Relativity was a well-defined problem? And what about writing inspiring novels or creating beautiful art and music? Creativity is a subset of intelligence. There are no creative chimps.
What do you mean by intelligence?
And yes, I believe that people with very high IQ and advanced social skills (another instantiation of high intelligence; chimpanzees just don't have high social skills) are far more likely to take over the world than people with IQ 110, although it's still very unlikely.
My intuition tells me that they are. :-) If you give them the goal of creating 10 paperclips and nothing else, they will try everything to achieve this goal.
But Eliezer's arguments in the AI-Foom debate are far more convincing than mine, so my arguments probably tell you nothing new. The whole discussion is frustrating, because our (subconscious) intuitions seem to differ greatly, and there is little we can do about it. (Just as the debate between Hanson and Yudkowsky was pretty fruitless.) That doesn't mean I don't want to participate in further discussions, but the probability of reaching agreement seems slim. I'll try to do my best. :-)
I'm currently rereading the Sequences and I'm trying to summarize the various arguments and counter-arguments for the intelligence explosion, though that will take some time...