Michael L. Littman is a computer scientist. He works mainly in reinforcement learning, but has done work in machine learning, game theory, computer networking, partially observable Markov decision process (POMDP) solving, computer solving of analogy problems, and other areas. He is currently a professor of computer science and department chair at Rutgers University.
Homepage: cs.rutgers.edu/~mlittman/
Google Scholar: scholar.google.com/scholar?q=Michael+Littman
The Interview:
Michael Littman: A little background on me. I've been an academic in AI for not quite 25 years. I work mainly on reinforcement learning, which I think is a key technology for human-level AI---understanding the algorithms behind motivated behavior. I've also worked a bit on topics in statistical natural language processing (like the first human-level crossword-solving program). I carried out a similar sort of survey when I taught AI at Princeton in 2001 and got some interesting answers from my colleagues. I think the survey says more about the mental state of researchers than it does about the reality of the predictions.
In my case, my answers are colored by the fact that my group sometimes uses robots to demonstrate the learning algorithms we develop. We do that because we find that non-technical people find it easier to understand and appreciate the idea of a learning robot than pages of equations and graphs. But, after every demo, we get the same question: "Is this the first step toward Skynet?" It's a "have you stopped beating your wife" type of question, and I find that it stops all useful and interesting discussion about the research.
Anyhow, here goes:
Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?
Michael Littman:
10%: 2050 (I also think P=NP in that year.)
50%: 2062
90%: 2112
Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?
Michael Littman: epsilon, assuming you mean: P(human extinction caused by badly done AI | badly done AI)
I think complete human extinction is unlikely, but, if society as we know it collapses, it'll be because people are being stupid (not because machines are being smart).
Q3: What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
Michael Littman: epsilon (essentially zero). I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection. It involves experimenting with the world and seeing what works and what doesn't. The world, as they say, is its best model. Anything short of the real world is an approximation that is excellent for proposing possible solutions but not sufficient to evaluate them.
Q3-sub: P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: Ditto.
Q3-sub: P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: 1%. At least 5 years is enough for some experimentation.
Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
Michael Littman: No, I don't think it's possible. I mean, seriously, humans aren't even provably friendly to us and we have thousands of years of practice negotiating with them.
Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?
Michael Littman: In terms of science risks (outside of human fundamentalism, which is the only non-negligible risk I am aware of), I'm most afraid of high-energy physics experiments, then biological agents, then, much lower, information-technology-related work like AI.
Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?
Michael Littman: I think people are currently hypersensitive. As I said, every time I do a demo of any AI ideas, no matter how innocuous, I am asked whether it is the first step toward Skynet. It's ridiculous. Given the current state of AI, these questions come from a simple lack of knowledge about what the systems are doing and what they are capable of. What society is missing is not awareness of risks but the technical understanding to *evaluate* risks. It shouldn't just be the scientists assuring people everything is OK. People should have enough background to ask intelligent questions about the dangers and promise of new ideas.
Q7: Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?
Michael Littman: Slightly subhuman intelligence? What we think of as human intelligence is layer upon layer of interacting subsystems. Most of these subsystems are complex and hard to get right, and getting any one of them right will show very little improvement in the overall system, yet each one takes us a step closer. The last 5 years before human-level intelligence is demonstrated by a machine will be pretty boring, akin to the 5 years between the ages of 12 and 17 in a human's development. Yes, there are milestones, but they will seem minor compared to the first few years of rapid improvement.
If I were an AI in such a situation, I'd make a modified copy of myself (or of the relevant modules) interfaced with a simulation environment containing some physics-based puzzle to solve, such that it only gets a video feed and only has some simple controls (say, have it play Portal; the exact challenge is largely irrelevant, just something that requires general intelligence). A modified AI that performs better (learns faster, comes up with better solutions) in a wide variety of simulated environments will probably also work better in the real world.
Even if the combination of parameters that makes for functional intelligence is very fragile, i.e. the search space is high-dimensional and the "surface" is very jagged, it's still a search space that can be explored and mapped.
That's a bit hand-wavy, but enough to get me to suspect that an agent that can self-modify and run simulations of itself has a non-negligible chance of self-improving successfully (for a broad meaning of "successfully" that includes accidentally rewriting the utility function, as long as the resulting system is more powerful).
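To make the loop described above concrete, here is a minimal sketch, assuming the agent can be boiled down to a parameter vector and that "performs better" can be boiled down to a numeric score averaged over toy simulated environments; the environments, the scoring, and the function names are hypothetical placeholders, not anyone's actual proposal.

    # Toy caricature of self-modification via simulated evaluation: treat the
    # agent as a parameter vector, score modified copies across a battery of
    # simulated environments, and keep whichever copy scores best.
    # Everything here is a placeholder, not a model of any real AGI.
    import random

    def evaluate(params, env_seed):
        # Placeholder for "run a copy of the agent in one simulated puzzle
        # environment and measure how well it does".
        rng = random.Random(env_seed)
        target = [rng.uniform(-1, 1) for _ in params]  # hidden optimum of this toy environment
        return -sum((p - t) ** 2 for p, t in zip(params, target))

    def average_score(params, n_envs=20):
        # Score a candidate across many environments, on the idea that a
        # modification that helps broadly is more likely to help in the real world.
        return sum(evaluate(params, seed) for seed in range(n_envs)) / n_envs

    def self_improve(params, steps=2000, step_size=0.05):
        # Hill-climb over the agent's own parameters: propose a modified copy,
        # keep it only if it scores better in simulation.
        best, best_score = list(params), average_score(params)
        for _ in range(steps):
            candidate = [p + random.gauss(0, step_size) for p in best]
            score = average_score(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score

    if __name__ == "__main__":
        initial = [random.uniform(-1, 1) for _ in range(8)]
        improved, final_score = self_improve(initial)
        print("score before:", round(average_score(initial), 3),
              "after:", round(final_score, 3))

Whether scores earned in simulation transfer to the real world is, of course, exactly the point Littman disputes above.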
Meaning, a 1% chance of superhuman intelligence within 5 years, right?
Sorry, I meant to say that it does not seem unreasonable to me that an AGI might take five years to self-improve. 1% does seem unreasonably low. I'm not sure what probability I would assign to "superhuman AGI in 5 years", but anything under, say, 40% seems quite low.