Michael L. Littman is a computer scientist. He works mainly in reinforcement learning, but has also done work in machine learning, game theory, computer networking, partially observable Markov decision process (POMDP) solving, computer solving of analogy problems, and other areas. He is currently a professor of computer science and department chair at Rutgers University.
Homepage: cs.rutgers.edu/~mlittman/
Google Scholar: scholar.google.com/scholar?q=Michael+Littman
The Interview:
Michael Littman: A little background on me. I've been an academic in AI for not-quite 25 years. I work mainly on reinforcement learning, which I think is a key technology for human-level AI---understanding the algorithms behind motivated behavior. I've also worked a bit on topics in statistical natural language processing (like the first human-level crossword solving program). I carried out a similar sort of survey when I taught AI at Princeton in 2001 and got some interesting answers from my colleagues. I think the survey says more about the mental state of researchers than it does about the reality of the predictions.
In my case, my answers are colored by the fact that my group sometimes uses robots to demonstrate the learning algorithms we develop. We do that because we find that non-technical people find it easier to understand and appreciate the idea of a learning robot than pages of equations and graphs. But, after every demo, we get the same question: "Is this the first step toward Skynet?" It's a "have you stopped beating your wife" type of question, and I find that it stops all useful and interesting discussion about the research.
Anyhow, here goes:
Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?
Michael Littman:
10%: 2050 (I also think P=NP in that year.)
50%: 2062
90%: 2112
Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?
Michael Littman: epsilon, assuming you mean: P(human extinction caused by badly done AI | badly done AI)
I think complete human extinction is unlikely, but, if society as we know it collapses, it'll be because people are being stupid (not because machines are being smart).
Q3: What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
Michael Littman: epsilon (essentially zero). I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection. It involves experimenting with the world and seeing what works and what doesn't. The world, as they say, is its best model. Anything short of the real world is an approximation that is excellent for proposing possible solutions but not sufficient to evaluate them.
Q3-sub: P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: Ditto.
Q3-sub: P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: 1%. At least 5 years is enough for some experimentation.
Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
Michael Littman: No, I don't think it's possible. I mean, seriously, humans aren't even provably friendly to us and we have thousands of years of practice negotiating with them.
Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?
Michael Littman: In terms of science risks (outside of human fundamentalism, which is the only non-negligible risk I am aware of), I'm most afraid of high-energy physics experiments, then biological agents, then, much lower, information-technology-related work like AI.
Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?
Michael Littman: I think people are currently hypersensitive. As I said, every time I do a demo of any AI ideas, no matter how innocuous, I am asked whether it is the first step toward Skynet. It's ridiculous. Given the current state of AI, these questions come from a simple lack of knowledge about what the systems are doing and what they are capable of. What society lacks is not awareness of risks but the technical understanding to *evaluate* risks. It shouldn't just be the scientists assuring people everything is ok. People should have enough background to ask intelligent questions about the dangers and promise of new ideas.
Q7: Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?
Michael Littman: Slightly subhuman intelligence? What we think of as human intelligence is layer upon layer of interacting subsystems. Most of these subsystems are complex and hard to get right. If we get them right, they will show very little improvement in the overall system, but will take us a step closer. The last 5 years before human intelligence is demonstrated by a machine will be pretty boring, akin to the 5 years between the ages of 12 and 17 in a human's development. Yes, there are milestones, but they will seem minor compared to the first few years of rapid improvement.
You can't really compare technological designs for which there was no selection pressure, and therefore no optimization, with superficially similar evolutionary inventions. For example, you would have to compare the energy efficiency with which insects or birds can carry a certain amount of weight with a similar artificial means of transport carrying the same weight, or compare the energy efficiency and maneuverability of bird and insect flight with artificial flight. But comparing a train full of hard disk drives with the bandwidth of satellite communication is not useful. Saying that a rocket can fly faster than anything evolution came up with does not generalize to intelligence. And even if I were to accept that argument, there are many counterexamples: the echolocation of bats, the efficiency of photosynthesis, or human gait. Nor did the invention of rockets lead to space colonization; space exploration has actually regressed.
You also mention that human intelligence is primarily responsible for the creation of technology. I do think this is misleading. What is responsible is that we are goal-oriented while evolution is not, and the advance of scientific knowledge is itself largely an evolutionary process. I don't see that intelligence is currently tangible enough to measure whether the return on increased intelligence is proportional to the resources it would take to amplify it. The argument from the gap between chimpanzees and humans is interesting but cannot be used to extrapolate onwards from human general intelligence. It is pure speculation that humans are not Turing complete and that there are levels above our own. That chimpanzees exist, and humans exist, is not proof of the existence of anything that bears, in any relevant respect, the same relationship to a human that a human bears to a chimpanzee.
It is in principle possible to create artificial intelligence that is as capable as human intelligence, but this says nothing about how quickly we will be able to come up with it. I believe that intelligence is fundamentally dependent on the complexity of the goals against which it is measured. Goals give rise to agency and define an agent's drives. As long as we are unable to precisely hard-code a complexity of values similar to that of humans, we won't achieve levels of general intelligence similar to humans.
It is true that humans have created a lot of tools that help them achieve their goals. But it is not clear that incorporating those tools into some sort of self-perception, some sort of guiding agency, is superior to humans using a combination of tools and expert systems. In other words, it is not clear that there exists a class of problems that is solvable by Turing machines in general but not by a combination of humans and expert systems. And if there were such a class, then I think that, just as chimpanzees would be unable to invent science, we won't be able to come up with a meta-heuristic that would allow us to discover algorithms that can solve a class of problems that we can't (other than by using guided evolution).
Besides, recursive self-improvement does not demand sentience, consciousness, or agency. Even if humans are not able to "recursively improve" their own algorithms, we can still "recursively improve" our tools. And the supremacy of recursively improving agents over humans and their tools is a reasonable conjecture, not a fact: it relies largely on the idea that integrating tools into a coherent framework of agency has huge benefits.
I also object to assigning numerical probability estimates to informal arguments and predictions. When faced with data from empirical experiments, or goats behind doors in a gameshow, it is reasonable. But using formalized methods to evaluate informal evidence can be very misleading; for real-world, computationally limited agents it is a recipe for failing spectacularly. Applying formal methods to vague ideas like risks from AI can lead you to dramatically over- or underestimate the evidence, because it forces you to use your intuition to assign numbers to your intuitive judgment of informal arguments.
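The gameshow case is exactly the kind of setting where numerical probabilities are trustworthy, because the setup is fully specified. As a quick illustration (the function names here are mine, not from the interview), a Monte Carlo simulation of the Monty Hall problem recovers the formal answer — switching wins about 2/3 of the time, staying about 1/3:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and is not the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch: bool, trials: int = 100_000) -> float:
    """Estimate the probability of winning under a fixed strategy."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials
```

Because every element of the problem is well defined, the simulated frequencies converge on the analytic values — the contrast with informal, vaguely specified evidence is precisely the point above.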
And as a disclaimer: don't jump to the conclusion that I generally rule out the possibility that very soon someone will stumble upon a simple algorithm that can be run on a digital computer, be improved to self-improve, become superhuman, and take over the universe. All I am saying is that this outcome isn't as inevitable as some seem to believe. If forced, I would probably assign it a 1% probability, though I would still feel uncomfortable about that number (which shouldn't be equated with risks from AI in general; I don't think FOOM is required for AIs to pose a risk).
I think that Eliezer crossed the border of what can sensibly be said about this topic at the present time when he says that an AI will likely invent molecular nanotechnology in a matter of hours or days. Jürgen Schmidhuber is the only person I could find who might agree with that; even Shane Legg is more skeptical. And since I do not yet have the education to evaluate state-of-the-art AI research myself, I will side with the experts and say that Eliezer is likely wrong. Of course, I have no authority here, but I have to make a decision, and I don't feel it would be reasonable to believe Eliezer without restrictions.
Just because the possibility of superhuman AI seems disjunctive on some level doesn't mean that there are no untested assumptions underlying the claims that such an outcome is possible. Reduce the vagueness and you will discover a set of assumptions that need to be true in conjunction.
So, I'm having a lot of difficulty mapping your response to the question I asked. But if I've understood your response, you are arguing that technology analogous to the technology-developing functions of human intelligence might not be possible in principle, or that if developed it might not be capable of significantly greater technology-developing power than human intelligence.
In other words, that assumptions 5 and/or 6 might be false.
I agree that it's possible. Similar things are true of the other examples you give: it's possible that technological ech...