I think this expert is anthropomorphizing too much. To pose an extinction risk, a machine doesn't even need to talk, much less replicate all the accidental complexity of human minds. It just has to be good at physics and engineering.
These tasks seem easier to formalize than many other things humans do: in particular, you could probably figure out the physics of our universe from very little observational data, given a simplicity prior and lots of computing power (or a good enough algorithm). Some engineering tasks are limited by computing power too, e.g. protein folding is an already formalized problem, and a machine that could solve it efficiently could develop nanotech faster than humans do.
We humans probably suck at physics and engineering on an absolute scale, just as we suck at multiplying 32-bit numbers (see Moravec's paradox). And we probably suck at these tasks about as much as it's possible to suck while still building a technological civilization, because otherwise we would have built it at an earlier point in our evolution.
We now know that playing chess doesn't require human-level intelligence as Littman understands it. It may turn out that destroying the world doesn't require human-level intelligence either. A narrow AI could do just fine.
Interesting: this framing moved me more than your previous explanation.
I think Moravec's paradox is more than a selection effect. Face recognition requires more computing power than multiplying two 32-bit numbers, and it's not just because we've learned to formalize one but not the other. We will never get so good at programming computers that our face-recognition programs get faster than our number-multiplication programs.
This is a well-known argument. I got it from Eliezer somewhere, don't remember where.
Yes, and I'm sick of trying to explain to people why "we have no evidence that it is possible to have higher-than-human intelligence" is trivially absurd, for approximately this reason. Hence my encouragement of others saying the same thing.
10%: 2050 (I also think P=NP in that year.) 50%: 2062
+40% in 12 specific years? Now that's a bold distribution.
Not a typo---I was mostly being cheeky. But, I have studied complexity theory quite a bit (mostly in analyzing the difficulty of problems in AI) and my 2050 number came from the following thought experiment. The problem 3-SAT is NP-complete. It can be solved in time 2^n (where n is the number of variables in the formula). Over the last 20 or 30 years, people have created algorithms that solve the problem in c^n for ever-decreasing values of c. If you plot these values of c over time, it's a pretty good fit (or was 15 years ago when I did this analysis) for a line that goes below 1 in 2050. (If that happens, an NP-hard problem would be solvable in polynomial time and thus P=NP.) I don't put much stake in the idea that the future can be predicted by a graph like that, but I thought it was fun to think about. Anyhow, sorry for being flip.
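The extrapolation Littman describes can be sketched in a few lines: fit a line to (year, c) pairs for the best known c^n 3-SAT algorithms and solve for the year the line crosses c = 1. The data points below are invented purely for illustration (deliberately chosen so the crossing lands at 2050); they are not the actual historical record of SAT-algorithm improvements.

```python
# Hypothetical record of the best known base c for c^n 3-SAT
# algorithms by year -- made-up numbers, for illustration only.
data = [(1980, 1.70), (1995, 1.55), (2010, 1.40)]

# Least-squares line c = slope * year + intercept, in pure Python.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
        sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x

# Year at which the fitted line predicts c = 1, i.e. a polynomial-time
# algorithm for an NP-complete problem -- hence "P=NP in that year".
crossing_year = (1.0 - intercept) / slope
print(round(crossing_year))  # 2050 with these made-up numbers
```

The joke in the answer is exactly this move: a straight-line extrapolation of algorithmic progress treated as a prediction about a mathematical truth.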
Q3-sub: P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: 1%. At least 5 years is enough for some experimentation.
That's the answer that surprised me the most. I'm willing to defer to his experience when it comes to the feasibility of human-level AI itself, but human-level AI + a blueprint of how it was built + better resources than a human in terms of raw computing power and memory + having a much closer interface to code than a human does + s...
No, I don't think it's possible. I mean, seriously, humans aren't even provably friendly to us and we have thousands of years of practice negotiating with them.
Not sure this is a fair comparison, for 2 reasons: 1) We don't have the complete source code to human consciousness yet, so we can't do a good analysis of it, and 2) if anything, primates are provably unfriendly to each other (at least outside their tribal group).
EDIT: Yes, I realize that a human genome is sort of a source code to our behavior, but having it without a complete theory of physics is rather like being given the source code to an AI in an unknown format.
The last 5 years before human intelligence is demonstrated by a machine will be pretty boring, akin to the 5 years between the ages of 12 and 17 in a human's development.
Those were some of the most exciting years of my life.
Similarly, I expect the run up to machine intelligence to consist of interesting times.
Is it plausible that fair-to-middling AI could be enough to break civilization? There are a lot of factors, especially whether civilization will become more fragile or more resilient as tech advances, but it does seem to me that profit-maximizing and status-maximizing AI have a lot of possibilities for trouble.
I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection. It involves experimenting with the world and seeing what works and what doesn't.
A common sentiment. Shane Legg even says something similar:
...We then use this fact to prove that although very powerful prediction algorithms exist, they cannot be mathematically discovered due to Gödel incompleteness. Given how fundamental prediction is to intelligence, this result implies that beyond a moderate level...
Homepage: cs.rutgers.edu/~mlittman/
Google Scholar: scholar.google.com/scholar?q=Michael+Littman
The Interview:
Michael Littman: A little background on me. I've been an academic in AI for not-quite 25 years. I work mainly on reinforcement learning, which I think is a key technology for human-level AI---understanding the algorithms behind motivated behavior. I've also worked a bit on topics in statistical natural language processing (like the first human-level crossword solving program). I carried out a similar sort of survey when I taught AI at Princeton in 2001 and got some interesting answers from my colleagues. I think the survey says more about the mental state of researchers than it does about the reality of the predictions.
In my case, my answers are colored by the fact that my group sometimes uses robots to demonstrate the learning algorithms we develop. We do that because we find that non-technical people find it easier to understand and appreciate the idea of a learning robot than pages of equations and graphs. But, after every demo, we get the same question: "Is this the first step toward Skynet?" It's a "have you stopped beating your wife" type of question, and I find that it stops all useful and interesting discussion about the research.
Anyhow, here goes:
Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?
Michael Littman:
10%: 2050 (I also think P=NP in that year.)
50%: 2062
90%: 2112
Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?
Michael Littman: epsilon, assuming you mean: P(human extinction caused by badly done AI | badly done AI)
I think complete human extinction is unlikely, but, if society as we know it collapses, it'll be because people are being stupid (not because machines are being smart).
Q3: What probability do you assign to the possibility that a human-level AGI could self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
Michael Littman: epsilon (essentially zero). I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection. It involves experimenting with the world and seeing what works and what doesn't. The world, as they say, is its best model. Anything short of the real world is an approximation that is excellent for proposing possible solutions but not sufficient to evaluate them.
Q3-sub: P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: Ditto.
Q3-sub: P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Michael Littman: 1%. At least 5 years is enough for some experimentation.
Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous) before attempting to solve artificial general intelligence?
Michael Littman: No, I don't think it's possible. I mean, seriously, humans aren't even provably friendly to us and we have thousands of years of practice negotiating with them.
Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?
Michael Littman: In terms of science risks (outside of human fundamentalism, which is the only non-negligible risk I am aware of), I'm most afraid of high-energy physics experiments, then biological agents, then, much lower, information-technology-related work like AI.
Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?
Michael Littman: I think people are currently hypersensitive. As I said, every time I do a demo of any AI ideas, no matter how innocuous, I am asked whether it is the first step toward Skynet. It's ridiculous. Given the current state of AI, these questions come from a simple lack of knowledge about what the systems are doing and what they are capable of. What society lacks is not awareness of risks but the technical understanding to *evaluate* risks. It shouldn't just be the scientists assuring people everything is OK. People should have enough background to ask intelligent questions about the dangers and promise of new ideas.
Q7: Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?
Michael Littman: Slightly subhuman intelligence? What we think of as human intelligence is layer upon layer of interacting subsystems. Most of these subsystems are complex and hard to get right. If we get them right, they will show very little improvement in the overall system, but will take us a step closer. The last 5 years before human intelligence is demonstrated by a machine will be pretty boring, akin to the 5 years between the ages of 12 and 17 in a human's development. Yes, there are milestones, but they will seem minor compared to the first few years of rapid improvement.