Brandon Rohrer
Sandia National Laboratories
Cited by 536
Education
PhD, Mechanical Engineering, Massachusetts Institute of Technology, 2002.
Neville Hogan, Advisor and Thesis Committee Chair.
MS, Mechanical Engineering, Massachusetts Institute of Technology, 1999.
National Science Foundation Fellowship
BS cum laude, Mechanical Engineering, Brigham Young University, 1997.
Ezra Taft Benson Scholarship (BYU's Presidential Scholarship)
National Merit Scholarship
Experience
Sandia National Laboratories, Albuquerque, NM.
Principal Member of the Technical Staff, 2006 - present
Senior Member of the Technical Staff, 2002 - 2006
University of New Mexico, Albuquerque, NM.
Adjunct Assistant Professor,
Department of Electrical and Computer Engineering, 2007 - present
Homepage: sandia.gov/~brrohre/
Papers: sandia.gov/rohrer/papers.html
Google Scholar: scholar.google.com/scholar?q=Brandon+Rohrer
Tim Finin
Professor of Computer Science and Electrical Engineering, University of Maryland
Cited by 20832
Tim Finin is a Professor of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC). He has over 30 years of experience in applications of Artificial Intelligence to problems in information systems and language understanding. His current research is focused on the Semantic Web, mobile computing, analyzing and extracting information from text and online social media, and on enhancing security and privacy in information systems.
Finin received an S.B. degree in Electrical Engineering from MIT and a Ph.D. degree in Computer Science from the University of Illinois at Urbana-Champaign. He has held full-time positions at UMBC, Unisys, the University of Pennsylvania, and the MIT AI Laboratory. He is the author of over 300 refereed publications and has received research grants and contracts from a variety of sources. He participated in the DARPA/NSF Knowledge Sharing Effort and helped lead the development of the KQML agent communication language and was a member of the W3C Web Ontology Working Group that standardized the OWL Semantic Web language.
Finin has chaired the UMBC Computer Science Department, served on the board of directors of the Computing Research Association, been an AAAI councilor, and chaired several major research conferences. He is currently an editor-in-chief of the Elsevier Journal of Web Semantics.
Homepage: csee.umbc.edu/~finin/
Google Scholar: scholar.google.com/scholar?q=Tim+Finin
Pat Hayes
Pat Hayes has a BA in mathematics from Cambridge University and a PhD in Artificial Intelligence from Edinburgh. He has been a professor of computer science at the University of Essex and of philosophy at the University of Illinois, and the Luce Professor of cognitive science at the University of Rochester. He has been a visiting scholar at Universite de Geneve and the Center for Advanced Study in the Behavioral Sciences at Stanford, and has directed applied AI research at Xerox-PARC, SRI and Schlumberger, Inc. At various times, Pat has been secretary of AISB, chairman and trustee of IJCAI, associate editor of Artificial Intelligence, a governor of the Cognitive Science Society and president of AAAI.
Pat's research interests include knowledge representation and automatic reasoning, especially the representation of space and time; the semantic web; ontology design; image description and the philosophical foundations of AI and computer science. During the past decade Pat has been active in the Semantic Web initiative, largely as an invited member of the W3C Working Groups responsible for the RDF, OWL and SPARQL standards. Pat is a member of the Web Science Trust and of OASIS, where he works on the development of ontology standards.
In his spare time, Pat restores antique mechanical clocks and remodels old houses. He is also a practicing artist, with works exhibited in local competitions and international collections. Pat is a charter Fellow of AAAI and of the Cognitive Science Society, and has professional competence in domestic plumbing, carpentry and electrical work.
Homepage: ihmc.us/groups/phayes/
Selected research: ihmc.us/groups/phayes/wiki/a3817/Pat_Hayes_Selected_Research.html
The Interview:
Brandon Rohrer: This is an entertaining survey. I appreciate the specificity with which you've worded some of the questions. I don't have a defensible or scientific answer to any of the questions, but I've included some answers below that are wild-ass guesses. You got some good and thoughtful responses. I've been enjoying reading them. Thanks for compiling them.
Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?
Explanatory remark to Q1:
P(human-level AI by (year) | no wars ∧ no disasters ∧ beneficial political and economic development) = 10%/50%/90%
Brandon Rohrer: 2032/2052/2072
Tim Finin: 20/100/200 years
Pat Hayes: I do not consider this question to be answerable, as I do not accept this (common) notion of "human-level intelligence" as meaningful. Artificially intelligent artifacts are in some ways superhuman, and have been for many years now; but in other ways, they are sub-human, or perhaps it would be better to say, non-human. They simply differ from human intelligences, and it is inappropriate to speak of "levels" of intelligence in this way. Intelligence is too complex and multifaceted a topic to be spoken of as though it were something like sea level that can be calibrated on a simple linear scale.
If by 'human-level' you mean that the AI will be an accurate simulacrum of a human being, or perhaps a human personality (as is often envisioned in science fiction, e.g. HAL from "2001"), my answer would be: never. We will never create such a machine intelligence, because it is probably technically close to impossible, and not technically useful (note that HAL failed in its mission through being TOO "human": it had a nervous breakdown. Bad engineering.) But mostly because we have absolutely no need to do so. Human beings are not in such short supply at present that it makes sense to try to make artificial ones at great cost. And actual AI work, as opposed to the fantasies often woven around it by journalists and futurists, is not aiming to create such things. A self-driving car is not an artificial human, but it is likely to be a far better driver than any human, because it will not be limited by human-level attention spans and human-level response times. It will be, in these areas, super-human, just as present computers are superhuman at calculation and at keeping track of large numbers of complex patterns.
Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?
Explanatory remark to Q2:
P(human extinction | badly done AI) = ?
(Where 'badly done' = AGI capable of self-modification that is not provably non-dangerous.)
Brandon Rohrer: < 1%
Tim Finin: 0.001
Pat Hayes: Zero. The whole idea is ludicrous.
Q3: What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
Explanatory remark to Q3:
P(superhuman intelligence within hours | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
Brandon Rohrer: < 1%
Tim Finin: 0.0001/0.0001/0.01
Pat Hayes: Again, zero. Self-modification in any useful sense has never been technically demonstrated. Machine learning is possible and indeed is a widely used technique (no longer only in AI), but a learning engine is the same thing after it has learnt something as it was before, just as biological learners are. When we learn, we become more informed, but not more intelligent: similarly with machines.
Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
Explanatory remark to Q4:
How much money is currently required to mitigate possible risks from AI (to be instrumental in maximizing your personal long-term goals, e.g. surviving this century): less / no more / a little more / much more / vastly more?
Brandon Rohrer: No more.
Tim Finin: No.
Pat Hayes: No. There is no reason to suppose that any manufactured system will have any emotional stance towards us of any kind, friendly or unfriendly. In fact, even if the idea of "human-level" made sense, we could have a more-than-human-level super-intelligent machine, and still have it bear no emotional stance towards other entities whatsoever. Nor need it have any lust for power or political ambitions, unless we set out to construct such a thing (which, AFAIK, nobody is doing). Think of an unworldly boffin who just wants to be left alone to think, and does not care a whit for changing the world for better or for worse, and has no intentions or desires, but simply answers questions that are put to it and thinks about things that it is asked to think about. It has no ambition and in any case no means to achieve any far-reaching changes even if it "wanted" to do so. It seems to me that this is what a super-intelligent question-answering system would be like. I see no inherent, even slight, danger arising from the presence of such a device.
Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?
Explanatory remark to Q5:
What existential risk (human extinction type event) is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?
Brandon Rohrer: Evolved variants of currently existing biological viruses and bacteria.
Tim Finin: No.
Pat Hayes: No. Nanotechnology has the potential to make far-reaching changes to the actual physical environment. AI poses no such threat. Indeed, I do not see that AI itself (that is, actual AI work being done, rather than the somewhat uninformed fantasies that some authors, such as Ray Kurzweil, have invented) poses any serious threat to anyone.
I would say that any human-extinction type event is likely to make a serious dent in my personal goals. (But of course I am being sarcastic, as the question as posed seems to me to be ridiculous.)
When I think of the next century, say, the risk I am most concerned about is global warming and the resulting disruption to the biosphere and human society. I do not think that humans will become extinct, but I think that our current global civilization might not survive.
Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?
Brandon Rohrer: High.
Tim Finin: About right.
Pat Hayes: The actual risks are negligible: the perceived risks (thanks to the popularization of such nonsensical ideas as the "singularity") are much greater.
Q7: Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?
Brandon Rohrer: No, but the demonstrated ability of a robot to learn from its experience in a complex and unstructured environment is likely to be a milestone on that path, perhaps signalling that human-level intelligence is 20 years away.
Tim Finin: Passing a well-constructed, open-ended Turing test.
Pat Hayes: No. There are no 'milestones' in AI. Progress is slow but steady, and there are no magic bullets.
Anonymous
The following are replies from experts who either did not answer the questions for various reasons or did not want their answers to be published.
Expert 1: Sorry, I don't want to do an email interview - it is too hard to qualify comments.
Expert 2: Thanks for your inquiry - but as you note I am a roboticist and not a futurist, so I generally try to avoid speculation.
Expert 3: my firmest belief about the timeline for human-level AI is that we can't estimate it usefully. partly this is because i don't think "human level AI" will prove to be a single thing (or event) that we can point to and say "aha there it is!". instead i think there will be a series of human level abilities that are achieved. in fact some already have (though many more haven't).
(on the other hand, i think shooting for human-level AI is a good long term research goal. it doesn't need to be one thing in the end to be a good focus of work.)
another important catch, with respect to the "risk from human level AI" equation, is that i don't think human level AI immediately leads to super-human level AI. we have had many human-level humans working on AI for a long time, and they haven't added up to even a single human-level AI. i don't think it is necessarily (or even likely) the case that a human level AI would have much more luck at making itself smarter than we have had....
Expert 4: Thanks for this - fascinating questions, and I am a great supporter of probability elicitation, but only from people who are well-informed about the subject-matter! And I am afraid this does not include me - I am sure I should know more about this, but I don't, and so am unwilling to express publicly any firm opinion.
Of course in private in a bar I may be more forthcoming!
Expert 5: Interesting questions, I'll enjoy seeing your published results! Unfortunately, now that I work at ****** (through the acquisition of one of my companies, ******), there are policies in place that prohibit me from participating in this kind of exercise.
Expert 6: I don't think I can answer your questions in a meaningful way...
Expert 7: Thanks for your interest. I feel that this is not in the area of my primary expertise. However, I'd refer you to ****** (a colleague, and co-chair of the *******), who I think might be in a better position to give you current and informed answers.
Expert 8: Unfortunately, most of these questions do not have a simple answer, in my opinion, so I can't just say "five years" or whatever -- I would have to write a little essay in order to give an answer that reflects what I really believe. For example, the concept of "roughly human-level intelligence" is a complicated one, and any simple answer would be misleading. By some measures we're already there; by other measures, the goal is still far in the future. And I think that the idea of a "provably friendly" system is just meaningless.
Anyway, good luck with your survey. I'm sure you'll get simple answers from some people, but I suspect that you will find them confusing or confused.
Expert 9: Thank you for your email. I do not feel comfortable answering your questions for a public audience.
Expert 10: sorry no reply for such questions
Expert 11: I regard speculation about AI as a waste of time. We are at an impasse: none of our current techniques seems likely to provide truly human-like intelligence. I think what's needed is a conceptual breakthrough from someone comparable to Newton or Einstein. Until that happens, we're going to remain stuck, although there will be lots of useful technology coming along. It won't be "intelligent" or "conscious" the way humans are, but it might do a really good job of guessing what movies we want to watch or what news stories interest us the most.
Given our current state of ignorance, I feel that speculating about either the timeline or the impact of AI is best left to science fiction writers.
More interviews forthcoming (hopefully). At least one person told me that the questions are extremely important and that he would work out some answers over the next few days.
I think experts' opinions on the possibility of AI self-improvement may covary with their awareness of work on formal, machine-representable concepts of optimal AI design: in particular Solomonoff induction and its application to reinforcement learning in AIXI, and variations of Levin search such as Hutter's algorithm M and Gödel machines. If an expert is unaware of these concepts, that unawareness may help explain away the expert's belief that there is no foreseeable approach to engineering self-improvement-capable AI.
If it's not too late, you should probably include a question to judge the expert's awareness of these concepts in your questionnaires, such as:
"Qn: Are you familiar with formal concepts of optimal AI design which relate to searches over complete spaces of computable hypotheses or computational strategies, such as Solomonoff induction, Levin search, Hutter's algorithm M, AIXI, or Gödel machines?"
...bearing in mind that the presence of such a question may affect their other answers.
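For readers unfamiliar with these formalisms, here is a toy, finite sketch of the weighting scheme behind Solomonoff induction: a Bayesian mixture over a hypothesis class in which each hypothesis gets prior weight 2^(-description length), so that simpler explanations dominate until the data overrules them. The real construction mixes over all computable hypotheses and is incomputable; the four-hypothesis class and all names below are illustrative inventions, not from any library or from the survey.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hypothesis:
    name: str
    complexity: int                          # stand-in for program length, in bits
    prob_one: Callable[[List[int]], float]   # P(next bit = 1 | history)

# A tiny, hand-picked hypothesis class (illustrative only).
HYPOTHESES = [
    Hypothesis("all ones",    2, lambda h: 0.99),
    Hypothesis("all zeros",   2, lambda h: 0.01),
    Hypothesis("alternating", 3,
               lambda h: 0.5 if not h else (0.99 if h[-1] == 0 else 0.01)),
    Hypothesis("fair coin",   1, lambda h: 0.5),
]

def predict_next(history: List[int]) -> float:
    """Posterior-weighted probability that the next bit is 1."""
    weights = []
    for hyp in HYPOTHESES:
        w = 2.0 ** (-hyp.complexity)         # universal-prior-style weight
        for i, bit in enumerate(history):    # Bayesian update on each observed bit
            p1 = hyp.prob_one(history[:i])
            w *= p1 if bit == 1 else (1.0 - p1)
        weights.append(w)
    total = sum(weights)
    return sum(w * hyp.prob_one(history)
               for w, hyp in zip(weights, HYPOTHESES)) / total

# With no data the mixture is symmetric; after an alternating prefix it
# strongly expects the alternation to continue.
print(predict_next([]))                # 0.5
print(predict_next([0, 1, 0, 1, 0]))  # well above 0.5
```

Levin search and Gödel machines add the other ingredient the comment alludes to: allocating compute to candidate programs (or candidate self-rewrites) in proportion to such prior weights, rather than merely predicting with them.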
(This was part of what I was getting at with my analysis of the AAAI panel interim report: "What cached models of the planning abilities of future machine intelligences did the academics have available [...]?" "What fraction of the academics are aware of any current published AI architectures which could reliably reason over plans at the level of abstraction of 'implement a proxy intelligence'?")
Other errors which might explain away an expert's unconcern for AI risk are:
incautious thinking about the full implications of a given optimization criterion or motivational system;
when considering AI self-improvement scenarios, incautious thinking about parameter uncertainty and structural uncertainty in economic descriptions of computational complexity costs and efficiency gains over time (particularly since a general AI would be motivated to investigate many different possible structures for the self-improvement process, including structures one may not oneself have considered, in order to choose a process whose economics are as favorable as possible); and
incomplete reasoning about options for gathering information about the technical factors affecting AI risk scenarios, when weighing the relative costs of delaying AI safety projects until better information is available. (The implicit expectation is that, if the technical factors turn out to imply safety, delaying will have avoided the cost of the safety projects; and, more viscerally, that having advocated delay will prevent one's own loss of prestige, unthinkingly taken as a proxy for correctness, whereas failure to have advocated an immediate start to AI safety projects could not cost one prestige in any event.)
However, it's harder to find uncontroversial questions which would be diagnostic of these errors.
Perhaps an expert's beliefs about the costs of better information and the costs of delay could be assessed with a willingness-to-pay question: for example, a tradeoff in which a hypothetical benefit to everyone now living on Earth could be sacrificed to gain hypothetical perfect understanding of some technical unknowns related to AI risks, or in which such a benefit is gained at the cost of perfect future helplessness against AI risks. However, even this sort of question might seem to frame things hyperbolically.