[Click here to see a list of all interviews]

I am emailing experts in order to raise academic awareness of risks from AI and to estimate how academics perceive those risks.

Dr. Pei Wang is trying to build general-purpose AI systems, compare them with human intelligence, analyze their theoretical assumptions, and evaluate their potential and limitations. [Curriculum Vitae] [Pei Wang on the Path to Artificial General Intelligence]

Dr. J. Storrs Hall is an independent scientist and author. His most recent book is Beyond AI: Creating the Conscience of the Machine, published by Prometheus Books. It is about the (possibly) imminent development of strong AI, and the desirability, if and when that happens, that such AIs be equipped with a moral sense and conscience. This is an outgrowth of his essay Ethics for Machines. [Homepage]

Professor Paul Cohen is the director of the School of Information: Science, Technology, and Arts at the University of Arizona. His research is in artificial intelligence. He wants to model human cognitive development in silico, with robots or softbots in game environments as the "babies" he and his colleagues are trying to raise. He is particularly interested in the sensorimotor foundations of human language. Several of his projects in the last decade have developed algorithms for sensor-to-symbol kinds of processing in service of learning the meanings of words, most recently, verbs. He also works in what is called Education Informatics, which includes intelligent tutoring systems, data mining and statistical modeling of students' mastery and engagement, assessment technologies, ontologies for representing student data and standards for content, architectures for content delivery, and so on. [Homepage]

The Interview:

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans at science, mathematics, engineering and programming?

Pei Wang: My estimations are, very roughly, 2020/2030/2050, respectively.

Here by "roughly as good as humans" I mean the AI will follow roughly the same principles as human in information processing, though it does not mean that the system will have the same behavior or capability as human, due to the difference in body, experience, motivation, etc.

J. Storrs Hall: 2020 / 2030 / 2040

Paul Cohen: I wish the answer were simple.  As early as the 1970s, AI programs were making modest scientific discoveries and discovering (or more often, rediscovering) bits of mathematics.  Computer-based proof checkers are apparently common in math, though I don't know anything about them.  If you are asking when machines will function as complete, autonomous scientists (or anything else) I'd say there's little reason to think that that's what we want.  For another few decades we will be developing assistants, amplifiers, and parts of the scientific/creative process.  There are communities who strive for complete and autonomous automated scientists, but last time I looked, a couple of years back, it was "look ma, no hands" demonstrations with little of interest under the hood. On the other hand, joint machine-human efforts, especially those that involve citizen scientists (e.g., Galaxy Zoo, Foldit) are apt to be increasingly productive.

Q2: Once we build AI that is roughly as good as humans at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Pei Wang: After that, AI can become more powerful (in hardware), more knowledgeable, and therefore more capable in problem solving, than human beings. However, there is no evidence to believe that it can be "substantially better" in the principles defining intelligence.

J. Storrs Hall: Difficult in what sense?  Make 20 000 copies of your AI and organize them as Google or Apple. The difficulty is economic, not technical.

Paul Cohen: It isn't hard to do better than humans. The earliest expert systems outperformed most humans. You can't beat a machine at chess, and so on. Google is developing cars that I think will probably drive better than humans. The Google search engine does what no human can.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Pei Wang: Even when AI follows the same principles as humans, and has more computational power and other resources than humans, they won't "overwhelmingly outperform humans" in all activities, due to the difference in hardware, experience, and motivations. There will always be some tasks that humans do better, and others that machines do better.

J. Storrs Hall: A large part of academic research is entertainment and status fights, and it doesn't really matter whether machines are good at that or not.  A large part of scientific research and technical development is experimentation and data gathering, and these are mostly resource-limited rather than smarts-limited. Increasing AI intelligence doesn't address the bottleneck.

Paul Cohen: One fundamental observation from sixty years of AI is that generality is hard, specialization is easy.  This is one reason that Watson is a greater accomplishment than Deep Blue.  Scientists specialize (although, arguably, the best scientists are not ultra-specialists but maintain a broad-enough perspective to see connections that lead to new work).  So a narrow area of science is easier than the common sense that a home-help robot will need.   I think it's very likely that in some areas of science, machines will do much of the creative work and also the drudge work.

Q4: What probability do you assign to the possibility of an AI with initially (professional) human-level competence at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Pei Wang: Though there are speculations about such a "self-modifying to superhuman" scenario, all of them contain various wrong or unsupported assumptions. I haven't been convinced of such a possibility at all. It is possible for AI systems to become more and more capable, but I don't think they will become completely uncontrollable or incomprehensible.

J. Storrs Hall: This depends entirely on when it starts, i.e. what the current marginal cost of computation along the Moore's Law curve is. A reductio ad absurdum: in 1970, when all the computers in the world might possibly have sufficed to run one human-equivalent program, the amount of work it would have had to do to improve to superhuman would have been way out of its grasp. In 2050, it will probably be trivial, since computation will be extremely cheap and the necessary algorithms and knowledge bases will likely be available as open source.
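As a rough editorial illustration of the scale Hall's argument turns on (assuming, purely for the arithmetic, that computation per dollar doubles every two years over the whole period):

$$\frac{\text{cost per computation in 1970}}{\text{cost per computation in 2050}} \approx 2^{(2050-1970)/2} = 2^{40} \approx 10^{12},$$

i.e. under that assumption the same computation would be roughly a trillion times cheaper at the later starting point.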

Paul Cohen: The first step is the hardest:  "human level competence at general reasoning" is our greatest challenge.  I am quite sure that anything that could, say, read and understand what it reads would in a matter of days, weeks or months become vastly more generative than humans.  But the first step is still well beyond our grasp.

Q5: How important is it to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to build AI that is good enough at general reasoning (including science, mathematics, engineering and programming) to undergo radical self-modification?

Pei Wang: I think the idea "to make superhuman AI provably friendly" is similar to the ideas "to make airplanes provably safe" and "to make babies provably ethical" --- though the motivation is respectable, the goal cannot be accurately defined, and the approximate definitions cannot be reached.

What if the Wright brothers had been asked "to figure out how to make the airplane provably safe before attempting to build it", or all parents were asked "to figure out how to make children provably ethical before attempting to have them"?

Since an AI system is adaptive (in my opinion, as well as that of many others), its behaviors won't be fully determined by its initial state or design (nature), but will be strongly influenced by its experience (nurture). You cannot make a friendly AI (whatever that means), but have to educate an AI to become friendly. Even in that case, it cannot be "provably friendly" --- only mathematical conclusions can be proved, and empirical predictions are always fallible.

J. Storrs Hall: This is approximately like saying we need to require a proof, based on someone's DNA sequence, that they can never commit a sin, and that we must not allow any babies to be born until they can offer such a proof.

Paul Cohen:  Same answer as above. Today we can build ultra specialist assistants (and so maintain control and make the ethical decisions ourselves) and we can't go further until we solve the problems of general intelligence -- vision, language understanding, reading, reasoning...

Q6: What probability do you assign to the possibility of human extinction as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created)

Pei Wang: I don't think it makes much sense to talk about "probability" here, except to drop all of its mathematical meaning.

Which discovery is "provably non-dangerous"?  Physics, chemistry, and biology are all responsible for known ways to human extinction. Should we pause all these explorations until they are "provably safe"? How about the use of fire? Would the human species do better without using this "provably dangerous" technique?

AI systems, like all major scientific and technical results, can lead to human extinction, but that is not a reason to stop or pause this research. Otherwise we could not do anything, since every non-trivial action has unanticipated consequences. Though it is important to be aware of the potential danger of AI, we probably have no real alternative but to take this opportunity and challenge, making our best decisions according to their predicted consequences.

J. Storrs Hall:  This is unlikely but not inconceivable.  If it happens, however, it will be because the AI was part of a doomsday device probably built by some military for "mutual assured destruction", and some other military tried to call their bluff. The best defense against this is for the rest of the world to be as smart as possible as fast as possible.

To sum up, AIs can and should be vetted with standard and well-understood quality assurance and testing techniques, but defining "friendliness to the human race", much less proving it, is a pipe dream.

Paul Cohen: From where I sit today, near zero.  Besides, the danger is likely to be mostly on the human side: Irrespective of what machines can or cannot do, we will continue to be lazy, self-righteous, jingoistic squanderers of our tiny little planet. It seems to me much more likely that we will destroy our universities and research base and devote ourselves to wars over what little remains of our water and land.  If the current anti-intellectual rhetoric continues, if we continue to reject science for ignorance and God, then we will first destroy the research base that can produce intelligent machines and then destroy the planet.  So I wouldn't worry too much about Dr. Evil and her Annihilating AI.  We have more pressing matters to worry about.

William Uther

Dr. William Uther [Homepage] answered two sets of old questions and also made some additional comments.

William Uther: I can only answer for myself, not for my employer or anyone else (or even my future self).

I have a few comments before I start:

  • You ask a lot about 'human level AGI'.  I do not think this term is well defined.  It assumes that 'intelligence' is a one-dimensional quantity.  It isn't.  We already have AI systems that play chess better than the best humans, and mine data (one definition of 'learn') better than humans.  Robots can currently drive cars roughly as well as humans can.  We don't yet have a robot that can clean up a child's messy bedroom.  Of course, we don't have children that can do that either. :)
  • Intelligence is different from motivation.  Each is different from consciousness.  You seem to be envisioning a robot as some sort of super-human, self-motivated, conscious device.  I don't know any AI researchers working towards that goal.  (There may well be some, but I don't know them.)  As such the problems we're likely to have with AI are less 'Terminator' and more 'Sorcerer's apprentice' (see http://en.wikipedia.org/wiki/The_Sorcerer's_Apprentice ).  These types of problems are less worrying as, in general, the AI isn't trying to actively hurt humans.
  • As you bring up in one of your later questions, I think there are far more pressing worries at the moment than AI run amok.

Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?

Explanatory remark to Q1:

P(human-level AI by (year) | no wars ∧ no disasters ∧ beneficial political and economic development) = 10%/50%/90%

William Uther: As I said above, I don't think this question is well specified.  It assumes that 'intelligence' is a one-dimensional quantity.  It isn't.  We already have AI systems that play chess better than the best humans, and mine data (one definition of learn) better than humans.  Robots can currently drive cars roughly as well as humans can.  We don't yet have a robot that can clean up a child's messy bedroom.  Of course, we don't have children that can do that either. :)

Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?

Explanatory remark to Q2:

P(human extinction | badly done AI) = ?

(Where 'badly done' = AGI capable of self-modification that is not provably non-dangerous.)

William Uther: Again, I don't think your question is well specified.  Most AI researchers are working on AI as a tool: given a task, the AI tries to figure out how to do it.  They're working on artificial intelligence, not artificial self-motivation.  I don't know that we could even measure something like 'artificial consciousness'.

All tools increase the power of those that use them.  But where does the blame lie if something goes wrong with the tool?  In the terms of the US gun debate: Do guns kill people?  Do people kill people?  Do gun manufacturers kill people?  Do kitchen knife manufacturers kill people?

Personally, I don't think 'Terminator' style machines run amok is a very likely scenario.  Hrm - I should be clearer here.  I believe that there are already AI systems that have had malfunctions and killed people (see http://www.wired.com/dangerroom/2007/10/robot-cannon-ki/ ).  I also believe that when fire was first discovered there was probably some early caveman that started a forest fire and got himself roasted.  He could even have roasted most of his village.  I do not believe that mankind will build AI systems that will systematically seek out and deliberately destroy all humans (e.g. 'Skynet'), and I further believe that if someone started a system like this it would be destroyed by everyone else quite quickly.

It isn't hard to build in an 'off' switch.  In most cases that is a very simple solution to 'Skynet' style problems.

I think there are much more worrying developments in the biological sciences.  See http://www.nytimes.com/2012/01/08/opinion/sunday/an-engineered-doomsday.html

Q3: What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?

Explanatory remark to Q3:

P(superhuman intelligence within hours | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?

William Uther: Again, your question is poorly specified.  What do you mean by 'human level AGI'?  Trying to tease this apart, do you mean a robotic system that, if trained up for 20 years like a human, would end up as smart as a human 20-year-old? Are you referring to that system before the 20 years of learning, or after?

In general, if the system has 'human level' AGI, then surely it will behave the same way as a human.  In which case none of your scenarios are likely - I've had an internet connection for years and I'm not super-human yet.

Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

Explanatory remark to Q4:

How much money is currently required to mitigate possible risks from AI (to be instrumental in maximizing your personal long-term goals, e.g. surviving this century): less, no more, a little more, much more, or vastly more?

William Uther: I think this is a worthwhile goal for a small number of researchers to think about, but I don't think we need many.  I think we are far enough away from 'super-intelligences' that it isn't urgent.  In particular, I don't think that having 'machines smarter than humans' is some sort of magical tipping point.  AI is HARD.  Having machines that are smarter than humans means they'll make progress faster than humans would.  It doesn't mean they'll make progress massively faster than humans would in the short term.

I also think there are ethical issues worth considering before we have AGI.  See http://m.theatlantic.com/technology/print/2011/12/drone-ethics-briefing-what-a-leading-robot-expert-told-the-cia/250060/

Note that none of those ethical issues assume some sort of super-intelligence, in the same way that ethics in humans doesn't assume super-intelligence.

Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?

Explanatory remark to Q5:

What existential risk (human extinction type event) is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?

William Uther: I have a few worries.  Off the top of my head:

i) Global warming. While not as urgent or sexy as AI-run-amok, I think it a far more important issue for humankind.

Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?

William Uther: I think most people aren't worried about AI risks.  I don't think they should be.  I don't see a problem here.

Q7: Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?

William Uther: I still don't know what you mean by 'human level intelligence'.  I expect artificial intelligence to be quite different to human intelligence.  AI is already common in many businesses - if you have a bank loan then the decision about whether to lend to you was probably taken by a machine learnt system.

Q1a: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of machine intelligence with roughly human-level efficient cross-domain optimization competence?

Q1b: Once our understanding of AI and our hardware capabilities are sufficiently sophisticated to build an AI which is as good as humans at engineering or programming, how much more difficult will it be to build an AI which is substantially better than humans at mathematics or engineering?

William Uther: There is a whole field of 'automatic programming'.  The main difficulties in that field were in specifying what you wanted programmed.  Once you'd done that the computers were quite effective at making it.  (I'm not sure people have tried to make computers design complex algorithms and data structures yet.)

Q2a: Do you ever expect automated systems to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

William Uther: I think asking about 'automated science' is a much clearer question than asking about 'Human level AGI'.  At the moment there are already huge amounts of automation in science (from Peter Cheeseman's early work with AutoClass to the biological 'experiments on a chip' that allow a large number of parallel tests to be run).  What is happening is similar to automation in other areas - the simpler tasks (both intellectual and physical) are being automated away and the humans are working at higher levels of abstraction.  There will always be *a* role for humans in scientific research (in much the same way that there is currently a role for program managers in current research - they decide at a high level what research should be done after understanding as much of it as they choose).

Q2b: To what extent does human engineering and mathematical ability rely on many varied aspects of human cognition, such as social interaction and embodiment? For example, would an AI with human-level skill at mathematics and programming be able to design a new AI with sophisticated social skills, or does that require an AI which already possesses sophisticated social skills?

William Uther: Social skills require understanding humans.  We have no abstract mathematical model of humans as yet to load into a machine, and so the only way you can learn to understand humans is by experimenting on them... er, I mean, interacting with them. :)  That takes time, and humans who are willing to interact with you.

Once you have the model, coming up with optimal plans for interacting with it, i.e. social skills, can happen offline.  It is building the model of humans that is the bottleneck for an infinitely powerful machine.

I guess you could parallelise it by interacting with each human on the planet simultaneously.  That would gather a large amount of data quite quickly, but be tricky to organise.  And certain parts of learning about a system cannot be parallelised.
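As an editorial aside (Uther does not invoke it by name), the point that non-parallelisable steps cap the benefit of interacting with everyone at once is essentially the standard speedup bound known as Amdahl's law: if a fraction $s$ of the model-building work is inherently serial, then with $n$ parallel interactions

$$\text{speedup}(n) = \frac{1}{s + (1-s)/n} \le \frac{1}{s},$$

no matter how large $n$ becomes.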

Q2c: What probability do you assign to the possibility of an AI with initially (professional) human-level competence at mathematics and programming to self-modify its way up to massive superhuman efficient cross-domain optimization competence within a matter of hours/days/< 5 years?

William Uther: One possible outcome is that we find out that humans are close to optimal problem solvers given the resources they allocate to the problem.  In which case, 'massive superhuman cross-domain optimisation' may simply not be possible.

Humans are only an existence proof for human level intelligence.

Q7: How much have you read about the formal concepts of optimal AI design which relate to searches over complete spaces of computable hypotheses or computational strategies, such as Solomonoff induction, Levin search, Hutter's algorithm M, AIXI, or Gödel machines?

William Uther: I know of all of those.  I know some of the AIXI approximations quite well.  The lesson I draw from all of those is that AI is HARD.  In fact, the real question isn't how do you perform optimally, but how do you perform well enough given the resources you have at hand.  Humans are a long way from optimal, but they do quite well given the resources they have.

I'd like to make some other points as well:

  • When trying to define 'Human level intelligence' it is often useful to consider how many humans meet your standard.  If the answer is 'not many' then you don't have a very good measure of human level intelligence.  Does Michael Jordan (the basketball player) have human level intelligence?  Does Stephen Hawking?  Does George Bush?
  • People who are worried about the singularity often have two classes of concerns.  First there is the class of people who worry about robots taking over and just leaving humans behind.  I think that is highly unlikely.  I think it much more likely that humans and machines will interact and progress together.  Once I have my brain plugged in to an advanced computer there will be no AI that can out-think me.  Computers already allow us to 'think' in ways that we couldn't have dreamt of 50 years ago.

This brings up the second class of issues that people have.  Once we are connected to machines, will we still be truly human?  I have no idea what people who worry about this mean by 'truly human'.  Is a human with a prosthetic limb truly human?  How about a human driving a car?  Is a human who wears glasses or a hearing aid truly human?  If these prosthetics make you non-human, then we're already past the point where they should be concerned - and they're not.  If these prosthetics leave you human, then why would a piece of glass that allows me to see clearly be ok, and a computer that allows me to think clearly not be ok?  Asimov investigated ideas similar to this, but from a slightly different point of view, with his story 'The Bicentennial Man'.

The real questions are ones of ethics.  As people become more powerful, what are the ethical ways of using that power?  I have no great wisdom to share there, unfortunately.

Some more thoughts...

Does a research lab (with, say, 50 researchers) have "above human level intelligence"?  If not, then it isn't clear to me that AI will ever have significantly "above human level intelligence" (and see below for why AI is still worthwhile).  If so, then why haven't we had a 'research lab singularity' yet?  Surely research labs are smarter than humans and so they can work on making still smarter research labs, until a magical point is passed and research labs have runaway intelligence.  (That's a Socratic question designed to get you to think about possible answers yourself.  Maybe we are in the middle of a research lab singularity.)

As for why study of AI might still be useful even if we never get above human level intelligence: there is the same Dirty, Dull, Dangerous argument that has been used many times.  To that I'd add a point I made in a previous email: intelligence is different to motivation.  If you get yourself another human you get both - they're intelligent, but they also have their own goals and you have to spend time convincing them to work towards your goals.  If you get an AI, then even if it isn't more intelligent than a human at least all that intelligence is working towards your goals without argument.  It's similar to the 'Dull' justification, but with a slightly different spin.

Alan Bundy

Professor Alan Bundy [homepage] did not answer my questions directly.

Alan Bundy: Whenever I see questions like this I want to start by questioning the implicit assumptions behind them.

  • I don't think the concept of "human-level machine intelligence" is well formed. AI is defining a huge space of different kinds of intelligence. Most of the points in this space are unlike anything either human or animal, but are new kinds of intelligence. Most of them are very specialised to particular areas of expertise. As an extreme example, consider the best chess playing programs. They are more intelligent than any human at playing chess, but can do nothing else, e.g., pick up and move the pieces. There's a popular myth that intelligence is on a linear scale, like IQ, and AI is progressing along it. If so, where would you put the chess program?
  • The biggest threat from computers comes not from intelligence but from ignorance, i.e., from computer programs that are not as smart as they need to be. I'm thinking especially of safety critical and security critical systems, such as fly-by-wire aircraft and financial trading systems. When these go wrong, aircraft crash and people are killed or the economy collapses. Worrying about intelligent machines distracts us from the real threats.
  • As far as threats go, you can't separate the machines from the intentions of their owners. Quite stupid machines entrusted to run a war with weapons of mass destruction could cause quite enough havoc without waiting for the mythical "human-level machine intelligence". It will be human owners that endow their machines with goals and aims. The less intelligent the machines the more likely this is to end in tears.
  • Given the indeterminacy of their owners' intentions, it's quite impossible to put probabilities on the questions you ask. Even if we could precisely predict the progress of the technology, which we can't, the intentions of the owners would defeat our estimates.

I'm familiar with what you call the 'standard worry'. I've frequently been recruited to public debate with Kevin Warwick, who has popularised this 'worry'. I'd be happy for you to publish my answers. I'd add one more point, which I forgot to include yesterday.

  • Consider the analogy with 'bird level flight'. Long before human flight, people aspired to fly like birds. The reality of human flight turned out to be completely different. In some respects, 'artificial' flying machines are superior to birds, e.g., they are faster. In some respects they are inferior, e.g., you have to be at the airport hours before take-off and book well in advance. The flight itself is very different, e.g., aircraft don't flap their wings. There is not much research now on flying like birds. If we really wanted to do it, we could no doubt come close, e.g., with small model birds with flapping wings, but small differences would remain and a law of diminishing returns would set in if we wanted to get closer. I think the aspiration for 'human level machine intelligence' will follow a similar trajectory --- indeed, it already has.
Comments:

Moving away from the phrase "human-level" seems to have improved the quality of the responses a lot.

Questions about strange scenarios that people can regard as unlikely or remote should be more explicitly phrased as hypotheticals, asking the respondent to describe the "what if" instead of commenting on the plausibility of the assumptions. For example, you have this question:

Q5: How important is it to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to build AI that is good enough at general reasoning (including science, mathematics, engineering and programming) to undergo radical self-modification?

This could be rephrased as follows:

Q5': Suppose, hypothetically, that in 100 years it will become possible to build an AI that's good enough at general reasoning (including science, mathematics, engineering and programming) and that is able to improve its own competence to levels far surpassing human ability. How important would it be to figure out (by that time) how to make this hypothetical AI provably friendly to us and our values (non-dangerous), before actually building (running) it?

Additionally, the questions about probability of hypotheticals could be tweaked, so that each hypothetical is addressed in two questions: one asking about probability of its assumptions being met, and another about implications of its assumptions being met.

XiXiDu:

Questions about strange scenarios that people can regard as unlikely or remote should be more explicitly phrased as hypotheticals, asking to describe the "what if"

Q: Suppose hypothetically that in 100 years it would be possible to build a dangerous AI. How important would it be to figure out how to make it non-dangerous?

Expert: How important is your life to you?

Formulated like that it would be a suggestive question posed to yield the desired answer. The problem in question is the hypothesis and not the implications of its possible correctness.

Your original question already asked about this particular possibility. If you want to gauge how likely this possibility is seen, ask directly, without mixing that with the question of value. And previous responses show that the answer is not determined by my variant of the question: three popular responses are "It's going to be fine by default" (wrong), "It's not possible to guarantee absence of danger, so why bother?" (because of the danger) and "If people had worried so much about absence of danger, they wouldn't have useful things X, Y, Z" (those things weren't existential risks).

As such the problems we're likely to have with AI are less 'Terminator' and more 'Sorcerer's apprentice'

This is true and important and a lot of the other experts don't get it. Unfortunately, Uther seems to think that SIAI/LW/Xixidu doesn't get it either, and

These types of problems are less worrying as, in general, the AI isn't trying to actively hurt humans.

shows that he hasn't thought about all the ways that "sorcerer's apprentice" AIs could go horribly wrong.

Emile:

Yeah, I agree that Xixidu's mails could make it clearer that he's aware (or that LessWrong is aware) that "Sorcerer's Apprentice" is a better analogy than "Terminator", to get responses that aren't "Terminator is fiction, silly!".

[anonymous]:

i) Global warming. While not as urgent or sexy as AI-run-amok, I think it a far more important issue for humankind.

Reading these letters so far, the experts very often make such statements. I think that either they systematically overestimate the likely risk of global warming in itself, which wouldn't be too surprising for a politicized issue (in the US at least), or they feel the need to play it up.

I think a lot of people make this mistake of treating "very bad things" as equivalently bad to extinction, or even as being extinction. It is unlikely that large-scale nuclear war would extinguish the species; it is far beyond unlikely that global warming would extinguish humans; and it is extremely unlikely that large-scale use of biological weapons by terrorists or states would extinguish humanity. But because we know for a fact that these things could happen, have come close to happening, or are beginning to happen, and because they are so terrible, it's just not really possible for most people to keep enough perspective to recognize that things which are not likely to happen really soon, but which will eventually be possible, are actually much more dangerous in terms of their capacity for extinction.

Or some people place high negative value on half of all humans dying, comparable to extinction.

You didn't address my criticism of the question about provably friendly AI, nor my point about the researchers lacking relevant context for thinking about AI risk. Again, the issues that I point to seem to make the researchers' responses to the questions about friendliness and existential risk due to AI carry little information.

I rephrased the question now:

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Good. I think that's a much less mind-killing question, and will get more interesting responses.

I doubt that researchers will know what you have in mind by "provably friendly." For that matter, I myself don't know what you have in mind by "provably friendly" despite having read a number of relevant posts on Less Wrong.

That doesn't matter too much. Their interpretation of the question is the interesting part and that they are introduced to the concept is the important part. All of them will be able to make sense of the idea of AI that is non-dangerous.

And besides, after they answered the questions I asked them about "friendly AI", and a lot of them are familiar with the idea of making AIs non-dangerous; some have even heard about SI. Here is one reply I got:

I was surprised that none of your questions mentioned paperclips! :P I am (of course!) familiar with lesswrong.com, goals of the SIAI, prominence of EY's work within these communities etc.

And regarding your other objection,

the researchers in question do not appear to be familiar with the most serious existential risk from AGI: the one discussed in Omohundro's The Basic AI Drives.

Many seem to think that AIs won't have any drives, and that it is actually part of the problem to give them the incentive to do anything that they haven't been explicitly programmed to do. But maybe I am the wrong person with respect to Omohundro's paper. I find the arguments in it unconvincing. And if I, someone who takes risks from AI much more seriously, think that way, then I doubt that the paper would change their opinion even if they knew about it.

The problem is that the paper tries to base its arguments on assumptions about the nature of hypothetical AGI. But just like AI won't automatically share our complex values, it won't value self-protection/improvement or rational economic behavior. In short, the paper is tautological. If your premise is an expected utility-maximizer capable of undergoing explosive recursive self-improvement that tries to take every goal to its logical extreme, whether that is part of the specifications or not, then you already answered your own question and arguing about drives becomes completely useless.

You should talk to wallowinmaya who is soon going to start his own interview series.

If your premise is an expected utility-maximizer capable of undergoing explosive recursive self-improvement that tries to take every goal to its logical extreme, whether that is part of the specifications or not, then you already answered your own question and arguing about drives becomes completely useless.

I like your point, but I wonder what doubts you have about the premise. Is an expected-utility-maximizer likely to be absurdly difficult to construct, or do you think all or almost all AI designers would prefer other designs? I think that AI designers would prefer such a design if they could have it, and "maximize my company's profits" is likely to be the design objective.
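(Editorial aside: for readers new to the term, an expected-utility maximizer in the abstract sense debated here just selects the action with the highest probability-weighted utility. A minimal sketch in Python, with made-up actions, probabilities, and utilities, purely to show the selection rule:)

```python
# Minimal, illustrative expected-utility maximizer.
# The actions, outcome probabilities, and utilities below are invented;
# nothing here models a real agent or any particular AI design.
outcomes = {
    "action_A": [(0.9, 1.0), (0.1, -10.0)],  # (probability, utility) pairs
    "action_B": [(0.5, 2.0), (0.5, 0.0)],
}

def expected_utility(action: str) -> float:
    # Probability-weighted sum of utilities for one action.
    return sum(p * u for p, u in outcomes[action])

best_action = max(outcomes, key=expected_utility)
print(best_action, expected_utility(best_action))  # -> action_B 1.0
```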

I think that most researchers are not interested in fully autonomous AI (AI with persistent goals and a "self") and more interested in human-augmented intelligence (meaning tools like data-mining software).

I do think that an expected utility-maximizer is the ideal in GAI. But, just like general-purpose quantum computers, I believe that expected utility-maximizers that 1) find it instrumentally useful to undergo recursive self-improvement and 2) find it instrumentally useful to take over the planet/universe to protect their goals are, if at all feasible, the end-product of a long chain of previous AI designs with no quantum leaps in between. That they are at all feasible depends on 1) how far beyond the human level intelligence hits diminishing returns, 2) intelligence being more useful than other kinds of resources in stumbling upon unknown unknowns in solution space, and 3) expected utility-maximizers and their drives not being fundamentally dependent on the precision with which their utility function is defined.

I further believe that long before we get to the point of discovering how to build expected utility-maximizers capable of undergoing explosive recursive self-improvement, we will have automatic scientists that can brute-force discoveries on hard problems in bio and nanotech and enable unfriendly humans to wreak havoc and control large groups of people. If we survive that, which I think is the top risk rather than GAI, then we might at some point be able to come up with a universal artificial intelligence. (ETA: Note that for automatic scientists to work well the goals need to be well-defined, which isn't the case for intelligence amplification.)

I just don't have enough background knowledge to conclude that it is likely that humans can stumble upon simple algorithms that could be improved to self-improve and then reach vastly superhuman capabilities. From my point of view that seems like pure speculation, although speculation that should be taken seriously and that does legitimate the existence of an organisation like SI. Which is the reason why I have donated a few times already. But from my uneducated point of view it seems unreasonable to claim that the possibility is obviously correct and that the arguments don't merely sound convincing.

I approach this problem the same way that I approach climate change. Just because one smart person believes that climate change is bunk I don't believe it as well. All his achievements do not legitimate his views. And since I am as yet too uneducated and do not have the time to evaluate all the data and calculations, I am using the absurdity heuristic in combination with an appeal to authority to conclude that climate change is real. And the same goes for risks from AI. I can hardly evaluate universal AI research or understand approximations to AIXI. But if the very people who came up with it disagree on various points with those who say that their research poses a risk, then I side with the experts but still assign enough weight to the other side to conclude that they are doing important work nonetheless.

Just because one smart person believes that climate change is bunk I don't believe it as well. All his achievements do not legitimate his views.

"Climate change is bunk" seems like a pretty terrible summary of Freeman Dyson's position. If you disagree, a more specific criticism would be helpful. Freeman Dyson's views on the topic mostly seem to be sensible to me.

And since I am yet too uneducated and do not have the time to evaluate all the data and calculations I am using the absurdity heuristic in combination with an appeal to authority to conclude that climate change is real.

Freeman Dyson agrees. The very first line from your reference reads: "Dyson agrees that anthropogenic global warming exists".

I think that most researchers are not interested in fully autonomous AI (AI with persistent goals and a "self") and more interested in human-augmented intelligence (meaning tools like data-mining software).

Intelligence augmentation can pay your bills today.

William Uther: There is a whole field of 'automatic programming'. The main difficulties in that field were in specifying what you wanted programmed. Once you'd done that the computers were quite effective at making it. (I'm not sure people have tried to make computers design complex algorithms and data structures yet.)

Tools that take "a description of what you want programmed" and turn it into code already exist. They're called compilers. If you've specified what you want in a way that doesn't leave any ambiguities, you've already written your program.

Why is this so heavily upvoted? Does anyone here really think that Uther doesn't know what a compiler is? The goal here is obviously to have something better than a compiler that does as much of the work for the programmer as possible.

Sorry if I'm coming off as confrontational, but I'm sick of the complete lack of charity towards the people xixidu is interviewing, especially on technical questions where, unless you do research in machine learning, they almost certainly know more than you.

It was supposed to be a joke. :(

I also suspect that any "automatic programming" tool that ends up being useful will eventually end up being called a compiler anyway, even if it does do something useful that today's compilers don't.

asr:

There has been a lot of work on inferring programs from ambiguous and incomplete specs. These systems aren't what we normally mean by compilers, since they typically take inputs that don't belong to any well-defined programming language.

For example, see "Macho: Programming with Man Pages" by Anthony Cozzie, Murph Finnicum, and Samuel T. King. It appeared at HotOS '11: http://www.usenix.org/events/hotos11/tech/final_files/Cozzie.pdf The idea is that man pages go in one end of their system and a working program comes out the other.

Another piece of modern work on automatic programming is Programming by Sketching. There was a paper at PLDI '05 by Solar-Lezama et al.: http://cs.berkeley.edu/~bodik/research/pldi05-streambit.pdf (PLDI is the ACM conference on Programming Language Design and Implementation)

Both of those papers are from mainline computer scientists outside the AI community -- HotOS is a [well regarded] systems workshop, and PLDI is probably the top venue for programming-language research.

This is approximately like saying we need to require a proof, based on someone's DNA sequence, that they can never commit a sin, and that we must not allow any babies to be born until they can offer such a proof.

I like this line of analogy, but I think it's more like requiring proof, based on DNA, that someone isn't a sociopath. That's already possible. I'm not particularly worried about an AI that occasionally lies, steals, or cheats if it feels something like remorse for doing those things.

proof, based on DNA, that someone isn't a sociopath. That's already possible.

Cite? The first page of Google Scholar does not seem to support you.

The linked paper could be interpreted to be talking about anti-social personality disorder, though I think that would be pushing it. While this is a superset of sociopathy (or at least contains the vast majority of sociopaths) it is not the same thing. Also, IIRC Han Chinese have the low-activity MAOA variant, like the Maori, but their incidence of violent angry hotheads is lower, so some other allele is probably regulating behaviour. It does seem likely that quite reliable tests for sociopathy and psychopathy will show up.

MAOA is associated with ASPD in Caucasians. Also, weren't sociopathy and psychopathy deprecated in favor of ASPD? (still useful in common parlance because they're familiar)

"Associated with" does not mean presence/absence can be used as a singular, reliable diagnostic test. On the deprecation, I'm no psychologist, merely an interested layman, but I don't understand the reasoning the APA used to justify getting rid of the psychopathy diagnosis. It seems to be quite a distinct subgroup within ASPD. At the extreme, using the old labels: no matter how lovely, caring, etc. their upbringing, psychopaths are bad news; persons with similar dispositions towards psychopathic behaviour but different childhood environments can either express them (act like psychopaths) or not; and then there are people with ASPD, who can have traits that would disqualify them from either diagnosis, e.g. feeling sincere guilt even momentarily, or having relationships that are not purely instrumental.

[anonymous]:

I don't understand the reasoning the APA used to justify getting rid of the psychopathy diagnosis. It seems to be quite a distinct subgroup within ASPD.

If that were the case, the solution would be easy: recognize psychopathy as a subgroup. They could call it Antisocial Personality Disorder, malignant variety, perhaps.

But the classic Cleckley psychopath often isn't an anti-social personality. Antisocial personality is based on concrete diagnostic criteria that high-functioning, intelligent psychopaths don't necessarily manifest; they may be political leaders, attorneys, judges, businessmen, anywhere arbitrary power can be found. I think they are probably better conceived as a subset of Narcissistic Personality Disorder. But the psychiatrists who pioneered in applying that diagnosis, the psychoanalyst Kohut and colleagues, have a more romantic understanding of their narcissistic patients. Politics figure large in the diagnostic manual's catalog.