An easy-to-read list of evidence and simple arguments in support of risks from artificial intelligence could help to raise awareness. Such a list could be a first step toward drawing attention, sparking interest, and getting people to read more advanced papers or the Sequences. To my knowledge, nobody has yet put the evidence, arguments, and indications together in one place.

My intention is to enable people interested in mitigating risks from AI to offer a brochure that lets them raise awareness without having to spend a lot of time explaining the details or telling people to read through hundreds of posts of marginal importance. There should be some promotional literature that summarizes the big picture and gives a few strong arguments for action. Such a brochure has to be simple and concise enough to arouse interest even in a reader who just skims the text.

Post a comment with the best argument(s) for risks from AI.

Some rules:

  • The argument or summary has to be simple and concise.
  • Allow for some inferential distance. Make your arguments self-evident.
  • If possible, cite an example, provide references, or link to further reading.
  • Disclose whether your argument is controversial and account for that.

For starters I wrote a quick draft below. But there surely exist many other arguments and indications for why risks from artificial intelligence should be taken seriously. What convinced you?


Claim: Creation of general intelligence is possible.

Status: Uncontroversial

Claim: Intelligence can be destructive.

Status: Uncontroversial

Claim: Algorithmic intelligence can be creative and inventive.

Status: Uncontroversial1

Claim: Improvements of algorithms can in many cases lead to dramatic performance gains.

Status: Uncontroversial2

Claim: Human-level intelligence is not the maximum.

Status: Very likely3

Claim: Any sufficiently advanced AI will do everything it can to keep pursuing its goals (instrumental to us, terminal to it) indefinitely.

Status: Controversial, but a possibility to be taken seriously.4 We don't yet have a good understanding of intelligence, but given all that we know there is no good reason to rule out this possibility. Overconfidence could have fatal consequences in this case.5

Claim: Morality is fragile and not imperative (i.e., it is not a natural law).

Status: Uncontroversial.6 Even humans, who have been honed by social evolution to consider the well-being of other agents, can overcome their instincts and commit large-scale atrocities in pursuit of various peculiar instrumental goals.


1.

We report the development of Robot Scientist “Adam,” which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation.

The Automation of Science

Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems.

Computer Program Self-Discovers Laws of Physics

This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems - a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way - yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery. (Davidson 1997)

When the GA was applied to this problem, the evolved results for three, four and five-satellite constellations were unusual, highly asymmetric orbit configurations, with the satellites spaced by alternating large and small gaps rather than equal-sized gaps as conventional techniques would produce. However, this solution significantly reduced both average and maximum revisit times, in some cases by up to 90 minutes. In a news article about the results, Dr. William Crossley noted that "engineers with years of aerospace experience were surprised by the higher performance offered by the unconventional design". (Williams, Crossley and Lang 2001)

Genetic Algorithms and Evolutionary Computation

UC Santa Cruz emeritus professor David Cope is ready to introduce computer software that creates original, modern music.

Triumph of the Cyborg Composer

2.

Everyone knows Moore’s Law – a prediction made in 1965 by Intel co-founder Gordon Moore that the density of transistors in integrated circuits would continue to double every 1 to 2 years. (…)  Even more remarkable – and even less widely understood – is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed.

The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade.  It’s difficult to quantify the improvement, though, because it is as much in the realm of quality as of execution time.

In the field of numerical algorithms, however, the improvement can be quantified.  Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin.  Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day.  Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million.  Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms!  Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.

— Page 71, Progress in Algorithms Beats Moore’s Law (Report to the President and Congress: Designing a Digital Future: Federally Funded R&D in Networking and IT)
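A quick back-of-the-envelope check makes the size of that factor concrete. This is only a sketch in Python; the split into a hardware factor and an algorithm factor is taken directly from the quote above, and only the arithmetic is worked out here:

```python
# Check Groetschel's figure: 82 years in 1988 down to ~1 minute in 2003.
years_1988 = 82
minutes_per_year = 365.25 * 24 * 60            # ~525,960 minutes
total_speedup = years_1988 * minutes_per_year  # ratio of the two solve times

hardware_factor = 1_000    # faster processors (per the quote)
algorithm_factor = 43_000  # better linear-programming algorithms (per the quote)

print(f"total speedup:        ~{total_speedup:,.0f}x")                    # ~43 million
print(f"hardware * algorithm: ~{hardware_factor * algorithm_factor:,}x")  # 43 million
```

The two numbers agree to within rounding, which is the point of the quoted decomposition: the algorithmic factor dwarfs the hardware factor.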

3.

The following argument is not directly applicable, but a similar one can be made for human intelligence, computational capacity, or processing speed.

An AI might go from infrahuman to transhuman in less than a week?  But a week is 10^49 Planck intervals - if you just look at the exponential scale that stretches from the Planck time to the age of the universe, there's nothing special about the timescale that 200Hz humans happen to live on.

If we're talking about a starting population of 2GHz processor cores, then any given AI that FOOMs at all, is likely to FOOM in less than 10^15 sequential operations or more than 10^19 sequential operations, because the region between 10^15 and 10^19 isn't all that wide a target.  So less than a week or more than a century, and in the latter case that AI will be trumped by one of a shorter timescale.

Disjunctions, Antipredictions, Etc.
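The arithmetic behind that antiprediction can be made explicit. Below is a minimal sketch, assuming a single 2 GHz core as the unit of "sequential operations":

```python
# Rough arithmetic behind the antiprediction above (a sketch only).
ops_per_second = 2e9                            # one 2 GHz processor core
seconds_per_week = 7 * 24 * 3600                # 604,800
seconds_per_century = 100 * 365.25 * 24 * 3600

print(f"one week    ~ {ops_per_second * seconds_per_week:.1e} sequential ops")     # ~1.2e15
print(f"one century ~ {ops_per_second * seconds_per_century:.1e} sequential ops")  # ~6.3e18
# So "10^15 to 10^19 sequential operations" really does bracket the
# week-to-century range, and it spans only about 4 orders of magnitude,
# a narrow band on the exponential scale the quote describes.
```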

4.

There are “basic AI drives” we can expect to emerge in sufficiently advanced AIs, almost regardless of their initial programming. Across a wide range of top goals, any AI that uses decision theory will want to 1) self-improve, 2) have an accurate model of the world and consistent preferences (be rational), 3) preserve their utility functions, 4) prevent counterfeit utility, 5) be self-protective, and 6) acquire resources and use them efficiently. Any AI with a sufficiently open-ended utility function (absolutely necessary if you want to avoid having human beings double-check every decision the AI makes) will pursue all these instrumental goals indefinitely as long as it can eke out a little more utility from doing so. AIs will not have built-in satiation points where they say, “I’ve had enough”. We have to program those in, and if there’s a potential satiation point we miss, the AI will just keep pursuing instrumental goals indefinitely. The only way we can keep an AI from continuously expanding like an endless nuclear explosion is to make it want to be constrained.

— Basic AI Drives, Yes, The Singularity is the Biggest Threat to Humanity
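The "no built-in satiation point" claim can be illustrated with a toy model. The sketch below is not a model of any real AI architecture; it only shows that a maximizer of a monotone utility function always gains from further resource acquisition unless a cap is explicitly added:

```python
# Toy illustration only: a maximizer of a monotone utility function
# never stops acquiring resources on its own; a satiation point has
# to be programmed in explicitly.
def utility(resources, satiation=None):
    if satiation is not None:
        resources = min(resources, satiation)
    return resources  # monotone: more resources never means less utility

resources = 1.0
for _ in range(10):
    # Expanding is always at least weakly better, so the agent keeps going.
    assert utility(resources * 2) >= utility(resources)
    resources *= 2

print("unconstrained utility:", utility(resources))       # grows without bound
print("with satiation at 4.0:", utility(resources, 4.0))  # capped by design
```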

5.

I’m not going to argue for specific values for these probabilities. Instead, I’ll argue for ranges of probabilities that I believe a person might reasonably assert for each probability on the right-hand side. I’ll consider both a hypothetical skeptic, who is pessimistic about the possibility of the Singularity, and also a hypothetical enthusiast for the Singularity. In both cases I’ll assume the person is reasonable, i.e., a person who is willing to acknowledge limits to our present-day understanding of the human brain and computer intelligence, and who is therefore not overconfident in their own predictions. By combining these ranges, we’ll get a range of probabilities that a reasonable person might assert for the probability of the Singularity.

What should a reasonable person believe about the Singularity?
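Nielsen's approach amounts to interval arithmetic on a product of probabilities: multiply the lower endpoints of each range for a lower bound and the upper endpoints for an upper bound. A minimal sketch follows; the three ranges are made-up placeholders, not Nielsen's actual estimates:

```python
# Bound a product of probabilities by multiplying range endpoints.
# The ranges below are hypothetical placeholders, NOT Nielsen's numbers.
ranges = [
    (0.2, 0.9),  # hypothetical Pr(ingredient 1)
    (0.1, 0.8),  # hypothetical Pr(ingredient 2 | ingredient 1)
    (0.3, 0.9),  # hypothetical Pr(ingredient 3 | ingredients 1 and 2)
]

low, high = 1.0, 1.0
for lo, hi in ranges:
    low *= lo
    high *= hi

print(f"a 'reasonable person' range: {low:.3f} to {high:.3f}")  # 0.006 to 0.648
```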

6.

The Patrician took a sip of his beer. “I have told this to few people, gentlemen, and I suspect I never will again, but one day when I was a young boy on holiday in Uberwald I was walking along the bank of a stream when I saw a mother otter with her cubs. A very endearing sight, I’m sure you will agree, and even as I watched, the mother otter dived into the water and came up with a plump salmon, which she subdued and dragged onto a half-submerged log. As she ate it, while of course it was still alive, the body split and I remember to this day the sweet pinkness of its roes as they spilled out, much to the delight of the baby otters who scrambled over themselves to feed on the delicacy. One of nature’s wonders, gentlemen: mother and children dining upon mother and children. And that’s when I first learned about evil. It is built into the very nature of the universe. Every world spins in pain. If there is any kind of supreme being, I told myself, it is up to all of us to become his moral superior.”

— Terry Pratchett, Unseen Academicals

28 comments

I'd prefer a list that includes not just arguments for but also the relevant arguments against. That's better not just for rationality but also simply as a matter of rhetoric: lists that are deliberately one-sided don't look nearly as persuasive.

In particular, I'd expand "Improvements of algorithms can in many cases lead to dramatic performance gains." to include the following counterarguments:

1) Many important problems, such as linear programming and finding GCDs, already have close-to-optimal algorithms (see the GCD sketch after this list).
(Status: uncontroversial)
2) If the complexity hierarchy doesn't collapse, then many practical problems are intrinsically difficult.
(Status: uncontroversial)
2a) The hierarchy probably does not collapse.
(Status: uncontroversial for P and NP. Among experts, it is considered likely that P, NP, co-NP, EXP, and PSPACE are all distinct.)

Counterargument to 2/2a: Basic complexity classes only look at worst-case scenarios. The vast majority of instances of "hard" classes of problems are in fact easy.
(Status: uncontroversial)
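To make point 1 concrete, here is Euclid's GCD algorithm, the classic case of an algorithm already close to optimal (at least for word-sized integers), where no dramatic algorithmic speedup is waiting to be found. A minimal sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: O(log min(a, b)) division steps,
    which is already within a constant factor of optimal."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```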

I'd prefer a list that includes not just arguments for but also the relevant arguments against. That's better not just for rationality but also simply as a matter of rhetoric: lists that are deliberately one-sided don't look nearly as persuasive.

More on the Light Side of things: you want to engage with the contrary positions that your target audience probably holds.

To XiXiDu: please stop deleting your comments; it distorts the flow of discussion. Take full ownership of your mistakes; it won't actually hurt you here, and the associated emotions could drive you to improve in the future.

Take full ownership of your mistakes; it won't actually hurt you here...

You should know that I do that quite often. As I said in the other reply, I deleted them shortly after I posted them, when I noticed that they didn't add anything valuable. I saw no replies when I deleted them. It was not my intention to disrupt the discussion.

I would find them useful if they came with an "EDIT:" appendix explaining how they were worthless and how you came to realize their lack of worth; as Quirrell says, learn to lose.

Alright, I'll refrain from deleting from now on and instead edit to add a note.

Incidentally, I've been assuming that strikethrough, like Reddit has, is on the list of minor features to add eventually.

You should know that I do that quite often.

I disapprove of the other instances for the same reasons.

To XiXiDu: please stop deleting your comments; it distorts the flow of discussion.

I deleted my comments before there were any replies to them. Did you open them in tabs shortly after I posted them?

Yes. You should anticipate this possibility and plan accordingly.

The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade. It’s difficult to quantify the improvement, though, because it is as much in the realm of quality as of execution time.

So... speech recognition has plateaued for the last decade. (Quality is easily quantifiable by error rate.) I don't know about the others, though I hear there is improvement in Go-playing algorithms.

For speech recognition, recent progress has varied by problem. Speech recognition on a conversation with multiple speakers and background noise has not made good progress recently, but restricted conditions (e.g. one speaker doing voice dictation or interacting with a computational agent) have shown good progress, e.g. Dragon NaturallySpeaking.
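For reference, the usual metric behind "error rate" here is word error rate: the edit distance between the recognizer's output and a reference transcript, divided by the length of the reference. A minimal sketch, assuming simple whitespace tokenization:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance between word sequences,
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167
```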

Claim: Morality is fragile and not imperative (i.e., it is not a natural law).

Status: Uncontroversial.

This sense of "uncontroversial" is not helpful if you are talking to people outside LW, and that's your stated goal. This is a very controversial question (as are some of the others you've labeled this way). Convincing people to take AI risks seriously is hard, and nothing much shorter than the LW sequences is known to work anywhere near as reliably, so it will be no small feat to create a short summary that does work. Claiming that many controversial claims are actually "uncontroversial" won't help in this task.


This sense of "uncontroversial" is not helpful if you are talking to people outside LW, and that's your stated goal.

It wasn't my intention to tell people that the arguments are uncontroversial; that was meant to support the discussion here. But now that you mention it, I believe it could actually make people take a second look. "Wait, they say that argument is uncontroversial? Interesting! Where does that belief come from?"

This is a very controversial question.

It was also not my intention that one should talk to religious nutters about this topic. And I don't think it is controversial for anyone else. If humans can kill humans, machines can do so more effectively. That's pretty much self-evident.

But now that you mention it, I believe it could actually make people take a second look. "Wait, they say that argument is uncontroversial? Interesting! Where does that belief come from?"

This is a very weak argument. I could just as well imagine the opposite reply: "They say that conclusion is uncontroversial? But this statement is false; I know that many people dispute it. They must be presenting a sloppy and one-sided argument, so I won't waste my time reading further."

Claim: Morality is fragile and not imperative (i.e., it is not a natural law).

It was also not my intention that one should talk to religious nutters about this topic. And I don't think it is controversial for anyone else.

Unfortunately, it is.


Yes, The Singularity is the Biggest Threat to Humanity Saturday

Saturday, and every other day as well.

An amusing copy/paste error that you probably ought to fix.

Regarding footnote 1, two more relevant examples: first, Simon Colton has written a series of computer programs that can construct new definitions in number theory and make conjectures about them; see this paper. Second, the Robbins conjecture, a long-standing open problem in abstract algebra, was proven mainly by an automated theorem prover, with some human intervention.

Claim: Any sufficiently advanced AI will do everything to continue to keep pursuing instrumental goals indefinitely.

I think you meant to say "terminal goals" here.


I copied that sentence from Michael Anissimov's essay.

In that case I suggest changing it from Anissimov's version to "terminal goals". Because "instrumental goals" in that context is at the very best misleading.

I rephrased this in the original post as "instrumental to us, terminal to it". Awkward and cumbersome, but it avoids being misleading and is closer to what I mean.

"Terminal to it" has the downside of being ambiguous. :P

Claim: Algorithmic intelligence can be creative and inventive.

What do you mean, "algorithmic"? The reference you give is PR nonsense of little substance and doesn't demonstrate creativity in any relevant sense. That intelligence can be sufficiently creative is demonstrated by humans.

The reference you give is PR nonsense...

The abstracts of scientific papers as PR nonsense? If such studies are not enough, what else do you suggest would count as valuable evidence?

...and doesn't demonstrate creativity in any relevant sense.

If composing music, deriving scientific laws and generating functional hypotheses don't count, what else?

That intelligence can be sufficiently creative is demonstrated by humans.

This kind of creativity may arise from our imperfect nature. It is important to demonstrate that it can be formalized and intelligently designed.

The abstracts of scientific papers as PR nonsense?

What wedrifid said. In this particular case, the interpretation of the results as supporting the possibility of "creativity" in any interesting sense is nonsense, even if the results themselves are genuine. Also, in the vast majority of cases, abstracts are adequate to tell you whether you want to read the paper, not to communicate the (intended meaning of the) results.

The abstracts of scientific papers as PR nonsense?

Frequently. Most notably in medical science (where the most money is involved) but to a lesser extent elsewhere. This particular instance is borderline. I would have to read the rest of the paper to judge just how far the spin has taken the description.

If composing music, deriving scientific laws and generating functional hypotheses don't count, what else?

Actually doing those things.

It is, of course, impressive that the researchers were able to solve the problems in question in a particularly general way.

To my knowledge, nobody has yet put the evidence, arguments, and indications together in one place.

A relevant link to SIAI's short summary of a case for AI risk: