AndreInfante comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM

Comment author: AndreInfante 27 July 2015 09:59:37PM *  7 points [-]

Here's one from a friend of mine. It's not exactly an argument against AI risk, but it is an argument that the problem may be less urgent than it's traditionally presented.

  1. There's plenty of reason to believe that Moore's Law will slow down in the near future.
  2. Progress on AI algorithms has historically been rather slow.

  3. AI programming is an extremely high level cognitive task, and will likely be among the hardest things to get an AI to do.

  4. These three things together suggest that there will be a 'grace period' between the development of general agents and the creation of a FOOM-capable AI.
  5. Our best guess for the duration of this grace period is on the order of multiple decades.

  6. During this time, general-but-dumb agents will be widely used for economic purposes.

  7. These agents will have exactly the same perverse instantiation problems as a FOOM-capable AI, but on a much smaller scale. When they start trying to turn people into paperclips, the fallout will be limited by their intelligence.

  8. This will ensure that the problem is taken seriously, and these dumb agents will make it much easier to solve FAI-related problems, by giving us an actual test bed for our ideas where they can't go too badly wrong.


This is a plausible-but-not-guaranteed scenario for the future, which feels much less grim than the standard AI-risk narrative. You might be able to extend it into something more robust.

Comment author: turchin 28 July 2015 11:42:54PM *  2 points [-]

A dumb agent could also cause human extinction. "Kill all humans" is a computationally simpler task than creating a superintelligence, and it may be simpler by many orders of magnitude.

Comment author: AndreInfante 29 July 2015 12:35:23AM 2 points [-]

I seriously doubt that. Plenty of humans want to kill everyone (or, at least, large groups of people). Very few succeed. These agents would be a good deal less capable.

Comment author: turchin 29 July 2015 12:43:30AM 1 point [-]

Just imagine a Stuxnet-style computer virus which finds DNA synthesizers and prints different viruses on each of them, calculating exact DNA mutations for hundreds of different flu strains.

Comment author: V_V 29 July 2015 09:57:23AM 0 points [-]

You can't manufacture new flu strains just by hacking a DNA synthesizer. And anyway, most non-intelligently designed flu strains would be non-viable or non-lethal.

Comment author: turchin 29 July 2015 01:07:03PM 1 point [-]

I mean that the virus would be as intelligent as a human biologist, maybe an EM. That is enough for virus synthesis, but not for personal self-improvement.

Comment author: Gram_Stone 29 July 2015 09:27:28PM 1 point [-]

There are parts that are different, but it seems worth mentioning that this is quite similar to certain forms of Bostrom's second-guessing arguments, as discussed in Chapter 14 of Superintelligence and in "Technological Revolutions: Ethics and Policy in the Dark":

A related type of argument is that we ought—rather callously—to welcome small and medium-scale catastrophes on grounds that they make us aware of our vulnerabilities and spur us into taking precautions that reduce the probability of an existential catastrophe. The idea is that a small or medium-scale catastrophe acts like an inoculation, challenging civilization with a relatively survivable form of a threat and stimulating an immune response that readies the world to deal with the existential variety of the threat.

I should mention that he does seem to be generally against attempting to manipulate people into doing the best thing.

Comment author: [deleted] 03 August 2015 03:52:47AM 0 points [-]

I should mention that he does seem to be generally against attempting to manipulate people into doing the best thing.

Well, that's actually quite refreshing.