dspeyer comments on 2013 Survey Results - Less Wrong

74 Post author: Yvain 19 January 2014 02:51AM




Comment author: XiXiDu 19 January 2014 01:33:48PM *  5 points [-]

Unfriendly AI: 233, 14.2%

Nanotech/grey goo: 57, 3.5%

Could someone who voted for unfriendly AI explain how nanotech or biotech isn't much more of a risk than unfriendly AI (I'll assume MIRI's definition here)?

I ask this question because it seems to me that, even given a technological singularity, there should be enough time for "unfriendly humans" to use precursors to fully fledged artificial general intelligence (e.g. advanced tool AI) to solve nanotechnology or advanced biotech: technologies that will themselves enable unfriendly humans to cause a number of catastrophic risks (e.g. pandemics, nanotech wars, or perfect global surveillance, an eternal tyranny).

Unfriendly AI, as imagined by MIRI, seems to be the end product of a developmental process that provides humans ample opportunity to wreak havoc.

I just don't see any good reason to believe that the tools and precursors to artificial general intelligence are not themselves disruptive technologies.

And in case you believe advanced nanotechnology to be infeasible, but unfriendly AI to be an existential risk: what concrete scenarios do you imagine in which such an AI could cause human extinction without nanotech?

Comment author: dspeyer 20 January 2014 05:03:30AM 5 points [-]

Two reasons: uFAI is deadlier than nano/biotech and easier to cause by accident.

If you build an AGI and botch friendliness, the world is in big trouble. If you build a nanite and botch friendliness, you have a worthless nanite. If you botch growth-control, it's still probably not going to eat more than your lab before it runs into micronutrient deficiencies. And if you somehow do build grey goo, people have a chance to call ahead of it and somehow block its spread. What makes uFAI so dangerous is that it can outthink any responders. Grey goo doesn't do that.

Comment author: XiXiDu 20 January 2014 09:37:30AM *  1 point [-]

This seems like a consistent answer to my original question. Thank you.

If you botch growth-control, it's still probably not going to eat more than your lab before it runs into micronutrient deficiencies.

On the one hand you believe that grey goo is not going to eat more than your lab before running out of steam, yet on the other hand you believe that AI in conjunction with nanotechnology will not run out of steam, or will only do so after humanity's demise.

And if you somehow do build grey goo, people have a chance to call ahead of it and somehow block its spread.

You further believe that AI can't be stopped but grey goo can.

Comment author: dspeyer 23 January 2014 01:05:02AM 7 points [-]

Accidental grey goo is unlikely to get out of the lab. If I design a nanite to self-replicate and spread through a living brain to report useful data to me, and I have an integer overflow bug in the "stop reproducing" code so that it never stops, I will probably kill the patient but that's it. Because the nanites are probably using glucose+O2 as their energy source. I never bothered to design them for anything else. Similarly, if I sent solar-powered nanites to clean up Chernobyl, I probably never gave them copper-refining capability -- plenty of copper wiring to eat there -- but if I botch the growth code they'll still stop when there's no more pre-refined copper to eat. Designing truly dangerous grey goo is hard and would have to be a deliberate effort.

As for stopping grey goo, why not? There'll be something that destroys it. Extreme heat, maybe. And however fast it spreads, radio goes faster. So someone about to get eaten radios a far-off military base saying "help! grey goo!" and the bomber planes full of incendiaries come forth to meet it.

Contrast uFAI, which has thought of this before it surfaces, and has already radioed forged orders to take all the bomber planes apart for maintenance or something.

Comment author: Eugine_Nier 23 January 2014 02:20:04AM 0 points [-]

Also, the larger the difference between the metabolisms of the nanites and the biosphere, the easier it is to find something toxic to one but not the other.