FAWS comments on Politicians stymie human colonization of space to save make-work jobs - Less Wrong

11 [deleted] 18 July 2010 12:57PM




Comment author: FAWS 18 July 2010 10:51:47PM 3 points

Less than 1 in 5000 sounds about right to me. I'm much more worried about other nano-dangers (e.g. clandestine brainwashing) than grey goo.

Not only is there the problem of technological feasibility, but even if it's possible there is the still larger problem of economic feasibility. Molecular von Neumann Machines (MvNM), if possible at all, should be vastly more difficult to develop than far more efficient static nano-assemblers operating in a controlled environment (probably a vacuum?) and integrated into an economy of mixed nano- and macrotech that takes advantage of specialization, economies of scale, etc. Static nano-assemblers should already be ubiquitous long before molecular von Neumann Machines start to become feasible, so why develop the latter in the first place? For medical applications, specialized medical nanobots running on glucose and cheaply mass-produced in the static nano-assemblers should also beat them. MvNM would be useful in space and for sending to other planets, but there wouldn't be all that much money in that, and sending a larger probe with nano-assemblers and assorted equipment would also do.

Since there would be no overwhelming incentive against outlawing the development of MvNM, doing so would be feasible, and, considering how easy it should be to scare people with the grey goo scenario in such a world, very likely.

That pretty much leaves secret development as some sort of weapon, which would make grey goo defense a military issue. Nano-assemblers should be much better at producing nano-hunters and nano-killers (or more assemblers, mining equipment, planes, rockets, bombs) than MvNM are at producing more of themselves, and nano-hunters and nano-killers much better at finding and destroying them; there would also be the option of using macroscopic weapons against larger concentrations.

Comment deleted 19 July 2010 12:01:04PM *
Comment author: FAWS 19 July 2010 12:38:10PM 2 points

Other nano-risks aren't necessarily extinction risks, though. And while I'm somewhat worried that someone might secretly use nano to rewire the brains of important people, and later of everyone, to absolute loyalty to them (an outcome that would be a lot better than extinction, but still pretty bad), or something along those lines, it doesn't seem obvious that there is anything effective we could spend money on now that would help protect us, unlike with asteroids. At least not at the levels of spending that asteroid danger prevention could usefully absorb.

Comment deleted 19 July 2010 12:50:21PM *
Comment author: FAWS 19 July 2010 02:54:38PM * 3 points

But now you have to catalogue all the possible risks of nanotech, and add a category for "risks I haven't thought of", and then claim that the total probability of all that is < 1/5000.

The question wasn't whether nanotech is potentially more dangerous than asteroids overall, though. It was whether all money available for existential risk prevention/mitigation would be better spent on nano than on space-based dangers.

There doesn't seem to be any good way to spend money so that all possible nano risks will be mitigated (other than lobbying to ban all nano research everywhere, and I'm far from convinced that the potential dangers of nano are greater than the benefits). I'm not even sure there is a good way to spend money on mitigation of any single nano risk.

The most obvious mitigation/prevention technology would be really good detectors for autonomous nanobots, whether self-reproducing or not. But until we know how they work and what energy source they use we can't do all that much useful research in that direction, and spending after we know what we need would probably be much more efficient. This also looks like an issue where the military will spend such enormous amounts once the possibilities are clear that money spent beforehand will not affect the result all that much.

you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?)

Yes, I did; that's one of the most obvious ones. It's not going to be possible to prevent a nation with access to uranium from building nuclear weapons, but I think that would be the case anyway, with or without nano. The risk of private persons building them might be somewhat increased. I'm not sure whether there is any need to separate isotopes in whatever machines pre-process materials in/for nano-assemblers, or whether they lend themselves to being modified for that. Assuming they do, you'd need to look at anyone who processes large amounts of sea water, or any other material that contains uranium. Perhaps you could mandate that only designs that are vulnerable to radioactivity can be sold commercially, or make the machines refuse to work with uranium in a way that is hard to remove. I don't see how spending money now could help in any way.

This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.

I'm not sure the probability of a serious error in the best available argument against something can be taken as a lower bound on the probability you should assign to it in general. In the case of the LHC: if there is a 1 in 20 chance of a mistake that doesn't really change the conclusion much, a 1 in 100 chance of a mistake such that the real probability is 1 in 100,000, and a 1 in 10,000 chance of a mistake such that the real probability is 1 in 1,000, then 1 in a million could still be roughly the correct estimate.
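The mixture argument above can be checked with a quick calculation. This is a sketch using the comment's own hypothetical numbers (the mistake probabilities and the corrected disaster probabilities are the illustrative assumptions stated there, not real estimates):

```python
# Hypothetical numbers from the comment: the physicists' argument claims
# a disaster probability of 1 in a million, but the argument itself might
# contain a mistake of varying severity.
p_base = 1e-6  # disaster probability if the argument is essentially right

# (probability of this kind of mistake, real disaster probability if so)
scenarios = [
    (1e-2, 1e-5),   # mistake such that the real probability is 1 in 100,000
    (1e-4, 1e-3),   # mistake such that the real probability is 1 in 1,000
]
# The 1-in-20 chance of a mistake that "doesn't change the conclusion much"
# is folded into the base case, since it leaves p_base roughly as is.

p_no_serious_mistake = 1 - sum(p for p, _ in scenarios)
expected = p_no_serious_mistake * p_base + sum(p * q for p, q in scenarios)
print(expected)  # ~1.19e-6, i.e. still roughly 1 in a million
```

The two mistake scenarios each contribute about 1e-7, so the total stays within a small factor of the original 1-in-a-million estimate, which is the comment's point: a non-negligible chance of error in the argument need not dominate the final probability.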

Comment author: JoshuaZ 19 July 2010 01:46:58PM 2 points

But now you have to catalogue all the possible risks of nanotech, and add a category for "risks I haven't thought of", and then claim that the total probability of all that is < 1/5000

The 1/5000 number only works for the really large asteroids (> 1 km in diameter). Note that, as I pointed out earlier, much smaller asteroids can be locally devastating. The resources that go into finding the very large asteroids also help track the others, reducing the chance of human life lost even outside existential risk scenarios. And as I pointed out, there are a lot of other potential space-based existential risks. That said, I think you've made a very good point above about the many non-grey-goo scenarios that make nanotech a severe potential existential risk. So I'll agree that if one compares the probability of a nanotech existential risk scenario with that of a meteorite existential risk scenario, the nanotech one is more likely.

Your point about the impact of nanotech on nuclear proliferation I find particularly disturbing. The potential for nanotech to greatly increase the efficiency of enriching uranium seems deeply worrisome, and that's really the main practical limitation on building fission weapons.

Comment author: whpearson 19 July 2010 01:10:49PM 1 point

A lot of it seems to hinge on the probability you assign to those threats being developed in the next century.

Comment author: Vladimir_Nesov 19 July 2010 03:21:25PM 2 points

Accidental grey goo doesn't seem plausible, and purposeful destructive use of nanotech doesn't necessarily fall into that category. We can have nanomachines that act as bioweapons, infecting people and killing them.

Comment author: FAWS 19 July 2010 03:38:13PM * 2 points

Are you disagreeing with something I said? I'm not sure nanotech would be better at killing that way than a designer virus, which should be a lot easier and cheaper (possibly even when accounting for the need to find a way to prevent it from spreading to your own side, if that's necessary). Nanotech might be able to do things that a virus can't, but that would be the sort of thing I mentioned. Anyway, I don't see how we could effectively spend money now to prevent either.

Comment author: Vladimir_Nesov 19 July 2010 03:44:58PM 1 point

Anyway I don't see how we could effectively spend money now to prevent either.

I agree with this. I disagree that there are no clear non-goo extinction risks associated with nano, and gave an example of one.