eli_sennesh comments on 2013 Survey Results - Less Wrong

74 Post author: Yvain 19 January 2014 02:51AM




Comment author: [deleted] 20 January 2014 10:02:10PM 1 point [-]

Think about it: it's the intelligence that makes things dangerous. Try and engineer a nanoscale robot that's going to be able to unintelligently disassemble all living matter without getting eaten by a bacterium. Unintelligently, mind you: no invoking superintelligence as your fallback explanation.

Comment author: Risto_Saarelma 21 January 2014 03:32:14AM 1 point [-]

Humans aren't superintelligent, yet they are still able to design macroscale technology that can wipe out biospheres and that can be deployed and propagated with less intelligence than it took to design. I'm not taking the bet that you can't shrink both the scale of the technology and the amount of intelligence needed to deploy it, while keeping the designer at least human-level. That sounds too much like the "I can't think of a way to do this right now, so it's obviously impossible" play.

Comment author: michaelsullivan 22 January 2014 06:14:23PM 1 point [-]

It seems that very few people considered the bad nanotech scenario obviously impossible, merely less likely to cause a near extinction event than uFAI.

Comment author: [deleted] 21 January 2014 07:42:43AM *  1 point [-]

In addition, to my best knowledge, trained scientists believe it impossible to turn the sky green and have all humans sprout spider legs. Mostly, they believe these things are impossible because they're impossible, not because scientists merely lack the leap of superintelligence or superdetermination necessary to kick logic out and do the impossible.

Comment author: CCC 21 January 2014 09:49:54AM 3 points [-]

If I wanted to turn the sky green for some reason (and had an infinite budget to work with), then one way to do it would be to release a fine, translucent green powder in the upper atmosphere in large quantities. (This might cause problems when it begins to drift down far enough to be breathed in, of course.) Alternatively, I could encase the planet Earth in a solid shell of green glass.

Comment author: CCC 21 January 2014 09:52:30AM 0 points [-]

Make it out of antimatter? Say, a nanoscale amount of anticarbon - just an unintelligent lump?

Dump enough of those on any (matter) biosphere and all the living matter will be very thoroughly disassembled.
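As a back-of-envelope check of the antimatter idea (my own numbers, not from the thread): each lump annihilates an equal mass of ordinary matter, releasing E = 2mc², but a "nanoscale lump" carries very little mass, so the "dump enough of those" qualifier is doing a lot of work. A sketch, assuming a hypothetical 100 nm cube at graphite density:

```python
# Energy released by one nanoscale anticarbon lump annihilating with matter.
# Assumptions (illustrative only): a 100 nm cube at graphite density.

C2 = 8.988e16          # speed of light squared, m^2/s^2
DENSITY_C = 2266.0     # graphite density, kg/m^3

side = 100e-9                 # edge of the cube, m
mass = DENSITY_C * side**3    # mass of one lump, kg (~2.3e-18 kg)
energy = 2 * mass * C2        # E = 2mc^2: the lump plus the matter it annihilates

print(f"mass per lump:   {mass:.2e} kg")
print(f"energy per lump: {energy:.2e} J")   # well under a joule

# For scale: a full kilogram of antimatter (annihilating a kilogram of
# matter) releases about 1.8e17 J, on the order of tens of megatons of TNT.
per_kg = 2 * C2
print(f"energy per kg of antimatter: {per_kg:.2e} J")
```

So a single lump releases a fraction of a joule; the destructive power comes entirely from the quantity dumped, not from any one nanoscale device.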

Comment author: [deleted] 21 January 2014 12:30:14PM 2 points [-]

That's not a nanoscale robot, is it? It's antimatter: it annihilates matter, because that's what physics says it does. You're walking around the problem I handed you and just solving the "destroy lots of stuff" problem. Yes, it's easy to destroy lots of stuff: we knew that already. And yet when I ask you to invent grey goo specifically, you don't seem able to come up with a feasible design.

Comment author: CCC 21 January 2014 06:09:52PM 0 points [-]

How is it not a nanoscale robot? It is a nanoscale device that performs the assigned task. What does a robot have that the nanoscale anticarbon lump doesn't?

I admit that it's not the sort of thing one thinks of when one thinks of the word 'robot' (to be fair, though, what I think of when I think of the word 'robot' is not nanoscale either). But I have found that, often, a simple solution to a problem can be found by, as you put it, 'walking around' it to get to the desired outcome.