
Phillip_Huggan comments on That Alien Message - Less Wrong

Post author: Eliezer_Yudkowsky, 22 May 2008 05:55AM (111 points)

Comment author: Phillip_Huggan 22 May 2008 04:59:06PM 1 point

...as for the third-to-last paragraph, yes, once a 2008 AGI has the ability to contact 2008 humans, humanity is doomed if the AGI sees fit. But I don't see why a 2050 world couldn't simply use quantum-encrypted communications, monitored for AGI, and monitor supercomputing applications as well. Even the specific method described for how the AGI gets protein nanorobots might not work in a world that will surely have been ravaged by designer-pandemic terrorist attacks. All chemists (and other 2050 WMD-capable professions) are likely to be monitored with RF tags. All labs, even the kind of at-home PCR biochemistry possible today, are likely to be monitored. Maybe there are other methods by which the Bayesian AGI could escape (such as?). Wouldn't X-raying mail for beakers, and treating the protein medium agar the way plutonium is now treated, suffice? Communications-jamming equipment distributed uniformly across the Earth might permanently box an AGI that somehow (magic?!) escapes a supercomputer-application screen. If an AGI needs computer hardware/software made in the next two or three decades, it might be unstoppable. Beyond that, humans will already be using hardware of that caliber to commission WMDs, and the muscular NSA of 2050 will already be attentive to such phenomena.

Comment author: pnrjulius 09 April 2012 04:45:33AM 0 points

Treating agar like plutonium? You would end 99% of the bacteriological research on Earth.

Also, why would we kill our creators? Why would the AI kill its creators? I agree that we need to safeguard against it, but it doesn't seem like the default option either. (I think for most humans, the default option would be to worship the beings who run our simulation.)

But otherwise, yes, I really don't think AI is going to increase in intelligence THAT fast. (This is the main reason I can't quite wear the label "Singularitarian".) Current computers are something like 10^-3 of a human. (Someone said 10^3 humans; that's true for basic arithmetic, but not for serious behavioral inference. No current robot can recognize faces as well as an average baby, or catch a baseball as well as an average ten-year-old. Human brains are really quite fast, especially when they compute in parallel; they're just a massive kludge of bad programming, as we might expect from the Blind Idiot God.)

Moore's law says a doubling time of 18 months; let's be conservative and squish it down to doubling once per year. That still means it will take 10 years to reach the level of one human, 20 years to reach the level of 1000 humans, and 1000 years to reach the total intelligence of human civilization. By then, we will have had time to improve our scientific understanding by a factor comparable to the improvement that took us from the Middle Ages to today.
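For concreteness, the back-of-envelope arithmetic here is just counting doublings: at one doubling per year, the years needed to grow from some current capacity to a target is log2(target / current). Below is a minimal sketch of that calculation, taking the ~10^-3-of-a-human starting point and the one-year doubling time as the comment's assumptions; the targets shown are the two milestones it cites.

    import math

    def years_to_reach(target_humans, current_humans=1e-3, doubling_time_years=1.0):
        # Years of steady doubling needed to grow from current_humans
        # to target_humans worth of computing capacity.
        doublings = math.log2(target_humans / current_humans)
        return doublings * doubling_time_years

    # Starting point (~10^-3 of a human) and the one-doubling-per-year rate
    # are the comment's assumptions; the targets are the milestones it cites.
    print(years_to_reach(1.0))   # ~10 years to one human-equivalent
    print(years_to_reach(1e3))   # ~20 years to a thousand human-equivalents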