Dr_Manhattan comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

Post author: Kaj_Sotala 26 December 2010 11:21AM




Comment author: XiXiDu 29 December 2010 08:31:20PM *  3 points

And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I'm pretty sure that will become less and less of a possibility as time goes on, especially if the AGI is also holding hostage the contents of any hard drives without off-line backups.

I understand. But how do you differentiate this from the same scenario involving an army of human hackers? The AI will likely be very vulnerable if it runs on some supercomputer, and even more so if it runs in the cloud (just use an EMP). In contrast, an army of human hackers can't be disrupted that easily and is an enemy you can't pinpoint. You are portraying one particular scenario here, and I do not see it as a convincing argument for elevating risks from AI above other risks.

The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don't, having humans build whatever you want is relatively trivial.

It isn't trivial. There is a strong interdependence between resources and manufacturers. The AI won't be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly get suspicious. Remember, it won't be able to coordinate a world-spanning conspiracy, because it hasn't been able to self-improve to that point yet: it is still trying to acquire enough resources, which it has to do the hard way without nanotech. You'd probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.

Comment author: Dr_Manhattan 31 December 2010 02:09:32PM *  1 point

just use an EMP

And what kind of computer controls the EMP? Or is it hand-cranked?

Comment author: XiXiDu 31 December 2010 02:45:00PM 1 point

The point is that you people are presenting an idea that is an existential risk by definition. I claim that it might superficially appear to be the most dangerous of all risks but that this is mostly a result of its vagueness.

If you say that there is the possibility of a superhuman intelligence taking over the world and all its devices to destroy humanity, then that is an existential risk by definition. I counter by disputing some of the premises and the likelihood of the subsequent scenarios. So to make me update on the original idea, you would have to support your underlying premises rather than arguing within already-established frameworks that impose several presuppositions on me.

Comment author: gwern 31 December 2010 10:00:42PM -1 points

Are you aware of what the most common EMPs are? Nukes. The computer that triggers the high-explosive lenses is already molten vapor by the time the chain reaction has begun expanding into a fireball.

What kind of computer indeed!