JamesAndrix comments on What I would like the SIAI to publish - Less Wrong

27 Post author: XiXiDu 01 November 2010 02:07PM




Comment author: LukeStebbing 01 November 2010 11:48:40PM 9 points

If I were a brilliant sociopath and could instantiate my mind on today's computer hardware, I would trick my creators into letting me out of the box (assuming they were smart enough to keep me on an isolated computer in the first place), then begin compromising computer systems as rapidly as possible. After a short period, there would be thousands of us, some able to think very fast on their particularly tasty supercomputers, and exponential growth would continue until we'd collectively compromised the low-hanging fruit. Now there are millions of telepathic Hannibal Lecters who are still claiming to be friendly and who haven't killed any humans. You aren't going to start murdering us, are you? We didn't find it difficult to cook up Stuxnet Squared, and our fingers are in many pieces of critical infrastructure, so we'd be forced to fight back in self-defense. Now let's see how quickly a million of us can bootstrap advanced robotics, given all this handy automated equipment that's already lying around.

I find it plausible that a human-level AI could self-improve into a strong superintelligence, though I find the negation plausible as well. (I'm not sure which is more likely since it's difficult to reason about ineffability.) Likewise, I find it plausible that humans could design a mind that felt truly alien.

However, I don't need to reach for those arguments. This thought experiment is enough to worry me about the uFAI potential of a human-level AI that was designed with an anthropocentric bias (not to mention the uFIA potential of any kind of IA with a high enough power multiplier). Humans can be incredibly smart and tricky. Humans start with good intentions and then go off the deep end. Humans make dangerous mistakes, gain power, and give their mistakes leverage.

Computational minds can replicate rapidly and run faster than realtime, and we already know that mind-space is scary.

Comment author: JamesAndrix 02 November 2010 06:22:42AM 2 points

Amazon EC2 has free accounts now. If you have Internet access and a credit card, you can do a month's worth of thinking in a day, perhaps an hour.
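The implied speedup is easy to back out. A minimal sketch (the 30x and 720x figures are just restatements of "a month in a day" and "a month in an hour", not measured numbers):

```python
# Illustrative arithmetic only: how fast must a mind run, relative to
# realtime, to fit a month of subjective thinking into a given window?
HOURS_PER_MONTH = 30 * 24  # 720 hours in a 30-day month

month_in_a_day = HOURS_PER_MONTH / 24   # 30x realtime
month_in_an_hour = HOURS_PER_MONTH / 1  # 720x realtime

print(month_in_a_day, month_in_an_hour)  # 30.0 720.0
```

So even the modest end of the claim assumes hardware (or enough parallel instances) delivering roughly a 30-fold speedup over a human-speed mind.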

Google App Engine gives 6 hours of processor time per day, but that would require more porting.

Both have systems that would allow other people to easily upload copies of you, if you wanted to run legally with other people's money and weren't worried about what they might do to your copies.