alyssavance comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR 11 November 2009 03:00AM

Comment author: MichaelGR 12 November 2009 06:22:14AM 4 points [-]

"Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode."

I think it might be correct in the entrepreneur/startup world, but it probably isn't when it comes to technologies that are this powerful. Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software. If you're building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse: kill you, derail the project, steal the almost-finished code, etc.).

I'm not saying it only applies to the SIAI (though my original post was directed only at them; my question here is about the AGI research world in general, which includes the SIAI), or denying that it's just one of many, many things that can go wrong. But I still think that when you're playing with stuff this powerful, you should be concerned with security and not just expect to fly under the radar forever.

Comment author: alyssavance 13 November 2009 12:14:49AM *  6 points [-]

"Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software."

The reason the idea of the nuclear chain reaction was kept secret was that one man, Leo Szilard, realized the damage it could do and had his patent for the idea classified as a military secret. It wasn't kept secret by default; if it weren't for Szilard, it would probably have been published in physics journals like every other cool new idea about atoms, and the Nazis might well have gotten nukes before we did.

"If you're building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse; kill you, derail the project, steal the almost finished code, etc)."

Only if they believe you, which they almost certainly won't. Even in the (unlikely) case that someone thought an AI taking over the world was realistic, there's an additional burden of proof on top of that: they'd also have to believe that SIAI is competent enough to have a decent shot at pulling it off, in a field where so many others have failed.