MichaelVassar comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR, 11 November 2009 03:00AM




Comment author: MichaelGR 11 November 2009 09:20:33PM, 14 points

In 2007, I wrote a blog post titled Stealing Artificial Intelligence: A Warning for the Singularity Institute.

Short summary: After a few more major breakthroughs, when AGI is almost ready, it will no doubt appear on the radar of many powerful organizations, such as governments. They could spy on AGI researchers, steal the code when it is almost ready (or ready, but not yet certified Friendly), and launch their own copy first, without all the care and understanding required.

If you think there's a real danger there, could you tell us what the SIAI is doing to minimize it? If it doesn't apply to the SIAI, do you know if other groups working on AGI have taken this into consideration? And if this scenario is not realistic, could you tell us why?

Comment author: MichaelVassar 13 November 2009 05:13:31AM, 8 points

I strongly disagree with the claim that AGI is likely to appear on the radar of powerful organizations just because it is almost ready. In my reading of history, that doesn't match the pattern of past scientific (as opposed to, by and large, technological) breakthroughs. Uploading, maybe, since there is likely to be a huge engineering project even after the science is done, though the science might be done in secret. With AGI, the science IS the project.

Comment author: roland 19 November 2009 01:00:57AM, 0 points

Well, that will depend on the people in power grasping its importance.