Vladimir_Nesov comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong
In 2007, I wrote a blog post titled Stealing Artificial Intelligence: A Warning for the Singularity Institute.
Short summary: After a few more major breakthroughs, when AGI is almost ready, it will no doubt appear on the radar of many powerful organizations, such as governments. They could spy on AGI researchers, steal the code when it is almost ready (or ready, but not yet certified Friendly), and launch their copy first, without all the care and understanding required.
If you think there's a real danger there, could you tell us what the SIAI is doing to minimize it? If it doesn't apply to the SIAI, do you know if other groups working on AGI have taken this into consideration? And if this scenario is not realistic, could you tell us why?
Isn't it too early to start solving this problem? There is a good chance SIAI won't even have a direct hand in programming the FAI.
That's what I've been told, but I'm not entirely convinced. Since predicted timelines vary so widely, and since fundamental breakthroughs are hard to anticipate, I think the problem deserves attention as soon as possible, if only to know what to do if things start moving rapidly (an AGI team might not get many chances to recover from a security mistake).
I'll broaden my question a bit so that it applies to all people working on AGI and not just the SIAI.
Care to elaborate?
Why? It's not like SIAI is on a teleological track to be the one true organization to actually save the world. They have some first-mover advantage to be the focus of this movement, to the extent it's effective in gravitating activity their way. They are currently doing important work on spreading awareness. But if things catch up, others will start seriously working on the problem elsewhere.
By things catching up, you mean awareness spreading, right? It doesn't seem like a stretch to guess that SIAI will continue to do a large portion of that.
There's no advantage associated with FAI programmers starting a second group if they know they'll get funded by SIAI and don't have any major disagreements with SIAI's philosophy.
In practice, things don't strictly follow that rule.