Nick_Tarleton comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong
If they're going to have that exact wrong level of cluefulness, why wouldn't they already have a (much better-funded, much less careful) AGI project of their own?
As Vladimir says, it's too early to start solving this problem, and if "things start moving rapidly" anytime soon, then AFAICS we're just screwed, government involvement or no.
Maybe they do, maybe they don't. I won't try to add more details to the scenario, because that's not the right way to think about this, IMO. If it happens, it probably won't be a movie-plot scenario anyway ("Spies kidnap top AI research team and torture them until they make a few changes to the program, granting our Great Leader dominion over all")...
What I'm interested in is the security of AGI research in general. It would be extremely sad to see FAI theory go very far only to be derailed by (possibly well-intentioned) people who see AGI as a great source of power and want to have it "on their side" or whatever.