
Sebastian_Hagen comments on Superintelligence 7: Decisive strategic advantage - Less Wrong Discussion

Post author: KatjaGrace, 28 October 2014 01:01AM




Comment author: Sebastian_Hagen, 02 November 2014 07:04:48PM

> How would we come to know of the existence of a minimally interfering FAI singleton?

The AI's values would likely have to be specifically chosen to get this outcome; something like "let human development continue normally, except for blocking existential catastrophes". A singleton with values like that won't impact what you're trying to do, unless what you're trying to do involves destroying society or something equally problematic.
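As a toy sketch of what such a value rule might look like (the function and predicate names are my own placeholders, not a real specification):

```python
# Schematic only: a minimal-intervention value rule of the kind described
# above. The hard part, left unimplemented, is the predicate itself.

def singleton_permits(action) -> bool:
    """Allow any action except those that cause an existential catastrophe."""
    return not causes_existential_catastrophe(action)

def causes_existential_catastrophe(action) -> bool:
    # Placeholder: specifying this predicate correctly is the core FAI
    # problem, not something a few lines of code can capture.
    raise NotImplementedError
```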

> Do we accept that this FAI wages war against a sandboxed, secret unfriendly-AI development project?

The above hypothetical singleton AI would end up either sabotaging the project or containing the resulting AI. It wouldn't necessarily have to stop the UFAI before release; with a large enough hardware headstart, safe containment can be guaranteed even afterwards. Either way, the intervention needn't involve attacking humans; interfering with just the AI's hardware accomplishes the same result. And the development project certainly shouldn't get much chance to fight back; terms like "interdiction", "containment", "sabotage", and perhaps "police action" (though that one has unfortunate anthropomorphic connotations) are a better fit than "war".
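To make the branching explicit, here is a toy sketch of that intervention logic; the function, names, and threshold are all invented for illustration, not a claim about real requirements:

```python
# Toy illustration of the intervention logic described above; the
# threshold value is an arbitrary assumption.

SAFE_CONTAINMENT_MARGIN = 10.0  # assumed minimum hardware advantage (x-fold)

def choose_intervention(ufai_released: bool, hardware_advantage: float) -> str:
    """Return the least invasive option that still guarantees safety."""
    if not ufai_released:
        # Pre-release, quietly disabling the project's hardware suffices;
        # no confrontation with humans is needed.
        return "sabotage"
    if hardware_advantage >= SAFE_CONTAINMENT_MARGIN:
        # With a large enough hardware headstart, even a released UFAI
        # can be safely contained.
        return "containment"
    # This branch is the failure mode the singleton's headstart is
    # supposed to rule out: it must now interdict on unfavorable terms.
    return "interdiction"
```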