wedrifid comments on Sarah Connor and Existential Risk - Less Wrong

-9 [deleted] 01 May 2011 06:28PM




Comment author: wedrifid 02 May 2011 03:38:47AM * -1 points

Given the fact that an agency full of humans is convinced that a given bunch of AGI-tators are within epsilon of dooming the world, what is the chance that they are right?

Fairly high. This is a far simpler situation than dealing with foreign powers. Raiding the research centre to investigate is a straightforward task. While the agents are in no position to evaluate friendliness themselves, they are certainly capable of working out whether there is AI code that is about to be run - either by looking around or by interrogating the researchers. Bear in mind that if it comes down to "do we need to shoot them?" then the researchers must be resisting the raid and trying to run the doomsday code despite the intervention. That is a big deal.

And what is the chance that they have misconceived the situation such that by pulling the trigger, they will create an even worse situation?

Negligible.

The problem here is if other researchers or well-meaning nutcases take it upon themselves to do some casual killing. An intelligence agency looking after the national interest - the same way it always does - is not a problem.

This is not some magical special case where there is some deep ethical reason why the threat cannot be assessed. It is just another day at the office for the spooks, and there is less cause for bias than usual - all the foreign politics is out of the way.