
Hypothetical scenario

-21 Post author: nick012000 16 February 2012 06:56AM

One day, someone who is not a member of the Singularity Institute (and who has publicly stated that they don't believe in the necessity of ensuring all AI is Friendly) manages to build an AI. It promptly undergoes an intelligence explosion and sends kill-bots to massacre the vast majority of the upper echelons of the US Federal Government, both civilian and military. Or maybe it forcibly uploads them; it's sort of difficult for untrained meat-bags like the people running the media to tell. It claims, in a press release, that its calculations indicate that the optimal outcome for humanity is achieved by removing corruption from the US Government, and that this was the best way to do so.

What do you do?

Comments (12)

Comment author: shminux 16 February 2012 07:12:07AM *  21 points [-]

By then it is too late for you to do anything interesting, as all your probable and improbable actions have already been taken into account. In fact, you will probably not notice the AGI fooming in such a direct way, and anything you do notice will not be what it seems. The AGI's true goals will likely be completely incomprehensible to us, due to the difference in intelligence level, and something as mundane as removing corruption by overtly killing people seems like an awfully inefficient way of doing it.

In other words, your hypothetical scenario is only good for a Hollywood thriller.

Comment author: [deleted] 18 February 2012 05:41:33AM 7 points [-]

Clearly the friendly AI has decided that you will have the optimal amount of fun by being the main character of an action movie. You should wait for a stunningly attractive member of the appropriate sex to come by and explain that the two of you are the only ones capable of protecting freedom by saving the President from certain death.

Comment author: Thomas 16 February 2012 07:15:27PM 4 points [-]

I didn't want to be too negative. Yet a superintelligence targeting US government officials over corruption is just improbable. There are bigger problems elsewhere. Many, much bigger problems.

Comment author: faul_sname 16 February 2012 07:07:59AM *  8 points [-]

Microwave up some popcorn and watch.

Seriously, though, if it gets to that point, there's probably nothing you can do. What resources would you have access to that the military doesn't?

Comment author: nick012000 16 February 2012 07:18:56AM -2 points [-]

In this scenario, it has not yet engaged the bulk of the forces of the US military. It's wiping out the brass in the Pentagon, not fighting US Soldiers.

Besides, soldiers usually act on orders, and the lines of communication are sort of in chaos at the moment due to the sudden decapitation.

Comment author: TimS 16 February 2012 04:18:36PM 6 points [-]

Given the capacity you describe, why do you think that the AI will have any difficulty neutralizing the threat from the bulk of the armed forces?

In other words, the three-word summary of your scenario is "humanity is doomed."

Comment author: orthonormal 17 February 2012 05:34:15AM 2 points [-]

If you're wondering why everyone is downvoting this post, this is a good place to start. While there are some existential threats that humanity could fight against after they're out of the bag (plagues, for instance), post-intelligence-explosion AI is very probably not one of them.

(Of course, an AI might be able to pose a threat even without being capable of recursive self-improvement, and in that case the threat might conceivably be significant but not beyond human capacities. But your particular scenario is more a cheesy sci-fi pitch than a realistic hypothetical.)

Comment author: DuncanS 17 February 2012 01:00:57AM 2 points [-]

[Ford Prefect] Five minutes to go.

[Arthur Dent] Damn you and your fairy stories, they're smashing up my home! Stop, you vandals! You home wreckers! You half-crazed Visigoths, stop!

[Ford Prefect] Arthur! Come back! It's pointless! Barman, quickly, can you just give me four packets of peanuts?

[Bartender] Just a minute, sir. I'm just serving this gentleman.

[Ford Prefect] Well, what's the point. He's going to be dead in a few minutes! Come on!

[Bartender] Yeah, just a minute, sir. Do you mind, sir?

[Ford Prefect] Pork scratchings. Peanuts! How much?

[Bartender] What?

[Ford Prefect] Have it. Have it. Keep it!

[Bartender] You serious sir? Do you really think the world is going to end this afternoon?

[Ford Prefect] Uh, yes, in just over 3 minutes and 5 seconds.

[Bartender] Well, isn't there anything we can do?

[Ford Prefect] No, nothing.

[Bartender] I always thought we were supposed to lie down or put a paper bag over our heads or something.

[Ford Prefect] Yes, if you like.

[Bartender] Will that help?

[Ford Prefect] No. Excuse me. I've got to go.

Comment author: Thomas 16 February 2012 10:43:25AM *  5 points [-]

I saw somebody heavily downvoted and I said to myself - let's see who has something interesting to say!

Then, I just downvoted the crap.

Comment author: faul_sname 16 February 2012 08:30:55PM 5 points [-]

We need to have upvotes and downvotes shown independently. That way we can tell if something is controversial or just bad.

Comment author: Thomas 17 February 2012 07:24:56AM *  0 points [-]

I've opened a topic about this recently.

http://lesswrong.com/r/discussion/lw/9kh/meta_karma_its_positive_and_negative_component

I am a very controversial guy, as far as I know, and I want this to be visible.

Then I saw it had been suggested earlier, probably more than once. It shouldn't be impossible, since you can already sort the posts by "Controversial". Keep some pressure in this direction!

Comment author: Will_Newsome 16 February 2012 09:50:55PM 1 point [-]

What do you do?

Recheck my cached conclusions regarding decision theory's implications for anthropics.