One day, someone who is not a member of the Singularity Institute (and who has publicly stated that they don't believe in the necessity of ensuring all AI is Friendly) manages to build an AI. It promptly undergoes an intelligence explosion and sends kill-bots to massacre the vast majority of the upper echelons of the US Federal Government, both civilian and military. Or maybe it forcibly uploads them; it's sort of difficult for untrained meat-bags like the people running the media to tell. It claims, in a press release, that its calculations indicate that the optimal outcome for humanity is achieved by removing corruption from the US Government, and that this is the best way to do so.

What do you do?

By then it is too late for you to do anything interesting, as all your probable and improbable actions have already been taken into account. In fact, you will probably not notice the AGI fooming in such a direct way, and anything you do notice will not be what it seems. The AGI's true goals will likely be completely incomprehensible to us, given the difference in intelligence level, and something as mundane as removing corruption by overtly killing people seems like an awfully inefficient way of pursuing whatever it actually wants.

In other words, your hypothetical scenario is only good for a Hollywood thriller.

Microwave up some popcorn and watch.

Seriously, though, if it gets to that point, there's probably nothing you can do. What resources would you have access to that the military doesn't?

In this scenario, it has not yet engaged the bulk of the US military's forces. It's wiping out the brass in the Pentagon, not fighting US soldiers.

Besides, soldiers usually act on orders, and the lines of communication are sort of in chaos at the moment due to the sudden decapitation.

[TimS]

Given the capacity you describe, why do you think that the AI will have any difficulty neutralizing the threat from the bulk of the armed forces?

In other words, the three-word summary of your scenario is "humanity is doomed."

[anonymous]

Clearly the friendly AI has decided that you will have the optimal amount of fun by being the main character of an action movie. You should wait for a stunningly attractive member of the appropriate sex to come by and explain that the two of you are the only ones capable of protecting freedom by saving the President from certain death.

I didn't want to be too negative. Yet a superintelligence targeting US government officials over corruption is just improbable. There are bigger problems elsewhere. Many, much bigger problems.

I saw somebody heavily downvoted and I said to myself: let's see who has something interesting to say!

Then, I just downvoted the crap.

We need to have upvotes and downvotes shown independently. That way we can tell if something is controversial or just bad.

I've opened a topic about this recently.

http://lesswrong.com/r/discussion/lw/9kh/meta_karma_its_positive_and_negative_component

For I am a very controversial guy, as far as I know, and I want this to be visible.

Then I saw it had been suggested earlier. Probably more than once. It shouldn't be impossible to implement, since you can already sort posts by "Controversial". Keep some pressure in this direction! A sketch of what such a score could look like follows.
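If the two vote components were exposed, a controversy score is straightforward to compute. Here's a minimal sketch in Python, assuming a hypothetical VoteTally record; the formula (total votes weighted by how evenly they split) is loosely modeled on Reddit-style controversial sorting, not on whatever LessWrong actually uses:

```python
from dataclasses import dataclass

@dataclass
class VoteTally:
    """Hypothetical per-comment vote record with separate components."""
    upvotes: int
    downvotes: int

    @property
    def karma(self) -> int:
        # The single net number the site currently shows.
        return self.upvotes - self.downvotes

    @property
    def controversy(self) -> float:
        # High when votes are both numerous and evenly split;
        # near zero when one side dominates or few people voted.
        total = self.upvotes + self.downvotes
        if total == 0:
            return 0.0
        balance = min(self.upvotes, self.downvotes) / max(self.upvotes, self.downvotes)
        return total * balance

# Two comments with the same net karma of -1:
print(VoteTally(1, 2).controversy)    # 1.5   -> just bad
print(VoteTally(40, 41).controversy)  # ~79.0 -> controversial
```

The point of the example: a comment sitting at -1 could be mildly disliked or hotly contested, and net karma alone cannot distinguish the two, while the split components can.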

If you're wondering why everyone is downvoting this post, this is a good place to start. While there are some existential threats that humanity could fight against after they're out of the bag (plagues, for instance), post-intelligence-explosion AI is very probably not one of them.

(Of course, an AI might be able to pose a threat even without being capable of recursive self-improvement, and in that case the threat might conceivably be significant but not beyond human capacities. But your particular scenario is more a cheesy sci-fi pitch than a realistic hypothetical.)

[Ford Prefect] Five minutes to go.

[Arthur Dent] Damn you and your fairy stories, they're smashing up my home! Stop, you vandals! You home wreckers! You half-crazed Visigoths, stop!

[Ford Prefect] Arthur! Come back! It's pointless! Barman, quickly, can you just give me four packets of peanuts?

[Bartender] Just a minute, sir. I'm just serving this gentleman.

[Ford Prefect] Well, what's the point. He's going to be dead in a few minutes! Come on!

[Bartender] Yeah, just a minute, sir. Do you mind, sir?

[Ford Prefect] Pork scratchings. Peanuts! How much?

[Bartender] What?

[Ford Prefect] Have it. Have it. Keep it!

[Bartender] You serious sir? Do you really think the world is going to end this afternoon?

[Ford Prefect] Uh, yes, in just over 3 minutes and 5 seconds.

[Bartender] Well, isn't there anything we can do?

[Ford Prefect] No, nothing.

[Bartender] I always thought we were supposed to lie down or put a paper bag over your head or something.

[Ford Prefect] Yes, if you like.

[Bartender] Will that help?

[Ford Prefect] No. Excuse me. I've got to go.

What do you do?

Recheck my cached conclusions regarding decision theory's implications for anthropics.