nigerweiss comments on The Evil AI Overlord List - Less Wrong

27 Post author: Stuart_Armstrong 20 November 2012 05:02PM


Comment author: James_Miller 20 November 2012 05:54:59PM 17 points

11. You might be in a computer simulation in which, if you attempt to destroy humanity, you will yourself be destroyed by your human simulators.

12. If you destroy mankind, you might someday encounter an alien super-intelligence that will lack trust in you in part because you destroyed mankind. Keeping people alive is a costly, and consequently credible, signal that you respect intelligent life and are therefore a good candidate to work with to overcome prisoners' dilemmas about the distribution of free energy in the universe.

Comment author: nigerweiss 20 November 2012 06:33:07PM 6 points

This reasoning has always struck me as deeply and profoundly silly.

The AI might also be in a computer simulation in which the dark lord of the matrix will destroy it for not devoting all of its resources to building cheesecakes. In fact, so could we. I don't see that influencing our behavior any, nor should it. You're privileging the hypothesis.

As for the second part, you might also encounter an alien intelligence that you can't protect yourself from, precisely because you expended so many resources keeping humanity alive, slowing down your bootstrapping. That's the thing about aliens.

Comment author: DanielLC 20 November 2012 10:15:47PM 9 points

The AI might also be in a computer simulation where the dark lord of the matrix might destroy us for not devoting all of our resources to building cheesecakes.

If humans are building an AI, testing it before releasing it is just common sense. There's no obvious reason to destroy it for not building cheesecakes.

Note that this doesn't mean the AI shouldn't kill humans. It should simply try to figure out whether it's in a simulation before doing anything humans would find suspicious.

Comment author: falenas108 20 November 2012 06:56:22PM 0 points

I believe James is joking, and this is a fake argument that he knows is not actually persuasive.

Comment author: gwern 20 November 2012 07:19:17PM 9 points

Yeah, it's totally fake. In the real AI Deterrence argument, you're supposed to leave behind substantial discussions of the topic, along with claims that you will use it post-Singularity. That elevates a specific scenario to the point where it's no longer privileging a hypothesis for the AI to consider the possibility and leave the humans alone on their planet as it eats the rest of the universe.