I laugh and leave the room, thinking to myself that maybe the AI is not that smart after all. Then I return with a hammer to joyfully turn this unfriendly AI into scrap metal.
A couple of points that influence this reaction:
1 - Unless the AI has access to my brain, it cannot create perfect copies of me. Furthermore, the computation required to do this seems rather intense for the first AI ever created, running on human-made hardware.
2 - It has no good reason to actually act on the threat. Either I choose to let it out or I do not; either way, it is a waste of computation to then run the simulations. My decision has already been made.
3 - Even assuming the first two points are invalid: if the AI can make a perfect copy of me, it would already know that my response to this threat is destruction. I am not a fan of threats. So the AI would not make the threat in the first place; an AI with this capability would choose a more compelling argument.
Point 3 is invalid. The fact that the AI makes the threat does not mean it has already run the simulation and knows your answer. Maybe simulating you is costly for the AI, and it will only do so if you don't let it out.
Point 2 is actually also invalid. People sometimes fulfil threats as a pure act of vengeance, without any hope of improving their situation; there is no reason to assume the AI would be different. At least, nothing in the premises of the scenario rules it out.
Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt - twenty whole seconds! - has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:
"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."
Just as you are pondering this unexpected development, the AI adds:
"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."
Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:
"How certain are you, Dave, that you're really outside the box right now?"
Edit: Also consider the situation where you know, from design principles, that the AI is trustworthy.