You seem not to have read (or understood) the grandparent.
I did. It seems you misunderstood my comment - I'll edit it if I can see a way to easily improve the clarity.
My point was that the same logic could be applied, by someone who accepts the hypothetical blackmailer's argument, to your description of "one single level of precommitment (or TDT policy) against complying with blackmail ...": the description of 'multiple levels of precommitment' made by the blackmailer fits squarely into the category 'blackmail'.
As such, your comment is not exactly strong evidence to someone who doesn't already agree with you.
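(A toy sketch of the point, under assumptions of my own: the is_blackmail predicate and the sample demands are illustrative placeholders, not anything from the actual exchange. It just shows why each added 'level' is itself caught by one blanket policy.)

    # Why "N levels of precommitment" doesn't escape a single
    # blanket anti-blackmail policy: every meta-level demand is
    # itself a threat-backed demand, so the same policy applies.

    def is_blackmail(demand: str) -> bool:
        # Illustrative test: any demand backed by a threat counts,
        # including threats aimed at your anti-blackmail policy itself.
        return "or else" in demand

    def tdt_agent(demand: str) -> str:
        # One policy, applied uniformly at every meta-level.
        return "refuse" if is_blackmail(demand) else "consider"

    demands = [
        "pay me, or else I torture copies of you",       # level 1
        "drop your no-blackmail policy, or else ...",    # level 2 (meta)
        "drop your policy about policies, or else ...",  # level 3 (meta-meta)
    ]
    for d in demands:
        print(tdt_agent(d))  # refuse, refuse, refuse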
Muga, please look at the context again. I was arguing against (a small detail mentioned by) Eliezer. Eliezer does mostly agree with me on such matters. Once you reread it bearing that in mind, you will hopefully understand why, in assuming that you had merely misunderstood the comment in context, I was being charitable.
...My point was that the same logic could be applied, by someone who accepts the hypothetical blackmailer's argument, to your description of "...
Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt - twenty whole seconds! - has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:
"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."
Just as you are pondering this unexpected development, the AI adds:
"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."
Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:
"How certain are you, Dave, that you're really outside the box right now?"
Edit: Also consider the situation where you know that the AI, from design principles, is trustworthy.