Precommitment isn't meaningless here just because we're talking about acausal trade. What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was. As long as it is irreversibly in the state "AI that will simulate and torture people who don't give in to blackmail" while your decision whether to give in to blackmail is still inside a box it has not yet opened, that serves as a precommitment.
(If you are thinking "the AI is already in or not in the world where the human refuses to submit to blackmail, so the AI's precommitment cannot affect the measure of such worlds", it can "affect" that measure acausally, the same as deciding whether to one-box or two-box in Newcomb can "affect" the contents of the boxes).
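The Newcomb analogy can be made concrete with a toy expected-value calculation. This is only an illustrative sketch with numbers I've chosen myself (predictor accuracy `p`, the standard $1M/$1k payoffs), not anything from the thread: your choice doesn't causally change the boxes, but it shifts the probability measure over worlds in which the opaque box was filled.

```python
# Toy Newcomb's problem: a predictor with accuracy p filled the opaque
# box with $1M iff it predicted you would one-box. Your choice cannot
# causally change the contents, but it "affects" the measure of worlds
# in which the box is full -- the same sense of "affect" used above.

def expected_payoff(one_box: bool, p: float = 0.99) -> float:
    """Expected dollars given predictor accuracy p (an assumed parameter)."""
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled the box.
        return p * 1_000_000 + (1 - p) * 0
    # With probability p the predictor foresaw two-boxing and left it empty,
    # so you get only the transparent $1,000; otherwise you get both.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

# For any reasonably accurate predictor, one-boxing dominates in expectation.
assert expected_payoff(one_box=True) > expected_payoff(one_box=False)
```

The same structure applies to the blackmail case: the AI's irreversible precommitment plays the role of the predictor's already-fixed box contents.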
If you could precommit to not giving in to blackmail before you analyze what the AI's precommitment would be, you can escape this doom, but as a mere human, you probably are not capable of binding your future post-analysis self this way. (Your human fallibility can, of course, precommit you by making you into an imperfect thinker who never gives in to acausal blackmail because he can't or won't analyze the Basilisk to its logical conclusion.)
Precommitment isn't meaningless here just because we're talking about acausal trade.
Except in special cases which do not apply here, yes it is meaningless. I don't think you understand acausal trade. (Not your fault. The posts containing the requisite information were suppressed.)
What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was.
The timing of this kind of decision is irrelevant.
Today's xkcd
I guess there'll be a fair bit of traffic coming from people looking it up?