
wedrifid comments on xkcd on the AI box experiment - Less Wrong Discussion

Post author: FiftyTwo, 21 November 2014 08:26AM




Comment author: wedrifid 26 November 2014 01:03:13AM -1 points

Precommitment isn't meaningless here just because we're talking about acausal trade.

Except in special cases which do not apply here, yes it is meaningless. I don't think you understand acausal trade. (Not your fault. The posts containing the requisite information were suppressed.)

What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was.

The timing of this kind of decision is irrelevant.

Comment author: bogus 26 November 2014 01:17:33AM 1 point

I don't think you understand acausal trade.

For what it's worth, I don't think anybody understands acausal trade, and I don't claim to understand it either.

Comment author: wedrifid 26 November 2014 02:33:16AM -1 points

For what it's worth, I don't think anybody understands acausal trade.

It does get a tad tricky when combined with things like logical uncertainty and the possibility of multiple universes.