If I wanted to discuss something outside of an AI's earshot, I'd use something like Signal, or some other channel that would keep out a human too.
AIs sometimes have internet access, and robots.txt won't keep them out.
I don't think having this info in their training set makes a big difference (but maybe I'm not seeing the problem you're pointing at, so this isn't a confident take).
I think there are two levels of potential protection here. One is a security-like "LLMs must not see this" condition, for which yes, you need to do something that would keep out a human too (though in practice maybe "post only visible to logged-in users" is good enough).
However, I also think there's a lower level of protection that's more like "if you give me the choice, on balance I'd prefer for LLMs not to be trained on this", where some failures are OK and imperfect filtering is better than no filtering. The advantage of targeting this level is simply that it's much easier and less obtrusive, so you can do it at greater scale and lower cost. I think this is still worth something.
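To make that lower level concrete, here's a rough sketch (in Python, with made-up marker strings, so purely illustrative) of the kind of cheap opt-out filter a dataset builder might run; it only has to catch most cases to be worth something:

```python
# Illustrative sketch only: a crude opt-out filter applied before documents
# enter a training corpus. The marker strings here are hypothetical; a real
# pipeline would still miss plenty of content, which is acceptable at this level.

OPT_OUT_MARKERS = [
    "please-do-not-train-on-this",  # hypothetical tag an author might embed in a post
    "llm-training-opt-out",         # another made-up example marker
]

def opts_out(text: str) -> bool:
    """Return True if the document contains any recognised opt-out marker."""
    lowered = text.lower()
    return any(marker in lowered for marker in OPT_OUT_MARKERS)

def filter_corpus(documents):
    """Yield only documents that don't ask to be excluded from training."""
    for doc in documents:
        if not opts_out(doc):
            yield doc

# Toy usage:
corpus = ["an ordinary blog post", "a post tagged please-do-not-train-on-this"]
print(list(filter_corpus(corpus)))  # -> ['an ordinary blog post']
```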
This is a companion post to Keeping content out of LLM training datasets, which discusses the various techniques we could use and their tradeoffs. My intention is primarily to start a discussion; I am not myself very opinionated on this.
As AIs become more capable, we may at least want the option of discussing them out of their earshot.
Places to consider (at time of writing, none of the robots.txt files below rule out LLM scrapers, but I include the links so you can check if this changes; a small script for automating that check is sketched after these lists):
Options to consider:
Feel free to suggest additions to either category.
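For the "check if this changes" part, here's a rough sketch (not something from the original post) of how you could poll a robots.txt file with Python's standard library. The site URL is a placeholder and the user-agent strings are just a few commonly cited AI crawler tokens, not a complete or current list:

```python
# Sketch: check whether a site's robots.txt disallows some AI crawler user agents.
# The site and agent lists below are illustrative placeholders only.
from urllib import robotparser

SITES = [
    "https://www.example.com/robots.txt",  # placeholder -- substitute the sites above
]
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]

for robots_url in SITES:
    parser = robotparser.RobotFileParser(robots_url)
    parser.read()  # fetches and parses the robots.txt file
    for agent in AI_CRAWLERS:
        status = "allowed" if parser.can_fetch(agent, "/") else "disallowed"
        print(f"{robots_url}: {agent} is {status} at /")
```

Of course, as noted above, a robots.txt entry is only a request; it doesn't stop a crawler that ignores it.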
To the extent that doing something here means spending software dev time, this raises the question not only of whether we should do this, but of how important it is relative to the other things we could spend software developers on.
Link preview image by Jonny Gios on Unsplash