Based on the low-quality articles that seem to be appearing with increasing regularity, and as mentioned in a few recent posts, AI-generated posts are likely to be a permanent feature of LW (and of most online forums, I expect). I wonder if we should focus on harm reduction (or, in some cases, actual value creation) rather than trying to disallow something that people clearly want to do.
I wonder how feasible it would be to have a LessWrong-specific workflow for using any or all of the major LLM platforms to assist with (but not fully write) a LW question, a LW summary-of-research post, or a LW rationalist-exploration-of-a-question post (and/or other formats). This could simply be a help page with sample prompts for "how to generate and use a summary paragraph", "how to generate and modify an outline/thesis sketch", and "how to use the summary and outline to flesh out your ideas on a subtopic".
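To make that concrete, here's a rough sketch of what the scaffolding behind such a help page could look like (a hypothetical illustration assuming the OpenAI Python SDK; the model name and prompt wording are placeholder assumptions, not tested recommendations):

```python
# Rough sketch of a "summary -> outline -> expand" drafting workflow.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY
# set in the environment; model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whatever platform/model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

notes = "..."  # the author's own raw notes / rough thoughts go here

# Step 1: generate a summary paragraph, to check the core claim is clear.
summary = ask(f"Summarize the core claim of these notes in one paragraph:\n{notes}")

# Step 2: generate an outline/thesis sketch for the author to modify.
outline = ask(f"Given this summary, propose a 3-5 section outline:\n{summary}")

print(summary)
print(outline)
# Step 3 is deliberately left to the human: flesh out each section yourself,
# consulting the model per subtopic rather than asking it to write the post.
```

The point of structuring it this way is that each step hands the author an artifact to edit (a summary, an outline) rather than a finished post, which keeps the model in an assistant role.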
I've played with these techniques, but I tend to do it all in my captive meatware LLM rather than using an external one, so I don't have a starter example. Do any of you?
That's awesome. One worry I have about this (which applies to most harm-reduction programs) is that I'd rather have less current-quality LLM-generated content on LW overall, and making it a first-class feature would make it look like I want more of it.
Having a very transparent not-the-same-as-a-post mechanism solves this worry very well.