GPT-generated spam seems like a worse problem for things like product reviews than for a site like LW, where comments are generally evaluated by the quality of their content. If GPT produces low-quality comments, they'll be downvoted; if it produces high-quality comments, then great.
It could produce a lot of comments that are borderline, some of them containing links for SEO purposes.
There are many people in the US who are poor but who are still subject to US labor law, which requires paying a minimum wage. For the point being made, it's quite useful to use a term that doesn't include them.
There are reasons why India is a good country for outsourcing these tasks.
It's quite similar to speaking about shipping manufacturing jobs to China. It's insane to have political correctness pushed onto LessWrong in a way where you can't speak about which countries are good places to locate certain jobs.
If we learned anything in Germany, it's that seeing everything in terms of race is a bad idea. The fact that you and Zachary can't see talk about countries without pattern-matching it to race seems illustrative of how screwed up the discourse is. Yielding to that on LessWrong, where clear thinking is a high value, seems very costly.
If someone set up a GPT-3 bot that responded to every new LW post, it'd be really interesting to see how good its responses actually were. What would its karma be after a month?
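For anyone curious what running that experiment would involve, here's a minimal sketch in Python. The GraphQL query shape is an assumption about LW's public endpoint, the prompt is illustrative, and actually posting the reply would need an authenticated account, which is omitted:

```python
# Minimal sketch of the hypothetical bot. It fetches recent posts from
# LW's public GraphQL endpoint and drafts a reply with GPT-3. Posting
# the comment back would need an authenticated session, omitted here.
import requests
import openai  # assumes OPENAI_API_KEY is set in the environment

LW_GRAPHQL = "https://www.lesswrong.com/graphql"

def fetch_recent_posts(limit=5):
    # Query shape is an assumption about LW's GraphQL schema.
    query = """
      query RecentPosts($limit: Int) {
        posts(input: {terms: {view: "new", limit: $limit}}) {
          results { title htmlBody }
        }
      }
    """
    resp = requests.post(LW_GRAPHQL,
                         json={"query": query, "variables": {"limit": limit}})
    return resp.json()["data"]["posts"]["results"]

def draft_comment(post):
    # Truncate the post body so the prompt stays within the context window.
    prompt = (f"Write a thoughtful comment on this post.\n\n"
              f"Title: {post['title']}\n\n{post['htmlBody'][:2000]}\n\nComment:")
    completion = openai.Completion.create(engine="davinci",
                                          prompt=prompt, max_tokens=200)
    return completion.choices[0].text.strip()

for post in fetch_recent_posts():
    print(post["title"], "->", draft_comment(post)[:80])
```

Running it in dry-run mode like this (printing drafts rather than posting) for a month would be the cheap version of the karma experiment.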
We already filter a lot of comments by well-meaning internet citizens who just kind of get confused about what LessWrong is about and are spouting only mostly coherent sentences. So I think we overall won't have much of a problem moderating this; our processes deal with it pretty well, at least for this generation of GPT-3 without finetuning (I can imagine finetuned versions of GPT-3 being good enough to cause problems even for us). Karma also helps a lot.
I can imagine being concerned about the next generation of GPT though.
OpenAI seems to do enough due diligence that GPT-3 itself is no concern. If, however, Yandex, Tencent, or Baidu were to create a similar project, things would look different, so the concern isn't so much GPT-3 itself.
The obvious answer to spammers being run by GPT is mods being run by GPT. Ask it whether each comment is high-quality or generated, then act on that as needed to keep the site functional.
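A sketch of what that check could look like, assuming the same GPT-3 completion API; the prompt, labels, and threshold are illustrative assumptions, not a tested moderation pipeline:

```python
# Sketch of a GPT-backed moderation pre-filter: ask the model to label
# each incoming comment, then route suspicious ones to human review.
import openai  # assumes OPENAI_API_KEY is set in the environment

def looks_suspect(comment_text):
    prompt = ("Label the following forum comment as SPAM, GENERATED, or OK.\n\n"
              f"Comment: {comment_text}\n\nLabel:")
    completion = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=5,
        temperature=0,  # deterministic label, no sampling
    )
    label = completion.choices[0].text.strip().upper()
    return not label.startswith("OK")

# Example: hold borderline comments for a moderator instead of publishing.
if looks_suspect("Great post! More insights at example.com/seo-page"):
    print("queue for moderator review")
```

Of course, if spammers and mods run the same model this turns into an arms race, so it probably works best as a pre-filter with karma and human moderators still in the loop.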
How about integrating with the Underlay (https://www.underlay.org/pub/future/release/5)? FYI, I personally connected some of the team members in the project with each other.
GPT-3 seems to be skilled enough to write forum comments that aren't easy to identify as spam. While OpenAI restricts access to its API, it likely won't take that long until other companies develop similar APIs that are more freely available. While this isn't the traditional AI safety question, it does seem to be becoming a significant safety question.