and that that will come with the need for content moderation
It will certainly come with calls for content moderation, but for all the reasons you allude to, the assertion that there will be a need for such moderation seems quite tendentious.
Agreed, good point! Let's say instead that there will be non-stupid arguments in favor of content moderation. For example (off the top of my head):
I tend to think it's better to limit the model's capability so it fits the use case (e.g. no ability to create porn in software used by children) than to have a Llama Guard-style moderation tool in the loop supervising both user and model behavior; but I'm still very vague on my own position in the debate. I also don't know the different approaches people are trying out. I'd really like to read up on it, though.
(What I do know: I don't want the Metas and Googles of the world to be in charge of defining the control mechanisms for something that will be involved in basically all of our creative processes.)
I think I missed a step in your logic. Agreed that generative AI will be near-ubiquitous. I don't see how that changes anything about moderation, censorship, and curation. We ALREADY have so much content that it's impossible to keep up with. More of it doesn't change very much, though it does overwhelm many current (broken) strategies, and it moves the adversarial arms-race (producers trying to evade or game curation algorithms) forward a few steps.
Maybe I should clarify what I mean by "content moderation". I use the term more loosely than just platforms filtering what can be shared. I think we will hear demands for moderation wherever generative AI is used. Example: Microsoft is going to integrate generative AI into Word. The second they do this, they run into all the content moderation questions OpenAI has with ChatGPT. What if I want to write a story about [morally unacceptable thing]? Will the AI assist me in fleshing out the details of [repugnant scene] or not? If not, will it help me write the rest of the story, or will it block me the moment its content filters detect a guideline violation? Who writes the guidelines? Can I disable the content filter if I disable the AI? If not, will it let me save my work? Will it report me if I violate specific guidelines? Will I lose access to Word? Are there circumstances under which law enforcement gets a hint that I might be a problem? Who gets to decide these questions? I think it would be problematic if the way we answer them is big tech companies simply implementing something and the rest of us living with whatever they come up with.
It all boils down to us integrating a tool into our creative processes that can evaluate what we are asking it to do and enforce compliance with whatever rules "we" give it. And it's not hypothetical. One could make an argument that ChatGPT is among the more important creative tools at the moment. I use it all the time, privately and for my work, yet I'm extremely careful about what I ask it to write. I self-censor, and when I work on a story, I write in the full knowledge that both my prompts and the model's output are always and automatically monitored. Assuming being augmented by AI becomes the normal creative workflow, this is a lot of power in the hands of those providing the tools. In a way, they control what can be said and shown.
At the moment, I see this as a dilemma! I don't think we want generative systems to just do whatever anyone tells them in any situation (think generated videos and young teenagers experimenting with porn or shocking violence), yet implementing controls has all the problems I described above.
All of this is how I'm thinking about it at the moment, and I haven't done that much thinking yet. I'm looking for the debate about this, but it's surprisingly hard to find much relevant discussion. Most of it is people defending content moderation in the context of communities trying to offer a safe space for their users, and those discussions aren't well calibrated to the problem I have in mind.
Yeah, this is the kind of argument I run into when trying to find something about this topic. It's mostly people arguing for the right of communities to use moderation and for why that is not censorship. That's not primarily what I'm thinking about, though. I'm thinking about situations where I'm working on a piece of text (or an image) and make use of generative AI in my workflow. Will MS Word prevent me from writing certain stories? I also tried to clarify this a bit more above in my answer to Dagon. See there for more detail (if you're interested).
Thanks, I'll check it out. At the moment, I'm fine with looking at all sorts of arguments, though I'm already pretty horrified by the ways this might be used to censor and control.
I'm currently thinking about generative AI and content moderation through the lens of freedom of thought/expression. The basic idea is that generative AI will likely be integrated into all our creative tools (e.g. word processors and image-editing software), and that that will come with the need for content moderation. We might end up in a situation that is as if a pen could prevent us from drawing/writing and/or sharing the "wrong" thing (and tell on us if we tried).
To me, this seems to be a source of a lot of power. Whoever produces and controls the moderation tools gets to decide questions like: What is the "wrong" thing? What can be written and drawn? When will moderation be active (e.g. always vs. only when we try to share our work)? What happens when a user violates the guidelines?
Now, I'm pretty much at the beginning of thinking about this topic, and I assume others have gone there before me. I'm looking for recommendations for articles, blog posts, books, and the like, so I can catch up to where the discussion is at. Thanks!