Joerg Weiss

Yeah, this is the kind of argument I run into when trying to find something about this topic. It's mostly people arguing for the right of communities to use moderation and how that is not censorship. That's not primarily what I'm thinking about, though. I'm thinking about situations where I'm working on a piece of text (or an image) and make use of generative AI in my workflow. Will MS Word prevent me from writing certain stories? I also tried to clarify this a bit more above in my answer to Dagon. See there for more detail (if you're interested).

Maybe I should clarify what I mean by "content moderation". I use the term more loosely than just platforms filtering what can be shared. I think we will hear demands for moderation wherever generative AI is used. Example: Microsoft is going to integrate generative AI into Word. The second they do this, they run into all the content moderation questions OpenAI has with ChatGPT. What if I want to write a story about [morally unacceptable thing]? Will AI assist me in fleshing out the details of [repugnant scene] or not? If not, will it help me write the rest of the story, or will it block me the moment its content filters detect a guideline violation? Who writes the guidelines? Can I disable the content filter if I disable the AI? If not, will it let me save my work? Will it report me if I violate specific guidelines? Will I lose access to Word? Under what circumstances does law enforcement get a hint that I might be a problem? Who gets to decide these questions? I think it would be problematic if we answer them by big tech companies just implementing something and us then living with whatever they come up with.

It all boils down to us integrating a tool into our creative processes that can evaluate what we are asking it to do and enforce compliance with whatever rules "we" give it. And it's not hypothetical. One could make an argument that ChatGPT is among the more important creative tools at the moment. I use it all the time, privately and for my work. Yet I'm extremely careful about what I ask it to write. I self-censor, and when I work on a story, I write in the full knowledge that both my prompts and the model output are always and automatically monitored. Assuming being augmented by AI becomes the normal creative workflow, this is a lot of power in the hands of those providing the tools. In a way, they control what can be said and shown.

At the moment, I see this as a dilemma! I don't think we want generative systems to just do whatever anyone tells them in any situation (think generated videos and young teenagers experimenting with porn or shocking violence), yet implementing controls has all the problems I described above.

All of this is how I'm thinking about it at the moment; and I haven't done that much thinking yet. I'm looking for the debate about this, but it's surprisingly hard to find much relevant discussion. Most of it is people defending content moderation in the context of communities trying to offer a safe space for their users, and those discussions aren't that well calibrated to the problem I have in mind.

Thanks, I'll check it out. At the moment, I'm fine with looking at all sorts of arguments; though I'm already pretty horrified by the ways this might be used to censor and control.

Agreed, good point! Let's say there will be non-stupid arguments in favor of content moderation. For example (off the top of my head):

  • Children need to learn to use creative tools; if a video editor were able to generate hardcore porn or excessively violent content, it would be tricky to leave them alone with it.
  • AI doesn't just generate content, it brings in knowledge. Some knowledge is restricted from circulation for good reasons (this is basically the bio-terrorism argument).

I tend to think it's better to limit the model's capabilities to fit the use case (e. g. no ability to create porn in software used by children) than to have a Llama Guard-style moderation tool in the loop supervising both user and model behavior; but I'm still very vague on my own position in the debate. I also don't know the different approaches people are trying out. I'd really like to read up on them, though.

(What I do know: I don't want the Metas and Googles of the world to be in charge of defining the control mechanisms for something that will be involved in basically all of our creative processes.)

Well, thanks for reading five of them :) I'll try to answer your concerns:

Film:

Film is a good argument, but it mostly shows that we can handle "fakes" when they are framed right, e. g. when they are presented with context clues marking them as a movie. Generated images will not only often lack that framing, but will be presented to us as if they represented something real. I would argue that this will devalue the context clues and make it difficult or impossible, in general, to tell which images are real and which are generated.

X-risk compared to human level AI:

a) Political/societal destabilization while nukes are a thing = bad. Or more generally: this interferes with our ability to deal with existing X-risks (including our ability to deal with the emergence of AGI).

b) We'd need to define X-risk a bit here. If we accept really bad societal outcomes (e. g. collapse of democracy followed by something decidedly bad), then my job of convincing you should be relatively easy. The confusion this will cause should systematically benefit the fringes and actors following a "tear it down" strategy. And I don't think we are doing great in the stability department right now anyway.

We did epistemologically impressive things before photography:

True, but not having a tool is different from losing a tool you've relied on for a long time. It's also different from that tool suddenly doing something entirely different while still appearing to do the same thing on the surface.