"Let's give the model/virus the tools it needs to cause massive harm and see how it does! We'll learn a lot from seeing what it does!"
Am I wrong in thinking this whole testing procedure is extremely risky? This seems like the AI equivalent of gain of function research on biological viruses.
Imagine you are the CEO of OpenAI, and your team has finished building a new, state-of-the-art AI model. You can:
1. Test the limits of its power in a controlled environment.
2. Deploy it without such testing.
Do you think (1) is riskier than (2)? I think the answer depends heavily on the details of the test.
This is more like gain-of-function researchers handing a virus they were already planning to put in the tacos at their taco stand to another set of researchers first, to see whether it was dangerous. Better that than just releasing the virus in the first place.
"Let's give the model/virus the tools it needs to cause massive harm and see how it does! We'll learn a lot from seeing what it does!"
Am I wrong in thinking this whole testing procedure is extremely risky? This seems like the AI equivalent of gain of function research on biological viruses.