by Vale
I think people downplay the fact that when artificial intelligence companies release new models or features, they tend to do so with minimal guardrails.

I don't think it is hyperbole to suggest this is done for the PR boost gained by spurring online discussion, though it could also just be part of the churn and the rush to appear on top, in which sound guardrails are not considered a necessity. Either way, models tend to become less controversial and more presentable over time.

Recently OpenAI released its GPT-4o image generation with rather relaxed guardrails: it could generate political content and images of celebrities without their consent. This came hot on the heels of Google's latest Imagen model, so there was reason to rush to market and 'one-up' Google.

Obviously much of AI risk centres on swift progress and on companies prioritising that progress over safety, but weakening guardrails specifically for the sake of public perception and marketing strikes me as something we are moving closer towards.

This triggers two main thoughts for me:

- How far are companies willing to relax their guardrails to beat competitors to market?
- Where is 'the line' for guardrails that are relaxed enough to spur public discussion, but not so relaxed that they cause significant damage to the company's reputation or create wider societal risk?
