Last week, OpenAI released the 'system card' for GPT-4o, which outlines the 'key areas of risk' for the company's latest large language model and how it plans to handle them.
And no, the top concern isn't that it could be used as a weapon of mass destruction.
Strangely, one of its highest-listed risks is erotic speech.
Yep, you read that right.
OpenAI has flagged 'generating erotic and violent speech' as a key area of concern along with 'unauthorized voice generation' and 'generating disallowed audio content.'
The risk description on OpenAI's official website reads: 'GPT-4o may be prompted to output erotic or violent speech content, which may be more evocative or harmful than the same context in text. Because of this, we decided to restrict the generation of erotic and violent speech.'
It continued: 'Risk Mitigation: We run our existing moderation model over a text transcription of the audio input to detect if it contains a request for violent or erotic content, and will block a generation if so.'
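OpenAI hasn't published the internals of that filter, but the pipeline it describes (transcribe the audio, run the transcript through a moderation model, and block the response if it gets flagged) is simple enough to sketch. The minimal Python example below uses OpenAI's public Whisper transcription and moderation endpoints purely for illustration; the `moderated_reply` function, the refusal message, and the specific category checks are assumptions, not OpenAI's actual implementation.

```python
# Hypothetical sketch of the mitigation described above: transcribe the
# audio input, run the transcript through a text moderation model, and
# refuse to generate a reply if erotic or violent content is detected.
# Uses OpenAI's public endpoints for illustration only; OpenAI's internal
# pipeline is not published, so the details here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def moderated_reply(audio_path: str) -> str:
    # Step 1: transcribe the spoken request to text.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        ).text

    # Step 2: run the moderation model over the transcript.
    result = client.moderations.create(input=transcript).results[0]

    # Step 3: block generation if the request is erotic or violent.
    if result.categories.sexual or result.categories.violence:
        return "Sorry, I can't respond to that request."

    # Otherwise, generate a normal reply to the transcribed request.
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": transcript}],
    )
    return reply.choices[0].message.content
```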
The company’s 'system card' update also highlights that GPT-4o can create 'audio with a human-sounding synthetic voice,' which raises concerns about potential misuse, like fraud through impersonation or spreading false information, according to OpenAI.
Furthermore, the updated model can even produce nonverbal audio such as music and sound effects, which hasn't sat well with everyone and could cause copyright issues for some content creators.
Luckily, OpenAI says the remaining risk of unauthorised voice replication is 'minimal.'
They've restricted voice generation to a set of preset voices created with voice actors, making it difficult for users to trick the system into producing unauthorised voices.
'Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT's advanced voice mode,' OpenAI wrote in its documentation.
'During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user's voice.'
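The system card doesn't spell out how that restriction is enforced, but a common approach to speaker verification is to compare an embedding of the generated audio against reference embeddings of the approved preset voices and reject any output that matches none of them. The sketch below is purely an assumption of how such a guard might look; the `APPROVED_VOICES` table, the random placeholder embeddings, and the 0.75 similarity threshold are all hypothetical.

```python
# Hypothetical sketch of an output-voice guard: compare a speaker
# embedding of the generated audio against the approved preset voices
# and reject anything that matches none of them. The embeddings here
# are random placeholders; a real system would compute them with a
# speaker-verification model.
import numpy as np

rng = np.random.default_rng(0)
APPROVED_VOICES = {
    name: rng.normal(size=192) for name in ("breeze", "juniper", "ember")
}


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_authorised_voice(output_embedding: np.ndarray,
                        threshold: float = 0.75) -> bool:
    # Generated audio must closely match at least one approved preset
    # voice; otherwise it is treated as unauthorised and blocked.
    return any(
        cosine_similarity(output_embedding, ref) >= threshold
        for ref in APPROVED_VOICES.values()
    )
```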
Simon Willison, an AI researcher, said: 'My reading of the system card is that it’s not going to be possible to trick it into using an unapproved voice because they have a really robust brute force protection in place against that.
'Imagine how much fun we could have with the unfiltered model. I’m annoyed that it’s restricted from singing—I was looking forward to getting it to sing stupid songs to my dog.'
It'll be interesting to see where the AI company goes with future updates, as some experimentally minded users enjoy pushing the chatbot beyond its boundaries, such as getting it to admit to having personal beliefs.