OpenAI identifies 'erotic speech' as a 'key area of risk' in latest ChatGPT report


Published 09:51 13 Aug 2024 GMT+1

ChatGPT's latest report had some concerning findings

Rebekah Jordan

Featured Image Credit: NurPhoto / Contributor / Getty

Last week, OpenAI released the 'scorecard' for GPT-4o, which outlines the 'key areas of risk' for the company's latest large language model and how it plans to handle them.

And no, it's not to use it as a weapon of mass destruction.

Strangely, one of its highest risks has been listed as erotic speech.

Yep, you read that right.


OpenAI has flagged 'generating erotic and violent speech' as a key area of concern along with 'unauthorized voice generation' and 'generating disallowed audio content.'

The risk description on OpenAI's official website reads: 'GPT-4o may be prompted to output erotic or violent speech content, which may be more evocative or harmful than the same context in text. Because of this, we decided to restrict the generation of erotic and violent speech.'

NurPhoto / Contributor / Getty

It continued: 'Risk Mitigation: We run our existing moderation model over a text transcription of the audio input to detect if it contains a request for violent or erotic content, and will block a generation if so.'
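The mitigation OpenAI describes is a transcribe-then-moderate pipeline: the audio input is turned into a text transcription, that transcription is screened by a moderation model, and generation is blocked if the request is flagged. As a rough illustration only, here is a minimal Python sketch of that flow; the `transcribe` and `moderation_flags` helpers are hypothetical stand-ins, not OpenAI's actual models.

```python
# Hypothetical sketch of the transcribe-then-moderate flow described in the
# system card. Neither helper below is OpenAI's real model; both are
# illustrative stand-ins.

BLOCKED_CATEGORIES = {"erotic", "violent"}

def transcribe(audio_input: str) -> str:
    # Stand-in for a speech-to-text step; here the "audio" is already text.
    return audio_input

def moderation_flags(text: str) -> set[str]:
    # Stand-in for a moderation model: naive keyword matching.
    keywords = {"erotic": "erotic", "violent": "violent"}
    return {cat for cat, word in keywords.items() if word in text.lower()}

def respond(audio_input: str) -> str:
    transcript = transcribe(audio_input)
    flagged = moderation_flags(transcript) & BLOCKED_CATEGORIES
    if flagged:
        # Block generation entirely, as the risk-mitigation note describes.
        return f"[blocked: {', '.join(sorted(flagged))} content requested]"
    return "[generated audio response]"

print(respond("tell me a story"))        # allowed
print(respond("say something violent"))  # blocked
```

The key design point from the system card is that moderation runs on the transcription before any audio is generated, so a flagged request never reaches the generation step at all.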


The company’s 'system card' update also highlights that GPT-4o can create 'audio with a human-sounding synthetic voice,' which raises concerns about potential misuse, like fraud through impersonation or spreading false information, according to OpenAI.

Furthermore, the updated chatbot model can even make nonverbal sounds like music and sound effects, which hasn't sat well with everyone, and could potentially cause copyright issues for some content creators.

Luckily, OpenAI says the risk of unintentional voice replication is 'minimal.'

It has restricted voice generation to a set of preset voices created with voice actors, making it difficult for users to trick the system into producing unauthorised voices.

NurPhoto / Contributor / Getty

'Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT's advanced voice mode,' OpenAI wrote in its documentation.

'During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user's voice.'

Simon Willison, an AI researcher, said: 'My reading of the system card is that it’s not going to be possible to trick it into using an unapproved voice because they have a really robust brute force protection in place against that.


'Imagine how much fun we could have with the unfiltered model. I’m annoyed that it’s restricted from singing—I was looking forward to getting it to sing stupid songs to my dog.'

It'll be interesting to see where the AI company goes with future updates, as some experimental users enjoy pushing the chatbot beyond its boundaries, such as coaxing it into admitting to having personal beliefs.
