Warning to anyone using ChatGPT for medical advice as new study reveals disturbing results


Published 08:42 16 Apr 2026 GMT+1


Half of AI responses are considered to be ‘problematic’

Rikki Loftus


Featured Image Credit: Sandwish/Getty Images


A new study has revealed disturbing results from asking AI chatbots for medical help.

A team of researchers looked at five major AI chatbots: Gemini, Meta AI, Elon Musk’s Grok, DeepSeek and ChatGPT.

The study, published in the medical journal BMJ Open, detailed how around half of the responses given by the bots were considered ‘problematic’.

A warning has been issued to anyone who uses ChatGPT for medical advice (alexsl/Getty Images)


This included a whopping 20% of responses which the experts believed were ‘highly problematic’.

The study explained: “Response quality did not differ significantly among chatbots but Grok generated significantly more highly problematic responses than would be expected under a random distribution.

“Performance was strongest in vaccines and cancer, and weakest in stem cells, athletic performance and nutrition. Chatbot outputs were consistently expressed with confidence and certainty; from 250 total questions, there were only two refusals to answer, both from Meta AI.

“Reference quality was poor, with a median completeness score of 40%. Chatbot hallucinations and fabricated citations precluded any chatbot from producing a fully accurate reference list. All readability scores were graded as ‘Difficult’, equivalent to college sophomore–senior level.”

It went on to conclude: “The audited chatbots performed poorly when answering questions in misinformation-prone health and medical fields. Continued deployment without public education and oversight risks amplifying misinformation.”
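The ‘Difficult’ grade the study cites corresponds to a band on the Flesch Reading Ease scale, where scores of roughly 30–50 map to college-level text. The article doesn't name the exact formula the researchers used, so as an illustration only, here is a minimal sketch of how such a score is commonly computed, with a deliberately naive syllable counter:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per run of consecutive vowels, minimum 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Higher score = easier text; ~30-50 is the 'Difficult' (college) band.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Short, simple sentences score high; dense, polysyllabic prose scores low.
easy = flesch_reading_ease("The cat sat on the mat.")
hard = flesch_reading_ease(
    "Chatbot hallucinations and fabricated citations precluded any "
    "chatbot from producing a fully accurate reference list."
)
```

Production readability tools use more careful syllable counting, but the shape of the calculation is the same: penalise long sentences and long words.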

This is particularly alarming considering many people have admitted to using the likes of ChatGPT and other AI bots for advice on medical concerns instead of contacting a professional.

Some people use AI for medical advice (Sandwish/Getty Images)

Social media users have been sharing their reactions to the news, with one person writing on Reddit: “People don’t understand that at its current stage, AI isn’t thinking or interpreting anything it’s ingesting. It’s just crowdsourcing all information out there and regurgitating that to you. 40% of the information ChatGPT gets is from Reddit…”

Another said: “You mean, AI, which is trained by scraping the Internet for answers, including wrong answers gives inaccurate medical advice? I’m shocked I tell you. Shocked! Well, not really.”

A third user commented: “One of the reasons is training. They are not trained to dispense medical advice. Yes, medical texts are a part of training, but then the training is not using weights for this context to converge on proper retreatal to form the best output text based on what would be medical advice.”

And a fourth added: “If you ask AI for medical advice then you are the issue.”

