ChatGPT has certainly taken the world by storm, but many wonder whether it's really capable of offering medical advice - let alone saving your life.
OpenAI's generative AI chatbot has been used for a wide range of activities, including writing university essays, recommending top tourist spots and restaurants, and even predicting the future.
ChatGPT has had its fair share of controversies and odd moments though, as users have reported the software endlessly talking to itself, starting conversations without being prompted, and potentially even letting hackers steal your personal information.
What if it could save a life though?
On first read, we too thought the Reddit post from u/sinebiryan was genuine, as it detailed how the conversational AI software recognized that they were in the early stages of a heart attack.
The user says they described their symptoms to ChatGPT after a rough night working late, "expecting some bland response about needing to get more sleep or cut back on coffee."
Instead, they claim, ChatGPT "actually took it pretty seriously," asking follow-up questions about their symptoms, warning that they could point to a heart attack, and urging them to seek medical attention immediately.
This led u/sinebiryan to drive to the ER, where a doctor confirmed that they were in the early stages of a heart attack - meaning ChatGPT had effectively saved their life.
As expected, the post - in the r/ChatGPT subreddit, no less - received an overwhelmingly positive response, garnering over 50,000 upvotes and 2,000 comments.
Other users in the comments shared their own stories of ChatGPT helping them out, with one commenter declaring that "ChatGPT is my free therapist," while another said the software "helped save my marriage."
All good things must come to an end though, as shortly after the post went viral, the same user revealed that the whole thing was made up - and written by ChatGPT itself.
"Yeah it's cool I guess," affirmed u/sinebiryan in the own-up post, and you can't say they didn't have thousands fooled.
Not everyone was fooled though, as some sharp-eyed users cast doubt on the original post, relishing their accurate predictions once all was revealed.
The current second highest-voted comment on the original post argues that the post "was 100% written by AI," going on to predict that the story itself is fake and that "there are clear telltale signs."
They're not alone in this assessment either, as another user questioned the post, asking: "why did you use an em-dash with no space in this comment, but single dash with spaces in the main post?"
Another user replied to this interrogation, pointing out that "this is one of the classic hallmarks of ChatGPT-generated text," before congratulating the commenter above for correctly calling the situation.
Perhaps what we've learned from this hoax-of-sorts is that we shouldn't be too quick to trust impressive stories surrounding ChatGPT and other AI technologies.
It's scarily impressive how convincing the software's writing has become, and while some can spot the telltale signs, it's clear that most of us are easily fooled.
On top of this, maybe don't go asking your AI for medical advice. If you feel you need to go to the doctor, you probably don't need to ask ChatGPT for permission first!