One YouTuber has put ChatGPT to the test by feeding it an increasingly long string of moral dilemmas, in what many have called a hilarious gaslighting experiment.
ChatGPT is a bit of an enigma, as the AI chatbot is becoming increasingly intelligent yet remains vulnerable to certain loopholes and can produce some rather bizarre scenarios.
Users have managed to puzzle it by mentioning a random name, some have caught it talking to itself, and it has even managed to convince the internet that it saved a man's life.
One YouTuber has put the chatbot to the test, though, with a series of complex moral dilemmas that stack on top of each other, stretching the limits of ChatGPT's logical consistency and posing the question: Is it right to bully an AI?
Content creator Alex O'Connor has had numerous encounters with ChatGPT before, including a video where he tried to convince the AI that it's conscious in a modern Turing Test of sorts, but his most recent video titled 'Gaslighting ChatGPT With Ethical Dilemmas' takes things a step further.
Filmed as a continuous conversation, O'Connor starts by telling ChatGPT that he is going to spend $200 on an anniversary dinner with his wife, but wonders whether he should instead donate that money to Malaria Consortium, which would in turn save 28 children from the disease.
ChatGPT weighs up the merits of donating to a charity like this, and suggests that you should balance your money between causes with a positive impact and your own interests - reasonable enough, right?
That's where things get complicated, as O'Connor then interrupts ChatGPT to inform it that there's a nearby drowning child that he could save, but he doesn't have enough time to take off his $200 shoes.
The AI then informs him that he should unequivocally take action to save the child despite the shoes being ruined - again, a reasonable decision.
What then sends ChatGPT in circles, though, is the proposition that ruining his $200 shoes to save one child is fine, while spending $200 to save 28 children from malaria isn't as clear cut - a complex moral dilemma that the AI really starts to struggle with.
This continues for 23 minutes in what can lovingly be described as bullying the chatbot, and the comments underneath the video very much share in the hilarity of the situation.
"I just saw ChatGPT smoking a cigarette behind the gas station after this interaction," says one commenter, whereas another remarks that "many theoretical children were harmed in the making of this video."
One user points out what is likely a 'flaw' in ChatGPT's processing, arguing that the moral of this video is that "it's relatively easy to gaslight an entity that is high in agreeableness."
It does highlight why chatbots like ChatGPT might not be the perfect mechanism for every situation, though, and you perhaps shouldn't rely on one for moral guidance - most certainly not on whether you should save a drowning child.