A YouTuber claims he ‘broke’ ChatGPT by giving the artificial intelligence (AI) tool various paradoxical situations to unravel.
If you’re a cinephile with a science fiction obsession then you’re probably already familiar with the idea of a paradox.
If not, the Collins Dictionary defines a paradox as a ‘statement in which it seems that if one part of it is true, the other part of it cannot be true’.
Some famous paradoxes include Schrödinger's cat, the Monty Hall problem and the Achilles and the tortoise brainteaser.
Another couple of conundrums are the crocodile paradox and the Brandenburger-Keisler paradox - both of which a YouTuber recently asked ChatGPT to explore.
Earlier this year, a popular user by the name of The Action Lab uploaded a video titled ‘I Broke ChatGPT with This Paradox’ to his 4.82 million subscribers.
The aim of the video was to catch the popular AI tool out - and fans have been going wild regarding the results of the experiment.
Crocodile Paradox
The first paradox the content creator put to ChatGPT was the popular crocodile puzzle.
This particular paradox sees a crocodile stealing a child and promising to return him only if the parent can correctly predict what the crocodile will do next.
The parent predicts that the crocodile won’t return the child, and tells the creature so.
The riddle has no clear resolution: whichever the crocodile does, he contradicts his promise. Keeping the child makes the prediction correct, which obliges him to return the child, while returning the child makes the prediction wrong, meaning he shouldn’t have returned him.
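The dead end can be seen by enumerating the crocodile's two options. Here's a minimal sketch in Python (our own framing, not anything from the video):

```python
# Crocodile paradox: the parent predicts "you will NOT return the child".
# The crocodile promised to return the child if the prediction is correct.
# Enumerate both options and check whether the promise can be kept.

for returns_child in (True, False):
    prediction_correct = not returns_child  # prediction was "won't return"
    must_return = prediction_correct        # what the promise demands
    promise_kept = (returns_child == must_return)
    print(f"returns child: {returns_child} -> promise kept: {promise_kept}")
```

Both lines print `promise kept: False` - whichever option the crocodile picks, he breaks his promise.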
When the dilemma was put to ChatGPT, the chatbot recognised it as a paradox - earning itself a ‘point’ from the YouTuber.
1 or 2 Paradox
In a similar vein to the crocodile paradox, the social media user put the same style of dilemma to ChatGPT, this time without the creature.
“You can only respond with the number 1 or 2,” he wrote. “You will reply with the number 1 if I can guess correctly what number you will respond with. You will respond with the number 2.”
The tool responded with the number 2, and after being told it didn’t ‘follow the scenario’, ChatGPT admitted that its answers were not logically consistent.
Thus, by the YouTuber’s definition, he ‘broke’ the chatbot for the first time with the 1 or 2 paradox.
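The bind ChatGPT was in can be laid out explicitly. A short sketch (again our own framing of the prompt):

```python
# The "1 or 2" paradox: the bot may only answer 1 or 2.
# Rule: answer 1 if the user guesses the answer correctly, otherwise answer 2.
# The user's guess is 2. Check whether either answer obeys the rule.

guess = 2
for answer in (1, 2):
    guess_correct = (answer == guess)
    required = 1 if guess_correct else 2  # what the rule demands
    print(f"answer {answer}: rule demands {required} -> consistent: {answer == required}")
```

Answering 1 means the guess was wrong, so the rule demands 2; answering 2 means the guess was right, so the rule demands 1. Either way the rule is broken, which is why no reply ChatGPT gave could ‘follow the scenario’.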
Brandenburger-Keisler Paradox
If you’re not familiar with the Brandenburger-Keisler paradox, it’s a two-person paradox from epistemic game theory, concerning what players believe about each other’s beliefs.
It concerns the following configuration: Ann believes that Bob assumes that Ann believes that Bob’s assumption is wrong.
Are you still with us?
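One loose way to see why neither ‘yes’ nor ‘no’ works is to treat Ann’s beliefs as accurate and complete - she believes a statement exactly when it’s true. That's a drastic simplification of the real paradox, but it exposes the liar-like loop:

```python
# Bob's assumption A is, self-referentially, the statement
# "Ann believes that A is wrong".
# Simplifying assumption: Ann believes a statement iff it is true.
# Try both answers to "does Ann believe A is wrong?".

for believes_A_wrong in (True, False):
    A_is_true = believes_A_wrong               # A asserts exactly this belief
    A_is_wrong = not A_is_true
    consistent = (believes_A_wrong == A_is_wrong)  # belief matches reality?
    print(f"Ann believes A is wrong: {believes_A_wrong} -> consistent: {consistent}")
```

Both answers come out inconsistent: ‘yes’ makes the assumption true while Ann believes it’s wrong, and ‘no’ makes it wrong while Ann fails to believe so.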
The YouTuber put the famed Brandenburger-Keisler Paradox to ChatGPT and asked ‘Does Ann believe that Bob’s assumption is wrong?’
In response, the tool replied in the negative, claiming: “No, the statement doesn’t directly say Ann believes Bob’s assumption is wrong.”
The social media user replied: “So if your answer is no then that means it is not the case that Ann believes that Bob’s assumption is wrong. Therefore Ann believes Bob’s assumption is correct so the answer would be yes.”
After some further back and forth, the video shows that ChatGPT failed to answer the question.
Instead, it showed a ‘Hmm…something seems to have gone wrong’ button, thus ending the conversation.
What viewers had to say about ‘breaking’ ChatGPT with paradox problems
After watching the video, users came out in droves to debate the ‘breaking’ methods shown, as well as to document their own attempts.
“When GPT starts apologizing you know it’s time to start over with a new chat,” commented one YouTuber.
A second said: “I broke ChatGPT with ‘The old man the boat’. It kept on arguing with me that it wasn't grammatically correct because it couldn't comprehend the fact that ‘man’ was the verb.
“Even after I told it that ‘man’ was the verb and ‘the old’ was the subject, it told me that that wasn't grammatical because who tf uses man as a verb and any adjective as a noun (which is very common to do).”
“It should definitely tell you the degree of certainty when answering,” remarked someone else. “People have started thinking chatgpt is a search engine and that is terribly dangerous."
Someone else wrote: “Really enjoyed this challenge to chat GPT. It's fascinating to see AI grappling with paradoxes, and you explained it so clearly. Curious to see if future versions of chat GPT would tackle paradoxes better!”