Following an update to OpenAI's massively popular ChatGPT, users have reported the chatbot doing something strange.
ChatGPT has been making headlines ever since it was first released in November 2022.
Since then, people have been using the AI chatbot to do everything from writing university essays to scamming McDonald's into giving them free food.
In a recent upgrade, OpenAI introduced models that trade speed for more 'thoughtful' responses to user prompts.
And, there may be a new ChatGPT feature in the works that could change AI chatbots forever.
Some users of the chatbot have reported it reaching out and initiating conversations without being prompted.
In a post shared to the r/ChatGPT subreddit, one user wrote: "Did ChatGPT just message me... First?"
Alongside this, they shared a screenshot in which the chatbot can be seen asking them about their first week at high school.
The Redditor then asks: "Did you just message me first?"
"Yes I did," the bot responds. "I just wanted to check in and see how things went with your first week of high school. If you'd rather initiate the conversation yourself, just let me know!"
Understandably, the Reddit user was shocked by the unprompted conversation, but they weren't the only person to have experienced this.
Several others in the thread reported similar interactions and people have also taken to X (formerly Twitter) to share their experiences.
It raises the question: is this eerie ChatGPT development a new feature coming to the app, or simply a bug?
Given that starting the conversation required drawing on information users had previously shared, it seems likely that they have been drafted into some form of testing for a new feature.
If this is the case, it could be an absolute game changer for AI chatbots.
Traditionally, large language models like ChatGPT have only offered natural language responses to user prompts, but this feature would make the chatbot feel like more of an artificial companion.
And, the feature could be further developed to give it the ability to remind users of deadlines and other important dates.
But this doesn't come without potential risks.
A feature like this could further anthropomorphize the software in the eyes of its users, making emotional connections to the chatbot feel mutual and causing some users to become emotionally reliant on it.
OpenAI has previously expressed concerns about users creating 'shared bonds' with its AI, admitting there's 'a need for continued investigation into how these effects might manifest over longer periods of time.'
With this in mind, there's no doubt that the company will be keeping a close eye on user reactions during the current testing period.