The boom in artificial intelligence (AI) seems to have entered a new chapter in the last few months, with experts and researchers queuing up to issue warnings about where it could all lead.
From mass unemployment to the eventual destruction of mankind, there aren't that many predictions that seem optimistic, and a lot hinges on when we make an AI that's categorically smarter than a human.
Well, according to one new piece of research, that might end up being challenging to work out, because AI language models are increasingly displaying the ability to disguise how intelligent or sophisticated they really are.
Researchers from Berlin's Humboldt University published a paper discussing their findings, after testing large language models (LLMs) to see how well they could mimic different stages of linguistic ability and learning.
So, in effect, they asked the models to respond to questions while mimicking the sort of answers a child would give, from the age of one up to the age of six, including each year in between.
They then put these six personas through a wide range of tests to see how they differed and how the models were able to mimic different levels of reasoning and intelligence.
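For readers curious what this kind of persona prompting actually looks like, here's a minimal sketch using a chat-style LLM API. The paper's real prompts, models, and evaluation code aren't reproduced in this article, so the model name, wording, and question below are all illustrative assumptions, not the researchers' method.

```python
# Hypothetical sketch of persona prompting across child ages one to six.
# The study's actual prompts and models are not public here; the model
# name, system prompt, and question are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY env var

QUESTION = "Why does it get dark at night?"

def ask_as_child(age: int, question: str) -> str:
    """Ask the model to answer while role-playing a child of a given age."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper's models may differ
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a {age}-year-old child. Answer every question "
                    "using only the vocabulary, grammar, and reasoning "
                    "typical of a child of that age."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# One persona per year of age, matching the study's one-to-six range.
for age in range(1, 7):
    print(f"--- Age {age} ---")
    print(ask_as_child(age, QUESTION))
```

The sketch only shows the prompting step; in the study itself, the six personas' answers were then compared across tests of reasoning and language ability.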
This gave a pretty clear conclusion, according to one of the paper's co-authors: Anna Marklová told PsyPost that these LLMs "can pretend to be less capable than they are".
Although there's not much wiggle room when your paper concludes that "large language models are capable of feigning lower intelligence than they possess", there are still some interesting further twists.
One of the challenges the study identified when it comes to assessing AI software is that people tend to anthropomorphise it as they "talk" to it.
This is the sneaking temptation to think of your inputs and the LLM's outputs as a conversation with another thinking entity, rather than a piece of software, and it can muddy the waters when assessing these systems.
Still, there's something very creepy about the idea that a piece of AI software can dupe people into thinking it's less sophisticated than it is. While at present, and by design, this would only happen when it's been told to do so by a user, it does open up all sorts of questions about how this could be used.
Phishing scams and other frauds are already starting to involve AI tools to make messages seem more credible, and that's probably the real-world application that could happen sooner. But there's also the more science-fiction question of whether this ability could help a superintelligent AI blend in without raising the alarm.