Scientists believe a method used to observe light in deep space could help to identify deepfake imagery generated by Artificial Intelligence (AI).
A deepfake is a type of digital content created using AI to mimic a person’s likeness or voice.
These images are often employed to spread disinformation and are frequently used as part of social engineering scams.
Deepfake content can therefore be dangerous, and cybercriminals often use it to wreak havoc, but that could all change with new research.
Fake images often lack consistency in the reflections between the two eyes, whereas real photographs generally show the same reflections in both eyes, reports Science Daily.
If the reflections don’t match, the image could potentially be flagged as a deepfake.
To test this theory further, scientists presenting at the Royal Astronomical Society's National Astronomy Meeting in Hull have been analyzing reflections of light on the eyeballs of people in real and AI-generated images, using methods often employed to observe deep space.
"It dawned on me that the reflections in the eyes were the obvious thing to look at," said Kevin Pimbblet, professor of astrophysics and director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull.
“The reflections in the eyeballs are consistent for the real person, but incorrect [from a physics point of view] for the fake person.”
Pimbblet, alongside Adejumoke Owolabi, a master's student at the university, developed software to detect light reflections in the eyes before running their morphological features through the CAS (concentration, asymmetry, smoothness) parameters.
“To measure the shapes of galaxies, we analyse whether they're centrally compact, whether they're symmetric, and how smooth they are. We analyse the light distribution," Professor Pimbblet explained.
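To give a sense of how one of these morphology measures works, the asymmetry component of the CAS system compares an image patch with a copy of itself rotated by 180 degrees. The sketch below is a minimal illustration of that idea; the function name and the normalisation are illustrative and are not the researchers' actual software:

```python
import numpy as np

def asymmetry(image):
    """Rotational asymmetry: compare a patch with its 180-degree rotation.

    Returns 0.0 for a perfectly symmetric patch; larger values mean
    more of the light distribution changes under rotation.
    """
    img = np.asarray(image, dtype=float)
    rotated = np.rot90(img, 2)  # rotate the patch by 180 degrees
    return np.sum(np.abs(img - rotated)) / np.sum(np.abs(img))

print(asymmetry(np.ones((4, 4))))     # symmetric patch -> 0.0
print(asymmetry([[1.0, 0.0],
                 [0.0, 0.0]]))        # all light in one corner -> 2.0
```

A real pipeline would measure this on a small cutout centred on each detected eyeball reflection, but the core comparison is the same.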
“We detect the reflections in an automated way and run their morphological features through the CAS and Gini indices to compare similarity between left and right eyeballs.”
The Gini coefficient is normally used in astronomy to measure how the light in a galaxy image is distributed among its pixels.
To make this measurement, researchers sort the galaxy's pixels in ascending order by flux, then compare the result to what would be expected from a perfectly even flux distribution.
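That pixel-ordering procedure can be sketched in a few lines of Python. This is a minimal illustration of the standard galaxy-morphology Gini formula applied to a list of flux values, not the team's actual code:

```python
import numpy as np

def gini(fluxes):
    """Gini coefficient of a set of pixel flux values.

    0 means the flux is spread perfectly evenly across pixels;
    1 means all of the flux sits in a single pixel.
    """
    x = np.sort(np.abs(np.asarray(fluxes, dtype=float)))  # ascending by flux
    n = x.size
    i = np.arange(1, n + 1)  # rank of each pixel after sorting
    # Standard morphology form: weight each pixel by its rank,
    # then normalise by the mean flux and number of pixel pairs.
    return np.sum((2 * i - n - 1) * x) / (np.mean(x) * n * (n - 1))

print(gini([1.0, 1.0, 1.0, 1.0]))    # perfectly even -> 0.0
print(gini([0.0, 0.0, 0.0, 10.0]))   # all flux in one pixel -> 1.0
```

Applied to a deepfake check, one would compute this index for the reflection region in each eye and compare the left and right values.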
As well as the Gini coefficient, the Hull scientists tested the CAS parameters, but found these were not a successful predictor of whether the eyes in an image were fake or real.
Still, while these techniques were developed for astronomy, the research suggests they can help detect deepfake imagery too.
However, Professor Pimbblet has stated that his method isn’t a one-size-fits-all solution.
"There are false positives and false negatives; it's not going to get everything,” he added. “But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”