I reckon I'm not the only one who isn't so keen on finding out when and how I'm going to kick the bucket. The paranoia of a looming 'death date' would truly put a dent in my everyday life.
But for those of you who would like to know your time of death in advance, scientists from Denmark have developed a new AI system that can predict an individual's time of death with high accuracy.
The new AI system, called 'life2vec', was trained on the lives of six million people in Denmark.
In the study, scientists from the Technical University of Denmark (DTU) collected data from six million Danes between 2008 and 2020.
The data included educational background, health (including appointments and diagnoses), and occupation.
Once the model was trained on this data, it could identify patterns and predict outcomes, such as the likelihood of mortality and time of death.
The researchers used the algorithm to predict whether individuals aged 35 to 65 on the Danish national registers had died by 2020, and it did so with an impressive accuracy of 79%.
The study author Dr. Sune Lehman from DTU said: 'We used the model to address the fundamental question: to what extent can we predict events in your future based on conditions and events in your past?'.
Lehman explained that the 'life2vec' system uses similar technology to that behind ChatGPT: it takes a series of words and uses statistical patterns to determine the probability of what comes next.
'This is usually the type of task for which transformer models in AI are used, but in our experiments, we use them to analyse what we call life sequences, i.e., events that have happened in human life,' Dr. Lehman said.
He continued: 'What's exciting is to consider human life as a long sequence of events, similar to how a sentence in a language consists of a series of words.'
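To make the 'sentence of events' analogy concrete, here is a minimal, invented sketch in Python. It is not the researchers' model: life2vec is a transformer, whereas this toy only counts which event tends to follow which, the simplest possible version of 'statistically determine the probability of what's next'. All event names below are made up for illustration.

```python
from collections import Counter, defaultdict

def next_event_probs(lives):
    """For each event, estimate the probability of each possible next event
    by counting how often it follows in the observed 'life sequences'."""
    follows = defaultdict(Counter)
    for life in lives:
        for current, nxt in zip(life, life[1:]):
            follows[current][nxt] += 1
    return {
        event: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
        for event, counts in follows.items()
    }

# Three hypothetical life sequences (invented events, not real registry data)
lives = [
    ["graduation", "first_job", "diagnosis_A", "hospital_visit"],
    ["graduation", "first_job", "promotion"],
    ["graduation", "gap_year", "first_job"],
]

probs = next_event_probs(lives)
print(probs["graduation"])  # → {'first_job': 0.666..., 'gap_year': 0.333...}
```

Given enough sequences, such counts reveal which events make others more likely; a transformer does the same job but can condition on the entire preceding sequence rather than just the most recent event.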
The study, published in the journal Nature Computational Science, found that the model's predictions were 11% more accurate than those of any other existing AI model or the methods used by life insurance companies.
However, Dr. Lehman cautioned against one such use on ethical grounds: 'Clearly, our model should not be used by an insurance company, because the whole idea of insurance is that, by sharing the lack of knowledge of who is going to be the unlucky person struck by some incident, or death, or losing your backpack, we can kind of share this burden.'