AI ethicists have called for urgent safeguards against an emerging digital afterlife industry.
The concerns centre on chatbots that mimic the appearances, speech, and personalities of dead people.
Known as deadbots or griefbots, these AI clones are trained on data about the deceased.

They then provide simulated interactions with virtual recreations of the departed.
This postmortem presence can cause social and psychological harm, according to researchers from Cambridge University.
Their new study highlights several risks.
One involves the use of deadbots for advertising.
By mimicking lost loved ones, deadbots could manipulate their vulnerable survivors into buying products.
The researchers fear that these interactions will create an overwhelming emotional weight.
This could extend the grieving process into endless virtual conversations with the dead.
Deadbots coming to life
The study also envisions deadbots spamming users with unwanted notifications.
The researchers compare this to being digitally stalked by the dead.
It's a prospect that's quickly becoming a reality.
Services such as Project December and HereAfter already offer customers a chance to digitally resurrect the dead.
Another suggested safeguard is user-friendly termination methods.
These could even involve a digital funeral for the deadbot.
All these measures need to consider both the dead and those they leave behind.
The potential psychological effect, particularly at an already difficult time, could be devastating.
Story by Thomas Macaulay
Thomas is the managing editor of TNW.
He leads our coverage of European tech and oversees our talented team of writers.
Away from work, he enjoys playing chess (badly) and the guitar (even worse).