AI has become adept at recreating, altering, and restoring human speech.
But as the replicas become indistinguishable from the real, fears about the tech are growing.
Alex Serdiuk, CEO of voice cloning startup Respeecher, has a unique understanding of both the opportunities and threats.

Yet Serdiuk has also seen synthetic media at its worst.
Serdiuk and his company are based in Ukraine, which has been the target of deepfake disinformation.
One clip showed a digitally rendered Volodymyr Zelensky telling Ukrainian soldiers to surrender to Russia.
The impact, however, was minimal.
The Zelensky deepfake was so bad it wouldn’t convince any Ukrainians to “lay down their arms.” The nation, after all, is smart.
Fake sounds, however, can be more convincing than fake sights.
Editing reality
Serdiuk believes synthetic voices can avoid the uncanny valley more smoothly than artificial visuals.
He adds that this realism can benefit society.
Respeecher, for instance, has developed voice replacement tech for people who have undergone a laryngectomy.
In trials, the system created a natural-sounding voice while preserving the user’s articulation.
Sonantic, an AI startup, produced another powerful example.
In 2021, the company recreated Val Kilmer’s voice after throat cancer treatment left the actor unable to speak clearly.
Sonantic CEO Zeena Qureshi said the project showed the altruistic potential of the approach.
“The project with Val demonstrated again how empowering it can be when people overcome challenges with speaking,” she said.
However, other uses of speech synthesis have caused concern.
Voicing complaints
In 2021, a documentary about Anthony Bourdain sparked a heated debate about deepfakes.
In an interview, director Morgan Neville revealed that AI had recreated the late chef’s voice in the film.
The synthetic dialogue comprised words Bourdain had written but never said.
Critics felt the move was duplicitous and lacked consent from Bourdain, who was famously obsessed with authenticity.
Neville later said he’d received approval from Bourdain’s next of kin.
But Ottavia Bourdain, the chef’s widow, disputed this claim.
Respeecher aims to prevent such misuse. The company’s ethics statement prohibits deceptive uses of synthetic speech.
The company has further pledged to never use the voice of a private person or actor without permission.
Respeecher is also developing two technical defenses: a synthetic speech detector and audio watermarking.
Ultimately, voice cloning will remain a tool that can be used for both good and ill. Serdiuk hopes the safeguards will stop the harms from overshadowing the benefits.
Story by Thomas Macaulay
Thomas is the managing editor of TNW.
He leads our coverage of European tech and oversees our talented team of writers.
Away from work, he enjoys playing chess (badly) and the guitar (even worse).