Send in the clones: Using artificial intelligence to digitally replicate human voices

The science behind making machines talk just like humans is very complex, because our speech patterns are so nuanced.

“The voice is not easy to grasp,” says Klaus Scherer, emeritus professor of the psychology of emotion at the University of Geneva. “To analyze the voice really requires quite a lot of knowledge about acoustics, vocal mechanisms and physiological aspects. So it is necessarily interdisciplinary, and quite demanding in terms of what you need to master in order to do anything of consequence.”

So it’s not surprising that it has taken well over 200 years for synthetic voices to get from the first speaking machine, invented by Wolfgang von Kempelen around 1800 – a boxlike contraption that used bellows, pipes and a rubber mouth and nose to simulate a few recognizably human utterances, like “mama” and “papa” – to a Samuel L. Jackson voice clone delivering the weather report on Alexa today.