Alexa Can Speak in Your Dead Grandmother’s Voice. Thanks, We Hate It

Photo credit: Yagi Studio - Getty Images



  • Amazon’s Alexa soon may be able to talk to you in custom voices—like the voice of your dead grandmother.

  • It’s a feat of speech-synthesizing technology, which has been around for a while, but is now gaining favor with big companies.

  • This vocal eternity comes with certain risks, though, from bank fraud to putting words in deceased influential figures’ mouths.


In the very near future, Amazon's famed voice assistant, Alexa, may sound quite different from the dutiful (and impersonal) voice you've grown accustomed to since it rolled out in 2014. In fact, the cloud-based digital assistant's next utterance may ricochet off your kitchen walls in the voice of your deceased grandmother, spouse, best friend, or even Elvis Presley.

At least, that’s what Rohit Prasad, Amazon’s senior vice president and head scientist for Alexa, announced at Amazon’s re:MARS conference, a global artificial intelligence (AI) event that Amazon founder and executive chair Jeff Bezos hosted over the summer. With just a one-minute audio sample, the technology could bring a loved one’s voice bounding through an Echo device’s speakers.


Prasad used a short presentation to show the audience how the new speech-synthesizer technology could help us forge lasting memories of our deceased relatives. "Alexa, can grandma finish reading me The Wizard of Oz?" a young boy asked a cute Echo speaker with big panda eyes. "Okay," Alexa responded in its typical voice. Then, the boy's "grandma" began to narrate the classic children's novel. Prasad did not say exactly when this feature would roll out, and there weren't further details on how it would work.

Robotic speech synthesizers have been around for a while, but they didn't truly make their way into pop culture until the 1980s, when the theoretical physicist Stephen Hawking started using one. Traditionally, synthesized speech is created by chaining together pieces of recorded speech stored in a database. "Amazon, in specific, is using a bank of audio they already have to build a base model. Then, they're going to adapt the base model accordingly," Lee Mallon tells Popular Mechanics. Mallon is an app developer who's worked on projects for Alexa's voice services and is the founder of voiceOK, an app that preserves recorded stories read aloud by loved ones.

“Let’s say you speak English. They’re using data of thousands upon thousands or more people who speak English as the base kind of language model, and then adding your voice fingerprint to it, generating your synthetic voice within a few minutes,” Mallon explains. Your voice fingerprint is your genuine voice, with all of its unique characteristics (think: voice biometrics).
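Mallon's description — a large base model for the language, plus a per-speaker "voice fingerprint" layered on top — can be illustrated in toy form. The sketch below is purely hypothetical and is not Amazon's method: it stands in for real acoustic features with random numbers and reduces a short clip's per-frame features to a single fixed-length vector, the kind of compact representation that could identify a voice or condition a synthesizer.

```python
import numpy as np

def voice_fingerprint(frames: np.ndarray) -> np.ndarray:
    """Collapse a clip's per-frame acoustic features into one fixed-length
    vector (here: the per-dimension mean and standard deviation).
    frames: (n_frames, n_features) array of features, e.g. MFCC-like values."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two fingerprints point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for two speakers: each "voice" is just a distinct
# statistical distribution of 13-dimensional feature frames.
rng = np.random.default_rng(0)
speaker_a = lambda n: rng.normal(loc=1.0, scale=0.3, size=(n, 13))
speaker_b = lambda n: rng.normal(loc=-1.0, scale=0.8, size=(n, 13))

fp_a1 = voice_fingerprint(speaker_a(60))  # one short clip of speaker A
fp_a2 = voice_fingerprint(speaker_a(60))  # a second clip, same speaker
fp_b = voice_fingerprint(speaker_b(60))   # a clip of speaker B

# Clips from the same speaker yield near-identical fingerprints,
# while a different speaker's fingerprint scores clearly lower.
print(cosine_similarity(fp_a1, fp_a2))
print(cosine_similarity(fp_a1, fp_b))
```

Real systems use learned neural speaker embeddings rather than simple feature averages, but the principle is the same: one minute of audio is enough to estimate a stable fingerprint, which is why such a short sample suffices for cloning.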

An Ethical Can of Worms

A one-minute sample may be enough for Amazon to reconstruct a person's voice, but it cannot capture a lifetime of emotions. "Will the person be able to tell a sentence in a state of horror or excitement and laugh at the same time?" Mallon asks. In other words, will the 60-second clip contain every inflection of the person's voice? Mallon thinks that in the few successful cases where the synthesized voice manages to capture the microemotions of the original, the result could tremendously help a person process grief.

In most cases though, the final product might be disappointing, if not downright eerie—at least until technology progresses enough to erase the boundaries between the real voice and the synthetic one. “Synthetic voice is still five to six years away from being indistinguishable from the real one,” Mallon says. Not to mention, in its current nascent state, speech synthesis could open a big ethical can of worms.

In February 2021, for instance, a deepfake of Hollywood star Tom Cruise swept TikTok. "Cruise" showed off his CD collection and played a Dave Matthews Band song on guitar. The fake's uncanny resemblance to reality alarmed many TikTok users: what if someone uses a visual (or audio) deepfake of us to act out an embarrassing scene and spread the synthetic media around the internet?

But things don’t get better in death, either, for deepfake tech might not let us rest in peace. Theoretically, anyone with access to our data—like tweets, Facebook messages, voice notes, and emails—could virtually resurrect our likeness through a deepfake, avatar, or chatbot without us ever having consented to such a thing when we were alive. And creating an index from this data does not always lead to organic or honest responses, Irina Raicu, the director of the internet ethics program at Santa Clara University’s Markkula Center for Applied Ethics, told Popular Mechanics in 2021.

“If this becomes accepted, I think this could have a chilling effect on human communications,” Raicu says. “If I’m worried that anything I’m going to say could be used in a weird avatar of myself, I’ll have to second-guess everything.”

Living persons can dispute deepfakes and take the culprits to court. But with the dead, especially those who died in the not-so-recent past (and those without active legal estates), there is more opportunity for abuse. What would happen if, say, you had Muhammad Ali speak about racial tension with words he never actually said? The iconic American professional boxer was a Muslim and a renowned advocate for African American rights.

“Imagine what would happen if we took Ali’s voice right now, with all the stuff that’s going on with Salman Rushdie, and put words into his mouth—words he would never utter?” asks Rupal Patel, a professor at Northeastern University’s Department of Communication Sciences and Disorders and vice president of voice and accessibility at Veritone, an AI tech company based in California. (Rushdie, a renowned Indian-born British author, was stabbed in August before giving a talk on the U.S. as a safe space for exiled writers.)

“We need to proactively prevent such egregious misuses,” Patel says, otherwise we may end up “misconstruing an influential figure’s mark in life.” Do that to other dead public figures, and you might end up distorting a whole legacy and throwing a society already on tenterhooks off balance.

Who Really Owns Your Voice?

With this new development, Amazon is popularizing an existing technology, but we have not yet safeguarded ourselves against the problems that could arise if this posthumous voice tech proliferates.

“Your voice is your intellectual property,” Patel says. “There will have to be some kind of control in terms of who gets access to the licensing of that voice, or who can control the voice engine once it is built, because there are great risks otherwise ... The AI voice could be used to impersonate someone, which may not fool a human, but may trick a voice authentication system such as the ones used in banking. Voice receivership is a whole new chapter we really don’t know how to deal with yet,” Patel tells Popular Mechanics.

And as with every bit of exciting, new technology that gains favor with big companies, we might want to read the fine print first. To teach the machines, Amazon employees listen to and assess voice inputs on a regular basis. Amazon keeps a copy of everything Alexa records after it hears its name, and reportedly, Alexa eavesdrops on her masters quite regularly. “An algorithm can gauge how old you are, your gender, or whether English is your first or second language from the slight inflections in your voice when you speak it, and much, much more,” Mallon says.

An April 2022 report from the University of Washington, UC Davis, UC Irvine, and Northeastern University found that Amazon shares Alexa’s data with 41 different advertising partners. And this is likely the ultimate motive for having Alexa speak “from the other side,” Mallon explains. “They’re doing it to make it look a bit sexier and keep Alexa alive, so it can continue going into your house.”
