Amazon has developed a new feature for its digital assistant Alexa that allows it to imitate a dead person's voice.

Rohit Prasad, senior vice president and head scientist of Alexa AI, revealed the feature during Amazon's re:MARS conference in Las Vegas last Wednesday, People reported. He emphasized human empathy and affection as key to building trust in companionship and relationships. The head scientist said these attributes became especially important during the COVID-19 pandemic, when many people lost loved ones.

Prasad then played a clip of a young boy asking Alexa to read him a bedtime story in his grandmother's voice. Alexa recognized the command and switched to the grandmother's voice before beginning to recite the tale.

"While A.I. can't eliminate that pain of loss, it can make their memories last," Prasad told the audience.

According to Prasad, his team taught Alexa to mimic another person's voice using less than a minute of recorded audio. He noted that people are living in a golden era of AI, in which dreams and science fiction are becoming reality.
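Amazon has not published details of how the feature works, but open-source zero-shot voice-cloning systems give a rough sense of what synthesis from a short reference clip looks like. The sketch below uses the Coqui TTS library's XTTS model; the audio file names are hypothetical placeholders, and this illustrates the general technique, not Amazon's implementation.

```python
# Minimal voice-cloning sketch using the open-source Coqui TTS library
# (pip install TTS). Illustration only: Amazon has not disclosed how
# Alexa's feature is implemented.
from TTS.api import TTS

# Load a pretrained multilingual zero-shot voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "grandma_sample.wav" is a hypothetical reference clip, under a minute
# long, of the speaker whose voice should be cloned.
tts.tts_to_file(
    text="Once upon a time, in a land far away...",
    speaker_wav="grandma_sample.wav",  # short reference recording
    language="en",
    file_path="bedtime_story.wav",     # synthesized output in the cloned voice
)
```

Models of this kind condition the synthesizer on a speaker embedding extracted from the reference clip, which is why a short sample can suffice rather than hours of studio recordings.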

Various Reactions To Amazon's New Feature

According to Global News, AI voice recreations have grown steadily in popularity in recent years, with occasional uses in movies and television.

In the Anthony Bourdain documentary "Roadrunner," three lines were reportedly read by an artificial intelligence imitating the late chef's voice. According to the report, it was not made clear to viewers which lines were synthetic, and a dispute erupted with Bourdain's estate, which said it had never permitted his voice to be used that way.

Another recent instance was the AI-generated speech used in the movie "Top Gun: Maverick" to recreate the voice of Val Kilmer, who lost his voice after battling throat cancer.

After its announcement, Alexa's new feature drew mixed reactions. Many people called it creepy, and some raised privacy concerns that Amazon has not yet addressed.

Michael Inouye of ABI Research told CNN that there is a risk the AI will imitate a voice poorly, which could distort a family's memories. He said the feature might sound creepy to some, but others would be open to letting their children listen to, for example, their deceased grandparents.

Inouye said he believed that how people react to such inventions determines how quickly they adapt to the innovation and accept it as part of their reality. Until people reach a certain level of comfort with the technology, he explained, responses will remain mixed.


Future Implications

Voice imitation technology existed before Amazon's announcement, and it carries several negative implications, such as threatening voice professionals' livelihoods and creating vulnerabilities to security breaches and identity theft.

Threats To Voice Professionals' Livelihoods

Before Amazon announced its feature, the short-form video hosting service TikTok offered a similar one, which led to a lawsuit.

Voice actor Bev Standing sued Chinese tech giant TikTok for using her voice in its text-to-speech function, BBC reported. The function used AI technology to convert a user's written text into speech in Standing's voice, which users often applied for comedic effect.

In 2018, Standing recorded roughly 10,000 audio sentences for the Chinese Institute of Acoustics, a government-sponsored research organization, for use in translations, according to the report. In her court filing, she claimed that the use of her voice for vulgar and obscene language had irreparably damaged her reputation.

Standing's lawyer, Robert Sciglimpaglia, told BBC News that technology that can replicate anyone's voice through artificial intelligence could hugely impact the livelihoods of celebrities and voice actors.

According to The Verge, TikTok agreed to pay Standing a settlement. Sciglimpaglia said that as part of the agreement, TikTok was granted a license to use Standing's voice, though it may choose whether or not to use it.

Vulnerability To Theft

In a 2015 study, researchers at the University of Alabama at Birmingham discovered vulnerabilities in voice-based user authentication, both automatic and human. They investigated how an attacker in possession of audio samples of a victim's voice could jeopardize the victim's security, safety, and privacy.

Dr. Nitesh Saxena, director of the Security and Privacy In Emerging computing and networking Systems (SPIES) lab, said that while the voice is a convenient tool for security functions, that convenience also makes it a vulnerable commodity. He explained that a person with malicious intentions can obtain a target's voice recording in many ways, including being physically near the target, making a spam call, downloading audiovisual clips online, or compromising servers that store audio.

The researchers built a speech impersonation attack using a commercial voice-morphing program and tested it against both automatic and human verification mechanisms. In their controlled study, the voice-morphing tool replicated the voices of two well-known celebrities, Morgan Freeman and Oprah Winfrey. Saxena found that a few minutes of recorded audio were enough to mimic a victim's voice, which could enable serious abuse by perpetrators.
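The study does not name the commercial verification systems it tested, but a typical automatic speaker-verification check compares a voice embedding of incoming audio against an enrolled profile and accepts anything above a similarity threshold. The sketch below is a minimal illustration using the open-source resemblyzer library, with hypothetical file names and an arbitrary threshold, of the kind of check a convincing voice-morphing attack can slip past.

```python
# Embedding-based speaker-verification sketch using the open-source
# resemblyzer library (pip install resemblyzer). File names and the
# threshold are illustrative, not taken from the UAB study.
from pathlib import Path

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Embed the legitimate speaker's enrollment recording and the
# incoming sample to verify (which may be genuine or morphed).
enrolled = encoder.embed_utterance(preprocess_wav(Path("enrolled.wav")))
incoming = encoder.embed_utterance(preprocess_wav(Path("incoming.wav")))

# The embeddings are L2-normalized, so a dot product is cosine similarity.
similarity = float(np.dot(enrolled, incoming))

THRESHOLD = 0.75  # illustrative; real systems tune this per deployment
print("accepted" if similarity >= THRESHOLD else "rejected")
```

Any morphed audio whose embedding lands close enough to the victim's enrolled profile clears the threshold and is treated as the legitimate speaker, which is precisely the weakness such attacks exploit.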

Saxena advised individuals to be cautious when posting audio clips of their voices online and to be aware of the risk of such attacks. He also suggested that speaker verification systems need to be designed to resist voice imitation attacks.
