Apple is looking at how to improve Siri so it can better understand people with speech impairments such as stuttering, according to new details published by The Wall Street Journal in an article about how companies are developing voice assistants to handle atypical speech. Interestingly, Apple has had a feature that addresses this for a long time, but it does not appear to be sophisticated enough.


Developing a voice assistant that can understand all kinds of speech and dialects has become an urgent need for any company in this space, and one story shows just how necessary it is. Dagmar Moon and her husband bought an Amazon smart speaker seven years after Mrs. Moon was diagnosed with ALS, a disease that damages nerve cells so that the muscles can no longer be moved, gradually weakening them to the point of paralysis. Over the course of the disease, the muscles responsible for moving the limbs, swallowing, speaking, and even breathing are affected, in no fixed order. The disease does not harm the five senses, mental abilities, or the involuntary muscles (heart, bladder, digestive system, and so on).

May God protect us and you from all harm and evil, and grant healing to every Muslim who is ill.

At first, the voice assistant Alexa was able to understand what Mrs. Moon was saying. But when her condition worsened and her speech became slower and more slurred, she found herself unable to communicate with the device by voice.

“I can't speak quickly enough to give voice commands,” Ms. Moon said. “If I want to say something like, 'Alexa, tell me the news,' it shuts off before I finish the question.”

Consequently, Ms. Moon became unable to interact with voice assistants like Alexa because the technology was not trained to understand people who stutter or who have dysarthria, a speech disorder caused by weakened speech muscles. People whose speech is affected by hearing loss or oral cancer may also find it difficult to be understood by voice assistants.

In the United States alone, about 7.5 million people have trouble using their voices, according to the National Institute on Deafness and Other Communication Disorders. This group risks being left behind by voice recognition technology, said Julie Cattiau, a product manager on Google's AI team. Google is one of the companies now trying to train voice assistants to understand everyone, and these people stand to benefit most from that work.

“For someone with cerebral palsy who uses a wheelchair, being able to control their environment by voice could be extremely beneficial,” Ms. Cattiau said. Google is collecting atypical speech data as part of an initiative to train its voice recognition tools.

Anne Toth, director of Alexa Trust at Amazon, said training voice assistants to respond to people with speech disabilities could improve the experience of voice recognition tools for a growing group of potential users such as the elderly, who are more susceptible to degenerative disease.

So in December, Amazon announced an integration between Alexa and Voiceitt, a startup backed by Amazon's Alexa Fund that lets people with speech disabilities train an algorithm to recognize their unique vocal patterns. The integration, expected to roll out in the coming months, will allow people with atypical speech to operate Alexa devices by speaking into the Voiceitt app.


An iPhone feature that has helped people who stutter since 2015

Apple said its Hold to Talk feature, introduced on iPhones in 2015, already allows users to control how long they want the voice assistant Siri to listen, preventing the assistant from cutting off users who stutter before they finish speaking.

The company is now looking at how to automatically detect whether someone is speaking with a stutter, and has already built a bank of 28,000 audio clips from podcasts featuring stuttering to help do so.

An Apple spokesperson said the data is intended to help improve voice recognition systems for people with atypical speech patterns, but declined to comment in detail on how Apple will use the results.


Google and the Euphonia Project

Google is testing Project Euphonia, a research initiative focused on helping people with speech impairments be better understood. It allows them to communicate with Google Assistant and Google Home smart products by training the software to recognize their unique speech patterns, and it is also building a voice bank of atypical speech contributed by volunteers, including the aforementioned Ms. Moon.

Google hopes this will help train its artificial intelligence to understand all patterns of speech, but it is not an easy task: typical speech patterns are relatively similar despite differences in accent, whereas atypical speech is far more varied, which makes it harder for artificial intelligence to understand.

Critics say companies have been too slow to address the issue, given that voice assistants have been around for roughly 10 years. The companies' response is that the technology has only recently become sophisticated enough to deal with the complexities of atypical speech.

Contributing to projects like Project Euphonia can itself be difficult for people with atypical speech. Ms. Moon says she sometimes finds speaking physically exhausting, but she is happy to contribute if it helps teach voice assistants to understand her.

Do you think voice assistants will be able to handle atypical speech well? What do you think of initiatives like these to help people with speech impairments? Let us know in the comments.

Source:

wsj
