Voice assistants are a perfect example of the fact that new technologies are not neutral. Such biases appear in many fields: the controversy over image recognition algorithms that mislabeled Black people as gorillas is well known, and going further back in time, car seat belts were designed with the male anatomy in mind. Voice assistants rely on machine learning to understand users, and their training databases are usually built from standard diction. As a result, a large percentage of the population, including people with conditions such as cerebral palsy or a stutter, is left out, when on many occasions they are the ones who need these tools the most. Fortunately, just as systems already exist to recognize sign language, large technology companies are working to improve voice recognition. One of the latest is Apple, which has published a paper on its work with a database of 32,000 clips sourced from podcasts.
The goal of the company founded by Steve Jobs is to enable its voice assistant, Siri, to interpret pauses, prolongations, repetitions, and incomplete words. Based on the Stuttering Events in Podcasts database and FluencyBank, preliminary results point to accuracy improvements of 28% and 24% on the two datasets, respectively. One of Siri's main problems so far was that it interpreted the pauses of a stutter as the end of the sentence, cutting the speaker off and returning poor-quality results. The researchers, who published the paper on arXiv, an open archive for scientific research, say the technology could also help people affected by dysarthria, i.e., difficulty articulating phonemes due to lesions of the nervous system.
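The endpointing problem is easy to picture in code. Below is a minimal sketch, not Apple's implementation: a naive energy-based endpoint detector in which every name and threshold is hypothetical. The point is simply that a short silence timeout misreads the blocks and pauses of stuttered speech as the end of a request, while a longer timeout lets the speaker finish.

```python
# Minimal sketch of speech endpointing; all names and thresholds are
# hypothetical, not Apple's implementation. A frame counts as "speech"
# if its energy exceeds a threshold; the utterance is judged to end
# after `max_silence_s` of consecutive non-speech frames.

FRAME_S = 0.02  # 20 ms per audio frame

def utterance_end_index(frame_energies, energy_threshold=0.01, max_silence_s=0.7):
    """Return the frame index where the utterance ends, or None if the
    speaker is still talking at the end of the recording."""
    silence_frames = 0
    max_silence_frames = int(max_silence_s / FRAME_S)
    for i, energy in enumerate(frame_energies):
        if energy > energy_threshold:
            silence_frames = 0           # speech resets the silence counter
        else:
            silence_frames += 1
            if silence_frames >= max_silence_frames:
                return i                 # long pause -> end of utterance
    return None

# With a short max_silence_s, the pauses typical of stuttered speech are
# misread as the end of the request; raising it (or pausing the counter
# while a disfluency detector fires) keeps the assistant listening.
```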
Apple is not the only company directing its efforts toward more inclusive speech recognition systems. First, Google is collecting more diverse speech samples to address the needs of this part of the population. In addition, as part of Project Euphonia, it is already testing a prototype app through which people with atypical speech can train their devices to account for their particular way of speaking.
Second, Amazon announced in December 2020 the integration of technology from an Israeli startup into its Alexa assistant. Much like Google's project, it will allow each user to train the algorithm on their own speech patterns. The option is expected to become available over the course of 2021.
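Both announcements follow the same pattern: start from a general model and adapt it with a small set of the user's own recordings. The sketch below illustrates that idea only; the model architecture, feature format, and training data are placeholders invented for illustration, not Google's or Amazon's actual systems.

```python
# Minimal sketch of per-user model adaptation, in the spirit of
# "train the assistant on your own voice". Everything here is a
# placeholder, not Google's or Amazon's code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Pretend "features" are precomputed acoustic features (e.g. 40-dim
# log-mel statistics per clip) and "labels" are supported command IDs.
features = torch.randn(64, 40)          # 64 recordings from one user
labels = torch.randint(0, 10, (64,))    # 10 voice commands

model = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 10))
# Freeze the lower layer (standing in for the general model) and adapt
# only the head, so a handful of user samples is enough to personalize.
for p in model[0].parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(features, labels), batch_size=16, shuffle=True)

for epoch in range(5):                  # a few passes over the user's clips
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```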
Until now, voice assistants have relied on common voice patterns and tonalities that hold across specific accents. Extending speech recognition to people with stuttering or dysarthria is thought to be a much harder challenge, first because the available databases are smaller, and second because the variability between speakers is far greater. Fortunately, advances in artificial intelligence and machine learning are opening the door to a new era of accessibility for all in the field of voice assistants. If you are interested in learning more about these kinds of applications, we recommend this article on using wearables and smartphones to improve accessibility.
Source: Wall Street Journal