In a newly announced partnership with the Canadian Down Syndrome Society, Google has launched an effort to make its voice recognition technology better adapted to the vocal patterns of people with Down syndrome. Because of their unique skeletal and muscular structure, individuals with Down syndrome have speech patterns that differ markedly from those of people without the condition – and most voice recognition technology isn’t trained to understand those patterns.
Named “Project Understood,” the initiative has called on individuals with Down syndrome to contribute to its library of speech recordings.
Project Understood’s website explains:
Automatic Speech Recognition (ASR) can greatly improve the ability of those with speech impairments to interact with everyday smart devices and facilitate more independent living. However, these systems have predominantly been trained on “typical speech.” But not all human speech is the same.
The unique speech patterns of people with Down syndrome make it difficult for voice technologies to understand them. This is due to a large lack of training data. The Canadian Down Syndrome Society along with Google’s Project Euphonia are setting out to make speech technology more accessible to those with disabilities by recording the voices of thousands of participants with Down syndrome to help train and improve its technology. By reading and recording simple phrases, we can help Google recognize your unique speech patterns to improve Google’s system.
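To make the “training data” point concrete, below is a minimal, illustrative sketch of how extra recordings could be used to fine-tune an off-the-shelf speech recognizer on an underrepresented speech pattern. This is not Google’s or Project Euphonia’s actual pipeline; the open-source wav2vec2 checkpoint, the file names, and the example phrases are all assumptions made purely for illustration.

```python
# Minimal illustrative sketch: fine-tuning an open-source ASR model on a few
# extra recordings from speakers the model currently struggles with.
# NOTE: the checkpoint, file names, and phrases below are assumptions for
# illustration only, not details from Project Euphonia or Project Understood.

import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical (audio file, transcript) pairs contributed by a participant.
samples = [
    ("phrase_001.wav", "TURN ON THE LIGHTS"),
    ("phrase_002.wav", "WHAT IS THE WEATHER TODAY"),
]

model.train()
for path, transcript in samples:
    waveform, sample_rate = torchaudio.load(path)
    # The model expects 16 kHz mono audio.
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000).mean(dim=0)

    inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids

    # CTC loss measures how far the model's transcription is from the target phrase.
    loss = model(input_values=inputs.input_values, labels=labels).loss
    loss.backward()
    optimizer.step()       # nudge the model toward this speaker's pronunciation
    optimizer.zero_grad()
```

The general idea is simple: each additional recorded phrase gives the model one more example of a speech pattern it has rarely seen, which is why the project is collecting many short recordings from many participants.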
So far, 300 voices have been collected, putting Project Understood more than halfway to its goal of 500 – “the more voice samples shared by the Down syndrome community, the closer we get to a world where every person is understood.”