Hey Siri, it’s time to understand the stuttering community!

January 19, 2023 - Shelly DeJong

[Image: A phone showing an interaction with Siri]

Voice-activated artificial intelligence (AI) has become prevalent in our daily lives—think Alexa, Siri, voice-to-text, and AI-scored hiring tools. Its increasing use has left the stuttering community, more than 70 million people worldwide, at a disadvantage. A multidisciplinary team from Michigan State University and Western Michigan University has received a $750,000 grant through the National Science Foundation’s Convergence Accelerator program to make voice-activated AI accessible and fair to people who stutter.

“While voice-activated AI systems can facilitate interactions for some individuals, they present notable barriers for people who stutter. Automatic speech recognition systems cannot accurately understand stuttered speech,” said co-principal investigator and MSU communicative sciences and disorders faculty member Dr. J. Scott Yaruss. “This research aims to level the playing field for people who stutter (and, ultimately, people with other communication differences) by increasing accessibility and fairness of voice-activated AI systems.”

Below is a real example of how voice-activated AI interview tools transcribe speech. The left column shows the conveyed message, and the right column shows the actual speech transcription.   

[Image: An example of voice-activated AI interview tools transcribing speech]

With voice-activated AI technology being used in employment contexts, such as automated phone interfaces and job preparation and hiring software, people with speech differences often face discrimination because the technology is designed and tested without considering communication that varies from societal norms.

“The importance of voice-activated AI accessibility and fairness for people with communication differences will grow as the technology gets more deeply embedded in everyday products and services and is used in a greater range of applications,” said principal investigator and MSU electrical and computer engineering faculty member Dr. Nihar Mahapatra. “A major concern is when exclusionary voice-activated AI makes inferences about the speech and speaker and then uses them to make discriminatory high-stakes decisions, for example, in an employment context.”

A multidisciplinary approach 

The award will allow this multidisciplinary team, including specialists from communicative sciences and disorders, engineering, and psychology, to use cutting-edge advances in AI, as well as a deep understanding of the nature and experience of stuttering, to make a difference in the lives of all people who exhibit disruptions in their speech. 

The team consists of Dr. Nihar Mahapatra from MSU engineering; Dr. Ann Marie Ryan from MSU psychology; Dr. J. Scott Yaruss from MSU communicative sciences and disorders; Dr. Hope Gerlach-Houck from Western Michigan University speech, language and hearing sciences; and Caryn Herring, an MSU doctoral candidate and chairperson of the board of directors of FRIENDS: The National Association of Young People Who Stutter.

“Having experts in engineering collaborating with experts in communication is essential to designing optimal AI-based speech recognition systems. Having experts in fair hiring processes as well as experts in biases against individuals with speech disfluencies enables us to consider implications of design for employment access,” said co-principal investigator and MSU psychology faculty Dr. Ann Marie Ryan. “Having an experienced team member from FRIENDS: The National Association of Young People Who Stutter, Caryn Herring, ensures that we have the input and perspective of those who stutter in every aspect of our work.” 

The Stuttering Community 

Caryn Herring has personal experience with stuttering. She said frustrations with AI are a widely discussed topic.  

Herring described a recent call she made about her dental insurance. The automated answering service prompted her to press one or two to navigate the answering tree. She followed the prompts until one asked her to say aloud why she was calling. No matter what she said, the system could not understand her. It kept prompting her to state her reason without offering an accessible alternative, such as speaking with an actual person.

“Those types of technologies are just not something that’s accessible to me,” said Herring. “And more and more first-round job interviews are now being done through AI. I know that AI does not understand stuttered speech, so I would be discriminated against for a job that I might be very qualified for.” 

She hopes that this project can have a widespread impact on the stuttering community. She also sees the impact it could have for people with other speech variations, or even accents.

“We’re approaching this project with the belief that there isn’t anything wrong with the diversity in how someone talks. We believe that it’s AI’s job to catch up,” added Herring.