AI-Powered Productivity Tools That Can Make Life Harder

Deaf since birth, Paul Meyer has used human interpreters and captioners to communicate with colleagues throughout his 25-year career in human resources and recruiting.

But as companies began to rely more on video conferencing during the pandemic, he noticed a worrying trend. As meetings moved online, companies began routinely using AI-based transcription software. And as that technology became part of everyday business, some employers assumed it could be used in other roles, for example to replace human interpreters.

The problem, according to Meyer, is that the technology has flaws that employers are unaware of, flaws that make life more difficult for deaf workers.

“The company thought the AI technology for closed captioning was perfect,” says Meyer. “They were confused because I was missing a lot of information.”


Quote from Paul Meyer that says: “The company thought the AI technology for closed captioning was perfect. They were confused because I was missing a lot of information.” His voice transcribed by Google Speech to Text says: “The capital thought I would talk about the country while I had a lot of information.”

Voice recognition technology, which became available in workplaces in the 1990s, has improved greatly and created new opportunities for disabled people to have conversations when an interpreter is not available.

Now hearing people increasingly use it as a productivity tool that can help teams summarize notes or generate transcripts for meetings, for example. According to Forrester Research, 39 percent of workers surveyed globally said their employers had started using or planned to incorporate generative AI into video conferencing. Six in 10 now use online or video conferencing weekly, a figure that has doubled since 2020.

This story was produced in association with the Pulitzer Center’s AI Accountability Network.

The increased prevalence has many upsides for deaf workers, but some warn that these tools could harm disabled people if employers do not understand their limitations. One concern is the assumption that AI can replace trained human interpreters and captioners. That concern is compounded by a historical lack of participation by disabled people in the development of AI products, even some that are marketed as assistive technologies.

Speech recognition models often fail to understand people with irregular or accented speech, and can perform poorly in noisy environments.
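One standard way to quantify such failures is the word error rate (WER): the number of word-level insertions, deletions and substitutions needed to turn an automatic transcript into what the speaker actually said, divided by the number of words spoken. The Python sketch below is purely illustrative, not part of the article's reporting; the function is a textbook WER implementation, applied here to Meyer's quote and the Google Speech to Text transcript shown above.

```python
# Illustrative sketch only: word error rate (WER) computed as the
# Levenshtein distance over words, divided by the reference length.
# This is a common way ASR accuracy is measured, not code from the article.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# What Meyer said, versus the transcript quoted in this article.
reference = "The company thought the AI technology for closed captioning was perfect"
hypothesis = "The capital thought I would talk about the country while I had a lot of information"
print(f"WER: {word_error_rate(reference, hypothesis):.0%}")
```

For this pair the rate comes out above 100 percent: the transcript needs more word-level edits than the reference contains words, the kind of output that leaves a deaf participant “missing a lot of information.”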

“People have false ideas that AI is perfect for us. It’s not perfect for us,” says Meyer. He was fired from his job and believes a lack of suitable accommodation made him an easy target when the company downsized.


Quote from Paul Meyer that says: “People have false ideas that AI is perfect for us. It’s not perfect for us.”

Some companies are now looking to improve speech recognition technology, through efforts such as training their models on a broader spectrum of speech.

Google, for example, began collecting more diverse speech samples in 2019 after recognizing that its own models were not working for all of its users. It launched the Project Relate app on Android in 2021, which collects individual voice samples to create a real-time transcription of a user’s speech. The app is aimed at people with non-standard speech, including those with a deaf accent, ALS, Parkinson’s disease, cleft palate, and stuttering.

In 2022, four other technology companies (Amazon, Apple, Meta and Microsoft) joined Google in research led by the Beckman Institute at the University of Illinois Urbana-Champaign to collect more voice samples, which will be shared among the companies and with other researchers.

Google researcher Dimitri Kanevsky, who has a Russian accent and non-standard speech, says the Relate app allowed him to have impromptu conversations with contacts, such as other attendees at a math conference.

“I became much more sociable. I could communicate with anyone anytime, anywhere and they could understand me,” says Kanevsky, who lost his hearing at age three. “It gave me an incredible feeling of freedom.”


Dimitri Kanevsky quote that says: “I became much more sociable. I could communicate with anyone anytime, anywhere and they could understand me. It gave me an incredible feeling of freedom.” His voice transcribed by Google Speech to Text says: “It became much more social. I could communicate with any budget at any time and anywhere and they could understand me and gave me an incredible sense of freedom.” His voice transcribed using Project Relate matches his quote.

A handful of deaf-led startups, such as Intel-backed OmniBridge and Techstars-funded Sign-Speak, are working on products that focus on translation between American Sign Language (ASL) and English. Adam Munder, founder of OmniBridge, says that while he’s been fortunate at Intel to have access to interpreters around the clock, even while walking around the office and in the lunchroom, he knows that many companies don’t offer that access.

“With OmniBridge, you could have those conversations in hallways and cafeterias,” says Munder.

But despite progress in this area, there is concern about the lack of representation of people with disabilities in the development of some more conventional translation tools. “There are a lot of hearing people who have set up solutions or tried to do things assuming they know what deaf people need, assuming they know the best solution, but they may not really understand the whole story,” Munder says.

At Google, where 6.5 percent of employees identify as disabled, Jalon Hall, the only Black woman in Google’s group of deaf and hard of hearing employees, led a project that began in 2021 to better understand the needs of Black deaf users. Many of those she spoke to used Black ASL, a variant of American Sign Language that diverged largely because of the segregation of American schools in the 19th and 20th centuries. She says the people she spoke to did not find that Google products worked well for them.

“There are many technically competent deaf users, but they are not often included in important dialogues. They are not usually included when important products are developed,” says Hall. “It means they will be left further behind.”

In a recent article, a team of five deaf and hard of hearing researchers found that most recently published sign language studies did not include deaf perspectives. The studies also did not use datasets that represented deaf people, and made modeling decisions that perpetuated misconceptions about sign language and the deaf community. These biases could become a problem for future deaf workers.

“What hearing people who don’t sign consider ‘good enough’ could set quite a low bar for bringing products to market,” says Maartje De Meulder, a senior researcher at Utrecht University of Applied Sciences in the Netherlands, who co-authored the article. “It is worrying that the technology is simply not good enough, or is not voluntarily adopted by deaf workers, while they are required or even forced to use it.”

Ultimately, companies will need to prioritize improving these tools for people with disabilities. Google has yet to incorporate the advances in its speech-to-text models into commercial products, despite its researchers reporting that they had reduced the error rate by a third.

Hall says she has received positive feedback from senior managers about her work, but it remains unclear whether it will influence Google’s product decisions.

As for Meyer, he hopes to see more deaf representation and more tools designed for disabled people. “I think a problem with AI is that people think it will help them talk to us more easily, but it may not be easy for us to talk to them,” Meyer says.


Quote from Paul Meyer that says: “I think one problem with AI is that people think it will help them talk to us more easily. But it may not be easy for us to talk to them.”

Design work by Carolina Nevitt