Researcher advances sign language technology with NSF CAREER Award

Rachel Hynds
May 18, 2021

UNIVERSITY PARK, Pa. — As the worldwide population of deaf and hard-of-hearing people reaches almost 500 million, the need for advanced accessibility technology to improve communication between members of the deaf community and the hearing community is also increasing, according to Mahanth Gowda, assistant professor of computer science and electrical engineering at Penn State. With a five-year, $500,000 National Science Foundation (NSF) CAREER Award, Gowda and his students are working to create a wearable translation device to better facilitate communication between deaf and hard-of-hearing people who use American Sign Language (ASL), one of the hundreds of sign languages used globally, and English speakers. 

The device pairs smart rings, worn on the ASL user's fingers to detect hand movements, with sensors in earphones that detect facial cues such as eyebrow motion, head motion and more.

“You interact with people as you normally would with ASL, and the system will take your data from the sensors and translate it into English or other spoken languages,” Gowda said. “For the hearing person, the translation would come through a smartphone speaker.” 

The response from the hearing person would be translated into ASL and conveyed to the deaf person via sign language animations. 
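The two-way pipeline described above can be sketched in code. This is only an illustrative outline, not the project's actual software: every function and name here is hypothetical, and the "recognizer" and "translator" are stand-ins for the trained models the researchers are building.

```python
from dataclasses import dataclass

# Hypothetical sensor frame: one reading from the smart rings (hand)
# and the earphone sensors (face). The field layout is an assumption.
@dataclass
class SensorFrame:
    hand: tuple  # finger/hand motion from the smart rings
    face: tuple  # eyebrow/head motion from the earphone sensors

def recognize_signs(frames):
    """Placeholder sign recognizer: maps sensor frames to ASL glosses."""
    # A real system would run a trained model here; we return canned glosses.
    return ["HELLO", "HOW", "YOU"]

def translate_to_english(glosses):
    """Placeholder ASL-gloss-to-English translation step."""
    lookup = {("HELLO", "HOW", "YOU"): "Hello, how are you?"}
    return lookup.get(tuple(glosses), " ".join(glosses).lower())

def translate_to_asl_animation(english_reply):
    """Placeholder reverse direction: English reply -> sign animation IDs."""
    return [f"anim:{word.strip('.,?').upper()}" for word in english_reply.split()]

# Deaf user signs -> English audio played from the smartphone speaker
frames = [SensorFrame(hand=(0.1, 0.2), face=(0.0,))]
english = translate_to_english(recognize_signs(frames))

# Hearing user's reply -> sign animations shown to the deaf user
animations = translate_to_asl_animation("I am fine")
```

The point of the sketch is the symmetry: one path turns sensor data into spoken language, and the return path turns spoken language into sign animations.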

A key component of the research includes recognition of the signs made by the ASL user, to be translated eventually into a spoken language. Gowda said that the technology for sign language recognition is somewhat analogous to the automatic speech recognition used in Amazon Alexa and Apple Siri, which can transcribe spoken word to text. However, there are fundamental differences: speech recognition converts a single acoustic signal into text of the same spoken language, while sign language recognition must interpret 3D visual signals from the hands, body and face, and then translate them from ASL, a distinct language, into spoken words.

This research could advance the current solutions for ASL translation to help with integration between deaf and hearing communities, Gowda said, noting the technological challenges of communicating the nuances of ASL. While deaf people can currently use translation technologies involving cameras, the cameras depend on good lighting conditions and resolution and require the user to always face the lens. Hiring sign language interpreters is another option, but it can be challenging to find qualified interpreters to provide services.

To navigate the multi-modal nature of sign language and the difficulty of translating between spoken languages, which are built from words, and sign languages, which are built from signs, Gowda is developing new machine learning techniques. Ultimately, he said, the technology aims to effectively integrate the multiple data inputs and translate sign language to a spoken language, a much harder problem than translating between two spoken languages.
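One common way to integrate multiple data inputs is "early fusion": combining the feature vectors from each sensor stream before classification. The sketch below illustrates that idea with made-up numbers and a trivial nearest-centroid classifier standing in for the learned models the project is developing; none of these names or values come from the actual system.

```python
import math

def fuse(hand_features, face_features):
    """Early fusion: concatenate the two modalities into one vector."""
    return hand_features + face_features

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(fused, centroids):
    """Nearest-centroid stand-in for a learned multi-modal model."""
    return min(centroids, key=lambda sign: distance(fused, centroids[sign]))

# Hypothetical per-sign centroids: first two dims from the smart rings
# (hand), last two from the earphone sensors (face).
centroids = {
    "HELLO": [0.9, 0.1, 0.8, 0.2],
    "THANK-YOU": [0.1, 0.9, 0.2, 0.7],
}

observation = fuse([0.85, 0.15], [0.75, 0.25])
sign = classify(observation, centroids)  # -> "HELLO"
```

A real model would learn the fusion and the classifier jointly from data, and would also have to handle signer-to-signer variation, the "accents" Gowda mentions.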

“From my perspective and for my students, it’s a big learning experience,” Gowda said. “As there are different accents in spoken language, the same exists in sign language.”

To better understand the language-related requirements of such communication, Gowda has partnered with Kenneth DeHaan, professor at Gallaudet University, who will bring his expertise in ASL and Deaf culture to the project. At Penn State, Gowda is collaborating with Rebecca Passonneau, professor in computer science and engineering and a senior partner on the project, who studies linguistics and natural language. ASL experts from Penn State, including Sommar Chilton, associate teaching professor of communication sciences and disorders; Gary Thomas, ASL staff interpreter, Penn State Affirmative Action Office; and Shasta Dreese, assistant teaching professor of communication sciences and disorders, will provide guidance and feedback on the technology to help the researchers understand the key user requirements. In addition, Vijaykrishnan Narayanan, A. Robert Noll Chair of Electrical Engineering and Computer Science with specific expertise in computer architecture, will help develop sensor prototypes for accurate sensing and long-term battery life. Penn State graduate students Yilin Liu, Fengyang Jiang, Shijia Zhang, Suryoday Basak and Arun Teja Muluka have already contributed to the project, Gowda said, and will continue in the future.

“We have a long way to go but we’ve made significant progress, and the interactions with other faculty and their feedback has been super useful,” Gowda said.

