Creating an Automated Sign Language Translator

Photo (from left): Curtis Zhou ’24 and Ryan Rong ’24 recently introduced their American Sign Language translation tool at the Microsoft Imagine Cup.

By Ryan Rong ’24 and Curtis Zhou ’24

It all started during the summer of 2022, when Curtis Zhou ’24 and I decided to participate in the Uber Global Hackathon. Curtis had trouble communicating with his deaf friend when they ordered fast food. Meanwhile, I noticed that the deaf and hard of hearing had difficulty communicating in Zoom meetings during COVID. When we shared our grievances, Audible Motion was born: a real-time American Sign Language (ASL) to English translator to help the deaf and hard of hearing communicate. We wanted to give voice to their motion.

To achieve this, we leveraged the power of machine learning. A webcam captures footage of the user, and the software extracts key points, such as hand joints, from each frame. Based on how these points move over time, the model interprets the sign. We worked day and night during the 48-hour hackathon to develop our model. I was in China while Curtis was in California, so I would work while Curtis was sleeping and vice versa; essentially, we were working nonstop. We pitched our idea to the judges at the Uber Global Hackathon with our working prototype, and they loved it. Our project secured first place in the coding category and won the Innovation Award.
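The article does not name the libraries behind this pipeline, so the following is only a minimal sketch of the keypoint-extraction step, assuming OpenCV for webcam capture and MediaPipe Hands for joint detection; the team's actual implementation may differ.

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands

    cap = cv2.VideoCapture(0)  # open the default webcam
    with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB frames; OpenCV captures BGR
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # 21 (x, y, z) joint coordinates per detected hand
                    keypoints = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
                    # A sequence of these keypoint frames would then be fed to a
                    # classifier that maps motion patterns to ASL signs.
    cap.release()

Feeding a classifier keypoint trajectories instead of raw pixels is a common way to keep such a model small and more robust to background and lighting.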

We were both delighted by the results of the hackathon, and we decided to continue our project when we returned to school. We developed an innovative model architecture that improved translation accuracy over traditional methods, deployed the model to the internet and designed a public website to host it. Realizing that translating only from ASL to English limited communication to one direction, we added a speech-to-text function that transcribes spoken English, allowing the deaf and hard of hearing to read what others say and achieving two-way communication. We then brought our improved product to the Terra North Jersey Science Fair (TNJSF).
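The article does not describe how the transcription feature is implemented; as a purely illustrative sketch, a desktop speech-to-text prototype could be written in Python with the SpeechRecognition package (a browser speech API would be the more natural fit for a website).

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to background noise
        audio = recognizer.listen(source)            # capture one spoken utterance

    try:
        # Transcribe the audio with a hosted recognition service
        text = recognizer.recognize_google(audio)
        print(text)  # displayed so a deaf or hard-of-hearing user can read it
    except sr.UnknownValueError:
        print("Speech was not understood.")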

The development process was frustrating, to say the least. I worked on the code during the day and let my computer run overnight to train the translation model. I would then wake up the next morning to find that the model had underperformed, and I would have to repeat the process.

Curtis explained, “Designing the website was quite difficult. Not only did I have to come up with a way to account for all features of Audible Motion, but I would also have to build the user interface to be as easy to use as possible, to give Audible Motion practicality and versatility. I would have to code line by line and write script by script until both these conditions were met.”

Leading up to the science fair, Curtis and I designed our poster, worked on a demonstration of our product, and presented it to the Math Department for practice. We rehearsed our presentation again and again until we perfected the delivery.

In April 2023, our new model was displayed for the first time, along with hundreds of other projects, at the TNJSF. It was a thrilling experience: we stood in front of our project for three hours, two days straight, always ready to present to the judges.

We finished the science fair as ISEF award finalists, winning first place in the Computer Science category. In addition, we won the IEEE Young Engineers Award, the Association for Computing Machinery Award and the Journal of Emerging Investigators Finalists Award.

We did not rest when we returned, however; we began preparing for the Microsoft Imagine Cup World Finals. Ours was one of 48 teams worldwide featuring innovative solutions by high school and college students. We prepared a three-minute presentation, put together a one-minute elevator pitch and practiced answering the judges’ potential questions. We competed against 16 other finalists in the Americas Region and won the Lifestyle category for providing a solution that improved the daily lives of the deaf and hard of hearing. Together, we won $3,300 in prizes, which we plan to invest in continuing to build our product.

We want to thank Peddie’s Technology Department for helping with our technical setup, the Math Department for giving us extensive feedback on our presentation and Mrs. Joy Wolfe for supporting us throughout the journey.