
Informatics and Engineering Systems Faculty Publications and Presentations
A Performance Comparison Between Two Speech-To-ASL-Gesture-Projection Translation Implementations
Document Type
Conference Proceeding
Publication Date
4-22-2025
Abstract
Millions of people with hearing disabilities use sign language to communicate, creating a communication gap with those who are not fluent in ASL (American Sign Language). This paper introduces an ASL interpreter system built on a smart-glasses-based augmented reality platform. We begin by introducing and comparing two models that translate spoken language into ASL poses. The first system translates spoken text into ASL Gloss, an intermediate representation, before generating ASL poses; the second translates the text directly into ASL poses. Our analysis shows that using ASL Gloss as an intermediate step significantly improves translation speed. We then explore a system for encoding ASL pose videos for display on smart glasses. The chosen translation method achieves a BLEU score of 66.5801 and translates each gloss in 1.825 ms. Our algorithm for mapping gloss text to ASL videos obtained a mean squared error of 0.05, indicating that our system has good translational accuracy and a low mapping error.
Recommended Citation
Motlagh, Alexandra Kashani, Shikha Mehta, Ishfaq Ahmad, Addison Clark, and Hansheng Lei. "A Performance Comparison Between Two Speech-To-ASL-Gesture-Projection Translation Implementations." In World Congress in Computer Science, Computer Engineering & Applied Computing, pp. 118-126. Cham: Springer Nature Switzerland, 2024.
First Page
118
Last Page
126
Publication Title
Health Informatics and Medical Systems and Biomedical Engineering
DOI
10.1007/978-3-031-85908-3_10
Comments
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
https://rdcu.be/erfc5