'Sign Language Detection and Translation using Smart Glove' is aimed at enhancing communication and accessibility for individuals with hearing impairments. The project centres on the development of a system for sign language detection and translation using a specially designed smart glove. Its potential impact is significant: by enhancing accessibility, enabling personalised communication, and taking a user-centric approach, it is well placed to gain adoption among its target user base. The work also contributes to the fields of AI and assistive technology, addressing real-world communication challenges while promoting awareness of inclusivity. The project's core goal is to bridge the communication gap by providing real-time recognition and translation of sign language gestures, empowering individuals with hearing loss to engage more effectively in their day-to-day interactions. Initially developed as a dissertation project for an MSc in Artificial Intelligence, the current prototype focuses on right-hand gestures, detecting and translating the numbers 0 to 9 and the letters A to E in American Sign Language (ASL).
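The prototype's recognition step can be pictured as mapping one frame of glove sensor readings to one of the fifteen target signs. The sketch below is illustrative only: the five-flex-sensor layout, the normalisation to [0, 1], and the k-nearest-neighbours classifier are assumptions for the example, not the project's confirmed pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical label set matching the prototype's stated scope:
# digits 0-9 and letters A-E in ASL, right hand only.
LABELS = [str(d) for d in range(10)] + ["A", "B", "C", "D", "E"]

# Assumed glove layout: five flex sensors (one per finger), each
# reading normalised to [0, 1]. Real training data would come from
# recorded glove sessions; random placeholders stand in here.
rng = np.random.default_rng(0)
X_train = rng.random((150, 5))
y_train = rng.choice(LABELS, 150)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

def classify_gesture(flex_readings):
    """Map one frame of five flex-sensor values to a predicted sign."""
    return model.predict(np.asarray(flex_readings).reshape(1, -1))[0]

# Example frame: index finger extended, other fingers curled.
print(classify_gesture([0.9, 0.1, 0.1, 0.1, 0.1]))
```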
The primary objectives of the project's expansion are:
- to develop wireless and wearable smart gloves
- to enhance whole-hand movement detection
- to optimise the efficiency and effectiveness of complex communication
This expansion includes integrating the glove with communication devices, offering a more inclusive and accessible means of communication for those with hearing loss. Future research directions encompass two-way communication, multilingual support, accessibility for special needs such as education, and improved situation-based performance. Together, these enhancements aim to provide a comprehensive solution for sign language recognition and translation, significantly improving the quality of life of its intended users.
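As one illustration of the two-way communication direction, the incoming half of a conversation (a hearing partner's speech) could be transcribed to text for the glove user to read. The snippet below is a minimal sketch; the `speech_recognition` package and Google's free web API are illustrative choices, not the project's confirmed speech stack.

```python
import speech_recognition as sr

# Transcribe spoken audio from a conversation partner into text
# that the glove user can read on a paired display.
recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # brief calibration
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)
    print(f"Partner said: {text}")
except sr.UnknownValueError:
    print("Speech was not intelligible.")
```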
The project's evolution will involve augmenting the sensor setup with additional sensors, applying sensor fusion techniques, developing speech recognition and text-to-speech capabilities for real-time communication, integrating device compatibility, creating user-friendly interfaces, and refining machine learning models to handle the expanded data. The work draws on collaboration with experts including Dr Sandra Fernando, Subeksha Shrestha, Dion Mariyanayagam, and Professor Bal Virdee, along with the inventor, Sunila Maharjan, and product users (individuals with hearing impairments and/or sign language experts). Through this iterative development process, the team aims to enhance communication for individuals who use sign language, offering an inclusive and versatile solution.
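One common form the sensor fusion step could take is feature-level fusion: readings from the flex sensors and an inertial measurement unit are concatenated per time-step into a single vector for the classifier. The channel counts below are assumptions for the sketch, not the project's confirmed hardware design.

```python
import numpy as np

def fuse_frame(flex, accel, gyro):
    """Feature-level fusion: concatenate one time-step of flex,
    accelerometer, and gyroscope readings into a single vector
    that a downstream classifier can consume.

    Assumed channel counts (for illustration only):
      flex  - 5 flex-sensor values (one per finger)
      accel - 3-axis accelerometer reading
      gyro  - 3-axis gyroscope reading
    """
    return np.concatenate([np.asarray(flex, dtype=float),
                           np.asarray(accel, dtype=float),
                           np.asarray(gyro, dtype=float)])

# One fused 11-dimensional frame from hypothetical readings.
frame = fuse_frame([0.9, 0.1, 0.1, 0.1, 0.1],
                   [0.02, -0.98, 0.05],
                   [0.10, 0.00, -0.20])
print(frame.shape)  # (11,)
```

Fusing whole-hand orientation and motion with finger flexion in this way is what would let the expanded glove distinguish dynamic, movement-based signs that static flex readings alone cannot capture.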
Project team