Signs of progress: How Artificial Intelligence can unlock the language of the silent
Worldwide, about 466 million people have disabling hearing loss, and this number is expected to keep growing. Many people with severe hearing impairments use a sign language such as American Sign Language (ASL). Communicating with people who do not know sign language, however, can be difficult. Trained interpreters can help, especially at legal and medical appointments, but they are not an everyday solution. Many affected people therefore hope for a solution based on Artificial Intelligence, such as Deep Learning programs. This study presents such an approach. It was presented at the 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI).
Takeaways
- Thanks to rapidly advancing Deep Learning programs, Artificial Intelligence offers an opportunity to revolutionize sign language recognition.
- The program developed in this study can recognize individual sign language letters and numbers with up to 100% accuracy.
- Yet, the program is not advanced enough to translate whole conversations; much work remains for the future.
Study information
Who?
The study used four datasets, each consisting of images of static sign language gestures:
- The MU HandImages ASL dataset from New Zealand, with 2,425 pictures
- The Turkish sign language digit dataset, collected from 218 students of the Ankara Ayranci Anadolu high school, with 2,180 pictures
- The ASL fingerspelling dataset collected by the University of Surrey in the UK, with over 65,000 pictures
- The ASL Alphabet dataset provided by Akash Nagaraj, with 87,000 pictures
Where?
Bangladesh
How?
First, the researchers standardized the datasets of pictures of sign language numbers and letters by bringing every picture to the same size and color scheme. They also enlarged the datasets by adding slightly altered copies of the pictures, for example rotated versions. A convolutional neural network (CNN) model named SLRNet-8 was then trained on these pictures, while 10% of them were set aside to test the program's accuracy.
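The paper itself is not quoted with code here, but the pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the folder layout, image size, augmentation settings, and the small CNN standing in for SLRNet-8 are all assumptions.

```python
# Minimal sketch of the described pipeline: standardize image size, augment the
# pictures with small rotations/shifts, hold out 10% for testing, and train a CNN.
# Folder name, image size, and network layers are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (64, 64)          # assumed common size after standardization
DATA_DIR = "sign_images/"    # hypothetical folder: one sub-folder per letter or number

# Load the pictures, resize them, and split off 10% as a test set.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=IMG_SIZE, validation_split=0.1,
    subset="training", seed=42)
test_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=IMG_SIZE, validation_split=0.1,
    subset="validation", seed=42)
num_classes = len(train_ds.class_names)

# Augmentation: the model also sees slightly rotated and shifted copies of each picture.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomTranslation(0.05, 0.05),
])

# A small CNN standing in for SLRNet-8 (the paper's exact architecture is not reproduced here).
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    augment,
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Setting aside 10% of the pictures as a test set, as in the study, is what makes the reported accuracy meaningful: the program is graded on pictures it never saw during training.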
Facts and findings
- The program correctly identified 99.9 to 100% of the pictures of American Sign Language letters and numbers.
- Mixing pictures of numbers and letters did not reduce the program's recognition accuracy.
- Compared to older programs, the SLRNet-8 program developed in this study was 9% better at recognizing letters and numbers.
- Although very accurate, the program can only recognize pictures of single letters or numbers; for a realistic conversation, it would need to identify whole sentences in videos or live streams (see the sketch after this list).
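To make the last point concrete, here is a minimal, hypothetical usage sketch; the file names, class list, and saved model are illustrative assumptions, not from the paper. It shows that a model of this kind takes one still picture in and gives one letter or number out, which is why continuous signing in video would require a different approach.

```python
# Hypothetical usage of a trained sign-recognition model: one photo in, one label out.
import numpy as np
import tensorflow as tf

IMG_SIZE = (64, 64)                                          # must match the training size
CLASS_NAMES = [str(d) for d in range(10)] + list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")  # assumed label order

model = tf.keras.models.load_model("slrnet8_like.keras")     # hypothetical saved model file

# Classify a single photo of a hand sign.
image = tf.keras.utils.load_img("hand_sign.jpg", target_size=IMG_SIZE)  # hypothetical photo
batch = np.expand_dims(tf.keras.utils.img_to_array(image), axis=0)
probabilities = model.predict(batch)[0]
print("Predicted sign:", CLASS_NAMES[int(np.argmax(probabilities))])
```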