Improved Approach for Arabic Sign Language Visual Recognition Using Hand and Facial Gestures with Deep Learning

Project: Research

Project Details

Description

Sign language is the primary language of deaf people; it employs the visual-manual modality to convey meaning. Sign languages are geographically specific: many distinct sign languages exist around the world, sometimes within the same country. Moreover, countries that share a spoken language may still use different sign languages, such as British Sign Language and American Sign Language. In Arabic countries, there are several sign languages, including Saudi, Jordanian, Yemeni, and Omani sign languages.

Manual and non-manual gestures are the two components of sign language, and they are employed simultaneously during signing. Manual gestures consist of hand movements, while non-manual gestures consist of body postures and facial gestures such as head motion and facial expressions. Hand gestures are the predominant component of sign language and are used in almost all signs. They are simultaneously accompanied by non-manual gestures such as facial expressions, mouth shapes, and head movements, which mainly express emotions and feelings and resolve the ambiguity that arises when the same manual gesture is used for several signs.

Over the past two decades, researchers have devoted considerable effort to recognizing sign language gestures. This effort has targeted non-Arabic sign languages, such as German and American sign languages, and most of it has been directed toward hand gesture recognition. However, the accuracy of the proposed techniques has not reached a level suitable for commercial products. One of the main reasons for these low recognition accuracies is that researchers have ignored other body postures that are an integral part of sign language. This research project investigates a novel hybrid model that combines unsupervised and supervised deep feature learning and classification to enhance the recognition rate of Arabic Sign Language.
Manual and non-manual gestures, in particular facial gestures, will be integrated to boost the recognition rate. Moreover, an augmented database of Arabic Sign Language will be collected using state-of-the-art acquisition techniques, focusing on signs that employ both facial and hand gestures. This will address the lack of resources in this field. The database will be pre-processed and tested for automatic recognition of Arabic Sign Language, and it will be made freely available to other researchers as a benchmarking database. The research outcomes, including both the dataset and the methodology, will be shared with the community through various venues and may potentially lead to patents.
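The hybrid approach described above — unsupervised feature learning followed by supervised classification — can be illustrated in outline. The sketch below is a minimal NumPy example, not the project's actual architecture: it pretrains a one-layer autoencoder on unlabeled feature vectors, then trains a softmax classifier on the learned encodings. All data, names, and dimensions are synthetic stand-ins for real hand/face gesture features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for gesture feature vectors (illustrative only; the real
# project would use hand/face descriptors extracted from video).
n_samples, n_features, n_hidden, n_classes = 200, 20, 8, 4
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)

# --- Stage 1: unsupervised feature learning with a one-layer autoencoder ---
W_enc = rng.normal(scale=0.1, size=(n_features, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_features))
lr = 0.01
for _ in range(500):
    H = np.tanh(X @ W_enc)          # encoded (learned) features
    X_hat = H @ W_dec               # reconstruction of the input
    err = X_hat - X
    # Gradients of the mean squared reconstruction error
    grad_dec = H.T @ err / n_samples
    grad_enc = X.T @ ((err @ W_dec.T) * (1 - H**2)) / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# --- Stage 2: supervised softmax classifier on the learned features ---
H = np.tanh(X @ W_enc)
Y = np.eye(n_classes)[y]            # one-hot labels
W_cls = np.zeros((n_hidden, n_classes))
for _ in range(500):
    logits = H @ W_cls
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    W_cls -= 0.1 * H.T @ (P - Y) / n_samples  # cross-entropy gradient step

acc = (np.argmax(H @ W_cls, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In a realistic pipeline the autoencoder would be deeper and trained on large unlabeled gesture corpora, and the supervised stage would fine-tune the encoder jointly with the classifier rather than freezing it.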
Status: Finished
Effective start/end date: 15/04/20 – 15/03/21
