Arabic Sign Language Recognition Using Deep Machine Learning

Wael Suliman, Mohamed Deriche, Hamzah Luqman, Mohamed Mohandes

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Scopus citations

Abstract

In this work, we present an effective method for automatic Arabic Sign Language recognition that uses a Convolutional Neural Network (CNN) for feature extraction and a Long Short-Term Memory (LSTM) network for classification. AlexNet, a CNN architecture, is used to extract deep features from each input frame, while the LSTM models the sequential structure of the video frames. The method was tested on a dataset of 150 signs commonly used in daily activities, performed by three signers with 50 repetitions each. The proposed method achieved an overall recognition accuracy of 95.9% in the signer-dependent case and 43.62% in the more difficult signer-independent case.
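The sketch below illustrates the kind of CNN-LSTM pipeline the abstract describes: an AlexNet backbone extracts a feature vector per video frame, and an LSTM consumes the frame-feature sequence before a linear layer predicts one of the 150 sign classes. This is not the authors' implementation; it is a minimal PyTorch/torchvision sketch, and the LSTM hidden size, clip length, and use of the backbone's flattened 9216-dimensional feature map are illustrative assumptions.

```python
# Minimal sketch of a CNN (AlexNet) + LSTM sign classifier.
# Assumes a recent PyTorch/torchvision; hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models


class CnnLstmSignClassifier(nn.Module):
    def __init__(self, num_classes=150, hidden_size=256):
        super().__init__()
        alexnet = models.alexnet(weights=None)        # AlexNet backbone for per-frame features
        self.cnn = nn.Sequential(
            alexnet.features,                          # convolutional feature extractor
            alexnet.avgpool,
            nn.Flatten(),                              # -> 256 * 6 * 6 = 9216 features per frame
        )
        self.lstm = nn.LSTM(input_size=9216, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)  # one logit per sign class

    def forward(self, clips):
        # clips: (batch, frames, 3, 224, 224)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))          # run AlexNet on every frame
        feats = feats.view(b, t, -1)                   # restore the temporal dimension
        _, (h_n, _) = self.lstm(feats)                 # last hidden state summarizes the clip
        return self.fc(h_n[-1])                        # sign-class logits


# Usage example: a batch of 2 clips with 16 frames each yields logits over 150 signs.
model = CnnLstmSignClassifier()
logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 150])
```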

Original language: English
Title of host publication: 2021 4th International Symposium on Advanced Electrical and Communication Technologies, ISAECT 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665437738
DOIs
State: Published - 2021

Publication series

Name: 2021 4th International Symposium on Advanced Electrical and Communication Technologies, ISAECT 2021

Bibliographical note

Publisher Copyright:
© 2021 IEEE.

Keywords

  • Arabic sign language recognition
  • CNN
  • LSTM

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Energy Engineering and Power Technology
  • Renewable Energy, Sustainability and the Environment
  • Aerospace Engineering
  • Electrical and Electronic Engineering
  • Instrumentation
