A hybrid computational model for an automated image descriptor for visually impaired users

Tarek Helmy*, Mohammad M. Hassan, Muhammad Sarfraz

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Scopus citations


Nowadays, with the development of high-quality software, most presentations contain images. This poses a problem for visually impaired people, as support exists for text-to-voice conversion but not for image-to-voice. For documents that combine images and text, we propose a hybrid model that produces a meaningful and easily recognizable descriptor for images in three main categories (statistical, geometrical and non-geometrical). First, a neural classifier is trained, by mining the associated texts using advanced concepts, so that it can assign each document to a specific category. Then, for the images in each category, similarity matching is performed against that category's annotated templates. We have built a classifier that uses novel color-projection features to differentiate geometrical images from ordinary images. This significantly improves the similarity matching, yielding more accurate descriptions of images for visually impaired users. An important feature of the proposed model is that its matching techniques, each suited to a particular category, can be easily integrated and extended to other categories.
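The abstract mentions color-projection features for separating geometrical images (charts, diagrams) from ordinary photographs, but does not specify them. The sketch below is a hypothetical illustration of the general idea, not the paper's actual method: pixel intensities are projected onto the image axes, the resulting profiles are histogrammed, and a simple sparsity statistic is read off. All function names and the sparsity heuristic are assumptions made for illustration.

```python
import numpy as np

def color_projection_features(image, bins=8):
    # Hypothetical color-projection features: project intensities onto the
    # horizontal and vertical axes, then histogram each profile.
    # image: H x W array of grayscale intensities in [0, 1].
    row_proj = image.mean(axis=1)   # vertical profile (one value per row)
    col_proj = image.mean(axis=0)   # horizontal profile (one value per column)
    row_hist, _ = np.histogram(row_proj, bins=bins, range=(0.0, 1.0))
    col_hist, _ = np.histogram(col_proj, bins=bins, range=(0.0, 1.0))
    feats = np.concatenate([row_hist, col_hist]).astype(float)
    return feats / feats.sum()      # normalize to a distribution

def projection_sparsity(feats):
    # Fraction of histogram mass in the single largest bin. Geometrical
    # images with flat regions and sharp edges concentrate their projection
    # profiles in few bins, so this statistic tends to be higher for them.
    return float(feats.max())

# Synthetic "geometrical" image: mostly white with one dark horizontal bar.
geom = np.ones((64, 64))
geom[28:36, :] = 0.0

# Synthetic "ordinary" image: a smooth gradient, like a natural scene.
photo = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

s_geom = projection_sparsity(color_projection_features(geom))
s_photo = projection_sparsity(color_projection_features(photo))
```

In the paper, features of this kind would feed a classifier (the abstract names a neural one) rather than a single threshold; the sparsity score here only demonstrates that the two image types separate under such projections.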

Original language: English
Pages (from-to): 677-693
Number of pages: 17
Journal: Computers in Human Behavior
Issue number: 2
State: Published - Mar 2011


Keywords

  • Classification
  • Image analysis and descriptor

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Human-Computer Interaction
  • General Psychology


