Automatic classification of speech and music using neural networks

M. Kashif Saeed Khan*, Wasfi G. Al-Khatib, Muhammad Moinuddin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Scopus citations

Abstract

Automatic discrimination between speech signals and music signals has emerged as an important research topic in recent years. The ability to classify audio into categories such as speech or music is central to many multimedia document retrieval systems. Several approaches have previously been used to discriminate between speech and music data. In this paper, we propose using the mean and variance of the discrete wavelet transform in addition to other features previously used for audio classification. We use a Multi-Layer Perceptron (MLP) neural network as the classifier. Our initial tests have shown encouraging results that indicate the viability of our approach.
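The abstract does not give implementation details for the proposed wavelet features. As a minimal illustrative sketch (not the authors' implementation), the mean and variance of the coefficients from a single-level Haar DWT — the simplest wavelet transform — can be computed for an audio frame like this:

```python
import numpy as np

def haar_dwt_features(signal):
    """Single-level Haar DWT of a 1-D frame; return the mean and
    variance of the approximation and detail coefficients as a
    4-dimensional feature vector (a hypothetical feature layout)."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                  # Haar DWT pairs samples, so drop an odd tail
        x = x[:-1]
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)  # low-pass (average)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # high-pass (difference)
    return np.array([approx.mean(), approx.var(),
                     detail.mean(), detail.var()])

# Example: features for a short synthetic 440 Hz frame sampled at 8 kHz
frame = np.sin(2 * np.pi * 440 * np.arange(512) / 8000.0)
features = haar_dwt_features(frame)
print(features.shape)  # (4,)
```

Such per-frame feature vectors, concatenated with the other audio features mentioned in the abstract, would then form the input to the MLP classifier.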

Original language: English
Title of host publication: MMDB 2004
Subtitle of host publication: Proceedings of the Second ACM International Workshop on Multimedia Databases
Publisher: Association for Computing Machinery (ACM)
Pages: 94-99
Number of pages: 6
ISBN (Print): 1581139756, 9781581139754
State: Published - 2004

Publication series

Name: MMDB 2004: Proceedings of the Second ACM International Workshop on Multimedia Databases

Keywords

  • Audio features
  • Audio signal processing
  • Content-based indexing
  • Music speech classification
  • Neural networks

ASJC Scopus subject areas

  • General Engineering
