Abstract
Video is a content-rich medium and has recently received considerable attention for opinion mining and sentiment analysis, especially with the growing volume of online videos in social networks. However, prior work has shown that machine-learning techniques for sentiment analysis can be gender biased, i.e., more accurate for one gender than for the other. Hence, recognizing gender early can improve sentiment-analysis results. This paper explores multimodal analysis of video data to extract features and build gender recognition models. Different methods are evaluated for unimodal, bimodal, and trimodal systems using features from three modalities: visual, audio, and text. Evaluation of the proposed system reveals that fusing multiple modalities at the feature level can significantly improve gender recognition performance compared to unimodal models.
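The paper's implementation is not included here; as a minimal sketch of the feature-level (early) fusion the abstract describes, the snippet below concatenates per-sample visual, audio, and text feature vectors into one joint representation and trains a single classifier on it. All feature dimensions, the synthetic data, and the logistic-regression classifier are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-sample features; dimensions are illustrative only.
n = 200
visual = rng.normal(size=(n, 32))    # e.g. facial-appearance descriptors
audio = rng.normal(size=(n, 24))     # e.g. pitch / MFCC statistics
text = rng.normal(size=(n, 50))      # e.g. transcript embeddings
labels = rng.integers(0, 2, size=n)  # binary gender label (synthetic)

# Feature-level fusion: concatenate the modality vectors into one
# joint feature vector before any classification takes place.
fused = np.concatenate([visual, audio, text], axis=1)  # shape (n, 106)

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Dropping one modality from the `np.concatenate` list yields the bimodal and unimodal baselines the abstract compares against.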
| Original language | English |
|---|---|
| Title of host publication | 2019 8th International Conference on Modeling Simulation and Applied Optimization, ICMSAO 2019 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (Electronic) | 9781538676844 |
| DOIs | |
| State | Published - Apr 2019 |
Publication series
| Name | 2019 8th International Conference on Modeling Simulation and Applied Optimization, ICMSAO 2019 |
|---|---|
| Volume | 2019-January |
Bibliographical note
Publisher Copyright: © 2019 IEEE.
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 5: Gender Equality
ASJC Scopus subject areas
- Signal Processing
- Industrial and Manufacturing Engineering
- Safety, Risk, Reliability and Quality
- Control and Optimization
- Modeling and Simulation
- Health Informatics
Fingerprint
Dive into the research topics of 'Using feature-level fusion for multimodal gender recognition for opinion mining videos'.