A deep multi-modal neural network for informative Twitter content classification during emergencies

Abhinav Kumar, Jyoti Prakash Singh, Yogesh K. Dwivedi*, Nripendra P. Rana

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

65 Scopus citations

Abstract

People start posting tweets containing text, images, and videos as soon as a disaster hits an area. Analysing these disaster-related tweet texts, images, and videos can help humanitarian response organizations make better decisions and prioritize their tasks. Finding the informative content that can support decision-making within the massive volume of Twitter content is a difficult task and requires a system to filter out the informative posts. In this paper, we present a multi-modal approach to identify disaster-related informative content from Twitter streams using text and images together. Our approach combines a long short-term memory (LSTM) network for tweet text with a VGG-16 network for images, and it shows a significant improvement in performance, as evident from validation results on seven different disaster-related datasets. The F1-score ranged from 0.74 to 0.93 when tweet text and images were used together, whereas with tweet text alone it ranged from 0.61 to 0.92. These results show that the proposed multi-modal system performs significantly well in identifying disaster-related informative social media content.
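
The fused text-image architecture the abstract describes can be sketched in a few lines of Keras. This is a minimal illustration, not the authors' released code: the vocabulary size, sequence length, embedding dimension, and layer widths are assumptions, and the fusion shown here is a simple concatenation of LSTM text features and pooled VGG-16 image features feeding a binary informative / not-informative classifier.

```python
# Minimal sketch of a multi-modal (LSTM text + VGG-16 image) classifier.
# Hyperparameters below are illustrative assumptions, not values from the paper.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

VOCAB_SIZE = 20_000   # assumed tokenizer vocabulary size
MAX_LEN = 30          # assumed maximum tokens per tweet
EMBED_DIM = 100       # assumed word-embedding dimension

# Text branch: word embeddings encoded by an LSTM.
text_in = layers.Input(shape=(MAX_LEN,), name="tweet_tokens")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(text_in)
x = layers.LSTM(128)(x)

# Image branch: VGG-16 pretrained on ImageNet, used as a frozen feature extractor.
img_in = layers.Input(shape=(224, 224, 3), name="tweet_image")
vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = False
y = vgg(img_in)
y = layers.Dense(128, activation="relu")(y)

# Fusion: concatenate both modalities, then classify informative vs. not.
z = layers.concatenate([x, y])
z = layers.Dense(64, activation="relu")(z)
out = layers.Dense(1, activation="sigmoid", name="informative")(z)

model = Model(inputs=[text_in, img_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```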

Original language: English
Pages (from-to): 791-822
Number of pages: 32
Journal: Annals of Operations Research
Volume: 319
Issue number: 1
DOIs
State: Published - Dec 2022
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2020, Springer Science+Business Media, LLC, part of Springer Nature.

Keywords

  • Disaster
  • LSTM
  • Social media
  • Tweets
  • Twitter
  • VGG-16

ASJC Scopus subject areas

  • General Decision Sciences
  • Management Science and Operations Research
