CCA-Based Fusion of Camera and Radar Features for Target Classification Under Adverse Weather Conditions

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Deep learning models such as deep convolutional neural network (DCNN) image classifiers have achieved outstanding performance over the last decade. However, these models are mostly trained on high-quality images drawn from publicly available datasets such as ImageNet. Recently, many researchers have evaluated the impact of low-quality image degradations on the performance of neural-network-based image classifiers, but most of these studies generate low-quality images by synthetically modifying high-quality ones. Moreover, most studies applied various image processing techniques to remove the degradations and retrained the DCNNs to achieve better performance, yet it has since been shown that such methods do not improve the classification accuracy of DCNNs. The robustness of DCNN image classifiers trained on low-quality images caused by natural factors, which are common in autonomous driving and other intelligent-system settings, has rarely been studied in recent years. In this paper, we propose a canonical correlation analysis (CCA) based fusion of camera and radar features to improve the performance of DCNN image classifiers trained on naturally degraded adverse-weather data. CCA is a statistical approach that creates a highly discriminative feature vector by measuring the linear relationship between the camera and radar features. A spatial attention network is designed to re-weight the camera features before associating them with the radar features in the CCA feature-fusion block. Our experimental evaluations confirm that the performance of the DCNN models (i.e., AlexNet and VGG-16) is indeed heavily affected by degradations arising from natural factors. Specifically, on the RADIATE and CARRADA datasets, the DCNN models are most affected by degradations arising from rainfall, fog, and nighttime conditions.
However, the proposed fusion framework significantly improves the performance of the individual sensing modalities. The radar data contributes substantially to the fusion performance, particularly on rainfall data, where the camera data is heavily degraded.

Original language: English
Pages (from-to): 7293-7319
Number of pages: 27
Journal: Neural Processing Letters
Volume: 55
Issue number: 6
DOIs
State: Published - Dec 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Keywords

  • Autonomous driving
  • Camera
  • Canonical correlation analysis
  • Deep convolutional neural networks
  • Deep learning
  • Image classification
  • Radar

ASJC Scopus subject areas

  • Software
  • General Neuroscience
  • Computer Networks and Communications
  • Artificial Intelligence

