Towards robust autonomous driving systems through adversarial test set generation

Devrim Unal*, Ferhat Ozgur Catak, Mohammad Talal Houkan, Mohammed Mudassir, Mohammad Hammoudeh

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Correct environmental perception of objects on the road is vital for the safety of autonomous driving. Appropriate decision-making by the autonomous driving algorithm can be hindered by data perturbations and, more recently, by adversarial attacks. We propose an uncertainty-based adversarial test input generation approach that makes the machine learning (ML) model more robust against data perturbations and adversarial attacks. Adversarial attacks and uncertain inputs can degrade the ML model's performance, with severe consequences such as the misclassification of objects on the road by autonomous vehicles, leading to incorrect decision-making. We show that more robust ML models for autonomous driving can be obtained by building a re-training dataset that includes highly uncertain adversarial test inputs. We demonstrate an improvement of more than 12% in the accuracy of the robust model, with a notable drop in the uncertainty of the decisions returned by the model. We believe our approach will assist in further developing risk-aware autonomous systems.
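The pipeline the abstract describes — craft adversarial test inputs, score their predictive uncertainty, and add the most uncertain ones to the re-training set — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it assumes a one-step FGSM-style perturbation as the attack and Monte Carlo weight dropout as the uncertainty estimator, applied to a toy logistic-regression model so the example stays self-contained.

```python
# Illustrative sketch (not the paper's code): FGSM-style adversarial inputs
# for a tiny logistic-regression model, uncertainty scoring via Monte Carlo
# weight dropout, and selection of high-uncertainty samples for re-training.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps=0.1):
    """One-step FGSM: perturb x along the sign of the input gradient of the
    logistic loss; for this model the gradient is (p - y) * w per sample."""
    p = sigmoid(x @ w)
    grad_x = np.outer(p - y, w)          # d(loss)/dx, shape (n, d)
    return x + eps * np.sign(grad_x)

def mc_uncertainty(x, w, n_samples=50, drop_p=0.3):
    """Predictive standard deviation under Monte Carlo dropout of weights."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(w.shape) > drop_p   # randomly drop weights
        preds.append(sigmoid(x @ (w * mask)))
    return np.std(preds, axis=0)

# Toy linearly separable data and an imperfectly fitted weight vector.
X = rng.normal(size=(200, 4))
w_true = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ w_true > 0).astype(float)
w = w_true + rng.normal(scale=0.2, size=4)

# 1) Craft adversarial versions of the test inputs.
X_adv = fgsm(X, y, w, eps=0.2)

# 2) Keep only the most uncertain adversarial inputs for re-training.
unc = mc_uncertainty(X_adv, w)
top = np.argsort(unc)[-50:]              # 50 highest-uncertainty samples
X_aug = np.vstack([X, X_adv[top]])
y_aug = np.concatenate([y, y[top]])
```

The model would then be re-trained on `X_aug`/`y_aug`; the paper reports that this kind of uncertainty-guided augmentation improves robust accuracy by more than 12%. Names such as `fgsm`, `mc_uncertainty`, and the `eps`/`drop_p` parameters are hypothetical choices for this sketch.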

Original language: English
Pages (from-to): 69-79
Number of pages: 11
Journal: ISA Transactions
Volume: 132
DOIs
State: Published - Jan 2023

Bibliographical note

Publisher Copyright:
© 2022 ISA

Keywords

  • DL
  • Risk-aware autonomous systems
  • Test set generation
  • Uncertainty

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Instrumentation
  • Computer Science Applications
  • Electrical and Electronic Engineering
  • Applied Mathematics
