On the speedup of deep reinforcement learning deep Q-networks (RL-DQNs)

Anas M. Albaghajati, Lahouari Ghouti

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep reinforcement learning (DRL) merges reinforcement learning (RL) and deep learning (DL). DRL-based agents rely on high-dimensional imagery inputs to make accurate decisions. Such high-dimensional inputs and sophisticated algorithms demand powerful computing resources and long training times. To alleviate the need for powerful resources and to shorten training, this paper proposes novel solutions that mitigate the curse of dimensionality without compromising the DRL agent's performance. With these solutions, the deep Q-network (DQN) model and its improved variants require less training time while achieving better performance.
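To illustrate the general idea of reducing high-dimensional imagery inputs before they reach a DQN, the sketch below shows the conventional Atari-style preprocessing (grayscale conversion, spatial downsampling, frame stacking). This is an assumption about the typical pipeline, not the specific speedup techniques proposed in the paper; the function names and sizes are illustrative only.

```python
# Minimal sketch of standard DQN input preprocessing: grayscale + downsampling
# + frame stacking. Illustrative of the dimensionality-reduction idea only;
# NOT the paper's proposed method.
import numpy as np

def preprocess_frame(frame: np.ndarray, out_size: int = 84) -> np.ndarray:
    """Convert an RGB frame (H, W, 3) to a downsampled grayscale image (out_size, out_size)."""
    # Luminance-weighted grayscale conversion.
    gray = frame[..., 0] * 0.299 + frame[..., 1] * 0.587 + frame[..., 2] * 0.114
    # Naive nearest-neighbour downsampling (avoids external dependencies).
    h, w = gray.shape
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    small = gray[rows][:, cols]
    return (small / 255.0).astype(np.float32)

def stack_frames(frames: list) -> np.ndarray:
    """Stack the last few preprocessed frames into the (C, H, W) tensor a DQN consumes."""
    return np.stack(frames, axis=0)

# Example: a raw 210x160 RGB frame becomes an 84x84 grayscale image,
# cutting the per-frame input from 100,800 values to 7,056.
raw = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
obs = stack_frames([preprocess_frame(raw)] * 4)  # shape: (4, 84, 84)
print(obs.shape)
```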

Original language: English
Title of host publication: ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
Publisher: ESANN (i6doc.com)
Pages: 185-190
Number of pages: 6
ISBN (Electronic): 9782875870650
State: Published - 2019
Event: 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2019 - Bruges, Belgium
Duration: 24 Apr 2019 - 26 Apr 2019

Publication series

NameESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning

Conference

Conference: 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2019
Country/Territory: Belgium
City: Bruges
Period: 24/04/19 - 26/04/19

Bibliographical note

Publisher Copyright:
© 2019 ESANN (i6doc.com). All rights reserved.

ASJC Scopus subject areas

  • Artificial Intelligence
  • Information Systems
