Abstract
Deep reinforcement learning (DRL) merges reinforcement learning (RL) and deep learning (DL). DRL-based agents rely on high-dimensional image inputs to make accurate decisions, but such inputs, combined with sophisticated algorithms, demand powerful computing resources and long training times. To alleviate the need for powerful resources and shorten training, this paper proposes novel solutions that mitigate the curse of dimensionality without compromising the DRL agent's performance. With these solutions, the deep Q-network (DQN) model and its improved variants require less training time while achieving better performance.
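The dimensionality reduction the abstract alludes to can be illustrated with a standard DQN-style preprocessing step: converting RGB frames to grayscale and downsampling them before they reach the network. This is a generic sketch of the idea, not the paper's specific method; the frame size, output size, and scaling choices here are illustrative assumptions.

```python
import numpy as np

def preprocess_frame(frame, out_size=84):
    """Reduce an RGB game frame to a small grayscale image.

    Grayscale conversion plus downsampling shrinks the input from
    H*W*3 values to out_size*out_size, easing the curse of
    dimensionality before the frame reaches a DQN's layers.
    (Illustrative sketch only; the paper's actual reduction
    technique may differ.)
    """
    # Luminance-weighted grayscale: (H, W, 3) -> (H, W)
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour downsample to (out_size, out_size)
    h, w = gray.shape
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    small = gray[np.ix_(rows, cols)]
    # Scale to [0, 1] so network inputs are well-conditioned
    return (small / 255.0).astype(np.float32)

# Example: a 210x160 Atari-style RGB frame (100,800 values)
# shrinks to an 84x84 observation (7,056 values).
frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
obs = preprocess_frame(frame)
print(obs.shape)  # (84, 84)
```

A reduction like this cuts the input dimensionality by roughly an order of magnitude, which is one common way to lower both memory footprint and training time for DQN-family agents.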
Original language | English |
---|---|
Title of host publication | ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
Publisher | ESANN (i6doc.com) |
Pages | 185-190 |
Number of pages | 6 |
ISBN (Electronic) | 9782875870650 |
State | Published - 2019 |
Event | 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2019 - Bruges, Belgium Duration: 24 Apr 2019 → 26 Apr 2019 |
Publication series
Name | ESANN 2019 - Proceedings, 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
---|---|
Conference
Conference | 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2019 |
---|---|
Country/Territory | Belgium |
City | Bruges |
Period | 24/04/19 → 26/04/19 |
Bibliographical note
Publisher Copyright: © 2019 ESANN (i6doc.com). All rights reserved.
ASJC Scopus subject areas
- Artificial Intelligence
- Information Systems