Microgrid energy management using deep Q-network reinforcement learning

Research output: Contribution to journal › Article › peer-review

95 Scopus citations

Abstract

This paper proposes a deep reinforcement learning-based approach to optimally manage the different energy resources within a microgrid. The proposed methodology considers the stochastic behavior of the main elements, which include the load profile, the generation profile, and the pricing signals. The energy management problem is formulated as a finite-horizon Markov Decision Process (MDP) by defining the state, action, reward, and objective functions, without prior knowledge of the transition probabilities. This formulation does not require an explicit model of the microgrid; instead, it uses accumulated data and interaction with the microgrid to derive the optimal policy. An efficient reinforcement learning algorithm based on deep Q-networks is implemented to solve the developed formulation. To confirm the effectiveness of the methodology, a case study based on a real microgrid is carried out. The results demonstrate that the methodology can schedule the various energy resources of a microgrid online, taking cost-effective actions under stochastic conditions. The achieved operating costs are within 2% of those of the optimal schedule.
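The abstract describes a model-free formulation in which the state, action, and reward are defined and a deep Q-network learns the scheduling policy from interaction with the microgrid. As a rough illustration of that general structure only, the sketch below shows a generic DQN agent with experience replay and a target network; the state features, discrete action set, network sizes, and hyperparameters are hypothetical placeholders, not the configuration used in the paper.

```python
# Minimal, generic DQN sketch (not the authors' implementation).
# State features, action set, and hyperparameters are illustrative assumptions.
import copy
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4    # e.g. [load, renewable generation, price, storage level] (assumed)
N_ACTIONS = 5    # e.g. discretized charge/discharge/idle set-points (assumed)
GAMMA = 0.95     # discount factor over the finite scheduling horizon

# Q-network maps a microgrid state to the value of each discrete action.
q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
target_net = copy.deepcopy(q_net)          # slowly-updated copy for stable targets
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)              # experience replay memory of (s, a, r, s', done)


def select_action(state, eps):
    """Epsilon-greedy choice over the learned Q-values."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())


def train_step(batch_size=64):
    """One gradient step on the temporal-difference (Bellman) error."""
    if len(buffer) < batch_size:
        return
    batch = random.sample(buffer, batch_size)
    s, a, r, s2, done = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(1).values * (1.0 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Periodically sync the target network, e.g.:
    # target_net.load_state_dict(q_net.state_dict())
```

In use, an environment loop would observe the microgrid state at each step, call `select_action`, apply the chosen set-point, store the resulting transition in `buffer`, and call `train_step`; the reward would reflect the operating cost the paper seeks to minimize.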

Original language: English
Pages (from-to): 9069-9078
Number of pages: 10
Journal: Alexandria Engineering Journal
Volume: 61
Issue number: 11
DOIs
State: Published - Nov 2022

Bibliographical note

Publisher Copyright:
© 2022 THE AUTHORS

Keywords

  • Deep Q-networks
  • Deep reinforcement learning
  • Energy management
  • Microgrid

ASJC Scopus subject areas

  • General Engineering
