TY - JOUR
T1 - Deep Reinforcement Learning-Based Model-Free Secondary Frequency Control of Widespread Islanded Microgrid with Stability Constraints
AU - Awan, Ramisha Qasim
AU - Zeb, Kamran
AU - Hayat, Rameez
AU - Rasheed, Ahmed
AU - Uddin, Waqar
AU - Khalid, Muhammad
N1 - Publisher Copyright:
© King Fahd University of Petroleum & Minerals 2025.
PY - 2025
Y1 - 2025
N2 - This paper proposes a novel model-free control scheme for an islanded microgrid (MG) using value- and policy-based deep reinforcement learning (DRL) for secondary frequency regulation, considering the nonlinear behavior of renewable energy sources (RES), load variations, and the limitations of model-based techniques. The proposed DRL framework incorporates both a value-based deep Q-network (DQN) agent and a policy-based proximal policy optimization (PPO) agent. The employed MG test-bench environment includes 300 small-capacity distributed energy resources. The proposed model-free control is compared with an existing distributed model predictive controller (MPC), an adaptive linear quadratic regulator (LQR), and a proportional–integral–derivative controller with a first-order filter (PIDF). The results show 91.9% and 61.6% reductions in average secondary frequency deviation using the value- and policy-based DRL methodologies, respectively, compared with the model-based optimal control techniques. Additionally, the proposed technique optimizes the operational cost of the system by merging the observations of both agents. System stability is further validated by comparing the integral square error (ISE) with that of prior model-based techniques.
AB - This paper proposes a novel model-free control scheme for an islanded microgrid (MG) using value- and policy-based deep reinforcement learning (DRL) for secondary frequency regulation, considering the nonlinear behavior of renewable energy sources (RES), load variations, and the limitations of model-based techniques. The proposed DRL framework incorporates both a value-based deep Q-network (DQN) agent and a policy-based proximal policy optimization (PPO) agent. The employed MG test-bench environment includes 300 small-capacity distributed energy resources. The proposed model-free control is compared with an existing distributed model predictive controller (MPC), an adaptive linear quadratic regulator (LQR), and a proportional–integral–derivative controller with a first-order filter (PIDF). The results show 91.9% and 61.6% reductions in average secondary frequency deviation using the value- and policy-based DRL methodologies, respectively, compared with the model-based optimal control techniques. Additionally, the proposed technique optimizes the operational cost of the system by merging the observations of both agents. System stability is further validated by comparing the integral square error (ISE) with that of prior model-based techniques.
KW - Deep reinforcement learning (DRL)
KW - Energy access
KW - Distributed energy resources (DER)
KW - Microgrid
KW - Renewable energy resources
UR - https://www.scopus.com/pages/publications/105019383932
U2 - 10.1007/s13369-025-10740-7
DO - 10.1007/s13369-025-10740-7
M3 - Article
AN - SCOPUS:105019383932
SN - 2193-567X
JO - Arabian Journal for Science and Engineering
JF - Arabian Journal for Science and Engineering
ER -