Abstract
This manuscript introduces a generalized Markov Decision Process (MDP) model for dynamic capacity planning under stochastic, time-nonhomogeneous demand, wherein system capacity may be flexibly increased or decreased throughout a finite planning horizon. The model incorporates investment, disinvestment, maintenance, operational, and shortage costs, as well as a salvage value at the end of the planning horizon. Under realistic modeling conditions, we investigate the structural properties of the optimal policy and demonstrate its monotonic structure. By leveraging these properties, we propose a revised value iteration algorithm that exploits the intrinsic structure of the problem, thereby achieving greater computational efficiency than traditional dynamic programming techniques. The proposed model is applicable across a range of sectors, including manufacturing systems, cloud-computing services, logistics systems, healthcare resource management, power capacity planning, and other intelligent infrastructures driven by Industry 4.0.
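The key computational idea in the abstract — restricting the action search using the monotone structure of the optimal policy — can be illustrated with a minimal finite-horizon sketch. This is not the authors' algorithm or cost data: the cost parameters, Poisson demand model, and capacity discretization below are all hypothetical, and the speedup shown (starting each state's action search at the previous state's optimal action) is only a generic instance of monotone-policy value iteration.

```python
# Illustrative sketch (hypothetical parameters, NOT the paper's model):
# finite-horizon value iteration for a toy capacity-planning MDP.
# If the optimal target capacity is nondecreasing in current capacity,
# the action search for state s can start at the action chosen for s-1,
# shrinking the search range compared with plain value iteration.
import numpy as np

rng = np.random.default_rng(0)
S = 20          # capacity levels 0..S-1 (assumed discretization)
T = 12          # planning periods
c_inv, c_dis, c_maint, c_short = 5.0, 2.0, 1.0, 8.0  # assumed unit costs
salvage = 1.5   # assumed end-of-horizon salvage value per capacity unit
demand_mean = np.linspace(4, 14, T)  # time-nonhomogeneous demand (assumed)

def stage_cost(t, cap_next, cap_prev):
    """Investment/disinvestment + maintenance + expected shortage cost."""
    change = cap_next - cap_prev
    invest = c_inv * max(change, 0) + c_dis * max(-change, 0)
    d = rng.poisson(demand_mean[t], 1000)  # Monte Carlo demand sample
    shortage = c_short * np.maximum(d - cap_next, 0).mean()
    return invest + c_maint * cap_next + shortage

# Terminal value: salvage reduces total cost (we minimize cost-to-go).
V = -salvage * np.arange(S, dtype=float)
policy = np.zeros((T, S), dtype=int)
for t in reversed(range(T)):
    V_new = np.empty(S)
    a_lo = 0  # monotone structure: lower bound on the optimal action
    for s in range(S):
        costs = [stage_cost(t, a, s) + V[a] for a in range(a_lo, S)]
        best = int(np.argmin(costs)) + a_lo
        policy[t, s], V_new[s] = best, costs[best - a_lo]
        a_lo = best  # next state's search starts at this action
    V = V_new
```

By construction, the computed policy is nondecreasing in the capacity state at every period, and the inner loop examines far fewer state-action pairs than exhaustive enumeration would.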
| Original language | English |
|---|---|
| Article number | 3865 |
| Journal | Mathematics |
| Volume | 13 |
| Issue number | 23 |
| DOIs | |
| State | Published - Dec 2025 |
Bibliographical note
Publisher Copyright: © 2025 by the authors.
Keywords
- capacity planning
- decision making under uncertainty
- revised value iteration algorithm
- structured optimal policy
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- General Mathematics
- Engineering (miscellaneous)