Computer Science Faculty Publications and Presentations
Offline Reinforcement Learning Approaches for Safe and Effective Smart Grid Control
Document Type
Conference Proceeding
Publication Date
10-2025
Abstract
This paper explores the under-examined potential of offline reinforcement learning algorithms in the context of Smart Grids. While online methods, such as Proximal Policy Optimization (PPO), have been extensively studied, offline methods, which inherently avoid real-time interaction, may offer practical safety benefits in scenarios like power grid management, where suboptimal policies could lead to severe consequences. To investigate this, we conducted experiments in Grid2Op environments of varying grid complexity, including differences in size and topology. Our results suggest that offline algorithms can match or exceed the performance of online methods, particularly as grid complexity increases. Additionally, we observed that the diversity of the training data plays a crucial role: data collected through environment sampling yielded better results than data generated by trained models. These findings underscore the value of further exploring offline approaches in safety-critical applications.
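The environment-sampling strategy mentioned in the abstract can be sketched in a few lines: a random (and therefore diverse) policy interacts with the environment once, and the resulting transitions are logged to a static dataset that an offline learner later consumes without further interaction. The `GridEnvStub` below is a hypothetical stand-in for a Grid2Op environment, which exposes `reset()` and `step(action)` in the same Gym-like shape; this is a minimal illustration of the data-collection scheme, not the paper's actual pipeline.

```python
import random


class GridEnvStub:
    """Hypothetical stand-in for a Gym-like Grid2Op environment."""

    def __init__(self, horizon=50, n_actions=4):
        self.horizon = horizon      # episode length before termination
        self.n_actions = n_actions  # size of the discrete action space
        self.t = 0

    def reset(self):
        self.t = 0
        return (self.t,)  # toy observation: just the timestep

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 0 else 0.5  # toy reward signal
        done = self.t >= self.horizon
        return (self.t,), reward, done, {}


def collect_offline_dataset(env, n_episodes=10, seed=0):
    """Log (obs, action, reward, next_obs, done) transitions produced by a
    uniformly random policy -- the 'environment sampling' scheme."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            action = rng.randrange(env.n_actions)  # diverse random actions
            next_obs, reward, done, _ = env.step(action)
            dataset.append((obs, action, reward, next_obs, done))
            obs = next_obs
    return dataset


dataset = collect_offline_dataset(GridEnvStub())
print(len(dataset))  # 10 episodes x 50 steps = 500 transitions
```

An offline algorithm such as Conservative Q-Learning or behavioral cloning would then train purely on `dataset`, never calling `env.step` itself, which is the safety property the paper highlights for grid control.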
Recommended Citation
Peredo, Angel, Hector Lugo, Christian Narcia-Macias, Jose Espinoza, Daniel Masamba, Adan Gandarilla, Erik Enriquez, and Dong-Chul Kim. "Offline Reinforcement Learning Approaches for Safe and Effective Smart Grid Control." In International Congress on Information and Communication Technology, pp. 457-469. Singapore: Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-6429-0_38
Publication Title
Proceedings of Tenth International Congress on Information and Communication Technology
DOI
10.1007/978-981-96-6429-0_38

Comments
https://rdcu.be/eUrY2