International Journal of Control, Automation, and Systems 2025; 23(1): 1-40
https://doi.org/10.1007/s12555-024-0990-1
© The International Journal of Control, Automation, and Systems
The success of reinforcement learning (RL) combined with deep neural networks has led to the development of numerous RL algorithms that have demonstrated remarkable performance across various domains. However, despite its potential in process control, relatively limited research has explored RL in this field. This paper aims to bridge the gap between RL and process control, outlining potential applications and offering insights for process control engineers. In this review, we first summarize previous efforts to apply RL to process control. Next, we provide an overview of RL concepts and categorize recent RL algorithms, analyzing the strengths and weaknesses of each category. We implement fourteen RL algorithms and apply them to six relevant benchmark environments, conducting quantitative analyses to identify the most suitable approaches for specific process control problems. Finally, we draw conclusions and outline future research directions to advance RL's application in process control.
Keywords: Approximate dynamic programming, optimal control, process control, reinforcement learning.
Published online January 1, 2025
Joonsoo Park, Hyein Jung, Jong Woo Kim*, and Jong Min Lee*
Incheon National University, Seoul National University