Survey Paper

International Journal of Control, Automation, and Systems 2025; 23(1): 1-40

Published online January 1, 2025. https://doi.org/10.1007/s12555-024-0990-1

© The International Journal of Control, Automation, and Systems

Reinforcement Learning for Process Control: Review and Benchmark Problems

Joonsoo Park, Hyein Jung, Jong Woo Kim*, and Jong Min Lee*

Incheon National University, Seoul National University

Abstract

The success of reinforcement learning (RL) combined with deep neural networks has led to the development of numerous RL algorithms that have demonstrated remarkable performance across various domains. However, despite its potential in process control, relatively limited research has explored RL in this field. This paper aims to bridge the gap between RL and process control, providing potential applications and insights for process control engineers. In this review, we first summarize previous efforts to apply RL to process control. Next, we provide an overview of RL concepts and categorize recent RL algorithms, analyzing the strengths and weaknesses of each category. We implement fourteen RL algorithms and apply them to six relevant benchmark environments, conducting quantitative analyses to identify the most suitable approaches for specific process control problems. Finally, we draw conclusions and outline future research directions to advance RL’s application in process control.

Keywords: Approximate dynamic programming, optimal control, process control, reinforcement learning.

