International Journal of Control, Automation, and Systems 2025; 23(1): 315-331
Published online January 1, 2025 https://doi.org/10.1007/s12555-024-0231-7
Copyright © The International Journal of Control, Automation, and Systems.
Chao Jia*, Xinyu Li, Hongkun Wang, and Zijian Song
Tianjin University of Technology
For a class of continuous-time nonlinear systems with input constraints, a novel sliding mode (SM)-based event-triggered control (ETC) scheme using integral reinforcement learning (IRL) is proposed in this paper. First, an SM surface-based performance index function is designed, and the Hamiltonian equation is solved by the policy iteration algorithm. Second, the IRL technique is utilized to derive the integral Bellman equation, so the controller does not need to know the drift dynamics. Third, ETC is introduced to reduce the communication burden, and a triggering condition is designed to ensure the asymptotic stability of the system. Then, a critic neural network (NN) is employed to learn the optimal value function and thus obtain the optimal tracking controller. Finally, the asymptotic stability of the whole closed-loop system and the uniform ultimate boundedness of the critic NN weights are proved based on Lyapunov theory. Simulation and comparison results demonstrate the effectiveness of the proposed method.
Keywords: Adaptive dynamic programming, event-triggered control, integral reinforcement learning, sliding mode.
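For context, the following is a minimal sketch, in standard form, of the two objects the abstract refers to: an SM surface-based performance index with a nonquadratic penalty that accounts for the input constraint, and the integral (IRL) Bellman equation that removes the drift dynamics from the controller design. The symbols s(t) (sliding variable), Q and R (weighting terms), u_max (input bound), and T (reinforcement interval) are assumptions introduced here for illustration; the paper's exact definitions may differ.

% Standard forms assumed for illustration; not necessarily the paper's exact formulation.
\begin{align}
  % SM surface-based performance index over the sliding variable s(t)
  V\bigl(s(t)\bigr) &= \int_{t}^{\infty} \Bigl( s^{\top}(\tau)\, Q\, s(\tau) + U\bigl(u(\tau)\bigr) \Bigr) \mathrm{d}\tau, \\
  % nonquadratic penalty commonly used to enforce the input constraint |u| \le u_{\max}
  U(u) &= 2 \int_{0}^{u} u_{\max} \tanh^{-1}\!\bigl(v / u_{\max}\bigr)\, R \,\mathrm{d}v, \\
  % integral (IRL) Bellman equation over a reinforcement interval T,
  % which requires no knowledge of the drift dynamics
  V\bigl(s(t)\bigr) &= \int_{t}^{t+T} \Bigl( s^{\top} Q\, s + U(u) \Bigr) \mathrm{d}\tau + V\bigl(s(t+T)\bigr).
\end{align}

Under an event-triggered scheme of the kind described in the abstract, the control input appearing in these expressions is updated only at the triggering instants and held constant between events.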