Regular Papers

International Journal of Control, Automation, and Systems 2025; 23(1): 315-331

Published online January 1, 2025 https://doi.org/10.1007/s12555-024-0231-7

© The International Journal of Control, Automation, and Systems

Sliding Mode-based Integral Reinforcement Learning Event Triggered Control

Chao Jia*, Xinyu Li, Hongkun Wang, and Zijian Song

Tianjin University of Technology

Abstract

For a class of continuous-time nonlinear systems with input constraints, this paper proposes a novel sliding mode (SM)-based integral reinforcement learning (IRL) event-triggered control (ETC) scheme. First, an SM surface-based performance index function is designed, and the Hamiltonian equation is solved by a policy iteration algorithm. Second, the IRL technique is used to derive the integral Bellman equation, so the controller does not require knowledge of the drift dynamics. Third, ETC is introduced to reduce the communication burden, and a triggering condition is designed to guarantee the asymptotic stability of the system. A critic neural network (NN) is then used to learn the optimal value function and obtain the optimal tracking controller. Finally, the asymptotic stability of the whole closed-loop system and the uniform ultimate boundedness of the critic NN weights are proved via Lyapunov theory. Simulation and comparison results demonstrate the effectiveness of the proposed method.

Keywords: Adaptive dynamic programming, event-triggered control, integral reinforcement learning, sliding mode.
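The event-triggered mechanism summarized in the abstract can be sketched in a few lines (a minimal illustration on a toy linear plant; the matrices, gain K, and threshold sigma below are assumptions for the sketch, not the paper's design):

```python
import numpy as np

# Minimal event-triggered state-feedback sketch on a toy 2-state
# linear plant. All numbers (A, B, K, sigma) are illustrative
# assumptions, not taken from the paper.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([0.0, 1.0])
K = np.array([1.0, 1.0])         # assumed stabilizing feedback gain
sigma = 0.1                      # trigger sensitivity (assumption)

dt, steps = 0.001, 20000
x = np.array([1.0, -0.5])
x_k = x.copy()                   # last sampled (held) state
events = 0

for _ in range(steps):
    # Triggering condition: resample the state only when the gap
    # between the held state and the current state grows relative
    # to the current state norm.
    if np.linalg.norm(x_k - x) > sigma * np.linalg.norm(x):
        x_k = x.copy()
        events += 1
    u = -K @ x_k                 # zero-order-hold control input
    x = x + dt * (A @ x + B * u) # forward-Euler integration

print(events, np.linalg.norm(x))
```

Because the threshold shrinks with the state norm, the trigger stays active near the origin, which is in line with triggering conditions designed for asymptotic (rather than merely practical) stability; control updates occur far less often than the sampling rate.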

IJCAS, January 2025, Vol. 23, No. 1, pp. 1~88


eISSN 2005-4092
pISSN 1598-6446