Regular Papers

International Journal of Control, Automation and Systems 2020; 18(6): 1593-1604

Published online December 26, 2019

https://doi.org/10.1007/s12555-019-0120-7

© The International Journal of Control, Automation, and Systems

Model-free Adaptive Optimal Control of Episodic Fixed-horizon Manufacturing Processes Using Reinforcement Learning

Johannes Dornheim*, Norbert Link, and Peter Gumbsch

Karlsruhe University of Applied Sciences

Abstract

A self-learning optimal control algorithm for episodic fixed-horizon manufacturing processes with time-discrete control actions is proposed and evaluated on a simulated deep drawing process. The control model is built during consecutive process executions under optimal control via reinforcement learning, using the measured product quality as a reward after each process execution. A priori model formulation, which is required by algorithms from model predictive control and approximate dynamic programming, thereby becomes obsolete. This avoids several difficulties, namely in system identification, accurate modeling, and runtime complexity, that arise when dealing with processes subject to nonlinear dynamics and stochastic influences. Instead of using pre-created process and observation models, value-function-based reinforcement learning algorithms build functions of expected future reward, which are used to derive optimal process control decisions. The expectation functions are learned online by interacting with the process. The proposed algorithm takes stochastic variations of the process conditions into account and is able to cope with partial observability. A Q-learning-based method for adaptive optimal control of partially observable episodic fixed-horizon manufacturing processes is developed and studied. The resulting algorithm is instantiated and evaluated by applying it to a simulated stochastic optimal control problem in sheet metal deep drawing.

Keywords: Adaptive optimal control, manufacturing process optimization, model-free optimal control, reinforcement learning.
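To make the scheme described in the abstract concrete, the following is a minimal sketch of an episodic fixed-horizon Q-learning loop in which the only reward is the product quality measured after a complete process execution. The environment interface (reset, step, measure_quality), the discrete action set, and the tabular (time step, observation) indexing are illustrative assumptions for this sketch, not the authors' implementation.

    # Minimal sketch of episodic fixed-horizon Q-learning with a terminal
    # reward (measured product quality). Environment interface and tabular
    # state encoding are assumptions, not the paper's implementation.
    import random
    from collections import defaultdict

    GAMMA = 1.0          # undiscounted: finite, fixed horizon
    ALPHA = 0.1          # learning rate
    EPSILON = 0.2        # epsilon-greedy exploration rate
    HORIZON = 5          # discrete control steps per process execution
    ACTIONS = [0, 1, 2]  # hypothetical discrete control levels

    # Q is indexed by (time step, observation, action) to respect the
    # fixed horizon; observations are assumed hashable (e.g., discretized).
    Q = defaultdict(float)

    def greedy_action(t, obs):
        return max(ACTIONS, key=lambda a: Q[(t, obs, a)])

    def run_episode(env):
        """One process execution; reward arrives only at the end."""
        obs = env.reset()
        trajectory = []
        for t in range(HORIZON):
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = greedy_action(t, obs)
            next_obs = env.step(action)  # assumed interface
            trajectory.append((t, obs, action, next_obs))
            obs = next_obs
        quality = env.measure_quality()  # terminal reward after execution
        # Backward Q-update: intermediate rewards are zero, the terminal
        # reward is the measured product quality.
        for t, s, a, s_next in reversed(trajectory):
            if t == HORIZON - 1:
                target = quality
            else:
                target = GAMMA * max(Q[(t + 1, s_next, b)] for b in ACTIONS)
            Q[(t, s, a)] += ALPHA * (target - Q[(t, s, a)])
        return quality

Indexing the value function by the time step is the natural tabular treatment of a fixed horizon, since the expected future reward of the same observation differs depending on how many control steps remain.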

