Special Issue: ICCAS 2023

International Journal of Control, Automation, and Systems 2024; 22(11): 3303-3313

Published online November 1, 2024 https://doi.org/10.1007/s12555-024-0045-7

© The International Journal of Control, Automation, and Systems

Image Quality Assessment in Visual Reinforcement Learning for Fast-moving Targets

Sanghyun Ryoo, Jiseok Jeong, and Soohee Han*

POSTECH

Abstract

Visual reinforcement learning (RL) enables agents to develop optimal control strategies directly from image data. However, most existing research concentrates on numerical simulations of learning algorithms, often neglecting the challenges encountered in real-world scenarios. To address this gap, this study introduces a semi-real environment that combines MuJoCo Gym simulation with a real camera sensor, creating a more realistic, augmented simulation for state-of-the-art visual RL algorithms. The usefulness of this semi-real environment is first demonstrated through conventional camera-free learning, revealing that standard RL suffers substantial performance degradation, especially with fast-moving objects, due to motion blur. Building on this semi-real environment, the study also presents the deceleration visual RL (DVRL) algorithm, which incorporates a novel deep learning-based image quality assessment to evaluate whether acquired images are suitable for policy learning. The DVRL algorithm assesses image quality in real time and manages fast-moving targets by adjusting their speed, thereby balancing speed against image quality to optimize policy learning and achieve superior performance over baseline models.

Keywords: Deep learning, representation learning, visual reinforcement learning, visual servoing.
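To illustrate the deceleration idea summarized in the abstract, the following is a minimal sketch of how a per-frame image quality score might gate the commanded speed. The quality model, threshold, and deceleration factor below are placeholder assumptions for illustration only, not the authors' implementation; the paper's actual IQA network is deep-learning based.

```python
import numpy as np

def iqa_score(frame: np.ndarray) -> float:
    """Placeholder image quality model: returns a score in (0, 1).
    Approximated here by a simple sharpness proxy (variance of gradients),
    standing in for the learned IQA network described in the paper."""
    f = frame.astype(np.float32)
    gx = np.diff(f, axis=0)
    gy = np.diff(f, axis=1)
    sharpness = gx.var() + gy.var()
    return float(np.tanh(sharpness / 1e3))  # squash to (0, 1)

def dvrl_action(policy_action: np.ndarray, frame: np.ndarray,
                quality_threshold: float = 0.5,
                decel_factor: float = 0.5) -> np.ndarray:
    """If the current frame looks too blurred to trust, scale down the
    commanded speed (deceleration) rather than acting at full speed on
    degraded input."""
    if iqa_score(frame) < quality_threshold:
        return policy_action * decel_factor
    return policy_action

# Toy usage: a low-contrast (blur-like) frame triggers deceleration.
rng = np.random.default_rng(0)
sharp_frame = rng.integers(0, 255, size=(84, 84)).astype(np.uint8)
blurred_frame = np.full((84, 84), 127, dtype=np.uint8)
action = np.array([1.0, -0.5])
print(dvrl_action(action, sharp_frame))    # full-speed action
print(dvrl_action(action, blurred_frame))  # decelerated action
```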

Article

Special Issue: ICCAS 2023

International Journal of Control, Automation, and Systems 2024; 22(11): 3303-3313

Published online November 1, 2024 https://doi.org/10.1007/s12555-024-0045-7

Copyright © The International Journal of Control, Automation, and Systems.

Image Quality Assessment in Visual Reinforcement Learning for Fast-moving Targets

Sanghyun Ryoo, Jiseok Jeong, and Soohee Han*

POSTECH

Abstract

Visual reinforcement learning (RL) enables agents to develop optimal control strategies directly from image data. However, most existing research primarily concentrates on numerical simulations for learning algorithms, often neglecting the challenges encountered in real-world scenarios. To address this gap, this study introduces a semi-real environment that combines MuJoCo Gym simulation with a real camera sensor, aiming to create a more realistic augmented simulation for state-of-the-art visual RL algorithms. The usefulness of this semi-real environment was initially demonstrated through conventional camera-free learning, revealing that general RL experiences substantial performance degradation, especially with fast-moving objects, due to motion blur effects. Building on this semi-real environment, the study also presents the deceleration visual RL (DVRL) algorithm, which incorporates a novel deep learning-based image quality assessment to evaluate the suitability of the acquired data for learning policies. The DVRL algorithm performs real-time image quality assessment and manages fast-moving targets by adjusting their speed, thereby balancing speed and image quality to optimize policy learning and achieve superior performance compared to baseline models.

Keywords: Deep Learning, representation learning, visual reinforcement learning, visual servoing.

IJCAS
November 2024

Vol. 22, No. 11, pp. 3253~3544

Stats or Metrics

Share this article on

  • line

IJCAS

eISSN 2005-4092
pISSN 1598-6446