International Journal of Control, Automation, and Systems 2024; 22(12): 3762-3776
Published online December 1, 2024; https://doi.org/10.1007/s12555-024-0367-5
© The International Journal of Control, Automation, and Systems
Shayan Sepahvand*, Farrokh Janabi-Sharifi, Houman Masnavi, Farhad Aghili, and Niloufar Amiri
Toronto Metropolitan University
The problem of robust image-based visual servoing of an aerial robot for tracking a moving vehicle is addressed in this paper. First, a perspective camera projection model is adopted, with the camera frame rigidly attached to the quadrotor at a fixed relative pose. Next, several world points on the target object are projected onto the image plane so that features can be extracted from the projected point coordinates. The chosen features are functions of the zeroth, first, and second moments of the acquired binary image and provide signals associated with the position and yaw angle of the aerial robot in Cartesian space. The visual feature error vector generates the desired thrust vector that stabilizes the position and yaw-angle errors of the platform. However, significant uncertainty acts on the system and degrades the tracking performance of the controller. This paper contributes a self-organizing neural network (SONN) that compensates for uncertainties stemming not only from changes in the velocity of the moving vehicle but also from payload variations. Additionally, extensive ROS Gazebo simulations and real-world experiments are conducted to assess the effectiveness of the method, in contrast to many existing studies that rely solely on MATLAB-based simulations. Finally, the stability of the closed-loop system is proven through Lyapunov theory with the uncertainty terms taken into account.
Keywords: Image-based visual servoing (IBVS), Lyapunov theory, robust control, ROS Gazebo simulation, self-organizing neural networks.
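To make the moment-based features mentioned in the abstract concrete, the sketch below computes candidate position and yaw cues from the zeroth, first, and second moments of a binary image, using the standard image-moment definitions. This is a minimal illustration only; the function name and the omission of the paper's exact feature normalization (e.g., any depth normalization of the area feature) are assumptions, not the authors' implementation.

```python
import numpy as np

def moment_features(binary_img: np.ndarray):
    """Illustrative IBVS cues from zeroth, first, and second image moments.

    Hypothetical sketch: the paper's exact feature definitions and
    normalization are not reproduced here.
    """
    ys, xs = np.nonzero(binary_img)       # pixel coordinates of the target blob
    m00 = float(xs.size)                  # zeroth moment: blob area (depth cue)
    if m00 == 0:
        raise ValueError("no target pixels in image")
    xg, yg = xs.mean(), ys.mean()         # first moments -> centroid (m10/m00, m01/m00)
    mu20 = np.mean((xs - xg) ** 2)        # second central moments
    mu02 = np.mean((ys - yg) ** 2)
    mu11 = np.mean((xs - xg) * (ys - yg))
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # blob orientation -> yaw cue
    return xg, yg, m00, theta             # lateral cues, area cue, yaw cue
```

In a moment-based IBVS loop of the kind the abstract describes, the error between such features and their desired values would drive the thrust and yaw commands; the controller and SONN compensation themselves are developed in the body of the paper.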