International Journal of Control, Automation, and Systems 2024; 22(8): 2602-2612
Published online August 1, 2024. https://doi.org/10.1007/s12555-023-0358-y
Copyright © The International Journal of Control, Automation, and Systems.
Xuan Zheng, Shuaiming Yuan, and Pengzhan Chen*
Taizhou University
Abstract: The increasing complexity of robotic grasping environments places higher demands on the grasping strategies of manipulators. Grasping cluttered multi-class objects is particularly challenging because the objects are stacked and occlude one another, making it difficult for a robot to find a suitable grasp position. Deep reinforcement learning with the DQN algorithm has been used to learn pushing and grasping strategies for manipulating cluttered multi-class objects, but it suffers from long training times and low success rates. To address this problem, we adopt two fully convolutional networks (FCNs) that map color and depth images to two actions: pushing and grasping. These networks are trained with an improved soft actor-critic algorithm that incorporates automatic entropy regularization, a regularized objective function, and clipped double Q-learning. Pushing and grasping synergies are learned from dense reward feedback. Simulation experiments demonstrate that the learning process converges quickly and stably, with a grasp success rate of up to 83.3%. We further demonstrate the generalization ability and strong performance of our models on scenes containing novel objects that the robot has never grasped before. Finally, real-world experiments with models trained in simulation are conducted to test grasping performance on manually arranged scenes.
Keywords: Autonomous grasping, deep reinforcement learning, robot, soft actor-critic.
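For concreteness, the following is a minimal sketch, not the authors' released code, of a soft actor-critic update combining two of the ingredients named in the abstract: clipped double Q-learning and automatic entropy regularization. It assumes PyTorch and a discrete action space (analogous to pixel-wise push/grasp choices); the small MLP networks standing in for the FCNs, the hyperparameters, and the 0.98·log(n) entropy target are illustrative assumptions, and the paper's regularized objective term is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim):
    # Small stand-in for the paper's fully convolutional networks.
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

class DiscreteSAC:
    def __init__(self, obs_dim, n_actions, gamma=0.99, tau=0.005, lr=3e-4):
        self.q1, self.q2 = mlp(obs_dim, n_actions), mlp(obs_dim, n_actions)
        self.q1_targ, self.q2_targ = mlp(obs_dim, n_actions), mlp(obs_dim, n_actions)
        self.q1_targ.load_state_dict(self.q1.state_dict())
        self.q2_targ.load_state_dict(self.q2.state_dict())
        self.pi = mlp(obs_dim, n_actions)
        # Learnable temperature for automatic entropy regularization.
        self.log_alpha = torch.zeros(1, requires_grad=True)
        self.target_entropy = 0.98 * torch.log(torch.tensor(float(n_actions)))
        self.gamma, self.tau = gamma, tau
        self.q_opt = torch.optim.Adam(
            list(self.q1.parameters()) + list(self.q2.parameters()), lr=lr)
        self.pi_opt = torch.optim.Adam(self.pi.parameters(), lr=lr)
        self.a_opt = torch.optim.Adam([self.log_alpha], lr=lr)

    def update(self, obs, act, rew, next_obs, done):
        alpha = self.log_alpha.exp().detach()
        # Critic target: clipped double Q-learning takes the minimum of the
        # two target critics to curb value overestimation.
        with torch.no_grad():
            next_probs = F.softmax(self.pi(next_obs), dim=-1)
            next_logp = torch.log(next_probs + 1e-8)
            q_min = torch.min(self.q1_targ(next_obs), self.q2_targ(next_obs))
            v_next = (next_probs * (q_min - alpha * next_logp)).sum(-1)
            target = rew + self.gamma * (1.0 - done) * v_next
        q1 = self.q1(obs).gather(1, act.unsqueeze(1)).squeeze(1)
        q2 = self.q2(obs).gather(1, act.unsqueeze(1)).squeeze(1)
        q_loss = F.mse_loss(q1, target) + F.mse_loss(q2, target)
        self.q_opt.zero_grad(); q_loss.backward(); self.q_opt.step()
        # Actor: maximize the entropy-regularized expected Q value.
        probs = F.softmax(self.pi(obs), dim=-1)
        logp = torch.log(probs + 1e-8)
        q_min = torch.min(self.q1(obs), self.q2(obs)).detach()
        pi_loss = (probs * (alpha * logp - q_min)).sum(-1).mean()
        self.pi_opt.zero_grad(); pi_loss.backward(); self.pi_opt.step()
        # Temperature: drive policy entropy toward the target entropy.
        entropy = -(probs * logp).sum(-1).detach()
        alpha_loss = (self.log_alpha * (entropy - self.target_entropy)).mean()
        self.a_opt.zero_grad(); alpha_loss.backward(); self.a_opt.step()
        # Polyak-average the target critics.
        for targ, src in ((self.q1_targ, self.q1), (self.q2_targ, self.q2)):
            for tp, sp in zip(targ.parameters(), src.parameters()):
                tp.data.mul_(1.0 - self.tau).add_(self.tau * sp.data)

The min over the two target critics is what "clipped double Q-learning" refers to: it biases the bootstrap target low to counteract the overestimation that slows convergence, while the learned temperature keeps exploration near a fixed entropy target instead of relying on a hand-tuned coefficient.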