International Journal of Control, Automation, and Systems 2024; 22(9): 2860-2870
Published online September 1, 2024 https://doi.org/10.1007/s12555-023-0686-y
Copyright © The International Journal of Control, Automation, and Systems.
Sujin Baek, Ahyeon Kim, Jin-Young Choi, Eunju Ha, and Jong-Wook Kim*
Dong-A University
Retargeting human motions to those of a humanoid robot is a difficult task that involves complex humanoid models and intensive geometric calculations, while also requiring high joint recognition accuracy. Herein, we propose a new motion retargeting framework for whole-body motions captured by a monocular camera, composed of three per-frame modules: 1) the extraction of 3D human joint coordinates using a pose AI package, 2) the calculation of human joint angles by fitting the humanoid model to the skeleton model obtained from the pose AI using a global optimization method, and 3) the transmission of the estimated joint angles as a pose command to a full-scale humanoid robot. The results suggest that the proposed framework can reproduce human-like motion sequences while reflecting certain limitations in the robot’s joint angles due to intentionally set hardware limitations and constraints. Using the proposed method, the robot can directly mimic human motion at the joint angle level based solely on images taken by an RGB camera or video files. These findings suggest that it would be useful to construct big data consisting of joint angle vectors for various human poses and joint angle trajectories for human motions, so that, as one example application in the near future, a robot butler could refer to these big data when performing various motions at a person’s home or in the office.
Keywords: Humanoid robot, motion retargeting, optimization method, pose estimation.
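As a minimal illustration of the second module (fitting joint angles to an observed skeleton by global optimization), the sketch below matches a toy two-link planar arm to target elbow and wrist positions using SciPy's differential evolution. The link lengths, joint bounds, and target points are illustrative assumptions only; the paper's humanoid model, pose AI package, and optimizer are more elaborate and are not reproduced here.

import numpy as np
from scipy.optimize import differential_evolution

L1, L2 = 0.30, 0.25  # illustrative upper-arm / forearm lengths [m], not from the paper

def forward_kinematics(angles):
    """Map shoulder/elbow angles to elbow and wrist positions of a planar 2-link arm."""
    q1, q2 = angles
    elbow = np.array([L1 * np.cos(q1), L1 * np.sin(q1)])
    wrist = elbow + np.array([L2 * np.cos(q1 + q2), L2 * np.sin(q1 + q2)])
    return np.vstack([elbow, wrist])

def fit_joint_angles(target_points, bounds):
    """Fit joint angles so the model skeleton matches the observed skeleton (module 2)."""
    def cost(angles):
        # Sum of squared position errors between model joints and pose-AI joints.
        return np.sum((forward_kinematics(angles) - target_points) ** 2)
    result = differential_evolution(cost, bounds, seed=0, tol=1e-8)
    return result.x

if __name__ == "__main__":
    # Target elbow/wrist positions, standing in for the skeleton from the pose AI.
    target = forward_kinematics([0.6, -0.4])
    # Joint bounds stand in for the robot's hardware angle limits.
    bounds = [(-np.pi / 2, np.pi / 2), (-np.pi, 0.0)]
    print(fit_joint_angles(target, bounds))  # recovers approximately [0.6, -0.4]

In the full framework, the fitted joint-angle vector for each frame would then be sent to the robot as a pose command (module 3), so the joint limits imposed as optimization bounds directly reflect the robot's hardware constraints.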