Special Issue: ICCAS 2024

International Journal of Control, Automation, and Systems 2025; 23(2): 530-540

Published online February 1, 2025 https://doi.org/10.1007/s12555-024-0527-7

© The International Journal of Control, Automation, and Systems

LUOR: A Framework for Language Understanding in Object Retrieval and Grasping

Dongmin Yoon, Seonghun Cha, and Yoonseon Oh*

Hanyang University

Abstract

In human-centered environments, assistive robots are required to understand verbal commands to retrieve and grasp objects within complex scenes. Previous research on natural language object retrieval tasks has mainly focused on commands that explicitly mention an object’s name. However, in real-world environments, responding to implicit commands based on an object’s function is also essential. To address this problem, we propose a new dataset consisting of 712 verb-object pairs containing 78 verbs for 244 ImageNet classes and 336 verb-object pairs covering 54 verbs for 138 ObjectNet classes. Utilizing this dataset, we develop a novel language understanding object retrieval (LUOR) module by fine-tuning the CLIP text encoder. This approach enables effective learning for the downstream task of object retrieval while preserving object classification performance. Additionally, we integrate LUOR with a YOLOv3-based multi-task detection (MTD) module for simultaneous object and grasp pose detection. This integration enables a robot manipulator to accurately grasp objects based on verbal commands in complex environments containing multiple objects. Our results demonstrate that LUOR outperforms CLIP in both explicit and implicit retrieval tasks while preserving object classification accuracy on both the ImageNet and ObjectNet datasets. Furthermore, the real-world applicability of the integrated system is demonstrated through experiments with a Franka Panda manipulator.

Keywords: Grasp detection, multi-modal learning, robotic object retrieval.
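
As a rough illustration of the retrieval step described in the abstract, the following sketch scores candidate object labels against a verbal command using an off-the-shelf CLIP text encoder from Hugging Face Transformers. The checkpoint name, prompt template, and helper function are illustrative assumptions only; LUOR instead uses a CLIP text encoder fine-tuned on the proposed verb-object dataset and couples it with the MTD module for grasping.

# Minimal sketch (not the authors' code): match a verbal command to candidate
# object labels with an off-the-shelf CLIP text encoder. The checkpoint and
# prompt template below are illustrative assumptions.
import torch
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")

def retrieve(command, candidate_labels):
    """Return the candidate label whose text embedding best matches the command."""
    texts = [command] + ["a photo of a " + label for label in candidate_labels]
    inputs = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        embeds = text_model(**inputs).text_embeds        # shape (1 + N, dim)
    embeds = embeds / embeds.norm(dim=-1, keepdim=True)  # unit-normalize for cosine similarity
    scores = embeds[0] @ embeds[1:].T                    # command vs. each label prompt
    return candidate_labels[scores.argmax().item()]

# An implicit, function-based command resolves to the most plausible object:
print(retrieve("bring me something to cut paper with", ["scissors", "mug", "banana"]))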

