🤖 GraspMAS: Zero-Shot Language-driven Grasp Detection with Multi-Agent System

Quang Nguyen1,2       Tri Le1       Huy Nguyen3       Thieu Vo4
Tung Ta5               Baoru Huang6               Minh Vu3               Anh Nguyen6

1FPT Software AI Center   2Hanoi University of Science and Technology  
3TU Wien   4National University of Singapore   5University of Tokyo   6University of Liverpool

Abstract

Language-driven grasp detection has the potential to revolutionize human-robot interaction by allowing robots to understand and execute grasping tasks based on natural language commands. However, existing approaches face two key challenges. First, they often struggle to interpret complex text instructions or operate ineffectively in densely cluttered environments. Second, most methods require a training or finetuning step to adapt to new domains, limiting their generalization in real-world applications. In this paper, we introduce GraspMAS, a new multi-agent system framework for language-driven grasp detection. GraspMAS is designed to reason through ambiguities and improve decision-making in real-world scenarios. Our framework consists of three specialized agents: Planner, responsible for strategizing complex queries; Coder, which generates and executes source code; and Observer, which evaluates the outcomes and provides feedback. Intensive experiments on two large-scale datasets demonstrate that our GraspMAS significantly outperforms existing baselines. Additionally, robot experiments conducted in both simulation and real-world settings further validate the effectiveness of our approach.
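To make the agent loop described above concrete, here is a minimal sketch in Python of how a Planner-Coder-Observer cycle could be wired together. This is not the authors' implementation: the names `plan`, `code_and_run`, `observe`, and `Feedback` are hypothetical placeholders for the three agents, which in practice would be backed by language models and a vision toolbox.

```python
# Hypothetical sketch of a Planner -> Coder -> Observer loop.
# All names below are illustrative, not the authors' actual API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Feedback:
    accepted: bool  # whether the Observer accepts the current result
    notes: str      # critique fed back to the Planner on the next round


def grasp_mas_loop(
    query: str,
    plan: Callable[[str, str], str],          # Planner: (query, feedback) -> plan
    code_and_run: Callable[[str], str],       # Coder: plan -> execution result
    observe: Callable[[str, str], Feedback],  # Observer: (query, result) -> feedback
    max_iters: int = 5,
) -> str:
    """Iterate plan -> code -> observe until the Observer accepts the result."""
    notes = ""
    result = ""
    for _ in range(max_iters):
        current_plan = plan(query, notes)    # Planner strategizes the complex query
        result = code_and_run(current_plan)  # Coder generates and executes source code
        fb = observe(query, result)          # Observer evaluates the outcome
        if fb.accepted:
            return result
        notes = fb.notes                     # loop with the Observer's feedback
    return result
```

In this reading, the Observer's feedback is what lets the system recover from ambiguous instructions or cluttered scenes without any training or finetuning: the Planner simply re-plans with the critique appended to the query.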

BibTeX

@inproceedings{nguyen2025graspmas,
  title     = {GraspMAS: Zero-Shot Language-driven Grasp Detection with Multi-Agent System},
  author    = {Nguyen, Quang and Le, Tri and Nguyen, Huy and Vo, Thieu and Ta, Tung D and Huang, Baoru and Vu, Minh N and Nguyen, Anh},
  booktitle = {IROS},
  year      = {2025}
}

Acknowledgements

We borrow the page template from HyperNeRF. Special thanks to the authors!