🤖 GraspMAS: Zero-Shot Language-driven Grasp Detection with Multi-Agent System

Quang Nguyen¹   Tri Le¹   Huy Nguyen²   Thieu Vo³   Tung Ta⁴   Baoru Huang⁵   Minh Vu²   Anh Nguyen⁵

¹FPT Software AI Center, Vietnam   ²Automation & Control Institute (ACIN), Austria
³NUS, Singapore   ⁴University of Tokyo, Japan   ⁵University of Liverpool, UK

Abstract

Language-driven grasp detection has the potential to revolutionize human-robot interaction by allowing robots to understand and execute grasping tasks based on natural language commands. However, existing approaches face two key challenges. First, they often struggle to interpret complex text instructions or operate ineffectively in densely cluttered environments. Second, most methods require a training or finetuning step to adapt to new domains, limiting their generalization in real-world applications. In this paper, we introduce GraspMAS, a new multi-agent system framework for language-driven grasp detection. GraspMAS is designed to reason through ambiguities and improve decision-making in real-world scenarios. Our framework consists of three specialized agents: the Planner, responsible for strategizing complex queries; the Coder, which generates and executes source code; and the Observer, which evaluates the outcomes and provides feedback. Extensive experiments on two large-scale datasets demonstrate that GraspMAS significantly outperforms existing baselines. Additionally, robot experiments conducted in both simulation and real-world settings further validate the effectiveness of our approach.
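
For illustration only, below is a minimal Python sketch of how the Planner-Coder-Observer loop described above could be organized. All names here (`Planner`, `Coder`, `Observer`, `run_graspmas`, `Feedback`) are hypothetical placeholders for explanation, not the released GraspMAS API; in the actual framework the Planner and Coder would be backed by an LLM and vision/grasping tools rather than the stubs shown.

```python
# Illustrative sketch of a Planner -> Coder -> Observer feedback loop.
# Hypothetical names; not the official GraspMAS implementation.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Feedback:
    success: bool
    notes: str


class Planner:
    """Decomposes a natural-language grasp query into a step-by-step plan."""

    def plan(self, query: str, feedback: Optional[Feedback] = None) -> str:
        # A real system would query an LLM here; we simply echo a plan,
        # optionally revised with the Observer's feedback.
        hint = f" (revise: {feedback.notes})" if feedback and not feedback.success else ""
        return f"1. locate the target in '{query}'  2. predict a grasp pose{hint}"


class Coder:
    """Turns the plan into executable code that calls perception/grasp tools."""

    def execute(self, plan: str) -> dict:
        # Placeholder result; a real system would run the generated code.
        return {"plan": plan, "grasp": (0.5, 0.5, 0.0)}  # (x, y, rotation)


class Observer:
    """Evaluates the execution result and returns feedback for the Planner."""

    def evaluate(self, result: dict) -> Feedback:
        ok = result.get("grasp") is not None
        return Feedback(success=ok, notes="" if ok else "no grasp found, re-plan")


def run_graspmas(query: str, max_rounds: int = 3) -> dict:
    planner, coder, observer = Planner(), Coder(), Observer()
    feedback, result = None, {}
    for _ in range(max_rounds):          # iterate plan -> code -> observe
        plan = planner.plan(query, feedback)
        result = coder.execute(plan)
        feedback = observer.evaluate(result)
        if feedback.success:              # stop once the Observer accepts the result
            break
    return result


if __name__ == "__main__":
    print(run_graspmas("grasp the red mug by its handle"))
```

The key design choice the sketch tries to convey is the closed loop: the Observer's feedback is fed back into the Planner, so ambiguous queries or cluttered scenes can be resolved over multiple rounds without any task-specific training.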

BibTeX

@inproceedings{nguyen2025graspmas,
  title     = {GraspMAS: Zero-Shot Language-driven Grasp Detection with Multi-Agent System},
  author    = {Nguyen, Quang and Le, Tri and Nguyen, Huy and Vo, Thieu and Ta, Tung D and Huang, Baoru and Vu, Minh N and Nguyen, Anh},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2025}
}

Acknowledgements

We borrow the page template from HyperNeRF. Special thanks to its authors!