VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding

The Chinese University of Hong Kong, Shanghai AI Laboratory, Zhejiang University

Abstract

3D visual grounding is crucial for robots, requiring the integration of natural language and 3D scene understanding. Traditional methods that depend on supervised learning with 3D point clouds are limited by the scarcity of annotated data. Recently, zero-shot methods leveraging LLMs have been proposed to address the data issue. While effective, these methods only use object-centric information, limiting their ability to handle complex queries. In this work, we present VLM-Grounder, a novel framework using vision-language models (VLMs) for zero-shot 3D visual grounding based solely on 2D images. VLM-Grounder dynamically stitches image sequences, employs a grounding and feedback scheme to find the target object, and uses a multi-view ensemble projection to accurately estimate 3D bounding boxes. Experiments on the ScanRefer and Nr3D datasets show that VLM-Grounder outperforms previous zero-shot methods, achieving 51.6% Acc@0.25 on ScanRefer and 48.0% Acc on Nr3D, without relying on 3D geometry or object priors.

Method

An overview of VLM-Grounder. VLM-Grounder analyzes the user query and dynamically stitches image sequences so the VLM can efficiently locate the target image and object. A 2D open-vocabulary detection model and the Segment Anything Model then generate a fine-grained mask, which is projected with a multi-view ensemble strategy to obtain the 3D bounding box.
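As a rough illustration of the multi-view ensemble projection step, the sketch below back-projects the predicted 2D mask from several frames into a shared 3D point set and takes a percentile-trimmed axis-aligned box. It assumes ScanNet-style per-frame depth maps, camera intrinsics, and camera-to-world poses are available; the function names and the percentile-based outlier filtering are illustrative choices, not the paper's exact procedure.

import numpy as np

def backproject_mask(mask, depth, K, cam2world):
    """Lift pixels inside a 2D mask to 3D world coordinates using the depth map."""
    v, u = np.nonzero(mask)                      # pixel coordinates inside the mask
    z = depth[v, u]
    valid = z > 0                                # drop pixels with missing depth
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - K[0, 2]) * z / K[0, 0]              # pinhole back-projection
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # homogeneous camera coords
    return (cam2world @ pts_cam.T).T[:, :3]                  # transform to world frame

def ensemble_bbox(masks, depths, Ks, poses, pct=(5, 95)):
    """Aggregate masked points from several views and take a robust axis-aligned box."""
    pts = np.concatenate([
        backproject_mask(m, d, K, T) for m, d, K, T in zip(masks, depths, Ks, poses)
    ], axis=0)
    lo, hi = np.percentile(pts, pct, axis=0)     # percentile bounds suppress stray points
    center, size = (lo + hi) / 2, hi - lo
    return center, size                          # axis-aligned 3D bounding box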

Dynamic Stitching

Dynamic Stitching. VLM-Grounder employs a dynamic stitching strategy that combines image sequences into stitched images, enabling the VLM to process more than 100 frames efficiently.
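A simplified sketch of the stitching idea is shown below: frames are packed into fixed-size grids with index labels so that a single VLM request can cover many frames. It assumes PIL images as input; the 4x4 layout, cell size, and labelling are illustrative, whereas the actual dynamic strategy adapts the grid to the number of frames and the VLM's input limits.

from PIL import Image, ImageDraw

def stitch_grid(frames, cols=4, rows=4, cell=(336, 336)):
    """Pack frames into labelled grid images so a VLM can view many frames per request."""
    per_image = cols * rows
    stitched = []
    for start in range(0, len(frames), per_image):
        canvas = Image.new("RGB", (cols * cell[0], rows * cell[1]), "white")
        draw = ImageDraw.Draw(canvas)
        for i, frame in enumerate(frames[start:start + per_image]):
            x, y = (i % cols) * cell[0], (i // cols) * cell[1]
            canvas.paste(frame.resize(cell), (x, y))
            draw.text((x + 5, y + 5), str(start + i), fill="red")  # frame index label
        stitched.append(canvas)
    return stitched   # e.g. 100+ frames become a handful of 4x4 grid images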

Visual Retrieval Benchmark

Visual Retrieval Benchmark. We develop a Visual Retrieval Benchmark to find the optimal stitching strategy for VLMs.

Experiment Results

  1. 3D visual grounding results on ScanRefer. Without using geometric information from point clouds, VLM-Grounder outperforms previous zero-shot methods and achieves performance comparable to supervised learning baselines. * indicates that the evaluation is based on 2D masks.

  2. 3D visual grounding results on Nr3D. VLM-Grounder surpasses the previous SOTA zero-shot method without requiring access to point clouds or ground-truth bounding box priors.

Demo

Please refer to the paper for more demos.

BibTeX

@inproceedings{xu2024vlmgrounder,
  title     = {VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding},
  author    = {Xu, Runsen and Huang, Zhiwei and Wang, Tai and Chen, Yilun and Pang, Jiangmiao and Lin, Dahua},
  booktitle = {CoRL},
  year      = {2024}
}