PointLLM: Empowering Large Language Models to Understand Point Clouds

The Chinese University of Hong Kong, Shanghai AI Laboratory, Zhejiang University

PointLLM is online! Try it at http://101.230.144.196. You can chat with PointLLM about models from the Objaverse dataset or about your own point clouds!

Teaser Image

PointLLM: a multi-modal large language model capable of understanding colored point clouds of objects. It perceives object types, geometric structures, and appearance without the ambiguous depth, occlusion, or viewpoint dependency of 2D imagery.
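If you want to chat with PointLLM about your own point clouds, the expected input is simply a colored object point cloud. Below is a minimal preprocessing sketch that packs points into an (N, 6) array of xyz coordinates and RGB colors; the point count of 8192 and the unit-sphere normalization are illustrative assumptions, not necessarily the exact official pipeline.

```python
import numpy as np

def preprocess_point_cloud(xyz: np.ndarray, rgb: np.ndarray,
                           num_points: int = 8192) -> np.ndarray:
    """Pack a colored object point cloud into a (num_points, 6) array.

    xyz: (M, 3) point coordinates; rgb: (M, 3) colors in [0, 255].
    The point count and normalization below are assumptions for illustration.
    """
    # Sample a fixed number of points (with replacement if the cloud is small).
    idx = np.random.choice(len(xyz), num_points, replace=len(xyz) < num_points)
    xyz, rgb = xyz[idx], rgb[idx]

    # Center the coordinates and scale them into a unit sphere.
    xyz = xyz - xyz.mean(axis=0)
    xyz = xyz / (np.linalg.norm(xyz, axis=1).max() + 1e-8)

    # Map colors to [0, 1] and concatenate with the coordinates.
    rgb = rgb.astype(np.float32) / 255.0
    return np.concatenate([xyz.astype(np.float32), rgb], axis=1)
```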

Abstract

The unprecedented advancements in Large Language Models (LLMs) have had a profound impact on natural language processing but have yet to fully embrace the realm of 3D understanding. This paper introduces PointLLM, a preliminary effort to fill this gap, enabling LLMs to understand point clouds and offering a new avenue beyond 2D visual data.

PointLLM processes colored object point clouds with human instructions and generates contextually appropriate responses, illustrating its grasp of point clouds and common sense. Specifically, it leverages a point cloud encoder with a powerful LLM to effectively fuse geometric, appearance, and linguistic information.

We collect a novel dataset comprising 660K simple and 70K complex point-text instruction pairs to enable a two-stage training strategy: initially aligning latent spaces and subsequently instruction-tuning the unified model.
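As a rough illustration of this two-stage strategy, the sketch below toggles which parameter groups are trainable: only the projector during latent-space alignment, and the projector together with the LLM during instruction tuning. The attribute names and learning rates are hypothetical placeholders rather than the repository's actual configuration.

```python
import torch

def configure_stage(model, stage: int) -> torch.optim.Optimizer:
    """Select trainable parameters for one of the two training stages.

    Stage 1 (alignment): only the projector mapping point features into the
    LLM latent space is updated. Stage 2 (instruction tuning): the LLM is
    unfrozen as well. Attribute names and learning rates are illustrative.
    """
    for p in model.point_encoder.parameters():
        p.requires_grad = False                # point encoder stays frozen
    for p in model.projector.parameters():
        p.requires_grad = True                 # projector is trained in both stages
    for p in model.llm.parameters():
        p.requires_grad = (stage == 2)         # LLM is updated only in stage 2

    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=2e-3 if stage == 1 else 2e-5)
```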

To rigorously evaluate our model's perceptual abilities and its generalization capabilities, we establish two benchmarks: Generative 3D Object Classification and 3D Object Captioning, assessed through three methods: human evaluation, GPT-4/ChatGPT evaluation, and traditional metrics. Experimental results show that PointLLM outperforms existing 2D baselines. Remarkably, in human-evaluated object captioning tasks, PointLLM outperforms human annotators in over 50% of the samples.

Model Architecture

The point encoder extracts features from the input point cloud and projects them into the latent space of the LLM backbone. The LLM backbone processes the sequence of point tokens and text tokens and generates the predicted tokens as output.

Model Architecture
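The diagram above can be condensed into a few lines of PyTorch-style pseudocode. This is only a sketch of the token-fusion idea: the class, feature dimensions, and the simple "prepend point tokens" layout are illustrative assumptions, and the released code is the reference implementation.

```python
import torch
import torch.nn as nn

class PointLLMSketch(nn.Module):
    """Illustrative fusion of point tokens and text tokens (not the released code)."""

    def __init__(self, point_encoder: nn.Module, llm: nn.Module,
                 point_dim: int = 384, llm_dim: int = 4096):
        super().__init__()
        self.point_encoder = point_encoder               # pre-trained point cloud encoder
        self.projector = nn.Linear(point_dim, llm_dim)   # maps point features into the LLM latent space
        self.llm = llm                                   # decoder-only LLM backbone

    def forward(self, points: torch.Tensor, text_embeds: torch.Tensor):
        # points: (B, N, 6) colored point clouds; text_embeds: (B, T, llm_dim) embedded text tokens
        point_feats = self.point_encoder(points)         # (B, P, point_dim) point features
        point_tokens = self.projector(point_feats)       # (B, P, llm_dim) point tokens
        # For simplicity the point tokens are prepended here; the full model
        # splices them into the instruction sequence at a reserved position.
        fused = torch.cat([point_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=fused)             # autoregressive prediction over the fused sequence
```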

Please refer to our paper for details on data collection, model training, benchmarks, and evaluation.

Experiment Results

  1. Generative 3D object classification results on the ModelNet40 test split and the Objaverse dataset. The tables report classification accuracy for different models under the Instruction-typed (I) prompt "What is this?" and the Completion-typed (C) prompt "This is an object of ".

  Experiment Results of Classification

  2. 3D object captioning results on Objaverse. Models are evaluated using human evaluation, GPT-4 evaluation, and traditional metrics. We place a primary focus on human and GPT-4 evaluation, along with data-driven metrics (Sentence-BERT and SimCSE; a minimal similarity-scoring sketch follows this list), since conventional measures such as BLEU, ROUGE-L, and METEOR may not sufficiently capture the semantic richness and diversity of the generated captions.

  Experiment Results of Captioning
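The data-driven metrics above score a generated caption by embedding it together with the reference caption and taking the cosine similarity of the two sentence embeddings. The minimal sketch below uses the sentence-transformers library; the checkpoint name is an illustrative choice and may differ from the models used for the reported numbers.

```python
from sentence_transformers import SentenceTransformer, util

def caption_similarity(candidate: str, reference: str,
                       model_name: str = "all-mpnet-base-v2") -> float:
    """Cosine similarity between sentence embeddings of two captions.

    The checkpoint is an illustrative choice; the exact Sentence-BERT and
    SimCSE models behind the reported numbers may differ.
    """
    model = SentenceTransformer(model_name)
    embeddings = model.encode([candidate, reference],
                              convert_to_tensor=True, normalize_embeddings=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()

# Example with hypothetical captions:
# caption_similarity("A red wooden chair with four legs.",
#                    "A small chair made of red-painted wood.")
```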

Qualitative Comparisons

Qualitative comparisons with InstructBLIP and ground truths on our benchmark. Samples 1-2 and 3-4 show classification on ModelNet40 and Objaverse, respectively. Samples 5-6 show object captioning on Objaverse. The first image of each sample is the input to InstructBLIP; we also show the point cloud from other viewpoints for reference. These samples show that PointLLM produces more accurate and detailed results than the image-based InstructBLIP and even the human-annotated ground truths.

Qualitative Comparisons

Dialogue Examples

BibTeX

@inproceedings{xu2024pointllm,
  title={PointLLM: Empowering Large Language Models to Understand Point Clouds},
  author={Xu, Runsen and Wang, Xiaolong and Wang, Tai and Chen, Yilun and Pang, Jiangmiao and Lin, Dahua},
  booktitle={ECCV},
  year={2024}
}