Occlusion-aware Text-Image-Point Cloud
Pretraining for Open-World
3D Object Recognition

The University of Western Australia
[CVPR 2025]
Teaser image

Comparison to existing methods. (a) State-of-the-art approaches pretrain 3D encoders on complete point clouds, which differ significantly from the occluded point clouds encountered in practical scenarios (top). This leads to a substantial gap in zero-shot performance between the ModelNet40 benchmark with full point clouds and ScanObjectNN with real-world data (bottom). (b) The proposed OccTIP framework pretrains 3D models on partial point clouds to better simulate practical conditions, leading to significant improvements on various recognition tasks, especially when combined with our DuoMamba architecture. (c) Compared to the popular PointBERT, DuoMamba has significantly lower FLOPs (top) and latency (bottom) during inference, making it better suited for real-world applications.

Abstract

Recent open-world representation learning approaches have leveraged CLIP to enable zero-shot 3D object recognition. However, performance on real point clouds with occlusions still falls short due to unrealistic pretraining settings. Additionally, these methods incur high inference costs because they rely on Transformer attention modules. In this paper, we make two contributions to address these limitations. First, we propose occlusion-aware text-image-point cloud pretraining to reduce the training-testing domain gap. From 52K synthetic 3D objects, our framework generates nearly 630K partial point clouds for pretraining, consistently improving the real-world recognition performance of popular existing 3D networks. Second, to reduce computational requirements, we introduce DuoMamba, a two-stream linear state space model tailored for point clouds. By integrating two space-filling curves with 1D convolutions, DuoMamba effectively models spatial dependencies between point tokens, offering a powerful alternative to Transformers. When pretrained with our framework, DuoMamba surpasses current state-of-the-art methods while reducing latency and FLOPs, highlighting the potential of our approach for real-world applications.

Proposed Method

Pretraining Framework

(a) Given a 3D object, we generate RGB and depth images from preset camera positions, and the depth images are used to construct partial point clouds. Texts are generated from dataset metadata, image captioning models, and retrieved descriptions of similar photos from LAION-5B.
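Below is a minimal illustrative sketch (not the released code) of how a rendered depth map can be back-projected into a partial, occlusion-aware point cloud; the pinhole intrinsics `fx, fy, cx, cy` and the `depth` array are placeholders standing in for the outputs of the rendering step.

```python
# Minimal sketch: back-project a rendered depth map into a partial point cloud
# (camera frame). Intrinsics and depth are assumed to come from the renderer.
import numpy as np

def depth_to_partial_point_cloud(depth, fx, fy, cx, cy, max_depth=10.0):
    """Convert an HxW depth image into an Nx3 partial point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
    valid = (depth > 0) & (depth < max_depth)        # keep pixels hit by the object
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                     # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)              # (N, 3) single-view, occluded geometry

# Example with a dummy 224x224 depth map and illustrative intrinsics
depth = np.random.uniform(0.5, 2.0, size=(224, 224)).astype(np.float32)
pts = depth_to_partial_point_cloud(depth, fx=300.0, fy=300.0, cx=112.0, cy=112.0)
print(pts.shape)
```

Because each point cloud comes from a single viewpoint, it naturally exhibits the self-occlusion patterns seen in real scans.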

(b) During pretraining, we extract multi-modal features using a learnable point cloud network and frozen CLIP encoders, then align them through contrastive learning.

OccTIP Pretraining Framework
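The following simplified sketch assumes standard symmetric InfoNCE objectives for the alignment described above; the function and variable names are illustrative and do not come from the official implementation.

```python
# Sketch of tri-modal contrastive alignment: a learnable point encoder is
# aligned to frozen CLIP text and image embeddings (assumed InfoNCE form).
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric contrastive loss between two batches of features."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)    # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def pretraining_loss(point_feat, text_feat, image_feat):
    """Pull point features toward both frozen CLIP text and image features."""
    return info_nce(point_feat, text_feat) + info_nce(point_feat, image_feat)

# Example with random features standing in for encoder outputs
B, D = 32, 512
loss = pretraining_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```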

DuoMamba

In our DuoMamba block, we integrate two Hilbert curves and standard 1D convolutions with linear-time S6 modules to efficiently model geometric dependencies and enrich spatial context.

DuoMamba
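As a rough illustration of the two-stream idea, the sketch below re-orders point tokens along two precomputed serialization orders (Hilbert-curve orders in DuoMamba), applies a causal depthwise 1D convolution and a simple diagonal linear state-space scan per stream, and merges the results. The `SimpleSSM` scan is only a stand-in for the actual S6 module, and all names are hypothetical.

```python
# Illustrative two-stream block (not the authors' implementation).
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Diagonal linear state-space scan, O(L) in sequence length (S6 stand-in)."""
    def __init__(self, dim):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(dim))   # per-channel decay
        self.b = nn.Parameter(torch.ones(dim))
        self.c = nn.Parameter(torch.ones(dim))

    def forward(self, x):                             # x: (B, L, D)
        a = torch.sigmoid(self.log_a)                 # keep the recurrence stable in (0, 1)
        h = torch.zeros_like(x[:, 0])
        outs = []
        for t in range(x.size(1)):                    # sequential scan, for clarity only
            h = a * h + self.b * x[:, t]
            outs.append(self.c * h)
        return torch.stack(outs, dim=1)

class TwoStreamBlock(nn.Module):
    def __init__(self, dim, kernel_size=4):
        super().__init__()
        self.conv1 = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size - 1, groups=dim)
        self.conv2 = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size - 1, groups=dim)
        self.ssm1, self.ssm2 = SimpleSSM(dim), SimpleSSM(dim)
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)

    def run_stream(self, x, order, conv, ssm):
        xs = x[:, order]                                                 # serialize tokens along one curve
        xs = conv(xs.transpose(1, 2))[..., :x.size(1)].transpose(1, 2)  # causal depthwise 1D conv
        xs = ssm(torch.nn.functional.silu(xs))                          # linear-time scan
        inv = torch.argsort(order)                                       # restore original token order
        return xs[:, inv]

    def forward(self, x, order1, order2):              # x: (B, L, D); orders: (L,) index tensors
        y = self.run_stream(x, order1, self.conv1, self.ssm1) \
          + self.run_stream(x, order2, self.conv2, self.ssm2)
        return x + self.proj(self.norm(y))             # residual merge of the two streams

# Example: 128 point tokens with two dummy serialization orders
x = torch.randn(2, 128, 256)
o1, o2 = torch.randperm(128), torch.randperm(128)
print(TwoStreamBlock(256)(x, o1, o2).shape)
```

In practice the sequential Python loop would be replaced by a hardware-efficient selective scan; the point here is only the two-stream serialization, the 1D convolution for local spatial context, and the O(L) recurrence.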

BibTeX

@inproceedings{nguyen2025occlusion,
  title={Occlusion-aware Text-Image-Point Cloud Pretraining for Open-World 3D Object Recognition},
  author={Nguyen, Khanh and Hassan, Ghulam Mubashar and Mian, Ajmal},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}