
Canonical Voting: Towards Robust Oriented Bounding Box Detection in 3D Scenes

Yang You, Zelin Ye, Yujing Lou, Chengkun Li, Yong-Lu Li, Lizhuang Ma, Weiming Wang, Cewu Lu

CVPR 2022

Paper PDF Project Page Video

Canonical Voting is a 3D detection method that disentangles Hough voting targets into Local Canonical Coordinates (LCC), box scales and box orientations. LCC and box scales are regressed for each point, while box orientations are generated by a canonical voting scheme. Finally, an LCC-aware back-projection checking algorithm iteratively cuts bounding boxes out of the generated vote maps while eliminating false positives. Our model achieves state-of-the-art performance on challenging large-scale datasets of real point cloud scans: ScanNet, SceneNN and SUN RGB-D.
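The voting scheme can be sketched in a few lines. This is a deliberately simplified, illustrative version, not the paper's implementation: the function name, grid resolution and heading discretization are all assumptions. Each point carries a regressed LCC offset and box scale; for each discretized heading, the point is back-projected to a candidate box center, and candidates are accumulated into a coarse bird's-eye-view vote map.

```python
import numpy as np

def accumulate_votes(points, lcc, scales, grid_res=0.1, n_headings=12):
    """Simplified sketch of a canonical voting scheme (illustrative only).

    points: (N, 3) input point coordinates
    lcc:    (N, 3) predicted Local Canonical Coordinates in [-1, 1]^3
    scales: (N, 3) predicted box scales per point
    Returns a 2D bird's-eye-view vote map.
    """
    centers = []
    for theta in np.linspace(0, 2 * np.pi, n_headings, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # yaw rotation
        # Candidate center: undo the scaled, rotated canonical offset
        offset = (lcc * scales / 2) @ rot.T
        centers.append(points - offset)
    centers = np.concatenate(centers, axis=0)
    # Rasterize x-y candidate centers into a coarse vote map
    ij = np.floor(centers[:, :2] / grid_res).astype(int)
    ij -= ij.min(axis=0)
    votes = np.zeros(ij.max(axis=0) + 1)
    np.add.at(votes, (ij[:, 0], ij[:, 1]), 1.0)
    return votes
```

Peaks in the resulting vote map then indicate likely box centers; the paper's back-projection checking step would verify and carve out boxes from such maps.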

News

  • [2022.03] Our voting-based category-level 9D pose estimation method CPPF, which achieves decent sim-to-real performance, is accepted to CVPR 2022!

Changelog

  • [2022.04.11] Upload Bathtub fixed Scan2CAD annotations.
  • [2022.04.11] Update install dependencies to more recent versions.
  • [2022.04.22] Fix a bug in evaluating joint models.

Overview

This is the official PyTorch implementation of our work: Canonical Voting.

Installation

  • Tested with PyTorch v1.8.1 + CUDA 10.2
  • MinkowskiEngine v0.5.3
  • Install our custom Hough Voting module under the houghvoting folder by running python setup.py install
  • Other dependencies:
pip install hydra-core==1.1.1 scipy scikit-learn tqdm shapely numpy-quaternion==2021.8.30.10.33.11 plyfile

Train and Test on ScanNet

Data Preparation

You will first need to download the original ScanNet dataset. For Scan2CAD labels with oriented bounding boxes, we removed some ambiguous Scan2CAD annotations from the Bathtub (WordNet id: 02808440) category, including washbasins, washstands, etc. You can download our fixed Bathtub annotations on Google Drive.

Download our annotated Scan2CAD model segments here and the preprocessed ground-truth boxes here for evaluation. Adjust their paths accordingly in config/config.yaml.
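The relevant section of config/config.yaml might look like the fragment below. This is a sketch only: the key names and paths are placeholders, not the actual keys in the shipped config.

```yaml
# Illustrative only -- actual key names may differ; point these at your downloads
scannet_root: /path/to/scannet            # original ScanNet scans
scan2cad_segments: /path/to/segments      # annotated Scan2CAD model segments
gt_boxes: /path/to/gt_boxes               # preprocessed ground-truth boxes
```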

Start Training

To train model jointly for all categories, with one unified model:

python train_joint.py

To train model separately for each category:

python train_separate.py category=03211117,04379243,02808440,02747177,04256520,03001627,02933112,02871439,others -m

Evaluate mAP

Once trained, you can evaluate the model's mAP on the ScanNet validation set.

To eval the jointly trained model:

python eval_joint.py

To eval the separately trained model:

python eval_separate.py

Test on SceneNN

Data Preparation

You will need to download our processed SceneNN data, which contains raw segmentation labels, instance labels and bounding box annotations. Set scene_nn_root in config.yaml to your downloaded directory.

Evaluate mAP

Run eval_joint.py or eval_separate.py with the variable SCENENN set to True.
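The switch amounts to flipping a module-level flag near the top of the eval scripts. The snippet below is illustrative: the SCENENN name comes from the scripts, but the helper function is hypothetical and only shows how such a flag might route dataset selection.

```python
# Illustrative: SCENENN is a module-level flag in eval_joint.py / eval_separate.py
SCENENN = True  # set to True to evaluate on SceneNN instead of ScanNet

def pick_dataset(scenenn: bool) -> str:
    """Hypothetical helper showing how the flag might route dataset loading."""
    return "SceneNN" if scenenn else "ScanNet"
```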

Train and Test on SUN RGB-D

Data Preparation

We follow BRNet to prepare data for training and testing, and separately train a learned FPS proposal sampler as described in the paper.

Start Training

First download the pretrained CanonicalVoting model on Google Drive.

To reproduce the result, replace the original BRNet module with our BRNetCanon in sunrgbd/brnetcanon.py. In addition, change L88 and L95 of configs/_base_/models/brnet.py to sample_mod='custom', and change L11 of configs/_base_/schedules/schedule_cos.py to total_epochs=72, since changing the sampling strategy takes more epochs to converge.
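The two config edits amount to fragments like the following (mmdetection-style Python configs). Only the two values come from this README; the surrounding keys of the real config files are omitted here.

```python
# configs/_base_/models/brnet.py, around L88 and L95:
# switch the proposal sampler to the learned FPS sampler
sample_mod = 'custom'

# configs/_base_/schedules/schedule_cos.py, L11:
# longer schedule, since the custom sampling strategy needs more epochs
total_epochs = 72
```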

Pretrained Models

Pretrained Model on ScanNet

Pretrained models for both joint and separate training settings can be found here. You will get about 15.4 mAP and 21.7 mAP for joint and separate training settings, respectively.

Pretrained Model on SUN RGB-D

Pretrained CanonicalVoting model can be found here.

Citation

If you find our algorithm useful or use our processed data, please consider citing:

@inproceedings{you2022canonical,
  title={Canonical Voting: Towards Robust Oriented Bounding Box Detection in 3D Scenes},
  author={You, Yang and Ye, Zelin and Lou, Yujing and Li, Chengkun and Li, Yong-Lu and Ma, Lizhuang and Wang, Weiming and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}
