Inferring the 3D locations and shapes of multiple objects from a single 2D image is a long-standing objective of computer vision. Most existing works either predict one of these 3D properties or focus on solving both for a single object. One fundamental challenge lies in how to learn an effective image representation that is well-suited for both 3D detection and reconstruction. In this work, we propose to learn a regular grid of 3D voxel features from the input image, aligned with the 3D scene space via a 3D feature lifting operator. Based on these 3D voxel features, our novel CenterNet-3D detection head formulates 3D detection as keypoint detection in 3D space. Moreover, we devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation, which enables fine-detail reconstruction with inference one order of magnitude faster than prior methods. With complementary supervision from both 3D detection and reconstruction, the 3D voxel features become geometry- and context-preserving, benefiting both tasks. The effectiveness of our approach is demonstrated on 3D detection and reconstruction in both single-object and multiple-object scenarios.
Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image
Feng Liu, Xiaoming Liu
Keywords: 3D Object Detection, 3D Shape Reconstruction, Generic Object 3D Reconstruction
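The 3D feature lifting operator mentioned in the abstract can be illustrated with a minimal sketch: project each voxel center into the image using the camera intrinsics, then sample the 2D feature map at that pixel to populate the voxel grid. This is a generic construction under assumed conventions (camera-frame voxel centers, nearest-neighbour sampling, a hypothetical `lift_features_to_voxels` function), not the paper's exact operator.

```python
import numpy as np

def lift_features_to_voxels(feat2d, K, voxel_centers):
    """Lift a 2D feature map onto 3D voxel centers by perspective projection.

    feat2d        : (C, H, W) image feature map
    K             : (3, 3) camera intrinsics
    voxel_centers : (N, 3) voxel centers in camera coordinates (z > 0)
    returns       : (N, C) per-voxel features; zeros for voxels that
                    project outside the image
    """
    C, H, W = feat2d.shape
    # Project voxel centers to homogeneous pixel coordinates.
    uvw = (K @ voxel_centers.T).T            # (N, 3)
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    # Nearest-neighbour sampling for brevity (bilinear in practice).
    ui = np.round(u).astype(int)
    vi = np.round(v).astype(int)
    valid = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    out = np.zeros((voxel_centers.shape[0], C), dtype=feat2d.dtype)
    out[valid] = feat2d[:, vi[valid], ui[valid]].T
    return out
```

In practice the sampled per-voxel features would be reshaped into a regular (C, X, Y, Z) grid and fed to the detection and reconstruction heads.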
Source Code
The source code can be downloaded from here.
Publications
-
Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image
Feng Liu, Xiaoming Liu
In Proceedings of the Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS 2021), Virtual, Dec. 2021
Bibtex | PDF | arXiv | Supplemental | Code | Video