Surface reconstruction from point clouds is an important task in 3D computer vision. Most recent methods address this problem by learning signed distance functions (SDFs) from point clouds, which limits them to reconstructing shapes or scenes with closed surfaces. Other methods represent shapes or scenes with open surfaces using unsigned distance functions (UDFs) learned from large-scale ground-truth unsigned distances. However, the learned UDF struggles to provide smooth distance fields near the surface due to the discontinuous character of point clouds. In this paper, we propose a novel method to learn consistency-aware unsigned distance functions directly from raw point clouds. We achieve this by learning to move 3D queries onto the surface under a field consistency constraint, which also allows us to progressively estimate a more accurate surface. Specifically, we train a neural network to gradually infer the relationship between 3D queries and the approximated surface by dynamically searching for the moving targets of queries, which results in a consistent field around the surface. Meanwhile, we introduce a polygonization algorithm to extract surfaces directly from the gradient field of the learned UDF. Experimental results on surface reconstruction for synthetic and real scan data show significant improvements over the state of the art on widely used benchmarks.
We show a 2D case of learning the distance field from a sparse 2D point cloud containing only 13 points. The level sets show the distance fields learned by (a) Neural-Pull, (b) SAL, (c) NDF, and (d) Ours. Blue and red indicate positive and negative distances, respectively; the darker the color, the closer the location is to the approximated surface.
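For intuition on what such a figure visualizes, a ground-truth unsigned distance field for a sparse 2D point cloud can be evaluated on a grid as below. This is a minimal NumPy sketch for illustration only (the function name and parameters are our own, not part of the released code): the UDF value at each grid location is simply the distance to the nearest input point.

```python
import numpy as np

def unsigned_distance_field(points, grid_res=64, extent=1.0):
    """Evaluate the exact UDF of a 2D point cloud on a regular grid.

    points: (N, 2) array of 2D points.
    Returns a (grid_res, grid_res) array of unsigned distances.
    """
    xs = np.linspace(-extent, extent, grid_res)
    gx, gy = np.meshgrid(xs, xs)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=-1)   # (R*R, 2) query locations
    # Pairwise distances from every grid location to every point
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=-1)
    # UDF value = distance to the nearest point in the cloud
    return d.min(axis=1).reshape(grid_res, grid_res)
```

Such a brute-force field is only well-defined at the sample points themselves, which is exactly why a learned, consistency-aware field is needed to approximate the underlying continuous surface.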
Overview of our method. CAP-UDF reconstructs surfaces from raw point clouds by learning consistency-aware UDFs. Given a 3D query point as input, the neural network predicts the unsigned distance of the query and moves it against the gradient direction at the query location with a stride equal to the predicted unsigned distance. A field consistency loss between the moved queries and the target point cloud then serves as the optimization objective. After the network converges in the current stage, we update the target point cloud with a subset of moved query points as additional priors to learn more local details in the next stage. Finally, we use the gradient field of the learned UDF to model the relationship between different 3D grid cells and extract iso-surfaces directly.
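The query-moving step described above can be sketched in PyTorch as follows. This is a simplified illustration, not the released implementation: `pull_queries` moves each query against the normalized UDF gradient by its predicted distance, and `field_consistency_loss` is shown here as a plain two-sided Chamfer distance, whereas the actual method searches moving targets dynamically.

```python
import torch

def pull_queries(udf, queries, eps=1e-8):
    """Move each query against the UDF gradient by its predicted distance.

    udf: callable mapping (N, 3) query points to (N,) unsigned distances.
    """
    q = queries.clone().requires_grad_(True)
    d = udf(q)                                        # (N,) predicted distances
    g = torch.autograd.grad(d.sum(), q, create_graph=True)[0]
    g = g / (g.norm(dim=-1, keepdim=True) + eps)      # unit gradient directions
    return q - d.unsqueeze(-1) * g                    # queries pulled onto the surface

def field_consistency_loss(moved, target):
    """Chamfer-style distance between moved queries and the target point cloud."""
    dist = torch.cdist(moved, target)                 # (N, M) pairwise distances
    return dist.min(dim=1).values.mean() + dist.min(dim=0).values.mean()
```

For a perfect UDF (e.g., distance to the origin), a query at (2, 0, 0) is pulled exactly onto the zero level set at the origin, since the stride equals the true distance and the gradient points away from the surface.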
ShapeNet Cars: (qualitative reconstruction results)

3D Scene: (qualitative reconstruction results)
@article{zhou2024cappami,
author={Zhou, Junsheng and Ma, Baorui and Li, Shujuan and Liu, Yu-Shen and Fang, Yi and Han, Zhizhong},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={CAP-UDF: Learning Unsigned Distance Functions Progressively From Raw Point Clouds With Consistency-Aware Field Optimization},
year={2024},
volume={46},
number={12},
pages={7475-7492},
doi={10.1109/TPAMI.2024.3392364}
}
@inproceedings{zhou2022capudf,
title = {Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds},
author = {Zhou, Junsheng and Ma, Baorui and Liu, Yu-Shen and Fang, Yi and Han, Zhizhong},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2022}
}