Learning Gradient Fields for Shape Generation
Abstract
In this work, we propose a novel technique to generate shapes from point cloud data. A point cloud can be viewed as samples from a distribution of 3D points whose density is concentrated near the surface of the shape. Point cloud generation thus amounts to moving randomly sampled points to high-density areas. We generate point clouds by performing stochastic gradient ascent on an unnormalized probability density, thereby moving sampled points toward the high-likelihood regions. Our model directly predicts the gradient of the log density field and can be trained with a simple objective adapted from score-based generative models. We show that our method can reach state-of-the-art performance for point cloud auto-encoding and generation, while also allowing for extraction of a high-quality implicit surface.
Keywords
3D generation, generative models
Brief Introduction to the Method
We model a shape as a distribution of points, such that points residing on the surface of the shape have high probability and off-surface points have low probability. Our key insight is that we can produce such a point cloud by moving randomly sampled points along the gradient of the log of this density.
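In symbols (a sketch only, writing p(x) for the shape's point density and \epsilon for a small step size), this amounts to repeatedly updating each point by

    x_{t+1} = x_t + \epsilon \, \nabla_x \log p(x_t)

so that points climb toward the high-density region near the surface.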
We use a network to model the gradient of the log-density at multiple noise levels. Given a sampled location x and a noise level sigma, the network predicts the gradient of the log probability density at x under that noise level.
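As a rough illustration of how such a network can be trained with a denoising score-matching objective, the sketch below perturbs surface points with Gaussian noise and regresses the predicted gradient toward the gradient of the perturbation kernel. The network interface score_net(x_noisy, sigma), the per-shape noise sampling, and the sigma^2 weighting are assumptions for this sketch, not the paper's exact implementation.

    import torch

    def dsm_loss(score_net, x, sigmas):
        """Denoising score matching over multiple noise levels.

        score_net(x_noisy, sigma) is assumed to predict the gradient of the
        log-density of the perturbed point distribution at x_noisy.
        x: (B, N, 3) clean points sampled from the shape surface.
        sigmas: (L,) tensor of noise levels.
        """
        # Pick a random noise level for each shape in the batch.
        idx = torch.randint(len(sigmas), (x.shape[0],), device=x.device)
        sigma = sigmas[idx].view(-1, 1, 1)                # (B, 1, 1)

        # Perturb the clean points with Gaussian noise of that scale.
        noise = torch.randn_like(x) * sigma
        x_noisy = x + noise

        # Gradient of the Gaussian perturbation kernel: -(x_noisy - x) / sigma^2.
        target = -noise / (sigma ** 2)

        pred = score_net(x_noisy, sigma)                  # predicted gradient field

        # Weight each noise level by sigma^2 so all levels contribute comparably.
        loss = ((pred - target) ** 2).sum(dim=-1) * (sigma.squeeze(-1) ** 2)
        return loss.mean()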
Once training is complete, we move randomly sampled points onto the shape surface by following the predicted gradients, conditioning on a sequence of noise levels that starts at a large sigma and is gradually reduced.
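A minimal sketch of this annealed sampling loop is given below, using Langevin-style updates (gradient ascent plus a small amount of noise). The initialization, step-size schedule, and constants such as steps_per_level and step_lr are illustrative defaults, not the settings used in the paper.

    import torch

    @torch.no_grad()
    def sample_points(score_net, num_points, sigmas, steps_per_level=10, step_lr=2e-4):
        """Move random points toward the surface by annealed gradient ascent.

        sigmas is assumed to be ordered from largest to smallest noise level.
        """
        x = torch.randn(1, num_points, 3) * sigmas[0]     # initial random points
        for sigma in sigmas:                              # anneal from large to small
            # Scale the step size with the current noise level.
            step = step_lr * (sigma / sigmas[-1]) ** 2
            for _ in range(steps_per_level):
                grad = score_net(x, sigma.view(1, 1, 1))  # predicted log-density gradient
                noise = torch.randn_like(x)
                # Langevin update: gradient ascent plus a small noise term.
                x = x + 0.5 * step * grad + torch.sqrt(step) * noise
        return x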
Auto-encoding Results
Generation Results
Acknowledgements
This work was supported in part by grants from Magic Leap and Facebook AI, and the Zuckerman STEM leadership program.