Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly

 

NeurIPS 2024

 

Junsheng Zhou1                  Yu-Shen Liu1                  Zhizhong Han2

 

1School of Software, Tsinghua University, 2Wayne State University

Abstract

Large language and vision models have been leading a revolution in visual computing. By greatly scaling up data and model parameters, large models learn deep priors that lead to remarkable performance in various tasks. In this work, we present deep prior assembly, a novel framework that assembles diverse deep priors from large models for scene reconstruction from single images in a zero-shot manner. We show that this challenging task can be accomplished without extra knowledge, simply by generalizing one deep prior to each sub-task. To this end, we introduce novel methods for handling poses, scales, and occlusion parsing, which are key to enabling deep priors to work together robustly. Deep prior assembly does not require any 3D or 2D data-driven training for this task and demonstrates superior performance in generalizing priors to open-world scenes. We conduct evaluations on various datasets, and report analyses as well as numerical and visual comparisons with the latest methods to demonstrate our superiority.

Method

Overview of DeepPriorAssembly. Given a single image of a 3D scene, we detect the instances and segment them with Grounded-SAM. After normalizing the size and center of each instance, we improve the quality of the instance images by enhancing and inpainting them. Here, we take a sofa in the image as an example. Leveraging Stable Diffusion, we generate a set of candidate images via image-to-image generation with a text prompt of the instance category predicted by Grounded-SAM. We then filter out poor generations with Open-CLIP by evaluating the cosine similarity between each generated instance and the original one, as sketched below. After that, we generate multiple 3D model proposals for this instance with Shap·E from the top-K generated instance images. Additionally, we estimate the depth of the original input image with Omnidata as a 3D geometry prior. To estimate the layout, we propose an approach that optimizes the location, orientation, and size of each 3D proposal by matching it against the estimated segmentation masks and depths (the red ★ for the example sofa). Finally, we choose the 3D model proposal with the minimal matching error as the prediction for this instance, and the final scene is composed by combining the generated 3D models of all detected instances.
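To make the candidate-filtering step concrete, the sketch below scores each Stable Diffusion candidate against the original instance crop with an Open-CLIP image encoder and keeps the top-K most similar candidates. This is a minimal illustration, not the exact configuration used in the paper: the function name `filter_candidates`, the ViT-B-32 backbone, the LAION checkpoint, and the value of K are all assumptions.

```python
import torch
import torch.nn.functional as F
import open_clip
from PIL import Image

# Illustrative sketch: rank Stable Diffusion candidates by Open-CLIP
# cosine similarity to the original instance crop and keep the top-K.
# Backbone, checkpoint, and K are assumptions, not the paper's settings.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model = model.to(device).eval()


@torch.no_grad()
def embed(image: Image.Image) -> torch.Tensor:
    # Encode one image into an L2-normalized CLIP feature vector.
    x = preprocess(image).unsqueeze(0).to(device)
    feat = model.encode_image(x)
    return F.normalize(feat, dim=-1)


def filter_candidates(original: Image.Image, candidates: list, k: int = 5):
    # Cosine similarity between the original crop and each candidate;
    # return the K candidates that best preserve the instance's appearance.
    ref = embed(original)                                   # [1, D]
    sims = torch.cat([embed(c) @ ref.T for c in candidates]).squeeze(-1)  # [N]
    top = sims.topk(min(k, len(candidates))).indices.tolist()
    return [candidates[i] for i in top]
```

The surviving candidates would then be passed to Shap·E to produce the 3D model proposals described above.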

Visual Comparisons on Synthetic and Real-world Scenes


More Comparisons on 3D-Front


More Comparisons on BlendSwap and Replica


More Comparisons on ScanNet

BibTeX

@inproceedings{zhou2024DeepPriorAssembly,
  title = {Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly},
  author = {Zhou, Junsheng and Liu, Yu-Shen and Han, Zhizhong},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2024}
}