Object-Centric Neural Scene Rendering
Michelle Guo
Stanford University
Alireza Fathi
Google Research
Jiajun Wu
Stanford University
Thomas Funkhouser
Google Research
Images Rendered with Learned Object Scattering Functions (OSFs)
Abstract
We present a method for composing photorealistic scenes from captured images of objects. Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene. While NeRFs synthesize realistic pictures, they only model static scenes and are closely tied to specific imaging conditions. This property makes NeRFs hard to generalize to new scenarios, including new lighting or new arrangements of objects. Instead of learning a scene radiance field as a NeRF does, we propose to learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network. This enables rendering scenes even when objects or lights move, without retraining. Combined with a volumetric path tracing procedure, our framework is capable of rendering both intra- and inter-object light transport effects including occlusions, specularities, shadows, and indirect illumination. We evaluate our approach on scene composition and show that it generalizes to novel illumination conditions, producing photorealistic, physically accurate renderings of multi-object scenes.
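As a rough illustration of the idea described above (not the authors' released code), the PyTorch-style sketch below shows how a per-object scattering function might be queried: an MLP takes a 3D point in the object's coordinate frame together with an incoming light direction and an outgoing view direction, and returns a volume density plus an RGB scattering fraction. All names (OSF, positional_encoding, sigma_head, scatter_head), layer sizes, and the choice of a sinusoidal positional encoding are illustrative assumptions, not the paper's exact architecture.

# Minimal sketch (assumption: not the authors' code) of an object-centric
# neural scattering function: density depends on position only, while the
# scattered radiance fraction also depends on the incoming light direction
# and the outgoing (viewing) direction.

import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """NeRF-style sinusoidal encoding (frequency count is an assumption)."""
    freqs = (2.0 ** torch.arange(num_freqs, device=x.device)) * torch.pi
    angles = x[..., None] * freqs                      # (..., D, F)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., D * 2F)


class OSF(nn.Module):
    """Lighting- and view-dependent scattering function for a single object."""

    def __init__(self, hidden: int = 256, num_freqs: int = 6):
        super().__init__()
        pos_dim = 3 * 2 * num_freqs
        dir_dim = 3 * 2 * num_freqs
        self.num_freqs = num_freqs
        # Position-only trunk feeds the density head.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        # Scattering head additionally sees both directions.
        self.scatter_head = nn.Sequential(
            nn.Linear(hidden + 2 * dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),   # RGB scattering fraction
        )

    def forward(self, x, w_in, w_out):
        h = self.trunk(positional_encoding(x, self.num_freqs))
        sigma = torch.relu(self.sigma_head(h))         # volume density
        dirs = torch.cat([
            positional_encoding(w_in, self.num_freqs),
            positional_encoding(w_out, self.num_freqs),
        ], dim=-1)
        rho = self.scatter_head(torch.cat([h, dirs], dim=-1))
        return sigma, rho


# Example query: one sample point, one light direction, one view direction.
osf = OSF()
sigma, rho = osf(torch.rand(1, 3),
                 nn.functional.normalize(torch.randn(1, 3), dim=-1),
                 nn.functional.normalize(torch.randn(1, 3), dim=-1))

In a full renderer, such per-object queries would be evaluated along rays and combined by a volumetric path tracer to account for inter-object effects such as shadows and indirect illumination, as the abstract describes.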
BibTeX
@article{guo2020osf,
  title={Object-Centric Neural Scene Rendering},
  author={Guo, Michelle and Fathi, Alireza and Wu, Jiajun and Funkhouser, Thomas},
  journal={arXiv preprint arXiv:2012.08503},
  year={2020}
}