S3-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint
Authors
- Wenqi Yang
- Guanying Chen
- Chaofeng Chen
- Zhenfang Chen
- Kwan-Yee K. Wong
Published on
12/04/2022
Abstract
In this paper, we address the “dual problem” of multi-view scene reconstruction, in which we utilize single-view images captured under different point lights to learn a neural scene representation. Different from existing single-view methods, which can only recover a 2.5D scene representation (i.e., a normal / depth map for the visible surface), our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene. Instead of relying on multi-view photo-consistency, our method exploits two information-rich monocular cues, namely shading and shadow, to infer scene geometry. Experiments on multiple challenging datasets show that our method is capable of recovering the 3D geometry of a scene, including both visible and invisible parts, from single-view images. Thanks to the neural reflectance field representation, our method is robust to depth discontinuities, and it supports applications such as novel-view synthesis and relighting. Our code and model can be found at https://ywq.github.io/s3nerf.
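To make the shading-and-shadow idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a point-light image formation model that couples both cues: a per-point BRDF with cosine and inverse-square terms provides the shading cue, while transmittance accumulated through the learned geometry toward the light provides the shadow cue. The names `brdf`, `density_fn`, `light_visibility`, and `shade_point` are illustrative assumptions, not identifiers from the released code.

```python
import torch
import torch.nn.functional as F

def light_visibility(x, light_pos, density_fn, n_samples=64):
    """Shadow cue: transmittance along the segment from x to the light.

    `density_fn` is an assumed callable mapping (B, S, 3) points to
    (B, S) volume densities of the learned geometry.
    """
    t = torch.linspace(0.05, 0.95, n_samples)                  # samples along the segment
    seg = light_pos - x                                        # (B, 3)
    pts = x[:, None, :] + t[None, :, None] * seg[:, None, :]   # (B, S, 3)
    sigma = density_fn(pts)                                    # (B, S)
    delta = seg.norm(dim=-1, keepdim=True) / n_samples         # per-sample step length
    return torch.exp(-(sigma * delta).sum(-1, keepdim=True))   # transmittance in [0, 1]

def shade_point(x, n, light_pos, cam_pos, brdf, visibility):
    """Shading cue: point-light rendering with inverse-square falloff."""
    to_light = light_pos - x
    dist2 = (to_light ** 2).sum(-1, keepdim=True)              # squared light distance
    l = to_light / dist2.sqrt()                                # unit light direction
    v = F.normalize(cam_pos - x, dim=-1)                       # unit view direction
    cos_theta = (n * l).sum(-1, keepdim=True).clamp(min=0.0)   # foreshortening term
    return brdf(x, n, l, v) * cos_theta * visibility / dist2

# Toy usage: a Lambertian BRDF and an empty (zero-density) medium, so the
# visibility term is 1 and the point is fully lit.
x = torch.zeros(1, 3)
n = torch.tensor([[0.0, 0.0, 1.0]])
light_pos = torch.tensor([[0.0, 0.0, 2.0]])
cam_pos = torch.tensor([[0.0, 1.0, 2.0]])
albedo = torch.tensor([[0.8, 0.6, 0.4]])
brdf = lambda x, n, l, v: albedo / torch.pi
vis = light_visibility(x, light_pos, lambda p: torch.zeros(p.shape[:-1]))
rgb = shade_point(x, n, light_pos, cam_pos, brdf, vis)
```

Under a fixed viewpoint, varying `light_pos` changes both the shading and where shadows fall, which is what makes single-view, multi-light observations informative about geometry; the sketch only illustrates this image formation, not the optimization used in the paper.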
Please cite our work using the BibTeX below.
@inproceedings{yang2022snerf,
  title={S$^3$-Ne{RF}: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint},
  author={Wenqi Yang and Guanying Chen and Chaofeng Chen and Zhenfang Chen and Kwan-Yee K. Wong},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=tvwkeAIcRP8}
}