S³-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint



  • Wenqi Yang
  • Guanying Chen
  • Chaofeng Chen
  • Zhenfang Chen
  • Kwan-Yee K. Wong

Published in Advances in Neural Information Processing Systems (NeurIPS 2022)


In this paper, we address the “dual problem” of multi-view scene reconstruction: we utilize single-view images captured under different point lights to learn a neural scene representation. Unlike existing single-view methods, which can only recover a 2.5D scene representation (i.e., a normal / depth map for the visible surface), our method learns a neural reflectance field that represents both the 3D geometry and the BRDFs of a scene. Instead of relying on multi-view photo-consistency, our method exploits two information-rich monocular cues, namely shading and shadow, to infer scene geometry. Experiments on multiple challenging datasets show that our method is capable of recovering the 3D geometry of a scene, including both its visible and invisible parts, from single-view images. Thanks to the neural reflectance field representation, our method is robust to depth discontinuities, and it supports applications such as novel-view synthesis and relighting. Our code and model can be found at
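To make the shading-and-shadow cue concrete, below is a minimal Python sketch of the kind of image formation model this setup builds on: a near point light with inverse-square falloff, Lambertian shading, and a shadow (visibility) term obtained by marching a ray from the surface point toward the light. This is an illustrative sketch under our own assumptions, not the paper's implementation; the hard-coded sphere SDF and constant albedo stand in for the learned geometry and BRDF networks, and all function names here are hypothetical.

import numpy as np

# Placeholder geometry and material: in the actual method these would be
# learned MLPs mapping a 3D point to geometry and BRDF parameters. A unit
# sphere and a constant Lambertian albedo are used purely for illustration.
def sdf(x):
    # Signed distance to a unit sphere at the origin.
    return np.linalg.norm(x, axis=-1) - 1.0

def albedo(x):
    # Constant diffuse albedo for every point.
    return np.full(x.shape[:-1] + (3,), 0.7)

def normal(x, eps=1e-4):
    # Surface normal as the normalized finite-difference gradient of the SDF.
    grad = np.stack(
        [(sdf(x + o) - sdf(x - o)) / (2 * eps) for o in np.eye(3) * eps],
        axis=-1,
    )
    return grad / np.linalg.norm(grad, axis=-1, keepdims=True)

def light_visibility(x, light_pos, eps=1e-3, max_steps=128):
    # Sphere-trace from the surface point toward the light; if the ray hits
    # geometry before reaching the light, the point is in shadow.
    to_light = light_pos - x
    dist = np.linalg.norm(to_light)
    d = to_light / dist
    t = 10 * eps  # step off the surface to avoid self-intersection
    for _ in range(max_steps):
        if t >= dist:
            return 1.0
        h = float(sdf(x + t * d))
        if h < eps:
            return 0.0
        t += max(h, eps)
    return 1.0

def shade(x, light_pos, intensity=10.0):
    # Radiance under a near point light:
    #   albedo * intensity * max(0, n . w_i) * visibility / distance^2
    # The 1/d^2 falloff and the cast shadow are what make single-view,
    # multi-light observations informative about 3D geometry.
    n = normal(x)
    to_light = light_pos - x
    dist = np.linalg.norm(to_light)
    w_i = to_light / dist
    cos_theta = max(0.0, float(n @ w_i))
    vis = light_visibility(x, light_pos)
    return albedo(x) * intensity * cos_theta * vis / dist**2

# The same surface point observed under two light positions yields two
# different intensities; inverting such constraints is the core idea.
p = np.array([0.0, 0.0, 1.0])                # point on the sphere
print(shade(p, np.array([0.0, 0.0, 3.0])))   # lit head-on
print(shade(p, np.array([0.0, 2.0, 2.0])))   # oblique light, dimmer

Under such a model, each new light position yields an independent shading and shadow constraint on the same pixel, which is why shadows in particular can constrain geometry that is occluded from the single viewpoint.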

Please cite our work using the BibTeX below.

@inproceedings{yang2022s3nerf,
  title={S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint},
  author={Wenqi Yang and Guanying Chen and Chaofeng Chen and Zhenfang Chen and Kwan-Yee K. Wong},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022}
}