MonoNeRF: Learning Generalizable NeRFs from Monocular Videos without Camera Poses

Yang Fu¹, Ishan Misra², Xiaolong Wang¹
¹University of California San Diego, ²FAIR, Meta AI

Given a short clip of monocular video, we learn a MonoNeRF that can be applied to depth estimation, novel view synthesis, and camera pose estimation.

Abstract

We propose MonoNeRF, a generalizable neural radiance field that can be trained on large-scale monocular videos of camera motion in static scenes, without any ground-truth depth or camera-pose annotations. MonoNeRF follows an autoencoder-based architecture: the encoder estimates the monocular depth and the camera pose, and the decoder constructs a Multiplane NeRF representation from the depth encoder features and renders the input frames with the estimated camera poses. Learning is supervised by the reconstruction error. Once trained, the model can be applied to multiple tasks, including depth estimation, camera pose estimation, and single-image novel view synthesis.

Method
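As a rough illustration of the pipeline described in the abstract, here is a minimal, runnable PyTorch sketch of one training step. It is not the authors' implementation: every design choice below (the tiny CNN encoders, the 32-plane decoder, the back-to-front compositing renderer, the L1 photometric loss, and all module names) is an illustrative assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_PLANES = 32  # number of planes in the Multiplane representation (assumed)

class DepthEncoder(nn.Module):
    # Predicts monocular depth plus a feature map from a single frame.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 64, 3, padding=1)
        self.depth_head = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, x):
        feat = F.relu(self.backbone(x))
        depth = F.softplus(self.depth_head(feat))  # keep depth positive
        return depth, feat

class PoseEncoder(nn.Module):
    # Predicts a 6-DoF relative camera pose between two frames.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6))

    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))  # (B, 6): rotation + translation

class MultiplaneDecoder(nn.Module):
    # Decodes the depth encoder feature into per-plane RGB + alpha.
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(64, N_PLANES * 4, 3, padding=1)

    def forward(self, feat):
        b, _, h, w = feat.shape
        planes = self.net(feat).view(b, N_PLANES, 4, h, w)
        return planes[:, :, :3].sigmoid(), planes[:, :, 3:].sigmoid()  # rgb, alpha

def over_composite(rgb, alpha):
    # Back-to-front alpha compositing of the planes into a single image.
    out = torch.zeros_like(rgb[:, 0])
    for i in range(rgb.shape[1]):
        out = rgb[:, i] * alpha[:, i] + out * (1 - alpha[:, i])
    return out

# One training step, supervised purely by the reconstruction error.
depth_enc, pose_enc, decoder = DepthEncoder(), PoseEncoder(), MultiplaneDecoder()
src, tgt = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
depth, feat = depth_enc(src)
pose = pose_enc(src, tgt)            # estimated camera; a real renderer would warp planes by it
rgb, alpha = decoder(feat)
recon = over_composite(rgb, alpha)
loss = F.l1_loss(recon, tgt)         # photometric reconstruction loss (L1 assumed)
loss.backward()

A faithful renderer would warp each plane into the target view using the estimated pose and the camera intrinsics before compositing; the toy version above composites in place only to keep the sketch self-contained.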

Results

Novel View Synthesis

[Qualitative results: for each of two examples, the input view alongside novel views synthesized by VideoAE and by MonoNeRF.]

Camera Trajectory

[Qualitative results: for each of two input videos, the camera trajectories recovered by VideoAE and by MonoNeRF.]

Depth Estimation

[Qualitative results: two input videos and their estimated depth maps.]

Video

BibTeX


@article{fu2022mononerf,
  title={MonoNeRF: Learning Generalizable NeRFs from Monocular Videos without Camera Poses},
  author={Fu, Yang and Misra, Ishan and Wang, Xiaolong},
  journal={arXiv preprint arXiv:2210.07181},
  year={2022}
}