Recognizing Scenes from Novel Viewpoints


Shengyi Qian
Alexander Kirillov
Nikhila Ravi
Devendra Singh Chaplot

Justin Johnson
David F. Fouhey
Georgia Gkioxari

Facebook AI Research, University of Michigan


[pdf]
[code]
[video]


We propose ViewSeg, which takes as input (a) a few images of a novel scene and recognizes the scene from novel viewpoints. The novel viewpoint, in the form of camera coordinates, queries (b) our learnt 3D representation to produce (c) semantic segmentations of the view without access to the view's RGB image. The view query additionally produces (d) depth. ViewSeg trains on hundreds of scenes using multi-view 2D annotations and no 3D supervision.
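
To make the query interface concrete, here is a minimal PyTorch sketch of the input/output contract described above. Every name (ViewSegSketch, encode_scene, query) and the toy encoder and head are illustrative assumptions rather than the released ViewSeg code: a real model lifts 2D features into 3D using the source cameras and renders rays from the novel pose, whereas this stub only fixes the tensor shapes.

import torch
import torch.nn as nn

class ViewSegSketch(nn.Module):
    """Hypothetical interface: encode a few posed RGB views once, then
    answer pose-only queries with semantics and depth (no target RGB)."""

    def __init__(self, num_classes: int = 21, feat_dim: int = 64):
        super().__init__()
        # Placeholder 2D backbone over the source views.
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        # Placeholder head mapping a scene code to class logits + depth.
        self.head = nn.Linear(feat_dim, num_classes + 1)

    def encode_scene(self, src_rgb, src_poses):
        # src_rgb: (V, 3, H, W); src_poses: (V, 4, 4). A real model uses
        # the poses to lift features into 3D; this stub ignores them and
        # pools per-view features into a single scene code.
        feats = self.encoder(src_rgb)          # (V, F, H, W)
        return feats.mean(dim=(0, 2, 3))       # (F,)

    def query(self, scene_code, novel_pose, out_hw=(64, 64)):
        # novel_pose: (4, 4) camera of the unseen view. A real model casts
        # rays from this camera; the stub broadcasts one prediction.
        h, w = out_hw
        out = self.head(scene_code)                          # (C + 1,)
        logits = out[:-1].view(-1, 1, 1).expand(-1, h, w)    # (C, H, W) semantics
        depth = out[-1].view(1, 1).expand(h, w)              # (H, W) depth
        return logits, depth

model = ViewSegSketch()
src_rgb = torch.rand(3, 3, 64, 64)            # three views of a new scene
src_poses = torch.eye(4).repeat(3, 1, 1)
scene_code = model.encode_scene(src_rgb, src_poses)
logits, depth = model.query(scene_code, torch.eye(4))
print(logits.shape, depth.shape)              # torch.Size([21, 64, 64]) torch.Size([64, 64])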

Abstract

Humans can perceive scenes in 3D from a handful of 2D views. For AI agents, the ability to recognize a scene from any viewpoint given only a few images enables them to efficiently interact with the scene and its objects. In this work, we attempt to endow machines with this ability. We propose a model which takes as input a few RGB images of a new scene and recognizes the scene from novel viewpoints by segmenting it into semantic categories, all without access to the RGB images from those views. We pair 2D scene recognition with an implicit 3D representation and learn from multi-view 2D annotations of hundreds of scenes without any 3D supervision beyond camera poses. We experiment on challenging datasets and demonstrate our model's ability to jointly capture semantics and geometry of novel scenes with diverse layouts, object types and shapes.
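
Since the model learns from multi-view 2D annotations only, the training signal reduces to a cross-entropy loss between the segmentation predicted for a held-out annotated view and that view's 2D labels. The sketch below continues from the hypothetical ViewSegSketch above; the optimizer, learning rate, and dummy shapes are illustrative assumptions, not the paper's training recipe. Note that only the 2D segmentation is supervised here; depth is a by-product of the learnt geometry, consistent with "no 3D supervision beyond camera poses".

import torch
import torch.nn.functional as F

# model is the hypothetical ViewSegSketch instance from the sketch above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on one scene: a few posed source views plus a
# held-out view for which only the camera pose and 2D labels are known.
src_rgb = torch.rand(3, 3, 64, 64)             # source RGB views
src_poses = torch.eye(4).repeat(3, 1, 1)       # their cameras
tgt_pose = torch.eye(4)                        # held-out annotated camera
tgt_sem = torch.randint(0, 21, (64, 64))       # its 2D semantic labels

optimizer.zero_grad()
scene_code = model.encode_scene(src_rgb, src_poses)
logits, _depth = model.query(scene_code, tgt_pose)   # depth stays unsupervised
loss = F.cross_entropy(logits.unsqueeze(0), tgt_sem.unsqueeze(0))
loss.backward()
optimizer.step()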


Approach


Results



Acknowledgements

We thank Shuaifeng Zhi for his help with Semantic-NeRF, Oleksandr Maksymets and Yili Zhao for their help with AI-Habitat, and Ang Cao, Chris Rockwell, Linyi Jin, and Nilesh Kulkarni for helpful discussions.