CVPR 2024 Highlight · Dynamic 360° Neural Fields
DiVa-360: The Dynamic Visual Dataset for Immersive Neural Fields
DiVa-360 is a large-scale, real-world dynamic 360° visual dataset for benchmarking immersive neural fields and egocentric foundation models under complex object-centric and camera-centric motion.
The dataset provides synchronized high-resolution video from 53 calibrated cameras, covering diverse dynamic scenes and motion patterns. By addressing the lack of real-world benchmarks for dynamic 360° view synthesis, DiVa-360 enables rigorous evaluation of state-of-the-art methods such as HyperNeRF, TiNeuVox, and HFGaussian.
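To illustrate what a synchronized 53-camera capture implies for downstream loading, here is a minimal sketch that groups per-camera frames into time-aligned multi-view sets. The directory layout (`cam_00/000000.png`, etc.) is an assumption for illustration only, not the actual DiVa-360 release format:

```python
from pathlib import Path

NUM_CAMERAS = 53  # size of the DiVa-360 capture rig


def synchronized_frame_sets(root: str, num_frames: int) -> list[list[Path]]:
    """Group per-camera frames into synchronized multi-view sets.

    Assumes a hypothetical layout root/cam_XX/FFFFFF.png; the real
    dataset may organize files differently. Each returned entry holds
    the 53 camera views captured at the same time step.
    """
    base = Path(root)
    sets = []
    for t in range(num_frames):
        views = [base / f"cam_{c:02d}" / f"{t:06d}.png" for c in range(NUM_CAMERAS)]
        sets.append(views)
    return sets
```

A set returned for time step `t` can then be fed as one multi-view observation to a dynamic neural field trainer.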
International Collaboration
This project is a collaboration between the Robot Vision team (CNRS / I3S) and Prof. Srinath Sridhar's Interactive Vision Lab (IVL) at Brown University.
Citation
@inproceedings{DiVa360,
  title     = {{DiVa-360}: The Dynamic Visual Dataset for Immersive Neural Fields},
  author    = {Lu, Cheng-You and Zhou, Peisen and Xing, Angela and Pokhariya, Chandradeep
               and Dey, Arnab and Shah, Ishaan N. and Mavidipalli, Rugved and Hu, Dylan
               and Comport, Andrew I. and Chen, Kefan and Sridhar, Srinath},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}