EventNeuS: 3D Mesh Reconstruction from a Single Event Camera
Abstract
Event cameras offer a compelling alternative to RGB cameras in many scenarios. While there are recent works on event-based novel-view synthesis, dense 3D mesh reconstruction remains scarcely explored, and existing event-based techniques are severely limited in their 3D reconstruction accuracy. To address this limitation, we present EventNeuS, a self-supervised neural model for learning 3D representations from monocular colour event streams. Our approach is the first to combine 3D signed distance function (SDF) and density field learning with event-based supervision. Furthermore, we introduce spherical harmonics encodings into our model for enhanced handling of view-dependent effects. EventNeuS outperforms existing approaches by a significant margin, achieving 34% lower Chamfer distance and 31% lower mean absolute error on average compared to the best previous method.
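The abstract does not give implementation details, but the two named ingredients, event-based supervision and spherical harmonics encodings, follow well-established formulations. Below is a minimal sketch of both: a real degree-2 spherical harmonics basis for encoding view directions (the convention used in Plenoxels-style models), and an event-supervision loss that compares the rendered log-intensity change between two timestamps against the change implied by accumulated event polarities under the standard event-generation model with contrast threshold C. The helper names (`sh_basis_deg2`, `event_supervision_loss`) and the threshold value are assumptions for illustration, not the paper's actual code.

```python
import torch

def sh_basis_deg2(dirs: torch.Tensor) -> torch.Tensor:
    """Evaluate the real spherical-harmonic basis up to degree 2
    for unit view directions of shape (N, 3). Returns (N, 9)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return torch.stack([
        torch.full_like(x, 0.28209479177387814),      # l=0
        -0.4886025119029199 * y,                      # l=1, m=-1
        0.4886025119029199 * z,                       # l=1, m=0
        -0.4886025119029199 * x,                      # l=1, m=1
        1.0925484305920792 * x * y,                   # l=2, m=-2
        -1.0925484305920792 * y * z,                  # l=2, m=-1
        0.31539156525252005 * (3.0 * z * z - 1.0),    # l=2, m=0
        -1.0925484305920792 * x * z,                  # l=2, m=1
        0.5462742152960396 * (x * x - y * y),         # l=2, m=2
    ], dim=-1)

def event_supervision_loss(log_I_t0: torch.Tensor,
                           log_I_t1: torch.Tensor,
                           event_integral: torch.Tensor,
                           threshold: float = 0.2) -> torch.Tensor:
    """Event-based supervision under the standard event-generation model:
    an event fires when log-intensity changes by the contrast threshold C,
    so the signed sum of polarities per pixel approximates (delta log I) / C.

    log_I_t0, log_I_t1: rendered log-intensities at sampled pixels, (N,)
    event_integral:     signed sum of event polarities per pixel, (N,)
    threshold:          assumed contrast threshold C (camera-dependent)
    """
    predicted_change = log_I_t1 - log_I_t0
    observed_change = threshold * event_integral
    return torch.mean((predicted_change - observed_change) ** 2)
```

In a NeuS-style pipeline, the 9-dimensional SH features would be concatenated with geometry features as input to the colour branch, and the loss above would replace the photometric term that RGB-supervised models use, since an event camera observes only brightness changes rather than absolute intensities.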