Large-scale Indoor 3D Reconstruction from 360° Images
Problem
Reconstruct large-scale indoor environments from 360° imagery for virtual tours, remaining robust to sparse viewpoints and imperfect capture conditions.
Method
The system combines panoramic image alignment, multi-view geometry, learned depth estimation, and depth fusion to recover consistent scene structure across wide indoor spaces.
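As an illustration of the depth-fusion stage, here is a minimal sketch of confidence-weighted fusion of per-view depth maps, assuming the maps have already been aligned into a common reference panorama. The function name `fuse_depths` and the depth/confidence representation are illustrative assumptions, not the project's actual API.

```python
def fuse_depths(depth_maps, conf_maps, min_conf=0.1):
    """Fuse aligned per-view depth maps into a single map.

    depth_maps: list of 2D lists of depths (None where a view has no estimate).
    conf_maps:  matching 2D lists of confidences in [0, 1].
    Returns a 2D list holding the confidence-weighted mean depth per pixel,
    or None where no view exceeded min_conf.
    """
    h, w = len(depth_maps[0]), len(depth_maps[0][0])
    fused = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for depth, conf in zip(depth_maps, conf_maps):
                d, c = depth[y][x], conf[y][x]
                if d is not None and c >= min_conf:
                    num += c * d
                    den += c
            if den > 0:
                fused[y][x] = num / den
    return fused


# Tiny usage example: two 1x2 depth maps; the second pixel is only
# observed (with moderate confidence) in the second view.
d1, c1 = [[2.0, None]], [[1.0, 0.0]]
d2, c2 = [[4.0, 3.0]], [[1.0, 0.5]]
fused = fuse_depths([d1, d2], [c1, c2])
# fused[0][0] averages 2.0 and 4.0 with equal weight -> 3.0
# fused[0][1] falls back to the single valid estimate -> 3.0
```

In a real pipeline this per-pixel weighting would typically be replaced by volumetric integration (e.g. TSDF fusion), but the core idea of down-weighting low-confidence depth estimates is the same.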
My Role
I designed the reconstruction pipeline, focusing on camera pose refinement, depth fusion, and keeping processing practical on larger datasets.
Focus
- Robustness under sparse views
- Stable alignment across panoramas
- Scalable processing for larger scenes