IMLS-Splatting: Efficient Mesh Reconstruction from Multi-view Images via Point Representation

SIGGRAPH 2025 (TOG)

Abstract

Multi-view mesh reconstruction has long been a challenging problem in graphics and computer vision. In contrast to recent volumetric rendering methods that extract meshes through post-processing, we propose an end-to-end mesh optimization approach called IMLS-Splatting. Our method leverages the sparsity and flexibility of point clouds to represent the underlying surface efficiently. To this end, we introduce a splatting-based differentiable Implicit Moving Least Squares (IMLS) algorithm that quickly converts point clouds into SDF and texture fields, so that both geometry and appearance can be optimized jointly. In addition, the IMLS representation keeps the reconstructed SDF and mesh continuous and smooth without extra regularization. With this efficient pipeline, our method reconstructs highly detailed meshes in approximately 11 minutes, supports high-quality rendering, and achieves state-of-the-art reconstruction performance.


Approach

We represent 3D scenes as point clouds with neural texture features. Our fast, splatting-based IMLS approach converts the point clouds into SDF and texture feature fields in a differentiable way. Next, 3D meshes with vertex features are extracted using the Marching Cubes algorithm. By integrating differentiable rasterization, the entire pipeline can be optimized end-to-end with multi-view image supervision.
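At the core of this pipeline is the conversion of an oriented point cloud into an SDF. As a rough illustration (not the paper's actual implementation, which splats points into a grid for speed), the classical IMLS formulation defines the signed distance at a query point x as a Gaussian-weighted average of signed point-to-plane distances to the oriented points: f(x) = Σᵢ wᵢ(x) (x − pᵢ)·nᵢ / Σᵢ wᵢ(x), with wᵢ(x) = exp(−‖x − pᵢ‖² / σ²). A minimal NumPy sketch of this standard formulation, with illustrative names and parameters:

```python
import numpy as np

def imls_sdf(query, points, normals, sigma=0.1, eps=1e-8):
    """Evaluate a signed distance at `query` via Implicit Moving Least Squares.

    query:   (3,) query position
    points:  (N, 3) oriented point cloud positions
    normals: (N, 3) unit normals of the points
    sigma:   Gaussian kernel bandwidth (scene-scale dependent)
    """
    diff = query[None, :] - points                 # (N, 3) offsets to each point
    sq_dist = np.sum(diff * diff, axis=1)          # (N,) squared distances
    w = np.exp(-sq_dist / (sigma * sigma))         # Gaussian weights w_i(x)
    plane_dist = np.sum(diff * normals, axis=1)    # signed distance to each tangent plane
    return np.sum(w * plane_dist) / (np.sum(w) + eps)

# Toy usage: oriented points sampled on a unit sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # positions on the sphere
nrm = pts.copy()                                   # outward normals

sdf_center = imls_sdf(np.zeros(3), pts, nrm, sigma=1.0)   # negative: inside
sdf_outside = imls_sdf(np.array([2.0, 0.0, 0.0]), pts, nrm, sigma=1.0)  # positive
```

Because every step is a smooth function of the point positions and normals, gradients from a rendering loss can flow back to the point cloud, which is what makes the end-to-end optimization in the pipeline above possible. In practice, evaluating this naively at every grid vertex is the expensive part; the splatting formulation in the paper addresses exactly that.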


Results