No Pose, No Problem

Surprisingly Simple 3D Gaussian Splats
from Sparse Unposed Images

Botao Ye, Sifei Liu, Haofei Xu, Xueting Li, Marc Pollefeys, Ming-Hsuan Yang, Songyou Peng

ETH Zurich      NVIDIA      Microsoft      UC Merced

NVS Results from 2 Unposed Views

Abstract

We introduce NoPoSplat, a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from unposed sparse multi-view images. Our model, trained exclusively with photometric loss, achieves real-time 3D Gaussian reconstruction during inference. To eliminate the need for accurate pose input during reconstruction, we anchor one input view's local camera coordinates as the canonical space and train the network to predict Gaussian primitives for all views within this space. This approach obviates the need to transform Gaussian primitives from local coordinates into a global coordinate system, thus avoiding the errors introduced by transforming per-frame Gaussians with estimated poses. To resolve the scale ambiguity, we design and compare various intrinsic embedding methods, ultimately opting to convert the camera intrinsics into a token embedding and concatenate it with the image tokens as input to the model, enabling accurate scene scale prediction. We use the reconstructed 3D Gaussians for novel view synthesis and pose estimation, and propose a two-stage coarse-to-fine pipeline for accurate pose estimation. Experimental results demonstrate that our pose-free approach achieves superior novel view synthesis quality compared to pose-required methods, particularly in scenarios with limited input image overlap. For pose estimation, our method, trained without ground-truth depth or explicit matching loss, outperforms state-of-the-art methods by a substantial margin. This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios.
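To make the canonical-space anchoring concrete, below is a minimal PyTorch-style sketch of one training step; the model and renderer interfaces are hypothetical placeholders, not our released code. Input view 1 is fixed as the canonical frame, target poses are expressed relative to it, and supervision is purely photometric.

    # Minimal sketch of canonical-space training (hypothetical interfaces).
    import torch
    import torch.nn.functional as F

    def training_step(model, renderer, images, intrinsics,
                      target_images, target_c2w, anchor_c2w):
        # images:        (B, V, 3, H, W) unposed input views
        # intrinsics:    (B, V, 3, 3)    per-view camera intrinsics K
        # target_images: (B, T, 3, H, W) ground-truth target views
        # target_c2w:    (B, T, 4, 4)    camera-to-world poses of target views
        # anchor_c2w:    (B, 4, 4)       pose of input view 1 (training only)

        # The network predicts Gaussian primitives for ALL input views directly
        # in the local camera frame of view 1 -- no local-to-global transform.
        gaussians = model(images, intrinsics)

        # Express target poses relative to the anchor view so that supervision
        # lives in the same canonical space as the predicted Gaussians.
        rel_c2w = torch.linalg.inv(anchor_c2w).unsqueeze(1) @ target_c2w  # (B, T, 4, 4)

        # Render at the relative target poses and supervise with a purely
        # photometric loss (no depth or correspondence supervision).
        rendered = renderer(gaussians, rel_c2w, intrinsics)
        return F.mse_loss(rendered, target_images)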

Method Overview

Overall Framework of NoPoSplat. Given sparse unposed images, our method directly reconstructs Gaussians in a canonical space with a feed-forward network to represent the underlying 3D scene. We also introduce a camera intrinsic token embedding, which is concatenated with the image tokens as input to the network to address the scale ambiguity problem. For simplicity, we illustrate a two-view setup.
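As a rough illustration of the intrinsic token embedding, the sketch below flattens the 3×3 intrinsic matrix and linearly projects it to a single extra token that is concatenated with the image patch tokens; the layer names and any normalization of K are our assumptions, not the exact released implementation.

    import torch
    import torch.nn as nn

    class IntrinsicTokenEmbedding(nn.Module):
        # Maps a 3x3 intrinsic matrix K to one extra input token (a sketch).
        def __init__(self, embed_dim: int):
            super().__init__()
            # K has 9 entries; one linear layer projects them to the token width.
            self.proj = nn.Linear(9, embed_dim)

        def forward(self, patch_tokens: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
            # patch_tokens: (B, N, D) image patch tokens from the ViT encoder
            # K:            (B, 3, 3) camera intrinsics of the corresponding view
            k_token = self.proj(K.flatten(1)).unsqueeze(1)    # (B, 1, D)
            # Concatenating the intrinsic token with the image tokens lets the
            # transformer use focal-length information to resolve scene scale.
            return torch.cat([k_token, patch_tokens], dim=1)  # (B, N+1, D)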

Quantitative Comparisons on NVS

Novel view synthesis performance comparison on the RealEstate10K dataset.
Our method substantially outperforms previous pose-free methods (DUSt3R and Splatt3R) across all overlap settings, and even surpasses SOTA pose-required methods (pixelSplat and MVSplat), especially when the overlap between input views is small.



Out-of-distribution performance comparison.
Our method shows superior zero-shot performance on DTU and ScanNet++ using a model trained solely on RE10K.



Quantitative Comparisons on Pose Estimation

Pose Estimation performance comparison on the RealEstate10K dataset.
NoPoSplat substantially outperforms previous SOTA pose estimation methods.
NoPoSplat does not require an explicit matching loss during training,
eliminating the need for ground-truth depth and allowing it to be trained on video datasets such as RE10K.
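
The two-stage coarse-to-fine pipeline can be sketched as follows; the solver choice and all function names are assumptions, not our released code. Because the predicted Gaussians are pixel-aligned, the Gaussian centers predicted for the second view give 2D-3D correspondences for free: PnP with RANSAC yields a coarse relative pose, which is then refined by minimizing a photometric loss through a differentiable renderer while the Gaussians stay frozen.

    import cv2
    import numpy as np
    import torch

    def estimate_relative_pose(centers, pixels, K, gaussians,
                               target_image, renderer, steps=200):
        # centers: (N, 3) Gaussian centers predicted for view 2's pixels,
        #          expressed in view 1's canonical frame (numpy array)
        # pixels:  (N, 2) pixel coordinates those centers are aligned with
        # K:       (3, 3) intrinsics of view 2

        # Coarse stage: pixel-aligned prediction supplies 2D-3D correspondences
        # without explicit feature matching, so PnP + RANSAC recovers an
        # initial pose directly.
        _, rvec, tvec, _ = cv2.solvePnPRansac(
            centers.astype(np.float64), pixels.astype(np.float64),
            K.astype(np.float64), None)

        # Fine stage: refine the 6-DoF pose (axis-angle + translation) by
        # minimizing an L1 photometric loss through a differentiable renderer,
        # keeping the reconstructed Gaussians frozen.
        pose = torch.tensor(np.concatenate([rvec.ravel(), tvec.ravel()]),
                            dtype=torch.float32, requires_grad=True)
        optimizer = torch.optim.Adam([pose], lr=1e-3)
        for _ in range(steps):
            optimizer.zero_grad()
            rendered = renderer(gaussians, pose)    # differentiable rasterizer assumed
            loss = (rendered - target_image).abs().mean()
            loss.backward()
            optimizer.step()
        return pose.detach()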



Pose Estimation performance comparison on the ACID dataset.
None of the methods are trained on this dataset; NoPoSplat achieves the best performance.



Pose Estimation performance comparison on the ScanNet-1500 dataset.
NoPoSplat is not trained on this dataset, but still achieves the best performance.



Comparisons of Reconstructed Gaussians

Red and green denote the input and target camera views, respectively; the rendered images and depths are shown on the right. The magenta and blue arrows mark distorted or misaligned regions in the baselines' 3D Gaussians. The results show that even without camera poses as input, our method produces higher-quality 3D Gaussians, resulting in better color and depth renderings than the baselines.

Qualitative Comparisons on NVS


Compared to the baselines, our method achieves: 1) more coherent fusion of the input views; 2) superior reconstruction from inputs with limited overlap; 3) better geometry reconstruction in non-overlapping regions.

Comparisons of Cross-dataset Generalization

RE10K → DTU

RE10K → ScanNet++


Our model transfers zero-shot to out-of-distribution data better than SOTA pose-required methods. MVSplat and pixelSplat struggle to smoothly merge the underlying geometry and appearance of different input views, whereas our NoPoSplat renders coherent, holistic novel views thanks to its design of outputting Gaussians in a canonical coordinate system.

More Results on In-the-Wild Data

Photos taken with an iPhone
Sora-generated images
Images from the Tanks&Temples dataset

BibTeX

@article{ye2024noposplat,
  title   = {No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images},
  author  = {Ye, Botao and Liu, Sifei and Xu, Haofei and Li, Xueting and Pollefeys, Marc and Yang, Ming-Hsuan and Peng, Songyou},
  journal = {arXiv preprint arXiv:2410.24207},
  year    = {2024}
}