einit provides fast and robust initialization for 3D point cloud alignment using ellipsoid analysis. It computes initial transformations by analyzing the ellipsoids of inertia of point clouds and uses KD-tree correspondence recovery for robustness to real-world scenarios.
- Fast: < 1ms for 1000 points
- Accurate: Often achieves excellent alignment without iterative refinement
- OpenCV compatible: Returns standard 4×4 transformation matrices
- Robust: Handles noise, partial overlap, and permuted point clouds
- Permutation invariant: Results are identical regardless of point ordering
- Feature-augmented: Optional per-point features (RGB, intensity, normals) break geometric degeneracy for symmetric shapes
- Configurable: Adjustable parameters for different use cases
- Simple API: One function call to get results
```python
import numpy as np
from einit import register_ellipsoid

# Your point clouds (N x 3 arrays)
src_points = np.random.randn(1000, 3)

# Apply some rigid transformation (random proper rotation R, translation t)
R, _ = np.linalg.qr(np.random.randn(3, 3))
R *= np.linalg.det(R)  # force det(R) = +1 so R is a rotation, not a reflection
t = np.array([0.5, -0.2, 0.1])
dst_points = src_points @ R + t

# Get the transformation matrix
T = register_ellipsoid(src_points, dst_points)
print(T)  # 4x4 homogeneous transformation matrix
```
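The returned matrix is homogeneous, so applying it to an N×3 cloud takes one small helper. This is a generic NumPy sketch — the `apply_transform` name is our own convenience, not part of einit's API:

```python
import numpy as np

def apply_transform(points, T):
    """Apply a 4x4 homogeneous transform T to an N x 3 point cloud."""
    homog = np.hstack([points, np.ones((len(points), 1))])  # N x 4 homogeneous coords
    return (homog @ T.T)[:, :3]                             # back to N x 3
```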
```python
# With custom parameters for robustness control
T = register_ellipsoid(
    src_points, dst_points,
    max_correspondence_distance=0.1,  # Maximum distance for valid correspondences
    min_inlier_fraction=0.7,          # Require 70% valid correspondences
    leafsize=8                        # Smaller KD-tree leaf size
)
```
```python
# With per-point features (e.g. RGB colour or LiDAR intensity)
# Features break geometric degeneracy for symmetric shapes (spheres, cubes)
src_rgb = np.random.rand(1000, 3)   # RGB colour per point
perm = np.random.permutation(1000)
dst_points_shuf = dst_points[perm]  # shuffle destination cloud
dst_rgb = src_rgb[perm]             # carry the same colours along
T = register_ellipsoid(
    src_points, dst_points_shuf,
    src_features=src_rgb,
    dst_features=dst_rgb,
    feature_weight=1.0  # 0.0 = geometry only; typical range 0.1-1.0
)
```

There are multiple installation options. One of them is to install the current release from PyPI:
```bash
pip install einit
```

Another option is to install the most recent builds directly from GitHub:
```bash
pip install git+https://github.com/sashakolpakov/einit.git
```

For development or testing:
```bash
pip install "einit[test] @ git+https://github.com/sashakolpakov/einit.git@main"  # Includes matplotlib, pytest
pip install "einit[all] @ git+https://github.com/sashakolpakov/einit.git@main"   # Everything including docs
```

Or clone and install locally:
```bash
git clone https://github.com/sashakolpakov/einit.git
cd einit
pip install -e .[test]  # Editable install with test dependencies
```

Real-world performance on test datasets:
| Dataset | Points | RMSE | Time |
|---|---|---|---|
| Sphere | 1500 | 0.035 | 0.006 ± 0.002 ms |
| Cube | 3375 | 0.025 | 0.010 ± 0.008 ms |
| Bunny | 992 | 0.02 | 0.047 ± 0.021 ms |
Measured with additive Gaussian noise (standard deviation 0.01–0.02) and partial overlap.
The algorithm proceeds in five steps:
1. Center both point clouds at their centroids
2. Compute each cloud's ellipsoid of inertia via eigendecomposition (optionally augmented with per-point features)
3. Search the 8 reflection combinations of the principal axes using KD-tree correspondence recovery
4. Filter correspondences by distance and inlier fraction
5. Return a 4×4 homogeneous transformation matrix
KD-tree correspondence recovery makes the algorithm robust to point cloud permutations, partial overlaps, and outliers without assuming that points at the same array indices correspond to each other.
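The steps above can be sketched in plain NumPy/SciPy. This is a simplified, geometry-only illustration — not einit's actual implementation — and it omits feature augmentation, distance filtering, and the inlier-fraction check:

```python
import numpy as np
from itertools import product
from scipy.spatial import cKDTree

def ellipsoid_init_sketch(src, dst):
    """Illustrative ellipsoid-based initialization (geometry only)."""
    # 1. Center both clouds at their centroids
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    P, Q = src - mu_s, dst - mu_d

    # 2. Principal axes of each ellipsoid of inertia
    #    (eigh sorts eigenvalues ascending, so axes come out matched)
    _, Vs = np.linalg.eigh(P.T @ P)
    _, Vd = np.linalg.eigh(Q.T @ Q)

    # 3. Search all 8 sign (reflection) combinations of the matched axes
    tree = cKDTree(Q)
    best_rmse, best_R = np.inf, np.eye(3)
    for signs in product([1.0, -1.0], repeat=3):
        R = Vd @ np.diag(signs) @ Vs.T
        # 4. KD-tree correspondence recovery: score by nearest-neighbour RMSE
        d, _ = tree.query(P @ R.T)
        rmse = np.sqrt(np.mean(d ** 2))
        if rmse < best_rmse:
            best_rmse, best_R = rmse, R

    # 5. Assemble the 4x4 homogeneous transform
    T = np.eye(4)
    T[:3, :3] = best_R
    T[:3, 3] = mu_d - best_R @ mu_s
    return T
```

Because correspondences are recovered by nearest-neighbour search rather than array index, shuffling either cloud leaves the result unchanged.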
When per-point features are supplied (src_features, dst_features, feature_weight > 0), the spatial covariance is augmented by a spatial-feature cross-covariance term:
E_aug = P^T P + feature_weight * trace_ratio * (P^T F)(P^T F)^T
This biases the principal axes toward directions where features vary most, breaking eigenvalue degeneracy for symmetric shapes (spheres, cubes, cylinders). The KD-tree step also runs in feature-augmented space so that feature similarity guides nearest-neighbour selection. Features are normalised to keep feature_weight dimensionless and dataset-independent.
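In NumPy, the augmented covariance can be sketched as follows. The feature centering and the `trace_ratio` normalisation (rescaling the feature term to the magnitude of the spatial term) are our assumptions about the formula above, not a copy of einit's code:

```python
import numpy as np

def augmented_covariance(points, features, feature_weight=1.0):
    """Sketch of E_aug = P^T P + w * trace_ratio * (P^T F)(P^T F)^T."""
    P = points - points.mean(axis=0)          # centered N x 3 coordinates
    F = features - features.mean(axis=0)      # centered N x k features
    F = F / (np.linalg.norm(F, axis=0) + 1e-12)  # normalise each feature column

    C = P.T @ F        # 3 x k spatial-feature cross-covariance
    cross = C @ C.T    # 3 x 3 feature term

    # Assumed normalisation: bring the feature term to the scale
    # of the spatial covariance so feature_weight stays dimensionless
    trace_ratio = np.trace(P.T @ P) / (np.trace(cross) + 1e-12)
    return P.T @ P + feature_weight * trace_ratio * cross
```

The eigenvectors of this 3×3 matrix would then replace those of the plain spatial covariance in the principal-axis search.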
```python
import cv2
import numpy as np
from einit import register_ellipsoid

# Get initial transformation
T_init = register_ellipsoid(src, dst)

# Apply the 4x4 transform to the source cloud (homogeneous coordinates)
src_h = np.hstack([src, np.ones((len(src), 1))])
src_aligned = (src_h @ T_init.T)[:, :3]

# Refine alignment with OpenCV
retval, T_refined, inliers = cv2.estimateAffine3D(
    src_aligned.astype(np.float32),
    dst.astype(np.float32)
)
```

The einit_examples/ directory contains demonstrations and visualizations:
Interactive Jupyter Notebook:
```bash
jupyter notebook einit_examples/visual_tests.ipynb
```
Comprehensive visual demonstrations including sphere, cube, and Stanford bunny alignments with performance analysis.
Permutation Invariance Test:
```bash
python einit_examples/point_reoder_test.py
```
Demonstrates that einit correctly handles randomly permuted point clouds.
Partial Overlap Test:
```bash
python einit_examples/rand_overlap_test.py
```
Tests algorithm robustness with randomized partial overlap scenarios using the Stanford bunny.
Bounding Box Overlap Test:
```bash
python einit_examples/bbox_overlap_test.py
```
Evaluates performance on the Stanford bunny with geometric bounding box constraints.
Note: unlike the randomized-overlap scenario, bounding-box overlap is a known failure mode of the algorithm, so a low success rate is expected.
Feature Matching Diagnostic:
```bash
python einit_examples/cube_feature_matching.py
```
Four-panel diagnostic visualisation showing how RGB face colours break the cube's 48-fold geometric symmetry. Produces eigenvalue spectra, per-candidate RMSE/inlier plots, and side-by-side alignment results comparing geometry-only vs feature-augmented registration.
The einit_tests/ directory contains comprehensive test suites validating core functionality:
```bash
# All tests
pytest einit_tests/ -v

# Specific test categories
pytest einit_tests/test_einit.py -v         # Core algorithm tests
pytest einit_tests/test_integration.py -v   # Integration and robustness tests
pytest einit_tests/test_features.py -v      # Feature-augmented benchmarks
```

Test Coverage:
- Core Algorithm Tests (test_einit.py): Basic functionality, permutation invariance, noise robustness, and Stanford bunny dataset validation
- Integration Tests (test_integration.py): End-to-end pipeline testing with real-world scenarios
- Feature Tests (test_features.py): Three real-world feature scenarios (Stanford bunny with LiDAR intensity, hemisphere-flip sphere, coloured cube), a feature-weight sweep, and API-level validation tests
More comprehensive documentation is available.
- Alexander Kolpakov (University of Austin at Texas)
- Michael Werman (Hebrew University of Jerusalem)
- Judah Levin (University of Austin at Texas)
MIT License - see LICENSE file for details.
This work is supported by the Google Cloud Research Award number GCP19980904.
If you use this work, please cite the paper below.
BibTeX:
```bibtex
@article{kolpakov2023approach,
  title={An Approach to Robust ICP Initialization},
  author={Kolpakov, Alexander and Werman, Michael},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume={45},
  number={10},
  pages={12685--12691},
  year={2023},
  publisher={IEEE},
  doi={10.1109/TPAMI.2023.3287468},
  url={https://ieeexplore.ieee.org/document/10155262}
}
```

APA: Kolpakov, A., & Werman, M. (2023). An Approach to Robust ICP Initialization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(10), 12685-12691. https://doi.org/10.1109/TPAMI.2023.3287468
