Rich Features, Clean Code
- [ ] Improve the Examples section.
- [ ] Clearly list the features that are already integrated and those planned for future integration.
- [ ] Build proper docs to replace the current collection of README files.
Prerequisite: PyTorch.
```bash
# 1. Environment setup
# Example only; Python and PyTorch version requirements are flexible.
conda create -n friendly-splat python=3.10 -y
conda activate friendly-splat
pip install torch==2.4.0 torchvision==0.19.0 --index-url https://download.pytorch.org/whl/cu121

# 2. Clone and install
git clone --recursive https://github.com/AshadowZ/FriendlySplat.git
cd FriendlySplat

# Basic install (train & viewer)
pip install ninja
pip install -e ".[train,viewer]" --no-build-isolation

# OR install the full toolchain
# pip install -e ".[train,viewer,mesh,segment,sfm,priors]" --no-build-isolation
```

Tips & Notes:
- **Faster Installation**: We highly recommend installing `uv` and replacing `pip` with `uv pip` in the commands above.
- **CUDA Build**: The `--no-build-isolation` flag is required for `gsplat` to properly reuse your local PyTorch/CUDA setup.
- **Extra Dependencies**: Some tools require additional setup (e.g., the `sfm` extra requires HLOC). Please check the respective subfolder docs, such as `tools/sfm/README.md`.
To build and run FriendlySplat using Docker, please follow the steps below:
Navigate to the project root directory and execute the build command. The example below uses `TORCH_CUDA_ARCH_LIST="8.9"`, which targets the RTX 4090.
```bash
docker build --build-arg TORCH_CUDA_ARCH_LIST="8.9" -t friendlysplat:latest .
```

- **GPU Architecture**: If you are using a different graphics card, check your GPU architecture via `nvidia-smi` or NVIDIA's compute capability table, and update `TORCH_CUDA_ARCH_LIST` to match your specific hardware.
- **Driver Requirements**: Regardless of your GPU, ensure that your host NVIDIA driver is >= 530.30 for CUDA runtime compatibility with the container.
- **Large Files Warning**: Use the `.dockerignore` file to exclude large files that are not required for building the Docker environment (e.g., datasets, checkpoints, outputs). This prevents Out-Of-Memory (OOM) crashes during the "transferring context" phase (often surfacing as `rpc error: ... EOF`), excessively long build times, and bloated image sizes.
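As an illustrative sketch, a `.dockerignore` along these lines keeps the build context small. The folder names and patterns here are assumptions; match them to where your datasets and outputs actually live:

```
# Keep datasets, checkpoints, and outputs out of the Docker build context
data/
datasets/
outputs/
results/
*.ckpt
*.ply
```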
After a successful build, you can start the container using the following command (replace `/path/to/FriendlySplat` with your local FriendlySplat project path and `/path/to/your/datasets` with your local dataset path):

```bash
docker run --gpus all -it --rm \
  -v /path/to/your/datasets:/data \
  -v /path/to/FriendlySplat:/app/FriendlySplat \
  -p 8080:8080 \
  --shm-size=8g \
  friendlysplat:latest
```

Important Notes for Development:

- **Hot-Reloading Code**: The `-v` flag maps your local source code directly into the container. If your local changes do not affect project dependencies, this mapping lets you verify them without rebuilding the image.
- **Handling C-Extensions**: Mounting local source code can overwrite files generated during the image build, such as compiled artifacts like `gsplat/csrc.so`. `entrypoint.sh` restores these critical files from a protected location inside the image, effectively patching the mounted directory so the modules remain usable.
- **Rebuilding on Dependency Changes**: If your local modifications do break or change the original dependency relationships (e.g., updating `pyproject.toml` or `setup.py`), you must re-run the `docker build` command to update the system dependencies within the image.
- **Shared Memory**: The `--shm-size=8g` parameter is crucial: it raises the container's shared memory from the default 64 MB to 8 GB, which prevents the PyTorch DataLoader from crashing during training.
For a more streamlined development experience, we highly recommend using Docker Compose. It allows you to define all your configurations (including volume mounts for source code, datasets, and model outputs) in a single `docker-compose.yml` file. This drastically reduces the complexity of terminal commands and accelerates your development workflow.
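A minimal sketch of such a file might look like the following. The service name, image tag, and host paths are assumptions to adapt to your setup:

```yaml
# Sketch only; adjust paths and names to your environment.
services:
  friendlysplat:
    image: friendlysplat:latest   # tag from the build step above
    shm_size: "8g"                # same DataLoader consideration as the docker run example
    ports:
      - "8080:8080"
    volumes:
      - /path/to/your/datasets:/data
      - /path/to/FriendlySplat:/app/FriendlySplat
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```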
Before running, please ensure you have updated the volume paths in your `docker-compose.yml` to match your local environment. Then, simply execute:

```bash
docker compose run --rm friendlysplat
```

FriendlySplat expects a COLMAP-style dataset directory under `--io.data-dir`:
```
data_dir/
  images/
  sparse/0/
  depth_prior/   # optional
  normal_prior/  # optional
  dynamic_mask/  # optional
  sky_mask/      # optional
```
- `images/` stores the training images.
- `sparse/0/` stores the COLMAP reconstruction.
- The prior and mask folders are optional and only needed if you enable the corresponding inputs in the config.
- To generate `sparse/0/`, see tools/sfm/README.md. To infer geometry priors such as `depth_prior/` and `normal_prior/`, see tools/geometry_prior/README.md.
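The expected layout can be sanity-checked before training with a small script. This is an illustrative sketch only; the helper name `check_data_dir` is not part of FriendlySplat:

```python
from pathlib import Path

# Folder names taken from the dataset layout above.
REQUIRED = ["images", "sparse/0"]
OPTIONAL = ["depth_prior", "normal_prior", "dynamic_mask", "sky_mask"]

def check_data_dir(data_dir):
    """Report missing required folders and which optional ones are present."""
    root = Path(data_dir)
    missing = [d for d in REQUIRED if not (root / d).is_dir()]
    found = [d for d in OPTIONAL if (root / d).is_dir()]
    return missing, found
```

Running it on a valid scene directory returns an empty `missing` list plus whichever prior/mask folders you have prepared.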
Train on a COLMAP scene:
```bash
fs-train \
  --io.data-dir /path/to/data-dir \
  --io.result-dir /path/to/result-dir \
  --io.device cuda:0 \
  --io.export-splats \
  --io.export-format sog \
  --io.save-ckpt \
  --data.preload none \
  --postprocess.use-bilateral-grid \
  --optim.visible-adam \
  --strategy.impl improved \
  --strategy.densification-budget 1000000
```

`--io.export-format` now accepts `ply`, `ply_compressed`, or `sog`.
If you provide inputs such as `--data.depth-dir-name`, `--data.normal-dir-name`, or `--data.sky-mask-dir-name`, the corresponding regularization terms are enabled automatically during training.
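For example, a prior-regularized run might look like the following command fragment. The flag values are assumptions that mirror the folder names from the dataset layout above; adjust them to your own directory names:

```bash
fs-train \
  --io.data-dir /path/to/data-dir \
  --io.result-dir /path/to/result-dir \
  --data.depth-dir-name depth_prior \
  --data.normal-dir-name normal_prior \
  --data.sky-mask-dir-name sky_mask
```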
See the code for the exact implementation details.
Open the viewer on the latest checkpoint or PLY in a result directory:
```bash
fs-view \
  --result-dir /path/to/result-dir \
  --device cuda \
  --port 8080
```

This repo provides some examples to help you decide which extra tricks are worth enabling, and how to tune the many magic-number-like hyperparameters in `friendly_splat/trainer/configs.py`. This part is still under construction. For now, you can also use Codex / Claude Code to read the repo and help generate a training command for your scene.
FriendlySplat is developed by researchers and contributors from Differential Robotics, FastLab, and Zhejiang University.
Issues and pull requests are welcome. The codebase is still evolving and many features have not been widely tested yet, so issue reports are especially appreciated.
FriendlySplat is built with substantial help from the broader Gaussian Splatting community. We first thank gaussian-splatting and gsplat for efficient CUDA kernels and strong feature integration.
We also thank Improved-GS, AbsGS, taming-3dgs, 3dgs-mcmc, and mini-splatting for high-quality densification implementations and references.
For pruning-related ideas and code references, we thank GNS, speedy-splat, GaussianSpa, and LightGaussian.
We also thank PGSR, 2DGS, GGGS, dn-splatter, mvsanywhere, and 2DGS++ for their explorations of geometry regularization and high-quality code releases.
We further thank CityGaussian for valuable code references on urban-scale scene reconstruction, and InstaScene together with MaskClustering for 2D-to-3D lifting references.
Finally, special thanks to XiaoBin2001 for helpful suggestions throughout development.

