VeOmni: Scaling any Modality Model Training to any Accelerators with PyTorch native Training Framework


🔗 Overview

VeOmni is a versatile framework for both single- and multi-modal pre-training and post-training. It empowers users to seamlessly scale models of any modality across various accelerators, offering both flexibility and user-friendliness.

Our guiding principles when building VeOmni are:

  • Flexibility and Modularity: VeOmni is built with a modular design, allowing users to decouple most components and replace them with their own implementations as needed.

  • Trainer-free: VeOmni avoids rigid, structured trainer classes (e.g., PyTorch Lightning or the HuggingFace Trainer). Instead, VeOmni keeps training scripts linear, exposing the entire training logic to users for maximum transparency and control (see the sketch after this list).

  • Omni model native: VeOmni enables users to effortlessly scale any omni-model across devices and accelerators.

  • Torch native: VeOmni is designed to leverage PyTorch’s native functions to the fullest extent, ensuring maximum compatibility and performance.
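
To make the trainer-free principle concrete, here is a minimal sketch of what a linear training script looks like in this style. Every name here (build_model, build_dataloader, and so on) is a hypothetical placeholder rather than the VeOmni API; the point is simply that the loop lives in the user's script instead of inside a Trainer class.

# A hypothetical, minimal "trainer-free" script: the whole training loop is
# plain user code rather than callbacks hidden inside a Trainer class.
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

def build_model():                      # placeholder, not the VeOmni API
    return torch.nn.Linear(16, 16).to(device)

def build_dataloader(num_steps=100):    # placeholder data source
    for _ in range(num_steps):
        yield torch.randn(8, 16, device=device), torch.randn(8, 16, device=device)

model = build_model()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step, (inputs, targets) in enumerate(build_dataloader()):
    loss = F.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step % 10 == 0:
        print(f"step {step}: loss {loss.item():.4f}")  # logging also stays in user code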

🔥 Latest News

  • [2025/04/03] We release VeOmni.

🔖 Table of Contents

📚 Key Features

  • Parallelism
    • Parallel state by DeviceMesh (see the DeviceMesh sketch after this feature list)
    • Torch FSDP1/2
    • Expert parallelism (experimental)
    • Easy to add new parallelism plans
    • Sequence parallelism
    • Activation offloading
    • Activation checkpointing
  • Kernels
  • Model
    • Any transformers model
    • Multi-modal
      • Qwen2.5-VL
      • Qwen2-VL
      • Seed-Omni
  • Data IO
    • Dynamic batching strategy
    • Omnidata processing
  • Distributed Checkpointing
    • ByteCheckpoint (recommended)
    • Torch Distributed checkpointing
    • DCP merge tools
  • Other tools
    • Profiling tools
    • Easy YAML configuration and argument parsing
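
As a taste of the Torch-native parallel state, the sketch below shows how a 2-D device mesh for data parallelism plus sequence (Ulysses) parallelism can be built with PyTorch's own init_device_mesh. This is a generic PyTorch illustration of the idea, assuming an 8-GPU torchrun launch; it is not VeOmni's actual parallel-state code.

# Generic PyTorch illustration (not VeOmni internals): build a 2-D device mesh
# so FSDP shards along the "dp" axis while sequence parallelism groups ranks
# along the "sp" axis. Launch with torchrun, e.g. on 8 GPUs.
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh

dist.init_process_group("nccl")

world_size = dist.get_world_size()
sp_size = 2                              # e.g. ulysses_parallel_size = 2
dp_size = world_size // sp_size

mesh = init_device_mesh("cuda", (dp_size, sp_size), mesh_dim_names=("dp", "sp"))

dp_group = mesh["dp"].get_group()        # process group for data/FSDP sharding
sp_group = mesh["sp"].get_group()        # process group for sequence parallelism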

🧪 Upcoming Features

🎈 Getting Started

Read the VeOmni Best Practice for more details.

🔧 Installation

Install using PyPI:

pip3 install veomni

Install from source code:

pip3 install -e .

Install veScale (not available yet):

git clone https://github.com/volcengine/veScale.git
cd veScale
pip3 install .

🚀 Quick Start

Users can quickly start training like this:

bash train.sh $TRAIN_SCRIPT $CONFIG.yaml

You can also override arguments in the YAML file by passing them on the command line:

bash train.sh $TRAIN_SCRIPT $CONFIG.yaml \
    --model.model_path PATH/TO/MODEL \
    --data.train_path PATH/TO/DATA \
    --train.global_batch_size GLOBAL_BATCH_SIZE

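Conceptually, each --section.key value flag overrides the matching nested entry of the YAML config (model.model_path, data.train_path, train.global_batch_size, and so on). The snippet below is only a rough illustration of that mapping, not the actual VeOmni argument parser; note that it leaves values as strings.

# Rough illustration (not VeOmni's parser): merge dotted CLI overrides such as
# "--train.global_batch_size 512" into a nested config dict loaded from YAML.
def apply_overrides(config: dict, args: list) -> dict:
    for flag, value in zip(args[::2], args[1::2]):
        keys = flag.lstrip("-").split(".")   # "train.global_batch_size" -> ["train", "global_batch_size"]
        node = config
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value               # values are kept as strings here
    return config

config = {"train": {"global_batch_size": 64, "lr": 1e-5}}
print(apply_overrides(config, ["--train.global_batch_size", "512", "--train.lr", "5e-7"]))
# {'train': {'global_batch_size': '512', 'lr': '5e-7'}}
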
Here is an end-to-end workflow for preparing a subset of the fineweb dataset, continuing pre-training a Qwen2.5 model with sequence parallel size 2 for 20 steps, and then merging the global_step_10 distributed checkpoint into Hugging Face weights with ByteCheckpoint.

  1. Download the fineweb dataset
python3 scripts/download_hf_data.py \
  --repo_id HuggingFaceFW/fineweb \
  --local_dir ./fineweb/ \
  --allow_patterns sample/10BT/*
  2. Download the Qwen2.5 model
python3 scripts/download_hf_model.py \
  --repo_id Qwen/Qwen2.5-7B \
  --local_dir .
  3. Training
bash train.sh tasks/train_torch.py configs/pretrain/qwen2_5.yaml \
    --model.model_path ./Qwen2.5-7B \
    --data.train_path ./fineweb/sample/10BT/ \
    --train.global_batch_size 512 \
    --train.lr 5e-7 \
    --train.ulysses_parallel_size 2 \
    --train.save_steps 10 \
    --train.max_steps 20 \
    --train.output_dir Qwen2.5-7B_CT
  4. Merge checkpoints
python3 scripts/mereg_dcp_to_hf.py \
    --load-dir Qwen2.5-7B_CT/checkpoints/global_step_10 \
    --model_assets_dir Qwen2.5-7B_CT/model_assets \
    --save-dir Qwen2.5-7B_CT/checkpoints/global_step_10/hf_ckpt
  5. Inference
python3 tasks/infer.py \
  --infer.model_path Qwen2.5-7B_CT/checkpoints/global_step_10/hf_ckpt
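
If you prefer to sanity-check the merged weights directly with transformers instead of tasks/infer.py, something like the following generic snippet should also work on the hf_ckpt directory produced above (it reuses the tokenizer of the base model, which continued pre-training does not change):

# Generic transformers check of the merged Hugging Face checkpoint (not a VeOmni script).
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "Qwen2.5-7B_CT/checkpoints/global_step_10/hf_ckpt"
tokenizer = AutoTokenizer.from_pretrained("./Qwen2.5-7B")   # tokenizer of the base model
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))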

🔒 Merge checkpoints

We use ByteCheckpoint to save checkpoints in torch.distributed.checkpoint (DCP) format. You can merge the DCP files into Hugging Face weights with this command:

python3 scripts/mereg_dcp_to_hf.py \
    --load-dir PATH/TO/CHECKPOINTS \
    --model_assets_dir PATH/TO/MODEL_ASSETS \
    --save-dir PATH/TO/SAVE_HF_WEIGHT

For example, if your output_dir is seed_omni and you want to merge the global_step_100 checkpoint into Hugging Face-format weights:

python3 scripts/mereg_dcp_to_hf.py \
    --load-dir seed_omni/checkpoints/global_step_100 \
    --model_assets_dir seed_omni/model_assets \
    --save-dir seed_omni/hf_ckpt
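
If you only need a single consolidated torch.save file from a checkpoint written in the standard DCP layout (for quick inspection rather than Hugging Face export), PyTorch itself ships a small converter. This is a generic PyTorch utility and does not replace the merge script above:

# Generic PyTorch DCP utility (not the VeOmni merge script): consolidate a
# torch.distributed.checkpoint directory into one torch.save file.
from torch.distributed.checkpoint.format_utils import dcp_to_torch_save

dcp_to_torch_save("seed_omni/checkpoints/global_step_100", "seed_omni/consolidated.pt")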

📦 Build Docker

cd docker/
docker compose up -d
docker compose exec VeOmni bash

🧱 Training Examples

PyTorch FSDP2 Qwen2VL

bash train.sh tasks/multimodal/omni/train_qwen2_vl.py configs/multimodal/qwen2_vl/qwen2_vl.yaml

PyTorch FSDP2 Qwen2

bash train.sh tasks/train_torch.py configs/pretrain/qwen2_5.yaml

PyTorch FSDP2 Llama3-8B-Instruct

bash train.sh tasks/train_torch.py configs/pretrain/llama3.yaml

✏️ Supported Models

Model                      Model size                                               Example config file
DeepSeek 2.5/3/R1          236B/671B                                                deepseek.yaml
Llama 3-3.3                1B/3B/8B/70B                                             llama3.yaml
Qwen 2-2.5                 0.5B/1.5B/3B/7B/14B/32B/72B                              qwen2_5.yaml
Qwen2-VL/Qwen2.5-VL/QVQ    2B/3B/7B/32B/72B                                         qwen2_vl.yaml
Seed_omni                  Any foundation model with any omni encoder & decoder    seed_omni.yaml

VeOmni supports all transformers models if you do not need sequence parallelism, expert parallelism, other parallelism plans, or the CUDA kernel optimizations in VeOmni. We designed a model registry mechanism: when a model is registered in VeOmni, the model and optimizer are loaded from VeOmni automatically; otherwise, VeOmni falls back to the modeling file in transformers.
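
As a rough illustration of the registry idea (hypothetical names, not the actual VeOmni API), a registry typically maps an architecture name to a custom modeling class and falls back to transformers for anything unregistered:

# Hypothetical sketch of a model registry with a transformers fallback;
# MODEL_REGISTRY, register_model and MyCustomModel are illustrative names only.
from transformers import AutoModelForCausalLM

MODEL_REGISTRY = {}

def register_model(name):
    def wrapper(cls):
        MODEL_REGISTRY[name] = cls              # remember the custom implementation
        return cls
    return wrapper

@register_model("my_custom_arch")
class MyCustomModel:                            # stand-in for an optimized modeling class
    @classmethod
    def from_pretrained(cls, path):
        print(f"loading custom implementation from {path}")
        return cls()

def load_model(architecture, path):
    if architecture in MODEL_REGISTRY:          # registered: use the custom implementation
        return MODEL_REGISTRY[architecture].from_pretrained(path)
    return AutoModelForCausalLM.from_pretrained(path)  # fallback: plain transformers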

If you want to add a new model, register it in the model registry. See the Support custom model docs for details.

⛰️ Performance

Coming soon with the tech report.

😊 Acknowledgement

Thanks to the following projects for their excellent work:

💡 Awesome work using VeOmni

🎨 Contributing

Contributions from the community are welcome! Please check out CONTRIBUTING.md and our project roadmap (to be updated).

📄 License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.

📝 Citation

If you find VeOmni useful for your research and applications, feel free to give us a star ⭐ or cite us using:

@software{VeOmni,
      title={VeOmni: Scaling any Modality Model Training to any Accelerators with PyTorch native Training Framework},
      author={Qianli Ma and Yaowei Zheng and Zhelun Shi and Zhongkai Zhao and Bin Jia and Ziyue Huang and Zhi Zhang},
      year={2025},
      howpublished={GitHub repository},
      publisher={ByteDance Seed},
      url={https://github.com/ByteDance-Seed/VeOmni},
}

🌱 About ByteDance Seed Team


Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.

You can get to know us better through the following channels 👇

