📄 License • 🖥️ Demo • 🚀 Quick Start • 📊 Performance • 🤗 Models • 📚 Citation
Youtu-Parsing is a specialized document parsing model built on the open-source Youtu-LLM 2B foundation. It extends the base model with a prompt-guided framework and a NaViT-style dynamic visual encoder, offering enhanced parsing of diverse document elements including text, tables, formulas, and charts. The model incorporates an efficient parallel decoding mechanism that significantly accelerates inference, making it practical for real-world document analysis applications. We share Youtu-Parsing with the community to facilitate research and development in document understanding.
- Text Localization: Accurately detects and localizes text regions with pixel-level precision, ensuring no content is missed or misplaced across diverse document layouts.
- Reading Order Restoration: Intelligently reconstructs the logical reading sequence of document content, maintaining proper flow across columns, sections, and pages for coherent understanding.
- Text Recognition: Provides precise text recognition across diverse scenarios.
- Formula Recognition: Automatically converts mathematical expressions to LaTeX format.
- Table Recognition: Automatically detects tables and converts them to HTML format.
- Chart Recognition: Converts charts into Markdown tables, and mind maps and flowcharts into Mermaid format.
- Token Parallelism: Infers multiple tokens simultaneously for accelerated processing, achieving a 5-11x speedup.
- Query Parallelism: Batches multiple queries together to maximize Token Parallelism benefits, providing an additional 2x speedup on top of it (see the toy sketch after this list).
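To make the two mechanisms concrete, here is a toy Python sketch (illustrative only, not the model's actual decoder): with K tokens emitted per forward pass and B queries batched together, each pass yields K×B tokens instead of 1.

```python
# Toy illustration of Token + Query Parallelism (NOT the model's real decoder).
# Assumption: a decoder that emits k tokens per forward pass and accepts
# a batch of query prefixes at once.

def fake_forward(prefixes, k=4):
    """Stand-in for one model forward pass: returns k next tokens per prefix."""
    return [[f"tok{len(p) + i}" for i in range(k)] for p in prefixes]

def decode(queries, target_len=12, k=4):
    prefixes = [[] for _ in queries]   # one growing token list per query
    steps = 0
    while min(len(p) for p in prefixes) < target_len:
        chunks = fake_forward(prefixes, k)   # k tokens x B queries per pass
        for prefix, chunk in zip(prefixes, chunks):
            prefix.extend(chunk)
        steps += 1
    return prefixes, steps

# 3 queries x 12 tokens each in 3 forward passes instead of 36 sequential steps
outputs, steps = decode(["q1", "q2", "q3"])
print(f"decoded {len(outputs)} queries in {steps} forward passes")
```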
The following guide demonstrates how to use Youtu-Parsing with Hugging Face integration for local deployment.
Option 1: Install from Git Repository
```bash
conda create -n youtu_parsing python=3.10
conda activate youtu_parsing
pip install git+https://github.com/TencentCloudADP/youtu-parsing.git#subdirectory=youtu_hf_parser
```

Option 2: Local Development Installation
```bash
git clone https://github.com/TencentCloudADP/youtu-parsing.git
cd youtu-parsing/youtu_hf_parser
pip install -e .
```

Flash Attention is required for optimal performance. Choose the installation method that works best for your environment:
```bash
# 🎯 For CUDA 12.x + PyTorch 2.6 + Python 3.10 + Linux x86_64:
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

# 🔄 Alternative: Install from PyPI (may require compilation)
pip install "flash-attn>=2.7.0"
```

💡 Note: Flash Attention installation is platform-specific. If you encounter issues, please refer to the official installation guide.
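After installation, a quick sanity check can confirm that Flash Attention imports cleanly against your PyTorch build (this assumes a CUDA-capable machine; `flash_attn` typically fails to import without one):

```python
# Verify that flash-attn and torch are installed and compatible
import torch
import flash_attn

print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
print(f"flash-attn {flash_attn.__version__}")
```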
Download the pre-trained model weights from our official repository:
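One way to fetch them is with `huggingface_hub` (a hedged sketch; the repo id below is a placeholder, substitute the actual id from the model page):

```python
# Hedged sketch: download weights via huggingface_hub.
# NOTE: the repo id below is a placeholder, not a confirmed model id.
from huggingface_hub import snapshot_download

model_path = snapshot_download("your-org/youtu-parsing")  # placeholder repo id
print(f"Weights downloaded to: {model_path}")
```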
```python
from youtu_hf_parser import YoutuOCRParserHF

# Placeholder paths -- replace with your local locations
model_path = "/path/to/youtu-parsing-weights"              # downloaded model weights
angle_correct_model_path = "/path/to/angle-correct-model"  # angle-correction model weights
image_path = "/path/to/document.png"                       # input document
output_dir = "./output"                                    # where results are written

# Initialize the parser with model configuration
parser = YoutuOCRParserHF(
    model_path=model_path,
    enable_angle_correct=True,  # Set to False to disable angle correction
    angle_correct_model_path=angle_correct_model_path,
)

# Parse a document (supports images, PDFs, and more)
parser.parse_file(
    input_path=image_path,   # Input document path
    output_dir=output_dir,   # Output directory for results
)

print("✅ Document parsing completed!")
print(f"📄 Results saved to: {output_dir}")
```
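Since `parse_file` handles one input path at a time, batch processing can be a simple loop over a directory, reusing the parser instance from above (a minimal sketch built only on the API shown here; the folder layout is an assumption):

```python
# Minimal sketch: parse every document in a folder with the same parser instance
from pathlib import Path

input_dir = Path("./docs")  # hypothetical folder of input documents
for doc in sorted(input_dir.iterdir()):
    if doc.suffix.lower() in {".pdf", ".png", ".jpg", ".jpeg"}:
        parser.parse_file(
            input_path=str(doc),
            output_dir=f"./output/{doc.stem}",  # one result folder per document
        )
```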
Our comprehensive evaluation demonstrates Youtu-Parsing's superior performance across multiple benchmarks and real-world scenarios.
We extend our gratitude to the following projects and communities that made Youtu-Parsing possible:
If you find Youtu-Parsing useful in your research or applications, please consider citing our work:
@article{youtu-parsing,
title={Youtu-Parsing: Perception, Structuring and Recognition via High-Parallelism Decoding},
author={Tencent Youtu Lab},
year={2026},
eprint={},
archivePrefix={},
primaryClass={},
url={},
}
@article{youtu-vl,
title={Youtu-VL: Unleashing Visual Potential via Unified Vision-Language Supervision},
author={Tencent Youtu Lab},
year={2026},
eprint={2601.19798},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.19798},
}
@article{youtu-llm,
title={Youtu-LLM: Unlocking the Native Agentic Potential for Lightweight Large Language Models},
author={Tencent Youtu Lab},
year={2025},
eprint={2512.24618},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2512.24618},
}