
# MLX Models for Apple Silicon

🚀 Production-ready MLX implementations of popular AI models, optimized for Apple Silicon (M1/M2/M3/M4)

License: MIT · Python 3.9+ · MLX

## 🌟 Available Models

### 1. PaddleOCR-VL-MLX

*World's first MLX-native OCR model*

- 🔗 Model on Hugging Face
- 📦 Size: ~2 GB
- ⚡ Speed: 2-3 s per image on M4 Max
- 🎯 Use case: document digitization, receipt processing, academic paper OCR
- 🌐 Web UI available — a browser-based interface!

### 2. Hunyuan-MT-Chimera-7B-MLX-Q8

*8-bit quantized multilingual translation model*

- 🔗 Model on Hugging Face
- 📦 Size: 4.2 GB (70% smaller than the original)
- ⚡ Speed: 25 tokens/s on M4 Max
- 🌍 Languages: 200+
- 🎯 Use case: document translation, real-time translation, multilingual chat

## 🚀 Quick Start

### Installation

```bash
# Install MLX and dependencies
pip install mlx mlx-lm transformers pillow

# Clone this repo for examples
git clone https://github.com/biabia-55/mlx-models.git
cd mlx-models
```
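Before running the examples, it can help to confirm the dependencies are importable; a small stdlib-only check (the package list mirrors the `pip install` line above):

```python
import importlib.util

def missing_packages(names):
    """Return the required packages that are not importable."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Note: pillow installs as "PIL", and mlx-lm as "mlx_lm"
missing = missing_packages(["mlx", "mlx_lm", "transformers", "PIL"])
if missing:
    print("Missing packages:", ", ".join(missing))
```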

### OCR Example

```python
from transformers import AutoTokenizer
from PIL import Image

# Load the tokenizer (the model itself is loaded in the full examples)
tokenizer = AutoTokenizer.from_pretrained(
    "biabia-55/PaddleOCR-VL-MLX",
    trust_remote_code=True,
)

# Load the input image
image = Image.open("document.jpg")
# See paddleocr-vl-mlx/examples/ for complete inference code
```
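The full pipeline lives in the examples directory; one step worth sketching here is input normalization. A small PIL helper that forces RGB and caps resolution — the 1536 px cap is our illustrative choice, not a documented model requirement:

```python
from PIL import Image

def prepare_image(path, max_side=1536):
    """Open an image, convert to RGB, and cap the longest side (keeps aspect ratio)."""
    img = Image.open(path).convert("RGB")
    scale = max_side / max(img.size)
    if scale < 1:
        img = img.resize((round(img.width * scale), round(img.height * scale)))
    return img
```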

### Translation Example

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer
model, tokenizer = load("biabia-55/Hunyuan-MT-Chimera-7B-MLX-Q8")

# Translate
prompt = "Translate to French: Hello, world!"
result = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(result)
```
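Since `generate` is steered entirely by the prompt text, batching over target languages is just string templating; a tiny helper (the function name is ours, the template matches the example above):

```python
def translation_prompts(text, languages):
    """Build one 'Translate to <lang>: <text>' prompt per target language."""
    return [f"Translate to {lang}: {text}" for lang in languages]

prompts = translation_prompts("Hello, world!", ["French", "German", "Japanese"])
# Each prompt is then passed to generate(model, tokenizer, prompt=..., max_tokens=256)
```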

## 📊 Performance Benchmarks

Tested on an M4 Max (128 GB RAM):

| Model | Task | Speed | Memory | vs PyTorch |
|---|---|---|---|---|
| PaddleOCR-VL-MLX | OCR | 2-3 s/image | ~4 GB | 2.5× faster |
| Hunyuan-MT-MLX-Q8 | Translation | 25 tok/s | ~8 GB | 1.67× faster |
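The decode speed translates directly into wall-clock estimates; a quick back-of-envelope helper using the table's 25 tok/s figure:

```python
def eta_seconds(n_tokens, tokens_per_s=25.0):
    """Estimated generation time for n_tokens at a fixed decode speed."""
    return n_tokens / tokens_per_s

print(f"1,000-token document: ~{eta_seconds(1000):.0f} s")  # ~40 s
```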

๐Ÿ“ Repository Structure

mlx-models/
โ”œโ”€โ”€ paddleocr-vl-mlx/          # PaddleOCR-VL examples and tools
โ”‚   โ”œโ”€โ”€ examples/              # Usage examples
โ”‚   โ”œโ”€โ”€ conversion/            # PyTorch โ†’ MLX conversion scripts
โ”‚   โ””โ”€โ”€ README.md
โ”œโ”€โ”€ hunyuan-mt-mlx/            # Hunyuan-MT examples and tools
โ”‚   โ”œโ”€โ”€ examples/              # Usage examples
โ”‚   โ”œโ”€โ”€ quantization/          # Quantization scripts
โ”‚   โ””โ”€โ”€ README.md
โ”œโ”€โ”€ docs/                      # Documentation
โ”‚   โ”œโ”€โ”€ performance_benchmarks.md
โ”‚   โ”œโ”€โ”€ conversion_guide.md
โ”‚   โ””โ”€โ”€ faq.md
โ””โ”€โ”€ README.md                  # This file

## 🎯 Use Cases

### Document Processing Pipeline

```python
# See paddleocr-vl-mlx/examples/document_pipeline.py
from paddleocr_mlx import OCRPipeline

pipeline = OCRPipeline()
result = pipeline.process("document.pdf")
# Preserves formatting, tables, and layout
```

### Real-time Translation

```python
# See hunyuan-mt-mlx/examples/streaming_translation.py
from hunyuan_mlx import StreamingTranslator

translator = StreamingTranslator()
for chunk in translator.translate_stream("Long text..."):
    print(chunk, end="", flush=True)
```
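`StreamingTranslator` is defined in the repo's example script, not in this README; the consuming pattern only assumes a generator that yields partial output as it becomes available. A toy stand-in showing the same interface:

```python
def stream_chunks(text, chunk_size=64):
    """Toy stand-in for translate_stream: yield output piece by piece."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

for chunk in stream_chunks("Long text..." * 10):
    print(chunk, end="", flush=True)
```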

### Batch Processing

```python
# See the examples/ directories for batch processing of multiple files,
# optimized for throughput on Apple Silicon.
```
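A common first step for batch OCR is simply gathering the input files; a stdlib-only sketch (the extensions and folder name are illustrative):

```python
from pathlib import Path

def collect_images(folder, exts=(".jpg", ".jpeg", ".png")):
    """Gather image files in a folder, sorted for reproducible ordering."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in exts)

# for path in collect_images("scans/"):
#     result = pipeline.process(str(path))  # pipeline as in the OCR use case above
```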

๐Ÿ› ๏ธ Development

Converting Your Own Models

See docs/conversion_guide.md for detailed instructions on converting PyTorch models to MLX.
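The guide covers the full procedure; at its core, conversion means exporting the PyTorch state dict into a container MLX can read (`mlx.core.load` accepts `.npz` and `.safetensors` files). A minimal NumPy-only sketch, assuming the weights are already plain arrays:

```python
import numpy as np

def export_for_mlx(state_dict, out_path):
    """Save a {name: array} weight dict as .npz, loadable via mlx.core.load().
    Real conversions also need dtype and layout checks (e.g. conv weight order)."""
    np.savez(out_path, **{k: np.asarray(v) for k, v in state_dict.items()})

export_for_mlx({"layers.0.weight": np.zeros((4, 4), dtype=np.float16)},
               "weights.npz")
```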

### Running Tests

```bash
python -m pytest tests/
```

## 📚 Documentation

- docs/performance_benchmarks.md
- docs/conversion_guide.md
- docs/faq.md

๐Ÿค Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.

๐Ÿ™ Acknowledgments

๐Ÿ“ฎ Contact

Note: Models are hosted on Hugging Face under gamhtoi, while code and documentation are on GitHub under biabia-55. Both accounts belong to the same author.

โญ Star History

If you find this project useful, please consider giving it a star!


Made with โค๏ธ for the Apple Silicon community
