Automatically sync and convert Unsloth Colab notebooks to NVIDIA Brev-compatible launchables. This repository provides a production-ready pipeline that:
- Syncs daily with the unslothai/notebooks repository
- Automatically converts Colab-specific code to Brev-compatible format
- Generates companion files (requirements.txt, setup.sh, docker-compose.yml, README)
- Tests all conversions with a comprehensive pytest suite
- Creates launchables that are ready to deploy on NVIDIA Brev
This adapter transforms Unsloth Colab notebooks for seamless use on NVIDIA Brev by:
- **Installation Conversion** - Replaces `unsloth[colab-new]` with `unsloth[conda]`
- **Magic Commands** - Converts `!` and `%` commands to `subprocess` calls
- **Storage Adaptation** - Removes Google Drive mounting and updates paths to `/workspace/`
- **GPU Configuration** - Adds `device_map="auto"` for multi-GPU support
- **Batch Size Optimization** - Adjusts batch sizes for NVIDIA GPUs
- **Companion Files** - Generates setup scripts, Docker configs, and documentation
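To make the magic-command rule concrete, here is a minimal sketch of the kind of rewrite applied; `convert_magic_commands` is an illustrative helper, not the adapter's actual code, and it covers only leading-`!` shell lines:

```python
import shlex

def convert_magic_commands(source: str) -> str:
    """Rewrite Colab-style `!command` lines as subprocess calls.

    Simplified sketch: the real adapter also handles `%` line magics
    and pip-specific cases; this version only handles `!` commands.
    """
    out = []
    for line in source.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("!"):
            args = shlex.split(stripped[1:])
            out.append(f"import subprocess; subprocess.run({args!r}, check=False)")
        else:
            out.append(line)
    return "\n".join(out)
```

For example, `convert_magic_commands("!nvidia-smi")` yields a plain-Python `subprocess.run(['nvidia-smi'], check=False)` call that runs identically inside or outside a notebook.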
Below are 181 Unsloth notebooks organized into 129 launchables for NVIDIA Brev, categorized by model type. Each notebook is fully adapted for Brev environments with GPU-optimized configurations, companion files, and ready-to-run setups.
Quick Start: Browse the notebooks below, clone this repo, and deploy on Brev Console or run locally with Docker. View the original Unsloth notebooks here.
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| DeepSeek_R1_0528_Qwen3_(8B)_GRPO | GRPO | L4 (16GB) | View Notebook |
| Gemma3N_(4B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Gemma3_(4B) | | L4 (16GB) | View Notebook |
| Qwen3_(4B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Llama3.1_(8B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Llama3.2_(11B)-Vision | Vision | L4 (16GB) | View Notebook |
| Llama3.2_(1B_and_3B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Meta_Synthetic_Data_Llama3_2_(3B) | Synthetic Data | L4 (16GB) | View Notebook |
| Mistral_v0.3_(7B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Phi_4-Conversational | Conversational | A100-40GB (24GB) | View Notebook |
| Qwen3_(14B)-Reasoning-Conversational | Conversational | A100-40GB (24GB) | View Notebook |
| Sesame_CSM_(1B)-TTS | TTS | T4 (12GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Llasa_TTS_(1B) | TTS | L4 (16GB) | View Notebook |
| Llasa_TTS_(3B) | TTS | L4 (16GB) | View Notebook |
| Orpheus_(3B)-TTS | TTS | L4 (16GB) | View Notebook |
| Oute_TTS_(1B) | TTS | L4 (16GB) | View Notebook |
| Spark_TTS_(0_5B) | TTS | L4 (16GB) | View Notebook |
| Whisper | STT | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Gemma3N_(4B)-Vision | Vision | L4 (16GB) | View Notebook |
| Gemma3_(4B)-Vision | Vision | L4 (16GB) | View Notebook |
| Pixtral_(12B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen2.5_VL_(7B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen2_VL_(7B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen3_VL_(8B)-Vision | Vision | A100-40GB (24GB) | View Notebook |
| Model | GPU Requirements | Notebook Link |
|---|---|---|
| bert_classification | L4 (16GB) | View Notebook |
| Usecase | Model | GPU Requirements | Notebook Link |
|---|---|---|---|
| Fine-tuning | Mistral_(7B)-Text_Completion | L4 (16GB) | View Notebook |
| Tool Calling | Qwen2.5_Coder_(1.5B)-Tool_Calling | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Advanced_Llama3_1_(3B)_GRPO_LoRA | GRPO | L4 (16GB) | View Notebook |
| Advanced_Llama3_2_(3B)_GRPO_LoRA | GRPO | L4 (16GB) | View Notebook |
| Gemma3_(1B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Gemma3_(4B)-Vision-GRPO | Vision | L4 (16GB) | View Notebook |
| Llama3.1_(8B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Mistral_v0.3_(7B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Phi_4_(14B)-GRPO | GRPO | A100-40GB (24GB) | View Notebook |
| Qwen2.5_(3B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Qwen2_5_7B_VL_GRPO | GRPO | L4 (16GB) | View Notebook |
| Qwen3_VL_(8B)-Vision-GRPO | Vision | A100-40GB (24GB) | View Notebook |
| gpt-oss-(20B)-GRPO | GRPO | A100-40GB (24GB) | View Notebook |
| gpt-oss-(20B)_A100-GRPO | GRPO | A100-40GB (24GB) | View Notebook |
| gpt_oss_(20B)_GRPO_BF16 | GRPO | A100-80GB (40GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| GPT_OSS_BNB_(20B)-Inference | Inference | L4 (16GB) | View Notebook |
| GPT_OSS_MXFP4_(20B)-Inference | Inference | L4 (16GB) | View Notebook |
| OpenEnv_gpt_oss_(20B)_Reinforcement_Learning_2048_Game | | A100-40GB (24GB) | View Notebook |
| OpenEnv_gpt_oss_(20B)_Reinforcement_Learning_2048_Game_BF16 | | A100-40GB (24GB) | View Notebook |
| gpt-oss-(120B)_A100-Fine-tuning | Fine-tuning | A100-80GB (80GB) | View Notebook |
| gpt-oss-(20B)-Fine-tuning | Fine-tuning | A100-40GB (24GB) | View Notebook |
| gpt_oss_(20B)_Reinforcement_Learning_2048_Game | | A100-40GB (24GB) | View Notebook |
| gpt_oss_(20B)_Reinforcement_Learning_2048_Game_BF16 | | A100-40GB (24GB) | View Notebook |
| gpt_oss_(20B)_Reinforcement_Learning_2048_Game_DGX_Spark | | A100-40GB (24GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| CodeGemma_(7B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Gemma2_(2B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Gemma2_(9B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Gemma3N_(2B)-Inference | Inference | L4 (16GB) | View Notebook |
| Gemma3N_(4B)-Audio | Audio | L4 (16GB) | View Notebook |
| Gemma3N_(4B)-Vision | Vision | L4 (16GB) | View Notebook |
| Gemma3_(270M) | | L4 (16GB) | View Notebook |
| Gemma3_(27B)_A100-Conversational | Conversational | L4 (16GB) | View Notebook |
| Gemma3_(4B)-Vision | Vision | L4 (16GB) | View Notebook |
| gemma7b | | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Falcon_H1-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Falcon_H1_(0.5B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Liquid_LFM2-Conversational | Conversational | L4 (16GB) | View Notebook |
| Liquid_LFM2_(1.2B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Llama3.1_(8B)-Inference | Inference | L4 (16GB) | View Notebook |
| Llama3.2_(1B)-RAFT | RAFT | L4 (16GB) | View Notebook |
| Llama3.3_(70B)_A100-Conversational | Conversational | L4 (16GB) | View Notebook |
| Llama3_(8B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Llama3_(8B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Llama3_(8B)-ORPO | ORPO | L4 (16GB) | View Notebook |
| Llama3_(8B)-Ollama | Ollama | L4 (16GB) | View Notebook |
| Llasa_TTS_(1B) | TTS | L4 (16GB) | View Notebook |
| Llasa_TTS_(3B) | TTS | L4 (16GB) | View Notebook |
| Meta-Synthetic-Data-Llama3.1_(8B) | Synthetic Data | L4 (16GB) | View Notebook |
| TinyLlama_(1.1B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| llama2-finetune-own-data | | L4 (16GB) | View Notebook |
| llama2-finetune | | L4 (16GB) | View Notebook |
| llama2 | | L4 (16GB) | View Notebook |
| llama3-to-ollama | Ollama | L4 (16GB) | View Notebook |
| llama31_law | | L4 (16GB) | View Notebook |
| llama3_finetune_inference | Inference | L4 (16GB) | View Notebook |
| llama3dpo | DPO | L4 (16GB) | View Notebook |
| nvidia_nim_agents_llama3.1 | | L4 (16GB) | View Notebook |
| tensorrt-llama3 | | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Mistral_Nemo_(12B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Mistral_Small_(22B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Mistral_v0.3_(7B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Mistral_v0.3_(7B)-CPT | CPT | L4 (16GB) | View Notebook |
| Pixtral_(12B)-Vision | Vision | L4 (16GB) | View Notebook |
| Zephyr_(7B)-DPO | DPO | L4 (16GB) | View Notebook |
| biomistral-finetune | | L4 (16GB) | View Notebook |
| biomistral | | L4 (16GB) | View Notebook |
| mistral-finetune-nemo | | L4 (16GB) | View Notebook |
| mistral-finetune-own-data | | L4 (16GB) | View Notebook |
| mistral-finetune | | L4 (16GB) | View Notebook |
| tensorrt_mistral | | L4 (16GB) | View Notebook |
| zephyr-chatbot | Conversational | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Orpheus_(3B)-TTS | TTS | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Oute_TTS_(1B) | TTS | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Phi_3.5_Mini-Conversational | Conversational | L4 (16GB) | View Notebook |
| Phi_3_Medium-Conversational | Conversational | L4 (16GB) | View Notebook |
| phi2-finetune-own-data | | L4 (16GB) | View Notebook |
| phi2-finetune | | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Qwen2.5_(7B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Qwen2.5_Coder_(14B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Qwen2.5_VL_(7B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen2_(7B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Qwen2_VL_(7B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen3_(14B)-Alpaca | Alpaca | A100-40GB (24GB) | View Notebook |
| Qwen3_(14B) | | A100-40GB (24GB) | View Notebook |
| Qwen3_(32B)_A100-Reasoning-Conversational | Conversational | L4 (16GB) | View Notebook |
| Qwen3_(4B)-Instruct | Instruct | L4 (16GB) | View Notebook |
| Qwen3_(4B)-Thinking | Thinking | L4 (16GB) | View Notebook |
| Qwen3_(4B)_Instruct-QAT | Instruct | L4 (16GB) | View Notebook |
| Qwen3_VL_(8B)-Vision | Vision | A100-40GB (24GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Spark_TTS_(0_5B) | TTS | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Whisper | STT | L4 (16GB) | View Notebook |
| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| CodeForces-cot-Finetune_for_Reasoning_on_CodeForces | Reasoning | L4 (16GB) | View Notebook |
| Granite4.0 | | L4 (16GB) | View Notebook |
| Granite4.0_350M | | L4 (16GB) | View Notebook |
| LoRAwithTensorRT-LLM | | L4 (16GB) | View Notebook |
| Magistral_(24B)-Reasoning-Conversational | Conversational | L4 (16GB) | View Notebook |
| RAG_WIth_Local_NIM_V2 | | L4 (16GB) | View Notebook |
| Synthetic_Data_Hackathon | Synthetic Data | L4 (16GB) | View Notebook |
| Unsloth_Studio | Studio | L4 (16GB) | View Notebook |
| ara | | L4 (16GB) | View Notebook |
| automatic1111-stable-diffusion-ui | | L4 (16GB) | View Notebook |
| baklava | | L4 (16GB) | View Notebook |
| caltech-protein-demo | | L4 (16GB) | View Notebook |
| comfyui | | L4 (16GB) | View Notebook |
| container_vulnerability_analysis | | L4 (16GB) | View Notebook |
| controlnet | | L4 (16GB) | View Notebook |
| dbrx_inference | Inference | L4 (16GB) | View Notebook |
| deploy-to-replicate | | L4 (16GB) | View Notebook |
| diffusion_lora_inference | Inference | L4 (16GB) | View Notebook |
| efficientvit-segmentation | | L4 (16GB) | View Notebook |
| gguf-export | | L4 (16GB) | View Notebook |
| julia-install | | L4 (16GB) | View Notebook |
| llava-finetune | | L4 (16GB) | View Notebook |
| meta-chameleon-model | | L4 (16GB) | View Notebook |
| mixtral-finetune-own-data | | L4 (16GB) | View Notebook |
| mixtral-finetune | | L4 (16GB) | View Notebook |
| molmim-optimization | | L4 (16GB) | View Notebook |
| nemo-reranker | | L4 (16GB) | View Notebook |
| nim-quickstart | | L4 (16GB) | View Notebook |
| ocr-pdf-analysis | | L4 (16GB) | View Notebook |
| oobabooga | | L4 (16GB) | View Notebook |
| pdf-blueprint | | L4 (16GB) | View Notebook |
| question_answer_nemo | | L4 (16GB) | View Notebook |
| rapids_cudf_pandas | | L4 (16GB) | View Notebook |
| setup-k8s | | L4 (16GB) | View Notebook |
| streamingllm-tensorrt | | L4 (16GB) | View Notebook |
| tensorrt-comfyui | | L4 (16GB) | View Notebook |
Note: Deploy buttons will be added by the Brev team as Launchables are created on the platform.
To deploy any converted notebook to Brev:
1. **Go to Brev Console**: brev.nvidia.com
2. **Create New Launchable**: Navigate to Launchables → Create New
3. **Configure Settings**:
   - Repository: `https://github.com/brevdev/unsloth-notebook-adaptor`
   - Path: `converted/{model-name}` (see the tables above for exact model names)
   - GPU Tier: Use the recommended tier from the tables above
   - Port: 8888 (for Jupyter Lab)
4. **Deploy**: Click Deploy and access Jupyter at the provided URL
All converted notebooks include:
- Original notebook file (`.ipynb`) - Main training notebook
- `requirements.txt` - Python dependencies
- `setup.sh` - Environment setup script
- `docker-compose.yml` - Local Docker configuration
- `README.md` - Model-specific documentation
- `.brevconfig.json` - Brev metadata
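As a rough illustration of what the Brev metadata might carry, the sketch below builds a `.brevconfig.json` payload whose field names mirror the launchable registry schema shown later in this README; the exact on-disk format is an assumption, not the adapter's documented contract:

```python
import json

# Hypothetical .brevconfig.json contents; field names are borrowed from
# the launchable registry schema and are illustrative assumptions.
brevconfig = {
    "id": "llama-3.1-8b-fine-tuning",
    "name": "Llama 3.1 (8B)",
    "gpu": {"tier": "L4", "min_vram_gb": 16, "multi_gpu": False},
    "port": 8888,
}

# Serialize the way a generator script might write the companion file.
payload = json.dumps(brevconfig, indent=2)
print(payload)
```

The `indent=2` keeps the file diff-friendly, which matters for a repository that commits regenerated companion files on every sync.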
```bash
# Browse available launchables
ls converted/

# Launch a specific model (example)
cd converted/llama-3.1-8b-fine-tuning
cat README.md  # View instructions

# Or use with Docker
docker-compose up
```

All converted notebooks are in the `converted/` directory, organized by model name.
Important: Use a virtual environment to avoid system package conflicts (especially on macOS).
```bash
# Clone the repository
git clone [email protected]:brevdev/unsloth-notebook-adaptor.git
cd unsloth-notebook-adaptor

# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run tests
pytest tests/ -v
```

```bash
# Clone Unsloth notebooks
git clone https://github.com/unslothai/notebooks.git unsloth-notebooks

# Convert all notebooks
python scripts/convert_notebook.py \
  --source unsloth-notebooks/nb \
  --output converted

# Or convert specific notebooks
python scripts/convert_notebook.py \
  --source unsloth-notebooks/nb \
  --output converted \
  --notebooks "Llama_3.1_(8B).ipynb" "Gemma_3_(4B).ipynb"
```

```
unsloth-notebook-adaptor/
├── .github/workflows/            # GitHub Actions automation
│   ├── sync-and-convert.yml      # Daily sync workflow
│   └── test-conversions.yml      # Test suite on PRs
├── adapters/                     # Conversion logic
│   ├── base_adapter.py           # Base adapter class
│   ├── colab_to_brev.py          # Colab→Brev conversions
│   └── model_configs.py          # Model-specific configs
├── templates/                    # Jinja2 templates
│   ├── requirements.txt.jinja2
│   ├── setup.sh.jinja2
│   ├── docker-compose.yml.jinja2
│   └── README.md.jinja2
├── converted/                    # Output: converted notebooks
│   └── [launchable-name]/
│       ├── notebook.ipynb
│       ├── requirements.txt
│       ├── setup.sh
│       ├── docker-compose.yml
│       ├── README.md
│       └── .brevconfig.json
├── metadata/                     # Tracking and registry
│   ├── launchables.json          # Registry of all launchables
│   └── last_sync.txt             # Last synced commit hash
├── scripts/                      # CLI tools
│   ├── convert_notebook.py       # Main conversion script
│   ├── compare_notebooks.py      # Detect upstream changes
│   ├── generate_metadata.py      # Build registry
│   └── create_summary.py         # GitHub Actions summary
└── tests/                        # Test suite
    ├── test_conversions.py
    └── test_notebooks.py
```
GitHub Actions runs daily at 6 AM UTC:
- Checks out the latest Unsloth notebooks
- Compares against last synced commit
- Converts any changed notebooks
- Generates metadata registry
- Commits and pushes changes
For each notebook:
- Load the source notebook and model configuration
- Apply conversion functions (installation, magic commands, storage, etc.)
- Add a Brev header cell with model information
- Generate companion files from Jinja2 templates
- Save the adapted notebook and companion files to `converted/[launchable-name]/`
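In outline, that per-notebook pass can be sketched as follows (simplified: real notebooks are better read with `nbformat`, and the conversion rules are richer than the single substitution shown):

```python
import json
from pathlib import Path

def adapt_notebook(nb: dict, launchable_name: str) -> dict:
    """Apply a minimal Colab-to-Brev pass to a notebook dict (nbformat v4 shape)."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") != "code":
            continue
        src = "".join(cell["source"])
        # One representative rule: swap the Colab extra for the conda extra.
        src = src.replace("unsloth[colab-new]", "unsloth[conda]")
        cell["source"] = src
    # Prepend a Brev header cell with model information.
    header = {"cell_type": "markdown", "metadata": {},
              "source": f"# {launchable_name} on NVIDIA Brev"}
    nb["cells"] = [header] + nb.get("cells", [])
    return nb

def save_launchable(nb: dict, out_dir: Path) -> None:
    """Write the adapted notebook into its converted/[launchable-name]/ folder."""
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "notebook.ipynb").write_text(json.dumps(nb, indent=1))
```

Treating the notebook as plain JSON keeps the sketch dependency-free; the real pipeline layers the remaining rules (magic commands, storage paths, `device_map`) on the same cell loop.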
- Comprehensive pytest suite tests all conversion functions
- Integration tests verify end-to-end notebook adaptation
- GitHub Actions runs tests on all PRs and commits
Before (Colab):

```python
# Installation
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

# GPU Check
!nvidia-smi

# Storage
from google.colab import drive
drive.mount('/content/drive')
model_path = '/content/drive/MyDrive/models'

# Model Loading
model = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b",
    max_seq_length=2048,
    load_in_4bit=True
)
```

After (converted for Brev):

```python
# Installation
import subprocess
import sys
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "unsloth[conda] @ git+https://github.com/unslothai/unsloth.git"
])

# GPU Check
subprocess.run(['nvidia-smi'], check=False)

# Storage
model_path = '/workspace/models'

# Model Loading
model = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b",
    max_seq_length=2048,
    load_in_4bit=True,
    device_map="auto"  # Added for multi-GPU support
)
```

- gpt-oss (20B, 120B) - Reasoning models
- Llama 3.1 (8B), Llama 3.2 (1B, 3B) - Text generation
- Gemma 3 (1B, 4B, 27B), Gemma 3n (E4B) - Multimodal
- Qwen3 (4B, 14B, 32B) - Text generation
- Phi-4 (14B) - Reasoning
- Llama 3.2 Vision (11B)
- Qwen3-VL (8B)
- Gemma 3 Vision (4B)
- Whisper Large V3 - Speech-to-Text (STT)
- Orpheus-TTS (3B) - Text-to-Speech
- Sesame-CSM (1B) - Text-to-Speech
- gpt-oss-20b GRPO
- Qwen3-VL GRPO - Vision RL
- Gemma 3 GRPO
- Llama 3.2 GRPO
- Phi-4 GRPO
See `adapters/model_configs.py` for the complete list with GPU requirements.
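The shape of those per-model configs might look roughly like this; the structure and field names are hypothetical illustrations, not the actual contents of `model_configs.py`:

```python
# Hypothetical per-model configuration entries; the real module may use
# different field names, but the lookup pattern is the same.
MODEL_CONFIGS = {
    "Llama3.1_(8B)-Alpaca": {
        "gpu_tier": "L4", "min_vram_gb": 16, "load_in_4bit": True,
    },
    "Phi_4-Conversational": {
        "gpu_tier": "A100-40GB", "min_vram_gb": 24, "load_in_4bit": True,
    },
}

def gpu_tier(model_name: str) -> str:
    """Look up the recommended GPU tier, defaulting to L4."""
    return MODEL_CONFIGS.get(model_name, {}).get("gpu_tier", "L4")
```

A plain dict keeps new-model contributions to a one-entry diff, which matches the contributing guidance below.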
```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=adapters --cov-report=term --cov-report=html

# Run specific test file
pytest tests/test_conversions.py -v

# Run specific test
pytest tests/test_conversions.py::test_convert_installation -v
```

We welcome contributions! Here's how to help:
- **Add New Models** - Update `adapters/model_configs.py`
- **Improve Conversions** - Enhance conversion functions in `adapters/colab_to_brev.py`
- **Fix Bugs** - Submit PRs with test coverage
- **Report Issues** - Use GitHub Issues
```bash
# Create a feature branch
git checkout -b feature/my-improvement

# Make changes and test
pytest tests/ -v

# Commit with conventional commits
git commit -m "feat: add support for new model"

# Push and create PR
git push origin feature/my-improvement
```

The `metadata/launchables.json` file contains a complete registry of all converted launchables:
```json
{
  "version": "1.0.0",
  "generated_at": "2025-10-20T12:00:00Z",
  "total_launchables": 25,
  "launchables": [
    {
      "id": "llama-3.1-8b-fine-tuning",
      "name": "Llama 3.1 (8B)",
      "description": "Fine-tune Llama 3.1 (8B) with Unsloth on NVIDIA GPUs",
      "notebook": "notebook.ipynb",
      "path": "llama-3.1-8b-fine-tuning",
      "gpu": {
        "tier": "L4",
        "min_vram_gb": 16,
        "multi_gpu": false
      },
      "tags": ["unsloth", "fine-tuning", "text-generation"],
      "upstream": {
        "source": "unslothai/notebooks",
        "notebook_url": "https://colab.research.google.com/...",
        "last_synced": "2025-10-20T12:00:00Z"
      },
      "files": [...]
    }
  ]
}
```

- Unsloth - Website | Docs | GitHub
- NVIDIA Brev - Website | Docs
- Original Notebooks - unslothai/notebooks
- Issues & Support - GitHub Issues
This project is licensed under the LGPL-3.0 License - see the LICENSE file for details.
The converted notebooks maintain their original licenses from the Unsloth project.
- Unsloth AI for the amazing fine-tuning framework and notebooks
- NVIDIA Brev for providing the GPU infrastructure platform
- All contributors to the Unsloth and Brev communities