brevdev/unsloth-notebook-adaptor

> Note: This repository was archived by the owner on Oct 30, 2025. It is now read-only.
πŸš€ Unsloth to NVIDIA Brev Adapter

Badges: Sync Notebooks Β· Test Conversions Β· Python 3.9+ Β· License: LGPL-3.0

Automatically sync and convert Unsloth Colab notebooks to NVIDIA Brev-compatible launchables. This repository provides a production-ready pipeline that:

  • ⚑ Syncs daily with the unslothai/notebooks repository
  • πŸ”„ Automatically converts Colab-specific code to Brev-compatible format
  • 🐳 Generates companion files (requirements.txt, setup.sh, docker-compose.yml, README)
  • πŸ§ͺ Tests all conversions with comprehensive pytest suite
  • πŸ“¦ Creates launchables ready to deploy on NVIDIA Brev

πŸ“‹ What This Does

This adapter transforms Unsloth Colab notebooks for seamless use on NVIDIA Brev by:

  1. Installation Conversion - Replaces unsloth[colab-new] with unsloth[conda]
  2. Magic Commands - Converts ! and % commands to subprocess calls
  3. Storage Adaptation - Removes Google Drive mounting, updates paths to /workspace/
  4. GPU Configuration - Adds device_map="auto" for multi-GPU support
  5. Batch Size Optimization - Adjusts batch sizes for NVIDIA GPUs
  6. Companion Files - Generates setup scripts, Docker configs, and documentation
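To make step 2 concrete, here is a minimal sketch of what a magic-command rewrite can look like. The function name and regex are illustrative assumptions, not the repository's actual implementation (the real conversions live in adapters/colab_to_brev.py):

```python
import re
import shlex

def convert_magic_commands(source: str) -> str:
    """Rewrite Colab shell magics (!cmd) into subprocess calls.

    Illustrative sketch only: % line magics would need similar handling,
    and the converter must also ensure `import subprocess` exists in the cell.
    """
    out = []
    for line in source.splitlines():
        match = re.match(r"^(\s*)!(.+)$", line)
        if match:
            indent, cmd = match.groups()
            # Split the shell command into argv and emit a subprocess call.
            args = ", ".join(repr(a) for a in shlex.split(cmd))
            out.append(f"{indent}subprocess.run([{args}], check=False)")
        else:
            out.append(line)
    return "\n".join(out)
```

For example, `convert_magic_commands("!nvidia-smi")` yields `subprocess.run(['nvidia-smi'], check=False)`.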

πŸ“’ Available Launchables

Below are 181 Unsloth notebooks organized into 129 launchables for NVIDIA Brev, categorized by model type. Each notebook is fully adapted for Brev environments with GPU-optimized configurations, companion files, and ready-to-run setups.

Quick Start: Browse the notebooks below, clone this repo, and deploy on Brev Console or run locally with Docker. View the original Unsloth notebooks here.

Main Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| DeepSeek_R1_0528_Qwen3_(8B)_GRPO | GRPO | L4 (16GB) | View Notebook |
| Gemma3N_(4B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Gemma3_(4B) | | L4 (16GB) | View Notebook |
| Qwen3_(4B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Llama3.1_(8B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Llama3.2_(11B)-Vision | Vision | L4 (16GB) | View Notebook |
| Llama3.2_(1B_and_3B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Meta_Synthetic_Data_Llama3_2_(3B) | Synthetic Data | L4 (16GB) | View Notebook |
| Mistral_v0.3_(7B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Phi_4-Conversational | Conversational | A100-40GB (24GB) | View Notebook |
| Qwen3_(14B)-Reasoning-Conversational | Conversational | A100-40GB (24GB) | View Notebook |
| Sesame_CSM_(1B)-TTS | TTS | T4 (12GB) | View Notebook |

Text-to-Speech (TTS) Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Llasa_TTS_(1B) | TTS | L4 (16GB) | View Notebook |
| Llasa_TTS_(3B) | TTS | L4 (16GB) | View Notebook |
| Orpheus_(3B)-TTS | TTS | L4 (16GB) | View Notebook |
| Oute_TTS_(1B) | TTS | L4 (16GB) | View Notebook |
| Spark_TTS_(0_5B) | TTS | L4 (16GB) | View Notebook |
| Whisper | STT | L4 (16GB) | View Notebook |

Vision (Multimodal) Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Gemma3N_(4B)-Vision | Vision | L4 (16GB) | View Notebook |
| Gemma3_(4B)-Vision | Vision | L4 (16GB) | View Notebook |
| Pixtral_(12B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen2.5_VL_(7B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen2_VL_(7B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen3_VL_(8B)-Vision | Vision | A100-40GB (24GB) | View Notebook |

BERT Notebooks

| Model | GPU Requirements | Notebook Link |
|---|---|---|
| bert_classification | L4 (16GB) | View Notebook |

Specific use-case Notebooks

| Use case | Model | GPU Requirements | Notebook Link |
|---|---|---|---|
| Fine-tuning | Mistral_(7B)-Text_Completion | L4 (16GB) | View Notebook |
| Tool Calling | Qwen2.5_Coder_(1.5B)-Tool_Calling | L4 (16GB) | View Notebook |

GRPO Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Advanced_Llama3_1_(3B)_GRPO_LoRA | GRPO | L4 (16GB) | View Notebook |
| Advanced_Llama3_2_(3B)_GRPO_LoRA | GRPO | L4 (16GB) | View Notebook |
| Gemma3_(1B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Gemma3_(4B)-Vision-GRPO | Vision | L4 (16GB) | View Notebook |
| Llama3.1_(8B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Mistral_v0.3_(7B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Phi_4_(14B)-GRPO | GRPO | A100-40GB (24GB) | View Notebook |
| Qwen2.5_(3B)-GRPO | GRPO | L4 (16GB) | View Notebook |
| Qwen2_5_7B_VL_GRPO | GRPO | L4 (16GB) | View Notebook |
| Qwen3_VL_(8B)-Vision-GRPO | Vision | A100-40GB (24GB) | View Notebook |
| gpt-oss-(20B)-GRPO | GRPO | A100-40GB (24GB) | View Notebook |
| gpt-oss-(20B)_A100-GRPO | GRPO | A100-40GB (24GB) | View Notebook |
| gpt_oss_(20B)_GRPO_BF16 | GRPO | A100-80GB (40GB) | View Notebook |

GPT-OSS Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| GPT_OSS_BNB_(20B)-Inference | Inference | L4 (16GB) | View Notebook |
| GPT_OSS_MXFP4_(20B)-Inference | Inference | L4 (16GB) | View Notebook |
| OpenEnv_gpt_oss_(20B)_Reinforcement_Learning_2048_Game | | A100-40GB (24GB) | View Notebook |
| OpenEnv_gpt_oss_(20B)_Reinforcement_Learning_2048_Game_BF16 | | A100-40GB (24GB) | View Notebook |
| gpt-oss-(120B)_A100-Fine-tuning | | A100-80GB (80GB) | View Notebook |
| gpt-oss-(20B)-Fine-tuning | | A100-40GB (24GB) | View Notebook |
| gpt_oss_(20B)_Reinforcement_Learning_2048_Game | | A100-40GB (24GB) | View Notebook |
| gpt_oss_(20B)_Reinforcement_Learning_2048_Game_BF16 | | A100-40GB (24GB) | View Notebook |
| gpt_oss_(20B)_Reinforcement_Learning_2048_Game_DGX_Spark | | A100-40GB (24GB) | View Notebook |

Gemma Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| CodeGemma_(7B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Gemma2_(2B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Gemma2_(9B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Gemma3N_(2B)-Inference | Inference | L4 (16GB) | View Notebook |
| Gemma3N_(4B)-Audio | | L4 (16GB) | View Notebook |
| Gemma3N_(4B)-Vision | Vision | L4 (16GB) | View Notebook |
| Gemma3_(270M) | | L4 (16GB) | View Notebook |
| Gemma3_(27B)_A100-Conversational | Conversational | L4 (16GB) | View Notebook |
| Gemma3_(4B)-Vision | Vision | L4 (16GB) | View Notebook |
| gemma7b | | L4 (16GB) | View Notebook |

Linear Attention Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Falcon_H1-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Falcon_H1_(0.5B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Liquid_LFM2-Conversational | Conversational | L4 (16GB) | View Notebook |
| Liquid_LFM2_(1.2B)-Conversational | Conversational | L4 (16GB) | View Notebook |

Llama Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Llama3.1_(8B)-Inference | Inference | L4 (16GB) | View Notebook |
| Llama3.2_(1B)-RAFT | RAFT | L4 (16GB) | View Notebook |
| Llama3.3_(70B)_A100-Conversational | Conversational | L4 (16GB) | View Notebook |
| Llama3_(8B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Llama3_(8B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Llama3_(8B)-ORPO | ORPO | L4 (16GB) | View Notebook |
| Llama3_(8B)-Ollama | Ollama | L4 (16GB) | View Notebook |
| Llasa_TTS_(1B) | TTS | L4 (16GB) | View Notebook |
| Llasa_TTS_(3B) | TTS | L4 (16GB) | View Notebook |
| Meta-Synthetic-Data-Llama3.1_(8B) | Synthetic Data | L4 (16GB) | View Notebook |
| TinyLlama_(1.1B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| llama2-finetune-own-data | | L4 (16GB) | View Notebook |
| llama2-finetune | | L4 (16GB) | View Notebook |
| llama2 | | L4 (16GB) | View Notebook |
| llama3-to-ollama | Ollama | L4 (16GB) | View Notebook |
| llama31_law | | L4 (16GB) | View Notebook |
| llama3_finetune_inference | Inference | L4 (16GB) | View Notebook |
| llama3dpo | DPO | L4 (16GB) | View Notebook |
| nvidia_nim_agents_llama3.1 | | L4 (16GB) | View Notebook |
| tensorrt-llama3 | | L4 (16GB) | View Notebook |

Mistral Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Mistral_Nemo_(12B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Mistral_Small_(22B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Mistral_v0.3_(7B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Mistral_v0.3_(7B)-CPT | CPT | L4 (16GB) | View Notebook |
| Pixtral_(12B)-Vision | Vision | L4 (16GB) | View Notebook |
| Zephyr_(7B)-DPO | DPO | L4 (16GB) | View Notebook |
| biomistral-finetune | | L4 (16GB) | View Notebook |
| biomistral | | L4 (16GB) | View Notebook |
| mistral-finetune-nemo | | L4 (16GB) | View Notebook |
| mistral-finetune-own-data | | L4 (16GB) | View Notebook |
| mistral-finetune | | L4 (16GB) | View Notebook |
| tensorrt_mistral | | L4 (16GB) | View Notebook |
| zephyr-chatbot | Conversational | L4 (16GB) | View Notebook |

Orpheus Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Orpheus_(3B)-TTS | TTS | L4 (16GB) | View Notebook |

Oute Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Oute_TTS_(1B) | TTS | L4 (16GB) | View Notebook |

Phi Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Phi_3.5_Mini-Conversational | Conversational | L4 (16GB) | View Notebook |
| Phi_3_Medium-Conversational | Conversational | L4 (16GB) | View Notebook |
| phi2-finetune-own-data | | L4 (16GB) | View Notebook |
| phi2-finetune | | L4 (16GB) | View Notebook |

Qwen Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Qwen2.5_(7B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Qwen2.5_Coder_(14B)-Conversational | Conversational | L4 (16GB) | View Notebook |
| Qwen2.5_VL_(7B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen2_(7B)-Alpaca | Alpaca | L4 (16GB) | View Notebook |
| Qwen2_VL_(7B)-Vision | Vision | L4 (16GB) | View Notebook |
| Qwen3_(14B)-Alpaca | Alpaca | A100-40GB (24GB) | View Notebook |
| Qwen3_(14B) | | A100-40GB (24GB) | View Notebook |
| Qwen3_(32B)_A100-Reasoning-Conversational | Conversational | L4 (16GB) | View Notebook |
| Qwen3_(4B)-Instruct | Instruct | L4 (16GB) | View Notebook |
| Qwen3_(4B)-Thinking | Thinking | L4 (16GB) | View Notebook |
| Qwen3_(4B)_Instruct-QAT | Instruct | L4 (16GB) | View Notebook |
| Qwen3_VL_(8B)-Vision | Vision | A100-40GB (24GB) | View Notebook |

Spark Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Spark_TTS_(0_5B) | TTS | L4 (16GB) | View Notebook |

Whisper Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| Whisper | STT | L4 (16GB) | View Notebook |

Other Notebooks

| Model | Type | GPU Requirements | Notebook Link |
|---|---|---|---|
| CodeForces-cot-Finetune_for_Reasoning_on_CodeForces | Reasoning | L4 (16GB) | View Notebook |
| Granite4.0 | | L4 (16GB) | View Notebook |
| Granite4.0_350M | | L4 (16GB) | View Notebook |
| LoRAwithTensorRT-LLM | | L4 (16GB) | View Notebook |
| Magistral_(24B)-Reasoning-Conversational | Conversational | L4 (16GB) | View Notebook |
| RAG_WIth_Local_NIM_V2 | | L4 (16GB) | View Notebook |
| Synthetic_Data_Hackathon | Synthetic Data | L4 (16GB) | View Notebook |
| Unsloth_Studio | Studio | L4 (16GB) | View Notebook |
| ara | | L4 (16GB) | View Notebook |
| automatic1111-stable-diffusion-ui | | L4 (16GB) | View Notebook |
| baklava | | L4 (16GB) | View Notebook |
| caltech-protein-demo | | L4 (16GB) | View Notebook |
| comfyui | | L4 (16GB) | View Notebook |
| container_vulnerability_analysis | | L4 (16GB) | View Notebook |
| controlnet | | L4 (16GB) | View Notebook |
| dbrx_inference | Inference | L4 (16GB) | View Notebook |
| deploy-to-replicate | | L4 (16GB) | View Notebook |
| diffusion_lora_inference | Inference | L4 (16GB) | View Notebook |
| efficientvit-segmentation | | L4 (16GB) | View Notebook |
| gguf-export | | L4 (16GB) | View Notebook |
| julia-install | | L4 (16GB) | View Notebook |
| llava-finetune | | L4 (16GB) | View Notebook |
| meta-chameleon-model | | L4 (16GB) | View Notebook |
| mixtral-finetune-own-data | | L4 (16GB) | View Notebook |
| mixtral-finetune | | L4 (16GB) | View Notebook |
| molmim-optimization | | L4 (16GB) | View Notebook |
| nemo-reranker | | L4 (16GB) | View Notebook |
| nim-quickstart | | L4 (16GB) | View Notebook |
| ocr-pdf-analysis | | L4 (16GB) | View Notebook |
| oobabooga | | L4 (16GB) | View Notebook |
| pdf-blueprint | | L4 (16GB) | View Notebook |
| question_answer_nemo | | L4 (16GB) | View Notebook |
| rapids_cudf_pandas | | L4 (16GB) | View Notebook |
| setup-k8s | | L4 (16GB) | View Notebook |
| streamingllm-tensorrt | | L4 (16GB) | View Notebook |
| tensorrt-comfyui | | L4 (16GB) | View Notebook |

Note: Deploy buttons will be added by the Brev team as Launchables are created on the platform.

Manual Deploy Instructions

To deploy any converted notebook to Brev:

  1. Go to Brev Console: brev.nvidia.com
  2. Create New Launchable: Navigate to Launchables β†’ Create New
  3. Configure Settings:
    • Repository: https://github.com/brevdev/unsloth-notebook-adaptor
    • Path: converted/{model-name} (see table above for exact model names)
    • GPU Tier: Use recommended tier from table above
    • Port: 8888 (for Jupyter Lab)
  4. Deploy: Click Deploy and access Jupyter at the provided URL

All converted notebooks include:

  • Original notebook file (.ipynb) - Main training notebook
  • requirements.txt - Python dependencies
  • setup.sh - Environment setup script
  • docker-compose.yml - Local Docker configuration
  • README.md - Model-specific documentation
  • .brevconfig.json - Brev metadata

🎯 Quick Start for Users

Deploy on Brev Console

```bash
# Browse available launchables
ls converted/

# Launch a specific model (example)
cd converted/llama-3.1-8b-fine-tuning
cat README.md  # View instructions

# Or use with Docker
docker-compose up
```

All converted notebooks are in the converted/ directory, organized by model name.

πŸ› οΈ Quick Start for Contributors

Local Setup

Important: Use a virtual environment to avoid system package conflicts (especially on macOS).

```bash
# Clone the repository
git clone [email protected]:brevdev/unsloth-notebook-adaptor.git
cd unsloth-notebook-adaptor

# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run tests
pytest tests/ -v
```

Manual Conversion

```bash
# Clone Unsloth notebooks
git clone https://github.com/unslothai/notebooks.git unsloth-notebooks

# Convert all notebooks
python scripts/convert_notebook.py \
  --source unsloth-notebooks/nb \
  --output converted

# Or convert specific notebooks
python scripts/convert_notebook.py \
  --source unsloth-notebooks/nb \
  --output converted \
  --notebooks "Llama_3.1_(8B).ipynb" "Gemma_3_(4B).ipynb"
```

πŸ“ Repository Structure

```text
unsloth-notebook-adaptor/
β”œβ”€β”€ .github/workflows/            # GitHub Actions automation
β”‚   β”œβ”€β”€ sync-and-convert.yml      # Daily sync workflow
β”‚   └── test-conversions.yml      # Test suite on PR
β”œβ”€β”€ adapters/                     # Conversion logic
β”‚   β”œβ”€β”€ base_adapter.py           # Base adapter class
β”‚   β”œβ”€β”€ colab_to_brev.py          # Colabβ†’Brev conversions
β”‚   └── model_configs.py          # Model-specific configs
β”œβ”€β”€ templates/                    # Jinja2 templates
β”‚   β”œβ”€β”€ requirements.txt.jinja2
β”‚   β”œβ”€β”€ setup.sh.jinja2
β”‚   β”œβ”€β”€ docker-compose.yml.jinja2
β”‚   └── README.md.jinja2
β”œβ”€β”€ converted/                    # Output: converted notebooks
β”‚   └── [launchable-name]/
β”‚       β”œβ”€β”€ notebook.ipynb
β”‚       β”œβ”€β”€ requirements.txt
β”‚       β”œβ”€β”€ setup.sh
β”‚       β”œβ”€β”€ docker-compose.yml
β”‚       β”œβ”€β”€ README.md
β”‚       └── .brevconfig.json
β”œβ”€β”€ metadata/                     # Tracking and registry
β”‚   β”œβ”€β”€ launchables.json          # Registry of all launchables
β”‚   └── last_sync.txt             # Last synced commit hash
β”œβ”€β”€ scripts/                      # CLI tools
β”‚   β”œβ”€β”€ convert_notebook.py       # Main conversion script
β”‚   β”œβ”€β”€ compare_notebooks.py      # Detect upstream changes
β”‚   β”œβ”€β”€ generate_metadata.py     # Build registry
β”‚   └── create_summary.py         # GitHub Actions summary
└── tests/                        # Test suite
    β”œβ”€β”€ test_conversions.py
    └── test_notebooks.py
```

πŸ”§ How It Works

1. Daily Sync (Automated)

GitHub Actions runs daily at 6 AM UTC:

  • Checks out the latest Unsloth notebooks
  • Compares against last synced commit
  • Converts any changed notebooks
  • Generates metadata registry
  • Commits and pushes changes

2. Conversion Pipeline

For each notebook:

  1. Load source notebook and model configuration
  2. Apply conversion functions (installation, magic commands, storage, etc.)
  3. Add Brev header cell with model information
  4. Generate companion files from Jinja2 templates
  5. Save adapted notebook and files to converted/[launchable-name]/
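Since .ipynb files are plain JSON, the per-notebook pipeline can be sketched with the standard library alone. This is an assumed shape, not the repository's actual code, and the placeholder rewrite stands in for the full set of conversion functions:

```python
import json
from pathlib import Path

def adapt_notebook(src: Path, dst_dir: Path, header: str) -> None:
    """Sketch of the pipeline: load, convert code cells, prepend a
    header cell, and write the result to the launchable directory."""
    nb = json.loads(src.read_text())
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            text = "".join(cell["source"])
            # Step 2: apply conversions (placeholder for the real chain).
            text = text.replace("unsloth[colab-new]", "unsloth[conda]")
            cell["source"] = text.splitlines(keepends=True)
    # Step 3: add a Brev header cell with model information.
    nb["cells"].insert(0, {"cell_type": "markdown", "metadata": {},
                           "source": [header]})
    dst_dir.mkdir(parents=True, exist_ok=True)
    (dst_dir / "notebook.ipynb").write_text(json.dumps(nb, indent=1))
```

Companion-file generation (step 4) would render the Jinja2 templates into the same `dst_dir`.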

3. Quality Assurance

  • Comprehensive pytest suite tests all conversion functions
  • Integration tests verify end-to-end notebook adaptation
  • GitHub Actions runs tests on all PRs and commits

🎨 Key Conversions

Before (Colab)

```python
# Installation
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

# GPU Check
!nvidia-smi

# Storage
from google.colab import drive
drive.mount('/content/drive')
model_path = '/content/drive/MyDrive/models'

# Model Loading
model = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b",
    max_seq_length=2048,
    load_in_4bit=True
)
```

After (Brev)

```python
# Installation
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "unsloth[conda] @ git+https://github.com/unslothai/unsloth.git"
])

# GPU Check
subprocess.run(['nvidia-smi'], check=False)

# Storage
model_path = '/workspace/models'

# Model Loading
model = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b",
    max_seq_length=2048,
    load_in_4bit=True,
    device_map="auto"  # Added for multi-GPU support
)
```

πŸ¦™ Supported Models

Language Models (LLMs)

  • gpt-oss (20B, 120B) - Reasoning models
  • Llama 3.1 (8B), Llama 3.2 (1B, 3B) - Text generation
  • Gemma 3 (1B, 4B, 27B), Gemma 3n (E4B) - Multimodal
  • Qwen3 (4B, 14B, 32B) - Text generation
  • Phi-4 (14B) - Reasoning

Vision Models (VLMs)

  • Llama 3.2 Vision (11B)
  • Qwen3-VL (8B)
  • Gemma 3 Vision (4B)

Audio Models

  • Whisper Large V3 - Speech-to-Text (STT)
  • Orpheus-TTS (3B) - Text-to-Speech
  • Sesame-CSM (1B) - Text-to-Speech

Reinforcement Learning (GRPO)

  • gpt-oss-20b GRPO
  • Qwen3-VL GRPO - Vision RL
  • Gemma 3 GRPO
  • Llama 3.2 GRPO
  • Phi-4 GRPO

See adapters/model_configs.py for complete list with GPU requirements.

πŸ§ͺ Testing

```bash
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=adapters --cov-report=term --cov-report=html

# Run specific test file
pytest tests/test_conversions.py -v

# Run specific test
pytest tests/test_conversions.py::test_convert_installation -v
```

🀝 Contributing

We welcome contributions! Here's how to help:

  1. Add New Models - Update adapters/model_configs.py
  2. Improve Conversions - Enhance conversion functions in adapters/colab_to_brev.py
  3. Fix Bugs - Submit PRs with test coverage
  4. Report Issues - Use GitHub Issues

Development Workflow

```bash
# Create a feature branch
git checkout -b feature/my-improvement

# Make changes and test
pytest tests/ -v

# Commit with conventional commits
git commit -m "feat: add support for new model"

# Push and create PR
git push origin feature/my-improvement
```

πŸ“Š Metadata Registry

The metadata/launchables.json file contains a complete registry of all converted launchables:

```json
{
  "version": "1.0.0",
  "generated_at": "2025-10-20T12:00:00Z",
  "total_launchables": 25,
  "launchables": [
    {
      "id": "llama-3.1-8b-fine-tuning",
      "name": "Llama 3.1 (8B)",
      "description": "Fine-tune Llama 3.1 (8B) with Unsloth on NVIDIA GPUs",
      "notebook": "notebook.ipynb",
      "path": "llama-3.1-8b-fine-tuning",
      "gpu": {
        "tier": "L4",
        "min_vram_gb": 16,
        "multi_gpu": false
      },
      "tags": ["unsloth", "fine-tuning", "text-generation"],
      "upstream": {
        "source": "unslothai/notebooks",
        "notebook_url": "https://colab.research.google.com/...",
        "last_synced": "2025-10-20T12:00:00Z"
      },
      "files": [...]
    }
  ]
}
```
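Because the registry is plain JSON, tooling can consume it directly. For example, a small helper could list launchables that fit a given GPU tier; `launchables_for_tier` is hypothetical and not part of the repository's scripts:

```python
import json
from pathlib import Path

def launchables_for_tier(registry_path: str, tier: str) -> list:
    """Return the ids of launchables whose GPU tier matches `tier`."""
    registry = json.loads(Path(registry_path).read_text())
    return [
        entry["id"]
        for entry in registry["launchables"]
        if entry["gpu"]["tier"] == tier
    ]
```

Calling it with the registry above and tier `"L4"` would return `["llama-3.1-8b-fine-tuning"]`.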


πŸ“„ License

This project is licensed under LGPL-3.0; see the LICENSE file for details.

The converted notebooks maintain their original licenses from the Unsloth project.

πŸ™ Acknowledgments

  • Unsloth AI for the amazing fine-tuning framework and notebooks
  • NVIDIA Brev for providing the GPU infrastructure platform
  • All contributors to the Unsloth and Brev communities

Built with ❀️ by the Brev team | Brev | Unsloth
