# llocal-llm-startup

A comprehensive toolkit for setting up and optimizing DeepSeek models locally using Ollama. Designed for both human developers and agentic IDEs (like Cursor, Cline, or Windsurf).
## Quick Start

```bash
# Clone this repository
git clone https://github.com/ccross2/llocal-llm-startup.git
cd llocal-llm-startup

# Run the automated setup script
chmod +x setup.sh
sudo ./setup.sh
```

## Table of Contents

- Features
- Prerequisites
- Installation
- Scripts Overview
- Configuration
- Usage
- Optimization
- Monitoring
- Troubleshooting
## Features

- Automated system analysis and model selection
- Smart optimization for both desktop and laptop environments
- Real-time performance monitoring
- Thermal and power management for laptops
- Benchmark testing suite
- Agentic IDE integration support
## Prerequisites

```bash
# Update package list
sudo apt update

# Install Python 3.10+ and development tools
sudo apt install -y python3.10 python3.10-venv python3-pip build-essential

# Install system monitoring tools
# (nvidia-smi is not an apt package; it ships with the NVIDIA driver)
sudo apt install -y htop lm-sensors
```

All required Python packages are listed in `requirements.txt`:

```text
langchain-community>=0.0.10
psutil>=5.9.0
numpy>=1.24.0
torch>=2.0.0  # Optional, for GPU support
```

## Installation

### Automated Setup

```bash
# Make setup script executable
chmod +x setup.sh

# Run setup script
sudo ./setup.sh
```

### Manual Setup

```bash
# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
```

## Scripts Overview

### setup.sh

Main installation and configuration script:
- Updates system packages
- Installs dependencies
- Sets up Python environment
- Configures Ollama
- Runs initial system analysis
### system_check.sh

Analyzes system capabilities:
- CPU information and extensions
- Memory capacity and speed
- GPU detection and specifications
- Storage availability
- Power management status
- Temperature sensors
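The gist of these checks can be sketched in a few lines of shell. This is an illustration, not the actual script; it assumes the Linux `/proc` interfaces and the optional `nvidia-smi` tool:

```shell
#!/usr/bin/env bash
# Rough sketch of the checks system_check.sh performs (illustrative only).
cores=$(nproc)                                                         # CPU core count
mem_gb=$(awk '/MemTotal/ {printf "%.1f", $2/1048576}' /proc/meminfo)   # total RAM in GB
echo "CPU cores : $cores"
echo "Memory    : ${mem_gb} GB"

# AVX2 support matters for fast CPU inference
grep -qm1 avx2 /proc/cpuinfo && echo "AVX2      : yes" || echo "AVX2      : no"

# GPU details, if the NVIDIA driver is installed
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "GPU       : none detected"
fi
```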
### select_model.sh

Recommends optimal model based on hardware:
- Analyzes system resources
- Suggests appropriate model size
- Provides configuration recommendations
- Adapts to laptop/desktop environments
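The RAM-based part of that recommendation can be approximated with the thresholds from the model table below. This is a hypothetical sketch; the real script may also weigh GPU availability and CPU features:

```shell
#!/usr/bin/env bash
# Hypothetical RAM-based selection mirroring the model table's requirements.
select_model() {
  local mem_gb=$1
  if   [ "$mem_gb" -ge 16 ]; then echo "deepseek-r1:14b"
  elif [ "$mem_gb" -ge 12 ]; then echo "deepseek-r1:8b"
  elif [ "$mem_gb" -ge 8  ]; then echo "deepseek-llm:7b"
  else                            echo "deepseek-coder:6.7b"
  fi
}

# Recommend a model for this machine (Linux /proc/meminfo assumed)
if [ -r /proc/meminfo ]; then
  mem=$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)
  echo "Detected ${mem} GB RAM -> $(select_model "$mem")"
fi
```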
### optimize_system.sh

System optimization script:
- CPU governor management
- Memory optimization
- Power management (laptop-specific)
- Process priority optimization
- Thermal management
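The CPU-governor step might look like the following minimal sketch, assuming the standard cpufreq sysfs layout (the script runs under sudo, so plain redirection into sysfs works):

```shell
#!/usr/bin/env bash
# Sketch of a governor switch as optimize_system.sh might do it (not the actual script).
set_governor() {
  local target=$1 root=${2:-/sys/devices/system/cpu}
  local gov changed=0
  for gov in "$root"/cpu[0-9]*/cpufreq/scaling_governor; do
    [ -w "$gov" ] || continue     # skip if cpufreq is absent or we lack permission
    echo "$target" > "$gov"
    changed=1
  done
  [ "$changed" -eq 1 ] || echo "cpufreq not writable; skipping governor change" >&2
}

# Desktop: maximum performance; laptops would use "powersave" or "schedutil"
set_governor performance
```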
### monitor.sh

Real-time system monitoring:
- CPU usage and temperature
- Memory utilization
- GPU statistics (if available)
- Process status
- Power consumption (laptops)
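A single monitoring sample, as `./monitor.sh` might collect it, can be gathered from the Linux `/proc` interfaces (an illustrative sketch; the real script loops and adds temperature and power readings):

```shell
#!/usr/bin/env bash
# One monitoring sample (Linux /proc assumed; illustrative only).
mem_used_pct=$(awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {printf "%d", (t-a)*100/t}' /proc/meminfo)
load_1min=$(cut -d' ' -f1 /proc/loadavg)
echo "Memory used : ${mem_used_pct}%"
echo "Load (1min) : ${load_1min}"

# GPU utilization and temperature, when an NVIDIA GPU is present
command -v nvidia-smi >/dev/null 2>&1 && \
  nvidia-smi --query-gpu=utilization.gpu,temperature.gpu --format=csv,noheader || true
```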
### llm_benchmark.py

Performance testing suite:
- Response time measurement
- Memory usage tracking
- Token generation speed
- Temperature monitoring
- Resource utilization analysis
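The response-time measurement boils down to timing a model call. A hypothetical shell equivalent (here `sleep 0.2` stands in for a real call such as `ollama run deepseek-coder:6.7b "<prompt>"`; GNU `date` is assumed for nanosecond timestamps):

```shell
#!/usr/bin/env bash
# Hypothetical timing harness in the spirit of llm_benchmark.py's response-time test.
time_cmd_ms() {
  local start end
  start=$(date +%s%N)          # nanoseconds since epoch (GNU date)
  "$@" >/dev/null 2>&1
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

# 'sleep 0.2' is a placeholder for the actual model invocation
elapsed=$(time_cmd_ms sleep 0.2)
echo "response time: ${elapsed} ms"
```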
## Configuration

Available models with hardware requirements:
| Model | RAM Required | Best For | Command |
|---|---|---|---|
| DeepSeek Coder 6.7B | 6–8 GB | Code completion, lightweight usage | `ollama pull deepseek-coder:6.7b` |
| DeepSeek LLM 7B | 8 GB | General purpose, efficient | `ollama pull deepseek-llm:7b` |
| DeepSeek R1 8B | 12 GB | Balanced performance | `ollama pull deepseek-r1:8b` |
| DeepSeek R1 14B | 16 GB | Enhanced capabilities | `ollama pull deepseek-r1:14b` |
| DeepSeek Coder V2 7B | 8 GB | Enhanced code completion | `ollama pull deepseek-coder-v2:7b` |
### Custom Modelfile

```
# Dynamic model selection based on system memory
FROM {{ if gt .Memory 32 }}deepseek-r1:14b{{ else if gt .Memory 16 }}deepseek-r1:8b{{ else }}deepseek-coder:6.7b{{ end }}

# System-specific parameters
PARAMETER num_ctx {{ if gt .Memory 32 }}4096{{ else if gt .Memory 16 }}2048{{ else }}1024{{ end }}
PARAMETER num_predict {{ if gt .Memory 32 }}512{{ else if gt .Memory 16 }}256{{ else }}150{{ end }}

# Quality parameters
PARAMETER temperature 0.7
PARAMETER top_k 40
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.1
```

## Usage

Check the system and capture a model recommendation:

```bash
./system_check.sh > system_info.txt
./select_model.sh > model_selection.txt
```

Optimize the system, then watch it while the model runs:

```bash
sudo ./optimize_system.sh
./monitor.sh
```

Run the benchmark suite:

```bash
python llm_benchmark.py
```

## Optimization

### Desktop Mode

- Full CPU utilization
- Maximum performance governor
- High priority process scheduling
- GPU acceleration when available
### Laptop Mode

- Dynamic CPU governor
- Thermal-aware processing
- Power-efficient thread allocation
- Battery life optimization
## Monitoring

Real-time monitoring includes:
- CPU usage and frequency
- Memory utilization
- GPU statistics
- Temperature tracking
- Power consumption
- Process statistics
## Troubleshooting

### Out of Memory

```bash
# Add swap space
sudo ./optimize_system.sh
```

### High Temperature

```bash
# Monitor temperature
./monitor.sh
```

### Poor Performance

```bash
# Check system status
./system_check.sh

# Adjust model selection
./select_model.sh
```

## Contributing

- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
## License

MIT License - See LICENSE file for details.