Local LLM Setup with DeepSeek and Ollama πŸš€

A comprehensive toolkit for setting up and optimizing DeepSeek models locally using Ollama. Designed for both human developers and agentic IDEs (like Cursor, Cline, or Windsurf).

Quick Start πŸƒ

# Clone this repository
git clone https://github.com/ccross2/llocal-llm-startup.git
cd llocal-llm-startup

# Run the automated setup script
chmod +x setup.sh
sudo ./setup.sh

Table of Contents πŸ“‘

  1. Features
  2. Prerequisites
  3. Installation
  4. Scripts Overview
  5. Configuration
  6. Usage
  7. Optimization
  8. Monitoring
  9. Troubleshooting

Features ✨

  • Automated system analysis and model selection
  • Smart optimization for both desktop and laptop environments
  • Real-time performance monitoring
  • Thermal and power management for laptops
  • Benchmark testing suite
  • Agentic IDE integration support

Prerequisites πŸ“‹

Required Software

# Update package list
sudo apt update

# Install Python 3.10+ and development tools
sudo apt install -y python3.10 python3.10-venv python3-pip build-essential

# Install system monitoring tools (lm-sensors provides the `sensors` command;
# nvidia-smi ships with the NVIDIA driver, not as a separate apt package)
sudo apt install -y htop lm-sensors

Python Dependencies

All required Python packages are listed in requirements.txt:

langchain-community>=0.0.10
psutil>=5.9.0
numpy>=1.24.0
torch>=2.0.0  # Optional, for GPU support

Installation πŸ’Ώ

Automated Installation

# Make setup script executable
chmod +x setup.sh

# Run setup script
sudo ./setup.sh

Manual Installation

# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

Scripts Overview πŸ“œ

1. setup.sh

Main installation and configuration script:

  • Updates system packages
  • Installs dependencies
  • Sets up Python environment
  • Configures Ollama
  • Runs initial system analysis

2. system_check.sh

Analyzes system capabilities:

  • CPU information and extensions
  • Memory capacity and speed
  • GPU detection and specifications
  • Storage availability
  • Power management status
  • Temperature sensors
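The kind of probing system_check.sh performs can be sketched as follows (illustrative, not the actual script; assumes a Linux /proc filesystem, with GPU details reported only when the NVIDIA driver is present):

```shell
# Snapshot of core hardware facts
cpu_model=$(awk -F': ' '/model name/ {print $2; exit}' /proc/cpuinfo)
mem_gb=$(awk '/MemTotal/ {printf "%.1f", $2 / 1024 / 1024}' /proc/meminfo)
disk_free=$(df -h / | awk 'NR==2 {print $4}')

echo "CPU:  ${cpu_model:-unknown}"
echo "RAM:  ${mem_gb} GB"
echo "Disk: ${disk_free} free on /"

# GPU details only if the NVIDIA driver (which provides nvidia-smi) is installed
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "GPU:  none detected (or non-NVIDIA)"
fi
```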

3. select_model.sh

Recommends optimal model based on hardware:

  • Analyzes system resources
  • Suggests appropriate model size
  • Provides configuration recommendations
  • Adapts to laptop/desktop environments
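The heart of such a recommendation is a RAM-to-model mapping. A minimal sketch, with hypothetical thresholds mirroring the table in the Configuration section:

```shell
# Map installed RAM (GB) to a DeepSeek model tag
recommend_model() {
  local ram_gb="$1"
  if   [ "$ram_gb" -ge 16 ]; then echo "deepseek-r1:14b"
  elif [ "$ram_gb" -ge 12 ]; then echo "deepseek-r1:8b"
  elif [ "$ram_gb" -ge 8  ]; then echo "deepseek-llm:7b"
  else                            echo "deepseek-coder:6.7b"
  fi
}

ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "Detected ${ram_gb}GB RAM -> recommended model: $(recommend_model "$ram_gb")"
```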

4. optimize_system.sh

System optimization script:

  • CPU governor management
  • Memory optimization
  • Power management (laptop-specific)
  • Process priority optimization
  • Thermal management
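A sketch of the laptop/desktop split: the presence of a battery selects a balanced governor, otherwise maximum performance (assumes the cpufreq sysfs interface; applying the governor requires root, so that step is shown commented):

```shell
# A battery under /sys/class/power_supply implies a laptop
if ls /sys/class/power_supply/BAT* >/dev/null 2>&1; then
  governor="schedutil"    # laptop: let the kernel balance speed and heat
else
  governor="performance"  # desktop: hold maximum clock speed
fi
echo "Selected governor: $governor"
# To apply (root required):
#   echo "$governor" | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```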

5. monitor.sh

Real-time system monitoring:

  • CPU usage and temperature
  • Memory utilization
  • GPU statistics (if available)
  • Process status
  • Power consumption (laptops)
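A one-shot snapshot of the metrics monitor.sh watches (illustrative; the real script loops and refreshes continuously):

```shell
load=$(awk '{print $1}' /proc/loadavg)
mem_used=$(free -m | awk '/^Mem/ {printf "%d/%d MB", $3, $2}')
echo "Load average (1m): $load"
echo "Memory:            $mem_used"
# Temperatures require lm-sensors; GPU stats require the NVIDIA driver
if command -v sensors >/dev/null 2>&1; then
  sensors | grep -i 'temp' || true
fi
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=utilization.gpu,temperature.gpu --format=csv,noheader
fi
```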

6. llm_benchmark.py

Performance testing suite:

  • Response time measurement
  • Memory usage tracking
  • Token generation speed
  • Temperature monitoring
  • Resource utilization analysis
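The core measurement in such a suite can be sketched as a small timing helper (hypothetical; token counts are approximated by whitespace splitting rather than a real tokenizer):

```python
import time

def benchmark_generation(generate, prompt):
    """Time one generation call and estimate throughput.

    `generate` is any callable that takes a prompt string and returns
    the model's response text, e.g. a wrapper around Ollama's HTTP API
    at http://localhost:11434/api/generate.
    """
    start = time.perf_counter()
    text = generate(prompt)
    elapsed = time.perf_counter() - start
    tokens = len(text.split())  # rough proxy for the real token count
    return {
        "seconds": elapsed,
        "tokens": tokens,
        "tokens_per_second": tokens / elapsed if elapsed > 0 else 0.0,
    }
```

Pointed at a live local model, this reports response time and generation speed for each prompt in one pass.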

Configuration βš™οΈ

Model Selection

Available models with hardware requirements:

| Model | RAM Required | Best For | Command |
|-------|--------------|----------|---------|
| DeepSeek Coder 6.7B | 6-8GB | Code completion, lightweight usage | `ollama pull deepseek-coder:6.7b` |
| DeepSeek LLM 7B | 8GB | General purpose, efficient | `ollama pull deepseek-llm:7b` |
| DeepSeek R1 8B | 12GB | Balanced performance | `ollama pull deepseek-r1:8b` |
| DeepSeek R1 14B | 16GB | Enhanced capabilities | `ollama pull deepseek-r1:14b` |
| DeepSeek Coder V2 7B | 8GB | Enhanced code completion | `ollama pull deepseek-coder-v2:7b` |

Modelfile Configuration

# Dynamic model selection based on system memory
FROM {{ if gt .Memory 32 }}deepseek-r1:14b{{ else if gt .Memory 16 }}deepseek-r1:8b{{ else }}deepseek-coder:6.7b{{ end }}

# System-specific parameters
PARAMETER num_ctx {{ if gt .Memory 32 }}4096{{ else if gt .Memory 16 }}2048{{ else }}1024{{ end }}
PARAMETER num_predict {{ if gt .Memory 32 }}512{{ else if gt .Memory 16 }}256{{ else }}150{{ end }}

# Quality parameters
PARAMETER temperature 0.7
PARAMETER top_k 40
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.1

Usage 🎯

1. System Analysis

./system_check.sh > system_info.txt

2. Model Selection

./select_model.sh > model_selection.txt

3. System Optimization

sudo ./optimize_system.sh

4. Monitoring

./monitor.sh

5. Benchmark Testing

python llm_benchmark.py

Optimization πŸš€

Desktop Systems

  • Full CPU utilization
  • Maximum performance governor
  • High priority process scheduling
  • GPU acceleration when available

Laptop Systems

  • Dynamic CPU governor
  • Thermal-aware processing
  • Power-efficient thread allocation
  • Battery life optimization

Monitoring πŸ“Š

Real-time monitoring includes:

  • CPU usage and frequency
  • Memory utilization
  • GPU statistics
  • Temperature tracking
  • Power consumption
  • Process statistics

Troubleshooting πŸ”§

Common Issues

  1. Out of Memory

     # Add swap space
     sudo ./optimize_system.sh

  2. High Temperature

     # Monitor temperature
     ./monitor.sh

  3. Poor Performance

     # Check system status
     ./system_check.sh
     # Adjust model selection
     ./select_model.sh
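If optimize_system.sh does not add swap on your system, a swap file can be created by hand using the standard Linux procedure (the commands need root, so they are shown commented):

```shell
# Check current swap first
free -h | awk 'NR==1 || /Swap/'
# Then, as root, create and enable a 4 GB swap file:
#   sudo fallocate -l 4G /swapfile
#   sudo chmod 600 /swapfile
#   sudo mkswap /swapfile
#   sudo swapon /swapfile
# Persist across reboots by appending to /etc/fstab:
#   /swapfile none swap sw 0 0
```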

Contributing 🀝

  1. Fork the repository
  2. Create your feature branch
  3. Commit your changes
  4. Push to the branch
  5. Create a Pull Request

License πŸ“„

MIT License - See LICENSE file for details
