EdgePilot is an on-premises AI copilot that combines a lightweight FastAPI backend with an Electron desktop UI. It features full MCP (Model Context Protocol) integration, enabling Gemini to autonomously monitor your system, launch applications with scheduling, and manage processes through natural language.
- 🤖 MCP Integration - Gemini can autonomously call tools for system monitoring, app launching, and process management
- 📊 Real-time Metrics - CPU, memory, disk, network monitoring with process-level details and executable paths
- 🚀 Smart App Launcher - Launch applications by name with delay support using Windows Start Menu search
- 🎯 Smart Tool Calling - LLM automatically decides when to gather metrics, launch apps, or end processes
- 🖥️ Desktop UI - Electron-based chat interface with dark theme
- 🔌 Provider Abstraction - Pluggable system supporting Gemini (with tools), Claude, and GPT
- 💾 Local Persistence - JSON-based chat history and usage analytics (privacy-first)
- ⚡ Lightweight - Clean codebase focused on core functionality
```bash
# Install dependencies
pip install -r requirements.txt

# Configure your API key
# Edit env/.env and add: GEMINI_API_KEY=your_key_here

# Install Electron UI (one-time, requires Node.js 18+)
cd ui && npm install && cd ..

# Launch the full application (API + Electron UI)
python main.py

# Or run API only:
python main.py serve --host 127.0.0.1 --port 8000

# Test all MCP tools integration
python test_tools.py

# Test launcher directly
python tools/launcher.py
```

Open the UI and try these prompts with Gemini:
System Monitoring:
- "What's my current CPU and memory usage?"
- "Show me the top 5 processes using the most CPU"
Application Discovery:
- "What apps do I have installed?"
- "Do I have Discord installed?"
- "List all my games"
Application Launching:
- "Launch notepad"
- "Open Chrome in 30 seconds"
- "Start Minecraft in 1 minute"
Process Control:
- "Close all Chrome instances"
- "End the notepad process"
Edit `env/.env`:

```bash
GEMINI_API_KEY=your_gemini_key     # Required for MCP tools
ANTHROPIC_API_KEY=your_claude_key  # Optional
OPENAI_API_KEY=your_openai_key     # Optional
DEFAULT_PROVIDER=gemini            # Use gemini for tool calling
```

```
EdgePilot/
├── README.md
├── requirements.txt
├── test_tools.py            # MCP tools integration test
├── main.py                  # FastAPI backend + CLI entry point
├── ui/                      # Electron desktop application
│   ├── index.html           # UI markup
│   ├── renderer.js          # Frontend logic
│   ├── styles.css           # Dark theme styling
│   ├── main.js              # Electron main process
│   └── package.json         # Node.js dependencies
├── providers/               # LLM provider adapters
│   ├── base.py              # BaseLLM protocol + ToolCall classes
│   ├── gemini.py            # Gemini with function calling
│   ├── claude.py            # Claude adapter
│   └── gpt.py               # GPT placeholder
├── tools/                   # System utilities exposed as tools
│   ├── __init__.py          # Exports gather_metrics, launch, search, list_apps, end_task
│   ├── metrics.py           # System monitoring (CPU, memory, processes)
│   ├── launcher.py          # Application launcher with Windows Start Menu search
│   └── end_task.py          # Process termination
├── MCP/                     # Model Context Protocol integration
│   ├── tool_schemas.py      # Function-calling schemas for all 5 tools
│   ├── tool_executor.py     # Tool execution engine
│   └── README.md            # Full MCP documentation
├── env/.env                 # API keys and configuration
└── data/                    # JSON persistence
    ├── chat_history.json    # Chat sessions
    ├── usage_metrics.json   # API usage tracking
    └── tool_call_history.json  # Tool execution logs
```
- `GET /api/providers` – enumerate providers and configuration status
- `GET /api/chats` – list chat sessions with summary metadata
- `POST /api/chats` – create a new chat session
- `GET /api/chats/{chat_id}` – fetch full conversation history
- `POST /api/chats/{chat_id}/messages` – send a prompt and get the LLM response (with tool calling)
- `GET /api/metrics` – retrieve a current system metrics snapshot
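As a sketch, the messages endpoint can be exercised from Python. The base URL and the `message` JSON field below are assumptions, not the documented request shape — check `main.py` for the actual contract:

```python
# Hypothetical client sketch for the endpoints above; the base URL and the
# request body's field names are assumptions, not EdgePilot's documented API.
import json
from urllib import request

BASE = "http://127.0.0.1:8000"

def chat_messages_path(chat_id: str) -> str:
    """Build the path for POST /api/chats/{chat_id}/messages."""
    return f"/api/chats/{chat_id}/messages"

def send_message(chat_id: str, prompt: str) -> dict:
    """POST a prompt to a chat session and return the decoded JSON reply."""
    body = json.dumps({"message": prompt}).encode()
    req = request.Request(BASE + chat_messages_path(chat_id), data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires the API to be running
        return json.load(resp)
```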
EdgePilot includes full MCP integration with five tools; application launching is handled by `launcher.py`:
Collects comprehensive system metrics including CPU, memory, disk, network, battery, and all running processes with executable paths.
```python
# LLM can call this automatically when user asks about system status
gather_metrics(top_n=10, all_processes=False)
```

Launch applications by name with an optional delay. Uses Windows Start Menu search and Microsoft Store app discovery.
```python
# LLM calls this when user wants to launch an app
launch(app_name="chrome", delay_seconds=0)
launch(app_name="minecraft", delay_seconds=30)  # Launch in 30 seconds
```

Features:
- Searches Windows Start Menu shortcuts
- Finds Microsoft Store/UWP apps
- Supports delayed execution with threading
- Simple app names (no paths needed)
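The delayed-execution feature can be sketched with a background timer. This is an assumed approach, not the exact code in `tools/launcher.py` (which resolves Start Menu shortcuts before launching):

```python
import subprocess
import threading

def launch_later(command: str, delay_seconds: int = 0) -> str:
    """Sketch of delayed launching; the real launcher resolves Start Menu
    shortcuts first, while this version just shells out to the command."""
    def _run():
        subprocess.Popen(command, shell=True)

    if delay_seconds > 0:
        timer = threading.Timer(delay_seconds, _run)
        timer.daemon = True  # don't keep the process alive just for the timer
        timer.start()
        return f"Scheduled {command!r} to launch in {delay_seconds} seconds"
    _run()
    return f"Launched {command!r}"
```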
Search for installed applications by name. Returns a list of matching apps found in the Start Menu and Microsoft Store.
```python
# LLM calls this to check if an app is installed
search(app_name="discord")  # Returns: ["Discord"]
search(app_name="game")     # Returns: ["Game Bar", "Steam", ...]
```

List all installed applications with optional filtering. Perfect for "what apps do I have?" queries.
```python
# LLM calls this to browse available apps
list_apps(filter_term="")      # Returns all apps
list_apps(filter_term="game")  # Returns only apps with "game" in the name
```

Terminates processes by name, path, or command-line identifier.
```python
# LLM calls this when user wants to close an app
end_task(identifier="chrome", force=False)
end_task(identifier="notepad", force=True)
```

- User sends a message in natural language
- Gemini analyzes the request and decides which tools to call
- Tools are executed automatically (e.g., launching apps, gathering metrics)
- Results are fed back to Gemini
- Gemini formulates a human-readable response
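The loop above can be sketched as follows; the message and tool-call shapes here are assumptions rather than EdgePilot's actual internal types:

```python
def run_turn(llm, user_message, execute_tool, max_rounds=5):
    """Sketch of the tool-calling loop: call the LLM, run any requested
    tools, feed results back, and stop once a plain answer comes back."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_rounds):
        reply = llm(messages)                      # provider call
        calls = reply.get("tool_calls", [])
        if not calls:                              # plain text: we're done
            return reply["content"]
        for call in calls:                         # execute each requested tool
            result = execute_tool(call["name"], call.get("args", {}))
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
    return "Stopped: tool-call round limit reached."
```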
Example 1: System Monitoring

```
User: "Show me what's using the most CPU"
→ Gemini calls gather_metrics(top_n=3)
→ Receives: {processes: [{name: "chrome.exe", cpu: 15.2%, ...}]}
→ Responds: "Chrome is using the most CPU at 15.2%..."
```

Example 2: Scheduled App Launch

```
User: "Launch Minecraft in 30 seconds"
→ Gemini calls launch(app_name="minecraft", delay_seconds=30)
→ Receives: {success: true, message: "Scheduled 'minecraft' to launch in 30 seconds"}
→ Responds: "I've scheduled Minecraft to launch in 30 seconds!"
```

Example 3: App Discovery

```
User: "What games do I have?"
→ Gemini calls list_apps(filter_term="game")
→ Receives: {count: 3, apps: ["Game Bar", "Steam", "Minecraft"]}
→ Responds: "You have 3 games installed: Game Bar, Steam, and Minecraft"
```
See MCP/README.md for the complete guide. It's a simple 5-step process:

1. Create the tool function in `tools/`
2. Export it in `tools/__init__.py`
3. Add its schema to `MCP/tool_schemas.py`
4. Add an executor entry in `MCP/tool_executor.py`
5. Restart and test!
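A hedged sketch of steps 1, 3, and 4 for a hypothetical `open_url` tool. The schema shape follows the Gemini function-calling format, and the `TOOL_EXECUTORS` registry name is an assumption — the real files are `MCP/tool_schemas.py` and `MCP/tool_executor.py`:

```python
import webbrowser

# Step 1: the tool function itself (would live under tools/)
def open_url(url: str) -> dict:
    ok = webbrowser.open(url)  # open in the default browser
    return {"success": ok, "url": url}

# Step 3: a function-calling schema (would live in MCP/tool_schemas.py)
OPEN_URL_SCHEMA = {
    "name": "open_url",
    "description": "Open a URL in the default browser",
    "parameters": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Target URL"},
        },
        "required": ["url"],
    },
}

# Step 4: an executor entry mapping the tool name to a callable
# (would be registered in MCP/tool_executor.py)
TOOL_EXECUTORS = {"open_url": lambda args: open_url(**args)}
```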
```bash
# Test all MCP tools integration
python test_tools.py

# Test launcher directly (launches notepad, chrome, minecraft)
python tools/launcher.py

# Run modules directly
python -c "from tools import gather_metrics; print(gather_metrics(top_n=5))"
python -c "from tools import search; print(search('chrome'))"
python -c "from tools import list_apps; print(list_apps('game'))"
```

1. Add a module under `providers/` implementing the `BaseLLM` protocol
2. Register it in `providers/__init__.py`
3. Add environment variables for API keys/models
4. For tool support, implement `enable_tools()` and parse `tool_calls` in responses
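A minimal sketch of those steps, assuming `BaseLLM` expects `chat()` and `enable_tools()` methods — check `providers/base.py` for the real protocol and the `ToolCall` classes:

```python
class EchoProvider:
    """Toy provider adapter; a real adapter would wrap a vendor SDK call in
    chat() and translate the vendor's tool calls into ToolCall objects."""

    def __init__(self):
        self.tool_schemas = []

    def enable_tools(self, schemas):
        # Store schemas so they can be attached to each API request.
        self.tool_schemas = list(schemas)

    def chat(self, messages):
        # Echo the last user message back; no real LLM call is made here.
        last = messages[-1]["content"]
        return {"content": f"echo: {last}", "tool_calls": []}
```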
EdgePilot's application launching is powered by launcher.py, which provides:
- Windows Start Menu Search - Searches .lnk shortcuts in user and system Start Menu locations
- Microsoft Store Apps - Discovers and launches UWP/Store apps via PowerShell
- Delayed Execution - Background threading for scheduled launches
- Intelligent Fallback - Falls back to the Windows `start` command for built-in apps
- Simple API - Just 3 core functions: `launch()`, `search()`, `list_apps()`
The LLM can use simple app names like "chrome", "minecraft", or "notepad" without needing full paths!
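The Start Menu search step can be sketched as a shortcut scan. The directory paths below are assumptions, and this ignores the Store-app discovery and `start`-command fallback branches:

```python
import glob
import os

def search_start_menu(app_name: str) -> list:
    """Scan user and system Start Menu folders for matching .lnk shortcuts
    and return their display names (paths are assumed, Windows-only)."""
    roots = [
        os.path.expandvars(r"%APPDATA%\Microsoft\Windows\Start Menu\Programs"),
        os.path.expandvars(r"%ProgramData%\Microsoft\Windows\Start Menu\Programs"),
    ]
    matches = set()
    for root in roots:
        pattern = os.path.join(root, "**", "*.lnk")
        for path in glob.glob(pattern, recursive=True):
            # The shortcut's base name is the app's display name
            name = os.path.splitext(os.path.basename(path))[0]
            if app_name.lower() in name.lower():
                matches.add(name)
    return sorted(matches)
```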
- `README.md` (this file) - Quick start and overview
- `MCP/README.md` - Complete MCP integration guide
- `tools/launcher.py` - Application launcher implementation with detailed documentation
MIT License - See LICENSE file for details