
Local LLM web searcher (I will keep developing it whenever I have free time; all the code was written with "Sonnet" vibe coding).


LocalDeepResearch

A comprehensive AI-powered research platform that combines autonomous research agents with interactive chat capabilities.

Features

  • πŸ” Autonomous Research: AI agents conduct multi-step research using web search, document analysis, and academic sources
  • πŸ’¬ Interactive Chat: Chat with your research results using local LLMs
  • πŸ”— URL Tracking: Automatically collects and organizes all website links found during research
  • πŸ“Š Real-time Monitoring: Live view of research process, server logs, and agent intelligence
  • 🎯 Local First: Runs entirely on your hardware with local LLMs via llama.cpp
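As a minimal sketch of the URL-tracking idea above: record each link seen during research once, grouped by domain. The class and method names here are illustrative, not the project's actual implementation.

```python
# Illustrative URL tracker: deduplicates links and groups them by domain.
# (Hypothetical sketch; not the project's real tracking code.)
from collections import defaultdict
from urllib.parse import urlparse

class URLTracker:
    def __init__(self):
        self._seen = set()
        self.by_domain = defaultdict(list)

    def add(self, url: str) -> bool:
        """Record a URL once; return True if it was new."""
        if url in self._seen:
            return False
        self._seen.add(url)
        self.by_domain[urlparse(url).netloc].append(url)
        return True
```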

Demo

Video Demo

Note: If the video doesn't play directly in GitHub, you can view it here

Screenshots

Main Dashboard

Research Interface

Chat Interface

Quick Start

Prerequisites

  • Python 3.8+
  • Node.js 16+
  • llama.cpp server
  • Research-capable LLM model (e.g., Qwen, Llama, etc.)

1. Clone and Setup

git clone https://github.com/gopinath87607/LocalDeepResearch.git
cd LocalDeepResearch

# Create isolated environment with Python 3.10.0
conda create -n LocalDeepResearch_env python=3.10.0
conda activate LocalDeepResearch_env

# Or using virtualenv
python3.10 -m venv LocalDeepResearch_env
source LocalDeepResearch_env/bin/activate  # On Windows: LocalDeepResearch_env\Scripts\activate

pip install -r requirements.txt

# Backend setup
cd backend
pip install -r requirements.txt

# Frontend setup
cd ../frontend
npm install

2. Start LLM Servers

# Main research model (port 8080)
./llama-server -m path/to/your-model.gguf --host 0.0.0.0 --port 8080

# ReaderLM for web extraction (port 8081) - optional
./llama-server -m path/to/reader-lm.gguf --host 0.0.0.0 --port 8081
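Before moving on, you can confirm a server is up with a small request against the OpenAI-compatible chat endpoint that llama-server exposes. The port matches the commands above; the check itself is an illustrative sketch, not part of the project.

```python
# Minimal health check for a llama.cpp server's OpenAI-compatible API.
# (Hypothetical helper for verification; uses only the standard library.)
import json
import urllib.request

def check_llama_server(base_url: str = "http://localhost:8080") -> bool:
    """Return True if the server answers a trivial chat completion request."""
    payload = json.dumps({
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, etc.
        return False
```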

3. Configure Environment

Edit inference/run_react_infer_with_progress.sh and set the following variables:

export SERPER_KEY_ID="your-serper-api-key"
export API_BASE="http://localhost:8080/v1"
export READERLM_ENDPOINT="http://localhost:8081/v1"
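For reference, Python code could pick these variables up roughly like this. This is a sketch using the variable names from the script above; the defaults shown are illustrative, and the real handling lives in the shell script and backend code.

```python
# Sketch: reading the environment variables set in
# run_react_infer_with_progress.sh. Defaults here are illustrative.
import os

def load_config() -> dict:
    return {
        "serper_key": os.environ.get("SERPER_KEY_ID", ""),
        "api_base": os.environ.get("API_BASE", "http://localhost:8080/v1"),
        "readerlm_endpoint": os.environ.get(
            "READERLM_ENDPOINT", "http://localhost:8081/v1"
        ),
    }
```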

4. Run the Application

# Start backend
cd backend
python main.py

# Start frontend (new terminal)
cd frontend
npm start

Visit http://localhost:3000 and start researching!

Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   React Frontendβ”‚    β”‚  Flask Backend  β”‚    β”‚  llama.cpp      β”‚
β”‚                 │◄──►│                 │◄──►│  LLM Servers    β”‚
β”‚  - Dashboard    β”‚    β”‚  - Research API β”‚    β”‚                 β”‚
β”‚  - Chat UI      β”‚    β”‚  - WebSocket    β”‚    β”‚  Main: :8080    β”‚
β”‚  - URL Display  β”‚    β”‚  - URL Tracking β”‚    β”‚  Reader: :8081  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                β”‚
                                β–Ό
                       β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                       β”‚  Research Tools β”‚
                       β”‚                 β”‚
                       β”‚  - Web Search   β”‚
                       β”‚  - Visit Pages  β”‚
                       β”‚  - Scholar      β”‚
                       β”‚  - Python Code  β”‚
                       β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Configuration

API Keys

  • Serper API: Required for web search - get free key at serper.dev
  • Other APIs: Optional depending on tools used

Models

Usage

Basic Research

  1. Enter your research question
  2. Watch real-time progress in the monitoring panels
  3. Review collected URLs and research intelligence
  4. Read comprehensive results

Chat with Results

After research completes:

  1. A chat interface appears below the results
  2. Ask follow-up questions about the findings
  3. Get clarifications and deeper insights
  4. Responses are context-aware, grounded in the research
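One common way such context-aware chat is assembled is to prepend the research summary to the conversation as a system message. The sketch below shows that general pattern, not the project's exact prompt format.

```python
# Sketch of building a context-aware chat prompt from research results.
# (Generic pattern; the backend's actual prompt format may differ.)
def build_chat_prompt(research_summary: str, question: str) -> list:
    return [
        {
            "role": "system",
            "content": (
                "Answer using only the research findings below.\n\n"
                + research_summary
            ),
        },
        {"role": "user", "content": question},
    ]
```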

Development

Project Structure

  • backend/: Flask API server and research orchestration
  • frontend/: React dashboard and user interface
  • inference/: Research agent scripts and tools
  • tools/: Individual research capability modules

Adding New Tools

  1. Create tool in backend/tools/tool_name.py
  2. Register in research agent configuration
  3. Test with isolated queries
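A hypothetical tool module following the steps above might look like this. The interface (a class with name, description, and run) is an assumption for illustration, not the project's actual plugin API.

```python
# Hypothetical research tool for backend/tools/word_count.py.
# (Illustrative shape only; the project's real tool interface may differ.)
class WordCountTool:
    name = "word_count"
    description = "Count words in a piece of text fetched during research."

    def run(self, text: str) -> dict:
        words = text.split()
        return {"words": len(words), "unique": len(set(words))}
```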

API Documentation

See docs/api.md for detailed API reference.

Contributing

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • DeepResearch by Alibaba-NLP for the core research agent implementation and methodologies
  • llama.cpp for local LLM inference
  • Serper for web search API
  • Research methodologies inspired by various AI research frameworks

Support


Note

All output is saved to: output_dir = f"/home/XXX/DeepResearch/outputs/session_{session_id}"

⭐ Star this repo if you find it useful!
