A multi-agent orchestration server for OpenSearch Dashboards with context-aware routing and Model Context Protocol (MCP) integration.
OpenSearch Agent Server enables intelligent agent-based interactions within OpenSearch Dashboards:

- Multi-Agent Orchestration: routes requests to specialized agents based on context
- OpenSearch Integration: connects to OpenSearch via MCP for real-time data access
- AG-UI Protocol: implements OpenSearch Dashboards' agent UI protocol with SSE streaming
- Flexible LLM Support: works with AWS Bedrock, Ollama, or other LLM providers
- Production Ready: includes authentication, rate limiting, error recovery, and observability
```
OpenSearch Dashboards (AG-UI)
        ↓
OpenSearch Agent Server
├── Router (context-based)
├── Agent Registry
│   ├── ART Agent (strands-agents)
│   └── Default Agent
└── OpenSearch MCP Server
        ↓
OpenSearch Cluster
```
- Context-Aware Routing: automatically selects the appropriate agent based on request context
- Streaming Responses: real-time SSE streaming for interactive user experiences
- Tool Execution: agents can execute tools and visualize results in the dashboard
- Authentication & Authorization: JWT-based auth with configurable policies
- Rate Limiting: protects backend services from overload
- Error Recovery: automatic retry with exponential backoff
- Observability: structured logging with request tracking
- Python 3.12+
- OpenSearch 2.x (local or remote cluster)
- LLM Provider (choose one):
- AWS Bedrock (requires AWS credentials)
- Ollama (local installation)
- Clone the repository

  ```bash
  git clone https://github.com/mingshl/opensearch-agent-server.git
  cd opensearch-agent-server
  ```

- Create virtual environment

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -e .
  ```

- Configure environment

  ```bash
  cp .env.example .env
  # Edit .env with your configuration
  ```
Create a .env file with the following settings:
```bash
# OpenSearch Connection
OPENSEARCH_URL=https://localhost:9200
OPENSEARCH_USERNAME=admin
OPENSEARCH_PASSWORD=admin

# Authentication (set to false for local development)
AG_UI_AUTH_ENABLED=false

# CORS (allow OpenSearch Dashboards origin)
AG_UI_CORS_ORIGINS=http://localhost:5601

# LLM Provider - Option 1: AWS Bedrock
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
BEDROCK_INFERENCE_PROFILE_ARN=arn:aws:bedrock:...

# LLM Provider - Option 2: Ollama (local)
OLLAMA_MODEL=llama3

# Logging
AG_UI_LOG_FORMAT=human
AG_UI_LOG_LEVEL=INFO
```

To run the full demo with OpenSearch, Agent Server, and Dashboards:
Terminal 1 - OpenSearch

```bash
# Start OpenSearch on port 9200
docker run -d -p 9200:9200 -p 9600:9600 \
  -e "discovery.type=single-node" \
  -e "OPENSEARCH_INITIAL_ADMIN_PASSWORD=Admin1234!" \
  opensearchproject/opensearch:latest

# Verify
curl http://localhost:9200 -u admin:Admin1234!
```

Terminal 2 - Agent Server
```bash
# Configure and start opensearch agent server
cd opensearch-agent-server
cp .env.example .env
# Edit .env with your settings
source .venv/bin/activate
python run_server.py
# Server starts on http://localhost:8001
```

Terminal 3 - OpenSearch Dashboards
```bash
# Start dashboard (requires Node.js 22+)
cd OpenSearch-Dashboards
# Ensure config/opensearch_dashboards.yml has chat.agUiUrl configured
yarn start --no-base-path
# Dashboard opens on http://localhost:5601
```

Access the Chat
- Open http://localhost:5601
- Click the chat icon (💬) in the top-right header
- Start asking questions about your data!
```bash
python run_server.py
```

Or using uvicorn directly:

```bash
uvicorn server.ag_ui_app:app --host 0.0.0.0 --port 8001
```

The server starts on http://localhost:8001.
```bash
# Check server health
curl http://localhost:8001/health

# List available agents
curl http://localhost:8001/agents

# Test agent interaction (requires OpenSearch running)
curl -X POST http://localhost:8001/runs \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Show me recent logs",
    "context": [{"appId": "discover"}]
  }'
```
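The `POST /runs` endpoint streams Server-Sent Events. A response can be split into events with only the Python standard library; the parser below implements the generic SSE wire format, while the payload field (`delta`) is an assumption about what the server emits, not a documented contract:

```python
import json

# Minimal SSE parser sketch: splits raw response text into events.
# The SSE framing (event:/data: lines, blank-line separators) is the
# standard wire format; the JSON payload fields are assumptions.
def parse_sse(raw: str) -> list[dict]:
    events = []
    for block in raw.strip().split("\n\n"):
        event = {"event": "message", "data": ""}
        for line in block.splitlines():
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                event["data"] += line[len("data:"):].strip()
        events.append(event)
    return events

# Hypothetical stream excerpt, shaped like an agent-text delta + done marker.
raw = (
    "event: text\n"
    'data: {"delta": "Here are your recent logs"}\n'
    "\n"
    "event: done\n"
    "data: {}\n"
)
for ev in parse_sse(raw):
    print(ev["event"], json.loads(ev["data"]))
```

In a real client you would feed `parse_sse` the body of the streaming HTTP response from `/runs` instead of a literal string.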
- Start OpenSearch (port 9200)

  ```bash
  # Using Docker
  docker run -d -p 9200:9200 -p 9600:9600 \
    -e "discovery.type=single-node" \
    -e "OPENSEARCH_INITIAL_ADMIN_PASSWORD=Admin1234!" \
    opensearchproject/opensearch:latest

  # Or use your local OpenSearch installation
  ```
- Start OpenSearch Agent Server (port 8001)

  ```bash
  cd opensearch-agent-server
  source .venv/bin/activate
  python run_server.py
  ```
- Configure OpenSearch Dashboards

  Edit `config/opensearch_dashboards.yml`:

  ```yaml
  # OpenSearch connection
  opensearch.hosts: ["http://localhost:9200"]
  opensearch.ssl.verificationMode: none

  # Enable new UI header (required for chat button)
  uiSettings:
    overrides:
      "home:useNewHomePage": true

  # Enable context provider (sends page context to agent)
  contextProvider:
    enabled: true

  # Enable chat with opensearch agent server
  chat:
    enabled: true
    agUiUrl: "http://localhost:8001/runs"
  ```
- Start OpenSearch Dashboards (port 5601)

  ```bash
  cd OpenSearch-Dashboards
  yarn start --no-base-path
  ```

- Access the Chat Interface

  - Open http://localhost:5601 in your browser
  - Look for the chat icon in the top-right header
  - Click to open the assistant panel
  - Start chatting with your data!
```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format and lint
ruff format .
ruff check .
```

```
opensearch-agent-server/
├── src/
│   ├── agents/                    # Agent implementations
│   │   ├── art/                   # ART (Search Relevance Testing) agent
│   │   │   ├── art_agent.py       # ART orchestrator agent
│   │   │   └── specialized_agents.py  # Hypothesis, evaluation, UBI sub-agents
│   │   ├── base.py                # Agent protocol / base types
│   │   └── default_agent.py       # General OpenSearch assistant
│   ├── orchestrator/              # Routing and registry
│   │   ├── router.py              # Context-based routing
│   │   └── registry.py            # Agent registry
│   ├── server/                    # FastAPI application
│   │   ├── ag_ui_app.py           # Main FastAPI app and lifespan
│   │   ├── agent_orchestrator.py  # Orchestrator: routes requests to agents
│   │   ├── run_routes.py          # AG-UI protocol endpoints
│   │   ├── config.py              # Configuration management
│   │   └── ...                    # Middleware, auth, rate limiting, etc.
│   ├── tools/                     # Agent tools (local computation)
│   │   └── art/                   # ART-specific tools
│   │       └── experiment_tools.py  # Experiment results aggregation
│   └── utils/                     # Shared utilities
│       ├── mcp_connection.py      # OpenSearch MCP client
│       ├── logging_helpers.py     # Structured logging
│       ├── monitored_tool.py      # Tool instrumentation wrapper
│       └── ...                    # Persistence, activity monitor, etc.
├── tests/
│   ├── helpers/                   # Shared test helpers
│   ├── integration/               # Integration tests
│   └── unit/                      # Unit tests
├── run_server.py                  # Entry point
├── pyproject.toml                 # Project metadata and dependencies
└── .env.example                   # Environment template
```
- `GET /health`: returns server health status.
- `GET /agents`: returns available agents and their capabilities.
- `POST /runs`: creates a new agent run with streaming responses via SSE.
- `GET /runs/{run_id}`: returns the status of a specific run.
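A minimal Python wrapper over the non-streaming endpoints might look like the following. It assumes only the routes listed above; the shape of the JSON responses is not guaranteed by this README:

```python
import json
import urllib.request

# Minimal client sketch for the endpoints above; response field names
# are not documented here and may differ in practice.
class AgentServerClient:
    def __init__(self, base_url: str = "http://localhost:8001"):
        self.base_url = base_url.rstrip("/")

    def _url(self, path: str) -> str:
        """Join the base URL and an endpoint path."""
        return f"{self.base_url}/{path.lstrip('/')}"

    def _get(self, path: str) -> dict:
        with urllib.request.urlopen(self._url(path), timeout=5) as resp:
            return json.load(resp)

    def health(self) -> dict:
        return self._get("/health")

    def agents(self) -> dict:
        return self._get("/agents")

    def run_status(self, run_id: str) -> dict:
        return self._get(f"/runs/{run_id}")

client = AgentServerClient("http://localhost:8001/")
print(client._url("/runs/abc123"))  # http://localhost:8001/runs/abc123
```

With the server running, `client.health()` and `client.agents()` mirror the `curl` checks shown earlier; `POST /runs` is better consumed with a streaming HTTP client since it returns SSE.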
- Verify OpenSearch is running: `curl http://localhost:9200`
- Check credentials in `.env`
- Disable SSL verification for local development

- AWS Bedrock: ensure AWS credentials are configured
- Ollama: verify Ollama is running with `ollama list`
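When scripting these checks, a small retry loop with exponential backoff (the same error-recovery pattern the server itself uses) avoids flaky failures while services are still starting. The URL and retry parameters below are assumptions for local development:

```python
import time
import urllib.error
import urllib.request

# Illustrative health probe with exponential backoff; retry parameters
# and URL are assumptions for a local setup.
def backoff_delays(retries: int, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Delays of base * 2^i, capped at `cap`, one per retry."""
    return [min(base * (2 ** i), cap) for i in range(retries)]

def wait_for(url: str, retries: int = 5) -> bool:
    """Poll `url` until it returns HTTP 200 or the retries are exhausted."""
    for delay in backoff_delays(retries):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        time.sleep(delay)
    return False

print(backoff_delays(5))  # [0.5, 1.0, 2.0, 4.0, 8.0]
# wait_for("http://localhost:8001/health")
```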
If port 8001 is in use, modify the startup command:
```bash
uvicorn server.ag_ui_app:app --host 0.0.0.0 --port 8002
```

Contributions are welcome! Please:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Built with strands-agents for multi-agent orchestration
- Implements AG-UI Protocol for OpenSearch Dashboards
- Uses Model Context Protocol (MCP) for OpenSearch integration