Merged
30 changes: 30 additions & 0 deletions .github/workflows/markdownlint.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,30 @@
# .github/workflows/markdownlint.yaml
name: Lint

on:
push:
paths:
- '**/*.md'
- '.github/workflows/markdownlint.yaml'
branches:
- main
pull_request:
paths:
- '**/*.md'
- '.github/workflows/markdownlint.yaml'

permissions:
contents: read

jobs:
lint-markdown:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v4

- name: Install Task
uses: arduino/setup-task@8b35f53e4d5a51bf691c94c71f2c7222483206cb

- name: Lint Markdown Files
run: task lint:markdown
7 changes: 7 additions & 0 deletions .markdownlint-cli2.yaml
@@ -0,0 +1,7 @@
# Define glob expressions to use (only valid at root)
globs:
- "**/*.md"
ignores:
- "**/.venv/**"
# Show found files on stdout (only valid at root)
showFound: true
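The `globs`/`ignores` pair acts as a two-stage filter: a file is linted when it matches at least one glob and no ignore pattern. As a rough illustration (not markdownlint-cli2's actual matching code, which uses full `**` glob semantics), the logic can be sketched in Python:

```python
from fnmatch import fnmatch

def select_files(paths, globs, ignores):
    """Approximate the globs/ignores filter: a path is kept when it
    matches at least one glob and no ignore pattern."""
    def matches(path, pattern):
        # fnmatch has no special "**" handling, so also try the pattern
        # with a leading "**/" stripped to cover top-level files.
        if fnmatch(path, pattern):
            return True
        return pattern.startswith("**/") and fnmatch(path, pattern[3:])

    def any_match(path, patterns):
        return any(matches(path, p) for p in patterns)

    return [p for p in paths if any_match(p, globs) and not any_match(p, ignores)]

paths = ["README.md", "docs/guide.md", ".venv/pkg/README.md", "main.py"]
print(select_files(paths, ["**/*.md"], ["**/.venv/**"]))
# → ['README.md', 'docs/guide.md']
```

Markdown files under `.venv` (e.g., vendored package READMEs) are excluded so third-party docs don't fail the lint.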
21 changes: 21 additions & 0 deletions .markdownlint.yaml
@@ -0,0 +1,21 @@
# .markdownlint.yaml

# MD001: Heading levels should only increment by one level at a time
MD001: false

# MD004: Unordered list style - allow different styles in sublists
MD004:
style: sublist

# MD013: Line length
MD013:
line_length: 120
code_blocks: false
tables: false

# MD025: Multiple top-level headings allowed
MD025: false

# MD033: Inline HTML
MD033:
allowed_elements: [p, img]
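The MD013 settings above enforce a 120-character limit while exempting code blocks and tables. A minimal Python sketch of that behavior (simplified: it skips fenced code blocks only, whereas the real rule also exempts tables per this config):

```python
LINE_LIMIT = 120  # mirrors MD013's line_length above

def long_lines(markdown, limit=LINE_LIMIT):
    """Report (line_number, length) for over-long lines, skipping fenced
    code blocks the way MD013 does when code_blocks is false."""
    offenders = []
    in_fence = False
    for number, line in enumerate(markdown.splitlines(), start=1):
        if line.lstrip().startswith("```"):
            in_fence = not in_fence  # toggle on every fence marker
            continue
        if not in_fence and len(line) > limit:
            offenders.append((number, len(line)))
    return offenders

doc = "ok\n" + "x" * 130 + "\n```\n" + "y" * 200 + "\n```\n"
print(long_lines(doc))
# → [(2, 130)]
```

Only the 130-character prose line is flagged; the 200-character line inside the fence passes, matching `code_blocks: false`.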
43 changes: 28 additions & 15 deletions README.md
@@ -2,43 +2,50 @@

## Prerequisites

+ **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
+ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you
don't have a GPU, you can alternatively use **[Docker Offload]**.
+ If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the
[Docker Model Runner requirements] are met (specifically that GPU
support is enabled) and the necessary drivers are installed.
+ If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.

## Demos

Each of these demos is self-contained and can be run either locally or using a cloud context. They
are all configured in three steps.

1. change directory to the root of the demo project
2. create a `.mcp.env` file from the `mcp.env.example` file (if it exists, otherwise the demo
doesn't need any secrets) and supply the required MCP tokens
3. run `docker compose up --build`

### Using OpenAI models

The demos support using OpenAI models instead of running models locally with Docker Model Runner. To use OpenAI:

1. Create a `secret.openai-api-key` file with your OpenAI API key:

```plaintext
sk-...
```

2. Start the project with the OpenAI configuration:

```sh
docker compose -f compose.yaml -f compose.openai.yaml up
```
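Passing two `-f` files makes Compose merge the override on top of the base `compose.yaml`. For readers unfamiliar with this mechanism, here is a hypothetical sketch of what such an override might contain — the service name, environment variable, and secret name are illustrative assumptions, not taken from this repository:

```yaml
# compose.openai.yaml — illustrative override, not the repo's actual file.
services:
  agent:
    environment:
      # Point the agent at OpenAI instead of the local Docker Model Runner.
      OPENAI_BASE_URL: https://api.openai.com/v1
    secrets:
      - openai-api-key

secrets:
  openai-api-key:
    file: ./secret.openai-api-key
```

Keys defined in the override replace or extend the matching keys in the base file, so the same project can switch inference backends without editing `compose.yaml`.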

# Compose for Agents Demos - Classification

| Demo | Agent System | Models | MCPs | Project | Compose |
| ---- | ---- | ---- | ---- | ---- | ---- |
| [A2A](https://github.com/a2a-agents/agent2agent) Multi-Agent Fact Checker | Multi-Agent | OpenAI | duckduckgo | [./a2a](./a2a) | [compose.yaml](./a2a/compose.yaml) |
| [Agno](https://github.com/agno-agi/agno) agent that summarizes GitHub issues | Multi-Agent | qwen3(local) | github-official | [./agno](./agno) | [compose.yaml](./agno/compose.yaml) |
| [Vercel AI-SDK](https://github.com/vercel/ai) Chat-UI for mixing MCPs and Model | Single Agent | llama3.2(local), qwen3(local) | wikipedia-mcp, brave, resend(email) | [./vercel](./vercel) | [compose.yaml](https://github.com/slimslenderslacks/scira-mcp-chat/blob/main/compose.yaml) |
| [CrewAI](https://github.com/crewAIInc/crewAI) Marketing Strategy Agent | Multi-Agent | qwen3(local) | duckduckgo | [./crew-ai](./crew-ai) | [compose.yaml](https://github.com/docker/compose-agents-demo/blob/main/crew-ai/compose.yaml) |
| [ADK](https://github.com/google/adk-python) Multi-Agent Fact Checker | Multi-Agent | gemma3-qat(local) | duckduckgo | [./adk](./adk) | [compose.yaml](./adk/compose.yaml) |
| [ADK](https://github.com/google/adk-python) & [Cerebras](https://www.cerebras.ai/) Golang Experts | Multi-Agent | unsloth/qwen3-gguf:4B-UD-Q4_K_XL & ai/qwen2.5:latest (DMR local), llama-4-scout-17b-16e-instruct (Cerebras remote) | | [./adk-cerebras](./adk-cerebras) | [compose.yml](./adk-cerebras/compose.yml) |
| [LangGraph](https://github.com/langchain-ai/langgraph) SQL Agent | Single Agent | qwen3(local) | postgres | [./langgraph](./langgraph) | [compose.yaml](./langgraph/compose.yaml) |
| [Embabel](https://github.com/embabel/embabel-agent) Travel Agent | Multi-Agent | qwen3, Claude3.7, llama3.2, jimclark106/all-minilm:23M-F16 | brave, github-official, wikipedia-mcp, weather, google-maps, airbnb | [./embabel](./embabel) | [compose.yaml](https://github.com/embabel/travel-planner-agent/blob/main/compose.yaml) and [compose.dmr.yaml](https://github.com/embabel/travel-planner-agent/blob/main/compose.dmr.yaml) |
| [Spring AI](https://spring.io/projects/spring-ai) Brave Search | Single Agent | none | duckduckgo | [./spring-ai](./spring-ai) | [compose.yaml](./spring-ai/compose.yaml) |
@@ -54,3 +61,9 @@ made by Docker in this repository.
> apply to that specific example, and they must be respected accordingly.

`SPDX-License-Identifier: Apache-2.0 OR MIT`

[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Engine]: https://docs.docker.com/engine/
[Docker Model Runner requirements]: https://docs.docker.com/ai/model-runner/
[Docker Offload]: https://www.docker.com/products/docker-offload/
21 changes: 21 additions & 0 deletions Taskfile.yaml
@@ -0,0 +1,21 @@
version: '3'

vars:
MARKDOWN_CLI2_CMD: docker run --rm -v $(pwd):/workdir davidanson/markdownlint-cli2:v0.18.1

tasks:
lint:markdown:
desc: Lint Markdown files
cmds:
- "{{ .MARKDOWN_CLI2_CMD }}"
dir: .
lint:markdown:fix:
desc: Lint and Fix Markdown files
cmds:
- "{{ .MARKDOWN_CLI2_CMD }} --fix"
dir: .

lint:
deps:
- lint:markdown
desc: Lint all files
52 changes: 26 additions & 26 deletions a2a/README.md
@@ -10,7 +10,6 @@ internal reasoning alone. The system showcases how agents with distinct roles an
> [!Tip]
> ✨ No configuration needed — run it with a single command.


<p align="center">
<img src="demo.gif"
alt="A2A Multi-Agent Fact Check Demo"
@@ -22,17 +21,20 @@ internal reasoning alone. The system showcases how agents with distinct roles an

### Requirements

+ **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
+ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you
don't have a GPU, you can alternatively use **[Docker Offload]**.
+ If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the
[Docker Model Runner requirements] are met (specifically that GPU
support is enabled) and the necessary drivers are installed.
+ If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.
+ An [OpenAI API Key](https://platform.openai.com/api-keys) 🔑.

### Run the project

Create a `secret.openai-api-key` file with your OpenAI API key:

```plaintext
sk-...
```

@@ -61,18 +63,17 @@ same demo with a larger model that takes advantage of a more powerful GPU on the
docker compose -f compose.dmr.yaml -f compose.offload.yaml up --build
```


# ❓ What Can It Do?

This system performs multi-agent fact verification, coordinated by an **Auditor**:

+ 🧑‍⚖️ **Auditor**:
  * Orchestrates the process from input to verdict.
  * Delegates tasks to Critic and Reviser agents.
+ 🧠 **Critic**:
  * Uses DuckDuckGo via MCP to gather real-time external evidence.
+ ✍️ **Reviser**:
  * Refines and verifies the Critic’s conclusions using only reasoning.

**🧠 All agents use the Docker Model Runner for LLM-based inference.**

@@ -89,7 +90,6 @@ Example question:
| `src/AgentKit` | Agent runtime |
| `agents/*.yaml` | Agent definitions |


# 🔧 Architecture Overview

```mermaid
@@ -115,10 +115,10 @@ flowchart TD

```

+ The Auditor is a Sequential Agent: it coordinates the Critic and Reviser agents to verify user-provided claims.
+ The Critic agent performs live web searches through DuckDuckGo using an MCP-compatible gateway.
+ The Reviser agent refines the Critic’s conclusions using internal reasoning alone.
+ All agents run inference through a Docker-hosted Model Runner, enabling fully containerized LLM reasoning.

# 🤝 Agent Roles

@@ -128,7 +128,6 @@ flowchart TD
| **Critic** | ✅ DuckDuckGo via MCP | Gathers evidence to support or refute the claim |
| **Reviser** | ❌ None | Refines and finalizes the answer without external input |


# 🧹 Cleanup

To stop and remove containers and volumes:
@@ -137,15 +136,16 @@ To stop and remove containers and volumes:
docker compose down -v
```


# 📎 Credits

+ [A2A]
+ [DuckDuckGo]
+ [Docker Compose]

[A2A]: https://github.com/a2aproject/a2a-python
[DuckDuckGo]: https://duckduckgo.com
[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Model Runner]: https://docs.docker.com/ai/model-runner/
[Docker Engine]: https://docs.docker.com/engine/
[Docker Model Runner requirements]: https://docs.docker.com/ai/model-runner/
[Docker Offload]: https://www.docker.com/products/docker-offload/
46 changes: 33 additions & 13 deletions adk-cerebras/README.md
@@ -1,31 +1,39 @@
# DevDuck agents

A multi-agent system for Go programming assistance built with Google Agent Development Kit (ADK). This
project features a coordinating agent (DevDuck) that manages two specialized sub-agents (Bob and
Cerebras) for different programming tasks.

## Architecture

The system consists of three main agents orchestrated by Docker Compose, which plays a
**primordial role** in launching and coordinating all agent services:

### 🐙 Docker Compose Orchestration

- **Central Role**: Docker Compose serves as the foundation for the entire multi-agent system
- **Service Orchestration**: Manages the lifecycle of all three agents (DevDuck, Bob, and Cerebras)
- **Configuration Management**: Defines agent prompts, model configurations, and service dependencies
directly in the compose file
- **Network Coordination**: Establishes secure inter-agent communication channels
- **Environment Management**: Handles API keys, model parameters, and runtime configurations

### Agent Components

### 🦆 DevDuck (Main Agent)

- **Role**: Main development assistant and project coordinator
- **Model**: Qwen3 (unsloth/qwen3-gguf:4B-UD-Q4_K_XL)
- **Capabilities**: Routes requests to appropriate sub-agents based on user needs

### 👨‍💻 Bob Agent

- **Role**: General development tasks and project coordination
- **Model**: Qwen2.5 (ai/qwen2.5:latest)
- **Specialization**: Go programming expert for understanding code, explaining concepts, and generating code snippets

### 🧠 Cerebras Agent

- **Role**: Advanced computational tasks and complex problem-solving
- **Model**: Llama-4 Scout (llama-4-scout-17b-16e-instruct)
- **Provider**: Cerebras API
@@ -43,21 +51,25 @@ The system consists of three main agents orchestrated by Docker Compose, which p

### Prerequisites

- **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
- **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you
don't have a GPU, you can alternatively use **[Docker Offload]**.
- If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the
[Docker Model Runner requirements] are met (specifically that GPU
support is enabled) and the necessary drivers are installed.
- If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.

### Configuration

1. **You need a Cerebras API Key**: <https://cloud.cerebras.ai/>
2. Create a `.env` file with the following content:

```env
CEREBRAS_API_KEY=<your_cerebras_api_key>
CEREBRAS_BASE_URL=https://api.cerebras.ai/v1
CEREBRAS_CHAT_MODEL=llama-4-scout-17b-16e-instruct
```

> See the `.env.sample` file for reference.

### ✋ All the prompts are defined in the 🐙 compose file
@@ -71,14 +83,14 @@ docker compose up

The application will be available at [http://0.0.0.0:8000](http://0.0.0.0:8000).


### Usage

The agents can be accessed through the web interface or API endpoints.

> Activate Token Streaming

**You can try this**:

```text
Hello I'm Phil

@@ -90,20 +102,28 @@ Cerebras can you analyse and comment this code

Can you generate the tests
```

> ✋ For a public demo, keep it simple; the examples above are known to work.

**🎥 How to use the demo**: [https://youtu.be/WYB31bzfXnM](https://youtu.be/WYB31bzfXnM)

#### Routing Requests

- **General requests**: Handled by DevDuck, who routes to appropriate sub-agents
- **Specific agent requests**:
  + "I want to speak with Bob" → Routes to Bob agent
  + "I want to speak with Cerebras" → Routes to Cerebras agent

## Tips

If, for any reason, you cannot go back from the Cerebras agent to the Bob agent, try this:

```text
go back to devduck
```

[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Engine]: https://docs.docker.com/engine/
[Docker Model Runner requirements]: https://docs.docker.com/ai/model-runner/
[Docker Offload]: https://www.docker.com/products/docker-offload/