---
title: Meridian Chatbot
emoji: 🛒
colorFrom: indigo
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
short_description: AI customer support chatbot for Meridian Electronics
---
A production-grade prototype that lets Meridian Electronics customers check product availability, place orders, look up order history, and authenticate themselves — all through a chat interface backed by an MCP server.
```
┌──────────────┐      ┌─────────────────┐      ┌──────────────────────┐      ┌─────────────────┐
│  Chainlit    │      │  Agent loop     │      │  MCP client          │      │  Meridian MCP   │
│  chat UI     │ ───▶ │  (openai-agents │ ───▶ │  (Streamable HTTP    │ ───▶ │  server         │
│              │      │  SDK)           │      │  transport)          │      │                 │
│ - streaming  │      │ - tool calls    │      │ - dynamic discovery  │      │ - products      │
│ - cl.Step    │      │ - structured    │      │ - tool dispatch      │      │ - orders        │
│ - sessions   │      │   output        │      │                      │      │ - auth          │
└──────────────┘      └─────────────────┘      └──────────────────────┘      └─────────────────┘
        │                     │
        │                     ├── Guardrails (input scrubbing, prompt-injection defense)
        │                     ├── Auth state in cl.user_session (verified email only)
        │                     └── System prompt scoped to Meridian + safety rules
        └── LangFuse traces (every turn, every tool call, every token)
```
Three layers, one job each:
- UI layer (Chainlit): chat surface, streaming, step visualization for tool calls — what the customer sees.
- Agent layer (openai-agents SDK): the LLM loop, tool selection, structured output. Guardrails sit at this boundary so injection attempts and unauthenticated account requests are stopped before the model decides to call a tool.
- Tool layer (MCP, Streamable HTTP): the chatbot has zero hard-coded business logic. It discovers tools dynamically from the MCP server at startup. Adding a new capability is a server-side change — no chatbot redeploy.
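The guardrail check at the agent boundary can be sketched as a pure function that runs before each turn reaches the model. Everything here (the pattern list, the `screen_input` name, the return shape) is illustrative, not the project's actual implementation:

```python
import re

# Illustrative injection patterns; a real deployment would maintain a
# broader list. These are hypothetical examples, not the project's rules.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def screen_input(text: str, authenticated: bool) -> tuple[bool, str]:
    """Return (allowed, reason). Runs before the model sees the turn."""
    lowered = text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, "possible prompt injection"
    # Account-scoped requests require a verified email in the session.
    if not authenticated and re.search(r"\b(my order|order history)\b", lowered):
        return False, "authentication required"
    return True, "ok"
```

Running the check at this boundary means a blocked turn never triggers tool selection, so an injection attempt cannot cause a tool call.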
| Concern | Choice | Why |
|---|---|---|
| UI | Chainlit >=2.0 | Native MCP support, built-in step visualization (live tool-call rendering), streaming, sessions. |
| Agent loop | openai-agents SDK | OpenAI's MCP-native agent runtime. Auto-discovers MCP tool schemas, handles the loop, supports structured Pydantic outputs. |
| Model | gpt-4o-mini |
Cost-effective tier per the brief. Strong tool-calling, ~$0.15 / 1M input tokens. |
| MCP transport | Streamable HTTP | Matches the deployed server (order-mcp-74afyau2q-uc.a.run.app/mcp). |
| Auth | Email + 4-digit PIN, verified via MCP tool | Stateless — only the verified email is held in session, never the PIN. |
| Tracing | LangFuse | Per-turn traces, token counts, latency, tool-call timeline — visible during the demo. |
| Tests | pytest + pytest-cov | Guardrails unit tests + lightweight agent integration tests with a mocked MCP. |
| CI | GitHub Actions (Python 3.11/3.12 matrix, ruff + pytest) | Wired and green. |
| Deploy | HuggingFace Spaces (Docker SDK) | Per the brief's minimum-deploy requirement. |
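The auth choice in the table above reduces to a small state update, sketched here with the MCP verification tool stubbed as a plain callable. `handle_auth` and the session shape are hypothetical; the point is that only the verified email is retained, never the PIN:

```python
from typing import Callable

def handle_auth(session: dict, email: str, pin: str,
                verify: Callable[[str, str], bool]) -> str:
    """Verify via the MCP auth tool (stubbed as `verify`); on success,
    store only the email in the session. The PIN is never retained."""
    if verify(email, pin):
        session["verified_email"] = email
        return f"Verified as {email}."
    return "Verification failed."
```

Because the session holds nothing but the email, a leaked session store reveals no credentials, and re-verification is always possible against the MCP server.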
The bot in action and behind the scenes. Click any thumbnail to view full-size.
Per-turn observability — model, prompt, response, token counts, latency, and cost for every LLM call.

Cost and request volume against the OpenAI API.

The chatbot deploys to Hugging Face Spaces via GitHub Actions. Every push to main runs the test matrix; on success, the workflow force-pushes the repo to the Space's git remote, which triggers HF to rebuild the Docker image.
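That workflow might look roughly like this. This is a sketch only (job names and step details are assumptions, the real workflow lives in .github/workflows/); the secret names match the setup steps that follow:

```yaml
name: ci-deploy
on:
  push:
    branches: [main]
jobs:
  test:
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install ruff pytest && ruff check . && pytest
  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, required for the force-push
      - run: |
          git push --force \
            https://${{ secrets.HF_USERNAME }}:${{ secrets.HF_TOKEN }}@huggingface.co/spaces/${{ secrets.HF_USERNAME }}/${{ secrets.HF_SPACE }} \
            main
```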
1. Create the Space on huggingface.co (Spaces → Create new Space → SDK: Docker). Note the Space name.
2. Generate a write token at https://huggingface.co/settings/tokens (scope: Write).
3. Add three GitHub Actions secrets (repo Settings → Secrets and variables → Actions → New repository secret):

   | Secret | Value |
   |---|---|
   | HF_TOKEN | the write token from step 2 |
   | HF_USERNAME | your Hugging Face username |
   | HF_SPACE | the Space name from step 1 (e.g. meridian-support) |

4. Add runtime secrets to the HF Space (Space → Settings → Variables and secrets):

   | Secret | Required | Notes |
   |---|---|---|
   | OPENAI_API_KEY | yes | OpenAI key for gpt-4o-mini |
   | MCP_SERVER_URL | yes | https://order-mcp-74afyau2q-uc.a.run.app/mcp |
   | CHAINLIT_AUTH_SECRET | yes | output of `chainlit create-secret` |
   | CHAINLIT_COOKIE_SAMESITE | yes (HF) | set to `none` so the auth cookie survives HF's iframe wrapper at huggingface.co/spaces/…. Chainlit auto-flips `Secure` to match. Local dev can leave this unset (default `lax` works on localhost). |
   | LANGFUSE_PUBLIC_KEY | optional | tracing — leave blank to disable |
   | LANGFUSE_SECRET_KEY | optional | tracing |
   | LANGFUSE_HOST | optional | defaults to https://cloud.langfuse.com |
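For local development the same variables can live in a `.env` file (values here are placeholders, not real keys):

```
OPENAI_API_KEY=sk-...        # never commit this
MCP_SERVER_URL=https://order-mcp-74afyau2q-uc.a.run.app/mcp
CHAINLIT_AUTH_SECRET=...     # output of `chainlit create-secret`
# CHAINLIT_COOKIE_SAMESITE stays unset locally (default lax works on localhost)
LANGFUSE_PUBLIC_KEY=
LANGFUSE_SECRET_KEY=
LANGFUSE_HOST=https://cloud.langfuse.com
```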
A `make loadtest` target wraps Apache Bench against `/healthz`:

```
chainlit run app.py   # in one shell
make loadtest         # in another (tweak via N=, C=, PORT=)
```

The default 1000 requests at concurrency 50 keeps the single-container HF Space free tier well below its rated capacity; horizontal scaling is covered under "Strategic" in future.md.
Pipe Chainlit's stdout through `scripts/tail_logs.py` for a colour-coded view of structured events (`mcp_call`, `circuit_open`, `agent_decision`, etc.):

```
chainlit run app.py 2>&1 | python scripts/tail_logs.py
# or filter to one event type:
chainlit run app.py 2>&1 | python scripts/tail_logs.py --filter agent_decision
```

Stdlib only — no extra deps.
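At its core a tool like this is a JSON-lines filter. A sketch of that idea (not the actual `tail_logs.py`):

```python
import json

def filter_events(lines, event=None):
    """Yield structured-log events parsed from JSON lines, optionally
    keeping only one event type; non-JSON lines (plain Chainlit output)
    pass through unchanged."""
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            yield line.rstrip("\n")   # not a structured event
            continue
        if event is None or record.get("event") == event:
            yield record
```

Feeding it `sys.stdin` and printing each item reproduces the pipeline behaviour shown above.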
GET /healthz returns a JSON payload with the status of the two upstreams the
chat depends on (the MCP server and OpenAI). HTTP 200 when both reachable,
503 when either is degraded:
```json
{
  "status": "ok",
  "checks": {
    "mcp": {"status": "ok", "latency_ms": 234.1},
    "openai": {"status": "ok", "latency_ms": 156.3}
  }
}
```

Suitable as the target for HF Spaces' health probe, UptimeRobot, or any external monitor. Public, no auth — the payload reveals nothing sensitive.
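The 200/503 decision is a simple fold over the per-upstream checks. A sketch (the `aggregate_health` name and payload assembly are illustrative, not the endpoint's actual code):

```python
def aggregate_health(checks: dict) -> tuple[int, dict]:
    """Collapse per-upstream check results into the /healthz payload:
    HTTP 200 when every upstream is ok, 503 when any is degraded."""
    degraded = any(c["status"] != "ok" for c in checks.values())
    status = "degraded" if degraded else "ok"
    return (503 if degraded else 200), {"status": status, "checks": checks}
```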
A SQLite file at ./chainlit.db is auto-created on startup; no env var or
external service is needed. Browser refresh resumes the conversation, the
verified email, and the running cost-cap counter. HF Spaces caveat: the
container filesystem is ephemeral, so the DB is wiped on every Space
rebuild (i.e. every push to main). For durable production storage,
attach an HF Persistent Storage volume and point _DB_PATH at the mount.
Push to main. The Actions workflow runs lint + tests across Python 3.11/3.12, then deploys. The Space rebuilds in ~2-3 minutes; live URL is https://huggingface.co/spaces/<HF_USERNAME>/<HF_SPACE>.
```
docker build -t meridian-support .
docker run --rm -p 7860:7860 --env-file .env meridian-support
# open http://localhost:7860
```
