Shared context layer for AI-native developers. Auto-capture conversations from ChatGPT and Gemini, expose them to Cursor and Claude Code via MCP, and seamlessly inject relevant context back into browser tools.
Developers live in fragmented AI tool silos:
- ChatGPT conversation → manual copy-paste to Cursor
- Gemini image → manual screenshot or download
- Cursor decision → no way to send back to ChatGPT
contextbridge is the missing glue.
┌──────────┐ capture ┌─────────────┐ inject ┌─────────────┐
│ ChatGPT │ ────────►│ Context │ ◄──────┤ Cursor │
│ Gemini │ │ Store │ │ Claude Code │
└──────────┘ └─────────────┘ └─────────────┘
- Capture: Chrome extension watches ChatGPT/Gemini conversations
- Store: Conversations saved as markdown files + indexed
- MCP Expose: Cursor/Claude Code read via MCP tools
- Inject: Confirmation toast offers to inject context back into tools
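Conceptually, each capture is one small JSON event POSTed to the bridge. A minimal TypeScript sketch of that payload, assuming the field names used by the manual-capture curl example later in this README (`source`, `title`, `url`, `content`); the real wire format may carry more fields:

```typescript
// Field names mirror the manual-capture curl example; the actual
// event schema may include extra fields (timestamps, tags, ...).
interface CaptureEvent {
  source: "chatgpt" | "gemini" | "manual";
  title: string;
  url: string;
  content: string;
}

// Serialize an event the way the extension would POST it to
// http://127.0.0.1:3131/capture.
function buildCaptureBody(event: CaptureEvent): string {
  return JSON.stringify(event);
}

const body = buildCaptureBody({
  source: "manual",
  title: "JWT strategy decision",
  url: "",
  content: "Use short-lived access tokens + httpOnly refresh tokens.",
});
```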
- Node.js 18+
- npm
- Chrome browser
- Cursor or Claude Code (optional, for MCP integration)
# 1. Install dependencies
npm install
cd extension && npm install && cd ..
# 2. Build bridge + extension
npm run build
cd extension && npm run build && cd ..
# 3. Load extension into Chrome
# - chrome://extensions/
# - Enable Developer mode
# - Load unpacked → select contextbridge/extension/dist/
# 4. Install MCP for Cursor/Claude Code (optional)
bash scripts/install-mcp.sh
# 5. Start the bridge
npm run dev

The bridge runs on http://127.0.0.1:3131.
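`scripts/install-mcp.sh` registers the bridge's MCP server with your editor. In Cursor that typically means an entry in `~/.cursor/mcp.json`; the command and path below are illustrative assumptions, so check the script for the actual values it writes:

```json
{
  "mcpServers": {
    "contextbridge": {
      "command": "node",
      "args": ["/absolute/path/to/contextbridge/dist/mcp-server.js"]
    }
  }
}
```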
- Open https://chat.openai.com
- Have a conversation with ChatGPT
- Watch ~/.contextbridge/sessions/ for new .md files appearing in real time
- Open https://gemini.google.com
- A toast should appear: "📋 New context from chatgpt"
- Click "Yes, inject" to insert the ChatGPT discussion into your Gemini prompt
# Terminal 1: start bridge
pnpm dev
# Terminal 2: capture a test event
curl -X POST http://127.0.0.1:3131/capture \
-H 'Content-Type: application/json' \
-d '{
"source": "manual",
"title": "JWT strategy decision",
"url": "",
"content": "Use short-lived access tokens + httpOnly refresh tokens for security."
}'
# Verify the file was created
ls ~/.contextbridge/sessions/
cat ~/.contextbridge/index.json

Then in Cursor or Claude Code:
You: "Use get_recent_context to tell me what's been captured"
Cursor/Claude Code will call the MCP tool and return your captured context.
| Component | Purpose |
|---|---|
| Bridge (Fastify + MCP) | Captures events via REST, exposes MCP tools to IDEs |
| Markdown Store | All context stored in ~/.contextbridge/sessions/*.md (git-friendly) |
| Chrome Extension | (Week 2+) Captures DOM from ChatGPT/Gemini |
| Confirmation Toast | (Week 4+) Offers to inject context back into tools |
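The capture endpoint itself is simple. The real bridge is built on Fastify; this dependency-free sketch uses Node's built-in `http` module to show the assumed route shape (accept a JSON event, acknowledge with an id):

```typescript
import { createServer } from "node:http";

// In-memory stand-in for the markdown store.
const captured: unknown[] = [];

// Sketch of POST /capture: parse the JSON event, store it, and reply
// with a simple acknowledgement. The real bridge persists a markdown
// file and updates index.json instead.
const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/capture") {
    let raw = "";
    req.on("data", (chunk) => (raw += chunk));
    req.on("end", () => {
      const event = JSON.parse(raw);
      captured.push(event);
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true, id: captured.length }));
    });
  } else {
    res.writeHead(404).end();
  }
});
```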
~/.contextbridge/
sessions/ ← one markdown file per captured context
2026-04-11_chatgpt_*.md
2026-04-11_gemini_*.md
artifacts/ ← images from Gemini
index.json ← index of all sessions
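Only the `YYYY-MM-DD_source` prefix of a session filename is shown above; the rest of the naming scheme is not specified. One plausible sketch, with the title-slug rules as assumptions:

```typescript
// Derive a filename like 2026-04-11_chatgpt_jwt-strategy-decision.md.
// Only the date_source prefix is documented; the slug rules below
// (lowercase, dashes, 40-char cap) are illustrative assumptions.
function sessionFilename(date: Date, source: string, title: string): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // non-alphanumeric runs become dashes
    .replace(/^-|-$/g, "")       // trim leading/trailing dashes
    .slice(0, 40);
  return `${day}_${source}_${slug}.md`;
}
```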
# Type check
pnpm typecheck
# Lint
pnpm lint
# Watch mode (rebuilds on change)
pnpm --filter bridge dev

v0.1.0 - COMPLETE ✅
All core features implemented and tested:
- Week 1 ✓ Bridge + markdown store
- Week 2 ✓ Chrome extension: ChatGPT + Gemini capture
- Week 3 ✓ Image artifacts + linking
- Week 4 ✓ Bidirectional injection with toast UI
- Week 5 ✓ MCP integration + Cursor rules
See detailed docs:
- WEEK2_SETUP.md — Extension setup
- WEEK3_ARTIFACTS.md — Image capture
- WEEK4_BIDIRECTIONAL.md — Injection flow
- WEEK5_MCP_INTEGRATION.md — MCP + Cursor rules
| Tool | Purpose |
|---|---|
| `get_recent_context(limit, source?)` | Get recent captured contexts |
| `search_context(q, limit)` | Search by title/tags |
| `get_session(id)` | Get full body of a session |
| `save_context(title, content, tags?)` | Write a decision to the store |
| `list_artifacts(sessionId?)` | List captured images |
Add to .cursor/rules or your workspace config:
At the start of each task, call `get_recent_context` to check for relevant prior discussions or decisions from ChatGPT/Gemini.

MIT
Contributions welcome! Please open an issue or PR.