Athena, ancient Greek goddess associated with wisdom, warfare, and handicraft.
A proxy server that translates Anthropic's API format to the OpenAI API format, letting you use Claude Code with OpenRouter's wide selection of AI models.
- **API Translation**: Maps Anthropic API calls to OpenRouter format
- **Streaming Support**: Full SSE streaming for real-time responses
- **Tool Calling**: Complete function/tool calling support
- **Model Mapping**: Configurable mappings for Opus, Sonnet, and Haiku models
- **Provider Routing**: Automatic Groq provider routing for Kimi K2 models
- **Flexible Configuration**: CLI flags, config files, environment variables, and `.env` files
- **Minimal Dependencies**: Lightweight with only essential external packages (Cobra CLI, YAML parser)
```bash
curl -fsSL https://raw.githubusercontent.com/martinffx/athena/main/install.sh | bash
```

Or install manually:

- Download the latest release from GitHub Releases
- Extract and move to your PATH
- Set up configuration (see below)
The proxy looks for configuration in this priority order (highest to lowest):

1. Command line flags - CLI arguments override everything
2. Environment variables - `ATHENA_*` prefixed env vars
3. Local config file - `./athena.yml` in the current directory
4. Global config file - `~/.config/athena/athena.yml`
5. Built-in defaults - hardcoded fallback values
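The precedence chain above amounts to a first-non-empty lookup across the five sources. A hypothetical sketch in Go (not the proxy's actual code; the variable names are illustrative):

```go
package main

import "fmt"

// resolve returns the first non-empty value, mirroring the
// flag > env > local file > global file > default precedence.
func resolve(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	flagPort := ""         // no CLI flag passed
	envPort := "9000"      // from the environment
	localPort := "8080"    // from ./athena.yml
	globalPort := ""       // from ~/.config/athena/athena.yml
	defaultPort := "12377" // built-in default

	// The environment wins here because no flag was given.
	fmt.Println(resolve(flagPort, envPort, localPort, globalPort, defaultPort)) // 9000
}
```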
```yaml
# ~/.config/athena/athena.yml
port: "12377"
api_key: "your-openrouter-api-key-here"
base_url: "https://openrouter.ai/api"
model: "moonshotai/kimi-k2-0905"
opus_model: "anthropic/claude-3-opus"
sonnet_model: "anthropic/claude-3.5-sonnet"
haiku_model: "anthropic/claude-3.5-haiku"
```

Note: The default model `moonshotai/kimi-k2-0905` automatically uses Groq provider routing for optimal performance.
For fine-grained control over provider routing, add provider configurations to your YAML config:
```yaml
port: "12377"
api_key: "your-openrouter-api-key-here"
base_url: "https://openrouter.ai/api"
model: "moonshotai/kimi-k2-0905"
default_provider:
  order:
    - Groq
  allow_fallbacks: false
opus_model: "anthropic/claude-3-opus"
opus_provider:
  order:
    - Anthropic
  allow_fallbacks: true
sonnet_model: "anthropic/claude-3.5-sonnet"
haiku_model: "anthropic/claude-3.5-haiku"
```

Provider routing allows you to:

- Force requests through specific providers (e.g., Groq, Anthropic)
- Control fallback behavior when the primary provider is unavailable
- Configure different providers for different model tiers
```bash
export OPENROUTER_API_KEY="your-key"
export OPUS_MODEL="anthropic/claude-3-opus"
export SONNET_MODEL="anthropic/claude-3.5-sonnet"
export HAIKU_MODEL="anthropic/claude-3.5-haiku"
export DEFAULT_MODEL="moonshotai/kimi-k2-0905"
export PORT="12377"
```

Or use a `.env` file:

```bash
# ./.env
OPENROUTER_API_KEY=your-openrouter-api-key-here
OPUS_MODEL=anthropic/claude-3-opus
SONNET_MODEL=anthropic/claude-3.5-sonnet
HAIKU_MODEL=anthropic/claude-3.5-haiku
```

```bash
# Run in foreground (default)
athena

# Run as background daemon
athena start

# Stop daemon
athena stop

# Check daemon status
athena status

# View logs (daemon mode)
tail -f ~/.athena/athena.log
```

```bash
# Use specific models and port (foreground)
athena -port 9000 -api-key YOUR_KEY

# Or run as daemon with custom port
athena start -port 9000 -api-key YOUR_KEY

# Enable debug logging to see full request/response bodies
athena --log-level debug
```

```bash
# Start Athena daemon
athena start

# Configure Claude Code to use the proxy
export ANTHROPIC_BASE_URL=http://localhost:12377
export ANTHROPIC_API_KEY=your-openrouter-key

# Run Claude Code
claude
```

The proxy server:
1. Receives Anthropic API calls from Claude Code on `/v1/messages`
2. Transforms the request format to an OpenAI-compatible format
3. Forwards to OpenRouter's `/v1/chat/completions` endpoint
4. Converts the response back to Anthropic format
5. Streams the response back to Claude Code
When Claude Code requests a model:
- `claude-3-opus*` → your configured `opus_model`
- `claude-3.5-sonnet*` → your configured `sonnet_model`
- `claude-3.5-haiku*` → your configured `haiku_model`
- Models containing `/` (e.g., `openai/gpt-4`) → passed through as-is
- Other models → your configured `default_model`
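The mapping rules above amount to a small dispatch function. A Go sketch (the `config` struct and function name are illustrative, not the proxy's real types):

```go
package main

import (
	"fmt"
	"strings"
)

// config mirrors the mapping-related settings from athena.yml.
type config struct {
	OpusModel    string
	SonnetModel  string
	HaikuModel   string
	DefaultModel string
}

// mapModel applies the routing rules: tier prefixes map to configured
// models, slash-qualified ids pass through, everything else defaults.
func mapModel(requested string, cfg config) string {
	switch {
	case strings.HasPrefix(requested, "claude-3-opus"):
		return cfg.OpusModel
	case strings.HasPrefix(requested, "claude-3.5-sonnet"):
		return cfg.SonnetModel
	case strings.HasPrefix(requested, "claude-3.5-haiku"):
		return cfg.HaikuModel
	case strings.Contains(requested, "/"):
		return requested // already an OpenRouter-style id; pass through
	default:
		return cfg.DefaultModel
	}
}

func main() {
	cfg := config{
		OpusModel:    "anthropic/claude-3-opus",
		SonnetModel:  "anthropic/claude-3.5-sonnet",
		HaikuModel:   "anthropic/claude-3.5-haiku",
		DefaultModel: "moonshotai/kimi-k2-0905",
	}
	fmt.Println(mapModel("claude-3.5-sonnet-20241022", cfg)) // anthropic/claude-3.5-sonnet
	fmt.Println(mapModel("openai/gpt-4", cfg))               // openai/gpt-4
}
```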
```bash
git clone https://github.com/martinffx/athena.git
cd athena
go build -o athena ./cmd/athena
```

The proxy provides a fully compatible Anthropic Messages API that supports:
- ✅ Text generation
- ✅ Streaming responses
- ✅ System messages
- ✅ Tool/function calling
- ✅ Multi-turn conversations
- ✅ Content blocks (text, tool_use, tool_result)
- ✅ Usage tracking
- ✅ Stop reasons
- `POST /v1/messages` - Anthropic Messages API (proxied to OpenRouter)
- `GET /health` - Health check endpoint
- Linux (AMD64, ARM64)
- macOS (Intel, Apple Silicon)
- Windows (AMD64, ARM64)
OpenAI models:

```yaml
opus_model: "openai/gpt-4"
sonnet_model: "openai/gpt-4-turbo"
haiku_model: "openai/gpt-3.5-turbo"
```

Google Gemini:

```yaml
opus_model: "google/gemini-pro"
sonnet_model: "google/gemini-pro"
haiku_model: "google/gemini-pro"
```

Local models (via an OpenAI-compatible server):

```yaml
base_url: "http://localhost:12377/v1"
opus_model: "llama3:70b"
sonnet_model: "llama3:8b"
haiku_model: "llama3:8b"
```

If the port is already in use:

```bash
# Use a different port
athena -port 9000
```

If authentication fails:

```bash
# Check if key is set
echo $OPENROUTER_API_KEY

# Test the proxy directly
curl -X POST http://localhost:12377/v1/messages \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: your-key" \
  -d '{"model":"claude-3-sonnet","messages":[{"role":"user","content":"Hi"}]}'
```

MIT License - see LICENSE for details.
Contributions are welcome! Please feel free to submit a Pull Request.
