Connect ReasonKit to any LLM provider or CLI tool.

"Turn Prompts into Protocols - Anywhere"
Universal Compatibility: ReasonKit integrates seamlessly with Claude, Gemini, OpenAI, Cursor, VS Code, and any LLM provider. The same structured reasoning protocols work across all platforms, giving you flexibility without vendor lock-in.
Comprehensive guides for integrating ReasonKit with major LLM providers:
| Provider | Guide | Best For | Quick Start |
|---|---|---|---|
| Anthropic (Claude) | ANTHROPIC_INTEGRATION.md | Complex reasoning, extended thinking, safety-critical | rk think --provider anthropic "query" |
| OpenAI (GPT) | OPENAI_INTEGRATION.md | General reasoning, JSON mode, embeddings | rk think --provider openai "query" |
| Google (Gemini) | GOOGLE_INTEGRATION.md | Long context (2M), multimodal, documents | rk think --provider gemini "query" |
| Groq | GROQ_INTEGRATION.md | Ultra-fast inference, rapid iteration | rk think --provider groq "query" |
| OpenRouter | OPENROUTER_INTEGRATION.md | 300+ models, fallback routing, cost optimization | rk think --provider openrouter "query" |
| Provider | Speed | Quality | Cost | Best Profile |
|---|---|---|---|---|
| Groq | Fastest | Good | Lowest | --quick |
| Gemini Flash | Very Fast | Good | Very Low | --quick, --balanced |
| OpenAI GPT-4o | Fast | Excellent | Medium | --balanced |
| Claude Sonnet | Fast | Excellent | Medium | --balanced, --deep |
| Claude Opus | Medium | Best | High | --deep, --paranoid |
| OpenAI o1 | Slow | Excellent | High | --deep, --paranoid |
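Read as a lookup table, the "Best Profile" column above implies a default provider for each profile. A minimal sketch of that mapping (the pairings are read off the table; where a profile lists several providers, picking the first is my arbitrary choice, not ReasonKit behavior):

```shell
# Map a ReasonKit profile to a default provider, per the table above.
# quick favors speed/cost (Groq), balanced favors GPT-4o, and the
# deep/paranoid profiles favor Claude; these defaults are illustrative.
provider_for_profile() {
  case "$1" in
    quick)    echo groq ;;
    balanced) echo openai ;;
    deep)     echo anthropic ;;
    paranoid) echo anthropic ;;
    *)        return 1 ;;
  esac
}

# Example use:
# rk think --provider "$(provider_for_profile quick)" "Fast check"
```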
| Feature | Anthropic | OpenAI | Gemini | Groq | OpenRouter |
|---|---|---|---|---|---|
| Max Context | 200K | 200K | 2M | 128K | Varies |
| Extended Thinking | Yes | Yes (o1) | Yes | No | Via models |
| JSON Mode | No | Yes | Yes | Yes | Via models |
| Vision | Yes | Yes | Yes | No | Via models |
| Audio | No | Yes | Yes | Yes | Via models |
| Video | No | No | Yes | No | Via models |
| Embeddings | No | Yes | Yes | No | Via models |
| Free Tier | No | No | Yes | Limited | Yes |
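When a query needs a specific capability, the matrix above tells you which providers qualify. A sketch encoding one row of it (only the Vision row is shown, purely for illustration; the function name is mine, not ReasonKit's):

```shell
# Answer "does this provider support vision?" from the Vision row
# of the feature matrix above.
supports_vision() {
  case "$1" in
    anthropic|openai|gemini) echo yes ;;
    groq)                    echo no ;;
    *)                       echo unknown ;;
  esac
}
```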
Set all API keys for maximum flexibility:
```shell
# Core providers
export ANTHROPIC_API_KEY="sk-ant-..."    # Anthropic Claude
export OPENAI_API_KEY="sk-..."           # OpenAI GPT
export GEMINI_API_KEY="..."              # Google Gemini
export GROQ_API_KEY="gsk_..."            # Groq
export OPENROUTER_API_KEY="sk-or-..."    # OpenRouter (300+ models)

# Additional providers
export XAI_API_KEY="xai-..."             # xAI Grok
export MISTRAL_API_KEY="..."             # Mistral AI
export DEEPSEEK_API_KEY="..."            # DeepSeek
```
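With the environment variables above in place, a wrapper can fall back to whichever provider is actually configured. A sketch, assuming the documented variable names; the fallback order and the function itself are my invention:

```shell
# Return the first provider whose API key is set, in an assumed
# preference order (anthropic -> openai -> gemini -> groq).
pick_provider() {
  if   [ -n "${ANTHROPIC_API_KEY:-}" ];  then echo anthropic
  elif [ -n "${OPENAI_API_KEY:-}" ];     then echo openai
  elif [ -n "${GEMINI_API_KEY:-}" ];     then echo gemini
  elif [ -n "${GROQ_API_KEY:-}" ];       then echo groq
  else return 1
  fi
}

# Example use:
# rk think --provider "$(pick_provider)" "query"
```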
```toml
# Default provider for rk think
[thinktool]
default_provider = "anthropic"
default_model = "claude-sonnet-4"

# Provider-specific defaults
[providers.anthropic]
default_model = "claude-sonnet-4"

[providers.openai]
default_model = "gpt-4o"

[providers.gemini]
default_model = "gemini-2.0-flash"

[providers.groq]
default_model = "llama-3.3-70b-versatile"

[providers.openrouter]
default_model = "anthropic/claude-sonnet-4"
```
```shell
# Use specific provider
rk think --provider anthropic "Analyze this code"
rk think --provider openai "Evaluate this design"
rk think --provider gemini "Summarize this document"
rk think --provider groq "Quick review"

# Specify exact model
rk think --provider anthropic --model claude-opus-4 "Deep analysis"
rk think --provider openai --model o1 "Complex reasoning"
rk think --provider gemini --model gemini-1.5-pro "Long document"
rk think --provider groq --model llama-3.1-405b-reasoning "Hard problem"

# Profiles auto-select appropriate models
rk think --profile quick "Fast check"       # Uses fastest available
rk think --profile balanced "Standard"      # Uses balanced model
rk think --profile deep "Thorough"          # Uses reasoning model
rk think --profile paranoid "Critical"      # Uses best model
```

ReasonKit can act as the default reasoning layer inside popular CLI agents:
| Tool | Injection Method | Bypass |
|---|---|---|
| `claude` | `--append-system-prompt` | `--no-rk` |
| `gemini` | Prompt prefix | `--no-rk` |
| `codex` | Prompt prefix | `--no-rk` |
| `opencode` | Prompt prefix | `--no-rk` |
| `cursor-agent` | Prompt prefix | `--no-rk` |
| `copilot` | via `gh copilot` | `--no-rk` |
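The "Prompt prefix" method in the table above amounts to prepending a reasoning protocol to the user's prompt before it reaches the underlying CLI. A minimal sketch of that idea; the function name and arguments are illustrative, not part of ReasonKit's API:

```shell
# Prepend the contents of a protocol file to a prompt. If the
# protocol file is missing, pass the prompt through unchanged.
rk_prefix() {
  protocol_file="$1"   # e.g. a file under protocols/cli/
  prompt="$2"
  if [ -f "$protocol_file" ]; then
    printf '%s\n\n%s\n' "$(cat "$protocol_file")" "$prompt"
  else
    printf '%s\n' "$prompt"
  fi
}
```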
```shell
bash reasonkit-core/scripts/install_cli_defaults.sh
source ~/.zshrc
```

All defaults are controlled by `reasonkit-core/config/cli_defaults.toml`:
```shell
export RK_CONFIG="$HOME/RK-PROJECT/reasonkit-core/config/cli_defaults.toml"
```

Example:
```toml
[defaults]
profile = "balanced"
protocol_dir = "protocols/cli"

[tools.codex]
profile = "paranoid"
```

```shell
export RK_PROFILE=quick
# or
export RK_PROFILE=paranoid
```

```shell
claude --no-rk   # Run without ReasonKit injection
```

Protocols are stored in `reasonkit-core/protocols/cli/`:
- `balanced.md`
- `quick.md`
- `paranoid.md`
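The configuration layers above suggest a resolution order for the active profile: an explicit flag beats `RK_PROFILE`, which beats the config default. A sketch of that logic; the precedence order itself is my assumption, not documented ReasonKit behavior:

```shell
# Resolve the effective profile: explicit argument, then the
# RK_PROFILE environment variable, then the "balanced" default
# from cli_defaults.toml.
resolve_profile() {
  flag="$1"
  if   [ -n "$flag" ];               then echo "$flag"
  elif [ -n "${RK_PROFILE:-}" ];     then echo "$RK_PROFILE"
  else echo "balanced"
  fi
}
```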
```shell
# Set per-query budget
rk think "Expensive query" --budget "$1.00"

# Set time budget
rk think "Time-limited" --budget "60s"

# Set token budget
rk think "Token-limited" --budget "5000t"
```
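The `--budget` examples above use three suffix conventions: a leading `$` for dollars, a trailing `s` for seconds, and a trailing `t` for tokens. A sketch classifying a budget string by those conventions (this parser is illustrative, not ReasonKit's own):

```shell
# Classify a --budget value by the suffix conventions shown above.
budget_kind() {
  case "$1" in
    \$*) echo dollars ;;
    *s)  echo seconds ;;
    *t)  echo tokens ;;
    *)   echo unknown ;;
  esac
}
```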
```shell
# View usage
rk metrics cost --period day
rk metrics cost --period month
rk metrics cost --provider anthropic
```

- TOOLING_SPEC.md - CLI tool integration details
- CONSULTATIONS.md - AI-to-AI consultation patterns
- CLI_REFERENCE.md - Full CLI documentation
- THINKTOOLS_QUICK_REFERENCE.md - ThinkTool cheat sheet
ReasonKit Integrations | v1.0.0 | Apache 2.0 "See How Your AI Thinks"
