Commit 98e2910
feat: Add support for OpenRouter (#92)
* Add support for OpenRouter as a new model provider
  - Introduced `ProviderOpenRouter` in the `models` package.
  - Added OpenRouter-specific models, including `GPT41`, `GPT41Mini`, `GPT4o`, and others, with their configurations and costs.
  - Updated `generateSchema` to include OpenRouter as a provider.
  - Added OpenRouter-specific environment-variable handling (`OPENROUTER_API_KEY`) in `config.go`.
  - Implemented default model settings for OpenRouter agents in `setDefaultModelForAgent`.
  - Updated `getProviderAPIKey` to retrieve the OpenRouter API key.
  - Extended `SupportedModels` to include OpenRouter models.
  - Added OpenRouter client initialization in the `provider` package.
  - Modified `processGeneration` to handle `FinishReasonUnknown` in addition to `FinishReasonToolUse`.

* [feature/openrouter-provider] Add new models and provider to schema
  - Added "deepseek-chat-free" and "deepseek-r1-free" to the list of supported models in `opencode-schema.json`.

* [feature/openrouter-provider] Add OpenRouter provider support and integrate new models
  - Updated README.md to include OpenRouter as a supported provider and its configuration details.
  - Added `OPENROUTER_API_KEY` to the environment-variable configuration.
  - Introduced OpenRouter-specific models in `internal/llm/models/openrouter.go` with mappings to existing cost and token configurations.
  - Updated `internal/config/config.go` to set default models for OpenRouter agents.
  - Extended `opencode-schema.json` to include OpenRouter models in the schema definitions.
  - Refactored model IDs and names to align with OpenRouter naming conventions.

* [feature/openrouter-provider] Refactor finish reason handling and tool call logic in agent and OpenAI provider
  - Simplified the finish reason check in `agent.go` by removing a redundant variable assignment.
  - Updated `openai.go` to override the finish reason to `FinishReasonToolUse` when tool calls are present.
  - Ensured consistent finish reason handling in both `send` and `stream` methods of the OpenAI provider.

* [feature/openrouter-provider] Add support for custom headers in OpenAI client configuration
  - Introduced a new `extraHeaders` field in the `openaiOptions` struct to allow specifying additional HTTP headers.
  - Added logic in `newOpenAIClient` to apply `extraHeaders` to the OpenAI client configuration.
  - Implemented a new option function `WithOpenAIExtraHeaders` to set custom headers in `openaiOptions`.
  - Updated the OpenRouter provider configuration in `NewProvider` to include default headers (`HTTP-Referer` and `X-Title`) for OpenRouter API requests.

* Update OpenRouter model config and remove unsupported models

* [feature/openrouter-provider] Update OpenRouter models and default configurations
  - Added new OpenRouter models in `openrouter.go`: `claude-3.5-sonnet`, `claude-3-haiku`, `claude-3.7-sonnet`, `claude-3.5-haiku`, and `claude-3-opus`.
  - Updated default agent models in `config.go`:
    - `agents.coder.model` now uses `claude-3.7-sonnet`.
    - `agents.task.model` now uses `claude-3.7-sonnet`.
    - `agents.title.model` now uses `claude-3.5-haiku`.
  - Updated `opencode-schema.json` to include the new models in the allowed list for schema validation.
  - Adjusted logic in `setDefaultModelForAgent` to reflect the new default models.

* [feature/openrouter-provider] Remove unused ProviderEvent emission in stream function
  - Removed the emission of a `ProviderEvent` with type `EventContentStop` in the `stream` function of the `openaiClient` implementation; this event was sent on successful stream completion but is no longer used.
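The custom-headers change follows Go's functional-options pattern. A minimal, simplified sketch under the names the commit message gives (`openaiOptions`, `WithOpenAIExtraHeaders`, `newOpenAIClient`); the struct fields besides `extraHeaders` and the constructor shape are assumptions, not the actual provider code:

```go
package main

import "fmt"

// openaiOptions is a simplified stand-in for the provider's option struct.
type openaiOptions struct {
	baseURL      string
	extraHeaders map[string]string
}

type OpenAIOption func(*openaiOptions)

// WithOpenAIExtraHeaders sets additional HTTP headers for every request,
// as the OpenRouter provider does with HTTP-Referer and X-Title.
func WithOpenAIExtraHeaders(headers map[string]string) OpenAIOption {
	return func(o *openaiOptions) {
		if o.extraHeaders == nil {
			o.extraHeaders = map[string]string{}
		}
		for k, v := range headers {
			o.extraHeaders[k] = v
		}
	}
}

// newOpenAIClient applies each option to a default configuration.
func newOpenAIClient(opts ...OpenAIOption) openaiOptions {
	o := openaiOptions{baseURL: "https://api.openai.com/v1"}
	for _, opt := range opts {
		opt(&o)
	}
	return o
}

func main() {
	client := newOpenAIClient(
		WithOpenAIExtraHeaders(map[string]string{
			"HTTP-Referer": "opencode.ai",
			"X-Title":      "OpenCode",
		}),
	)
	fmt.Println(client.extraHeaders["X-Title"]) // prints "OpenCode"
}
```

The advantage of the option-function approach is that OpenRouter can reuse the OpenAI client unchanged, adding only its attribution headers at construction time.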
1 parent 2941137 commit 98e2910

File tree: 8 files changed, +405 −38 lines

README.md (+7 −3)

```diff
@@ -11,7 +11,7 @@ OpenCode is a Go-based CLI application that brings AI assistance to your terminal
 ## Features

 - **Interactive TUI**: Built with [Bubble Tea](https://github.com/charmbracelet/bubbletea) for a smooth terminal experience
-- **Multiple AI Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, and Azure OpenAI
+- **Multiple AI Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, Azure OpenAI, and OpenRouter
 - **Session Management**: Save and manage multiple conversation sessions
 - **Tool Integration**: AI can execute commands, search files, and modify code
 - **Vim-like Editor**: Integrated editor with text input capabilities
@@ -97,8 +97,12 @@ You can configure OpenCode using environment variables:
       "disabled": false
     },
     "groq": {
-      "apiKey": "your-api-key",
-      "disabled": false
+      "apiKey": "your-api-key",
+      "disabled": false
+    },
+    "openrouter": {
+      "apiKey": "your-api-key",
+      "disabled": false
     }
   },
   "agents": {
```

cmd/schema/main.go (+1)

```diff
@@ -173,6 +173,7 @@ func generateSchema() map[string]any {
     string(models.ProviderOpenAI),
     string(models.ProviderGemini),
     string(models.ProviderGROQ),
+    string(models.ProviderOpenRouter),
     string(models.ProviderBedrock),
     string(models.ProviderAzure),
 }
```

internal/config/config.go (+39)

```diff
@@ -267,6 +267,15 @@ func setProviderDefaults() {
     return
 }

+// OpenRouter configuration
+if apiKey := os.Getenv("OPENROUTER_API_KEY"); apiKey != "" {
+    viper.SetDefault("providers.openrouter.apiKey", apiKey)
+    viper.SetDefault("agents.coder.model", models.OpenRouterClaude37Sonnet)
+    viper.SetDefault("agents.task.model", models.OpenRouterClaude37Sonnet)
+    viper.SetDefault("agents.title.model", models.OpenRouterClaude35Haiku)
+    return
+}
+
 // AWS Bedrock configuration
 if hasAWSCredentials() {
     viper.SetDefault("agents.coder.model", models.BedrockClaude37Sonnet)
@@ -527,6 +536,8 @@ func getProviderAPIKey(provider models.ModelProvider) string {
     return os.Getenv("GROQ_API_KEY")
 case models.ProviderAzure:
     return os.Getenv("AZURE_OPENAI_API_KEY")
+case models.ProviderOpenRouter:
+    return os.Getenv("OPENROUTER_API_KEY")
 case models.ProviderBedrock:
     if hasAWSCredentials() {
         return "aws-credentials-available"
@@ -578,6 +589,34 @@ func setDefaultModelForAgent(agent AgentName) bool {
     return true
 }

+if apiKey := os.Getenv("OPENROUTER_API_KEY"); apiKey != "" {
+    var model models.ModelID
+    maxTokens := int64(5000)
+    reasoningEffort := ""
+
+    switch agent {
+    case AgentTitle:
+        model = models.OpenRouterClaude35Haiku
+        maxTokens = 80
+    case AgentTask:
+        model = models.OpenRouterClaude37Sonnet
+    default:
+        model = models.OpenRouterClaude37Sonnet
+    }
+
+    // Check if model supports reasoning
+    if modelInfo, ok := models.SupportedModels[model]; ok && modelInfo.CanReason {
+        reasoningEffort = "medium"
+    }
+
+    cfg.Agents[agent] = Agent{
+        Model:           model,
+        MaxTokens:       maxTokens,
+        ReasoningEffort: reasoningEffort,
+    }
+    return true
+}
+
 if apiKey := os.Getenv("GEMINI_API_KEY"); apiKey != "" {
     var model models.ModelID
     maxTokens := int64(5000)
```

internal/llm/models/models.go (+1)

```diff
@@ -86,4 +86,5 @@ func init() {
     maps.Copy(SupportedModels, GeminiModels)
     maps.Copy(SupportedModels, GroqModels)
     maps.Copy(SupportedModels, AzureModels)
+    maps.Copy(SupportedModels, OpenRouterModels)
 }
```

internal/llm/models/openrouter.go (+262, new file)

```go
package models

const (
	ProviderOpenRouter ModelProvider = "openrouter"

	OpenRouterGPT41          ModelID = "openrouter.gpt-4.1"
	OpenRouterGPT41Mini      ModelID = "openrouter.gpt-4.1-mini"
	OpenRouterGPT41Nano      ModelID = "openrouter.gpt-4.1-nano"
	OpenRouterGPT45Preview   ModelID = "openrouter.gpt-4.5-preview"
	OpenRouterGPT4o          ModelID = "openrouter.gpt-4o"
	OpenRouterGPT4oMini      ModelID = "openrouter.gpt-4o-mini"
	OpenRouterO1             ModelID = "openrouter.o1"
	OpenRouterO1Pro          ModelID = "openrouter.o1-pro"
	OpenRouterO1Mini         ModelID = "openrouter.o1-mini"
	OpenRouterO3             ModelID = "openrouter.o3"
	OpenRouterO3Mini         ModelID = "openrouter.o3-mini"
	OpenRouterO4Mini         ModelID = "openrouter.o4-mini"
	OpenRouterGemini25Flash  ModelID = "openrouter.gemini-2.5-flash"
	OpenRouterGemini25       ModelID = "openrouter.gemini-2.5"
	OpenRouterClaude35Sonnet ModelID = "openrouter.claude-3.5-sonnet"
	OpenRouterClaude3Haiku   ModelID = "openrouter.claude-3-haiku"
	OpenRouterClaude37Sonnet ModelID = "openrouter.claude-3.7-sonnet"
	OpenRouterClaude35Haiku  ModelID = "openrouter.claude-3.5-haiku"
	OpenRouterClaude3Opus    ModelID = "openrouter.claude-3-opus"
)

var OpenRouterModels = map[ModelID]Model{
	OpenRouterGPT41: {
		ID:                 OpenRouterGPT41,
		Name:               "OpenRouter – GPT 4.1",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/gpt-4.1",
		CostPer1MIn:        OpenAIModels[GPT41].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[GPT41].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[GPT41].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[GPT41].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[GPT41].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[GPT41].DefaultMaxTokens,
	},
	OpenRouterGPT41Mini: {
		ID:                 OpenRouterGPT41Mini,
		Name:               "OpenRouter – GPT 4.1 mini",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/gpt-4.1-mini",
		CostPer1MIn:        OpenAIModels[GPT41Mini].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[GPT41Mini].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[GPT41Mini].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[GPT41Mini].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[GPT41Mini].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[GPT41Mini].DefaultMaxTokens,
	},
	OpenRouterGPT41Nano: {
		ID:                 OpenRouterGPT41Nano,
		Name:               "OpenRouter – GPT 4.1 nano",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/gpt-4.1-nano",
		CostPer1MIn:        OpenAIModels[GPT41Nano].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[GPT41Nano].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[GPT41Nano].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[GPT41Nano].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[GPT41Nano].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[GPT41Nano].DefaultMaxTokens,
	},
	OpenRouterGPT45Preview: {
		ID:                 OpenRouterGPT45Preview,
		Name:               "OpenRouter – GPT 4.5 preview",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/gpt-4.5-preview",
		CostPer1MIn:        OpenAIModels[GPT45Preview].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[GPT45Preview].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[GPT45Preview].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[GPT45Preview].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[GPT45Preview].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[GPT45Preview].DefaultMaxTokens,
	},
	OpenRouterGPT4o: {
		ID:                 OpenRouterGPT4o,
		Name:               "OpenRouter – GPT 4o",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/gpt-4o",
		CostPer1MIn:        OpenAIModels[GPT4o].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[GPT4o].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[GPT4o].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[GPT4o].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[GPT4o].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[GPT4o].DefaultMaxTokens,
	},
	OpenRouterGPT4oMini: {
		ID:                 OpenRouterGPT4oMini,
		Name:               "OpenRouter – GPT 4o mini",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/gpt-4o-mini",
		CostPer1MIn:        OpenAIModels[GPT4oMini].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[GPT4oMini].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[GPT4oMini].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[GPT4oMini].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[GPT4oMini].ContextWindow,
	},
	OpenRouterO1: {
		ID:                 OpenRouterO1,
		Name:               "OpenRouter – O1",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/o1",
		CostPer1MIn:        OpenAIModels[O1].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[O1].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[O1].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[O1].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[O1].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[O1].DefaultMaxTokens,
		CanReason:          OpenAIModels[O1].CanReason,
	},
	OpenRouterO1Pro: {
		ID:                 OpenRouterO1Pro,
		Name:               "OpenRouter – o1 pro",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/o1-pro",
		CostPer1MIn:        OpenAIModels[O1Pro].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[O1Pro].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[O1Pro].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[O1Pro].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[O1Pro].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[O1Pro].DefaultMaxTokens,
		CanReason:          OpenAIModels[O1Pro].CanReason,
	},
	OpenRouterO1Mini: {
		ID:                 OpenRouterO1Mini,
		Name:               "OpenRouter – o1 mini",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/o1-mini",
		CostPer1MIn:        OpenAIModels[O1Mini].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[O1Mini].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[O1Mini].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[O1Mini].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[O1Mini].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[O1Mini].DefaultMaxTokens,
		CanReason:          OpenAIModels[O1Mini].CanReason,
	},
	OpenRouterO3: {
		ID:                 OpenRouterO3,
		Name:               "OpenRouter – o3",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/o3",
		CostPer1MIn:        OpenAIModels[O3].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[O3].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[O3].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[O3].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[O3].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[O3].DefaultMaxTokens,
		CanReason:          OpenAIModels[O3].CanReason,
	},
	OpenRouterO3Mini: {
		ID:                 OpenRouterO3Mini,
		Name:               "OpenRouter – o3 mini",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/o3-mini-high",
		CostPer1MIn:        OpenAIModels[O3Mini].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[O3Mini].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[O3Mini].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[O3Mini].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[O3Mini].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[O3Mini].DefaultMaxTokens,
		CanReason:          OpenAIModels[O3Mini].CanReason,
	},
	OpenRouterO4Mini: {
		ID:                 OpenRouterO4Mini,
		Name:               "OpenRouter – o4 mini",
		Provider:           ProviderOpenRouter,
		APIModel:           "openai/o4-mini-high",
		CostPer1MIn:        OpenAIModels[O4Mini].CostPer1MIn,
		CostPer1MInCached:  OpenAIModels[O4Mini].CostPer1MInCached,
		CostPer1MOut:       OpenAIModels[O4Mini].CostPer1MOut,
		CostPer1MOutCached: OpenAIModels[O4Mini].CostPer1MOutCached,
		ContextWindow:      OpenAIModels[O4Mini].ContextWindow,
		DefaultMaxTokens:   OpenAIModels[O4Mini].DefaultMaxTokens,
		CanReason:          OpenAIModels[O4Mini].CanReason,
	},
	OpenRouterGemini25Flash: {
		ID:                 OpenRouterGemini25Flash,
		Name:               "OpenRouter – Gemini 2.5 Flash",
		Provider:           ProviderOpenRouter,
		APIModel:           "google/gemini-2.5-flash-preview:thinking",
		CostPer1MIn:        GeminiModels[Gemini25Flash].CostPer1MIn,
		CostPer1MInCached:  GeminiModels[Gemini25Flash].CostPer1MInCached,
		CostPer1MOut:       GeminiModels[Gemini25Flash].CostPer1MOut,
		CostPer1MOutCached: GeminiModels[Gemini25Flash].CostPer1MOutCached,
		ContextWindow:      GeminiModels[Gemini25Flash].ContextWindow,
		DefaultMaxTokens:   GeminiModels[Gemini25Flash].DefaultMaxTokens,
	},
	OpenRouterGemini25: {
		ID:                 OpenRouterGemini25,
		Name:               "OpenRouter – Gemini 2.5 Pro",
		Provider:           ProviderOpenRouter,
		APIModel:           "google/gemini-2.5-pro-preview-03-25",
		CostPer1MIn:        GeminiModels[Gemini25].CostPer1MIn,
		CostPer1MInCached:  GeminiModels[Gemini25].CostPer1MInCached,
		CostPer1MOut:       GeminiModels[Gemini25].CostPer1MOut,
		CostPer1MOutCached: GeminiModels[Gemini25].CostPer1MOutCached,
		ContextWindow:      GeminiModels[Gemini25].ContextWindow,
		DefaultMaxTokens:   GeminiModels[Gemini25].DefaultMaxTokens,
	},
	OpenRouterClaude35Sonnet: {
		ID:                 OpenRouterClaude35Sonnet,
		Name:               "OpenRouter – Claude 3.5 Sonnet",
		Provider:           ProviderOpenRouter,
		APIModel:           "anthropic/claude-3.5-sonnet",
		CostPer1MIn:        AnthropicModels[Claude35Sonnet].CostPer1MIn,
		CostPer1MInCached:  AnthropicModels[Claude35Sonnet].CostPer1MInCached,
		CostPer1MOut:       AnthropicModels[Claude35Sonnet].CostPer1MOut,
		CostPer1MOutCached: AnthropicModels[Claude35Sonnet].CostPer1MOutCached,
		ContextWindow:      AnthropicModels[Claude35Sonnet].ContextWindow,
		DefaultMaxTokens:   AnthropicModels[Claude35Sonnet].DefaultMaxTokens,
	},
	OpenRouterClaude3Haiku: {
		ID:                 OpenRouterClaude3Haiku,
		Name:               "OpenRouter – Claude 3 Haiku",
		Provider:           ProviderOpenRouter,
		APIModel:           "anthropic/claude-3-haiku",
		CostPer1MIn:        AnthropicModels[Claude3Haiku].CostPer1MIn,
		CostPer1MInCached:  AnthropicModels[Claude3Haiku].CostPer1MInCached,
		CostPer1MOut:       AnthropicModels[Claude3Haiku].CostPer1MOut,
		CostPer1MOutCached: AnthropicModels[Claude3Haiku].CostPer1MOutCached,
		ContextWindow:      AnthropicModels[Claude3Haiku].ContextWindow,
		DefaultMaxTokens:   AnthropicModels[Claude3Haiku].DefaultMaxTokens,
	},
	OpenRouterClaude37Sonnet: {
		ID:                 OpenRouterClaude37Sonnet,
		Name:               "OpenRouter – Claude 3.7 Sonnet",
		Provider:           ProviderOpenRouter,
		APIModel:           "anthropic/claude-3.7-sonnet",
		CostPer1MIn:        AnthropicModels[Claude37Sonnet].CostPer1MIn,
		CostPer1MInCached:  AnthropicModels[Claude37Sonnet].CostPer1MInCached,
		CostPer1MOut:       AnthropicModels[Claude37Sonnet].CostPer1MOut,
		CostPer1MOutCached: AnthropicModels[Claude37Sonnet].CostPer1MOutCached,
		ContextWindow:      AnthropicModels[Claude37Sonnet].ContextWindow,
		DefaultMaxTokens:   AnthropicModels[Claude37Sonnet].DefaultMaxTokens,
		CanReason:          AnthropicModels[Claude37Sonnet].CanReason,
	},
	OpenRouterClaude35Haiku: {
		ID:                 OpenRouterClaude35Haiku,
		Name:               "OpenRouter – Claude 3.5 Haiku",
		Provider:           ProviderOpenRouter,
		APIModel:           "anthropic/claude-3.5-haiku",
		CostPer1MIn:        AnthropicModels[Claude35Haiku].CostPer1MIn,
		CostPer1MInCached:  AnthropicModels[Claude35Haiku].CostPer1MInCached,
		CostPer1MOut:       AnthropicModels[Claude35Haiku].CostPer1MOut,
		CostPer1MOutCached: AnthropicModels[Claude35Haiku].CostPer1MOutCached,
		ContextWindow:      AnthropicModels[Claude35Haiku].ContextWindow,
		DefaultMaxTokens:   AnthropicModels[Claude35Haiku].DefaultMaxTokens,
	},
	OpenRouterClaude3Opus: {
		ID:                 OpenRouterClaude3Opus,
		Name:               "OpenRouter – Claude 3 Opus",
		Provider:           ProviderOpenRouter,
		APIModel:           "anthropic/claude-3-opus",
		CostPer1MIn:        AnthropicModels[Claude3Opus].CostPer1MIn,
		CostPer1MInCached:  AnthropicModels[Claude3Opus].CostPer1MInCached,
		CostPer1MOut:       AnthropicModels[Claude3Opus].CostPer1MOut,
		CostPer1MOutCached: AnthropicModels[Claude3Opus].CostPer1MOutCached,
		ContextWindow:      AnthropicModels[Claude3Opus].ContextWindow,
		DefaultMaxTokens:   AnthropicModels[Claude3Opus].DefaultMaxTokens,
	},
}
```
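Each OpenRouter entry reuses the upstream model's per-million-token prices, so existing usage accounting works unchanged for routed requests. A hypothetical helper showing how such fields combine into a per-request cost (the trimmed `Model` struct and `requestCost` are illustrative, not part of the commit, and the prices below are made up):

```go
package main

import "fmt"

// Model is trimmed to the pricing fields used in openrouter.go.
type Model struct {
	CostPer1MIn  float64 // dollars per million prompt tokens
	CostPer1MOut float64 // dollars per million completion tokens
}

// requestCost is a hypothetical helper: dollars for one request,
// given prompt and completion token counts.
func requestCost(m Model, inTokens, outTokens int64) float64 {
	return float64(inTokens)/1e6*m.CostPer1MIn +
		float64(outTokens)/1e6*m.CostPer1MOut
}

func main() {
	// Illustrative prices only; real values come from the
	// OpenAIModels / AnthropicModels / GeminiModels tables.
	claude37 := Model{CostPer1MIn: 3.0, CostPer1MOut: 15.0}
	fmt.Printf("$%.4f\n", requestCost(claude37, 10_000, 2_000)) // prints $0.0600
}
```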
