fix: add fireworks-ai-firepass provider with kimi-k2p6-turbo model #3331
Merged
Conversation
Add a dedicated fireworks-ai-firepass provider to separate Firepass
router models from the standard fireworks-ai provider. This replaces
the previous approach of embedding firepass models within the
fireworks-ai provider.
Changes:
- Add FIREWORKS_AI_FIREPASS ProviderId constant ("fireworks-ai-firepass")
with display name "FireworksAIFirepass"
- Register in built_in_providers() and FromStr mappings
- Add new provider entry in provider.json sharing the same API URL and
FIREWORKS_AI_API_KEY env var as fireworks-ai
- Add kimi-k2p6-turbo model (accounts/fireworks/routers/kimi-k2p6-turbo)
presented as "Kimi K2.6 Turbo (firepass)" with 262K context, tool use,
reasoning, parallel tool calls, and text+image input modalities
- Remove deprecated firepass router models (kimi-k2p5-turbo, kimi-k2p6-turbo)
from the fireworks-ai provider
- Extend ReasoningContent and strict schema transformers in the request
pipeline to apply to the new provider
- Add unit tests for from_str, display_name, and built_in_providers
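The registration steps above (constant, `FromStr`, display name, built-in list) can be sketched as a minimal standalone mirror of the real code in `crates/forge_domain/src/provider.rs`. This is an illustrative simplification, not the actual implementation: the enum shape, variant names, and the `"FireworksAI"` display name for the base provider are assumptions; only the `"fireworks-ai-firepass"` id and `"FireworksAIFirepass"` display name come from the PR.

```rust
use std::str::FromStr;

// Hypothetical, simplified stand-in for the real ProviderId type
// (the actual crate may use associated constants rather than an enum).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ProviderId {
    FireworksAi,
    FireworksAiFirepass,
}

impl ProviderId {
    /// Human-readable name shown in the UI.
    /// "FireworksAIFirepass" is taken from the PR; "FireworksAI" is assumed.
    fn display_name(&self) -> &'static str {
        match self {
            ProviderId::FireworksAi => "FireworksAI",
            ProviderId::FireworksAiFirepass => "FireworksAIFirepass",
        }
    }

    /// All providers that ship with the binary.
    fn built_in_providers() -> Vec<ProviderId> {
        vec![ProviderId::FireworksAi, ProviderId::FireworksAiFirepass]
    }
}

impl FromStr for ProviderId {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "fireworks-ai" => Ok(ProviderId::FireworksAi),
            "fireworks-ai-firepass" => Ok(ProviderId::FireworksAiFirepass),
            other => Err(format!("unknown provider id: {other}")),
        }
    }
}

fn main() {
    let id: ProviderId = "fireworks-ai-firepass".parse().unwrap();
    assert_eq!(id, ProviderId::FireworksAiFirepass);
    assert_eq!(id.display_name(), "FireworksAIFirepass");
    assert!(ProviderId::built_in_providers().contains(&id));
    println!("ok");
}
```

The unit tests the PR adds (`from_str`, `display_name`, `built_in_providers`) would assert exactly these three properties.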
Co-Authored-By: ForgeCode <noreply@forgecode.dev>
Pull request overview
This PR introduces a dedicated fireworks-ai-firepass provider to separate Firepass router-backed models from the existing fireworks-ai provider, and wires the new provider into the OpenAI-request transformation pipeline.
Changes:
- Added `ProviderId::FIREWORKS_AI_FIREPASS` with `FromStr`, display name, and built-in provider registration (plus unit tests).
- Updated `provider.json` to remove Firepass router models from `fireworks-ai` and add a new `fireworks-ai-firepass` provider containing `accounts/fireworks/routers/kimi-k2p6-turbo`.
- Extended request transformers (ReasoningContent + strict schema enforcement) to also apply to `FIREWORKS_AI_FIREPASS`.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `crates/forge_domain/src/provider.rs` | Adds new provider ID constant, display mapping, parsing, and tests. |
| `crates/forge_repo/src/provider/provider.json` | Moves Firepass router model(s) under a new provider entry. |
| `crates/forge_app/src/dto/openai/transformers/pipeline.rs` | Applies existing transformer behaviors to the new provider ID. |
Comments suppressed due to low confidence (1)
crates/forge_app/src/dto/openai/transformers/pipeline.rs:107
`ProviderId::FIREWORKS_AI_FIREPASS` is now included in the strict schema enforcement condition, but there's no unit test verifying this provider gets the same tool/response-format schema normalization guarantees as `fireworks-ai`. Add a test similar to `test_fireworks_provider_enforces_strict_tool_and_response_format_schemas` for `fireworks-ai-firepass`.
```rust
let strict_schema = EnforceStrictToolSchema
    .pipe(EnforceStrictResponseFormatSchema)
    .when(move |_| {
        provider.id == ProviderId::FIREWORKS_AI
            || provider.id == ProviderId::FIREWORKS_AI_FIREPASS
    });
```
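The missing test Copilot asks for would exercise the full transformer pipeline; as a minimal standalone sketch, the gating predicate from the `.when(...)` condition above can be isolated and checked directly. Everything here (the simplified enum, the `strict_schema_applies` helper, the `OpenAi` control variant) is hypothetical scaffolding for illustration only.

```rust
// Hypothetical, simplified provider ids; the real type lives in forge_domain.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ProviderId {
    FireworksAi,
    FireworksAiFirepass,
    OpenAi,
}

// Mirrors the predicate inside the .when(...) closure: strict schema
// enforcement should cover both Fireworks providers and nothing else.
fn strict_schema_applies(id: ProviderId) -> bool {
    id == ProviderId::FireworksAi || id == ProviderId::FireworksAiFirepass
}

fn main() {
    // The new provider must get the same guarantees as fireworks-ai...
    assert!(strict_schema_applies(ProviderId::FireworksAiFirepass));
    assert!(strict_schema_applies(ProviderId::FireworksAi));
    // ...while unrelated providers stay unaffected.
    assert!(!strict_schema_applies(ProviderId::OpenAi));
    println!("strict schema gating covers firepass");
}
```

A real test would instead build the pipeline with a `fireworks-ai-firepass` provider and assert the tool/response-format schemas come out normalized, as the existing `fireworks-ai` test does.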
…provider

The firepass provider uses a different API key from the standard fireworks-ai provider. Update the env var from FIREWORKS_AI_API_KEY to FIREWORKS_AI_FIREPASS_API_KEY to reflect this distinction.

Co-Authored-By: ForgeCode <noreply@forgecode.dev>
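After this follow-up commit, the new `provider.json` entry would look roughly like the sketch below. This is an assumed shape for illustration: the actual field names (`url`, `api_key_env`, `supports_*`, `input_modalities`, and so on) are guesses, since the PR only describes the entry's contents, not its schema. The URL, env var, model id, display name, and capabilities are taken from the PR description.

```json
{
  "fireworks-ai-firepass": {
    "url": "https://api.fireworks.ai/inference/v1/chat/completions",
    "api_key_env": "FIREWORKS_AI_FIREPASS_API_KEY",
    "models": [
      {
        "id": "accounts/fireworks/routers/kimi-k2p6-turbo",
        "name": "Kimi K2.6 Turbo (firepass)",
        "context_length": 262144,
        "supports_tools": true,
        "supports_reasoning": true,
        "supports_parallel_tool_calls": true,
        "input_modalities": ["text", "image"]
      }
    ]
  }
}
```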
amitksingh1490
approved these changes
May 14, 2026
Summary
Add a dedicated `fireworks-ai-firepass` provider to separate Firepass router models from the standard `fireworks-ai` provider. This replaces the previous approach of embedding Firepass models directly within the `fireworks-ai` provider.

Changes

New Provider: `fireworks-ai-firepass`
- `FIREWORKS_AI_FIREPASS` `ProviderId` constant ("fireworks-ai-firepass") with display name "FireworksAIFirepass"
- Registered in the `built_in_providers()` list and `FromStr` mappings
- New entry in `provider.json` sharing the same API URL (https://api.fireworks.ai/inference/v1/chat/completions) and `FIREWORKS_AI_API_KEY` env var as the existing `fireworks-ai` provider

New Model: Kimi K2.6 Turbo (firepass)
- `accounts/fireworks/routers/kimi-k2p6-turbo`

Removed Deprecated Models
- `kimi-k2p5-turbo` (Kimi K2.5 Turbo Firepass) from the `fireworks-ai` provider
- `kimi-k2p6-turbo` (Kimi K2.6 Turbo Fire Pass) from the `fireworks-ai` provider

Pipeline Transformers
- Extended the `ReasoningContent` transformer to apply to `FIREWORKS_AI_FIREPASS`
- Extended the `EnforceStrictToolSchema` + `EnforceStrictResponseFormatSchema` pipeline to apply to `FIREWORKS_AI_FIREPASS`

Tests
- Unit tests for `from_str`, `display_name`, and `built_in_providers`

Files Changed
- `crates/forge_domain/src/provider.rs`
- `crates/forge_repo/src/provider/provider.json`
- `crates/forge_app/src/dto/openai/transformers/pipeline.rs`

Verification
- `cargo check` passes for `forge_domain`, `forge_app`, and `forge_repo`