Summary
LangChainJS merged langchain-ai/langchainjs#10654, which adds the unified @langchain/google package (ChatGoogle) as an explicitly supported provider in LangChain's initChatModel registry.
That upstream change is small but likely worth a follow-up in @librechat/agents now that we have upgraded LangChain/LangGraph and refreshed the provider smoke/live tests.
Upstream Behavior
The merged LangChainJS PR adds this provider config to libs/langchain/src/chat_models/universal.ts:
```typescript
google: {
  package: "@langchain/google",
  className: "ChatGoogle",
},
```
It intentionally does not change default gemini-* model name inference. In upstream LangChain, gemini-* without an explicit provider still defaults to google-vertexai; callers must explicitly use one of:
```typescript
await initChatModel("gemini-2.5-flash", { modelProvider: "google" });
await initChatModel("google:gemini-2.5-flash");
```
The important feature of @langchain/google is that it provides a unified Google model wrapper that can authenticate against Vertex AI or AI Studio depending on credentials/configuration.
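As a rough illustration of that unified behavior, the credential-based selection can be sketched as below. This is a simplified sketch only: resolveGoogleAuthMode, its inputs, and the precedence order are assumptions for illustration, not the actual @langchain/google logic.

```typescript
// Hypothetical sketch: pick AI Studio when an API key is present,
// otherwise fall back to Vertex AI application-default credentials.
// The function name and precedence order are illustrative assumptions.
type GoogleAuthMode = "ai-studio" | "vertexai";

interface GoogleEnv {
  GOOGLE_API_KEY?: string;
  GOOGLE_APPLICATION_CREDENTIALS?: string;
}

function resolveGoogleAuthMode(env: GoogleEnv): GoogleAuthMode {
  // An explicit API key signals the AI Studio (Generative Language) API.
  if (env.GOOGLE_API_KEY) return "ai-studio";
  // Service-account credentials signal Vertex AI.
  if (env.GOOGLE_APPLICATION_CREDENTIALS) return "vertexai";
  throw new Error("No Google credentials found");
}

console.log(resolveGoogleAuthMode({ GOOGLE_API_KEY: "key" })); // ai-studio
```

The real wrapper presumably also honors explicit constructor options; the point is that one class serves both backends depending on configuration.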
Current State In This Repo
We do not currently depend on @langchain/google directly. The package currently includes:

- @langchain/google-genai
- @langchain/google-vertexai
  - transitive
- @langchain/google-common
  - transitive
- @langchain/google-gauth
Our local provider construction path is not LangChain's initChatModel; it is:

- src/llm/init.ts -> initializeModel(...)
- src/llm/providers.ts -> llmProviders
- src/types/llm.ts -> provider option/model maps
Current mappings:

- Providers.GOOGLE -> CustomChatGoogleGenerativeAI from src/llm/google/index.ts, backed by @langchain/google-genai
- Providers.VERTEXAI -> our local ChatVertexAI from src/llm/vertexai/index.ts, based on Google auth/common classes
Because Providers.GOOGLE already has meaning in LibreChat, we should not blindly mirror upstream's provider key change without checking compatibility and migration behavior.
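The local construction path above can be sketched in miniature as follows. The class bodies are stand-ins, not the real wrappers; only the registry shape (provider key -> constructor, resolved inside initializeModel) mirrors the files listed above.

```typescript
// Simplified sketch of the src/llm provider-construction path:
// a registry maps provider keys to chat-model classes, and
// initializeModel looks up and instantiates the right one.
enum Providers {
  GOOGLE = "google",
  VERTEXAI = "vertexai",
}

// Stand-in classes; the real ones wrap @langchain/google-genai and
// Google auth/common classes respectively.
class CustomChatGoogleGenerativeAI {
  constructor(public readonly fields: { model: string }) {}
}
class ChatVertexAI {
  constructor(public readonly fields: { model: string }) {}
}

// llmProviders-style registry.
const llmProviders = {
  [Providers.GOOGLE]: CustomChatGoogleGenerativeAI,
  [Providers.VERTEXAI]: ChatVertexAI,
} as const;

function initializeModel(provider: Providers, fields: { model: string }) {
  const ModelClass = llmProviders[provider];
  if (!ModelClass) throw new Error(`Unknown provider: ${provider}`);
  return new ModelClass(fields);
}

const model = initializeModel(Providers.GOOGLE, { model: "gemini-2.5-flash" });
console.log(model.constructor.name); // CustomChatGoogleGenerativeAI
```

This registry shape is why swapping the class behind Providers.GOOGLE is mechanically easy but semantically risky: every existing caller gets the new wrapper.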
Proposal
Add a follow-up PR to evaluate and, if appropriate, support the unified @langchain/google / ChatGoogle path in our provider layer.
The PR should answer these questions explicitly:
- Should Providers.GOOGLE migrate from CustomChatGoogleGenerativeAI to a custom wrapper around @langchain/google's ChatGoogle, or should we introduce a separate provider key for the unified wrapper?
- Does ChatGoogle preserve the AI Studio behavior we rely on today when GOOGLE_API_KEY is present?
- Does ChatGoogle preserve Vertex AI behavior when GOOGLE_APPLICATION_CREDENTIALS / Vertex config is present?
- Which of our local Google customizations still need to be ported?
- Can we reduce custom code by leaning on @langchain/google, or does it still lack behavior we need?
Custom Behavior To Preserve Or Re-evaluate
Audit src/llm/google/index.ts, src/llm/google/utils/common.ts, and src/llm/vertexai/index.ts before implementation. At minimum, check these areas:
- Gemini 3 model detection for multimodal/function-calling thought signatures
- thinkingConfig support and translation
- customHeaders, apiVersion, and baseUrl handling
- models/ prefix normalization
- safety setting validation
- maxOutputTokens, temperature, topP, topK, and stop sequence behavior
- JSON response MIME behavior
- system instruction handling
- usage metadata mapping, including cached tokens and Gemini 3 Pro over-200k token accounting
- tool calling and streaming tool calls
- content block conversion and contentBlocks reasoning behavior added during the LangChain upgrade work
- current Vertex AI auth/config behavior and any LibreChat-specific signature handling
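One concrete example from the list above is the models/ prefix normalization: the Generative Language API addresses models as models/&lt;name&gt;, while callers usually pass the bare model name. A minimal sketch (the helper name is hypothetical, not the actual code in src/llm/google):

```typescript
// Hypothetical helper: ensure a model name carries the "models/" prefix
// exactly once, regardless of how the caller supplied it.
function normalizeModelName(model: string): string {
  return model.startsWith("models/") ? model : `models/${model}`;
}

console.log(normalizeModelName("gemini-2.5-flash"));        // models/gemini-2.5-flash
console.log(normalizeModelName("models/gemini-2.5-flash")); // models/gemini-2.5-flash
```

If @langchain/google already performs this normalization, the local helper becomes redundant; that is exactly the kind of finding the audit should record.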
Suggested Implementation Shape
Possible paths:
- Preferred if compatible: create a LibreChat wrapper around @langchain/google's ChatGoogle, port only the custom behavior still missing upstream, and wire it into Providers.GOOGLE after compatibility tests pass.
- Safer migration path: add a separate provider key for unified Google first, keep Providers.GOOGLE on @langchain/google-genai, and later migrate defaults once live tests prove parity.
- No-op outcome: document why @langchain/google is not yet viable for us and keep existing wrappers until the upstream package covers the missing behavior.
Avoid changing Providers.GOOGLE semantics without explicit tests for both AI Studio and Vertex AI credential modes.
Test Plan
Add/adjust tests before switching provider behavior:
- Unit/smoke test: initializeModel({ provider: Providers.GOOGLE }) creates the expected wrapper.
- Unit/smoke test: tool binding still works through initializeModel.
- Live AI Studio smoke test with GOOGLE_API_KEY:
  - simple invoke
  - streaming invoke
  - tool call
  - structured output if supported
  - thinking/contentBlocks behavior
- Live Vertex AI smoke test with GOOGLE_APPLICATION_CREDENTIALS or repo-supported Vertex config:
  - simple invoke
  - streaming invoke
  - tool call if supported
- Regression tests for our custom Google behaviors listed above.
- Confirm npm run build, npx tsc --noEmit, and provider smoke tests pass.
Acceptance Criteria
- We either support @langchain/google through our provider layer or document why we are deferring.
- Backward compatibility for existing Providers.GOOGLE users is preserved, or the breaking behavior is intentionally called out.
- The implementation includes live smoke coverage for the unified Google path, with CI failure visibility matching the provider soft-fail work added in the LangChain/LangGraph upgrade PR.
- The PR notes which local customizations became redundant versus which were ported.
References
- langchain-ai/langchainjs#10654: adds the unified @langchain/google package to the initChatModel registry