Switch from Vercel AI Gateway to OpenAI-compatible providers#155

Open
pinglanyi wants to merge 11 commits into vercel-labs:main from pinglanyi:claude/add-openai-proxy-support-RFBLu

Conversation

@pinglanyi

Summary

This PR migrates the project from Vercel AI Gateway to OpenAI-compatible providers (such as the NUWA proxy, vLLM, or OpenAI directly). This enables greater flexibility in choosing AI providers and models while maintaining a consistent interface across the codebase.

Key Changes

  • New AI Provider Module: Created ai-provider.ts in both apps/web and examples/chat that configures an OpenAI-compatible provider via environment variables

    • Supports custom base URLs for proxy relays and self-hosted endpoints
    • Provides a getModel() function for consistent model instantiation
    • Defaults to NUWA proxy (https://api.nuwaapi.com/v1) with gpt-4o-mini model
  • Updated Environment Configuration:

    • Replaced AI_GATEWAY_API_KEY and AI_GATEWAY_MODEL with OPENAI_API_KEY, OPENAI_BASE_URL, and AI_MODEL
    • Updated .env.example files with clear documentation and examples for different providers (NUWA, vLLM, OpenAI)
    • Added .env.example to examples/chat
  • Dependency Updates:

    • Replaced @ai-sdk/gateway with @ai-sdk/openai in both apps/web and examples/chat
  • API Route Updates:

    • apps/web/app/api/docs-chat/route.ts: Removed Anthropic-specific cache control logic and switched to getModel()
    • apps/web/app/api/generate/route.ts: Updated to use getModel() instead of hardcoded model
    • examples/chat/lib/agent.ts: Replaced gateway import with local ai-provider module

Implementation Details

  • The getModel() function provides a single point of configuration, allowing model selection via environment variable or parameter override
  • Removed Anthropic-specific optimizations (cache control) that are no longer applicable with the new provider approach
  • All hardcoded model defaults (anthropic/claude-haiku-4.5) have been replaced with the new configurable system
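The selection precedence described above (parameter override, then the AI_MODEL environment variable, then the default) can be sketched as a small pure helper. `resolveModelId` is a hypothetical name for illustration; the actual `lib/ai-provider.ts` wires the resolved ID into a provider built with `createOpenAI` from `@ai-sdk/openai`.

```typescript
// Hypothetical sketch of the model-selection precedence in lib/ai-provider.ts:
// explicit parameter > AI_MODEL env var > built-in default. The real module
// passes the resolved ID to a provider created with createOpenAI({ baseURL,
// apiKey }); resolveModelId is an illustrative name, not the PR's exact API.
export function resolveModelId(override?: string): string {
  return override ?? process.env.AI_MODEL ?? "gpt-4o-mini";
}
```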

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC

Replaces @ai-sdk/gateway with @ai-sdk/openai (v3.x) to enable use of any
OpenAI-compatible API endpoint — including proxy relays (e.g. NUWA api.nuwaapi.com)
and self-hosted vLLM endpoints — without requiring Vercel deployment.

Changes:
- Add @ai-sdk/openai@^3.0.22 to apps/web and examples/chat
- Add lib/ai-provider.ts to both apps with createOpenAI() configured via env vars
- Update /api/generate and /api/docs-chat routes to use getModel() from ai-provider
- Update examples/chat agent to use getModel() instead of gateway()
- Remove Anthropic-specific cacheControl from docs-chat (not supported by OpenAI API)
- Update .env.example with new vars: OPENAI_API_KEY, OPENAI_BASE_URL, AI_MODEL

Environment variables:
  OPENAI_API_KEY   - API key (required)
  OPENAI_BASE_URL  - Proxy base URL (default: https://api.nuwaapi.com/v1)
                     Set to http://localhost:8000/v1 for vLLM
  AI_MODEL         - Model ID (default: gpt-4o-mini)

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
@vercel
Contributor

vercel bot commented Feb 24, 2026

@claude is attempting to deploy a commit to the Vercel Labs Team on Vercel.

A member of the Team first needs to authorize it.

@socket-security

socket-security bot commented Feb 24, 2026

Warning

Review the following alerts detected in dependencies.

According to your organization's Security Policy, it is recommended to resolve "Warn" alerts. Learn more about Socket for GitHub.

Action: Warn · Severity: High
Obfuscated code: npm @ai-sdk/openai is 98.0% likely obfuscated

Confidence: 0.98

Location: Package overview

From: apps/web/package.json → npm/@ai-sdk/openai@3.0.31


Next steps: Take a moment to review the security alert above. Review the linked package source code to understand the potential risk. Ensure the package is not malicious before proceeding. If you're unsure how to proceed, reach out to your security team or ask the Socket team for help at support@socket.dev.

Suggestion: Packages should not obfuscate their code. Consider not using packages with obfuscated code.

Mark the package as acceptable risk. To ignore this alert only in this pull request, reply with the comment @SocketSecurity ignore npm/@ai-sdk/openai@3.0.31. You can also ignore all packages with @SocketSecurity ignore-all. To ignore an alert for all future pull requests, use Socket's Dashboard to change the triage state of this alert.


Contributor

@vercel vercel bot left a comment


Additional Suggestion:

Import of @ai-sdk/gateway in search.ts references a package that was removed from package.json, causing a build-time module resolution failure.


- Build workspace packages (@json-render/core, react, shadcn, codegen)
  so examples can resolve them at compile time
- Replace gateway("perplexity/sonar") in webSearch tool with getModel()
  to remove the last @ai-sdk/gateway dependency
- Update turbo.json globalEnv to include OPENAI_API_KEY, OPENAI_BASE_URL,
  AI_MODEL (replacing the old AI_GATEWAY_MODEL)

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
@ai-sdk/openai v3.x defaults to the new OpenAI Responses API (/responses),
which DeepSeek, vLLM, NUWA and other compatible providers do not implement.

Switch from openaiProvider(model) to openaiProvider.chat(model) so all
calls go to /v1/chat/completions instead.

Also add explicit LanguageModel return type to satisfy TS strict portability
check, and document that OPENAI_BASE_URL must include the /v1 path segment.

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
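The routing difference can be illustrated with a tiny helper. `endpointFor` is a hypothetical name for illustration; the paths are the OpenAI API's `/responses` and `/chat/completions` endpoints, and the example also shows why `OPENAI_BASE_URL` must already include the `/v1` segment.

```typescript
// Hypothetical illustration of the endpoint each call style targets:
// openaiProvider(model) hits the Responses API, while
// openaiProvider.chat(model) hits Chat Completions, which DeepSeek, vLLM,
// and NUWA do implement. The base URL must carry the /v1 path segment.
function endpointFor(baseUrl: string, api: "responses" | "chat"): string {
  const path = api === "chat" ? "/chat/completions" : "/responses";
  return baseUrl.replace(/\/+$/, "") + path;
}
// endpointFor("https://api.nuwaapi.com/v1", "chat")
//   → "https://api.nuwaapi.com/v1/chat/completions"
```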
…solution

Next.js bundler (webpack/Turbopack) cannot follow pnpm symlinks to
@ai-sdk/openai when bundling. Marking it as a serverExternalPackages entry
tells Next.js to skip bundling it and let Node.js resolve it at runtime,
which works correctly since the package is only used in server-side API routes.

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
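A minimal sketch of the described next.config.js change, assuming a standard config shape (not the PR's exact file):

```javascript
// next.config.js — sketch. serverExternalPackages tells Next.js to skip
// bundling @ai-sdk/openai; Node.js resolves it at runtime through the
// pnpm symlink, which is safe since it is only used in server-side routes.
/** @type {import('next').NextConfig} */
const nextConfig = {
  serverExternalPackages: ["@ai-sdk/openai"],
};

export default nextConfig;
```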
Two bugs fixed:

1. SyntaxError on res.json(): widget POST /api/v1/widgets had no try/catch
   around createWidget(), so a missing DATABASE_URL threw an unhandled error
   and Next.js returned a 500 with an empty body. Added try/catch returning
   a proper JSON error response. Also guard res.ok before calling res.json()
   in the client component so a non-2xx response never crashes the UI.

2. Dashboard generate route still used AI_GATEWAY_MODEL / @ai-sdk/gateway.
   Replaced with getModel() from a new lib/ai-provider.ts (same pattern as
   apps/web and examples/chat). Updated package.json and next.config.js to
   match.

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
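The client-side half of fix 1 might look roughly like the guard below. `safeJson` is a hypothetical helper name; the server side correspondingly wraps `createWidget()` in a try/catch that returns a JSON error body.

```typescript
// Hypothetical sketch of the client-side guard from fix 1: check res.ok
// before calling res.json(), so a 500 with an empty body surfaces as a
// clear Error instead of a JSON SyntaxError crashing the UI.
interface JsonResponse {
  ok: boolean;
  status: number;
  json(): Promise<unknown>;
}

async function safeJson(res: JsonResponse): Promise<unknown> {
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return res.json();
}
```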
@socket-security
Copy link

socket-security bot commented Feb 27, 2026

Review the following changes in direct dependencies. Learn more about Socket for GitHub.

Added: @ai-sdk/openai@3.0.31 — Supply Chain Security 94, Vulnerability 100, Quality 88, Maintenance 98, License 100


claude and others added 6 commits February 27, 2026 07:53
Replace @ai-sdk/gateway + AI_GATEWAY_MODEL with getModel() from a new
lib/ai-provider.ts, matching the pattern used in apps/web, examples/chat,
and examples/dashboard. Add serverExternalPackages for pnpm symlink fix.

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
When Next.js builds from the monorepo root via Turborepo, webpack resolves
modules relative to the root node_modules, which does not have @ai-sdk/openai.
The package only exists in each app's own node_modules (pnpm symlink).

Adding config.resolve.alias pins webpack to the local node_modules path so
the module is found regardless of the build's working directory.
Applied to examples/dashboard, examples/remotion, and examples/chat.

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
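Assuming an ESM next.config.js, the alias might be sketched as follows (`import.meta.dirname` requires Node 20.11+; the exact path is illustrative):

```javascript
// next.config.js — sketch of the webpack alias described above. Pinning
// @ai-sdk/openai to this app's own node_modules keeps resolution stable
// even when the build runs from the monorepo root via Turborepo.
import path from "node:path";

/** @type {import('next').NextConfig} */
const nextConfig = {
  webpack(config) {
    config.resolve.alias["@ai-sdk/openai"] = path.resolve(
      import.meta.dirname,
      "node_modules/@ai-sdk/openai"
    );
    return config;
  },
};

export default nextConfig;
```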
…S conflict

With "type": "module" in package.json, Next.js compiles next.config.ts to
next.config.compiled.js in CJS format (using exports), but Node.js treats
.js files as ESM, causing "exports is not defined in ES module scope".

Converting to .js (like dashboard and remotion already do) means the file
is loaded as native ESM with no compilation step, so import.meta / ESM
globals work correctly and the @ai-sdk/openai webpack alias is applied.

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
DTS generation invokes the full TypeScript compiler in a separate process.
Running it on every file change during --watch consumes excessive memory and
causes the process to be killed (exit code 137 / SIGKILL).

DTS files are only required when publishing packages. In a monorepo dev
session, workspace consumers resolve types directly from source TypeScript.

Use tsup's function-form config so options.watch is available:
- dts: !options.watch  (or false for packages/react which has a custom dts config)
- clean: !options.watch  (avoid redundant dist wipes on each rebuild)

Affects: core, react, react-state, shadcn, codegen, jotai, redux, remotion, zustand

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
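A minimal sketch of that function-form tsup.config.ts, assuming a typical entry point and output formats:

```typescript
// tsup.config.ts — sketch of the function-form config described above.
// In watch mode (dev), skip DTS generation and dist cleaning; both are
// only needed for a publish build, and DTS in --watch caused OOM kills.
import { defineConfig } from "tsup";

export default defineConfig((options) => ({
  entry: ["src/index.ts"],
  format: ["esm", "cjs"],
  dts: !options.watch,
  clean: !options.watch,
}));
```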
fix: replace webpack alias with turbopack config in dashboard and remotion examples

Next.js 16 uses Turbopack by default; the webpack resolver function was
causing a fatal build error ("webpack config but no turbopack config").

- Remove the unused webpack alias for @ai-sdk/openai (package is now
  resolved naturally via pnpm node_modules)
- Add `turbopack: {}` to both next.config.js files to acknowledge
  Turbopack usage and silence the conflict error
- Keep `serverExternalPackages: ["@ai-sdk/openai"]` so the package is
  resolved at runtime by Node rather than bundled

Fixes: Module not found: Can't resolve '@ai-sdk/openai'

https://claude.ai/code/session_01GYFefafspmDYjG53sJJC6p
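The resulting next.config.js might look roughly like this (a sketch of the described final state, not the PR's exact file):

```javascript
// next.config.js — sketch. An (empty) turbopack key acknowledges Turbopack
// so the "webpack config but no turbopack config" error is silenced, while
// serverExternalPackages keeps @ai-sdk/openai out of the bundle so Node
// resolves it at runtime.
/** @type {import('next').NextConfig} */
const nextConfig = {
  turbopack: {},
  serverExternalPackages: ["@ai-sdk/openai"],
};

export default nextConfig;
```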
fix: replace webpack alias with turbopack config in dashboard and remotion examples