Switch from Vercel AI Gateway to OpenAI-compatible providers#155
pinglanyi wants to merge 11 commits into vercel-labs:main from
Conversation
Replaces @ai-sdk/gateway with @ai-sdk/openai (v3.x) to enable use of any OpenAI-compatible API endpoint, including proxy relays (e.g. NUWA api.nuwaapi.com) and self-hosted vLLM endpoints, without requiring Vercel deployment.

Changes:
- Add @ai-sdk/openai@^3.0.22 to apps/web and examples/chat
- Add lib/ai-provider.ts to both apps with createOpenAI() configured via env vars
- Update /api/generate and /api/docs-chat routes to use getModel() from ai-provider
- Update the examples/chat agent to use getModel() instead of gateway()
- Remove Anthropic-specific cacheControl from docs-chat (not supported by the OpenAI API)
- Update .env.example with new vars: OPENAI_API_KEY, OPENAI_BASE_URL, AI_MODEL

Environment variables:
- OPENAI_API_KEY - API key (required)
- OPENAI_BASE_URL - Proxy base URL (default: https://api.nuwaapi.com/v1); set to http://localhost:8000/v1 for vLLM
- AI_MODEL - Model ID (default: gpt-4o-mini)

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
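The three environment variables above, as a .env.example fragment (the values are the defaults stated in this PR; the key placeholder is illustrative):

```shell
# OpenAI-compatible provider configuration
OPENAI_API_KEY=sk-...                        # required; sk-... is a placeholder
# Base URL of any OpenAI-compatible endpoint; must include the /v1 path segment
OPENAI_BASE_URL=https://api.nuwaapi.com/v1
# OPENAI_BASE_URL=http://localhost:8000/v1   # self-hosted vLLM
AI_MODEL=gpt-4o-mini
```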
- Build workspace packages (@json-render/core, react, shadcn, codegen)
so examples can resolve them at compile time
- Replace gateway("perplexity/sonar") in webSearch tool with getModel()
to remove the last @ai-sdk/gateway dependency
- Update turbo.json globalEnv to include OPENAI_API_KEY, OPENAI_BASE_URL,
AI_MODEL (replacing the old AI_GATEWAY_MODEL)
https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
@ai-sdk/openai v3.x defaults to the new OpenAI Responses API (/responses), which DeepSeek, vLLM, NUWA, and other compatible providers do not implement. Switch from openaiProvider(model) to openaiProvider.chat(model) so all calls go to /v1/chat/completions instead. Also add an explicit LanguageModel return type to satisfy the TS strict portability check, and document that OPENAI_BASE_URL must include the /v1 path segment.
https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
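A minimal sketch of the resulting provider module (the env var names and getModel() are from this PR; the exact file contents are assumed, not copied from it):

```typescript
// lib/ai-provider.ts (sketch, not the PR's verbatim file)
import { createOpenAI } from "@ai-sdk/openai";
import type { LanguageModel } from "ai";

const openaiProvider = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  // Must include the /v1 path segment, e.g. https://api.nuwaapi.com/v1
  baseURL: process.env.OPENAI_BASE_URL,
});

// The explicit LanguageModel return type keeps the emitted declaration
// portable under TS strict mode. .chat() forces /v1/chat/completions;
// the default call form would target the Responses API (/responses).
export function getModel(modelId?: string): LanguageModel {
  return openaiProvider.chat(modelId ?? process.env.AI_MODEL ?? "gpt-4o-mini");
}
```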
…solution

The Next.js bundler (webpack/Turbopack) cannot follow pnpm symlinks to @ai-sdk/openai when bundling. Marking it as a serverExternalPackages entry tells Next.js to skip bundling it and let Node.js resolve it at runtime, which works correctly since the package is only used in server-side API routes.
https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
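The serverExternalPackages entry described above, sketched as a next.config.ts (assumes a Next.js version where serverExternalPackages is a top-level key):

```typescript
// next.config.ts (sketch)
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Skip bundling @ai-sdk/openai; Node.js resolves it through the
  // pnpm symlink at runtime in the server-side API routes.
  serverExternalPackages: ["@ai-sdk/openai"],
};

export default nextConfig;
```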
Two bugs fixed:

1. SyntaxError on res.json(): the widget POST /api/v1/widgets handler had no try/catch around createWidget(), so a missing DATABASE_URL threw an unhandled error and Next.js returned a 500 with an empty body. Added a try/catch returning a proper JSON error response. Also guard res.ok before calling res.json() in the client component so a non-2xx response never crashes the UI.
2. The dashboard generate route still used AI_GATEWAY_MODEL / @ai-sdk/gateway. Replaced with getModel() from a new lib/ai-provider.ts (same pattern as apps/web and examples/chat). Updated package.json and next.config.js to match.

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
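The client-side guard in bug 1 can be factored as a small helper; a sketch (safeJson is a hypothetical name, not from the PR):

```typescript
// Hypothetical helper illustrating the res.ok guard: never call
// res.json() on a non-2xx response, and never let a malformed body
// crash the UI.
type JsonResult = { ok: true; data: unknown } | { ok: false; error: string };

async function safeJson(res: {
  ok: boolean;
  status: number;
  json(): Promise<unknown>;
}): Promise<JsonResult> {
  if (!res.ok) {
    return { ok: false, error: `Request failed with status ${res.status}` };
  }
  try {
    return { ok: true, data: await res.json() };
  } catch {
    return { ok: false, error: "Response body was not valid JSON" };
  }
}
```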
Replace @ai-sdk/gateway + AI_GATEWAY_MODEL with getModel() from a new lib/ai-provider.ts, matching the pattern used in apps/web, examples/chat, and examples/dashboard. Add serverExternalPackages for pnpm symlink fix. https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
When Next.js builds from the monorepo root via Turborepo, webpack resolves modules relative to the root node_modules, which does not have @ai-sdk/openai. The package only exists in each app's own node_modules (pnpm symlink). Adding config.resolve.alias pins webpack to the local node_modules path so the module is found regardless of the build's working directory. Applied to examples/dashboard, examples/remotion, and examples/chat. https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
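The alias described above, sketched as a next.config.js webpack hook (the path computation is an assumption, not copied from the PR):

```typescript
// next.config.js (ESM) pinning @ai-sdk/openai to the app-local
// node_modules, so monorepo-root builds still resolve the pnpm symlink.
import path from "node:path";
import { fileURLToPath } from "node:url";

const appDir = path.dirname(fileURLToPath(import.meta.url));

/** @type {import('next').NextConfig} */
const nextConfig = {
  webpack(config) {
    config.resolve.alias["@ai-sdk/openai"] = path.resolve(
      appDir,
      "node_modules/@ai-sdk/openai"
    );
    return config;
  },
};

export default nextConfig;
```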
…S conflict

With "type": "module" in package.json, Next.js compiles next.config.ts to next.config.compiled.js in CJS format (using exports), but Node.js treats .js files as ESM, causing "exports is not defined in ES module scope". Converting to .js (as dashboard and remotion already do) means the file is loaded as native ESM with no compilation step, so import.meta and other ESM globals work correctly and the @ai-sdk/openai webpack alias is applied.
https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
DTS generation invokes the full TypeScript compiler in a separate process. Running it on every file change during --watch consumes excessive memory and causes the process to be killed (exit code 137 / SIGKILL). DTS files are only required when publishing packages; in a monorepo dev session, workspace consumers resolve types directly from the source TypeScript.

Use tsup's function-form config so options.watch is available:
- dts: !options.watch (or false for packages/react, which has a custom dts config)
- clean: !options.watch (avoid redundant dist wipes on each rebuild)

Affects: core, react, react-state, shadcn, codegen, jotai, redux, remotion, zustand
https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC
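The function-form config described above, as a tsup.config.ts sketch (entry and format values are illustrative assumptions):

```typescript
// tsup.config.ts (sketch of the function-form config)
import { defineConfig } from "tsup";

export default defineConfig((options) => ({
  entry: ["src/index.ts"],      // illustrative
  format: ["esm", "cjs"],       // illustrative
  // Skip DTS generation and dist cleanup while watching: both are only
  // needed for publish builds, and per-change DTS runs can exhaust
  // memory (exit code 137 / SIGKILL).
  dts: !options.watch,
  clean: !options.watch,
}));
```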
Next.js 16 uses Turbopack by default; the webpack resolver function was
causing a fatal build error ("webpack config but no turbopack config").
- Remove the unused webpack alias for @ai-sdk/openai (package is now
resolved naturally via pnpm node_modules)
- Add `turbopack: {}` to both next.config.js files to acknowledge
Turbopack usage and silence the conflict error
- Keep `serverExternalPackages: ["@ai-sdk/openai"]` so the package is
resolved at runtime by Node rather than bundled
Fixes: Module not found: Can't resolve '@ai-sdk/openai'
https://claude.ai/code/session_01GYFefafspmDYjG53sJJC6p
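The resulting next.config.js shape for the two examples, per the commit above (a sketch showing only the two keys the commit names):

```typescript
// next.config.js for the dashboard/remotion examples (sketch;
// assumes Next.js 16 with Turbopack as the default bundler)
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Acknowledge Turbopack so Next.js stops flagging the config as
  // webpack-only; no Turbopack-specific options are needed.
  turbopack: {},
  // Resolve @ai-sdk/openai at runtime via Node instead of bundling it.
  serverExternalPackages: ["@ai-sdk/openai"],
};

export default nextConfig;
```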
fix: replace webpack alias with turbopack config in dashboard and remotion examples
Summary
This PR migrates the project from using Vercel AI Gateway to support OpenAI-compatible providers (such as NUWA proxy, vLLM, or OpenAI directly). This enables greater flexibility in choosing AI providers and models while maintaining a consistent interface across the codebase.
Key Changes
New AI Provider Module:
- Created ai-provider.ts in both apps/web and examples/chat that configures an OpenAI-compatible provider via environment variables
- Exposes a getModel() function for consistent model instantiation
- Defaults to the NUWA proxy (https://api.nuwaapi.com/v1) with the gpt-4o-mini model

Updated Environment Configuration:
- Replaced AI_GATEWAY_API_KEY and AI_GATEWAY_MODEL with OPENAI_API_KEY, OPENAI_BASE_URL, and AI_MODEL
- Updated .env.example files with clear documentation and examples for different providers (NUWA, vLLM, OpenAI)
- Added .env.example to examples/chat

Dependency Updates:
- Replaced @ai-sdk/gateway with @ai-sdk/openai in both apps/web and examples/chat

API Route Updates:
- apps/web/app/api/docs-chat/route.ts: Removed Anthropic-specific cache control logic and switched to getModel()
- apps/web/app/api/generate/route.ts: Updated to use getModel() instead of a hardcoded model
- examples/chat/lib/agent.ts: Replaced the gateway import with the local ai-provider module

Implementation Details
- The getModel() function provides a single point of configuration, allowing model selection via environment variable or parameter override
- Hardcoded model references (e.g. anthropic/claude-haiku-4.5) have been replaced with the new configurable system

https://claude.ai/code/session_01PrgA5DKzfg9HouK3j31KFC