
feat: add Perplexity AI LLM provider support #2467

Open

junaiddshaukat wants to merge 25 commits into archestra-ai:main from junaiddshaukat:feat/add-perplexity-provider

Conversation

@junaiddshaukat
Contributor

Summary

Add Perplexity AI as a new LLM provider with full LLM Proxy and Chat support
Perplexity uses an OpenAI-compatible API with built-in web search (no external tool calling)
Streaming chat works correctly with proper response handling

Key Changes

Backend: Perplexity adapter, proxy routes, dual LLM client, error handling
Frontend: API key form, model selector, provider icon
Docs: Updated supported providers documentation

Important Notes

Perplexity has no /models endpoint - models are hardcoded (sonar, sonar-pro, sonar-reasoning-pro, sonar-deep-research)
Tool calling disabled - Perplexity has built-in web search instead
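
For illustration, a minimal TypeScript sketch of the hardcoded-model approach (the model IDs are from this PR description; the constant, type, and function names are assumptions, not necessarily the PR's actual code):

```typescript
// Perplexity exposes no /models endpoint, so the provider ships a static list.
// Model IDs come from the PR description; names and shapes here are illustrative.
const PERPLEXITY_MODELS = [
  'sonar',
  'sonar-pro',
  'sonar-reasoning-pro',
  'sonar-deep-research',
] as const;

type PerplexityModelId = (typeof PERPLEXITY_MODELS)[number];

interface ModelInfo {
  id: PerplexityModelId;
  provider: 'perplexity';
}

// Returns the static catalog; no network call is needed or possible.
function listPerplexityModels(): ModelInfo[] {
  return PERPLEXITY_MODELS.map((id) => ({ id, provider: 'perplexity' as const }));
}
```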

API Key: https://www.perplexity.ai/settings/api

Demo

Screen.Recording.2026-01-28.at.1.56.53.PM.mov

/claim #1854
Closes #1854

Copilot AI review requested due to automatic review settings January 28, 2026 09:17
@algora-pbc bot mentioned this pull request Jan 28, 2026
@London-Cat
Collaborator

commented Jan 28, 2026

📊 Reputation Summary

| User | Rep | Pull Requests | Activity | Assigned | Core | Reactions |
| --- | --- | --- | --- | --- | --- | --- |
| joeyorlando | ⚡ 2205 | 88✅ 5🔄 7❌ | 100 issues, 50 comments | | | 7 |
| junaiddshaukat | ⚡ 46 | 2✅ 2🔄 0❌ | 0 issues, 15 comments | | | 0 |

How is the score calculated? Read about it in the Reputation Bot repository 🤖

@junaiddshaukat
Contributor Author

Hi @Konstantinov-Innokentii, this PR adds Perplexity AI provider support (closes #1854). All features are working; the attached demo video shows Chat with streaming. Please have a look when you get time, and let me know if any changes are needed. Thanks!

Copilot AI (Contributor) left a comment

Pull request overview

This PR adds Perplexity AI as a new LLM provider to the Archestra platform with full LLM Proxy and Chat support. Perplexity uses an OpenAI-compatible API at https://api.perplexity.ai and provides AI-powered search capabilities with built-in web search (no external tool calling). The implementation includes hardcoded models (sonar, sonar-pro, sonar-reasoning-pro, sonar-deep-research) since Perplexity lacks a /models endpoint.

Changes:

  • Added Perplexity as a supported provider across backend and frontend with appropriate type definitions, adapters, routes, and UI components
  • Implemented tool calling exclusion logic in Chat routes since Perplexity has built-in web search instead
  • Added dual LLM client support for Perplexity using OpenAI-compatible API
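
A hedged sketch of what the tool-calling exclusion mentioned above can look like (the names and request shape are assumptions for illustration, not the PR's actual code):

```typescript
// Providers whose APIs reject external tool definitions; Perplexity performs
// web search internally instead of calling tools.
const PROVIDERS_WITHOUT_TOOL_CALLING = new Set(['perplexity']);

interface ChatRequest {
  provider: string;
  model: string;
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[];
  tools?: unknown[];
}

function prepareChatRequest(request: ChatRequest): ChatRequest {
  if (PROVIDERS_WITHOUT_TOOL_CALLING.has(request.provider)) {
    // Strip tool definitions before forwarding so the upstream API does not error.
    const { tools: _tools, ...rest } = request;
    return rest;
  }
  return request;
}
```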

Reviewed changes

Copilot reviewed 31 out of 33 changed files in this pull request and generated 3 comments.

Summary per file:

| File | Description |
| --- | --- |
| platform/shared/routes.ts | Added RouteIds for Perplexity proxy endpoints |
| platform/shared/model-constants.ts | Added "perplexity" to supported providers enum and display names |
| platform/shared/hey-api/clients/api/types.gen.ts | Added "perplexity" to provider types across API definitions (auto-generated) |
| platform/pnpm-lock.yaml | Added @ai-sdk/perplexity dependency (unused) |
| platform/frontend/src/lib/llmProviders/perplexity.ts | Created Perplexity interaction handler (extends OpenAI) |
| platform/frontend/src/lib/interaction.utils.ts | Registered Perplexity interaction handler |
| platform/frontend/src/components/proxy-connection-instructions.tsx | Added Perplexity provider configuration |
| platform/frontend/src/components/chat/model-selector.tsx | Added Perplexity logo provider mapping |
| platform/frontend/src/components/chat-api-key-form.tsx | Added Perplexity API key form configuration |
| platform/frontend/public/icons/perplexity.png | Added Perplexity provider icon (binary) |
| platform/backend/src/types/llm-providers/perplexity/*.ts | Created Perplexity type definitions (API schemas, messages, tools, index) |
| platform/backend/src/types/llm-providers/index.ts | Exported Perplexity types |
| platform/backend/src/types/interaction.ts | Added Perplexity interaction schema |
| platform/backend/src/types/chat-api-key.ts | Added "perplexity" to supported chat providers |
| platform/backend/src/tokenizers/index.ts | Added Perplexity to OpenAI-compatible tokenizer group |
| platform/backend/src/server.ts | Registered Perplexity OpenAPI schemas |
| platform/backend/src/routes/proxy/utils/cost-optimization.ts | Added Perplexity message types |
| platform/backend/src/routes/proxy/routesv2/perplexity.ts | Created Perplexity proxy routes with HTTP proxy and chat completion handlers |
| platform/backend/src/routes/proxy/adapterV2/perplexity.ts | Implemented Perplexity adapter with request/response/stream adapters (538 lines) |
| platform/backend/src/routes/proxy/adapterV2/index.ts | Exported perplexityAdapterFactory |
| platform/backend/src/routes/index.ts | Registered Perplexity proxy routes |
| platform/backend/src/routes/chat/routes.models.ts | Added fetchPerplexityModels with API key validation and hardcoded model list |
| platform/backend/src/routes/chat/routes.chat.ts | Added tool calling exclusion for Perplexity provider |
| platform/backend/src/routes/chat/errors.ts | Added Perplexity to OpenAI-compatible error handling |
| platform/backend/src/models/optimization-rule.ts | Added empty Perplexity optimization rules (no defaults) |
| platform/backend/src/config.ts | Added Perplexity configuration for LLM proxy and Chat |
| platform/backend/src/clients/llm-client.ts | Added Perplexity model creators using OpenAI SDK |
| platform/backend/src/clients/dual-llm-client.ts | Implemented PerplexityDualLlmClient with JSON schema support |
| platform/backend/package.json | Added @ai-sdk/perplexity dependency (unused) |
| docs/pages/platform-supported-llm-providers.md | Added comprehensive Perplexity documentation section |
Files not reviewed (1)
  • platform/pnpm-lock.yaml: Language not supported

@junaiddshaukat force-pushed the feat/add-perplexity-provider branch from 2f78760 to 27312de on January 31, 2026 14:24
@junaiddshaukat
Contributor Author

Hi @joeyorlando, just checking in to see if you’ve had a chance to review this PR. Let me know if there’s anything else I can do on my side.

@joeyorlando
Contributor

hi there @junaiddshaukat 👋 I just merged #2610 which simplified a few parts regarding adding a new LLM provider - this introduced a few merge conflicts in your PR, do you mind rebasing off of latest main + addressing merge conflicts? I will review once that is done!

Additionally, can you run pnpm check:ci locally and address any issues + push up? 🙏

@junaiddshaukat force-pushed the feat/add-perplexity-provider branch from 6c38b52 to 233b6af on February 5, 2026 14:17
@junaiddshaukat
Contributor Author

Hi @joeyorlando, done! Rebased on latest main, resolved merge conflicts, and ran pnpm check:ci. The only failing check is a pre-existing TypeScript error in page.client.tsx that exists on main (not from this PR). Ready for review!

@junaiddshaukat force-pushed the feat/add-perplexity-provider branch 2 times, most recently from d0ab4ae to 22defda on February 7, 2026 05:19
Add full Perplexity AI integration for both LLM Proxy and Chat features.

Perplexity is an OpenAI-compatible API with built-in web search capabilities.
Note: Perplexity does NOT support external tool calling - it has internal
web search that returns results in the search_results field.

LLM Proxy:
- Add Perplexity adapter with OpenAI-compatible request/response handling
- Add proxy routes for chat completions
- Handle streaming with proper finalization logic
- Add Perplexity-specific response fields (citations, search_results)

Chat:
- Add Perplexity to supported chat providers
- Implement model fetching with API key validation (hardcoded models
  since Perplexity has no /models endpoint)
- Disable tool calling for Perplexity in chat routes
- Use OpenAI SDK for better streaming compatibility

Other:
- Add PerplexityDualLlmClient for verification
- Add error handling with OpenAI-compatible error mapping
- Update tokenizers to use tiktoken for Perplexity
- Add documentation for Perplexity provider
- Add frontend components (API key form, model selector, icons)

Closes archestra-ai#1854
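
For context, a simplified sketch of those Perplexity-specific fields on an otherwise OpenAI-shaped completion (the citations and search_results field names come from the commit message above; the exact shapes here are assumptions, not the PR's actual types):

```typescript
// Simplified, assumed shape: an OpenAI-style chat completion extended with
// the Perplexity-specific fields mentioned in the commit message.
interface PerplexityChatCompletion {
  id: string;
  model: string;
  choices: {
    index: number;
    message: { role: 'assistant'; content: string };
    finish_reason: string;
  }[];
  citations?: string[]; // source URLs backing the answer
  search_results?: { title: string; url: string }[]; // built-in web search hits
}
```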
@junaiddshaukat force-pushed the feat/add-perplexity-provider branch from 8cca78d to 762830c on February 7, 2026 14:45
@joeyorlando
Contributor

```
Scope: all 5 workspace projects
 ERR_PNPM_OUTDATED_LOCKFILE  Cannot install with "frozen-lockfile" because pnpm-lock.yaml is not up to date with <ROOT>/backend/package.json

Note that in CI environments this setting is true by default. If you still need to run install in such cases, use "pnpm install --no-frozen-lockfile"

  Failure reason:
  specifiers in the lockfile don't match specifiers in package.json:
* 1 dependencies were removed: @ai-sdk/perplexity@^3.0.11
```

@junaiddshaukat
Contributor Author

Hi @joeyorlando, fixed! The pnpm-lock.yaml still had orphaned @ai-sdk/perplexity entries even though the dependency was removed from package.json (we use createOpenAI instead for better compatibility). Cleaned up the lockfile; pnpm install with a frozen lockfile now passes locally. Pushed the fix.

@joeyorlando
Contributor

can you please ensure pnpm check:ci passes for you locally and push up any changes?

```
• Packages in scope: @backend, @e2e-tests, @frontend, @shared
• Running check:ci in 4 packages
• Remote caching disabled
@shared:check:ci
@e2e-tests:check:ci
@frontend:check:ci
cache miss, executing 2bdc9ea72e758867

> @frontend@1.0.37 check:ci /home/runner/work/archestra/archestra/platform/frontend
> pnpm type-check && pnpm test && pnpm knip && biome ci


> @frontend@1.0.37 type-check /home/runner/work/archestra/archestra/platform/frontend
> tsc --noEmit

Error: src/app/logs/[id]/page.client.tsx(269,13): error TS2322: Type 'unknown' is not assignable to type 'ReactNode'.
 ELIFECYCLE  Command failed with exit code 2.
 ELIFECYCLE  Command failed with exit code 2.
Error:  command finished with error: command (/home/runner/work/archestra/archestra/platform/frontend) /home/runner/setup-pnpm/node_modules/.bin/pnpm run check:ci exited (2)
@backend:check:ci
```

@junaiddshaukat
Contributor Author

Hi @joeyorlando, pnpm check:ci now passes locally. Fixed the issues from the upstream refactoring (#2610) - added perplexity to the new provider registries (models-dev-client, seed, llm-metrics) and updated routesv2/perplexity.ts to use the new utils.user.getUser() API. Also fixed the page.client.tsx TypeScript error. Pushed up!


@junaiddshaukat
Contributor Author

Hi @joeyorlando, is there anything else needed for this PR? Please have a look when you get time. Thanks!

@joeyorlando (Contributor) left a comment

Reviewed against platform-adding-llm-providers.md

Good overall structure. The no-tool-calling approach is handled cleanly in the adapter. Several items need attention.

Missing required files

  1. backend/src/routes/features.ts - Missing perplexityEnabled feature flag. The guide requires a {provider}Enabled boolean in the features response. Frontend uses this for conditional rendering (see the sketch after this list).

  2. .env.example - Missing ARCHESTRA_PERPLEXITY_BASE_URL and ARCHESTRA_CHAT_PERPLEXITY_API_KEY entries. Every other provider adds their env vars here.

  3. E2E tests - Entirely missing. Even though Perplexity doesn't support tool calling, the following test suites still apply:

    • model-optimization.spec.ts - needs Perplexity config
    • token-cost-limits.spec.ts - needs Perplexity config
    • e2e-tests/tests/ui/chat.spec.ts - needs Perplexity config
    • WireMock stub mappings (helm/e2e-tests/mappings/perplexity-*.json)
    • .github/values-ci.yaml - needs ARCHESTRA_PERPLEXITY_BASE_URL pointing to WireMock
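
For reference, a minimal sketch of the {provider}Enabled pattern from point 1 (the config shape and sibling flag names here are assumptions; the actual shape lives in backend/src/routes/features.ts):

```typescript
// Hypothetical sketch: each provider gets an *Enabled boolean in the features
// response, typically derived from whether its API key is configured. Field
// and config names are assumptions, not the repo's real code.
interface ChatProviderConfig {
  apiKey?: string;
}

function buildFeatures(chatConfig: Record<string, ChatProviderConfig>) {
  return {
    openaiEnabled: Boolean(chatConfig.openai?.apiKey),
    anthropicEnabled: Boolean(chatConfig.anthropic?.apiKey),
    // The flag this review asks to add:
    perplexityEnabled: Boolean(chatConfig.perplexity?.apiKey),
  };
}
```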

Functional issues

  1. fetchPerplexityModels sends a real chat completion to validate the API key (sends {model: "sonar", messages: [{role: "user", content: "hi"}], max_tokens: 1}). This costs money on every model list refresh. Other providers call their /models endpoint, which is free. If there's no models endpoint, just return the hardcoded list without validation, or validate with a cheaper approach (e.g., a HEAD request or catching errors on first actual use); see the sketch after this list.

  2. types.gen.ts has request: unknown and response: unknown for the Perplexity interaction type, while other providers have their concrete types (e.g., OpenAiChatCompletionRequest). This means the log detail page won't properly parse Perplexity interactions. Likely a codegen issue - the Perplexity response schema uses .passthrough() which may prevent proper type generation.

  3. Duplicate baseUrl in config - config.chat.perplexity.baseUrl is redundant with config.llm.perplexity.baseUrl. No other provider has a separate base URL under chat. The directModelCreators should use config.llm.perplexity.baseUrl like other providers do, and config.chat.perplexity should only have apiKey.
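
A sketch of the cheaper approach suggested in point 1 (hardcoded list, no paid validation call; the function name matches the PR's fetchPerplexityModels, the body is an assumed sketch):

```typescript
// No /models endpoint exists, so return the static list without any network
// call; an invalid key surfaces naturally on the first real completion.
async function fetchPerplexityModels(apiKey: string): Promise<string[]> {
  if (!apiKey) {
    throw new Error('Perplexity API key is not configured');
  }
  return ['sonar', 'sonar-pro', 'sonar-reasoning-pro', 'sonar-deep-research'];
}
```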

Minor issues

  1. Unrelated change in page.client.tsx - The processedRequest && to !!processedRequest && change is not Perplexity-specific. Should be in a separate commit or PR.

@junaiddshaukat
Contributor Author

Hi @joeyorlando, addressed all review feedback:

  1. Added perplexityEnabled feature flag to features.ts
  2. Added ARCHESTRA_PERPLEXITY_BASE_URL to .env.example
  3. Added E2E tests: Perplexity config in model-optimization.spec.ts, token-cost-limits.spec.ts, chat.spec.ts + 9 WireMock stub mappings + CI config in values-ci.yaml
  4. Removed paid chat completion call from fetchPerplexityModels - now returns hardcoded list directly (validation on first actual use)
  5. Fixed types.gen.ts - concrete types instead of unknown. Simplified perplexity/api.ts to extend OpenAI's response schema like Mistral does
  6. Removed duplicate baseUrl from config.chat.perplexity - now uses config.llm.perplexity.baseUrl like other providers
  7. Reverted unrelated page.client.tsx change

pnpm check:ci passes locally (only pre-existing errors on main remain). Ready for re-review!
