Conversation

@salmanap (Contributor) commented Jan 16, 2026

We should support the following config so that developers don't have to restart plano every time they want to try or use a new model.

```yaml
listeners:
  # Model listener for direct LLM access
  - type: model
    name: llms
    address: 0.0.0.0
    port: 12000

model_providers:
  # OpenAI - support all models via wildcard
  - model: openai/*
    access_key: $OPENAI_API_KEY

  # Anthropic - support all Claude models
  - model: anthropic/*
    access_key: $ANTHROPIC_API_KEY

  # xAI - support all Grok models
  - model: xai/*
    access_key: $GROK_API_KEY

  # Custom internal LLM provider
  # Note: requires base_url and provider_interface for unknown providers
  - model: ollama/*
    base_url: https://llm.internal.company.com
```
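To illustrate the behavior this config implies, here is a minimal sketch of wildcard provider resolution: an incoming model name (e.g. `openai/gpt-4o`) is matched against the `model` patterns in order, and `$ENV_VAR` access keys are expanded at request time, so new models under a matched provider work without a restart. `ProviderConfig` and `resolve_provider` are hypothetical names for this sketch, not plano's actual API.

```python
import fnmatch
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderConfig:
    model: str                         # wildcard pattern, e.g. "openai/*"
    access_key: Optional[str] = None   # may be an env reference like "$OPENAI_API_KEY"
    base_url: Optional[str] = None

def resolve_provider(requested_model: str,
                     providers: list[ProviderConfig]) -> Optional[ProviderConfig]:
    """Return the first provider whose pattern matches the requested model,
    with $ENV_VAR access keys expanded from the environment."""
    for p in providers:
        if fnmatch.fnmatch(requested_model, p.model):
            key = p.access_key
            if key and key.startswith("$"):
                # Resolve the key lazily, per request, from the environment
                key = os.environ.get(key[1:])
            return ProviderConfig(model=p.model, access_key=key, base_url=p.base_url)
    # No pattern matched: the gateway would reject the request
    return None
```

Because matching is first-match-wins over the list, more specific patterns would need to be listed before broader ones.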

