`component-llm-openai` is a Greentic component that sends OpenAI-compatible `/chat/completions` requests to a selected provider. It keeps the component-level config small and lets per-invocation input override request-specific values such as `model`, `messages`, `temperature`, `top_p`, `max_tokens`, and `extra`.
The canonical component config is:
- `provider`
- `base_url`
- `api_key_secret`
- `default_model`
- `timeout_ms`
`api_key_secret` is the name of a stored secret to look up at runtime, not the API key value itself.
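For orientation, a config that populates every canonical field might look like the sketch below. The values are purely illustrative; real configs usually set only a subset, as in the provider examples further down.

```json
{
  "provider": "openai",
  "base_url": "https://api.openai.com/v1",
  "api_key_secret": "OPENAI_API_KEY",
  "default_model": "gpt-4.1-mini",
  "timeout_ms": 30000
}
```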
Static manifest secrets remain empty because the real requirement is conditional:
- `openai`: API key secret usually required
- `openrouter`: API key secret required
- `together`: API key secret required
- `ollama`: API key secret usually not required
- `custom`: depends on the endpoint
Built-in providers use explicit runtime defaults unless overridden by component config:
| Provider | Default base URL | Default model |
|---|---|---|
| `openai` | `https://api.openai.com/v1` | `gpt-4.1-mini` |
| `ollama` | `http://localhost:11434/v1` | `llama3:8b` |
| `openrouter` | `https://openrouter.ai/api/v1` | `openrouter/auto` |
| `together` | `https://api.together.xyz/v1` | `togethercomputer/llama-3-8b-instruct` |
| `custom` | no built-in endpoint; you must supply `base_url` | compatibility fallback `gpt-4.1-mini` if no model is supplied |
`custom` is treated specially:
- `base_url` must be supplied
- the setup/default wizard asks for `default_model`
- runtime still keeps the current compatibility fallback model if nothing is configured
Resolution order is:
- explicit per-invocation override, such as `input.model`
- component config
- runtime provider defaults
- compatibility fallback only where still needed
In practice, `input.model` overrides the component `default_model`, and the component `default_model` overrides the provider default.
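As an illustration of that ordering, suppose the component config sets a default model and a single invocation passes its own `model`. The per-invocation value wins; the model names and message below are only examples.

```json
{
  "provider": "openai",
  "api_key_secret": "OPENAI_API_KEY",
  "default_model": "gpt-4.1-mini"
}
```

```json
{
  "model": "gpt-4.1",
  "messages": [
    { "role": "user", "content": "Summarize this ticket." }
  ]
}
```

This request would be sent with `gpt-4.1`; the same input without `model` would fall back to `gpt-4.1-mini`, and with neither configured the runtime provider default applies.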
Example component configs:

OpenAI:

```json
{
  "provider": "openai",
  "api_key_secret": "OPENAI_API_KEY"
}
```

Ollama:

```json
{
  "provider": "ollama"
}
```

OpenRouter:

```json
{
  "provider": "openrouter",
  "api_key_secret": "OPENROUTER_API_KEY",
  "default_model": "openrouter/auto"
}
```

Custom:

```json
{
  "provider": "custom",
  "base_url": "https://my-llm.example.com/v1",
  "api_key_secret": "MY_LLM_API_KEY",
  "default_model": "gpt-oss-120b",
  "timeout_ms": 30000
}
```

Per-invocation input is request-specific and includes:
- `model`
- `messages`
- `temperature`
- `top_p`
- `max_tokens`
- `extra`
Use component config for defaults. Use invocation input for request-by-request behavior.
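A fuller input payload might look like the sketch below. The `messages` shape follows the usual OpenAI chat format; treating `extra` as a map of additional request parameters is an assumption about its shape, and all values are illustrative.

```json
{
  "model": "gpt-4.1-mini",
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "List three uses for a paperclip." }
  ],
  "temperature": 0.2,
  "top_p": 0.9,
  "max_tokens": 256,
  "extra": { "presence_penalty": 0.1 }
}
```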
The component exports Greentic v0.6 QA lifecycle flows for:
- `default`
- `setup`
- `update`
- `remove`
Default mode keeps things minimal:
- choose provider first
- OpenAI, OpenRouter, and Together ask only for the API key secret name
- Ollama asks nothing else
- Custom asks for base URL, whether an API key is required, optional secret reference, and default model
Setup mode is fuller:
- provider
- standard endpoint yes/no for built-in providers
- base URL when needed
- authentication questions when needed
- default model
- timeout handling
- timeout in milliseconds when custom timeout is selected
Update mode starts by asking which area to change:
- provider
- endpoint
- authentication
- default model
- timeout
Then it asks only the relevant follow-up questions.
Remove mode stays minimal and asks for confirmation.
The current `greentic:component/component@0.6.0` export uses `invoke` with a CBOR envelope and does not expose a separate `invoke_stream` ABI entrypoint. The internal streaming response parser is still present for future use with OpenAI-compatible SSE `data:` chunks.
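For reference, OpenAI-compatible streaming sends each chunk on an SSE `data:` line whose payload is a `chat.completion.chunk` object roughly like the one below; the stream conventionally ends with a `data: [DONE]` sentinel. This is the generic OpenAI-style shape, not something confirmed from this component's parser, and the field values are illustrative.

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion.chunk",
  "created": 1700000000,
  "model": "gpt-4.1-mini",
  "choices": [
    { "index": 0, "delta": { "content": "Hello" }, "finish_reason": null }
  ]
}
```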
To test and build the component:

```bash
cargo test
cargo build --target wasm32-wasip2
```

The generated `component.manifest.json` points at `target/wasm32-wasip2/release/component_llm_openai.wasm`. After rebuilding the release wasm, refresh the manifest hash with `greentic-component inspect --json target/wasm32-wasip2/release/component_llm_openai.wasm`.
The repo includes a real Rust integration test at tests/live_provider.rs that talks to a live OpenAI-compatible endpoint.
It is designed to:
- use local Ollama by default during local development
- use CI-provided OpenAI-compatible settings when secrets are present in GitHub Actions
Local setup:
```bash
cp .secrets.sample .secrets
```

The file is sourced by bash, so keep values shell-safe. In particular, quote values that contain spaces.
The default local .secrets example targets Ollama:
```bash
LIVE_LLM_PROVIDER=ollama
LIVE_LLM_BASE_URL=http://localhost:11434/v1
LIVE_LLM_MODEL=llama3:8b
LIVE_LLM_API_KEY=
```

Then run:
```bash
ollama serve
ollama pull llama3:8b
set -a
. ./.secrets
set +a
cargo test live_provider_roundtrip --test live_provider -- --exact
```

`.secrets` is gitignored and should stay local-only.
CI behavior:
- `.github/workflows/nightly-e2e.yml` runs the Rust live test directly on a nightly schedule
- if `OPENAI_API_KEY` is present, the nightly test runs against OpenAI by default using `gpt-5-mini`
- `LIVE_LLM_BASE_URL`, `LIVE_LLM_MODEL`, and `LIVE_LLM_PROVIDER` can optionally override that default
`LIVE_LLM_API_KEY` is optional overall. Leave it empty for local Ollama, or provide it for OpenAI, OpenRouter, Together, or a custom authenticated endpoint.