Add configurable timeout for LLM provider and session summarization #278
jsykes wants to merge 1 commit into sipeed:main
Conversation
Replaces hardcoded 120s timeout with a timeout field in agents.defaults config, allowing users running local/slow models to avoid request timeouts.
@Zepan Adds a configurable timeout for LLM provider calls and session summarization, preventing hanging requests from blocking the agent indefinitely. Recommendation: Merge. +17/-5, small and practical, and especially important for unreliable network conditions on embedded devices.
nikolasdehor
left a comment
Clean, focused, minimal — exactly what this needs. LGTM.
The change makes the 120s timeout configurable via agents.defaults.timeout (JSON) and PICOCLAW_AGENTS_DEFAULTS_TIMEOUT (env var), with a safe fallback to 120s when unset or <= 0. It applies to both the HTTP provider client and the session summarization context.
Strengths:
- Single concern, 17-line diff
- Consistent with existing config patterns (`max_tokens`, `temperature`, `max_tool_iterations`)
- Example config updated
- Backward compatible — existing deployments get the same 120s default
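For reference, the new field presumably sits alongside the existing defaults in the example config roughly like this (values and sibling fields are illustrative, not taken from the actual diff):

```json
{
  "agents": {
    "defaults": {
      "max_tokens": 4096,
      "temperature": 0.7,
      "max_tool_iterations": 10,
      "timeout": 600
    }
  }
}
```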
This directly fixes #430 and #312 for users running local models (Ollama, vLLM) that need longer inference times.
Minor nit (non-blocking): no tests added, but the change is simple enough that the risk is low. If desired, a quick test verifying NewHTTPProvider with timeout=0 falls back to 120s would be nice.
I also reviewed PR #432 which tackles the same problem but bundles 4 unrelated changes — I'd recommend merging this PR first and having #432 split and rebase on top.