
Add configurable timeout for LLM provider and session summarization#278

Open
jsykes wants to merge 1 commit into sipeed:main from jsykes:config-timeout-support

Conversation

@jsykes jsykes commented Feb 16, 2026

Replaces hardcoded 120s timeout with a timeout field in agents.defaults config, allowing users running local/slow models to avoid request timeouts.

@Leeaandrob (Collaborator) commented:

@Zepan Adds configurable timeout for LLM provider calls and session summarization. Prevents hanging requests from blocking the agent indefinitely.

Recommendation: Merge. +17/-5, small and practical. Especially important for unreliable network conditions on embedded devices.

@nikolasdehor nikolasdehor left a comment

Clean, focused, minimal — exactly what this needs. LGTM.

The change makes the 120s timeout configurable via agents.defaults.timeout (JSON) and PICOCLAW_AGENTS_DEFAULTS_TIMEOUT (env var), with a safe fallback to 120s when unset or <= 0. It applies to both the HTTP provider client and the session summarization context.

Strengths:

  • Single concern, 17-line diff
  • Consistent with existing config patterns (max_tokens, temperature, max_tool_iterations)
  • Example config updated
  • Backward compatible — existing deployments get the same 120s default

This directly fixes #430 and #312 for users running local models (Ollama, vLLM) that need longer inference times.

Minor nit (non-blocking): no tests added, but the change is simple enough that the risk is low. If desired, a quick test verifying NewHTTPProvider with timeout=0 falls back to 120s would be nice.

I also reviewed PR #432 which tackles the same problem but bundles 4 unrelated changes — I'd recommend merging this PR first and having #432 split and rebase on top.

@nikolasdehor commented:

Related: #432 (PR increasing Ollama timeout) and #605 (issue requesting timeout control). All three address the same need for configurable LLM timeouts. Suggesting consolidation — this issue (#278) is the earliest and most complete description of the requirement.

