fix: prevent OTel context leak in fire-and-forget background tasks (backport #5168) #5228
Open
mergify[bot] wants to merge 1 commit into release-0.6.x
## What's the problem?

When you look at a trace in Jaeger, you expect it to show what happened during a single request. Instead, we found traces that looked like this during load testing:

- A request that took **5 seconds** showed a trace lasting **62 seconds**
- That trace contained **2,594 spans**, including **334 database writes that belonged to completely different requests**

The trace was essentially garbage -- you couldn't tell what actually happened during the request vs. what leaked in from other requests happening at the same time.

## Why does this happen?

The server uses background worker tasks to write data to the database without blocking the API response. These workers are long-lived -- they start up once and process a shared queue forever.

The problem is how Python's `asyncio.create_task` works: it copies all context variables (including the OpenTelemetry trace context) at the moment the task is created. So whichever API request happens to **first** trigger worker creation permanently stamps its trace ID onto that worker. Every database write the worker processes from that point forward -- regardless of which request it came from -- gets attributed to that original request's trace.

```
Request A arrives → spawns worker → worker inherits trace A
Request B arrives → enqueues work → worker processes it under trace A ← wrong!
Request C arrives → enqueues work → worker processes it under trace A ← wrong!
...forever
```

## How does this fix it?

Two changes working together:

**1. Workers start with a clean slate.** A new helper (`create_task_with_detached_otel_context`) creates the worker task with an empty trace context, so it doesn't permanently inherit any request's identity.

**2. Each queue item carries its own trace context.** When a request enqueues work, it snapshots its current trace context and attaches it to the queue item. When the worker picks up that item, it temporarily activates the captured context for the duration of that work, then returns to a clean state before processing the next item.

```
Request A arrives → enqueues work with trace A context
Request B arrives → enqueues work with trace B context
Worker (no trace) → picks up item A → activates trace A → writes to DB → deactivates
                  → picks up item B → activates trace B → writes to DB → deactivates
```

The result: each database write shows up under the correct request's trace. No inflation, no cross-contamination.

## What changed?

| File | What it does |
|------|--------------|
| `core/task.py` (new) | Three utilities: `create_task_with_detached_otel_context` (start tasks clean), `capture_otel_context` (snapshot current context), `activate_otel_context` (temporarily restore a captured context) |
| `inference_store.py` | Queue items now carry the OTel context; workers activate it per-item before writing |
| `openai_responses.py` | Same pattern for the responses background worker |

## How is this tested?

**14 new tests** across three files:

- **`test_task.py`** (9 tests) -- validates the primitives: detached tasks get clean context, captured context can be re-activated, context flows correctly through a queue, and two requests don't contaminate each other
- **`test_inference_store.py`** (2 tests) -- end-to-end with a real SQLite-backed InferenceStore: simulates two API requests, lets the queue + workers process the writes, and asserts each write lands in the correct trace (this directly reproduces the original bug)
- **`test_responses_background.py`** (3 tests) -- same validation for the responses worker, plus a test proving that error-handling DB writes (marking a response as failed) are also attributed to the correct trace

## Test plan

- [x] All 14 new unit tests pass
- [x] All existing unit tests unaffected
- [x] Inference and Responses API tests that use in-memory OTel span collectors pass

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Charlie Doern <cdoern@redhat.com>

(cherry picked from commit 20916be)

# Conflicts:
#	src/llama_stack/providers/inline/agents/meta_reference/responses/openai_responses.py
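The root-cause behavior described in the PR can be reproduced in a few lines of plain `asyncio` + `contextvars`. Note this is an illustrative sketch, not code from the PR: `current_trace` is a hypothetical stand-in for the OpenTelemetry trace context, which OTel also stores in a context variable.

```python
import asyncio
import contextvars

# Hypothetical stand-in for the OTel trace context (not from the PR).
current_trace = contextvars.ContextVar("current_trace", default=None)
processed = []  # records (item, trace it ran under)

async def worker(queue):
    # This task runs in a *copy* of the context taken at create_task() time,
    # so current_trace stays pinned to whatever the spawning request had set.
    while True:
        item = await queue.get()
        processed.append((item, current_trace.get()))
        queue.task_done()

async def main():
    queue = asyncio.Queue()

    # "Request A" happens to spawn the worker while its trace is active...
    current_trace.set("trace-A")
    asyncio.create_task(worker(queue))

    # ...then "Request B" enqueues work under a different trace.
    current_trace.set("trace-B")
    await queue.put("request-B-write")
    await queue.join()

asyncio.run(main())
print(processed)  # [('request-B-write', 'trace-A')] -- B's write leaked into A's trace
```

The later `current_trace.set("trace-B")` never reaches the worker, because `asyncio.create_task` snapshotted the context when the task was created; that is exactly the permanent inheritance the PR describes.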
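For illustration, here is a minimal, self-contained sketch of the shape of the fix: the worker is spawned from an empty context, and each queue item carries the enqueuer's context, which is activated only for that item. A plain `ContextVar` again stands in for the OTel context, and `spawn_detached` is a hypothetical analogue of the PR's `create_task_with_detached_otel_context` (the real helpers use the OpenTelemetry context API).

```python
import asyncio
import contextvars

# Hypothetical stand-in for the OTel trace context (not from the PR).
current_trace = contextvars.ContextVar("current_trace", default=None)
processed = []  # records (item, trace it ran under)

async def worker(queue):
    while True:
        item, captured = await queue.get()   # each item carries its own context
        token = current_trace.set(captured)  # activate it for this item only
        try:
            processed.append((item, current_trace.get()))
        finally:
            current_trace.reset(token)       # back to a clean state
        queue.task_done()

def spawn_detached(coro):
    # Start the task from a context where the trace is cleared, so it
    # inherits no request's identity (analogue of the PR's helper).
    ctx = contextvars.copy_context()
    ctx.run(current_trace.set, None)
    return ctx.run(asyncio.create_task, coro)

async def main():
    queue = asyncio.Queue()

    current_trace.set("trace-A")             # request A spawns the worker
    spawn_detached(worker(queue))
    await queue.put(("write-A", current_trace.get()))

    current_trace.set("trace-B")             # request B enqueues later
    await queue.put(("write-B", current_trace.get()))
    await queue.join()

asyncio.run(main())
print(processed)  # [('write-A', 'trace-A'), ('write-B', 'trace-B')]
```

Each write now lands under the trace of the request that enqueued it, which is the property the PR's `test_task.py` and `test_inference_store.py` tests assert.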
Cherry-pick of 20916be has failed. To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally
This is an automatic backport of pull request #5168 (fix: prevent OTel context leak in fire-and-forget background tasks), done by Mergify.