Context
I've started using the compound engineering workflow and I'm trying to understand how the compound step works in practice, specifically around sub-agent context and information retention.
What I'm trying to understand
When work happens through sub-agents (via /workflows:work, /lfg, /slfg), each sub-agent has its own context where it may try multiple approaches, hit dead ends, discover codebase quirks, and eventually succeed. The sub-agent then returns a summary to the orchestrator.
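To make the compression concern concrete, here is a toy model of the gap between what a sub-agent experiences and what it hands back. Everything in it is my own invention for illustration (the SubAgentTrace type, the field names, the one-line summary format); it is not how the workflow actually represents reports:

```python
from dataclasses import dataclass, field

@dataclass
class SubAgentTrace:
    """Everything a sub-agent actually experienced (hypothetical model)."""
    task: str
    failed_attempts: list[str] = field(default_factory=list)  # dead ends hit along the way
    discoveries: list[str] = field(default_factory=list)      # codebase quirks found
    outcome: str = ""

def compress_for_orchestrator(trace: SubAgentTrace) -> str:
    """The summary handed back: the outcome survives, the journey may not."""
    return f"Task '{trace.task}': {trace.outcome}"

trace = SubAgentTrace(
    task="add retry logic to the sync job",
    failed_attempts=["naive retry loop deadlocked the worker pool"],
    discoveries=["the queue client silently swallows exceptions"],
    outcome="done; retries with backoff, tests passing",
)

# The orchestrator only ever sees this one line:
print(compress_for_orchestrator(trace))
```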
By the time /workflows:compound runs at the end, several things may have happened:
- Sub-agents have compressed their experience into summaries, potentially dropping the failed attempts and unexpected discoveries that are arguably the most valuable things to compound
- Multiple sub-agent summaries have accumulated in the orchestrator's context, with early ones potentially losing weight as later tasks push them further back
- On longer task lists, compaction may have occurred at least once before the compound step runs, which could mean early sub-agent summaries are gone entirely: not just far back in context, but actually removed (a toy illustration follows this list)
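To illustrate that last point, here is a deliberately crude mental model of compaction as a bounded buffer that evicts the oldest entries first. The capacity number and eviction policy are made up; I don't know how compaction actually decides what to drop:

```python
from collections import deque

# Hypothetical: pretend the orchestrator retains at most three
# sub-agent summaries before compaction evicts the oldest ones.
CONTEXT_CAPACITY = 3
context: deque[str] = deque(maxlen=CONTEXT_CAPACITY)

for i in range(1, 7):
    context.append(f"summary of task {i}")  # each finished task pushes a summary

# By the time /workflows:compound runs, only the most recent summaries remain:
print(list(context))  # ['summary of task 4', 'summary of task 5', 'summary of task 6']
# Whatever tasks 1-3 taught us is not merely far back in context; it is gone.
```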
Questions
- What does /workflows:compound actually read from? Does it work from the orchestrator's conversation context, the git diff, the PR, the plan file, or some combination?
- If it relies on the orchestrator's context, how does it handle the case where compaction has already occurred and early task summaries have been lost?
- Do sub-agents currently have any instructions to report failed approaches and unexpected discoveries, or are they primarily reporting outcomes?
- Is there a recommended task list size or session structure where the compound step works best? For example, should users be running compound more frequently on long feature branches rather than once at the end?
- Has the team observed cases where the compound step produced thin or low-value docs/solutions/ entries, and if so, was context loss a contributing factor?
- Would it make sense to compound at the sub-agent level, having each sub-agent capture learnings while its full context is still fresh, right after completing its task? If so, is there a recommended way to achieve this today, or would it require changes to the workflow? (A rough sketch of what I have in mind follows after this list.)
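On that last question, this is the kind of hook I have in mind, sketched in Python for concreteness. The function name, signature, and the learnings.md filename are all hypothetical; only the docs/solutions/ location comes from the existing workflow:

```python
from datetime import date
from pathlib import Path

def compound_subagent_learnings(task: str,
                                failed_attempts: list[str],
                                discoveries: list[str]) -> None:
    """Append one task's learnings to a docs/solutions/ file immediately
    after the sub-agent finishes, while its full context still exists."""
    lines = [f"## {date.today()} - {task}"]
    lines += [f"- Failed approach: {a}" for a in failed_attempts]
    lines += [f"- Discovery: {d}" for d in discoveries]
    out = Path("docs/solutions/learnings.md")  # illustrative path and filename
    out.parent.mkdir(parents=True, exist_ok=True)
    with out.open("a") as f:
        f.write("\n".join(lines) + "\n\n")

# Each sub-agent would call this right before returning its summary:
compound_subagent_learnings(
    task="add retry logic to the sync job",
    failed_attempts=["naive retry loop deadlocked the worker pool"],
    discoveries=["the queue client silently swallows exceptions"],
)
```

The point of the sketch is just the timing: the capture happens before the summary is compressed and before any compaction can touch it.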
I want to make sure I'm getting full value from the compound step rather than just going through the motions. Any insight into how this actually works under the hood would be really helpful.