
Addressing PR comments #2

Draft

Claude wants to merge 1 commit into master from claude/analyze-module-internal-calls

Conversation

Claude AI commented Apr 3, 2026

No description provided.

@zclllyybb (Owner) commented:

@claude[agent] resolve the conflicts. Check if the original problem still exists, and if it does, adapt your optimization to the current code.

…iewers method

- Created _execute_reviewers() helper method to centralize reviewer execution logic
- Replaced duplicated reviewer loops in _revise_task_pipeline and _execute_task
- Reduces 40+ lines of code duplication
- Improves maintainability: reviewer logic changes now only need one edit
- All tests pass (70 agent tests + 13 arbitration tests)

Agent-Logs-Url: https://github.com/zclllyybb/multi-agent-todo/sessions/544a1622-1651-4d0d-b059-a507ad55e669

Co-authored-by: zclllyybb <61408379+zclllyybb@users.noreply.github.com>
Claude AI force-pushed the claude/analyze-module-internal-calls branch from 99e9e9e to e0e561d on April 3, 2026 at 17:07
Claude AI (Author) commented Apr 3, 2026

@claude[agent] resolve the conflicts. Check if the original problem still exists, and if it does, adapt your optimization to the current code.

Conflicts resolved in commit e0e561d.

Original Problem Status: RESOLVED

The branch has been rebased onto master (b5b8532). The duplicated reviewer loop code that existed in 3 locations has been successfully reduced to 1 reusable _execute_reviewers() method.

What was fixed:

  • Resolved 2 conflicts in core/orchestrator.py where master had reverted to duplicated code
  • Kept the refactored version using _execute_reviewers() helper method
  • Adapted to master's formatting (black-style)

Verification:

  • Only 2 reviewer loops remain (down from 3 duplicated loops):
    1. Line 1953: Inside _execute_reviewers() (the centralized implementation)
    2. Line 1577: In _review_only_pipeline (intentionally kept for special mid-loop cancellation checks)
  • All 83 tests pass
  • Net reduction: 79 lines deleted, 75 lines added (~40 lines of duplication eliminated)

Claude AI changed the title from "Analyze module call relationships and extract duplicated reviewer loop" to "Addressing PR comments" on Apr 3, 2026
Claude AI requested a review from zclllyybb on April 3, 2026 at 17:08
zclllyybb requested a review from Copilot on April 3, 2026 at 17:08

Copilot AI left a comment


Pull request overview

Refactors the Orchestrator’s multi-reviewer execution logic to reduce duplication across task pipelines while preserving the existing “short-circuit on first rejection” behavior for develop/revise flows.

Changes:

  • Introduces a shared _execute_reviewers() helper that runs configured reviewers and aggregates results/output.
  • Updates _execute_task() and _revise_task_pipeline() to use _execute_reviewers() instead of inlined reviewer loops.
  • Minor edits in _review_only_pipeline() around reviewer-loop setup/comments.
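The refactor described above can be sketched as a single shared reviewer loop that both pipelines delegate to. This is a minimal, self-contained stand-in for illustration: the class name `OrchestratorSketch`, the reviewer objects, and the result dict shape are assumptions, not the project's real `Orchestrator`/`Task` types.

```python
# Minimal sketch of the refactor: one centralized reviewer loop that call
# sites delegate to instead of inlining their own copies. All names here
# are stand-ins, not the project's actual classes.
class OrchestratorSketch:
    def __init__(self, reviewers):
        self.reviewers = reviewers

    def _execute_reviewers(self, review_method, short_circuit=True, **kwargs):
        """Run each reviewer, aggregating results and rejection output."""
        results, rejections, all_passed = [], [], True
        for reviewer in self.reviewers:
            run, passed, text = getattr(reviewer, review_method)(**kwargs)
            results.append({"run": run, "passed": passed})
            if not passed:
                all_passed = False
                rejections.append(text)
                if short_circuit:
                    break  # develop/revise flows stop at the first rejection
        return results, all_passed, rejections

    def _execute_task(self):
        # Call site after the refactor: delegate instead of inlining the loop.
        return self._execute_reviewers("review_changes", short_circuit=True)
```

With this shape, a behavior change to reviewer execution needs exactly one edit, which is the maintainability win the commit message claims.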


Comment on lines +1574 to +1575
rejection_outputs = []
all_passed = True

Copilot AI Apr 3, 2026


In _review_only_pipeline, the new rejection_outputs list is never used, and the initial all_passed = True is immediately overwritten later by all(...). Removing these unused/duplicative variables (or wiring them into the output construction) will reduce confusion and keep this block consistent with the rest of the reviewer pipelines.

Suggested change (remove these lines):

rejection_outputs = []
all_passed = True

Comment on lines +1932 to +1956
        review_method: str,
        short_circuit: bool = True,
        **review_kwargs,
    ) -> tuple[list[dict], bool, list[str]]:
        """
        Execute all reviewers and return results.

        Args:
            task: Task being reviewed
            worktree_path: Path to the worktree
            review_method: Name of the review method to call ('review_changes' or 'review_patch')
            short_circuit: If True, stop after first rejection; if False, run all reviewers
            **review_kwargs: Additional kwargs to pass to the review method

        Returns:
            Tuple of (reviewer_results, all_passed, rejection_outputs)
        """
        reviewer_results = []
        rejection_outputs = []
        all_passed = True

        for reviewer in self.reviewers:
            # Call the appropriate review method
            review_fn = getattr(reviewer, review_method)
            review_run, passed, review_text = review_fn(

Copilot AI Apr 3, 2026


_execute_reviewers relies on a string review_method and getattr(reviewer, review_method); if a wrong method name is ever passed, the failure will be an AttributeError (or a confusing MagicMock call in tests). Consider validating review_method against an allow-list (e.g., review_changes/review_patch) and raising a clear ValueError when invalid, and/or accepting a callable instead of a string.
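Copilot's suggestion could look like the following sketch. The allow-list contents and the helper name `resolve_review_method` are assumptions for illustration, not code from the PR.

```python
# Hypothetical sketch of the suggested validation: check review_method
# against an allow-list before getattr dispatch, so a typo fails loudly
# with ValueError instead of an AttributeError (or a silently-succeeding
# MagicMock call under test).
ALLOWED_REVIEW_METHODS = frozenset({"review_changes", "review_patch"})

def resolve_review_method(reviewer, review_method: str):
    """Return the bound review method, rejecting unknown names early."""
    if review_method not in ALLOWED_REVIEW_METHODS:
        raise ValueError(
            f"Unknown review_method {review_method!r}; "
            f"expected one of {sorted(ALLOWED_REVIEW_METHODS)}"
        )
    return getattr(reviewer, review_method)
```

Alternatively, as the comment notes, accepting a bound callable instead of a string would remove the getattr indirection entirely and let the type checker catch bad call sites.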

Comment on lines +1928 to +1935
    def _execute_reviewers(
        self,
        task: Task,
        worktree_path: str,
        review_method: str,
        short_circuit: bool = True,
        **review_kwargs,
    ) -> tuple[list[dict], bool, list[str]]:

Copilot AI Apr 3, 2026


The new _execute_reviewers helper now contains the core reviewer-loop logic (including short-circuiting behavior). There doesn't appear to be a unit test asserting the short-circuit behavior and correct aggregation of rejection_outputs across multiple reviewers, so a small test would help prevent regressions when this helper is reused/extended.
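A regression test along the lines Copilot asks for might look like this sketch. The `FakeReviewer` shape and the `(run, passed, text)` return tuple are assumptions inferred from the quoted diff, and `run_reviewers` is a minimal stand-in so the tests are self-contained rather than the project's real helper.

```python
# Hypothetical pytest-style regression tests for short-circuit behavior and
# rejection aggregation. FakeReviewer mimics the (review_run, passed,
# review_text) tuple seen in the quoted diff; everything else is a stand-in.
class FakeReviewer:
    def __init__(self, name, passed, log):
        self.name, self.passed, self.log = name, passed, log

    def review_changes(self):
        self.log.append(self.name)  # record that this reviewer actually ran
        return self.name, self.passed, f"{self.name} rejected"

def run_reviewers(reviewers, short_circuit=True):
    # Minimal stand-in for _execute_reviewers, kept here for self-containment.
    results, rejections, all_passed = [], [], True
    for r in reviewers:
        run, passed, text = r.review_changes()
        results.append(run)
        if not passed:
            all_passed = False
            rejections.append(text)
            if short_circuit:
                break
    return results, all_passed, rejections

def test_short_circuit_stops_after_first_rejection():
    log = []
    reviewers = [FakeReviewer("a", False, log), FakeReviewer("b", True, log)]
    results, all_passed, rejections = run_reviewers(reviewers)
    assert log == ["a"]  # reviewer "b" never ran
    assert not all_passed
    assert rejections == ["a rejected"]

def test_all_reviewers_run_without_short_circuit():
    log = []
    reviewers = [FakeReviewer("a", False, log), FakeReviewer("b", False, log)]
    _, all_passed, rejections = run_reviewers(reviewers, short_circuit=False)
    assert log == ["a", "b"]
    assert rejections == ["a rejected", "b rejected"]
```

Pointing these assertions at the real `_execute_reviewers` (with fakes injected into `self.reviewers`) would pin down both the short-circuit contract and the `rejection_outputs` aggregation.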

