[prompt-clustering] Copilot Agent Prompt Clustering Analysis – 959 PRs, 8 Clusters #22970
This discussion was automatically closed because it expired on 2026-03-26T20:33:23.496Z.
Summary
Analysis period: All available history (~959 copilot PRs)
Prompts extracted: 959
Clusters identified: 8
Overall merge success rate: 69.0%
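As a rough sketch of how the headline figure is derived, the overall merge success rate is simply the merged fraction of analyzed PRs. The record shape and field name below are assumptions for illustration, not the actual analysis script:

```python
def merge_success_rate(prs):
    """Percentage of PRs whose final state is "merged", rounded to one decimal."""
    merged = sum(1 for pr in prs if pr["state"] == "merged")
    return round(100.0 * merged / len(prs), 1)

# 662 merged out of 959 reproduces the reported 69.0% overall rate.
prs = [{"state": "merged"}] * 662 + [{"state": "closed"}] * 297
print(merge_success_rate(prs))  # → 69.0
```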
Key Findings
The dominant category (C4, 406 PRs) covers agentic workflow .md changes and MCP server updates.
Cluster-by-Cluster Breakdown
C1: Unclear / Meta Prompts
Size: 67 PRs | Success rate: 73%
███████░░░
Top keywords: coding agent, coding, copilot coding, agent tips, tips, agent
Description: PRs where the original prompt wasn't extracted cleanly; the body contained only the copilot suffix section with tips/agent metadata rather than a clear task description.
Example PRs: #11065, #11066, #11097
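The per-cluster keyword lists can be approximated with a frequency count over unigrams and bigrams. This is a minimal sketch; the real analysis likely uses TF-IDF or a similar ranking, which is an assumption here:

```python
from collections import Counter

def top_keywords(prompts, k=6):
    """Most frequent unigrams and bigrams across one cluster's prompts."""
    counts = Counter()
    for text in prompts:
        tokens = text.lower().split()
        counts.update(tokens)                                         # unigrams
        counts.update(" ".join(p) for p in zip(tokens, tokens[1:]))   # bigrams
    return [term for term, _ in counts.most_common(k)]

print(top_keywords(["copilot coding agent tips", "coding agent tips"], k=3))
```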
C2: Issue-Tracker–Driven Tasks
Size: 103 PRs | Success rate: 58%
█████░░░░░
Top keywords: task miner, miner, discussion task, discussion, task, gh aw
Description: Tasks structured around <issue_title>/<issue_description> tags, typically spawned from the task-miner or discussion-task workflows. Broad scope: Go toolchain, MCP tool setup, workflow fixes.
Example PRs: #11059, #11067, #11074
C3: CI Failure Doctor – Issue-Based
Size: 176 PRs | Success rate: 53%
█████░░░░░
Top keywords: gh aw, aw, gh, issue_title, workflow, issue
Description: CI failures or regressions packaged as structured issues and routed to Copilot. Examples: regex changes breaking tests, ANSI escape sequences in YAML, TypeScript type errors after prior PRs.
Example PRs: #11058, #11068, #11069
C4: Agentic Workflow Updates & Features
Size: 406 PRs | Success rate: 74%
███████░░░Top keywords:
agentic,update,reference,campaign,workflows,workflowDescription: The dominant category — direct-text prompts requesting changes to agentic workflow
.mdfiles, campaigns, MCP servers, or supporting infrastructure. Covers upgrades, new flags, and feature additions.Example PRs: #11050, #11053, #11054
C5: Maintenance & Logging Improvements
Size: 60 PRs | Success rate: 63%
██████░░░░
Top keywords: comments, failure, workflow failure, issue_title, issue, issue_description
Description: Issue-driven maintenance tasks: merging jobs, adding logging, improving error messages, enforcing YAML config constraints.
Example PRs: #11060, #11077, #11084
C6: Direct CI Job Failure Fixes
Size: 33 PRs | Success rate: 82%
████████░░
Top keywords: job, fix, implement, failing, analyze, root cause
Description: Targeted prompts that supply a failing job URL + ID and ask Copilot to analyze logs, identify the root cause, and ship a fix. Highest-precision task type.
Example PRs: #11096, #11915, #12304
C7: Report Formatting Standardization
Size: 30 PRs | Success rate: 87%
████████░░
Top keywords: style, guidelines, update workflow, formatting, hierarchy, markdown
Description: A focused campaign to normalize markdown header levels (h2→h3) and add progressive disclosure to agentic workflow reports across many workflow files.
Example PRs: #11490, #11503, #11538
C8: Safe Outputs Infrastructure
Size: 84 PRs | Success rate: 81%
████████░░
Top keywords: safe, safe outputs, outputs, safe output, project, output
Description: Tasks related to the safe-outputs MCP server container: Docker/git configuration, mounting the workspace, updating base images.
Example PRs: #11116, #11117, #11119
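The ten-block bars shown with each cluster map one filled block to each full 10% of the success rate. A sketch of that rendering (the truncating behavior is inferred from the bars above, e.g. 87% showing eight filled blocks):

```python
def success_bar(rate, width=10):
    """Render a 0-100 success rate as filled/empty blocks, truncating partial blocks."""
    filled = int(rate / 100 * width)
    return "█" * filled + "░" * (width - filled)

for rate in (73, 82, 87):
    print(f"{rate}% {success_bar(rate)}")
```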
Success Rate by Cluster

| Cluster | Size | Success rate |
| --- | --- | --- |
| C1 Unclear / Meta Prompts | 67 | 73% |
| C2 Issue-Tracker–Driven Tasks | 103 | 58% |
| C3 CI Failure Doctor – Issue-Based | 176 | 53% |
| C4 Agentic Workflow Updates & Features | 406 | 74% |
| C5 Maintenance & Logging Improvements | 60 | 63% |
| C6 Direct CI Job Failure Fixes | 33 | 82% |
| C7 Report Formatting Standardization | 30 | 87% |
| C8 Safe Outputs Infrastructure | 84 | 81% |
Recommendations
The "analyze the workflow logs, identify the root cause, fix it" prompt with a specific job URL (C6) achieves 82% success. Use it as the canonical template for CI failure remediation.
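A hypothetical helper for instantiating that canonical template; the surrounding wording and parameter names are assumptions, and only the quoted phrase comes from the analysis:

```python
# Assumed template layout: the quoted core phrase plus job identifiers.
CI_FIX_TEMPLATE = (
    "For failing job {job_id} ({job_url}): analyze the workflow logs, "
    "identify the root cause, fix it."
)

def ci_fix_prompt(job_id, job_url):
    """Fill the canonical CI-failure remediation prompt with a specific job."""
    return CI_FIX_TEMPLATE.format(job_id=job_id, job_url=job_url)

print(ci_fix_prompt("12345", "https://github.com/org/repo/actions/runs/999"))
```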