[prompt-analysis] 🤖 Copilot PR Prompt Pattern Analysis - 2025-11-20 #4379
Closed · 1 comment

This discussion was automatically closed because it was created by an agentic workflow more than 1 week ago.
🤖 Copilot PR Prompt Pattern Analysis - 2025-11-20
This report analyzes 1,000 Copilot-generated PRs from the last 30 days to identify which prompt patterns lead to successful merges versus closed PRs.
Summary
Analysis Period: Last 30 days
Total PRs: 1,000 | Merged: 766 (76.6%) | Closed: 233 (23.3%) | Open: 1
Key Finding: Overall, 76.6% of Copilot PRs are successfully merged, indicating strong effectiveness of the Copilot coding agent when properly prompted.
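The summary figures above are straightforward to reproduce once the PR records are in hand. A minimal sketch, assuming the PRs have already been fetched (e.g. via the GitHub API) and normalized into dicts with a `state` field of `"merged"`, `"closed"`, or `"open"` (a hypothetical pre-processed shape; the raw API reports merged PRs as closed with `merged_at` set):

```python
from collections import Counter

def summarize_prs(prs):
    """Tally PR outcomes and compute the merge rate as a percentage."""
    counts = Counter(pr["state"] for pr in prs)
    total = len(prs)
    merge_rate = 100.0 * counts["merged"] / total if total else 0.0
    return counts, merge_rate

# Toy data mirroring the report's proportions (not the real dataset):
sample = (
    [{"state": "merged"}] * 766
    + [{"state": "closed"}] * 233
    + [{"state": "open"}]
)
counts, rate = summarize_prs(sample)
print(counts["merged"], counts["closed"], counts["open"], round(rate, 1))
# → 766 233 1 76.6
```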
Full Analysis Details
Prompt Categories and Success Rates
Insight: Documentation and testing prompts have the highest success rates, suggesting these are well-defined tasks that Copilot handles effectively.
Prompt Characteristics Analysis
✅ Successful Prompt Patterns
Common characteristics in merged PRs:
Most effective keywords in merged PRs:
- `issue`, `workflow`, `github` (context references)
- `update`, `add`, `section` (clear actions)
- `command`, `files`, `description` (specific targets)

Example successful prompts:
- Short & Specific with Context (PR #4374: "Fix broken documentation links in troubleshooting and how-it-works pages") → MERGED
- Clear Imperative (PR #4365: "Fix test isolation in collect_ndjson_output.test.cjs") → MERGED
- Detailed with Issue Context (multiple examples) → MERGED
❌ Unsuccessful Prompt Patterns
Common characteristics in closed PRs:
Keywords more common in closed PRs:
Example unsuccessful prompts:
- Vague/Incomplete (PR #4346: "[WIP] Remove common keywords and phrases from analysis") → CLOSED
- Potentially Too Complex (PR #4370: "[WIP] Skip conclusion job if agent job is cancelled") → CLOSED
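The keyword comparisons above come down to a word-frequency count over each outcome group. A minimal sketch, assuming the prompt texts have been grouped into merged and closed lists (the toy prompts below are illustrative only, not the analyzed dataset):

```python
from collections import Counter
import re

def keyword_frequencies(prompts):
    """Count lowercase word frequencies across a list of prompt strings."""
    words = Counter()
    for text in prompts:
        words.update(re.findall(r"[a-z]+", text.lower()))
    return words

def distinctive_keywords(merged_prompts, closed_prompts, top=3):
    """Words whose frequency in merged prompts most exceeds closed prompts."""
    merged = keyword_frequencies(merged_prompts)
    closed = keyword_frequencies(closed_prompts)
    diff = {w: merged[w] - closed[w] for w in merged}
    return [w for w, _ in sorted(diff.items(), key=lambda kv: -kv[1])[:top]]

merged = ["Fix issue in workflow files", "Update github workflow section"]
closed = ["Improve the code", "Enhance things"]
print(distinctive_keywords(merged, closed))
```

A real analysis would also want stop-word filtering and normalization by group size, since the merged and closed groups differ in count (766 vs. 233).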
Key Insights
📊 Pattern 1: Context is King
📏 Pattern 2: File References Matter
🎯 Pattern 3: Prompt Length is Neutral
🔧 Pattern 4: Documentation & Testing Excel
Recommendations
Based on the analysis of 1,000 Copilot PRs:
✅ DO: Include Context Links
Recommendation: Always include GitHub issue or workflow run URLs when relevant
Example: `Fix docs broken links https://github.com/org/repo/actions/runs/12345`

✅ DO: Reference Specific Files
Recommendation: Mention specific files, paths, or file extensions when possible
Example: `Update the authentication logic in auth.js to handle token refresh`

✅ DO: Use Clear Imperative Verbs
Recommendation: Start with action verbs: fix, add, update, remove, implement
Example: `Fix javascript tests` instead of `The tests need fixing`

✅ DO: Be Specific About Scope
Recommendation: Define clear boundaries for the task
Example: `Add error handling to the API client`, not `Improve error handling`

❌ AVOID: Vague Instructions
Recommendation: Don't use ambiguous verbs like "improve", "enhance", or "optimize" without specifics
BAD: `Improve the code` | GOOD: `Refactor the parser to use switch statements`

❌ AVOID: Multiple Complex Dependencies
Recommendation: Break down tasks with multiple conditions into simpler prompts
💡 IDEAL PROMPT TEMPLATE
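The template itself did not survive in this copy of the report. A sketch assembled from the DO/AVOID recommendations above (the bracketed fields are placeholders, not from the original):

```text
<action verb> <specific target> in <file or path>

Context: <GitHub issue or workflow run URL>
Scope: <clear boundary for the change>
```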
Statistical Highlights
Conclusion
The analysis reveals that the Copilot coding agent is highly effective, with a 76.6% merge rate.
Developers can improve success rates by providing specific context, referencing exact files, and using clear imperative instructions rather than vague improvement requests.
Analysis Period: 2025-11-20 (Last 30 days)
Data Source: 1,000 Copilot-generated PRs from githubnext/gh-aw