Improve prompt-lookup skill and add skill review workflow#1072
fernandezbaptiste wants to merge 2 commits into f:main
Conversation
No actionable comments were generated in the recent review. 🎉
📝 Walkthrough

A new GitHub Actions workflow is introduced to automatically review skill definitions, and the prompt-lookup skill documentation is restructured to provide more prescriptive, example-driven guidance on prompt retrieval and improvement workflows.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
plugins/claude/prompts.chat/skills/prompt-lookup/SKILL.md (1)
12-14: Add language identifiers to fenced code blocks.

The code blocks lack language specifiers, which triggers markdownlint warnings (MD040). Since these blocks show tool call syntax, consider adding a language hint for syntax highlighting.
Proposed fix:

````diff
-```
-search_prompts(query="code review", category="coding", limit=5)
-```
+```python
+search_prompts(query="code review", category="coding", limit=5)
+```
````

Apply similarly to lines 24-26 and 32-34.
Also applies to: 24-26, 32-34
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/claude/prompts.chat/skills/prompt-lookup/SKILL.md` around lines 12 - 14, Add language identifiers to the fenced code blocks that demonstrate tool calls (e.g., the block containing search_prompts(query="code review", category="coding", limit=5)) by changing the opening fence from ``` to a language-specific fence like ```python so markdownlint MD040 warnings are resolved; update the same pattern for the other two fenced blocks in this document that contain similar search_prompts examples.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/skill-review.yml:
- Line 14: Replace the invalid action reference
"tesslio/skill-review@3505a029f9a978284a74edacfbf7045625627db3" with the
supported Tessl integration by using "tesslio/publish@main" and supply the
required inputs (token, review, review-threshold); specifically remove the old
uses line and add the publish action with inputs: token mapped to
secrets.TESSL_API_TOKEN, review set to true, and review-threshold set to the
desired numeric threshold (e.g., 50) so the workflow invokes Tessl's built-in
review feature instead of a non-existent action.
In `@plugins/claude/prompts.chat/skills/prompt-lookup/SKILL.md`:
- Line 28: The documentation in SKILL.md incorrectly shows `${variable}` syntax;
update the sentence to reference the actual template syntax used by the
get_prompt tool (use `{{variable}}` for placeholders) and adjust the
example/phrase about defaults to match the tool behavior (ask user to fill
required `{{variable}}` placeholders and note that defaults are not used if the
tool requires explicit values). Locate references to
`${variable}`/`${variable:default}` and replace them with `{{variable}}` (and
update any mention of default handling to reflect get_prompt's behavior).
---
Nitpick comments:
In `@plugins/claude/prompts.chat/skills/prompt-lookup/SKILL.md`:
- Around line 12-14: Add language identifiers to the fenced code blocks that
demonstrate tool calls (e.g., the block containing search_prompts(query="code
review", category="coding", limit=5)) by changing the opening fence from ``` to
a language-specific fence like ```python so markdownlint MD040 warnings are
resolved; update the same pattern for the other two fenced blocks in this
document that contain similar search_prompts examples.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d0b8abcb-85b2-410d-a515-23c5ab666ab7
📒 Files selected for processing (2)
.github/workflows/skill-review.yml
plugins/claude/prompts.chat/skills/prompt-lookup/SKILL.md
```yaml
    contents: read
  steps:
    - uses: actions/checkout@v4
    - uses: tesslio/skill-review@3505a029f9a978284a74edacfbf7045625627db3 # main
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check if the tesslio/skill-review action exists and retrieve basic info

# Check if the repository exists and get metadata
gh api repos/tesslio/skill-review --jq '{name, description, visibility: .private | not, archived, pushed_at, stargazers_count, open_issues_count}' 2>/dev/null || echo "Repository not found or not accessible"

# Verify the pinned commit exists
gh api repos/tesslio/skill-review/commits/3505a029f9a978284a74edacfbf7045625627db3 --jq '{sha: .sha, message: .commit.message, date: .commit.committer.date}' 2>/dev/null || echo "Commit not found"
```

Repository: f/prompts.chat
Length of output: 337
🌐 Web query:
tesslio/skill-review GitHub Action
💡 Result:
There doesn’t appear to be a separate tesslio/skill-review GitHub Action. In Tessl’s docs, skill review is either:

- Run via the Tessl CLI (`tessl skill review`), optionally with `--optimize`. [2]
- Enabled as a quality gate inside the `tesslio/publish@main` GitHub Action using the `review` and `review-threshold` inputs. [1]
Minimal workflow example (from Tessl docs):

```yaml
- uses: tesslio/publish@main
  with:
    token: ${{ secrets.TESSL_API_TOKEN }}
    # review: 'true'           # default: true
    # review-threshold: '50'   # default: 50 (0-100)
```

Sources: Tessl “Review and publish with GitHub Actions” docs [1] and Tessl CLI reference for `tessl skill review` [2].
Remove the non-existent tesslio/skill-review action and use the correct Tessl integration.

The `tesslio/skill-review` action does not exist as a standalone GitHub Action. According to Tessl documentation, skill review is available either through the Tessl CLI (`tessl skill review`) or as a built-in feature of the `tesslio/publish@main` action with `review` and `review-threshold` inputs. Replace this action with the correct implementation, such as:

```yaml
- uses: tesslio/publish@main
  with:
    token: ${{ secrets.TESSL_API_TOKEN }}
    review: 'true'
    review-threshold: '50'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/skill-review.yml at line 14, Replace the invalid action
reference "tesslio/skill-review@3505a029f9a978284a74edacfbf7045625627db3" with
the supported Tessl integration by using "tesslio/publish@main" and supply the
required inputs (token, review, review-threshold); specifically remove the old
uses line and add the publish action with inputs: token mapped to
secrets.TESSL_API_TOKEN, review set to true, and review-threshold set to the
desired numeric threshold (e.g., 50) so the workflow invokes Tessl's built-in
review feature instead of a non-existent action.
- `type`: Filter by TEXT, STRUCTURED, IMAGE, VIDEO, or AUDIO
- `category`: Filter by category slug (e.g., "coding", "writing")
- `tag`: Filter by tag slug

If the prompt contains `${variable}` or `${variable:default}` placeholders, ask the user to fill required values (no default) before using.
Incorrect placeholder syntax documented.

The actual `get_prompt` tool implementation uses `{{variable}}` template syntax (per the tool description: "If the prompt contains template variables (like `{{variable}}`)"), but this documentation shows `${variable}` and `${variable:default}`. This mismatch will confuse agents trying to identify placeholders.
Proposed fix:

```diff
-If the prompt contains `${variable}` or `${variable:default}` placeholders, ask the user to fill required values (no default) before using.
+If the prompt contains `{{variable}}` placeholders, ask the user to fill in values before using.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
If the prompt contains `{{variable}}` placeholders, ask the user to fill in values before using.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/claude/prompts.chat/skills/prompt-lookup/SKILL.md` at line 28, The
documentation in SKILL.md incorrectly shows `${variable}` syntax; update the
sentence to reference the actual template syntax used by the get_prompt tool
(use `{{variable}}` for placeholders) and adjust the example/phrase about
defaults to match the tool behavior (ask user to fill required `{{variable}}`
placeholders and note that defaults are not used if the tool requires explicit
values). Locate references to `${variable}`/`${variable:default}` and replace
them with `{{variable}}` (and update any mention of default handling to reflect
get_prompt's behavior).
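To see why the syntax mismatch matters in practice, here is a minimal hypothetical sketch of how an agent might extract `{{variable}}` placeholders from a retrieved prompt. The `find_placeholders` helper and the sample prompt text are illustrative assumptions, not part of the PR; a pattern written for `${variable}` would simply never match prompts that use the double-brace template syntax.

```python
import re

def find_placeholders(prompt_text):
    # Matches {{name}} placeholders, tolerating whitespace inside the braces.
    # A regex targeting ${name} (the syntax the original doc showed) would
    # find nothing in prompts that use {{name}} templates.
    return re.findall(r"\{\{\s*(\w+)\s*\}\}", prompt_text)

prompt = "Review the following {{language}} code for {{focus_area}} issues."
print(find_placeholders(prompt))  # → ['language', 'focus_area']
```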
Following up on the above, @f
Hey @f, glad #1026 was useful - reviewed this skill as well, and made some improvements.
ran the prompt-lookup skill through evals, found quick fixes (description went from 82% to 100%, content went from 57% to 87%):

- tool docs listed parameters but never showed actual call syntax. added inline examples (`search_prompts`, `get_prompt`, `improve_prompt`) so the agent can copy-paste directly
- "When to Use This Skill" section repeated what the description already covers. removed it and replaced the body with a search, retrieve, improve workflow with error handling for empty results
- description was missing concrete actions like "browse prompt categories" or "search a prompt library". restructured to lead with what the skill does, followed by trigger terms

also added a lightweight GitHub Action that auto-reviews any skill.md changed in a PR. this gives you and your contributors an instant quality signal before you have to review yourself (no signup, no tokens needed).
happy to keep contributing fixes like this if it's helpful
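The search → retrieve → improve workflow with empty-result handling described in the comment above can be sketched roughly as follows. The tool names (`search_prompts`, `get_prompt`, `improve_prompt`) come from the PR; their signatures and return shapes here are assumptions, stubbed out so only the control flow is shown.

```python
# Stubs standing in for the prompts.chat tools; a real agent would invoke
# the actual tool calls here. Signatures are assumed for illustration.
def search_prompts(query, category=None, limit=5):
    return []

def get_prompt(prompt_id):
    return {"id": prompt_id, "content": ""}

def improve_prompt(content, instructions):
    return content

def lookup(query, category=None):
    results = search_prompts(query=query, category=category, limit=5)
    if not results and category:
        # Empty-result handling: retry once without the category filter
        results = search_prompts(query=query, limit=5)
    if not results:
        return None  # nothing found; report back to the user instead
    prompt = get_prompt(results[0]["id"])
    return improve_prompt(prompt["content"], instructions="tailor to user context")

print(lookup("code review", category="coding"))  # → None with these empty stubs
```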
Summary by CodeRabbit
Documentation
Chores