Scripts and hooks that make Claude Code enforce your design system automatically. Instead of hoping the AI uses your tokens and components correctly, these skills check deterministically after every file write.
Two skills work together:
- /design-setup — Run once. Discovers your components, tokens, and composition patterns. Generates config files the compose skill uses.
- /design-compose — Run when building UI. Composes existing components, enforces token usage, and catches new undocumented components. Validation scripts run automatically via hooks after every file write.
Copy the .claude/skills/ directory into your project's .claude/ folder:
```
your-project/
  .claude/
    skills/
      design-setup/
        SKILL.md
        scripts/
      design-compose/
        SKILL.md
        scripts/
        config/        ← generated by design-setup
        references/
```
If your project already has a .claude/ directory, just add the skills/ folder inside it.
Open Claude Code in your project and run:

```
/design-setup
```
This runs a series of Python scripts that scan your project and generate config files:
- paths.json — Where your components, tokens, and UI files live
- component-map.json — Catalog of your design system components
- composition-rules.json — Compound component patterns (e.g. Card must include CardHeader + CardContent)
- token-patterns.json — Regex patterns for catching hardcoded values
These config files are written to .claude/skills/design-compose/config/. You only need to run setup once per project (or again if your design system changes significantly).
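For illustration, a generated token-patterns.json might look something like this. The field names here are hypothetical — the actual shape is whatever the setup scripts emit:

```json
{
  "patterns": [
    {
      "name": "hex-color",
      "regex": "#[0-9a-fA-F]{3,8}\\b",
      "message": "Use a color token instead of a hardcoded hex value"
    },
    {
      "name": "px-font-size",
      "regex": "font-size:\\s*\\d+px",
      "message": "Use a typography token instead of a raw px font size"
    }
  ]
}
```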
Now when you build UI:

```
/design-compose make me a dashboard with charts and a data table
```
Three validation scripts run automatically after every file the AI writes:
| Script | What it checks |
|---|---|
| validate-tokens.py | No hardcoded colors, font sizes, or spacing values |
| check-imports.py | Design system components used instead of raw HTML |
| check-new-components.py | Flags components not yet in the catalog and asks you to add them |
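To make the deterministic approach concrete, here is a sketch in the spirit of check-imports.py — not the shipped script. The element-to-component mapping is invented for illustration; the real one comes from the generated catalog:

```python
import re

# Hypothetical mapping of raw HTML elements to design system components.
# In the real skill this would come from component-map.json.
RAW_ELEMENT_MAP = {
    "button": "Button",
    "input": "Input",
    "table": "Table",
}

def find_raw_elements(source: str) -> list[str]:
    """Flag raw JSX/HTML elements that have a catalog equivalent."""
    findings = []
    for tag, component in RAW_ELEMENT_MAP.items():
        # Match opening tags like <button> or <button type="submit">.
        # Case-sensitive, so <Button> (the component) does not match.
        if re.search(rf"<{tag}[\s>]", source):
            findings.append(f"<{tag}> found — use the {component} component instead")
    return findings
```

Because it is plain regex over the file contents, the check gives the same answer every run, with no tokens spent.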
When the session ends, validate-stop.py does a final sweep of all modified files.
If a script finds a problem, the AI is told to fix it before continuing. If a script finds a new component, the AI asks you whether to add it to the catalog. All results are logged to .claude/logs/validation.log.
The skills use Claude Code's hooks system to run Python scripts at specific points:
- PostToolUse hook with an `Edit|Write` matcher — runs the three validation scripts after every file write
- Stop hook — runs the final validation sweep when the session ends
- UserPromptSubmit hook — logs the session for debugging
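As a rough sketch, the wiring in .claude/settings.json looks something like this (paths shown are the skill layout from this repo; adjust to where your scripts actually live):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python3 $CLAUDE_PROJECT_DIR/.claude/skills/design-compose/scripts/validate-tokens.py"
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python3 $CLAUDE_PROJECT_DIR/.claude/skills/design-compose/scripts/validate-stop.py"
          }
        ]
      }
    ]
  }
}
```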
The scripts are deterministic — they use regex and file comparison, not AI judgment. This means they're fast (milliseconds), free (zero tokens), and reliable (same input always gives the same output).
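The shape of such a script can be sketched as follows. This is illustrative, not the shipped validate-tokens.py — the patterns here are examples, whereas the real ones are loaded from config/token-patterns.json:

```python
import re
import sys

# Example patterns; the real script reads these from token-patterns.json.
HARDCODED_PATTERNS = [
    (re.compile(r"#[0-9a-fA-F]{3,8}\b"), "hardcoded hex color"),
    (re.compile(r"font-size:\s*\d+px"), "hardcoded font size"),
]

def check(source: str) -> list[str]:
    """Return one message per line that matches a hardcoded-value pattern."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, label in HARDCODED_PATTERNS:
            if pattern.search(line):
                problems.append(f"line {lineno}: {label}: {line.strip()}")
    return problems

if __name__ == "__main__":
    issues = check(sys.stdin.read())
    if issues:
        # Loud on fail: exit code 2 with stderr is what the hook surfaces.
        print("\n".join(issues), file=sys.stderr)
        sys.exit(2)
    # Silent on pass: exit 0, no output.
```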
The SKILL.md file tells the AI how to compose. The scripts tell it what it got wrong. The hooks make sure the scripts run at the right time.
```
.claude/skills/
  design-setup/          ← Run once to configure
    SKILL.md
    scripts/             ← Discovery and config generation scripts
  design-compose/        ← Run when building UI
    SKILL.md
    scripts/             ← Validation scripts (heavily commented for learning)
    config/              ← Generated by design-setup
    references/
  docs/                  ← Deep-dive documentation on hooks, scripts, and enforcement
    the-problem.md
    handler-taxonomy.md
    where-handlers-run.md
    the-feedback-loop.md
    composition-enforcement.md
    writing-good-validator-scripts.md
    worked-example.md
```
Every script in design-compose/scripts/ has a detailed header comment written in plain language. Open any script and read the top — it explains what the script does, when it runs, what happens when it finds something, and where to see the results. No code experience required.
A few gotchas that aren't obvious from the Claude Code docs:
- `$CLAUDE_PROJECT_DIR` is the only path variable available in hook commands. There is no `$CLAUDE_SKILL_DIR`. Use the full path: `$CLAUDE_PROJECT_DIR/.claude/skills/<skill-name>/scripts/...`
- `type: prompt` hooks don't work on PostToolUse. They only work on PreToolUse, PermissionRequest, Stop, and UserPromptSubmit. Use `type: command` for post-write validation.
- Stop hooks with `type: prompt` use `{"decision": "block", "reason": "..."}` or `{}` to allow — not `{"ok": true/false}`. But prompt hooks show an error on block rather than re-prompting. For iterative fix loops, use a command hook with exit code 2 instead.
- Exit 0 stdout from hooks is not shown in the chat. It's hidden behind "X hooks ran." Only exit code 2 stderr shows prominently. Design your scripts to be silent on pass and loud on fail.
- AI-based validation is unreliable for catalog checks. We tried having the AI decide whether components were "new" — it hallucinated. Deterministic regex-based import parsing is the correct approach.
- Stop validation should only scan git-modified files. Scanning all files blocks designers with pre-existing violations they didn't create.
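Scoping the Stop sweep to modified files can be sketched like this, assuming a git working tree. Parsing is split out from the subprocess call so the logic stays testable:

```python
import subprocess

def parse_name_only(output: str) -> list[str]:
    """Turn `git diff --name-only` output into a clean list of paths."""
    return [line.strip() for line in output.splitlines() if line.strip()]

def modified_files() -> list[str]:
    """Files changed relative to HEAD — the only ones the Stop sweep should scan.

    Note: untracked files are not included; a fuller sketch would also run
    `git ls-files --others --exclude-standard`.
    """
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_name_only(out)
```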