1. Push to your fork and submit a pull request
1. Wait for your pull request to be reviewed and merged.

Activate the project virtual environment (see [Testing setup](#testing-setup) below), then install the CLI from your working tree (`uv pip install -e .` after `uv sync --extra test`) or otherwise ensure the shell uses the local `specify` binary before running the manual slash-command tests described below.

Here are a few things you can do that will increase the likelihood of your pull request being accepted:
For the smoothest review experience, validate changes in this order:

1. **Run focused automated checks first** — use the quick verification commands [below](#automated-checks) to catch scaffolding and configuration regressions early.
2. **Run manual workflow tests second** — if your change affects slash commands or the developer workflow, follow the [manual testing process](#manual-testing-process) section to choose the right commands, run them in an agent, and capture results for your PR.

### Automated checks

#### Agent configuration and wiring consistency

```bash
uv run python -m pytest tests/test_agent_config_consistency.py -q
```

Run this when you change agent metadata, context update scripts, or integration wiring.

#### Testing setup

```bash
# Install the project and test dependencies from your local branch
cd <spec-kit-repo>
uv sync --extra test
source .venv/bin/activate  # On Windows (CMD): .venv\Scripts\activate | (PowerShell): .venv\Scripts\Activate.ps1
uv pip install -e .
# Ensure the `specify` binary in this environment points at your working tree so the agent runs the branch you're testing.

# Initialize a test project using your local changes
uv run specify init <temp-dir>/speckit-test --ai <agent> --offline
cd <temp-dir>/speckit-test

# Open in your agent
```

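To confirm the shell really resolves `specify` from the project virtualenv rather than a global install, check where the binary comes from. The sketch below is illustrative only: it builds a throwaway directory with a mock `specify` on `PATH` so it runs anywhere; in a real checkout you would just run `command -v specify` after activating the venv.

```shell
# Illustrative only: mock a venv-installed `specify` to demonstrate the check.
tmp=$(mktemp -d)
mkdir -p "$tmp/.venv/bin"
printf '#!/bin/sh\necho local-specify\n' > "$tmp/.venv/bin/specify"
chmod +x "$tmp/.venv/bin/specify"
PATH="$tmp/.venv/bin:$PATH"

# After `source .venv/bin/activate`, this should point inside the repo's .venv/bin.
resolved=$(command -v specify)
echo "$resolved"
rm -rf "$tmp"
```

If the path printed is outside your working tree, the agent will run a released package instead of your branch.
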
#### Manual testing process

Any change that affects a slash command's behavior requires manually testing that command through an AI agent and submitting results with the PR.

1. **Identify affected commands** — use the [prompt below](#determining-which-tests-to-run) to have your agent analyze your changed files and determine which commands need testing.
2. **Set up a test project** — scaffold from your local branch (see [Testing setup](#testing-setup)).
3. **Run each affected command** — invoke it in your agent, verify it completes successfully, and confirm it produces the expected output (files created, scripts executed, artifacts populated).
4. **Run prerequisites first** — commands that depend on earlier commands (e.g., `/speckit.tasks` requires `/speckit.plan` which requires `/speckit.specify`) must be run in order.
5. **Report results** — paste the [reporting template](#reporting-results) into your PR with pass/fail for each command tested.

113
+
#### Reporting results
114
+
115
+
Paste this into your PR:
116
+
117
+
~~~markdown
118
+
## Manual test results
119
+
120
+
**Agent**: [e.g., GitHub Copilot in VS Code] | **OS/Shell**: [e.g., macOS/zsh]
121
+
122
+
| Command tested | Notes |
123
+
|----------------|-------|
124
+
|`/speckit.command`||
125
+
~~~
126
#### Determining which tests to run

Copy this prompt into your agent. Include the agent's response (selected tests plus a brief explanation of the mapping) in your PR.

~~~text
Read CONTRIBUTING.md, then run `git diff --name-only main` to get my changed files.
For each changed file, determine which slash commands it affects by reading
the command templates in templates/commands/ to understand what each command
invokes. Use these mapping rules:

- templates/commands/X.md → the command it defines
- scripts/bash/Y.sh or scripts/powershell/Y.ps1 → every command that invokes that script (grep templates/commands/ for the script name). Also check transitive dependencies: if the changed script is sourced by other scripts (e.g., common.sh is sourced by create-new-feature.sh, check-prerequisites.sh, setup-plan.sh, update-agent-context.sh), then every command invoking those downstream scripts is also affected
- templates/Z-template.md → every command that consumes that template during execution
- src/specify_cli/*.py → CLI commands (`specify init`, `specify check`, `specify extension *`, `specify preset *`); test the affected CLI command and, for init/scaffolding changes, at minimum test /speckit.specify
- extensions/X/commands/* → the extension command it defines
- extensions/X/scripts/* → every extension command that invokes that script
- extensions/X/extension.yml or config-template.yml → every command in that extension. Also check if the manifest defines hooks (look for `hooks:` entries like `before_specify`, `after_implement`, etc.) — if so, the core commands those hooks attach to are also affected
- presets/*/* → test preset scaffolding via `specify init` with the preset
- pyproject.toml → packaging/bundling; test `specify init` and verify bundled assets

Include prerequisite tests (e.g., T5 requires T3 requires T1).

Output in this format:

### Test selection reasoning

| Changed file | Affects | Test | Why |
|---|---|---|---|
| (path) | (command) | T# | (reason) |

### Required tests

Number each test sequentially (T1, T2, ...). List prerequisite tests first.
~~~
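The script-to-command mapping rule above boils down to a grep over the command templates. A self-contained sketch (the mini `templates/commands/` tree here is mocked for the demo; the directory name and script names come from the rules above):

```shell
# Mock two command templates, then find which ones invoke a changed script.
tmp=$(mktemp -d)
mkdir -p "$tmp/templates/commands"
printf 'Run scripts/bash/setup-plan.sh\n' > "$tmp/templates/commands/plan.md"
printf 'Run scripts/bash/create-new-feature.sh\n' > "$tmp/templates/commands/specify.md"

# Changed file: scripts/bash/setup-plan.sh → grep the templates for its name.
affected=$(grep -rl 'setup-plan.sh' "$tmp/templates/commands" | xargs -n1 basename)
echo "$affected"   # plan.md → manually test /speckit.plan
rm -rf "$tmp"
```

Run the same grep against the real `templates/commands/` directory in your checkout; each matching template names a command that needs a manual test.
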