ci: validate k8s manifests in deployment #102
Conversation
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews, or the checkboxes below for quick actions:
📝 Walkthrough

Adds a new GitHub Actions workflow that validates Kubernetes manifests under deployment/.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant PR as Pull Request
    participant GH as GitHub Actions
    participant Runner as Runner (ubuntu-latest)
    participant Repo as Repository (deployment/*)
    participant KC as kubeconform
    participant KS as kube-score
    PR->>GH: PR modifies deployment/**
    GH->>Runner: start validate job
    Runner->>Repo: checkout repository
    Runner->>KC: download & install kubeconform v0.7.0
    Runner->>KS: download & install kube-score v1.20.0
    Runner->>Repo: discover manifests in deployment/
    alt manifests found
        Runner->>KC: run strict schema validation (K8s v1.28.0)
        KC-->>Runner: validation results (fail on schema errors)
        Runner->>KS: run kube-score (WARN-only)
        KS-->>Runner: warnings reported (non-fatal unless invocation error)
    else no manifests
        Runner-->>GH: skip validation steps
    end
    Runner-->>GH: report job outcome
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed checks (6 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In @.github/workflows/k8s-validate.yml:
- Around line 76-82: The current kube-score step ("kube-score (WARN only; do not
fail workflow)") uses a blanket `|| true` after the pipeline (`find ... | xargs
-0 kube-score score --output-format ci || true`) which hides real failures;
change it to capture the kube-score exit code and only ignore the specific exit
code(s) that indicate policy findings while failing on execution errors: run the
pipeline into a variable (e.g., `set +e; find ... | xargs -0 kube-score score
--output-format ci; rc=$?; set -e`), then if rc indicates a runtime/installation
error (e.g., command not found or non-kube-score exit codes) `exit $rc`,
otherwise `echo "kube-score reported findings (rc=$rc)"` and continue; update
the step to use that conditional logic instead of unconditional `|| true` so
tool invocation failures still fail the job but findings remain non-fatal.
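The exit-code handling this comment asks for can be sketched as follows. Since kube-score itself may not be on PATH here, `sh -c 'exit 1'` stands in for the real `find ... | xargs -0 kube-score score --output-format ci` pipeline (kube-score commonly exits 1 when it reports findings):

```shell
#!/usr/bin/env bash
# Sketch: tolerate kube-score "findings" exit codes, fail on anything else.
set -euo pipefail

set +e
sh -c 'exit 1'   # stand-in for the kube-score pipeline returning its "findings" code
rc=$?
set -e

if [ "$rc" -le 1 ]; then
  echo "kube-score reported findings (rc=$rc); continuing"
else
  echo "kube-score failed to run correctly (rc=$rc)"
  exit "$rc"
fi
```

With this pattern a missing binary (exit 127) or crash still fails the job, while policy findings stay non-fatal.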
- Around line 51-61: The "Check for manifests" step (id: files) can fail when
the deployment directory is missing; update the step to defensively check for
the deployment directory before running find (e.g., use if [ -d deployment ] to
set COUNT via the existing find pipeline only when the directory exists,
otherwise set COUNT=0), preserving the existing behavior that writes
found=true/false to GITHUB_OUTPUT and keeping set -euo pipefail in place.
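A minimal sketch of the defensive check, assuming the step body looks roughly like this; `GITHUB_OUTPUT` is the Actions-provided output file, defaulted here so the sketch also runs locally:

```shell
#!/usr/bin/env bash
# Sketch: only run find when deployment/ exists, otherwise report zero manifests.
set -euo pipefail
GITHUB_OUTPUT="${GITHUB_OUTPUT:-/dev/null}"

if [ -d deployment ]; then
  COUNT="$(find deployment -type f \( -name '*.yml' -o -name '*.yaml' \) | wc -l)"
else
  COUNT=0
fi

if [ "$COUNT" -gt 0 ]; then
  echo "found=true" >> "$GITHUB_OUTPUT"
else
  echo "found=false" >> "$GITHUB_OUTPUT"
fi
echo "COUNT=$COUNT"
```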
- Around line 1-7: The pull_request trigger only watches the "deployment/**"
pattern so changes to this workflow (k8s-validate.yml) won't run validation;
update the pull_request.paths list in the workflow's on: block to also include
the workflow file itself (add the k8s-validate.yml path alongside
"deployment/**") so the job runs when the workflow is modified.
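A sketch of the resulting `on:` block (the workflow file path is taken from the review comment):

```yaml
on:
  pull_request:
    paths:
      - "deployment/**"
      - ".github/workflows/k8s-validate.yml"
```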
- Around line 23-26: Remove the explicit "Install jq" step from the workflow
(the job step that runs "sudo apt-get update" and "sudo apt-get install -y jq");
since ubuntu-latest already includes jq, delete that entire step to eliminate
the unnecessary apt-get overhead and shorten job runtime.
- Around line 28-49: Replace the fragile "latest" GitHub API lookups for
kubeconform and kube-score with pinned-version downloads and authenticate
requests using the runner's GITHUB_TOKEN: stop using the jq/API flow that sets
URL via curl+jq (the URL variable and select logic), instead use the known
release asset URLs for kubeconform v0.7.0 and kube-score v1.20.0 and download
them directly; when using curl to fetch from api.github.com or GitHub releases,
add an Authorization: Bearer $GITHUB_TOKEN header to avoid rate limits; keep the
subsequent steps (tar extraction/chmod + sudo mv to /usr/local/bin and version
checks like kubeconform -v and kube-score version) but change the URL assignment
and curl invocations to the pinned URLs and authenticated curl headers and
remove the jq dependency.
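A sketch of the pinned, authenticated install step the comment describes. The exact release asset filenames are assumptions and should be checked against each project's releases page:

```yaml
- name: Install kubeconform and kube-score (pinned)
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    set -euo pipefail
    curl -fsSL -H "Authorization: Bearer ${GITHUB_TOKEN}" \
      -o kubeconform.tar.gz \
      "https://github.com/yannh/kubeconform/releases/download/v0.7.0/kubeconform-linux-amd64.tar.gz"
    tar -xzf kubeconform.tar.gz kubeconform
    curl -fsSL -H "Authorization: Bearer ${GITHUB_TOKEN}" \
      -o kube-score.tar.gz \
      "https://github.com/zegl/kube-score/releases/download/v1.20.0/kube-score_1.20.0_linux_amd64.tar.gz"
    tar -xzf kube-score.tar.gz kube-score
    chmod +x kubeconform kube-score
    sudo mv kubeconform kube-score /usr/local/bin/
    kubeconform -v
    kube-score version
```

Pinning avoids both the jq dependency and the unauthenticated `releases/latest` API lookup.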
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.github/workflows/k8s-validate.yml:
- Line 93: Remove the leaked AI citation artifact
":contentReference[oaicite:0]{index=0}" from the kube-score comment on the line
that currently reads "# kube-score exits non-zero (commonly 1) when it reports
problems in CI usage. :contentReference[oaicite:0]{index=0}"; update the comment
to only contain the human-readable text (e.g., "# kube-score exits non-zero
(commonly 1) when it reports problems in CI usage.") so no AI-generated tokens
remain.
- Around line 29-39: The workflow step references ${GITHUB_TOKEN} in the install
shell but set -u aborts because GITHUB_TOKEN isn't exported to the shell; update
the workflow to provide the token as an environment variable either by adding an
env mapping for GITHUB_TOKEN at the job level so both the kubeconform install
and any other steps can use it, or by replacing ${GITHUB_TOKEN} in the curl
command with the GitHub Actions expression (e.g., ${{ secrets.GITHUB_TOKEN }})
so the value is injected inline; adjust the step that constructs the url and the
curl header usage accordingly (symbols: KUBECONFORM_VERSION, GITHUB_TOKEN,
kubeconform install step).
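The first option (a job-level `env` mapping, visible to every step's shell even under `set -u`) might look like this sketch:

```yaml
jobs:
  validate:
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

The alternative is to inline `${{ secrets.GITHUB_TOKEN }}` directly in the curl command so the value is injected before the shell runs.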
@coderabbitai full review

✅ Actions performed: Full review triggered.

@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.github/workflows/k8s-validate.yml:
- Around line 92-99: The kubeconform invocation (the command starting with
"kubeconform" and its flags like -strict, -summary, -ignore-missing-schemas,
-kubernetes-version) should include an explicit --output flag for CI-friendly
parsing; update the command to add either --output json or --output tap (e.g.,
--output json) so validation failures are machine-readable and easier to
integrate with CI reporters/annotations.
- Around line 72-99: The manifest-discovery (find + grep apiVersion/kind) logic
is duplicated; extract it into the existing "Check for manifests" step by
writing the matching file paths to a single shared file (e.g.,
"$RUNNER_TEMP/k8s-manifests.txt") and change both validation steps (including
the "Schema validate (FAIL on errors)" step and the other validation step that
repeats the same discovery) to read that file into the files array instead of
re-running find/grep; update references in the steps that currently use the
inline discovery so they populate files from the shared list and remove the
duplicate discovery blocks.
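The centralization described above can be sketched like this; the fixture directory standing in for the repo's deployment/ tree is hypothetical, and `RUNNER_TEMP` is defaulted so the sketch runs outside Actions:

```shell
#!/usr/bin/env bash
# Sketch: discover manifests once, share the list via $RUNNER_TEMP/k8s-manifests.txt.
set -euo pipefail
RUNNER_TEMP="${RUNNER_TEMP:-$(mktemp -d)}"
list="$RUNNER_TEMP/k8s-manifests.txt"

# Hypothetical fixture standing in for the repo's deployment/ directory.
demo="$(mktemp -d)/deployment"
mkdir -p "$demo"
printf 'apiVersion: v1\nkind: ConfigMap\n' > "$demo/cm.yaml"
printf 'not: a manifest\n' > "$demo/values.yaml"

# "Check for manifests" step: write matching paths to the shared list once.
: > "$list"
while IFS= read -r -d '' f; do
  if grep -qE '^[[:space:]]*apiVersion:' "$f" && grep -qE '^[[:space:]]*kind:' "$f"; then
    printf '%s\n' "$f" >> "$list"
  fi
done < <(find "$demo" -type f \( -name '*.yml' -o -name '*.yaml' \) -print0)

# Each validation step then just reads the shared list instead of re-discovering.
mapfile -t files < "$list"
echo "${#files[@]} manifest(s) to validate"
```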
```shell
kubeconform \
  -strict \
  -summary \
  -ignore-missing-schemas \
  -kubernetes-version "${K8S_VERSION}" \
  -schema-location default \
  -schema-location "https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/{{.NormalizedKubernetesVersion}}-standalone-strict/{{.ResourceKind}}.json" \
  "${files[@]}"
```
🧹 Nitpick | 🔵 Trivial
Consider adding --output format for CI-friendly kubeconform output.
Currently kubeconform uses its default text output. Adding --output json or --output tap can make failures easier to parse in CI logs and enables future integration with annotation/reporting tools.
This is purely optional — the current setup is functional.
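A sketch of the step with machine-readable output added; the flag spelling follows the single-dash style kubeconform uses elsewhere in this workflow, and `${files[@]}` is assumed to be populated by the preceding discovery logic:

```yaml
- name: Schema validate (FAIL on errors)
  if: steps.files.outputs.found == 'true'
  run: |
    kubeconform \
      -strict \
      -summary \
      -ignore-missing-schemas \
      -kubernetes-version "${K8S_VERSION}" \
      -output json \
      "${files[@]}"
```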
Pushed updates. Could you take another look when you get a chance?
KaranJagtiani
left a comment
Squash commits: ci: add Kubernetes manifest validation workflow
```shell
else
  echo "kube-score failed to run correctly (rc=$rc)"
  exit "$rc"
fi
```
(No newline at end of file)
.github/workflows/k8s-validate.yml
Outdated
```shell
while IFS= read -r -d '' f; do
  if grep -qE '^[[:space:]]*apiVersion:' "$f" && grep -qE '^[[:space:]]*kind:' "$f"; then
    files+=("$f")
  fi
done < <(find deployment -type f \( -name '*.yml' -o -name '*.yaml' \) -print0)
```
Duplicated logic. Centralize it in "Check for manifests" step, write to $RUNNER_TEMP/k8s-manifests.txt, and have both validation steps read from it to avoid drift.
.github/workflows/k8s-validate.yml
Outdated
```shell
if [ "${#files[@]}" -eq 0 ]; then
  echo "No Kubernetes manifest YAMLs (apiVersion/kind) found under deployment/. Skipping schema validation."
  exit 0
fi
```
Remove redundant “no manifests found” checks since steps.files.outputs.found already guards execution.
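The suggested step-level guard might look like this sketch, letting the `if:` condition do the skipping instead of an in-script check:

```yaml
- name: Schema validate (FAIL on errors)
  if: steps.files.outputs.found == 'true'
  run: |
    set -euo pipefail
    # ... kubeconform invocation ...
```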
.github/workflows/k8s-validate.yml
Outdated
```yaml
run: |
  set -euo pipefail

  # Only validate files that look like Kubernetes manifests.
```
Remove redundant comments.
Pushed updates again. Could you take another look when you get a chance?
KaranJagtiani
left a comment
Squash commits: ci: add Kubernetes manifest validation workflow
@iSwiin Please squash the commits into a single clean commit. |
Force-pushed from cac0bc4 to 0405157.
Squashed into a single commit (ci: add Kubernetes manifest validation workflow) and force pushed. Please take a final pass when you get a chance. |
KaranJagtiani
left a comment
Let's rename the file to: "deployment-ci.yml" so that it can be later upgraded to support additional checks for the deployment/** path.
```shell
else
  echo "kube-score failed to run correctly (rc=$rc)"
  exit "$rc"
fi
```
(No newline at end of file)
Newline is still missing here
```diff
@@ -0,0 +1,111 @@
+name: Validate Kubernetes manifests
```

Suggested change:

```diff
-name: Validate Kubernetes manifests
+name: Deployment
```
This way the checks render as:
Deployment / validate
Deployment / security-scan (future)
Deployment / lint-helm (future)
@iSwiin There are pending changes blocking this PR. |
Description
Check: Infrastructure/build changes
Related Issue(s)
Closes #99
Type of Change
Testing
Opened a PR that modifies deployment/** and confirmed the k8s-validate workflow triggers.
Verified the schema validation step fails on an invalid manifest and passes after the fix.
Verified kube-score reports findings as warnings (the workflow still passes).
Checklist
Before Requesting Review
Code Quality
No `print` statements or `console.log` calls
No changes to `package-lock.json` (we use `yarn` only for the UI)
Screenshots (if applicable)
Additional Notes