feat: create ClusterConfig CRD #1233
Conversation
Still need to go through the config functions more and test out a few deploy scenarios, but I wanted to get this initial round of feedback out. The approach looks sound and aligns with the design, so no overall concerns - I just want to validate that there are no timing or upgrade issues with the new CRD approach.
```ts
configLog.error(
  "Invalid CA Cert provided in uds-operator-config ClusterConfig, falling back to no CA Cert",
);
```
This might be a good one to add an event on the ClusterConfig CR, since we fall back to a default and users might get unexpected behavior if they just made a typo or something.
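Something along these lines is what I have in mind - a rough sketch only, where the helper name, the reason/message strings, and the shape of `cfg` are illustrative rather than this PR's actual code:

```ts
import { K8s, kind } from "pepr";

// Hypothetical helper: surface the CA cert fallback as a Warning event on the
// ClusterConfig CR so a typo'd cert is visible via `kubectl describe`.
async function warnInvalidCACert(cfg: {
  apiVersion?: string;
  kind?: string;
  metadata?: { name?: string; namespace?: string; uid?: string };
}) {
  await K8s(kind.CoreEvent).Create({
    type: "Warning",
    reason: "InvalidCACert",
    message: "Invalid CA Cert provided in uds-operator-config ClusterConfig, falling back to no CA Cert",
    metadata: {
      namespace: cfg.metadata?.namespace,
      generateName: `${cfg.metadata?.name}-`,
    },
    // Tie the event back to the ClusterConfig CR that triggered the fallback
    involvedObject: {
      apiVersion: cfg.apiVersion,
      kind: cfg.kind,
      name: cfg.metadata?.name,
      namespace: cfg.metadata?.namespace,
      uid: cfg.metadata?.uid,
    },
    reportingComponent: "uds-operator",
  });
}
```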
```ts
const cfgSecretList = await K8s(kind.Secret).InNamespace("pepr-system").Get();

const cfgSecret = cfgSecretList.items.find(s => s.metadata?.name === "uds-operator-config");
```
I might be missing intent here, but `.Get()` supports getting a secret by name, i.e. this could just be one line:
Suggested change:

```diff
-const cfgSecretList = await K8s(kind.Secret).InNamespace("pepr-system").Get();
-const cfgSecret = cfgSecretList.items.find(s => s.metadata?.name === "uds-operator-config");
+const cfgSecret = await K8s(kind.Secret).InNamespace("pepr-system").Get("uds-operator-config");
```
This could also be used for the cluster config, but I see that you're also using the full get to error if multiple cluster configs are found, so maybe that one makes sense as is?
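For reference, a sketch of that list-then-validate pattern - the `ClusterConfig` model import path and the error text here are assumptions, not this PR's actual code:

```ts
import { K8s } from "pepr";
// Assumed import: the generated model class for the new CRD (path is hypothetical)
import { ClusterConfig } from "../crd/generated/clusterconfig-v1alpha1";

// A full Get() lets us error out when more than one ClusterConfig exists,
// rather than silently picking one - which a by-name Get() couldn't detect.
const configs = await K8s(ClusterConfig).Get();

if (configs.items.length > 1) {
  throw new Error(`Expected a single ClusterConfig, found ${configs.items.length}`);
}

const clusterConfig = configs.items[0];
```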
```diff
@@ -82,4 +83,39 @@ export async function registerCRDs() {
     process.exit(1);
   });
 }

 // Register the Exemption CRD if we're in "admission" or dev mode (Exemptions are watched by the admission controllers)
```
Comment needs updating to reflect the right CRD, but we should also probably update the conditional to run regardless of `PEPR_WATCH_MODE`? (Although this isn't actually in the execution code anymore.)
```ts
pattern:
  "^(([0-9]{1,3}\\.){3}[0-9]{1,3}\\/[0-9]+)|(([a-fA-F0-9:]+:+)+[a-fA-F0-9:]+(/[0-9]+)?)$", // Matches IPv4 and IPv6 CIDR
```
Assuming this regex works, can we use the same on the `kubeapiCIDR`?
It might also be nice to have tests against this regex in our code somewhere - I think @UnicornChance briefly looked at this for the `remoteCidr` stuff and we ended up just leaving the validation to k8s because we couldn't find a succinct and solid approach to validating both IPv4 and IPv6. A sketch of what such tests could look like follows below.
```ts
adminDomain: {
  type: "string",
  description:
    "Domain all cluster services on the admin gateawy will be exposed on",
```
"Domain all cluster services on the admin gateawy will be exposed on", | |
"Domain all cluster services on the admin gateway will be exposed on", |
```yaml
cluster:
  attributes:
    UDS_CLUSTER_NAME: "###ZARF_VAR_CLUSTER_NAME###"
    UDS_CLUSTER_TAGS: "###ZARF_VAR_CLUSTER_TAGS###"
  expose:
    # Domain configuration (admin defaults to `admin.UDS_DOMAIN`)
    UDS_DOMAIN: "###ZARF_VAR_DOMAIN###"
    UDS_ADMIN_DOMAIN: "###ZARF_VAR_ADMIN_DOMAIN###"
    UDS_CA_CERT: "###ZARF_VAR_CA_CERT###"
  policy:
    UDS_ALLOW_ALL_NS_EXEMPTIONS: "###ZARF_VAR_ALLOW_ALL_NS_EXEMPTIONS###"
  networking:
    KUBEAPI_CIDR: ""
    KUBENODE_CIDRS: []
```
imo all of these can follow better values naming standards. As long as we are maintaining parity on the zarf vars, these are net new keys so we can name them better. For example:
Suggested change:

```diff
-cluster:
-  attributes:
-    UDS_CLUSTER_NAME: "###ZARF_VAR_CLUSTER_NAME###"
-    UDS_CLUSTER_TAGS: "###ZARF_VAR_CLUSTER_TAGS###"
-  expose:
-    # Domain configuration (admin defaults to `admin.UDS_DOMAIN`)
-    UDS_DOMAIN: "###ZARF_VAR_DOMAIN###"
-    UDS_ADMIN_DOMAIN: "###ZARF_VAR_ADMIN_DOMAIN###"
-    UDS_CA_CERT: "###ZARF_VAR_CA_CERT###"
-  policy:
-    UDS_ALLOW_ALL_NS_EXEMPTIONS: "###ZARF_VAR_ALLOW_ALL_NS_EXEMPTIONS###"
-  networking:
-    KUBEAPI_CIDR: ""
-    KUBENODE_CIDRS: []
+cluster:
+  attributes:
+    clusterName: "###ZARF_VAR_CLUSTER_NAME###"
+    clusterTags: "###ZARF_VAR_CLUSTER_TAGS###"
+  expose:
+    # Domain configuration (admin defaults to `admin.<domain>`)
+    domain: "###ZARF_VAR_DOMAIN###"
+    adminDomain: "###ZARF_VAR_ADMIN_DOMAIN###"
+    caCert: "###ZARF_VAR_CA_CERT###"
+  policy:
+    allowAllNsExemptions: "###ZARF_VAR_ALLOW_ALL_NS_EXEMPTIONS###"
+  networking:
+    kubeApiCidrs: ""
+    kubeNodeCidrs: []
```
For context, the other keys were all caps + `_` to match env standards, since they were directly mounted as env by pepr. Since we're no longer doing that, it would be good to align with helm norms of camelCase here.
```yaml
UDS_CLUSTER_NAME: "###ZARF_VAR_CLUSTER_NAME###"
UDS_CLUSTER_TAGS: "###ZARF_VAR_CLUSTER_TAGS###"
```
For cluster name and tags, imo we don't need to expose a zarf var for these unless there's a specific UI use case I'm missing here - they could just be set as overrides?
Specific to tags, shouldn't that be a list rather than a string (maybe it's a string because of the zarf var right now though)? It could also be useful to have a values schema for this chart now that we're adding more complexity to it.
On the tags comment: I think switching to a proper list would also help clean up the helm template for tags.
```yaml
{{- range $index, $cidr := .Values.operator.KUBENODE_CIDRS }}
  - {{ $cidr | quote }}
{{- end }}
```
I'll probably have to test this out locally, so putting a placeholder comment for now, but this templating looks off to me. I'd expect that we need to split on `,` here and range over that (example of this). By default I think range here will either glob things together or it might range over each character?
```bash
# Label the specific CRDs
./zarf tools kubectl label $(./zarf tools kubectl get crds | awk '/exemptions.uds.dev|packages.uds.dev/ {print "crd/" $1}') "app.kubernetes.io/managed-by=Helm" --overwrite || true

# Annotate with release name
./zarf tools kubectl annotate $(./zarf tools kubectl get crds | awk '/exemptions.uds.dev|packages.uds.dev/ {print "crd/" $1}') "meta.helm.sh/release-name=uds-cluster-crds" --overwrite || true

# Annotate with release namespace
./zarf tools kubectl annotate $(./zarf tools kubectl get crds | awk '/exemptions.uds.dev|packages.uds.dev/ {print "crd/" $1}') "meta.helm.sh/release-namespace=pepr-system" --overwrite || true
```
This feels overly complex, we should be able to just directly label/annotate two CRDs in a single kubectl call without needing a get/awk in there?
Suggested change:

```diff
-# Label the specific CRDs
-./zarf tools kubectl label $(./zarf tools kubectl get crds | awk '/exemptions.uds.dev|packages.uds.dev/ {print "crd/" $1}') "app.kubernetes.io/managed-by=Helm" --overwrite || true
-
-# Annotate with release name
-./zarf tools kubectl annotate $(./zarf tools kubectl get crds | awk '/exemptions.uds.dev|packages.uds.dev/ {print "crd/" $1}') "meta.helm.sh/release-name=uds-cluster-crds" --overwrite || true
-
-# Annotate with release namespace
-./zarf tools kubectl annotate $(./zarf tools kubectl get crds | awk '/exemptions.uds.dev|packages.uds.dev/ {print "crd/" $1}') "meta.helm.sh/release-namespace=pepr-system" --overwrite || true
+# Label the specific CRDs
+./zarf tools kubectl label crd exemptions.uds.dev packages.uds.dev "app.kubernetes.io/managed-by=Helm" --overwrite || true
+
+# Annotate with release name
+./zarf tools kubectl annotate crd exemptions.uds.dev packages.uds.dev "meta.helm.sh/release-name=uds-cluster-crds" --overwrite || true
+
+# Annotate with release namespace
+./zarf tools kubectl annotate crd exemptions.uds.dev packages.uds.dev "meta.helm.sh/release-namespace=pepr-system" --overwrite || true
```
I think the resulting command ends up being the same unless someone had only one of the CRs on their cluster - but the net result should always be the same due to the `|| true` fallback.
```yaml
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
```
Placeholder note, mostly for myself - I've encountered issues in the past when charts are updated without a version bump. I think zarf will upgrade/apply regardless, but we may want to validate that we can upgrade CRDs easily here without versioning this chart. If not, we may want to automate the version bump or ensure there are clear dev notes on how to do it.
Description
This PR:
Related Issue
Design Doc
https://www.notion.so/ClusterConfig-v1-183e512f24fc80879e7cce7731617039