RFC for Degraded NodePool Status Condition #1910
Conversation
Checkpointing
designs/degraded-nodepools.md
#### Considerations

1. 👎 Heuristics can be wrong and mask failures
Could you elaborate on what type of failures are being masked? As for it being wrong, I'm wondering if we should only ever consider `Degraded` to be `Unknown` or `True`. Maybe we don't ever transition it to `False`?
designs/degraded-nodepools.md
One example is that when a network path does not exist due to a misconfigured VPC (network access control lists, subnets, route tables), Karpenter will not be able to provision compute with that NodeClass that can join the cluster until the error is fixed. Crucially, users will continue to be charged for compute that can never be used in the cluster.

To improve visibility of these failure modes, this RFC proposes adding a `Degraded` status condition on the NodePool that indicates to cluster users that there may be a problem with a NodePool/NodeClass combination that needs to be investigated and corrected.
As Jason has called out both online and offline, I would make our motivation front and center here. Why do we think that there is a need for something like this to exist? Does it make tracking down failures to NodePools easier? Does alarming get easier with this kind of a setup?
designs/degraded-nodepools.md
Evaluation conditions:

1. We start with an empty buffer and `Degraded: Unknown`.
2. There must be at least 2 failures in the buffer for `Degraded` to transition to `True`; with a 10-entry buffer, this amounts to an 80% success threshold.
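For concreteness, here is a minimal Go sketch of one possible reading of these evaluation conditions (an assumption, not something the RFC prescribes): an empty buffer or a lone failure leaves the condition `Unknown`, two or more failures flip it to `True`, and otherwise a recorded success clears it to `False`.

```go
package nodepoolhealth

// Outcome records whether a launch/registration attempt succeeded.
type Outcome bool

// minFailures: 2 failures in a 10-entry buffer corresponds to the ~80% success threshold above.
const minFailures = 2

// Evaluate maps the buffer contents to a Degraded condition status string.
func Evaluate(buffer []Outcome) string {
	successes, failures := 0, 0
	for _, o := range buffer {
		if o {
			successes++
		} else {
			failures++
		}
	}
	switch {
	case failures >= minFailures:
		return "True" // enough failures to consider the NodePool degraded
	case successes > 0:
		return "False" // at least one success and too few failures
	default:
		return "Unknown" // empty buffer, or a single failure with no successes yet
	}
}
```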
One thing that I still feel would be better here is if we considered flipping the polarity of this condition type. `Degraded: False` meaning that it's healthy feels a bit weird to me, but I get that we'd have to come up with some other word besides "Degraded" that isn't "Ready" and probably isn't "Healthy" to really reflect what this condition is evaluating.
designs/degraded-nodepools.md
Unsuccessful Launch: -1

[] = 'Degraded: Unknown'
[-1] = 'Degraded: Unknown'
nit: It's slightly confusing to call this "Degraded: Unknown". The only reason I say that is because this doesn't necessarily mean that we transition the condition to Unknown when the condition is already set -- I know this is said above, but I did find it a tad semantically odd as I was reading through this and trying to parse out the design.
Last Transition Time: 2025-01-13T18:57:20Z
Message:
Observed Generation: 1
Reason: Degraded
One thing that I had discussed with Reed is making `Reason` a more structured object and putting a serialized string output of it here, since I understand that this field has to be a string. That way, we can expose more details, including error codes describing the reason behind the degradation and the resource IDs/dependents causing it, as well as have more than one reason for the degradation. Making it a structured object will also allow us to parse it better for metrics.
Is `reason` the right field to surface that level of detail? I agree with the direction, but it seems like `message` would be more appropriate.
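To illustrate the direction both of these comments point at, here is a hedged Go sketch of a structured payload serialized into the condition's `message`; the `DegradedDetail` type and its fields are hypothetical and not part of the RFC.

```go
package nodepoolhealth

import (
	"encoding/json"
	"time"
)

// DegradedDetail is a hypothetical structured record of one degradation cause.
type DegradedDetail struct {
	Code       string    `json:"code"`                // e.g. "RegistrationTimeout" (illustrative)
	Resources  []string  `json:"resources,omitempty"` // IDs of dependents implicated in the failure
	LastFailed time.Time `json:"lastFailed"`
}

// MessageFor renders the structured details as a single string suitable for
// the condition's message field, which can later be parsed for metrics.
func MessageFor(details []DegradedDetail) (string, error) {
	b, err := json.Marshal(details)
	return string(b), err
}
```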
This RFC proposes enhancing the visibility of these failure modes by introducing a `Degraded` status condition on the NodePool. We can then create new metrics/metric labels around this status condition, which will improve observability by alerting cluster administrators to potential issues within a NodePool that require investigation and resolution.

The `Degraded` status would specifically highlight instance launch/registration failures that Karpenter cannot fully diagnose or predict. However, this status should not be a mechanism to catch all types of launch/registration failures. Karpenter should not mark resources as `Degraded` if it can definitively determine, based on the NodePool/NodeClass configurations or through a dry run, that launch or registration will fail. For instance, if a NodePool is restricted to a specific zone using the `topology.kubernetes.io/zone` label, but the specified zone is not accessible through the provided subnet configurations, this inconsistency shouldn't trigger a `Degraded` status.
> For instance, if a NodePool is restricted to a specific zone using the `topology.kubernetes.io/zone` label, but the specified zone is not accessible through the provided subnet configurations, this inconsistency shouldn't trigger a `Degraded` status.

Can we enumerate the different semantics for failures that we'd want to capture as different `.Reason`s that should trigger `Degraded == true`, e.g. `badSecurityGroup`?
Major +1 to this -- I think what we need to explore here is how we are going to capture these failure modes -- if we are just relying on the registration timeout being hit, it's going to be tough to know what the reason was that the Node failed to join
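For illustration only, an enumeration along the lines suggested above might look like the following Go constants; the names are hypothetical and merely mirror the `badSecurityGroup` and registration-timeout examples from this thread.

```go
package nodepoolhealth

// Hypothetical reason values for Degraded=True; the RFC does not define these.
const (
	ReasonRegistrationTimeout = "RegistrationTimeout" // node launched but never joined before the timeout
	ReasonBadSecurityGroup    = "BadSecurityGroup"    // e.g. missing outbound rule blocking registration
	ReasonLaunchFailed        = "LaunchFailed"        // cloud provider rejected or failed the launch
)
```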
### Option 1: In-memory Buffer to store history - Recommended

This option will have an in-memory FIFO buffer, which will grow to a max size of 10 (this can be changed later). This buffer will store data about the success or failure during launch/registration and is evaluated by a controller to determine the relative health of the NodePool. This will be an int buffer and a positive means `Degraded: False`, negative means `Degraded: True` and 0 means `Degraded: Unknown`.
If I understand correctly, the final sentence states that the buffer can have three values: `-1` (degraded true), `0` (unknown), `1` (degraded false). I don't think this matches the example, which only has two values, and the values map to launch success/failure, not the actual degraded state, right? I think it might be more clear like this:
- This option will have an in-memory FIFO buffer, which will grow to a max size of 10 (this can be changed later). This buffer will store data about the success or failure during launch/registration and is evaluated by a controller to determine the relative health of the NodePool. This will be an int buffer and a positive means `Degraded: False`, negative means `Degraded: True` and 0 means `Degraded: Unknown`.
+ This option will have an in-memory FIFO buffer, which will grow to a max size of 10 (this can be changed later). This buffer will store data about the success or failure during launch/registration and is evaluated by a controller to determine the relative health of the NodePool. This would be implemented as a `[]bool`, where `true` indicates a launch success, and `false` represents a failure. The state of the degraded condition would be based on the number of `false` entries in the buffer.
Agreed, that's a miss on my end. I can update this to reflect two states instead of positive, negative, neutral.
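A minimal Go sketch of what that two-state buffer could look like (an assumed shape for discussion, not the final implementation):

```go
package nodepoolhealth

// launchBuffer is a fixed-capacity FIFO of launch/registration outcomes:
// true = success, false = failure.
type launchBuffer struct {
	entries  []bool
	capacity int
}

func newLaunchBuffer(capacity int) *launchBuffer {
	return &launchBuffer{capacity: capacity} // e.g. 10, per the RFC
}

// Record appends an outcome, evicting the oldest entry once the buffer is full.
func (b *launchBuffer) Record(success bool) {
	b.entries = append(b.entries, success)
	if len(b.entries) > b.capacity {
		b.entries = b.entries[1:]
	}
}

// Failures counts false entries; the Degraded condition would be derived from this count.
func (b *launchBuffer) Failures() int {
	n := 0
	for _, ok := range b.entries {
		if !ok {
			n++
		}
	}
	return n
}
```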
## Motivation

Karpenter may initiate the creation of nodes based on a NodePool configuration, but these nodes might fail to join the cluster due to unforeseen registration issues that Karpenter cannot anticipate or prevent. One example is when network connectivity is impeded by an incorrect cluster security group configuration, such as a missing rule that allows outbound access to any IPv4 address. In such cases, Karpenter will continue its attempts to provision compute resources, but these resources will fail to join the cluster until the security group's outbound rule is updated. The critical concern here is that users will incur charges for these compute resources despite their inability to be utilized within the cluster.
To be clear: I don't think that we're really solving for the problem of cost here -- we're still going to be launching instances and retrying
[-1, +1, +1, +1, +1, +1, +1, +1, +1, +1] = 'Degraded: False'

#### Considerations
We discussed this, but what happens if we have one success and that causes us to stop trying new NodeClaims -- when that happens, what's the way that we make sure that we eventually get out of a Degraded state?
I haven't updated the RFC since our discussion about that. But @rschalo and I were thinking about expiring the entries in the buffer after some time - maybe 3x the registration ttl.
I think one of the options we had discussed was expiring entries some amount of time after the last write so that there is some recency bias.
Expiring entries like this should also take into account when the last update to the buffer was made: if updates are frequent enough (we can define the window), then we don't expire entries until the buffer is full.
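A rough Go sketch of the expiry idea being discussed, assuming each entry carries a timestamp and entries older than some TTL (for example 3x the registration TTL) are dropped so a stale verdict eventually ages out:

```go
package nodepoolhealth

import "time"

// timedOutcome pairs a launch/registration outcome with when it was recorded.
type timedOutcome struct {
	success    bool
	recordedAt time.Time
}

// expire drops entries older than ttl, so the buffer reflects only recent
// launches even if no new NodeClaims have been created for a while.
func expire(entries []timedOutcome, ttl time.Duration, now time.Time) []timedOutcome {
	kept := entries[:0]
	for _, e := range entries {
		if now.Sub(e.recordedAt) < ttl {
			kept = append(kept, e)
		}
	}
	return kept
}
```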
1. 👎 Three retries can still be a long time to wait on compute that never provisions correctly.
2. 👎 Setting `Degraded: False` on an update to a NodePool implies that Karpenter can vet with certainty that the NodePool is correctly configured, which is misleading.

### How Does this Affect Metrics and Improve Observability?
I'm still a bit fuzzy on the reason that we need this extra label
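For context on how the condition might surface in metrics, here is a hedged sketch; the metric name, namespace, and labels are hypothetical and not something the RFC defines.

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// nodePoolDegraded is a hypothetical gauge exposing the Degraded condition per
// NodePool, so operators could alarm on a value of 1 for a given NodePool.
var nodePoolDegraded = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Namespace: "karpenter",
		Subsystem: "nodepool",
		Name:      "degraded",
		Help:      "1 if the NodePool's Degraded condition is True, 0 otherwise.",
	},
	[]string{"nodepool", "reason"},
)

func init() {
	prometheus.MustRegister(nodePoolDegraded)
}
```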
2. True - The NodePool has configuration issues that require customer investigation and resolution. Since Karpenter cannot automatically detect these specific launch or registration failures, we will document common failure scenarios and possible fixes in our troubleshooting guide to assist customers.
3. False - There has been successful node registration using this NodePool.

The state transition is not unidirectional, meaning it can go from True to False and back to True or Unknown. A NodePool marked as Degraded can still be used for provisioning workloads, as this status isn't a precondition for readiness. However, when multiple NodePools have the same weight, a degraded NodePool will receive lower priority during the provisioning process compared to non-degraded ones.
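As a rough sketch of the prioritization described above (the ordering logic is assumed here; the RFC does not spell it out): among NodePools of equal weight, non-degraded ones would be considered first.

```go
package scheduling

import "sort"

// candidate is a simplified stand-in for a NodePool being considered for provisioning.
type candidate struct {
	name     string
	weight   int32
	degraded bool
}

// orderCandidates sorts by weight (higher first) and, within the same weight,
// places non-degraded NodePools ahead of degraded ones.
func orderCandidates(cs []candidate) {
	sort.SliceStable(cs, func(i, j int) bool {
		if cs[i].weight != cs[j].weight {
			return cs[i].weight > cs[j].weight
		}
		return !cs[i].degraded && cs[j].degraded
	})
}
```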
If the other NodePools are healthy, when will we ever retry the degraded NodePool? Also, why not apply this to NodePools with different weights?
Description
Adding RFC for Degraded NodePool Status Condition.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.