MULTIARCH-4252: namespace-scoped Pod Placement Config #453
base: main
Conversation
@aleskandro: This pull request references MULTIARCH-4252, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.19.0" version, but no target version was set. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
(force-pushed from f90ad13 to c364649)
/cc @jeffdyoung @prb112
/lgtm
minor comment
This is awesome... minor change since last update
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: prb112. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
(force-pushed from 29c97dc to 1598da1)
(force-pushed from f99229e to c2769c2)
/lgtm
- The pod placement controller should apply the configuration defined by the namespace-scoped `PodPlacementConfig` CRD to the pods in the selected namespaces; for those pods, the cluster-scoped `ClusterPodPlacementConfig` configuration for preferred affinities should be ignored
- The pod placement controller should apply the configuration defined by the highest-priority namespace-scoped `PodPlacementConfig` CRD to pods matching multiple `PodPlacementConfig` CRDs in the same namespace
- No two PPCs with the same priority should be allowed in the same namespace
- When no priority is set in the `PodPlacementConfig`, the operator should default to one less than the lowest priority among the existing Pod Placement Configs in the same namespace
What will occur when a new PPC with no priority is created but the current lowest priority of the PPCs in the namespace is already 0? Will the creation be rejected?
That's a good point @lwan-wanglin. I updated the proposal to default to 0 and increase the value for each new PPC that gets created.
In this way:
- Newly created PPCs (if the priority field is omitted) get a higher priority (instead of a lower one)
- The issue arises when the highest priority is 255, in which case we'd reject the creation (adding a test case for this too)
I'm wondering if we are being too rigid with the priority. If no priority is defined, we should just set it to 0. If there are multiple PPCs with no priority, it's a random pick. If the user really cares, they will specify the priority. This way it's less deterministic when PPCs get evaluated, but assuming that a PPC created first has a higher priority might not be the best assumption to go with.
On the other hand, if a value is specified, we should take care that no other PPC has the same value; I'm fine with that.
Given what you say, is your idea to:
- Allow multiple PPCs with priority 0 (when users omit it, we set it to 0 automatically)
- Disallow multiple PPCs with the same non-zero priority
?
That was the idea. If we wanted, we could allow multiple PPCs with the same non-zero priority as well, but then we'd have to think about a strategy for the order of evaluating PPCs within each priority bucket (FIFO, LIFO, etc.). I'm not sure that level of complexity is required at first, so I thought that if a priority is specified, we make sure it doesn't collide.
I'm removing the auto-increment behavior of the priority field while still ensuring the priorities are unique within the same namespace.
I wouldn't go into defining a sorting system among 'buckets' of PPCs with the same priority either.
When users do not set the priority, it defaults to 0.
Finally, if a PPC with priority 0 already exists, the creation fails, asking the user to set the priority explicitly.
See 6f4e4ca, now squashed
(force-pushed from 58bbec9 to 4753570)
/lgtm
New changes are detected. LGTM label has been removed.
(force-pushed from b232ba3 to 6f4e4ca)
(force-pushed from 6f4e4ca to a39df83)
@aleskandro: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
This PR provides the design proposal for the namespace-scoped Pod Placement Configuration, along with minor fixes to the (to-be-updated) initial enhancement proposal.