OCPBUGS-36688: Move to use newer IPsec DaemonSets irrespective of MCP state #2454
base: master
Conversation
@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
Requesting review from QA contact: The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/assign @yuvalk @jcaamano @anuragthehatter @huiran0826
Force-pushed from b32b067 to 93d9013 (Compare)
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: pperiyasamy. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest
/test e2e-aws-ovn-ipsec-upgrade
/test e2e-ovn-ipsec-step-registry
@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid. 3 validation(s) were run on this bug
Requesting review from QA contact: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
I am fine with checking the "paused" spec field of the pools for now.
@@ -288,6 +309,12 @@ spec:
- -c
- |
  #!/bin/bash
  {{ if .IPsecCheckForLibreswan }}
  if rpm --dbpath=/usr/share/rpm -q libreswan; then
    echo "host has libreswan and therefore ipsec will be configured by ipsec host daemonset, this ovn ipsec container is always \"alive\""
What do you mean here with "is always alive"?
This is just to keep the liveness probe succeeding every time (when the host flavor is actually serving ipsec because the host already has libreswan installed); otherwise this pod would crashloop.
pkg/network/ovn_kubernetes.go
Outdated
data.Data["IPsecMachineConfigEnable"] = IPsecMachineConfigEnable
data.Data["OVNIPsecDaemonsetEnable"] = OVNIPsecDaemonsetEnable
data.Data["OVNIPsecEnable"] = OVNIPsecEnable
data.Data["IPsecCheckForLibreswan"] = renderBothIPsecDemonSetsWhenAPoolPausedState
Couldn't this just be
data.Data["IPsecCheckForLibreswan"] = renderIPsecHostDaemonSet && renderIPsecContainerizedDaemonSet
yes, done.
pkg/network/ovn_kubernetes.go
Outdated
machineConfigPoolPaused := isThereAnyMachineConfigPoolPaused(bootstrapResult.Infra)
isIPsecMachineConfigActiveInUnPausedPools := isIPsecMachineConfigActive(bootstrapResult.Infra, true)
I would move these two variables to the same block where renderBothIPsecDemonSetsWhenAPoolPausedState is defined. And then I would elaborate a bit more on the comment of that block, saying that if there are unpaused pools, we wait until those pools have the ipsec machine config active before deploying both daemonsets.
done
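To make the suggestion concrete, here is a minimal sketch of the grouped block the reviewer is describing; the standalone helper, its name, and its parameters are assumptions for illustration, not the repository's actual code:

package network

// renderBothDaemonSetsForPausedPools is a hypothetical helper mirroring the discussion above:
// when any machine config pool is paused, both IPsec daemonsets are deployed, but only once
// the unpaused pools have the IPsec machine config active.
func renderBothDaemonSetsForPausedPools(anyPoolPaused, ipsecMachineConfigActiveInUnpausedPools bool) bool {
	return anyPoolPaused && ipsecMachineConfigActiveInUnpausedPools
}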
pkg/network/ovn_kubernetes.go
Outdated
@@ -653,7 +664,7 @@ func shouldRenderIPsec(conf *operv1.OVNKubernetesConfig, bootstrapResult *bootst
// While OVN ipsec is being upgraded and IPsec MachineConfigs deployment is in progress
// (or) IPsec config in OVN is being disabled, then ipsec deployment is not updated.
renderIPsecDaemonSetAsCreateWaitOnly = isIPsecMachineConfigNotActiveOnUpgrade || (isOVNIPsecActive && !renderIPsecOVN)
renderIPsecDaemonSetAsCreateWaitOnly = (isIPsecMachineConfigNotActiveOnUpgrade && !renderBothIPsecDemonSetsWhenAPoolPausedState) || (isOVNIPsecActive && !renderIPsecOVN)
This condition is counter-intuitive.
What about
...isIPsecMachineConfigNotActiveOnUpgrade || !isIPsecMachineConfigActiveInUnPausedPools ...
Also since you changed the condition, please update the comment
The existing condition (isIPsecMachineConfigNotActiveOnUpgrade && !renderBothIPsecDemonSetsWhenAPoolPausedState) helps the case in which both daemonsets can be rendered without the create-wait annotation; that can't be done with the suggested approach.
So I guess what you mean to say is that you want to update the daemonsets if the machine config is active in unpaused pools and inactive in paused pools. But I am missing the reasoning.
- Updating the daemonsets is something we didn't want to do if the machine config was not updated. Why?
- And why can we update them in the case where the pools are paused? Are both these reasonings independent?
So I guess what you mean to say is that you want to update the daemonsets if the machine config is active in unpaused pools and inactive in paused pools. But I am missing the reasoning.
- Updating the daemonsets is something we didn't want to do if the machine config was not updated. Why?
This is the main issue we are trying to address with this PR: when the ipsec machine config is not active on paused pools, the operator updates both the host and containerized ipsec daemonsets so that the network upgrade is not blocked while ipsec stays enabled on the dataplane; otherwise it would stick with the previous version of the ipsec daemonset(s).
- And why can we update them in the case where the pools are paused? Are both these reasonings independent?
When pools are paused and the ipsec machine config is not active on those pools' nodes, the containerized daemonset pod configures IPsec on those nodes and the host-flavor pod has no impact at all.
Once these pools are unpaused and the ipsec machine config is installed, it switches back to using the host-flavor pod.
@jcaamano as discussed offline, updated the 4.15 PR (#2449) with the following:
- Update with both daemonsets as long as the ipsec machine config is not active in any of the pool(s).
- Get rid of checking the 'paused' pool.
- Remove the LegacyIPsecUpgrade checks as they're not needed anymore due to the update of both daemonsets at the start of the upgrade itself.
Would update this PR once the IPsec upgrade CI looks clean there.
pkg/network/ovn_kubernetes.go
Outdated
// The containerized ipsec deployment is only rendered during upgrades or
// for hypershift hosted clusters.
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade ||
Since you changed the condition, please update the comment
done
pkg/network/ovn_kubernetes.go
Outdated
// If ipsec is enabled, we render the host ipsec deployment except for
// hypershift hosted clusters and we need to wait for the ipsec MachineConfig
// extensions to be active first. We must also render host ipsec deployment
// at the time of upgrade though user created IPsec Machine Config is not
// present/active.
renderIPsecHostDaemonSet = (renderIPsecDaemonSet && isIPsecMachineConfigActive && !isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
renderIPsecHostDaemonSet = (renderIPsecDaemonSet && isIPsecMachineConfigActive && !isHypershiftHostedCluster) ||
Since you changed the condition, please update the comment
done
Force-pushed from 93d9013 to 90a1608 (Compare)
/retest
/test ?
@pperiyasamy: The following commands are available to trigger required jobs:
The following commands are available to trigger optional jobs:
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from 90a1608 to 14c5df7 (Compare)
pkg/bootstrap/types.go
Outdated
// MasterMCPs contains machine config pools having master role.
MasterMCPs []mcfgv1.MachineConfigPool

// WorkerMCPStatus contains machine config pool statuses for pools having worker role.
WorkerMCPStatuses []mcfgv1.MachineConfigPoolStatus
// WorkerMCPs contains machine config pools having worker role.
WorkerMCPs []mcfgv1.MachineConfigPool
Can we just keep the statuses? In theory, status should be all we base our decisions on.
Yes, right, but now we need to rely on MachineConfigPool for a new unit test covering MachineConfigPool in paused and unpaused states. Updated the commit message to reflect this.
Why would you need to unit test that if the functionality does not depend on that anymore? You are not really testing any new code path or anything. That should be an e2e test instead.
Yes @jcaamano, it makes more sense. Reverted back to using only MCP statuses now.
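For context on what a status-only check might look like, a minimal sketch based on the MachineConfigPool status fields; the function names, the matching rule, and the import path are assumptions, not the operator's actual implementation:

package network

import (
	corev1 "k8s.io/api/core/v1"

	mcfgv1 "github.com/openshift/api/machineconfiguration/v1" // import path assumed; may differ depending on the vendored MCO API
)

// ipsecMachineConfigActiveInAllPools (hypothetical) reports whether every given pool status
// lists the named IPsec MachineConfig in its rendered configuration source and has all of
// its machines updated, relying only on MachineConfigPoolStatus as agreed above.
func ipsecMachineConfigActiveInAllPools(statuses []mcfgv1.MachineConfigPoolStatus, ipsecMCName string) bool {
	for _, status := range statuses {
		if !sourceContains(status.Configuration.Source, ipsecMCName) {
			return false
		}
		if status.MachineCount != status.UpdatedMachineCount {
			return false
		}
	}
	return true
}

// sourceContains checks whether a MachineConfig with the given name appears in the
// pool's rendered configuration sources.
func sourceContains(sources []corev1.ObjectReference, name string) bool {
	for _, src := range sources {
		if src.Name == name {
			return true
		}
	}
	return false
}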
// The containerized ipsec deployment is only rendered during upgrades or
// for hypershift hosted clusters.
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
// hypershift hosted clusters. We must also render host ipsec daemonset
Have you checked that the comment block for the method itself (lines 594-608) is accurate?
Yes, updated the method comment about the new upgrade behavior.
Force-pushed from 14c5df7 to ac0a438 (Compare)
/retest
1 similar comment
/retest
/jira refresh
@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is invalid:
Comment In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/jira refresh
@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid. 3 validation(s) were run on this bug
Requesting review from QA contact: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/test e2e-aws-ovn-ipsec-upgrade
The ipsec upgrade is successful in the run ci/prow/e2e-aws-ovn-ipsec-upgrade, and the pluto logs look clean. There are still two API connection related disruption tests failing. Rerunning the job...
/test e2e-aws-ovn-ipsec-upgrade
Force-pushed from ac0a438 to f9a4af0 (Compare)
/retest
Force-pushed from f9a4af0 to 422feed (Compare)
During upgrade, when the MCP is not in a ready state with the ipsec machine config, the network operator continues to render the older IPsec daemonsets, which blocks the network cluster operator from getting upgraded to the newer version. Hence this commit renders the newer IPsec daemonsets immediately; with the new IPsecCheckForLibreswan check, either one of the pods serves IPsec for the node. When the MCPs are fully rolled out with the ipsec machine config, it goes ahead with rendering only the host-flavored IPsec daemonset. Signed-off-by: Periyasamy Palanisamy <[email protected]>
Force-pushed from 422feed to 14bf304 (Compare)
/retest
@pperiyasamy: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
@jcaamano I think this PR now depends on #2383 because the network CO is reported as successful as soon as the ipsec daemonsets are deployed successfully, but the ipsec machine configs are rolled out a bit later and no status is reported for that. Because of this, openshift-install returns prematurely without waiting for the ipsec machine config rollout to complete. I think this caused the failure on this ipsec CI run, but it doesn't have the mg report to prove this explanation.
When a machine config pool is in a paused state, the network operator currently does the following.
During a fresh IPsec install, it just keeps waiting for the IPsec machine config to be rolled out on all cluster nodes, and only then does it start rendering the IPsec host daemonset, which gets the dataplane into an IPsec encrypted state. So as long as any of the machine config pools is in a paused state, the cluster never gets IPsec enabled.
During a legacy upgrade, let's say from 4.14 to 4.15, it just continues to render the older 4.14 IPsec daemonsets, which blocks the network cluster operator from getting upgraded to 4.15 (this scenario may not happen when the user upgrades IPsec from 4.15 to 4.16).
Hence this PR renders both of the newer IPsec daemonsets during this MCP pause period. When the MCPs are moved to an unpaused state and the IPsec machine configs are installed on them, it goes ahead with rendering only the host-flavored IPsec daemonset.
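To summarize the intended behavior, a simplified, hypothetical sketch of the decision described above; the type and function names are illustrative only, and the real logic (shouldRenderIPsec in pkg/network/ovn_kubernetes.go) takes more inputs, such as hypershift and upgrade state:

package network

// ipsecRenderDecision is an illustrative container for the outcome of the decision.
type ipsecRenderDecision struct {
	renderHostDaemonSet          bool
	renderContainerizedDaemonSet bool
	checkForLibreswan            bool // drives the IPsecCheckForLibreswan template flag
}

// decideIPsecDaemonSets renders both daemonsets while the IPsec MachineConfig has not yet
// rolled out everywhere (for example because a pool is paused), and only the host flavor
// once it has.
func decideIPsecDaemonSets(ipsecEnabled, ipsecMachineConfigActiveEverywhere bool) ipsecRenderDecision {
	if !ipsecEnabled {
		return ipsecRenderDecision{}
	}
	if ipsecMachineConfigActiveEverywhere {
		// Machine config rolled out on all pools: the host-flavored daemonset alone serves IPsec.
		return ipsecRenderDecision{renderHostDaemonSet: true}
	}
	// Machine config still rolling out, or a pool is paused: render both daemonsets.
	// On each node the libreswan rpm check in the template decides which pod actually
	// serves IPsec, so the dataplane gets encrypted without waiting for the MCP.
	return ipsecRenderDecision{
		renderHostDaemonSet:          true,
		renderContainerizedDaemonSet: true,
		checkForLibreswan:            true,
	}
}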