
OCPBUGS-36688: Move to use newer IPsec DaemonSets irrespective of MCP state #2454

Open

wants to merge 1 commit into base: master

Conversation

@pperiyasamy (Member) commented Aug 1, 2024

When a machine config pool is in the paused state, the network operator currently does the following:

  1. During a fresh IPsec install, it keeps waiting for the IPsec machine config to be rolled out on all cluster nodes, and only then starts rendering the IPsec host daemonset, which is what gets the dataplane into the IPsec-encrypted state. So as long as any machine config pool is paused, the cluster never gets IPsec enabled.

  2. During a legacy upgrade, say from 4.14 to 4.15, it just continues to render the older 4.14 IPsec daemonsets, which blocks the network cluster operator from upgrading to 4.15 (this scenario may not happen when the user upgrades IPsec from 4.15 to 4.16).

Hence this PR renders both newer IPsec daemonsets during this MCP pause period. Once the MCPs are unpaused and the IPsec machine configs are installed on them, it goes ahead with rendering only the host-flavored IPsec daemonset.
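As a purely illustrative sketch of the decision described above (the names isHypershiftHostedCluster and isIPsecMachineConfigActive come from the review diff further down; the function signature and inputs here are assumptions, not the operator's actual code):

```go
package main

import "fmt"

// renderIPsecDaemonSets sketches the behavior this PR describes: while the IPsec
// machine config is not yet active everywhere (for example because a pool is
// paused), render both the host and the containerized IPsec daemonsets so the
// dataplane can still be encrypted and the upgrade is not blocked; once the
// machine config is active, render only the host-flavored daemonset.
func renderIPsecDaemonSets(ipsecEnabled, isHypershiftHostedCluster, isIPsecMachineConfigActive bool) (host, containerized bool) {
	if !ipsecEnabled {
		return false, false
	}
	if isHypershiftHostedCluster {
		// Hosted clusters have no machine configs; only the containerized flavor applies.
		return false, true
	}
	if isIPsecMachineConfigActive {
		// Steady state: libreswan is installed on the nodes via the machine config
		// extension, so only the host-flavored daemonset is needed.
		return true, false
	}
	// Machine config still rolling out (or pools paused): render both, and let the
	// per-node libreswan check decide which pod actually serves IPsec.
	return true, true
}

func main() {
	host, containerized := renderIPsecDaemonSets(true, false, false)
	fmt.Printf("host=%v containerized=%v\n", host, containerized) // host=true containerized=true
}
```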

@openshift-ci-robot added labels on Aug 1, 2024: jira/severity-moderate (Referenced Jira bug's severity is moderate for the branch this PR is targeting), jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type), jira/valid-bug (Indicates that a referenced Jira bug is valid for the branch this PR is targeting).
@openshift-ci-robot (Contributor)

@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.17.0) matches configured target version for branch (4.17.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @anuragthehatter

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

When a machine config pool is in the paused state, the network operator currently does the following:

  1. During a fresh IPsec install, it keeps waiting for the IPsec machine config to be rolled out on all cluster nodes, and only then starts rendering the IPsec host daemonset, which is what gets the dataplane into the IPsec-encrypted state. So as long as any machine config pool is paused, the cluster never gets IPsec enabled.

  2. During a legacy upgrade, say from 4.14 to 4.15, it just continues to render the older 4.14 IPsec daemonsets, which blocks the network cluster operator from upgrading to 4.15 (this scenario may not happen when the user upgrades IPsec from 4.15 to 4.16).

Hence this PR renders both newer IPsec daemonsets during this MCP pause period. Once the MCPs are unpaused and the IPsec machine configs are installed on them, it goes ahead with rendering only the host-flavored IPsec daemonset.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


openshift-ci bot commented Aug 1, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: pperiyasamy
Once this PR has been reviewed and has the lgtm label, please ask for approval from jcaamano. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@pperiyasamy (Member Author)

/retest

@pperiyasamy (Member Author)

/test e2e-aws-ovn-ipsec-upgrade

@pperiyasamy (Member Author)

/test e2e-ovn-ipsec-step-registry

@openshift-ci-robot (Contributor)

@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.17.0) matches configured target version for branch (4.17.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @anuragthehatter

In response to this:

When a machine config pool is in the paused state, the network operator currently does the following:

  1. During a fresh IPsec install, it keeps waiting for the IPsec machine config to be rolled out on all cluster nodes, and only then starts rendering the IPsec host daemonset, which is what gets the dataplane into the IPsec-encrypted state. So as long as any machine config pool is paused, the cluster never gets IPsec enabled.

  2. During a legacy upgrade, say from 4.14 to 4.15, it just continues to render the older 4.14 IPsec daemonsets, which blocks the network cluster operator from upgrading to 4.15 (this scenario may not happen when the user upgrades IPsec from 4.15 to 4.16).

Hence this PR renders both newer IPsec daemonsets during this MCP pause period. Once the MCPs are unpaused and the IPsec machine configs are installed on them, it goes ahead with rendering only the host-flavored IPsec daemonset.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jcaamano (Contributor) left a comment


I am fine with checking the "paused" spec field of the pools for now.

@@ -288,6 +309,12 @@ spec:
- -c
- |
#!/bin/bash
{{ if .IPsecCheckForLibreswan }}
if rpm --dbpath=/usr/share/rpm -q libreswan; then
echo "host has libreswan and therefore ipsec will be configured by ipsec host daemonset, this ovn ipsec container is always \"alive\""
Contributor

What do you mean here by "is always alive"?

Member Author

This is just to make the liveness probe succeed every time (when the host flavor is actually serving IPsec because the host already has libreswan installed); otherwise this pod would crash-loop.
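For illustration, here is a minimal Go sketch of the per-node decision that the shell snippet in the template makes; the real check is the rpm --dbpath=/usr/share/rpm -q libreswan call quoted in the diff above, and this Go version only mirrors its logic, it is not part of the PR:

```go
package main

import (
	"fmt"
	"os/exec"
)

// hostHasLibreswan mirrors the shell check in the containerized IPsec pod:
// a zero exit status from rpm means the host has the libreswan extension
// installed, so the host-flavored daemonset owns IPsec on this node and the
// containerized pod only needs to keep its liveness probe succeeding.
func hostHasLibreswan() bool {
	return exec.Command("rpm", "--dbpath=/usr/share/rpm", "-q", "libreswan").Run() == nil
}

func main() {
	if hostHasLibreswan() {
		fmt.Println("libreswan present: host daemonset serves IPsec; containerized pod stays idle but alive")
	} else {
		fmt.Println("libreswan absent: containerized daemonset configures IPsec for this node")
	}
}
```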

data.Data["IPsecMachineConfigEnable"] = IPsecMachineConfigEnable
data.Data["OVNIPsecDaemonsetEnable"] = OVNIPsecDaemonsetEnable
data.Data["OVNIPsecEnable"] = OVNIPsecEnable
data.Data["IPsecCheckForLibreswan"] = renderBothIPsecDemonSetsWhenAPoolPausedState
Contributor

Couldn't this just be

data.Data["IPsecCheckForLibreswan"] = renderIPsecHostDaemonSet && renderIPsecContainerizedDaemonSet

Member Author

yes, done.

Comment on lines 624 to 625
machineConfigPoolPaused := isThereAnyMachineConfigPoolPaused(bootstrapResult.Infra)
isIPsecMachineConfigActiveInUnPausedPools := isIPsecMachineConfigActive(bootstrapResult.Infra, true)
Contributor

I would move these two variables to the same block where renderBothIPsecDemonSetsWhenAPoolPausedState is defined. And then I would elaborate a bit more in the comment of that block, saying that if there are unpaused pools, we wait until those pools have the IPsec machine config active before deploying both daemonsets.

Member Author

done

@@ -653,7 +664,7 @@ func shouldRenderIPsec(conf *operv1.OVNKubernetesConfig, bootstrapResult *bootst

// While OVN ipsec is being upgraded and IPsec MachineConfigs deployment is in progress
// (or) IPsec config in OVN is being disabled, then ipsec deployment is not updated.
renderIPsecDaemonSetAsCreateWaitOnly = isIPsecMachineConfigNotActiveOnUpgrade || (isOVNIPsecActive && !renderIPsecOVN)
renderIPsecDaemonSetAsCreateWaitOnly = (isIPsecMachineConfigNotActiveOnUpgrade && !renderBothIPsecDemonSetsWhenAPoolPausedState) || (isOVNIPsecActive && !renderIPsecOVN)
Contributor

This condition is counter-intuitive.

What about

...isIPsecMachineConfigNotActiveOnUpgrade || !isIPsecMachineConfigActiveInUnPausedPools ... 

Also since you changed the condition, please update the comment

Member Author

The existing condition (isIPsecMachineConfigNotActiveOnUpgrade && !renderBothIPsecDemonSetsWhenAPoolPausedState) covers the case where both daemonsets can be rendered without the create-wait annotation; that can't be achieved with the suggested approach.

Contributor

So I guess what you mean is that you want to update the daemonsets if the machine config is active in unpaused pools and inactive in paused pools. But I am missing the reasoning.

  • Updating the daemonsets is something we didn't want to do if the machine config was not updated. Why?
  • And why can we update them in the case where the pools are paused? Are these two reasonings independent?

@pperiyasamy (Member Author) commented Aug 20, 2024

So I guess what you mean is that you want to update the daemonsets if the machine config is active in unpaused pools and inactive in paused pools. But I am missing the reasoning.

  • Updating the daemonsets is something we didn't want to do if the machine config was not updated. Why?

This is the main issue we are trying to address with this PR: when the IPsec machine config is not active on paused pools, the operator updates both the host and containerized IPsec daemonsets so that the network upgrade is not blocked while IPsec is still enabled on the dataplane; otherwise it would stick with the previous version of the IPsec daemonset(s).

  • And why can we update them in the case where the pools are paused? Are these two reasonings independent?

When pools are paused and the IPsec machine config is not active on those pools' nodes, the containerized daemonset pod configures IPsec on those nodes, and the host-flavor pod has no impact at all. Once these pools are unpaused and the IPsec machine configs are installed, it switches back to using the host-flavor pod.

Member Author

@jcaamano as discussed offline, I updated the 4.15 PR (#2449) with the following:

  1. Update with both daemonsets as long as the IPsec machine config is not active in any of the pool(s).
  2. Get rid of checking for 'paused' pools.
  3. Remove the LegacyIPsecUpgrade checks, as they are no longer needed now that both daemonsets are updated at the start of the upgrade itself.

I will update this PR once the IPsec upgrade CI looks clean there.


// The containerized ipsec deployment is only rendered during upgrades or
// for hypershift hosted clusters.
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade ||
Contributor

Since you changed the condition, please update the comment

Member Author

done

// If ipsec is enabled, we render the host ipsec deployment except for
// hypershift hosted clusters and we need to wait for the ipsec MachineConfig
// extensions to be active first. We must also render host ipsec deployment
// at the time of upgrade though user created IPsec Machine Config is not
// present/active.
renderIPsecHostDaemonSet = (renderIPsecDaemonSet && isIPsecMachineConfigActive && !isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
renderIPsecHostDaemonSet = (renderIPsecDaemonSet && isIPsecMachineConfigActive && !isHypershiftHostedCluster) ||
Contributor

Since you changed the condition, please update the comment

Member Author

done

@pperiyasamy (Member Author)

/retest

@pperiyasamy (Member Author)

/test ?


openshift-ci bot commented Aug 21, 2024

@pperiyasamy: The following commands are available to trigger required jobs:

  • /test 4.18-upgrade-from-stable-4.17-images
  • /test e2e-aws-ovn-hypershift-conformance
  • /test e2e-aws-ovn-upgrade
  • /test e2e-aws-ovn-windows
  • /test e2e-azure-ovn-upgrade
  • /test e2e-gcp-ovn
  • /test e2e-gcp-ovn-upgrade
  • /test e2e-metal-ipi-ovn-ipv6
  • /test images
  • /test lint
  • /test unit
  • /test verify

The following commands are available to trigger optional jobs:

  • /test 4.18-upgrade-from-stable-4.17-e2e-aws-ovn-upgrade
  • /test 4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade
  • /test 4.18-upgrade-from-stable-4.17-e2e-gcp-ovn-upgrade
  • /test e2e-aws-hypershift-ovn-kubevirt
  • /test e2e-aws-ovn-ipsec-serial
  • /test e2e-aws-ovn-ipsec-upgrade
  • /test e2e-aws-ovn-local-to-shared-gateway-mode-migration
  • /test e2e-aws-ovn-serial
  • /test e2e-aws-ovn-shared-to-local-gateway-mode-migration
  • /test e2e-aws-ovn-single-node
  • /test e2e-aws-ovn-techpreview-serial
  • /test e2e-azure-ovn
  • /test e2e-azure-ovn-dualstack
  • /test e2e-azure-ovn-manual-oidc
  • /test e2e-gcp-ovn-techpreview
  • /test e2e-metal-ipi-ovn-ipv6-ipsec
  • /test e2e-network-mtu-migration-ovn-ipv4
  • /test e2e-network-mtu-migration-ovn-ipv6
  • /test e2e-openstack-ovn
  • /test e2e-ovn-hybrid-step-registry
  • /test e2e-ovn-ipsec-step-registry
  • /test e2e-ovn-step-registry
  • /test e2e-vsphere-ovn
  • /test e2e-vsphere-ovn-dualstack
  • /test e2e-vsphere-ovn-dualstack-primaryv6
  • /test e2e-vsphere-ovn-windows
  • /test okd-scos-images
  • /test qe-perfscale-aws-ovn-medium-cluster-density
  • /test qe-perfscale-aws-ovn-medium-node-density-cni
  • /test qe-perfscale-aws-ovn-small-cluster-density
  • /test qe-perfscale-aws-ovn-small-node-density-cni
  • /test security

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-cluster-network-operator-master-4.18-upgrade-from-stable-4.17-e2e-aws-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-4.18-upgrade-from-stable-4.17-e2e-gcp-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-4.18-upgrade-from-stable-4.17-images
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-hypershift-ovn-kubevirt
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-hypershift-conformance
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-local-to-shared-gateway-mode-migration
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-serial
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-shared-to-local-gateway-mode-migration
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-single-node
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-windows
  • pull-ci-openshift-cluster-network-operator-master-e2e-azure-ovn
  • pull-ci-openshift-cluster-network-operator-master-e2e-azure-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-e2e-gcp-ovn
  • pull-ci-openshift-cluster-network-operator-master-e2e-gcp-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-e2e-metal-ipi-ovn-ipv6
  • pull-ci-openshift-cluster-network-operator-master-e2e-metal-ipi-ovn-ipv6-ipsec
  • pull-ci-openshift-cluster-network-operator-master-e2e-network-mtu-migration-ovn-ipv4
  • pull-ci-openshift-cluster-network-operator-master-e2e-network-mtu-migration-ovn-ipv6
  • pull-ci-openshift-cluster-network-operator-master-e2e-openstack-ovn
  • pull-ci-openshift-cluster-network-operator-master-e2e-ovn-hybrid-step-registry
  • pull-ci-openshift-cluster-network-operator-master-e2e-ovn-ipsec-step-registry
  • pull-ci-openshift-cluster-network-operator-master-e2e-ovn-step-registry
  • pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn
  • pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack
  • pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack-primaryv6
  • pull-ci-openshift-cluster-network-operator-master-images
  • pull-ci-openshift-cluster-network-operator-master-lint
  • pull-ci-openshift-cluster-network-operator-master-security
  • pull-ci-openshift-cluster-network-operator-master-unit
  • pull-ci-openshift-cluster-network-operator-master-verify

In response to this:

/test ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Comment on lines 113 to 117
// MasterMCPs contains machine config pools having master role.
MasterMCPs []mcfgv1.MachineConfigPool

// WorkerMCPStatus contains machine config pool statuses for pools having worker role.
WorkerMCPStatuses []mcfgv1.MachineConfigPoolStatus
// WorkerMCPs contains machine config pools having worker role.
WorkerMCPs []mcfgv1.MachineConfigPool
Contributor

Can we just keep the statuses? In theory, status should be all we base our decisions on.

Member Author

Yes, right, but now we need to rely on MachineConfigPool for a new unit test covering MachineConfigPool in paused and unpaused states. I updated the commit message to reflect this.

Contributor

Why would you need to unit test that if the functionality does not depend on that anymore? You are not really testing any new code path or anything. That should be an e2e test instead.

@pperiyasamy (Member Author) commented Oct 17, 2024

Yes @jcaamano, that makes more sense. Reverted back to using only MCP statuses now.
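For context, a self-contained Go sketch of the paused-pool check that was being discussed before the change was reverted to MCP statuses; the struct below is a reduced stand-in for mcfgv1.MachineConfigPool (only the Paused spec field matters here), so treat it as an illustration rather than code from this PR:

```go
package main

import "fmt"

// machineConfigPool is a reduced stand-in for mcfgv1.MachineConfigPool,
// keeping only the field this discussion is about (Spec.Paused).
type machineConfigPool struct {
	Name   string
	Paused bool
}

// anyPoolPaused illustrates the isThereAnyMachineConfigPoolPaused helper named
// in the diff: it returns true if the admin has paused any pool.
func anyPoolPaused(pools []machineConfigPool) bool {
	for _, p := range pools {
		if p.Paused {
			return true
		}
	}
	return false
}

func main() {
	pools := []machineConfigPool{{Name: "master"}, {Name: "worker", Paused: true}}
	fmt.Println(anyPoolPaused(pools)) // true
}
```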

// The containerized ipsec deployment is only rendered during upgrades or
// for hypershift hosted clusters.
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
// hypershift hosted clusters. We must also render host ipsec daemonset
Contributor

Have you checked that the comment block for the method itself (lines 594-608) is accurate?

Member Author

yes, updated the method comment about new upgrade behavior.

@pperiyasamy (Member Author)

/retest

1 similar comment
@pperiyasamy (Member Author)

/retest

@pperiyasamy (Member Author)

/jira refresh

@openshift-ci-robot added the jira/invalid-bug label (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting) and removed the jira/valid-bug label (Indicates that a referenced Jira bug is valid for the branch this PR is targeting) on Sep 16, 2024.
@openshift-ci-robot (Contributor)

@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is invalid:

  • expected the bug to target either version "4.18." or "openshift-4.18.", but it targets "4.17.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@pperiyasamy (Member Author)

/jira refresh

@openshift-ci-robot added the jira/valid-bug label (Indicates that a referenced Jira bug is valid for the branch this PR is targeting) and removed the jira/invalid-bug label (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting) on Sep 16, 2024.
@openshift-ci-robot (Contributor)

@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.18.0) matches configured target version for branch (4.18.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @anuragthehatter

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@pperiyasamy (Member Author)

/test e2e-aws-ovn-ipsec-upgrade

@pperiyasamy (Member Author)

The IPsec upgrade is successful in the ci/prow/e2e-aws-ovn-ipsec-upgrade run, and the pluto logs look clean. There are still two API-connection-related disruption tests failing. Rerunning the job...

@pperiyasamy (Member Author)

/test e2e-aws-ovn-ipsec-upgrade

@pperiyasamy changed the title from "OCPBUGS-36688: Move to use newer IPsec DaemonSets when MCP is in paused state" to "OCPBUGS-36688: Move to use newer IPsec DaemonSets irrespective of MCP state" on Oct 16, 2024
@pperiyasamy (Member Author)

/retest

During upgrade, when an MCP is not in the ready state with the ipsec machine config,
the network operator continues to render the older IPsec daemonsets, which blocks
the network cluster operator from getting upgraded to the newer version.
Hence this commit renders the newer IPsec daemonsets immediately; with the new
IPsecCheckForLibreswan check, one or the other pod serves IPsec for each node.
When the MCPs are fully rolled out with the ipsec machine config, it goes ahead
with rendering only the host-flavored IPsec daemonset.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
@pperiyasamy (Member Author)

/retest


openshift-ci bot commented Oct 17, 2024

@pperiyasamy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-aws-ovn-ipsec-upgrade ac0a438 link false /test e2e-aws-ovn-ipsec-upgrade
ci/prow/e2e-ovn-ipsec-step-registry 14bf304 link false /test e2e-ovn-ipsec-step-registry
ci/prow/e2e-aws-hypershift-ovn-kubevirt 14bf304 link false /test e2e-aws-hypershift-ovn-kubevirt
ci/prow/e2e-vsphere-ovn-dualstack 14bf304 link false /test e2e-vsphere-ovn-dualstack
ci/prow/e2e-aws-ovn-single-node 14bf304 link false /test e2e-aws-ovn-single-node
ci/prow/e2e-azure-ovn-upgrade 14bf304 link true /test e2e-azure-ovn-upgrade
ci/prow/e2e-vsphere-ovn-dualstack-primaryv6 14bf304 link false /test e2e-vsphere-ovn-dualstack-primaryv6
ci/prow/security 14bf304 link false /test security
ci/prow/e2e-network-mtu-migration-ovn-ipv6 14bf304 link false /test e2e-network-mtu-migration-ovn-ipv6
ci/prow/e2e-azure-ovn 14bf304 link false /test e2e-azure-ovn

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@pperiyasamy (Member Author)

@jcaamano I think this PR now depends on #2383, because the network CO is reported as successful as soon as the IPsec daemonsets are deployed, but the IPsec machine configs are rolled out a bit later and no status is reported for that. Because of this, openshift-install returns prematurely without waiting for the IPsec machine config rollout to complete. I think this caused the failure on this IPsec CI run, but there is no mg (must-gather) report to prove this explanation.

Labels
  • jira/severity-moderate: Referenced Jira bug's severity is moderate for the branch this PR is targeting.
  • jira/valid-bug: Indicates that a referenced Jira bug is valid for the branch this PR is targeting.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.