
Conversation

@petr-muller (Member)

Similar to Justin's earlier proposal about Pod-related timeouts (#2497), this proposes prolonging the timeouts applied to CRD deletion. Most calls to the deletion helpers happen during testcase cleanup, and we often see failures across various testcases that correlate with periods of high control plane load.

This is a fairly common flake: https://search.dptools.openshift.org/?search=deleting+CustomResourceDefinition%3A+context+deadline+exceeded&maxAge=48h&context=1&type=bug%2Bissue%2Bjunit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job

As with #2497, prolonging the timeouts could be done downstream or argued for upstream, and there may be other ways to reduce control plane load so that these failures stop occurring. It is also possible that the timeouts are unrelated to control plane load and are instead a symptom of a real problem; I do not have solid evidence either way.

My approach (which I'd propose for #2497 as well) would be to run a timeboxed experiment with the prolonged timeouts, validate that we see the failure reduction we hope for, and then try moving the changes upstream.
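
To make the idea concrete, here is a minimal sketch of what a prolonged CRD deletion timeout could look like in a Go test helper. The package name, helper name, and the 3-minute budget are illustrative assumptions, not the actual upstream helper this PR touches:

```go
// Sketch of a CRD deletion helper with an extended wait budget.
// Names and durations are illustrative, not the upstream code.
package e2ehelpers

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"

	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
)

// crdDeletionTimeout is the overall budget for a CRD to disappear after
// deletion is requested. A downstream carry could simply raise this value
// (say, from 30s to a few minutes) to ride out control plane load spikes.
const crdDeletionTimeout = 3 * time.Minute

// deleteCRDAndWait issues the delete and then polls until the CRD is gone,
// surfacing the familiar "deleting CustomResourceDefinition: ..." error
// seen in the linked flake search when the budget runs out.
func deleteCRDAndWait(ctx context.Context, client apiextensionsclientset.Interface, name string) error {
	err := client.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, name, metav1.DeleteOptions{})
	if err != nil && !apierrors.IsNotFound(err) {
		return fmt.Errorf("deleting CustomResourceDefinition: %w", err)
	}
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, crdDeletionTimeout, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // the CRD is fully gone
			}
			// err == nil means the CRD still exists: keep polling.
			// Any other error aborts the wait early.
			return false, err
		})
}
```

A carry patch along these lines would mostly amount to raising the timeout constant, which keeps the downstream diff small and easy to drop if the upstream discussion lands elsewhere.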

/hold

openshift-ci-robot added the backports/unvalidated-commits label (indicates that not all commits come from merged upstream PRs) Nov 12, 2025
openshift-ci bot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) Nov 12, 2025
@openshift-ci-robot

@petr-muller: the contents of this pull request could not be automatically validated.

The following commits could not be validated and must be approved by a top-level approver:

Comment /validate-backports to re-evaluate validity of the upstream PRs, for example when they are merged upstream.

petr-muller changed the title from "UPSTREAM: <carry>: Extend CRD deletion timeouts" to "NO-ISSUE: Extend CRD deletion timeouts" Nov 12, 2025
openshift-ci-robot added the jira/valid-reference label (indicates that this PR references a valid Jira ticket of any type) Nov 12, 2025
@openshift-ci-robot

@petr-muller: This pull request explicitly references no jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

openshift-ci bot requested review from bertinatto and tkashem November 12, 2025 23:42
openshift-ci bot added the vendor-update label (touching vendor dir or related files) Nov 12, 2025

openshift-ci bot commented Nov 12, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: petr-muller
Once this PR has been reviewed and has the lgtm label, please assign jerpeter1 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Nov 13, 2025

@petr-muller: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-aws-ovn-runc | 26ed5cd | link | true | /test e2e-aws-ovn-runc |
| ci/prow/e2e-aws-ovn-techpreview | 26ed5cd | link | false | /test e2e-aws-ovn-techpreview |
| ci/prow/e2e-aws-ovn-serial-2of2 | 26ed5cd | link | true | /test e2e-aws-ovn-serial-2of2 |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
