Which Github Action / Prow Jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-etcd-robustness-main-amd64/1884310668266442752
Which tests are flaking?
DowngradeUpgrade
Github Action / Prow Job link
No response
Reason for failure (if possible)
The DowngradeUpgrade failpoint stops and restarts etcd servers up to 6 times, and it can take a long time for a new server to join the cluster. As a result, the test is likely to fail from time to time, because failpoints are expected to finish within 60s.

We should either:
- make Failpoint respect the context,
- reduce the time DowngradeUpgrade takes, or
- make it possible to increase the timeout for specific failpoints (sketched below).
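To illustrate the last option, here is a minimal Go sketch of how a per-failpoint timeout override could look. The `Failpoint` interface, `Timeout()` method, and `triggerFailpoint` helper below are hypothetical simplifications for discussion, not the actual API of the robustness test framework.

```go
// Hypothetical sketch: let individual failpoints override the default 60s
// budget. Names here are illustrative and do not match the real etcd
// robustness test framework one-to-one.
package failpoint

import (
	"context"
	"time"
)

const defaultFailpointTimeout = 60 * time.Second

// Failpoint is a simplified stand-in for the robustness test failpoint interface.
type Failpoint interface {
	Name() string
	// Inject must respect ctx cancellation so a slow member restart
	// does not run past the deadline unnoticed.
	Inject(ctx context.Context) error
}

// timeoutOverrider is an optional interface a failpoint can implement
// to request a longer budget than the default.
type timeoutOverrider interface {
	Timeout() time.Duration
}

// triggerFailpoint runs a failpoint with either the default timeout or the
// failpoint's own override (e.g. DowngradeUpgrade, which restarts members
// up to 6 times and may need more than 60s).
func triggerFailpoint(ctx context.Context, fp Failpoint) error {
	timeout := defaultFailpointTimeout
	if o, ok := fp.(timeoutOverrider); ok {
		timeout = o.Timeout()
	}
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	return fp.Inject(ctx)
}
```

With this shape, most failpoints keep the 60s default, while DowngradeUpgrade could implement `Timeout()` to declare a larger budget without loosening the limit for everything else.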
Anything else we need to know?
No response