remove use of istioctl x wait #16515
Conversation
Hi @AritraDey-Dev. Thanks for your PR. I'm waiting for an istio member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Co-authored-by: Daniel Hawton <[email protected]>
/ok-to-test
/retest
/test doc.test.profile-default
/retest
Does this PR cause a meaningful change to the time the tests take to run?
tests/util/helpers.sh (Outdated)

while (( attempt <= max_attempts )); do
    # Check if the resource exists
    if kubectl get "$kind" "$name" -n "$namespace" >/dev/null 2>&1; then
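The review only quotes a fragment of the polling helper. For context, here is a minimal sketch of what such a retry loop typically looks like in full; the function name, argument order, sleep interval, and attempt limit are assumptions for illustration, not the exact code under review.

```sh
# Hypothetical polling helper (name, arguments, and limits assumed).
# Retries `kubectl get` once per second until the resource exists or the
# attempt budget is exhausted.
_wait_for_resource_created() {
    local kind="$1"
    local namespace="$2"
    local name="$3"
    local max_attempts=30
    local attempt=1

    while (( attempt <= max_attempts )); do
        # Check if the resource exists
        if kubectl get "$kind" "$name" -n "$namespace" >/dev/null 2>&1; then
            return 0
        fi
        sleep 1
        (( attempt++ ))
    done

    echo "timed out waiting for $kind/$name in namespace $namespace" >&2
    return 1
}
```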
Aren't you basically doing
kubectl wait --for=create -n $namespace $kind/$name --timeout=30s
with this whole function?
Perhaps that would be cleaner?
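As a concrete, illustrative invocation of the suggested one-liner (the resource and namespace below are placeholders, and --for=create requires a reasonably recent kubectl release):

```sh
# Illustrative use of the suggested replacement; resource name and namespace
# are placeholders, not taken from the PR.
kubectl wait --for=create -n default virtualservice/reviews --timeout=30s
```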
That's what some of the other functions do.
The thing here is that the goal wasn't so much to ensure the configuration was created in Kubernetes, which either method checks for, but to ensure all the sidecars were programmed with it. They generally are, but in the case where the object is created yet not translated to xDS and pushed to the other proxies, there's no obvious way to tell.
So kubectl wait is probably necessary but not sufficient.
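To make that gap concrete: one possible way (not what this PR ended up doing) to also check that proxies have received the config is to poll istioctl proxy-status, which reports per-proxy xDS sync state. The sketch below assumes that out-of-date proxies show up with the word STALE in that output.

```sh
# Sketch: confirm the object exists, then poll istioctl proxy-status until
# no proxy reports a stale xDS state. The STALE grep is an assumption about
# the proxy-status output format.
kubectl wait --for=create -n "$namespace" "$kind/$name" --timeout=30s

for _ in $(seq 1 30); do
    if ! istioctl proxy-status | grep -q 'STALE'; then
        break
    fi
    sleep 1
done
```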
Congratulations, your testing has found a bug! In #13208 the doc was changed to not require […]. I'll change the test to only check for the services that are used by this example. The cleanup will remove all of them, whether they exist or not.
That one feels a little more like a flake I've seen before.

/retest
Thank you for catching that and updating the test accordingly!
np. Before we merge, please change this function to use kubectl wait.
I have updated the implementation to use kubectl wait, and also renamed the function to _wait_for_resource throughout all the tests.
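The thread gives the new helper's name but not its body; a plausible sketch of a kubectl wait-based version is below, with the argument order and timeout assumed rather than taken from the PR.

```sh
# Sketch of the renamed helper. Only the name _wait_for_resource comes from
# the thread; the body, argument order, and timeout are assumptions.
_wait_for_resource() {
    local kind="$1"
    local namespace="$2"
    local name="$3"

    if ! kubectl wait --for=create -n "$namespace" "$kind/$name" --timeout=30s; then
        echo "failed to find $kind/$name in namespace $namespace" >&2
        return 1
    fi
}
```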
This reverts commit c10ec0c.
Interesting error
Either this is a flake, or it's confused by (a) there being two different types of Gateway, or (b) waiting on […]. (Again, this shows why […].)
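If the two Gateway kinds are indeed the source of confusion, one common workaround (not necessarily what was done here) is to pass fully qualified resource names to kubectl so the API group is explicit. The gateway name and namespace below are illustrative only.

```sh
# Fully qualified resource names remove the ambiguity between the two
# Gateway kinds. Names and namespace are illustrative, not from the PR.

# Istio's own Gateway (networking.istio.io)
kubectl wait --for=create -n istio-system gateways.networking.istio.io/my-gateway --timeout=30s

# Kubernetes Gateway API Gateway (gateway.networking.k8s.io)
kubectl wait --for=create -n istio-system gateways.gateway.networking.k8s.io/my-gateway --timeout=30s
```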
/test doc.test.profile-default
/retest
As we're encountering edge cases where config propagation is still flaky, I guess we could consider adding an optional advanced wait mechanism in the future (e.g., […]).
/test doc.test.profile-default
1 similar comment
/test doc.test.profile-default
These aren't flakes, it seems.
/retest
The real issue appears to be Istiod control plane health or authentication problems, not the […]; look at the logs here...
Signed-off-by: Aritra Dey <[email protected]>
Signed-off-by: Aritra Dey <[email protected]>
/retest
I read that error as "test failed, trying to clean up, error logged while cleaning up". I'm not 100% sure how best to progress on this. Right now it seems like we've just slowed everything down by 1s per test?
Signed-off-by: Aritra Dey <[email protected]>
Signed-off-by: Aritra Dey <[email protected]>
I agree that we slowed everything down. Unfortunately, without a replacement for the old istioctl x wait, I'm ok with implementing this for now... it makes things slower, but hopefully it will address some of the flakes that happen because we checked or tried to use something that hadn't yet been applied.
Ok, approving, and we'll see what the impact is to other PRs. |
Description
fixes #16429