Run Conformance Tests on implementations in CI #1734
I've been creating my k8s cluster and running this script.
It would help the community if something like this were incorporated into the CI. Successful conformance tests take around ~5 min to finish today, so running multiple in parallel on multiple implementations shouldn't slow down the CI too much, imho. A suggestion for acceptance criteria could be: an implementation must be passing all conformance tests for it to be added to the CI.
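For reference, a minimal sketch of what such a run might look like using the conformance suite's Go entrypoint (suite.New, Setup, Run), assuming the API shape of the releases around the time of this issue; the GatewayClass name is a placeholder:

```go
package conformance_test

import (
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"

	v1alpha2 "sigs.k8s.io/gateway-api/apis/v1alpha2"
	v1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
	"sigs.k8s.io/gateway-api/conformance/tests"
	"sigs.k8s.io/gateway-api/conformance/utils/suite"
)

// TestConformance runs the full conformance suite against whatever cluster
// the local kubeconfig points at. The GatewayClass name below is a
// placeholder; substitute the class your implementation registers.
func TestConformance(t *testing.T) {
	cfg, err := config.GetConfig()
	if err != nil {
		t.Fatalf("error loading Kubernetes config: %v", err)
	}

	c, err := client.New(cfg, client.Options{})
	if err != nil {
		t.Fatalf("error initializing Kubernetes client: %v", err)
	}

	// Register the Gateway API types with the client's scheme so the suite
	// can create and inspect Gateways, HTTPRoutes, and so on.
	_ = v1alpha2.AddToScheme(c.Scheme())
	_ = v1beta1.AddToScheme(c.Scheme())

	cSuite := suite.New(suite.Options{
		Client:           c,
		GatewayClassName: "example-gateway-class", // placeholder
		Debug:            true,
	})
	cSuite.Setup(t)
	cSuite.Run(t, tests.ConformanceTests)
}
```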
Another suggestion is to specify implementation-specific steps in https://gateway-api.sigs.k8s.io/concepts/conformance/?h=conf#3-conformance-tests and link that page from the PR template #1733, making it a one-click discovery (CI would be the ideal auto-magic way).
So interestingly enough, we were already planning on this and actively working on it (the https://github.com/kong/blixt project, which is being donated to Kubernetes SIGs, is related, as we intend to plug that implementation into CI). You can find the relevant announcement in #1706, but after searching I don't think we had a tracking issue for it, so this works as the tracking issue, thank you. 👍
@shaneutt I completely missed that discussion, thanks for surfacing it. It's great that Blixt will be incorporated into the CI. My follow-up questions are:
The velocity of contributions for conformance-related PRs should increase once ^ happens.
Currently we don't have plans to do this, though it's not the first time it's been asked. For right this moment, we're going to focus on Layer 4, as that will get a fair amount of the groundwork covered while also having the side benefit of driving L4 conformance, so that we can work on getting the L4 APIs to Beta. If you're interested in pursuing this, I would definitely recommend contributing to the project and checking in on our Friday community syncs.
It's great to see that Blixt will help drive L4 adoption. I know Blixt is meant to be used for testing only, so it seems like a good fit for L4 conformance testing. However, I'm concerned that most implementations are L7 with a focus on supporting the Beta APIs. Which existing L4 implementations is Blixt supporting? I can't tell from this list.
Not sure I understand the question perfectly, but my take is that you're asking what other L4 implementations there are today. If that's the case, we don't know exactly, and that's actually something we're trying to solve by driving forward L4 conformance tests (with Blixt as a catalyst) and also the conformance profiles work, which will include reporting back to upstream.
Thanks for the feedback @shaneutt. More specifically, why not prioritize L7 over L4 since most, if not all, current implementations are L7?
The main reason is that HTTPRoute already has momentum: there are already several L7 implementations helping to push conformance forward; in fact, 99% of our conformance tests are L7 today because of those efforts, and HTTPRoute is already tracking for a GA release this year. Focusing on L4 allows us to help push L4 to Beta by paving the road for conformance, as no implementation has really stepped up to do this yet, and as a side effect we'll still get CI exercising some of the common functionality (e.g. the core Gateway and GatewayClass machinery).
A prominent secondary reason is that we wanted to provide vendor neutrality with our tech choice: we didn't want the "reference" implementation to end up being a specific vendor implementation, nor did we want it to be a custom one bound to Envoy or NGINX. Being bound to native Linux functionality seemed very neutral to us and was a precondition for acceptance of a testing/reference implementation when it was proposed. As stated above, we would still consider adding L7 functionality in time if we can navigate it neutrally, as the benefit of exercising the L7 conformance tests on each release is definitely desired.
@shaneutt thanks for the additional feedback. From my understanding, L7 conformance test coverage was driven by affiliates implementing L7 proxies, load balancers, etc., and I would have expected the approach for L4 to follow suit. Instead of pivoting to L4 due to L7-specific implementation details, I ask that the community consider an approach for running conformance tests on L7 implementations. If this has already been thoroughly discussed, do you mind sharing a link where I can learn more?
Correct.
Same for us maintainers, and that's part of what drove this. In fact, because of the momentum started by the Blixt project, we're now starting to see some implementations out there coming forward wanting to join in on conformance, such as STUNner (https://github.com/l7mp/stunner), which was presented to us some months ago at a community sync and is an implementer of UDPRoute.
Multiple conversations to build consensus took place almost a year ago, prior to and during KubeCon NA. Check out the sync entries and community recordings from around that time up until now.
Given the wording "instead" and "pivot", it seems you're suggesting we stop current efforts, which I don't see a compelling reason to do, particularly since we've even covered the desire and possibility of moving into the L7 space with our current direction. Perhaps now is as good a time as any to explore that direction further?
@shaneutt thanks for the additional clarity. To clarify, I'm not against running L4 conformance tests. My point is that the project should consider how to support L7 implementations, even though, to my knowledge, none are based on native Linux functionality. The use case I'm trying to support is:
@danehans Agree that this is unnecessarily confusing today. On 2) so far that's represented the implementation's stability/support level and is populated/controlled by the owner of the implementation. I think most of these concerns are going to be resolved with #1709. Despite the name of "Conformance Profiles", the GEP has the larger goal of centralized conformance reporting which will likely be surfaced on the implementations page. How that will look is TBD. I think we could definitely use some help driving that forward so let us know if you're interested. |
@robscott thanks for the feedback. I may be conflating this issue with the conformance profiles GEP, so I appreciate the clarification. I'm happy to help. Let me dive into that GEP to better understand the details. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
What would you like to be added:
Run the entire conformance test suite when a PR is raised on Gateway API implementations, but run it as a non-blocking step.
Why this is needed:
As a developer writing conformance tests, I do not know how to test my changes.
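As an illustration of how a developer might test a change while writing it, here is a sketch that runs only a single conformance test against the cluster in the local kubeconfig. It assumes the suite's Go API shown earlier in this thread and an exported test such as tests.HTTPRouteSimpleSameNamespace from the upstream tests package; the GatewayClass name and test choice are placeholders:

```go
package conformance_test

import (
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"

	v1alpha2 "sigs.k8s.io/gateway-api/apis/v1alpha2"
	v1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
	"sigs.k8s.io/gateway-api/conformance/tests"
	"sigs.k8s.io/gateway-api/conformance/utils/suite"
)

// TestSingleConformanceTest runs just one conformance test, which keeps the
// local feedback loop short while a new test is being developed.
func TestSingleConformanceTest(t *testing.T) {
	cfg, err := config.GetConfig()
	if err != nil {
		t.Fatalf("error loading Kubernetes config: %v", err)
	}
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		t.Fatalf("error initializing Kubernetes client: %v", err)
	}
	_ = v1alpha2.AddToScheme(c.Scheme())
	_ = v1beta1.AddToScheme(c.Scheme())

	cSuite := suite.New(suite.Options{
		Client:           c,
		GatewayClassName: "example-gateway-class", // placeholder
	})
	cSuite.Setup(t)

	// Pass a one-element slice instead of tests.ConformanceTests to run a
	// single test; the test referenced here is just an example.
	cSuite.Run(t, []suite.ConformanceTest{tests.HTTPRouteSimpleSameNamespace})
}
```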