Conformance: Signal Stability of Tests #3009
Comments
Totally agree on this one. I think that, in addition to a way to mark a conformance test as still experimental, we need to create a process that leads to the graduation of such a test, i.e., when a test can be moved out of experimental. Let me grab this one.
Any thoughts on the naming? I'm thinking it would be confusing to describe a test as
Maybe stable/unstable is a better way to describe it?
But then we have a stable/experimental channel... hmm
Maybe all new conformance tests are given a named status until we have at least 3 implementations passing them, or until they've been present in the API for at least 1 release and have evidence of at least 3 implementations passing them, ideally via submitted conformance reports. Some possible terms for this state:
If we want a corresponding term for tests that have graduated from that state, maybe one of the following would work?
+1. We could initialize such a field on each new test, and I think we should add some knobs to the conformance suite to opt in to those tests. To overcome the problem of new tests having no implementations to validate against, we could remove the graduation criterion of having 3 implementations and keep the "1 release cycle" requirement. This way, we force implementations to try the new tests and (maybe) fix them.
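To make the opt-in concrete, here is a minimal Go sketch of what such a knob could look like. The names used here (`Provisional`, `RunProvisionalTests`, `selectTests`) are assumptions for illustration, not the suite's actual API:

```go
package main

import "fmt"

// ConformanceTest is a simplified stand-in for a conformance test entry;
// the real suite's type carries more fields (manifests, features, the test
// body itself).
type ConformanceTest struct {
	ShortName   string
	Provisional bool // hypothetical field marking a not-yet-graduated test
}

// SuiteOptions sketches the opt-in knob: provisional tests are skipped
// unless an implementation explicitly asks for them.
type SuiteOptions struct {
	RunProvisionalTests bool
}

// selectTests filters out provisional tests unless they were opted into,
// so a default run only exercises the stable conformance contract.
func selectTests(all []ConformanceTest, opts SuiteOptions) []ConformanceTest {
	var selected []ConformanceTest
	for _, t := range all {
		if t.Provisional && !opts.RunProvisionalTests {
			continue
		}
		selected = append(selected, t)
	}
	return selected
}

func main() {
	all := []ConformanceTest{
		{ShortName: "HTTPRouteBasic"},
		{ShortName: "BrandNewFeature", Provisional: true},
	}
	fmt.Println(selectTests(all, SuiteOptions{}))                          // stable tests only
	fmt.Println(selectTests(all, SuiteOptions{RunProvisionalTests: true})) // includes provisional
}
```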
Some other terms that could be a good fit:
I'd go with
I did this pretty simply in #3212: just added a new set called
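The actual set name from #3212 is elided above, so the following is only a hedged sketch of what a separate test set might look like; the `ProvisionalTests` name and everything else here is hypothetical:

```go
package conformance

// ConformanceTest is a simplified stand-in for the suite's test type.
type ConformanceTest struct {
	ShortName string
}

// The default set is the stable conformance contract; the second set holds
// new tests that have not yet graduated and must be opted into explicitly.
var (
	ConformanceTests = []ConformanceTest{
		{ShortName: "HTTPRouteBasic"},
	}
	ProvisionalTests = []ConformanceTest{
		{ShortName: "BrandNewFeature"},
	}
)

// allTests returns the tests to run for a given suite configuration.
func allTests(includeProvisional bool) []ConformanceTest {
	if !includeProvisional {
		return ConformanceTests
	}
	out := append([]ConformanceTest{}, ConformanceTests...)
	return append(out, ProvisionalTests...)
}
```

Keeping the provisional tests in their own set, rather than filtering on a per-test field, makes the stable contract visible at a glance and keeps the default behavior unchanged for existing implementations.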
What would you like to be added:
Some kind of indicator that a conformance test is still "experimental".
Why this is needed:
In some cases we want to add conformance tests before we have any implementations of a feature (x-ref #2821). Not being able to meaningfully run a test like this increases the chances that the test is wrong or buggy. In general, our conformance tests represent a contract that we should be very hesitant to change, but when tests are new and untested, we likely need some kind of label to indicate that they could still change if needed.
cc @dprotaso
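As an illustration of the requested indicator, here is a hedged sketch of how a test added before any implementation exists might be declared. The `Provisional` field and the test shown are hypothetical, not part of the project's current API:

```go
package conformance

import "testing"

// ConformanceTest is a simplified stand-in for the suite's test type.
type ConformanceTest struct {
	ShortName   string
	Provisional bool
	Test        func(t *testing.T)
}

// A test added alongside a spec change, before any implementation exists.
// Marking it provisional signals that it may still be corrected without
// the correction counting as a change to the conformance contract.
var NewFeatureTest = ConformanceTest{
	ShortName:   "NewFeature",
	Provisional: true,
	Test: func(t *testing.T) {
		t.Skip("no implementation to validate against yet")
	},
}
```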