Compliance Checker currently has a very imperative code style. For checks like those implemented for CF, the conformance documents (e.g. https://cfconventions.org/Data/cf-documents/requirements-recommendations/conformance-1.11.html) enumerate a discrete set of requirements to check, which would lend itself to a more declarative code style.
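To sketch what I mean by declarative (everything below is invented for illustration, including the section numbers and predicates; it is not a proposed API):

```python
# Illustrative sketch only: CF requirements expressed as data rather than
# as imperative check methods. Expects a netCDF4.Dataset-like object.
CONFORMANCE_TABLE = [
    # (section in the conformance doc, requirement summary, predicate)
    ("2.1", "Files must be netCDF format",
     lambda ds: ds.file_format.startswith("NETCDF")),
    ("2.6.1", "Conventions attribute should identify CF",
     lambda ds: "CF-" in getattr(ds, "Conventions", "")),
]

def run_conformance(dataset):
    """Evaluate every requirement; return the failing (section, summary) pairs."""
    return [(section, summary)
            for section, summary, predicate in CONFORMANCE_TABLE
            if not predicate(dataset)]
```

The appeal is that each requirement becomes a single inspectable entry, so auditing the codebase against the conformance doc reduces to comparing section numbers.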
We have comments in the code indicating where a piece of code implements a CF conformance check, such as in https://github.com/ioos/compliance-checker/blob/develop/compliance_checker/cf/cf_1_6.py#L1878-L1988, and similarly in the unit tests.

However, this mapping isn't enforced anywhere, and we don't check directly against the conformance spec. Any suggestions for how we can improve the composability of the codebase, as well as its testability against the individual points in the conformance docs? I've been experimenting recently with pytest-bdd, and something similar, where each conformance step is checked, seems like a good fit. However, certain steps depend on others, which BDD can accommodate by testing features with multiple possible scenarios.
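For example, a rough pytest-bdd sketch (the feature text, step names, and fixture contents here are all hypothetical):

```python
# Sketch of pytest-bdd step definitions. Assumes a feature file like:
#
#   Feature: CF conformance, Conventions attribute (illustrative numbering)
#     Scenario: Conventions attribute identifies the CF version
#       Given a netCDF dataset
#       And the dataset has a Conventions attribute
#       Then the Conventions attribute contains a valid CF version string
#
import netCDF4
import pytest
from pytest_bdd import given, scenarios, then

scenarios("cf_conformance.feature")

@given("a netCDF dataset", target_fixture="dataset")
def dataset(tmp_path):
    # Minimal fixture dataset; real tests would use curated sample files.
    ds = netCDF4.Dataset(str(tmp_path / "sample.nc"), "w")
    ds.Conventions = "CF-1.11"
    return ds

@given("the dataset has a Conventions attribute")
def has_conventions(dataset):
    # A step that later steps depend on: skip rather than fail when the
    # prerequisite requirement is not met.
    if not hasattr(dataset, "Conventions"):
        pytest.skip("prerequisite step failed: no Conventions attribute")

@then("the Conventions attribute contains a valid CF version string")
def conventions_names_cf(dataset):
    assert dataset.Conventions.startswith("CF-")
```

Expressing prerequisites as earlier Given steps (or a Background) at least makes dependencies between conformance points explicit, even if it doesn't fully solve ordering across scenarios.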
Arjan does a deep dive on Python decorators that might be a more upfront way to do what I think you described with pytest fixtures. See: https://www.youtube.com/watch?v=QH5fw9kxDQA

He moves pretty quickly, showing how to nest classes and then functions so they run in the order you want. As tests proceed, I don't know whether there is a bookkeeping-style way to keep blocks from retesting the same parts of a dataset against the same rules. At that point we're talking about computing the test coverage of a dataset against the specification, as opposed to the code coverage of a software package.
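Roughly, I'm picturing something like this (a minimal sketch; the decorator, registry, and section numbers are all invented here):

```python
# Illustrative sketch: decorator-based bookkeeping of which conformance
# sections each check covers, so coverage of the spec can be computed.
COVERED_SECTIONS: dict[str, list] = {}

def covers(*sections):
    """Tag a check with the CF conformance sections it exercises."""
    def decorator(func):
        for section in sections:
            COVERED_SECTIONS.setdefault(section, []).append(func.__name__)
        return func
    return decorator

@covers("3.1", "3.3")  # placeholder section numbers
def check_units(dataset):
    """Placeholder body; real logic would inspect variable units."""
    return []

def spec_coverage(all_sections):
    """Fraction of conformance sections with at least one registered check."""
    return len(COVERED_SECTIONS.keys() & set(all_sections)) / len(all_sections)
```

With a registry like that, spec coverage becomes a set comparison between the sections the conformance doc defines and the sections the checks claim, analogous to code coverage but over the conformance document instead of the source.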