Discussion: how to validate regressions and add CI tests? #590
Replies: 1 comment
Regarding the first issue: we currently rely on an internal environment for regression testing, which is not integrated with the CI checks. The main reason is that debugging the GitHub Actions CI environment for tests such as FUSE has proven challenging.

The workflow is as follows. We primarily use the entry scripts available at https://github.com/CurvineIO/curvine/tree/main/scripts. The regression tests are deployed on an independent test machine, where Curvine runs in standalone mode for validation.

Startup:

```
python build-server.py --project-path ~/codespace/curvine-dev --results-dir ~/curvine-dailytest/result
```

Triggering tests:

```
curl -X POST http://localhost:5002/dailytest   # Start all tests
curl -X POST http://localhost:5002/fuse/run    # Start the FUSE test individually
```

It’s important to note that this regression test framework relies on logic such as updating the Curvine code, which needs to be customized independently for each environment.

Regarding the second issue: this is an excellent suggestion. A minimal CI test suite would enable the CI checks to quickly verify whether new code breaks existing functionality. By the way, there are currently some known test failures in the project’s codebase. Contributions to help improve this are highly welcome!
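As a rough sketch of what such a minimal CI smoke suite could look like, the harness below runs a short list of cheap checks and fails the job if any break. Everything here is a placeholder assumption, not an existing Curvine script: in practice the commands would be a small, fast subset of the checks in build/run-tests.sh.

```python
import subprocess
import sys

# Placeholder commands -- swap in a small, fast subset of the real
# checks (e.g. from build/run-tests.sh). These are NOT Curvine's
# actual test entry points.
SMOKE_TESTS = [
    ("shell is available", ["sh", "-c", "exit 0"]),
]

def run_smoke_tests(tests):
    """Run each (name, command) pair and return the names that failed."""
    failures = []
    for name, cmd in tests:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            failures.append(name)
    return failures

if __name__ == "__main__":
    failed = run_smoke_tests(SMOKE_TESTS)
    if failed:
        print("FAILED:", ", ".join(failed))
        sys.exit(1)
    print("all smoke tests passed")
```

A script like this could run as a required GitHub Actions check on every pull request, while the heavier FUSE regression suite stays on the dedicated test machine.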
I’ve been running build/run-tests.sh on the main branch and encountering several test failures. It’s currently hard to tell whether these are new regressions or whether the tests/scripts themselves have gone stale.
I’d like to open a discussion around two key questions:

1. How are regressions currently validated — are the failures I’m seeing real regressions or outdated tests?
2. Could a minimal CI test suite be added so that CI checks catch breakage early?