
Exercism Dyalog APL Test Runner

The Docker image that automatically runs tests on Dyalog APL solutions submitted to Exercism.

Developing the Runner

The runner consists of functions in the APLSource directory. It is developed using Link.

In Dyalog:

]LINK.Create # APLSource

The Run function is the interface; it simply returns the results object as JSON text. The script bin/dyalog-apl-runner.apls reads its parameters from the command line and writes the output of the Run function to the specified file.
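
For reference, the results object follows Exercism's version 2 test runner interface; a minimal sketch, with purely illustrative field values:

    {
      "version": 2,
      "status": "fail",
      "tests": [
        {
          "name": "TestLeapYear",
          "status": "fail",
          "message": "expected 1900 not to be a leap year"
        }
      ]
    }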

Design

This implementation of the version 2 test runner interface supports multiple tests, with one APL function per test. Each test function lives in its own .aplf file, which allows natural development and debugging using Link.

The tests make use of an Assert function whose left argument is the message to display to the user if the test case fails. In this way, a single test function can check multiple aspects of an exercise and usefully report failures back to the user.
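
A minimal sketch of such a test function, assuming a hypothetical LeapYear exercise (the exercise, function names, and Assert's exact definition are illustrative, not taken from this repository):

    ∇ TestLeapYear
      ⍝ Each Assert call checks one aspect of the solution; the left
      ⍝ argument is the failure message reported back to the user
      'expected 1996 to be a leap year' Assert 1=LeapYear 1996
      'expected 1900 not to be a leap year' Assert 0=LeapYear 1900
    ∇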

Run the test runner

To run the tests of a single solution, do the following:

  1. Open a terminal in the project's root
  2. Run ./bin/run.sh <exercise-slug> <solution-dir> <output-dir>

Once the test runner has finished, its results will be written to <output-dir>/results.json.
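
For example (the two-fer slug and the paths are illustrative):

    ./bin/run.sh two-fer /path/to/solution/ /path/to/output/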

Run the test runner on a solution using Docker

This script is provided for testing purposes, as it mimics how test runners run in Exercism's production environment.

To run the tests of a single solution using the Docker image, do the following:

  1. Open a terminal in the project's root
  2. Run ./bin/run-in-docker.sh <exercise-slug> <solution-dir> <output-dir>

Once the test runner has finished, its results will be written to <output-dir>/results.json.
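
For example, mirroring the invocation above (arguments are again illustrative):

    ./bin/run-in-docker.sh two-fer /path/to/solution/ /path/to/output/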

Run the tests

To run the tests to verify the behavior of the test runner, do the following:

  1. Open a terminal in the project's root
  2. Run ./bin/run-tests.sh

These are golden tests that compare the results.json generated by running the current state of the code against the "known good" tests/<test-name>/expected_results.json. All files created during the test run itself are discarded.

When you've made modifications to the code that will result in a new "golden" state, you'll need to update the affected tests/<test-name>/expected_results.json file(s).
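
Assuming the conventional layout (sketched here with a hypothetical two-fer test case; the actual contents of each test directory may differ):

    tests/
    └── two-fer/
        ├── expected_results.json    (the "known good" output)
        └── ...                      (the solution files under test)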

Run the tests using Docker

This script is provided for testing purposes, as it mimics how test runners run in Exercism's production environment.

To run the tests to verify the behavior of the test runner using the Docker image, do the following:

  1. Open a terminal in the project's root
  2. Run ./bin/run-tests-in-docker.sh

These are the same golden tests described above: the results.json generated by running the current state of the code is compared against the "known good" tests/<test-name>/expected_results.json, and all files created during the test run itself are discarded.

As before, when you've made modifications to the code that result in a new "golden" state, you'll need to update the affected tests/<test-name>/expected_results.json file(s).

Benchmarking

There are two scripts you can use to benchmark the test runner:

  1. ./bin/benchmark.sh: benchmark the test runner code
  2. ./bin/benchmark-in-docker.sh: benchmark the Docker image

These scripts give a rough estimate of the test runner's performance. Bear in mind, though, that performance on Exercism's production servers is often lower.
