
Technical Discussion & Strategy

Adam De Fouw edited this page Mar 2, 2021 · 52 revisions

Overview

Here's a place where our technical discussions can land in a semi-formalized manner.

We can look at the advantages and disadvantages of different strategies so we can come to conclusions based upon well-considered analysis of different positions.

Topics

Seeding Strategy

We need to look at where we want to perform the setup and teardown of database seeds within the framework. This will help to shape a standard for core testing and define how we will structure tests moving forward.

There are basically three different ways we can seed the database.

  1. Before Every Assertion
  2. Before Specific Group of Assertions
  3. Before Test Suite Begins
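The three placements above can be sketched as a standalone script. Here `seedDatabase()` is hypothetical — a stand-in for whatever task actually restores a known REDCap database state (in real specs it would likely be a `cy.task()` or similar); the hook placements are simulated inline so the ordering runs outside Cypress.

```javascript
// Standalone sketch of where each seeding option would hook in.
// seedDatabase() is hypothetical -- a stand-in for whatever restores
// a known REDCap database state.
const log = [];
const seedDatabase = (scope) => log.push(`seed(${scope})`);

// Option 3: once, before the whole suite (a before() in /support/index.js).
seedDatabase('suite');

// Option 2: once, before a specific group of assertions
// (a before() block in a spec such as compare.js).
seedDatabase('group');

const assertions = ['it: first assertion', 'it: second assertion'];
for (const name of assertions) {
  // Option 1: before every assertion (a beforeEach() in the spec).
  seedDatabase('assertion');
  log.push(name);
}

console.log(log.join(' -> '));
```

The tables below weigh each of these placements in turn.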

Before Every Assertion (e.g. before every single it() block in /integration/core/data/compare.js)

| | Advantages | Disadvantages |
| --- | --- | --- |
| Known DB State | Always in a known database state within an individual assertion. | Tightly couples tests to a specific database structure. The database structure of REDCap inevitably changes over time, so this could become a maintenance issue. Need to look at how Vanderbilt handles database migrations. |
| Determinism | Test seeds that happen upstream shouldn't affect the current assertion. | |
| Programmer Effort Level | | Effort level is very high: each assertion needs its own database seed. |

Before Specific Group of Assertions (e.g. within before() block of /integration/core/data/compare.js)

| | Advantages | Disadvantages |
| --- | --- | --- |
| Known DB State | Usually in a known database state within an individual assertion. | Could still have side effects happen upstream that impact current tests. Can be mitigated by using smart seeds in the before() block. |
| Determinism | Upstream seeds typically shouldn't affect determinism if appropriate state changes happen at this level. | Something could be introduced upstream in the future that could have an impact. |
| Programmer Effort Level | Some effort required, but not as much as database seeding before every assertion. | |

Before Test Suite Begins (e.g. within the before() block of /support/index.js)

| | Advantages | Disadvantages |
| --- | --- | --- |
| Known DB State | | Only in a known database state before the first test spec runs. |
| Determinism | | Extremely low. Very difficult to know whether tests are passing because of what you're currently doing or because of what a different test did earlier. |
| Programmer Effort Level | Extremely low effort required. | |
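The determinism risk with suite-level seeding can be made concrete with a small sketch: an earlier test mutates shared state, and a later test passes only because of that mutation. All names here are illustrative.

```javascript
// Sketch of the determinism problem when the database is seeded only
// once, before the whole suite. All names are illustrative.
const db = { records: 1 }; // seeded once, before the suite

function testA() {
  // Test A adds a record as a side effect of what it exercises.
  db.records += 1;
  return db.records >= 2;
}

function testB() {
  // Test B's expectation silently encodes Test A's side effect.
  // Run Test B alone against a fresh seed and it would fail.
  return db.records === 2;
}

const resultA = testA();
const resultB = testB();
console.log(resultA, resultB);
```

Both tests pass when run in order, so nothing flags that Test B depends on Test A — which is exactly why it's hard to know *why* a test is passing under this strategy.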

Analysis of Export Files

During our meeting on 02/17/2021, we pondered at what level we should test the file exports that come out of REDCap. We know that there are tests that require us to check whether we can export data to CSV, SPSS, SAS, R, Stata, and CDISC XML formats, for instance.

The question is this:

At what level do we want to test export file formats and the data exported into them?

Thoughts:

We could do the easy thing and just check whether the "ability exists" (in other words, whether the export option is available in REDCap). However, it would be more robust to inspect the actual file that is exported.

Thus, some ideas considered during the meeting were the following:

  1. Looking for a non-zero length file
  2. Examining the file headers for a specific file

There was also the thought that we should look at what the Manual Testing group does in terms of checking on the file format.

What are your thoughts?
