Overall Protocol for Testing
Jon Clucas edited this page Dec 19, 2019 · 16 revisions
This document outlines the testing process that must be completed before a new release of C-PAC can be made.
- Nosetests must be run and all unit tests must pass.
- All fixes for issues on the kanban board must be tested to ensure that they have been fixed.
- GUI testing must be finished with no failures.
- Specific elements to look at:
- Subject list builder
- Pipeline configuration (individual-level)
- Group analysis
- Tooltips, in-app documentation, and config comments - make sure they are current.
- This can be performed contemporaneously with a regression test run.
- The regression test suite must be run using the benchmark dataset of 40 participants from ADHD200. The run will be done on dozer natively, without virtualization.
- The output directory will be '/tdata/CPAC/v<version number with no periods>/output/benchmark'.
- The working directory will be '/tdata/CPAC/v<version number with no periods>/working/benchmark'.
- The crash directory will be '/tdata/CPAC/v<version number with no periods>/crash/benchmark'.
- The logs directory will be '/tdata/CPAC/v<version number with no periods>/logs/benchmark'.
- Retention policy:
- Save the results for the last regression test for use in the benchmark package and for comparison to the results of the next release (~ 333 GB for each set of outputs at present).
- There will be a maximum of 2 regression test runs stored on dozer at any given time (one for the last release, one for the release currently under development).
- Components of the Regression Test Suite:
- Group analysis.
- An ANTS run.
- An FSL run.
- Resource allocation
- Correlations are run on all of the above.
- A script kicks off the regression test run and runs correlations automatically, tarring the results and sending them via e-mail when the run is complete.
- If all is well and we are ready to release, the output directory should be uploaded to AWS as the canonical benchmark results.
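The two-run retention policy above could be automated with a small cleanup step. A minimal sketch, assuming versioned run directories named `v<version>` under a common root; the function name is illustrative, not an existing C-PAC script:

```shell
# Hypothetical cleanup helper: keeps the two most recently modified
# versioned run directories (v*/) under the given root and deletes the
# rest, enforcing the two-run retention policy. Assumes directory names
# without spaces (e.g. v140, v150).
prune_old_runs() {
    ls -1dt "$1"/v*/ 2>/dev/null | tail -n +3 | while read -r old; do
        rm -rf "$old"
    done
}

# e.g. prune_old_runs /tdata/CPAC
```

`ls -t` orders by modification time, newest first, so `tail -n +3` selects everything past the two runs being kept.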
- Perform a fresh installation on several platforms using the install script.
- This step is combined with the small-scale tests. Small-scale tests cannot run within a container/on a platform unless the installation script works first.
- Platforms:
- CentOS 5 (Docker) - EOL : 2017-03-31
- CentOS 6 (Docker) - EOL : 2020-11-30
- CentOS 7 (Docker) - EOL : 2024-06-30
- Ubuntu 12.04 LTS (Docker) - EOL : 2017-04-26
- Ubuntu 14.04 LTS (Docker) - EOL : 2019-04
- Ubuntu 16.04 LTS (Docker) - EOL : 2021-04
- Ubuntu 16.10 (Docker) - EOL : 2017-07
- OS X (10.6 is available on the C-PAC Mac)
- Individual developer/tester laptops
- For platforms that are run within Docker containers, switch to interactive mode after a failure and troubleshoot from within the container, making changes to the cloned development-branch repository inside the container and pushing back to GitHub once the issues are resolved.
- Smaller-scale tests of two-participant sub-datasets are run on a variety of datasets and platforms. This is to ensure that runs execute successfully under diverse scenarios.
- A GitHub repository will contain a wide variety of pipeline configuration YAMLs to try out, with various options toggled on and off.
- Platforms (Linux distribution versions that are end-of-lifed will not be tested):
- AWS : Perform a run on a freshly-formed AMI that tests the AWS functions (i.e., uses data stored in S3 for a participant list, puts data into S3 for outputs).
- CentOS 5 (Docker) - EOL : 2017-03-31
- CentOS 6 (Docker) - EOL : 2020-11-30
- CentOS 7 (Docker) - EOL : 2024-06-30
- Ubuntu 12.04 LTS (Docker) - EOL : 2017-04-26
- Ubuntu 14.04 LTS (Docker) - EOL : 2019-04
- Ubuntu 16.04 LTS (Docker) - EOL : 2021-04
- Ubuntu 16.10 (Docker) - EOL : 2017-07
- OS X (10.6 is available on the C-PAC Mac)
- Individual developer/tester laptops
- Datasets
- Pipelines should be configured to use the following directories on each platform:
- Output directory : '/tdata/CPAC/<platform>/v<version number with no periods>/output/<dataset name>'.
- Working directory : '/tdata/CPAC/<platform>/v<version number with no periods>/working/<dataset name>'.
- Crash directory : '/tdata/CPAC/<platform>/v<version number with no periods>/crash/<dataset name>'.
- Logs directory : '/tdata/CPAC/<platform>/v<version number with no periods>/logs/<dataset name>'.
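The directory template above can be expanded mechanically per platform, version, and dataset. A sketch; the helper name and the example platform/version/dataset values are illustrative, not fixed project conventions:

```shell
# Hypothetical helper: expands the test-directory template for a given
# platform, C-PAC version (periods stripped), and dataset name.
cpac_test_dirs() {
    local platform=$1 version=$2 dataset=$3
    for kind in output working crash logs; do
        echo "/tdata/CPAC/${platform}/v${version//./}/${kind}/${dataset}"
    done
}

cpac_test_dirs centos7 1.4.0 ADHD200
# → /tdata/CPAC/centos7/v140/output/ADHD200 (then working, crash, logs)
```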
- When these tests are run, crash files and logs will be aggregated and examined before the final release. If anything needs to be fixed, C-PAC will be patched, the pipeline that produced the crash will be re-run, and regression testing will be repeated.
- Don't forget to periodically remove old Docker images. Assuming that only unlabeled images are used for small-scale testing, you may use the following command to remove them:
docker rmi $(docker images | grep '^<none>' | awk '{print $3}')
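The pipeline's field extraction can be sanity-checked without touching the Docker daemon: `awk '{print $3}'` selects the third whitespace-separated column (IMAGE ID) from lines whose repository is `<none>`. The sample listing below is illustrative:

```shell
# Feed sample `docker images` output through the same grep/awk pipeline;
# the third field is the image ID that `docker rmi` would receive.
sample='REPOSITORY          TAG       IMAGE ID       CREATED        SIZE
<none>              <none>    4e38e38c8ce0   2 weeks ago    190MB
ubuntu              16.04     5e8b97a2a082   3 weeks ago    114MB'
printf '%s\n' "$sample" | grep '^<none>' | awk '{print $3}'
# prints 4e38e38c8ce0
```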
- We should also test C-PAC in environments that make use of the following resource schedulers:
- SLURM
- PBS
- SGE
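For the scheduler environments, a minimal submission script can exercise a small-scale run. The sketch below targets SLURM; the resource values are illustrative placeholders, and the C-PAC invocation is left as a comment because the exact entry point depends on the installation under test:

```shell
#!/bin/bash
#SBATCH --job-name=cpac-smoke-test
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --time=12:00:00

# Illustrative placeholder: launch C-PAC on a two-participant sub-dataset
# using whatever entry point the installation under test provides, e.g.:
# <cpac entry point> /path/to/pipeline_config.yml /path/to/participant_list.yml
```

Equivalent scripts for PBS (`#PBS` directives, `qsub`) and SGE (`#$` directives, `qsub`) should exercise the same run.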