A Correlator-Beamformer (CBF) unit- and acceptance-testing framework for MeerKAT digital signal processing.
Clone the repository, including all of its submodules:
git clone --recursive git@github.com:ska-sa/mkat_fpga_tests.git
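If the repository was cloned without the --recursive flag, the submodules can still be fetched afterwards:
# Fetch and initialise all submodules in an existing clone
git submodule update --init --recursive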
Also see: opt/dsim_dependencies/README.md
List of dependencies:
- katcp-python
- casperfpga
- corr2
- nosekatreport
- spead2 v1.1.1 (building spead2 requires gcc 4.9.3)
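If installing the dependencies by hand rather than via the automated bootstrap described below, the pinned spead2 release can be installed with pip (assuming that version is still published on PyPI):
# Install the pinned spead2 release; needs a suitable gcc to compile
pip install spead2==1.1.1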
It is highly recommended to set up a Python virtual environment before continuing; step-by-step instructions follow below.
# Install Python essentials and pip
curl -s https://bootstrap.pypa.io/get-pip.py | python
pip install --user -U virtualenv # or $ sudo pip install -U virtualenv
# Automagic installation of all dependencies in a virtualenv
make bootstrap
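For reference, below is a rough sketch of what make bootstrap automates (the venv directory name is an assumption; the Makefile is the authoritative source):
# Create a virtualenv, activate it, and install the pinned dependencies
virtualenv venv
. venv/bin/activate
pip install -r pip-requirements.txt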
Running the unit tests:
# This will run all unit-tests defined in mkat_fpga_tests/test_cbf.py
make tests
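To run a subset rather than the whole suite, nosetests can also target a test class or method directly (the class and test names below are illustrative, not actual names from the suite):
# Run a single test class or test method from the suite
nosetests -v mkat_fpga_tests/test_cbf.py:TestCBF
nosetests -v mkat_fpga_tests/test_cbf.py:TestCBF.test_channelisation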
The run_cbf_tests.py script has several options; see run_cbf_tests.py --help for up-to-date details.
(Test)mmphego@dbelab04:~/src/mkat_fpga_tests$ ./run_cbf_tests.py
usage: run_cbf_tests.py [-h] [--loglevel LOG_LEVEL] [-q] [--nose NOSE_ARGS]
[--acceptance SITE_ACCEPTANCE] [--instrument-activate]
[--dry_run] [--no-manual-test] [--available-tests]
[--4k] [--array_release_x] [--1k] [--32k] [--quick]
[--with_html] [--QTP] [--QTR] [--no_slow]
[--report REPORT] [--clean] [--dev_update]
This script auto executes CBF Tests with selected arguments.
optional arguments:
-h, --help show this help message and exit
--loglevel LOG_LEVEL log level to use, default INFO, options INFO, DEBUG,
WARNING, ERROR
-q, --quiet Be more quiet
--nose NOSE_ARGS Additional arguments to pass on to nosetests. eg:
--nosetests -x -s -v
--acceptance SITE_ACCEPTANCE
Will only run test marked '@site_acceptance' or if in
the Karoo(site) then also @site_only tests
--instrument-activate
launch an instrument. eg:./run_cbf_tests.py -v
--instrument-activate --4A4k
--dry_run Do a dry run. Print commands that would be called as
well as generate test procedures
--no-manual-test Exclude manual tests decorated with @manual_test in
this test run
--available-tests Do a dry run. Print all tests available
--4k Run the tests decorated with @instrument_4k
--array_release_x Run the tests decorated with @array_release_x
--1k Run the tests decorated with @instrument_1k
--32k Run the tests decorated with @instrument_32k
--quick Only generate a small subset of the reports
--with_html Generate HTML report output
--QTP Generate PDF report output with Qualification Test
Procedure
--QTR Generate PDF report output with Qualification Test
Report
--no_slow Exclude tests decorated with @slow in this test run
--report REPORT Only generate the reports. No tests will be run. Valid
options are: local, jenkins, skip and results.
'results' will print the katreport[_accept].json test
results
--clean Cleanup reports from previous test run. Reports are
replaced by default without --clean. Clean is useful
with --quick to only generate the html of the test run
report
--dev_update Do pip install update and install latest packages
--sensor_logs Generates a log report of the sensor errors and
warnings that occurred during the test run.
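As a concrete example, a typical invocation combines an instrument selector with report options:
# Run the 4k-instrument tests, skip slow tests, and produce a QTR PDF
./run_cbf_tests.py --4k --no_slow --QTR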
Documentation is generated with Sphinx and LaTeX; the Python requirements for this are already listed in pip-requirements.txt.
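If building the documentation by hand, Sphinx's LaTeX builder can be invoked directly (the docs/ and build/ directory names below are assumptions; check the repository layout):
# Build LaTeX sources with Sphinx, then compile them to PDF
sphinx-build -b latex docs/ build/latex
make -C build/latex all-pdf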
- See: README.md
- See: run_cbf_tests.py
- Report generation seems to be very tedious and needs some improvement.
- see run_cbf_tests.py
- see report.py; the release should not be hard-coded.
- see process_core_xml.py; this script converts CORE.xml (a CORE export) into a JSON file used when extracting the REQs and so on, which are then converted to .rst files for report.py to generate a LaTeX document... I am sure that can be improved or moved into its own repository.
- Improve test_cbf.py and aqf_utils.py.
- Test the common case of everything you can. This will tell you when that code breaks after you make some change (which is, in my opinion, the single greatest benefit of automated unit testing).
- Test the edge cases of any unusually complex code that you think will probably have errors.
- Whenever you find a bug, write a test case to cover it before fixing it (see the sketch after this list).
- Add edge-case tests to less critical code whenever someone has time to kill
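A sketch of that bug-first workflow, with a hypothetical test name:
# Run only the new regression test; expect it to fail before the fix
nosetests -x mkat_fpga_tests/test_cbf.py:TestCBF.test_bug_regression
# Apply the fix, then re-run the same command until it passes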
- “Code is like humour. When you have to explain it, it’s bad.” – Cory House
- Mpho Mphego
- Alec Rust