The Trigger software is regularly tested in each nightly release build with ART (ATLAS Release Tester), which executes two series of tests:
The local tests are short (no longer than 1 hour, typically 5-10 minutes) and run on few events (20-100), with the full results available in the morning. The grid tests typically run on larger samples (500-1000 events) and some of them may take more than 10 hours.
Trigger ART tests are organised in four packages:
- `athena` with various menu configurations
- `Reco_tf` to steer the execution
- `athenaHLT` to test all workflows relevant for P1 operation in a similar environment
Each package contains a `test` directory where each ART test is defined as a separate python or shell script. All these scripts are installed in the release and can be executed directly (see the next section). The naming convention is `test_PKG_NAME_TYPE`, where PKG is a short identifier corresponding to each of the four packages (e.g. `trig`, `trigAna`, `trigP1`, `trigUpgr`), NAME describes the test and typically includes a string identifying the menu (e.g. "v1PhysP1" for the PhysicsP1_pp_ca_v1 menu), and TYPE is either `build` (for ART local) or `grid` (for ART grid).
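For example, the convention decomposes one of the installed test names (taken from the examples further below) as follows:

```shell
# test_trigP1_v1Dev_build.py
#      |      |     |
#      |      |     +-- TYPE = build  (ART local test)
#      |      +-------- NAME = v1Dev  (identifies the menu)
#      +--------------- PKG  = trigP1 (identifies the package)
```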
Results of all local and grid ART tests are available on the Trigger ART Monitor page:
An introduction to the page and what it offers was presented in the 28/08/2019 TGM.
to be added - work in progress
Each athena package may define a set of unit tests in its `CMakeLists.txt`. Unit tests should be very short jobs (order of 1 minute) which test a specific functionality contained within this package, with minimal dependencies on other packages. More information on how to define such a test is available in
The unit tests for all packages are executed as part of the nightly release build procedures and also as part of Merge Request testing (see below).
While developing trigger software, the developer would typically run a single test of their choice corresponding most closely to what they are changing. For example, if you are developing muon triggers, you would most likely run a muon slice build test on a signal (ttbar) MC sample, `test_trig_mc_v1Dev_slice_muon_build.py`. To do this, simply type in the script name and hit Enter. Tab auto-completion is available since the tests are installed in the release. For example:
```shell
asetup Athena,master,latest
source your-build-dir/architecture-tag/setup.sh
test_trig_mc_v1Dev_slice_muon_build.py
```
Each developer should also run a wider set of tests to make sure their changes are well integrated with other parts of the software. To facilitate this testing, a helper script `runTrigART.py` is available. It provides several modes of running (run with `--help` to see all), but the most common use case is the `-m` option, which runs a minimal set of tests giving good coverage of most cases. Use the option `-jN` to run the tests in N parallel jobs. A further option allows printing only the tests which would be executed, in order to learn which they are. At the time of writing, running the minimal local tests covers four ART local tests:
```shell
$ runTrigART.py -m -j4
INFO The following 4 tests will be executed:
INFO   test_trigAna_RDOtoRDOTrig_mt1_build.py
INFO   test_trigP1_v1Dev_build.py
INFO   test_trig_data_newJO_build.py
INFO   test_trig_data_v1Dev_build.py
(then the results follow after tests are finished)
```
The full results of each test are also available in the `runTrigART/results/runTrigART` directory created by the script. If `runTrigART.py` reports trigger count changes, follow the instructions in the corresponding section below.
You should also run all unit tests for the packages you modified. This can be done by running the `ctest` command from CMake in the build directory. Type `ctest --help` for all options; the most common are `-jN` to run N parallel jobs, `-N` to list all available tests, `--output-on-failure` to print the output to stdout if a test fails, and `--rerun-failed` to run only those tests which didn't succeed in the previous execution. Example:
```shell
ctest -j4 --output-on-failure --rerun-failed
```
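Since `ctest` is the standard CMake test driver, its usual name filtering also applies: the `-R` option restricts the run to tests whose names match a regular expression, which is convenient when only a single package was modified. The package name below is purely illustrative:

```shell
# Run only the unit tests whose name matches the pattern
# (-R is a standard ctest option; "MyTrigPackage" is a made-up name)
ctest -j4 --output-on-failure -R MyTrigPackage
```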
Each Merge Request (MR) to the athena repository triggers a Continuous Integration (CI) pipeline running the following steps:
The last step includes integration tests covering all domains of ATLAS software for a given project (Athena, AthSimulation, AthAnalysis, etc.), with the configuration defined in the atlas-sit/CI repository. Currently there are three sets of tests executed for Trigger: running athena on data, athena on MC, and athenaHLT on data. The Trigger CI tests simply execute a pre-defined set of Trigger ART local ("build") tests. The configuration is as follows:
The tests marked in green and with [!] in the table above include a step comparing trigger counts in the log file to a reference located in the source tree of athena. Any MR changing these counts must include an update of these references as part of the submitted change set. To do this, run the affected test in any of the above ways (either directly or with `runTrigART.py -m`) and follow the instructions printed by `chainComp`. For example, running `test_trig_data_v1Dev_build.py` may result in the following output:
```
chainComp  INFO Test file: ref_data_v1Dev_build.new
chainComp  INFO Reference file: /myWorkArea/build/x86_64-centos7-gcc8-opt/data/TriggerTest/ref_data_v1Dev_build.ref
chainComp  INFO Found 1 new chain added:
chainComp  INFO   HLT_j45_pf_subjesgscIS_ftf_boffperf_split_L1J20
chainComp  INFO Found 1 chain removed:
chainComp  INFO   HLT_2mu5_bUpsimumu_L12MU4
chainComp  INFO Found 3 chains with count differences:
chainComp  INFO   HLT_e3_etcut1step_g5_etcut_L12EM3:
chainComp  INFO     eventCount: 8 -> 10
chainComp  INFO     stepCounts:
chainComp  INFO       1: 8 -> 10
chainComp  INFO       2: 8 -> 10
chainComp  INFO     stepFeatures:
chainComp  INFO       1: 20 -> 23
chainComp  INFO       2: 20 -> 23
chainComp  INFO   HLT_e5_etcut_L1EM3:
chainComp  INFO     stepFeatures:
chainComp  INFO       1: 123 -> 148
chainComp  INFO   HLT_g5_tight_L1EM3:
chainComp  INFO     eventCount: 0 -> 1
chainComp  INFO     stepCounts:
chainComp  INFO       3: 0 -> 1
chainComp  INFO     stepFeatures:
chainComp  INFO       3: 0 -> 1
chainComp ERROR Trigger counts differ from the reference. If the above differences are intended, update the reference
chainComp  INFO Patch file created. To apply, run in the athena source directory:
chainComp  INFO git apply /myWorkArea/build/runTrigART/results/runTrigART/test_trig_data_v1Dev_build/ref_data_v1Dev_build.patch
chainComp  INFO Then check with git diff and, if everything is correct, add and commit
```
In this case the following steps are required:
1. Run the `git apply` command printed by `chainComp`.
2. Check with `git diff` that the changes are as expected.
3. `git add` the updated references, then commit and push to your branch.
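These steps can be rehearsed end-to-end in a throwaway repository. The sketch below mimics the workflow with an illustrative reference file and a patch fabricated on the spot; the file name, chain name and counts are made up, and in real use the patch is the one printed by `chainComp`:

```shell
# Self-contained rehearsal of the reference-update steps (all names illustrative)
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
printf 'HLT_e5_etcut_L1EM3:\n  eventCount: 8\n' > ref_data_v1Dev_build.ref
git add . && git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial reference'

# Fabricate a patch analogous to the one chainComp writes (count 8 -> 10)
sed 's/eventCount: 8/eventCount: 10/' ref_data_v1Dev_build.ref > tmp.ref
mv tmp.ref ref_data_v1Dev_build.ref
git diff > ref_data_v1Dev_build.patch
git checkout -- ref_data_v1Dev_build.ref   # restore, as if starting fresh

# The three steps from the text: apply, inspect, commit
git apply ref_data_v1Dev_build.patch
git diff                                   # verify the change looks as expected
git add ref_data_v1Dev_build.ref
git -c user.name=demo -c user.email=demo@example.com commit -qm 'Update trigger count references'
```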
NB1: After running both tests relying on reference comparison, it is perfectly fine (even recommended) to execute the
git apply commands from both tests one after another and then add them to a single commit.
NB2: It is currently not possible to download the reference patch file directly from CI machines. When the reference
comparison fails in CI, the patch will be printed to the log and can be copied as text from there. You can create a
patch file locally by pasting the contents, and use the same
git apply command with this file.
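For NB2, a minimal sketch of recreating the patch file from the CI log; the file name is arbitrary and the placeholder stands for the pasted text:

```shell
# Paste the patch text printed in the CI log between the EOF markers
cat > ref_update.patch <<'EOF'
<patch contents copied from the CI log>
EOF
git apply ref_update.patch
```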
NB3: A walkthrough video is available below, presenting the trigger (menu) development process from checking out the repository, through testing and updating the references, to pushing a new branch ready for opening an MR.
In addition to CI and nightly tests, large-statistics validation campaigns with MC and data are organised regularly by the Trigger Group. Specifically, these are:
Organised by the Trigger Release & Validation Coordinators. Running full MC production chain including trigger on 100k ttbar events and often also other samples. Histograms are produced using the PhysicsValidation transform and compared to a reference produced in the same way in a web display. Checked by Trigger Validation contacts from Signature Groups.
Organised by the Trigger Release & Validation Coordinators and Trigger Menus & Performance Coordinators. Running full MC production chain including trigger on small MC samples (10k events) for a large set of physics processes. These samples provide input for trigger performance studies for physics groups. Checked by analysis groups in coordination with Trigger representatives of the corresponding physics group.
Organised by the Trigger Operations & Data Quality Coordinators. Rerunning HLT with an updated menu on EnhancedBias stream collision data recorded earlier, followed by full offline reconstruction and monitoring as at Tier0. The results are compared in a web display to a reference produced in the same way. Checked by Trigger Signature Group On-Call Experts with support of the Trigger Validation contacts of each group.
What is the athena command running in test X?
There are two ways to check this - with the ART Monitor website or by running locally. To check in the website, navigate to the result for the test of interest and check its outputs as follows:
- Click `dir` in the Links column next to the test of your interest.
- Find the command in `commands.json` or directly in the stdout log.
To check locally, you can execute a test in 'dry mode', which only configures the steps without really running them. This only works for tests run with python scripts, not for the deprecated shell script versions. To run in dry mode, set the corresponding environment variable for the test. This will print all steps with the corresponding commands and also create the `commands.json` file.
How to run reconstruction on HLT output?
One can run HLT with many kinds of input and output formats, but not all outputs can be fully used for further processing. The table below lists all possible combinations in which HLT can be run and the level of support for reconstruction of the outputs. Names of example ART tests which validate the corresponding HLT+reco chain are also given. When chains of commands are run for HLT+reco, transform wrappers around the athena[HLT] commands are often used, e.g. `Trig_reco_tf`. "Limited" support means that the output may be missing some information needed for full offline reconstruction (e.g. detector data), but it will still include the HLT objects.
| Command | Input type | Input format | Output format | Reconstruction support | Test |
|---------|------------|--------------|---------------|------------------------|------|
| `athena.py` | MC | RDO | BS | limited | currently no reco test |
| `athena.py` | data | BS | RDO_TRIG | limited | currently no reco test |