Trigger Validation

Last update: 21 Jan 2025

ART tests

The Trigger software is regularly tested in each nightly release build with ART (ATLAS Release Tester), which executes two series of tests:

  • ART local, executed on a single machine (per release) at CERN directly after the nightly release is built
  • ART grid, where each test is submitted as a job to the Grid

The local tests are short (no longer than 1 hour, typically 5-10 minutes) and run on few events (20-100), with the full results available in the morning, whereas the grid tests typically run on larger samples (500-1000 events) and some of them may take more than 10 hours. The output of the tests is available on EOS:

  • ART local: /eos/atlas/atlascerngroupdisk/data-art/local-output/
  • ART grid: /eos/atlas/atlascerngroupdisk/data-art/grid-output
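For example, these areas can be browsed directly on any machine with EOS access (such as lxplus); the subdirectory layout below the top level (branch, project, package, test name) may vary:

ls /eos/atlas/atlascerngroupdisk/data-art/grid-output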

Trigger ART tests are organised in three packages:

  • TriggerTest - runs the trigger with offline athena in various menu configurations
  • TrigAnalysisTest - jobs testing the integration of the trigger with offline workflows and analysis tools, typically using Reco_tf to steer the execution
  • TrigP1Test - runs the trigger with athenaHLT to test all workflows relevant for P1 operation in a similar environment

Each package contains a test/ directory in which each ART test is defined as a separate python or shell script. These scripts are installed in the release and can be executed directly (see the next section). The naming convention is test_PKG_NAME_TYPE, where PKG is trig, trigAna or trigP1, corresponding to the three packages; NAME describes the test and typically includes a string identifying the menu (e.g. v1PhysP1); and TYPE is either build (for ART local) or grid (for ART grid).
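For illustration, some test names used elsewhere on this page, decomposed according to this convention:

test_trig_data_v1Dev_build.py             # PKG=trig (TriggerTest), NAME=data_v1Dev, TYPE=build
test_trigAna_RDOtoRDOTrig_v1Dev_build.py  # PKG=trigAna (TrigAnalysisTest), NAME=RDOtoRDOTrig_v1Dev, TYPE=build
test_trigP1_v1PhysP1_build.py             # PKG=trigP1 (TrigP1Test), NAME=v1PhysP1, TYPE=build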

Trigger ART Monitor website

Results of all local and grid ART tests are available in the Trigger ART Monitor home page: https://test-atrvshft.web.cern.ch/test-atrvshft/ART_monitor/

An introduction to the page and what it offers was presented at the 28/08/2019 TGM.

The ART Monitor frontend (responsible for the web display) and backend (responsible for collecting the nightly results) do not communicate directly. Instead, the backend is run periodically during the day (as a crontab job on lxplus under the atrvshft account) and stores updated output on EOS under the ART Monitor assets directory. From there it is accessed by the frontend whenever the web page is opened.
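As a rough sketch, such scheduling corresponds to an ordinary crontab entry; the script path and interval below are purely hypothetical, not taken from the actual atrvshft crontab:

0 */6 * * * /path/to/art-monitor-backend/update.sh  # hypothetical script, run every 6 hours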

Trigger counts monitoring

Trigger counts monitoring is part of the Trigger ART Monitoring, and its results are available on the Trigger ART Monitor website.

Trigger counts are monitored daily using the output from a selection of nightly grid tests. The monitored quantities are the accept counts of L1 items and HLT chains, as well as the counts at every HLT chain step. The files containing these counts are L1AV.txt, HLTChain.txt and HLTDecision.txt, respectively, and are located in the output directory of each nightly test. The file TotalCounts.txt contains the total number of processed events and is also required by the counts analysis to confirm the validity of the nightly test output.

The analysis of these count files is done in the Trigger ART Monitor backend, which simply collects the information and stores it in the ART Monitor assets area as a single file, summaryCountChecker.json.

An overview of the trigger counts check result is available in the lower section of the Trigger ART Monitor home page. An overview of all tests can be seen on the Summary Page, from where the counts analysis for each individual test can be reached.

The ART Monitor web frontend is responsible for rendering the web pages, comparing daily results from the assets file, and marking unexpected jumps or errors. All the logic for warning thresholds, color coding, etc. is implemented there.

Trigger EDM size monitoring

Trigger container sizes are monitored on the TriggerEDMSizeMonitoring webpage for every nightly of both the main and the current P1/Tier0 branch. The sizes are derived from the ART test AOD output of the following three test types:

  • Test test_trigP1_v1PhysP1_T0Mon, which produces AODFULL content (all containers for AOD) for reprocessed EnhancedBias data, using the PhysicsP1_pp_run3 menu with HLTReprocessing prescale set.
  • Test test_trigAna_RDOtoAOD_v1Dev, which produces AODFULL content for simulated ttbar Monte Carlo events, using the Dev_pp_run3_v1 menu.
  • Test test_trigAna_RDOtoAOD_v1MC, which produces AODSLIM content (containers for MC AOD samples of bulk physics analyses) for simulated ttbar Monte Carlo events, using the MC_pp_run3_v1 menu.

Sizes are categorised by signature. To inspect individual container sizes between two nightlies, follow the "see full results" link next to any category. A summary table will display a "Container diff" link between any two nightlies, which takes you to a detailed table of the sizes of all individual containers in the AOD. Note that EDM sizes measured in tests on data may not accurately reflect the average size in the physics stream, because different prescale sets are used; the monitored sizes tend to underestimate those in actual physics data, as physics prescales naturally bias the physics stream towards more energetic, higher-multiplicity events.

Unit tests

Each athena package may define a set of unit tests in its CMakeLists.txt. Unit tests should be very short jobs (of order 1 minute) that test a specific functionality contained within the package, with minimal dependencies on other packages. For more information on how to define such a test, see the Athena CMake documentation. The unit tests for all packages are executed as part of the nightly release build procedure and also as part of Merge Request testing (see below); they can also be run locally, as shown in the sketch below.
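For example, from a build directory the unit tests of a single package can be run with ctest; the package name below is a placeholder:

ctest -R TrigCompositeUtils --output-on-failure  # -R selects tests matching a regular expression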

Testing during development

When writing code

While developing trigger software, you would typically run a single test that corresponds most closely to what you are changing. For example, if you are developing muon triggers, you would most likely run a muon signature build test on a signal (ttbar) MC sample, test_trig_mc_v1Dev_slice_muon_build.py. To do this, simply type the script name and hit Enter; tab auto-completion is available, since the tests are installed in the release.

If you are outside CERN, you first need to create a Kerberos ticket to access the input files:

kinit USER@CERN.CH # <-- needed if you are outside CERN

where you should replace USER with your username.

Then you can run a test. For example:

asetup Athena,24.0,latest               # set up the Athena release (here: the latest 24.0 nightly)
source x86_64-el9-gcc13-opt/setup.sh    # when working with a local build, source its setup from the build directory
test_trig_mc_v1Dev_slice_muon_build.py  # run the chosen test

If you only want to run a subset of chains, you can modify the command (see FAQ below) by using the Trigger.selectChains flag:

athena.py ... TriggerJobOpts/runHLT.py Trigger.selectChains=[\"HLT_2mu3_L12MU3V\"] ...
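A more complete sketch of such a command, assuming a local RDO input file and standard athena options (the input file name, event count and chain selection are placeholders):

athena.py --filesInput=myRDO.pool.root --evtMax=20 TriggerJobOpts/runHLT.py Trigger.selectChains="['HLT_2mu3_L12MU3V']"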

Before submitting a Merge Request

In order to avoid delays when submitting a merge request, running the following minimal tests locally is recommended:

1. Use runTrigART.py
Each developer should run a wider set of tests to make sure their changes are well integrated with other parts of the software. To facilitate this, a helper script, runTrigART.py, is available. It provides several modes of running (run with --help to see all), but the most common use case is the -m option, which runs a minimal set of tests giving good coverage of most cases. Use the -jN option to run the tests in N parallel jobs. The -d option only prints the tests that would be executed, without running them. At the time of writing, running the minimal local tests would execute the following:

$ runTrigART.py -m -j4
INFO     The following 3 tests will be executed:
INFO         test_trigAna_RDOtoRDOTrig_v1Dev_build.py
INFO         test_trigP1_v1Dev_decodeBS_build.py
INFO         test_trig_data_v1Dev_build.py

The full results of each test are also available in the runTrigART/results/runTrigART directory created by the script.

If a test fails, fix the problem and rerun the relevant test, either manually or using the --rerun-failed option. If runTrigART.py reports trigger count changes, follow the instructions in the corresponding section below.
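For example, after fixing the problem (using the options described above):

runTrigART.py --rerun-failed -j4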

2. Use ctest
You should also run all the unit tests for the packages you modified. This can be done by running CMake's ctest command in the build directory. Run ctest --help for all options; the most common are -jN to run N parallel jobs, -N to list all available tests, --output-on-failure to print a test's output to stdout if it fails, and --rerun-failed to run only those tests which did not succeed in the previous execution. Example:

ctest -j4 --output-on-failure --rerun-failed

Continuous Integration tests

Each Merge Request (MR) to the athena repository triggers a Continuous Integration (CI) pipeline running the following steps:

  1. Compile athena external dependencies
  2. Compile athena including changes submitted in the MR
  3. Run the unit tests for the packages affected by the MR
  4. Run a pre-defined fixed set of CI tests

The last step includes integration tests covering all domains of ATLAS software for a given project (Athena, AthSimulation, AthAnalysis, etc.). The CI tests are configured in the athena repository; e.g. tests of the Athena project in the main branch are defined in AtlasTest/CITest/Athena.cmake. There are three main sets of tests checking trigger functionality: running athena on data, athena on MC, and athenaHLT on data. These CI tests simply execute a pre-defined set of ART local (“build”) tests. In the main branch, tests of the Phase-II trigger are also performed. The configuration of the three main sets is as follows:

main, Athena:
  • Trigger_athena_data: test_trig_data_v1Dev_build.py
  • Trigger_athena_MC: test_trig_mc_v1DevHI_build.py[!], test_trig_mc_v1Dev_ITk_ttbar200PU_build.py [Phase-II], test_FPGATrackSimWorkflow.sh [Phase-II]
  • Trigger_athenaHLT: test_trigP1_v1Dev_decodeBS_build.py[!], test_trigP1_v1PhysP1_build.py, test_trigP1_v1Cosmic_build.py

24.0, Athena:
  • Trigger_athena_data: test_trig_data_v1Dev_build.py
  • Trigger_athena_MC: test_trigAna_RDOtoRDOTrig_v1Dev_build.py[!], test_trig_mc_v1DevHI_build.py[!]
  • Trigger_athenaHLT: test_trigP1_v1Dev_decodeBS_build.py[!], test_trigP1_v1PhysP1_build.py, test_trigP1_v1Cosmic_build.py

The tests marked with [Phase-II] (shown in blue on the web page) run Phase-II trigger simulation.

In addition to the tests in the table above, the CI runs unit tests of the TriggerJobOpts.TriggerConfigFlags module with python -m TriggerJobOpts.TriggerConfigFlags --verbose.

Updating reference files for MRs changing trigger counts

The tests marked with [!] in the table above (shown in green on the web page) include a step comparing the trigger counts in the log file to a reference located in the athena source tree. Any MR changing these counts must include an update of these references as part of the submitted change set. To do this, run the affected test in any of the above ways (either directly or with runTrigART.py -m) and follow the instructions printed by chainComp. For example, running test_trig_data_v1Dev_build.py may result in the following printout:

chainComp INFO     Test file: ref_data_v1Dev_build.new
chainComp INFO     Reference file: /myWorkArea/build/x86_64-centos7-gcc8-opt/data/TriggerTest/ref_data_v1Dev_build.ref
chainComp INFO     Found 1 new chain added:
chainComp INFO       HLT_j45_pf_subjesgscIS_ftf_boffperf_split_L1J20
chainComp INFO     Found 1 chain removed:
chainComp INFO       HLT_2mu5_bUpsimumu_L12MU4
chainComp INFO     Found 3 chains with count differences:
chainComp INFO       HLT_e3_etcut1step_g5_etcut_L12EM3:
chainComp INFO         eventCount: 8 -> 10
chainComp INFO         stepCounts:
chainComp INFO           1: 8 -> 10
chainComp INFO           2: 8 -> 10
chainComp INFO         stepFeatures:
chainComp INFO           1: 20 -> 23
chainComp INFO           2: 20 -> 23
chainComp INFO       HLT_e5_etcut_L1EM3:
chainComp INFO         stepFeatures:
chainComp INFO           1: 123 -> 148
chainComp INFO       HLT_g5_tight_L1EM3:
chainComp INFO         eventCount: 0 -> 1
chainComp INFO         stepCounts:
chainComp INFO           3: 0 -> 1
chainComp INFO         stepFeatures:
chainComp INFO           3: 0 -> 1
chainComp ERROR    Trigger counts differ from the reference. If the above differences are intended, update the reference
chainComp INFO     Patch file created. To apply, run in the athena source directory:
chainComp INFO     git apply /myWorkArea/build/runTrigART/results/runTrigART/test_trig_data_v1Dev_build/ref_data_v1Dev_build.patch
chainComp INFO     Then check with git diff and, if everything is correct, add and commit

In this case the following steps are required (a combined command sketch follows the list):

  1. Verify that the printed differences are all intended and expected.
  2. Go to the athena source directory.
  3. Execute the git apply command printed by chainComp.
  4. Check again with git diff if the changes are as expected.
  5. If everything looks good, stage the updated file with git add and then commit and push to your branch.
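
Put together, the sequence might look as follows; the source directory, commit message and branch name are placeholders, while the patch path is the one printed by chainComp above:

cd /path/to/athena   # your athena source directory
git apply /myWorkArea/build/runTrigART/results/runTrigART/test_trig_data_v1Dev_build/ref_data_v1Dev_build.patch
git diff             # verify the changes match the printed differences
git add -u           # stage the updated reference file(s)
git commit -m "Update trigger count references"
git push origin my-feature-branch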

Tip: After running both tests relying on reference comparison, it is perfectly fine (even recommended) to execute the suggested git apply commands from both tests one after the other and then include both updates in a single commit.

Warning: It is currently not possible to download the reference patch file directly from the CI machines. When the reference comparison fails in CI, the patch is printed to the log and can be copied as text from there. You can create a patch file locally by pasting the contents, and use the same git apply command with this file.
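A minimal sketch of this workaround (the local file name is arbitrary):

cat > ref_update.patch   # paste the patch text copied from the CI log, finish with Ctrl-D
git apply ref_update.patch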

A walkthrough video is available below, presenting the trigger (menu) development process from checking out the repository, through testing and updating the references, to pushing a new branch ready for opening an MR.

Large-stat MC and data validation

In addition to the CI and nightly tests, large-statistics validation campaigns with MC and data are organised regularly by the Trigger Group. Specifically, these are:

Sample A validation

Organised by the Trigger Release & Validation Coordinators (e.g. ATR-29909). The full MC production chain, including the trigger, is run on 100k ttbar events and often also on other samples. Histograms are produced using the PhysicsValidation transform and compared in a web display to a reference produced in the same way. Checked by the Trigger Validation contacts from the Signature Groups.

Sample T validation

Organised by the Trigger Release & Validation Coordinators and the Trigger Menus & Performance Coordinators (e.g. ATR-29779). The full MC production chain, including the trigger, is run on small MC samples (10k events) for a large set of physics processes. These samples provide input for trigger performance studies by physics groups. Checked by analysis groups in coordination with the Trigger representatives of the corresponding physics group.

HLT data reprocessing

Organised by the Trigger Release & Validation Coordinators and the Trigger Operations & Data Quality Coordinators (e.g. ATR-30624). The HLT is rerun with an updated menu on previously recorded EnhancedBias-stream collision data, followed by full offline reconstruction and Tier0-style monitoring. The results are compared in a web display to a reference produced in the same way. Checked by the Trigger Signature Group On-Call Experts with support from the Trigger Validation contacts of each group.

Frequently Asked Questions

What is the athena command running in test X?

There are two ways to check this: via the ART Monitor website or by running locally. To check on the website, navigate to the result for the test of interest and inspect its outputs as follows:

  1. Go to the ART Monitor website
  2. Follow the [Expertpage] link next to a recent date for either build or grid results, depending on your interest.
  3. Select branch and project, e.g. 24.0_Athena_x86_64-el9-gcc13-opt and then package, e.g. TriggerTest
  4. Click on dir in the Links column next to the test of your interest.
  5. Open the file commands.json, or look directly at stdout.txt, for the full command line.

To check locally, you can execute a test in “dry mode”, which only configures the steps without actually running them. This works only for tests implemented as python scripts, not for the deprecated shell-script versions. To run in dry mode, set the corresponding environment variable for the test, for example:

TRIGVALSTEERING_DRY_RUN=1 test_trig_data_v1Dev_build.py

will print all steps with the corresponding commands and also create the commands.json file.
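The generated commands.json can then be pretty-printed, for example with Python's built-in JSON tool:

python -m json.tool commands.json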

How to run reconstruction on HLT output?

One can run the HLT with many kinds of input and output formats, but not all outputs can be fully used for further processing. The table below lists all possible combinations in which the HLT can be run and the level of reconstruction support for the outputs. Names of example ART tests which validate the corresponding HLT+reco chain are also given. When chains of commands are run for HLT+reco, transform wrappers around the athena[HLT] commands are often used: Reco_tf and Trig_reco_tf. “Limited” support means that the output may be missing some information needed for full offline reconstruction (e.g. detector data), but it will still include the HLT objects.

Command      | Input type | Input format | Output format | Reconstruction support | Test
athenaHLT.py | data       | BS           | BS            | yes, critical          | test_trigP1_v1PhysP1_T0Mon[Trf]_build
athena.py    | MC         | RDO          | RDO_TRIG      | yes, critical          | test_trigAna_RDOtoAOD_mt1_build
athena.py    | MC         | RDO          | BS            | limited                | currently no reco test
athena.py    | data       | BS           | BS            | limited                | test_trigAna_BStoBStoESDAOD_mt1_build
athena.py    | data       | BS           | RDO_TRIG      | limited                | currently no reco test
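As an illustration of the transform wrappers mentioned above, a Reco_tf invocation for reconstructing bytestream HLT output might look like the sketch below; the file names are placeholders, and the exact arguments for a given workflow are best copied from the commands.json of the corresponding test:

Reco_tf.py --inputBSFile=hlt_output.data --outputAODFile=AOD.pool.root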

Authored by the ATLAS Trigger Group. Report any issues here.