Trigger Configuration Database

Last update: 12 Nov 2024

Introduction

The Trigger configuration database, also known as the TriggerDB, is a relational database containing trigger-menu related information. It is primarily used to configure the L1 and HLT trigger for data taking at Point 1, and for HLT reprocessings.

The advantages of configuring the trigger from a database, rather than from job options as is done offline, are:

  • Simple specification: the entire menu configuration for operating the trigger is described by just four numbers (the configuration keys)
  • Configuration time: algorithm properties are directly loaded from the database, no need for running python job-options
  • Archiving: the trigger configuration of a run can easily be recalled at a later point in time

Users interact with the TriggerDB using the TriggerToolWeb, which has its own documentation.

TriggerDB content

The TriggerDB contains the following information:

L1 menu
  • The L1 items definitions used by the CTP
  • The topological algorithm configuration used by L1Topo
  • The calorimeter object definitions, e.g. thresholds, used by L1Calo
  • The muon object momentum thresholds for barrel and endcap used by the sector logic
  • Hardware configuration files for CTP and MUCTPI
HLT menu
  • HLT chains names and streaming info
  • Data stream definitions
  • Chain groups, e.g. for coherent prescaling
HLT Job options
  • HLT algorithm properties
  • Control flow information
HLT Monitoring groups
  • Configuration of the monitoring groups, used during reprocessing
Prescales
  • L1 and HLT prescales, used by the CTP and the HLT jobs respectively
Bunch group set
  • L1 bunch group sets, used by the CTP
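The role of the prescales listed above can be illustrated with a short sketch. A prescale of N keeps on average 1 out of every N accepted events, and L1 and HLT prescales act in sequence; the function below is illustrative only, not part of the TriggerDB software.

```python
def effective_rate(input_rate_hz: float, l1_prescale: float, hlt_prescale: float) -> float:
    """Rate surviving an L1 and an HLT prescale applied in sequence.

    A prescale of N keeps on average 1 out of every N events; by
    convention a negative prescale (-1) disables the item or chain.
    """
    if l1_prescale < 0 or hlt_prescale < 0:
        return 0.0
    return input_rate_hz / (l1_prescale * hlt_prescale)

# Hypothetical numbers: 40 MHz bunch-crossing rate, L1 prescale 1000, HLT prescale 20.
rate = effective_rate(40_000_000.0, 1000.0, 20.0)  # -> 2000.0 Hz
```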

Each piece of information is stored as a BLOB in JSON format, with the exception of some of the CTP and MUCTPI hardware configuration files.
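Because each record is a JSON blob, it can be inspected with standard tools once retrieved from the database. A minimal sketch, assuming a fetched L1-menu blob with an illustrative structure (the field names below are examples, not the real TriggerDB schema):

```python
import json

# Illustrative L1-menu payload as it might come out of the database BLOB;
# the keys "name", "items" and "ctpid" are assumptions for this sketch.
blob = '{"name": "L1Menu_example", "items": {"L1_EM22VHI": {"ctpid": 21}, "L1_MU14": {"ctpid": 35}}}'

menu = json.loads(blob)          # decode the BLOB payload into a dict
item_names = sorted(menu["items"])
print(menu["name"], item_names)  # -> L1Menu_example ['L1_EM22VHI', 'L1_MU14']
```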

Configuration keys

Access to this information is provided through the following integer keys, which are technically row indices in the top-level tables of the database.

  • Supermaster key (SMK): access to L1 Menu, HLT Menu, and HLT job options
  • L1 Prescale key (L1PSK): access to the prescales of the L1 items
  • HLT Prescale key (HLTPSK): access to the prescales of the HLT chains and streams
  • Bunchgroupset key (BGSK): access to the bunchgroup set
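Together these four keys fully identify a trigger configuration. A minimal sketch of grouping them, with hypothetical key values (real keys are assigned by the upload process):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriggerConfigKeys:
    """The four integers that identify a complete trigger configuration.

    Each is a row index in a top-level TriggerDB table.
    """
    smk: int     # Supermaster key: L1 menu, HLT menu, HLT job options
    l1psk: int   # L1 prescale key
    hltpsk: int  # HLT prescale key
    bgsk: int    # Bunch group set key

# Purely illustrative values, not keys from any real database.
keys = TriggerConfigKeys(smk=3161, l1psk=23456, hltpsk=17001, bgsk=2181)
```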

TriggerDB interaction

Populating the TriggerDB

Content can be uploaded to the TriggerDB in two ways:

  • Using the TriggerTool web interface
  • Using the command-line scripts defined in the TriggerDB package in the TDAQ release

In both cases access rights are required, and uploads can only be performed by trigger menu experts, online experts, and CTP experts.

To upload a menu, experts usually generate a consistent set of .json files for a given release by:

  1. running an HLT athena job to generate L1 menu, HLT menu, job-options and monitoring groups files, and
  2. running the rulebook to generate prescale set files for L1 and HLT

To make an L1 trigger menu usable by the CTP and MUCTPI, CTP experts must also produce the firmware files for their systems and upload them to the online TriggerDB, attached to the same SMK.

Bunch groups are always created and uploaded in a single process that starts from an LHC fill pattern, a bit array of length 3564. From such a fill pattern the CTP software generates the definitions of the 16 bunch groups that make up a bunch group set, and the script ReadBunchGroup.py uploads that set to the TriggerDB. The fill pattern is either provided as a file by LHC Physics Coordination or measured from the beam present in the machine.
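The step from fill pattern to bunch group can be sketched as follows. This is a simplification: the real CTP software derives all 16 groups with their fixed meanings, while this example models only a single "filled" group; all names here are hypothetical.

```python
N_BCID = 3564  # number of bunch crossings in one LHC orbit

def filled_bunch_group(fill_pattern: list[int]) -> list[int]:
    """Return the BCIDs of filled crossings in a 3564-bit fill pattern.

    1 marks a filled bunch crossing, 0 an empty one.  A real bunch
    group set contains 16 such lists with fixed meanings.
    """
    assert len(fill_pattern) == N_BCID
    return [bcid for bcid, filled in enumerate(fill_pattern) if filled]

# Toy fill pattern: one train of 4 filled crossings starting at BCID 1.
pattern = [0] * N_BCID
for bcid in range(1, 5):
    pattern[bcid] = 1
print(filled_bunch_group(pattern))  # -> [1, 2, 3, 4]
```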

The upload process results in a set of consistent configuration keys by which the uploaded configuration can be accessed.

Accessing the TriggerDB

The TriggerDB content can be accessed in multiple ways.

TriggerDB instances

The following databases exist. They are hosted on the Oracle server ATONR_ADG, except for a few integration databases hosted on INT8R (as indicated).

For data taking

  • TRIGGERDBV1: Run 1 - v1 (range:) (ATLAS_CONF_TRIGGER)
  • TRIGGERDB_RUN1: Run 1 - v2 (range:) (ATLAS_CONF_TRIGGER_V2)
  • TRIGGERDB_RUN2: Run 2 (ATLAS_CONF_TRIGGER_RUN2)
  • TRIGGERDB_RUN3: Run 3 (ATLAS_CONF_TRIGGER_RUN3)
  • TRIGGERDB: same as TRIGGERDB_RUN2

In planning:

  • TRIGGERDB_RUN2_NF: Run 2 with Run3-schema (ATLAS_CONF_TRIGGER_RUN2_NF)

For trigger reprocessings

  • TRIGGERDBREPR_RUN2: Run 2 (ATLAS_CONF_TRIGGER_REPR)
  • TRIGGERDBREPR_RUN3: Run 3 (ATLAS_CONF_TRIGGER_RUN3_REPR)
  • TRIGGERDBREPR: same as TRIGGERDBREPR_RUN2

For Monte Carlo configuration

In Run 1 and Run 2 the menu and prescales in the TriggerDB were used to configure the MC athena job. In Run 3 the TriggerDB only serves to display the configuration that was used in the MC production.

  • TRIGGERDBMC_RUN1: Run 1 (ATLAS_CONF_TRIGGER_MC)
  • TRIGGERDBMC_RUN2: Run 2 (ATLAS_CONF_TRIGGER_RUN2_MC)
  • TRIGGERDBMC_RUN3: Run 3 (ATLAS_CONF_TRIGGER_RUN3_MC)
  • TRIGGERDBMC: same as TRIGGERDBMC_RUN2

For ART nightly tests

  • TRIGGERDBART: for ART tests of the database

For development

  • TRIGGERDBDEV1_I8: development database 1 (INT8R/ATLAS_CONF_TRIGGER_DEV1)
  • TRIGGERDBDEV2_I8: development database 2 (INT8R/ATLAS_CONF_TRIGGER_DEV2)

Note that the content on TRIGGERDBDEV1_I8 must not be deleted for now, as it was used for data taking during the commissioning in 2022. This will be re-assessed.

Databases for other purposes

  • TRIGGERDBATN: used by the menu group for tests (INT8R/ATLAS_TRIGGER_ATN)
  • TRIGGERDBTEST: old test database, can be repurposed (ATLAS_CONF_TRIGGER_TEST)

Deleted databases

These database aliases are still defined in the current dblookup.xml file, but the databases/servers no longer exist:

  • TRIGGERDBDEV1: development database on INTR
  • TRIGGERDBDEV2: development database on INTR
  • TRIGGERDBRTT: runtime-tester database on devdb10
  • TRIGGERDBRTT2: runtime-tester database 2 on devdb10
  • TRIGGERDBATNDEV: nightly test database on devdb10