Trigger Navigation

Last update: 03 Jun 2024

The Trigger Navigation Graph

The trigger navigation structures hold the bookkeeping data relating to the execution of the HLT. To start with, the HLT needs to track which chain’s legs are initially active on which seeding RoIs. Then the progress of the execution is tracked for all of the active chain legs down to the point where the processing of the RoI terminates due to acceptance or rejection by all chain legs. Note: chains which don’t need an explicit RoI still get assigned to a special “full-scan / no-seed” RoI for the purposes of this bookkeeping.

Tip The HLT navigation tracks single-leg chains by their Chain-ID, and multi-leg chains by each leg's Leg-ID. When this page refers to Leg-IDs it implicitly covers both of these cases. The Leg-ID is also referred to as an HLT::Identifier and is a numeric representation of the HLT chain name for single-leg chains, or of the prefix legXYZ_ + chain name for multi-leg chains, where XYZ is a zero-padded number such as 001.
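As an illustration of this naming convention, the helper below (hypothetical, not part of the trigger code base) composes a per-leg name from a chain name and a leg index:

```python
def leg_name(chain_name: str, leg_index: int) -> str:
    """Compose a per-leg name: the prefix 'legXYZ_' plus the chain name,
    where XYZ is the zero-padded leg index (e.g. 001)."""
    return f"leg{leg_index:03d}_{chain_name}"

# A single-leg chain is tracked directly under its chain name;
# a multi-leg chain is tracked per leg:
print(leg_name("HLT_e7_lhmedium_mu24_L1MU14FCH", 1))
# leg001_HLT_e7_lhmedium_mu24_L1MU14FCH
```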

The trigger navigation is structured as a Directed Acyclic Graph (DAG) formed of nodes connected by edges.

The edges in the graph are EDM ElementLinks, hence anything can be linked into the graph! The core of the graph is made by interlinking DecisionObjects. The periphery of the graph is formed by ElementLinks to reconstructed objects such as RoIs, electrons, muons and other physics objects - but the graph never extends beyond any of these periphery nodes.

The core node-type of the graph is the DecisionObject.

  • Each core graph node has a short name, which is used to identify the type of framework component which created it.
  • Each core graph node maintains a list of chains (Leg-IDs) which are active at this node (and for multi-leg chains, a list of active chain-legs).
  • Each core graph node possesses unidirectional links to other core or periphery nodes. The graph edges are also labelled, and this label provides context about the node to which the edge points.

The navigable core structure of the DAG is formed via edges from DecisionObject-to-DecisionObject denoting seeding or steps of the reconstruction. Each DecisionObject (bar initial ones with the name L1) links to its parent DecisionObject(s) via seed edge(s).

A DecisionObject can possess external ElementLinks which point to periphery graph nodes. Examples include ElementLinks labelled feature pointing to an xAOD::Electron, xAOD::Muon (etc.), or labelled roi pointing to an HLT TrigRoIDescriptor.
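The node structure described so far can be summarised in a minimal sketch (the real DecisionObject is a C++ xAOD class; the field names below are simplified stand-ins, not the actual EDM accessors):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Simplified stand-in for a core DecisionObject graph node."""
    name: str                                   # e.g. "L1", "F", "IM", "H", "CH"
    active: set = field(default_factory=set)    # active Chain-IDs / Leg-IDs at this node
    seeds: list = field(default_factory=list)   # "seed" edges to parent core nodes
    links: dict = field(default_factory=dict)   # labelled periphery edges: "roi", "feature", ...

# A core L1 node with a periphery edge, seeding a later core node:
l1 = Node("L1", active={"HLT_e26"}, links={"initialRoI": "<TrigRoIDescriptor>"})
f = Node("F", active={"HLT_e26"}, seeds=[l1])
```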

Each individual DecisionObject has a correspondence to some physics quantity. This is intentionally vague, as a DecisionObject is simply the representation in the trigger navigation of some abstract reconstructed object which we want to associate as satisfying the selection for a given set of HLT chain legs. The underlying physics representation of each DecisionObject therefore starts out in the HLT seeding stage corresponding to an RoI, and becomes progressively more and more refined as more steps of the HLT are executed.

The HLT Seeding algorithm is responsible for creating the initial set of DecisionObjects named L1 which form the base of the graph (one per RoI from L1). The HLT selection is summarised by a Decision Summary algorithm that closes the graph by connecting edges from a special HLTPassRaw terminus node back to all nodes where individual chains accepted the event.

In between the L1 seeding and the Decision Summary algorithm, the HLT is comprised of a large number of steps, which in turn subdivide into a number of framework components which read from the existing graph structure, do some work, and then write their own nodes - extending the graph. This in turn activates the reconstruction of physics objects, which get connected to DecisionObjects at the end of each step. The DecisionObjects are grouped in DecisionContainers which are input to and output from the various framework algorithms.

HLT Per-Event Navigation Chronology

There are differences between the nodes in the graph created by the differing HLT framework components; these are documented below in the order in which they are created during the processing of a single event by the HLT.

HLT Seeding

  • Input Collections: None
  • Output Collections: Many
  • I-O nodes-linking within collections: N/A
  • Periphery edges attached: initialRoI, initialRecRoI
  • Nodes name: L1

In this illustrative event it is assumed that L1_MU14FCH accepted the event, seeding the chain HLT_e7_lhmedium_mu24_L1MU14FCH (hereafter shortened to just HLT_e7_mu24), which has threshold requirements of eEM5 and MU14FCH on its first and second legs respectively. It is further assumed that L1_eEM26M accepted the event, seeding the chain HLT_e26_lhtight_ivarloose_L1eEM26M (hereafter HLT_e26), which has a threshold requirement of eEM26M. The eEM RoI at index #0 passes the eEM5 threshold, the one at index #1 passes both the eEM5 and eEM26M thresholds. The muon RoI at index #0 passes the MU14FCH threshold.

Three initial L1 navigation nodes are produced in response to three seeding RoIs, each with periphery edges to initialRoI and initialRecRoI.

Dotted boxes indicate DecisionContainers in which related graph nodes are stored during the execution of the HLT, these containers are routed between framework components with Read and Write Handles.

The HLT seeding algorithm is a single algorithm instance which creates the initial graph nodes. The presence of these nodes initiates the reconstruction in the first step of the HLT.

There are multiple initial nodes: one is created per L1 RoI. These objects are named L1 and have no parent, and hence no outgoing seed edges.

Each L1 node is linked via a periphery initialRoI edge to an HLT RoI descriptor, which is created by the HLT seeding and describes the geometry (rapidity and azimuthal angle) of the L1 RoI, and via a periphery initialRecRoI edge to the xAOD RoI object produced by the L1Calo or L1Muon systems.

An additional node (not illustrated above) is created to track chains which do not process on a per-RoI basis. This node is still linked via an initialRoI edge to an HLT RoI descriptor which has its FullScan bit set to true. This node doesn't have an initialRecRoI edge.

The nodes are written into a number of containers, one per type of threshold, e.g. HLT_MURoIs, HLT_eEMRoIs, HLT_eTAURoIs, HLT_jTAURoIs, etc., with HLT_FSRoI holding the single FullScan node.

Trigger chains are activated based on L1 seed and HLT prescale requirements. Activated chains have an L1 threshold requirement for each of their legs; this may be, e.g., something like eEM26M (L1 26 GeV e/gamma with medium ID) or FSNOSEED, which maps to the node with the FullScan RoI. The HLT seeding is aware of which L1 thresholds are satisfied by each initial node, and will add the Leg-ID of each activated chain-leg to each node which satisfies the leg's L1 threshold requirement.
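Using the RoIs and thresholds of this example event, the Leg-ID assignment performed by the HLT seeding can be sketched as follows (function and variable names are illustrative only):

```python
def seed(rois, chain_legs):
    """rois: one set of passed L1 thresholds per RoI;
    chain_legs: mapping of Leg-ID -> required L1 threshold.
    Returns the set of active Leg-IDs for each initial L1 node."""
    return [{leg for leg, thr in chain_legs.items() if thr in passed}
            for passed in rois]

legs = {
    "leg000_HLT_e7_mu24": "eEM5",      # electron leg of the two-leg chain
    "leg001_HLT_e7_mu24": "MU14FCH",   # muon leg of the two-leg chain
    "HLT_e26": "eEM26M",               # single-leg chain
}

# eEM RoI #0 passes eEM5 only; eEM RoI #1 passes eEM5 and eEM26M:
eem_nodes = seed([{"eEM5"}, {"eEM5", "eEM26M"}], legs)
mu_nodes = seed([{"MU14FCH"}], legs)
```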

Filtering

  • Input Collections: Many
  • Output Collections: Many
  • I-O nodes-linking within collections: One-to-one
  • Periphery edges attached: None
  • Nodes name: F

F (for Filter) navigation nodes are created by the first-step Filter algorithms which link back to the L1 nodes as seed.

The filter responsible for single-electron chains outputs a single container in which it passes the single physics object (still an RoI here) which is active for the HLT_e26 chain. The filter responsible for electron-muon chains outputs two containers, one for each leg. Note that the higher pt electron RoI node is now present in two filter outputs as it is of interest to both the electron reconstruction path and the electron+muon reconstruction path.

Filter algorithms consume the output of the previous step. If it is the first step (as in the above illustration), then the filter algorithms consume the output of HLT seeding. The nodes created by filter algorithms have the name F.

The filter algorithms will read nodes from all applicable inputs. For example, if the filter algorithm is in the first step and is responsible for two-leg electron+muon chains with electrons on the first leg and muons on the second leg, then the filter would read and consume all nodes contained in the HLT_eEMRoIs and HLT_MURoIs collections created by HLT seeding.

The filter will create one output node for each input node, and write this to an output container which mirrors its input container. For example for the electron+muon chain filter algorithm, this means that the nodes corresponding to electron candidates and the nodes corresponding to muon candidates remain encapsulated in their own containers - this encapsulation of groups of related nodes in containers is maintained throughout the HLT processing.

Each filter algorithm has a fixed set of Leg-IDs which it is responsible for processing; it will propagate from its input to its output only the sub-set of Leg-IDs for which it is responsible. There is a one-to-one mapping between Leg-IDs and responsible filters: every Leg-ID will hence propagate through exactly one filter algorithm at a given HLT step, and be ignored by all others which happen to read the same input collection. This causes an initial duplication or “fan out” of nodes in the filter layer, i.e. one electron L1 node in the HLT_eEMRoIs collection from HLT seeding with more than one active chain-leg may be the parent of multiple F nodes, each F node keeping only its own unique subset of Leg-IDs.
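The fan-out can be sketched as follows, with each node reduced to its set of active Leg-IDs (a simplified sketch with illustrative names):

```python
def run_filter(input_nodes, responsible):
    """input_nodes: one set of active Leg-IDs per input node;
    responsible: the fixed set of Leg-IDs this filter handles.
    Keeps only the responsible Leg-IDs and drops emptied nodes."""
    out = [node & responsible for node in input_nodes]
    return [kept for kept in out if kept]  # early rejection: empty nodes are removed

# The two eEM nodes from HLT seeding, as in the running example:
eem_rois = [{"leg000_HLT_e7_mu24"}, {"leg000_HLT_e7_mu24", "HLT_e26"}]

# Both filters read the same input, but each keeps only its own Leg-IDs:
electron_filter = run_filter(eem_rois, {"HLT_e26"})
emu_filter = run_filter(eem_rois, {"leg000_HLT_e7_mu24", "leg001_HLT_e7_mu24"})
# The higher-pT electron node now appears in both filter outputs ("fan out").
```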

The filter implements early-rejection. Nodes are removed from the output collections if there are zero Leg-IDs remaining in the node, and the filter will fail if there are zero nodes in its output collections. A failed filter is recognised by the control-flow logic layer and will not unlock its reconstruction sequencers for that step. But as the reconstruction sequencers are shared among filters, one filter alone failing does not imply that the reconstruction will not run. For a given reconstruction sequencer to not run, all of the filters which make use of the reconstruction sequencer must fail.

There are no periphery graph nodes linked from the F nodes.

Reconstruction Input-Maker

  • Input Collections: Many
  • Output Collections: One
  • I-O nodes-linking within collections: Many-to-one
  • Periphery edges attached: roi, view
  • Nodes name: IM

The fan-out operation of the Filters is undone in the Input Makers, and the RoIs to be reconstructed into higher granularity physics objects in the event are enumerated. A refinement of both the electron and muon RoIs from the L1 RoI to an updated RoI is shown here (normally this would only start to happen from the second step onwards). EventView instances are launched on each of the updated RoIs and periphery edges to the EventViews are added to the graph.

Each input maker is responsible for kickstarting a specific piece of reconstruction. The reconstruction may be requested by more than one type of chain - for example either-or-both of the filter controlling electron+muon chains and the filter controlling muon chains may activate the input maker responsible for muon reconstruction.

The input maker will hence read in multiple copies of its (single) input collection of nodes, one from each filter algorithm which makes use of the input maker's reconstruction sequencer. The input maker first needs to deduplicate the nodes in these input collections by performing a “fan in” operation which reverses the “fan out” of nodes into the filters. The input maker prepares a single output collection and adds a new layer of IM nodes to the graph. Nodes in the output collection can hence have multiple parents originating from the different filters. The set of active Leg-IDs is recombined in each node to be the superset of active Leg-IDs on all parent filter nodes. The input maker needs a way to compare equality between nodes from different filters, and with the default configuration two nodes will be considered to be the same - and hence merged - if they both share the same initialRoI edge after having followed their lineage back up to the L1 node.
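The deduplication can be sketched as follows, keying each node by its initialRoI lineage (a minimal sketch with illustrative names; the real comparison follows seed edges back to the L1 node):

```python
def fan_in(filter_outputs):
    """filter_outputs: (initialRoI-key, set-of-Leg-IDs) pairs collected from
    every filter feeding this input maker. Nodes tracing back to the same
    initialRoI are merged, taking the union of their active Leg-IDs."""
    merged = {}
    for roi, legs in filter_outputs:
        merged.setdefault(roi, set()).update(legs)
    return merged  # one IM node per unique initialRoI

im_nodes = fan_in([
    ("eEM#1", {"HLT_e26"}),             # from the single-electron filter
    ("eEM#1", {"leg000_HLT_e7_mu24"}),  # from the electron+muon filter
    ("eEM#0", {"leg000_HLT_e7_mu24"}),  # only of interest to the e+mu filter
])
# eEM#1 is merged into a single IM node carrying the superset of Leg-IDs.
```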

The input maker is required to attach an roi edge to each output IM node following the deduplication stage. This is most important for chains whose RoI size changes between steps; for other chains there are helper-tools available which will set the roi edge to point to (and hence re-use) the same RoI as in the previous step, or to set the roi edge to point back to the initialRoI from the L1 node.

Finally, the input maker initiates physics object reconstruction over a transient collection of RoIs-to-process (these are not the RoIs linked via the roi edges, which may be hosted in collections external to this input maker - they are copies).

For the case of an input maker which is handling reconstruction within EventViews:

  • one EventView instance is spawned per unique RoI to process,
  • a collection containing a copy of the single RoI is written to each of these EventViews to seed the reconstruction,
  • view periphery edges are created which link each IM node to the EventView instance which is processing the node’s roi edge.

This view edge to a periphery EventView node is used later on to access the physics objects created inside the EventView.

Hypothesis

  • Input Collections: One
  • Output Collections: One
  • I-O nodes-linking within collections: Normally one-to-one, but one-to-many (e.g. reconstructing multiple candidates within a single RoI) or many-to-one (e.g. combining multiple prior candidates into a single physics object) are supported.
  • Periphery edges attached: feature
  • Nodes name: H

A hypothesis node is created to discriminate on each newly reconstructed physics object, which is linked into the graph via a feature periphery edge. In this example we have a one-to-one mapping between the three incoming RoIs and three reconstructed physics objects. All Leg-IDs which were present in the IM nodes are present in the H nodes too - indicating that all of the newly reconstructed physics objects were accepted by the individual legs of the two active chains.

The hypothesis algorithm performs decisions concerning newly reconstructed physics objects. It will read in a single collection of nodes from the input maker algorithm which it is paired with. It then creates a new layer of H nodes to trace the newly-reconstructed physics objects within the navigation graph. In the majority of cases, a single physics object is reconstructed per incoming IM node and as such one H output node will be created for each IM input node.

If the step's reconstruction happened within an EventView, then the view edge attached to each incoming IM node is used to access reconstructed physics quantities from inside of the EventView. This naturally maps the incoming IM nodes to their newly reconstructed physics objects. But if EventViews were not used, then it is the job of the hypothesis algorithm to create the H nodes for the newly reconstructed physics objects, and to match these to their logical parent IM node(s).

The hypothesis algorithm is required to attach a feature edge to each H node. The feature edge is probably the most important periphery edge in the navigation graph: it links to the physics object which is being decided on. Downstream hypothesis algorithms perform the identical task and create feature links as well, pointing to the objects used in the refined selection. The final feature edge from the chain's final step will be (if the chain accepts the event) used by physics analyses as the online physics candidate against which trigger-matching to offline physics objects is performed.

The hypothesis and the following combo hypothesis algorithms are the only two algorithms involved in the navigation building where a Leg-ID may not be propagated from the seeding node. This occurs if the hypothesis tool responsible for discrimination of the Leg-ID rejects the newly reconstructed physics object. This is how rejection occurs at the chain level on individual physics objects. This is different from the filter algorithm case above, where each Leg-ID would always be passed through exactly one filter and be removed by the others during the fan-out-fan-in. Here, the Leg-IDs are permanently abandoned, meaning that one fewer chain-leg is requesting continued reconstruction of this physics object in later steps. Should this reduce to zero chain-legs per RoI, then further reconstruction of the physics object halts.
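The per-leg discrimination can be sketched as follows (the predicate tools and the ET-style threshold values are purely illustrative, not the real hypo tool interfaces):

```python
def run_hypo(objects, hypo_tools):
    """objects: (physics-object, set of active Leg-IDs) pairs from the IM nodes;
    hypo_tools: Leg-ID -> predicate deciding if the object passes that leg.
    Leg-IDs failing their hypo tool are permanently abandoned."""
    out = []
    for obj, legs in objects:
        kept = {leg for leg in legs if hypo_tools[leg](obj)}
        if kept:               # zero surviving legs: reconstruction of obj halts
            out.append((obj, kept))
    return out

# Illustrative ET thresholds standing in for the real discrimination logic:
tools = {"HLT_e26": lambda et: et > 26.0,
         "leg000_HLT_e7_mu24": lambda et: et > 7.0}
h_nodes = run_hypo([(28.5, {"HLT_e26", "leg000_HLT_e7_mu24"}),
                    (9.0, {"leg000_HLT_e7_mu24"})], tools)
```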

Combinatorial Hypothesis

  • Input Collections: Many
  • Output Collections: Many
  • I-O nodes-linking within collections: One-to-one
  • Periphery edges attached: None
  • Nodes name: CH

The combo hypothesis cross-correlates the acceptance criteria over multi-object and/or multi-leg chains. In this case everything is observed to pass. But if, for example, the last remaining physics object satisfying the muon leg leg001_HLT_e7_mu24 of this electron+muon chain failed the preceding hypothesis algorithm, then the chain no longer satisfies its multiplicity requirement and all of the electron leg leg000_HLT_e7_mu24 Leg-IDs would be removed by the combo hypothesis from all of the DecisionObjects on the electron leg.

The combinatorial (combo) hypothesis algorithm is the final processing stage of each step. Its input-output pattern is similar to the filter algorithm in that it is common for combo hypothesis algorithms to require input from multiple hypothesis algorithms. For example, the output of the muon hypothesis algorithm may be consumed by both of the combo hypothesis algorithm instances responsible for muon chains and for electron+muon chains.

Also similar to the filter algorithm, the combo hypothesis algorithm will create one output node for each input node and will only consider the Leg-IDs corresponding to chains which are to be processed by this instance of the combo hypothesis algorithm. This algorithm combines information from the reconstruction paths involved in a given type of chain selection, and thus there is exactly one combo hypothesis algorithm responsible for each type of selection.

This means that from the end of the first step (and onwards), the “fan out” operation is performed by the combo hypothesis algorithms rather than the filter algorithms, with the outputs of each combo hypothesis algorithm mapping directly onto a single filter algorithm in the next step.

The internal logic of the combo hypothesis algorithm is detailed on this page. The combo hypothesis algorithm will check for a chain’s multiplicity or topological requirements over all of its legs. This cross-correlates nodes in the graph in a way which is not done anywhere else, e.g. a graph node in the combo hypothesis algorithm representing an electron, and another representing a muon, have zero connectivity to each other within the structure of the navigation graph - but the combo hypothesis will consider them together if they both possess Leg-IDs for the first and second legs, respectively, for an electron+muon chain.

Three outcomes are possible on a per-chain basis. The chain may be trivially accepted based on simple multiplicity requirements - all of the chain's Leg-IDs from all of its legs propagate from the input H nodes to their corresponding output CH nodes. The chain may be accepted by a topological requirement - the chain's Leg-IDs propagate from all input H nodes which were included in at least one valid combination. The chain may be rejected - none of the chain's Leg-IDs are propagated further.
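The simple multiplicity outcome can be sketched as follows (illustrative names; topological requirements are not modelled in this sketch):

```python
def combo_multiplicity(nodes, chain_legs, required):
    """nodes: one set of active Leg-IDs per input H node;
    chain_legs: the chain's Leg-IDs; required: Leg-ID -> needed multiplicity.
    If any leg is under-populated, the whole chain is dropped everywhere."""
    counts = {leg: sum(leg in node for node in nodes) for leg in chain_legs}
    if all(counts[leg] >= required[leg] for leg in chain_legs):
        return nodes                                    # accept: propagate unchanged
    return [node - set(chain_legs) for node in nodes]   # reject: strip the chain

legs = ["leg000_HLT_e7_mu24", "leg001_HLT_e7_mu24"]
need = {leg: 1 for leg in legs}

# Both legs satisfied: everything propagates.
passed = combo_multiplicity([{"leg000_HLT_e7_mu24"}, {"leg001_HLT_e7_mu24"}], legs, need)
# The muon leg has no surviving object, so the electron leg is removed too.
failed = combo_multiplicity([{"leg000_HLT_e7_mu24"}], legs, need)
```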

Subsequent Steps

The output of the combo hypothesis algorithms in step N are consumed by the filter algorithms in step N+1, this pattern continues until all steps are performed.

Decision Summary Making

  • Input Collections: Many
  • Output Collections: One
  • I-O nodes-linking within collections: Many-to-one
  • Periphery edges attached: None
  • Nodes name: SF, HLTPassRaw

The HLTPassRaw node contains the Chain-ID of every passing chain. It links directly back to the final passing features on each leg of each passing chain via Summary Filter (SF) nodes. In this simplified example only a single step has been shown, but in a more complete example the SF links would point back to many different step numbers, as different chains will comprise different numbers of steps.

The HLT graph is finalised by a single summary algorithm.

The decision summary algorithm is configured with the details of the combo hypothesis algorithm outputs which correspond to the final-decisions of each chain. It will attempt to read all of these at the end of each event. For each valid input, it will check for any CH output nodes which contain a Leg-ID of a chain for which this CH node represents the final stage of HLT processing. If found, it will create a Summary Filter node (SF) with these Leg-IDs and link this to the CH node. It will then link the terminus node (HLTPassRaw) to the SF node, and add the chain's Chain-ID to the terminus node. These two actions together denote acceptance of the event due to the chain.
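The summary step can be sketched as follows (a minimal sketch using dict-based stand-in nodes; the real algorithm operates on DecisionObjects and ElementLinks):

```python
def summarise(final_ch_nodes, final_leg_to_chain):
    """final_ch_nodes: one set of Leg-IDs per CH node at some chain's final step;
    final_leg_to_chain: Leg-ID -> Chain-ID, for legs at their final step only.
    Builds SF nodes and the single HLTPassRaw terminus."""
    terminus = {"name": "HLTPassRaw", "passed": set(), "seeds": []}
    for ch in final_ch_nodes:
        legs = ch & set(final_leg_to_chain)
        if legs:
            sf = {"name": "SF", "active": legs, "seeds": [ch]}
            terminus["seeds"].append(sf)                 # terminus -> SF -> CH
            terminus["passed"].update(final_leg_to_chain[leg] for leg in legs)
    return terminus

# Here only HLT_e26 is assumed to be at its final step in this CH output:
terminus = summarise([{"HLT_e26"}, {"leg000_HLT_e7_mu24"}],
                     {"HLT_e26": "HLT_e26"})
```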

While there were many initial nodes in the graph (one per RoI), there is only a single terminus node. All chains which accept the event must link to this terminus node via an SF node. The additional layer of SF nodes is needed as chains may share common reconstruction but accept events at distinct steps; for example, a complete electron chain and an etcut chain (an electron satisfying only calorimetric criteria) share the same reconstruction up to the precision calorimeter step, where the etcut chain accepts but the electron chain continues. Without the SF nodes, both the CH node after precision calorimetry and the one after precision electron reconstruction and validation would be linked to the terminus node, creating ambiguity as to which holds the final feature for the precision electron chain.

Graph traversal algorithms used in analyses which are only interested in considering passing chains will start at the terminus node and back-navigate through the graph from this point. This involves recursively following seed edges up through core graph nodes which contain the Leg-IDs of interest, and collating edges to periphery nodes of interest (e.g. feature edges to physics object nodes) along the way. The traversal terminates either when L1 nodes are reached, or when all the requested edges from the graph have been located.
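The back-navigation can be sketched as a recursive walk over dict-based stand-in nodes (illustrative only; in practice analyses use the dedicated trigger navigation tools rather than raw traversal):

```python
def collect_features(node, leg_ids, found=None):
    """Recursively follow "seed" edges through nodes whose active Leg-IDs
    intersect the Leg-IDs of interest, collating "feature" periphery edges."""
    found = [] if found is None else found
    if node["active"] & leg_ids:
        if "feature" in node["links"]:
            found.append(node["links"]["feature"])
        for parent in node["seeds"]:
            collect_features(parent, leg_ids, found)
    return found

# A tiny lineage: H node (carrying a feature) <- SF node <- start of traversal.
h = {"active": {"HLT_e26"}, "links": {"feature": "<xAOD::Electron>"}, "seeds": []}
sf = {"active": {"HLT_e26"}, "links": {}, "seeds": [h]}
features = collect_features(sf, {"HLT_e26"})
```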

Construction of permanent decision storage

As seen above, during the execution of the HLT there are many different DecisionContainers (collection of DecisionObjects) created. They play an important role in scheduling algorithms and need to be separate due to multithreaded event processing. After the HLT has finished executing, the distinction of which DecisionContainer hosts a particular node is less important and all of the nodes are collated from the large number of individual DecisionContainers and consolidated into a single DecisionContainer for long-term storage.

Example Graph

The whole example graph discussed above, of a simplified single-step HLT execution which processes and accepts two chains running on three RoIs, then looks as follows. The black dotted boxes represent the DecisionContainers which group similar nodes during the execution of the HLT. These DecisionContainers are routed between the HLT framework components via WriteHandles and ReadHandles.