Common CP Algorithms Output

Last update: 24 Oct 2019

The general model for the CP algorithms is that they are meant to be run on the DxAODs produced by the derivation framework, and then their output is stored in either n-tuples or mini-xAODs. Typically the jobs running the CP algorithms will run on the grid, and you then download the output files to your local machine or cluster for further processing. There are generally two reasons for this workflow: to avoid running the CP tools on every iteration (some of them are slow to run), and to produce files that are (hopefully) smaller than the DxAODs and easier to process locally.

Whether you produce mini-xAODs or n-tuples is essentially your choice, and it will depend on the technical requirements of your analysis as well as your personal taste. You may also produce mini-xAODs and then process these further into n-tuples. If you use an analysis framework the choice may have been made for you.

Using mini-xAODs

A mini-xAOD is essentially an xAOD in which all the CP tools have been applied to all the objects inside it, meaning that all the objects exist in all the systematic variations (since systematics are applied by the CP tools). Various analysis frameworks have defined their own variants of mini-xAODs and given them framework-specific names, e.g. CxAODs, MxAODs, PxAODs.

Using a mini-xAOD means that you are typically stuck working inside the ATLAS software environment. Technically you can open a mini-xAOD like an n-tuple and process it in plain ROOT, but if that is your intention you will likely find n-tuples more convenient, as they allow you to choose the names of all your variables freely.

On the plus side, using mini-xAODs with ATLAS software means a lot of things are done for you, and in an efficient way. The CP algorithms are optimized by using shallow copies as much as possible, so that if a variable ought to be the same between two systematics it will usually only be written out once. If you read a mini-xAOD and don’t use all the variables, the code will only read those variables you are actually using. And if you use systematics handles, you have an entire infrastructure for bookkeeping which object is affected by which systematic.
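
The shallow-copy idea can be illustrated with a plain-Python analogy (this is purely conceptual, using `collections.ChainMap` rather than the actual xAOD shallow-copy machinery, and the variable values are made up):

```python
from collections import ChainMap

# Nominal muon variables (illustrative values, not real data)
nominal = {"pt": 52.3, "eta": 1.1, "phi": 0.4}

# A "shallow copy" for a momentum systematic stores only the variables
# that actually change; lookups for everything else fall through to the
# nominal container, so shared values exist only once.
muon_scale_up = ChainMap({"pt": 53.0}, nominal)

print(muon_scale_up["pt"])   # overridden by the variation: 53.0
print(muon_scale_up["eta"])  # transparently read from nominal: 1.1
```

The same fall-through logic is what lets a mini-xAOD avoid duplicating unvaried variables across systematic copies.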

Another big benefit is that you can freely decide which algorithms you run on the DxAODs and which ones on the mini-xAODs. If you’d rather run one of the CP algorithms locally (e.g. because you are experimenting with its configuration) you can do so quite easily. Or conversely if you find that there is an analysis algorithm you always run without changing it, you can include it in the job to produce the mini-xAODs.

One complication in writing out the xAOD objects produced by the CP algorithms is deciding what exactly to write out. The final result of a CP algorithm sequence is a view container that contains only “good” objects. View containers themselves cannot be written out; they are just a way of looking at another container in memory. A rather straightforward way around that is to write out the original containers they are views of, and then re-create the view container in your analysis job. All you really need to do for that is re-run the algorithm that created the view containers (typically CP::AsgViewFromSelectionAlg), just as it is run in the CP algorithm sequence. As an added benefit, you can always take a look at objects failing your quality selection cuts, as those objects are still in your mini-xAODs, or indeed tweak your selection after you make the mini-xAODs.
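
The relationship between a view container and its parent can be sketched in plain Python (a conceptual analogy only, not the actual xAOD interface; the muon values and selection flag are made up):

```python
# All muons, including ones failing quality cuts (illustrative numbers)
muons = [{"pt": 52.3, "passes_medium": True},
         {"pt": 4.1,  "passes_medium": False},
         {"pt": 31.7, "passes_medium": True}]

def make_view(container, selection):
    """A 'view' holds references into the original container rather than
    copies -- on its own it cannot meaningfully be written to disk."""
    return [obj for obj in container if selection(obj)]

# Writing out `muons` and re-running this one selection step in the
# analysis job recreates the same view; the rejected muon remains
# available in the parent container for later study.
good_muons = make_view(muons, lambda m: m["passes_medium"])
print(len(good_muons))  # 2
```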

The downside of the approach above is that you write out all of your objects, even those clearly failing all quality cuts, which wastes both disk space and time when reading the mini-xAODs. An alternative approach which avoids this is to take the final “good” objects and make what we call deep copies of them (as opposed to view copies). This allows you to write out only the good objects, but you lose all the optimizations we gain from shallow copies, and instead you have to write out all variables for all objects for all systematic variations. See the discussion on n-tuples to see why that happens, the main concern here being scale factor systematics. Whether deep copies done like this actually save any space depends on a number of factors, but suffice it to say we currently have no studies showing in which cases this would be useful. The only case for which we know that this is clearly better is if you do not have any systematic variations (e.g. for data).
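
A back-of-the-envelope count illustrates the trade-off; the numbers below are made-up assumptions, not measurements:

```python
# Illustrative assumptions: 10 variables per muon, of which only pt is
# affected by momentum systematics; 8 systematic variations plus nominal.
n_vars, n_varied_vars, n_systematics = 10, 1, 8

# Shallow copies: the full set of variables once (nominal), plus only
# the varied variables for each systematic.
shallow_branches = n_vars + n_varied_vars * n_systematics

# Deep copies: every variable duplicated for every variation.
deep_branches = n_vars * (1 + n_systematics)

print(shallow_branches)  # 18
print(deep_branches)     # 90
```

Whether the smaller object count of deep copies outweighs this duplication depends on how large a fraction of your objects fail the quality cuts.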

In principle there is the possibility to optimize the deep copies even further, by only doing deep copies for momentum systematics and treating scale factor systematics separately. However, this will require changes to the common CP algorithms to support it natively, and it will make the overall systematics handling more complicated, as we then explicitly have to deal with two types of systematics. That said, supporting this is on the planned features list, and using it could potentially allow other optimizations besides this I/O optimization.

Another obstacle is actually writing out the mini-xAODs. For EventLoop the muon sequence actually has an example of writing out mini-xAODs based on deep copies. However, this is so far mostly a proof-of-principle and we haven’t really tested running it for a complete mini-xAOD with all object types and optimized for minimal disk usage. And while Athena (obviously) can write out xAODs, writing out a mini-xAOD with all the right systematics brings its own complications. If you use an analysis framework it may provide you with a suitable dumper, but otherwise you may have to fudge a little bit.

Using n-Tuples

N-Tuples are plain ROOT files containing one (or several) TTrees. That means you can process them without using any ATLAS software at all, and you have (in principle) full control over the contents of the n-tuple.

N-Tuples have the potential to be smaller and faster than mini-xAODs. That is, if you know enough about TTrees and I/O to build an efficient analysis. Also, your n-tuples will only be smaller or faster if you write out more highly processed information than the mini-xAOD contains. If your n-tuple contains the same information as a mini-xAOD it will be at best as fast and small as one, or potentially a lot bigger/slower if you didn’t implement all the optimizations we described for the mini-xAODs.

Since n-tuples are native to ROOT they tend to work better/more easily with ROOT facilities like TTree::Draw, RDataFrame and TMVA. Though in practice it is also common for users who use these facilities a lot (particularly TMVA) to write out specific n-tuples just for that (either from their primary n-tuples or mini-xAODs). Still, overall if you use n-tuples to begin with it may make your life easier in this regard.

While we do have an example of how to write out an n-tuple from EventLoop or Athena in this tutorial, we also highlighted some of the shortcomings of the current mechanism, so (at the moment) you may need to write a custom algorithm just for writing out an n-tuple. And as noted above, if you really want to optimize your n-tuple content you will likely have to write a custom algorithm for at least some of your n-tuple variables.

Outputting Our Job to an N-tuple

We can add a short code example to the end of our MyMuonAnalysisAlgorithms.py to output an N-tuple for us. Just before you return your algorithm sequence, insert the following:

    # Add an ntuple dumper algorithm:
    treeMaker = createAlgorithm( 'CP::TreeMakerAlg', 'TreeMaker' )
    treeMaker.TreeName = 'muons'
    algSeq += treeMaker
    ntupleMaker = createAlgorithm( 'CP::AsgxAODNTupleMakerAlg', 'NTupleMakerEventInfo' )
    ntupleMaker.TreeName = 'muons'
    ntupleMaker.Branches = [ 'EventInfo.runNumber     -> runNumber',
                             'EventInfo.eventNumber   -> eventNumber', ]
    # '(^$)' matches only the empty (nominal) systematic name:
    ntupleMaker.systematicsRegex = '(^$)'
    algSeq += ntupleMaker
    ntupleMaker = createAlgorithm( 'CP::AsgxAODNTupleMakerAlg', 'NTupleMakerMuons' )
    ntupleMaker.TreeName = 'muons'
    ntupleMaker.Branches = [ 'AnalysisMuonsMedium_NOSYS.eta -> mu_eta',
                             'AnalysisMuonsMedium_NOSYS.phi -> mu_phi',
                             'AnalysisMuonsMedium_%SYS%.pt  -> mu_%SYS%_pt', ]
    # '(^MUON_.*)' matches all muon systematic variations:
    ntupleMaker.systematicsRegex = '(^MUON_.*)'
    algSeq += ntupleMaker
    treeFiller = createAlgorithm( 'CP::TreeFillerAlg', 'TreeFiller' )
    treeFiller.TreeName = 'muons'
    algSeq += treeFiller

This snippet has three main parts:

  • treeMaker: creates the output tree (named muons) using CP::TreeMakerAlg.

  • ntupleMaker: declares the branches you want to output, naming each branch and associating it with the relevant variable in the container.

  • treeFiller: fills the N-tuple at the end of processing each event.
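
To see conceptually what systematicsRegex and the %SYS% placeholder in the branch rules do, here is a plain-Python sketch. The systematic names below are made-up examples, and the actual filtering and expansion are performed internally by CP::AsgxAODNTupleMakerAlg; note in particular that the regex '(^$)' used for the EventInfo branches matches only the empty string, i.e. the nominal:

```python
import re

# Hypothetical list of systematic variations present in the job
# ("" denotes the nominal)
systematics = ["", "MUON_ID__1up", "MUON_ID__1down", "EG_SCALE__1up"]

# Keep only the variations matching the algorithm's systematicsRegex
regex = re.compile(r"(^MUON_.*)")
selected = [s for s in systematics if regex.match(s)]

# Expand the %SYS% placeholder in a branch rule for each variation,
# mirroring the 'container.var -> branch' pattern used above
rule = "AnalysisMuonsMedium_%SYS%.pt -> mu_%SYS%_pt"
branches = [rule.replace("%SYS%", s) for s in selected]
for b in branches:
    print(b)
```

This is why the pt branch appears once per muon systematic in the output tree, while eta and phi (taken from the NOSYS copy) appear only once.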

Add the following to your running script before you set up your driver, and test this N-tuple maker in your own code.

    # Make sure that both the ntuple and the xAOD dumper have a stream to write to.
    job.outputAdd( ROOT.EL.OutputStream( 'ANALYSIS' ) )

Histograms From DxAODs

In principle it is also possible to write out histograms straight from jobs running on DxAODs. However, this workflow is not very common and not recommended. Even if you have enough space to keep all your DxAODs on your local batch cluster, it is still recommended that you produce a mini-xAOD or n-tuple from them. The main reason for this is turn-around time: it is usually a lot faster to process mini-xAODs/n-tuples than to process DxAODs.

There are some special cases in which this is the best workflow, e.g. if you find yourself rebuilding your mini-xAODs as often as you run on them, or if the jobs running on your mini-xAODs are actually slower than the jobs producing them; but these are really special cases, and you should start out by assuming your use case is not that special. Also, before you commit to this path, remember that with mini-xAODs you have some freedom to choose which algorithms run on the DxAODs and which ones run on the mini-xAOD, without having to abandon the “common” workflow with an intermediate format.