In this case we will modify the steering macro/job options that you created in the stand-alone beginner's tutorial earlier, MyAnalysis/share/ATestRun_eljob.py or MyAnalysis/share/ATestRun_jobOptions.py. We need to add the muon analysis algorithm sequence to our job.
The configuration is not yet completely fixed, as we are still working out how best to do things. The actual content of the configuration file may therefore change, though the overall structure will likely remain the same.
When configuring your common CP analysis algorithms, it is often useful to pass in some information on the type of data you are running on.
This dataType indicates whether we run on data, Monte Carlo Full-Sim, or Monte Carlo Fast-Sim. Since a lot of the CP tools need to be configured slightly differently depending on the type of data they run on, we pass this into any and all of the CP algorithm sequence configurations (regardless of whether it gets used).
The file we are using for this tutorial is a Monte Carlo Full-Sim sample, so we should set the data type in our steering script after setting up the sample handler object.
...
inputFilePath = os.getenv( 'ALRB_TutorialData' ) + '/mc21_13p6TeV.601229.PhPy8EG_A14_ttbar_hdamp258p75_SingleLep.deriv.DAOD_PHYS.e8357_s3802_r13508_p5057/'
ROOT.SH.ScanDir().filePattern( 'DAOD_PHYS.28625583._000007.pool.root.1' ).scan( sh, inputFilePath )
sh.printContent()
dataType = "mc"
...
In ATestRun_eljob.py, just below where we set the maximum number of events for the job, we can add our muon analysis algorithms.
from MyAnalysis.MyMuonAnalysisAlgorithms import makeSequence
algSeq = makeSequence (dataType)
print(algSeq) # For debugging
for algMuon in algSeq:
    # use one of the following lines depending on whether your analysis is eventloop or athena based
    job.algsAdd( algMuon ) # eventloop analysis
    # athAlgSeq += algMuon # athena analysis
    pass
One final step is required before we can run. For the corrections to be applied to the muons correctly, we also need to add the pileup analysis sequence to our muon sequence. Add the following to your MyMuonAnalysisAlgorithms.py just after you set up the systematics loader/handler algorithm:
# Include, and then set up the pileup analysis sequence:
from AsgAnalysisAlgorithms.PileupAnalysisSequence import \
    makePileupAnalysisSequence
lumicalcfiles = []
prwfiles = []
pileupSequence = makePileupAnalysisSequence( dataType,
                                             userPileupConfigs=prwfiles,
                                             userLumicalcFiles=lumicalcfiles )
pileupSequence.configure( inputName = 'EventInfo', outputName = 'EventInfo_%SYS%' )
# Add the pileup sequence to the job:
algSeq += pileupSequence
Please note that pileup reweighting is not currently available for MC21. For the remainder of the tutorial, please change ALRB_TutorialData to point to /cvmfs/atlas.cern.ch/repo/tutorials/asg/cern-mar2022 and set your steering macro and/or jobOptions to point to mc16_13TeV.410470.PhPy8EG_A14_ttbar_hdamp258p75_nonallhad.deriv.DAOD_PHYS.e6337_s3126_r10201_p4172/DAOD_PHYS.21569875._001323.pool.root.1 as your input file.
Okay, now you can rerun CMake, recompile, and then run your job as you did in the previous parts of the tutorial.
As mentioned in the beginning, the CP algorithms put their outputs in the event store, and those outputs can then be accessed as if they come directly from the input file. For now we will be building an algorithm that (purposely) looks mostly like the algorithms we built in the beginner’s tutorial. To make things more like the beginner’s tutorial we will also not deal with systematics in this section.
So next let us make a plot of the sum pT of all muons in an event. First let us create the histogram itself. We do this slightly differently than we do in the base tutorial, because we will modify it in the next section for systematics handling, but overall this still follows the general pattern of the beginner’s tutorial.
First declare it in your class definition inside the header file (MyxAODAnalysis.h):
TH1 *m_sumPtHist {nullptr};
This is almost the same as in the base tutorial, but we initialize it to nullptr (using C++11 in-class initialization). This will be important in a second.
Next let’s create the histogram. We do this in execute(), not initialize(), so that we can pick up the systematics in the next section (systematics are not available during initialize()). So add these lines at the beginning of execute():
if (m_sumPtHist == nullptr)
{
  std::string name = "sumPtHist";
  ANA_CHECK (book (TH1F (name.c_str(), "pt", 20, 0, 200e3)));
  m_sumPtHist = hist (name);
}
Now let’s retrieve the list of muons, calculate the sum of pT, and fill it into the histogram:
const xAOD::MuonContainer *muons = nullptr;
ANA_CHECK (evtStore()->retrieve (muons, "AnalysisMuons_NOSYS"));
float sumPt = 0;
for (const xAOD::Muon *muon : *muons)
  sumPt += muon->pt();
m_sumPtHist->Fill (sumPt);
Depending on what you have already added to your algorithm/package, you may have to add an include
#include <xAODMuon/MuonContainer.h>
to your source file and a dependency
LINK_LIBRARIES xAODMuon ...
to your CMakeLists.txt file.
Now compile and run it and see if you get the newly defined histogram and whether it makes sense. As a bonus exercise, create a histogram containing only the pt of the first muon and compare it to the histogram that the muon chain produces intrinsically (a possible starting point is sketched below).
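The following is only one possible way to start the bonus exercise, not part of the official tutorial code: it assumes you add a hypothetical member TH1 *m_leadPtHist {nullptr}; to MyxAODAnalysis.h next to m_sumPtHist, and then book and fill it in execute() alongside the sum-pT histogram (the histogram name and binning are arbitrary choices).
// Book the leading-muon pt histogram lazily, mirroring the sumPtHist pattern above.
if (m_leadPtHist == nullptr)
{
  std::string name = "leadPtHist";
  ANA_CHECK (book (TH1F (name.c_str(), "leading muon pt", 20, 0, 200e3)));
  m_leadPtHist = hist (name);
}
// Fill with the pt of the first muon in the container (if there is one).
if (!muons->empty())
  m_leadPtHist->Fill (muons->front()->pt());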
Some things to note here:
The name of the muon container is based on what you specified when you configured your muon algorithm sequence. You can change this if you want, as long as you do it everywhere.
There is no specific ordering to the muons in the container, or more
specifically they have the same ordering as the input container,
i.e. they are ordered in pt before momentum corrections are applied.
You can change this by setting the sortPt
option on the algorithm
creating the final deep copy. Still, in many ways it’s better to
get used to the containers not being ordered (as that allows certain
optimizations).
We don’t use any weights here. This can be excused by saying we are running on data. If you want to read out the scale factor, each muon has a scale factor attached as muon_eff_tight_NOSYS, etc. As an exercise you can collect them for an overall histogram weight (a sketch of this follows after these notes). At some point we’ll likely provide an overall event weight as well that you can use directly in your histograms.
We are using const containers and objects, as we do not intend to modify them at all. This will become more important in the next section.
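To give an idea of what collecting the scale factors could look like, here is a minimal sketch (ignoring systematics, as in the rest of this section) that multiplies the per-muon scale factors into a single event weight and passes it to Fill(). It assumes the decoration really is called muon_eff_tight_NOSYS, as quoted above; the actual name depends on how you configured your muon sequence, so adjust it accordingly. It also reuses the muons container and m_sumPtHist from the code above.
// Multiply the per-muon efficiency scale factors into one event weight.
// The decoration name is an assumption based on the note above; check your
// own sequence configuration for the exact name.
float eventWeight = 1.0;
for (const xAOD::Muon *muon : *muons)
  eventWeight *= muon->auxdataConst<float> ("muon_eff_tight_NOSYS");
// Use the combined weight when filling the histogram.
m_sumPtHist->Fill (sumPt, eventWeight);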