
Luca Mastrolorenzo

Here are all the papers by Luca Mastrolorenzo that you can download and read on OA.mg.

DOI: 10.1103/physrevd.93.034014
2016
Cited 32 times
Measurement of the charge asymmetry in top quark pair production in pp collisions at √s = 8 TeV using a template method
The charge asymmetry in the production of top quark and antiquark pairs is measured in proton-proton collisions at a center-of-mass energy of 8 TeV. The data, corresponding to an integrated luminosity of 19.6 inverse femtobarns, were collected by the CMS experiment at the LHC. Events with a single isolated electron or muon, and four or more jets, at least one of which is likely to have originated from hadronization of a bottom quark, are selected. A template technique is used to measure the asymmetry in the distribution of differences in the top quark and antiquark absolute rapidities. The measured asymmetry is A[c,y] = [0.33 +/- 0.26 (stat) +/- 0.33 (syst)]%, which is the most precise result to date. The results are compared to calculations based on the standard model and on several beyond-the-standard-model scenarios.
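The observable behind this measurement is built from the per-event difference of absolute rapidities, Δ|y| = |y_t| − |y_t̄|. The paper extracts the asymmetry with a template fit; the sketch below only illustrates the simpler counting definition of the same observable, with a purely hypothetical toy list of Δ|y| values.

```python
# Counting definition of the ttbar charge asymmetry A_c^y, built from the
# per-event difference of absolute rapidities delta = |y_t| - |y_tbar|.
# (The paper itself uses a template fit; the toy values are illustrative.)

def charge_asymmetry(deltas):
    """A_c = (N(delta > 0) - N(delta < 0)) / (N(delta > 0) + N(delta < 0))."""
    n_pos = sum(1 for d in deltas if d > 0)
    n_neg = sum(1 for d in deltas if d < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

# Hypothetical per-event rapidity differences:
toy_deltas = [0.4, -0.2, 0.1, -0.5, 0.3, 0.2, -0.1, 0.05]
print(charge_asymmetry(toy_deltas))  # 0.25 for this toy list
```

A positive value means top quarks are produced at larger absolute rapidities than antitops, which is the small effect the template method is designed to measure precisely.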
DOI: 10.1088/1748-0221/13/10/p10023
2018
Cited 23 times
First beam tests of prototype silicon modules for the CMS High Granularity Endcap Calorimeter
The High Luminosity phase of the Large Hadron Collider will deliver 10 times more integrated luminosity than the existing collider, posing significant challenges for radiation tolerance and event pileup in the detectors, especially for forward calorimetry. As part of its upgrade program, the Compact Muon Solenoid collaboration is designing a high-granularity calorimeter (HGCAL) to replace the existing endcap calorimeters. It will feature unprecedented transverse and longitudinal readout and triggering segmentation for both the electromagnetic and hadronic sections. The electromagnetic section and a large fraction of the hadronic section will be based on hexagonal silicon sensors of 0.5–1 cm² cell size, with the remainder of the hadronic section based on highly segmented scintillators with silicon photomultiplier readout. The intrinsic high-precision timing capabilities of the silicon sensors will add an extra dimension to event reconstruction, especially in terms of pileup rejection. The first hexagonal silicon modules, using the existing Skiroc2 front-end ASIC developed for CALICE, were tested in beams at Fermilab and CERN in 2016. We present results from these tests in terms of system stability, calibration with minimum-ionizing particles, and resolution (energy, position and timing) for electrons, together with comparisons of these quantities with a GEANT4-based simulation.
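The energy resolution quoted in such beam tests is conventionally parameterized as a stochastic term added in quadrature with a constant term, σ/E = S/√E ⊕ C. A minimal sketch of that parameterization follows; the S and C values are hypothetical placeholders, not the beam-test results.

```python
# Standard calorimeter resolution parameterization: sigma_E / E is the
# quadrature sum of a stochastic term S/sqrt(E) and a constant term C.
# S and C below are hypothetical toy values, not the measured ones.
import math

def relative_resolution(energy_gev, s=0.22, c=0.01):
    """sigma_E / E for a beam energy in GeV (toy parameters)."""
    return math.hypot(s / math.sqrt(energy_gev), c)

# The stochastic term shrinks with energy, so resolution improves:
print(relative_resolution(10.0) > relative_resolution(100.0))  # True
```

At high energy the constant term (calibration non-uniformities, leakage) dominates, which is why calibration with minimum-ionizing particles matters for a silicon calorimeter.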
DOI: 10.48550/arxiv.2404.01071
2024
Machine Learning in High Energy Physics: A review of heavy-flavor jet tagging at the LHC
The application of machine learning (ML) in high energy physics (HEP), specifically in heavy-flavor jet tagging at Large Hadron Collider (LHC) experiments, has experienced remarkable growth and innovation in the past decade. This review provides a detailed examination of current and past ML techniques in this domain. It starts by exploring various data representation methods and ML architectures, encompassing traditional ML algorithms and advanced deep learning techniques. Subsequent sections discuss specific instances of successful ML applications in jet flavor tagging in the ATLAS and CMS experiments at the LHC, ranging from basic fully-connected layers to graph neural networks employing attention mechanisms. To systematically categorize the advancements over the LHC's three runs, the paper classifies jet tagging algorithms into three generations, each characterized by specific data representation techniques and ML architectures. This classification aims to provide an overview of the chronological evolution in this field. Finally, a brief discussion about anticipated future developments and potential research directions in the field is presented.
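The "first generation" taggers mentioned here mapped a fixed-size vector of jet features through fully-connected layers to a flavor score. A minimal sketch of such a forward pass is below; the weights, feature count and hidden size are random placeholders, not any experiment's trained model.

```python
# Minimal fully-connected tagger forward pass: a fixed-size vector of jet
# features -> one hidden ReLU layer -> sigmoid flavor score in (0, 1).
# All weights and inputs are random placeholders, not a trained tagger.
import numpy as np

rng = np.random.default_rng(0)

def mlp_score(features, w1, b1, w2, b2):
    """One hidden ReLU layer plus a sigmoid output node."""
    h = np.maximum(0.0, features @ w1 + b1)   # hidden activations
    logit = h @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logit))       # heavy-flavor probability

n_feat, n_hidden = 16, 32                     # hypothetical sizes
w1 = rng.normal(size=(n_feat, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
w2 = rng.normal(size=n_hidden) * 0.1
b2 = 0.0

jet = rng.normal(size=n_feat)                 # toy jet feature vector
score = mlp_score(jet, w1, b1, w2, b2)
print(0.0 < score < 1.0)  # True
```

The later generations the review describes replace this fixed-size input with variable-length particle sets consumed by graph networks and attention, but the output is still a per-jet flavor score of this kind.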
DOI: 10.1088/1748-0221/11/02/c02008
2016
Cited 11 times
Triggering on electrons, jets and tau leptons with the CMS upgraded calorimeter trigger for the LHC RUN II
The Compact Muon Solenoid (CMS) experiment implements a sophisticated two-level online selection system that achieves a rejection factor of nearly 10⁵. During Run II, the LHC will increase its centre-of-mass energy up to 13 TeV and progressively reach an instantaneous luminosity of 2 × 10³⁴ cm⁻² s⁻¹. In order to guarantee a successful and ambitious physics programme in this intense environment, the CMS Trigger and Data Acquisition (DAQ) system has been upgraded. A novel concept for the L1 calorimeter trigger is introduced: the Time Multiplexed Trigger (TMT). In this design, each of nine main processors receives all of the calorimeter data for an entire event from 18 preprocessors. This design is similar to that of the CMS DAQ and HLT systems. The advantage of the TMT architecture is that a global view and the full granularity of the calorimeters can be exploited by sophisticated algorithms. The goal is to maintain the current thresholds for calorimeter objects and improve the performance of their selection. The performance of these algorithms is demonstrated, both in terms of efficiency and rate reduction. The challenging aspects of pile-up mitigation and firmware design are also presented.
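The essence of the time-multiplexed design is that whole events, rather than detector regions, are farmed out to processing nodes. A minimal sketch of that routing, assuming simple round-robin scheduling over the nine main processors named in the abstract:

```python
# Sketch of time-multiplexed event routing: each of the 18 preprocessors
# sends its slice of event i to main processor i % 9, so every main
# processor sees the FULL calorimeter data of the events it handles.
# Round-robin scheduling is an assumption for illustration.
N_MAIN = 9        # main processors (from the text)
N_PREPROC = 18    # preprocessors (from the text)

def route(event_id):
    """Main processor that receives all slices of this event."""
    return event_id % N_MAIN

# All 18 preprocessor slices of event 7 converge on the same node:
targets = {route(7) for _ in range(N_PREPROC)}
print(targets)  # {7}
```

Because one node holds a complete event at full granularity, the trigger algorithms can be global, which is the advantage the abstract claims over a regionally partitioned design.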
DOI: 10.1088/1748-0221/12/07/c07013
2017
Cited 5 times
SiW ECAL for future e⁺e⁻ collider
Calorimeters with silicon detectors have many unique features and are proposed for several world-leading experiments. We discuss tests of the first three 18×18 cm$^2$ layers, each segmented into 1024 pixels, of the technological prototype of the silicon-tungsten electromagnetic calorimeter for a future $e^+e^-$ collider. The tests were performed in November 2015 at the CERN SPS beam line.
DOI: 10.1088/1742-6596/664/9/092009
2015
Cited 4 times
Matrix element method for high performance computing platforms
Considerable effort has been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS dataset at a moderate cost. In this article, we describe the status of two software projects under development, one focused on physics and one on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 dataset within 10 days, and that we have a satisfying metric for the upcoming Run 2. Future work will consist in finalizing a single merged system including all the physics and all the parallelism infrastructure, optimizing the implementation for the best hardware platforms.
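The computational core of the MEM is a multidimensional integral evaluated independently for every event, which is why it parallelizes so well across MPI ranks and OpenCL devices. A toy sequential sketch of that per-event structure, with a made-up integrand standing in for the squared matrix element:

```python
# Toy per-event matrix-element weight: a Monte Carlo estimate of an
# integral over a d-dimensional phase space. Each event's weight is
# independent of the others, which is what enables the MPI + OpenCL
# parallelization described in the text; this sketch stays sequential.
import random

def mem_weight(seed, n_points=20000, d=2):
    """MC estimate of the integral of a toy integrand over [0,1]^d."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_points):
        x = [rng.random() for _ in range(d)]
        total += sum(x)      # toy integrand; a real MEM evaluates |M|^2
    return total / n_points  # exact value for this integrand is d/2

# One weight per "event"; in the parallel version each call maps to a worker.
weights = [mem_weight(seed) for seed in range(4)]
```

Since there is no coupling between events, the speed-up from adding accelerators is close to linear, which underlies the "6 accelerators, 10 days" extrapolation in the abstract.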
DOI: 10.1007/jhep02(2016)122
2016
Cited 4 times
Search for W′ → tb in proton-proton collisions at √s = 8 TeV
A search is performed for the production of a massive W′ boson decaying to a top and a bottom quark. The data analysed correspond to an integrated luminosity of 19.7 fb−1 collected with the CMS detector at the LHC in proton-proton collisions at $$ \sqrt{s}=8 $$ TeV. The hadronic decay products of the top quark with high Lorentz boost from the W′ boson decay are detected as a single top flavoured jet. The use of jet substructure algorithms allows the top quark jet to be distinguished from standard model QCD background. Limits on the production cross section of a right-handed W′ boson are obtained, together with constraints on the left-handed and right-handed couplings of the W′ boson to quarks. The production of a right-handed W′ boson with a mass below 2.02 TeV decaying to a hadronic final state is excluded at 95% confidence level. This mass limit increases to 2.15 TeV when both hadronic and leptonic decays are considered, and is the most stringent lower mass limit to date in the tb decay mode.
DOI: 10.1016/j.nuclphysbps.2015.09.444
2016
Cited 3 times
The CMS Level-1 Tau identification algorithm for the LHC Run II
The CMS experiment implements a sophisticated two-level online selection system that achieves a rejection factor of nearly 10⁵. The first level (L1) is based on coarse information coming from the calorimeters and the muon detectors, while the High Level Trigger combines fine-grain information from all sub-detectors. To guarantee a successful and ambitious physics program despite the very large backgrounds and proton-proton collision rates, the CMS Trigger and Data Acquisition system must be consolidated. In particular, the L1 Calorimeter Trigger hardware and architecture will be upgraded, benefiting from the recent microTCA technology and allowing the calorimeter granularity to be better exploited in more advanced algorithms. Benefiting from the enhanced granularity provided by the new system, an innovative dynamic clustering technique has been developed to obtain an optimized tau selection algorithm.
2018
Higgs Boson Pair Production at Colliders: Status and Perspectives
DOI: 10.1088/1748-0221/11/01/c01051
2016
Run 2 upgrades to the CMS Level-1 calorimeter trigger
The CMS Level-1 calorimeter trigger is being upgraded in two stages to maintain performance as the LHC increases pile-up and instantaneous luminosity in its second run. In the first stage, improved algorithms including event-by-event pile-up corrections are used. New algorithms for heavy ion running have also been developed. In the second stage, higher granularity inputs and a time-multiplexed approach allow for improved position and energy resolution. Data processing in both stages of the upgrade is performed with new, Xilinx Virtex-7 based AMC cards.
2015
Search for the Higgs boson decaying into tau lepton pairs with the Matrix Element Method and tau trigger optimization in the CMS experiment at the LHC
I performed my thesis work in particle physics at the Laboratoire Leprince-Ringuet of the École Polytechnique. I participated in the analysis of the 8 TeV proton-proton collisions produced by the Large Hadron Collider (LHC) and collected by the CMS experiment. The discovery of the Higgs boson has been a major breakthrough in particle physics, as the masses of the vector bosons are explained through their interactions with the Higgs field. I worked on the analysis of the newly discovered Higgs boson. As its direct coupling to fermions remained to be exhibited, I focused on the search for the Higgs boson decaying into tau lepton pairs. The Higgs decay into a tau pair is the only channel allowing the couplings between the Higgs boson and the leptons to be measured, owing to the large event rate expected in the Standard Model compared to the other leptonic decay modes. The Higgs to tau leptons analysis is particularly challenging at the trigger level because the large background imposes high thresholds. I worked on a trigger, run at the end of the data-taking, that uses the missing transverse energy to lower the threshold on the single lepton; this approach allows the recovery of 41% of the signal events. Events with missing transverse momentum were selected in order to control the trigger rate. My personal contribution consisted in a thorough characterization of this trigger, including the measurement of the associated uncertainty. The results of this approach led to a 2% improvement in the exclusion limits computed in the Higgs to taus semileptonic channel. For Run 2, the centre-of-mass energy of the LHC collisions has been increased to 13 TeV and the instantaneous luminosity will reach 2 × 10³⁴ cm⁻² s⁻¹. To guarantee a successful and ambitious physics programme in this intense environment, the CMS Trigger and Data Acquisition system has been consolidated.
In particular, the Level-1 trigger (L1, the hardware-based first level of the CMS trigger system) benefited from the recent microTCA technology, allowing the calorimeter granularity to be better exploited with more advanced algorithms. Thanks to the enhanced granularity provided by the new system, an innovative dynamic clustering technique has been developed to obtain an optimized tau selection algorithm. I took responsibility for developing a complete new tau trigger algorithm at L1 and measuring its performance. This original approach resulted in the first hardware tau lepton trigger at a hadron collider that is efficient with a sustainable rate. I had the opportunity to present a poster on this work at the ICHEP 2014 conference in Valencia, and the proceedings were subsequently published in Nuclear Physics B. During the last year of my PhD I focused on the Higgs to di-tau analysis, initiating the very first matrix element (ME) approach in this channel, starting with the most sensitive final state: the semileptonic decay mode. The aim is to increase the sensitivity of the analysis to the SM Higgs boson with respect to the traditional methods. No ME-based analysis using tau leptons has ever been published; the novelty of my work is the treatment of the tau decay. In addition, I derived a parameterization of the detector response through transfer functions. Finally, the numerical aspects related to the computation of multidimensional integrals were tackled. I fully characterized the method on simulated samples before applying it to the 8 TeV data. The performance of this pioneering method in the context of the CMS Higgs to di-taus analysis is very promising, with the S/B ratio improved by a factor of 3, and constitutes a baseline for the analysis of the upcoming Run 2 data of the LHC.
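A transfer function of the kind mentioned above relates a reconstructed quantity to its parton-level value, so that detector response can be folded into the matrix-element integrand. A minimal Gaussian sketch follows; the relative-resolution value is a hypothetical placeholder, not the parameterization derived in the thesis.

```python
# Sketch of a Gaussian transfer function: the probability density of
# reconstructing energy e_reco given a parton-level energy e_parton.
# The 10% relative resolution is a hypothetical toy value.
import math

def transfer_function(e_reco, e_parton, sigma_rel=0.10):
    """Gaussian density centred on e_parton, width sigma_rel * e_parton."""
    sigma = sigma_rel * e_parton
    z = (e_reco - e_parton) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# The density is maximal when the detector measures the parton energy exactly:
print(transfer_function(100.0, 100.0) > transfer_function(90.0, 100.0))  # True
```

Inside a ME weight, this density multiplies the squared matrix element and is integrated over the unmeasured parton-level variables, which is where the multidimensional integrals discussed above come from.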
2015
Run 2 Upgrades to the CMS Level-1 Calorimeter Trigger
DOI: 10.1016/j.physletb.2016.063.027
2016
Measurement of the inelastic cross section in proton-lead collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV
The inelastic hadronic cross section in proton-lead collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV is measured with the CMS detector at the LHC. The data sample, corresponding to an integrated luminosity of 12.6 +/- 0.4 inverse nanobarns, has been collected with an unbiased trigger for inclusive particle production. The cross section is obtained from the measured number of proton-lead collisions with hadronic activity produced in the pseudorapidity ranges 3<abs(eta)<5 and/or -5<abs(eta)<-3, corrected for photon-induced contributions, experimental acceptance, and other instrumental effects. The inelastic cross section is measured to be sigma[inel,pPb]=2061 +/- 3 (stat) +/- 34 (syst) +/- 72 (lum) mb. Various Monte Carlo generators, commonly used in heavy ion and cosmic ray physics, are found to reproduce the data within uncertainties. The value of sigma[inel,pPb] is compatible with that expected from the proton-proton cross section at 5.02 TeV scaled up within a simple Glauber approach to account for multiple scatterings in the lead nucleus, indicating that further net nuclear corrections are small.
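The measurement rests on the relation σ = N_corrected / L_int. The unit bookkeeping can be checked directly from the two numbers quoted in the abstract; the event count recovered below is only the implied fully corrected count, not a number stated in the paper.

```python
# Unit exercise behind sigma = N_corrected / L_int, using the central
# values quoted in the abstract: L_int = 12.6 nb^-1, sigma = 2061 mb.
# The resulting event count is implied, not quoted in the paper.
MB_PER_NB_INV = 1.0e6          # 1 nb^-1 = 1e6 mb^-1
L_int = 12.6 * MB_PER_NB_INV   # integrated luminosity in mb^-1
sigma = 2061.0                 # measured inelastic pPb cross section, mb

n_corrected = sigma * L_int    # events after acceptance/photon corrections
print(f"{n_corrected:.3e}")    # ~2.6e10 corrected pPb interactions
```

The same relation, read the other way, is how the paper converts the corrected number of hadronic-activity collisions into the quoted cross section.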
DOI: 10.22323/1.350.0126
2019
Higgs production in the VH mode at ATLAS and CMS
In these proceedings, the key role played by the Higgs boson associated production (VH) mode in the characterization of the electroweak spontaneous symmetry breaking mechanism is described through a review of the latest results from the ATLAS and CMS experiments obtained with the data collected during LHC Run 2. A focus is given to the recent discovery of the Higgs boson decay to a bottom quark-antiquark pair by ATLAS and CMS, achieved through the analysis of the additional data collected during 2017. A review of the ATLAS and CMS searches for VH(H → WW) is also provided, together with a summary of the role played by the VH production mechanism in the H → ττ observation carried out by CMS and in the challenging search for the Higgs boson decay to charm quarks published by ATLAS early in LHC Run 2.