
Davide Valsecchi


DOI: 10.1088/1742-6596/2438/1/012077
2023
Deep learning techniques for energy clustering in the CMS ECAL
The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material (beampipe, tracker and support structures) causes electrons and photons to start showering before reaching the calorimeter. This effect, combined with the 3.8 T CMS magnetic field, leads to energy being spread in several clusters around the primary one. It is essential to recover the energy contained in these satellite clusters in order to achieve the best possible energy resolution for physics analyses. Historically, satellite clusters have been associated with the primary cluster using a purely topological algorithm which does not attempt to remove spurious energy deposits from additional pileup interactions (PU). The performance of this algorithm is expected to degrade during LHC Run 3 (2022+) because of the larger average PU levels and the increasing levels of noise due to the ageing of the ECAL detector. New methods are being investigated that exploit state-of-the-art deep learning architectures like Graph Neural Networks (GNN) and self-attention algorithms. These more sophisticated models improve the energy collection and are more resilient to PU and noise, helping to preserve the electron and photon energy resolution achieved during LHC Runs 1 and 2. This work will cover the challenges of training the models as well as the opportunity that this new approach offers to unify the ECAL energy measurement with the particle identification steps used in the global CMS photon and electron reconstruction.
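As a purely illustrative sketch of the kind of architecture the abstract mentions (self-attention over the clusters in a window around the seed), the toy PyTorch model below scores each cluster for membership in the shower. The feature layout, dimensions and class name are assumptions made for this example; this is not the CMS model.

```python
# Illustrative sketch only: a self-attention scorer that decides which satellite
# clusters around a seed belong to the same electron/photon shower.
# Feature layout and dimensions are assumptions, not the CMS implementation.
import torch
import torch.nn as nn

class ClusterAttentionScorer(nn.Module):
    def __init__(self, n_features=4, d_model=32, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)            # per-cluster embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)                      # per-cluster "in shower" score

    def forward(self, clusters):
        # clusters: (batch, n_clusters, n_features), e.g. (energy, eta, phi, n_crystals)
        x = self.embed(clusters)
        x, _ = self.attn(x, x, x)          # each cluster attends to all others in the window
        return torch.sigmoid(self.head(x)).squeeze(-1)         # probability per cluster

# toy usage: one event with a seed plus five satellite clusters
model = ClusterAttentionScorer()
event = torch.randn(1, 6, 4)
print(model(event))                        # tensor of shape (1, 6) with per-cluster probabilities
```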
DOI: 10.48550/arxiv.2403.18582
2024
One flow to correct them all: improving simulations in high-energy physics with a single normalising flow and a switch
Simulated events are key ingredients in almost all high-energy physics analyses. However, imperfections in the simulation can lead to sizeable differences between the observed data and simulated events. The effects of such mismodelling on relevant observables must be corrected either effectively, via scale factors or weights, or by modifying the distributions of the observables and their correlations. We introduce a correction method that transforms one multidimensional distribution (simulation) into another one (data) using a simple architecture based on a single normalising flow with a boolean condition. We demonstrate the effectiveness of the method on a physics-inspired toy dataset with non-trivial mismodelling of several observables and their correlations.
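A minimal sketch of the core idea, assuming a PyTorch implementation: a single affine coupling layer whose scale and shift networks receive a boolean switch as an extra conditioning input, so one flow can be evaluated in a "simulation" or "data" configuration. The architecture and dimensions are invented for illustration; this is not the paper's code.

```python
# Minimal sketch (not the paper's code): one affine coupling layer conditioned on a
# boolean switch, the building block of a flow that can be toggled between domains.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    def __init__(self, dim=4, hidden=64):
        super().__init__()
        self.half = dim // 2
        # input: first half of the features plus one boolean condition
        self.net = nn.Sequential(
            nn.Linear(self.half + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),   # scale and shift for the second half
        )

    def forward(self, x, switch):
        # x: (batch, dim); switch: (batch, 1) with values 0.0 or 1.0
        x1, x2 = x[:, :self.half], x[:, self.half:]
        params = self.net(torch.cat([x1, switch], dim=1))
        log_s, t = params.chunk(2, dim=1)
        y2 = x2 * torch.exp(log_s) + t                  # affine transform of the second half
        log_det = log_s.sum(dim=1)                      # log|det J| needed for the flow likelihood
        return torch.cat([x1, y2], dim=1), log_det

layer = ConditionalAffineCoupling(dim=4)
x = torch.randn(8, 4)
flag = torch.ones(8, 1)                                 # e.g. 1 = "correct towards data"
y, log_det = layer(x, flag)
print(y.shape, log_det.shape)                           # (8, 4) and (8,)
```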
DOI: 10.1007/s42484-023-00106-3
2023
Comparing quantum and classical machine learning for Vector Boson Scattering background reduction at the Large Hadron Collider
We report on a consistent comparison between techniques of quantum and classical machine learning applied to the classification of signal and background events for the Vector Boson Scattering processes, studied at the Large Hadron Collider installed at the CERN laboratory. Quantum machine learning algorithms based on variational quantum circuits are run on freely available quantum computing hardware, showing very good performance compared to deep neural networks run on classical computing facilities. In particular, we show that such quantum neural networks are able to correctly classify the targeted signal with an Area Under the characteristic Curve (AUC) that is very close to the one obtained with the corresponding classical neural network, while employing a much smaller number of resources, as well as less variable data in the training set. Although it provides only a proof-of-principle demonstration with limited quantum computing resources, this work represents one of the first steps towards the use of near-term and noisy quantum hardware for practical event classification in High Energy Physics experiments.
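For illustration, here is a toy variational quantum circuit classifier of the kind described above, written with PennyLane (assuming it is installed) and its built-in state-vector simulator. The circuit layout, depth and feature encoding are illustrative choices, not those used in the paper.

```python
# Illustrative variational quantum circuit classifier, not the circuit used in the paper.
# Assumes PennyLane is installed; runs on the built-in "default.qubit" simulator.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(features, weights):
    # angle-encode one event's (rescaled) kinematic features
    for i in range(n_qubits):
        qml.RY(features[i], wires=i)
    # one variational layer: trainable rotations followed by entangling CNOTs
    for i in range(n_qubits):
        qml.RY(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))        # expectation value in [-1, 1] used as the score

def predict(features, weights):
    # map the expectation value to a signal probability in [0, 1]
    return 0.5 * (1.0 + circuit(features, weights))

weights = np.array([0.1, 0.5, 0.9])         # toy, untrained variational parameters
event = np.array([0.3, 1.1, 2.0])           # toy feature vector for one event
print(predict(event, weights))
```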
DOI: 10.48550/arxiv.2004.00726
2020
VBSCan Mid-Term Scientific Meeting
This document summarises the talks and discussions that took place during the VBSCan Mid-Term Scientific Meeting workshop. The VBSCan COST action is dedicated to the coordinated study of vector boson scattering (VBS) from the phenomenological and experimental point of view, for the best exploitation of the data that will be delivered by existing and future particle colliders.
DOI: 10.48550/arxiv.2204.10277
2022
Deep learning techniques for energy clustering in the CMS ECAL
The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material (beampipe, tracker and support structures) causes electrons and photons to start showering before reaching the calorimeter. This effect, combined with the 3.8 T CMS magnetic field, leads to energy being spread in several clusters around the primary one. It is essential to recover the energy contained in these satellite clusters in order to achieve the best possible energy resolution for physics analyses. Historically, satellite clusters have been associated with the primary cluster using a purely topological algorithm which does not attempt to remove spurious energy deposits from additional pileup interactions (PU). The performance of this algorithm is expected to degrade during LHC Run 3 (2022+) because of the larger average PU levels and the increasing levels of noise due to the ageing of the ECAL detector. New methods are being investigated that exploit state-of-the-art deep learning architectures like Graph Neural Networks (GNN) and self-attention algorithms. These more sophisticated models improve the energy collection and are more resilient to PU and noise, helping to preserve the electron and photon energy resolution achieved during LHC Runs 1 and 2. This work will cover the challenges of training the models as well as the opportunity that this new approach offers to unify the ECAL energy measurement with the particle identification steps used in the global CMS photon and electron reconstruction.
DOI: 10.1088/1748-0221/15/06/c06037
2020
ECAL trigger performance in Run 2 and improvements for Run 3
The CMS electromagnetic calorimeter (ECAL) is a high resolution crystal calorimeter operating at the CERN LHC. It is responsible for the identification and precise reconstruction of electrons and photons in CMS, which were crucial in the discovery and subsequent characterization of the Higgs boson. It also contributes to the reconstruction of tau leptons, jets, and calorimeter energy sums, which are vital components of many CMS physics analyses. The ECAL trigger system employs fast digital signal processing algorithms to precisely measure the energy and timing information of ECAL energy deposits recorded during LHC collisions. These trigger primitives are transmitted to the Level-1 trigger system at the LHC collision rate of 40 MHz. These energy deposits are then combined with information from other CMS sub-detectors to determine whether the event should trigger the readout of the data from CMS to be transmitted to the High-Level Trigger and eventually to permanent storage. This note will summarize the ECAL trigger performance achieved during LHC Run 2 (2015–2018). It will describe the methods that are used to provide frequent calibrations of the ECAL trigger primitives during LHC operation. These are needed to account for radiation-induced changes in crystal and photodetector response and to maintain stable trigger rates and efficiencies up to |η| = 3.0. They also minimize the spurious triggering on direct signals in the photodetectors used in the barrel region (|η| < 1.48). Both of these effects have increased relative to LHC Run 1 (2009–2012), due to the higher luminosities experienced in Run 2. Further improvements in the energy and time reconstruction of the CMS ECAL trigger primitives are being explored for LHC Run 3 (2021–2023), using additional features implemented in the on-detector readout. These are particularly focused on improving the performance at the highest instantaneous luminosities (which will reach or exceed 2 × 10³⁴ cm⁻² s⁻¹ in Run 3) and in the most forward regions of the calorimeter (|η| > 2.5), where the effects of detector aging will be the greatest. The main features of these improved algorithms will be described and preliminary estimates of the expected performance gains will be presented.
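To illustrate the general idea behind the digital signal processing mentioned above, the toy example below estimates the amplitude of a digitised pulse as a linear weighted sum of pedestal-subtracted samples. The pulse shape, pedestal and weights are invented for the example and are not the CMS ECAL filter values.

```python
# Toy illustration of the general idea behind trigger-primitive energy estimation:
# the amplitude of a digitised pulse is estimated as a linear weighted sum of the
# ADC samples after pedestal subtraction. The pulse shape, pedestal and weights
# below are invented for illustration; they are not the CMS ECAL filter values.
import numpy as np

def amplitude_estimate(samples, weights, pedestal):
    """Linear estimate of the pulse amplitude from digitised samples."""
    return np.dot(weights, samples - pedestal)

# fake digitised pulse: a pedestal plus a peak spread over a few 25 ns samples
pedestal = 200.0
true_amplitude = 150.0
pulse_shape = np.array([0.0, 0.1, 0.6, 1.0, 0.7, 0.4])   # normalised to 1 at the peak
samples = pedestal + true_amplitude * pulse_shape

# weights chosen so that the weighted sum of the shape equals 1,
# i.e. the estimator returns the peak amplitude for this shape
weights = pulse_shape / np.dot(pulse_shape, pulse_shape)

print(amplitude_estimate(samples, weights, pedestal))     # ~150.0
```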
DOI: 10.22323/1.398.0441
2022
Search for vector boson scattering with the semi-leptonic WV signature at CMS
A search for the electroweak VBS production of a WV pair plus two jets, in the semi-leptonic channel, at a center-of-mass energy of 13 TeV is reported. The data sample corresponds to the full Run 2 CMS dataset of proton-proton collisions at 13 TeV, corresponding to an integrated luminosity of 137.1 fb⁻¹. Events are analyzed in two energy regimes: either the hadronically decaying W/Z boson is reconstructed as one large-radius jet, or it is identified as a pair of jets with dijet mass near the W/Z mass. Machine learning models are optimized for the signal extraction and the classifiers are interpreted using tools from the explainable machine learning field. The overwhelming background contribution from single W production plus jets is measured in dedicated control regions implementing a data-driven strategy.
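As a generic illustration of the two ingredients mentioned in the abstract, a classifier for the signal extraction and a model-agnostic interpretation of its inputs, the sketch below trains a boosted-decision-tree classifier on toy data and ranks its inputs with permutation importance. The feature names are hypothetical and this is not the analysis code.

```python
# Generic sketch: a classifier trained to separate signal from background on toy data,
# then interpreted with a model-agnostic tool (permutation importance).
# Feature names are hypothetical; this is not the CMS analysis code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# toy "kinematic" features: only the first two actually separate signal from background
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)

# which inputs does the classifier actually rely on?
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["mjj", "delta_eta_jj", "lep_pt", "met"], result.importances_mean):
    print(f"{name:>12s}: {imp:.3f}")
```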
DOI: 10.21203/rs.3.rs-1966967/v1
2022
Comparing Quantum and Classical Machine Learning for Vector Boson Scattering Background Reduction at the Large Hadron Collider
We report on a consistent comparison between techniques of quantum and classical machine learning applied to the classification of signal and background events for the Vector Boson Scattering processes, studied at the Large Hadron Collider installed at the CERN laboratory. Quantum machine learning algorithms based on variational quantum circuits are run on freely available quantum computing hardware, showing very good performance compared to deep neural networks run on classical computing facilities. In particular, we show that such quantum neural networks are able to correctly classify the signal with an Area Under the characteristic Curve (AUC) that is very close to the one obtained with the corresponding classical neural network, while employing a much smaller number of resources, as well as less variable data in the training set. Although it provides only a proof-of-principle demonstration with limited quantum computing resources, this work represents one of the first steps towards the use of near-term and noisy quantum hardware for practical event classification in High Energy Physics experiments.
DOI: 10.21203/rs.3.rs-2364684/v1
2022
Comparing Quantum and Classical Machine Learning for Vector Boson Scattering Background Reduction at the Large Hadron Collider
We report on a consistent comparison between techniques of quantum and classical machine learning applied to the classification of signal and background events for the Vector Boson Scattering processes, studied at the Large Hadron Collider installed at the CERN laboratory. Quantum machine learning algorithms based on variational quantum circuits are run on freely available quantum computing hardware, showing very good performance compared to deep neural networks run on classical computing facilities. In particular, we show that such quantum neural networks are able to correctly classify the signal with an Area Under the characteristic Curve (AUC) that is very close to the one obtained with the corresponding classical neural network, while employing a much smaller number of resources, as well as less variable data in the training set. Although it provides only a proof-of-principle demonstration with limited quantum computing resources, this work represents one of the first steps towards the use of near-term and noisy quantum hardware for practical event classification in High Energy Physics experiments.
DOI: 10.1109/nss/mic44845.2022.10399329
2022
Deep Learning Techniques for Energy Clustering in the CMS Electromagnetic Calorimeter
The reconstruction of electrons and photons in CMS is based on topological clustering of the energy they deposit in different crystals of the electromagnetic calorimeter (ECAL). These clusters are built by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material causes electrons and photons to start showering before reaching the ECAL. This effect, combined with the 3.8 T CMS magnetic field, leads to energy being spread in several clusters around the primary one. It is essential to recover the energy contained in these satellite clusters to achieve the best possible energy resolution. Historically, satellite clusters have been associated with the primary cluster using a purely topological algorithm which does not attempt to remove spurious energy deposits from additional pile-up interactions (PU). The performance of this algorithm is expected to degrade during LHC Run 3 (2022+) because of the larger average PU levels and the increasing levels of noise due to the ageing of the ECAL detector. New methods are being investigated that exploit state-of-the-art deep learning architectures like Graph Neural Networks (GNN) and self-attention algorithms. These more sophisticated models improve the energy collection and are more resilient to PU and noise. This talk will cover the challenges of training the models and the opportunities that this new approach offers.
DOI: 10.48550/arxiv.1906.11332
2019
VBSCan Thessaloniki 2018 Workshop Summary
This document reports the first year of activity of the VBSCan COST Action network, as summarised by the talks and discussions that took place during the VBSCan Thessaloniki 2018 workshop. The VBSCan COST action aims at a consistent and coordinated study of vector-boson scattering from the phenomenological and experimental point of view, for the best exploitation of the data that will be delivered by existing and future particle colliders.
DOI: 10.22323/1.364.0161
2020
Optimising the performance of the CMS Electromagnetic Calorimeter to measure Higgs properties during Phase I and Phase II of the LHC
The CMS Electromagnetic Calorimeter (ECAL) is a high granularity lead tungstate crystal calorimeter operating at the CERN LHC. The original design placed a premium on excellent energy resolution. Excellent energy resolution and efficient identification of photons are essential to reconstruct the Higgs boson in the H → γγ decay channel, for measurements of the self-coupling of Higgs bosons and other related parameters. The ECAL performance has been crucial in the discovery and subsequent characterisation of the Higgs boson. The original ECAL design considerations, and the actual experimental energy reconstruction and calibration precision, will be reviewed. The improvements to the energy reconstruction and energy calibration algorithms for LHC Run II are described. These are required to maintain the stability of the ECAL energy scale and resolution at the higher LHC luminosities experienced compared to Run I. The precision measurement of the Higgs decay modes is central to the HL-LHC physics program. In addition, the search for di-Higgs production is important to understand the details of the vacuum potential. The crystals in the barrel region will be retained for the HL-LHC. The decrease of operating temperature and the upgrades to the readout electronics that are needed to maintain the required performance of the barrel region from 2026 onwards will be described. These upgrades will ensure that radiation-induced noise increases will not dominate the energy resolution for photons from Higgs boson decays, and will preserve the ability of CMS to trigger efficiently on these signals. They will also permit precision time measurements (30 ps rms error on the arrival time of photons from Higgs boson decays) which will improve the determination of the location of the production vertex for di-photon events. The time measurement performance of the new readout electronics has been characterised in beam tests. The predicted electron and photon energy resolution and identification efficiencies expected for the HL-LHC will be described, and the performance relevant to a number of key Higgs decay channels will be presented.
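As a reminder of why keeping the noise term under control matters for the points above, the sketch below evaluates the standard calorimeter resolution parameterisation, with stochastic, noise and constant terms added in quadrature. The numerical values are typical illustrative numbers, not CMS measurements or HL-LHC projections.

```python
# Illustrative reminder of the standard calorimeter resolution parameterisation,
# sigma_E/E = a/sqrt(E) (+) b/E (+) c, with the terms added in quadrature.
# The noise term b is the part that grows with radiation damage and pile-up, which
# the upgrades aim to keep from dominating. Values are typical textbook numbers only.
import math

def relative_resolution(E, a=0.03, b=0.12, c=0.003):
    """sigma_E / E for energy E in GeV (a: stochastic term, b: noise term in GeV, c: constant term)."""
    return math.sqrt((a / math.sqrt(E)) ** 2 + (b / E) ** 2 + c ** 2)

for E in (10, 50, 100):   # photon energies in GeV, e.g. from H -> gamma gamma decays
    print(f"E = {E:>3d} GeV: sigma_E/E = {relative_resolution(E):.3%}")
```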