
M. Tosi

DOI: 10.1016/j.revip.2023.100085
2023
Cited 5 times
Toward the end-to-end optimization of particle physics instruments with differentiable programming
The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, due to the large dimensionality of the space of possible choices for geometry, detection technology, materials, data-acquisition, and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, "experience-driven" layouts are in principle within our reach if an objective function fully aligned with the final goals of the instrument is maximized through a systematic search of the configuration space. The stochastic nature of the involved quantum processes makes the modeling of these systems an intractable problem from a classical statistics point of view, yet the construction of a fully differentiable pipeline and the use of deep learning techniques may allow the simultaneous optimization of all design parameters. In this white paper, we lay down our plans for the design of a modular and versatile modeling tool for the end-to-end optimization of complex instruments for particle physics experiments as well as industrial and medical applications that share the detection of radiation as their basic ingredient. We consider a selected set of use cases to highlight the specific needs of different applications.
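The core idea of the white paper lends itself to a compact illustration. Below is a minimal sketch, not the authors' tool: if a design figure of merit is expressed as a differentiable function of a detector parameter, plain gradient descent can search the configuration space. The toy resolution model and all numbers are invented for illustration.

```python
# Minimal sketch (assumed toy model, not the authors' tool): express a design
# figure of merit as a differentiable function of a detector parameter and
# optimize it by gradient descent.
import jax.numpy as jnp
from jax import grad

def resolution(thickness):
    # Invented trade-off: more material improves sampling (first term) but
    # adds multiple-scattering-like degradation (second term).
    return 1.0 / jnp.sqrt(thickness) + 0.05 * thickness

d_resolution = grad(resolution)

t = 1.0                       # initial layer thickness, arbitrary units
for _ in range(200):          # plain gradient descent on the design parameter
    t = t - 0.1 * d_resolution(t)

print(f"optimized thickness ~ {t:.2f}, resolution ~ {resolution(t):.3f}")
```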
DOI: 10.1007/jhep04(2016)126
2016
Cited 25 times
Higgs pair production: choosing benchmarks with cluster analysis
New physics theories often depend on a large number of free parameters. The phenomenology they predict for fundamental physics processes is in some cases drastically affected by the precise value of those free parameters, while in other cases it is left basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics predicted by different models; a clustering algorithm using that metric may allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmarks are then guaranteed to be sensitive to a large area of the parameter space. In this document we show a practical implementation of the above strategy for the study of non-resonant production of Higgs boson pairs in the context of extensions of the standard model with anomalous couplings of the Higgs bosons. A non-standard value of those couplings may significantly enhance the Higgs boson pair-production cross section, such that the process could be detectable with the data that the LHC will collect in Run 2.
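The benchmark-selection strategy can be sketched in a few lines. The following is a hedged illustration, not the paper's implementation: parameter-space points are compared through a simple binned test statistic (the paper uses a dedicated multi-dimensional one), clustered hierarchically, and each cluster is represented by its medoid as the benchmark. The histograms stand in for the predicted kinematic distributions.

```python
# Hedged illustration of the clustering strategy; the test statistic and the
# data are stand-ins, not the paper's.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_points, n_bins = 30, 20
# Hypothetical normalized kinematic histograms, one per parameter-space point.
hists = rng.dirichlet(np.ones(n_bins), size=n_points)

def ts(p, q):
    # Simple symmetric chi-square distance between binned shapes; the paper
    # uses a dedicated multi-dimensional test statistic.
    return 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))

dist = np.array([[ts(a, b) for b in hists] for a in hists])
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=4, criterion="maxclust")

for c in np.unique(labels):
    members = np.where(labels == c)[0]
    # Benchmark = medoid: the member closest, on average, to its cluster mates.
    medoid = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
    print(f"cluster {c}: {members.size} points, benchmark = point {medoid}")
```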
DOI: 10.1088/1748-0221/18/08/p08020
2023
Integration of thermo-electric coolers into the CMS MTD SiPM arrays for operation under high neutron fluence
The barrel section of the novel MIP Timing Detector (MTD) will be constructed as part of the upgrade of the CMS experiment to provide a time resolution for single charged tracks in the range of 30–60 ps using LYSO:Ce crystal arrays read out with Silicon Photomultipliers (SiPMs). A major challenge for the operation of such a detector is the extremely high radiation level, of about 2×10¹⁴ 1 MeV(Si) Eqv. n/cm², that will be integrated over a decade of operation of the High Luminosity Large Hadron Collider (HL-LHC). Silicon Photomultipliers exposed to this level of radiation have shown a strong increase in dark count rate and radiation damage effects that also impact their gain and photon detection efficiency. For this reason, during operation the whole detector is cooled down to about −35°C. In this paper we illustrate an innovative and cost-effective solution to mitigate the impact of radiation damage on the timing performance of the detector, by integrating small thermo-electric coolers (TECs) on the back of the SiPM package. This additional feature, fully integrated as part of the SiPM array, enables a further decrease in operating temperature down to about −45°C. This leads to a reduction by a factor of about two in the dark count rate without requiring an additional power budget, since the power required by the TEC is almost entirely offset by a decrease in the power required for the SiPM operation due to leakage current. In addition, the operation of the TECs with reversed polarity during technical stops of the accelerator can raise the temperature of the SiPMs up to 60°C (about 50°C higher than the rest of the detector), thus accelerating the annealing of radiation damage effects and partly recovering the SiPM performance.
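The quoted factor-two reduction in dark count rate follows from the rule of thumb implied by the abstract, namely that the DCR of an irradiated SiPM roughly halves per ~10°C of cooling. The snippet below is a back-of-the-envelope check with that single assumption; it is not taken from the paper.

```python
# Back-of-the-envelope check, assuming only that the DCR of an irradiated SiPM
# halves per ~10 C of cooling (the factor the paper quotes for -35 C -> -45 C).
def dcr_ratio(t_from_c, t_to_c, halving_step_c=10.0):
    """Relative dark count rate after cooling from t_from_c to t_to_c."""
    return 0.5 ** ((t_from_c - t_to_c) / halving_step_c)

print(f"DCR(-45 C) / DCR(-35 C) ~ {dcr_ratio(-35.0, -45.0):.2f}")  # ~0.50
```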
DOI: 10.1080/10619127.2021.1881364
2021
Cited 8 times
Toward Machine Learning Optimization of Experimental Design
The design of instruments that rely on the interaction of radiation with matter for their operation is a quite complex task if our goal is to achieve near optimality on some well-defined utility function...
DOI: 10.48550/arxiv.2203.13818
2022
Cited 3 times
Toward the End-to-End Optimization of Particle Physics Instruments with Differentiable Programming: a White Paper
The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, given the large dimensionality of the space of possible choices for geometry, detection technology, materials, data-acquisition, and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, "experience-driven" layouts are in principle within our reach if an objective function fully aligned with the final goals of the instrument is maximized by means of a systematic search of the configuration space. The stochastic nature of the involved quantum processes makes the modeling of these systems an intractable problem from a classical statistics point of view, yet the construction of a fully differentiable pipeline and the use of deep learning techniques may allow the simultaneous optimization of all design parameters. In this document we lay down our plans for the design of a modular and versatile modeling tool for the end-to-end optimization of complex instruments for particle physics experiments as well as industrial and medical applications that share the detection of radiation as their basic ingredient. We consider a selected set of use cases to highlight the specific needs of different applications.
DOI: 10.1016/j.nuclphysbps.2015.09.436
2016
Cited 3 times
Tracking at High Level Trigger in CMS
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We will present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made toward the next data taking in 2015.
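The computing budget quoted in the abstract directly implies the scale of the HLT farm: at a 100 kHz Level-1 output rate and ~200 ms per event, Little's law gives the number of events that must be processed concurrently. A quick sanity check:

```python
# Little's law: events in flight = input rate x processing latency.
l1_rate_hz = 100e3       # nominal Level-1 output rate, from the abstract
hlt_time_s = 0.200       # ~200 ms per event at the HLT, from the abstract
in_flight = l1_rate_hz * hlt_time_s
print(f"events processed concurrently ~ {in_flight:.0f}")   # ~20000
```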
DOI: 10.1088/1742-6596/664/8/082055
2015
Performance of Tracking, b-tagging and Jet/MET reconstruction at the CMS High Level Trigger
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of experiments. In 2015, the center-of-mass energy of proton-proton collisions will reach 13 TeV, at an unprecedented luminosity of up to 1 × 10³⁴ cm⁻²s⁻¹. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capabilities. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Tracking algorithms are widely used in the HLT, in the object reconstruction through particle-flow techniques as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II, with a large contribution from out-of-time particles. In order to cope with these tougher conditions, the tracking and vertexing techniques used in 2012 have been largely improved in terms of timing and efficiency, so as to keep the physics reach at the level of the Run I conditions. We will present the performance of these newly developed algorithms, discussing their impact on the b-tagging performance as well as on the jet and missing transverse energy reconstruction.
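The abstract mentions distinguishing the hard-interaction primary vertex from pileup vertices. A minimal sketch of one standard criterion used in CMS reconstruction, choosing the vertex whose associated tracks carry the largest sum of pT², is given below; the vertices and track lists are invented.

```python
# Sketch of primary-vertex selection by largest sum(pT^2) of associated tracks,
# a standard criterion in CMS reconstruction; the vertices below are invented.
def pick_primary_vertex(vertices):
    """vertices: list of (z_cm, [track_pt_GeV, ...]); returns the hard-scatter candidate."""
    return max(vertices, key=lambda v: sum(pt ** 2 for pt in v[1]))

vertices = [
    (0.1,  [45.0, 38.0, 5.2]),        # hard interaction: a few high-pT tracks
    (2.3,  [1.1, 0.9, 1.4, 2.0]),     # pileup: many soft tracks
    (-1.7, [0.8, 1.2]),               # pileup
]
z, tracks = pick_primary_vertex(vertices)
print(f"primary vertex at z = {z} cm with {len(tracks)} tracks")
```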
2023
Exploiting Differentiable Programming for the End-to-end Optimization of Detectors
DOI: 10.48550/arxiv.2306.00818
2023
Integration of thermo-electric coolers into the CMS MTD SiPM arrays for operation under high neutron fluence
The barrel section of the novel MIP Timing Detector (MTD) will be constructed as part of the upgrade of the CMS experiment to provide a time resolution for single charged tracks in the range of $30-60$ ps using LYSO:Ce crystal arrays read out with Silicon Photomultipliers (SiPMs). A major challenge for the operation of such a detector is the extremely high radiation level, of about $2\times10^{14}$ 1 MeV(Si) Eqv. n/cm$^2$, that will be integrated over a decade of operation of the High Luminosity Large Hadron Collider (HL-LHC). Silicon Photomultipliers exposed to this level of radiation have shown a strong increase in dark count rate and radiation damage effects that also impact their gain and photon detection efficiency. For this reason, during operation the whole detector is cooled down to about $-35^{\circ}$C. In this paper we illustrate an innovative and cost-effective solution to mitigate the impact of radiation damage on the timing performance of the detector, by integrating small thermo-electric coolers (TECs) on the back of the SiPM package. This additional feature, fully integrated as part of the SiPM array, enables a further decrease in operating temperature down to about $-45^{\circ}$C. This leads to a reduction by a factor of about two in the dark count rate without requiring an additional power budget, since the power required by the TEC is almost entirely offset by a decrease in the power required for the SiPM operation due to leakage current. In addition, the operation of the TECs with reversed polarity during technical stops of the accelerator can raise the temperature of the SiPMs up to $60^{\circ}$C (about $50^{\circ}$C higher than the rest of the detector), thus accelerating the annealing of radiation damage effects and partly recovering the SiPM performance.
DOI: 10.22323/1.314.0523
2017
The CMS trigger in Run 2
During its second period of operation (Run 2), which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately 2×10³⁴ cm⁻²s⁻¹ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new approaches for the online track reconstruction lead to a drastic reduction of the computing time, and to much improved performance. This presentation will describe the performance of the upgraded trigger system in Run 2.
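As an illustration of the "high-level quantities, like invariant masses" that the upgraded global trigger can compute online, the snippet below evaluates the invariant mass of two trigger objects from their pT, η, φ, assuming massless objects; the input values are hypothetical.

```python
# Invariant mass of two massless trigger objects from (pT, eta, phi):
# m^2 = 2 pT1 pT2 (cosh(d_eta) - cos(d_phi)). Input values are hypothetical.
import math

def inv_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    return math.sqrt(2.0 * pt1 * pt2 *
                     (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

print(f"m = {inv_mass(40.0, 0.5, 0.1, 35.0, -0.3, 2.9):.1f} GeV")
```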
DOI: 10.22323/1.278.0004
2016
Search for Dark Matter (experiment)
Among the experimental strategies for the search for Dark Matter, collider experiments provide unique sensitivity to its non-gravitational interactions with ordinary matter, for a range of Dark Matter masses between a few GeV and hundreds of GeV. We discuss the status of the main Dark Matter searches at the Large Hadron Collider by the ATLAS and CMS experiments, underlining the complementarity between searches in different final states and between collider and direct detection results.
DOI: 10.1088/1742-6596/447/1/012048
2013
Results on the Search for MSSM Neutral and Charged Higgs bosons (CMS)
In the minimal super-symmetric extension of the Standard Model (MSSM), the Higgs sector contains two Higgs boson doublets, including, after electroweak symmetry breaking, the CP-odd neutral scalar A⁰, the two charged scalars H±, and the two CP-even neutral scalars h and H⁰. The neutral Higgs bosons are searched for in the μ⁺μ⁻, τ⁺τ⁻, and recently also in the bb̄ channels, whereas the charged Higgs state is searched for in top quark decays with at least one τ in the final state. This report reviews the current status of searches for MSSM Higgs bosons with the data collected by the CMS experiment at the LHC in the 2011 and 2012 operations.
DOI: 10.1109/glocom.2003.1258862
2004
Implementation of a wideband directional channel model for a UMTS link level simulator
This paper deals with the implementation and the assessment of a wideband directional channel model (WDCM) based on the COST 259 recommendations. Clusters of replicas are introduced by means of a tapped delay line (TDL). A good compromise between the accuracy of the model and the complexity required by the simulator is guaranteed by a suitable statistical approach based on deterministic considerations. In the paper a WDCM implementation for a micro-cell case is presented, with particular attention to the cluster position and the mobility model concept. Finally, the channel model assessment is performed by comparing simulation results in the general street LOS (line-of-sight) radio environment with a recent measurement campaign. The proposed channel model can be used in a link level simulator (LLS).
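The tapped-delay-line construction at the heart of the model is easy to sketch: the channel output is a small set of delayed, complex-weighted replicas of the input signal. The snippet below is a toy illustration with invented tap delays and powers, not the COST 259 parameters used in the paper.

```python
# Toy tapped delay line: the channel output is a sum of delayed, complex
# Rayleigh-weighted replicas of the input. Tap delays/powers are invented,
# not the COST 259 values.
import numpy as np

rng = np.random.default_rng(1)
tap_delays = np.array([0, 3, 7])         # tap delays, in samples
tap_powers = np.array([1.0, 0.4, 0.1])   # average linear power per tap

def tdl_channel(x):
    """Apply the TDL to a complex baseband signal x (one fading realization)."""
    y = np.zeros(len(x) + tap_delays.max(), dtype=complex)
    for d, p in zip(tap_delays, tap_powers):
        gain = np.sqrt(p / 2.0) * (rng.standard_normal() + 1j * rng.standard_normal())
        y[d:d + len(x)] += gain * x      # delayed, complex-weighted replica
    return y

x = np.exp(2j * np.pi * 0.05 * np.arange(64))   # toy complex baseband input
print(np.round(np.abs(tdl_channel(x))[:5], 3))
```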
2016
Search for associated production with dark matter
DOI: 10.22323/1.213.0204
2015
Tracking at High Level Trigger in CMS
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We will present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made toward the next data taking in 2015.
DOI: 10.48550/arxiv.1608.06578
2016
Analytical parametrization and shape classification of anomalous HH production in the EFT approach
In this document we study the effect of anomalous Higgs boson couplings on non-resonant pair production of Higgs bosons ($HH$) at the LHC. We explore the space of the five parameters $\kappa_{\lambda}$, $\kappa_{t}$, $c_2$, $c_g$, and $c_{2g}$ in terms of the corresponding kinematics of the final state, and describe a partition of the space into a limited number of regions featuring similar phenomenology in the kinematics of $HH$ final state. We call clusters the sets of points belonging to the same region; to each cluster corresponds a representative point which we call a benchmark. We discuss a possible technique to estimate the sensitivity of an experimental search to the kinematical differences between the phenomenology of the benchmark points and the rest of the parameter space contained in the corresponding cluster. We also provide an analytical parametrization of the cross-section modifications that the variation of anomalous couplings produces with respect to standard model $HH$ production along with a recipe to translate the results into other parameter-space bases. Finally, we provide a preliminary analysis of variations in the topology of the final state within each region based on recent LHC results.
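The structure of such an analytical parametrization can be sketched compactly. Since the leading-order HH amplitude is linear in a set of coupling combinations, the cross-section ratio R = σ/σ_SM is a quadratic form in those combinations. The sketch below assumes the component vector (κt², κtκλ, c2, cgκλ, c2g) and uses a placeholder coefficient matrix, not the paper's fitted values.

```python
# Structural sketch only: with amplitude components linear in the couplings,
# R = sigma/sigma_SM is a quadratic form v^T A v. The component vector below is
# assumed, and A is a placeholder (a fitted A would give R = 1 at the SM point).
import numpy as np

def r_hh(kl, kt, c2, cg, c2g, A):
    v = np.array([kt**2, kt*kl, c2, cg*kl, c2g])  # assumed amplitude components
    return float(v @ A @ v)

A = np.eye(5)   # placeholder coefficients, for illustration only
print(r_hh(kl=1.0, kt=1.0, c2=0.0, cg=0.0, c2g=0.0, A=A))
```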
2011
Feasibility of the SM Higgs boson search in the channel $H \rightarrow ZZ^* \rightarrow \mu\mu bb$ via VBF at $\sqrt{s} = 7$ TeV with the CMS experiment.
One of the main goals of the now-running Large Hadron Collider (LHC) machine at CERN in Geneva is to elucidate the mechanism of electroweak symmetry breaking, and in particular to determine whether a Standard Model Higgs boson exists or not. For this aim, and in general to explore the high-energy frontier of particle physics, the LHC produces proton-proton collisions in the core of two multi-purpose experiments: ATLAS and CMS. In the Compact Muon Solenoid (CMS) experiment one of the most promising discovery modes of the Higgs boson is the one involving the decay into two Z bosons, with a subsequent decay of the Z pair in a fully leptonic final state. Among the various production mechanisms for the Higgs boson, vector-boson fusion (VBF) certainly offers one of the most distinctive and interesting signals. In this thesis I report an analysis of the feasibility of the search for Higgs decays in the $H\rightarrow ZZ\rightarrow l^+l^-jj$ decay channel with the CMS detector. Allowing a $Z$ boson to decay to a jet pair entails a large increase of backgrounds in exchange for a tenfold increase in the total branching ratio. The analysis of the di-lepton plus di-jet final state may furthermore provide grounds for interesting additional searches and measurements. An optimization of the search for the Higgs decay signal has been performed using the key observables of this Higgs VBF production mechanism, applying the multivariate technique called "boosted decision trees" for data selection in two successive stages. $b$-jet tagging is also used to preferentially select $b$-enriched final states, strongly suppressing the main background due to $Z$ production in association with light quarks, at the cost of a factor-4.5 reduction in selectable signal events. Results on the signal significance achievable with $30\,\mathrm{fb}^{-1}$ of collisions with the optimized Higgs candidate selection are presented. The LHC provided proton-proton collisions at the centre-of-mass energy of $7$ TeV from March $30^{th}$ to November $8^{th}$, 2010. During the 2010 run CMS collected an integrated luminosity of $43.2\,\mathrm{pb}^{-1}$. Despite being utterly insufficient for a meaningful Higgs boson search, these data have been used to test the analysis strategy and the signal selection methodology. Limits on the ratio between the signal cross section and the SM-predicted cross section as a function of the Higgs boson mass have been obtained with the available data.
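The "boosted decision trees" selection mentioned above can be illustrated with a single-stage toy (the thesis applies BDTs in two successive stages). The sketch below trains a BDT on two invented VBF-like features, dijet invariant mass and jet rapidity separation, and applies a score cut; it is not the thesis' analysis code.

```python
# Toy single-stage BDT selection on two invented VBF-like features; not the
# thesis' analysis code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 2000
# Hypothetical features: dijet invariant mass [GeV] and jet rapidity separation,
# both typically larger for VBF signal than for background.
sig = np.column_stack([rng.normal(600, 150, n), rng.normal(4.0, 1.0, n)])
bkg = np.column_stack([rng.normal(250, 120, n), rng.normal(1.5, 1.0, n)])
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)
scores = bdt.predict_proba(X)[:, 1]
# Keep events above a score threshold, as a selection cut would.
print(f"signal efficiency at score > 0.9: {(scores[:n] > 0.9).mean():.2f}")
```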
2002
IMPLEMENTATION OF WIDEBAND DIRECTIONAL CHANNEL FOR UMTS LINK LEVEL SIMULATOR
This paper deals with the definition and the implementation of a Wideband Directional Channel Model (WDCM) based on the COST 259 recommendations. Clusters of replicas are introduced by means of a Tapped Delay Line (TDL). Macro-, micro- and pico-cell models are considered in different general radio environments. The proposed channel model can be used in a Link Level Simulator (LLS). A good compromise between the accuracy of the model and the complexity required by the simulator is guaranteed by a suitable statistical approach based on deterministic considerations. In this paper, only results for the up-link channel model are presented and evaluated.