
Leonardo Cristella


DOI: 10.1007/s41781-018-0018-8
2019
Cited 114 times
A Roadmap for HEP Software and Computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
DOI: 10.3389/fdata.2022.787421
2022
Cited 23 times
Applications and Techniques for Fast Machine Learning in Science
In this community review report, we discuss applications and techniques for fast machine learning (ML) in science: the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to give plenty of examples and inspiration for scientific discovery through integrated and accelerated ML solutions. This is followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material, which can enable these breakthroughs.
DOI: 10.1117/12.2685462
2023
TEBAKA: a technological platform for Apulian crop monitoring
Precision farming and remote sensing have seen unprecedented development in the last decade. The growing interest in this domain has led to the development of robust and accurate processing pipelines to evaluate, among others, nutrient management and irrigation practices. Problems such as crop classification have gained significant attention in Southern Italy due to unique challenges such as water scarcity and the spread of cultivar-specific diseases (e.g., Xylella fastidiosa). Here, we present a technological platform hosted by the ReCaS HTC/HPC cluster based in Bari, Italy, for the automated segmentation of common crops in Southern Italy, specifically the Apulia region, in very high-resolution (VHR) aerial RGB images. In particular, we discuss the adoption of a deep convolutional neural network (DCNN) based on the lightweight EfficientNet-B0 architecture for patch-wise land cover classification and compare its performance with a standard machine learning algorithm (random forest) fed with Haralick features. The DCNN, pre-trained on ImageNet-1000 and fine-tuned on a 4-class problem comprising vineyard, olive grove, arable land, and “no-crop”, had the highest performance, with an overall accuracy of 77 ± 5% under repeated spatial cross-validation. The experimental results demonstrate the effectiveness of the proposed approach in achieving high accuracy in land cover classification, although misclassification between arable land and “no-crop” was observed, as they share similar vegetation textural patterns. The lightweight EfficientNet-B0 architecture provides a good balance between accuracy and computational efficiency, making it a suitable choice for processing very high-resolution aerial images. The processing pipeline has been successfully implemented and deployed on the high-performance computing (HPC) platform, leveraging Apache Mesos as the underlying framework. To ensure efficient execution of tasks, the Chronos job scheduler is used to submit Docker containers for execution. By utilizing specialized hardware, including NVIDIA V100 and A100 GPUs, the pipeline can effectively handle and process substantial volumes of data within tight timeframes. The proposed approach is highly versatile and can be easily adapted to various precision farming applications. The use of Docker technology facilitates easy deployment and portability across different environments. Additionally, the lightweight DCNN architecture makes it possible to exploit parallel computing resources efficiently, enabling seamless scalability and, therefore, the handling of massive computational tasks across broader regions of interest.
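As a rough illustration of the patch-wise classifier described above, the sketch below fine-tunes an ImageNet-pretrained EfficientNet-B0 for a 4-class problem in PyTorch. The torchvision weights, the learning rate, and the 224×224 patch size are assumptions for the example, not the authors' exact configuration.

```python
# Illustrative sketch (not the authors' exact setup): fine-tune an
# ImageNet-pretrained EfficientNet-B0 for 4-class patch classification.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # vineyard, olive grove, arable land, "no-crop"

def build_patch_classifier() -> nn.Module:
    # Load EfficientNet-B0 with ImageNet weights, as in the paper.
    model = models.efficientnet_b0(
        weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    # Replace the 1000-class ImageNet head with a 4-class head.
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, NUM_CLASSES)
    return model

model = build_patch_classifier()
# One illustrative training step on a batch of RGB patches.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr
criterion = nn.CrossEntropyLoss()
patches = torch.randn(8, 3, 224, 224)   # stand-in for VHR image patches
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
```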
2019
Cited 6 times
A roadmap for HEP software and computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
DOI: 10.1007/s41781-018-0006-z
2018
Cited 3 times
CMS@home: Integrating the Volunteer Cloud and High-Throughput Computing
Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. Initiatives such as the CMS@home project are aiming to integrate volunteer computing resources into the experiment’s computational frameworks to support their scientific workloads. This is especially important, as over the next few years the demands on computing capacity will increase beyond what can be supported by general technology trends. This paper describes how a volunteer computing project that uses virtualization to run high energy physics simulations can integrate those resources into its computing infrastructure. The concept of the volunteer cloud is introduced, and how this model can simplify the integration is described. An architecture for implementing the volunteer cloud model is presented along with an implementation for the CMS@home project. Finally, real CMS workloads submitted to this volunteer cloud are compared to identical workloads submitted to the grid.
DOI: 10.1051/epjconf/201921403006
2019
Cited 3 times
Improving efficiency of analysis jobs in CMS
Hundreds of physicists analyze data collected by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider using the CMS Remote Analysis Builder and the CMS global pool to exploit the resources of the Worldwide LHC Computing Grid. Efficient use of such an extensive and expensive resource is crucial. At the same time, the CMS collaboration is committed to minimizing the time to insight for every scientist, pushing for the fewest possible access restrictions to the full data sample and supporting the free choice of applications to run on the computing resources. Supporting such a variety of workflows while preserving efficient resource usage poses special challenges. In this paper we report on three complementary approaches adopted in CMS to improve the scheduling efficiency of user analysis jobs: automatic job splitting, automated run-time estimates, and automated site selection for jobs.
DOI: 10.1393/ncc/i2015-15207-x
2016
Exotic structures in B meson decays at CMS
DOI: 10.22323/1.273.0051
2016
Exotic quarkonium states at CMS
The studies of the production of the X(3872), either prompt or from B-hadron decays, and of the J/ψφ mass spectrum in B-hadron decays have been carried out by using pp collisions at √s = 7 TeV collected with the CMS detector at the LHC. The cross-section ratio of the X(3872) with respect to the ψ(2S) in the J/ψπ+π− decay channel and the fraction of X(3872) coming from B-hadron decays are measured as a function of transverse momentum (pT), covering unprecedentedly high values of pT. For the first time, the prompt X(3872) cross section times branching fraction is extracted differentially in pT and compared with Non-Relativistic QCD (NRQCD) predictions. The dipion invariant-mass spectrum of the J/ψπ+π− system in the X(3872) decay is also investigated. A peaking structure in the J/ψφ mass spectrum near threshold is observed in B± → J/ψφK± decays. The data sample, selected on the basis of the dimuon decay mode of the J/ψ, corresponds to an integrated luminosity of 5.2 fb−1. Fitting the structure to an S-wave relativistic Breit–Wigner lineshape above a three-body phase-space nonresonant component gives a signal statistical significance exceeding five standard deviations. The fitted mass and width values are m = 4148.0 ± 2.4 (stat.) ± 6.3 (syst.) MeV and Γ = 28 +15 −11 (stat.) ± 19 (syst.) MeV, respectively. Evidence for an additional peaking structure at higher J/ψφ mass is also reported.
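The fit model quoted in this abstract combines an S-wave relativistic Breit–Wigner lineshape with a three-body phase-space nonresonant component. The following is a minimal numerical sketch of those two ingredients; the normalizations, the signal fraction, and the simplified phase-space shape are illustrative assumptions, not the analysis's exact parameterization.

```python
# Minimal sketch of the fit ingredients quoted above: an S-wave relativistic
# Breit-Wigner signal above a three-body phase-space background. The
# normalizations and the phase-space approximation are illustrative only.
import numpy as np

def rel_breit_wigner(m, m0, gamma0):
    """S-wave relativistic Breit-Wigner intensity |BW|^2 (unnormalized)."""
    return m0**2 * gamma0**2 / ((m**2 - m0**2) ** 2 + m0**2 * gamma0**2)

def three_body_phase_space(m, m_lo, m_hi):
    """Toy nonresonant shape vanishing at the kinematic boundaries."""
    x = np.clip(m, m_lo, m_hi)
    return np.sqrt((x - m_lo) * (m_hi - x))

def fit_model(m, f_sig, m0, gamma0, m_lo, m_hi):
    """Signal fraction f_sig of BW plus (1 - f_sig) of phase space."""
    sig = rel_breit_wigner(m, m0, gamma0)
    bkg = three_body_phase_space(m, m_lo, m_hi)
    return f_sig * sig / sig.max() + (1.0 - f_sig) * bkg / bkg.max()

# Evaluate near the fitted values quoted in the abstract (GeV);
# f_sig and the mass window are arbitrary choices for the sketch.
m = np.linspace(4.10, 4.35, 500)
y = fit_model(m, f_sig=0.3, m0=4.148, gamma0=0.028, m_lo=4.10, m_hi=4.35)
```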
2016
Exotic states (X, Y, Z, Pentaquark et al.)
DOI: 10.22323/1.272.0032
2016
Exotic quarkonium states in CMS
The studies of the production of the X(3872), either prompt or from B-hadron decays, and of the J/ψφ mass spectrum in B-hadron decays have been carried out by using pp collisions at √s = 7 TeV collected with the CMS detector at the LHC. The cross-section ratio of the X(3872) with respect to the ψ(2S) in the J/ψπ+π− decay channel and the fraction of X(3872) coming from B-hadron decays are measured as a function of transverse momentum (pT), covering unprecedentedly high values of pT. For the first time, the prompt X(3872) cross section times branching fraction is extracted differentially in pT and compared with NRQCD predictions. The dipion invariant-mass spectrum of the J/ψπ+π− system in the X(3872) decay is also investigated. A peaking structure in the J/ψφ mass spectrum near threshold is observed in B± → J/ψφK± decays. The data sample, selected on the basis of the dimuon decay mode of the J/ψ, corresponds to an integrated luminosity of 5.2 fb−1. Fitting the structure to an S-wave relativistic Breit–Wigner lineshape above a three-body phase-space nonresonant component gives a signal statistical significance exceeding five standard deviations. The fitted mass and width values are m = 4148.0 ± 2.4 (stat.) ± 6.3 (syst.) MeV and Γ = 28 +15 −11 (stat.) ± 19 (syst.) MeV, respectively. Evidence for an additional peaking structure at higher J/ψφ mass is also reported.
DOI: 10.22323/1.278.0010
2016
Exotic states (X, Y, Z, pentaquark, etc.)
2016
Heavy Flavour Spectroscopy And Exotic States In CMS
DOI: 10.1051/epjconf/201713706006
2017
Exotic quarkonium states in CMS
The studies of the production of the X(3872), either prompt or from B hadron decays, and of the J/ψϕ mass spectrum in B hadron decays have been carried out by using pp collisions at √s = 7 TeV collected with the CMS detector at the LHC. The cross-section ratio of the X(3872) with respect to the ψ(2S) in the J/ψπ+π− decay channel and the fraction of X(3872) coming from B-hadron decays are measured as a function of transverse momentum (pT), covering unprecedentedly high values of pT. For the first time, the prompt production cross section for the X(3872) times the unknown branching fraction for the decay of X(3872) → J/ψπ+π− is extracted differentially in pT and compared to theoretical predictions based on the Non-Relativistic QCD (NRQCD) factorization approach. The dipion invariant-mass spectrum of the J/ψπ+π− system in the X(3872) decay is also investigated.
DOI: 10.1051/epjconf/201713711005
2017
Statistical significance estimation of a signal within the GooFit framework on GPUs
In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks, with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open-source data-analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on NVIDIA GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-ups with respect to the RooFit application parallelized on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood-ratio test statistic in different situations in which Wilks' theorem may or may not apply, depending on whether its regularity conditions are satisfied.
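The contrast drawn above between the asymptotic (Wilks) significance and a toy-Monte-Carlo p-value can be sketched as follows; the chi-square toy generator is a stand-in for the full RooFit/GooFit fits, and the p-value convention is one common choice among several.

```python
# Sketch of the two significance estimates discussed above: the asymptotic
# (Wilks) conversion of a likelihood-ratio test statistic, and a
# high-statistics toy Monte Carlo p-value. The toy generator is a placeholder
# for the full fits; p-value conventions (one- vs two-sided) vary by analysis.
import numpy as np
from scipy import stats

def wilks_significance(q_obs: float, ndof: int = 1) -> float:
    """Significance assuming q = -2*Delta(lnL) follows a chi2 with `ndof`."""
    p = stats.chi2.sf(q_obs, df=ndof)
    return stats.norm.isf(p)

def toy_mc_significance(q_obs: float, q_toys) -> float:
    """p-value from toys generated under the null; usable when Wilks is not."""
    p = np.mean(np.asarray(q_toys) >= q_obs)
    return stats.norm.isf(p)

q_obs = 9.0  # arbitrary observed test statistic for the example
q_toys = stats.chi2.rvs(df=1, size=1_000_000, random_state=7)  # stand-in toys
# The two estimates agree here (~2.8 sigma) because the toys are
# chi2-distributed by construction; when Wilks' regularity conditions fail,
# only the toy estimate remains valid.
print(wilks_significance(q_obs), toy_mc_significance(q_obs, q_toys))
```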
DOI: 10.1134/s1063778817050040
2017
Exotic quarkonium states in CMS
The studies of the production of the X(3872), either prompt or from B-hadron decays, and of the J/ψϕ mass spectrum in B-hadron decays have been carried out by using pp collisions at √s = 7 TeV collected with the CMS detector at the LHC. The cross-section ratio of the X(3872) with respect to the ψ(2S) in the J/ψπ+π− decay channel and the fraction of X(3872) coming from B-hadron decays are measured as a function of transverse momentum (pT), covering unprecedentedly high values of pT. For the first time, the prompt X(3872) cross section times branching fraction is extracted differentially in pT and compared with NRQCD predictions. The dipion invariant-mass spectrum of the J/ψπ+π− system in the X(3872) decay is also investigated. A peaking structure in the J/ψϕ mass spectrum near threshold is observed in B± → J/ψϕK± decays. The data sample, selected on the basis of the dimuon decay mode of the J/ψ, corresponds to an integrated luminosity of 5.2 fb−1. Fitting the structure to an S-wave relativistic Breit–Wigner lineshape above a three-body phase-space nonresonant component gives a signal statistical significance exceeding five standard deviations. The fitted mass and width values are m = 4148.0 ± 2.4 (stat.) ± 6.3 (syst.) MeV and Γ = 28 +15 −11 (stat.) ± 19 (syst.) MeV, respectively. Evidence for an additional peaking structure at higher J/ψϕ mass is also reported.
DOI: 10.1142/9789811200380_0012
2019
Recent Results on B-Physics at CMS
DOI: 10.22323/1.340.0606
2019
Exotic quarkonium states at CMS
The studies of the production of the $X(3872)$, either prompt or from B hadron decays, and of the $J/\psi \phi$ mass spectrum in B hadron decays have been carried out by using $pp$ collisions at $\sqrt{s}=7$ TeV collected with the CMS detector at the LHC. The cross-section ratio of the $X(3872)$ with respect to the $\psi(2S)$ in the $J/\psi\pi^{+}\pi^{-}$ decay channel and the fraction of $X(3872)$ coming from B-hadron decays are measured as a function of transverse momentum ($p\mathrm{_T}$), covering unprecedentedly high values of $p\mathrm{_T}$. For the first time, the prompt production cross section for the $X(3872)$ times the unknown branching fraction for the decay $X(3872) \rightarrow J/\psi\pi^{+}\pi^{-}$ is extracted differentially in $p\mathrm{_T}$ and compared to theoretical predictions based on the Non-Relativistic QCD (NRQCD) factorization approach. The dipion invariant-mass spectrum of the $J/\psi\pi^{+}\pi^{-}$ system in the $X(3872)$ decay is also investigated. The search for resonance-like structures in the $B^{0}_{s}\pi^{\pm}$ invariant mass spectrum does not show any unexpected result. An upper limit on the relative production of the claimed $X(5568)$ and $B_s$, multiplied by the unknown branching fraction of the decay $X(5568) \rightarrow B_{s}\pi^{\pm}$, is estimated to be 3.9\% at 95\% CL in the most conservative case.
DOI: 10.22323/1.351.0015
2019
Improving efficiency of analysis jobs in CMS
Data collected by the Compact Muon Solenoid experiment at the Large Hadron Collider are continuously analyzed by hundreds of physicists thanks to the CMS Remote Analysis Builder and the CMS global pool, exploiting the resources of the Worldwide LHC Computing Grid. Making efficient use of such an extensive and expensive system is crucial. Supporting a variety of workflows while preserving efficient resource usage poses special challenges, such as: scheduling jobs in a multicore/pilot model, where several single-core jobs with an undefined run time run inside pilot jobs with a fixed lifetime; preventing too many concurrent reads from the same storage from pushing jobs into I/O wait mode and leaving CPU cycles idle; and monitoring user activity to detect low-efficiency workflows and provide optimizations. In this contribution we report on two novel complementary approaches adopted in CMS to improve the scheduling efficiency of user analysis jobs: automatic job splitting and automated run-time estimates. They both aim at finding an optimal value for the scheduling run time. We also report on how we use the flexibility of the global CMS computing pool to select the amount, kind, and running locations of jobs, exploiting remote access to the input data.
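A minimal sketch of the two scheduling ideas named above (run-time estimation and automatic splitting) follows; the function names, the probe-job mechanism, and the 8-hour target are hypothetical illustrations, not CRAB's actual implementation.

```python
# Illustrative sketch (not CRAB's actual code) of the two ideas described
# above: estimate per-event run time from completed probe jobs, then split
# the remaining work so each job fits a target scheduling run time.
from statistics import median

def estimate_seconds_per_event(probe_jobs):
    """probe_jobs: list of (events_processed, wall_seconds) from early jobs."""
    rates = [wall / events for events, wall in probe_jobs if events > 0]
    return median(rates)  # median is robust to a few slow outliers

def split_task(total_events, probe_jobs, target_seconds=8 * 3600):
    """Return per-job event counts so each job targets `target_seconds`."""
    sec_per_event = estimate_seconds_per_event(probe_jobs)
    events_per_job = max(1, int(target_seconds / sec_per_event))
    splits = [events_per_job] * (total_events // events_per_job)
    if total_events % events_per_job:
        splits.append(total_events % events_per_job)
    return splits

# Example: probes processed ~2 events/second; aim for 8-hour jobs.
probes = [(1000, 450), (1200, 600), (800, 420)]
print(split_task(total_events=1_000_000, probe_jobs=probes)[:3])
```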
DOI: 10.22323/1.377.0014
2020
CMS studies of excited $B_c$ states
In this work, the search for excited Bc+(2S) states, through their decay to Bc+π+π−, is presented. For the first time, signals consistent with the Bc+(2S) and Bc*+(2S) states are observed in proton-proton collisions at √s = 13 TeV, in an event sample corresponding to an integrated luminosity of 143 fb−1, collected by the CMS experiment during the 2015-2018 LHC running periods. These excited b̄c states are observed in the Bc+π+π− invariant mass spectrum, with the ground state Bc+ reconstructed through its decay to J/ψπ+. The two states are reconstructed as two well-resolved peaks, separated in mass by 29.1 ± 1.5 (stat) ± 0.7 (syst) MeV. The observation of two peaks, rather than one, is established with a significance exceeding five standard deviations. The mass of the Bc+(2S) meson is measured to be 6871.0 ± 1.2 (stat) ± 0.8 (syst) ± 0.8 (Bc+) MeV, where the last term corresponds to the uncertainty in the world-average Bc+ mass.
2019
CMS studies of excited B_c states
In this work, the search for excited $B_c^+(2S)$ states, through their decay to $B_c^+\pi^+\pi^-$, is presented. For the first time, signals consistent with the $B_c^+(2S)$ and $B_c^{+\,*}(2S)$ states are observed in proton-proton collisions at $\sqrt{s} = 13~\textrm{TeV}$, in an event sample corresponding to an integrated luminosity of $143~\textrm{fb}^{-1}$, collected by the CMS experiment during the 2015-2018 LHC running periods. These excited $\bar b c$ states are observed in the $B_c^+\pi^+\pi^-$ invariant mass spectrum, with the ground state $B_c^+$ reconstructed through its decay to $J/\psi \pi^+$. The two states are reconstructed as two well-resolved peaks, separated in mass by $29.1 \pm 1.5(\textrm{stat}) \pm 0.7(\textrm{syst})~\textrm{MeV}$. The observation of two peaks, rather than one, is established with a significance exceeding five standard deviations. The mass of the $B_c^+(2S)$ meson is measured to be $6871.0 \pm 1.2 (\textrm{stat}) \pm 0.8 (\textrm{syst}) \pm 0.8 (B_c^+)~\textrm{MeV}$, where the last term corresponds to the uncertainty in the world-average $B_c^+$ mass.
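A toy version of the two-peak structure described above might look as follows; the Gaussian shapes, yields, resolution, and linear background are illustrative assumptions, with only the ~6871 MeV mass and ~29 MeV splitting taken from the text.

```python
# Toy sketch of the two-peak fit described above: two Gaussian peaks
# separated by ~29 MeV over a smooth background, fitted with scipy.
# Widths, yields, and the background shape are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(m, n1, mu1, n2, delta, sigma, b0, b1):
    """Two Gaussians (second displaced by `delta`) plus a linear background."""
    g1 = n1 * np.exp(-0.5 * ((m - mu1) / sigma) ** 2)
    g2 = n2 * np.exp(-0.5 * ((m - mu1 - delta) / sigma) ** 2)
    return g1 + g2 + b0 + b1 * m

rng = np.random.default_rng(1)
m = np.linspace(6.80, 6.96, 160)  # GeV
truth = (120, 6.871, 90, 0.0291, 0.006, 20, 0)  # mass/splitting from the text
y = two_peaks(m, *truth) + rng.normal(0, 5, m.size)
popt, pcov = curve_fit(two_peaks, m, y,
                       p0=(100, 6.87, 80, 0.03, 0.005, 10, 0))
print(f"fitted splitting = {popt[3] * 1e3:.1f} MeV")
```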
DOI: 10.21203/rs.3.rs-51185/v2
2020
A GPU based multidimensional amplitude analysis to search for tetraquark candidates
Abstract The demand for computational resources is steadily increasing in experimental high energy physics as the current collider experiments continue to accumulate huge amounts of data and physicists indulge in more complex and ambitious analysis strategies. This is especially true in the fields of hadron spectroscopy and flavour physics where the analyses often depend on complex multidimensional unbinned maximum-likelihood fits, with several dozens of free parameters, with the aim to study the internal structure of hadrons. Graphics processing units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures that are becoming popular toolkits for high energy physicists to meet their computational demands. GooFit is an upcoming open-source tool interfacing ROOT/RooFit to the CUDA platform on NVIDIA GPUs that acts as a bridge between the MINUIT minimization algorithm and a parallel processor, allowing probability density functions to be estimated on multiple cores simultaneously. In this article, a full-fledged amplitude analysis framework developed using GooFit is tested for its speed and reliability. The four-dimensional fitter framework, one of the first of its kind to be built on GooFit, is geared towards the search for exotic tetraquark states in the B0 → J/ψKπ decays and can also be seamlessly adapted for other similar analyses. The GooFit fitter, running on GPUs, shows a remarkable improvement in the computing speed compared to a ROOT/RooFit implementation of the same analysis running on multi-core CPU clusters. Furthermore, it shows sensitivity to components with small contributions to the overall fit. It has the potential to be a powerful tool for sensitive and computationally intensive physics analyses.
DOI: 10.21203/rs.3.rs-51185/v1
2020
A GPU Based Multidimensional Amplitude Analysis to Search for Tetraquark Candidates
Abstract The demand for computational resources is steadily increasing in experimental high energy physics as the current collider experiments continue to accumulate huge amounts of data while physicists indulge in more complex and ambitious analysis strategies. This is especially true in the fields of hadron spectroscopy and flavour physics where the analyses often depend on complex multidimensional unbinned maximum-likelihood fits, with several dozens of free parameters, with the aim to study the quark structure of hadrons. Graphics processing units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures that are becoming popular toolkits for high energy physicists to meet their computational demands. GooFit is an upcoming open-source tool interfacing ROOT/RooFit to the CUDA platform on NVIDIA GPUs that acts as a bridge between the MINUIT minimization algorithm and a parallel processor, allowing probability density functions to be estimated on multiple cores simultaneously. In this article, a full-fledged amplitude analysis framework developed using GooFit is tested for its speed and reliability. The four-dimensional fitter framework, one of the first of its kind to be built on GooFit, is geared towards the search for exotic tetraquark states in the B0 → J/ψKπ decays and can also be seamlessly adapted for other similar analyses. The GooFit fitter running on GPUs shows a remarkable speed-up in computing performance when compared to a ROOT/RooFit implementation of the same analysis running on multi-core CPU clusters. Furthermore, it shows sensitivity to components with small contributions to the overall fit. It has the potential to be a powerful tool for sensitive and computationally intensive physics analyses.
DOI: 10.21203/rs.3.rs-51185/v3
2020
A GPU based multidimensional amplitude analysis to search for tetraquark candidates
Abstract The demand for computational resources is steadily increasing in experimental high energy physics as the current collider experiments continue to accumulate huge amounts of data and physicists indulge in more complex and ambitious analysis strategies. This is especially true in the fields of hadron spectroscopy and flavour physics where the analyses often depend on complex multidimensional unbinned maximum-likelihood fits, with several dozens of free parameters, with an aim to study the internal structure of hadrons. Graphics processing units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures that are becoming popular toolkits for high energy physicists to meet their computational demands. GooFit is an upcoming open-source tool interfacing ROOT/RooFit to the CUDA platform on NVIDIA GPUs that acts as a bridge between the MINUIT minimization algorithm and a parallel processor, allowing probability density functions to be estimated on multiple cores simultaneously. In this article, a full-fledged amplitude analysis framework developed using GooFit is tested for its speed and reliability. The four-dimensional fitter framework, one of the first of its kind to be built on GooFit, is geared towards the search for exotic tetraquark states in the B0 → J/ψKπ decays and can also be seamlessly adapted for other similar analyses. The GooFit fitter, running on GPUs, shows a remarkable improvement in the computing speed compared to a ROOT/RooFit implementation of the same analysis running on multi-core CPU clusters. Furthermore, it shows sensitivity to components with small contributions to the overall fit. It has the potential to be a powerful tool for sensitive and computationally intensive physics analyses.
DOI: 10.1186/s40537-020-00408-4
2021
A GPU based multidimensional amplitude analysis to search for tetraquark candidates
Abstract The demand for computational resources is steadily increasing in experimental high energy physics as the current collider experiments continue to accumulate huge amounts of data and physicists indulge in more complex and ambitious analysis strategies. This is especially true in the fields of hadron spectroscopy and flavour physics where the analyses often depend on complex multidimensional unbinned maximum-likelihood fits, with several dozens of free parameters, with an aim to study the internal structure of hadrons. Graphics processing units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures that are becoming popular toolkits for high energy physicists to meet their computational demands. GooFit is an upcoming open-source tool interfacing ROOT/RooFit to the CUDA platform on NVIDIA GPUs that acts as a bridge between the MINUIT minimization algorithm and a parallel processor, allowing probability density functions to be estimated on multiple cores simultaneously. In this article, a full-fledged amplitude analysis framework developed using GooFit is tested for its speed and reliability. The four-dimensional fitter framework, one of the first of its kind to be built on GooFit, is geared towards the search for exotic tetraquark states in the B0 → J/ψKπ decays and can also be seamlessly adapted for other similar analyses. The GooFit fitter, running on GPUs, shows a remarkable improvement in the computing speed compared to a ROOT/RooFit implementation of the same analysis running on multi-core CPU clusters. Furthermore, it shows sensitivity to components with small contributions to the overall fit. It has the potential to be a powerful tool for sensitive and computationally intensive physics analyses.
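Conceptually, GooFit bridges MINUIT and a parallel processor by evaluating the unbinned negative log-likelihood over all events at once. The CPU-side sketch below conveys that structure using iminuit and a one-dimensional toy PDF; it illustrates the MINUIT-plus-vectorized-NLL pattern, not GooFit's actual API or the paper's four-dimensional amplitude model.

```python
# CPU-side stand-in for the GooFit/MINUIT bridge described above: an unbinned
# negative log-likelihood evaluated over the whole sample in vectorized form,
# minimized with MINUIT via iminuit. The one-dimensional Gaussian toy PDF is
# an illustrative assumption; GooFit instead evaluates the real
# multidimensional PDFs event-parallel on the GPU.
import numpy as np
from iminuit import Minuit

rng = np.random.default_rng(0)
data = rng.normal(loc=3.872, scale=0.02, size=50_000)  # toy "mass" sample

def nll(mu, sigma):
    # Vectorized over all events; this per-event loop is what GooFit
    # distributes across GPU cores.
    z = (data - mu) / sigma
    return np.sum(0.5 * z**2 + np.log(sigma)) + 0.5 * len(data) * np.log(2 * np.pi)

minuit = Minuit(nll, mu=3.9, sigma=0.05)
minuit.errordef = Minuit.LIKELIHOOD  # 0.5 for a negative log-likelihood
minuit.limits["sigma"] = (1e-4, None)
minuit.migrad()   # MINUIT's MIGRAD minimization
minuit.hesse()    # parabolic uncertainties
print(minuit.values["mu"], minuit.errors["mu"])
```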
DOI: 10.1051/epjconf/202125103013
2021
A novel reconstruction framework for an imaging calorimeter for HL-LHC
To sustain the harsher conditions of the high-luminosity LHC, the CMS collaboration is designing a novel endcap calorimeter system. The new calorimeter will predominantly use silicon sensors to achieve sufficient radiation tolerance and will maintain highly-granular information in the readout to help mitigate the effects of pileup. In regions characterised by lower radiation levels, small scintillator tiles with individual on-tile SiPM readout are employed. A unique reconstruction framework (TICL: The Iterative CLustering) is being developed to fully exploit the granularity and other significant detector features, such as particle identification and precision timing, with a view to mitigating pileup in the very dense environment of the HL-LHC. The inputs to the framework are clusters of energy deposited in individual calorimeter layers, formed by a density-based algorithm. Recent developments and tunes of the clustering algorithm will be presented. To help reduce the expected pressure on the computing resources in the HL-LHC era, the algorithms and their data structures are designed to be executed on GPUs. Preliminary results will be presented on decreases in clustering time when using GPUs versus CPUs. Ideas for machine-learning techniques to further improve the speed and accuracy of reconstruction algorithms will be presented.
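As a hedged stand-in for the layer-clustering step described above, the sketch below groups one layer's energy deposits with scikit-learn's DBSCAN and returns energy-weighted cluster positions; the actual density-based algorithm used in TICL and its parameters differ, and real hits carry detector geometry rather than plain (x, y) coordinates.

```python
# Hedged sketch of the first TICL step described above: group energy deposits
# on one calorimeter layer with a density-based algorithm. scikit-learn's
# DBSCAN stands in for TICL's actual algorithm; eps/min_samples are
# illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_layer(xy, energy, eps=1.2, min_samples=3):
    """Cluster one layer's hits; return energy-weighted cluster positions."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
    clusters = []
    for lab in set(labels) - {-1}:          # -1 marks noise hits
        sel = labels == lab
        e = energy[sel]
        pos = np.average(xy[sel], axis=0, weights=e)
        clusters.append((pos, e.sum()))
    return clusters

# Toy layer: two nearby showers plus scattered noise hits.
rng = np.random.default_rng(3)
xy = np.vstack([rng.normal((0, 0), 0.5, (40, 2)),
                rng.normal((5, 5), 0.5, (30, 2)),
                rng.uniform(-10, 10, (15, 2))])
energy = rng.exponential(0.1, len(xy))
for pos, e in cluster_layer(xy, energy):
    print(f"cluster at ({pos[0]:.2f}, {pos[1]:.2f}) with E = {e:.2f} GeV")
```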
DOI: 10.21203/rs.3.rs-51185/v4
2021
A GPU based multidimensional amplitude analysis to search for tetraquark candidates
Abstract The demand for computational resources is steadily increasing in experimental high energy physics as the current collider experiments continue to accumulate huge amounts of data and physicists indulge in more complex and ambitious analysis strategies. This is especially true in the fields of hadron spectroscopy and flavour physics where the analyses often depend on complex multidimensional unbinned maximum-likelihood fits, with several dozens of free parameters, with an aim to study the internal structure of hadrons. Graphics processing units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures that are becoming popular toolkits for high energy physicists to meet their computational demands. GooFit is an upcoming open-source tool interfacing ROOT/RooFit to the CUDA platform on NVIDIA GPUs that acts as a bridge between the MINUIT minimization algorithm and a parallel processor, allowing probability density functions to be estimated on multiple cores simultaneously. In this article, a full-fledged amplitude analysis framework developed using GooFit is tested for its speed and reliability. The four-dimensional fitter framework, one of the first of its kind to be built on GooFit, is geared towards the search for exotic tetraquark states in the B0 → J/ψKπ decays and can also be seamlessly adapted for other similar analyses. The GooFit fitter, running on GPUs, shows a remarkable improvement in the computing speed compared to a ROOT/RooFit implementation of the same analysis running on multi-core CPU clusters. Furthermore, it shows sensitivity to components with small contributions to the overall fit. It has the potential to be a powerful tool for sensitive and computationally intensive physics analyses.
DOI: 10.48550/arxiv.2110.13041
2021
Applications and Techniques for Fast Machine Learning in Science
In this community review report, we discuss applications and techniques for fast machine learning (ML) in science -- the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to give plenty of examples and inspiration for scientific discovery through integrated and accelerated ML solutions. This is followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material, which can enable these breakthroughs.