
Vincenzo Innocente

DOI: 10.1016/0370-2693(84)90139-4
1984
Cited 211 times
Measurement of the proton-antiproton total and elastic cross sections at the CERN SPS collider
The proton-antiproton total cross section was measured at the CM energy √s = 546 GeV. The result is σtot = 61.9 ± 1.5 mb. The ratio of the elastic to the total cross section is σel/σtot = 0.215 ± 0.005. A comparison to the lower-energy data shows that the increase of the total cross section with energy is very close to a log²s behaviour.
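For context on the "close to a log²s behaviour" statement: the fastest growth of the total cross section allowed by unitarity is given by the standard Froissart-Martin bound, quoted here for reference only (a textbook result, not a formula from the paper):

```latex
% Froissart-Martin bound (standard result; s_0 is an arbitrary scale,
% m_pi the pion mass):
\[
  \sigma_{\mathrm{tot}}(s) \;\le\; \frac{\pi}{m_{\pi}^{2}}\,\ln^{2}\!\left(\frac{s}{s_{0}}\right)
\]
% A measured rise close to log^2 s therefore matches the functional form of the
% bound, which is the point of the comparison in the abstract above.
```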
DOI: 10.1016/0370-2693(87)90922-1
1987
Cited 164 times
The real part of the proton-antiproton elastic scattering amplitude at the centre of mass energy of 546 GeV
Proton-antiproton elastic scattering was measured at the CERN SPS Collider at the centre-of-mass energy √s = 546 GeV in the Coulomb interference region. The data provide information on the phase of the hadronic amplitude in the forward direction. The conventional analysis gives for the ratio ϱ of the real to the imaginary part of the hadronic amplitude the result ϱ = 0.24 ± 0.04.
DOI: 10.1016/0370-2693(84)90138-2
1984
Cited 148 times
Low momentum transfer elastic scattering at the CERN proton-antiproton collider
Proton-antiproton elastic scattering was measured at the CM energy √s = 546 GeV in the four-momentum transfer range 0.03 < −t < 0.32 GeV². For −t < 0.15 GeV² the data are well described by a simple exponential form exp(bt) with slope parameter b = 15.2 ± 0.2 GeV⁻². This value is significantly larger than the one measured in the region 0.21 < −t < 0.50 GeV².
DOI: 10.1016/s0168-9002(96)00777-2
1996
Cited 150 times
The forward muon detector of L3
The forward-backward muon detector of the L3 experiment is presented. Intended to be used for LEP 200 physics, it consists of 96 self-calibrating drift chambers of a new design enclosing the magnet pole pieces of the L3 solenoid. The pole pieces are toroidally magnetized to form two independent analyzing spectrometers. A novel trigger is provided by resistive plate counters attached to the drift chambers. Details about the design, construction and performance of the whole system are given together with results obtained during the 1995 running at LEP.
DOI: 10.1016/0370-2693(85)90985-2
1985
Cited 112 times
Elastic scattering at the CERN SPS collider up to a four-momentum transfer of 1.55 GeV²
Proton-antiproton elastic scattering was measured at the center-of-mass energy √s = 546 GeV in the four-momentum transfer range 0.45 ⩽ −t ⩽ 1.55 GeV². The shape of the t-distribution is quite different from that observed in proton-proton scattering at the ISR. Rather than a dip-bump structure, a kink is present at −t ≈ 0.9 GeV² followed by a shoulder. The cross section at the second maximum is more than one order of magnitude higher than at the ISR.
DOI: 10.1016/0370-2693(87)90285-1
1987
Cited 94 times
The cross section of diffraction dissociation at the CERN SPS collider
Single diffraction dissociation was measured in the reaction p¯p → p¯X at the centre-of-mass energy √s = 546 GeV. The mass M of the system X was deduced from the pseudorapidity distribution of the observed charged tracks. The cross section of single diffraction dissociation for M²/s ⩽ 0.05 is σsd = 9.4 ± 0.7 mb. Comparison to the ISR data shows that σsd increases with energy less fast than the total and the elastic cross sections.
DOI: 10.1016/0370-2693(90)91807-n
1990
Cited 76 times
Inclusive production of π0's in the fragmentation region at the SppS collider
The inclusive production of π0's has been measured in the nucleon fragmentation region at the SppS Collider at 630 GeV center of mass energy. Average transverse momentum and rapidity distributions compared with lower energy ISR data show no sizable violation of Feynman scaling in the fragmentation region.
DOI: 10.1007/978-3-030-29135-8_9
2019
Cited 42 times
The Tracking Machine Learning Challenge: Accuracy Phase
This paper reports the results of an experiment in high energy physics: using the power of the “crowd” to solve difficult experimental problems linked to tracking accurately the trajectory of particles in the Large Hadron Collider (LHC). This experiment took the form of a machine learning challenge organized in 2018: the Tracking Machine Learning Challenge (TrackML). Its results were discussed at the competition session at the Neural Information Processing Systems conference (NeurIPS 2018). Given 100,000 points, the participants had to connect them into about 10,000 arcs of circles, following the trajectory of particles issued from very high energy proton collisions. The competition was difficult with a dozen front-runners well ahead of a pack. The single competition score is shown to be accurate and effective in selecting the best algorithms from the domain point of view. The competition has exposed a diversity of approaches, with various roles for Machine Learning, a number of which are discussed in the document.
DOI: 10.1007/s41781-023-00094-w
2023
Cited 6 times
The Tracking Machine Learning Challenge: Throughput Phase
This paper reports on the second "Throughput" phase of the Tracking Machine Learning (TrackML) challenge on the Codalab platform. As in the first "Accuracy" phase, the participants had to solve a difficult experimental problem linked to tracking accurately the trajectory of particles as e.g. created at the Large Hadron Collider (LHC): given O(10^5) points, the participants had to connect them into O(10^4) individual groups that represent the particle trajectories, which are approximately helical. While in the first phase only the accuracy mattered, the goal of this second phase was a compromise between the accuracy and the speed of inference. Both were measured on the Codalab platform, where the participants had to upload their software. The best three participants had solutions with good accuracy and speed an order of magnitude faster than the state of the art when the challenge was designed. Although the core algorithms were less diverse than in the first phase, a diversity of techniques has been used and is described in this paper. The performance of the algorithms is analysed in depth and lessons derived.
DOI: 10.1016/0370-2693(86)91014-2
1986
Cited 62 times
Large-t elastic scattering at the CERN SPS collider at √s = 630 GeV
Proton-antiproton elastic scattering was measured at the centre-of-mass energy √s = 630 GeV in the four-momentum transfer range 0.7 ⩽ −t ⩽ 2.2 GeV². The new data confirm our previous results at √s = 546 GeV on the presence of a break in the t-distribution at −t ≃ 0.9 GeV², which is followed by a shoulder, and extend the momentum transfer range to larger values. The t-dependence of the differential cross section beyond the break is discussed.
DOI: 10.1016/0370-2693(86)91598-4
1986
Cited 48 times
Pseudorapidity distribution of charged particles in diffraction dissociation events at the CERN SPS collider
The reaction p¯p → p¯X was studied as a function of the mass M of the system X at the centre-of-mass energy √s = 546 GeV in the kinematical region where diffraction dissociation dominates. The pseudorapidity distribution of charged tracks, produced in the fragmentation of the system X, is well described within the limits of cylindrical phase space. The fragments of the X-system behave very similarly to the products of non-diffractive inelastic collisions at √s = M.
DOI: 10.1016/0168-9002(92)90277-b
1992
Cited 46 times
A high resolution muon detector
The design and operation of precision drift chambers with multisampling, as well as the concepts and methods for reaching an extraordinary degree of precision in mechanics and calibration, are described. Specific instruments were developed for this purpose. The concept of reproducible positioning and its implementation to 30 μm accuracy, showing stability over three years, is given. Calibration and analysis with UV-laser and cosmic test measurements are outlined with the critical results. The experience of calibration and reliability of the large system in an actual L3 running experiment is analyzed. The resolution under "battle conditions" at LEP resulted in Δp/p = (2.50 ± 0.04)% at 45.6 GeV and will be presented in detail. The concept is well suited for future TeV energies.
DOI: 10.1088/1742-6596/513/5/052027
2014
Cited 22 times
Speeding up HEP experiment software with a library of fast and auto-vectorisable mathematical functions
During the first years of data taking at the Large Hadron Collider (LHC), the simulation and reconstruction programs of the experiments proved to be extremely resource consuming. In particular, for complex event simulation and reconstruction applications, the impact of evaluating elementary functions on the runtime is sizeable (up to one fourth of the total), with an obvious effect on the power consumption of the hardware dedicated to their execution. This situation clearly needs improvement, especially considering the even more demanding data taking scenarios after the first LHC long shutdown. A possible solution to this issue is the VDT (VectoriseD maTh) mathematical library. VDT provides the most common mathematical functions used in HEP in an open source product. The function implementations are fast, can be inlined, provide approximate accuracy and are usable in vectorised loops. Their implementation is portable across platforms: x86 and ARM processors, Xeon Phi coprocessors and GPGPUs. In this contribution, we describe the features of the VDT mathematical library, showing significant speedups with respect to the LibM library and comparable accuracies. Moreover, taking as examples simulation and reconstruction workflows in production by the LHC experiments, we show the benefits of the usage of VDT in terms of runtime reduction and stability of physics output.
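As an illustration of the kind of function such a library provides, here is a hedged, minimal sketch of a fast, inlinable exponential built from range reduction plus a short polynomial, written so the calling loop can be auto-vectorised. It is not the VDT implementation (the real library uses more carefully tuned approximations and bit-level tricks); the function name and accuracy below are illustrative only.

```cpp
// Minimal sketch of the "fast, inlinable, vectorisable math function" pattern.
// NOT the VDT code: range reduction exp(x) = 2^n * exp(r) plus a short series.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

inline double fast_exp_sketch(double x) {
  const double LOG2E = 1.4426950408889634;   // 1/ln(2)
  const double LN2   = 0.6931471805599453;
  const double n = std::nearbyint(x * LOG2E);
  const double r = x - n * LN2;              // |r| <= ln(2)/2
  // Truncated series for exp(r) on the reduced interval (illustrative accuracy).
  const double p = 1.0 + r * (1.0 + r * (0.5 + r * (1.0 / 6.0 + r * (1.0 / 24.0
                   + r * (1.0 / 120.0 + r * (1.0 / 720.0))))));
  return std::ldexp(p, static_cast<int>(n)); // scale back by 2^n
}

int main() {
  std::vector<double> in(1024), out(1024);
  for (std::size_t i = 0; i < in.size(); ++i) in[i] = -5.0 + 0.01 * i;
  // Branch-free, inlinable loop body: the precondition for auto-vectorisation
  // (std::ldexp may still inhibit it on some compilers; a real library would
  // manipulate the exponent bits directly).
  for (std::size_t i = 0; i < in.size(); ++i) out[i] = fast_exp_sketch(in[i]);
  std::printf("fast %g vs std %g\n", out[512], std::exp(in[512]));
  return 0;
}
```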
DOI: 10.3389/fdata.2020.601728
2020
Cited 17 times
Heterogeneous Reconstruction of Tracks and Primary Vertices With the CMS Pixel Tracker
The High-Luminosity upgrade of the Large Hadron Collider (LHC) will see the accelerator reach an instantaneous luminosity of 7 × 10^34 cm^-2 s^-1 with an average pileup of 200 proton-proton collisions. These conditions will pose an unprecedented challenge to the online and offline reconstruction software developed by the experiments. The computational complexity will exceed by far the expected increase in processing power for conventional CPUs, demanding an alternative approach. Industry and High-Performance Computing (HPC) centers are successfully using heterogeneous computing platforms to achieve higher throughput and better energy efficiency by matching each job to the most appropriate architecture. In this paper we will describe the results of a heterogeneous implementation of pixel tracks and vertices reconstruction chain on Graphics Processing Units (GPUs). The framework has been designed and developed to be integrated in the CMS reconstruction software, CMSSW. The speed up achieved by leveraging GPUs allows for more complex algorithms to be executed, obtaining better physics output and a higher throughput.
DOI: 10.1088/1742-6596/513/5/052010
2014
Cited 19 times
Parallel track reconstruction in CMS using the cellular automaton approach
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is a general-purpose particle detector and comprises the largest silicon-based tracking system built to date with 75 million individual readout channels. The precise reconstruction of particle tracks from this tremendous amount of input channels is a compute-intensive task. The foreseen LHC beam parameters for the next data taking period, starting in 2015, will result in an increase in the number of simultaneous proton-proton interactions and hence the number of particle tracks per event. Due to the stagnating clock frequencies of individual CPU cores, new approaches to particle track reconstruction need to be evaluated in order to cope with this computational challenge. Track finding methods that are based on cellular automata (CA) offer a fast and parallelizable alternative to the well-established Kalman filter-based algorithms. We present a new cellular automaton based track reconstruction, which copes with the complex detector geometry of CMS. We detail the specific design choices made to allow for a high-performance computation on GPU and CPU devices using the OpenCL framework. We conclude by evaluating the physics performance, as well as the computational properties of our implementation on various hardware platforms and show that a significant speedup can be attained by using GPU architectures while achieving a reasonable physics performance at the same time.
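To make the cellular-automaton idea concrete, below is a hedged toy sketch (not the CMS implementation or geometry): cells are doublets of hits on adjacent layers, neighbouring cells that share a hit and have compatible slopes reinforce each other's state over a few sweeps, and cells reaching the maximum state seed track candidates.

```cpp
// Toy cellular-automaton (CA) track finding sketch; illustrative only.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Hit  { int layer; double x, y; };
struct Cell { int inner, outer, state; };  // indices into the hit collection

int main() {
  // Two toy "tracks" plus one noise hit, on four layers.
  std::vector<Hit> hits = {
    {0, 1.0, 1.0},  {1, 2.0, 2.05}, {2, 3.0, 3.0},  {3, 4.0, 4.05},  // track A
    {0, 1.0, -1.0}, {1, 2.0, -2.0}, {2, 3.0, -3.05}, {3, 4.0, -4.0}, // track B
    {2, 3.0, 0.2}                                                     // noise
  };

  // 1) Build cells from hit pairs on adjacent layers.
  std::vector<Cell> cells;
  for (int i = 0; i < (int)hits.size(); ++i)
    for (int j = 0; j < (int)hits.size(); ++j)
      if (hits[j].layer == hits[i].layer + 1) cells.push_back({i, j, 1});

  // 2) Neighbour relation: share a hit and have compatible slopes.
  auto slope = [&](const Cell& c) {
    return (hits[c.outer].y - hits[c.inner].y) /
           (hits[c.outer].x - hits[c.inner].x);
  };
  auto neighbours = [&](const Cell& a, const Cell& b) {
    return a.outer == b.inner && std::fabs(slope(a) - slope(b)) < 0.2;
  };

  // 3) CA sweeps: a cell's state grows if an inner neighbour has equal state.
  for (int sweep = 0; sweep < 3; ++sweep) {
    std::vector<int> next(cells.size());
    for (std::size_t k = 0; k < cells.size(); ++k) {
      next[k] = cells[k].state;
      for (std::size_t m = 0; m < cells.size(); ++m)
        if (neighbours(cells[m], cells[k]) && cells[m].state == cells[k].state) {
          ++next[k];
          break;
        }
    }
    for (std::size_t k = 0; k < cells.size(); ++k) cells[k].state = next[k];
  }

  // 4) Cells reaching the maximum state seed track candidates.
  for (const Cell& c : cells)
    if (c.state == 3)
      std::printf("track seed ends at hit (%.2f, %.2f)\n",
                  hits[c.outer].x, hits[c.outer].y);
  return 0;
}
```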
DOI: 10.1051/epjconf/202024505009
2020
Cited 14 times
Bringing heterogeneity to the CMS software framework
The advent of computing resources with co-processors, for example Graphics Processing Units (GPU) or Field-Programmable Gate Arrays (FPGA), for use cases like the CMS High-Level Trigger (HLT) or data processing at leadership-class supercomputers imposes challenges for the current data processing frameworks. These challenges include developing a model for algorithms to offload their computations on the co-processors as well as keeping the traditional CPU busy doing other work. The CMS data processing framework, CMSSW, implements multithreading using the Intel Threading Building Blocks (TBB) library, which utilizes tasks as concurrent units of work. In this paper we will discuss a generic mechanism to interact effectively with non-CPU resources that has been implemented in CMSSW. In addition, configuring such a heterogeneous system is challenging. In CMSSW an application is configured with a configuration file written in the Python language. The algorithm types are part of the configuration. The challenge therefore is to unify the CPU and co-processor settings while allowing their implementations to be separate. We will explain how we solved these challenges while minimizing the necessary changes to the CMSSW framework. We will also discuss, using a concrete example, how algorithms would offload work to NVIDIA GPUs directly through the CUDA API.
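The offloading model described here boils down to an "acquire, then produce" pattern: launch the work on the co-processor, return control to the scheduler so the CPU stays busy, then run a completion step when the result arrives. The sketch below illustrates that pattern generically with standard C++ threads and futures; it is not the CMSSW/TBB interface, and all names are illustrative.

```cpp
// Generic sketch of asynchronous offloading with a completion callback.
#include <cstdio>
#include <functional>
#include <future>
#include <thread>
#include <vector>

// Stand-in for a co-processor: runs the work on another thread and then
// invokes the caller-provided completion callback (hypothetical names).
std::thread offload(std::vector<float> input, std::function<void(float)> done) {
  return std::thread([input = std::move(input), done = std::move(done)]() {
    float sum = 0.f;
    for (float v : input) sum += v;  // pretend this ran on a GPU
    done(sum);                       // signal completion
  });
}

int main() {
  std::promise<float> result;
  auto ready = result.get_future();

  // "Acquire": launch the asynchronous work and return immediately, so the
  // scheduler could run other modules on the CPU in the meantime.
  auto worker = offload({1.f, 2.f, 3.f, 4.f},
                        [&result](float r) { result.set_value(r); });

  std::printf("CPU keeps scheduling other work...\n");

  // "Produce": runs once the offloaded result is available.
  std::printf("offloaded result = %.1f\n", ready.get());
  worker.join();
  return 0;
}
```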
2024
Accuracy of Mathematical Functions in Single, Double, Double Extended, and Quadruple Precision
DOI: 10.1051/epjconf/202429507024
2024
HEPScore: A new CPU benchmark for the WLCG
HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark that is currently used by the WLCG for procurement, computing resource pledges, usage accounting and performance studies. The development of the new benchmark, based on HEP applications or workloads, has involved many contributions from software developers, data analysts, experts of the experiments, representatives of several WLCG computing centres and WLCG site managers. In this contribution, we review the selection of workloads and the validation of the new HEPScore benchmark.
DOI: 10.1016/0370-2693(86)90667-2
1986
Cited 25 times
High mass diffraction dissociation in the dual parton model
It is shown that, in the dual parton model, the final states produced in single diffractive dissociation at fixed mass M are identical to those in non-diffractive inelastic πp collisions at √s = M. This is in striking agreement with recent data from the UA4 Collaboration at the SPS Collider.
DOI: 10.1016/s0010-4655(01)00253-3
2001
Cited 21 times
CMS software architecture
This paper describes the design of a resilient and flexible software architecture that has been developed to satisfy the data processing requirements of a large HEP experiment, CMS, currently being constructed at the LHC machine at CERN. We describe various components of a software framework that allows integration of physics modules and which can be easily adapted for use in different processing environments both real-time (online trigger) and offline (event reconstruction and analysis). Features such as the mechanisms for scheduling algorithms, configuring the application and managing the dependences among modules are described in detail. In particular, a major effort has been placed on providing a service for managing persistent data and the experience using a commercial ODBMS (Objectivity/DB) is therefore described in detail.
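The plug-in style described here (physics modules behind a common interface, assembled at run time from a configuration) can be illustrated with a small generic sketch; the class and function names below are invented for illustration and do not correspond to the actual CMS framework classes.

```cpp
// Toy sketch of a module registry and a configured processing path.
#include <cstdio>
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct Module {                       // common interface for physics modules
  virtual ~Module() = default;
  virtual void process() = 0;
};

using Factory = std::function<std::unique_ptr<Module>()>;

std::map<std::string, Factory>& registry() {  // name -> module factory
  static std::map<std::string, Factory> r;
  return r;
}

struct TrackFinder : Module {
  void process() override { std::printf("find tracks\n"); }
};
struct VertexFitter : Module {
  void process() override { std::printf("fit vertices\n"); }
};

int main() {
  registry()["TrackFinder"]  = [] { return std::make_unique<TrackFinder>(); };
  registry()["VertexFitter"] = [] { return std::make_unique<VertexFitter>(); };

  // "Configuration": the sequence of module names to schedule for each event.
  const std::vector<std::string> path = {"TrackFinder", "VertexFitter"};
  for (const auto& name : path) registry().at(name)()->process();
  return 0;
}
```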
DOI: 10.1016/0370-2693(88)91051-9
1988
Cited 20 times
Minijets and the real part of the elastic amplitudes
In the framework of the perturbative reggeon calculus, including a hard pomeron, we perform a fit of pp and p¯p total, elastic and diffractive cross section data and the ratio ϱ. The parameters of the hard pomeron are deduced from the minijet cross section measured by the UA1 Collaboration. We obtain a value of the real part of the p¯p elastic amplitude compatible with the recent UA4 measurement.
DOI: 10.1016/0168-9002(93)90992-q
1993
Cited 21 times
Trajectory fit in presence of dense materials
We present procedures to fit trajectories through dense materials, fully accounting for multiple scattering and energy loss. A new algorithm based on the inversion of a banded matrix is described and compared with other more classical approaches. Practical applications using the GEANE package are presented.
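The computational point, exploiting the banded structure of the fit matrix, can be illustrated with the simplest banded case: a tridiagonal system solved by the Thomas algorithm in O(n) operations instead of the O(n³) of a general solver. This is only a sketch of why band structure pays off, not the fitting procedure of the paper.

```cpp
// Thomas algorithm for a tridiagonal system A x = d (simplest banded solver).
#include <cstdio>
#include <vector>

// a = sub-diagonal, b = diagonal, c = super-diagonal; a[0] and c[n-1] unused.
// Assumes the system is well conditioned (no pivoting).
std::vector<double> solve_tridiagonal(std::vector<double> a,
                                      std::vector<double> b,
                                      std::vector<double> c,
                                      std::vector<double> d) {
  const std::size_t n = b.size();
  // Forward elimination.
  for (std::size_t i = 1; i < n; ++i) {
    const double m = a[i] / b[i - 1];
    b[i] -= m * c[i - 1];
    d[i] -= m * d[i - 1];
  }
  // Back substitution.
  std::vector<double> x(n);
  x[n - 1] = d[n - 1] / b[n - 1];
  for (std::size_t i = n - 1; i-- > 0;)
    x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
  return x;
}

int main() {
  // Test system A = [[2,-1,0],[-1,2,-1],[0,-1,2]], d = [1,0,1]; solution is (1,1,1).
  auto x = solve_tridiagonal({0, -1, -1}, {2, 2, 2}, {-1, -1, 0}, {1, 0, 1});
  std::printf("x = %.3f %.3f %.3f\n", x[0], x[1], x[2]);
  return 0;
}
```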
DOI: 10.1051/epjconf/201715000015
2017
Cited 9 times
Track reconstruction at LHC as a collaborative data challenge use case with RAMP
Charged particle track reconstruction is a major component of data-processing in high-energy physics experiments such as those at the Large Hadron Collider (LHC), and is foreseen to become more and more challenging with higher collision rates. A simplified two-dimensional version of the track reconstruction problem is set up on a collaborative platform, RAMP, in order for the developers to prototype and test new ideas. A small-scale competition was held during the Connecting The Dots / Intelligent Trackers 2017 (CTDWIT 2017) workshop. Despite the short time scale, a number of different approaches have been developed and compared along a single score metric, which was kept generic enough to accommodate a summarized performance in terms of both efficiency and fake rates.
DOI: 10.1209/0295-5075/4/2/010
1987
Cited 14 times
High-Energy p¯p and pp Elastic Scattering and Nucleon Structure
High-energy p¯p and pp elastic data from the CERN Collider and the ISR are analyzed in the nucleon valence core model. Diffraction is described by a profile function that incorporates crossing symmetry and saturation of the Froissart-Martin bound. The model is found to provide a very satisfactory description of the elastic scattering over the whole range of energy and momentum transfer. Implications of the analysis on QCD models of nucleon structure are pointed out.
DOI: 10.1088/1748-0221/5/04/p04003
2010
Cited 7 times
Persistent storage of non-event data in the CMS databases
In the CMS experiment, the non-event data needed to set up the detector, or produced by it, and needed to calibrate the physical response of the detector itself are stored in ORACLE databases. The large amount of data to be stored, the number of clients involved and the performance requirements make the database system an essential service for the experiment to run. This note describes the CMS condition database architecture, the data flow and PopCon, the tool built in order to populate the offline databases. Finally, the first experience obtained during the 2008 and 2009 cosmic data taking is presented.
DOI: 10.1051/epjconf/201921406037
2019
Cited 6 times
The TrackML high-energy physics tracking challenge on Kaggle
The High-Luminosity LHC (HL-LHC) is expected to reach unprecedented collision intensities, which in turn will greatly increase the complexity of tracking within the event reconstruction. To reach out to computer science specialists, a tracking machine learning challenge (TrackML) was set up on Kaggle by a team of ATLAS, CMS, and LHCb tracking experts and computer scientists, building on the experience of the successful Higgs Machine Learning challenge in 2014. A training dataset based on a simulation of a generic HL-LHC experiment tracker has been created, listing for each event the measured 3D points and the list of 3D points associated to a true track. The participants to the challenge should find the tracks in the test dataset, which means building the list of 3D points belonging to each track. The emphasis is to expose innovative approaches, rather than hyper-optimising known approaches. A metric reflecting the accuracy of a model at finding the proper associations that matter most to physics analysis will allow the selection of good candidates to augment or replace existing algorithms.
DOI: 10.1109/tns.2005.852755
2005
Cited 10 times
Distributed computing grid experiences in CMS
The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.
DOI: 10.1109/escience.2018.00088
2018
Cited 6 times
TrackML: A High Energy Physics Particle Tracking Challenge
To attain its ultimate discovery goals, the luminosity of the Large Hadron Collider at CERN will increase so that the amount of additional collisions will reach a level of 200 interactions per bunch crossing, a factor 7 w.r.t. the current (2017) luminosity. This will be a challenge for the ATLAS and CMS experiments, in particular for track reconstruction algorithms. In terms of software, the increased combinatorial complexity will have to be harnessed without any increase in budget. To engage the Computer Science community to contribute new ideas, we organized a Tracking Machine Learning challenge (TrackML) running on the Kaggle platform from March to June 2018, building on the experience of the successful Higgs Machine Learning challenge in 2014. The data were generated using ACTS, an open source accurate tracking simulator, featuring a typical all-silicon LHC tracking detector with 10 layers of cylinders and disks. Simulated physics events (Pythia ttbar) overlaid with 200 additional collisions yield typically 10000 tracks (100000 hits) per event. The first lessons from the Accuracy phase of the challenge will be discussed.
DOI: 10.1109/nssmic.2004.1462661
2005
Cited 7 times
An object-oriented simulation program for CMS
The CMS detector simulation package, OSCAR, is based on the Geant4 simulation toolkit and the CMS object-oriented framework for simulation and reconstruction. Geant4 provides a rich set of physics processes describing in detail electromagnetic and hadronic interactions. It also provides the tools for the implementation of the full CMS detector geometry and the interfaces required for recovering information from the particle tracking in the detectors. This functionality is interfaced to the CMS framework, which, via its "action on demand" mechanisms, allows the user to selectively load desired modules and to configure and tune the final application. The complete CMS detector is rather complex with more than 1 million geometrical volumes. OSCAR has been validated by comparing its results with test beam data and with results from simulation with a GEANT3-based program. It has been successfully deployed in the 2004 data challenge for CMS, where more than 35 million events for various LHC physics channels were simulated and analysed.
DOI: 10.1088/1742-6596/119/4/042030
2008
Cited 5 times
Analysing CMS software performance using IgProf, OProfile and callgrind
The CMS experiment at LHC has a very large body of software of its own and uses extensively software from outside the experiment. Understanding the performance of such a complex system is a very challenging task, not the least because there are extremely few developer tools capable of profiling software systems of this scale, or producing useful reports.
DOI: 10.1088/1742-6596/331/4/042007
2011
Cited 4 times
Time-critical Database Condition Data Handling in the CMS Experiment During the First Data Taking Period
Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task.
DOI: 10.1088/1742-6596/219/4/042046
2010
Cited 4 times
First experience in operating the population of the condition databases for the CMS experiment
Reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. We will describe here the system put in place in the CMS experiment to populate the database and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The system, designed for high flexibility to cope with very different data sources, uses POOL-ORA technology in order to store data in an object format that best matches the object oriented paradigm of the C++ programming language used in the CMS offline software. In order to ensure consistency among the various subdetectors, a dedicated package, PopCon (Populator of Condition Objects), is used to store data online. The data are then automatically streamed to the offline database, hence immediately accessible offline worldwide. This mechanism was intensively used during 2008 in the test-runs with cosmic rays. The experience of these first months of operation will be discussed in detail.
DOI: 10.1109/nssmic.2015.7581775
2015
Cited 3 times
Development of a phase-II track trigger based on GPUs for the CMS experiment
The High Luminosity LHC (HL-LHC) is a project to increase the luminosity of the Large Hadron Collider to 5 · 10^34 cm^-2 s^-1. The CMS experiment at CERN is planning a major upgrade in order to cope with an expected average number of overlapping collisions per bunch crossing of 140. A key element of this upgrade will be the introduction of tracker information at the very first stages of the trigger system, for which several possible hardware implementations are under study. In particular the adoption of Graphics Processing Units in the first level of the trigger system is currently being investigated in several HEP experiments. Graphics Processing Units (GPUs) are massively parallel architectures that can be programmed using extensions to the standard C and C++ languages. In a synchronous system they have been proven to be highly reliable and to show a deterministic time response even in presence of branch divergences. These two features allow GPUs to be well suited to run pattern recognition algorithms on detector data in a trigger environment. Our discussion of an implementation of a track trigger system based on GPUs will include a description of the framework developed for moving data from and to multiple GPUs using GPUDirect and executing pattern recognition algorithms.
DOI: 10.1016/0168-9002(94)90118-x
1994
Cited 11 times
Evaluation of the upper limit to rare processes in the presence of background, and comparison between the Bayesian and classical approaches
We compare the Bayesian and classical approaches to the computation of upper limits to rare processes in the presence of background. The comparison is done using a Monte Carlo test, free of Bayesian assumptions. The test shows that the Bayesian calculation is always conservative; this allows its use in the cases where the classical calculation is difficult to perform. The application to the Higgs search at LEP is discussed.
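For the simplest setting (a Poisson counting experiment with known expected background b and a flat prior on the signal), the Bayesian upper limit follows from the classic Helene formula and a one-dimensional root search, as sketched below. Whether this matches every detail of the paper's prescription is an assumption; the code only illustrates how such a limit is obtained numerically.

```cpp
// Flat-prior Bayesian upper limit for a Poisson process with background.
// The limit s_up at confidence level CL solves
//   1 - CL = [exp(-s) * sum_{k<=n} (s+b)^k/k!] / [sum_{k<=n} b^k/k!]
#include <cmath>
#include <cstdio>

// P(X <= n) for X ~ Poisson(mu), summing terms iteratively.
double poisson_cdf(int n, double mu) {
  double term = std::exp(-mu), sum = term;
  for (int k = 1; k <= n; ++k) { term *= mu / k; sum += term; }
  return sum;
}

// Bisection for the upper limit on the signal mean s.
double bayes_upper_limit(int n_obs, double b, double cl) {
  const double target = 1.0 - cl;
  double lo = 0.0, hi = 100.0;  // assume the limit lies below 100
  for (int it = 0; it < 100; ++it) {
    const double s = 0.5 * (lo + hi);
    const double tail = poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b);
    (tail > target ? lo : hi) = s;  // the tail probability decreases with s
  }
  return 0.5 * (lo + hi);
}

int main() {
  // Cross-check: n = 0, b = 0 must give -ln(0.05) ~ 3.0 at 95% CL.
  std::printf("n=0, b=0   : s_up(95%%) = %.3f\n", bayes_upper_limit(0, 0.0, 0.95));
  std::printf("n=3, b=1.2 : s_up(95%%) = %.3f\n", bayes_upper_limit(3, 1.2, 0.95));
  return 0;
}
```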
DOI: 10.1016/0168-9002(92)90011-r
1992
Cited 10 times
Identification of tau decays using a neural network
A neural network is constructed to identify the decays τ → ρν in the L3 detector at LEP. The same network is used to identify τ → π(K)ν and τ → eνν as a cross check. High efficiency in the rho channel is obtained. A major effort has been made to study the systematic errors introduced by the use of a neural network and no obvious bias has been found.
DOI: 10.1016/0168-9002(96)00605-5
1996
Cited 9 times
The RPC trigger system of the F/B muon spectrometer at the L3 experiment
The L3 experiment has recently been upgraded with a Forward-Backward muon spectrometer in view of the LEP 200 physics. Due to their high efficiency and good time resolution, Resistive Plate Counters (RPCs) were chosen for building a system providing the muon trigger in that region. The detector has been successfully built and installed, and the expected performances are confirmed.
DOI: 10.1088/1742-6596/219/4/042027
2010
Cited 3 times
CMS offline conditions framework and services
Non-event data describing detector conditions change with time and come from different data sources. They are accessible by physicists within the offline event-processing applications for precise calibration of reconstructed data as well as for data-quality control purposes. Over the past years CMS has developed and deployed a software system managing such data. Object-relational mapping and the relational abstraction layer of the LHC persistency framework are the foundation; the offline condition framework updates and delivers C++ data objects according to their validity. A high-level tag versioning system allows production managers to organize data in a hierarchical view. A scripting API in Python, command-line tools and a web service serve physicists in daily work. A mini-framework is available for handling data coming from external sources. Efficient data distribution over the worldwide network is guaranteed by a system of hierarchical web caches. The system has been tested and used in all major productions, test-beams and cosmic runs.
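The core mechanism, delivering the payload whose interval of validity covers the requested run or time, can be sketched with a simple ordered-map lookup. The example below is a toy illustration with invented payload names, not the CMS conditions API.

```cpp
// Toy interval-of-validity (IOV) lookup: each payload is valid from a given
// run until the next entry begins.
#include <cstdio>
#include <map>
#include <string>

int main() {
  // Key: first run for which the calibration payload is valid (illustrative).
  std::map<unsigned, std::string> iov = {
    {1,      "pedestals_v1"},
    {10'000, "pedestals_v2"},
    {25'000, "pedestals_v3"}
  };

  auto payload_for = [&iov](unsigned run) -> const std::string& {
    // Assumes run >= first IOV boundary (here 1).
    auto it = iov.upper_bound(run);  // first IOV starting *after* this run
    --it;                            // step back to the one covering it
    return it->second;
  };

  std::printf("run 12345 -> %s\n", payload_for(12345).c_str());  // pedestals_v2
  std::printf("run 500   -> %s\n", payload_for(500).c_str());    // pedestals_v1
  return 0;
}
```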
DOI: 10.1088/1742-6596/664/9/092024
2015
Evaluating the power efficiency and performance of multi-core platforms using HEP workloads
As Moore's Law drives the silicon industry towards higher transistor counts, processor designs are becoming more and more complex. The areas of development include core count, execution ports, vector units, uncore architecture and finally instruction sets. This increasing complexity leads us to a place where access to the shared memory is the major limiting factor, making it a real challenge to feed the cores with data. On the other hand, the significant focus on power efficiency paves the way for power-aware computing and less complex architectures in data centers. In this paper we try to examine these trends and present results of our experiments with the Haswell-EP processor family and highly scalable HEP workloads.
DOI: 10.1016/0168-9002(90)91501-2
1990
Cited 7 times
Test results of the L3 precision muon detector
The L3 detector is designed to measure the muon momentum with a 2% resolution at p = 45 GeV/c. We discuss here the systems we developed to reach the required accuracy and control the mechanical alignment at running time. We also report on the test done on the muon spectrometer with UV lasers and cosmic rays.
DOI: 10.1016/0168-9002(89)90551-2
1989
Cited 7 times
Muon detection in the L3 experiment at LEP
The L3 muon spectrometer is presented. Characteristics useful for experiments at future accelerators are highlighted. Particular emphasis is given to the systems envisaged to keep the error on the relative alignment of detectors below 30 μm and so reach a momentum resolution Δp/p = 2% at p = 45 GeV/c.
DOI: 10.1109/nssmic.2004.1462662
2005
Cited 4 times
Use of grid tools to support CMS distributed analysis
In order to prepare the Physics Technical Design Report, due by end of 2005, the CMS experiment needs to simulate, reconstruct and analyse about 100 million events, corresponding to more than 200 TB of data. The data will be distributed to several Computing Centres. In order to provide access to the whole data sample to all the world-wide dispersed physicists, CMS is developing a layer of software that uses the Grid tools provided by the LCG project to gain access to data and resources and that aims to provide a user friendly interface to the physicists submitting the analysis jobs. To achieve these aims CMS will use Grid tools from both the LCG-2 release and those being developed in the framework of the ARDA project. This work describes the current status and the future developments of the CMS analysis system.
DOI: 10.1109/cnna.2012.6331454
2012
Real-time use of GPUs in NA62 experiment
We describe a pilot project for the use of GPUs in a real-time triggering application in the early trigger stages at the CERN NA62 experiment, and the results of the first field tests together with a prototype data acquisition (DAQ) system. This pilot project within NA62 aims at integrating GPUs into the central L0 trigger processor, and also to use them as fast online processors for computing trigger primitives. Several TDC-equipped sub-detectors with sub-nanosecond time resolution will participate in the first-level NA62 trigger (L0), fully integrated with the data-acquisition system, to reduce the readout rate of all sub-detectors to 1 MHz, using multiplicity information asynchronously computed over time frames of a few ns, both for positive sub-detectors and for vetos. The online use of GPUs would allow the computation of more complex trigger primitives already at this first trigger level. We describe the architectures of the proposed systems, focusing on measuring the performance (both throughput and latency) of various approaches meant to solve these high energy physics problems. The challenges and the prospects of this promising idea are discussed.
DOI: 10.1016/0168-9002(95)00102-6
1995
Cited 7 times
The L3 forward-backward muon RPC trigger system
We describe the trigger system for the Forward-Backward muon spectrometer of the L3 detector. The trigger uses double gap Resistive Plate Counters (RPC) covering an area of 300 m². This is the first large scale application of this kind of detector in high energy physics. The main features of these detectors, the trigger architecture and preliminary results are reported.
DOI: 10.1016/j.nima.2004.07.069
2004
Cited 4 times
Status of the LCG Physicist Interface (PI) project
In the context of the LHC Computing Grid (LCG) project, the Applications Area develops and maintains that part of the physics applications software and associated infrastructure that is shared among the LHC experiments. In this context, the Physicist Interface (PI) project of the LCG Application Area encompasses the interfaces and tools by which physicists will directly use the software. At present, the project covers analysis services, interactivity (the “physicist's desktop”), and visualization. The project has the mandate to provide a consistent set of interfaces and tools by which physicists will directly use the LCG software. This paper will describe the status of the project, concentrating on the activities in relation to the Analysis Services subsystem.
DOI: 10.1109/tns.2005.860150
2005
Cited 3 times
The LCG PI project: using interfaces for physics data analysis
In the context of the LHC computing grid (LCG) project, the applications area develops and maintains that part of the physics applications software and associated infrastructure that is shared among the LHC experiments. The "physicist interface" (PI) project of the LCG application area encompasses the interfaces and tools by which physicists will directly use the software, providing implementations based on agreed standards like the analysis systems subsystem (AIDA) interfaces for data analysis. In collaboration with users from the experiments, work has started with implementing the AIDA interfaces for (binned and unbinned) histogramming, fitting and minimization as well as manipulation of tuples. These implementations have been developed by re-using existing packages either directly or by using a (thin) layer of wrappers. In addition, bindings of these interfaces to the Python interpreted language have been done using the dictionary subsystem of the LCG applications area/SEAL project. The actual status and the future planning of the project will be presented
DOI: 10.5281/zenodo.4730167
2018
TrackML Particle Tracking Challenge
Original source from Kaggle: https://www.kaggle.com/c/trackml-particle-identification/data
The dataset comprises multiple independent events, where each event contains simulated measurements (essentially 3D points) of particles generated in a collision between proton bunches at the Large Hadron Collider at CERN. The goal of the tracking machine learning challenge is to group the recorded measurements or hits for each event into tracks, sets of hits that belong to the same initial particle. A solution must uniquely associate each hit to one track. The training dataset contains the recorded hits, their ground truth counterpart and their association to particles, and the initial parameters of those particles. The test dataset contains only the recorded hits.
Once unzipped, the dataset is provided as a set of plain .csv files. Each event has four associated files that contain hits, hit cells, particles, and the ground truth association between them. The common prefix, e.g. event000000010, is always "event" followed by 9 digits: event000000000-hits.csv, event000000000-cells.csv, event000000000-particles.csv, event000000000-truth.csv, event000000001-hits.csv, and so on.
Event hits. The hits file contains the following values for each hit/entry: hit_id: numerical identifier of the hit inside the event. x, y, z: measured x, y, z position (in millimeters) of the hit in global coordinates. volume_id: numerical identifier of the detector group. layer_id: numerical identifier of the detector layer inside the group. module_id: numerical identifier of the detector module inside the layer. The volume/layer/module id could in principle be deduced from x, y, z; they are given here to simplify detector-specific data handling.
Event truth. The truth file contains the mapping between hits and generating particles and the true particle state at each measured hit. Each entry maps one hit to one particle. hit_id: numerical identifier of the hit as defined in the hits file. particle_id: numerical identifier of the generating particle as defined in the particles file; a value of 0 means that the hit did not originate from a reconstructible particle, but e.g. from detector noise. tx, ty, tz: true intersection point in global coordinates (in millimeters) between the particle trajectory and the sensitive surface. tpx, tpy, tpz: true particle momentum (in GeV/c) in the global coordinate system at the intersection point; the corresponding vector is tangent to the particle trajectory at the intersection point. weight: per-hit weight used for the scoring metric; the total sum of weights within one event equals one.
Event particles. The particles file contains the following values for each particle/entry: particle_id: numerical identifier of the particle inside the event. vx, vy, vz: initial position or vertex (in millimeters) in global coordinates. px, py, pz: initial momentum (in GeV/c) along each global axis. q: particle charge (as a multiple of the absolute electron charge). nhits: number of hits generated by this particle. All entries contain the generated information or ground truth.
Event hit cells. The cells file contains the constituent active detector cells that comprise each hit. The cells can be used to refine the hit-to-track association. A cell is the smallest granularity inside each detector module, much like a pixel on a screen, except that depending on the volume_id a cell can be a square or a long rectangle. It is identified by two channel identifiers that are unique within each detector module and encode the position, much like column/row numbers of a matrix. A cell can provide signal information that the detector module has recorded in addition to the position. Depending on the detector type only one of the channel identifiers is valid, e.g. for the strip detectors, and the value might have different resolution. hit_id: numerical identifier of the hit as defined in the hits file. ch0, ch1: channel identifier/coordinates unique within one module. value: signal value information, e.g. how much charge a particle has deposited.
Additional detector geometry information. The detector is built from silicon slabs (or modules, rectangular or trapezoidal), arranged in cylinders and disks, which measure the position (or hits) of the particles that cross them. The detector modules are organized into detector groups or volumes identified by a volume_id. Inside a volume they are further grouped into layers identified by a layer_id. Each layer can contain an arbitrary number of detector modules, the smallest geometrically distinct detector object, each identified by a module_id. Within each group, detector modules are of the same type and have e.g. the same granularity. All simulated detector modules are so-called semiconductor sensors that are built from thin silicon sensor chips. Each module can be represented by a two-dimensional, planar, bounded sensitive surface. These sensitive surfaces are subdivided into regular grids that define the detector cells, the smallest granularity within the detector. Each module has a different position and orientation described in the detectors file. A local, right-handed coordinate system is defined on each sensitive surface such that the first two coordinates u and v are on the sensitive surface and the third coordinate w is normal to the surface. The orientation and position are defined by the transformation pos_xyz = rotation_matrix * pos_uvw + translation, which transforms a position described in local coordinates u, v, w into the equivalent position x, y, z in global coordinates using a rotation matrix and a translation vector (cx, cy, cz). volume_id: numerical identifier of the detector group. layer_id: numerical identifier of the detector layer inside the group. module_id: numerical identifier of the detector module inside the layer. cx, cy, cz: position of the local origin in the global coordinate system (in millimeters). rot_xu, rot_xv, rot_xw, rot_yu, ...: components of the rotation matrix to rotate from local u, v, w to global x, y, z coordinates. module_t: half thickness of the detector module (in millimeters). module_minhu, module_maxhu: the minimum/maximum half-length of the module boundary along the local u direction (in millimeters). module_hv: the half-length of the module boundary along the local v direction (in millimeters). pitch_u, pitch_v: the size of detector cells along the local u and v direction (in millimeters). There are two different module shapes in the detector, rectangular and trapezoidal. The pixel detector (with volume_id = 7, 8, 9) is fully built from rectangular modules, and so are the cylindrical barrels in volume_id = 13, 17. The remaining layers are made of disks that need trapezoidal shapes to cover the full disk.
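The local-to-global transformation quoted in the geometry description (pos_xyz = rotation_matrix * pos_uvw + translation) can be applied per module as in the hedged sketch below; the rotation and translation values are illustrative stand-ins for the ones read from the detectors file.

```cpp
// Apply the module-local (u, v, w) to global (x, y, z) transformation.
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

Vec3 local_to_global(const Mat3& rot, const Vec3& c, const Vec3& uvw) {
  Vec3 xyz{};
  for (int i = 0; i < 3; ++i)
    xyz[i] = rot[i][0] * uvw[0] + rot[i][1] * uvw[1] + rot[i][2] * uvw[2] + c[i];
  return xyz;
}

int main() {
  // Identity rotation and an arbitrary module origin, purely for illustration;
  // real values come from the detectors file (rot_xu ... rot_zw and cx, cy, cz).
  const Mat3 rot = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
  const Vec3 c   = {30.0, -12.5, 540.0};                        // mm
  const Vec3 hit = local_to_global(rot, c, {0.25, -1.75, 0.0}); // u, v, w in mm
  std::printf("global hit position: (%.2f, %.2f, %.2f) mm\n",
              hit[0], hit[1], hit[2]);
  return 0;
}
```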
2016
The Tracking Machine Learning Challenge
DOI: 10.1016/0168-9002(89)90370-7
1989
Cited 5 times
Energy and position resolution of silicon calorimeter used in the UA7 experiment
The performance of a compact 4 in. diameter, 20 radiation lengths silicon/tungsten calorimeter has been evaluated using electron and pion beams up to 100 GeV at the CERN SPS. The energy and position resolutions have been found to be respectively 25%/√E + 1.1% and 3.8 mm/√E (E in GeV).
DOI: 10.48550/arxiv.physics/0306091
2003
Cited 3 times
CMS Data Analysis: Current Status and Future Strategy
We present the current status of CMS data analysis architecture and describe work on future Grid-based distributed analysis prototypes. CMS has two main software frameworks related to data analysis: COBRA, the main framework, and IGUANA, the interactive visualisation framework. Software using these frameworks is used today in the world-wide production and analysis of CMS data. We describe their overall design and present examples of their current use with emphasis on interactive analysis. CMS is currently developing remote analysis prototypes, including one based on Clarens, a Grid-enabled client-server tool. Use of the prototypes by CMS physicists will guide us in forming a Grid-enriched analysis strategy. The status of this work is presented, as is an outline of how we plan to leverage the power of our existing frameworks in the migration of CMS software to the Grid.
DOI: 10.1109/tns.2011.2155084
2011
Time-Critical Database Conditions Data-Handling for the CMS Experiment
Automatic, synchronous and of course reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and data analysis. We will describe here the system put in place in the CMS experiment to automate the processes to populate the database centrally and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users in a dedicated service which synchronizes them and takes care of writing them into the online database. Then they are automatically streamed to the offline database, hence immediately accessible offline worldwide. This mechanism was intensively used during the 2008 and 2009 operation with cosmic ray challenges and first LHC collision data, and many improvements have been made since. The experience of these first years of operation will be discussed in detail.
DOI: 10.1109/rtc.2010.5750372
2010
SQL-jQuery client: a tool managing the DB backend of the CMS software framework through a Web Browser
PJ-SQL-Browser is a free Python-Javascript web-based tool. It relies on jQuery and python libraries, and is intended to provide the CMS software framework a real-time handle to the DB backend inside a local web browser.
2018
The TrackML challenge
DOI: 10.5281/zenodo.4730157
2018
TrackML Throughput Phase
Original source from Codalab: https://competitions.codalab.org/competitions/20112
The dataset format and contents are identical to those of the TrackML Particle Tracking Challenge dataset described above: each event provides hits, hit cells, particles and ground-truth files as plain .csv, together with the additional detector geometry information.
DOI: 10.2172/1570206
2019
CMS Patatrack Project [PowerPoint]
This talk presents the technical performance and lessons learned of the Patatrack demonstrator, where the CMS pixel local reconstruction and pixel-only track reconstruction have been ported to NVIDIA GPUs. The demonstrator is run within the CMS software framework (CMSSW), and the model of integrating CUDA algorithms into CMSSW is discussed as well.
2020
Heterogeneous reconstruction of tracks and primary vertices with the CMS pixel tracker
The High-Luminosity upgrade of the LHC will see the accelerator reach an instantaneous luminosity of $7\times 10^{34} cm^{-2}s^{-1}$ with an average pileup of $200$ proton-proton collisions. These conditions will pose an unprecedented challenge to the online and offline reconstruction software developed by the experiments. The computational complexity will exceed by far the expected increase in processing power for conventional CPUs, demanding an alternative approach. Industry and High-Performance Computing (HPC) centres are successfully using heterogeneous computing platforms to achieve higher throughput and better energy efficiency by matching each job to the most appropriate architecture. In this paper we will describe the results of a heterogeneous implementation of pixel tracks and vertices reconstruction chain on Graphics Processing Units (GPUs). The framework has been designed and developed to be integrated in the CMS reconstruction software, CMSSW. The speed up achieved by leveraging GPUs allows for more complex algorithms to be executed, obtaining better physics output and a higher throughput.
DOI: 10.2172/1630717
2019
Bringing heterogeneity to the CMS software framework [Slides]
Co-processors or accelerators like GPUs and FPGAs are becoming more and more popular. CMS’ data processing framework (CMSSW) implements multi-threading using Intel TBB utilizing tasks as concurrent units of work. We have developed generic mechanisms within the CMSSW framework to interact effectively with non-CPU resources and configure CPU and non-CPU algorithms in a unified way. As a first step to gain experience, we have explored mechanisms for how algorithms could offload work to NVIDIA GPUs with CUDA.
DOI: 10.48550/arxiv.2008.13461
2020
Heterogeneous reconstruction of tracks and primary vertices with the CMS pixel tracker
The High-Luminosity upgrade of the LHC will see the accelerator reach an instantaneous luminosity of $7\times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ with an average pileup of 200 proton-proton collisions. These conditions will pose an unprecedented challenge to the online and offline reconstruction software developed by the experiments. The computational complexity will exceed by far the expected increase in processing power for conventional CPUs, demanding an alternative approach. Industry and High-Performance Computing (HPC) centres are successfully using heterogeneous computing platforms to achieve higher throughput and better energy efficiency by matching each job to the most appropriate architecture. In this paper we describe the results of a heterogeneous implementation of the pixel track and vertex reconstruction chain on Graphics Processing Units (GPUs). The framework has been designed and developed to be integrated into the CMS reconstruction software, CMSSW. The speed-up achieved by leveraging GPUs allows more complex algorithms to be executed, obtaining better physics output and a higher throughput.
DOI: 10.1109/nssmic.2006.354216
2006
The CMS Simulation Software
In this paper we present the features and the expected performance of the re-designed CMS simulation software, as well as the experience from the migration process. Today, the CMS simulation suite is based on two principal components: the Geant4 detector simulation toolkit and the new CMS offline Framework and Event Data Model. The simulation chain includes event generation, detector simulation, and digitization steps. With Geant4, we employ the full set of electromagnetic and hadronic physics processes and detailed particle tracking in the 4 Tesla magnetic field. The Framework provides "action on demand" mechanisms to allow users to load the desired modules dynamically and to configure and tune the final application at run time. The simulation suite is used to model the complete central CMS detector (over 1 million geometrical volumes) and the forward systems, such as the Castor calorimeter and Zero Degree Calorimeter, the Totem telescopes, Roman Pots, and the Luminosity Monitor. The design also foresees the use of electromagnetic and hadronic shower parametrization, instead of the full modelling of the passage of high-energy particles through a complex hierarchy of volumes and materials, allowing a significant gain in speed while tuning the simulation to test beam and collider data. The physics simulation has been extensively validated by comparison with test beam data and previous simulation results. The redesigned and upgraded simulation software was exercised for performance and robustness tests. It went into production in July 2006, running on the US and EU grids, and has since delivered about 60 million events.
2004
Mantis: the Geant4-based simulation specialization of the CMS COBRA framework
DOI: 10.1016/0168-9002(90)90349-b
1990
Study of θ-inclined tracks in L3 muon chambers
Measurements of cosmic rays in the L3 multisampling chambers are presented. The study of tracks with polar angles 30° < θ < 130° with respect to the wires shows a pulse height increasing like 1/sin θ. Using inclined tracks, we find a ±1.5 cm region of reduced accuracy near the glass supports of the 5.4 m long wires.
DOI: 10.1016/s0010-4655(97)00176-8
1998
CMS reconstruction and analysis: an object oriented approach
Object Oriented technologies have been chosen to design and implement the first prototype of the CMS Reconstruction and Analysis software. The architecture is based on an application framework in which physics software modules can be plugged in a flexible and customisable way. A key feature of this framework is a mechanism for reconstruction on demand which has been already subjected to intensive prototype work.
DOI: 10.1088/1742-6596/762/1/012038
2016
The CptnHook Profiler - A tool to investigate usage patterns of mathematical functions.
Transcendental mathematical functions are one of the main hot-spots of scientific applications. The use of highly optimised, general-purpose mathematical libraries can mitigate this issue. A more comprehensive solution, on the other hand, is to replace the generic mathematical functions with specific implementations targeting particular subdomains only.
2016
TrackML: a LHC Tracking Machine Learning Challenge
DOI: 10.1063/1.4771865
2012
Probing methods for automatic error resolution in a heterogeneous software environment
Antonio Pierro, Salvatore Di Guida, Vincenzo Innocente, Ilja Kuzborskij; Probing methods for automatic error resolution in a heterogeneous software environment. AIP Conf. Proc. 1504 (1), 10 December 2012: 999–1002. https://doi.org/10.1063/1.4771865
DOI: 10.1088/1742-6596/331/4/042019
2011
Fast access to the CMS detector condition data employing HTML5 technologies
This paper focuses on using HTML version 5 (HTML5) for accessing condition data for the CMS experiment, evaluating the benefits and risks posed by the use of this technology. According to the authors of HTML5, this technology attempts to solve issues found in previous iterations of HTML and addresses the needs of web applications, an area previously not adequately covered by HTML. We demonstrate that employing HTML5 brings important benefits in terms of access performance to the CMS condition data. The combined use of web storage and web sockets increases performance and reduces the costs in terms of computation power, memory usage and network bandwidth for both client and server. Above all, web workers make it possible to run scripts in multi-threaded mode, exploiting multi-core microprocessors. Web workers have been employed to substantially decrease the web-page rendering time when displaying the condition data stored in the CMS condition database.
DOI: 10.1063/1.4771858
2012
Web application for detailed real-time database transaction monitoring for CMS condition data
In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases hosted on several servers, both inside and outside the CERN network. In this scenario, monitoring the different databases is a crucial database-administration task, since different information may be required depending on the users' activities, such as data transfer, inspection, planning and security. We present here a web application based on a Python web framework and Python modules for data mining. To customize the GUI we record traces of user interactions, which are used to build use-case models. In addition, the application detects errors in database transactions (for example user mistakes, application failures, unexpected network shutdowns or Structured Query Language (SQL) statement errors) and provides warning messages from the different users' perspectives. Finally, in order to fulfil the requirements of the CMS experiment community and to follow new developments in web client tools, the application was further developed and new features were deployed.
DOI: 10.1088/1742-6596/331/4/042020
2011
jSPyDB, an open source database-independent tool for data management
Nowadays, the number of commercial tools available for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open source, they provide interfaces only to a specific kind of database, they are platform-dependent, and they are very CPU- and memory-consuming.
DOI: 10.1088/1742-6596/331/4/042013
2011
The evolution of CMS software performance studies
CMS has had an ongoing and dedicated effort to optimize software performance for several years. Initially this effort focused primarily on cleaning up many issues coming from basic C++ errors, namely reducing dynamic memory churn and unnecessary copies/temporaries, and on tools to routinely monitor these issues. Over the past 1.5 years, however, the transition to 64-bit, newer versions of the gcc compiler, newer tools and the enabling of techniques like vectorization have made more sophisticated improvements to the software performance possible. This presentation covers this evolution and describes the current avenues being pursued for software performance, as well as the corresponding gains.
DOI: 10.1088/1742-6596/219/7/072048
2010
CMS conditions database web application service
The application server is built upon the conditions Python API in the CMS offline software framework and serves applications and users not involved in event processing.
DOI: 10.1142/9789814307529_0116
2010
Development of Web Tools for the automatic Upload of Calibration Data into the CMS Condition Data
DOI: 10.1109/rtc.2010.5750460
2010
Real time monitoring system for applications performing the population of condition databases for CMS non-event data
In real-time systems, such as the CMS Online Condition Database, monitoring and fast error detection are very challenging tasks. Recovering the system and putting it into a safe state requires spotting a faulty situation under strict timing constraints and reacting quickly.
2017
Tracking at LHC as a collaborative data challenge use case with RAMP
DOI: 10.1109/rtc.2010.5750484
2010
Time-critical database conditions data-handling for the CMS experiment
Automatic, synchronous and of course reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. We describe here the system put in place in the CMS experiment to automate the processes that centrally populate the database and make condition data promptly available, both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users into a dedicated service, which synchronizes them and takes care of writing them into the online database. They are then automatically streamed to the offline database and hence immediately accessible offline worldwide. This mechanism was used intensively during the 2008 and 2009 operation with cosmic-ray challenges and the first LHC collision data, and many improvements have been made since. The experience of these first years of operation is discussed in detail.
2018
WCCI 2018 TrackML Particle Tracking Challenge
Can Machine Learning assist High Energy Physics (HEP) in discovering and characterizing new particles? With event rates already reaching hundreds of millions of collisions per second, physicists must sift through tens of petabytes of data per year. Ever better software is needed for real-time pre-processing and filtering of the most promising events, as the resolution of detectors improves, leading to an ever larger quantity of data. To mobilise the scientific community around this problem, we are organizing the TrackML challenge, whose objective is to use machine learning to quickly reconstruct particle tracks from the dotted-line traces left in the silicon detectors. The challenge consists in recognizing trajectories in the 3D images of proton collisions at the Large Hadron Collider (LHC) at CERN. Think of it as a picture of fireworks: the time information is lost, but all particle trajectories have roughly the same origin and therefore there is a correspondence between arc length and time ordering. Given the coordinates of the impacts of particles on the detectors (3D points), the problem is to "connect the dots", or rather the points, i.e. return all sets of points belonging to alleged particle trajectories. The challenge will be conducted in two phases, the first favoring innovation over efficiency and the second aiming at real-time reconstruction.
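The "connect the dots" task above boils down to assigning every recorded hit to exactly one candidate track. As a minimal, hedged sketch: the hits-file name below follows the event-prefix convention used by the dataset (described later in this listing), the exact submission format is an assumption, and pandas is simply an illustrative choice.

```python
# Minimal sketch of the expected output shape: every hit gets exactly one
# track assignment. As a trivial placeholder, each hit is put into its own
# single-hit track; a real solution would group hits from the same particle
# under a common track id. Column names beyond hit_id are assumptions.
import pandas as pd

hits = pd.read_csv("event000000000-hits.csv")

submission = pd.DataFrame({
    "hit_id": hits["hit_id"],
    # Placeholder assignment: one track id per hit.
    "track_id": range(1, len(hits) + 1),
})
print(submission.head())
```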
DOI: 10.5281/zenodo.4730166
2018
TrackML Particle Tracking Challenge
Original source from Kaggle: https://www.kaggle.com/c/trackml-particle-identification/data
The dataset comprises multiple independent events, where each event contains simulated measurements (essentially 3D points) of particles generated in a collision between proton bunches at the Large Hadron Collider at CERN. The goal of the tracking machine learning challenge is to group the recorded measurements, or hits, for each event into tracks, i.e. sets of hits that belong to the same initial particle. A solution must uniquely associate each hit to one track. The training dataset contains the recorded hits, their ground-truth counterpart and their association to particles, and the initial parameters of those particles. The test dataset contains only the recorded hits. Once unzipped, the dataset is provided as a set of plain .csv files. Each event has four associated files that contain hits, hit cells, particles, and the ground-truth association between them. The common prefix, e.g. event000000010, is always event followed by 9 digits.
event000000000-hits.csv event000000000-cells.csv event000000000-particles.csv event000000000-truth.csv
event000000001-hits.csv event000000001-cells.csv event000000001-particles.csv event000000001-truth.csv
<strong>Event hits</strong>
The hits file contains the following values for each hit/entry:
hit_id: numerical identifier of the hit inside the event.
x, y, z: measured x, y, z position (in millimeters) of the hit in global coordinates.
volume_id: numerical identifier of the detector group.
layer_id: numerical identifier of the detector layer inside the group.
module_id: numerical identifier of the detector module inside the layer.
The volume/layer/module ids could in principle be deduced from x, y, z. They are given here to simplify detector-specific data handling.
<strong>Event truth</strong>
The truth file contains the mapping between hits and generating particles and the true particle state at each measured hit. Each entry maps one hit to one particle.
hit_id: numerical identifier of the hit as defined in the hits file.
particle_id: numerical identifier of the generating particle as defined in the particles file. A value of 0 means that the hit did not originate from a reconstructible particle, but e.g. from detector noise.
tx, ty, tz: true intersection point in global coordinates (in millimeters) between the particle trajectory and the sensitive surface.
tpx, tpy, tpz: true particle momentum (in GeV/c) in the global coordinate system at the intersection point. The corresponding vector is tangent to the particle trajectory at the intersection point.
weight: per-hit weight used for the scoring metric; the total sum of weights within one event equals one.
<strong>Event particles</strong>
The particles file contains the following values for each particle/entry:
particle_id: numerical identifier of the particle inside the event.
vx, vy, vz: initial position or vertex (in millimeters) in global coordinates.
px, py, pz: initial momentum (in GeV/c) along each global axis.
q: particle charge (as a multiple of the absolute electron charge).
nhits: number of hits generated by this particle.
All entries contain the generated information or ground truth.
<strong>Event hit cells</strong>
The cells file contains the constituent active detector cells that comprise each hit. The cells can be used to refine the hit-to-track association.
A cell is the smallest granularity inside each detector module, much like a pixel on a screen, except that depending on the volume_id a cell can be a square or a long rectangle. It is identified by two channel identifiers that are unique within each detector module and encode the position, much like the column/row numbers of a matrix. In addition to the position, a cell can provide signal information that the detector module has recorded. Depending on the detector type only one of the channel identifiers is valid, e.g. for the strip detectors, and the value might have a different resolution.
hit_id: numerical identifier of the hit as defined in the hits file.
ch0, ch1: channel identifiers/coordinates unique within one module.
value: signal value information, e.g. how much charge a particle has deposited.
<strong>Additional detector geometry information</strong>
The detector is built from silicon slabs (or modules, rectangular or trapezoidal), arranged in cylinders and disks, which measure the position (or hits) of the particles that cross them. The detector modules are organized into detector groups or volumes identified by a volume_id. Inside a volume they are further grouped into layers identified by a layer_id. Each layer can contain an arbitrary number of detector modules, the smallest geometrically distinct detector objects, each identified by a module_id. Within each group, detector modules are of the same type and have, e.g., the same granularity. All simulated detector modules are so-called semiconductor sensors built from thin silicon sensor chips. Each module can be represented by a two-dimensional, planar, bounded sensitive surface. These sensitive surfaces are subdivided into regular grids that define the detector's cells, the smallest granularity within the detector. Each module has a different position and orientation, described in the detectors file. A local, right-handed coordinate system is defined on each sensitive surface such that the first two coordinates u and v lie on the sensitive surface and the third coordinate w is normal to the surface. The position and orientation are defined by the transformation pos_xyz = rotation_matrix * pos_uvw + translation, which transforms a position described in local coordinates u,v,w into the equivalent position x,y,z in global coordinates using a rotation matrix and a translation vector (cx,cy,cz).
volume_id: numerical identifier of the detector group.
layer_id: numerical identifier of the detector layer inside the group.
module_id: numerical identifier of the detector module inside the layer.
cx, cy, cz: position of the local origin in the global coordinate system (in millimeters).
rot_xu, rot_xv, rot_xw, rot_yu, ...: components of the rotation matrix that rotates from local u,v,w to global x,y,z coordinates.
module_t: half thickness of the detector module (in millimeters).
module_minhu, module_maxhu: the minimum/maximum half-length of the module boundary along the local u direction (in millimeters).
module_hv: the half-length of the module boundary along the local v direction (in millimeters).
pitch_u, pitch_v: the size of the detector cells along the local u and v directions (in millimeters).
There are two different module shapes in the detector, rectangular and trapezoidal. The pixel detector (volume_id = 7, 8, 9) is built entirely from rectangular modules, and so are the cylindrical barrels in volume_id = 13, 17. The remaining layers are made out of disks, which need trapezoidal shapes to cover the full disk.
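Since each event is just four flat .csv files sharing a common prefix, a few lines of code are enough to load one event and attach the ground truth to the recorded hits (training data only). This is a minimal sketch, assuming the files sit in the current directory and that pandas is available; neither is prescribed by the dataset itself.

```python
# Minimal sketch: read one event and join the truth information onto the hits.
import pandas as pd

prefix = "event000000000"
hits = pd.read_csv(f"{prefix}-hits.csv")
cells = pd.read_csv(f"{prefix}-cells.csv")
particles = pd.read_csv(f"{prefix}-particles.csv")
truth = pd.read_csv(f"{prefix}-truth.csv")

# Each hit maps to exactly one truth entry, so a join on hit_id is enough.
hits_with_truth = hits.merge(truth, on="hit_id", how="left")

# particle_id == 0 marks hits that do not come from a reconstructible particle.
noise_fraction = (hits_with_truth["particle_id"] == 0).mean()
print(f"noise hits: {noise_fraction:.1%}")
```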
DOI: 10.22323/1.340.0159
2019
TrackML : a tracking Machine Learning challenge
The High-Luminosity LHC will see pileup levels reaching 200, which will greatly increase the complexity of the tracking component of the event reconstruction. To reach out to Computer Science specialists, a Tracking Machine Learning challenge (TrackML) was set up on Kaggle in 2018 by a team of ATLAS, CMS and LHCb physicists, tracking experts and Computer Scientists, building on the experience of the successful Higgs Machine Learning challenge in 2014. A dataset consisting of an accurate simulation of an LHC experiment tracker has been created, listing for each event the measured 3D points and the list of 3D points associated with each true track. The dataset is large enough to allow appropriate training of Machine Learning methods: about 100,000 events, 1 billion tracks, 100 gigabytes. The participants of the challenge are asked to find the tracks, which means building the list of 3D points belonging to each track (deriving the track parameters is not the topic of the challenge). Here the first lessons from the challenge are discussed, including an initial analysis of the submitted results.
DOI: 10.2172/1623357
2019
Bringing heterogeneity to the CMS software framework [Slides]
Co-processors or accelerators such as GPUs and FPGAs are becoming increasingly popular. CMS's data processing framework (CMSSW) implements multi-threading using Intel TBB, with tasks as concurrent units of work. We have developed generic mechanisms within the CMSSW framework to interact effectively with non-CPU resources and to configure CPU and non-CPU algorithms in a unified way. As a first step to gain experience, we have explored mechanisms by which algorithms could offload work to NVIDIA GPUs with CUDA.
2018
The TrackML challenge
DOI: 10.5281/zenodo.4730156
2018
TrackML Throughput Phase
Original source from Codalab: https://competitions.codalab.org/competitions/20112
The dataset comprises multiple independent events, where each event contains simulated measurements (essentially 3D points) of particles generated in a collision between proton bunches at the Large Hadron Collider at CERN. The goal of the tracking machine learning challenge is to group the recorded measurements, or hits, for each event into tracks, i.e. sets of hits that belong to the same initial particle. A solution must uniquely associate each hit to one track. The training dataset contains the recorded hits, their ground-truth counterpart and their association to particles, and the initial parameters of those particles. The test dataset contains only the recorded hits. Once unzipped, the dataset is provided as a set of plain .csv files. Each event has four associated files that contain hits, hit cells, particles, and the ground-truth association between them. The common prefix, e.g. event000000010, is always event followed by 9 digits.
event000000000-hits.csv event000000000-cells.csv event000000000-particles.csv event000000000-truth.csv
event000000001-hits.csv event000000001-cells.csv event000000001-particles.csv event000000001-truth.csv
<strong>Event hits</strong>
The hits file contains the following values for each hit/entry:
hit_id: numerical identifier of the hit inside the event.
x, y, z: measured x, y, z position (in millimeters) of the hit in global coordinates.
volume_id: numerical identifier of the detector group.
layer_id: numerical identifier of the detector layer inside the group.
module_id: numerical identifier of the detector module inside the layer.
The volume/layer/module ids could in principle be deduced from x, y, z. They are given here to simplify detector-specific data handling.
<strong>Event truth</strong>
The truth file contains the mapping between hits and generating particles and the true particle state at each measured hit. Each entry maps one hit to one particle.
hit_id: numerical identifier of the hit as defined in the hits file.
particle_id: numerical identifier of the generating particle as defined in the particles file. A value of 0 means that the hit did not originate from a reconstructible particle, but e.g. from detector noise.
tx, ty, tz: true intersection point in global coordinates (in millimeters) between the particle trajectory and the sensitive surface.
tpx, tpy, tpz: true particle momentum (in GeV/c) in the global coordinate system at the intersection point. The corresponding vector is tangent to the particle trajectory at the intersection point.
weight: per-hit weight used for the scoring metric; the total sum of weights within one event equals one.
<strong>Event particles</strong>
The particles file contains the following values for each particle/entry:
particle_id: numerical identifier of the particle inside the event.
vx, vy, vz: initial position or vertex (in millimeters) in global coordinates.
px, py, pz: initial momentum (in GeV/c) along each global axis.
q: particle charge (as a multiple of the absolute electron charge).
nhits: number of hits generated by this particle.
All entries contain the generated information or ground truth.
<strong>Event hit cells</strong>
The cells file contains the constituent active detector cells that comprise each hit. The cells can be used to refine the hit-to-track association.
A cell is the smallest granularity inside each detector module, much like a pixel on a screen, except that depending on the volume_id a cell can be a square or a long rectangle. It is identified by two channel identifiers that are unique within each detector module and encode the position, much like the column/row numbers of a matrix. In addition to the position, a cell can provide signal information that the detector module has recorded. Depending on the detector type only one of the channel identifiers is valid, e.g. for the strip detectors, and the value might have a different resolution.
hit_id: numerical identifier of the hit as defined in the hits file.
ch0, ch1: channel identifiers/coordinates unique within one module.
value: signal value information, e.g. how much charge a particle has deposited.
<strong>Additional detector geometry information</strong>
The detector is built from silicon slabs (or modules, rectangular or trapezoidal), arranged in cylinders and disks, which measure the position (or hits) of the particles that cross them. The detector modules are organized into detector groups or volumes identified by a volume_id. Inside a volume they are further grouped into layers identified by a layer_id. Each layer can contain an arbitrary number of detector modules, the smallest geometrically distinct detector objects, each identified by a module_id. Within each group, detector modules are of the same type and have, e.g., the same granularity. All simulated detector modules are so-called semiconductor sensors built from thin silicon sensor chips. Each module can be represented by a two-dimensional, planar, bounded sensitive surface. These sensitive surfaces are subdivided into regular grids that define the detector's cells, the smallest granularity within the detector. Each module has a different position and orientation, described in the detectors file. A local, right-handed coordinate system is defined on each sensitive surface such that the first two coordinates u and v lie on the sensitive surface and the third coordinate w is normal to the surface. The position and orientation are defined by the transformation pos_xyz = rotation_matrix * pos_uvw + translation, which transforms a position described in local coordinates u,v,w into the equivalent position x,y,z in global coordinates using a rotation matrix and a translation vector (cx,cy,cz).
volume_id: numerical identifier of the detector group.
layer_id: numerical identifier of the detector layer inside the group.
module_id: numerical identifier of the detector module inside the layer.
cx, cy, cz: position of the local origin in the global coordinate system (in millimeters).
rot_xu, rot_xv, rot_xw, rot_yu, ...: components of the rotation matrix that rotates from local u,v,w to global x,y,z coordinates.
module_t: half thickness of the detector module (in millimeters).
module_minhu, module_maxhu: the minimum/maximum half-length of the module boundary along the local u direction (in millimeters).
module_hv: the half-length of the module boundary along the local v direction (in millimeters).
pitch_u, pitch_v: the size of the detector cells along the local u and v directions (in millimeters).
There are two different module shapes in the detector, rectangular and trapezoidal. The pixel detector (volume_id = 7, 8, 9) is built entirely from rectangular modules, and so are the cylindrical barrels in volume_id = 13, 17. The remaining layers are made out of disks, which need trapezoidal shapes to cover the full disk.
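Because each hit is composed of one or more active cells, the cells file can be aggregated per hit_id to recover simple cluster observables such as the number of fired cells and the summed signal value. The sketch below is illustrative only; the file prefix follows the convention above, pandas is an assumed tool, and the derived column names (cluster_size, total_charge) are not part of the dataset.

```python
# Minimal sketch: per-hit cluster size and summed signal from the cells file.
import pandas as pd

prefix = "event000000000"
hits = pd.read_csv(f"{prefix}-hits.csv")
cells = pd.read_csv(f"{prefix}-cells.csv")

cluster = cells.groupby("hit_id").agg(
    cluster_size=("value", "size"),   # number of active cells in the hit
    total_charge=("value", "sum"),    # summed signal value, e.g. deposited charge
)
hits = hits.join(cluster, on="hit_id")
print(hits[["hit_id", "cluster_size", "total_charge"]].head())
```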
DOI: 10.1016/0168-9002(90)90578-t
1990
A 5 in. Si(Li)/Pb sampling calorimeter telescope for observation of cosmic gamma rays in the GeV region
A 5 in. diameter Si(Li)/Pb sampling calorimeter with a depth of 28 radiation lengths (30 unit cells × 0.93 radiation lengths) has been constructed. The energy and angular resolutions of the calorimeter have been investigated using CERN SPS positron beams with energies of 10 to 147.8 GeV. The calorimeter shows good linearity over this energy region and the energy resolution is well described by σE(rms)/E = (16.9 ± 0.9)%/√E[GeV], where E represents the incident beam energy. The angular resolution of the calorimeter for a single event is 0.3° (rms) at 80 GeV/c. The agreement between these results and Monte Carlo simulations is good. We present a new design of the Si(Li)/Pb sampling calorimeter telescope (SSCT) with an angular resolution (point-source localization capability) of about 0.04° (rms) for bright galactic gamma-ray sources. We believe that this telescope is a suitable detector for future observations of cosmic gamma rays in the GeV region, especially when used to search for point sources.
DOI: 10.1016/j.nima.2005.11.124
2006
The CMS analysis chain in a distributed environment
The CMS collaboration is undertaking a major effort to define the analysis model and to develop software tools for analysing several million simulated and real data events by a large number of people in many geographically distributed sites. From the computing point of view, one of the most complex issues in remote analysis is data discovery and access. Software tools were developed in order to move data, make them available to the full international community and validate them for the subsequent analysis. The batch analysis processing is performed with purpose-built workload management tools, which are mainly responsible for job preparation and job submission. Job monitoring and output management are implemented as the last part of the analysis chain. Grid tools provided by the LCG project are evaluated to gain access to the data and the resources, by providing a user-friendly interface to the physicists submitting the analysis jobs. An overview of the current implementation and of the interactions between these components of the CMS analysis system is presented in this work.
DOI: 10.1109/nssmic.2005.1596421
2006
The CMS Object-Oriented Simulation
The CMS object-oriented Geant4-based program is used to simulate the complete central CMS detector (over 1 million geometrical volumes) and the forward systems such as the Totem telescopes, Castor calorimeter, zero degree calorimeter, Roman pots, and the luminosity monitor. The simulation utilizes the full set of electromagnetic and hadronic physics processes provided by Geant4 and detailed particle tracking in the 4 tesla magnetic field. Electromagnetic shower parameterization can be used instead of full tracking of high-energy electrons and positrons, allowing significant gains in speed without detrimental precision losses. The simulation physics has been validated by comparisons with test beam data and previous simulation results. The system has been in production for almost two years and has delivered over 100 million events for various LHC physics channels. Productions are run on the US and EU grids at a rate of 3-5 million events per month. At the same time, the simulation has evolved to fulfill emerging requirements for new physics simulations, including very large heavy-ion events and a variety of SUSY scenarios. The software has also undergone major technical upgrades. The framework and core services have been ported to the new CMS offline software architecture and event data model. In parallel, the program is subjected to ever more stringent quality assurance procedures, including a recently commissioned automated physics validation suite.
DOI: 10.1109/nssmic.2004.1462673
2005
The LCG PI project: using interfaces for physics data analysis
In the context of the LHC Computing Grid (LCG) project, the Applications Area develops and maintains that part of the physics applications software and associated infrastructure that is shared among the LHC experiments. The "Physicist Interface" (PI) project of the LCG Applications Area encompasses the interfaces and tools by which physicists will directly use the software, providing implementations based on agreed standards like the AIDA interfaces for data analysis. In collaboration with users from the experiments, work has started on implementing the AIDA interfaces for (binned and unbinned) histogramming, fitting and minimization, as well as manipulation of tuples. These implementations have been developed by re-using existing packages, either directly or through a (thin) layer of wrappers. In addition, bindings of these interfaces to the Python interpreted language have been created using the dictionary subsystem of the LCG-AA/SEAL project. The current status and future plans of the project are presented.
DOI: 10.1016/s0920-5632(03)01890-5
2003
CMS on the GRID: Toward a fully distributed computing architecture
The computing systems required to collect, analyse and store the physics data at the LHC will need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further tests and the deployment of a production grid are also described.
2001
CMS Object—Oriented Analysis
The CMS OO reconstruction program, ORCA, has been used since 1999 to produce large samples of reconstructed Monte Carlo events for detector optimization, trigger and physics studies. The events are stored in several Objectivity federations at CERN, in the US, Italy and other countries. To perform their studies, physicists use different event samples, ranging from complete datasets of TByte size to only a few events out of these datasets. We describe the implementation of these requirements in the ORCA software and the way collections of events are accessed for reading, writing or copying.
DOI: 10.1016/s0168-9002(97)00034-x
1997
From SA/SD to OO methods or “The new design problem”
New techniques often involve innovative approaches and a different perspective: Object-Oriented (OO) software development is no exception. The technological shift from the classical approach of structured methods (SA/SD) to evolutionary development and OO methods is not easy. We must give it time and means, in terms of training, staff and support, for it to become effective. CMS has joined several R&D projects to test whether and how OO can be applied to its software. We share here our considerations and understanding from practical experience within the CMS and RD41 (Moose) OO activities.