
M. Pieri

2016
Cited 91 times
Handbook of LHC Higgs Cross Sections: 4. Deciphering the Nature of the Higgs Sector
This Report summarizes the results of the activities of the LHC Higgs Cross Section Working Group in the period 2014-2016. The main goal of the working group was to present the state-of-the-art of Higgs physics at the LHC, integrating all new results that have appeared in the last few years. The first part compiles the most up-to-date predictions of Higgs boson production cross sections and decay branching ratios, parton distribution functions, and off-shell Higgs boson production and interference effects. The second part discusses the recent progress in Higgs effective field theory predictions, followed by the third part on pseudo-observables, simplified template cross section and fiducial cross section measurements, which give the baseline framework for Higgs boson property measurements. The fourth part deals with the beyond the Standard Model predictions of various benchmark scenarios of Minimal Supersymmetric Standard Model, extended scalar sector, Next-to-Minimal Supersymmetric Standard Model and exotic Higgs boson decays. This report follows three previous working-group reports: Handbook of LHC Higgs Cross Sections: 1. Inclusive Observables (CERN-2011-002), Handbook of LHC Higgs Cross Sections: 2. Differential Distributions (CERN-2012-002), and Handbook of LHC Higgs Cross Sections: 3. Higgs properties (CERN-2013-004). The current report serves as the baseline reference for Higgs physics in LHC Run 2 and beyond.
DOI: 10.1016/s0191-8141(01)00006-2
2001
Cited 137 times
Rheological and microstructural evolution of Carrara marble with high shear strain: results from high temperature torsion experiments
This study investigated the rheological and microstructural evolution of Carrara marble deformed to large shear strain to understand how dynamic recrystallization and lattice-preferred orientation (LPO) are related to strain softening processes. Solid cylinders of Carrara marble were deformed in torsion up to a shear strain of γ=11 at constant twist rates, which correspond to a shear strain rate of 3×10−4 s−1 at the outer surface, and at temperatures of 1000 and 1200 K (727 and 927°C, respectively). For the initial grain size of 150 μm, these conditions are within the dislocation creep regime. Substantial changes in both rheology and microstructure were observed as the marble deformed to high shear strain at 1000 K (1200 K). A peak stress was reached at about γ=1 (γ=0.5) followed by moderate strain weakening. An apparent steady-state flow stress was obtained at high shear strain of γ>5 (γ>2). The stress exponent n decreased slowly with strain from 10 (γ=1) to 6 (γ=9) at 1000 K, but it remained approximately constant at 1200 K (n around 10). At the maximum reached shear strain of γ=11 (γ=8.5), the marble had almost completely recrystallized to a fine grain size of about 10 μm (20 μm). A secondary foliation developed in the recrystallized matrix, which is at a large oblique angle to the shear zone boundary (SZB). LPO was measured by electron backscatter diffraction (EBSD). For both temperatures, the LPO evolved from an oblique deformation texture to a very sharp and symmetric single orientation component with r{101̄4} parallel to the shear plane and a〈1̄21̄0〉 parallel to the shear direction. It is concluded that strain weakening was associated with the development of a strong LPO during dynamic recrystallization to a finer grain size. Mechanical and microstructural steady-state is only reached at large shear strain. The steady-state lattice and grain shape fabrics can hardly be used as shear sense indicators in such recrystallized calcite mylonites.
DOI: 10.1016/0168-9002(91)90491-8
1991
Cited 77 times
Hadron calorimetry in the L3 detector
The characteristics of the L3 hadron calorimeter as realized in the observation of hadronic jets and other events from e+e− collisions at LEP are presented and discussed. The pattern-recognition algorithm utilizing the fine granularity of the calorimeter is described, and the observed overall resolution of 10.2% for hadron jets from Z decay is reported. The use of the calorimeter in providing information on muon energy losses is also noted.
DOI: 10.1016/s0040-1951(00)00225-0
2001
Cited 89 times
Texture development of calcite by deformation and dynamic recrystallization at 1000 K during torsion experiments of marble to large strains
Torsion deformation experiments were performed on solid cylinders of Carrara marble at high temperature (1000 K) and constant twist rate (about 4×10−4 rad s−1) to large twist angles (between 80° and 840°). These conditions correspond to simple shear deformation at constant shear strain rate (3×10−4 s−1) and variable amounts of shear strain γ of 1, 2, 5 and 11 at the outer mantle of the sample cylinders. Lattice preferred orientations (LPO) were measured on polished thin sections using automated electron backscatter diffraction (EBSD) to analyze the microtextures in orientation imaging micrographs (OIM). Shear deformation produced first a microstructure of elongated grains with shape and orientation close to the finite strain ellipse of the imposed shear strain (γ=1 and 2). A deformation texture developed with monoclinic sample symmetry and an oblique c-axis distribution relative to the shear plane. With increasing shear, the marble recrystallized by subgrain rotation and nucleation of small grains, starting along grain boundaries, which ultimately replaced the whole deformation microstructure. A steady-state recrystallization microstructure evolved with nearly equant grains of about 10 μm in size (γ=5 and 11). The LPO changed completely from the monoclinic deformation texture into a distribution with a sharp single orientation component with r{101̄4} parallel to the shear plane and a〈1̄21̄0〉 parallel to the shear direction. This recrystallization texture has orthorhombic symmetry with respect to shear plane and shear direction. During the transition, the applied flow stress decreased. Simple shear deformation of calcite was modeled with the self-consistent polycrystal plasticity theory. It included a model for dynamic recrystallization based on a balance between growth and nucleation. Model results agree with experimental data if slip on r{101̄4}a〈1̄21̄0〉 is introduced as a potential slip system for calcite. In that case, the steady-state recrystallization texture has an "easy slip" preferred orientation. The hypothesis remains speculative until this slip system is actually confirmed by direct observation.
DOI: 10.1017/s0016756800009006
1996
Cited 69 times
Pliocene—Quaternary sedimentation in the Northern Apennine Foredeep and related denudation
The deposits of the Pliocene—Quaternary foredeep of the Northern Apennine cover at present an area of 103 000 km². The original boundaries of the basin are not known, since marginal deposits have been eroded, in particular those of the inner, southwestern border. During Pliocene times the basin area was reduced by thrust tectonics and the amount of shortening may be tentatively estimated. The present volume of Pliocene and Quaternary sediments may be inferred with good approximation from the maps of the base of the Pliocene and of the Quaternary (base of the Hyalinea balthica Zone) successions. The Pliocene volume has been corrected by adding the estimate of the underthrust sediments, while no correction has been attempted for the eroded marginal deposits. The estimates of 97 000 and 95 000 km³, reflecting the present volume of the Pliocene and Quaternary deposits, are therefore significantly less than the volumes originally deposited. Present volumes have been transformed into ‘net’ (0% porosity) volumes, in order to obtain the relative net supply rates: 0.021 (Pliocene) and 0.047 (Quaternary) km³/a. Other unmeasurable factors (volume variations due to the weathering of silicates, loss of leached carbonates) may induce a probably unimportant underestimate of the supply rates. Available data allow an approximate estimate of the range of the net volume of the Holocene sediments deposited during the last 6000 a BP (221–276 km³) and of the relative net supply rate (0.037–0.046 km³/a), which is not significantly different from the Quaternary one. Applying a porosity correction, these supply rates may be related to the Quaternary source area (128 000 km²) to obtain the relative denudation rates: 0.41–0.46 mm/a (Quaternary) and 0.36–0.51 mm/a (Holocene). Present supply and denudation rates, deduced from direct measurements of the bed load and suspended load of the Apenninic and Alpine rivers, do not differ significantly from the Quaternary and Holocene ones. Available data do not allow a reliable estimate of the Pliocene source area, and consequently of the Pliocene denudation rate. However, a minimum of 160 000–177 000 km³ has been eroded during Pliocene and Quaternary times. Assuming, as a working hypothesis, that the Pliocene source area did not significantly differ from the present one, an average thickness of 1240–1390 m could have been eroded since the beginning of the Pliocene. This estimate is in agreement with the values obtained from measurements of the coalification of vegetal organic matter in outcrops, and suggests that post-orogenic successions and ‘higher’ thrust sheets may have been completely removed in vast areas.
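The denudation rates quoted above follow from simple volume-per-time and volume-per-area arithmetic. A minimal sketch, using only the net supply rate and source area from the abstract, with illustrative (assumed) porosity values chosen merely to show how the quoted 0.41–0.46 mm/a range can arise:

```python
# Back-of-the-envelope check of the denudation-rate arithmetic in the abstract above.
# The net supply rate and source area are taken from the text; the porosity values are
# assumptions used only to illustrate the "porosity correction" step.

net_supply_km3_per_a = 0.047      # Quaternary net (0% porosity) supply rate
source_area_km2 = 128_000         # Quaternary source area

# Uniform lowering rate implied by the net supply, in mm/a (1 km = 1e6 mm).
net_rate_mm_per_a = net_supply_km3_per_a / source_area_km2 * 1e6
print(f"net lowering rate: {net_rate_mm_per_a:.2f} mm/a")    # ~0.37 mm/a

# Converting the net volume back to in-situ rock with an assumed porosity.
for porosity in (0.11, 0.20):     # hypothetical porosities of the eroded column
    print(f"porosity {porosity:.0%}: {net_rate_mm_per_a / (1 - porosity):.2f} mm/a")
# prints ~0.41 and ~0.46 mm/a, bracketing the Quaternary denudation rate quoted above
```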
DOI: 10.1140/epjc/s10052-010-1244-3
2010
Cited 46 times
From the LHC to future colliders
Discoveries at the LHC will soon set the physics agenda for future colliders. This report of a CERN Theory Institute includes the summaries of Working Groups that reviewed the physics goals and prospects of LHC running with 10 to 300 fb−1 of integrated luminosity, of the proposed sLHC luminosity upgrade, of the ILC, of CLIC, of the LHeC and of a muon collider. The four Working Groups considered possible scenarios for the first 10 fb−1 of data at the LHC in which (i) a state with properties that are compatible with a Higgs boson is discovered, (ii) no such state is discovered either because the Higgs properties are such that it is difficult to detect or because no Higgs boson exists, (iii) a missing-energy signal beyond the Standard Model is discovered as in some supersymmetric models, and (iv) some other exotic signature of new physics is discovered. In the contexts of these scenarios, the Working Groups reviewed the capabilities of the future colliders to study in more detail whatever new physics may be discovered by the LHC. Their reports provide the particle physics community with some tools for reviewing the scientific priorities for future colliders after the LHC produces its first harvest of new physics from multi-TeV collisions.
DOI: 10.1016/0264-8172(93)90044-s
1993
Cited 53 times
Petroleum exploration in Italy: a review
Petroleum exploration in Italy began in the second half of the 19th century, but the first consistent discoveries were only made after World War II. At present, the original reserves of the Italian petroleum fields amount to about 160 million tons of oil and 720 billion cubic metres of gas; domestic production in 1990 was 4.6 million tons of oil and 17.3 billion cubic metres of gas. The lithology of the Italian sedimentary sequences was controlled by the geodynamic evolution of the western margin of the Adria plate. The Permian-Triassic continental and shallow water environments pre-dating the break-up of Pangea were followed by subsiding carbonate platforms and basins in the Mesozoic. Tertiary compressive tectonics, induced by the subduction of the Adria continental margin, produced a complex thrust belt, bordered by a migrating foredeep filled with clastic sediments. Most of the oil and a minor part of the gas were generated by Middle Triassic-Lower Jurassic shaly and carbonate source rocks. Most of the gas was sourced by Pliocene-Pleistocene clays in the foredeep turbiditic deposits and has a bacterial origin. A short review, integrated with some actual examples of typical fields (Cortemaggiore, Costa Molina, Settala, Agostino-Porto Garibaldi, Barbara, Candela-Palino, Rospo, Vega), exemplifies the different plays in the main tectonic domains.
DOI: 10.48550/arxiv.2306.06316
2023
Cited 3 times
Optimal 1D Ly$\alpha$ Forest Power Spectrum Estimation -- III. DESI early data
The one-dimensional power spectrum $P_{\mathrm{1D}}$ of the Ly$\alpha$ forest provides important information about cosmological and astrophysical parameters, including constraints on warm dark matter models, the sum of the masses of the three neutrino species, and the thermal state of the intergalactic medium. We present the first measurement of $P_{\mathrm{1D}}$ with the quadratic maximum likelihood estimator (QMLE) from the Dark Energy Spectroscopic Instrument (DESI) survey early data sample. This early sample of $54~600$ quasars is already comparable in size to the largest previous studies, and we conduct a thorough investigation of numerous instrumental and analysis systematic errors to evaluate their impact on DESI data with QMLE. We demonstrate the excellent performance of the spectroscopic pipeline noise estimation and the impressive accuracy of the spectrograph resolution matrix with two-dimensional image simulations of raw DESI images that we processed with the DESI spectroscopic pipeline. We also study metal line contamination and noise calibration systematics with quasar spectra on the red side of the Ly$\alpha$ emission line. In a companion paper, we present a similar analysis based on the Fast Fourier Transform estimate of the power spectrum. We conclude with a comparison of these two approaches and implications for the upcoming DESI Year 1 analysis.
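For orientation, a quadratic maximum likelihood estimator of band powers has the generic textbook form below; this is a sketch of the standard construction, not necessarily the exact conventions of this paper, where d is the data vector, C its covariance, Q_beta the response of C to band power beta, b_beta the noise/fiducial bias term, and F the Fisher matrix:

```latex
% Generic quadratic band-power estimator (textbook form; definitions of d, C,
% Q_\beta, b_\beta and F follow the paper's own conventions in detail).
\hat{\theta}_\alpha = \frac{1}{2} \sum_\beta (F^{-1})_{\alpha\beta}
  \left[ d^{\mathsf T} C^{-1} Q_\beta C^{-1} d - b_\beta \right],
\qquad
F_{\alpha\beta} = \frac{1}{2}\,\mathrm{Tr}\!\left[ C^{-1} Q_\alpha C^{-1} Q_\beta \right].
```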
DOI: 10.1088/1742-6596/219/2/022011
2010
Cited 23 times
The CMS data acquisition system software
The CMS data acquisition system is made of two major subsystems: event building and event filter. The presented paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies on industry-standard networks and processing equipment. Adopting a single software infrastructure in all subsystems of the experiment imposes, however, a number of different requirements. High efficiency and configuration flexibility are among the most important ones. The XDAQ software infrastructure has matured over an eight-year development and testing period and has been shown to cope well with the requirements of the CMS experiment.
DOI: 10.1051/epjconf/202429502031
2024
Towards a container-based architecture for CMS data acquisition
The CMS data acquisition (DAQ) is implemented as a service-oriented architecture where DAQ applications, as well as general applications such as monitoring and error reporting, are run as self-contained services. The task of deployment and operation of services is achieved by using several heterogeneous facilities, custom configuration data and scripts in several languages. In this work, we restructure the existing system into a homogeneous, scalable cloud architecture adopting a uniform paradigm, where all applications are orchestrated in a uniform environment with standardized facilities. In this new paradigm, DAQ applications are organized as groups of containers and the required software is packaged into container images. Automation of all aspects of coordinating and managing containers is provided by the Kubernetes environment, where a set of physical and virtual machines is unified in a single pool of compute resources. We demonstrate that a container-based cloud architecture provides an across-the-board solution that can be applied for DAQ in CMS. We show strengths and advantages of running DAQ applications in a container infrastructure as compared to a traditional application model.
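As a purely illustrative sketch of the container-based approach described above (application name, image tag and port are hypothetical, not the actual CMS DAQ configuration), a containerized DAQ application could be declared to Kubernetes roughly like this, here built as a Python dict and serialized to a manifest:

```python
# Minimal sketch of a Kubernetes Deployment for a hypothetical "daq-readout" container.
# Real deployments would add resource requests, node affinity, config volumes, etc.
import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "daq-readout", "labels": {"app": "daq-readout"}},
    "spec": {
        "replicas": 4,  # e.g. one readout application per assigned node
        "selector": {"matchLabels": {"app": "daq-readout"}},
        "template": {
            "metadata": {"labels": {"app": "daq-readout"}},
            "spec": {
                "containers": [{
                    "name": "readout",
                    "image": "registry.example.org/daq/readout:1.0",  # hypothetical image
                    "ports": [{"containerPort": 9000}],
                }]
            },
        },
    },
}

# Serialize so the manifest could be applied with `kubectl apply -f deployment.json`.
print(json.dumps(deployment, indent=2))
```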
DOI: 10.1051/epjconf/202429502013
2024
First year of experience with the new operational monitoring tool for data taking in CMS during Run 3
The Online Monitoring System (OMS) at the Compact Muon Solenoid experiment (CMS) at CERN aggregates and integrates different sources of information into a central place and allows users to view, compare and correlate information. It displays real-time and historical information. The tool is heavily used by run coordinators, trigger experts and shift crews, to ensure the quality and efficiency of data taking. It provides aggregated information for many use cases including data certification. OMS is the successor of Web Based Monitoring (WBM), which was in use during Run 1 and Run 2 of the LHC. WBM started as a small tool and grew substantially over the years so that maintenance became challenging. OMS was developed from scratch following several design ideas: to strictly separate the presentation layer from the data aggregation layer, to use a well-defined standard for the communication between presentation layer and aggregation layer, and to employ widely used frameworks from outside the HEP community. A report on the experience from the operation of OMS for the first year of data taking of Run 3 in 2022 is presented.
DOI: 10.1051/epjconf/202429502020
2024
MiniDAQ-3: Providing concurrent independent subdetector data-taking on CMS production DAQ resources
The data acquisition (DAQ) of the Compact Muon Solenoid (CMS) experiment at CERN, collects data for events accepted by the Level-1 Trigger from the different detector systems and assembles them in an event builder prior to making them available for further selection in the High Level Trigger, and finally storing the selected events for offline analysis. In addition to the central DAQ providing global acquisition functionality, several separate, so-called “MiniDAQ” setups allow operating independent data acquisition runs using an arbitrary subset of the CMS subdetectors. During Run 2 of the LHC, MiniDAQ setups were running their event builder and High Level Trigger applications on dedicated resources, separate from those used for the central DAQ. This cleanly separated MiniDAQ setups from the central DAQ system, but also meant limited throughput and a fixed number of possible MiniDAQ setups. In Run 3, MiniDAQ-3 setups share production resources with the new central DAQ system, allowing each setup to operate at the maximum Level-1 rate thanks to the reuse of the resources and network bandwidth. Configuration management tools had to be significantly extended to support the synchronization of the DAQ configurations needed for the various setups. We report on the new configuration management features and on the first year of operational experience with the new MiniDAQ-3 system.
DOI: 10.1051/epjconf/202429502011
2024
The CMS Orbit Builder for the HL-LHC at CERN
The Compact Muon Solenoid (CMS) experiment at CERN incorporates one of the highest throughput data acquisition systems in the world and is expected to increase its throughput by more than a factor of ten for the High-Luminosity phase of the Large Hadron Collider (HL-LHC). To achieve this goal, the system will be upgraded in most of its components. Among them, the event builder software, in charge of assembling all the data read out from the different sub-detectors, is planned to be modified from a single event builder to an orbit builder that assembles multiple events at the same time. The throughput of the event builder will be increased from the current 1.6 Tb/s to 51 Tb/s for the HL-LHC orbit builder. This paper presents preliminary network transfer studies in preparation for the upgrade. The key conceptual characteristics are discussed, concerning differences between the CMS event builder in Run 3 and the CMS Orbit Builder for the HL-LHC. For the feasibility studies, a pipestream benchmark mimicking event-builder-like traffic has been developed. Preliminary performance tests and results are discussed.
DOI: 10.1088/1748-0221/17/05/c05003
2022
Cited 6 times
CMS phase-2 DAQ and timing hub prototyping results and perspectives
Abstract This paper describes recent progress on the design of the DAQ and Timing Hub, or DTH, an ATCA (Advanced Telecommunications Computing Architecture) hub board intended for the phase-2 upgrade of the CMS experiment. Prototyping was originally divided into multiple feature lines, spanning all different aspects of the DTH functionality. The second DTH prototype merges all R&D and prototyping lines into a single board, which is intended to be the production candidate. Emphasis is on the process and experience in going from the first to the second DTH prototype, which included a change of the chosen FPGA as well as the integration of a commercial networking solution.
DOI: 10.1016/s0168-9002(00)00182-0
2000
Cited 26 times
New results on silicon microstrip detectors of CMS tracker
Interstrip and backplane capacitances on silicon microstrip detectors with p+ strips on n substrate of 320 μm thickness were measured for pitches between 60 and 240 μm and width over pitch ratios between 0.13 and 0.5. Parametrisations of capacitance with respect to pitch and width were compared with data. The detectors were measured before and after being irradiated to a fluence of 4×10¹⁴ protons/cm² of 24 GeV/c momentum. The effect of the crystal orientation of the silicon has been found to have a relevant influence on the surface radiation damage, favouring the choice of a 〈100〉 substrate. Working at high bias (up to 500 V in CMS) might be critical for the stability of the detector, for a small width over pitch ratio. The influence of having a metal strip larger than the p+ implant has been studied and found to enhance the stability.
DOI: 10.1088/1742-6596/331/2/022021
2011
Cited 13 times
The data-acquisition system of the CMS experiment at the LHC
The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given. We discuss the performance and operational experience from the first months of LHC physics data taking.
DOI: 10.1088/1748-0221/8/12/c12039
2013
Cited 12 times
10 Gbps TCP/IP streams from the FPGA for the CMS DAQ eventbuilder network
For the upgrade of the DAQ of the CMS experiment in 2013/2014 an interface between the custom detector Front End Drivers (FEDs) and the new DAQ eventbuilder network has to be designed. For a loss-less data collection from more than 600 FEDs a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. We present the hardware challenges and protocol modifications made to TCP in order to simplify its FPGA implementation together with a set of performance measurements which were carried out with the current prototype.
DOI: 10.1088/1742-6596/513/1/012042
2014
Cited 11 times
10 Gbps TCP/IP streams from the FPGA for High Energy Physics
The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For a loss-less data collection from the FEDs a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the TCP hardware implementation complexity the DAQ group developed a simplified and unidirectional, but RFC 793 compliant, version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as a receiver. We present the challenges and protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
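Because the FPGA side speaks standard, if unidirectional, TCP, the receiving end can be an ordinary socket application on Linux. A minimal sketch of such a receiver (the port number and the byte-counting loop are illustrative assumptions, not the actual FED data format):

```python
# Minimal sketch of a plain TCP receiver standing in for the Linux-side readout
# application: the simplified FPGA TCP stack only sends, so the host only reads.
import socket

LISTEN_PORT = 10000  # hypothetical port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", LISTEN_PORT))
    srv.listen(1)
    conn, peer = srv.accept()
    print("stream from", peer)
    received = 0
    with conn:
        while True:
            chunk = conn.recv(1 << 20)   # read up to 1 MiB at a time
            if not chunk:                # sender closed the connection
                break
            received += len(chunk)       # a real application would unpack event fragments here
    print(f"received {received} bytes")
```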
DOI: 10.1109/tns.2015.2426216
2015
Cited 11 times
The New CMS DAQ System for Run-2 of the LHC
The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for a reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder with a throughput of ~4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT accepted events and monitoring meta-data are stored into a global file system. This paper presents the requirements, technical choices, and performance of the new system.
DOI: 10.1109/tns.2007.914036
2008
Cited 14 times
CMS DAQ Event Builder Based on Gigabit Ethernet
The CMS data acquisition system is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called front-end driver (FED) builders. These will be based on Myrinet technology and will pre-assemble groups of about eight data sources. The second stage will be a set of event builders called readout builders. These will perform the building of full events. A single readout builder will build events from about 60 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper, we present the design of a readout builder based on TCP/IP over Gigabit Ethernet and the refinement that was required to achieve the design throughput. This refinement includes architecture of the readout builder, the setup of TCP/IP, and hardware selection.
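The readout-builder sizing quoted above can be checked with simple arithmetic; a short illustrative sketch using only the numbers given in the abstract:

```python
# Back-of-the-envelope check of the readout-builder numbers quoted above.
sources = 60            # data sources per readout builder
fragment_kb = 16        # fragment size per source, in kB
rate_khz = 12.5         # event rate per readout builder, in kHz

event_size_kb = sources * fragment_kb               # ~960 kB assembled event
throughput_gb_s = event_size_kb * rate_khz / 1e3    # kB * kHz = MB/s; /1e3 -> GB/s
print(f"event size ~{event_size_kb} kB, "
      f"throughput ~{throughput_gb_s:.1f} GB/s per readout builder")
# Eight such slices would correspond to the overall 100 kHz x ~1 MB = ~100 GB/s
# design figure quoted elsewhere in this list.
```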
2023
The Lyman-$\alpha$ forest catalog from the Dark Energy Spectroscopic Instrument Early Data Release
We present and validate the catalog of Lyman-$\alpha$ forest fluctuations for 3D analyses using the Early Data Release (EDR) from the Dark Energy Spectroscopic Instrument (DESI) survey. We used 88,511 quasars collected from DESI Survey Validation (SV) data and the first two months of the main survey (M2). We present several improvements to the method used to extract the Lyman-$\alpha$ absorption fluctuations performed in previous analyses from the Sloan Digital Sky Survey (SDSS). In particular, we modify the weighting scheme and show that it can improve the precision of the correlation function measurement by more than 20%. This catalog can be downloaded from https://data.desi.lbl.gov/public/edr/vac/edr/lya/fuji/v0.3 and it will be used in the near future for the first DESI measurements of the 3D correlations in the Lyman-$\alpha$ forest.
DOI: 10.1016/s0370-2693(97)01082-4
1997
Cited 21 times
Measurements of mass, width and gauge couplings of the W boson at LEP
We report on measurements of the mass and total decay width of the W boson and of triple-gauge-boson couplings, γWW and ZWW, with the L3 detector at LEP. W-pair events produced in e+e− interactions between 161 GeV and 172 GeV centre-of-mass energy are selected in a data sample corresponding to a total luminosity of 21.2 pb−1. The mass and total decay width of the W boson are determined to be MW = 80.75 +0.26/−0.27 (exp.) ± 0.03 (LEP) GeV and ΓW = 1.74 +0.88/−0.78 (stat.) ± 0.25 (syst.) GeV, respectively. Limits on anomalous triple-gauge-boson couplings, γWW and ZWW, are determined, in particular −1.5 < δZ < 1.9 (95% CL), excluding vanishing ZWW coupling at more than 95% confidence level.
DOI: 10.1109/nssmic.2015.7581984
2015
Cited 8 times
The CMS Timing and Control Distribution System
The Compact Muon Solenoid (CMS) experiment operating at the CERN (European Organization for Nuclear Research) Large Hadron Collider (LHC) is in the process of upgrading several of its detector systems. Adding more individual detector components brings the need to test and commission those components separately from existing ones so as not to compromise physics data-taking. The CMS Trigger, Timing and Control (TTC) system had reached its limits in terms of the number of separate elements (partitions) that could be supported. A new Timing and Control Distribution System (TCDS) has been designed, built and commissioned in order to overcome this limit. It also brings additional functionality to facilitate parallel commissioning of new detector elements. The new TCDS system and its components will be described and results from the first operational experience with the TCDS in CMS will be shown.
DOI: 10.1088/1742-6596/119/2/022010
2008
Cited 9 times
The run control system of the CMS experiment
The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control and monitor the experiment during data-taking the Run Control system was developed. This paper describes the architecture and the technology used to implement the Run Control system, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.
DOI: 10.1109/tns.2007.911884
2008
Cited 8 times
The Terabit/s Super-Fragment Builder and Trigger Throttling System for the Compact Muon Solenoid Experiment at CERN
The Data Acquisition System of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kB from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event-building is performed by the Super-Fragment Builder employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. Back-pressure from the down-stream event-processing or variations in the size and rate of events may give rise to buffer overflows in the subdetector's front-end electronics, which would result in data corruption and would require a time-consuming re-sync procedure to recover. The Trigger-Throttling System protects against these buffer overflows. It provides fast feedback from any of the subdetector front-ends to the trigger so that the trigger can be throttled before buffers overflow. This paper reports on new performance measurements and on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major subdetectors. The on-going commissioning of the full-scale system is discussed.
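The protection mechanism described above amounts to fast feedback from front-end buffer occupancy to the trigger. A deliberately simplified sketch of threshold-based throttling logic (the states and thresholds are illustrative, not the actual TTS protocol or hardware signalling):

```python
# Deliberately simplified model of buffer-occupancy-driven trigger throttling.
# The real Trigger-Throttling System uses dedicated hardware signals and per-FED
# state inputs; the thresholds below are invented for illustration.

WARNING_LEVEL = 0.60   # assumed occupancy fraction at which the trigger is slowed
BUSY_LEVEL = 0.85      # assumed occupancy fraction at which triggers are stopped

def throttle_state(buffer_occupancies):
    """Return the most restrictive state requested by any front-end buffer."""
    worst = max(buffer_occupancies)
    if worst >= BUSY_LEVEL:
        return "BUSY"      # stop Level-1 accepts before any buffer overflows
    if worst >= WARNING_LEVEL:
        return "WARNING"   # reduce the trigger rate
    return "READY"

print(throttle_state([0.20, 0.35, 0.10]))  # READY
print(throttle_state([0.20, 0.70, 0.10]))  # WARNING
print(throttle_state([0.20, 0.90, 0.10]))  # BUSY
```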
DOI: 10.1088/1742-6596/219/2/022042
2010
Cited 7 times
Monitoring the CMS data acquisition system
The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that detection of deviations from a specified behaviour is supported in a timely manner, which is a prerequisite in order to take corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting up data collections into regions of collections. An implementation following this scheme is deployed as the monitoring infrastructure of the CMS experiment at the Large Hadron Collider. All services in this distributed data acquisition system are providing standard web service interfaces via XML, SOAP and HTTP [15,22]. Continuing on this path we adopted WS-* standards implementing a monitoring system layered on top of the W3C standards stack. We designed a load-balanced publisher/subscriber system with the ability to include high-speed protocols [10,12] for efficient data transmission [11,13,14] and serving data in multiple data formats.
DOI: 10.1088/1742-6596/219/2/022038
2010
Cited 7 times
The CMS event builder and storage system
The CMS event builder assembles events accepted by the first level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of 2 stages: an initial pre-assembly reducing the number of fragments by one order of magnitude and a final assembly by several independent readout builder (RU-builder) slices. The RU-builder is based on 3 separate services: the buffering of event fragments during the assembly, the event assembly, and the data flow manager. A further component is responsible for handling events accepted by the high-level trigger: the storage manager (SM) temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the SM to online monitoring clients. We discuss the operational experience from the first months of reading out cosmic ray data with the complete CMS detector.
DOI: 10.1088/1742-6596/396/1/012008
2012
Cited 7 times
The CMS High Level Trigger System: Experience and Future Development
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
DOI: 10.1109/tns.2012.2199331
2012
Cited 6 times
First Operational Experience With a High-Energy Physics Run Control System Based on Web Technologies
Run control systems of modern high-energy particle physics experiments have requirements similar to those of today's Internet applications. The Compact Muon Solenoid (CMS) collaboration at CERN's Large Hadron Collider (LHC) therefore decided to build the run control system for its detector based on web technologies. The system is composed of Java Web Applications distributed over a set of Apache Tomcat servlet containers that connect to a database back-end. Users interact with the system through a web browser. The present paper reports on the successful scaling of the system from a small test setup to the production data acquisition system that comprises around 10,000 applications running on a cluster of about 1600 hosts. We report on operational aspects during the first phase of operation with colliding beams including performance, stability, integration with the CMS Detector Control System and tools to guide the operator.
DOI: 10.1109/tns.2013.2282340
2013
Cited 6 times
A Comprehensive Zero-Copy Architecture for High Performance Distributed Data Acquisition Over Advanced Network Technologies for the CMS Experiment
This paper outlines a software architecture where zero-copy operations are used comprehensively at every processing point from the Application layer to the Physical layer. The proposed architecture is being used during feasibility studies on advanced networking technologies for the CMS experiment at CERN. The design relies on a homogeneous peer-to-peer message passing system, which is built around memory pool caches allowing efficient and deterministic latency handling of messages of any size through the different software layers. In this scheme portable distributed applications can be programmed to process input to output operations by mere pointer arithmetic and DMA operations only. The approach combined with the open fabric protocol stack (OFED) allows one to attain near wire-speed message transfer at application level. The architecture supports full portability of user applications by encapsulating the protocol details and network into modular peer transport services whereas a transparent replacement of the underlying protocol facilitates deployment of several network technologies like Gigabit Ethernet, Myrinet, Infiniband, etc. Therefore, this solution provides a protocol-independent communication framework and prevents having to deal with potentially difficult couplings when the underlying communication infrastructure is changed. We demonstrate the feasibility of this approach by giving efficiency and performance measurements of the software in the context of the CMS distributed event building studies.
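The zero-copy idea described above (handing references into a pre-allocated memory pool between layers instead of copying payloads) can be illustrated in Python with memoryview, although the real system is C++ with memory pools, DMA and OFED verbs; this sketch is an analogy only:

```python
# Illustration only: a memoryview is a reference ("view") into an existing buffer,
# so passing it between processing stages involves no byte copying, which is the
# essence of the zero-copy approach described above.

pool = bytearray(1 << 20)          # a pre-allocated 1 MiB "memory pool"
pool[0:4] = b"\xde\xad\xbe\xef"    # pretend a network card wrote a fragment here

fragment = memoryview(pool)[0:4]   # hand the fragment to the next layer: no copy,
                                   # only an offset/length into the same buffer
print(fragment.tobytes().hex())    # deadbeef

copied = bytes(pool[0:4])          # by contrast, slicing the bytearray copies data
print(copied.hex())
```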
DOI: 10.22323/1.213.0190
2015
Cited 6 times
Boosting Event Building Performance using Infiniband FDR for the CMS Upgrade
As part of the CMS upgrade during CERN's shutdown period (LS1), the CMS data acquisition system is incorporating Infiniband FDR technology to boost event-building performance for operation from 2015 onwards. Infiniband promises to provide a substantial increase in data transmission speeds compared to the older 1GE network used during the 2009-2013 LHC run. Several options exist to end user developers when choosing a foundation for software upgrades, including the uDAPL (DAT Collaborative) and Infiniband verbs libraries (OFED). Due to advances in technology, the CMS data acquisition system will be able to achieve the required throughput of 100 kHz with increased event sizes while downsizing the number of nodes by using a combination of 10GE, 40GE and 56 Gb Infiniband FDR. This paper presents the analysis and results of a comparison between GE and Infiniband solutions as well as a look at how they integrate into an event building architecture, while preserving the scalability, efficiency and deterministic latency expected in a high-end data acquisition network.
DOI: 10.22323/1.370.0111
2020
Cited 6 times
First measurements with the CMS DAQ and Timing Hub prototype-1
The DAQ and Timing Hub is an ATCA hub board designed for the Phase-2 upgrade of the CMS experiment. In addition to providing high-speed Ethernet connectivity to all back-end boards, it forms the bridge between the sub-detector electronics and the central DAQ, timing, and trigger control systems. One important requirement is the distribution of several high-precision, phase-stable, and LHC-synchronous clock signals for use by the timing detectors. The current paper presents first measurements performed on the initial prototype, with a focus on clock quality. It is demonstrated that the current design provides adequate clock quality to satisfy the requirements of the Phase-2 CMS timing detectors.
DOI: 10.1051/epjconf/202125104023
2021
Cited 5 times
The Phase-2 Upgrade of the CMS Data Acquisition
The High Luminosity LHC (HL-LHC) will start operating in 2027 after the third Long Shutdown (LS3), and is designed to provide an ultimate instantaneous luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹, at the price of extreme pileup of up to 200 interactions per crossing. The number of overlapping interactions in HL-LHC collisions, their density, and the resulting intense radiation environment, warrant an almost complete upgrade of the CMS detector. The upgraded CMS detector will be read out by approximately fifty thousand high-speed front-end optical links at an unprecedented data rate of up to 80 Tb/s, for an average expected total event size of approximately 8–10 MB. Following the present established design, the CMS trigger and data acquisition system will continue to feature two trigger levels, with only one synchronous hardware-based Level-1 Trigger (L1), consisting of custom electronic boards and operating on dedicated data streams, and a second level, the High Level Trigger (HLT), using software algorithms running asynchronously on standard processors and making use of the full detector data to select events for offline storage and analysis. The upgraded CMS data acquisition system will collect data fragments for Level-1 accepted events from the detector back-end modules at a rate up to 750 kHz, aggregate fragments corresponding to individual Level-1 accepts into events, and distribute them to the HLT processors where they will be filtered further. Events accepted by the HLT will be stored permanently at a rate of up to 7.5 kHz. This paper describes the baseline design of the DAQ and HLT systems for Phase-2 of CMS.
DOI: 10.1016/s0168-9002(97)00750-x
1997
Cited 14 times
Beam test results for single- and double-sided silicon detector prototypes of the CMS central detector
We report the results of two beam tests performed in July and September 1995 at CERN using silicon microstrip detectors of various types: single sided, double sided with small angle stereo strips, double sided with orthogonal strips, double sided with pads. For the read-out electronics use was made of Preshape32, Premux128 and VA1 chips. The signal to noise ratio and the resolution of the detectors was studied for different incident angles of the incoming particles and for different values of the detector bias voltage. The goal of these tests was to check and improve the performances of the prototypes for the CMS Central Detector.
DOI: 10.1093/bja/aeq225
2010
Cited 5 times
Self-citation in anaesthesia and critical care journals: introducing a flat tax
Editor—Peer assessment of ‘impact’ in medical research is often unclear and difficult to establish. In recent years, publication-based ‘impact factor’ has been increasingly used to assess and quantify the quality of a given journal and estimate its total scientific impact. The impact factor (IF) is calculated by dividing the number of all current citations of source items from a journal during the previous 2 yr by the total number of articles published in that same journal during those 2 yr. Frequent self-citation can affect the IF of a journal. This phenomenon is widespread and has already been noted by other authors [1]. We have analysed the self-citation trend of all 41 IF journals listed under anaesthesia and critical care and its effect on IF. The Journal Citation Reports® (JCR®) of the ISI Web of Knowledge database was searched for journals with a 2008 IF included in the subject categories ‘Anesthesiology’ (22 journals) and ‘Critical Care Medicine’ (21 journals), giving a total of 41 journals, since only two journals officially belonged to both categories. We retrieved data from January 1, 1999, to January 1, 2009, for each journal, and extracted the following information: title, impact factor, impact factor without self-citation, number of papers published per year, total number of citations received per year, and number of self-citations per year. From these data, we calculated the self-citation rate for each year (ratio of a journal’s self-citations to the number of times it is cited by all other journals) [2]. The median percentage of self-citation in the 10 yr study period increased slightly during the first 8 yr (from 6.6% in 1999 to 11.5% in 2006, P=0.2), and dramatically increased thereafter (21% in 2007 and 44.4% in 2008, P<0.001) (Fig. 1). We also calculated a new IF that did not take into account self-citation exceeding 20% of the total and called it ‘new IF 20%’. In particular, when the self-citation percentage was more than 20% of the total, we applied the following formula (data refer to the period 2006–7 to compare the new IF 20% with the 2008 IF): IF = [number of non-self-citations + (0.2 × total number of citations)] / number of papers. Applying the ‘new IF 20%’ did not change any journal rankings by more than four positions, and had only a minimal effect on the rankings of the top 10 journals. However, we are convinced that in the near future, these variations could be striking. Correcting the IF would limit the extensive use of self-citation by authors and editors and contribute to the publication of more broadly and scientifically based references. Our study confirms that journal self-citation is increasing, and we suggest that the introduction of a ‘new IF 20%’ might stop this trend. The misuse of self-citation can be explained by the fact that the IF is the most widely used method to rank biomedical journals. Editors are always looking for new ways to increase the IF of their journals to attract papers of the highest quality. As suggested by other authors, one way in which journals attempt to influence their own IF is by asking submitting authors to include recent articles published in the journal itself in their references [3]. The situation could get worse over the next few years and self-citation might one day account for the highest percentage of a journal’s IF. Previous work has already suggested adjustments to the methods used to calculate IF, such as omitting self-citation and inter-field impact factor normalization [4]. We acknowledge that the high frequency of self-citation can have numerous legitimate explanations. For example, an author may prefer to submit his manuscript to a journal that has previously published relevant work in that area; many references in the paper will derive from papers previously published in that particular journal. We also cannot ignore that there are some natural consequences of a higher IF: authors, and their employers, wish to have their articles published in the highest-IF journal. Therefore, there will be a tendency for the more influential articles to appear in the higher-IF journals, and subsequent authors working in these areas of research will refer to the same journals. Furthermore, we realize that our analysis might overestimate the magnitude of the problem, since self-citation is probably performed before citation from other journals, which is consistent with the finding that the peak number of citations is generally within 2 or 3 yr of publication [5,6]. In conclusion, self-citation is increasing, and we suggest a flat tax and a ‘new IF 20%’. None declared. References: [1] Tramer MR. EJA 2010: new style, new team, new energy—what we want and what we do not want. Eur J Anaesthesiol 2010; 27: 1–2. [2] Science Citation Index, Journal Citation Reports. Philadelphia, PA: Institute for Scientific Information, 1997. [3] Smith R. Journal accused of manipulating impact factor. Br Med J 1997; 314: 461–462. [4] Fassoulaki A, Papilas K, Paraskeva A, Patris K. Impact factor bias and proposed adjustments for its determination. Acta Anaesthesiol Scand 2002; 46: 902–905. [5] Yang H, Pan BC. Citation classics in fertility and sterility, 1975–2004. Shengyu yu Buyun 2006; 86: 795–797. [6] Yang H. The top 40 citation classics in the Journal of the American Society for Information Science and Technology. Scientometrics 2009; 78: 421–426.
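The correction proposed above is a simple capped-self-citation formula; a minimal sketch with made-up citation counts (purely illustrative, not data from the study):

```python
# 'New IF 20%' as described above: self-citations are only counted up to 20% of a
# journal's total citations. The example numbers below are invented for illustration.

def new_if_20(total_citations, self_citations, papers):
    """Impact factor with self-citations capped at 20% of total citations."""
    if self_citations <= 0.2 * total_citations:
        return total_citations / papers              # standard IF, no correction needed
    non_self = total_citations - self_citations
    return (non_self + 0.2 * total_citations) / papers

# Hypothetical journal: 1000 citations to the previous two years' 400 papers,
# 440 of which are self-citations (44%, similar to the 2008 median quoted above).
print(f"uncorrected IF: {1000 / 400:.2f}")                   # 2.50
print(f"new IF 20%:     {new_if_20(1000, 440, 400):.2f}")    # 1.90
```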
DOI: 10.1088/1742-6596/513/1/012025
2014
Cited 4 times
Prototype of a File-Based High-Level Trigger in CMS
The DAQ system of the CMS experiment at the LHC is upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ~1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ~50 builder units (BUs). Each BU writes the raw events at ~2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring meta-data back to disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.
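A heavily simplified sketch of the filter-unit side of such a file-based scheme (directory names, file naming and the random 1% accept decision are hypothetical placeholders, not the actual CMS formats):

```python
# Heavily simplified filter-unit loop for a file-based HLT: watch a shared directory
# for raw-event files written by a builder unit, "select" ~1% of them, and write the
# accepted data plus a small JSON monitoring document back to an output directory.
import json
import random
from pathlib import Path

INPUT_DIR = Path("/fff/ramdisk/run000001")   # hypothetical BU output directory
OUTPUT_DIR = Path("/fff/output/run000001")   # hypothetical FU output directory
ACCEPT_FRACTION = 0.01                       # stand-in for the real HLT decision

OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

for raw_file in sorted(INPUT_DIR.glob("*.raw")):
    data = raw_file.read_bytes()
    accepted = random.random() < ACCEPT_FRACTION   # real code runs reconstruction + filters
    if accepted:
        (OUTPUT_DIR / raw_file.name).write_bytes(data)
    monitoring = {"file": raw_file.name, "bytes": len(data), "accepted": accepted}
    (OUTPUT_DIR / (raw_file.stem + ".json")).write_text(json.dumps(monitoring))
```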
DOI: 10.1088/1742-6596/664/8/082036
2015
Cited 4 times
A scalable monitoring for the CMS Filter Farm based on elasticsearch
A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm, thus providing natural horizontal scaling. A separate "central" es cluster is used to collect and index aggregated information. The fine-grained information, all the way down to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
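Since elasticsearch is schema-free and speaks JSON over HTTP, indexing one of the small monitoring documents mentioned above needs nothing more than a POST to the cluster. A sketch (the cluster URL, index name and document fields are hypothetical assumptions, not the actual CMS schema):

```python
# Index a small JSON monitoring document into elasticsearch over its REST API.
import json
import urllib.request

doc = {
    "host": "fu-c2e34-12-01",        # hypothetical filter-unit host name
    "stream": "PhysicsStream",
    "events_processed": 125000,
    "events_accepted": 1210,
}

req = urllib.request.Request(
    "http://localhost:9200/hlt-monitoring/_doc",   # POST /<index>/_doc creates a document
    data=json.dumps(doc).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["result"])        # "created" on success
```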
DOI: 10.1109/tns.2007.910980
2008
Cited 5 times
The CMS High Level Trigger System
The CMS data acquisition (DAQ) system relies on a purely software driven high level trigger (HLT) to reduce the full Level 1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the filter farm and the procedures to validate the filtering code within the DAQ environment are described.
DOI: 10.1088/1742-6596/396/1/012023
2012
Cited 4 times
Status of the CMS Detector Control System
The Compact Muon Solenoid (CMS) is a CERN multi-purpose experiment that exploits the physics of the Large Hadron Collider (LHC). The Detector Control System (DCS) is responsible for ensuring the safe, correct and efficient operation of the experiment, and has contributed to the recording of high quality physics data. The DCS is programmed to automatically react to the LHC operational mode. CMS sub-detectors' bias voltages are set depending on the machine mode and particle beam conditions. An operator provided with a small set of screens supervises the system status summarized from the approximately 6M monitored parameters. Using the experience of nearly two years of operation with beam the DCS automation software has been enhanced to increase the system efficiency by minimizing the time required by sub-detectors to prepare for physics data taking. From the infrastructure point of view the DCS will be subject to extensive modifications in 2012. The current rack mounted control PCs will be replaced by a redundant pair of DELL Blade systems. These blade servers are a high-density modular solution that incorporates servers and networking into a single chassis that provides shared power, cooling and management. This infrastructure modification associated with the migration to blade servers will challenge the DCS software and hardware factorization capabilities. The on-going studies for this migration together with the latest modifications are discussed in the paper.
DOI: 10.1051/epjconf/202024501032
2020
Cited 4 times
40 MHz Level-1 Trigger Scouting for CMS
The CMS experiment will be upgraded for operation at the High-Luminosity LHC to maintain and extend its physics performance under extreme pileup conditions. Upgrades will include an entirely new tracking system, supplemented by a track finder processor providing tracks at Level-1, as well as a high-granularity calorimeter in the endcap region. New front-end and back-end electronics will also provide the Level-1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded Level-1 processors, based on powerful FPGAs, will be able to carry out sophisticated feature searches with resolutions often similar to the offline ones, while keeping pileup effects under control. In this paper, we discuss the feasibility of a system capturing Level-1 intermediate data at the beam-crossing rate of 40 MHz and carrying out online analyses based on these limited-resolution data. This 40 MHz scouting system would provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements and, in some cases, calibrations. It has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the Level-1 accept budget, or with requirements which are orthogonal to “mainstream” physics, such as long-lived particles. We discuss the requirements and possible architecture of a 40 MHz scouting system, as well as some of the physics potential, and results from a demonstrator operated at the end of Run-2 using the Global Muon Trigger data from CMS. Plans for further demonstrators envisaged for Run-3 are also discussed.
DOI: 10.1016/b978-0-444-59496-9.50046-1
2012
Cited 4 times
On-line Monitoring of Gas Turbines to Improve Their Availability, Reliability, and Performance Using Both Process and Vibration Data
DOI: 10.1109/rtc.2016.7543164
2016
Cited 3 times
Performance of the new DAQ system of the CMS experiment for run-2
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of more than 100 GB/s to the High-Level Trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at an output rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013-2014. The motivation for this upgrade was twofold. Firstly, the compute nodes, networking and storage infrastructure were reaching the end of their lifetimes. Secondly, in order to maintain physics performance with higher LHC luminosities and increasing event pileup, a number of sub-detectors are being upgraded, increasing the number of readout channels as well as the required throughput, and replacing the off-detector readout electronics with a MicroTCA-based DAQ interface. The new DAQ architecture takes advantage of the latest developments in the computing industry. For data concentration 10/40 Gbit/s Ethernet technologies are used, and a 56 Gbit/s Infiniband FDR CLOS network (total throughput ≈ 4 Tbit/s) has been chosen for the event builder. The upgraded DAQ - HLT interface is entirely file-based, essentially decoupling the DAQ and HLT systems. The fully-built events are transported to the HLT over 10/40 Gbit/s Ethernet via a network file system. The collection of events accepted by the HLT and the corresponding metadata are buffered on a global file system before being transferred off-site. The monitoring of the HLT farm and the data-taking performance is based on the Elasticsearch analytics tool. This paper presents the requirements, implementation, and performance of the system. Experience is reported on the first year of operation with LHC proton-proton runs as well as with the heavy ion lead-lead runs in 2015.
DOI: 10.1016/j.burns.2008.04.012
2009
Cited 4 times
Silane coupling agent chemical burns: A risk for medical personnel too
Most of our knowledge of the effects of aging on the hematopoietic system comes from studies in animal models. In this study, to explore potential effects of aging on human hematopoietic stem and progenitor cells (HSPCs), we evaluated CD34+ cells derived from young (<35 years) and old (>60 years) adult bone marrow with respect to phenotype and in vitro function. We observed an increased frequency of phenotypically defined stem and progenitor cells with age, but no distinct differences with respect to in vitro functional capacity. Given that regeneration of peripheral blood counts can serve as a functional readout of HSPCs, we compared various peripheral blood parameters between younger patients (≤50 years; n = 64) and older patients (≥60 years; n = 55) after autologous stem cell transplantation. Patient age did not affect the number of apheresis cycles or the amount of CD34+ cells harvested. Parameters for short-term regeneration did not differ significantly between the younger and older patients; however, complete recovery of all 3 blood lineages at 1 year after transplantation was strongly affected by advanced age, occurring in only 29% of the older patients, compared with 56% of the younger patients (P = .009). Collectively, these data suggest that aging has only limited effects on CD34+ HSPCs under steady-state conditions, but can be important under conditions of chemotoxic and replicative stress.
DOI: 10.1088/1742-6596/119/2/022011
2008
Cited 4 times
High level trigger configuration and handling of trigger tables in the CMS filter farm
The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data in 2008. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read out and assembled by an Event Builder. The High Level Trigger (HLT) employs a set of sophisticated software algorithms, to analyze the complete event information, and further reduce the accepted event rate for permanent storage and analysis. This paper describes the design and implementation of the HLT Configuration Management system. First experiences with commissioning of the HLT system are also reported.
DOI: 10.1088/1742-6596/513/1/012014
2014
Cited 3 times
The new CMS DAQ system for LHC operation after 2014 (DAQ2)
The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. We present the design of the 2nd generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR Infiniband and exploitation of multicore CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file based.
DOI: 10.1088/1742-6596/396/1/012007
2012
Cited 3 times
Operational experience with the CMS Data Acquisition System
The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources and 10^8 electronic channels. An overview of the architecture and design of the hardware and software of the DAQ system is given. We report on the performance and operational experience of the DAQ and its Run Control System in the first two years of collider runs of the LHC, both in proton-proton and Pb-Pb collisions. We present an analysis of the current performance, its limitations, and the most common failure modes and discuss the ongoing evolution of the HLT capability needed to match the luminosity ramp-up of the LHC.
DOI: 10.1088/1742-6596/513/1/012031
2014
Cited 3 times
Automating the CMS DAQ
We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.
DOI: 10.1016/0168-9002(91)90117-9
1991
Cited 9 times
The L3 database system
Abstract The database system developed for the L3 experiment at the CERN large electron-positron (LEP) collider is described. The solution chosen has two components: a FORTRAN package (DBL3) and a system of distributed servers and data (L3DBSM). DBL3 provides basic database functions supplemented by facilities specific to high-energy physics experiments. The L3DBSM system, operational since the start of LEP data taking, has handled efficiently several hundred Mbytes of data over three different systems (VAX/VMS, IBM VM/CMS, Apollo Aegis SR10.x) and at eight different sites.
DOI: 10.1016/s0168-9002(00)00181-9
2000
Cited 7 times
Performance of CMS silicon microstrip detectors with the APV6 readout chip
We present results obtained with full-size wedge silicon microstrip detectors bonded to APV6 (Raymond et al., Proceedings of the 3rd Workshop on Electronics for LHC Experiments, CERN/LHCC/97-60) readout chips. We used two identical modules, each consisting of two crystals bonded together. One module was irradiated with 1.7×10^14 neutrons/cm^2. The detectors have been characterized both in the laboratory and by exposing them to a beam of minimum ionizing particles. The results obtained are a good starting point for the evaluation of the performance of the “ensemble” detector plus readout chip in a version very similar to the final production one. We detected the signal from minimum ionizing particles with a signal-to-noise ratio ranging from 9.3 for the irradiated detector up to 20.5 for the non-irradiated detector, provided the parameters of the readout chips are carefully tuned.
DOI: 10.1007/978-94-015-9829-3_29
2001
Cited 7 times
Italian petroleum geology
DOI: 10.1088/1742-6596/219/2/022002
2010
Cited 3 times
The CMS online cluster: IT for a large data acquisition and control cluster
The CMS online cluster consists of more than 2000 computers running about 10000 application instances. These applications implement the control of the experiment, the event building, the high level trigger, the online database and the control of the buffering and transferring of data to the Central Data Recording at CERN. In this paper the IT solutions employed to fulfil the requirements of such a large cluster are reviewed. Details are given on the chosen network structure, configuration management system, monitoring infrastructure and on the implementation of high availability for the services and infrastructure.
DOI: 10.1088/1742-6596/898/3/032019
2017
Cited 3 times
The CMS Data Acquisition - Architectures for the Phase-2 Upgrade
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10^34 cm^-2 s^-1 (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS Detector will also undergo a major upgrade to prepare for the Phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis.
DOI: 10.1109/tns.2015.2409898
2015
Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition
TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multi-core era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running in a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allow for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
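For readers who want to see the two ingredients the abstract highlights in isolation, here is a minimal Python sketch, not the CMS C++ online framework: the process is pinned to a set of CPUs (which CPUs are local to the NIC is an assumption; in practice it would be queried with a topology tool) and data are received through a non-blocking, event-driven socket loop.

```python
# Minimal sketch (hypothetical, not the CMS online framework): pin the
# process to CPUs assumed to be local to the NIC's NUMA node, then run
# an event-driven receive loop over non-blocking TCP sockets.
import os
import socket
import selectors

def pin_to_cpus(cpus):
    # Linux-only: restrict this process to the given CPU set
    os.sched_setaffinity(0, cpus)

def serve(port=12345, cpus=(0, 2, 4, 6)):
    pin_to_cpus(cpus)                    # the CPU list is an assumption
    sel = selectors.DefaultSelector()
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", port))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, data=None)

    received = 0
    while True:
        for key, _ in sel.select(timeout=1.0):
            if key.data is None:                       # new connection
                conn, _ = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="peer")
            else:                                      # payload from a peer
                chunk = key.fileobj.recv(1 << 16)
                if chunk:
                    received += len(chunk)             # hand off to event building
                else:
                    sel.unregister(key.fileobj)
                    key.fileobj.close()

if __name__ == "__main__":
    serve()
```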
DOI: 10.1088/1742-6596/664/8/082009
2015
Online data handling and storage at the CMS experiment
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
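As a rough illustration of the merger idea, the toy sketch below assumes each HLT source writes a data file plus a small JSON metadata document per luminosity section, concatenates the data files and sums the event counters; the file naming convention and JSON fields are invented, not the production STS schema.

```python
# Toy merger sketch: file naming and JSON fields are invented, not the
# production STS schema. Per-source data files for one luminosity
# section are concatenated and their event counters summed.
import glob
import json

def merge_lumisection(input_dir, ls, merged_path):
    total_events = 0
    with open(merged_path, "wb") as out:
        for data_file in sorted(glob.glob(f"{input_dir}/source*_ls{ls:04d}.dat")):
            with open(data_file, "rb") as f:
                out.write(f.read())                     # append this source's data
            with open(data_file.replace(".dat", ".jsn")) as f:
                total_events += json.load(f)["events"]  # sum bookkeeping counters
    # write the merged metadata document next to the merged data file
    with open(merged_path.replace(".dat", ".jsn"), "w") as f:
        json.dump({"ls": ls, "events": total_events}, f)
    return total_events
```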
DOI: 10.1109/rtc.2007.4382750
2007
Cited 3 times
CMS DAQ Event Builder Based on Gigabit Ethernet
The CMS Data Acquisition System is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called FED Builders. These will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The second stage will be a set of event builders called Readout Builders. These will perform the building of full events. A single Readout Builder will build events from 72 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper we present the design of a Readout Builder based on TCP/IP over Gigabit Ethernet and the optimization that was required to achieve the design throughput. This optimization includes the architecture of the Readout Builder, the setup of TCP/IP, and hardware selection.
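A quick back-of-the-envelope check of the Readout Builder numbers quoted above (72 sources, 16 kB fragments, 12.5 kHz) gives an event size of about 1.15 MB and a per-builder throughput of roughly 14 GB/s:

```python
# Back-of-the-envelope check of the Readout Builder figures quoted above.
sources = 72
fragment_size = 16e3        # bytes per fragment (16 kB)
event_rate = 12.5e3         # built events per second (12.5 kHz)

event_size = sources * fragment_size       # ~1.15 MB per built event
throughput = event_size * event_rate       # bytes per second
print(f"event size = {event_size/1e6:.2f} MB, throughput = {throughput/1e9:.1f} GB/s")
# -> event size = 1.15 MB, throughput = 14.4 GB/s per Readout Builder
```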
DOI: 10.1088/1742-6596/331/2/022010
2011
An Analysis of the Control Hierarchy Modelling of the CMS Detector Control System
The supervisory level of the Detector Control System (DCS) of the CMS experiment is implemented using Finite State Machines (FSM), which model the behaviours and control the operations of all the sub-detectors and support services. The FSM tree of the whole CMS experiment consists of more than 30,000 nodes. An analysis of a system of such size is a complex task but is a crucial step towards the improvement of the overall performance of the FSM system. This paper presents the analysis of the CMS FSM system using the micro Common Representation Language 2 (mCRL2) methodology. Individual mCRL2 models are obtained for the FSM systems of the CMS sub-detectors using the ASF+SDF automated translation tool. Different mCRL2 operations are applied to the mCRL2 models. An mCRL2 simulation tool is used to examine the system more closely. Visualization of a system based on the exploration of its state space is enabled with an mCRL2 tool. Requirements such as command and state propagation are expressed using modal mu-calculus and checked using a model checking algorithm. For checking local requirements such as endless loop freedom, the Bounded Model Checking technique is applied. This paper discusses these analysis techniques and presents the results of their application on the CMS FSM system.
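To make the notion of command and state propagation in an FSM tree concrete, here is a small illustrative Python sketch; the real supervisory system is implemented with dedicated controls software and analysed with mCRL2, so this is only a toy analogue of the behaviour being verified. The node names are examples.

```python
# Toy illustration of command/state propagation in an FSM tree
# (illustrative only; the real system is verified with mCRL2, not with
# this code). Commands travel down the tree, states are summarised up.
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.state = "OFF"

    def command(self, cmd):
        # a command issued on a node propagates to all of its children
        for child in self.children:
            child.command(cmd)
        self.state = {"SWITCH_ON": "ON", "SWITCH_OFF": "OFF"}.get(cmd, self.state)

    def summary(self):
        # the state of a parent summarises its children: ON only if all are ON
        if not self.children:
            return self.state
        return "ON" if all(c.summary() == "ON" for c in self.children) else "MIXED"

tracker = Node("Tracker", [Node("TIB"), Node("TOB"), Node("TEC")])
tracker.command("SWITCH_ON")
print(tracker.summary())   # -> ON
```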
DOI: 10.1088/1742-6596/396/1/012041
2012
High availability through full redundancy of the CMS detector controls system
The CMS detector control system (DCS) is responsible for controlling and monitoring the detector status and for the operation of all CMS sub detectors and infrastructure. This is required to ensure safe and efficient data taking so that high quality physics data can be recorded. The current system architecture is composed of more than 100 servers in order to provide the required processing resources. An optimization of the system software and hardware architecture is under development to ensure redundancy of all the controlled subsystems and to reduce any downtime due to hardware or software failures. The new optimized structure is based mainly on powerful and highly reliable blade servers and makes use of a fully redundant approach, guaranteeing high availability and reliability. The analysis of the requirements, the challenges, the improvements and the optimized system architecture as well as its specific hardware and software solutions are presented.
DOI: 10.1109/rtc.2014.7097437
2014
The new CMS DAQ system for run-2 of the LHC
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gb/s Infiniband FDR Clos network has been chosen for the event builder with a throughput of ~4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT-accepted events and monitoring metadata are stored in a global file system. This paper presents the requirements, technical choices, and performance of the new system.
2013
10 Gbps TCP/IP streams from the FPGA for the CMS DAQ eventbuilder network
DOI: 10.1109/tns.2023.3244696
2023
Progress in Design and Testing of the DAQ and Data-Flow Control for the Phase-2 Upgrade of the CMS Experiment
The CMS detector will undergo a major upgrade for the Phase-2 of the LHC program, the High-Luminosity LHC. The upgraded CMS detector will be read out at an unprecedented data rate exceeding 50 Tb/s, with a Level-1 trigger selecting events at a rate of 750 kHz, and an average event size reaching 8.5 MB. The Phase-2 CMS back-end electronics will be based on the ATCA standard, with node boards receiving the detector data from the front-ends via custom, radiation-tolerant, optical links. The CMS Phase-2 data acquisition (DAQ) design tightens the integration between trigger control and data flow, extending the synchronous regime of the DAQ system. At the core of the design is the DAQ and Timing Hub, a custom ATCA hub card forming the bridge between the different, detector-specific, control and readout electronics and the common timing, trigger, and control systems. The overall synchronisation and data flow of the experiment is handled by the Trigger and Timing Control and Distribution System (TCDS). For increased flexibility during commissioning and calibration runs, the Phase-2 architecture breaks with the traditional distribution tree, in favour of a configurable network connecting multiple independent control units to all off-detector endpoints. This paper describes the overall Phase-2 TCDS architecture, and briefly compares it to previous CMS implementations. It then discusses the design and prototyping experience of the DTH, and concludes with the convergence of this prototyping process into the (pre)production phase, starting in early 2023.
DOI: 10.3390/app13105911
2023
Non-Destructive Assessment of the Functional Diameter and Hydrodynamic Roughness of Additively Manufactured Channels
Metal additive manufacturing, particularly laser powder bed fusion, is increasingly used in the gas turbine industry for the fabrication of channels with small diameters for conformal cooling and flow passage applications. A critical challenge in this context lies in evaluating aspects such as the geometrical and hydraulic diameters, the effective area and the roughness on the internal surface of the channel that affects the flow functionality. This paper proposes a new method to evaluate the geometrical and functional equivalent diameters, i.e., the hydraulic diameter of cylindrical channels and the mean surface topography height on the internal channel surface, using X-ray computed tomography. The developed methods were validated with experimental flow tests, considering the mean surface topography height to be equivalent to the hydrodynamic sand grain roughness, thereby determining the hydraulic diameter and the associated effective area. The method is a much faster approach to determining the available hydraulic diameter compared to flow tests and offers the possibility of evaluating the internal surface characteristics, with discrepancies between the two approaches being less than ±3%.
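For orientation, the functional diameter discussion builds on the standard hydraulic-diameter relation D_h = 4A/P; the short computation below applies it to an illustrative circular channel (the numbers are made up, not taken from the paper).

```python
# Hydraulic diameter D_h = 4A/P applied to an illustrative circular
# channel (values are made up, not taken from the paper). For a perfect
# circle D_h equals the geometric diameter; roughness and deviations of
# the as-built cross-section change the functional value.
import math

def hydraulic_diameter(area, wetted_perimeter):
    return 4.0 * area / wetted_perimeter

d_nominal = 1.0e-3                       # 1 mm nominal channel diameter
area = math.pi * (d_nominal / 2.0) ** 2
perimeter = math.pi * d_nominal
print(hydraulic_diameter(area, perimeter))   # -> 0.001 (equals d for a circle)
```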
DOI: 10.1016/s0168-9002(99)00419-2
1999
Cited 7 times
The R&amp;D program for silicon detectors in CMS
This paper describes the main achievements in the development of radiation resistant silicon detectors to be used in the CMS tracker. After a general description of the basic requirements for the operation of large semiconductor systems in the LHC environment, the issue of radiation resistance is discussed in detail. Advantages and disadvantages of the different technological options are presented for comparison. Laboratory measurements and test beam data are used to check the performance of several series of prototypes fabricated by different companies. The expected performance of the final detector modules is presented together with preliminary test beam results on system prototypes.
DOI: 10.1016/j.astropartphys.2009.11.001
2010
Observation of a VHE cosmic-ray flare-signal with the L3+C muon spectrometer
The data collected by the L3+C muon spectrometer at the CERN Large Electron-Positron collider, LEP, have been used to search for short duration signals emitted by cosmic point sources. A sky survey performed from July to November 1999 and from April to November 2000 has revealed one single flux enhancement (chance probability = 2.6 × 10^-3) between the 17th and 20th of August 2000 from a direction with a galactic longitude of (265.02 ± 0.42)° and latitude of (55.58 ± 0.24)°. The energy of the detected muons was above 15 GeV.
DOI: 10.1088/1742-6596/898/3/032020
2017
Performance of the CMS Event Builder
DOI: 10.1016/s0920-5632(97)00119-9
1997
Cited 6 times
Wedge silicon detectors for the inner tracking system of CMS
One “wedge” Double Sided Silicon Detector prototype for the CMS forward inner tracker has been tested both in the laboratory and on a high energy particle beam. The results obtained indicate the most reliable solutions for the strip geometry of the junction side. Three different designs of “wedge” Double Sided detectors with different solutions for the ohmic side strip geometry are presented.
DOI: 10.1109/23.682652
1998
Cited 6 times
A data acquisition system for silicon microstrip detectors
Following initial work done by some of us on the readout of the L3 silicon microvertex detector, we have developed a complete data acquisition system for silicon microstrip detectors for use both in our home institute and at the various test beam facilities at the CERN laboratory. The system uses extensive decoupling schemes allowing a fully floating connection to the detector. This feature has many advantages especially in the readout of the latest double-sided silicon microstrip detectors.
DOI: 10.5170/cern-2004-010.316
2004
Cited 3 times
The Final prototype of the Fast Merging Module (FMM) for readout status processing in CMS DAQ
The Trigger Throttling System (TTS) adapts the trigger frequency to the DAQ readout capacity in order to avoid buffer overflows and data corruption. The states of all ~640 readout units in the CMS DAQ are read out and merged by hardware modules (FMMs) to obtain the status of each detector partition. The functionality and the design of the second and final prototype of the FMM are presented in this paper.
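The merging itself is done in FPGA hardware; purely as an illustration of the idea, the snippet below reduces a list of readout-unit throttling states to a single partition state by keeping the most restrictive one (the state names and ordering are simplified assumptions).

```python
# Toy software illustration (the real merging is done in FPGAs): reduce
# many readout-unit throttling states to one partition state by keeping
# the most restrictive one. State names and ordering are simplified.
SEVERITY = {"READY": 0, "WARNING": 1, "BUSY": 2, "OUT_OF_SYNC": 3, "ERROR": 4}

def merge_states(states):
    return max(states, key=lambda s: SEVERITY[s])

print(merge_states(["READY", "READY", "WARNING", "BUSY"]))   # -> BUSY
```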
2006
Cited 3 times
CMS DAQ event builder based on gigabit ethernet
The CMS Data Acquisition system is designed to build and filter events originating from approximately 500 data sources from the detector at a maximum Level 1 trigger rate of 100 kHz and with an aggregate throughput of 100 GByte/s. For this purpose different architectures and switch technologies have been evaluated. Events will be built in two stages: the first stage, the FED Builder, will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The next stage, the Readout Builder, will perform the building of full events. The requirement of one Readout Builder is to build events at 12.5 kHz with average size of 16 kBytes from 64 sources. In this paper we present the prospects of a Readout Builder based on TCP/IP over Gigabit Ethernet. Various Readout Builder architectures that we are considering are discussed. The results of throughput measurements and scaling performance are outlined as well as the preliminary estimates of the final performance. All these studies have been carried out at our test-bed farms that are made up of a total of 130 dual Xeon PCs interconnected with Myrinet and Gigabit Ethernet networking and switching technologies.
DOI: 10.1088/1748-0221/4/10/p10005
2009
Commissioning of the CMS High Level Trigger
The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger architecture only employs two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic ray data taking campaigns are discussed in detail. We conclude with the description of the HLT operations with the first circulating LHC beams before the incident that occurred on 19 September 2008.
DOI: 10.1051/epjconf/201921407017
2019
Experience with dynamic resource provisioning of the CMS online cluster using a cloud overlay
The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier1 Grid sites together. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and hence can access the remote EOS based storage with a high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also makes it possible to abstract from the different types of machines and their underlying segmented networks. During the LHC technical stop periods, the HLT cloud is set to its static mode of operation where it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically of several hours, once or more per day. For that, it dynamically follows LHC beam states and hibernates Virtual Machines (VM) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT reduces towards the end of the fill.
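The dynamic ramp-up and hibernation can be pictured as a simple polling loop over the LHC beam state; the sketch below uses invented placeholder functions rather than the actual CMS cloud-overlay interfaces.

```python
# Schematic control loop with invented placeholder functions (not the
# CMS cloud-overlay API): offer HLT capacity to the grid between fills
# and hibernate the virtual machines when the DAQ needs the farm again.
import time

def get_lhc_beam_mode():
    # placeholder: in reality this would come from LHC status publications
    return "INTERFILL"

def set_cloud_capacity(n_vms):
    # placeholder for starting or hibernating VMs through the cloud layer
    print(f"target capacity: {n_vms} VMs")

def follow_beam_states(poll_seconds=60):
    while True:
        mode = get_lhc_beam_mode()
        if mode in ("STABLE BEAMS", "ADJUST"):
            set_cloud_capacity(0)        # data taking: free the HLT farm
        else:
            set_cloud_capacity(1000)     # inter-fill gap: run grid workloads
        time.sleep(poll_seconds)
```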
2016
Opportunistic usage of the CMS online cluster using a cloud overlay
1998
Cited 5 times
Lower bound for the standard model Higgs boson mass from combining the results of the four LEP experiments
DOI: 10.1016/s0168-9002(01)00544-7
2001
Cited 4 times
Optimization of the silicon sensors for the CMS tracker
The CMS experiment at the LHC will comprise a large silicon strip tracker. This article highlights some of the results obtained in the R&D studies for the optimization of its silicon sensors. Measurements of the capacitances and of the high voltage stability of the devices are presented before and after irradiation to the dose expected after the full lifetime of the tracker.
DOI: 10.1016/j.nuclphysbps.2007.08.106
2007
Flexible custom designs for CMS DAQ
The CMS central DAQ system is built using commercial hardware (PCs and networking equipment), except for two components: the Front-end Readout Link (FRL) and the Fast Merger Module (FMM). The FRL interfaces the sub-detector specific front-end electronics to the central DAQ system in a uniform way. The FRL is a compact-PCI module with an additional PCI 64bit connector to host a Network Interface Card (NIC). On the sub-detector side, the data are written to the link using a FIFO-like protocol (SLINK64). The link uses the Low Voltage Differential Signal (LVDS) technology to transfer data with a throughput of up to 400 MBytes/s. The FMM modules collect status signals from the front-end electronics of the sub-detectors, merge and monitor them and provide the resulting signals with low latency to the first level trigger electronics. In particular, the throttling signals allow the trigger to avoid buffer overflows and data corruption in the front-end electronics when the data produced in the front-end exceeds the capacity of the DAQ system. Both cards are compact-PCI cards with a 6U form factor. They are implemented with FPGAs. The main FPGA implements the processing logic of the card and the interfaces to the variety of busses on the card. Another FPGA contains a custom compact-PCI interface for configuration, control and monitoring. The chosen technology provides flexibility to implement new features if required.
DOI: 10.1088/1742-6596/331/2/022004
2011
Studies of future readout links for the CMS experiment
The Compact Muon Solenoid (CMS) experiment has developed an electrical implementation of the S-LINK64 extension (Simple Link Interface 64 bit) operating at 400 MB/s in order to read out the detector. This paper studies a possible replacement of the existing S-LINK64 implementation by an optical link, based on 10 Gigabit Ethernet, in order to provide a larger throughput, replace aging hardware and simplify the architecture. A prototype transmitter unit has been developed based on the FPGA Altera PCI Express Development Kit with a custom firmware. A standard PC acted as the receiving unit. The data transfer has been implemented on a stack of protocols: RDP over IP over Ethernet. This allows the data to be received by standard hardware components like PCs or network switches and NICs. The first tests proved that the basic exchange of packets between the transmitter and the receiving unit works. The paper summarizes the status of these studies.
DOI: 10.1088/1742-6596/664/8/082035
2015
A New Event Builder for CMS Run II
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
DOI: 10.1088/1742-6596/664/8/082033
2015
File-based data flow in the CMS Filter Farm
During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small "documents" using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These "files" can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
DOI: 10.18429/jacow-icalepcs2015-wepgf013
2015
Increasing Availability by Implementing Software Redundancy in the CMS Detector Control System
DOI: 10.1016/j.nima.2005.03.034
2005
Feasibility study of a XML-based software environment to manage data acquisition hardware devices
A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and the associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. This is based on a number of standalone applications for different hardware modules, and the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.
2012
Searches for the Standard Model Higgs Boson at CMS
We searched for the standard model Higgs boson in many different channels using approximately 5 fb^-1 of 7 TeV pp collision data collected with the CMS detector at the LHC. Combining the results of the different searches we exclude at 95% confidence level a standard model Higgs boson with mass between 127.5 and 600 GeV. The expected 95% confidence level exclusion if the Higgs boson is not present is from 114.5 to 543 GeV. The observed exclusion is weaker than expected at low mass because of an excess that is observed below about 128 GeV. The most significant excess is found at 125 GeV with a local significance of 2.8 sigma. It has a global significance of 0.8 sigma when evaluated in the full search range and of 2.1 sigma when evaluated in the range 110-145 GeV. The excess is consistent both with a background fluctuation and with a standard model Higgs boson with a mass of about 125 GeV, and more data are needed to investigate its origin.
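For context, a local significance of 2.8 sigma corresponds to a one-sided p-value of roughly 2.6e-3 under a standard normal tail (before any look-elsewhere correction); the conversion is a one-liner:

```python
# One-sided p-value corresponding to a given significance, using the
# standard normal tail (no look-elsewhere correction).
import math

def p_value(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(f"{p_value(2.8):.1e}")   # -> about 2.6e-03 for the local 2.8 sigma excess
```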
DOI: 10.1088/1742-6596/396/1/012038
2012
Distributed error and alarm processing in the CMS data acquisition system
The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activities. Error and alarm processing entails the notification, collection, storing and visualization of all exceptional conditions occurring in the highly distributed CMS online system using a uniform scheme. Alerts and reports are shown on-line by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used and deals with operational aspects during the first years of LHC operation. In particular we focus on performance, stability, and integration with the CMS sub-detectors.
DOI: 10.1109/rtc.2012.6418362
2012
Recent experience and future evolution of the CMS High Level Trigger System
The CMS experiment at the LHC uses a two-stage trigger system, with events flowing from the first level trigger at a rate of 100 kHz. These events are read out by the Data Acquisition system (DAQ), assembled in memory in a farm of computers, and finally fed into the high-level trigger (HLT) software running on the farm. The HLT software selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the 2010–2011 collider run is detailed, as well as the current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and CMS DAQ. The short- and medium-term evolution of the HLT software infrastructure is discussed, with future improvements aimed at supporting extensions of the HLT computing power, and addressing remaining performance and maintenance issues.
DOI: 10.1088/1742-6596/396/1/012039
2012
Upgrade of the CMS Event Builder
The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current computing and networking infrastructure will have reached the end of their lifetime. This paper presents design studies for an upgrade of the CMS event builder based on advanced networking technologies such as 10/40 Gb/s Ethernet and Infiniband. The results of performance measurements with small-scale test setups are shown.
DOI: 10.1109/rtc.2010.5750362
2010
First operational experience with the CMS run control system
The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN's new Large Hadron Collider (LHC) controls the sub-detector and central data acquisition systems and the high-level trigger farm of the experiment. It manages around 10,000 applications that control custom hardware or handle the event building and the high-level trigger processing. The CMS Run Control System is a distributed Java system running on a set of Apache Tomcat servlet containers. Users interact with the system through a web browser. The paper presents the architecture of the CMS Run Control System and deals with operational aspects during the first phase of operation with colliding beams. In particular it focuses on performance, stability, integration with the CMS Detector Control System, integration with LHC status information and tools to guide the shifter.
DOI: 10.22323/1.270.0022
2017
Opportunistic usage of the CMS online cluster using a cloud overlay
After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to the grid when it isn’t used to its full capacity for data acquisition. This occurs during the maintenance periods when the LHC is non-operational, which corresponded to 117 days in 2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data processing by making the cluster accessible during the time between two physics collisions while the LHC and beams are being prepared. This is usually the case for a few hours every day, which would vastly increase the computing power available for data processing. Work has already been undertaken to provide this functionality, as an OpenStack cloud layer has been deployed as a minimal overlay that leaves the primary role of the cluster untouched. This overlay also abstracts the different hardware and networks that the cluster is composed of. The operation of the cloud (starting and stopping the virtual machines) is another challenge that has been overcome as the cluster has only a few hours spare during the aforementioned beam preparation. By improving the virtual image deployment and integrating the OpenStack services with the core services of the Data Acquisition on the CMS Online cluster it is now possible to start a thousand virtual machines within 10 minutes and to turn them off within seconds. This document will explain the architectural choices that were made to reach a fully redundant and scalable cloud, with a minimal impact on the running cluster configuration while giving a maximal segregation between the services. It will also present how to cold start 1000 virtual machines 25 times faster, using tools commonly utilised in all data centres.
DOI: 10.1109/tns.2008.915925
2008
Effects of Adaptive Wormhole Routing in Event Builder Networks
The data acquisition system of the CMS experiment at the Large Hadron Collider features a two-stage event builder, which combines data from about 500 sources into full events at an aggregate throughput of 100 GB/s. To meet the requirements, several architectures and interconnect technologies have been quantitatively evaluated. Myrinet will be used for the communication from the underground frontend devices to the surface event building system. Gigabit Ethernet is deployed in the surface event building system. Nearly full bi-section throughput can be obtained using a custom software driver for Myrinet based on barrel shifter traffic shaping. This paper discusses the use of Myrinet dual-port network interface cards supporting channel bonding to achieve virtual 5 GBit/s links with adaptive routing to alleviate the throughput limitations associated with wormhole routing. Adaptive routing is not expected to be suitable for high-throughput event builder applications in high-energy physics. To corroborate this claim, results from the CMS event builder preseries installation at CERN are presented and the problems of wormhole routing networks are discussed.
DOI: 10.22323/1.313.0075
2018
The FEROL40, a microTCA card interfacing custom point-to-point links and standard TCP/IP
In order to accommodate new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the microTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4 × 10 Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and experience from its operation.
DOI: 10.1109/rtc.2007.4382746
2007
The Terabit/s Super-Fragment Builder and Trigger Throttling System for the Compact Muon Solenoid Experiment at CERN
The data acquisition system of the Compact Muon Solenoid experiment at the large hadron collider reads out event fragments of an average size of 2 kilobytes from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event-building is performed by the Super-Fragment Builder employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. By providing fast feedback from any of the front-ends to the trigger, the trigger throttling system prevents buffer overflows in the front-end electronics due to variations in the size and rate of events or due to backpressure from the down-stream event-building and processing. This paper reports on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major sub-detectors and discusses the ongoing commissioning of the full-scale system.
DOI: 10.1109/rtc.2007.4382773
2007
The CMS High Level Trigger System
The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.
DOI: 10.22323/1.343.0129
2019
Design and development of the DAQ and Timing Hub for CMS Phase-2
The CMS detector will undergo a major upgrade for Phase-2 of the LHC program, starting around 2026. The upgraded Level-1 hardware trigger will select events at a rate of 750 kHz. At an expected event size of 7.4 MB this corresponds to a data rate of up to 50 Tbit/s. Optical links will carry the signals from on-detector front-end electronics to back-end electronics in ATCA crates in the service cavern. A DAQ and Timing Hub board aggregates data streams from back-end boards over point-to-point links, provides buffering and transmits the data to the commercial data-to-surface network for processing and storage. This hub board is also responsible for the distribution of timing, control and trigger signals to the back-ends. This paper presents the current development towards the DAQ and Timing Hub and the design of the first prototype, to be used for validation and integration with the first back-end prototypes in 2019-2020.
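The quoted bandwidth can be cross-checked from the numbers in the abstract: 750 kHz times 7.4 MB per event is about 44 Tbit/s, consistent with the "up to 50 Tbit/s" figure.

```python
# Cross-check of the Phase-2 readout bandwidth quoted in the abstract.
rate = 750e3             # Level-1 accept rate in Hz
event_size = 7.4e6       # expected event size in bytes
print(rate * event_size * 8 / 1e12, "Tbit/s")   # -> 44.4 Tbit/s
```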
DOI: 10.1051/epjconf/201921401015
2019
Operational experience with the new CMS DAQ-Expert
The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of about 30k cores and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances of software applications, the DAQ system is complex in itself and failures cannot be completely excluded. Moreover, problems in the readout of the detectors, in the first level trigger system or in the high level trigger may provoke anomalous behaviour of the DAQ system which sometimes cannot easily be differentiated from a problem in the DAQ system itself. In order to achieve high data taking efficiency with operators from the entire collaboration and without relying too heavily on the on-call experts, an expert system, the DAQ-Expert, has been developed that can pinpoint the source of most failures and give advice to the shift crew on how to recover in the quickest way. The DAQ-Expert constantly analyzes monitoring data from the DAQ system and the high level trigger by making use of logic modules written in Java that encapsulate the expert knowledge about potential operational problems. The results of the reasoning are presented to the operator in a web-based dashboard, may trigger sound alerts in the control room and are archived for post-mortem analysis, presented in a web-based timeline browser. We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production.
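The actual logic modules are written in Java; the Python sketch below is only an analogue of the pattern described, a rule object inspecting a monitoring snapshot and emitting advice when its condition fires, with invented field names.

```python
# Python analogue (the real DAQ-Expert modules are Java) of the pattern
# described above: each rule inspects a monitoring snapshot and emits
# advice when its condition matches. Field names are invented.
class Rule:
    name = "base"
    def matches(self, snapshot):
        return False
    def advice(self):
        return ""

class DeadTimeFromBackpressure(Rule):
    name = "backpressure"
    def matches(self, snapshot):
        return snapshot["deadtime_percent"] > 5 and snapshot["feds_with_backpressure"]
    def advice(self):
        return "Check the listed FED(s) for backpressure; consider a resync."

def analyse(snapshot, rules):
    return [(r.name, r.advice()) for r in rules if r.matches(snapshot)]

snapshot = {"deadtime_percent": 12.0, "feds_with_backpressure": ["FED 622"]}
print(analyse(snapshot, [DeadTimeFromBackpressure()]))
```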
DOI: 10.1051/epjconf/201921401048
2019
A Scalable Online Monitoring System Based on Elasticsearch for Distributed Data Acquisition in Cms
The part of the CMS Data Acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed applications. To ensure successful data taking, these programs have to be constantly monitored in order to facilitate the timeliness of necessary corrections in case of any deviation from specified behaviour. A large number of diverse monitoring data samples are periodically collected from multiple sources across the network. Monitoring data are kept in memory for online operations and optionally stored on disk for post-mortem analysis. We present a generic, reusable solution based on an open source NoSQL database, Elasticsearch, which is fully compatible and non-intrusive with respect to the existing system. The motivation is to benefit from off-the-shelf software to facilitate the development, maintenance and support efforts. Elasticsearch provides failover and data redundancy capabilities as well as a programming language independent JSON-over-HTTP interface. The possibility of horizontal scaling matches the requirements of a DAQ monitoring system. The data load from all sources is balanced by redistribution over an Elasticsearch cluster that can be hosted on a computer cloud. In order to achieve the necessary robustness and to validate the scalability of the approach the above monitoring solution currently runs in parallel with an existing in-house developed DAQ monitoring system.
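The language-independent JSON-over-HTTP interface means a monitoring document can be injected with a plain HTTP POST; the snippet below illustrates this with an invented host, index name and document fields.

```python
# Injecting a monitoring document into Elasticsearch over its
# JSON-over-HTTP interface. Host, index name and document fields are
# invented for the example.
import json
import time
import urllib.request

def post_doc(doc, host="http://localhost:9200", index="daq-monitoring"):
    req = urllib.request.Request(
        f"{host}/{index}/_doc",
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

doc = {"timestamp": time.time(), "host": "bu-example-01", "input_rate_khz": 100.2}
# post_doc(doc)   # requires a reachable Elasticsearch instance
```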
DOI: 10.1051/epjconf/202024501028
2020
DAQExpert the service to increase CMS data-taking efficiency
The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at the LHC is a complex system responsible for the data readout, event building and recording of accepted events. Its proper functioning plays a critical role in the data-taking efficiency of the CMS experiment. In order to ensure high availability and recover promptly in the event of hardware or software failure of the subsystems, an expert system, the DAQ Expert, has been developed. It aims at improving the data taking efficiency, reducing the human error in the operations and minimising the on-call expert demand. Introduced in the beginning of 2017, it assists the shift crew and the system experts in recovering from operational faults, streamlining the post mortem analysis and, at the end of Run 2, triggering fully automatic recovery without human intervention. DAQ Expert analyses the real-time monitoring data originating from the DAQ components and the high-level trigger updated every few seconds. It pinpoints data flow problems, and recovers them automatically or after given operator approval. We analyse the CMS downtime in the 2018 run focusing on what was improved with the introduction of automated recovery; present challenges and design of encoding the expert knowledge into automated recovery jobs. Furthermore, we demonstrate the web-based, ReactJS interfaces that ensure an effective cooperation between the human operators in the control room and the automated recovery system. We report on the operational experience with automated recovery.
DOI: 10.1016/s0168-9002(01)01824-1
2002
CMS silicon tracker developments
The CMS silicon tracker consists of 70 m^2 of microstrip sensors whose design will be finalized at the end of 1999 on the basis of systematic studies of device characteristics as a function of the most important parameters. A fundamental constraint comes from the fact that the detector has to be operated in a very hostile radiation environment with full efficiency. We present an overview of the current results and prospects for converging on a final set of parameters for the silicon tracker sensors.
1992
Cited 3 times
A Silicon hadron calorimeter module operated in a strong magnetic field with VLSI readout for LHC
DOI: 10.1109/rtc.2005.1547433
2005
The 2 Tbps "data to surface" system of the CMS data acquisition
The data acquisition system of the CMS experiment, at the CERN LHC collider, is designed to build 1 MB events at a sustained rate of 100 kHz and to provide sufficient computing power to filter the events by a factor of 1000. The data to surface (D2S) system is the first layer of the data acquisition interfacing the underground subdetector readout electronics to the surface event builder. It collects the 100 GB/s input data from a large number of front-end cards (650), implements a first stage of event building by combining multiple sources into larger-size data fragments, and transports them to the surface for the full event building. The data to surface system can operate at the maximum rate of 2 Tbps. This paper describes the layout, reconfigurability and production validation of the D2S system, which is to be installed by December 2005.
2005
HYPERDAQ - WHERE DATA ACQUISITION MEETS THE WEB
HyperDAQ was conceived to give users access to distributed data acquisition systems easily. To achieve that, we marry two well-established technologies: the World Wide Web and Peer-to-Peer systems. An embedded HTTP protocol engine turns an executable program into a browsable Web application that can reflect its internal data structures using a data serialization package. While the Web is based on static hyperlinks to destinations known at the time of page creation, HyperDAQ creates links to data content providers dynamically. Peer-to-Peer technology enables adaptive navigation from one application to another depending on the lifetime of application modules. Traditionally, distributed systems give the user a single point of access. We take a radically different approach. Every node may become an access point from which the whole system can be explored.
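As a rough analogue of the idea (HyperDAQ itself lives inside the CMS online software framework), the sketch below embeds an HTTP server in a Python process and serves an internal data structure as JSON, which is the starting point for making an application browsable.

```python
# Minimal analogue of the idea: embed an HTTP server in the application
# and expose its internal data structures as browsable JSON. The state
# dictionary and port are invented for the example.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_STATE = {"state": "Running", "events_built": 123456, "rate_hz": 100000}

class StateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(APP_STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StateHandler).serve_forever()
```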
DOI: 10.1007/bf03185592
1999
Comparative study of (111) and (100) crystals and capacitance measurements on Si strip detectors in CMS
For the construction of the silicon microstrip detectors for the Tracker of the CMS experiment, two different substrate choices were investigated: a high-resistivity (6 kΩ cm) substrate with (111) crystal orientation and a low-resistivity (2 kΩ cm) one with (100) crystal orientation. The interstrip and backplane capacitances were measured before and after the exposure to radiation in a range of strip pitches from 60 μm to 240 μm and for values of the width-over-pitch ratio between 0.1 and 0.5.