
Vivian O’Dell

Here are the papers by Vivian O’Dell that you can read and download on OA.mg.

DOI: 10.1103/physrevlett.83.22
1999
Cited 325 times
Observation of Direct CP Violation in KS,L → ππ Decays
We have compared the decay rates of KL and KS to π+π− and π0π0 final states using a subset of the data from the KTeV experiment (E832) at Fermilab. We find that the direct-CP-violation parameter Re(ε′/ε) is equal to [28.0±3.0(stat)±2.8(syst)]×10−4. This result definitively establishes the existence of CP violation in a decay process.
DOI: 10.1103/physrevd.67.012005
2003
Cited 157 times
Measurements of direct CP violation, CPT symmetry, and other parameters in the neutral kaon system
We present a series of measurements based on K(L,S) → π+π− and K(L,S) → π0π0 decays collected in 1996–1997 by the KTeV experiment (E832) at Fermilab. We compare these four K → ππ decay rates to measure the direct CP violation parameter Re(ε′/ε) = (20.7 ± 2.8) × 10^-4. We also test CPT symmetry by measuring the relative phase between the CP-violating and CP-conserving decay amplitudes for K → π+π− (φ+−) and for K → π0π0 (φ00). We find the difference between the relative phases to be Δφ ≡ φ00 − φ+− = (+0.39 ± 0.50)°, and the deviation of φ+− from the superweak phase to be φ+− − φ_SW = (+0.61 ± 1.19)°; both results are consistent with CPT symmetry. In addition, we present new measurements of the KL–KS mass difference and KS lifetime: Δm = (5261 ± 15) × 10^6 ħ/s and τS = (89.65 ± 0.07) × 10^-12 s.
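For context, the comparison of the four K → ππ rates enters through the standard double ratio of textbook kaon phenomenology (this relation is not quoted in the abstract):

\[
\frac{\Gamma(K_L\to\pi^+\pi^-)\,/\,\Gamma(K_S\to\pi^+\pi^-)}
     {\Gamma(K_L\to\pi^0\pi^0)\,/\,\Gamma(K_S\to\pi^0\pi^0)}
= \left|\frac{\eta_{+-}}{\eta_{00}}\right|^{2}
\approx 1 + 6\,\mathrm{Re}\!\left(\varepsilon'/\varepsilon\right),
\]

so a double ratio that deviates from unity is a direct signal of direct CP violation.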
DOI: 10.1103/physrevlett.84.5279
2000
Cited 110 times
Search for the Decay KL → π0μ+μ−
We report on a search for the decay KL → π0μ+μ− carried out as a part of the KTeV experiment at Fermilab. This decay is expected to have a significant CP-violating contribution, and a direct measurement will either support the Cabibbo-Kobayashi-Maskawa mechanism for CP violation or point to new physics. Two events were observed in the 1997 data with an expected background of 0.87 ± 0.15 events, and we set an upper limit B(KL → π0μ+μ−) < 3.8 × 10^-10 at the 90% confidence level.
DOI: 10.1016/0370-2693(95)00021-c
1995
Cited 73 times
Extraction of the gluon density of the proton at small x
The gluon momentum density xg(x, Q2) of the proton was extracted at Q2 = 20 GeV2 for small values of x between 4 × 10−4 and 10−2 from the scaling violations of the proton structure function F2 measured recently by ZEUS in deep inelastic neutral current ep scattering at HERA. The extraction was performed in two ways: firstly, using a global NLO fit to the ZEUS data on F2 at low x constrained by measurements from NMC at larger x; and secondly, using published approximate methods for the solution of the GLAP QCD evolution equations. Consistent results are obtained. A substantial increase of the gluon density is found at small x in comparison with the NMC result obtained at larger values of x.
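For orientation, the reason the Q² slope of F₂ gives direct access to the gluon is the leading-order DGLAP (GLAP) evolution: at small x the gluon-splitting term dominates, so schematically (a standard result, not written out in the abstract)

\[
\frac{\partial F_2(x,Q^2)}{\partial\ln Q^2}
\;\approx\; \frac{\alpha_s(Q^2)}{\pi}\,
\Bigl(\sum_q e_q^2\Bigr)\, x \int_x^1 \frac{dy}{y}\,
P_{qg}\!\left(\frac{x}{y}\right) g(y,Q^2),
\qquad
P_{qg}(z)=\tfrac{1}{2}\left[z^2+(1-z)^2\right],
\]

which is why the measured scaling violations of F₂ at small x translate into xg(x, Q²), given αs and the known splitting function.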
DOI: 10.1103/physrevlett.84.408
2000
Cited 68 times
Observation of CP Violation in KL → π+π−e+e− Decays
We report the first observation of a manifestly CP-violating effect in the KL → π+π−e+e− decay mode. A large asymmetry was observed in the distribution of these decays in the CP-odd and T-odd angle φ between the decay planes of the e+e− and π+π− pairs in the KL center-of-mass system. After acceptance corrections, the overall asymmetry is found to be [13.6 ± 2.5 (stat) ± 1.2 (syst)]%. This is the largest CP-violating effect yet observed when integrating over the entire phase space of a mode and the first such effect observed in an angular variable.
DOI: 10.1103/physrevlett.88.181601
2002
Cited 48 times
Measurement of the KL Charge Asymmetry
We present a measurement of the charge asymmetry δL in the mode KL → π±e∓ν based on 298 × 10^6 analyzed decays. We measure a value of δL = [3322 ± 58 (stat) ± 47 (syst)] × 10^-6, in good agreement with previous measurements and 2.4 times more precise than the current best published result. The result is used to place more stringent limits on CPT and ΔS = ΔQ violation in the neutral kaon system.
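For context, the semileptonic charge asymmetry is tied to the CP impurity parameter ε of the KL by a standard relation (valid assuming CPT symmetry and the ΔS = ΔQ rule; not quoted in the abstract):

\[
\delta_L \;=\;
\frac{\Gamma(K_L\to\pi^-e^+\nu) - \Gamma(K_L\to\pi^+e^-\bar\nu)}
     {\Gamma(K_L\to\pi^-e^+\nu) + \Gamma(K_L\to\pi^+e^-\bar\nu)}
\;=\; \frac{2\,\mathrm{Re}(\varepsilon)}{1+|\varepsilon|^2}
\;\approx\; 2\,\mathrm{Re}(\varepsilon),
\]

which is why deviations of the measured δL from this expectation constrain CPT and ΔS = ΔQ violation.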
DOI: 10.1103/physrevd.70.079904
2004
Cited 45 times
Erratum: Measurements of direct CP violation, CPT symmetry, and other parameters in the neutral kaon system [Phys. Rev. D 67, 012005 (2003)]
We present a series of measurements based on K → π+π− and K → π0π0 decays collected in 1996–1997 by the KTeV experiment (E832) at Fermilab. We compare these four K → ππ decay rates to measure the direct CP violation parameter Re(ε′/ε) = (20.7 ± 2.8) × 10^-4. We also test CPT symmetry by measuring the relative phase between the CP-violating and CP-conserving decay amplitudes for K → π+π− (φ+−) and for K → π0π0 (φ00). We find the difference between the relative phases to be Δφ = φ00 − φ+− = (+0.39 ± 0.50)°, and the deviation of φ+− from the superweak phase to be φ+− − φ_SW = (+0.61 ± 1.19)°; both results are consistent with CPT symmetry. In addition, we present new measurements of the KL–KS mass difference and KS lifetime: Δm = (5261 ± 15) × 10^6 ħ/s and τS = (89.65 ± 0.07) × 10^-12 s.
DOI: 10.1103/physrevd.61.072006
2000
Cited 48 times
Search for the decay KL → π0νν̄
We report on a search for the decay KL → π0νν̄, carried out as a part of E799-II, a rare KL decay experiment at Fermilab. Within the standard model, the KL → π0νν̄ decay is dominated by direct CP violating processes, and thus an observation of the decay implies confirmation of direct CP violation. No events were observed, and we set an upper limit for the branching ratio of KL → π0νν̄ to be <5.9×10−7 at the 90% confidence level.
DOI: 10.1088/1742-6596/219/2/022011
2010
Cited 23 times
The CMS data acquisition system software
The CMS data acquisition system is made of two major subsystems: event building and event filter. The presented paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies on industry standard networks and processing equipment. Adopting a single software infrastructure in all subsystems of the experiment imposes, however, a number of different requirements. High efficiency and configuration flexibility are among the most important ones. The XDAQ software infrastructure has matured over an eight-year development and testing period and has proven able to cope well with the requirements of the CMS experiment.
DOI: 10.22323/1.449.0597
2024
Slow control and TDAQ systems installation and tests in the Mu2e experiment
The Mu2e experiment at Fermilab will attempt to detect a coherent neutrinoless conversion of a muon into an electron in the field of an aluminum nucleus, with a sensitivity that is 10,000 times greater than existing limits. The Mu2e trigger and data acquisition system (TDAQ) uses the otsdaq framework as its online Data Acquisition System (DAQ) solution. Developed at Fermilab, otsdaq integrates several components, such as an artdaq-based DAQ, art-based event processing, and an EPICS-based detector control system (DCS), and provides a uniform multi-user interface to its components through a web browser. The data streams from the Mu2e tracker and calorimeter are handled by the artdaq-based DAQ and processed by a one-level software trigger implemented within the art framework. Events accepted by the trigger have their data combined, post-trigger, with the separately read-out data from the Mu2e Cosmic Ray Veto system. The foundation of Mu2e DCS, EPICS, an Experimental Physics and Industrial Control System, is an open-source platform for monitoring, controlling, alarming, and archiving. Over the last three years, a prototype of the TDAQ and DCS systems has been built and tested at Fermilab’s Feynman Computing Center. Currently, the production system installation is underway. Finally, this work presents a brief update on the installation of racks and DAQ hardware.
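Purely as an illustration of the EPICS channel-access layer mentioned above, here is a minimal sketch of reading, writing and monitoring process variables (PVs) with the pyepics client library. The PV names are hypothetical placeholders, not actual Mu2e channel names, and pyepics is just one common client; it is not claimed to be the interface Mu2e itself uses.

```python
# Minimal EPICS channel-access sketch (assumes pyepics is installed and an
# IOC is serving the PVs; the PV names below are hypothetical examples).
import epics

# Read a monitored value, e.g. a hypothetical crate temperature PV.
temperature = epics.caget("MU2E:DAQ:CRATE01:TEMP")
print("crate temperature:", temperature)

# Write a setpoint, e.g. a hypothetical high-voltage demand PV.
epics.caput("MU2E:CAL:HV:CH00:VSET", 1450.0, wait=True)

# Subscribe to value changes; monitoring/alarming is built on such callbacks.
def on_change(pvname=None, value=None, **kw):
    print(pvname, "->", value)

pv = epics.PV("MU2E:DAQ:CRATE01:TEMP", callback=on_change)
```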
DOI: 10.1103/physrevlett.83.917
1999
Cited 34 times
Measurement of the Decay KL → π0γγ
We report on a new measurement of the decay KL → π0γγ by the KTeV experiment at Fermilab. We determine the KL → π0γγ branching ratio to be (1.68 ± 0.07 ± 0.08) × 10^-6. Our data show the first evidence for a low-mass γγ signal as predicted by recent O(p^6) chiral perturbation calculations which include vector meson exchange contributions. From our data, we extract a value for the effective vector coupling aV = −0.72 ± 0.05 ± 0.06.
DOI: 10.1103/physrevlett.87.132001
2001
Cited 29 times
First Measurement of Form Factors of the Decay Ξ0 → Σ+e−ν̄e
We present the first measurement of the form factor ratios g1/f1 (direct axial-vector to vector), g2/f1 (second-class current), and f2/f1 (weak magnetism) for the decay Ξ0 → Σ+e−ν̄e using the KTeV (E799) beam line and detector at Fermilab. From the Σ+ polarization measured with the decay Σ+ → pπ0 and the e−–ν̄ correlation, we measure g1/f1 to be 1.32 +0.21/−0.17 (stat) ± 0.05 (syst), assuming the SU(3)f (flavor) values for g2/f1 and f2/f1. Our results are all consistent with exact SU(3)f symmetry.
DOI: 10.1103/physrevlett.84.2593
2000
Cited 29 times
Search for the Weak Decay of a Lightly Bound H0 Dibaryon
We present results of a search for a new form of hadronic matter, a six-quark dibaryon state called the H0, a state predicted to exist in several theoretical models. Analyzing data collected by experiment E799-II at Fermilab, we searched for the decay H0 → Λpπ− and found no candidate events. We place an upper limit on [B(H0 → Λpπ−) dσ(H)/dΩ]/(dσ(Ξ)/dΩ) and, in the context of published models, exclude the region of lightly bound mass states just below the ΛΛ mass threshold, 2.194 < M(H) < 2.231 GeV/c2, with lifetimes from approximately 5 × 10^-10 s to approximately 1 × 10^-3 s.
DOI: 10.1103/physrevlett.79.4083
1997
Cited 29 times
Search for Light Gluinos via the Spontaneous Appearance of π+π- Pairs with an 800 GeV/c Proton Beam at Fermilab
We searched for the appearance of π+π− pairs with invariant mass ≥ 648 MeV/c2 in a neutral beam. Such an observation could signify the decay of a long-lived light neutral particle. We find no evidence for this decay. Our null result severely constrains the existence of an R0 hadron, which is the lightest bound state of a gluon and a light gluino (g̃g), and thereby also the existence of a light gluino. Depending on the photino mass, we exclude the R0 in the mass and lifetime ranges of 1.2–4.6 GeV/c2 and 2×10−10 to 7×10−4 s, respectively.
DOI: 10.1103/physrevlett.86.397
2001
Cited 24 times
Search for the Decay KL → π0e+e−
We report on a search for the decay KL → π0e+e− carried out by the KTeV/E799 experiment at Fermilab. This decay is expected to have a significant CP-violating contribution, and the measurement of its branching ratio could support the Cabibbo-Kobayashi-Maskawa mechanism for CP violation or could point to new physics. Two events were observed in the 1997 data with an expected background of 1.06 ± 0.41 events, and we set an upper limit B(KL → π0e+e−) < 5.1 × 10^-10 at the 90% confidence level.
DOI: 10.1088/1742-6596/331/2/022021
2011
Cited 13 times
The data-acquisition system of the CMS experiment at the LHC
The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given. We discuss the performance and operational experience from the first months of LHC physics data taking.
DOI: 10.1088/1748-0221/8/12/c12039
2013
Cited 12 times
10 Gbps TCP/IP streams from the FPGA for the CMS DAQ eventbuilder network
For the upgrade of the DAQ of the CMS experiment in 2013/2014 an interface between the custom detector Front End Drivers (FEDs) and the new DAQ eventbuilder network has to be designed. For a loss-less data collection from more than 600 FEDs a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. We present the hardware challenges and protocol modifications made to TCP in order to simplify its FPGA implementation together with a set of performance measurements which were carried out with the current prototype.
DOI: 10.1109/tns.2015.2426216
2015
Cited 12 times
The New CMS DAQ System for Run-2 of the LHC
The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for a reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder with a throughput of ~4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT accepted events and monitoring meta-data are stored into a global file system. This paper presents the requirements, technical choices, and performance of the new system.
DOI: 10.1088/1742-6596/513/1/012042
2014
Cited 11 times
10 Gbps TCP/IP streams from the FPGA for High Energy Physics
The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For a loss-less data collection from the FEDs a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the TCP hardware implementation complexity the DAQ group developed a simplified and unidirectional, but RFC 793 compliant, version of the TCP protocol. This allows the use of a PC with the standard Linux TCP/IP stack as a receiver. We present the challenges and protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
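Because the FPGA sender speaks a simplified but RFC 793 compliant TCP, the receiving side can be an ordinary Linux host using the standard socket API, as the abstract notes. A minimal sketch of such a stream receiver follows; the port number, chunk size and framing are illustrative assumptions, not the actual CMS configuration.

```python
# Minimal TCP stream receiver sketch: accepts one connection and drains the
# byte stream into fixed-size chunks. Port and buffer size are illustrative.
import socket

LISTEN_PORT = 10000          # hypothetical port, not the real FED/FEROL port
RECV_CHUNK = 64 * 1024       # read in 64 kB chunks

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", LISTEN_PORT))
srv.listen(1)

conn, peer = srv.accept()            # the FPGA sender opens the connection
print("sender connected from", peer)

total = 0
while True:
    chunk = conn.recv(RECV_CHUNK)    # the standard Linux TCP stack does the rest
    if not chunk:                    # sender closed the (unidirectional) stream
        break
    total += len(chunk)              # a real receiver would hand the bytes
                                     # to the event builder here
print("received", total, "bytes")
conn.close()
srv.close()
```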
DOI: 10.1016/0370-2693(95)00022-d
1995
Cited 24 times
Observation of hard scattering in photoproduction events with a large rapidity gap at HERA
Events with a large rapidity gap and total transverse energy greater than 5 GeV have been observed in quasi-real photoproduction at HERA with the ZEUS detector. The distribution of these events as a function of the γp centre of mass energy is consistent with diffractive scattering. For total transverse energies above 12 GeV, the hadronic final states show predominantly a two-jet structure with each jet having a transverse energy greater than 4 GeV. For the two-jet events, little energy flow is found outside the jets. This observation is consistent with the hard scattering of a quasi-real photon with a colourless object in the proton.
2000
Cited 23 times
Models of Networked Analysis at Regional Centres for LHC Experiments (MONARC). Phase 2 report.
DOI: 10.1103/physrevlett.83.2128
1999
Cited 22 times
Search for Light Gluinos via Decays Containing π+π− or π0 from a Neutral Hadron Beam at Fermilab
We report on two null searches, one for the spontaneous appearance of π+π− pairs, another for a single π0, consistent with the decay of a long-lived neutral particle into hadrons and an unseen neutral particle. For the lowest-level gluon-gluino bound state, known as the R0, we exclude the decays R0 → π+π−γ̃ and R0 → π0γ̃ for the masses of R0 and γ̃ in the theoretically allowed range. In the most interesting R0 mass range, ≤ 3 GeV/c2, we exclude R0 lifetimes from 3 × 10^-10 s to as high as 10^-3 s, assuming perturbative QCD production for the R0.
DOI: 10.1103/physrevlett.80.4123
1998
Cited 21 times
Measurement of the Branching Fraction of the Decay KL → π+π−e+e−
We report the first branching fraction measurement of the decay KL → π+π−e+e−. With a sample of 46 candidates, and an expected background level of 9.4 events, the branching fraction is determined to be B(KL → π+π−e+e−) = [3.2 ± 0.6 (stat) ± 0.4 (syst)] × 10^-7. This measurement was carried out as part of the Fermilab KTeV (E799-II) experiment and is in good agreement with the expectations from the mechanisms of direct emission and inner bremsstrahlung.
DOI: 10.1103/physrevlett.87.071801
2001
Cited 19 times
Measurement of the Branching Ratio and Form Factor of KL → μ+μ−γ
We report on the analysis of the rare decay KL → μ+μ−γ using the 1997 data from the KTeV experiment at Fermilab. A total of 9327 candidate events are observed with 2.4% background, representing a factor of 40 increase in statistics over the current world sample. We find that B(KL → μ+μ−γ) = (3.62 ± 0.04 (stat) ± 0.08 (syst)) × 10^-7. The form factor parameter αK* is measured to be αK* = −0.160 +0.026/−0.028. In addition, we make the first measurement of the parameter α from the D'Ambrosio-Isidori-Portolés form factor, finding α = −1.54 ± 0.10. In that model, this α measurement limits the Cabibbo-Kobayashi-Maskawa parameter ρ > −0.2.
DOI: 10.1109/nssmic.2015.7581984
2015
Cited 8 times
The CMS Timing and Control Distribution System
The Compact Muon Solenoid (CMS) experiment operating at the CERN (European Laboratory for Nuclear Physics) Large Hadron Collider (LHC) is in the process of upgrading several of its detector systems. Adding more individual detector components brings the need to test and commission those components separately from existing ones so as not to compromise physics data-taking. The CMS Trigger, Timing and Control (TTC) system had reached its limits in terms of the number of separate elements (partitions) that could be supported. A new Timing and Control Distribution System (TCDS) has been designed, built and commissioned in order to overcome this limit. It also brings additional functionality to facilitate parallel commissioning of new detector elements. The new TCDS system and its components will be described and results from the first operational experience with the TCDS in CMS will be shown.
DOI: 10.1103/physrevd.43.2787
1991
Cited 18 times
Photoproduction of an isovector ρπ state at 1775 MeV
Evidence is presented for the charge-exchange photoproduction, in two distinct reactions, of an isovector ρπ state of mass ∼1775 MeV. Results of an analysis of the decay angular distributions are also presented, from which it is concluded that JP = 1−, 2−, or 3+.
DOI: 10.1103/physrevlett.86.5425
2001
Cited 17 times
Measurements of the Rare Decay KL → e+e−e+e−
We observe 441 KL → e+e−e+e− candidate events with a background of 4.2 events and measure B(KL → e+e−e+e−) = [3.72 ± 0.18 (stat) ± 0.23 (syst)] × 10^-8 in the KTeV/E799II experiment at Fermilab. Using the distribution of the angle between the planes of the e+e− pairs, we measure the CP parameters βCP = −0.23 ± 0.09 (stat) ± 0.02 (syst) and γCP = −0.09 ± 0.09 (stat) ± 0.02 (syst). We also present the first detailed study of the e+e− invariant mass spectrum in this decay mode.
DOI: 10.1103/physrevlett.86.761
2001
Cited 16 times
Study of the K0L → π+π−γ Direct Emission Vertex
We have performed studies of the K0L → π+π−γ direct emission (DE) and inner bremsstrahlung (IB) vertices, based on data collected by KTeV during the 1996 Fermilab fixed-target run. We find a1/a2 = −0.737 ± 0.034 GeV2 for the DE form-factor parameter in the ρ-propagator parametrization, and report on fits of the form factor to linear and quadratic functions as well. We concurrently measure Γ(K0L → π+π−γ, E*γ > 20 MeV)/Γ(K0L → π+π−) = (20.8 ± 0.3) × 10^-3, and a K0L → π+π−γ DE/(DE+IB) branching ratio of 0.683 ± 0.011.
DOI: 10.1103/physrevlett.89.211801
2002
Cited 15 times
Search for the KL → π0π0e+e− Decay in the KTeV Experiment
The recent discovery of a large CP-violating asymmetry in the KL → π+π−e+e− mode has prompted us to search for the associated KL → π0π0e+e− decay mode in the KTeV-E799 experiment at Fermilab. In 2.7 × 10^11 KL decays, one candidate event has been observed with an expected background of 0.3 events, resulting in an upper limit for the KL → π0π0e+e− branching ratio of 6.6 × 10^-9 at the 90% C.L.
DOI: 10.1103/physrevlett.95.081801
2005
Cited 14 times
Observation of the Decay Ξ0 → Σ+μ−ν̄μ
The Ξ0 muon semileptonic decay has been observed for the first time, with nine identified events, using the KTeV beam line and detector at Fermilab. The decay is normalized to the Ξ0 beta decay mode and yields a value for the ratio of decay rates Γ(Ξ0 → Σ+μ−ν̄μ)/Γ(Ξ0 → Σ+e−ν̄e) of [1.8 +0.7/−0.5 (stat) ± 0.2 (syst)] × 10^-2. This is in agreement with the SU(3) flavor-symmetric quark model.
DOI: 10.1088/1742-6596/119/2/022010
2008
Cited 9 times
The run control system of the CMS experiment
The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control and monitor the experiment during data-taking the Run Control system was developed. This paper describes the architecture and the technology used to implement the Run Control system, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.
DOI: 10.1088/1742-6596/219/2/022042
2010
Cited 7 times
Monitoring the CMS data acquisition system
The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that detection of deviations from a specified behaviour is supported in a timely manner, which is a prerequisite in order to take corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting up data collection into regions of collectors. An implementation following this scheme is deployed as the monitoring infrastructure of the CMS experiment at the Large Hadron Collider. All services in this distributed data acquisition system provide standard web service interfaces via XML, SOAP and HTTP [15,22]. Continuing on this path, we adopted WS-* standards, implementing a monitoring system layered on top of the W3C standards stack. We designed a load-balanced publisher/subscriber system with the ability to include high-speed protocols [10,12] for efficient data transmission [11,13,14] and serving data in multiple data formats.
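To make the hierarchical-collection idea concrete, here is a minimal sketch of the regional tier only: an HTTP endpoint that accepts metric documents pushed by leaf collectors and keeps the latest sample per host. The endpoint, port and payload format are hypothetical illustrations; the real system uses web-service (WS-*) eventing rather than this ad-hoc interface.

```python
# Regional-aggregator sketch: accepts metric documents pushed by leaf
# collectors over HTTP and keeps the most recent document per host in memory.
# The endpoint, port and document fields are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

latest = {}   # host name -> most recent metrics document

class CollectHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        doc = json.loads(self.rfile.read(length))
        latest[doc.get("host", "unknown")] = doc   # overwrite older samples
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), CollectHandler).serve_forever()
```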
DOI: 10.1088/1742-6596/219/2/022038
2010
Cited 7 times
The CMS event builder and storage system
The CMS event builder assembles events accepted by the first level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of 2 stages: an initial pre-assembly reducing the number of fragments by one order of magnitude and a final assembly by several independent readout builder (RU-builder) slices. The RU-builder is based on 3 separate services: the buffering of event fragments during the assembly, the event assembly, and the data flow manager. A further component is responsible for handling events accepted by the high-level trigger: the storage manager (SM) temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the SM to online monitoring clients. We discuss the operational experience from the first months of reading out cosmic ray data with the complete CMS detector.
DOI: 10.1088/1742-6596/396/1/012008
2012
Cited 7 times
The CMS High Level Trigger System: Experience and Future Development
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
DOI: 10.1109/tns.2012.2199331
2012
Cited 6 times
First Operational Experience With a High-Energy Physics Run Control System Based on Web Technologies
Run control systems of modern high-energy particle physics experiments have requirements similar to those of today's Internet applications. The Compact Muon Solenoid (CMS) collaboration at CERN's Large Hadron Collider (LHC) therefore decided to build the run control system for its detector based on web technologies. The system is composed of Java Web Applications distributed over a set of Apache Tomcat servlet containers that connect to a database back-end. Users interact with the system through a web browser. The present paper reports on the successful scaling of the system from a small test setup to the production data acquisition system that comprises around 10,000 applications running on a cluster of about 1600 hosts. We report on operational aspects during the first phase of operation with colliding beams including performance, stability, integration with the CMS Detector Control System and tools to guide the operator.
DOI: 10.1109/tns.2013.2282340
2013
Cited 6 times
A Comprehensive Zero-Copy Architecture for High Performance Distributed Data Acquisition Over Advanced Network Technologies for the CMS Experiment
This paper outlines a software architecture where zero-copy operations are used comprehensively at every processing point from the Application layer to the Physical layer. The proposed architecture is being used during feasibility studies on advanced networking technologies for the CMS experiment at CERN. The design relies on a homogeneous peer-to-peer message passing system, which is built around memory pool caches allowing efficient and deterministic latency handling of messages of any size through the different software layers. In this scheme portable distributed applications can be programmed to process input to output operations by mere pointer arithmetic and DMA operations only. The approach combined with the open fabric protocol stack (OFED) allows one to attain near wire-speed message transfer at application level. The architecture supports full portability of user applications by encapsulating the protocol details and network into modular peer transport services whereas a transparent replacement of the underlying protocol facilitates deployment of several network technologies like Gigabit Ethernet, Myrinet, Infiniband, etc. Therefore, this solution provides a protocol-independent communication framework and prevents having to deal with potentially difficult couplings when the underlying communication infrastructure is changed. We demonstrate the feasibility of this approach by giving efficiency and performance measurements of the software in the context of the CMS distributed event building studies.
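The production system is C++ built on memory-pool caches and DMA; purely as an illustration of the zero-copy idea itself (stages exchange references into one pre-allocated buffer instead of copying payload bytes between layers), here is a small Python sketch using memoryview. Names and sizes are hypothetical and nothing here reflects the actual CMS code.

```python
# Zero-copy illustration: stages exchange memoryview slices into a single
# pre-allocated pool buffer, so no payload bytes are copied between layers.
POOL_SIZE = 1 << 20
pool = bytearray(POOL_SIZE)          # stands in for a memory-pool cache
pool_view = memoryview(pool)

def receive_into(offset, payload):
    """'Receiver' stage: writes incoming bytes once into the pool."""
    pool_view[offset:offset + len(payload)] = payload
    return pool_view[offset:offset + len(payload)]   # a reference, not a copy

def forward(fragment):
    """'Sender' stage: consumes the fragment without copying it."""
    return len(fragment)             # e.g. hand the same region to a DMA engine

frag = receive_into(0, b"event-fragment-bytes")
assert forward(frag) == len(b"event-fragment-bytes")
```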
DOI: 10.22323/1.213.0190
2015
Cited 6 times
Boosting Event Building Performance using Infiniband FDR for the CMS Upgrade
As part of the CMS upgrade during CERN's shutdown period (LS1), the CMS data acquisition system is incorporating Infiniband FDR technology to boost event-building performance for operation from 2015 onwards. Infiniband promises to provide a substantial increase in data transmission speeds compared to the older 1GE network used during the 2009-2013 LHC run. Several options exist for end-user developers when choosing a foundation for software upgrades, including the uDAPL (DAT Collaborative) and Infiniband verbs libraries (OFED). Due to advances in technology, the CMS data acquisition system will be able to achieve the required throughput of 100 kHz with increased event sizes while downsizing the number of nodes by using a combination of 10GE, 40GE and 56 Gb Infiniband FDR. This paper presents the analysis and results of a comparison between GE and Infiniband solutions as well as a look at how they integrate into an event building architecture, while preserving the scalability, efficiency and deterministic latency expected in a high-end data acquisition network.
DOI: 10.22323/1.370.0111
2020
Cited 6 times
First measurements with the CMS DAQ and Timing Hub prototype-1
The DAQ and Timing Hub is an ATCA hub board designed for the Phase-2 upgrade of the CMS experiment. In addition to providing high-speed Ethernet connectivity to all back-end boards, it forms the bridge between the sub-detector electronics and the central DAQ, timing, and trigger control systems. One important requirement is the distribution of several high-precision, phase-stable, and LHC-synchronous clock signals for use by the timing detectors. The current paper presents first measurements performed on the initial prototype, with a focus on clock quality. It is demonstrated that the current design provides adequate clock quality to satisfy the requirements of the Phase-2 CMS timing detectors.
DOI: 10.1103/physrevlett.83.922
1999
Cited 14 times
Measurement of the Branching Ratio of π0 → e+e−
The branching ratio of the rare decay π0 → e+e− has been measured in E799-II, a rare kaon decay experiment using the KTeV detector at Fermilab. We observed 275 candidate π0 → e+e− events, with an expected background of 21.4 ± 6.2 events which includes the contribution from Dalitz decays. We measured B[π0 → e+e−, (m(e+e−)/m(π0))^2 > 0.95] = (6.09 ± 0.40 ± 0.24) × 10^-8, where the first error is statistical and the second systematic. This result is the first significant observation of the excess rate for this decay above the unitarity lower bound.
DOI: 10.1103/physrevlett.86.3239
2001
Cited 11 times
Measurement of the Branching Ratio and Asymmetry of the Decay Ξ0 → Σ0γ
We have studied the rare weak radiative hyperon decay Ξ0 → Σ0γ in the KTeV experiment at Fermilab. We have identified 4045 signal events over a background of 804 events. The dominant Ξ0 → Λπ0 decay, which was used for normalization, is the only important background source. An analysis of the acceptance of both modes yields a branching ratio of B(Ξ0 → Σ0γ)/B(Ξ0 → Λπ0) = (3.34 ± 0.05 ± 0.09) × 10^-3. By analyzing the final-state decay distributions, we have also determined that the Σ0 emission asymmetry parameter for this decay is α(ΞΣ) = −0.63 ± 0.09.
2007
Cited 7 times
Energy Response and Longitudinal Shower Profiles Measured in CMS HCAL and Comparison With Geant4
DOI: 10.1088/1742-6596/513/1/012025
2014
Cited 4 times
Prototype of a File-Based High-Level Trigger in CMS
The DAQ system of the CMS experiment at the LHC is upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ~1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ~50 builder units (BUs). Each BU writes the raw events at ~2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring meta-data back to disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.
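A minimal sketch of the filter-unit side of the file-based scheme described above: read raw-event files from the shared file system, keep a small fraction of events, and write the accepted events plus a small JSON monitoring document. The directory layout, file naming and the toy selection are illustrative assumptions; a real FU runs the HLT algorithms instead.

```python
# Filter-unit sketch for a file-based HLT. Paths, file naming and the
# "selection" are hypothetical; a real FU runs the HLT reconstruction.
import json
import os

INPUT_DIR = "/fff/ramdisk/run000001"     # hypothetical BU output directory
OUTPUT_DIR = "/fff/output/run000001"     # hypothetical FU output directory

def select(event_bytes):
    # Stand-in for the HLT decision; accepts roughly O(1%) of events.
    return hash(event_bytes) % 100 == 0

def process_file(path):
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    accepted = 0
    total = 0
    out_path = os.path.join(OUTPUT_DIR, os.path.basename(path) + ".accepted")
    with open(path, "rb") as fin, open(out_path, "wb") as fout:
        for event in fin:                # pretend one event per line
            total += 1
            if select(event):
                accepted += 1
                fout.write(event)
    # Small JSON metadata document, aggregated later for monitoring.
    meta = {"input": os.path.basename(path), "events": total, "accepted": accepted}
    with open(out_path + ".jsn", "w") as fmeta:
        json.dump(meta, fmeta)

if __name__ == "__main__":
    for name in sorted(os.listdir(INPUT_DIR)):
        if name.endswith(".raw"):
            process_file(os.path.join(INPUT_DIR, name))
```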
DOI: 10.1088/1742-6596/664/8/082036
2015
Cited 4 times
A scalable monitoring for the CMS Filter Farm based on elasticsearch
A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm thus providing natural horizontal scaling. A separate "central" es cluster is used to collect and index aggregated information. The fine-grained information, all the way to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
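Since elasticsearch is schema-free and speaks JSON over HTTP, indexing one monitoring document is a single REST call. A minimal sketch follows; the host, index name and document fields are illustrative assumptions, not the actual CMS layout, and the `_doc` endpoint reflects recent elasticsearch versions rather than the one deployed at the time.

```python
# Index one JSON monitoring document into a (leaf) elasticsearch cluster.
# Host, index name and fields are hypothetical placeholders.
import json
import time
import urllib.request

ES_URL = "http://localhost:9200/hltmon/_doc"   # hypothetical index "hltmon"

doc = {
    "host": "fu-c2e34-12",            # hypothetical filter-unit hostname
    "run": 123456,
    "ts": int(time.time() * 1000),
    "events_processed": 100000,
    "events_accepted": 1000,
}

req = urllib.request.Request(ES_URL, data=json.dumps(doc).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print("indexed, HTTP status:", resp.status)
```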
DOI: 10.1103/physrevd.64.112004
2001
Cited 10 times
New measurement of the radiative Ke3 branching ratio and photon spectrum
We present a new measurement of the branching ratio of the decay KL → πeνγ (Ke3γ) with respect to KL → πeν (Ke3), and the first study of the photon energy spectrum in this decay. We find BR(Ke3γ, E*γ > 30 MeV, θ*eγ > 20°)/BR(Ke3) = 0.908 ± 0.008 (stat) +0.013/−0.012 (syst). Our measurement of the spectrum is consistent with inner bremsstrahlung as the only source of photons in Ke3γ.
DOI: 10.1088/1742-6596/396/1/012023
2012
Cited 4 times
Status of the CMS Detector Control System
The Compact Muon Solenoid (CMS) is a CERN multi-purpose experiment that exploits the physics of the Large Hadron Collider (LHC). The Detector Control System (DCS) is responsible for ensuring the safe, correct and efficient operation of the experiment, and has contributed to the recording of high quality physics data. The DCS is programmed to automatically react to the LHC operational mode. CMS sub-detectors' bias voltages are set depending on the machine mode and particle beam conditions. An operator provided with a small set of screens supervises the system status summarized from the approximately 6M monitored parameters. Using the experience of nearly two years of operation with beam the DCS automation software has been enhanced to increase the system efficiency by minimizing the time required by sub-detectors to prepare for physics data taking. From the infrastructure point of view the DCS will be subject to extensive modifications in 2012. The current rack mounted control PCs will be replaced by a redundant pair of DELL Blade systems. These blade servers are a high-density modular solution that incorporates servers and networking into a single chassis that provides shared power, cooling and management. This infrastructure modification associated with the migration to blade servers will challenge the DCS software and hardware factorization capabilities. The on-going studies for this migration together with the latest modifications are discussed in the paper.
DOI: 10.1109/tns.2015.2409898
2015
Cited 3 times
Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition
TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multi-core era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running in a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations, that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
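As a minimal illustration of the CPU-affinity point made above (only one of the three affinities the abstract mentions), a process can be pinned to a chosen set of cores before servicing sockets. The core IDs are hypothetical; on a real host they would be the cores of the NUMA node local to the NIC, which Linux exposes for example under /sys/class/net/<interface>/device/numa_node.

```python
# Pin this process to a chosen set of CPUs before doing socket I/O (Linux only).
# The core list is a hypothetical example, intersected with what is available.
import os

preferred = {0, 2, 4, 6}                         # illustrative core IDs
available = os.sched_getaffinity(0)              # CPUs this process may use
local_node_cpus = (preferred & available) or available

os.sched_setaffinity(0, local_node_cpus)         # 0 = the calling process
print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
# Sockets created and serviced by this process will now do their user-space
# work on these cores; matching IRQ and memory affinity is configured separately.
```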
DOI: 10.1088/1742-6596/664/8/082009
2015
Cited 3 times
Online data handling and storage at the CMS experiment
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources, produced at an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
DOI: 10.1109/rtc.2016.7543164
2016
Cited 3 times
Performance of the new DAQ system of the CMS experiment for run-2
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of more than 100 GB/s to the High-Level Trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at an output rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013-2014. The motivation for this upgrade was twofold. Firstly, the compute nodes, networking and storage infrastructure were reaching the end of their lifetimes. Secondly, in order to maintain physics performance with higher LHC luminosities and increasing event pileup, a number of sub-detectors are being upgraded, increasing the number of readout channels as well as the required throughput, and replacing the off-detector readout electronics with a MicroTCA-based DAQ interface. The new DAQ architecture takes advantage of the latest developments in the computing industry. For data concentration 10/40 Gbit/s Ethernet technologies are used, and a 56 Gbit/s Infiniband FDR CLOS network (total throughput ≈ 4 Tbit/s) has been chosen for the event builder. The upgraded DAQ - HLT interface is entirely file-based, essentially decoupling the DAQ and HLT systems. The fully-built events are transported to the HLT over 10/40 Gbit/s Ethernet via a network file system. The collection of events accepted by the HLT and the corresponding metadata are buffered on a global file system before being transferred off-site. The monitoring of the HLT farm and the data-taking performance is based on the Elasticsearch analytics tool. This paper presents the requirements, implementation, and performance of the system. Experience is reported on the first year of operation with LHC proton-proton runs as well as with the heavy ion lead-lead runs in 2015.
DOI: 10.22323/1.398.0823
2022
Online DAQ and slow control interface for the Mu2e experiment
The Mu2e experiment at the Fermilab Muon Campus will search for the coherent neutrinoless conversion of a muon into an electron in the field of an aluminum nucleus with a sensitivity improvement by a factor of 10,000 over existing limits. The Mu2e Trigger and Data Acquisition System (TDAQ) uses otsdaq as the online Data Acquisition System (DAQ) solution. Developed at Fermilab, otsdaq integrates both the artdaq DAQ and the art analysis frameworks for event transfer, filtering, and processing. otsdaq is an online DAQ software suite with a focus on flexibility and scalability and provides a multi-user, web-based, interface accessible through a web browser. The data stream from the detector subsystems is read by a software filter algorithm that selects events which are combined with the data flux coming from a Cosmic Ray Veto System. The Detector Control System (DCS) has been developed using the Experimental Physics and Industrial Control System (EPICS) open source platform for monitoring, controlling, alarming, and archiving. The DCS System has been integrated into otsdaq. A prototype of the TDAQ and the DCS systems has been built at Fermilab's Feynman Computing Center. In this paper, we report on the progress of the integration of this prototype in the online otsdaq software.
DOI: 10.1103/physrevd.64.012003
2001
Cited 8 times
Measurement of the branching ratio of KL → e+e−γγ
We report on a study of the decay KL→e+e−γγ carried out as a part of the KTeV/E799 experiment at Fermilab. The 1997 data yielded a sample of 1543 events, including an expected background of 56±8 events. An effective form factor was determined from the observed distribution of the e+e− invariant mass. Using this form factor in the calculation of the detector acceptance, the branching ratio was measured to be B(KL→e+e−γγ, Eγ* > 5 MeV) = (5.84±0.15(stat)±0.32(syst))×10−7.
DOI: 10.1088/1742-6596/119/2/022011
2008
Cited 4 times
High level trigger configuration and handling of trigger tables in the CMS filter farm
The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data in 2008. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read out and assembled by an Event Builder. The High Level Trigger (HLT) employs a set of sophisticated software algorithms, to analyze the complete event information, and further reduce the accepted event rate for permanent storage and analysis. This paper describes the design and implementation of the HLT Configuration Management system. First experiences with commissioning of the HLT system are also reported.
DOI: 10.1088/1742-6596/513/1/012014
2014
Cited 3 times
The new CMS DAQ system for LHC operation after 2014 (DAQ2)
The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. We present the design of the 2nd generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR Infiniband and exploitation of multicore CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file-based.
DOI: 10.1088/1742-6596/396/1/012007
2012
Cited 3 times
Operational experience with the CMS Data Acquisition System
The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources and 10^8 electronic channels. An overview of the architecture and design of the hardware and software of the DAQ system is given. We report on the performance and operational experience of the DAQ and its Run Control System in the first two years of collider runs of the LHC, both in proton-proton and Pb-Pb collisions. We present an analysis of the current performance, its limitations, and the most common failure modes and discuss the ongoing evolution of the HLT capability needed to match the luminosity ramp-up of the LHC.
DOI: 10.1088/1742-6596/513/1/012031
2014
Cited 3 times
Automating the CMS DAQ
We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.
DOI: 10.1103/physrevlett.76.4312
1996
Cited 9 times
First Evidence for the Decay KL → e+e−μ+μ−
We present the first evidence for the decay KL → e+e−μ+μ− based on the observation of one event with an estimated background of 0.067 +0.057/−0.025 events. We determine the branching ratio to be B(KL → e+e−μ+μ−) = (2.9 +6.7/−2.4) × 10⁻⁹. In addition, we set a 90% confidence upper limit on the combined branching ratio for the lepton flavor violating decays KL → e∓e∓μ±μ± of B(KL → e∓e∓μ±μ±) < 6.1 × 10⁻⁹, assuming a uniform phase space distribution.
DOI: 10.1103/physrevlett.89.072001
2002
Cited 7 times
Radiative Decay Width Measurements of Neutral Kaon Excitations Using the Primakoff Effect
We use K(L)'s in the 100-200 GeV energy range to produce 147 candidate events of the axial vector pair K1(1270)-K1(1400) in the nuclear Coulomb field of a Pb target and determine the radiative widths Gamma(K1(1400)-->K0+gamma)=280.8+/-23.2(stat)+/-40.4(syst) keV and Gamma(K1(1270)-->K0+gamma)=73.2+/-6.1(stat)+/-28.3(syst) keV. These first measurements appear to be lower than the quark-model predictions. We also place upper limits on the radiative widths for K(*)(1410) and K(*)(2)(1430) and find that the latter is vanishingly small in accord with SU(3) invariance in the naive quark model.
DOI: 10.1103/physrevd.62.112001
2000
Cited 7 times
Evidence for the decay KL → μ+μ−γγ
We have observed the decay KL → μ+μ−γγ at the KTeV experiment at Fermilab. This decay presents a formidable background to the search for new physics in KL → π0μ+μ−. The 1997 data yielded a sample of 4 signal events, with an expected background of 0.155 ± 0.081 events. The branching ratio is B(KL → μ+μ−γγ) = [10.4 +7.5/−5.9 (stat) ± 0.7 (syst)] × 10⁻⁹ for mγγ > 1 MeV/c², consistent with a QED calculation which predicts (9.1 ± 0.8) × 10⁻⁹.
DOI: 10.1088/1742-6596/219/2/022002
2010
Cited 3 times
The CMS online cluster: IT for a large data acquisition and control cluster
The CMS online cluster consists of more than 2000 computers running about 10000 application instances. These applications implement the control of the experiment, the event building, the high level trigger, the online database and the control of the buffering and transferring of data to the Central Data Recording at CERN. In this paper the IT solutions employed to fulfil the requirements of such a large cluster are reviewed. Details are given on the chosen network structure, configuration management system, monitoring infrastructure and on the implementation of the high availability for the services and infrastructure.
DOI: 10.1088/1742-6596/898/3/032019
2017
Cited 3 times
The CMS Data Acquisition - Architectures for the Phase-2 Upgrade
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 1034 cm−2s−1 (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS Detector will also undergo a major upgrade to prepare for the phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis.
DOI: 10.1051/epjconf/201921407017
2019
Cited 3 times
Experience with dynamic resource provisioning of the CMS online cluster using a cloud overlay
The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier1 Grid sites together. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and hence can access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also abstracts away the different types of machines and their underlying segmented networks. During the LHC technical stop periods, the HLT cloud is set to its static mode of operation where it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, once or more a day. For that, the cloud dynamically follows LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT reduces towards the end of the fill.
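As an illustration of the "follow the LHC beam state" logic described above, here is a minimal Python sketch. The beam-state query and the suspend/resume helpers are placeholders standing in for whatever accelerator-status and cloud (e.g. OpenStack) interfaces the real system uses; none of the names are taken from the CMS code.

```python
import time

# LHC machine modes during which the HLT farm must be fully available to the DAQ.
DATA_TAKING_STATES = {"RAMP", "FLAT TOP", "SQUEEZE", "ADJUST", "STABLE BEAMS"}

def get_lhc_beam_state() -> str:
    """Placeholder: the real system queries the accelerator status service."""
    return "STABLE BEAMS"

def suspend_vm(name: str) -> None:
    """Placeholder for a 'suspend server' call to the cloud API (e.g. OpenStack)."""
    print(f"suspending {name}")

def resume_vm(name: str) -> None:
    """Placeholder for a 'resume server' call to the cloud API."""
    print(f"resuming {name}")

def follow_beam_state(vms, poll_seconds=60, cycles=1):
    cloud_active = False
    for _ in range(cycles):                       # a real daemon would loop forever
        state = get_lhc_beam_state()
        if state in DATA_TAKING_STATES and cloud_active:
            for vm in vms:                        # give the nodes back to the DAQ/HLT
                suspend_vm(vm)
            cloud_active = False
        elif state not in DATA_TAKING_STATES and not cloud_active:
            for vm in vms:                        # inter-fill period: run grid jobs
                resume_vm(vm)
            cloud_active = True
        time.sleep(poll_seconds)

follow_beam_state(["vm-hlt-001", "vm-hlt-002"], poll_seconds=0, cycles=1)
```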
DOI: 10.1103/physrevd.41.3317
1990
Cited 7 times
Charge-exchange photoproduction of the a2−(1320) in association with Δ++(1232)
We examine the negative 3π final state produced in association with Δ++(1232) in the reaction γp → Δ++π+π−π− at an incident photon energy of 19.3 GeV. The most prominent enhancement in the 3π spectrum occurs at a mass and with a width consistent with the parameters of the a2(1320). This identification is confirmed by the various angular distributions. The a2 production cross section, corrected for efficiencies and alternate a2 decay modes, is 0.45 ± 0.05 μb.
DOI: 10.1103/physrevd.37.2379
1988
Cited 7 times
Production and decay properties of the ωπ0 state at 1250 MeV/c² produced by 20-GeV polarized …
The low-mass ωπ0 enhancement in γp → pωπ0 has been of considerable interest in the past due to its suggested vector nature and possible role in the spectroscopy of the ρ-meson radial recurrences. We have measured the properties of this photoproduced ωπ0 system using the SLAC Hybrid Facility. The experimental data consist of 306 785 usable hadronic events for which excellent γ-ray detection is provided by the large lead-glass array located behind the bubble chamber. The photon beam had a 52% polarization. We have examined in detail the angular distributions of the 274 events from the reaction γp → pωπ0. The angular distribution of the production plane relative to the polarization vector shows a structure inconsistent with an s-channel helicity-conserving process. We have extracted the moments of the decay angular distribution. Our data favor a B(1235) interpretation of the ωπ0 state over a vector-meson interpretation.
DOI: 10.1103/physrevlett.87.111802
2001
Cited 6 times
Branching Ratio Measurement of the Decay KL → e+e−μ+μ−
We have collected a 43 event sample of the decay K(L)-->e(+)e(-)mu(+)mu(-) with negligible backgrounds and measured its branching ratio to be (2.62+/-0.40+/-0.17)x10(-9). We see no evidence for CP violation in this decay. In addition, we set the 90% confidence upper limit on the combined branching ratios for the lepton flavor violating decays K(L)-->e(+/-)e(+/-)mu(-/+)mu(-/+) at B(K(L)-->e(+/-)e(+/-)mu(-/+)mu(-/+))< or =1.23x10(-10), assuming a uniform phase space distribution.
DOI: 10.1088/1742-6596/331/2/022010
2011
An Analysis of the Control Hierarchy Modelling of the CMS Detector Control System
The supervisory level of the Detector Control System (DCS) of the CMS experiment is implemented using Finite State Machines (FSM), which model the behaviours and control the operations of all the sub-detectors and support services. The FSM tree of the whole CMS experiment consists of more than 30,000 nodes. An analysis of a system of such size is a complex task but is a crucial step towards the improvement of the overall performance of the FSM system. This paper presents the analysis of the CMS FSM system using the micro Common Representation Language 2 (mCRL2) methodology. Individual mCRL2 models are obtained for the FSM systems of the CMS sub-detectors using the ASF+SDF automated translation tool. Different mCRL2 operations are applied to the mCRL2 models. An mCRL2 simulation tool is used to examine the system more closely. Visualization of a system based on the exploration of its state space is enabled with an mCRL2 tool. Requirements such as command and state propagation are expressed using modal mu-calculus and checked using a model checking algorithm. For checking local requirements such as endless loop freedom, the Bounded Model Checking technique is applied. This paper discusses these analysis techniques and presents the results of their application to the CMS FSM system.
DOI: 10.1109/rtc.2014.7097437
2014
The new CMS DAQ system for run-2 of the LHC
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gb/s Infiniband FDR Clos network has been chosen for the event builder with a throughput of ~4 Tb/s. The HLT processing is entirely file-based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. HLT-accepted events and monitoring metadata are collected hierarchically and stored in a global file system. This paper presents the requirements, technical choices, and performance of the new system.
DOI: 10.1016/j.nima.2022.167732
2023
Status of the data acquisition, trigger, and slow control systems of the Mu2e experiment at Fermilab
The Mu2e experiment at Fermilab will search for a coherent neutrinoless conversion of a muon into an electron in the field of an aluminum nucleus with a sensitivity improvement by a factor of 10,000 over existing limits. The Mu2e Trigger and Data Acquisition System (TDAQ) uses the otsdaq framework as the online Data Acquisition System (DAQ) solution. Developed at Fermilab, otsdaq integrates several framework components (an artdaq-based DAQ, art-based event processing, and an EPICS-based detector control system, DCS) and provides a uniform multi-user interface to its components through a web browser. Data streams from the Mu2e tracker and calorimeter are handled by the artdaq-based DAQ and processed by a one-level software trigger implemented within the art framework. Events accepted by the trigger have their data combined, post-trigger, with the separately read out data from the Mu2e Cosmic Ray Veto system. The foundation of the Mu2e DCS is EPICS, the Experimental Physics and Industrial Control System, an open-source platform for monitoring, controlling, alarming, and archiving. A prototype of the TDAQ and the DCS systems has been built and tested over the last three years at Fermilab's Feynman Computing Center, and now the production system installation is underway. This work presents their status and focuses on the installation plans and procedures for racks, workstations, network switches, gateway computers, DAQ hardware, slow controls implementation, and testing.
DOI: 10.1103/physrevlett.87.021801
2001
Cited 5 times
First Observation of the Decay KL → π0e+e−γ
We report on the first observation of the decay KL -> pi0 ee gamma by the KTeV E799 experiment at Fermilab. Based upon a sample of 48 events with an estimated background of 3.6 +/- 1.1 events, we measure the KL -> pi0 ee gamma branching ratio to be (2.34 +/- 0.35 +/- 0.13)x10^{-8}. Our data agree with recent O(p^6) calculations in chiral perturbation theory that include contributions from vector meson exchange through the parameter a_V. A fit was made to the KL -> pi0 ee gamma data for a_V with the result -0.67 +/- 0.21 +/- 0.12, which is consistent with previous results from KTeV.
DOI: 10.1088/1742-6596/396/1/012041
2012
High availability through full redundancy of the CMS detector controls system
The CMS detector control system (DCS) is responsible for controlling and monitoring the detector status and for the operation of all CMS sub-detectors and infrastructure. This is required to ensure safe and efficient data taking so that high-quality physics data can be recorded. The current system architecture is composed of more than 100 servers in order to provide the required processing resources. An optimization of the system software and hardware architecture is under development to ensure redundancy of all the controlled subsystems and to reduce any downtime due to hardware or software failures. The new optimized structure is based mainly on powerful and highly reliable blade servers and makes use of a fully redundant approach, guaranteeing high availability and reliability. The analysis of the requirements, the challenges, the improvements and the optimized system architecture as well as its specific hardware and software solutions are presented.
DOI: 10.1088/1742-6596/898/3/032020
2017
Performance of the CMS Event Builder
DOI: 10.1088/1748-0221/4/10/p10005
2009
Commissioning of the CMS High Level Trigger
The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger architecture only employs two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic-ray data-taking campaigns are discussed in detail. We conclude with a description of HLT operations with the first circulating LHC beams before the incident that occurred on 19 September 2008.
2016
Opportunistic usage of the CMS online cluster using a cloud overlay
2003
Cited 3 times
The CMS Event Builder
The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragments of 8 data sources. The second stage combines the 64 super-fragments into full events. This architecture allows installation of the second stage of the event builder in steps, with the overall throughput scaling linearly with the number of switches in the second stage. Possible implementations of the components of the event builder are discussed and the expected performance of the full event builder is outlined.
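The two-stage structure described above can be illustrated with a toy Python sketch: a first stage concatenates fragments from 8 data sources into a super-fragment, and a second stage combines 64 super-fragments into a full event (8 × 64 = 512 sources, matching the "about 500 data sources" quoted in the abstract). This is purely illustrative and ignores headers, ordering and error handling.

```python
# Toy illustration (not CMS code) of two-stage event building.
from typing import List

FRAGMENTS_PER_SUPERFRAGMENT = 8
SUPERFRAGMENTS_PER_EVENT = 64

def build_superfragment(fragments: List[bytes]) -> bytes:
    assert len(fragments) == FRAGMENTS_PER_SUPERFRAGMENT
    return b"".join(fragments)          # first-stage data concentration

def build_event(superfragments: List[bytes]) -> bytes:
    assert len(superfragments) == SUPERFRAGMENTS_PER_EVENT
    return b"".join(superfragments)     # second-stage full event assembly

# 512 sources -> 64 super-fragments -> 1 event
fragments = [bytes([i % 256]) * 2048 for i in range(512)]
superfragments = [
    build_superfragment(fragments[i:i + FRAGMENTS_PER_SUPERFRAGMENT])
    for i in range(0, len(fragments), FRAGMENTS_PER_SUPERFRAGMENT)
]
event = build_event(superfragments)
print(len(event))  # 512 * 2048 bytes
```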
DOI: 10.1088/1742-6596/331/2/022004
2011
Studies of future readout links for the CMS experiment
The Compact Muon Solenoid (CMS) experiment has developed an electrical implementation of the S-LINK64 extension (Simple Link Interface 64 bit) operating at 400 MB/s in order to read out the detector. This paper studies a possible replacement of the existing S-LINK64 implementation by an optical link based on 10 Gigabit Ethernet, in order to provide larger throughput, replace aging hardware and simplify the architecture. A prototype transmitter unit has been developed based on the Altera FPGA PCI Express Development Kit with custom firmware. A standard PC acted as the receiving unit. The data transfer has been implemented on a stack of protocols: RDP over IP over Ethernet. This allows the data to be received by standard hardware components such as PCs, network switches and NICs. The first tests proved that the basic exchange of packets between the transmitter and the receiving unit works. The paper summarizes the status of these studies.
DOI: 10.1088/1742-6596/664/8/082035
2015
A New Event Builder for CMS Run II
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
DOI: 10.1088/1742-6596/664/8/082033
2015
File-based data flow in the CMS Filter Farm
During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small "documents" using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These "files" can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
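A minimal sketch of the file-based bookkeeping idea, in Python. The file naming scheme and the JSON fields below are invented for illustration and are not the actual CMS DAQ conventions.

```python
# Write a data file plus a small JSON "document" that a downstream
# aggregation or monitoring step could pick up. Names and fields are invented.
import json
import os

def write_output_with_metadata(run: int, lumisection: int, stream: str,
                               events: list, outdir: str = ".") -> str:
    datafile = os.path.join(
        outdir, f"run{run:06d}_ls{lumisection:04d}_{stream}.dat")
    with open(datafile, "wb") as f:
        for ev in events:
            f.write(ev)

    meta = {
        "run": run,
        "lumisection": lumisection,
        "stream": stream,
        "n_events": len(events),
        "size_bytes": os.path.getsize(datafile),
    }
    # The JSON document is what gets aggregated elsewhere in the system.
    with open(datafile.replace(".dat", ".json"), "w") as f:
        json.dump(meta, f)
    return datafile

write_output_with_metadata(run=1, lumisection=42, stream="PhysicsStreamA",
                           events=[b"\x00" * 1024, b"\x01" * 2048])
```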
DOI: 10.18429/jacow-icalepcs2015-wepgf013
2015
Increasing Availability by Implementing Software Redundancy in the CMS Detector Control System
DOI: 10.1088/1742-6596/396/1/012038
2012
Distributed error and alarm processing in the CMS data acquisition system
The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activities. Error and alarm processing entails the notification, collection, storing and visualization of all exceptional conditions occurring in the highly distributed CMS online system using a uniform scheme. Alerts and reports are shown on-line by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that have occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used and deals with operational aspects during the first years of LHC operation. In particular we focus on performance, stability, and integration with the CMS sub-detectors.
DOI: 10.1109/rtc.2012.6418362
2012
Recent experience and future evolution of the CMS High Level Trigger System
The CMS experiment at the LHC uses a two-stage trigger system, with events flowing from the first level trigger at a rate of 100 kHz. These events are read out by the Data Acquisition system (DAQ), assembled in memory in a farm of computers, and finally fed into the high-level trigger (HLT) software running on the farm. The HLT software selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the 2010–2011 collider run is detailed, as well as the current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and CMS DAQ. The short- and medium-term evolution of the HLT software infrastructure is discussed, with future improvements aimed at supporting extensions of the HLT computing power, and addressing remaining performance and maintenance issues.
DOI: 10.1088/1742-6596/396/1/012039
2012
Upgrade of the CMS Event Builder
The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current computing and networking infrastructure will have reached the end of their lifetime. This paper presents design studies for an upgrade of the CMS event builder based on advanced networking technologies such as 10/40 Gb/s Ethernet and Infiniband. The results of performance measurements with small-scale test setups are shown.
DOI: 10.1088/1742-6596/219/2/022003
2010
Dynamic configuration of the CMS Data Acquisition cluster
The CMS Data Acquisition cluster, which runs around 10000 applications, is configured dynamically at run time. XML configuration documents determine what applications are executed on each node and over what networks these applications communicate. Through this mechanism the DAQ System may be adapted to the required performance, partitioned in order to perform (test-) runs in parallel, or re-structured in case of hardware faults. This paper presents the configuration procedure and the CMS DAQ Configurator tool, which is used to generate comprehensive configurations of the CMS DAQ system based on a high-level description given by the user. Using a database of configuration templates and a database containing a detailed model of hardware modules, data and control links, nodes and the network topology, the tool automatically determines which applications are needed, on which nodes they should run, and over which networks the event traffic will flow. The tool computes application parameters and generates the XML configuration documents and the configuration of the run-control system. The performance of the configuration procedure and the tool as well as operational experience during CMS commissioning and the first LHC runs are discussed.
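A hedged sketch of what generating such an XML configuration document could look like, in Python. The element, attribute and host names are invented; the real CMS configurations follow the XDAQ schema, which is not reproduced here.

```python
# Generate a tiny XML configuration stating which applications run on which
# nodes and over which network they communicate. Everything here is a
# placeholder schema for illustration only.
import xml.etree.ElementTree as ET

assignments = [
    {"host": "ru-node-01", "app": "EventFragmentBuilder", "network": "ib0"},
    {"host": "bu-node-29", "app": "BuilderUnit",          "network": "ib0"},
]

config = ET.Element("DAQConfiguration")
for a in assignments:
    node = ET.SubElement(config, "Node", host=a["host"], network=a["network"])
    ET.SubElement(node, "Application", classname=a["app"])

ET.indent(config)                       # Python 3.9+: pretty-print the tree
print(ET.tostring(config, encoding="unicode"))
```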
DOI: 10.1109/rtc.2010.5750362
2010
First operational experience with the CMS run control system
The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN's new Large Hadron Collider (LHC) controls the sub-detector and central data acquisition systems and the high-level trigger farm of the experiment. It manages around 10,000 applications that control custom hardware or handle the event building and the high-level trigger processing. The CMS Run Control System is a distributed Java system running on a set of Apache Tomcat servlet containers. Users interact with the system through a web browser. The paper presents the architecture of the CMS Run Control System and deals with operational aspects during the first phase of operation with colliding beams. In particular it focuses on performance, stability, integration with the CMS Detector Control System, integration with LHC status information and tools to guide the shifter.
DOI: 10.22323/1.270.0022
2017
Opportunistic usage of the CMS online cluster using a cloud overlay
After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to the grid when it isn’t used to its full capacity for data acquisition. This occurs during the maintenance periods when the LHC is non-operational, which corresponded to 117 days in 2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data processing by making the cluster accessible during the time between two physics collisions while the LHC and beams are being prepared. This is usually the case for a few hours every day, which would vastly increase the computing power available for data processing. Work has already been undertaken to provide this functionality, as an OpenStack cloud layer has been deployed as a minimal overlay that leaves the primary role of the cluster untouched. This overlay also abstracts the different hardware and networks that the cluster is composed of. The operation of the cloud (starting and stopping the virtual machines) is another challenge that has been overcome as the cluster has only a few hours spare during the aforementioned beam preparation. By improving the virtual image deployment and integrating the OpenStack services with the core services of the Data Acquisition on the CMS Online cluster it is now possible to start a thousand virtual machines within 10 minutes and to turn them off within seconds. This document will explain the architectural choices that were made to reach a fully redundant and scalable cloud, with a minimal impact on the running cluster configuration while giving a maximal segregation between the services. It will also present how to cold start 1000 virtual machines 25 times faster, using tools commonly utilised in all data centres.
DOI: 10.1103/physrevd.36.1
1987
Cited 4 times
Forward charge asymmetry in 20-GeV γp reactions
Fast forward particles photoproduced in 20-GeV interactions on a hydrogen target are shown to be preferentially positive, the asymmetry increasing with transverse momentum and Feynman x. Evidence is given that this effect is not due to forward-going target fragments. A model in which production from the photon of a forward-going spectator u is preferred over a ū, due to a higher probability for interactions of antiquarks with the proton constituents, is shown to be qualitatively consistent with the data.Received 7 July 1986DOI:https://doi.org/10.1103/PhysRevD.36.1©1987 American Physical Society
DOI: 10.22323/1.313.0075
2018
The FEROL40, a microTCA card interfacing custom point-to-point links and standard TCP/IP
In order to accommodate new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the microTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4 × 10 Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and experience from operation.
2007
The run control and monitoring system of the CMS experiment
The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control and monitor the experiment during data-taking the Run Control and Monitoring System (RCMS) was developed. This paper describes the architecture and the technology used to implement the RCMS, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.
DOI: 10.22323/1.343.0129
2019
Design and development of the DAQ and Timing Hub for CMS Phase-2
The CMS detector will undergo a major upgrade for Phase-2 of the LHC program, starting around 2026. The upgraded Level-1 hardware trigger will select events at a rate of 750 kHz. At an expected event size of 7.4 MB this corresponds to a data rate of up to 50 Tbit/s. Optical links will carry the signals from on-detector front-end electronics to back-end electronics in ATCA crates in the service cavern. A DAQ and Timing Hub board aggregates data streams from back-end boards over point-to-point links, provides buffering and transmits the data to the commercial data-to-surface network for processing and storage. This hub board is also responsible for the distribution of timing, control and trigger signals to the back-ends. This paper presents the current development towards the DAQ and Timing Hub and the design of the first prototype, to be used for validation and integration with the first back-end prototypes in 2019-2020.
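A quick consistency check of the figures quoted above (750 kHz Level-1 accept rate, 7.4 MB expected event size):

```python
# 750 kHz * 7.4 MB, expressed in Tbit/s.
event_rate_hz = 750e3
event_size_bytes = 7.4e6

throughput_bytes_per_s = event_rate_hz * event_size_bytes
throughput_tbit_per_s = throughput_bytes_per_s * 8 / 1e12

print(f"{throughput_tbit_per_s:.1f} Tbit/s")  # ~44 Tbit/s, i.e. "up to 50 Tbit/s"
```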
DOI: 10.1051/epjconf/201921401044
2019
Presentation layer of CMS Online Monitoring System
The Compact Muon Solenoid (CMS) is one of the experiments at the CERN Large Hadron Collider (LHC). The CMS Online Monitoring system (OMS) is an upgrade and successor to the CMS Web-Based Monitoring (WBM) system, which is an essential tool for shift crew members, detector subsystem experts, operations coordinators, and those performing physics analyses. The CMS OMS is divided into aggregation and presentation layers. Communication between layers uses RESTful JSON:API compliant requests. The aggregation layer is responsible for collecting data from heterogeneous sources, storage of transformed and pre-calculated (aggregated) values and exposure of data via the RESTful API. The presentation layer displays detector information via a modern, user-friendly and customizable web interface. The CMS OMS user interface is composed of a set of cutting-edge software frameworks and tools to display non-event data to any authenticated CMS user worldwide. The web interface tree-like component structure comprises (top-down): workspaces, folders, pages, controllers and portlets. A clear hierarchy gives the required flexibility and control for content organization. Each bottom element instantiates a portlet and is a reusable component that displays a single aspect of data, like a table, a plot, an article, etc. Pages consist of multiple different portlets and can be customized at runtime by using a drag-and-drop technique. This is how a single page can easily include information from multiple online sources. Different pages give access to a summary of the current status of the experiment, as well as convenient access to historical data. This paper describes the CMS OMS architecture, core concepts and technologies of the presentation layer.
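A hedged sketch of how a presentation-layer portlet might fetch data from the aggregation layer over a JSON:API-style REST endpoint. The URL, resource name and filter syntax below are hypothetical placeholders, not the actual CMS OMS API.

```python
# Fetch a page of resources from a JSON:API-style endpoint; all names invented.
import requests

BASE_URL = "https://example.cern.ch/oms/api/v1"   # placeholder endpoint

def fetch_lumisections(run_number: int):
    params = {
        "filter[run_number]": run_number,   # illustrative JSON:API-style filter
        "page[limit]": 100,
        "sort": "lumisection_number",
    }
    resp = requests.get(f"{BASE_URL}/lumisections", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]              # JSON:API puts resources under "data"

# for ls in fetch_lumisections(305112):
#     print(ls["attributes"])
```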
DOI: 10.1051/epjconf/201921401048
2019
A Scalable Online Monitoring System Based on Elasticsearch for Distributed Data Acquisition in Cms
The part of the CMS Data Acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed applications. To ensure successful data taking, these programs have to be constantly monitored in order to facilitate the timeliness of necessary corrections in case of any deviation from specified behaviour. A large number of diverse monitoring data samples are periodically collected from multiple sources across the network. Monitoring data are kept in memory for online operations and optionally stored on disk for post-mortem analysis. We present a generic, reusable solution based on an open source NoSQL database, Elasticsearch, which is fully compatible and non-intrusive with respect to the existing system. The motivation is to benefit from off-the-shelf software to facilitate the development, maintenance and support efforts. Elasticsearch provides failover and data redundancy capabilities as well as a programming language independent JSON-over-HTTP interface. The possibility of horizontal scaling matches the requirements of a DAQ monitoring system. The data load from all sources is balanced by redistribution over an Elasticsearch cluster that can be hosted on a computer cloud. In order to achieve the necessary robustness and to validate the scalability of the approach, the above monitoring solution currently runs in parallel with an existing in-house developed DAQ monitoring system.
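A minimal sketch of the JSON-over-HTTP ingestion path: a collector posts a small monitoring document into an Elasticsearch index. Host, index name and document fields are invented for illustration.

```python
# Index one monitoring document via the Elasticsearch REST API.
import datetime
import json
import requests

ES_URL = "http://localhost:9200"          # placeholder Elasticsearch endpoint

def push_metric(source: str, metric: str, value: float,
                index: str = "daq-monitoring") -> None:
    doc = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "source": source,
        "metric": metric,
        "value": value,
    }
    # POST to /<index>/_doc lets Elasticsearch assign the document id.
    resp = requests.post(f"{ES_URL}/{index}/_doc",
                         data=json.dumps(doc),
                         headers={"Content-Type": "application/json"},
                         timeout=5)
    resp.raise_for_status()

# push_metric("ru-node-01", "event_rate_hz", 101234.0)
```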
DOI: 10.1088/1748-0221/16/12/c12020
2021
Data acquisition and slow control interface for the Mu2e experiment
Abstract The Mu2e experiment at the Fermilab Muon Campus will search for the coherent neutrinoless conversion of a muon into an electron in the field of an aluminum nucleus with a sensitivity improvement by a factor of 10000 over existing limits. The Mu2e Trigger and Data Acquisition System (TDAQ) uses otsdaq as the online Data Acquisition System (DAQ) solution. Developed at Fermilab, otsdaq integrates both the artdaq DAQ and the art analysis frameworks for event transfer, filtering, and processing. otsdaq is an online DAQ software suite with a focus on flexibility and scalability and provides a multi-user, web-based, interface accessible through a web browser. The data stream from the detector subsystems is read by a software filter algorithm that selects events which are combined with the data flux coming from a cosmic ray veto system. The Detector Control System (DCS) has been developed using the Experimental Physics and Industrial Control System (EPICS) open source platform for monitoring, controlling, alarming, and archiving. The DCS system has been integrated into otsdaq . A prototype of the TDAQ and the DCS systems has been built at Fermilab’s Feynman Computing Center. In this paper, we report on the progress of the integration of this prototype in the online otsdaq software.
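A hedged sketch of the EPICS side of such an integration, using the pyepics bindings: read a process variable (PV), write a setpoint and subscribe to updates. The PV names are invented placeholders, not actual Mu2e channel names.

```python
# Read, write and monitor EPICS process variables with pyepics.
from epics import PV
import time

hv_readback = PV("Mu2e:Tracker:HV:Voltage_RB")   # placeholder PV name
hv_setpoint = PV("Mu2e:Tracker:HV:Voltage_SP")   # placeholder PV name

def on_update(pvname=None, value=None, **kwargs):
    # Called by pyepics whenever the monitored PV changes.
    print(f"{pvname} -> {value}")

hv_readback.add_callback(on_update)
hv_setpoint.put(1450.0)              # write a new setpoint
time.sleep(5)                        # let monitor callbacks arrive
print("current readback:", hv_readback.get())
```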
2004
Jet energy scale and resolution for p13 data Monte Carlo
2014
10 Gbps TCP/IP streams from the FPGA for High Energy Physics
The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For loss-less data collection from the FEDs, a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the TCP hardware implementation complexity the DAQ group developed a simplified and unidirectional but RFC 793 compliant version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as the receiver. We present the challenges and protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including performance measurements.
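Because the FPGA implements a standards-compliant (if simplified and unidirectional) TCP, the receiving side can be an ordinary Linux program using the normal socket API. A minimal sketch of such a receiver follows; the port number and the absence of any event framing are assumptions made for illustration.

```python
# Plain TCP stream receiver: the FPGA sender connects as a client and streams
# bytes; real code would parse the event framing instead of just counting.
import socket

LISTEN_PORT = 10000          # placeholder port

def receive_stream():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", LISTEN_PORT))
    srv.listen(1)
    conn, addr = srv.accept()            # the FPGA/FEROL connects as client
    print("connection from", addr)

    total = 0
    while True:
        chunk = conn.recv(1 << 20)       # read up to 1 MB at a time
        if not chunk:                    # sender closed the connection
            break
        total += len(chunk)
    print(f"received {total} bytes")
    conn.close()
    srv.close()

if __name__ == "__main__":
    receive_stream()
```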
DOI: 10.18429/jacow-icalepcs2015-tua3o01
2015
Detector Controls Meets JEE on the Web
DOI: 10.18429/jacow-icalepcs2015-mopgf025
2015
Enhancing the Detector Control System of the CMS Experiment with Object Oriented Modelling
2015
A scalable monitoring for the CMS Filter Farm based on elasticsearch
A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm, thus providing natural horizontal scaling. A separate "central" es cluster is used to collect and index aggregated information. The fine-grained information, all the way to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
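To complement the indexing path, here is a hedged sketch of the kind of aggregation query a client might send to the central Elasticsearch cluster, e.g. summing an HLT output rate over all filter-farm nodes for one lumisection. Index and field names are invented for illustration.

```python
# Aggregation query against the Elasticsearch _search endpoint.
import json
import requests

ES_URL = "http://localhost:9200"              # placeholder central cluster

query = {
    "size": 0,
    "query": {"term": {"lumisection": 42}},
    "aggs": {"total_rate": {"sum": {"field": "hlt_output_rate_hz"}}},
}

resp = requests.post(f"{ES_URL}/hlt-stream-rates/_search",
                     data=json.dumps(query),
                     headers={"Content-Type": "application/json"},
                     timeout=5)
resp.raise_for_status()
print(resp.json()["aggregations"]["total_rate"]["value"])
```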
2016
Detector Development and Performance: A Brief and Biased Excerpt
2014
Boosting Event Building Performance using Infiniband FDR for the CMS Upgrade
DOI: 10.1109/rtc.2014.7097439
2014
Achieving high performance with TCP over 40GbE on NUMA architectures for CMS data acquisition
TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running in a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
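A minimal sketch of the affinity idea discussed above: pin the receiving process to the CPUs of the NUMA node that hosts the network interface, so that socket buffers stay local. The CPU list is an assumption for an imaginary two-socket machine; real code would derive it from the hardware topology (for example from /sys/class/net/&lt;nic&gt;/device/numa_node on Linux).

```python
# Pin the process to a set of CPUs before receiving data (Linux only).
import os
import socket

NIC_LOCAL_CPUS = {0, 2, 4, 6}            # placeholder: cores on NUMA node 0

def pinned_receiver(port: int = 10000):
    os.sched_setaffinity(0, NIC_LOCAL_CPUS)   # pid 0 = the current process
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while conn.recv(1 << 20):
        pass                                   # drain the stream
    conn.close()
    srv.close()
```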