
E. Meschi

Here are all the papers by E. Meschi that you can download and read on OA.mg.

DOI: 10.1007/jhep01(2014)164
2014
Cited 294 times
First look at the physics case of TLEP
Abstract The discovery by the ATLAS and CMS experiments of a new boson with mass around 125 GeV and with measured properties compatible with those of a Standard-Model Higgs boson, coupled with the absence of discoveries of phenomena beyond the Standard Model at the TeV scale, has triggered interest in ideas for future Higgs factories. A new circular e+e− collider hosted in an 80 to 100 km tunnel, TLEP, is among the most attractive solutions proposed so far. It has a clean experimental environment, produces high luminosity for top-quark, Higgs boson, W and Z studies, accommodates multiple detectors, and can reach energies up to the $\mathrm{t}\overline{\mathrm{t}}$ threshold and beyond. It will enable measurements of the Higgs boson properties and of Electroweak Symmetry-Breaking (EWSB) parameters with unequalled precision, offering exploration of physics beyond the Standard Model in the multi-TeV range. Moreover, being the natural precursor of the VHE-LHC, a 100 TeV hadron machine in the same tunnel, it builds up a long-term vision for particle physics. Altogether, the combination of TLEP and the VHE-LHC offers, with great cost effectiveness, the best precision and the best search reach of all options presently on the market. This paper presents a first appraisal of the salient features of the TLEP physics potential, to serve as a baseline for a more extensive design study.
DOI: 10.1016/j.nima.2003.11.078
2004
Cited 140 times
The CDF Silicon Vertex Trigger
The Collider Detector at Fermilab (CDF) experiment's Silicon Vertex Trigger (SVT) is a system of 150 custom 9U VME boards that reconstructs axial tracks in the CDF silicon strip detector in a 15μs pipeline. SVT's 35μm impact parameter resolution enables CDF's Level 2 trigger to distinguish primary and secondary particles, and hence to collect large samples of hadronic bottom and charm decays. We review some of SVT's key design features. Speed is achieved with custom VLSI pattern recognition, linearized track fitting, pipelining, and parallel processing. Testing and reliability are aided by built-in logic state analysis and test-data sourcing at each board's input and output, a common interboard data link, and a universal “Merger” board for data fan-in/fan-out. Speed and adaptability are enhanced by use of modern FPGAs.
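The linearized track fitting mentioned above computes the track parameters as scalar products of the hit coordinates with precomputed constants, one set per road, which is what makes a fast hardware implementation possible. A minimal numpy sketch of that idea follows; the matrix and offsets are illustrative placeholders, not actual SVT fit constants.

    import numpy as np

    # Linearized fit: track parameters p = F @ x + q for the hits x in one road.
    # F and q are precomputed per road offline; the values here are invented.
    F = np.array([[ 0.020, -0.010,  0.030,  0.000, -0.020,  0.010],   # d0 row
                  [ 0.100,  0.050,  0.000, -0.050, -0.100,  0.400]])  # phi row
    q = np.array([0.0, 1.57])

    hits = np.array([12.0, 15.0, 18.0, 21.0, 24.0, 130.0])  # 5 silicon hits + 1 chamber value
    d0, phi = F @ hits + q
    print(f"d0 = {d0:.3f} (a.u.), phi = {phi:.3f} rad")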
DOI: 10.1088/1742-6596/219/2/022011
2010
Cited 23 times
The CMS data acquisition system software
The CMS data acquisition system is made of two major subsystems: event building and event filter. This paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies on industry-standard networks and processing equipment. Adopting a single software infrastructure in all subsystems of the experiment imposes, however, a number of different requirements. High efficiency and configuration flexibility are among the most important ones. The XDAQ software infrastructure has matured over an eight-year development and testing period and has been shown to cope well with the requirements of the CMS experiment.
DOI: 10.1051/epjconf/202429502013
2024
First year of experience with the new operational monitoring tool for data taking in CMS during Run 3
The Online Monitoring System (OMS) at the Compact Muon Solenoid experiment (CMS) at CERN aggregates and integrates different sources of information into a central place and allows users to view, compare and correlate information. It displays real-time and historical information. The tool is heavily used by run coordinators, trigger experts and shift crews, to ensure the quality and efficiency of data taking. It provides aggregated information for many use cases including data certification. OMS is the successor of Web Based Monitoring (WBM), which was in use during Run 1 and Run 2 of the LHC. WBM started as a small tool and grew substantially over the years so that maintenance became challenging. OMS was developed from scratch following several design ideas: to strictly separate the presentation layer from the data aggregation layer, to use a well-defined standard for the communication between presentation layer and aggregation layer, and to employ widely used frameworks from outside the HEP community. A report on the experience from the operation of OMS for the first year of data taking of Run 3 in 2022 is presented.
DOI: 10.1016/s0168-9002(97)01345-4
1998
Cited 35 times
SVT: an online Silicon Vertex Tracker for the CDF upgrade
The SVT is an online tracker for the CDF upgrade which will reconstruct 2D tracks using information from the Silicon VerteX detector (SVXII) and Central Outer Tracker (COT). The precision measurement of the track impact parameter will then be used to select and record large samples of B hadrons. We discuss the overall architecture, algorithms, and hardware implementation of the system.
DOI: 10.1088/1748-0221/17/05/c05003
2022
Cited 6 times
CMS phase-2 DAQ and timing hub prototyping results and perspectives
Abstract This paper describes recent progress on the design of the DAQ and Timing Hub, or DTH, an ATCA (Advanced Telecommunications Computing Architecture) hub board intended for the phase-2 upgrade of the CMS experiment. Prototyping was originally divided into multiple feature lines, spanning all different aspects of the DTH functionality. The second DTH prototype merges all R&D and prototyping lines into a single board, which is intended to be the production candidate. Emphasis is on the process and experience in going from the first to the second DTH prototype, which included a change of the chosen FPGA as well as the integration of a commercial networking solution.
DOI: 10.1016/s0168-9002(00)00190-x
2000
Cited 28 times
Silicon vertex tracker: a fast precise tracking trigger for CDF
The Silicon Vertex Tracker (SVT), currently being built for the CDF II experiment, is a hardware device that reconstructs 2-D tracks online using measurements from the Silicon Vertex Detector (SVXII) and the Central Outer Tracker (COT). The precise measurement of the impact parameter of the SVT tracks will allow, for the first time in a hadron collider environment, triggering on events containing B hadrons, which are very important for many studies, such as CP violation in the b sector and searches for new heavy particles decaying to bb̄. In this report we describe the overall architecture, algorithms and the hardware implementation of the SVT.
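As a concrete illustration of what such an impact-parameter trigger enables, the sketch below keeps an event if it contains at least two sufficiently displaced, high-pT tracks; the thresholds and the track representation are invented for illustration and are not the actual CDF two-track trigger cuts.

    # Toy displaced-two-track selection; thresholds are placeholders, not CDF cuts.
    def passes_b_trigger(tracks, d0_min_um=120.0, pt_min_gev=2.0, n_required=2):
        displaced = [t for t in tracks
                     if abs(t["d0_um"]) > d0_min_um and t["pt_gev"] > pt_min_gev]
        return len(displaced) >= n_required

    event = [{"d0_um": 250.0, "pt_gev": 3.1},
             {"d0_um": -180.0, "pt_gev": 2.6},
             {"d0_um": 20.0, "pt_gev": 5.0}]
    print(passes_b_trigger(event))  # True: two displaced tracks above threshold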
DOI: 10.1088/1742-6596/331/2/022021
2011
Cited 13 times
The data-acquisition system of the CMS experiment at the LHC
The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given. We discuss the performance and operational experience from the first months of LHC physics data taking.
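The quoted figures are mutually consistent: 100 kHz of roughly 1 MB events gives the 100 GB/s aggregate, and with about 500 sources each source contributes a fragment of order 2 kB per event. A back-of-the-envelope check:

    # Back-of-the-envelope check of the DAQ figures quoted above (approximate).
    l1_rate_hz   = 100e3   # Level-1 accept rate
    event_size_b = 1e6     # ~1 MB built event (implied by 100 GB/s / 100 kHz)
    n_sources    = 500

    aggregate_bps  = l1_rate_hz * event_size_b      # ~1e11 B/s = 100 GB/s
    fragment_bytes = event_size_b / n_sources       # ~2 kB per source per event
    print(aggregate_bps / 1e9, "GB/s,", fragment_bytes / 1e3, "kB per fragment")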
DOI: 10.1088/1748-0221/8/12/c12039
2013
Cited 12 times
10 Gbps TCP/IP streams from the FPGA for the CMS DAQ eventbuilder network
For the upgrade of the DAQ of the CMS experiment in 2013/2014 an interface between the custom detector Front End Drivers (FEDs) and the new DAQ eventbuilder network has to be designed. For a loss-less data collection from more than 600 FEDs a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. We present the hardware challenges and protocol modifications made to TCP in order to simplify its FPGA implementation together with a set of performance measurements which were carried out with the current prototype.
DOI: 10.1109/tns.2015.2426216
2015
Cited 12 times
The New CMS DAQ System for Run-2 of the LHC
The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for a reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder with a throughput of ~4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT accepted events and monitoring meta-data are stored into a global file system. This paper presents the requirements, technical choices, and performance of the new system.
DOI: 10.1088/1742-6596/513/1/012042
2014
Cited 11 times
10 Gbps TCP/IP streams from the FPGA for High Energy Physics
The DAQ system of the CMS experiment at CERN collects data from more than 600 custom detector Front-End Drivers (FEDs). During 2013 and 2014 the CMS DAQ system will undergo a major upgrade to address the obsolescence of current hardware and the requirements posed by the upgrade of the LHC accelerator and various detector components. For a loss-less data collection from the FEDs a new FPGA-based card implementing the TCP/IP protocol suite over 10 Gbps Ethernet has been developed. To limit the TCP hardware implementation complexity the DAQ group developed a simplified and unidirectional but RFC 793 compliant version of the TCP protocol. This allows a PC with the standard Linux TCP/IP stack to be used as a receiver. We present the challenges and protocol modifications made to TCP in order to simplify its FPGA implementation. We also describe the interaction between the simplified TCP and the Linux TCP/IP stack, including the performance measurements.
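Because the FPGA side implements a simplified but RFC 793 compliant TCP sender, the receiving end can be an ordinary socket application on the standard Linux stack. The sketch below is a minimal, hypothetical receiver: the port number and the 4-byte length-prefixed framing are assumptions for illustration, not the actual CMS stream format.

    import socket, struct

    # Minimal TCP receiver on the standard Linux stack (illustrative framing).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 10000))       # port chosen arbitrarily
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            header = conn.recv(4, socket.MSG_WAITALL)        # assumed length prefix
            (length,) = struct.unpack("!I", header)
            payload = conn.recv(length, socket.MSG_WAITALL)  # one event fragment
            print(f"received {len(payload)} bytes from {addr}")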
DOI: 10.1109/tns.2007.914036
2008
Cited 14 times
CMS DAQ Event Builder Based on Gigabit Ethernet
The CMS data acquisition system is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called front-end driver (FED) builders. These will be based on Myrinet technology and will pre-assemble groups of about eight data sources. The second stage will be a set of event builders called readout builders. These will perform the building of full events. A single readout builder will build events from about 60 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper, we present the design of a readout builder based on TCP/IP over Gigabit Ethernet and the refinement that was required to achieve the design throughput. This refinement includes architecture of the readout builder, the setup of TCP/IP, and hardware selection.
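The readout-builder numbers quoted above can be checked with simple arithmetic: 60 fragments of 16 kB make an event of roughly 1 MB, a single slice at 12.5 kHz therefore handles about 12 GB/s, and eight such slices cover the full 100 kHz Level-1 rate.

    # Consistency check of the readout-builder figures quoted above.
    sources, fragment_b, slice_rate_hz = 60, 16e3, 12.5e3
    event_b          = sources * fragment_b         # ~0.96 MB per event
    slice_throughput = event_b * slice_rate_hz      # ~12 GB/s per readout builder
    n_slices         = 100e3 / slice_rate_hz        # 8 slices for 100 kHz
    print(event_b / 1e6, "MB,", slice_throughput / 1e9, "GB/s,", n_slices, "slices")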
DOI: 10.1016/j.nima.2022.167805
2023
A 40 MHz Level-1 trigger scouting system for the CMS Phase-2 upgrade
The CMS Phase-2 upgrade for the HL-LHC aims at preserving and expanding the current physics capability of the experiment under extreme pileup conditions. A new tracking system incorporates a track finder processor, providing tracks to the Level-1 (L1) trigger. A new high-granularity calorimeter provides fine-grained energy deposition information in the endcap region. New front-end and back-end electronics feed the L1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded L1 will be based primarily on the Xilinx Ultrascale Plus series of FPGAs, capable of sophisticated feature searches with resolution often similar to the offline reconstruction. The L1 Data Scouting system (L1DS) will capture L1 intermediate data produced by the trigger processors at the beam-crossing rate of 40 MHz, and carry out online analyses based on these limited-resolution data. The L1DS will provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements, and, in some cases, calibrations. It also has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the L1 trigger accept budget or with requirements that are orthogonal to “mainstream” physics. The requirements and architecture of the L1DS system are presented, as well as some of the potential physics opportunities under study. The first results from the assembly and commissioning of a demonstrator currently being installed for LHC Run-3 are also presented. The demonstrator collects data from the Global Muon Trigger, the Layer-2 Calorimeter Trigger, the Barrel Muon Track Finder, and the Global Trigger systems of the current CMS L1. This demonstrator, as a data acquisition (DAQ) system operating at the LHC bunch-crossing rate, faces many of the challenges of the Phase-2 system, albeit with scaled-down connectivity, reduced data throughput and physics capabilities, providing a testing ground for new techniques of online data reduction and processing.
DOI: 10.1088/1742-6596/664/8/082032
2015
Cited 7 times
The DAQ needle in the big-data haystack
In the last three decades, HEP experiments have faced the challenge of manipulating larger and larger masses of data from increasingly complex, heterogeneous detectors with millions and then tens of millions of electronic channels. LHC experiments abandoned the monolithic architectures of the nineties in favor of a distributed approach, leveraging the appearance of high-speed switched networks developed for digital telecommunication and the internet, and the corresponding increase of memory bandwidth available in off-the-shelf consumer equipment. This led to a generation of experiments where custom electronics triggers, analysing coarser-granularity "fast" data, are confined to the first phase of selection, where predictable latency and real-time processing for a modest initial rate reduction are "a necessary evil". Ever more sophisticated algorithms are projected for use in HL-LHC upgrades, using tracker data in the low-level selection in high-multiplicity environments, and requiring extremely complex data interconnects. These systems are quickly obsolete and inflexible but must nonetheless survive and be maintained across the extremely long life span of current detectors.
DOI: 10.1109/tns.2002.1039633
2002
Cited 13 times
Performance of the CDF online silicon vertex tracker
The online silicon vertex tracker (SVT) is the new trigger processor dedicated to the two-dimensional (2-D) reconstruction of charged particle trajectories at the Level 2 of the Collider Detector at Fermilab (CDF) trigger. The SVT links the digitized pulse heights found within the silicon vertex detector to the tracks reconstructed in the central outer tracker by the Level 1 fast-track finder. Preliminary tests of the system took place during the October 2000 commissioning run of the Tevatron Collider. During the April-October 2001 data taking, it was possible to evaluate the performance of the system. In this paper, we review the tracking algorithms implemented in the SVT and we report on the performance achieved during the early phase of run II.
DOI: 10.1088/1742-6596/119/2/022010
2008
Cited 9 times
The run control system of the CMS experiment
The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control and monitor the experiment during data-taking, the Run Control system was developed. This paper describes the architecture and the technology used to implement the Run Control system, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.
DOI: 10.1088/1742-6596/219/2/022042
2010
Cited 7 times
Monitoring the CMS data acquisition system
The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that detection of deviations from a specified behaviour is supported in a timely manner, which is a prerequisite in order to take corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting up data collections into regions of collections. An implementation following this scheme is deployed as the monitoring infrastructure of the CMS experiment at the Large Hadron Collider. All services in this distributed data acquisition system provide standard web service interfaces via XML, SOAP and HTTP [15,22]. Continuing on this path we adopted WS-* standards, implementing a monitoring system layered on top of the W3C standards stack. We designed a load-balanced publisher/subscriber system with the ability to include high-speed protocols [10,12] for efficient data transmission [11,13,14] and serving data in multiple data formats.
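The hierarchical collection described above can be pictured as two levels of aggregation: leaf collectors summarize the metrics published by their local services, and a central collector merges the regional summaries. The toy sketch below shows only that data flow; the service metrics and the summation rule are invented and stand in for the actual web-service eventing machinery.

    from collections import defaultdict

    # Toy two-level monitoring aggregation (illustrative metrics and rule).
    def aggregate(reports):
        summary = defaultdict(float)
        for report in reports:                   # one dict per service or region
            for metric, value in report.items():
                summary[metric] += value
        return dict(summary)

    region_a = aggregate([{"events": 1200, "errors": 0},
                          {"events": 1180, "errors": 2}])    # leaf collector A
    region_b = aggregate([{"events": 1210, "errors": 1}])    # leaf collector B
    central  = aggregate([region_a, region_b])               # central collector
    print(central)   # {'events': 3590.0, 'errors': 3.0}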
DOI: 10.1088/1742-6596/219/2/022038
2010
Cited 7 times
The CMS event builder and storage system
The CMS event builder assembles events accepted by the first level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of 2 stages: an initial pre-assembly reducing the number of fragments by one order of magnitude and a final assembly by several independent readout builder (RU-builder) slices. The RU-builder is based on 3 separate services: the buffering of event fragments during the assembly, the event assembly, and the data flow manager. A further component is responsible for handling events accepted by the high-level trigger: the storage manager (SM) temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the SM to online monitoring clients. We discuss the operational experience from the first months of reading out cosmic ray data with the complete CMS detector.
DOI: 10.1088/1742-6596/396/1/012008
2012
Cited 7 times
The CMS High Level Trigger System: Experience and Future Development
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, are discussed.
DOI: 10.1109/tns.2012.2199331
2012
Cited 6 times
First Operational Experience With a High-Energy Physics Run Control System Based on Web Technologies
Run control systems of modern high-energy particle physics experiments have requirements similar to those of today's Internet applications. The Compact Muon Solenoid (CMS) collaboration at CERN's Large Hadron Collider (LHC) therefore decided to build the run control system for its detector based on web technologies. The system is composed of Java Web Applications distributed over a set of Apache Tomcat servlet containers that connect to a database back-end. Users interact with the system through a web browser. The present paper reports on the successful scaling of the system from a small test setup to the production data acquisition system that comprises around 10,000 applications running on a cluster of about 1600 hosts. We report on operational aspects during the first phase of operation with colliding beams including performance, stability, integration with the CMS Detector Control System and tools to guide the operator.
DOI: 10.1109/tns.2013.2282340
2013
Cited 6 times
A Comprehensive Zero-Copy Architecture for High Performance Distributed Data Acquisition Over Advanced Network Technologies for the CMS Experiment
This paper outlines a software architecture where zero-copy operations are used comprehensively at every processing point from the Application layer to the Physical layer. The proposed architecture is being used during feasibility studies on advanced networking technologies for the CMS experiment at CERN. The design relies on a homogeneous peer-to-peer message passing system, which is built around memory pool caches allowing efficient and deterministic latency handling of messages of any size through the different software layers. In this scheme portable distributed applications can be programmed to process input to output operations by mere pointer arithmetic and DMA operations only. The approach combined with the open fabric protocol stack (OFED) allows one to attain near wire-speed message transfer at application level. The architecture supports full portability of user applications by encapsulating the protocol details and network into modular peer transport services whereas a transparent replacement of the underlying protocol facilitates deployment of several network technologies like Gigabit Ethernet, Myrinet, Infiniband, etc. Therefore, this solution provides a protocol-independent communication framework and prevents having to deal with potentially difficult couplings when the underlying communication infrastructure is changed. We demonstrate the feasibility of this approach by giving efficiency and performance measurements of the software in the context of the CMS distributed event building studies.
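As a language-level analogy to the zero-copy scheme described above (the real system works with pointer arithmetic and DMA in C++), the sketch below preallocates a memory pool once and hands payloads between processing stages as memoryview slices, so the bytes are never copied. The pool size and framing are arbitrary.

    # Zero-copy analogy: payloads are passed as views into a fixed pool, not copies.
    pool = bytearray(1 << 20)            # preallocated 1 MB memory pool
    view = memoryview(pool)

    def write_fragment(offset, payload):
        view[offset:offset + len(payload)] = payload    # producer fills the pool in place
        return view[offset:offset + len(payload)]       # zero-copy slice

    fragment = write_fragment(0, b"event fragment data")
    print(len(fragment), fragment.obj is pool)   # 19 True: the slice still references the pool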
DOI: 10.1088/1742-6596/513/1/012025
2014
Cited 4 times
Prototype of a File-Based High-Level Trigger in CMS
The DAQ system of the CMS experiment at the LHC is upgraded during the accelerator shutdown in 2013/14. To reduce the interdependency of the DAQ system and the high-level trigger (HLT), we investigate the feasibility of using a file-system-based HLT. Events of ~1 MB size are built at the level-1 trigger rate of 100 kHz. The events are assembled by ~50 builder units (BUs). Each BU writes the raw events at ~2 GB/s to a local file system shared with O(10) filter-unit machines (FUs) running the HLT code. The FUs read the raw data from the file system, select O(1%) of the events, and write the selected events together with monitoring meta-data back to a disk. This data is then aggregated over several steps and made available for offline reconstruction and online monitoring. We present the challenges, technical choices, and performance figures from the prototyping phase. In addition, the steps to the final system implementation will be discussed.
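A stripped-down version of that file-based flow is sketched below: a filter unit scans the directory written by a builder unit, keeps roughly 1% of the events, and writes the selected data plus a small JSON accounting document. The directory names, the .raw/.dat/.jsn naming and the toy event framing are assumptions for illustration.

    import json, pathlib, random

    ramdisk = pathlib.Path("/tmp/bu_output")   # stand-in for the BU output area
    outdir  = pathlib.Path("/tmp/fu_output")
    for d in (ramdisk, outdir):
        d.mkdir(parents=True, exist_ok=True)

    for raw in sorted(ramdisk.glob("*.raw")):
        events   = raw.read_bytes().split(b"\n")              # toy event framing
        selected = [e for e in events if random.random() < 0.01]
        (outdir / (raw.stem + ".dat")).write_bytes(b"\n".join(selected))
        meta = {"input": raw.name, "processed": len(events), "accepted": len(selected)}
        (outdir / (raw.stem + ".jsn")).write_text(json.dumps(meta))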
DOI: 10.1088/1742-6596/664/8/082036
2015
Cited 4 times
A scalable monitoring for the CMS Filter Farm based on elasticsearch
A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm thus providing natural horizontal scaling. A separate "central" es cluster is used to collect and index aggregated information. The fine-grained information, all the way to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
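Since the monitoring documents are plain JSON, feeding one into elasticsearch is a single HTTP call; a hedged sketch follows, where the host, index name and document fields are invented and the /_doc endpoint follows current elasticsearch REST conventions (older releases used an explicit document type in the URL).

    import json, time, requests

    # Index one hypothetical HLT monitoring document into elasticsearch.
    doc = {
        "fu_host": "fu-example-01",          # invented filter-unit host name
        "run": 380000,
        "ls": 42,                            # luminosity section
        "events_accepted": 137,
        "timestamp_ms": int(time.time() * 1000),
    }
    resp = requests.post("http://es-leaf.example:9200/hlt-rates/_doc",
                         data=json.dumps(doc),
                         headers={"Content-Type": "application/json"},
                         timeout=5)
    print(resp.status_code, resp.json().get("result"))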
DOI: 10.1007/bf03185589
1999
Cited 11 times
The CDF Silicon Vertex Tracker: Online precision tracking of the CDF Silicon Vertex Detector
The Silicon Vertex Tracker is the CDF online tracker which will reconstruct 2D tracks using hit positions measured by the Silicon Vertex Detector and Central Outer Chamber tracks found by the eXtremely Fast Tracker. The precision measurement of the track impact parameter will allow triggering on events containing B hadrons. This will allow the investigation of several important problems in B physics, like CP violation and Bs mixing, and searches for new heavy particles decaying to bb̄.
DOI: 10.1109/tns.2007.910980
2008
Cited 5 times
The CMS High Level Trigger System
The CMS data acquisition (DAQ) system relies on a purely software driven high level trigger (HLT) to reduce the full Level 1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the filter farm and the procedures to validate the filtering code within the DAQ environment are described.
DOI: 10.1088/1742-6596/396/1/012023
2012
Cited 4 times
Status of the CMS Detector Control System
The Compact Muon Solenoid (CMS) is a CERN multi-purpose experiment that exploits the physics of the Large Hadron Collider (LHC). The Detector Control System (DCS) is responsible for ensuring the safe, correct and efficient operation of the experiment, and has contributed to the recording of high quality physics data. The DCS is programmed to automatically react to the LHC operational mode. CMS sub-detectors' bias voltages are set depending on the machine mode and particle beam conditions. An operator provided with a small set of screens supervises the system status summarized from the approximately 6M monitored parameters. Using the experience of nearly two years of operation with beam the DCS automation software has been enhanced to increase the system efficiency by minimizing the time required by sub-detectors to prepare for physics data taking. From the infrastructure point of view the DCS will be subject to extensive modifications in 2012. The current rack mounted control PCs will be replaced by a redundant pair of DELL Blade systems. These blade servers are a high-density modular solution that incorporates servers and networking into a single chassis that provides shared power, cooling and management. This infrastructure modification associated with the migration to blade servers will challenge the DCS software and hardware factorization capabilities. The on-going studies for this migration together with the latest modifications are discussed in the paper.
DOI: 10.1109/tns.2015.2409898
2015
Cited 3 times
Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition
TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multi-core era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running in a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is being upgraded for operation from 2015 onwards, and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allow for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
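The kind of affinity awareness the paper refers to can be illustrated in a few lines on Linux: pin the receiving process to the cores of the NUMA node that hosts the network interface before opening the socket. The core list below is an assumption for illustration; in practice it would be derived from the NIC's NUMA topology.

    import os, socket

    nic_local_cores = {0, 2, 4, 6}              # cores on the NIC's NUMA node (assumed)
    os.sched_setaffinity(0, nic_local_cores)    # pin this process to those cores (Linux only)

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024 * 1024)  # large receive buffer
    srv.bind(("0.0.0.0", 10001))
    srv.listen(1)
    print("running on cores:", sorted(os.sched_getaffinity(0)))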
DOI: 10.1088/1742-6596/664/8/082009
2015
Cited 3 times
Online data handling and storage at the CMS experiment
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
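The merger step can be pictured as concatenating the per-filter-unit output files and summing the event counts from their JSON accounting documents; the toy sketch below does exactly that, reusing the invented .dat/.jsn naming from the file-based HLT sketch earlier on this page.

    import json, pathlib

    fu_dir = pathlib.Path("/tmp/fu_output")      # invented path, see the sketch above
    merged = bytearray()
    totals = {"processed": 0, "accepted": 0}

    for jsn in sorted(fu_dir.glob("*.jsn")):
        meta = json.loads(jsn.read_text())
        totals["processed"] += meta["processed"]
        totals["accepted"]  += meta["accepted"]
        merged += jsn.with_suffix(".dat").read_bytes()

    pathlib.Path("/tmp/merged.dat").write_bytes(merged)
    pathlib.Path("/tmp/merged.jsn").write_text(json.dumps(totals))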
DOI: 10.1109/rtc.2016.7543164
2016
Cited 3 times
Performance of the new DAQ system of the CMS experiment for run-2
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of more than 100 GB/s to the High-Level Trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at an output rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013-2014. The motivation for this upgrade was twofold. Firstly, the compute nodes, networking and storage infrastructure were reaching the end of their lifetimes. Secondly, in order to maintain physics performance with higher LHC luminosities and increasing event pileup, a number of sub-detectors are being upgraded, increasing the number of readout channels as well as the required throughput, and replacing the off-detector readout electronics with a MicroTCA-based DAQ interface. The new DAQ architecture takes advantage of the latest developments in the computing industry. For data concentration 10/40 Gbit/s Ethernet technologies are used, and a 56 Gbit/s Infiniband FDR CLOS network (total throughput ≈ 4 Tbit/s) has been chosen for the event builder. The upgraded DAQ - HLT interface is entirely file-based, essentially decoupling the DAQ and HLT systems. The fully-built events are transported to the HLT over 10/40 Gbit/s Ethernet via a network file system. The collection of events accepted by the HLT and the corresponding metadata are buffered on a global file system before being transferred off-site. The monitoring of the HLT farm and the data-taking performance is based on the Elasticsearch analytics tool. This paper presents the requirements, implementation, and performance of the system. Experience is reported on the first year of operation with LHC proton-proton runs as well as with the heavy ion lead-lead runs in 2015.
DOI: 10.1016/s0010-4655(01)00264-8
2001
Cited 8 times
The CMS event builder demonstrator and results with Myrinet
The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large and high-performance event building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator which has been set up to study a small-scale (16×16) event builder based on PCs running Linux connected to Myrinet and Ethernet switches. A detailed study of the Myrinet switch performance has been performed for various traffic conditions, including the behaviour of composite switches. Results from event building studies are presented, including measurements on throughput, overhead and scaling. Traffic shaping techniques have been implemented and the effect on the event building performance has been investigated. The paper reports on the performance and maximum event rate obtainable using custom software, not described here, for the Myrinet control program and the low-level communication layer, implemented in a driver for Linux. A high-performance sender is emulated by creating a dummy buffer that remains resident in the network interface and moving from the host only the first 64 bytes used by the event building protocol. An approximate scaling in N is presented assuming a balanced system where each source sends on average data to all destinations with the same rate.
DOI: 10.1088/1742-6596/119/2/022011
2008
Cited 4 times
High level trigger configuration and handling of trigger tables in the CMS filter farm
The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data in 2008. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read out and assembled by an Event Builder. The High Level Trigger (HLT) employs a set of sophisticated software algorithms to analyze the complete event information and further reduce the accepted event rate for permanent storage and analysis. This paper describes the design and implementation of the HLT Configuration Management system. First experiences with commissioning of the HLT system are also reported.
DOI: 10.1088/1742-6596/513/1/012014
2014
Cited 3 times
The new CMS DAQ system for LHC operation after 2014 (DAQ2)
The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. We present the design of the 2nd generation DAQ system, including studies of the event builder based on advanced networking technologies such as 10 and 40 Gbit/s Ethernet and 56 Gbit/s FDR Infiniband, and exploitation of multicore CPU architectures. By the time the LHC restarts after the 2013/14 shutdown, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime. In order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The second generation DAQ system, foreseen for 2014, will need to accommodate the readout of both existing and new off-detector electronics and provide an increased throughput capacity. Advances in storage technology could make it feasible to write the output of the event builder to (RAM or SSD) disks and implement the HLT processing entirely file based.
DOI: 10.1088/1742-6596/396/1/012007
2012
Cited 3 times
Operational experience with the CMS Data Acquisition System
The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources and 10^8 electronic channels. An overview of the architecture and design of the hardware and software of the DAQ system is given. We report on the performance and operational experience of the DAQ and its Run Control System in the first two years of collider runs of the LHC, both in proton-proton and Pb-Pb collisions. We present an analysis of the current performance, its limitations, and the most common failure modes and discuss the ongoing evolution of the HLT capability needed to match the luminosity ramp-up of the LHC.
DOI: 10.1088/1742-6596/513/1/012031
2014
Cited 3 times
Automating the CMS DAQ
We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.
DOI: 10.1016/s0168-9002(98)00577-4
1998
Cited 9 times
A programmable associative memory for track finding
We present a device, based on the concept of associative memory for pattern recognition, dedicated to on-line track finding in high-energy physics experiments. A large pattern bank, describing all possible tracks, can be organized into Field Programmable Gate Arrays where all patterns are compared in parallel to data coming from the detector during readout. Patterns, recognized among 266 possible combinations, are output in a few 30 MHz clock cycles. Programmability results in a flexible, simple architecture and allows the system to keep up smoothly with technology improvements.
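The associative-memory idea can be mimicked in software as follows: each stored pattern lists one coarse "superstrip" per detector layer, and a pattern fires when every layer has a matching hit. The hardware performs all comparisons in parallel during readout; the toy bank and hits below are invented, and the loop is of course sequential.

    # Toy software analogue of associative-memory pattern matching.
    patterns = {
        "road1": (3, 7, 12, 18),    # one superstrip per layer
        "road2": (3, 8, 12, 19),
        "road3": (5, 9, 14, 20),
    }
    hits_per_layer = [{3, 5}, {7, 9}, {12}, {18, 20}]   # superstrips hit in each layer

    fired = [name for name, road in patterns.items()
             if all(ss in hits_per_layer[layer] for layer, ss in enumerate(road))]
    print("matched roads:", fired)   # ['road1'] for this toy event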
DOI: 10.1088/1742-6596/219/2/022002
2010
Cited 3 times
The CMS online cluster: IT for a large data acquisition and control cluster
The CMS online cluster consists of more than 2000 computers running about 10000 application instances. These applications implement the control of the experiment, the event building, the high level trigger, the online database and the control of the buffering and transferring of data to the Central Data Recording at CERN. In this paper the IT solutions employed to fulfil the requirements of such a large cluster are reviewed. Details are given on the chosen network structure, configuration management system, monitoring infrastructure and on the implementation of the high availability for the services and infrastructure.
DOI: 10.1109/nssmic.1998.775151
2002
Cited 6 times
A large associative memory system for the CDF level 2 trigger
A large Associative Memory system for on-line track reconstruction in a hadron collider experiment has been designed, prototyped and tested. This is the first such application of the Associative Memory concept and it is based on a full custom VLSI chip developed within this project. The Associative Memory is the heart of the Silicon Vertex Tracker, which is part of the Level 2 trigger of the CDF experiment, and is able to complete track finding in the CDF silicon vertex detector less than 1 μs after detector readout is over. This system is a multi-board project running on a common 30 MHz clock, but critical parts multiply the clock frequency to operate up to 120 MHz. The Associative Memory board architecture, design, implementation and test are described. The main characteristics of this project are the use of sophisticated clock distribution techniques and the high density of components.
DOI: 10.1016/s0168-9002(02)02034-x
2003
Cited 5 times
Initial experience with the CDF SVT trigger
The Collider Detector at Fermilab (CDF) Silicon Vertex Tracker (SVT) is a device that works inside the CDF Level 2 trigger to find and fit tracks in real time using the central silicon vertex detector information. SVT starts from tracks found by the Level 1 central chamber fast trigger and adds the silicon information to compute transverse track parameters with offline quality in about 15μs. The CDF SVT is fully installed and functional and has been exercised with real data during the spring and summer 2001. It is a complex digital device of more than 100 VME boards that performs a dramatic data reduction (only about one event in a thousand is accepted by the trigger). Diagnosing rare failures poses a special challenge and SVT internal data flow is monitored by dedicated hardware and software. This paper briefly covers the SVT architecture and design and reports on the SVT building/commissioning experience (hardware and software) and on the first results from the initial running.
DOI: 10.1088/1742-6596/331/2/022010
2011
An Analysis of the Control Hierarchy Modelling of the CMS Detector Control System
The supervisory level of the Detector Control System (DCS) of the CMS experiment is implemented using Finite State Machines (FSM), which model the behaviours and control the operations of all the sub-detectors and support services. The FSM tree of the whole CMS experiment consists of more than 30,000 nodes. An analysis of a system of such size is a complex task but is a crucial step towards the improvement of the overall performance of the FSM system. This paper presents the analysis of the CMS FSM system using the micro Common Representation Language 2 (mCRL2) methodology. Individual mCRL2 models are obtained for the FSM systems of the CMS sub-detectors using the ASF+SDF automated translation tool. Different mCRL2 operations are applied to the mCRL2 models. An mCRL2 simulation tool is used to examine the system more closely. Visualization of a system based on the exploration of its state space is enabled with an mCRL2 tool. Requirements such as command and state propagation are expressed using modal mu-calculus and checked using a model checking algorithm. For checking local requirements such as endless loop freedom, the Bounded Model Checking technique is applied. This paper discusses these analysis techniques and presents the results of their application on the CMS FSM system.
DOI: 10.1109/rtc.2014.7097437
2014
The new CMS DAQ system for run-2 of the LHC
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gb/s Infiniband FDR Clos network has been chosen for the event builder with a throughput of ~4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT accepted events and monitoring meta-data are stored into a global file system. This paper presents the requirements, technical choices, and performance of the new system.
DOI: 10.1109/tns.2023.3244696
2023
Progress in Design and Testing of the DAQ and Data-Flow Control for the Phase-2 Upgrade of the CMS Experiment
The CMS detector will undergo a major upgrade for Phase-2 of the LHC program, the High-Luminosity LHC. The upgraded CMS detector will be read out at an unprecedented data rate exceeding 50 Tb/s, with a Level-1 trigger selecting events at a rate of 750 kHz and an average event size reaching 8.5 MB. The Phase-2 CMS back-end electronics will be based on the ATCA standard, with node boards receiving the detector data from the front-ends via custom, radiation-tolerant, optical links. The CMS Phase-2 data acquisition (DAQ) design tightens the integration between trigger control and data flow, extending the synchronous regime of the DAQ system. At the core of the design is the DAQ and Timing Hub (DTH), a custom ATCA hub card forming the bridge between the different, detector-specific, control and readout electronics and the common timing, trigger, and control systems. The overall synchronisation and data flow of the experiment is handled by the Trigger and Timing Control and Distribution System (TCDS). For increased flexibility during commissioning and calibration runs, the Phase-2 architecture breaks with the traditional distribution tree, in favour of a configurable network connecting multiple independent control units to all off-detector endpoints. This paper describes the overall Phase-2 TCDS architecture and briefly compares it to previous CMS implementations. It then discusses the design and prototyping experience of the DTH, and concludes with the convergence of this prototyping process into the (pre)production phase, starting in early 2023.
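The headline Phase-2 numbers are consistent with each other: a 750 kHz Level-1 accept rate with an average event size of 8.5 MB corresponds to roughly 6.4 TB/s, i.e. about 51 Tb/s, matching the quoted readout rate exceeding 50 Tb/s.

    # Consistency check of the Phase-2 DAQ figures quoted above.
    l1_rate_hz   = 750e3
    event_size_b = 8.5e6
    readout_bps  = l1_rate_hz * event_size_b
    print(readout_bps / 1e12, "TB/s =", readout_bps * 8 / 1e12, "Tb/s")   # ~6.4 TB/s, ~51 Tb/s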
DOI: 10.1016/s0168-9002(01)01830-7
2002
Cited 5 times
The CDF silicon vertex tracker
Real time pattern recognition is becoming a key issue in many position sensitive detector applications. The CDF collaboration is building SVT: a specialized electronic device designed to perform real time track reconstruction using the Silicon VerteX detector (SVX II). This will strongly improve the CDF capability of triggering on events containing b quarks, usually characterized by the presence of a secondary vertex. SVT is designed to reconstruct in real time charged-particle trajectories using data coming from the silicon vertex detector and the central outer tracker drift chamber. The SVT architecture and algorithm have been specially tuned to minimize processing time without degrading parameter resolution.
DOI: 10.1088/1742-6596/396/1/012041
2012
High availability through full redundancy of the CMS detector controls system
The CMS detector control system (DCS) is responsible for controlling and monitoring the detector status and for the operation of all CMS sub detectors and infrastructure. This is required to ensure safe and efficient data taking so that high quality physics data can be recorded. The current system architecture is composed of more than 100 servers in order to provide the required processing resources. An optimization of the system software and hardware architecture is under development to ensure redundancy of all the controlled subsystems and to reduce any downtime due to hardware or software failures. The new optimized structure is based mainly on powerful and highly reliable blade servers and makes use of a fully redundant approach, guaranteeing high availability and reliability. The analysis of the requirements, the challenges, the improvements and the optimized system architecture as well as its specific hardware and software solutions are presented.
DOI: 10.5170/cern-2004-010.316
2004
Cited 3 times
The Final prototype of the Fast Merging Module (FMM) for readout status processing in CMS DAQ
The Trigger Throttling System (TTS) adapts the trigger frequency to the DAQ readout capacity in order to avoid buffer overflows and data corruption. The states of all ~640 readout units in the CMS DAQ are read out and merged by hardware modules (FMMs) to obtain the status of each detector partition. The functionality and the design of the second and final prototype of the FMM are presented in this paper.
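Functionally, the merging reduces the per-unit throttling states of a partition to the most restrictive one, which is then fed back to the trigger. The sketch below shows that reduction with invented state names and ordering; the real FMM does this in hardware with the actual TTS state encoding.

    # Toy reduction of readout-unit states to one partition state (illustrative names).
    SEVERITY = {"READY": 0, "WARNING": 1, "BUSY": 2, "OUT_OF_SYNC": 3, "ERROR": 4}

    def partition_state(unit_states):
        return max(unit_states, key=lambda s: SEVERITY[s])   # most restrictive wins

    states = ["READY"] * 638 + ["WARNING", "BUSY"]
    print(partition_state(states))   # BUSY -> the trigger should be throttled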
DOI: 10.1088/1748-0221/4/10/p10005
2009
Commissioning of the CMS High Level Trigger
The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger architecture only employs two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic ray data taking campaigns are discussed in detail. We conclude with the description of the HLT operations with the first circulating LHC beams before the incident that occurred on 19 September 2008.
DOI: 10.1016/j.nuclphysbps.2007.08.106
2007
Flexible custom designs for CMS DAQ
The CMS central DAQ system is built using commercial hardware (PCs and networking equipment), except for two components: the Front-end Readout Link (FRL) and the Fast Merger Module (FMM). The FRL interfaces the sub-detector specific front-end electronics to the central DAQ system in a uniform way. The FRL is a compact-PCI module with an additional PCI 64bit connector to host a Network Interface Card (NIC). On the sub-detector side, the data are written to the link using a FIFO-like protocol (SLINK64). The link uses the Low Voltage Differential Signal (LVDS) technology to transfer data with a throughput of up to 400 MBytes/s. The FMM modules collect status signals from the front-end electronics of the sub-detectors, merge and monitor them and provide the resulting signals with low latency to the first level trigger electronics. In particular, the throttling signals allow the trigger to avoid buffer overflows and data corruption in the front-end electronics when the data produced in the front-end exceeds the capacity of the DAQ system. Both cards are compact-PCI cards with a 6U form factor. They are implemented with FPGAs. The main FPGA implements the processing logic of the card and the interfaces to the variety of busses on the card. Another FPGA contains a custom compact-PCI interface for configuration, control and monitoring. The chosen technology provides flexibility to implement new features if required.
DOI: 10.1088/1742-6596/331/2/022004
2011
Studies of future readout links for the CMS experiment
The Compact Muon Solenoid (CMS) experiment has developed an electrical implementation of the S-LINK64 extension (Simple Link Interface 64 bit) operating at 400 MB/s in order to read out the detector. This paper studies a possible replacement of the existing S-LINK64 implementation by an optical link, based on 10 Gigabit Ethernet, in order to provide larger throughput, replace aging hardware and simplify the architecture. A prototype transmitter unit has been developed based on the FPGA Altera PCI Express Development Kit with custom firmware. A standard PC acted as the receiving unit. The data transfer has been implemented on a stack of protocols: RDP over IP over Ethernet. This allows the data to be received by standard hardware components like PCs or network switches and NICs. The first tests proved that the basic exchange of packets between the transmitter and the receiving unit works. The paper summarizes the status of these studies.
DOI: 10.1088/1742-6596/664/8/082035
2015
A New Event Builder for CMS Run II
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
DOI: 10.1088/1742-6596/664/8/082033
2015
File-based data flow in the CMS Filter Farm
During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small "documents" using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These "files" can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
DOI: 10.1016/j.nima.2005.03.034
2005
Feasibility study of a XML-based software environment to manage data acquisition hardware devices
A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and the associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. It is based on a number of standalone applications for different hardware modules, and on the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.
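To make the idea of an XML description of a hardware device concrete, the following Python sketch parses a small, invented module description; the element and attribute names are assumptions for illustration and do not reproduce the schema studied in the paper.

import xml.etree.ElementTree as ET

# Purely illustrative XML description of one hardware module.
XML_DOC = """
<module type="ReadoutCard" slot="5">
  <register name="mode" value="0x1"/>
  <register name="threshold" value="0x40"/>
</module>
"""

root = ET.fromstring(XML_DOC)
print(root.attrib["type"], "in slot", root.attrib["slot"])
for reg in root.findall("register"):
    # A real control application would write these values to the hardware.
    print("set", reg.get("name"), "=", int(reg.get("value"), 16))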
DOI: 10.1088/1742-6596/396/1/012038
2012
Distributed error and alarm processing in the CMS data acquisition system
The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) experiment at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activity. Error and alarm processing entails the notification, collection, storage and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform scheme. Alerts and reports are shown online by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that have occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used, and deals with operational aspects during the first years of LHC operation. In particular we focus on performance, stability, and integration with the CMS sub-detectors.
DOI: 10.1109/rtc.2012.6418362
2012
Recent experience and future evolution of the CMS High Level Trigger System
The CMS experiment at the LHC uses a two-stage trigger system, with events flowing from the first level trigger at a rate of 100 kHz. These events are read out by the Data Acquisition system (DAQ), assembled in memory in a farm of computers, and finally fed into the high-level trigger (HLT) software running on the farm. The HLT software selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the 2010–2011 collider run is detailed, as well as the current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and CMS DAQ. The short- and medium-term evolution of the HLT software infrastructure is discussed, with future improvements aimed at supporting extensions of the HLT computing power, and addressing remaining performance and maintenance issues.
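An HLT path can be thought of as an ordered sequence of filter modules in which the first rejection ends the processing of that path. The Python sketch below illustrates only this control flow; the filter names and thresholds are invented and bear no relation to real CMS trigger menus.

def reject_low_energy(event):
    return event.get("energy", 0.0) > 20.0     # illustrative threshold

def require_two_muons(event):
    return event.get("n_muons", 0) >= 2

# A trigger path as an ordered list of filters; all() short-circuits, so
# processing stops at the first filter that rejects the event.
TRIGGER_PATH = [reject_low_energy, require_two_muons]

def hlt_accept(event, path=TRIGGER_PATH):
    return all(f(event) for f in path)

print(hlt_accept({"energy": 35.0, "n_muons": 2}))   # True
print(hlt_accept({"energy": 10.0, "n_muons": 3}))   # False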
DOI: 10.1088/1742-6596/219/2/022003
2010
Dynamic configuration of the CMS Data Acquisition cluster
The CMS Data Acquisition cluster, which runs around 10000 applications, is configured dynamically at run time. XML configuration documents determine what applications are executed on each node and over what networks these applications communicate. Through this mechanism the DAQ System may be adapted to the required performance, partitioned in order to perform (test-) runs in parallel, or re-structured in case of hardware faults. This paper presents the configuration procedure and the CMS DAQ Configurator tool, which is used to generate comprehensive configurations of the CMS DAQ system based on a high-level description given by the user. Using a database of configuration templates and a database containing a detailed model of hardware modules, data and control links, nodes and the network topology, the tool automatically determines which applications are needed, on which nodes they should run, and over which networks the event traffic will flow. The tool computes application parameters and generates the XML configuration documents and the configuration of the run-control system. The performance of the configuration procedure and the tool as well as operational experience during CMS commissioning and the first LHC runs are discussed.
DOI: 10.1109/rtc.2010.5750362
2010
First operational experience with the CMS run control system
The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN's new Large Hadron Collider (LHC) controls the sub-detector and central data acquisition systems and the high-level trigger farm of the experiment. It manages around 10,000 applications that control custom hardware or handle the event building and the high-level trigger processing. The CMS Run Control System is a distributed Java system running on a set of Apache Tomcat servlet containers. Users interact with the system through a web browser. The paper presents the architecture of the CMS Run Control System and deals with operational aspects during the first phase of operation with colliding beams. In particular it focuses on performance, stability, integration with the CMS Detector Control System, integration with LHC status information and tools to guide the shifter.
DOI: 10.1109/23.790709
1999
Cited 4 times
A real-time tracker for hadronic collider experiments
In this paper we propose highly parallel dedicated processors, able to provide precise on-line track reconstruction for future hadronic collider experiments. The processors, organized in a 2-level pipelined architecture, execute very fast algorithms based on the use of a large bank of pre-stored patterns of trajectory points. An associative memory implements the first stage by recognizing track candidates at low resolution, to match the demanding task of tracking at the detector readout rate. Alternative technological implementations for the associative memory are compared. The second stage receives track candidates and high-resolution hits to refine pattern recognition at the associative memory output rate. Parallel and pipelined hardware implements a binary search strategy inside a hierarchically structured pattern bank, stored in high-density commercial RAMs.
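The first-stage idea is to reduce full-resolution hits to coarse "roads" and compare them against a pre-stored pattern bank; the hardware does this for all patterns in parallel during readout, whereas the Python sketch below does it sequentially on a tiny invented bank, purely to illustrate the principle.

# Pre-stored bank of low-resolution patterns ("roads"): one coarse bin per
# detector layer. The contents are invented for the illustration.
PATTERN_BANK = {
    (3, 5, 7, 9),
    (2, 4, 6, 8),
    (3, 4, 6, 9),
}

def coarse_bins(hits, bin_width=8):
    """Reduce full-resolution hit positions to one coarse bin per layer."""
    return tuple(h // bin_width for h in hits)

def find_road(hits):
    """Return the matching road, if any; the real associative memory
    compares all stored patterns in parallel."""
    road = coarse_bins(hits)
    return road if road in PATTERN_BANK else None

print(find_road([25, 41, 57, 73]))  # coarse bins (3, 5, 7, 9) -> match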
DOI: 10.5170/cern-2002-003.285
2002
Cited 3 times
CMS data to surface transportation architecture
The front-end electronics of the CMS experiment will be read out in parallel into approximately 650 modules located in the underground counting room. The data read out will then be transported over a distance of ~200 m to the surface counting room, where they will be received into deep buffers, the Readout Units. The latter also provide the first step in the CMS event building process, by combining the data from multiple detector data sources into larger-size (~16 kB) data fragments. The second and final event-building step merges 64 such super-fragments into a full event. The first stage of the Event Builder, referred to as the Data to Surface (D2S) system, is structured in a way that allows for a modular and scalable DAQ system whose performance can grow with the increasing luminosity of the LHC.
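The first event-building step can be pictured as concatenating the fragments of a fixed group of sources into one super-fragment. The Python sketch below uses simplified numbers (64 sources, 2 kB fragments, groups of 8), chosen only so that the super-fragments come out at the ~16 kB scale quoted in the abstract; it is an illustration, not the D2S implementation.

GROUP_SIZE = 8  # number of sources combined into one super-fragment

def build_super_fragments(fragments):
    """fragments: list of per-source byte strings for one event, ordered by
    source index. Concatenate each group of GROUP_SIZE sources into one
    larger super-fragment (a sketch of the first event-building step)."""
    return [b"".join(fragments[i:i + GROUP_SIZE])
            for i in range(0, len(fragments), GROUP_SIZE)]

frags = [bytes([s]) * 2048 for s in range(64)]   # 64 sources, 2 kB each
supers = build_super_fragments(frags)
print(len(supers), len(supers[0]))               # 8 super-fragments of 16 kB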
DOI: 10.1109/rtc.2007.4382746
2007
The Terabit/s Super-Fragment Builder and Trigger Throttling System for the Compact Muon Solenoid Experiment at CERN
The data acquisition system of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kilobytes from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event building is performed by the Super-Fragment Builder, employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. By providing fast feedback from any of the front-ends to the trigger, the trigger throttling system prevents buffer overflows in the front-end electronics due to variations in the size and rate of events or due to backpressure from the downstream event building and processing. This paper reports on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major sub-detectors, and discusses the ongoing commissioning of the full-scale system.
DOI: 10.1109/rtc.2007.4382773
2007
The CMS High Level Trigger System
The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.
DOI: 10.1109/rtc.2005.1547433
2005
The 2 Tbps "data to surface" system of the CMS data acquisition
The data acquisition system of the CMS experiment, at the CERN LHC collider, is designed to build 1 MB events at a sustained rate of 100 kHz and to provide sufficient computing power to filter the events by a factor of 1000. The data to surface (D2S) system is the first layer of the data acquisition, interfacing the underground subdetector readout electronics to the surface event builder. It collects the 100 GB/s input data from a large number of front-end cards (650), implements a first-stage event building by combining multiple sources into larger-size data fragments, and transports them to the surface for the full event building. The data to surface system can operate at a maximum rate of 2 Tbps. This paper describes the layout, reconfigurability and production validation of the D2S system, which is to be installed by December 2005.
2015
A scalable monitoring for the CMS Filter Farm based on elasticsearch
A flexible monitoring system has been designed for the CMS File-based Filter Farm, making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm, thus providing natural horizontal scaling. A separate central es cluster is used to collect and index aggregated information. The fine-grained information, all the way to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
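As a minimal illustration of pushing one monitoring document into elasticsearch, the sketch below uses the official Python client; the host, index name and field names are placeholders, and depending on the client version the keyword argument may be body instead of document.

from datetime import datetime, timezone
from elasticsearch import Elasticsearch  # official elasticsearch-py client

# Hostname and index name are placeholders, not the CMS ones.
es = Elasticsearch("http://monitoring-host:9200")

doc = {
    "host": "filter-farm-node-01",     # illustrative node name
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "hlt_rate_hz": 512.0,
    "stream": "PhysicsStream",
}

# Index the document; recent clients take `document=`, older ones `body=`.
es.index(index="hlt-monitoring", document=doc)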
2015
Online data handling and storage at the CMS experiment
2014
Boosting Event Building Performance using Infiniband FDR for the CMS Upgrade
DOI: 10.1109/rtc.2014.7097439
2014
Achieving high performance with TCP over 40GbE on NUMA architectures for CMS data acquisition
TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running on a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability unless the software is adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards, and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software wraps the low-level socket library to ease higher-level programming with an API based on an asynchronous event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
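The point of the paper is that socket-based applications on NUMA hosts only perform well when CPU, IRQ and memory affinities are set deliberately. The Linux-only Python sketch below gives a flavour of such tuning at the application level (process pinning and an enlarged receive buffer); the core ids, port and buffer size are arbitrary examples, and IRQ steering is not shown.

import os
import socket

# Pin this process to cores assumed to sit on the NUMA node closest to the
# NIC (core ids are illustrative; Linux-only API).
os.sched_setaffinity(0, {0, 1, 2, 3})

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Enlarge the kernel receive buffer for a high-bandwidth stream
# (the value is an example, not a tuned number).
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
srv.bind(("0.0.0.0", 12345))
srv.listen(1)

# Toy receiver: count bytes from a single connection until it closes.
conn, _ = srv.accept()
received = 0
while True:
    chunk = conn.recv(1 << 20)
    if not chunk:
        break
    received += len(chunk)
print("received", received, "bytes")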
DOI: 10.1088/1742-6596/396/4/042049
2012
Health and performance monitoring of the online computer cluster of CMS
The CMS experiment at the LHC features over 2,500 devices that need constant monitoring in order to ensure proper data taking. The monitoring solution has been migrated from Nagios to Icinga, with several useful plugins. The motivations behind the migration and the selection of the plugins are discussed.
DOI: 10.1088/1742-6596/331/2/022009
2011
The LHC Compact Muon Solenoid experiment Detector Control System
The Compact Muon Solenoid (CMS) experiment at CERN is a multi-purpose experiment designed to exploit the physics of proton-proton collisions at the Large Hadron Collider collision energy (14 TeV in the centre of mass) over the full range of expected luminosities (up to 10^34 cm^-2 s^-1). The CMS detector control system (DCS) ensures a safe, correct and efficient operation of the detector so that high quality physics data can be recorded. The system is also required to operate the detector with a small crew of experts who can take care of the maintenance of its software and hardware infrastructure. The subsystems together comprise more than a million parameters that need to be supervised by the DCS. A cluster of roughly 100 servers is used to provide the required processing resources. A scalable approach has been chosen, factorizing the DCS system as much as possible. CMS DCS has made a clear division between its computing resources and its functionality by creating a computing framework that allows functional components to be plugged in. DCS components are developed by the subsystem expert groups while the computing infrastructure is developed centrally. To ensure the correct operation of the detector, DCS organizes the communication between the accelerator and the experiment systems, making sure that the detector is in a safe state during hazardous situations and is fully operational when stable conditions are present. This paper describes the current status of the CMS DCS, focusing on operational aspects and the role of DCS in this communication.
2013
The First Running Period of the CMS Detector Controls System - A Success Story
DOI: 10.1109/rtc.2012.6418171
2012
A comprehensive zero-copy architecture for high performance distributed data acquisition over advanced network technologies for the CMS experiment
This paper outlines a software architecture where zero-copy operations are used comprehensively at every processing point from the Application layer to the Physical layer. The proposed architecture is being used during feasibility studies on advanced networking technologies for the CMS experiment at CERN. The design relies on a homogeneous peer-to-peer message passing system, which is built around memory pool caches allowing efficient and deterministic latency handling of messages of any size through the different software layers. In this scheme portable distributed applications can be programmed to process input to output operations by mere pointer arithmetic and DMA operations only. The approach combined with the open fabric protocol stack (OFED) allows one to attain near wire-speed message transfer at application level. The architecture supports full portability of user applications by encapsulating the protocol details and network into modular peer transport services whereas a transparent replacement of the underlying protocol facilitates deployment of several network technologies like Gigabit Ethernet, Myrinet, Infiniband etc. Therefore, this solution provides a protocol-independent communication framework and prevents having to deal with potentially difficult couplings when the underlying communication infrastructure is changed. We demonstrate the feasibility of this approach by giving efficiency and performance measurements of the software in the context of the CMS distributed event building studies.
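The full zero-copy design with OFED and memory-pool caches is well beyond a short example, but the basic ingredient of receiving directly into pre-allocated memory can be shown in a few lines of Python; the block size and pool structure below are assumptions for illustration, not the CMS framework.

import socket

POOL_BLOCK = 64 * 1024                    # size of one pre-allocated block
pool = bytearray(POOL_BLOCK)              # reusable buffer from a "pool"
view = memoryview(pool)                   # slicing a memoryview copies nothing

def receive_into_pool(conn):
    """Read into the pre-allocated block without intermediate copies and
    return a zero-copy view of the bytes actually received."""
    filled = 0
    while filled < POOL_BLOCK:
        n = conn.recv_into(view[filled:])  # kernel writes directly into pool
        if n == 0:
            break
        filled += n
    return view[:filled]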
DOI: 10.5170/cern-2000-010.569
2000
CMS front-end / DAQ interfacing
DOI: 10.1109/rtc.2010.5750482
2010
The CMS electronic logbook
The CMS ELogbook (ELog) is a collaborative tool, which provides a platform to share and store information about various events or problems occurring in the Compact Muon Solenoid (CMS) experiment at CERN during operation. The ELog is based on a Model-View-Controller (MVC) software architectural pattern and uses an Oracle database to store messages and attachments. The ELog is developed as a pluggable web component in Oracle Portal in order to provide better management, monitoring and security.
DOI: 10.1088/1742-6596/219/4/042051
2010
CMS partial releases: Model, tools, and applications online and framework-light releases
With the integration of all CMS software packages into one release, the CMS software release management team faced the problem that, for some applications, the large distribution size and the large number of unused packages had become a real issue. We describe a solution to this problem. Based on functionality requirements and dependency analysis, we define a self-contained subset of the full CMS software release and create a Partial Release for such applications. We describe a high-level architecture for this model, and the tools that are used to automate the release preparation. Finally we discuss the two most important use cases for which this approach is currently implemented.
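The dependency analysis behind a partial release amounts to taking the transitive closure of a package dependency graph, starting from the packages an application actually needs. The Python sketch below demonstrates that closure on an invented graph; the package names are illustrative, not the real CMS ones.

# Invented package dependency graph (package -> direct dependencies).
DEPENDENCIES = {
    "HLTrigger": ["FWCore", "DataFormats"],
    "DataFormats": ["FWCore"],
    "FWCore": [],
    "Simulation": ["FWCore", "Geometry"],
    "Geometry": [],
}

def partial_release(top_level):
    """Return the self-contained set of packages needed by `top_level`:
    the requested packages plus their transitive dependencies."""
    needed, stack = set(), list(top_level)
    while stack:
        pkg = stack.pop()
        if pkg not in needed:
            needed.add(pkg)
            stack.extend(DEPENDENCIES.get(pkg, []))
    return needed

# An "online" application needs only the trigger-related packages.
print(sorted(partial_release(["HLTrigger"])))
# ['DataFormats', 'FWCore', 'HLTrigger']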
2009
High Level Trigger Configuration and Handling of Trigger Tables in the CMS Filter Farm
The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data in 2008. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read out and assembled by an Event Builder. The High Level Trigger (HLT) employs a set of sophisticated software algorithms to analyze the complete event information and further reduce the accepted event rate for permanent storage and analysis. This paper describes the design and implementation of the HLT Configuration Management system. First experiences with commissioning of the HLT system are also reported.
DOI: 10.1109/23.790708
1999
A prototype of programmable associative memory for track finding
We present a device, based on the concept of associative memory for pattern recognition, dedicated to on-line track finding in high-energy physics experiments. A large pattern bank, describing all possible tracks, can be organized into Field Programmable Gate Arrays where all patterns are compared in parallel to data coming from the detector during readout. Patterns, recognized among 2^66 possible combinations, are output in a few 30 MHz clock cycles. Programmability results in a flexible, simple architecture and makes it possible to keep up smoothly with technology improvements. A 64-PAM array has been assembled on a prototype VME board and fully tested up to 30 MHz.
DOI: 10.1109/rtc.2007.4382749
2007
Effects of Adaptive Wormhole Routing in Event Builder Networks
The data acquisition system of the CMS experiment at the Large Hadron Collider features a two-stage event builder, which combines data from about 500 sources into full events at an aggregate throughput of 100 GByte/s. To meet the requirements, several architectures and interconnect technologies have been quantitatively evaluated. Both Gigabit Ethernet and Myrinet networks will be employed during the first run. Nearly full bisection throughput can be obtained using a custom software driver for Myrinet based on barrel-shifter traffic shaping. This paper discusses the use of Myrinet dual-port network interface cards supporting channel bonding to achieve virtual 5 Gbit/s links with adaptive routing, to alleviate the throughput limitations associated with wormhole routing. Adaptive routing is not expected to be suitable for high-throughput event builder applications in high-energy physics. To corroborate this claim, results from the CMS event builder pre-series installation at CERN are presented and the problems of wormhole routing networks are discussed.
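Barrel-shifter traffic shaping schedules the sources so that in time slot t source s sends to destination (s + t) mod N, giving every destination exactly one sender per slot and hence near full bisection throughput. A tiny Python sketch of that schedule follows; N is a small example value, not the size of the real network.

N = 4  # small example; the real event builder has far more nodes

def barrel_shifter_schedule(num_nodes, num_slots):
    """In time slot t, source s sends to destination (s + t) % num_nodes,
    so every destination receives from exactly one source per slot."""
    return [[(s, (s + t) % num_nodes) for s in range(num_nodes)]
            for t in range(num_slots)]

for t, slot in enumerate(barrel_shifter_schedule(N, N)):
    print("slot", t, slot)   # list of (source, destination) pairs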
1992
[Celiac disease and its diagnostic evolution. Comparisons and experiences in a hospital pediatric department (1975-1992). I].
Coeliac disease (CD), or gluten-sensitive enteropathy (GSE), is a permanent intolerance to wheat gliadin and to related proteins, inducing malabsorption and typical damage to the jejunal mucosa (total or subtotal villous atrophy, SVA) in genetically predisposed individuals ("DQW2"). A large amount of research has been devoted to CD pathogenesis: the most recent studies, thanks to sophisticated experimental methods, support the immunological theory of pathogenesis and that of direct cytotoxicity. The correct diagnostic procedure for CD, established in 1970 by the European Society for Pediatric Gastroenterology and Nutrition (ESPGAN), required three small bowel mucosal biopsies. In recent years, because of the difficulties of such a practice, the need for non-invasive diagnostic approaches has emerged; such approaches have been validated in absorption tests (one-hour blood xylose, intestinal permeability methods) and in immunogenetic tests (antibodies against gliadin, reticulin, endomysium, the 90 kD glycoprotein and human jejunum; HLA class I/II antigens). The specific MHC antigens establish CD's incidence in several populations and in particular situations, such as first-degree relatives and diseases associated with CD (dermatitis herpetiformis (DH), insulin-dependent diabetes mellitus (IDDM) and other auto-immune syndromes). The specific serum antibodies, used singly as first-level screening or estimated in combination with absorption tests, reach the highest levels of specificity and sensitivity in CD diagnosis. Comparison with at least one typical CD histological feature, obtained after a challenge with sufficient gluten, nevertheless remains fundamental in dubious cases and at ages not at high auxological risk (ESPGAN 1989). Adolescence is a period of frequent non-compliance with a gluten-free diet and of particular psychological and physical problems: the apparent "gluten insensitivity", typical of teenagers and adults, recalls the definitions of silent CD and latent CD (iceberg-like). In the first case the jejunal mucosa is abnormal and the symptomatology is not evident. In latent CD, genetically restricted, the mucosa is normal but there are minimal markers of inappropriate immunity to gliadin (at the level of intestinal humoral immunity) and a possible worsening of histological lesions to the third stage under environmental stimuli. This represents a two-stage model of CD. This is why CD is still under-recognized, despite recent statistics reporting an increasing incidence (late and atypical forms). Prevalence rates between 1:300 and 1:4,000 and more are quoted in the literature. The necessity of a strict gluten-free diet is confirmed by the evident frequency of lymphoma and by the increased risk of malignancy in untreated CD. (ABSTRACT TRUNCATED AT 400 WORDS)
DOI: 10.48550/arxiv.physics/0306150
2003
The CMS Event Builder
The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragments of 8 data sources. The second stage combines the 64 super-fragments into full events. This architecture allows installation of the second stage of the event builder in steps, with the overall throughput scaling linearly with the number of switches in the second stage. Possible implementations of the components of the event builder are discussed and the expected performance of the full event builder is outlined.
DOI: 10.1142/9789812792433_0087
2000
MSSM HIGGS SEARCHES USING τ DECAY MODES AT CMS
DOI: 10.5170/cern-2001-005.321
2001
Front-End/DAQ Interfaces in CMS
2001
The CDF 2 online silicon vertex tracker
DOI: 10.48550/arxiv.hep-ph/0112141
2001
The CDF-II Online Silicon Vertex Tracker
The Online Silicon Vertex Tracker is the new CDF-II level 2 trigger processor designed to reconstruct 2-D tracks within the Silicon Vertex Detector with high speed and accuracy. By performing a precise measurement of impact parameters the SVT allows tagging online B events which typically show displaced secondary vertices. Physics simulations show that this will greatly enhance the CDF-II B-physics capability. The SVT has been fully assembled and operational since the beginning of Tevatron RunII in April 2001. In this paper we briefly review the SVT design and physics motivation and then describe its performance during the early phase (April-October 2001) of run II.
2001
CMS Object-Oriented Analysis
The CMS OO reconstruction program, ORCA, has been used since 1999 to produce large samples of reconstructed Monte Carlo events for detector optimization, trigger and physics studies. The events are stored in several Objectivity federations at CERN, in the US, Italy and other countries. To perform their studies, physicists use different event samples ranging from complete datasets of TByte size to only a few events out of these datasets. We describe the implementation of these requirements in the ORCA software and the way collections of events are accessed for reading, writing or copying.
DOI: 10.1142/9789812776464_0019
2002
THE CDF ONLINE SILICON VERTEX TRACKER
The Online Silicon Vertex Tracker is the new CDF-II level 2 trigger processor designed to reconstruct 2-D tracks within the Silicon Vertex Detector with high speed and accuracy. By performing a precise measurement of impact parameters the SVT allows tagging online B events which typically show displaced secondary vertices. Physics simulations show that this will greatly enhance the CDF-II B-physics capability. The SVT has been fully assembled and operational since the beginning of Tevatron RunII in April 2001. In this paper we briefly review the SVT design and physics motivation and then describe its performance during the early phase (April-October 2001) of run II.
1981
[Topical treatment of Lyell's syndrome with dextranomer (Debrisan) (author's transl)].
DOI: 10.1016/0920-5632(96)00369-6
1996
B physics at CDF
Results on B physics and heavy quarkonia production based on data collected during the Tevatron runs Ia and Ib are presented. For B physics, results on B meson mass measurements, B meson lifetimes, rare decay searches, B0-B̄0 mixing and B meson polarization are discussed. Accuracies comparable to those of leading e+e− experiments are attained, or expected to be attained by the end of Tevatron Run I, in almost all these fields. Unexpected features of J/ψ, ψ(2S), χc and bound state production are also discussed.
1983
[Long-term non-malformative urinary infections in children. (I)].
Non-malformative long-term urinary tract infections affect above all the female sex. The specific anatomical conditions explain only in part the predisposition of the female sex. Other determinants are individual susceptibility and bacterial virulence. The authors studied 169 pediatric patients with recurrent urinary tract infections, of whom 159 were females. The study of the blood group in 70 patients showed a clear predominance of groups B and AB. About one-third of the patients studied presented scarce or completely absent symptomatology. The urodynamic study in 120 patients revealed an abnormal pattern in more than 80% of the cases. According to the authors the abnormal urodynamic pattern is related to the long-term infection and is reversible in the cases which are curable. In about one-third of the cases with long-term infections resistant to therapy, cystoscopy revealed a cystitis cystica. The study carried out by the authors permits precise indications on the specific tests that must be performed in this particular type of pathology and on the indications for treatment.
1983
[Polymorphism of the cleidocranial dysplasia syndrome. Presentation of 2 cases].
Following a brief description of the clinical and radiological features of cleidocranial dysplasia (see Table), two patients are presented who respectively exemplify the classic and the incomplete form of this syndrome. The morphologic appearance of the bone segments involved may suggest a diagnosis of cleidocranial dysplasia even on occasional radiologic examinations, especially in the pediatric age. Patients suffering from this condition, which in itself is not incapacitating, should be observed serially so that appropriate therapeutic measures can be adopted, mainly in the presence of hip dysplasia or when the thoracic cage or the spine are severely involved.
1985
[Infantile gluteal granuloma. Case report].
A 5-month-old infant with "granuloma gluteale infantum" (G.G.I.) is reported. One or more tumorous red-purple nodules on the gluteal or genitocrural area are the usual cutaneous lesions of G.G.I. The histologic aspect resembles that of pyogenic granulomas. The exact pathogenesis of G.G.I. is still to be defined; nevertheless, the use of plastic diaper covers and topical fluorinated steroid preparations seems to have a great influence.
1985
[Prolonged Q-T syndrome (Romano-Ward syndrome). Description of a case diagnosed in infancy].
A case of prolonged Q-T interval syndrome without deafness (Romano-Ward syndrome) is reported. A 2-month-old female was seen in consultation because of a near-miss event (syncopal attack). An EKG showed a long Q-T interval. Successful therapy was achieved with propranolol.
1987
[A rare congenital malformation: genu recurvatum. Presentation of a case].
The authors report a case of congenital genu recurvatum: a rare malformation characterized by abnormal hyperextension of the knee and marked limitation of flexion. Pathogenesis, clinical picture and therapy are described.
1993
[Celiac disease and the evolution of its diagnosis. Comparison and experience at a hospital pediatric department (1975-1993). (Second part)].
Over a period of more than 18 years the principal medical bibliographic landmarks regarding the definition, diagnosis and examination of coeliac disease (CD) have been compared and, as far as possible, reproduced. The results confirm the observations derived from wider statistics. From the beginning of 1975 to the first six months of 1993, in the Merate Hospital Pediatric Division, 323 patients were submitted to a first jejunal peroral biopsy; in 133 cases (41.2%) CD was diagnosed. Since only 34 children (25.6%) concluded the ESPGAN diagnostic iter with 3 consecutive biopsies, the reasons why the other patients did not complete or comply with the programme are examined here. Since 1987 a specific anti-gliadin (IgA and IgG) antibody titration has been available, either in the investigation of suspect symptomatology or as a control marker during the assessment or after a confirmed CD diagnosis. Since October 1992 anti-endomysium antibodies (EMA or AEA IgA) have been determined only in selected patients. From the examination of 24 subjects now checked with AGA IgA/IgG and EMA and with a first positive biopsy, it is possible to point out that only one jejunal biopsy (or at most a second one as a control during the gluten challenge), with the guarantee of haematological patterns, does not raise doubts about a CD diagnosis. Analogous considerations mainly refer to atypical "late onset" CD, where a constant lack of AGA and EMA during a gluten-free diet (GFD), or their changes in case of non-compliance or during a gluten challenge, can exclude a subsequent histological confirmation. From this experience it follows that a specific anti-gliadin and anti-endomysium antibody investigation is indispensable for shortening diagnostic times, for reducing an often unwelcome invasive diagnostic method and for discovering the "CD iceberg".
1986
[Indications and results of surgical treatment in gastroesophageal reflux and hiatal hernia].
It is well known that closure of the cardia is incomplete in about 25-30% of all infants; GER is a direct consequence. Roughly two-thirds of these infants do not show symptoms and only one-third become symptomatic. The symptoms are mild in about 75% of the symptomatic children; no treatment, or medical treatment by the pediatrician, is required. In the remaining 25% the symptoms are moderate or severe and clinical treatment is necessary. About 85% of these children are cured with conservative treatment and only 15% of this small remaining group require surgery. In the paper the diagnostic problems and the indications for surgery are considered. The authors report the results of 66 children operated on for GER without (44 children) and with (22 children) hiatus hernia. The operative technique was gastropexy according to Boerema, plus retroesophageal hiatopexy in the cases of important hiatus hernia. At follow-up 61 children (92.5%) were completely asymptomatic and three showed mild symptoms without pathological radiological findings. Clinical and radiological recurrences occurred in two patients (4.5%) with severe brain damage. Two children were reoperated on postoperatively for ileus due to adhesions. The mortality rate was zero. In the authors' opinion, the Boerema procedure is a simple, physiologic and fast technique, associated with very few complications and no mortality, and should be considered the method of choice in the surgical treatment of GER and hiatal hernia in pediatric patients.
DOI: 10.2172/1372340
1995
A Measurement of the $B^0\bar{B}^0$ Mixing Using Muon Pairs at CDF
This thesis concerns the experimental study of B mesons (mesons containing a b quark) produced in proton-antiproton collisions at a center-of-mass energy of 1800 GeV. This work has been performed within the CDF collaboration. CDF is a general purpose detector located at Fermilab, in Batavia, which exploits the Fermilab proton-antiproton collider.
1993
Recent results on QCD at the Tevatron (CDF and D0)
1992
Color coherence in multijet events at CDF
Results of a search for evidence of color coherence in CDF p̄p → 3-jet + X data from the 1988-89 run high-statistics inclusive jet sample (4.2 pb^-1 of integrated luminosity) are presented. We study the geometric correlation between the third jet (regarded as the product of 'soft' branchings in the Leading Log Approximation) and the second one, in comparison to the Isajet and Herwig shower Monte Carlo predictions. A geometric variable for this correlation is found which is sensitive to interference: the qualitative agreement of Herwig (with coherent shower development) with the data distribution, contrasted with the disagreement of Isajet (independent development), is consistent with the observation of a color interference effect. Further evidence for this interpretation comes from 'switching off' interference in Herwig by means of a proper event selection, which yields a distribution very similar to the Isajet one.