
Zeynep Demiragli


DOI: 10.1016/j.dark.2019.100371
2020
Cited 150 times
Dark Matter benchmark models for early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum
This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and presents the results of such interpretations.
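For concreteness, the simplest of these simplified models extend the SM by one mediator and one dark matter candidate; the vector-mediator case is conventionally written with an interaction Lagrangian of the form below (a standard expression from the simplified-model literature, not quoted verbatim from the report), leaving four free parameters: the mediator mass M_med, the dark matter mass m_χ, and the couplings g_q and g_χ.
```latex
\mathcal{L}_{\mathrm{int}} \;\supset\; -\, g_q \sum_{q} Z'_{\mu}\,\bar{q}\gamma^{\mu}q \;-\; g_{\chi}\, Z'_{\mu}\,\bar{\chi}\gamma^{\mu}\chi
```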
DOI: 10.1140/epjc/s10052-022-11048-8
2022
Cited 17 times
Theory, phenomenology, and experimental avenues for dark showers: a Snowmass 2021 report
Abstract In this work, we consider the case of a strongly coupled dark/hidden sector, which extends the Standard Model (SM) by adding an additional non-Abelian gauge group. These extensions generally contain matter fields, much like the SM quarks, and gauge fields similar to the SM gluons. We focus on the exploration of such sectors where the dark particles are produced at the LHC through a portal and undergo rapid hadronization within the dark sector before decaying back, at least in part and potentially with sizeable lifetimes, to SM particles, giving a range of possibly spectacular signatures such as emerging or semi-visible jets. Other, non-QCD-like scenarios leading to soft unclustered energy patterns or glueballs are also discussed. After a review of the theory, existing benchmarks and constraints, this work addresses how to build consistent benchmarks from the underlying physical parameters and presents new developments for the Pythia Hidden Valley module, along with jet substructure studies. Finally, a series of improved search strategies is presented in order to pave the way for a better exploration of the dark showers at the LHC.
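As a flavour of how such dark-shower benchmarks are steered in practice, a minimal Pythia 8 Hidden Valley setup might look like the sketch below; the setting names follow the public Pythia 8 manual, but the values are illustrative only and not a benchmark from this report.
```python
# Minimal sketch of a Pythia 8 Hidden Valley configuration (illustrative values).
import pythia8  # requires Pythia 8 built with its Python interface

pythia = pythia8.Pythia()
pythia.readString("HiddenValley:ffbar2Zv = on")   # s-channel portal production
pythia.readString("HiddenValley:Ngauge = 3")      # dark SU(3) gauge group
pythia.readString("HiddenValley:nFlav = 2")       # number of dark quark flavours
pythia.readString("HiddenValley:FSR = on")        # dark-sector parton shower
pythia.readString("HiddenValley:fragment = on")   # dark hadronization
pythia.readString("HiddenValley:alphaOrder = 1")  # running dark coupling
pythia.readString("HiddenValley:Lambda = 10.")    # dark confinement scale [GeV]
pythia.init()
pythia.next()  # generate one event
```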
DOI: 10.1088/1748-0221/18/08/p08014
2023
Cited 3 times
Performance of the CMS High Granularity Calorimeter prototype to charged pion beams of 20–300 GeV/c
Abstract The upgrade of the CMS experiment for the high luminosity operation of the LHC comprises the replacement of the current endcap calorimeter by a high granularity sampling calorimeter (HGCAL). The electromagnetic section of the HGCAL is based on silicon sensors interspersed between lead and copper (or copper tungsten) absorbers. The hadronic section uses layers of stainless steel as an absorbing medium and silicon sensors as an active medium in the regions of high radiation exposure, and scintillator tiles directly read out by silicon photomultipliers in the remaining regions. As part of the development of the detector and its readout electronic components, a section of a silicon-based HGCAL prototype detector along with a section of the CALICE AHCAL prototype was exposed to muons, electrons and charged pions in beam test experiments at the H2 beamline at the CERN SPS in October 2018. The AHCAL uses the same technology as foreseen for the HGCAL but with much finer longitudinal segmentation. The performance of the calorimeters, in terms of energy response and resolution as well as longitudinal and transverse shower profiles, is studied using negatively charged pions and compared to GEANT4 predictions. This is the first report summarizing results of hadronic showers measured by the HGCAL prototype using beam test data.
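For context, the energy resolution in such beam tests is customarily summarized with a stochastic term S and a constant term C added in quadrature (⊕); this is the standard parametrization in calorimetry, not a result quoted from this paper, and some analyses add a noise term N/E as well.
```latex
\frac{\sigma_E}{E} \;=\; \frac{S}{\sqrt{E/\mathrm{GeV}}} \;\oplus\; C
```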
DOI: 10.1109/nssmic.2015.7581984
2015
Cited 8 times
The CMS Timing and Control Distribution System
The Compact Muon Solenoid (CMS) experiment operating at the CERN (European Organization for Nuclear Research) Large Hadron Collider (LHC) is in the process of upgrading several of its detector systems. Adding more individual detector components brings the need to test and commission those components separately from existing ones so as not to compromise physics data-taking. The CMS Trigger, Timing and Control (TTC) system had reached its limits in terms of the number of separate elements (partitions) that could be supported. A new Timing and Control Distribution System (TCDS) has been designed, built and commissioned in order to overcome this limit. It also brings additional functionality to facilitate parallel commissioning of new detector elements. The new TCDS system and its components will be described and results from the first operational experience with the TCDS in CMS will be shown.
DOI: 10.22323/1.370.0120
2020
Cited 7 times
The APOLLO ATCA Platform
We have developed a novel and generic open-source platform - Apollo - which simplifies the design of custom Advanced Telecommunications Computing Architecture (ATCA) blades by factoring the design into generic infrastructure and application-specific parts. The Apollo "Service Module" provides the required ATCA Intelligent Platform Management Controller, power entry and conditioning, a powerful system-on-module computer, and flexible clock and communications infrastructure. The Apollo "Command Module" is customized for each application and typically includes two large field-programmable gate arrays, several hundred optical fiber interfaces operating at speeds up to 28 Gbps, memories, and other supporting infrastructure. The command and service module boards can be operated together or independently on the bench without the need for an ATCA shelf.
DOI: 10.1088/1742-6596/664/8/082009
2015
Cited 3 times
Online data handling and storage at the CMS experiment
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
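To make the file-based bookkeeping concrete, the sketch below shows the merger pattern in miniature: concatenate the per-source output files of one luminosity section and record the aggregation in a small JSON document. The file layout and JSON fields are invented for illustration and are not the real STS schema.
```python
import glob
import json
import os
from pathlib import Path

def merge_lumisection(run: int, ls: int, in_dir: str, out_dir: str) -> dict:
    """Concatenate per-source HLT output files for one luminosity section
    and write a small JSON bookkeeping document alongside the merged file."""
    sources = sorted(glob.glob(os.path.join(in_dir, f"run{run}_ls{ls:04d}_*.dat")))
    merged = Path(out_dir) / f"run{run}_ls{ls:04d}.dat"
    total = 0
    with open(merged, "wb") as out:
        for src in sources:
            with open(src, "rb") as f:
                data = f.read()
            out.write(data)          # append this source's payload
            total += len(data)
    meta = {"run": run, "ls": ls, "sources": len(sources), "bytes": total}
    (Path(out_dir) / f"run{run}_ls{ls:04d}.jsn").write_text(json.dumps(meta))
    return meta
```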
DOI: 10.1109/rtc.2016.7543164
2016
Cited 3 times
Performance of the new DAQ system of the CMS experiment for run-2
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of more than 100 GB/s to the High-Level Trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at an output rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013-2014. The motivation for this upgrade was twofold. Firstly, the compute nodes, networking and storage infrastructure were reaching the end of their lifetimes. Secondly, in order to maintain physics performance with higher LHC luminosities and increasing event pileup, a number of sub-detectors are being upgraded, increasing the number of readout channels as well as the required throughput, and replacing the off-detector readout electronics with a MicroTCA-based DAQ interface. The new DAQ architecture takes advantage of the latest developments in the computing industry. For data concentration, 10/40 Gbit/s Ethernet technologies are used, and a 56 Gbit/s Infiniband FDR CLOS network (total throughput ≈ 4 Tbit/s) has been chosen for the event builder. The upgraded DAQ-HLT interface is entirely file-based, essentially decoupling the DAQ and HLT systems. The fully-built events are transported to the HLT over 10/40 Gbit/s Ethernet via a network file system. The collection of events accepted by the HLT and the corresponding metadata are buffered on a global file system before being transferred off-site. The monitoring of the HLT farm and the data-taking performance is based on the Elasticsearch analytics tool. This paper presents the requirements, implementation, and performance of the system. Experience is reported on the first year of operation with LHC proton-proton runs as well as with the heavy ion lead-lead runs in 2015.
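A quick back-of-envelope check ties the quoted numbers together; note the ~1 MB average event size is inferred from 100 GB/s at 100 kHz, not stated explicitly above.
```python
# Back-of-envelope consistency check of the quoted Run-2 DAQ figures.
l1_accept_rate_hz = 100e3      # level-1 accept rate
avg_event_bytes = 1e6          # ~1 MB, inferred from 100 GB/s / 100 kHz
hlt_output_hz = 1e3            # ~1 kHz accepted by the HLT

input_gbytes_s = l1_accept_rate_hz * avg_event_bytes / 1e9
print(f"event-builder input : {input_gbytes_s:.0f} GB/s")                      # ~100 GB/s
print(f"storage output      : {hlt_output_hz * avg_event_bytes / 1e6:.0f} MB/s")
```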
DOI: 10.1088/1742-6596/898/3/032019
2017
Cited 3 times
The CMS Data Acquisition - Architectures for the Phase-2 Upgrade
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10³⁴ cm⁻²s⁻¹ (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS Detector will also undergo a major upgrade to prepare for Phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis.
DOI: 10.1051/epjconf/201921407017
2019
Cited 3 times
Experience with dynamic resource provisioning of the CMS online cluster using a cloud overlay
The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier1 Grid sites together. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and hence can access the remote EOS-based storage with a high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also abstracts away the different types of machines and their underlying segmented networks. During the LHC technical stop periods, the HLT cloud is set to its static mode of operation where it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, occurring once or more per day. For that, the cloud dynamically follows the LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT reduces towards the end of a fill.
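A toy version of that beam-state-following logic is sketched below; the state names and the three callables stand in for the real LHC/DAQ/cloud interfaces, which are not public.
```python
import time

# Hypothetical beam states during which HLT nodes are free for opportunistic use.
IDLE_STATES = {"NO BEAM", "INJECTION", "RAMP DOWN"}

def follow_beam_states(get_beam_state, resume_vms, hibernate_vms, poll_s=60):
    """Toy control loop: resume cloud VMs between fills and hibernate them
    when the LHC heads back towards collisions. All three callables are
    placeholders for the real DAQ/cloud interfaces."""
    cloud_active = False
    while True:
        state = get_beam_state()
        if state in IDLE_STATES and not cloud_active:
            resume_vms()          # wake hibernated VMs for grid jobs
            cloud_active = True
        elif state not in IDLE_STATES and cloud_active:
            hibernate_vms()       # give the nodes back to the DAQ/HLT
            cloud_active = False
        time.sleep(poll_s)
```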
DOI: 10.48550/arxiv.2306.13567
2023
Detector R&D needs for the next generation $e^+e^-$ collider
The 2021 Snowmass Energy Frontier panel wrote in its final report "The realization of a Higgs factory will require an immediate, vigorous and targeted detector R&D program". Both linear and circular $e^+e^-$ collider efforts have developed a conceptual design for their detectors and are aggressively pursuing a path to formalize these detector concepts. The U.S. has world-class expertise in particle detectors, and is eager to play a leading role in the next generation $e^+e^-$ collider, currently slated to become operational in the 2040s. It is urgent that the U.S. organize its efforts to provide leadership and make significant contributions in detector R&D. These investments are necessary to build and retain the U.S. expertise in detector R&D and future projects, enable significant contributions during the construction phase and maintain its leadership in the Energy Frontier regardless of the choice of the collider project. In this document, we discuss areas where the U.S. can and must play a leading role in the conceptual design and R&D for detectors for $e^+e^-$ colliders.
DOI: 10.1088/1748-0221/18/08/p08024
2023
Neutron irradiation and electrical characterisation of the first 8” silicon pad sensor prototypes for the CMS calorimeter endcap upgrade
As part of its HL-LHC upgrade program, the CMS collaboration is replacing its existing endcap calorimeters with a high-granularity calorimeter (CE). The new calorimeter is a sampling calorimeter with unprecedented transverse and longitudinal readout for both electromagnetic and hadronic compartments. Due to its compactness, intrinsic time resolution, and radiation hardness, silicon has been chosen as active material for the regions exposed to higher radiation levels. The silicon sensors are fabricated as 20 cm (8") wide hexagonal wafers and are segmented into several hundred pads which are read out individually. As part of the sensor qualification strategy, 8" sensor irradiation with neutrons was conducted at the Rhode Island Nuclear Science Center (RINSC) in 2020-21, followed by the electrical characterisation of the sensors. This paper documents the completion of this important milestone in the CE's R&D program and provides a detailed account of the associated infrastructure and procedures. The results on the electrical properties of the irradiated CE silicon sensors are presented.
DOI: 10.1088/1742-6596/898/3/032020
2017
Performance of the CMS Event Builder
2018
Performance of the CMS muon detector and muon reconstruction with proton-proton collisions at √s = 13 TeV
DOI: 10.1051/epjconf/201921401015
2019
Operational experience with the new CMS DAQ-Expert
The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of about 30k cores and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances of software applications, the DAQ system is complex in itself and failures cannot be completely excluded. Moreover, problems in the readout of the detectors, in the first-level trigger system or in the high-level trigger may provoke anomalous behaviour of the DAQ system which sometimes cannot easily be differentiated from a problem in the DAQ system itself. In order to achieve high data-taking efficiency with operators from the entire collaboration and without relying too heavily on the on-call experts, an expert system, the DAQ-Expert, has been developed that can pinpoint the source of most failures and give advice to the shift crew on how to recover in the quickest way. The DAQ-Expert constantly analyzes monitoring data from the DAQ system and the high-level trigger by making use of logic modules written in Java that encapsulate the expert knowledge about potential operational problems. The results of the reasoning are presented to the operator in a web-based dashboard, may trigger sound alerts in the control room and are archived for post-mortem analysis, presented in a web-based timeline browser. We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production.
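The real logic modules are written in Java against internal monitoring schemas; the Python sketch below only illustrates the shape of such a module, with all field names invented: it inspects one monitoring snapshot and, if its condition fires, returns a problem description plus a recovery suggestion.
```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    problem: str
    action: str

def check_backpressure(snapshot: dict) -> Suggestion | None:
    """Toy logic module: flag stuck front-end drivers when the level-1 rate
    looks healthy but event-builder throughput has collapsed."""
    if snapshot["l1_rate_hz"] > 90e3 and snapshot["eb_throughput_gbs"] < 10:
        feds = [f for f in snapshot["feds"] if f["backpressured"]]
        if feds:
            ids = ", ".join(str(f["id"]) for f in feds)
            return Suggestion(
                problem=f"FED(s) {ids} exerting backpressure",
                action="Issue a resync; if it persists, recycle the subsystem.",
            )
    return None
```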
2016
Opportunistic usage of the CMS online cluster using a cloud overlay
2016
Search for pair-produced vectorlike B quarks in proton-proton collisions at √s = 8 TeV
2016
Search for long-lived charged particles in proton-proton collisions at √s=13 TeV
DOI: 10.48550/arxiv.2203.12035
2022
Displaying dark matter constraints from colliders with varying simplified model parameters
The search for dark matter is one of the main science drivers of the particle and astroparticle physics communities. Determining the nature of dark matter will require a broad approach, with a range of experiments pursuing different experimental hypotheses. Within this search program, collider experiments provide insights on dark matter which are complementary to direct/indirect detection experiments and to astrophysical evidence. To compare results from a wide variety of experiments, a common theoretical framework is required. The ATLAS and CMS experiments have adopted a set of simplified models which introduce two new particles, a dark matter particle and a mediator, and whose interaction strengths are set by the couplings of the mediator. So far, the presentation of LHC and future hadron collider results has focused on four benchmark scenarios with specific coupling values within these simplified models. In this work, we describe ways to extend those four benchmark scenarios to arbitrary couplings, and release the corresponding code for use in further studies. This will allow for more straightforward comparison of collider searches to accelerator experiments that are sensitive to smaller couplings, such as those for the US Community Study on the Future of Particle Physics (Snowmass 2021), and will give a more complete picture of the coupling dependence of dark matter collider searches when compared to direct and indirect detection searches. By using semi-analytical methods to rescale collider limits, we drastically reduce the computing resources needed relative to traditional approaches based on the generation of additional simulated signal samples.
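A drastically simplified version of such a semi-analytical rescaling is shown below, assuming the signal cross section scales as a pure power of the coupling (e.g. (g/g_ref)^4 when production and decay each carry one power of g and width effects are neglected); the real procedure released with the paper treats widths and kinematics properly.
```python
def excluded_coupling(sigma_excluded_pb: float, sigma_ref_pb: float,
                      g_ref: float, power: float = 4.0) -> float:
    """Toy rescaling: given the cross section excluded by a search and the
    model cross section at a reference coupling g_ref, invert
    sigma(g) = sigma_ref * (g / g_ref)**power to find the smallest
    excluded coupling. Width and acceptance effects are ignored."""
    return g_ref * (sigma_excluded_pb / sigma_ref_pb) ** (1.0 / power)

# Example: a limit 4x weaker than the reference cross section excludes g >~ 0.35.
print(excluded_coupling(4.0, 1.0, g_ref=0.25))
```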
DOI: 10.1088/1748-0221/17/04/c04033
2022
The Apollo ATCA design for the CMS track finder and the pixel readout at the HL-LHC
The challenging conditions of the High-Luminosity LHC require tailored hardware designs for the trigger and data acquisition systems. The Apollo platform features a "Service Module" with a powerful system-on-module computer that provides standard ATCA communications, and application-specific "Command Modules" with large FPGAs and high-speed optical fiber links. The CMS version of Apollo will be used for the track finder and the pixel readout. It features up to two large FPGAs and more than 100 optical links with speeds up to 25 Gb/s. We carefully study the design and performance of the board using customized firmware to test power consumption, heat dissipation, and optical link integrity. This paper presents the results of these performance tests, design updates, and future plans.
DOI: 10.22323/1.270.0022
2017
Opportunistic usage of the CMS online cluster using a cloud overlay
After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to the grid when it isn’t used to its full capacity for data acquisition. This occurs during the maintenance periods when the LHC is non-operational, which corresponded to 117 days in 2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data processing by making the cluster accessible during the time between two physics collisions while the LHC and beams are being prepared. This is usually the case for a few hours every day, which would vastly increase the computing power available for data processing. Work has already been undertaken to provide this functionality, as an OpenStack cloud layer has been deployed as a minimal overlay that leaves the primary role of the cluster untouched. This overlay also abstracts the different hardware and networks that the cluster is composed of. The operation of the cloud (starting and stopping the virtual machines) is another challenge that has been overcome as the cluster has only a few hours spare during the aforementioned beam preparation. By improving the virtual image deployment and integrating the OpenStack services with the core services of the Data Acquisition on the CMS Online cluster it is now possible to start a thousand virtual machines within 10 minutes and to turn them off within seconds. This document will explain the architectural choices that were made to reach a fully redundant and scalable cloud, with a minimal impact on the running cluster configuration while giving a maximal segregation between the services. It will also present how to cold start 1000 virtual machines 25 times faster, using tools commonly utilised in all data centres.
2017
Inclusive search for supersymmetry using razor variables in pp collisions at √s = 13 TeV
2017
Observation of Charge-Dependent Azimuthal Correlations in p-Pb Collisions and Its Implication for the Search for the Chiral Magnetic Effect
DOI: 10.22323/1.313.0075
2018
The FEROL40, a microTCA card interfacing custom point-to-point links and standard TCP/IP
In order to accommodate new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the microTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4×10 Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and first operational experience.
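The stream-conversion idea can be illustrated in a few lines of Python; this is a toy only, since the real FEROL40 implements its TCP/IP engine in FPGA firmware and the framing shown here is invented.
```python
import socket
import struct

def stream_fragments(host: str, port: int, fragments):
    """Toy protocol conversion: forward binary event fragments over one TCP
    stream, each prefixed with a 4-byte big-endian length header."""
    with socket.create_connection((host, port)) as sock:
        for frag in fragments:  # frag: bytes
            sock.sendall(struct.pack(">I", len(frag)) + frag)
```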
DOI: 10.22323/1.343.0129
2019
Design and development of the DAQ and Timing Hub for CMS Phase-2
The CMS detector will undergo a major upgrade for Phase-2 of the LHC program, starting around 2026. The upgraded Level-1 hardware trigger will select events at a rate of 750 kHz. At an expected event size of 7.4 MB this corresponds to a data rate of up to 50 Tbit/s. Optical links will carry the signals from on-detector front-end electronics to back-end electronics in ATCA crates in the service cavern. A DAQ and Timing Hub board aggregates data streams from back-end boards over point-to-point links, provides buffering and transmits the data to the commercial data-to-surface network for processing and storage. This hub board is also responsible for the distribution of timing, control and trigger signals to the back-ends. This paper presents the current development towards the DAQ and Timing Hub and the design of the first prototype, to be used for validation and integration with the first back-end prototypes in 2019-2020.
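The headline rate follows directly from the two numbers quoted:
```python
# 750 kHz x 7.4 MB, expressed in Tbit/s.
rate_hz, event_mb = 750e3, 7.4
print(f"{rate_hz * event_mb * 8 / 1e6:.1f} Tbit/s")  # ~44.4, i.e. 'up to 50 Tbit/s' with headroom
```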
DOI: 10.1051/epjconf/201921401044
2019
Presentation layer of CMS Online Monitoring System
The Compact Muon Solenoid (CMS) is one of the experiments at the CERN Large Hadron Collider (LHC). The CMS Online Monitoring system (OMS) is an upgrade and successor to the CMS Web-Based Monitoring (WBM) system, which is an essential tool for shift crew members, detector subsystem experts, operations coordinators, and those performing physics analyses. The CMS OMS is divided into aggregation and presentation layers. Communication between layers uses RESTful JSON:API compliant requests. The aggregation layer is responsible for collecting data from heterogeneous sources, storage of transformed and pre-calculated (aggregated) values and exposure of data via the RESTful API. The presentation layer displays detector information via a modern, user-friendly and customizable web interface. The CMS OMS user interface is composed of a set of cutting-edge software frameworks and tools to display non-event data to any authenticated CMS user worldwide. The web interface tree-like component structure comprises (top-down): workspaces, folders, pages, controllers and portlets. A clear hierarchy gives the required flexibility and control for content organization. Each bottom element instantiates a portlet and is a reusable component that displays a single aspect of data, like a table, a plot, an article, etc. Pages consist of multiple different portlets and can be customized at runtime by using a drag-and-drop technique. This is how a single page can easily include information from multiple online sources. Different pages give access to a summary of the current status of the experiment, as well as convenient access to historical data. This paper describes the CMS OMS architecture, core concepts and technologies of the presentation layer.
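Since the aggregation layer speaks JSON:API, a client request might look like the sketch below; the host, resource name and filter syntax are placeholders, as the real OMS endpoints are CERN-internal.
```python
import requests

# Hypothetical endpoint and filter syntax, illustrating a JSON:API-style request.
resp = requests.get(
    "https://oms.example.cern/api/v1/runs",
    params={"filter[run_number][GE]": "350000", "page[limit]": "10"},
    headers={"Accept": "application/vnd.api+json"},
    timeout=10,
)
resp.raise_for_status()
for item in resp.json()["data"]:      # JSON:API returns resources under "data"
    print(item["id"], item["attributes"])
```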
DOI: 10.1051/epjconf/201921401048
2019
A Scalable Online Monitoring System Based on Elasticsearch for Distributed Data Acquisition in Cms
The part of the CMS Data Acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed applications. To ensure successful data taking, these programs have to be constantly monitored in order to facilitate the timeliness of necessary corrections in case of any deviation from specified behaviour. A large number of diverse monitoring data samples are periodically collected from multiple sources across the network. Monitoring data are kept in memory for online operations and optionally stored on disk for post-mortem analysis. We present a generic, reusable solution based on an open source NoSQL database, Elasticsearch, which is fully compatible and non-intrusive with respect to the existing system. The motivation is to benefit from off-the-shelf software to facilitate the development, maintenance and support efforts. Elasticsearch provides failover and data redundancy capabilities as well as a programming-language-independent JSON-over-HTTP interface. The possibility of horizontal scaling matches the requirements of a DAQ monitoring system. The data load from all sources is balanced by redistribution over an Elasticsearch cluster that can be hosted on a computer cloud. In order to achieve the necessary robustness and to validate the scalability of the approach, the above monitoring solution currently runs in parallel with an existing in-house developed DAQ monitoring system.
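The indexing pattern is the standard one for Elasticsearch; a minimal sketch with the official Python client (v8-style API) might look like this, with the index name and document fields invented for illustration.
```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Toy version of the pattern: one monitoring document per sample,
# pushed into a time-based index.
es = Elasticsearch("http://monitoring-cluster:9200")
doc = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "host": "ru-c2e12-10-01",
    "metric": "eventRate",
    "value": 99500.0,
}
es.index(index="daq-monitoring-2019.06", document=doc)
```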
2014
Performance of e/$\gamma$-based Triggers at the CMS High Level Trigger
The CMS experiment has been designed with a two-level trigger system: the Level 1 (L1) Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS reconstruction and analysis software running on a computer farm. In order to achieve a good rate reduction with as little impact as possible on the physics efficiency, the algorithms used at HLT are designed to follow as closely as possible the ones used in the offline reconstruction. Here, we will present the algorithms used for the online reconstruction of electrons and photons (e/$\gamma$), both at L1 and HLT, their performance, and the planned improvements of these HLT objects.
2014
Shining Light On Dark Matter with the CMS Experiment
We present a search for large extra dimensions and dark matter pair-production using events with a photon and missing transverse energy in pp collisions at $\sqrt{s} =8$ TeV. This search is done with the data taken by the CMS experiment at the LHC corresponding to an integrated luminosity of 19.6 fb$^{-1}$. We find no deviations with respect to the standard model expectation and improve the current limits on several models.
DOI: 10.2172/1182548
2015
Search for new physics in final states with low transverse energy photon and missing transverse energy in proton-proton collisions at $\sqrt{s} = 8$ TeV
A search for new physics in the γ+E_T^miss final state is performed using pp collision data corresponding to an integrated luminosity of 7.3 fb⁻¹ collected at √s = 8 TeV using low-threshold triggers, in a phase space region defined by photon E_T > 45 GeV and E_T^miss > 40 GeV. The data are also examined using optimized selections for maximum sensitivity to an exotic (gravitino/neutralino) decay of the Higgs boson predicted in a low-scale SUSY breaking scenario. The results are found to be compatible with the Standard Model hypothesis. These results are the first limits on this model from collider searches. Furthermore, proton-proton collision events containing a high-energy photon and missing transverse momentum have been investigated. No deviations from the standard model have been observed using the √s = 8 TeV data set corresponding to 19.6 fb⁻¹ of integrated luminosity. Further constraints are set on χ production and translated into upper limits on vector and axial-vector contributions to the χ-nucleon scattering cross section. For M_χ = 10 GeV, the χ-nucleon cross section is constrained to be 2.6 × 10⁻³⁹ cm² (9.6 × 10⁻⁴¹ cm²) for a spin-independent (spin-dependent) interaction at 90% confidence level. In addition the most stringent limits to date are obtained on the effective Planck scale in the ADD model with large spatial extra dimensions and on the brane tension scale in the branon model.
2016
Measurement of differential cross sections for Higgs boson production in the diphoton decay channel in pp collisions at √s = 8 TeV
2016
Measurement of the tt̄ production cross section in the all-jets final state in pp collisions at √s = 8 TeV
DOI: 10.1103/baps.2014.april.u13.2
2014
Search for New Physics in the Photon$+$MET Final State
2016
Measurement of the integrated and differential tt̄ production cross sections for high-pT top quarks in pp collisions at √s = 8 TeV
2015
Online data handling and storage at the CMS experiment
2015
Searches for Dark Matter and Extra Dimensions at the LHC
2016
Study of B meson production in p + Pb collisions at √s_NN = 5.02 TeV using exclusive hadronic decays
2016
Search for Narrow Resonances in Dijet Final States at √s = 8 TeV with the Novel CMS Technique of Data Scouting
2016
Search for Resonant Production of High-Mass Photon Pairs in Proton-Proton Collisions at √s=8 and 13 TeV
2015
Search for a light charged Higgs boson decaying to cs̄ in pp collisions at √s = 8 TeV
DOI: 10.48550/arxiv.1409.4089
2014
Shining Light On Dark Matter with the CMS Experiment
We present a search for large extra dimensions and dark matter pair-production using events with a photon and missing transverse energy in pp collisions at $\sqrt{s} =8$ TeV. This search is done with the data taken by the CMS experiment at the LHC corresponding to an integrated luminosity of 19.6 fb$^{-1}$. We find no deviations with respect to the standard model expectation and improve the current limits on several models.
DOI: 10.48550/arxiv.1409.4077
2014
Performance of e/$\gamma$-based Triggers at the CMS High Level Trigger
The CMS experiment has been designed with a two-level trigger system: the Level 1 (L1) Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS reconstruction and analysis software running on a computer farm. In order to achieve a good rate reduction with as little impact as possible on the physics efficiency, the algorithms used at HLT are designed to follow as closely as possible the ones used in the offline reconstruction. Here, we will present the algorithms used for the online reconstruction of electrons and photons (e/$\gamma$), both at L1 and HLT, their performance, and the planned improvements of these HLT objects.
2017
New operator assistance features in the CMS Run Control System
2017
Search for Physics Beyond the Standard Model in Events with Two Leptons of Same Sign, Missing Transverse Momentum, and Jets in Proton–proton Collisions at √s = 13 TeV
2017
Measurement of the Top Quark Mass in the Dileptonic tt̄ Decay Channel Using the Mass Observables M_bℓ, M_T2, and M_bℓν in pp Collisions at √s = 8 TeV
2017
Combination of searches for heavy resonances decaying to WW, WZ, ZZ, WH, and ZH boson pairs in proton–proton collisions at √s = 8 and 13 TeV
2017
Measurement of charged pion, kaon, and proton production in proton-proton collisions at √s = 13 TeV
2017
Search for Dijet Resonances in Proton–proton Collisions at √s = 13 TeV and Constraints on Dark Matter and Other Models
2017
Search for high-mass diphoton resonances in proton–proton collisions at 13 TeV and combination with 8 TeV search
2017
Search for Single Production of Vector-Like Quarks Decaying into a b Quark and a W Boson in Proton–proton Collisions at √s = 13 TeV
2017
Search for supersymmetry in multijet events with missing transverse momentum in proton-proton collisions at 13 TeV
2017
Measurement of the Cross Section for Electroweak Production of Zγ in Association with Two Jets and Constraints on Anomalous Quartic Gauge Couplings in Proton–proton Collisions At √s = 8 TeV
2017
Mechanical stability of the CMS strip tracker measured with a laser alignment system
DOI: 10.1088/1742-6596/898/3/032028
2017
New operator assistance features in the CMS Run Control System
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
2017
Study of Jet Quenching with Z + jet Correlations in Pb-Pb and pp Collisions at √s_NN = 5.02 TeV
2017
Search for Evidence of the Type-III Seesaw Mechanism in Multilepton Final States in Proton-Proton Collisions at √s = 13 TeV
2017
Measurements of differential cross sections for associated production of a W boson and jets in proton-proton collisions at √s=8 TeV
2017
Search for Charged Higgs Bosons Produced via Vector Boson Fusion and Decaying into a Pair of W and Z Bosons Using pp Collisions at √s = 13 TeV
2017
Azimuthal anisotropy of charged particles with transverse momentum up to 100 GeV/c in PbPb collisions at √s_NN = 5.02 TeV
2017
Search for heavy gauge W′ bosons in events with an energetic lepton and large missing transverse momentum at √s = 13 TeV
2017
Measurement of the B± Meson Nuclear Modification Factor in Pb-Pb Collisions at √s_NN = 5.02 TeV
DOI: 10.48550/arxiv.2203.08322
2022
DarkQuest: A dark sector upgrade to SpinQuest at the 120 GeV Fermilab Main Injector
Expanding the mass range and techniques by which we search for dark matter is an important part of the worldwide particle physics program. Accelerator-based searches for dark matter and dark sector particles are a uniquely compelling part of this program as a way to both create and detect dark matter in the laboratory and explore the dark sector by searching for mediators and excited dark matter particles. This paper focuses on developing the DarkQuest experimental concept and gives an outlook on related enhancements collectively referred to as LongQuest. DarkQuest is a proton fixed-target experiment with leading sensitivity to an array of visible dark sector signatures in the MeV-GeV mass range. Because it builds off of existing accelerator and detector infrastructure, it offers a powerful but modest-cost experimental initiative that can be realized on a short timescale.
DOI: 10.48550/arxiv.2209.13128
2022
Report of the Topical Group on Physics Beyond the Standard Model at Energy Frontier for Snowmass 2021
This is the Snowmass2021 Energy Frontier (EF) Beyond the Standard Model (BSM) report. It combines the EF topical group reports of EF08 (Model-specific explorations), EF09 (More general explorations), and EF10 (Dark Matter at Colliders). The report includes a general introduction to BSM motivations and the comparative prospects for proposed future experiments for a broad range of potential BSM models and signatures, including compositeness, SUSY, leptoquarks, more general new bosons and fermions, long-lived particles, dark matter, charged-lepton flavor violation, and anomaly detection.
DOI: 10.1088/1742-6596/1085/3/032021
2018
DAQExpert - An expert system to increase CMS data-taking efficiency
The efficiency of the Data Acquisition (DAQ) of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor affecting the data-taking efficiency is the experience of the DAQ operator. One of the main responsibilities of the DAQ operator is to carry out the proper recovery procedure in case of failure of data-taking. At the start of Run 2, understanding the problem and finding the right remedy could take a considerable amount of time (up to many minutes). Operators heavily relied on the support of on-call experts, also outside working hours. Wrong decisions due to time pressure sometimes led to additional overhead in recovery time. To increase the efficiency of CMS data-taking we developed a new expert system, the DAQExpert, which provides shifters with optimal recovery suggestions instantly when a failure occurs. DAQExpert is a web application analyzing frequently updating monitoring data from all DAQ components and identifying problems based on expert knowledge expressed in small, independent logic modules written in Java. Its results are presented in real time in the control room via a web-based GUI and a sound system, in the form of a short description of the current failure and steps to recover.
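Conceptually the analysis step reduces to "run every logic module over the latest snapshot and report whatever fires"; a toy rendering follows (invented schema; the production modules are Java).
```python
def l1_rate_zero(snapshot: dict) -> str | None:
    """Toy logic module: fires when triggers stop arriving."""
    if snapshot.get("l1_rate_hz", 0) == 0:
        return "Level-1 rate is zero: check the trigger and TCDS state"
    return None

def analyze(snapshot: dict, modules) -> list[str]:
    """Run all registered logic modules over one monitoring snapshot and
    collect the advice of those that fire."""
    return [msg for mod in modules if (msg := mod(snapshot)) is not None]

print(analyze({"l1_rate_hz": 0}, [l1_rate_zero]))
```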
DOI: 10.22323/1.313.0123
2018
CMS DAQ Current and Future Hardware Upgrades up to Post Long Shutdown 3 (LS3) Times
Following the first LHC collisions seen and recorded by CMS in 2009, the DAQ hardware went through a major upgrade during LS1 (2013-2014), and new detectors have been connected during the 2015-2016 and 2016-2017 winter shutdowns. Now, LS2 (2019-2020) and LS3 (2024-mid 2026) are actively being prepared. This paper shows how the CMS DAQ hardware has evolved from the beginning and will continue to evolve in order to meet the future challenges posed by the High Luminosity LHC (HL-LHC) and the evolution of the CMS detector. In particular, post-LS3 DAQ architectures are focused upon.
DOI: 10.48550/arxiv.1806.08975
2018
The CMS Data Acquisition System for the Phase-2 Upgrade
During the third long shutdown of the CERN Large Hadron Collider, the CMS Detector will undergo a major upgrade to prepare for Phase-2 of the CMS physics program, starting around 2026. The upgraded CMS detector will be read out at an unprecedented data rate of up to 50 Tb/s with an event rate of 750 kHz, selected by the level-1 hardware trigger, and an average event size of 7.4 MB. Complete events will be analyzed by the High-Level Trigger (HLT) using software algorithms running on standard processing nodes, potentially augmented with hardware accelerators. Selected events will be stored permanently at a rate of up to 7.5 kHz for offline processing and analysis. This paper presents the baseline design of the DAQ and HLT systems for Phase-2, taking into account the projected evolution of high speed network fabrics for event building and distribution, and the anticipated performance of general purpose CPU. In addition, some opportunities offered by reading out and processing parts of the detector data at the full LHC bunch crossing rate (40 MHz) are discussed.
DOI: 10.1051/epjconf/201921401006
2019
The CMS Event-Builder System for LHC Run 3 (2021-23)
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events of 2 MB at a rate of 100 kHz. The event builder collects event fragments from about 750 sources and assembles them into complete events which are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. The aging event-building hardware will be replaced during the long shutdown 2 of the LHC taking place in 2019/20. The future data networks will be based on 100 Gb/s interconnects using Ethernet and Infiniband technologies. More powerful computers may allow the currently separate functionality of the readout and builder units to be combined into a single I/O processor handling 100 Gb/s of input and output traffic simultaneously. It might be beneficial to preprocess data originating from specific detector parts or regions before handing it to the generic HLT processors. Therefore, we will investigate how specialized coprocessors, e.g. GPUs, could be integrated into the event builder. We will present the envisioned changes to the event builder compared to today's system. Initial measurements of the performance of the data networks under the event-building traffic pattern will be shown. Implications of a folded network architecture for the event building and corresponding changes to the software implementation will be discussed.
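Stripped of all I/O and hardware detail, the event-building step itself reduces to grouping fragments by event number until every source has reported; the generator below is a minimal sketch of that logic, with an invented tuple schema.
```python
from collections import defaultdict

def build_events(fragment_stream, n_sources: int):
    """Minimal event-building logic: buffer fragments keyed by event number
    and emit an event once one fragment from every source has arrived.
    fragment_stream yields (event_id, source_id, payload) tuples."""
    pending = defaultdict(dict)
    for event_id, source_id, payload in fragment_stream:
        pending[event_id][source_id] = payload
        if len(pending[event_id]) == n_sources:
            fragments = pending.pop(event_id)
            # concatenate fragments in source order to form the full event
            yield event_id, b"".join(fragments[s] for s in sorted(fragments))
```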
2018
Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state with two b quarks and two τ leptons in proton–proton collisions at √s = 13 TeV
2018
Observation of the χ_b1(3P) and χ_b2(3P) and Measurement of their Masses
DOI: 10.18154/rwth-2018-224144
2018
Measurement of normalized differential tt̄ cross sections in the dilepton channel from pp collisions at √s = 13 TeV
2018
Elliptic Flow of Charm and Strange Hadrons in High-Multiplicity
2018
Search for a heavy resonance decaying into a Z boson and a vector boson in the νν̄ qq̄ final state
DOI: 10.18429/jacow-pcapac2018-wep17
2019
Extending the Remote Control Capabilities in the CMS Detector Control System with Remote Procedure Call Services
The CMS Detector Control System (DCS) is implemented as a large distributed and redundant system, with applications interacting and sharing data in multiple ways. The CMS XML-RPC is a software toolkit implementing the standard Remote Procedure Call (RPC) protocol, using the Extensible Mark-up Language (XML) and a custom lightweight variant using the JavaScript Object Notation (JSON) to model, encode and expose resources through the Hypertext Transfer Protocol (HTTP). The CMS XML-RPC toolkit complies with the standard specification of the XML-RPC protocol that allows system developers to build collaborative software architectures with self-contained and reusable logic, and with encapsulation of well-defined processes. The implementation of this protocol introduces not only a powerful communication method to operate and exchange data with web-based applications, but also a new programming paradigm to design service-oriented software architectures within the CMS DCS domain. This paper presents details of the CMS XML-RPC implementation in WinCC Open Architecture (OA) Control Language using an object-oriented approach.
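On the client side, any standard XML-RPC stack can talk to such services; with Python's standard library the call pattern looks like this (host and method name are hypothetical, since the real services live inside the CMS network).
```python
import xmlrpc.client

# Hypothetical endpoint and method name; illustrates the client side of a
# standard XML-RPC call as the toolkit's protocol intends.
proxy = xmlrpc.client.ServerProxy("http://dcs-host.example:8080/RPC2")
state = proxy.getDeviceState("CMS_TRACKER/HV/channel042")
print(state)
```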
2018
Search for massive resonances decaying into
2018
Inclusive Search for a Highly Boosted Higgs Boson Decaying to a Bottom Quark-Antiquark Pair
2018
Search for massive resonances decaying into WW, WZ, ZZ, qW, and qZ with dijet final states at √s = 13 TeV
2018
Search for Leptoquarks Coupled to Third-Generation Quarks in Proton-Proton Collisions at √s = 13 TeV
2018
Search for Supersymmetry in Events with One Lepton and Multiple Jets Exploiting the Angular Correlation Between the Lepton and the Missing Transverse Momentum in Proton–proton Collisions at √s = 13 TeV
2018
Evidence for the Higgs boson decay to a bottom quark–antiquark pair
2018
Search for Gauge-Mediated Supersymmetry in Events with at Least One Photon and Missing Transverse Momentum in pp Collisions at √s = 13 TeV
2018
Charged-Particle Nuclear Modification Factors in XeXe Collisions at √s_NN = 5.44 TeV
2019
Search for dark matter in events with a leptoquark and missing transverse momentum in proton-proton collisions at 13 TeV
2020
Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state with two muons and two b quarks in pp collisions at 13 TeV
2021
The Apollo ATCA Design for the CMS Track Finder and the Pixel Readout at the HL-LHC
The challenging conditions of the High-Luminosity LHC require tailored hardware designs for the trigger and data acquisition systems. The Apollo platform features a "Service Module" with a powerful system-on-module computer that provides standard ATCA communications, and application-specific "Command Modules" with large FPGAs and high-speed optical fiber links. The CMS version of Apollo will be used for the track finder and the pixel readout. It features up to two large FPGAs and more than 100 optical links with speeds up to 25 Gb/s. We carefully study the design and performance of the board using customized firmware to test power consumption, heat dissipation, and optical link integrity. This paper presents the results of these performance tests, design updates, and future plans.