
Markus Klute

DOI: 10.1007/jhep01(2014)164
2014
Cited 294 times
First look at the physics case of TLEP
The discovery by the ATLAS and CMS experiments of a new boson with mass around 125 GeV and with measured properties compatible with those of a Standard-Model Higgs boson, coupled with the absence of discoveries of phenomena beyond the Standard Model at the TeV scale, has triggered interest in ideas for future Higgs factories. A new circular e+e− collider hosted in an 80 to 100 km tunnel, TLEP, is among the most attractive solutions proposed so far. It has a clean experimental environment, produces high luminosity for top-quark, Higgs boson, W and Z studies, accommodates multiple detectors, and can reach energies up to the $$ \mathrm{t}\overline{\mathrm{t}} $$ threshold and beyond. It will enable measurements of the Higgs boson properties and of Electroweak Symmetry-Breaking (EWSB) parameters with unequalled precision, offering exploration of physics beyond the Standard Model in the multi-TeV range. Moreover, being the natural precursor of the VHE-LHC, a 100 TeV hadron machine in the same tunnel, it builds up a long-term vision for particle physics. Altogether, the combination of TLEP and the VHE-LHC offers, for a great cost effectiveness, the best precision and the best search reach of all options presently on the market. This paper presents a first appraisal of the salient features of the TLEP physics potential, to serve as a baseline for a more extensive design study.
DOI: 10.1103/physrevlett.109.101801
2012
Cited 131 times
Measuring Higgs Couplings from LHC Data
Following recent ATLAS and CMS publications we interpret the results of their Higgs searches in terms of Standard Model operators. For a Higgs mass of 125 GeV we determine several Higgs couplings from 2011 data and extrapolate the results towards different scenarios of LHC running. Even though our analysis is limited by low statistics we already derive meaningful constraints on modified Higgs sectors.
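For orientation, the kind of relation such coupling fits rely on can be written schematically in the now-standard κ (coupling-modifier) notation; this is an illustrative form, not the exact operator parametrization used in the paper. For a production mode p and decay channel d, the measured signal strength scales as
$$ \mu_{p,d} = \frac{\sigma_p \cdot \mathrm{BR}_d}{\sigma_p^{\mathrm{SM}} \cdot \mathrm{BR}_d^{\mathrm{SM}}} \simeq \frac{\kappa_p^2\,\kappa_d^2}{\sum_i \kappa_i^2\,\mathrm{BR}_i^{\mathrm{SM}}}, $$
where the denominator accounts for the rescaled total width (assuming no non-SM decay modes), so each measured rate constrains a product of coupling modifiers.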
2015
Cited 88 times
Technical Proposal for the Phase-II Upgrade of the CMS Detector
This Technical Proposal presents the upgrades foreseen to prepare the CMS experiment for the High Luminosity LHC. In this second phase of the LHC physics program, the accelerator will provide to CMS an additional integrated luminosity of about 2500 fb-1 over 10 years of operation, starting in 2025. This will substantially enlarge the mass reach in the search for new particles and will also greatly extend the potential to study the properties of the Higgs boson discovered at the LHC in 2012. In order to meet the experimental challenges of unprecedented p-p luminosity, the CMS collaboration will need to address the aging of the present detector and to improve the ability of the apparatus to isolate and precisely measure the products of the most interesting collisions. This document describes the conceptual designs and the expected performance of the upgrades, along with the plans to develop the appropriate experimental techniques. The infrastructure upgrades and the logistics of the installation in the experimental area are also discussed. Finally, the initial cost estimates of the upgrades are presented.
DOI: 10.1140/epjcd/s2003-01-010-8
2003
Cited 133 times
Prospects for the search for a standard model Higgs boson in ATLAS using vector boson fusion
The potential for the discovery of a Standard Model Higgs boson in the mass range m_H < 2 m_Z in the vector boson fusion mode has been studied for the ATLAS experiment at the LHC. The characteristic signatures of additional jets in the forward regions of the detector and of low jet activity in the central region allow for an efficient background rejection. Analyses for the H -> WW and H -> tau tau decay modes have been performed using a realistic simulation of the expected detector performance. The results obtained demonstrate the large discovery potential in the H -> WW decay channel and the sensitivity to Higgs boson decays into tau-pairs in the low-mass region around 120 GeV.
DOI: 10.1209/0295-5075/101/51001
2013
Cited 67 times
Measuring Higgs couplings at a linear collider
Higgs couplings can be measured at a linear collider with high precision. We estimate the uncertainties of such measurements, including theoretical errors. Based on these results we show an extrapolation for a combined analysis at a linear collider and a high-luminosity LHC.
DOI: 10.48550/arxiv.1310.8361
2013
Cited 54 times
Higgs Working Group Report of the Snowmass 2013 Community Planning Study
This report summarizes the work of the Energy Frontier Higgs Boson working group of the 2013 Community Summer Study (Snowmass). We identify the key elements of a precision Higgs physics program and document the physics potential of future experimental facilities as elucidated during the Snowmass study. We study Higgs couplings to gauge boson and fermion pairs, double Higgs production for the Higgs self-coupling, its quantum numbers and $CP$-mixing in Higgs couplings, the Higgs mass and total width, and prospects for direct searches for additional Higgs bosons in extensions of the Standard Model. Our report includes projections of measurement capabilities from detailed studies of the Compact Linear Collider (CLIC), a Gamma-Gamma Collider, the International Linear Collider (ILC), the Large Hadron Collider High-Luminosity Upgrade (HL-LHC), Very Large Hadron Colliders up to 100 TeV (VLHC), a Muon Collider, and a Triple-Large Electron Positron Collider (TLEP).
DOI: 10.1016/j.nima.2014.07.010
2014
Cited 32 times
Future hadron colliders: From physics perspectives to technology R&D
High energy hadron colliders have been instrumental to discoveries in particle physics at the energy frontier and their role as discovery machines will remain unchallenged for the foreseeable future. The full exploitation of the LHC is now the highest priority of the energy frontier collider program. This includes the high luminosity LHC project which is made possible by a successful technology-readiness program for Nb3Sn superconductor and magnet engineering based on long-term high-field magnet R&D programs. These programs open the path towards collisions with luminosity of 5×10^34 cm^-2 s^-1 and represent the foundation to consider future proton colliders of higher energies. This paper discusses physics requirements, experimental conditions, technological aspects and design challenges for the development towards proton colliders of increasing energy and luminosity.
DOI: 10.1016/j.revip.2018.11.001
2018
Cited 24 times
Vector boson scattering: Recent experimental and theory developments
This document summarises the talks and discussions that took place during the VBSCan Split17 workshop, the first general meeting of the VBSCan COST Action network. This collaboration is aiming at a consistent and coordinated study of vector-boson scattering from the phenomenological and experimental point of view, for the best exploitation of the data that will be delivered by existing and future particle colliders.
DOI: 10.21468/scipostphysproc.12.016
2023
Cited 3 times
DELight: A Direct search Experiment for Light dark matter with superfluid helium
To reach ultra-low detection thresholds necessary to probe unprecedentedly low Dark Matter masses, target material alternatives and novel detector designs are essential. One such target material is superfluid $^4$He, which has the potential to probe so far uncharted light Dark Matter parameter space at sub-GeV masses. The new “Direct search Experiment for Light dark matter”, DELight, will be using superfluid helium as an active target, instrumented with magnetic micro-calorimeters. It is being designed to reach sensitivity to masses well below 100 MeV in Dark Matter-nucleus scattering interactions.
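A short worked relation, added here for context, shows why eV-scale thresholds are needed at sub-GeV masses: for elastic dark matter–nucleus scattering the maximum recoil energy is
$$ E_R^{\max} = \frac{2\,\mu_{\chi N}^2\, v^2}{m_N}, \qquad \mu_{\chi N} = \frac{m_\chi m_N}{m_\chi + m_N}, $$
and for $m_\chi \approx 100$ MeV scattering on a $^4$He nucleus this is at most a few tens of eV for velocities up to the galactic escape velocity, which is why eV-scale detection thresholds are required.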
2013
Cited 22 times
Working Group Report: Higgs Boson
This report summarizes the work of the Energy Frontier Higgs Boson working group of the 2013 Community Summer Study (Snowmass). We identify the key elements of a precision Higgs physics program and document the physics potential of future experimental facilities as elucidated during the Snowmass study. We study Higgs couplings to gauge boson and fermion pairs, double Higgs production for the Higgs self-coupling, its quantum numbers and $CP$-mixing in Higgs couplings, the Higgs mass and total width, and prospects for direct searches for additional Higgs bosons in extensions of the Standard Model. Our report includes projections of measurement capabilities from detailed studies of the Compact Linear Collider (CLIC), a Gamma-Gamma Collider, the International Linear Collider (ILC), the Large Hadron Collider High-Luminosity Upgrade (HL-LHC), Very Large Hadron Colliders up to 100 TeV (VLHC), a Muon Collider, and a Triple-Large Electron Positron Collider (TLEP).
DOI: 10.48550/arxiv.2401.07564
2024
Focus topics for the ECFA study on Higgs / Top / EW factories
In order to stimulate new engagement and trigger some concrete studies in areas where further work would be beneficial towards fully understanding the physics potential of an $e^+e^-$ Higgs / Top / Electroweak factory, we propose to define a set of focus topics. The general reasoning and the proposed topics are described in this document.
DOI: 10.1140/epjc/s10052-024-12418-0
2024
Prospects for $$B_c^+$$ and $$B^+\rightarrow \tau ^+ \nu _\tau $$ at FCC-ee
The prospects are presented for precise measurements of the branching ratios of the purely leptonic $$B_c^+ \rightarrow \tau^+ \nu_\tau$$ and $$B^+ \rightarrow \tau^+ \nu_\tau$$ decays at the Future Circular Collider (FCC). This work is focused on the hadronic $$\tau^+ \rightarrow \pi^+ \pi^+ \pi^- \bar{\nu}_\tau$$ decay in both $$B_c^+ \rightarrow \tau^+ \nu_\tau$$ and $$B^+ \rightarrow \tau^+ \nu_\tau$$ processes. Events are selected with two Boosted Decision Tree algorithms to optimise the separation between the two signal processes as well as the generic hadronic Z decay backgrounds. The range of the expected precision for both signals is evaluated in different scenarios of non-ideal background modelling.
This paper demonstrates, for the first time, that the $$B^+ \rightarrow \tau^+ \nu_\tau$$ decay can be well separated from both $$B_c^+ \rightarrow \tau^+ \nu_\tau$$ and generic $$Z\rightarrow b\bar{b}$$ processes in the FCC-ee collision environment and proposes the corresponding branching ratio measurement as a novel way to determine the CKM matrix element $$|V_{ub}|$$. The theoretical impacts of both $$B^+ \rightarrow \tau^+ \nu_\tau$$ and $$B_c^+ \rightarrow \tau^+ \nu_\tau$$ measurements on New Physics cases are discussed for interpretations in the generic Two-Higgs-doublet model and leptoquark models.
DOI: 10.1103/physrevd.109.043035
2024
Optimum filter-based analysis for the characterization of a high-resolution magnetic microcalorimeter
DOI: 10.1088/2632-2153/ad43b1
2024
Distilling particle knowledge for fast reconstruction at high-energy physics experiments
Knowledge distillation is a form of model compression that allows artificial neural networks of different sizes to learn from one another. Its main application is the compactification of large deep neural networks to free up computational resources, in particular on edge devices. In this article, we consider proton-proton collisions at the High-Luminosity LHC (HL-LHC) and demonstrate a successful knowledge transfer from an event-level graph neural network (GNN) to a particle-level small deep neural network (DNN). Our algorithm, DistillNet, is a DNN that is trained to learn about the provenance of particles, as provided by the soft labels that are the GNN outputs, to predict whether or not a particle originates from the primary interaction vertex. The results indicate that for this problem, which is one of the main challenges at the HL-LHC, there is minimal loss during the transfer of knowledge to the small student network, while improving significantly the computational resource needs compared to the teacher. This is demonstrated for the distilled student network on a CPU, as well as for a quantized and pruned student network deployed on an FPGA. Our study proves that knowledge transfer between networks of different complexity can be used for fast artificial intelligence (AI) in high-energy physics that improves the expressiveness of observables over non-AI-based reconstruction algorithms. Such an approach can become essential at the HL-LHC experiments, e.g., to comply with the resource budget of their trigger stages.
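As an illustration of the training objective described above, the following minimal sketch regresses a small per-particle network onto the teacher GNN's soft labels. All names (StudentDNN, distillation_step), layer sizes and the random stand-in data are hypothetical; this is not the published DistillNet implementation.

# Minimal soft-label distillation sketch (assumed setup, not the paper's code).
import torch
import torch.nn as nn

class StudentDNN(nn.Module):
    """Small per-particle MLP predicting the probability that a particle
    originates from the primary interaction vertex."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),  # one logit per particle
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def distillation_step(student, optimizer, per_particle_features, gnn_soft_labels):
    # Binary cross-entropy against the teacher's soft labels (values in [0, 1])
    # transfers the event-level GNN knowledge to the particle-level student.
    optimizer.zero_grad()
    logits = student(per_particle_features)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, gnn_soft_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in data: 1024 particles, 16 features each.
student = StudentDNN(n_features=16)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
features = torch.randn(1024, 16)
soft_labels = torch.rand(1024)   # stands in for GNN outputs
distillation_step(student, opt, features, soft_labels)

After training, the compact student can be quantized and pruned for FPGA deployment, which is the resource gain the abstract refers to.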
DOI: 10.1088/1748-0221/15/01/p01009
2020
Cited 11 times
Opportunities and challenges of Standard Model production cross section measurements in proton-proton collisions at √s = 8 TeV using CMS Open Data
The CMS Open Data project offers new opportunities to measure cross sections of standard model (SM) processes which have not been probed so far. We evaluate the challenges and the opportunities of the CMS Open Data project in view of cross section measurements. In particular, we reevaluate the SM cross sections of the production of W bosons, Z bosons, top-quark pairs and WZ dibosons in several decay channels at a center of mass energy of 8 TeV with an integrated luminosity of 1.8 fb−1. These cross sections were previously measured by the ATLAS and CMS Collaborations and are used to validate our analysis and calibration strategy. The results indicate the achievable level of precision for future measurements using the CMS Open Data performed by scientists who are not members of the LHC Collaborations and hence lack detailed knowledge of experimental and detector related effects and their handling.
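For reference, such measurements rest on the generic cross-section relation (a standard form, not specific to this analysis)
$$ \sigma = \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{A \cdot \varepsilon \cdot L_{\mathrm{int}}}, $$
where $N_{\mathrm{obs}}$ and $N_{\mathrm{bkg}}$ are the observed and expected background event counts, $A$ the acceptance, $\varepsilon$ the selection efficiency and $L_{\mathrm{int}}$ the integrated luminosity.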
DOI: 10.1088/1742-6596/396/4/042018
2012
Cited 13 times
A new era for central processing and production in CMS
The goal for CMS computing is to maximise the throughput of simulated event generation while also processing event data generated by the detector as quickly and reliably as possible. To maintain this achievement as the quantity of events increases, CMS computing has migrated at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework offers improved processing efficiency and increased resource usage as well as a reduction in operational manpower.
2002
Cited 22 times
The Higgs Working Group: Summary Report
DOI: 10.1109/tns.2007.914036
2008
Cited 14 times
CMS DAQ Event Builder Based on Gigabit Ethernet
The CMS data acquisition system is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called front-end driver (FED) builders. These will be based on Myrinet technology and will pre-assemble groups of about eight data sources. The second stage will be a set of event builders called readout builders. These will perform the building of full events. A single readout builder will build events from about 60 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper, we present the design of a readout builder based on TCP/IP over Gigabit Ethernet and the refinement that was required to achieve the design throughput. This refinement includes architecture of the readout builder, the setup of TCP/IP, and hardware selection.
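The aggregate requirement follows from simple arithmetic on the figures quoted above: each readout-builder slice handles
$$ 60 \times 16\,\mathrm{kB} \times 12.5\,\mathrm{kHz} = 12\,\mathrm{GB/s}, $$
so roughly $100\,\mathrm{kHz}/12.5\,\mathrm{kHz} = 8$ such slices are needed to sustain the full trigger rate, corresponding to an event-building throughput of order 100 GB/s.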
DOI: 10.2172/1419183
2012
Cited 11 times
LEP3: A High Luminosity $e^+e^-$ Collider to Study the Higgs Boson
A strong candidate for the Standard Model Scalar boson, H(126), has been discovered by the Large Hadron Collider (LHC) experiments. In order to study this fundamental particle with unprecedented precision, and to perform precision tests of the closure of the Standard Model, we investigate the possibilities offered by an e+e- storage ring collider. We use a design inspired by the B-factories, taking into account the performance achieved at LEP2, and imposing a synchrotron radiation power limit of 100 MW. At the most relevant centre-of-mass energy of 240 GeV, near-constant luminosities of 10^34 cm^{-2}s^{-1} are possible in up to four collision points for a ring of 27km circumference. The achievable luminosity increases with the bending radius, and for 80km circumference, a luminosity of 5×10^34 cm^{-2}s^{-1} in four collision points appears feasible. Beamstrahlung becomes relevant at these high luminosities, leading to a design requirement of large momentum acceptance both in the accelerating system and in the optics. The larger machine could reach the top quark threshold, would yield luminosities per interaction point of 10^36 cm^{-2}s^{-1} at the Z pole (91 GeV) and 2×10^35 cm^{-2}s^{-1} at the W pair production threshold (80 GeV per beam). The energy spread is reduced in the larger ring with respect to what it was at LEP, giving confidence that beam polarization for energy calibration purposes should be available up to the W pair threshold. The capabilities in terms of physics performance are outlined.
DOI: 10.1088/1742-6596/513/3/032040
2014
Cited 7 times
CMS computing operations during run 1
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.
2015
Cited 7 times
CMS Phase II Upgrade Scope Document
2019
Cited 7 times
FCC-ee: Your Questions Answered
This document answers in simple terms many FAQs about FCC-ee, including comparisons with other colliders. It complements the FCC-ee CDR and the FCC Physics CDR by addressing many questions from non-experts and clarifying issues raised during the European Strategy symposium in Granada, with a view to informing discussions in the period between now and the final endorsement by the CERN Council in 2020 of the European Strategy Group recommendations. This document will be regularly updated as more questions appear or new information becomes available.
DOI: 10.1146/annurev-nucl-102115-044812
2016
Cited 6 times
Physics Goals and Experimental Challenges of the Proton–Proton High-Luminosity Operation of the LHC
The completion of Run 1 of the Large Hadron Collider (LHC) at CERN has seen the discovery of the Higgs boson and an unprecedented number of precise measurements of the Standard Model, and Run 2 has begun to provide the first data at higher energy. The high-luminosity upgrade of the LHC (HL-LHC) and the four experiments (ATLAS, CMS, ALICE, and LHCb) will exploit the full potential of the collider to discover and explore new physics beyond the Standard Model. We review the experimental challenges and the physics opportunities in proton–proton collisions at the HL-LHC.
DOI: 10.1088/1742-6596/119/2/022010
2008
Cited 9 times
The run control system of the CMS experiment
The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control and monitor the experiment during data-taking the Run Control system was developed. This paper describes the architecture and the technology used to implement the Run Control system, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.
DOI: 10.1109/tns.2007.911884
2008
Cited 8 times
The Terabit/s Super-Fragment Builder and Trigger Throttling System for the Compact Muon Solenoid Experiment at CERN
The Data Acquisition System of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kB from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event-building is performed by the Super-Fragment Builder employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. Back-pressure from the down-stream event-processing or variations in the size and rate of events may give rise to buffer overflows in the subdetector's front-end electronics, which would result in data corruption and would require a time-consuming re-sync procedure to recover. The Trigger-Throttling System protects against these buffer overflows. It provides fast feedback from any of the subdetector front-ends to the trigger so that the trigger can be throttled before buffers overflow. This paper reports on new performance measurements and on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major subdetectors. The on-going commissioning of the full-scale system is discussed.
DOI: 10.1088/1742-6596/219/2/022038
2010
Cited 7 times
The CMS event builder and storage system
The CMS event builder assembles events accepted by the first level trigger and makes them available to the high-level trigger. The event builder needs to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. This paper presents the chosen hardware and software architecture. The system consists of 2 stages: an initial pre-assembly reducing the number of fragments by one order of magnitude and a final assembly by several independent readout builder (RU-builder) slices. The RU-builder is based on 3 separate services: the buffering of event fragments during the assembly, the event assembly, and the data flow manager. A further component is responsible for handling events accepted by the high-level trigger: the storage manager (SM) temporarily stores the events on disk at a peak rate of 2 GB/s until they are permanently archived offline. In addition, events and data-quality histograms are served by the SM to online monitoring clients. We discuss the operational experience from the first months of reading out cosmic ray data with the complete CMS detector.
DOI: 10.48550/arxiv.1208.0504
2012
Cited 6 times
A High Luminosity e+e- Collider to study the Higgs Boson
A strong candidate for the Standard Model Scalar boson, H(126), has been discovered by the Large Hadron Collider (LHC) experiments. In order to study this fundamental particle with unprecedented precision, and to perform precision tests of the closure of the Standard Model, we investigate the possibilities offered by an e+e- storage ring collider. We use a design inspired by the B-factories, taking into account the performance achieved at LEP2, and imposing a synchrotron radiation power limit of 100 MW. At the most relevant centre-of-mass energy of 240 GeV, near-constant luminosities of 10^34 cm^{-2}s^{-1} are possible in up to four collision points for a ring of 27km circumference. The achievable luminosity increases with the bending radius, and for 80km circumference, a luminosity of 5×10^34 cm^{-2}s^{-1} in four collision points appears feasible. Beamstrahlung becomes relevant at these high luminosities, leading to a design requirement of large momentum acceptance both in the accelerating system and in the optics. The larger machine could reach the top quark threshold, would yield luminosities per interaction point of 10^36 cm^{-2}s^{-1} at the Z pole (91 GeV) and 2×10^35 cm^{-2}s^{-1} at the W pair production threshold (80 GeV per beam). The energy spread is reduced in the larger ring with respect to what it was at LEP, giving confidence that beam polarization for energy calibration purposes should be available up to the W pair threshold. The capabilities in terms of physics performance are outlined.
DOI: 10.48550/arxiv.1208.1662
2012
Cited 5 times
Prospective Studies for LEP3 with the CMS Detector
On July 4, 2012, the discovery of a new boson, with mass around 125 GeV/c² and with properties compatible with those of a standard-model Higgs boson, was announced at CERN. In this context, a high-luminosity electron-positron collider ring, operating in the LHC tunnel at a centre-of-mass energy of 240 GeV and called LEP3, becomes an attractive opportunity from both a financial and a scientific point of view. The performance and the suitability of the CMS detector are evaluated, with emphasis on an accurate measurement of the Higgs boson properties. The precision expected for the Higgs boson couplings is found to be significantly better than that predicted by Linear Collider studies.
DOI: 10.1109/tns.2007.910980
2008
Cited 5 times
The CMS High Level Trigger System
The CMS data acquisition (DAQ) system relies on a purely software driven high level trigger (HLT) to reduce the full Level 1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the filter farm and the procedures to validate the filtering code within the DAQ environment are described.
DOI: 10.1088/1748-0221/12/03/p03018
2017
Cited 4 times
Beam imaging and luminosity calibration
We discuss a method to reconstruct two-dimensional proton bunch densities using vertex distributions accumulated during LHC beam-beam scans. The $x$-$y$ correlations in the beam shapes are studied and an alternative luminosity calibration technique is introduced. We demonstrate the method on simulated beam-beam scans and estimate the uncertainty on the luminosity calibration associated with the beam-shape reconstruction to be below 1%.
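For context, the quantity being calibrated in such beam-beam scans is, schematically,
$$ \mathcal{L} = f_{\mathrm{rev}}\, n_b\, N_1 N_2 \int \rho_1(x,y)\,\rho_2(x,y)\,\mathrm{d}x\,\mathrm{d}y, $$
where $\rho_{1,2}$ are the normalised transverse bunch densities; reconstructing the densities from vertex distributions therefore constrains the overlap integral, including any $x$-$y$ correlations that a factorised Gaussian ansatz would miss.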
2012
Cited 4 times
Performance of CMS muon reconstruction in pp collision events at √s = 7 TeV
DOI: 10.48550/arxiv.hep-ph/0203056
2002
Cited 8 times
The Higgs Working Group: Summary Report (2001)
Report of the Higgs working group for the Workshop `Physics at TeV Colliders', Les Houches, France, 21 May - 1 June 2001. It contains 7 separate sections: A. Theoretical Developments; B. Higgs Searches at the Tevatron; C. Experimental Observation of an invisible Higgs Boson at LHC; D. Search for the Standard Model Higgs Boson using Vector Boson Fusion at the LHC; E. Study of the MSSM channel $A/H \to \tau\tau$ at the LHC; F. Searching for Higgs Bosons in $t\bar t H$ Production; G. Studies of Charged Higgs Boson Signals for the Tevatron and the LHC
DOI: 10.1088/1742-6596/119/2/022011
2008
Cited 4 times
High level trigger configuration and handling of trigger tables in the CMS filter farm
The CMS experiment at the CERN Large Hadron Collider is currently being commissioned and is scheduled to collect the first pp collision data in 2008. CMS features a two-level trigger system. The Level-1 trigger, based on custom hardware, is designed to reduce the collision rate of 40 MHz to approximately 100 kHz. Data for events accepted by the Level-1 trigger are read out and assembled by an Event Builder. The High Level Trigger (HLT) employs a set of sophisticated software algorithms, to analyze the complete event information, and further reduce the accepted event rate for permanent storage and analysis. This paper describes the design and implementation of the HLT Configuration Management system. First experiences with commissioning of the HLT system are also reported.
2013
Cited 3 times
Energy calibration and resolution of the CMS electromagnetic calorimeter in pp collisions at √s = 7 TeV
DOI: 10.2172/1255142
2016
The Higgs Portal and Cosmology
Higgs portal interactions provide a simple mechanism for addressing two open problems in cosmology: dark matter and the baryon asymmetry. In the latter instance, Higgs portal interactions may contain the ingredients for a strong first-order electroweak phase transition as well as new CP-violating interactions as needed for electroweak baryogenesis. These interactions may also allow for a viable dark matter candidate. We survey the opportunities for probing the Higgs portal as it relates to these questions in cosmology at the LHC and possible future colliders.
DOI: 10.1109/rtc.2007.4382750
2007
Cited 3 times
CMS DAQ Event Builder Based on Gigabit Ethernet
The CMS Data Acquisition System is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called FED Builders. These will be based on Myrinet technology and will pre-assemble groups of about 8 data sources. The second stage will be a set of event builders called Readout Builders. These will perform the building of full events. A single Readout Builder will build events from 72 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper we present the design of a Readout Builder based on TCP/IP over Gigabit Ethernet and the optimization that was required to achieve the design throughput. This optimization includes architecture of the Readout Builder, the setup of TCP/IP, and hardware selection.
DOI: 10.48550/arxiv.1310.0290
2013
High Energy Hadron Colliders - Report of the Snowmass 2013 Frontier Capabilities Hadron Collider Study Group
High energy hadron colliders have been the tools for discovery at the highest mass scales of the energy frontier from the SppS, to the Tevatron and now the LHC. This report reviews future hadron collider projects from the high luminosity LHC upgrade to a 100 TeV hadron collider in a large tunnel, the underlying technology challenges and R&D directions and presents a series of recommendations for the future development of hadron collider research and technology.
DOI: 10.1088/1742-6596/396/3/032089
2012
No file left behind - monitoring transfer latencies in PhEDEx
The CMS experiment has to move Petabytes of data among dozens of computing centres with low latency in order to make efficient use of its resources. Transfer operations are well established to achieve the desired level of throughput, but operators lack a system to identify early on transfers that will need manual intervention to reach completion. File transfer latencies are sensitive to the underlying problems in the transfer infrastructure, and their measurement can be used as prompt trigger for preventive actions. For this reason, PhEDEx, the CMS transfer management system, has recently implemented a monitoring system to measure the transfer latencies at the level of individual files. For the first time now, the system can predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies early, and correct the issues while the transfer is still in progress. Statistics are aggregated for blocks of files, recording a historical log to monitor the long-term evolution of transfer latencies, which are used as cumulative metrics to evaluate the performance of the transfer infrastructure, and to plan the global data placement strategy. In this contribution, we present the typical patterns of transfer latencies that may be identified with the latency monitor, and we show how we are able to detect the sources of latency arising from the underlying infrastructure (such as stuck files) which need operator intervention.
2013
Working Group Report: Hadron Colliders
DOI: 10.48550/arxiv.2305.02998
2023
Prospects for $B_c^+$ and $B^+\to \tau^+ \nu_\tau$ at FCC-ee
The prospects are presented for precise measurements of the branching ratios of the purely leptonic $B_c^+ \to \tau^+ \nu_\tau$ and $B^+ \to \tau^+ \nu_\tau$ decays at the Future Circular Collider (FCC). Common FCC software tools are employed in all steps of this study. This work is focused on the hadronic $\tau^{+} \to \pi^+ \pi^+ \pi^- \bar{\nu}_\tau$ decay in both $B_c^+ \to \tau^+ \nu_\tau$ and $B^+ \to \tau^+ \nu_\tau$ processes. Events are selected with two Boosted Decision Tree algorithms to optimise the separation between the two signal processes as well as the generic hadronic $Z$ decay backgrounds. The range of the expected precision for both signals are evaluated in different scenarios of non-ideal background modelling. The theoretical impacts of such measurements are discussed in both the Standard Model context, for measurements of CKM matrix elements, as well as New Physics cases, for interpretations in the generic Two-Higgs-doublet model and leptoquark models.
DOI: 10.48550/arxiv.2308.00515
2023
Technical Design Report for the LUXE Experiment
This Technical Design Report presents a detailed description of all aspects of the LUXE (Laser Und XFEL Experiment), an experiment that will combine the high-quality and high-energy electron beam of the European XFEL with a high-intensity laser, to explore the uncharted terrain of strong-field quantum electrodynamics characterised by both high energy and high intensity, reaching the Schwinger field and beyond. The further implications for the search of physics beyond the Standard Model are also discussed.
DOI: 10.48550/arxiv.2310.08512
2023
Optimum filter-based analysis for the characterization of a high-resolution magnetic microcalorimeter towards the DELight experiment
Ultra-sensitive cryogenic calorimeters have become a favored technology with widespread application where eV-scale energy resolutions are needed. In this article, we characterize the performance of an X-ray magnetic microcalorimeter (MMC) using a Fe-55 source. Employing an optimum filter-based amplitude estimation and energy reconstruction, we demonstrate that an unprecedented FWHM resolution of $\Delta E_\mathrm{FWHM} = \left(1.25\pm0.17\mathrm{\scriptsize{(stat)}}^{+0.05}_{-0.07}\mathrm{\scriptsize{(syst)}}\right)\,\text{eV}$ can be achieved. We also derive the best possible resolution and discuss limiting factors affecting the measurement. The analysis pipeline for the MMC data developed in this paper is furthermore an important step for the realization of the proposed superfluid helium-based experiment DELight, which will search for direct interaction of dark matter with masses below 100 MeV/c$^2$.
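The amplitude estimate underlying such an analysis can be written in the textbook frequency-domain form (the paper's exact implementation may differ):
$$ \hat{A} = \frac{\sum_k \tilde{s}^*(f_k)\,\tilde{d}(f_k)/J(f_k)}{\sum_k |\tilde{s}(f_k)|^2/J(f_k)}, $$
where $\tilde{d}$ is the Fourier transform of the measured trace, $\tilde{s}$ that of the pulse template, and $J$ the noise power spectral density; weighting by $1/J$ is what makes the filter optimal for stationary noise.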
DOI: 10.48550/arxiv.2311.12551
2023
Distilling particle knowledge for fast reconstruction at high-energy physics experiments
Knowledge distillation is a form of model compression that allows knowledge to be transferred between intelligent algorithms. Its main application is the compactification of large deep neural networks to free up computational resources, in particular on edge devices. In this article, we consider proton-proton collisions at the High-Luminosity LHC (HL-LHC) and demonstrate a successful knowledge transfer from an event-level graph neural network (GNN) to a particle-level small deep neural network (DNN). Our algorithm, DistillNet, is a DNN that is trained to learn about the provenance of particles, as provided by the soft labels that are the GNN outputs, to predict whether or not a particle originates from the primary interaction vertex. The results indicate that for this problem, which is one of the main challenges at the HL-LHC, there is minimal loss during the transfer of knowledge to the small student network, while improving significantly the computational resource needs compared to the teacher. This is demonstrated for the distilled student network on a CPU, as well as for a quantized and pruned student network deployed on an FPGA. Our study proves that knowledge transfer between networks of different complexity can be used for fast artificial intelligence (AI) in high-energy physics that improves the expressiveness of observables over non-AI-based reconstruction algorithms. Such an approach can become essential at the HL-LHC experiments, e.g., to comply with the resource budget of their trigger stages.
DOI: 10.1109/icops45740.2023.10481013
2023
Miniature Microwave Inductively Coupled Plasma as a Source of Atomic Oxygen for Plasma Deposition of Thin Films
DOI: 10.1109/icops45740.2023.10480972
2023
The Nonlinear Behaviour of the Microwave Driven ICP Source
DOI: 10.2172/15017347
2004
Cited 3 times
A measurement of the t anti-t production cross-section in proton anti-proton collisions at √s = 1.96 TeV with the D0 detector at the Tevatron using final states with a muon and jets
A preliminary measurement of the t$\bar{t}$ production cross section at √s = 1.96 TeV is presented. The μ-plus-jets final state is analyzed in a data sample of 94 pb-1 and a total of 14 events are selected with a background expectation of 11.7 ± 1.9 events. The measurement yields: $\sigma(p\bar{p} \to t\bar{t} + X) = 2.4^{+4.2}_{-3.5}\,(\mathrm{stat.})\,^{+2.5}_{-2.6}\,(\mathrm{syst.}) \pm 0.3\,(\mathrm{lumi.})$ pb. The analysis, being part of a larger effort to re-observe the top quark in Tevatron Run II data and to measure the production cross section, is combined with results from all available analysis channels. The combined result yields: $\sigma(p\bar{p} \to t\bar{t} + X) = 8.1^{+2.2}_{-2.0}\,(\mathrm{stat.})\,^{+1.6}_{-1.4}\,(\mathrm{syst.}) \pm 0.8\,(\mathrm{lumi.})$ pb.
2018
Performance of the CMS muon detector and muon reconstruction with proton-proton collisions at √s = 13 TeV
2016
Search for pair-produced vectorlike B quarks in proton-proton collisions at √s = 8 TeV
2016
Search for long-lived charged particles in proton-proton collisions at √s=13 TeV
DOI: 10.3389/fphy.2022.913510
2022
Detector Simulation Challenges for Future Accelerator Experiments
Detector simulation is a key component for studies on prospective future high-energy colliders, the design, optimization, testing and operation of particle physics experiments, and the analysis of the data collected to perform physics measurements. This review starts from the current state of the art technology applied to detector simulation in high-energy physics and elaborates on the evolution of software tools developed to address the challenges posed by future accelerator programs beyond the HL-LHC era, into the 2030–2050 period. New accelerator, detector, and computing technologies set the stage for an exercise in how detector simulation will serve the needs of the high-energy physics programs of the mid 21st century, and its potential impact on other research domains.
DOI: 10.48550/arxiv.1604.05324
2016
The Higgs Portal and Cosmology
Higgs portal interactions provide a simple mechanism for addressing two open problems in cosmology: dark matter and the baryon asymmetry. In the latter instance, Higgs portal interactions may contain the ingredients for a strong first order electroweak phase transition as well as new CP-violating interactions as needed for electroweak baryogenesis. These interactions may also allow for a viable dark matter candidate. We survey the opportunities for probing the Higgs portal as it relates to these questions in cosmology at the LHC and possible future colliders.
DOI: 10.48550/arxiv.1401.6114
2014
Planning the Future of U.S. Particle Physics (Snowmass 2013): Chapter 6: Accelerator Capabilities
These reports present the results of the 2013 Community Summer Study of the APS Division of Particles and Fields ("Snowmass 2013") on the future program of particle physics in the U.S. Chapter 6, on Accelerator Capabilities, discusses the future progress of accelerator technology, including issues for high-energy hadron and lepton colliders, high-intensity beams, electron-ion colliders, and necessary R&D for future accelerator technologies.
DOI: 10.1088/1742-6596/331/7/072019
2011
CMS distributed computing workflow experience
The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which contain re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and we present the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.
2015
Measurements of jet multiplicity and differential production cross sections of Z + jets events in proton-proton collisions at √s = 7 TeV
DOI: 10.1209/0295-5075/103/1/00000
2013
Highlights from the previous volumes
Equilibrium noise correlations in quantum Hall effect devices as a test of a formula by Büttiker; Directional interactions and cooperativity between mechanosensitive membrane proteins; Measuring Higgs couplings at a linear collider; Gauge theory of topological phases of matter
2017
Inclusive search for supersymmetry using razor variables in pp collisions at √s = 13 TeV
2017
Observation of Charge-Dependent Azimuthal Correlations in p-Pb Collisions and Its Implication for the Search for the Chiral Magnetic Effect
DOI: 10.1109/tns.2008.915925
2008
Effects of Adaptive Wormhole Routing in Event Builder Networks
The data acquisition system of the CMS experiment at the Large Hadron Collider features a two-stage event builder, which combines data from about 500 sources into full events at an aggregate throughput of 100 GB/s. To meet the requirements, several architectures and interconnect technologies have been quantitatively evaluated. Myrinet will be used for the communication from the underground frontend devices to the surface event building system. Gigabit Ethernet is deployed in the surface event building system. Nearly full bi-section throughput can be obtained using a custom software driver for Myrinet based on barrel shifter traffic shaping. This paper discusses the use of Myrinet dual-port network interface cards supporting channel bonding to achieve virtual 5 GBit/s links with adaptive routing to alleviate the throughput limitations associated with wormhole routing. Adaptive routing is not expected to be suitable for high-throughput event builder applications in high-energy physics. To corroborate this claim, results from the CMS event builder preseries installation at CERN are presented and the problems of wormhole routing networks are discussed.
DOI: 10.1109/rtc.2007.4382746
2007
The Terabit/s Super-Fragment Builder and Trigger Throttling System for the Compact Muon Solenoid Experiment at CERN
The data acquisition system of the Compact Muon Solenoid experiment at the Large Hadron Collider reads out event fragments of an average size of 2 kilobytes from around 650 detector front-ends at a rate of up to 100 kHz. The first stage of event-building is performed by the Super-Fragment Builder employing custom-built electronics and a Myrinet optical network. It reduces the number of fragments by one order of magnitude, thereby greatly decreasing the requirements for the subsequent event-assembly stage. By providing fast feedback from any of the front-ends to the trigger, the trigger throttling system prevents buffer overflows in the front-end electronics due to variations in the size and rate of events or due to backpressure from the down-stream event-building and processing. This paper reports on the recent successful integration of a scaled-down setup of the described system with the trigger and with front-ends of all major sub-detectors and discusses the ongoing commissioning of the full-scale system.
DOI: 10.1109/rtc.2007.4382773
2007
The CMS High Level Trigger System
The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.
2006
Top Quark Properties from the Tevatron
This report describes the latest measurements and studies of top quark properties from the Tevatron in Run II with an integrated luminosity of up to 750 pb-1. Due to its large mass of about 172 GeV, the top quark provides a unique environment for tests of the Standard Model and is believed to yield sensitivity to new physics beyond the Standard Model. With data samples of close to 1 fb-1 the CDF and D0 collaborations at the Tevatron enter a new era of precision top quark measurements.
2015
Search for a standard model-like Higgs boson in the $\mu^+\mu^-$ and $e^+e^-$ decay channels at the LHC
2014
Differential cross section measurements for the production of a W boson in association with jets in proton–proton collisions at √s = 7 TeV
2014
Measurement of the $t\bar{t}$ production cross section in pp collisions at √s = 8 TeV in dilepton final states containing one τ lepton
2015
Search for narrow high-mass resonances in proton–proton collisions at √s = 8 TeV decaying to a Z and a Higgs boson
2015
Search for a pseudoscalar boson decaying into a Z boson and the 125 GeV Higgs boson in $\ell^+\ell^- b\bar{b}$ final states
2014
CMS computing operations during run 1
2014
Future hadron colliders: From physics perspectives to technology R&D
2015
Measurement of the cross section ratio $\sigma_{t\bar{t}b\bar{b}}/\sigma_{t\bar{t}jj}$ in pp collisions at √s = 8 TeV
2014
Higgs working group report
This report summarizes the work of the Energy Frontier Higgs Boson working group of the 2013 Community Summer Study (Snowmass). We identify the key elements of a precision Higgs physics program and document the physics potential of future experimental facilities as elucidated during the Snowmass study. We study Higgs couplings to gauge boson and fermion pairs, double Higgs production for the Higgs self-coupling, its quantum numbers and $CP$-mixing in Higgs couplings, the Higgs mass and total width, and prospects for direct searches for additional Higgs bosons in extensions of the Standard Model. Our report includes projections of measurement capabilities from detailed studies of the Compact Linear Collider (CLIC), a Gamma-Gamma Collider, the International Linear Collider (ILC), the Large Hadron Collider High-Luminosity Upgrade (HL-LHC), Very Large Hadron Colliders up to 100 TeV (VLHC), a Muon Collider, and a Triple-Large Electron Positron Collider (TLEP).
2014
Modification of Jet Shapes in PbPb Collisions at √s_NN = 2.76 TeV
2015
Measurement of the production cross section ratio $\sigma(\chi_{b2}(1P))/\sigma(\chi_{b1}(1P))$ in pp collisions at √s = 8 TeV
2014
Measurement of the top-quark mass in all-jets $t\bar{t}$ events in pp collisions at √s = 7 TeV
2014
Measurement of the pp → ZZ production cross section and constraints on anomalous triple gauge couplings in four-lepton final states at √s = 8 TeV
2015
Search for heavy Majorana neutrinos in $\mu^\pm\mu^\pm$ + jets events in proton–proton collisions at √s = 8 TeV
2014
Evidence of b-Jet Quenching in PbPb Collisions at √s_NN = 2.76 TeV
2015
High Energy Hadron Colliders
High energy hadron colliders have been the tools for discovery at the highest mass scales of the energy frontier from the SppS, to the Tevatron and now the LHC. They will remain so, unchallenged for the foreseeable future. The discovery of the Higgs boson at the LHC opens a new era for particle physics. After this discovery, understanding the origin of electro-weak symmetry breaking becomes the next key challenge for collider physics. This challenge can be expressed in terms of two questions: up to which level of precision does the Higgs boson behave as predicted by the SM? Where are the new particles that should solve the electro-weak (EW) naturalness problem and, possibly, offer some insight into the origin of dark matter, the matter-antimatter asymmetry, and neutrino masses? The approved CERN LHC programme, its future upgrade towards higher luminosities (HL-LHC), and the study of an LHC energy upgrade (HE-LHC) or of a new proton collider delivering collisions at a center of mass energy up to 100 TeV (VHE-LHC), are all essential components of this endeavor.
2016
Measurement of differential cross sections for Higgs boson production in the diphoton decay channel in pp collisions at √s = 8 TeV
2016
Measurement of the $t\bar{t}$ production cross section in the all-jets final state in pp collisions at √s = 8 TeV
2014
Inclusive Search for a Vector-Like T Quark with Charge 2/3 in pp Collisions at √s = 8 TeV
2016
Measurement of the integrated and differential $t\bar{t}$ production cross sections for high-$p_T$ top quarks in pp collisions at √s = 8 TeV
2015
Angular coefficients of Z bosons produced in pp collisions at √s = 8 TeV and decaying to $\mu^+\mu^-$ as a function of transverse momentum and rapidity
2015
Nuclear effects on the transverse momentum spectra of charged particles in pPb collisions at √s_NN = 5.02 TeV
2016
Study of B meson production in p + Pb collisions at √s_NN = 5.02 TeV using exclusive hadronic decays
2015
Long-range two-particle correlations of strange hadrons with charged particles in pPb and PbPb collisions at LHC energies
2014
Search for new resonances decaying via WZ to leptons in proton–proton collisions at √s = 8 TeV
2014
Measurement of the production cross section for a W boson and two b jets in pp collisions at √s = 7 TeV
2015
Search for stealth supersymmetry in events with jets, either photons or leptons, and low missing transverse momentum in pp collisions at 8 TeV
2016
Search for New Phenomena in Monophoton Final States in Proton–proton Collisions at √s = 8 TeV
2014
Search for baryon number violation in top-quark decays
2016
Search for Narrow Resonances in Dijet Final States at √s = 8 TeV with the Novel CMS Technique of Data Scouting
2014
Measurement of pseudorapidity distributions of charged particles in proton–proton collisions at √s = 8 TeV by the CMS and TOTEM experiments
2014
Studies of dijet transverse momentum balance and pseudorapidity distributions in pPb collisions at √s_NN = 5.02 TeV
2015
Search for a standard model Higgs boson produced in association with a top-quark pair and decaying to bottom quarks using a matrix element method
2014
Observation of the diphoton decay of the Higgs boson and measurement of its properties
2014
Searches for electroweak production of charginos, neutralinos, and sleptons decaying to leptons and W, Z, and Higgs bosons in pp collisions at 8 TeV
2014
Search for heavy neutrinos and W bosons with right-handed couplings in proton–proton collisions at √s = 8 TeV
2014
Measurement of differential cross sections for the production of a pair of isolated photons in pp collisions at √s = 7 TeV
2016
Search for Resonant Production of High-Mass Photon Pairs in Proton-Proton Collisions at √s=8 and 13 TeV
2014
Search for excited quarks in the γ + jet final state in proton-proton collisions at √s = 8 TeV