Nicholas Wardle

Papers by Nicholas Wardle available to download and read on OA.mg.

DOI: 10.21468/scipostphys.12.1.037
2022
Cited 31 times
Publishing statistical models: Getting the most out of particle physics experiments
The statistical models used to derive the results of experimental analyses are of incredible scientific value and are essential information for analysis preservation and reuse. In this paper, we make the scientific case for systematically publishing the full statistical models and discuss the technical developments that make this practical. By means of a variety of physics cases - including parton distribution functions, Higgs boson measurements, effective field theory interpretations, direct searches for new physics, heavy flavor physics, direct dark matter detection, world averages, and beyond the Standard Model global fits - we illustrate how detailed information on the statistical modelling can enhance the short- and long-term impact of experimental results.
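One concrete route to publishing a full statistical model, discussed in this context, is a declarative serialisation that anyone can rebuild and fit. The sketch below uses the pyhf library as an illustration; the yields and uncertainty are invented, and the published artefact would be the JSON model specification rather than the fit result.

    import json
    import pyhf

    # a minimal sketch (illustrative numbers): a one-bin counting model
    # with signal, background, and an uncorrelated background uncertainty
    model = pyhf.simplemodels.uncorrelated_background(
        signal=[5.0], bkg=[50.0], bkg_uncertainty=[7.0]
    )
    data = [52.0] + model.config.auxdata

    # the model specification itself is what would be published, so that
    # anyone can reload it and reproduce or extend the inference
    print(json.dumps(model.spec, indent=2))
    cls_obs = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
    print("observed CLs at mu=1:", cls_obs)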
DOI: 10.1088/1748-0221/10/04/p04015
2015
Cited 62 times
Handling uncertainties in background shapes: the discrete profiling method
A common problem in data analysis is that the functional form, as well as the parameter values, of the underlying model which should describe a dataset is not known a priori. In these cases some extra uncertainty must be assigned to the extracted parameters of interest due to lack of exact knowledge of the functional form of the model. A method for assigning an appropriate error is presented. The method is based on considering the choice of functional form as a discrete nuisance parameter which is profiled in an analogous way to continuous nuisance parameters. The bias and coverage of this method are shown to be good when applied to a realistic example.
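The core of the method can be sketched in a few lines: fit each candidate function, penalise its goodness-of-fit by the number of free parameters, and take the envelope (pointwise minimum) over the candidates. The toy below is a simplified illustration with invented data; the full method profiles the penalised likelihood at each value of the parameter of interest rather than picking a single global winner.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    x = np.linspace(100.0, 180.0, 80)
    sigma = 5.0
    y = 1000.0 * np.exp(-0.02 * x) + rng.normal(0.0, sigma, x.size)

    # candidate background shapes: (function, initial parameter guess)
    candidates = {
        "exponential": (lambda x, a, b: a * np.exp(b * x), (1000.0, -0.02)),
        "power law":   (lambda x, a, b: a * x ** b, (1e8, -3.0)),
        "quadratic":   (lambda x, a, b, c: a + b * x + c * x**2, (300.0, -2.0, 0.0)),
    }

    best = None
    for name, (f, p0) in candidates.items():
        popt, _ = curve_fit(f, x, y, p0=p0, maxfev=10000)
        chi2 = np.sum((y - f(x, *popt)) ** 2) / sigma**2
        penalised = chi2 + len(popt)   # penalty: one unit per free parameter
        print(f"{name}: chi2={chi2:.1f}, penalised={penalised:.1f}")
        if best is None or penalised < best[1]:
            best = (name, penalised)
    print("envelope choice:", best[0])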
DOI: 10.1016/j.dark.2017.02.002
2017
Cited 38 times
Towards the next generation of simplified Dark Matter models
This White Paper is an input to the ongoing discussion about the extension and refinement of simplified Dark Matter (DM) models. It is not intended as a comprehensive review of the discussed subjects, but instead summarises ideas and concepts arising from a brainstorming workshop that can be useful when defining the next generation of simplified DM models (SDMM). In this spirit, based on two concrete examples, we show how existing SDMM can be extended to provide a more accurate and comprehensive framework to interpret and characterise collider searches. In the first example we extend the canonical SDMM with a scalar mediator to include mixing with the Higgs boson. We show that this approach not only provides a better description of the underlying kinematic properties that a complete model would possess, but also offers the option of using this more realistic class of scalar mixing models to compare and combine consistently searches based on different experimental signatures. The second example outlines how a new physics signal observed in a visible channel can be connected to DM by extending a simplified model including effective couplings. In the next part of the White Paper we outline other interesting options for SDMM that could be studied in more detail in the future. Finally, we review important aspects of supersymmetric models for DM and use them to propose how to develop more complete SDMMs. This White Paper is a summary of the brainstorming meeting "Next generation of simplified Dark Matter models" that took place at Imperial College, London on May 6, 2016, and corresponding follow-up studies on selected subjects.
DOI: 10.1007/jhep04(2019)064
2019
Cited 15 times
The simplified likelihood framework
We discuss the simplified likelihood framework as a systematic approximation scheme for experimental likelihoods such as those originating from LHC experiments. We develop the simplified likelihood from the Central Limit Theorem keeping the next-to-leading term in the large $N$ expansion to correctly account for asymmetries. Moreover, we present an efficient method to compute the parameters of the simplified likelihood from Monte Carlo simulations. The approach is validated using a realistic LHC-like analysis, and the limits of the approximation are explored. Finally, we discuss how the simplified likelihood data can be conveniently released in the HepData error source format and automatically built from it, making this framework a convenient tool to transmit realistic experimental likelihoods to the community.
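A stripped-down version of such a likelihood can be written directly: Poisson counts per bin with Gaussian-constrained background nuisances whose covariance is estimated from simulation. The sketch below keeps only the leading (symmetric) term and uses invented numbers; the paper's next-to-leading term adds skew to the nuisance constraints, which is omitted here.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal, poisson

    n   = np.array([12, 55, 7])                 # observed counts
    s   = np.array([3.0, 6.0, 2.0])             # signal expectation per bin
    b   = np.array([10.0, 50.0, 5.0])           # nominal background
    cov = np.array([[4.0, 1.0, 0.5],            # background covariance
                    [1.0, 9.0, 1.0],            # (estimated from MC)
                    [0.5, 1.0, 1.0]])

    def nll(theta, mu):
        lam = mu * s + b + theta                # expected yield per bin
        if np.any(lam <= 0):
            return 1e12
        return (-poisson.logpmf(n, lam).sum()
                - multivariate_normal.logpdf(theta, mean=np.zeros(3), cov=cov))

    def profiled_nll(mu):                       # profile out the nuisances
        return minimize(nll, np.zeros(3), args=(mu,), method="Nelder-Mead").fun

    print(profiled_nll(0.0), profiled_nll(1.0))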
DOI: 10.1088/1748-0221/11/02/c02008
2016
Cited 11 times
Triggering on electrons, jets and tau leptons with the CMS upgraded calorimeter trigger for the LHC RUN II
The Compact Muon Solenoid (CMS) experiment has implemented a sophisticated two-level online selection system that achieves a rejection factor of nearly 10⁵. During Run II, the LHC will increase its centre-of-mass energy up to 13 TeV and progressively reach an instantaneous luminosity of 2 × 10³⁴ cm⁻² s⁻¹. In order to guarantee a successful and ambitious physics programme under this intense environment, the CMS Trigger and Data Acquisition (DAQ) system has been upgraded. A novel concept for the L1 calorimeter trigger is introduced: the Time Multiplexed Trigger (TMT). In this design, each of nine main processors receives all of the calorimeter data from an entire event, provided by 18 preprocessors. This design mirrors that of the CMS DAQ and HLT systems. The advantage of the TMT architecture is that a global view and full granularity of the calorimeters can be exploited by sophisticated algorithms. The goal is to maintain the current thresholds for calorimeter objects and improve the performance of their selection. The performance of these algorithms will be demonstrated, both in terms of efficiency and rate reduction. The challenging aspects of pile-up mitigation and firmware design will be presented.
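As a toy illustration of the routing idea (not CMS firmware), each of the 18 preprocessors handles a region of every event, and whole events are assigned to the nine main processors round-robin, so that a single processor sees the full-granularity data for a given bunch crossing:

    N_PREPROCESSORS = 18
    N_PROCESSORS = 9

    def route(event_id, regions):
        """Deliver all calorimeter regions of one event to one processor."""
        return event_id % N_PROCESSORS, regions   # round-robin assignment

    for event_id in range(4):
        regions = [f"region-{i}" for i in range(N_PREPROCESSORS)]
        target, payload = route(event_id, regions)
        print(f"event {event_id} -> processor {target}, {len(payload)} regions")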
DOI: 10.1088/1361-6471/aa9408
2018
Cited 11 times
Statistical issues in searches for new phenomena in High Energy Physics
Many analyses of data in High Energy Physics are concerned with searches for New Physics. We review the statistical issues that arise in such searches, and then illustrate these using the specific example of the recent successful search for the Higgs boson, produced in collisions between high energy protons at CERN's Large Hadron Collider.
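One recurring computation in such searches, sketched below, is the conversion between a one-sided local p-value and a Gaussian significance Z; the conventional 5 sigma discovery threshold corresponds to a p-value of about 2.9e-7.

    from scipy.stats import norm

    def significance(p):
        """One-sided Gaussian significance Z for a local p-value."""
        return norm.isf(p)          # inverse survival function of N(0,1)

    print(significance(2.87e-7))    # ~ 5.0 (discovery convention)
    print(norm.sf(5.0))             # ~ 2.87e-7, the inverse direction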
DOI: 10.1088/1748-0221/12/01/c01065
2017
Cited 8 times
The CMS Level-1 Calorimeter Trigger for the LHC Run II
Results from the completed Phase 1 Upgrade of the Compact Muon Solenoid (CMS) Level-1 Calorimeter Trigger are presented. The upgrade was performed in two stages, with the first running in 2015 for proton and heavy ion collisions and the final stage for 2016 data taking. The Level-1 trigger has been fully commissioned and has been used by CMS to collect over 43 fb⁻¹ of data since the start of Run II of the Large Hadron Collider (LHC). The new trigger has been designed to improve the performance at high luminosity and large number of simultaneous inelastic collisions per crossing (pile-up). For this purpose it uses a novel design, the Time Multiplexed Trigger (TMT), which enables the data from an event to be processed by a single trigger processor at full granularity over several bunch crossings. The TMT is a modular design based on the μTCA standard. The trigger processors are instrumented with Xilinx Virtex-7 690 FPGAs and 10 Gbps optical links. The TMT architecture is flexible and the number of trigger processors can be expanded according to the physics needs of CMS. Sophisticated and innovative algorithms are now the core of the first decision layer of the experiment. The system has been able to adapt to the outstanding performance of the LHC, which ran with an instantaneous luminosity well above design. The performance of the system for single physics objects is presented, along with the optimizations foreseen to maintain the thresholds under the harsher conditions expected during the LHC Run II and Run III periods.
DOI: 10.48550/arxiv.1607.06680
2016
Cited 4 times
Towards the next generation of simplified Dark Matter models
This White Paper is an input to the ongoing discussion about the extension and refinement of simplified Dark Matter (DM) models. Based on two concrete examples, we show how existing simplified DM models (SDMM) can be extended to provide a more accurate and comprehensive framework to interpret and characterise collider searches. In the first example we extend the canonical SDMM with a scalar mediator to include mixing with the Higgs boson. We show that this approach not only provides a better description of the underlying kinematic properties that a complete model would possess, but also offers the option of using this more realistic class of scalar mixing models to compare and combine consistently searches based on different experimental signatures. The second example outlines how a new physics signal observed in a visible channel can be connected to DM by extending a simplified model including effective couplings. This discovery scenario uses the recently observed excess in the high-mass diphoton searches of ATLAS and CMS as a case study to show that such a pragmatic approach can aid the experimental search programme to verify/falsify a potential signal and to study its underlying nature. In the next part of the White Paper we outline other interesting options for SDMM that could be studied in more detail in the future. Finally, we discuss important aspects of supersymmetric models for DM and how these could help to develop more complete SDMMs.
DOI: 10.1088/1748-0221/12/02/c02014
2017
Cited 3 times
The CMS Level-1 electron and photon trigger: for Run II of LHC
The Compact Muon Solenoid (CMS) employs a sophisticated two-level online triggering system that has a rejection factor of up to 10⁵. Since the beginning of Run II of the LHC, the conditions that CMS operates in have become increasingly challenging. The centre-of-mass energy is now 13 TeV and the instantaneous luminosity currently peaks at 1.5 × 10³⁴ cm⁻² s⁻¹. In order to keep low physics thresholds and to trigger efficiently in such conditions, the CMS trigger system has been upgraded. A new trigger architecture, the Time Multiplexed Trigger (TMT), has been introduced, which allows the full granularity of the calorimeters to be exploited at the first level of the online trigger. The new trigger has also benefited immensely from technological improvements in hardware. Sophisticated algorithms, developed to fully exploit the advantages provided by the new hardware architecture, have been implemented. The new trigger system started taking physics data in 2016 following a commissioning period in 2015, and since then has performed extremely well. The hardware and firmware developments and the electron and photon algorithms, together with their performance under the challenging 2016 conditions, are presented.
DOI: 10.1101/2020.11.19.20235036
2020
Statistical techniques to estimate the SARS-CoV-2 infection fatality rate
The determination of the infection fatality rate (IFR) for the novel SARS-CoV-2 coronavirus is a key aim for many of the field studies that are currently being undertaken in response to the pandemic. The IFR and the basic reproduction number $R_0$ are the main epidemic parameters describing the severity and transmissibility of the virus, respectively. The IFR can also be used as a basis for estimating and monitoring the number of infected individuals in a population, which may subsequently be used to inform policy decisions relating to public health interventions and lockdown strategies. The interpretation of IFR measurements requires the calculation of confidence intervals. We present a number of statistical methods that are relevant in this context and develop an inverse problem formulation to determine correction factors to mitigate time-dependent effects that can lead to biased IFR estimates. We also review a number of methods to combine IFR estimates from multiple independent studies, provide example calculations throughout this note, and conclude with a summary and "best practice" recommendations. The developed code is available online.
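One interval construction relevant here, sketched below with invented numbers, is the exact Clopper-Pearson confidence interval for a binomial proportion, treating observed deaths out of an estimated number of infections as binomial; the time-dependent corrections developed in the paper are omitted.

    from scipy.stats import beta

    def clopper_pearson(k, n, cl=0.95):
        """Exact (Clopper-Pearson) confidence interval for k successes out of n."""
        a = (1.0 - cl) / 2.0
        lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1.0 - a, k + 1, n - k) if k < n else 1.0
        return lo, hi

    deaths, infections = 7, 1000
    print(clopper_pearson(deaths, infections))  # roughly (0.003, 0.014)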
DOI: 10.1088/1748-0221/11/01/c01051
2016
Run 2 upgrades to the CMS Level-1 calorimeter trigger
The CMS Level-1 calorimeter trigger is being upgraded in two stages to maintain performance as the LHC increases pile-up and instantaneous luminosity in its second run. In the first stage, improved algorithms including event-by-event pile-up corrections are used. New algorithms for heavy ion running have also been developed. In the second stage, higher granularity inputs and a time-multiplexed approach allow for improved position and energy resolution. Data processing in both stages of the upgrade is performed with new, Xilinx Virtex-7 based AMC cards.
DOI: 10.17863/cam.20495
2018
Pushing the precision frontier at the LHC with V+jets
2015
Run 2 Upgrades to the CMS Level-1 Calorimeter Trigger
2012
Search for the SM Higgs in the Two Photon and two Z to Four Lepton Decay Channels at CMS
DOI: 10.25560/12230
2013
Observation of a new particle in the search for the Standard Model Higgs boson at the CMS detector
DOI: 10.48550/arxiv.1902.08508
2019
Per-event significance indicator to visualise significant events
In this note, an alternative for presenting the distribution of 'significant' events in searches for new phenomena is described. The alternative is based on the probability density functions used in the evaluation of the 'significance' of an observation, rather than the typical ratio of signal to background. The method is also applicable to searches that use unbinned data, for which the concept of signal to background can be ambiguous. In the case of simple searches using binned data, this method reproduces the familiar quantity $\log(s/b)$ when the signal-to-background ratio is small.
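A toy version of the idea (not the note's exact definition) ranks each event by the log of the ratio of the signal-plus-background to the background-only probability density at its observed value; for binned data with small signal this tends to the familiar log(s/b):

    import numpy as np

    def f_b(x):        # assumed background pdf: falling exponential
        return np.exp(-x / 50.0) / 50.0

    def f_s(x):        # assumed signal pdf: narrow Gaussian peak
        return np.exp(-0.5 * ((x - 125.0) / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

    def indicator(x, s_frac=0.05):
        f_sb = (1.0 - s_frac) * f_b(x) + s_frac * f_s(x)
        return np.log(f_sb / f_b(x))   # large for signal-like events

    for x in (60.0, 124.0, 125.5):
        print(x, indicator(x))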
DOI: 10.48550/arxiv.2012.02100
2020
Statistical techniques to estimate the SARS-CoV-2 infection fatality rate
The determination of the infection fatality rate (IFR) for the novel SARS-CoV-2 coronavirus is a key aim for many of the field studies that are currently being undertaken in response to the pandemic. The IFR and the basic reproduction number $R_0$ are the main epidemic parameters describing the severity and transmissibility of the virus, respectively. The IFR can also be used as a basis for estimating and monitoring the number of infected individuals in a population, which may subsequently be used to inform policy decisions relating to public health interventions and lockdown strategies. The interpretation of IFR measurements requires the calculation of confidence intervals. We present a number of statistical methods that are relevant in this context and develop an inverse problem formulation to determine correction factors to mitigate time-dependent effects that can lead to biased IFR estimates. We also review a number of methods to combine IFR estimates from multiple independent studies, provide example calculations throughout this note, and conclude with a summary and "best practice" recommendations. The developed code is available online.
DOI: 10.48550/arxiv.1802.02100
2018
Pushing the precision frontier at the LHC with V+jets
This document summarises the proceedings of the workshop 'Illuminating Standard candles at the LHC: V+jets', held at Imperial College London on 25th-26th April 2017. It covers the numerous contributions to the workshop, from the experimental overview of V+jets measurements at CMS and ATLAS and their role in searches for physics beyond the Standard Model, to the status of higher-order perturbative calculations for these processes and their inclusion in state-of-the-art Monte Carlo simulations. An executive summary of the ensuing discussions, including a list of outcomes and a wishlist for future consideration, is also presented.