
F. Legger

Here are all the papers by F. Legger that you can download and read on OA.mg.

DOI: 10.1016/j.physletb.2007.03.056
2007
Cited 50 times
Photon polarization from helicity suppression in radiative decays of polarized Λb to spin-3/2 baryons
We give a general parameterization of the Λb→Λ(1520)γ decay amplitude, applicable to any strange isosinglet spin-3/2 baryon, and calculate the branching fraction and helicity amplitudes. Large-energy form factor relations are worked out, and it is shown that the helicity-3/2 amplitudes vanish at lowest order in soft-collinear effective theory (SCET). The suppression can be tested experimentally at the LHC and elsewhere, thus providing a benchmark for SCET. We apply the results to assess the experimental reach for a possible wrong-helicity b→sγ dipole coupling in Λb→Λ(1520)γ→pKγ decays. Furthermore we revisit Λb-polarization at hadron colliders and update the prediction from heavy-quark effective theory. Opportunities associated with b→dγ afforded by high-statistics Λb samples are briefly discussed in the general context of CP and flavour violation.
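For orientation, the photon polarization probed by such decays is often summarized by a single parameter built from the amplitudes for right- and left-handed photon emission. A schematic definition, given here for context and not taken from the paper above (sign conventions differ between references), is
$$ \lambda_\gamma \;=\; \frac{|A(b\to s\gamma_R)|^2-|A(b\to s\gamma_L)|^2}{|A(b\to s\gamma_R)|^2+|A(b\to s\gamma_L)|^2}\,, $$
which is close to $-1$ in the Standard Model, where the emitted photons are predominantly left-handed; a wrong-helicity dipole coupling would pull it away from this value.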
DOI: 10.1088/1742-6596/513/4/042049
2014
Cited 30 times
Data federation strategies for ATLAS using XRootD
In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
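As an illustration of what a cost-of-data-access matrix can look like in code, the sketch below combines a network-throughput figure and a site-load factor into a single score per (site, federation source) pair. All names, metrics and weights are invented for this example; they are not the ATLAS production inputs or algorithm.

# Toy cost-of-data-access matrix: combine network throughput and site load
# into one score per (destination site, data source) pair. Lower is better.
# All inputs and weights are illustrative, not ATLAS production values.

def access_cost(throughput_mbps, site_load, w_net=1.0, w_load=0.5):
    """Fast links and lightly loaded sites give a low cost."""
    if throughput_mbps <= 0:
        return float("inf")
    return w_net / throughput_mbps + w_load * site_load

sites = ["SITE_A", "SITE_B"]
sources = ["FED_EU", "FED_US"]

# Hypothetical measurements: throughput in Mbps and a 0-1 site load factor.
throughput = {("SITE_A", "FED_EU"): 800, ("SITE_A", "FED_US"): 120,
              ("SITE_B", "FED_EU"): 300, ("SITE_B", "FED_US"): 500}
load = {"SITE_A": 0.7, "SITE_B": 0.2}

cost_matrix = {(s, f): access_cost(throughput[(s, f)], load[s])
               for s in sites for f in sources}

for (site, source), cost in sorted(cost_matrix.items(), key=lambda kv: kv[1]):
    print(f"{site} <- {source}: cost {cost:.4f}")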
DOI: 10.1051/epjconf/202429504045
2024
The CMS monitoring applications for LHC Run 3
Data taking at the Large Hadron Collider (LHC) at CERN restarted in 2022. The CMS experiment relies on a distributed computing infrastructure based on WLCG (Worldwide LHC Computing Grid) to support the LHC Run 3 physics program. The CMS computing infrastructure is highly heterogeneous and relies on a set of centrally provided services, such as distributed workload management and data management, and computing resources hosted at almost 150 sites worldwide. Smooth data taking and processing requires all computing subsystems to be fully operational, and available computing and storage resources need to be continuously monitored. During the long shutdown between LHC Run 2 and Run 3, the CMS monitoring infrastructure has undergone major changes to increase the coverage of monitored applications and services, while becoming more sustainable and easier to operate and maintain. The technologies used are based on open-source solutions, either provided by the CERN IT department through the MONIT infrastructure, or managed by the CMS monitoring team. Monitoring applications for distributed workload management, the HTCondor-based submission infrastructure, distributed data management, and facilities have been ported from mostly custom-built applications to use common data flow and visualization services. Data are mostly stored in NoSQL databases and storage technologies such as ElasticSearch, VictoriaMetrics, Prometheus, InfluxDB and HDFS, and accessed either via programmatic APIs, Apache Spark or Sqoop jobs, or visualized preferentially using Grafana. Most CMS monitoring applications are deployed on Kubernetes clusters to minimize maintenance operations. In this contribution we present the full stack of CMS monitoring services and show how we leveraged the use of common technologies to cover a variety of monitoring applications and cope with the computing challenges of LHC Run 3.
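As a minimal sketch of how the time-series back ends listed above are read programmatically: Prometheus-compatible services (Prometheus, VictoriaMetrics) expose an HTTP query endpoint, /api/v1/query, that can be queried with a few lines of Python. The URL and metric name below are placeholders, not the CMS production ones.

import requests

# Prometheus-compatible HTTP API; URL and metric name are placeholders.
PROM_URL = "http://prometheus.example.org:9090/api/v1/query"

def instant_query(expr):
    """Run an instant PromQL query and return the list of result samples."""
    resp = requests.get(PROM_URL, params={"query": expr}, timeout=10)
    resp.raise_for_status()
    payload = resp.json()
    if payload.get("status") != "success":
        raise RuntimeError(f"query failed: {payload}")
    return payload["data"]["result"]

if __name__ == "__main__":
    # Example: 5-minute rate of a hypothetical job-failure counter.
    for sample in instant_query("rate(jobs_failed_total[5m])"):
        print(sample["metric"], sample["value"])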
DOI: 10.1016/j.physletb.2006.12.011
2007
Cited 24 times
Photon helicity in Λb → pKγ decays
Radiative decays of polarized Λb baryons represent an attractive possibility to measure the helicity of the photon emitted in the b→sγ quark transition and thus to subject the Standard Model to a stringent test at existing and future hadron colliders. The most abundant mode, Λ(1116)γ, is experimentally very challenging because of the long decay length of the Λ(1116). We show that the experimentally more accessible Λb→pKγ decays proceeding via Λ resonances may be used to extract the photon helicity for sufficient Λb polarization, if the resonance spin does not exceed 3/2. A direct comparison of the potential of such resonance decays to assess the photon polarization at a hadron collider with respect to the decay to Λ(1116) is given.
DOI: 10.1103/physrevd.102.092013
2020
Cited 13 times
Measurement of the top quark Yukawa coupling from $\mathrm{t\bar{t}}$ kinematic distributions in the dilepton final state in proton-proton collisions at $\sqrt{s} =$ 13 TeV
A measurement of the Higgs boson Yukawa coupling to the top quark is presented using proton-proton collision data at $\sqrt{s} =$ 13 TeV, corresponding to an integrated luminosity of 137 fb$^{-1}$, recorded with the CMS detector. The coupling strength with respect to the standard model value, $Y_\mathrm{t}$, is determined from kinematic distributions in $\mathrm{t\bar{t}}$ final states containing ee, $μμ$, or e$μ$ pairs. Variations of the Yukawa coupling strength lead to modified distributions for $\mathrm{t\bar{t}}$ production. In particular, the distributions of the mass of the $\mathrm{t\bar{t}}$ system and the rapidity difference of the top quark and antiquark are sensitive to the value of $Y_\mathrm{t}$. The measurement yields a best fit value of $Y_\mathrm{t} =$ 1.16 $^{+0.24}_{-0.35}$, bounding $Y_\mathrm{t}$ $\lt$ 1.54 at a 95% confidence level.
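For reference, $Y_\mathrm{t}$ is the usual multiplicative modifier of the standard model top quark Yukawa coupling; schematically (a standard definition, not spelled out in the abstract above),
$$ Y_\mathrm{t} \;=\; \frac{y_\mathrm{t}}{y_\mathrm{t}^{\mathrm{SM}}}, \qquad y_\mathrm{t}^{\mathrm{SM}} \;=\; \frac{\sqrt{2}\,m_\mathrm{t}}{v}, $$
so that $Y_\mathrm{t} = 1$ corresponds to the standard model expectation.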
DOI: 10.1016/j.nima.2009.06.086
2010
Cited 18 times
Development of muon drift-tube detectors for high-luminosity upgrades of the Large Hadron Collider
The muon detectors of the experiments at the Large Hadron Collider (LHC) have to cope with unprecedentedly high neutron and gamma ray background rates. In the forward regions of the muon spectrometer of the ATLAS detector, for instance, counting rates of 1.7 kHz/cm$^2$ are reached at the LHC design luminosity. For high-luminosity upgrades of the LHC, up to 10 times higher background rates are expected, which require replacement of the muon chambers in the critical detector regions. Tests at the CERN Gamma Irradiation Facility showed that drift-tube detectors with 15 mm diameter aluminum tubes operated with Ar:CO2 (93:7) gas at 3 bar and a maximum drift time of about 200 ns provide efficient and high-resolution muon tracking up to the highest expected rates. For 15 mm tube diameter, space charge effects deteriorating the spatial resolution at high rates are strongly suppressed. The sense wires have to be positioned in the chamber with an accuracy of better than 50 μm in order to achieve the desired spatial resolution of a chamber of 50 μm up to the highest rates. We report on the design, construction and test of prototype detectors which fulfill these requirements.
DOI: 10.1007/s41781-020-00051-x
2021
Cited 7 times
The CMS monitoring infrastructure and applications
Abstract The globally distributed computing infrastructure required to cope with the multi-petabyte datasets produced by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN comprises several subsystems, such as workload management, data management, data transfers, and submission of users’ and centrally managed production requests. To guarantee the efficient operation of the whole infrastructure, CMS monitors all subsystems according to their performance and status. Moreover, we track key metrics to evaluate and study the system performance over time. The CMS monitoring architecture allows both real-time and historical monitoring of a variety of data sources. It relies on scalable and open source solutions tailored to satisfy the experiment’s monitoring needs. We present the monitoring data flow and software architecture for the CMS distributed computing applications. We discuss the challenges, components, current achievements, and future developments of the CMS monitoring infrastructure.
DOI: 10.1016/j.nima.2010.06.306
2011
Cited 10 times
Development of fast high-resolution muon drift-tube detectors for high counting rates
Pressurized drift-tube chambers are efficient detectors for high-precision tracking over large areas. The Monitored Drift-Tube (MDT) chambers of the muon spectrometer of the ATLAS detector at the Large Hadron Collider (LHC) reach a spatial resolution of 35 μm and almost 100% tracking efficiency with 6 layers of 30 mm diameter drift tubes operated with an Ar:CO2 (93:7) gas mixture at 3 bar and a gas gain of 20 000. The ATLAS MDT chambers are designed to cope with background counting rates due to neutrons and γ rays of up to about 300 kHz per tube, which will be exceeded for LHC luminosities larger than the design value of 10$^{34}$ cm$^{-2}$ s$^{-1}$. Decreasing the drift-tube diameter to 15 mm while keeping the other parameters, including the gas gain, unchanged reduces the maximum drift time from about 700 to 200 ns and the drift-tube occupancy by a factor of 7. New drift-tube chambers for the endcap regions of the ATLAS muon spectrometer have been designed. A prototype chamber consisting of 12 × 8 layers of 15 mm diameter drift tubes of 1 m length has been constructed with a sense wire positioning accuracy of 20 μm. The 15 mm diameter drift tubes have been tested with cosmic rays in the Gamma Irradiation Facility at CERN at γ counting rates of up to 1.85 MHz.
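One plausible back-of-the-envelope reading of the factor of 7 quoted above (not a calculation taken from the paper): halving the tube diameter roughly halves the background rate seen by each tube, since twice as many tubes cover the same area, while the shorter maximum drift time shrinks the window during which a background hit can occupy the tube,
$$ \frac{\text{occupancy}_{30\,\mathrm{mm}}}{\text{occupancy}_{15\,\mathrm{mm}}} \;\approx\; \frac{700\ \mathrm{ns}}{200\ \mathrm{ns}} \times 2 \;\approx\; 7 . $$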
DOI: 10.1051/epjconf/202024503017
2020
Cited 7 times
Operational Intelligence for Distributed Computing Systems for Exascale Science
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets require a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals, by allowing timely delivery of scientific results. However, a substantial number of interventions from software developers, shifters, and operational teams is needed to efficiently manage such heterogeneous infrastructures. A wealth of operational data can be exploited to increase the level of automation in computing operations by using adequate techniques, such as machine learning (ML), tailored to solve specific problems. The Operational Intelligence project is a joint effort from various WLCG communities aimed at increasing the level of automation in computing operations. We discuss how state-of-the-art technologies can be used to build general solutions to common problems and to reduce the operational cost of the experiment computing infrastructure.
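As a minimal sketch of the kind of ML technique alluded to above, the example below flags anomalous operational samples with scikit-learn's IsolationForest. The feature set and data are invented; this is not the Operational Intelligence production code.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical operational metrics per time bin:
# [job failure fraction, queue length (thousands), transfer latency (s)]
rng = np.random.default_rng(42)
normal = rng.normal(loc=[0.05, 5.0, 2.0], scale=[0.02, 1.0, 0.5], size=(500, 3))
incidents = np.array([[0.60, 25.0, 30.0], [0.45, 18.0, 12.0]])  # injected anomalies
samples = np.vstack([normal, incidents])

# Fit an Isolation Forest; contamination is a rough guess of the anomaly rate.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(samples)  # -1 = anomaly, +1 = normal

print("flagged time bins:", np.where(labels == -1)[0])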
DOI: 10.48550/arxiv.2110.05916
2021
Cited 6 times
First search for exclusive diphoton production at high mass with tagged protons in proton-proton collisions at $\sqrt{s} =$ 13 TeV
A search for exclusive two-photon production via photon exchange in proton-proton collisions, pp $\to$ p$γγ$p with intact protons, is presented. The data correspond to an integrated luminosity of 9.4 fb$^{-1}$ collected in 2016 using the CMS and TOTEM detectors at a center-of-mass energy of 13 TeV at the LHC. Events are selected with a diphoton invariant mass above 350 GeV and with both protons intact in the final state, to reduce backgrounds from strong interactions. The events of interest are those where the invariant mass and rapidity calculated from the momentum losses of the forward-moving protons match the mass and rapidity of the central, two-photon system. No events are found that satisfy this condition. Interpreting this result in an effective dimension-8 extension of the standard model, the first limits are set on the two anomalous four-photon coupling parameters. With the other parameter constrained to its standard model value, the limits at 95% CL are $|\zeta_1| <$ 2.9 $\times$ 10$^{-13}$ GeV$^{-4}$ and $|\zeta_2| <$ 6.0 $\times$ 10$^{-13}$ GeV$^{-4}$.
DOI: 10.1016/j.physletb.2007.02.044
2007
Cited 7 times
Erratum to: “Photon helicity in Λb → pKγ decays” [Phys. Lett. B 645 (2007) 204]
DOI: 10.1088/1742-6596/396/3/032111
2012
Cited 5 times
Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud
Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments' grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).
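To make the exclusion idea concrete, the sketch below implements a toy policy that blacklists a site once its recent functional-test success rate drops below a threshold. The window length, threshold, and site names are hypothetical; the actual HammerCloud/ATLAS exclusion policies are tuned as described in the text.

from collections import deque

WINDOW = 20          # number of recent test results kept per site (hypothetical)
MIN_SUCCESS = 0.80   # exclusion threshold on the success rate (hypothetical)

class SiteStatus:
    def __init__(self):
        self.results = deque(maxlen=WINDOW)

    def record(self, success: bool):
        self.results.append(success)

    @property
    def success_rate(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    @property
    def excluded(self):
        # Only act once the window is reasonably populated.
        return len(self.results) >= WINDOW // 2 and self.success_rate < MIN_SUCCESS

sites = {"SITE_A": SiteStatus(), "SITE_B": SiteStatus()}
for outcome in [True] * 18 + [False] * 2:
    sites["SITE_A"].record(outcome)
for outcome in [True] * 8 + [False] * 12:
    sites["SITE_B"].record(outcome)

for name, status in sites.items():
    state = "EXCLUDED" if status.excluded else "ok"
    print(name, f"success={status.success_rate:.2f}", state)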
DOI: 10.1088/1742-6596/513/3/032030
2014
Cited 4 times
Grid site testing for ATLAS with HammerCloud
With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling virtual organisations (VOs) and site administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test workflows. These new workflows include, for example, tests of the ATLAS nightly build system, the ATLAS Monte Carlo production system, the XRootD federation (FAX), and new site stress-test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.
DOI: 10.1016/j.nuclphysbps.2011.03.160
2011
Cited 4 times
Development of Precision Muon Drift Tube Detectors for the High-Luminosity Upgrade of the LHC
For use at the future Super-LHC a new type of muon detector has been developed. It is based on the proven MDT drift tube design, but with tubes of half the diameter, leading to higher rate capabilities by an order of magnitude. We present test results on efficiency and position resolution at high background rates and describe the practical implementation in a real-size prototype.
DOI: 10.1016/j.nima.2004.07.282
2004
Cited 6 times
TELL1: development of a common readout board for LHCb
LHCb is one of the four experiments currently under construction at LHC (Large Hadron Collider) at CERN, and its aim is the study of b-quark physics (LHCb Collaboration, CERN-LHCC/98-4). LHCb trigger strategy is based on three levels, and will reduce the event rate from 40 MHz to a few hundred Hz (LHCb Collaboration, CERN/LHCC 2003-031, LHCb TDR 10, September 2003). The first two levels (L0 and L1) will use signals from some part of the detector in order to take fast decisions, while the last one, called High Level Trigger (HLT), will have access to the full event data. An “off detector” readout board (TELL1) has been developed and will be used by the majority of LHCb subdetectors. It takes L0 accepted data as input and, after data processing which includes event synchronization, L1 Trigger pre-processing and zero suppression, L1 buffering, and HLT zero suppression, the output is sent to L1 Trigger and HLT .
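To illustrate the zero-suppression step named above in the simplest terms, the sketch below subtracts a per-channel pedestal and keeps only channels above a threshold. This is a conceptual Python example, not the FPGA logic of the TELL1 board, and the pedestal and threshold values are made up.

# Conceptual zero suppression: subtract the pedestal and keep only channels
# whose amplitude exceeds a threshold, recording (channel, amplitude) pairs.
PEDESTALS = [100, 102, 98, 101, 99, 100, 103, 97]   # per-channel pedestals (made up)
THRESHOLD = 10                                      # ADC counts above pedestal (made up)

def zero_suppress(raw_samples):
    suppressed = []
    for channel, (adc, pedestal) in enumerate(zip(raw_samples, PEDESTALS)):
        amplitude = adc - pedestal
        if amplitude > THRESHOLD:
            suppressed.append((channel, amplitude))
    return suppressed

event = [101, 135, 99, 100, 150, 102, 104, 96]      # one raw event (made up)
print(zero_suppress(event))                         # -> [(1, 33), (4, 51)]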
DOI: 10.1109/nssmic.2008.4775036
2008
Cited 4 times
Precision drift-tube chambers for the ATLAS muon spectrometer at super-LHC
The precise measurement of muon momenta up to 1 TeV/c is one of the most challenging aspects of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The ATLAS muon spectrometer is equipped with three layers of Monitored Drift Tube (MDT) chambers in a magnetic field generated by a superconducting air-core magnet system, which are designed to cope with neutron background counting rates of up to 500 cm$^{-2}$ s$^{-1}$. However, 10 times higher background rates are to be expected at Super-LHC, the high-luminosity upgrade of the LHC. We investigate the possibility of increasing the rate capability of the drift tube detectors by reducing the tube diameter from the current value of 30 mm to 15 mm. Cosmic ray test results of a prototype detector with 15 mm diameter drift tubes in the presence of γ ray fluxes of up to 2000 cm$^{-2}$ s$^{-1}$ are discussed.
DOI: 10.1088/1742-6596/513/6/062031
2014
Cited 3 times
Testing as a Service with HammerCloud
HammerCloud was designed to meet the needs of the grid community by testing resources and automating operations from a user perspective. Recent developments in the IT space propose a shift to software-defined data centres, in which every layer of the infrastructure can be offered as a service.
DOI: 10.1088/1742-6596/396/3/032066
2012
Cited 3 times
Improving ATLAS grid site reliability with functional tests using HammerCloud
With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate each site's capability to successfully execute user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performances. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thus preventing user or production jobs from being sent to problematic sites.
2017
Cited 3 times
Measurement of jet fragmentation in Pb+Pb and $pp$ collisions at $\sqrt{s_\mathrm{NN}} = 2.76$ TeV with the ATLAS detector
2019
Cited 3 times
Comparison of fragmentation functions for light-quark- and gluon-dominated jets from $pp$ and Pb+Pb collisions in ATLAS
DOI: 10.1109/nssmic.2007.4436512
2007
Cited 3 times
Development of precision drift tube detectors for very high background rates at the super-LHC
The muon spectrometer of the ATLAS experiment at the Large Hadron Collider (LHC) is instrumented with three layers of precision tracking detectors, each consisting of 6 or 8 layers of pressurized aluminum drift tubes of 30 mm diameter. The magnetic field of the spectrometer is generated by superconducting air-core toroid magnets. Already at the LHC design luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$, the ATLAS muon chambers have to cope with unprecedentedly high neutron and gamma ray background rates of up to 500 Hz/cm$^2$ in the inner and middle chamber layers in the forward regions of the spectrometer. At a high-luminosity upgrade of the LHC (S-LHC), the background rates are expected to increase by an order of magnitude. The resulting high occupancies lead to a significant deterioration of the muon detection efficiency, compromising the physics goals. The possibility to improve the muon detection efficiency by reducing the diameter of the drift tubes has been investigated. We report on the design and test results of prototype drift-tube detectors with thin-walled aluminum tubes of 15 mm diameter.
DOI: 10.1088/1742-6596/331/7/072050
2011
Distributed analysis functional testing using GangaRobot in the ATLAS experiment
Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the workload on the available resources.
2017
Search for the direct production of charginos and neutralinos in $\sqrt{s} = $ 13 TeV $pp$ collisions with the ATLAS detector
2017
Measurement of longitudinal flow de-correlations in Pb+Pb collisions at $\sqrt{s_\mathrm{NN}} = 2.76$ and 5.02 TeV with the ATLAS detector
2017
A search for $B-L$ $R$-parity-violating top squarks in $\sqrt{s} = 13$ TeV $pp$ collisions with the ATLAS experiment
2018
ATLAS Analytics and Machine Learning Platforms
DOI: 10.3389/fdata.2021.753409
2022
Preparing Distributed Computing Operations for the HL-LHC Era With Operational Intelligence
As a joint effort from various communities involved in the Worldwide LHC Computing Grid, the Operational Intelligence project aims at increasing the level of automation in computing operations and reducing human interventions. The distributed computing systems currently deployed by the LHC experiments have proven to be mature and capable of meeting the experimental goals, by allowing timely delivery of scientific results. However, a substantial number of interventions from software developers, shifters, and operational teams is needed to efficiently manage such heterogeneous infrastructures. Under the scope of the Operational Intelligence project, experts from several areas have gathered to propose and work on "smart" solutions. Machine learning, data mining, log analysis, and anomaly detection are only some of the tools we have evaluated for our use cases. In this community study contribution, we report on the development of a suite of operational intelligence services to cover various use cases: workload management, data management, and site operations.
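As one hedged illustration of the log-analysis use case mentioned above, the snippet below groups a few invented error messages with TF-IDF features and k-means clustering; it is illustrative only and not the Operational Intelligence implementation.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented error messages standing in for workload/data-management logs.
logs = [
    "transfer failed: connection timed out to storage endpoint",
    "transfer failed: connection refused by storage endpoint",
    "job killed: exceeded memory limit on worker node",
    "job killed: exceeded wallclock limit on worker node",
    "checksum mismatch detected for output file",
]

features = TfidfVectorizer().fit_transform(logs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for label, message in sorted(zip(labels, logs)):
    print(label, message)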
DOI: 10.1088/1742-6596/513/3/032053
2014
The ATLAS distributed analysis system
In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale, the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.
DOI: 10.5075/epfl-thesis-3602
2006
Contribution to the Development of the LHCb acquisition electronics and Study of polarized radiative $\Lambda_b$ decays
LHCb is one of the four main experiments that will take place at the future Large Hadron Collider at CERN. The data taking is foreseen to start in 2007. The LHCb detector is a forward single-arm spectrometer dedicated to precision measurements of CP violation and rare decays in the b-quark sector. The goal is to over-constrain the Standard Model (SM) and – hopefully – to exhibit inconsistencies which will be a signal of new physics beyond. Building such a large experiment as LHCb is a big challenge, and many contributions are needed. The Lausanne institute is responsible for the development of a common off-detector readout board (TELL1), which provides the interface to the copper and optical links used for the detector readout, and outputs them to the data acquisition system, after performing intensive processing. It performs: event synchronization, pedestal calculation and subtraction, common mode subtraction and monitoring, and zero suppression. The TELL1 board will be used by the majority of the LHCb subdetectors. We present here a contribution to the R&D necessary for the realization of the final board. In particular the feasibility of a mixed architecture using DSP and FPGA technologies has been studied. We show that the performance of this architecture satisfies LHCb electronics requirements at the time of the study (2002). Within the rich LHCb physics program, b → sγ transitions represent an interesting sector to look for evidence of physics beyond the SM. Even if the measured decay rate is in good agreement with the SM prediction up to now, new physics may still be hidden in more subtle observables. One of the most promising is the polarization of the emitted photon, which is predicted to be mainly left-handed in the SM. However right-handed components are present in a variety of new physics models. The photon polarization can be tested at LHCb by exploiting decays of polarized b baryons. If the initial baryon is polarized, asymmetries appear in the final-state angular distributions, which can be used to probe the chirality of the effective Hamiltonian, and possibly to unveil new sources of CP violation. We present a phenomenological approach to the study of radiative decays of the type Λb → Λ(X)γ, where Λ(X) can be any Λ baryon of mass X. Calculations of the angular distributions are carried out employing the helicity formalism, for decays which involve Λ baryons of spin 1/2 and 3/2. Finally, detailed simulation studies of these channels in the LHCb environment allow us to assess the LHCb sensitivity to the photon polarization in b → s transitions.
2004
TELL1: a common readout board for LHCb
LHCb is one of the four experiments currently under construction at LHC (Large Hadron Collider) at CERN, and its aim is the study of b-quark physics [1]. LHCb trigger strategy is based on three levels, and will reduce the event rate from 40 MHz to a few hundred Hz [2]. The first two levels (L0 and L1) will use signals from some part of the detector in order to take fast decisions, while the last one, called High Level Trigger (HLT), will have access to the full event data. An off-detector readout board (TELL1) has been developed and will be used by the majority of LHCb subdetectors. It will take L0 accepted data as input, and output them to L1 Trigger and HLT after data processing which includes event synchronization, L1 Trigger pre-processing and zero suppression, L1 buffering, and HLT zero suppression.
DOI: 10.1016/s0168-9002(04)01715-2
2004
TELL1: development of a common readout board for LHCb
LHCb is one of the four experiments currently under construction at LHC (Large Hadron Collider) at CERN, and its aim is the study of b-quark physics (LHCb Collaboration, CERN-LHCC/98-4). LHCb trigger strategy is based on three levels, and will reduce the event rate from 40 MHz to a few hundred Hz (LHCb Collaboration, CERN/LHCC 2003-031, LHCb TDR 10, September 2003). The first two levels (L0 and L1) will use signals from some part of the detector in order to take fast decisions, while the last one, called High Level Trigger (HLT), will have access to the full event data. An “off detector” readout board (TELL1) has been developed and will be used by the majority of LHCb subdetectors. It takes L0 accepted data as input and, after data processing which includes event synchronization, L1 Trigger pre-processing and zero suppression, L1 buffering, and HLT zero suppression, the output is sent to L1 Trigger and HLT .
DOI: 10.1109/nssmic.2010.5874110
2010
Performance of fast high-resolution Muon drift tube chambers for LHC upgrades
Monitored drift tube chambers are used as precision tracking detectors in the muon spectrometer of the ATLAS experiment at the LHC at CERN. These chambers provide a spatial resolution of 35 μm and a tracking efficiency of close to 100% up to background rates of 0.5 kHz/cm$^2$, the former being limited at higher rates mainly due to space charge effects and the latter due to the maximum drift time of 700 ns. For LHC upgrades, a faster drift tube chamber has been developed, using drift tubes with a diameter of 15 mm instead of 30 mm. The increased channel density and shorter drift time of about 200 ns raise the rate capability to about 10 kHz/cm$^2$, while retaining the spatial resolution. A prototype chamber with trapezoidal shape consisting of 2×8 layers of 15 mm diameter drift tubes with an active surface of 0.8 m$^2$ has been constructed. This chamber has been tested at CERN with a 180 GeV muon beam (H8) and with cosmic ray muons at the Gamma Irradiation Facility (GIF) at high γ radiation rates.
DOI: 10.1007/978-3-540-32841-4_67
2007
The LHCb trigger and readout
We give a brief overview of the LHCb readout scheme and trigger strategy. The latter is based on three levels designed to reduce the event rate from 40 MHz to 2 kHz.
DOI: 10.1051/epjconf/201921404025
2019
Performance and impact of dynamic data placement in ATLAS
For high-throughput computing the efficient use of distributed computing resources relies on an evenly distributed workload, which in turn requires wide availability of input data that is used in physics analysis. In ATLAS, the dynamic data placement agent C3PO was implemented in the ATLAS distributed data management system Rucio, which identifies popular data and creates additional, transient replicas to make data more widely and more reliably available. This contribution presents studies on the performance of C3PO and the impact it has on throughput rates of distributed computing in ATLAS. Furthermore, results of a study on popularity prediction using machine learning techniques are presented.
DOI: 10.1051/epjconf/201921403049
2019
Operation of the ATLAS Distributed Computing
ATLAS is one of the four experiments collecting data from the proton-proton collisions at the Large Hadron Collider. The offline processing and storage of the data is handled by a custom heterogeneous distributed computing system. This paper summarizes some of the challenges and operations-driven solutions introduced in the system.
DOI: 10.1051/epjconf/202024503022
2020
Big data solutions for CMS computing monitoring and analytics
The CMS computing infrastructure is composed of several subsystems that accomplish complex tasks such as workload and data management, transfers, and submission of user and centrally managed production requests. Until recently, most subsystems were monitored through custom tools and web applications, and logging information was scattered over several sources and typically accessible only by experts. In the last year, CMS computing fostered the adoption of common big data solutions based on open-source, scalable, and NoSQL tools, such as Hadoop, InfluxDB, and ElasticSearch, available through the CERN IT infrastructure. Such systems allow for the easy deployment of monitoring and accounting applications using visualisation tools such as Kibana and Grafana. Alarms can be raised when anomalous conditions in the monitoring data are met, and the relevant teams are automatically notified. Data sources from different subsystems are used to build complex workflows and predictive analytics (such as data popularity, smart caching, transfer latency), and for performance studies. We describe the full software architecture and data flow, the CMS computing data sources and monitoring applications, and show how the stored data can be used to gain insights into the various subsystems by exploiting scalable solutions based on Spark.
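As a minimal sketch of the Spark-based analytics mentioned above, the snippet below reads JSON monitoring records from HDFS and aggregates job failures per site; the path and field names are placeholders, not the actual CMS schemas.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("monitoring-aggregation").getOrCreate()

# Placeholder HDFS path and field names; real CMS record schemas differ.
records = spark.read.json("hdfs:///project/monitoring/jobs/2020/*.json")

summary = (records
           .groupBy("site")
           .agg(F.count(F.lit(1)).alias("jobs"),
                F.sum(F.when(F.col("status") == "failed", 1).otherwise(0)).alias("failed")))

summary.orderBy(F.desc("failed")).show(20, truncate=False)
spark.stop()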
DOI: 10.1051/epjconf/202125102004
2021
The evolution of the CMS monitoring infrastructure
The CMS experiment at the CERN LHC (Large Hadron Collider) relies on a distributed computing infrastructure to process the multi-petabyte datasets where the collision and simulated data are stored. A scalable and reliable monitoring system is required to ensure efficient operation of the distributed computing services, and to provide a comprehensive set of measurements of the system performances. In this paper we present the full stack of CMS monitoring applications, partly based on the MONIT infrastructure, a suite of monitoring services provided by the CERN IT department. These are complemented by a set of applications developed over the last few years by CMS, leveraging open-source technologies that are industry-standards in the IT world, such as Kubernetes and Prometheus. We discuss how this choice helped the adoption of common monitoring solutions within the experiment, and increased the level of automation in the operation and deployment of our services.
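As a small example of the Prometheus-style instrumentation mentioned above, the snippet below exposes two metrics from a Python service with the prometheus_client library; the metric names and port are placeholders rather than those of the CMS services.

import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Placeholder metrics; real CMS services export their own names and labels.
REQUESTS = Counter("demo_requests_total", "Requests processed by this service")
QUEUE_DEPTH = Gauge("demo_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)   # metrics become scrapeable at :8000/metrics
    while True:
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))
        time.sleep(5)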
2016
Performance of the ATLAS Muon Drift-Tube Chambers at High Background Rates and in Magnetic Fields
The ATLAS muon spectrometer uses drift-tube chambers for precision tracking. The performance of these chambers in the presence of magnetic field and high radiation fluxes is studied in this article using test-beam data recorded in the Gamma Irradiation Facility at CERN. The measurements are compared to detailed predictions provided by the Garfield drift-chamber simulation programme.
DOI: 10.1088/1742-6596/664/6/062004
2015
Improved ATLAS HammerCloud Monitoring for Local Site Administration
Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure.
DOI: 10.1088/1742-6596/664/3/032020
2015
Distributed analysis in ATLAS
The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.
2014
Electroweak SUSY searches at the LHC
2016
Search for heavy long-lived charged $\mathrm{R}$-hadrons with the ATLAS detector in 3.2 $\mathrm{fb^{-1}}$ of proton-proton collision data at $\mathrm{\sqrt{s} = 13}$ TeV
2016
Distributed analysis challenges in ATLAS
DOI: 10.3204/desy-proc-2012-02/121
2012
Searches for strong R-parity conserving SUSY production at the LHC with the ATLAS detector
2011
Search for Supersymmetry in events with large missing transverse momentum and two leptons in p-p collisions at 7 TeV with the ATLAS detector
2011
Identification of leptons in the search for supersymmetry in final states with two leptons with the ATLAS detector
2011
Search for supersymmetry in events with three leptons with ATLAS at the LHC
2011
The consistency service of the ATLAS distributed data management system
2013
Validation of ATLAS grid sites with HammerCloud
2012
Validation of ATLAS distributed analysis resources using HammerCloud
2011
Distributed analysis functional testing using GangaRobot in the ATLAS experiment
DOI: 10.1063/1.3327573
2010
New Developments in Data-driven Background Determinations for SUSY Searches in ATLAS
Any discovery of new physics relies on detailed understanding of the Standard Model background. At the LHC, we expect to extract the backgrounds from the data itself, with minimum reliance on Monte Carlo simulations. We describe new developments in ATLAS on such data‐driven techniques, and prospects for their application on first data.
DOI: 10.1088/1742-6596/898/5/052005
2017
Evolution of user analysis on the grid in ATLAS
More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.
2017
Measurement of detector-corrected observables sensitive to the anomalous production of events with jets and large missing transverse momentum in $pp$ collisions at $\sqrt{s} = 13$ TeV using the ATLAS detector
2010
Search for SUSY with two equally charged leptons
2010
Parallel computing of ATLAS data with PROOF
2010
$t\bar{t}$ background analysis for the inclusive 1-lepton SUSY search
2010
Functional testing of the ATLAS distributed analysis resources with Ganga
DOI: 10.1063/1.3051935
2008
Data-driven estimations of Standard Model backgrounds to SUSY searches in ATLAS
At the Large Hadron Collider (LHC), the strategy for the observation of supersymmetry in the early days is mainly based on inclusive searches. Major backgrounds are constituted by mismeasured multi‐jet events and W, Z and t quark production in association with jets. We describe recent work performed in the ATLAS Collaboration to derive these backgrounds from the first ATLAS data.
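A textbook example of such a data-driven technique, given here as general background rather than as one of the specific methods referred to above, is the ABCD method: if two discriminating variables are approximately uncorrelated for the background, three background-dominated control regions B, C and D predict the background yield in the signal region A as
$$ N_A^{\mathrm{bkg}} \;\approx\; \frac{N_B\,N_C}{N_D}\,. $$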
DOI: 10.5170/cern-2008-008.359
2008
Upgrade of the Readout Electronics of the ATLAS MDT Detector for SLHC
Simulation predicts a high level of ionizing radiation in the ATLAS experimental hall during LHC operation. This radiation will act as a source of background signals to the four subsystems of the ATLAS muon detector. We present the performance of the Monitored Drift Tube detector (MDT) under these background conditions and discuss the consequences for the much higher background rates at SLHC with respect to tracking efficiency, resolution and readout bandwidth. For rates beyond the expected LHC levels, we discuss options to improve the performance of detector and electronics.
2008
Data-driven estimations of Standard Model backgrounds to SUSY searches
DOI: 10.22323/1.414.0221
2022
OpenForBC, the GPU partitioning framework
In recent years, the compute performance of GPUs (Graphics Processing Units) has increased dramatically, especially in comparison to that of CPUs (Central Processing Units). GPUs are nowadays the hardware of choice for scientific applications involving massive parallel operations, such as deep learning (DL) and Artificial Intelligence (AI) workflows. Large-scale computing infrastructures such as on-premises data centers, HPC (High Performance Computing) centers, and public or private clouds offer high performance GPUs to researchers. The programming paradigms for GPUs vary significantly according to the GPU model and vendor, often posing a barrier to their use in scientific applications. In addition, powerful GPUs are hardly saturated by typical computing applications. GPU partitioning may be the key to exploiting GPU computing power in an efficient and affordable manner. Multiple vendors have proposed custom solutions to allow for GPU partitioning, often with poor portability across different platforms and OSs (Operating Systems). OpenForBC (Open For Better Computing) is an open source software framework that allows for effortless and unified partitioning of GPUs from different vendors in Linux KVM virtualized environments. OpenForBC supports dynamic partitioning for various configurations of the GPU, which can be used to optimize the utilization of GPU kernels from different users or different applications. For example, training complex DL models may require a full GPU, but inference may only need a fraction of it, leaving free resources for multiple cloned instances or other tasks. In this contribution we describe the most common GPU partitioning options available on the market, discuss the implementation of the OpenForBC interface, and show the results of benchmark tests in typical use case scenarios.
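As a small, hedged illustration of the inventory step any partitioning framework needs first, the snippet below lists the GPUs visible on a host by calling nvidia-smi; the vendor-specific partitioning commands themselves are handled by OpenForBC and are not reproduced here.

import subprocess

def list_gpus():
    """Return the GPUs visible on this host, one description string per device."""
    try:
        out = subprocess.run(["nvidia-smi", "-L"], capture_output=True,
                             text=True, check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []  # no NVIDIA driver/tools available on this host
    return [line for line in out.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    gpus = list_gpus()
    print(f"{len(gpus)} GPU(s) found")
    for gpu in gpus:
        print(" ", gpu)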
DOI: 10.1109/escience55777.2022.00010
2022
Keynotes
Since the start of data taking at the Large Hadron Collider (LHC) at CERN in 2009, the four LHC experiments (ALICE, ATLAS, CMS and LHCb) have collected more than an Exabyte of physics data. Storing and processing such a large amount of data requires a distributed computing infrastructure, the Worldwide LHC Computing Grid (WLCG), made up of almost 150 computing facilities spread in 42 countries around the world. The current computing infrastructures are expected to grow by an order of magnitude in size and complexity for the HL-LHC (the high luminosity upgrade of the LHC) era (~2030). In this talk, I will review the challenges of designing, deploying and operating a distributed and heterogeneous computing infrastructure, composed of on-premises data centers, public and private clouds, and HPC centers. We will discover how machine learning and artificial intelligence techniques can be exploited to address such complex challenges, from data taking to data processing to data analysis in WLCG.
2018
Improving ATLAS computing resource utilization with HammerCloud
DOI: 10.18154/rwth-2019-06073
2019
Combinations of single-top-quark production cross-section measurements and $|f_{\rm LV}V_{tb}|$ determinations at $\sqrt{s}=7$ and 8 TeV with the ATLAS and CMS experiments
2020
The CMS monitoring infrastructure and applications
The globally distributed computing infrastructure required to cope with the multi-petabyte datasets produced by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN comprises several subsystems, such as workload management, data management, data transfers, and submission of users' and centrally managed production requests. The performance and status of all subsystems must be constantly monitored to guarantee the efficient operation of the whole infrastructure. Moreover, key metrics need to be tracked to evaluate and study the system performance over time. The CMS monitoring architecture allows both real-time and historical monitoring of a variety of data sources and is based on scalable and open source solutions tailored to satisfy the experiment's monitoring needs. We present the monitoring data flow and software architecture for the CMS distributed computing applications. We discuss the challenges, components, current achievements, and future developments of the CMS monitoring infrastructure.
2020
Search for strong electric fields in PbPb collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV using azimuthal anisotropy of prompt $\mathrm{D}^0$ and $\overline{\mathrm{D}}^0$ mesons
The strong Coulomb field created in ultrarelativistic heavy ion collisions is expected to produce a rapidity-dependent difference ($\Delta v_2$) in the second Fourier coefficient of the azimuthal distribution (elliptic flow, $v_2$) between $\mathrm{D}^0$ ($\mathrm{\bar{u}c}$) and $\overline{\mathrm{D}}^0$ ($\mathrm{u\bar{c}}$) mesons. Motivated by the search for evidence of this field, the CMS detector at the LHC is used to perform the first measurement of $\Delta v_2$. The rapidity-averaged value is found to be $\langle\Delta v_2 \rangle =$ 0.001 $\pm$ 0.001 (stat) $\pm$ 0.003 (syst) in PbPb collisions at $\sqrt{s_\mathrm{NN}} =$ 5.02 TeV. In addition, the influence of the collision geometry is explored by measuring the $\mathrm{D}^0$ and $\overline{\mathrm{D}}^0$ mesons $v_2$ and triangular flow coefficient ($v_3$) as functions of rapidity, transverse momentum ($p_\mathrm{T}$), and event centrality (a measure of the overlap of the two Pb nuclei). A clear centrality dependence of prompt $\mathrm{D}^0$ meson $v_2$ values is observed, while the $v_3$ is largely independent of centrality. These trends are consistent with expectations of flow driven by the initial-state geometry.
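For reference, the flow coefficients quoted above are the Fourier coefficients of the azimuthal particle distribution; schematically (a standard definition, not taken from the abstract),
$$ \frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n\ge 1} v_n \cos\!\big[n\,(\varphi-\Psi_n)\big], $$
where $\Psi_n$ is the $n$-th order symmetry-plane angle, so that $v_2$ is the elliptic and $v_3$ the triangular flow coefficient.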
DOI: 10.1051/epjconf/202024508016
2020
Delivering a machine learning course on HPC resources
In recent years, proficiency in data science and machine learning (ML) has become one of the most requested skills for jobs in both industry and academia. Machine learning algorithms typically require large sets of data to train the models and extensive usage of computing resources, both for training and inference. Especially for deep learning algorithms, training performances can be dramatically improved by exploiting Graphics Processing Units (GPUs). The needed skill set for a data scientist is therefore extremely broad, and ranges from knowledge of ML models to distributed programming on heterogeneous resources. While most of the available training resources focus on ML algorithms and tools such as TensorFlow, we designed a course for doctoral students where model training is tightly coupled with underlying technologies that can be used to dynamically provision resources. Throughout the course, students have access to a dedicated cluster of computing nodes on local premises. A set of libraries and helper functions is provided to execute a parallelized ML task by automatically deploying a Spark driver and several Spark execution nodes as Docker containers. Task scheduling is managed by an orchestration layer (Kubernetes). This solution automates the delivery of the software stack required by a typical ML workflow and enables scalability by allowing the execution of ML tasks, including training, over commodity (i.e. CPUs) or high-performance (i.e. GPUs) resources distributed over different hosts across a network. The adaptation of the same model on OCCAM, the HPC facility at the University of Turin, is currently under development.
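As a minimal sketch of the kind of parallelized ML task described above, the example below fans a small hyperparameter grid out over Spark executors and collects one score per configuration; the toy scoring function stands in for a real training step, and none of the course's helper libraries are used here.

from itertools import product

from pyspark.sql import SparkSession

# Toy hyperparameter grid; in a real exercise the map step would run training.
grid = list(product([0.01, 0.1, 1.0],      # learning rate
                    [16, 32, 64]))         # batch size

def evaluate(params):
    """Stand-in for training + validation; returns (params, score)."""
    lr, batch = params
    score = 1.0 / (1.0 + abs(lr - 0.1)) + 1.0 / batch   # arbitrary toy metric
    return params, score

spark = SparkSession.builder.appName("toy-grid-search").getOrCreate()
results = spark.sparkContext.parallelize(grid, numSlices=len(grid)).map(evaluate).collect()
spark.stop()

best_params, best_score = max(results, key=lambda r: r[1])
print("best:", best_params, "score:", round(best_score, 3))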
DOI: 10.18154/rwth-2021-05460
2020
Angular analysis of the decay B$^+$ $\to$ K$^*$(892)$^+\mu^+\mu^-$ in proton-proton collisions at $\sqrt{s} =$ 8 TeV
DOI: 10.3204/pubdb-2020-02623
2020
Measurement of the CP-violating phase ${\phi_{\mathrm{s}}}$ in the ${\mathrm{B^{0}_{s}}\to\mathrm{J}/\psi\,\phi(1020) \to \mu^{+}\mu^{-}\,{\mathrm{K^{+}}\mathrm{K^{-}}} } $ channel in proton-proton collisions at $\sqrt{s} = $ 13 TeV
2006
Reconstruction of the decays $\Lambda_b \to \Lambda(1115) \gamma$ and $\Lambda_b \to \Lambda(1670) \gamma$ at LHCb
DOI: 10.48550/arxiv.2007.03630
2020
The CMS monitoring infrastructure and applications
The globally distributed computing infrastructure required to cope with the multi-petabyte datasets produced by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN comprises several subsystems, such as workload management, data management, data transfers, and submission of users' and centrally managed production requests. The performance and status of all subsystems must be constantly monitored to guarantee the efficient operation of the whole infrastructure. Moreover, key metrics need to be tracked to evaluate and study the system performance over time. The CMS monitoring architecture allows both real-time and historical monitoring of a variety of data sources and is based on scalable and open source solutions tailored to satisfy the experiment's monitoring needs. We present the monitoring data flow and software architecture for the CMS distributed computing applications. We discuss the challenges, components, current achievements, and future developments of the CMS monitoring infrastructure.
DOI: 10.5170/cern-2003-006.129
2003
TELL1: A common data acquisition board for LHCb
2021
Observation of $\mathrm{B^{0}_{s}}$ mesons and measurement of the $\mathrm{B^{0}_{s}}/\mathrm{B^{+}}$ yield ratio in PbPb collisions at ${\sqrt {\smash [b]{s_{_{\mathrm {NN}}}}}} = $ 5.02 TeV
2021
High precision measurements of Z boson production in PbPb collisions at ${\sqrt {\smash [b]{s_{_{\mathrm {NN}}}}}} = $ 5.02 TeV
The CMS experiment at the LHC has measured the differential cross sections of Z bosons decaying to pairs of leptons, as functions of transverse momentum and rapidity, in lead-lead collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV. The measured Z boson elliptic azimuthal anisotropy coefficient is compatible with zero, showing that Z bosons do not experience significant final-state interactions in the medium produced in the collision. Yields of Z bosons are compared to Glauber model predictions and are found to deviate from these expectations in peripheral collisions, indicating the presence of initial collision geometry and centrality selection effects. The precision of the measurement allows, for the first time, for a data-driven determination of the nucleon-nucleon integrated luminosity as a function of lead-lead centrality, thereby eliminating the need for its estimation based on a Glauber model.
DOI: 10.2172/948148
2000
Formation of the state $h_c$ of charmonium in the reaction $p\bar{p} \to h_c \to \eta_c\gamma \to \phi\phi\gamma \to 4K\gamma$
The results presented are preliminary. We have measured $\mathcal{B}(\mathrm{B^+} \to \eta_c \mathrm{K^+}) = (1.50 \pm 0.19 \pm 0.15 \pm 0.46) \times 10^{-3}$ and $\mathcal{B}(\mathrm{B^0} \to \eta_c \mathrm{K^0}) = (1.06 \pm 0.28 \pm 0.11 \pm 0.33) \times 10^{-3}$, where the first error is statistical, the second systematic, and the last due to the uncertainty on the world-average $\eta_c \to K\bar{K}\pi$ branching fraction.