
A. Krasznahorkay

Here are all the papers by A. Krasznahorkay that you can download and read on OA.mg.

DOI: 10.48550/arxiv.physics/0703039
2007
Cited 262 times
TMVA - Toolkit for Multivariate Data Analysis
In high-energy physics, with the search for ever smaller signals in ever larger data sets, it has become essential to extract a maximum of the available information from the data. Multivariate classification methods based on machine learning techniques have become a fundamental ingredient to most analyses. The multivariate classifiers themselves have also evolved significantly in recent years. Statisticians have found new ways to tune and to combine classifiers to further gain in performance. Integrated into the analysis framework ROOT, TMVA is a toolkit which hosts a large variety of multivariate classification algorithms. Training, testing, performance evaluation and application of all available classifiers are carried out simultaneously via user-friendly interfaces. With version 4, TMVA has been extended to multivariate regression of a real-valued target vector. Regression is invoked through the same user interfaces as classification. TMVA 4 also features more flexible data handling allowing one to arbitrarily form combined MVA methods. A generalised boosting method is the first realisation benefiting from the new framework.
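As an illustration of that workflow, here is a minimal classification-training sketch against the TMVA API, assuming ROOT 6.08 or later (where TMVA::DataLoader exists); the input file, tree names and variable names are hypothetical:

```cpp
// Sketch of a TMVA training job. Assumes ROOT >= 6.08 (TMVA::DataLoader);
// "input.root", "TreeS"/"TreeB" and the variable names are hypothetical.
#include "TFile.h"
#include "TTree.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Factory.h"
#include "TMVA/Types.h"

int main() {
   TFile* output = TFile::Open("TMVAOutput.root", "RECREATE");
   TMVA::Factory factory("TMVAClassification", output,
                         "!V:AnalysisType=Classification");

   // Declare the input variables the classifiers are trained on.
   TMVA::DataLoader loader("dataset");
   loader.AddVariable("var1", 'F');
   loader.AddVariable("var2", 'F');

   // Register signal and background trees from the (hypothetical) input file.
   TFile* input = TFile::Open("input.root");
   loader.AddSignalTree(static_cast<TTree*>(input->Get("TreeS")), 1.0);
   loader.AddBackgroundTree(static_cast<TTree*>(input->Get("TreeB")), 1.0);
   loader.PrepareTrainingAndTestTree("", "SplitMode=Random");

   // Book one classifier; further methods are booked the same way.
   factory.BookMethod(&loader, TMVA::Types::kBDT, "BDT", "NTrees=400");

   // Training, testing and evaluation run through the same interface.
   factory.TrainAllMethods();
   factory.TestAllMethods();
   factory.EvaluateAllMethods();

   output->Close();
   return 0;
}
```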
DOI: 10.1007/s41781-021-00078-8
2022
Cited 29 times
A Common Tracking Software Project
Abstract The reconstruction of the trajectories of charged particles, or track reconstruction, is a key computational challenge for particle and nuclear physics experiments. While the tuning of track reconstruction algorithms can depend strongly on details of the detector geometry, the algorithms currently in use by experiments share many common features. At the same time, the intense environment of the High-Luminosity LHC accelerator and other future experiments is expected to put even greater computational stress on track reconstruction software, motivating the development of more performant algorithms. We present here A Common Tracking Software (ACTS) toolkit, which draws on the experience with track reconstruction algorithms in the ATLAS experiment and presents them in an experiment-independent and framework-independent toolkit. It provides a set of high-level track reconstruction tools which are agnostic to the details of the detection technologies and magnetic field configuration and tested for strict thread-safety to support multi-threaded event processing. We discuss the conceptual design and technical implementation of ACTS, selected applications and performance of ACTS, and the lessons learned.
DOI: 10.3390/universe10040168
2024
Checking the 8Be Anomaly with a Two-Arm Electron Positron Pair Spectrometer
We have repeated the experiment performed recently at the ATOMKI Laboratory (Debrecen, Hungary), whose results may indicate a new particle, called X17 in the literature. In order to obtain a reliable and independent result, we used an electron–positron pair spectrometer of a different design at the VNU University of Science. The spectrometer has two arms and a simpler acceptance and efficiency as a function of the correlation angle, but the other conditions of the experiment were very similar to the published ones. We could confirm the presence of the anomaly measured at Ep = 1225 keV, which is above the Ep = 1040 keV resonance.
DOI: 10.1088/1748-0221/3/08/p08002
2008
Cited 46 times
The ATLAS central level-1 trigger logic and TTC system
The ATLAS central level-1 trigger logic consists of the Central Trigger Processor and the interface to the detector-specific muon level-1 trigger electronics. It is responsible for forming the level-1 trigger in the ATLAS experiment. The distribution of the timing, trigger and control information from the central trigger processor to the readout electronics of the ATLAS subdetectors is performed by the TTC system. Both systems are presented.
DOI: 10.1051/epjconf/202429505019
2024
The ATLAS experiment software on ARM
With the increased dataset obtained during Run 3 of the LHC at CERN, and the expected growth of the dataset by more than one order of magnitude for the HL-LHC, the ATLAS experiment is reaching the limits of the current data processing model in terms of traditional CPU resources based on x86_64 architectures, and an extensive program of software upgrades towards the HL-LHC has been set up. The ARM architecture is becoming a competitive and energy-efficient alternative. Some surveys indicate its increased presence in HPCs and commercial clouds, and some WLCG sites have expressed their interest. Chip makers are also developing their next generation solutions on ARM architectures, sometimes combining ARM and GPU processors in the same chip. Consequently, it is important that the ATLAS software embraces the change and is able to successfully exploit this architecture. We report on the successful porting to ARM of the Athena software framework, which is used by ATLAS for both online and offline computing operations. Furthermore we report on the successful validation of simulation workflows running on ARM resources. For this we have set up an ATLAS Grid site using ARM-compatible middleware and containers on Amazon Web Services (AWS) ARM resources. The ARM version of Athena is fully integrated in the regular software build system and distributed in the same way as other software releases. In addition, the workflows have been integrated into the HEPscore benchmark suite, which is the planned WLCG-wide replacement of the HepSpec06 benchmark used for Grid site pledges. In the overall porting process we have used resources on AWS, Google Cloud Platform (GCP) and CERN. A performance comparison of different architectures and resources will be discussed.
DOI: 10.1145/3629526.3645034
2024
Using Evolutionary Algorithms to Find Cache-Friendly Generalized Morton Layouts for Arrays
DOI: 10.1051/epjconf/202024506014
2020
Cited 11 times
Evolution of the ATLAS analysis model for Run-3 and prospects for HL-LHC
With an increased dataset obtained during Run-2 of the LHC at CERN, the even larger forthcoming Run-3 data and the expected increase of the dataset by more than one order of magnitude for the HL-LHC, the ATLAS experiment is reaching the limits of the current data production model in terms of disk storage resources. The anticipated availability of an improved fast simulation will enable ATLAS to produce significantly larger Monte Carlo samples with the available CPU, which will then be limited by insufficient disk resources. The ATLAS Analysis Model Study Group for Run-3 was set up at the end of Run-2. Its tasks have been to analyse the efficiency and suitability of the current analysis model and to propose significant improvements. The group has considered options allowing ATLAS to save, for the same sample of data and simulated events, at least 30% disk space overall, and has given recommendations on how significantly larger savings could be realised for the HL-LHC. Furthermore, suggestions were made to harmonise the current stage of analysis across the collaboration. The group has now completed its work: key recommendations will be the new small-sized analysis formats DAOD_PHYS and DAOD_PHYSLITE and the increased usage of a tape carousel mode in the centralised production of these formats. This proceeding reviews the recommended ATLAS analysis model for Run-3 and the status of its implementation. It also provides an outlook to the HL-LHC analysis.
DOI: 10.1088/1742-6596/664/7/072045
2015
Cited 12 times
Implementation of the ATLAS Run 2 event data model
During the 2013-2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the 'auxiliary store'). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a 'structure of arrays' format, while the user still can access it as an 'array of structures'. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This paper focuses on the design and implementation of the auxiliary store and its interaction with ROOT.
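The design described above can be illustrated with a toy structure-of-arrays store; this is a sketch of the general idea only, not the actual xAOD interfaces or class names:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Toy auxiliary store: every object property is one contiguous vector
// (structure of arrays), keyed by name so new properties can be added
// dynamically.
struct AuxStore {
    std::size_t size = 0;  // number of objects in the container
    std::unordered_map<std::string, std::vector<float>> columns;

    std::vector<float>& column(const std::string& name) {
        auto& col = columns[name];   // creates the column on first use
        col.resize(size);            // one slot per object
        return col;
    }
};

// Lightweight handle: user code reads it like an array-of-structures
// element, while the data stay columnar (and cache-friendly) underneath.
struct ParticleView {
    AuxStore* store;
    std::size_t index;
    float& operator[](const std::string& name) {
        return store->column(name)[index];
    }
};

int main() {
    AuxStore store;
    store.size = 3;
    ParticleView p{&store, 1};
    p["pt"] = 42.0f;          // a user-defined "decoration" added on the fly
    return p["pt"] > 0 ? 0 : 1;
}
```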
DOI: 10.1088/1742-6596/2438/1/012050
2023
Managing heterogeneous device memory using C++17 memory resources
Abstract Programmers using the C++ programming language are increasingly taught to manage memory implicitly through containers provided by the C++ standard library. However, heterogeneous programming platforms often require explicit allocation and deallocation of memory. This discrepancy in memory management strategies can be daunting and problematic for C++ developers who are not already familiar with heterogeneous programming. The C++17 standard introduces the concept of memory resources, which allow the user to control how standard library containers allocate memory; we believe that this addition to the C++17 standard is a powerful tool towards the unification of memory management for heterogeneous systems with best-practice C++ development. In this paper, we present vecmem, a library of memory resources which allows efficient and user-friendly allocation of memory on CUDA, HIP, and SYCL devices through standard C++ containers. We investigate the design and use cases of such a library, the potential performance gains over naive memory allocation, and the limitations of this memory allocation model.
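The C++17 mechanism the paper builds on can be shown in a few lines: a custom std::pmr::memory_resource changes how a standard container allocates, without changing the container's interface. The sketch below only logs allocations; a device-memory resource (the idea vecmem builds on, not reproduced here) would instead call the CUDA/HIP/SYCL allocators in do_allocate/do_deallocate:

```cpp
#include <cstddef>
#include <iostream>
#include <memory_resource>
#include <vector>

// A logging memory resource: forwards to an upstream resource, printing
// each allocation. Swapping in a platform allocator here is the essence of
// the memory-resource approach described in the abstract.
class logging_resource : public std::pmr::memory_resource {
public:
    explicit logging_resource(std::pmr::memory_resource* upstream =
                                  std::pmr::get_default_resource())
        : m_upstream(upstream) {}

private:
    void* do_allocate(std::size_t bytes, std::size_t align) override {
        std::cout << "allocating " << bytes << " bytes\n";
        return m_upstream->allocate(bytes, align);
    }
    void do_deallocate(void* p, std::size_t bytes, std::size_t align) override {
        m_upstream->deallocate(p, bytes, align);
    }
    bool do_is_equal(const std::pmr::memory_resource& other) const noexcept override {
        return this == &other;
    }
    std::pmr::memory_resource* m_upstream;
};

int main() {
    logging_resource res;
    // The container's interface is unchanged; only the allocation strategy is.
    std::pmr::vector<int> v(&res);
    for (int i = 0; i < 1000; ++i) v.push_back(i);
    return 0;
}
```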
DOI: 10.1088/1742-6596/2438/1/012026
2023
Detray: a compile time polymorphic tracking geometry description
Abstract A detailed geometry description is essential to any high-quality track reconstruction application. In current C++ based track reconstruction software libraries this is often achieved by an object-oriented, polymorphic geometry description that implements different shapes and objects by extending a common base class. Such a design, however, has been shown to be problematic when attempting to adapt these applications to run on heterogeneous computing hardware, particularly on hardware accelerators. We present detray, a compile-time polymorphic and yet accurate track reconstruction geometry description which is part of the ACTS parallelization R&D effort. detray is built as an index-based geometry description with a shallow memory layout that uses variadic template programming to allow custom shapes and intersection algorithms rather than inheritance from abstract base classes. It is designed to serve as a potential geometry and navigation backend for ACTS and as such implements the ACTS navigation model of boundary portals and purely surface-based geometric entities. detray is designed to work with a dedicated memory management library and thus can be instantiated as a geometry model in host and device code.
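detray itself uses variadic templates and index-based containers; as a rough sketch of the underlying idea only, compile-time (rather than virtual-function) polymorphism over a fixed set of shape types can be expressed with std::variant. The shape names and the inside() test below are illustrative, not detray's API:

```cpp
#include <cmath>
#include <variant>
#include <vector>

// Two surface shapes with no common base class; each knows its own
// "inside" test (standing in for an intersection/masking algorithm).
struct Rectangle {
    float half_x, half_y;
    bool inside(float x, float y) const {
        return std::abs(x) <= half_x && std::abs(y) <= half_y;
    }
};
struct Disc {
    float radius;
    bool inside(float x, float y) const { return x * x + y * y <= radius * radius; }
};

// The list of shapes is fixed at compile time: dispatch needs no virtual
// tables, so the same types can be instantiated in host and device code.
using Mask = std::variant<Rectangle, Disc>;

bool is_inside(const Mask& mask, float x, float y) {
    return std::visit([&](const auto& shape) { return shape.inside(x, y); }, mask);
}

int main() {
    std::vector<Mask> geometry{Rectangle{1.0f, 2.0f}, Disc{0.5f}};
    return is_inside(geometry[1], 0.1f, 0.2f) ? 0 : 1;
}
```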
DOI: 10.5170/cern-2007-007.453
2007
Cited 8 times
The ATLAS Level-1 Muon to Central Trigger Processor Interface
The Muon to Central Trigger Processor Interface (MUCTPI) is part of the ATLAS Level-1 trigger system and connects the output of the muon trigger system to the Central Trigger Processor (CTP). At every bunch crossing (BC), the MUCTPI receives information on muon candidates from each of the 208 muon trigger sectors and calculates the total multiplicity for each of six transverse momentum (pT) thresholds. This multiplicity value is then sent to the CTP, where it is used together with the input from the Calorimeter trigger to make the final Level-1 Accept (L1A) decision. In addition, the MUCTPI provides summary information to the Level-2 trigger and to the data acquisition (DAQ) system for events selected at Level-1. This information is used to define the regions of interest (RoIs) that drive the Level-2 muon trigger processing. The MUCTPI system consists of a 9U VME chassis with a dedicated active backplane and 18 custom-designed modules. The design of the modules is based on state-of-the-art FPGA devices, and special attention was paid to low latency in the data transmission and processing. We present the design and implementation of the final version of the MUCTPI. A partially populated MUCTPI system is already installed in the ATLAS experiment and is being used regularly for commissioning tests and combined cosmic-ray data-taking runs. The ATLAS Level-1 trigger [1] uses information on clusters and global energy in the calorimeters and multiplicities from tracks found in the dedicated fast muon trigger detectors in order to reduce the event rate to 100 kHz with an overall latency of less than 2.5 μs. The muon trigger detectors are resistive plate chambers (RPC) in the barrel region and thin-gap chambers (TGC) in the end-cap and forward regions of ATLAS. Coincidences of hits in different detector layers are used to identify muon candidates. The muon trigger electronics also determines the transverse momentum (pT) of the muon candidates and classifies them according to six programmable pT thresholds. The muon trigger detectors are divided into sectors: 64 for the barrel, 96 for the end-cap and 48 for the forward region. Each sector can identify up to two muon candidates. The trigger sector logic modules send information about the position and pT threshold of the muon candidates to the MUCTPI at the bunch crossing (BC) rate of 40.08 MHz.
DOI: 10.5281/zenodo.2583131
2019
Cited 5 times
lwtnn/lwtnn: Version 2.8.1
DOI: 10.5170/cern-2003-006.270
2005
Cited 7 times
The ATLAS Level-1 Central Trigger Processor (CTP)
The ATLAS Level-1 Central Trigger Processor (CTP) combines information from the Level-1 calorimeter and muon trigger processors, as well as from other sources such as calibration triggers, and makes the final Level-1 Accept decision. The CTP synchronises the trigger inputs from different sources to the internal clock and aligns them with respect to the same bunch crossing. The algorithm used by the CTP to combine the different inputs allows events to be selected on the basis of trigger menus. The CTP provides trigger summary information to the data acquisition and to the Level-2 trigger system, and allows one to monitor various counters of bunch-by-bunch as well as accumulated information on the trigger inputs. The design of the CTP with its six different module types and two dedicated backplanes will be presented.
2007
Cited 6 times
TMVA - Toolkit for Multivariate Data Analysis with ROOT : Users guide
DOI: 10.1109/rtc.2005.1547406
2005
Cited 6 times
The ATLAS Level-1 central trigger processor
ATLAS is a multi-purpose particle physics detector at CERN's Large Hadron Collider where two pulsed beams of protons are brought to collision at very high energy. There are collisions every 25 ns, corresponding to a rate of 40 MHz. A three-level trigger system reduces this rate to about 200 Hz while keeping bunch crossings which potentially contain interesting processes. The Level-1 trigger, implemented in electronics and firmware, makes an initial selection in under 2.5 μs with an output rate of less than 100 kHz. A key element of this is the central trigger processor (CTP) which combines trigger information from the calorimeter and muon trigger processors to make the final Level-1 accept decision in under 100 ns on the basis of lists of selection criteria, implemented as a trigger menu. Timing and trigger signals are fanned out to all sub-detectors, while busy signals from all sub-detector read-out systems are collected and fed into the CTP in order to throttle the generation of Level-1 triggers.
DOI: 10.1556/aph.13.2001.1-3.12
2001
Cited 7 times
Hyperdeformation and Clusterization in the Actinide Region
DOI: 10.1088/1742-6596/898/7/072010
2017
Cited 3 times
Large Scale Software Building with CMake in ATLAS
The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector's trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake.
DOI: 10.1088/1742-6596/664/3/032007
2015
Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-2 analysis model
The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This paper will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.
DOI: 10.1109/tns.2007.910676
2008
Cited 3 times
Status of the ATLAS Level-1 Central Trigger and Muon Barrel Trigger and First Results from Cosmic-Ray Data
The ATLAS detector at CERN's Large Hadron Collider (LHC) will be exposed to proton-proton collisions from beams crossing at 40 MHz. A three-level trigger system will select potentially interesting events in order to reduce the read-out rate to about 200 Hz. The first trigger level is implemented in custom-built electronics and makes an initial fast selection based on detector data of coarse granularity. It has to reduce the rate by a factor of 10⁴ to less than 100 kHz. The other two consecutive trigger levels are in software and run on PC farms. We present an overview of the first-level central trigger and the muon barrel trigger system and report on the current installation status. Moreover, we show analysis results of cosmic-ray data recorded in situ at the ATLAS experimental site with final or close-to-final hardware.
DOI: 10.3360/dis.2007.167
2007
Cited 3 times
Outlook for b Physics at the LHC in ATLAS and CMS
An overview is presented of the planned B-physics programme of the ATLAS and CMS experiments at the LHC. The physics programmes of both experiments have been prepared for the different running conditions of the accelerator. Analyses planned for different luminosity configurations of the LHC are presented, together with their expected sensitivities.
DOI: 10.1145/3578244.3583723
2023
Systematically Exploring High-Performance Representations of Vector Fields Through Compile-Time Composition
We present a novel benchmark suite for implementations of vector fields in high-performance computing environments to aid developers in quantifying and ranking their performance. We decompose the design space of such benchmarks into access patterns and storage backends, the latter of which can be further decomposed into components with different functional and non-functional properties. Through compile-time meta-programming, we generate a large number of benchmarks with minimal effort and ensure the extensibility of our suite. Our empirical analysis, based on real-world applications in high-energy physics, demonstrates the feasibility of our approach on CPU and GPU platforms, and highlights that our suite is able to evaluate performance-critical design choices. Finally, we propose that our work towards composing vector fields from elementary components is not only useful for the purposes of benchmarking, but that it naturally gives rise to a novel library for implementing such fields in domain applications.
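A minimal sketch of such compile-time composition, with hypothetical component names (the paper's actual library is not reproduced here): a storage backend and an access pattern are combined into one concrete field type via templates:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Storage backend component: field vectors on a dense 2D grid.
template <std::size_t NX, std::size_t NY>
struct DenseStorage {
    std::vector<std::array<float, 2>> data =
        std::vector<std::array<float, 2>>(NX * NY);
    std::array<float, 2> at(std::size_t i, std::size_t j) const {
        return data[i * NY + j];
    }
};

// Access pattern component: nearest-neighbour lookup over any backend that
// provides at(i, j); an interpolating pattern would compose the same way.
template <typename Backend>
struct NearestNeighbour {
    Backend backend;
    std::array<float, 2> value(float x, float y) const {
        return backend.at(static_cast<std::size_t>(x + 0.5f),
                          static_cast<std::size_t>(y + 0.5f));
    }
};

// One concrete field type assembled at compile time; a benchmark generator
// can instantiate many such combinations mechanically.
using Field = NearestNeighbour<DenseStorage<64, 64>>;

int main() {
    Field field{};
    std::array<float, 2> v = field.value(3.2f, 4.9f);
    return (v[0] == 0.0f && v[1] == 0.0f) ? 0 : 1;
}
```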
DOI: 10.5281/zenodo.8119769
2023
CTD2022: traccc - GPU Track reconstruction demonstrator for HEP
DOI: 10.48550/arxiv.2309.07002
2023
Finding Morton-Like Layouts for Multi-Dimensional Arrays Using Evolutionary Algorithms
The layout of multi-dimensional data can have a significant impact on the efficacy of hardware caches and, by extension, the performance of applications. Common multi-dimensional layouts include the canonical row-major and column-major layouts as well as the Morton curve layout. In this paper, we describe how the Morton layout can be generalized to a very large family of multi-dimensional data layouts with widely varying performance characteristics. We posit that this design space can be efficiently explored using a combinatorial evolutionary methodology based on genetic algorithms. To this end, we propose a chromosomal representation for such layouts as well as a methodology for estimating the fitness of array layouts using cache simulation. We show that our fitness function correlates to kernel running time in real hardware, and that our evolutionary strategy allows us to find candidates with favorable simulated cache properties in four out of the eight real-world applications under consideration in a small number of generations. Finally, we demonstrate that the array layouts found using our evolutionary method perform well not only in simulated environments but that they can effect significant performance gains, up to a factor of ten in extreme cases, in real hardware.
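For reference, the canonical 2D Morton index that this family generalizes interleaves the coordinate bits in a fixed alternating pattern; a generalized layout replaces that pattern with an arbitrary bit sequence (the chromosome of the genetic algorithm). A minimal sketch:

```cpp
#include <cstdint>

// Canonical 2D Morton (Z-order) index: interleave the bits of x and y as
// x,y,x,y,... A generalized Morton layout, as described in the abstract,
// replaces this fixed pattern with an arbitrary interleaving sequence.
std::uint64_t morton2d(std::uint32_t x, std::uint32_t y) {
    std::uint64_t index = 0;
    for (unsigned bit = 0; bit < 32; ++bit) {
        index |= (static_cast<std::uint64_t>(x >> bit) & 1u) << (2 * bit);
        index |= (static_cast<std::uint64_t>(y >> bit) & 1u) << (2 * bit + 1);
    }
    return index;
}

int main() {
    // morton2d(3, 1) interleaves 0b11 and 0b01 into 0b0111 = 7.
    return morton2d(3, 1) == 7 ? 0 : 1;
}
```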
DOI: 10.1109/rtc.2005.1547526
2005
Cited 3 times
The ATLAS Level-1 trigger timing setup
The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions at a bunch-crossing rate of 40 MHz. In order to reduce the data rate, a three-level trigger system selects potentially interesting physics processes. The first trigger level is implemented in electronics and firmware. It aims at reducing the output rate to less than 100 kHz. The central trigger processor (CTP) combines information from the calorimeter and muon trigger processors and makes the final Level-1-Accept (L1A) decision. It is a central element in the timing setup of the experiment. Three aspects are considered in this article: the timing setup with respect to the Level-1 trigger, with respect to the experiment, and with respect to the world. Trigger signals from the muon and calorimeter trigger processors have to be synchronized in phase with respect to the local clock, and aligned in terms of the bunch crossing they originate from. The Level-1 latency is defined as the time between the collision and the arrival of the L1A at the sub-detectors. It is fixed and less than 2.5 μs. During this time, the data from all sub-detectors are stored in front-end pipeline buffers. In order to guarantee read-out of the same collision, the pipeline lengths must be carefully tuned in order to match the Level-1 latency using several strategies with and without particle beam. The CTP further calculates a UTC time stamp derived from a GPS-based time-stamping system with a stability of 5 ns and high absolute time precision. The time stamp will allow us to correlate ATLAS events with those in other particle-physics or astronomical detectors at CERN or elsewhere.
DOI: 10.5170/cern-2007-001.315
2006
Cited 3 times
Commissioning of the ATLAS level-1 central trigger
The ATLAS Level-1 Central Trigger consists of the Central Trigger Processor (CTP) and the Muon-to-CTP-Interface (MUCTPI). The CTP receives trigger information from the Level-1 Calorimeter Trigger system directly, and from the Level-1 Muon Trigger systems through the MUCTPI. It also receives timing signals from the LHC machine, and fans them out along with the Level-1 Accept (L1A) signal and other control signals to all sub-detectors. From them, it collects BUSY signals in order to throttle the L1A generation. Upon L1A the Level-1 trigger systems send region-of-interest information to the Level-2 trigger system. The MUCTPI and CTP crates are already installed in the ATLAS underground counting rooms with final or close-to-final boards. We present their status and discuss first commissioning steps. Particular emphasis is put on the integration of the Central Trigger with the Muon and Calorimeter Trigger systems, the Level-2 trigger, and the readout part of the different sub-detectors.
DOI: 10.22323/1.021.0391
2007
The first level trigger of ATLAS
Due to the huge interaction rates and the tough experimental environment of pp collisions at a centre-of-mass energy √s = 14 TeV and luminosities of up to 10³⁴ cm⁻² s⁻¹, one of the experimental challenges at the LHC is the triggering of interesting events. In the ATLAS experiment a three-level trigger system is foreseen for this purpose. The first-level trigger is implemented in custom hardware and has been designed to reduce the data rate from the initial bunch-crossing rate of 40 MHz to around 75 kHz. Its event selection is based on information from the calorimeters and dedicated muon detectors. This article gives an overview of the full first-level trigger system, including the Calorimeter Trigger, the Muon Trigger and the Central Trigger Processor. In addition, recent results are reported that have been obtained from test-beam studies performed at CERN, where the full first-level trigger chain was established successfully for the first time and used to trigger the read-out of up to nine ATLAS sub-detector systems.
DOI: 10.1016/j.nima.2007.08.031
2007
The ATLAS level-1 trigger: Status of the system and first results from cosmic-ray data
The ATLAS detector at CERN's Large Hadron Collider (LHC) will be exposed to proton–proton collisions from beams crossing at 40 MHz. At the design luminosity of 10³⁴ cm⁻² s⁻¹ there are on average 23 collisions per bunch crossing. A three-level trigger system will select potentially interesting events in order to reduce the readout rate to about 200 Hz. The first trigger level is implemented in custom-built electronics and makes an initial fast selection based on detector data of coarse granularity. It has to reduce the rate by a factor of 10⁴ to less than 100 kHz. The other two consecutive trigger levels are in software and run on PC farms. We present an overview of the first-level trigger system and report on the current installation status. Moreover, we show analysis results of cosmic-ray data recorded in situ at the ATLAS experimental site with final or close-to-final hardware.
DOI: 10.5170/cern-2007-007.217
2007
The ATLAS level-1 Central Trigger
DOI: 10.1088/1742-6596/523/1/012019
2014
The evolution of the Trigger and Data Acquisition System in the ATLAS experiment
The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of the upgrade is to add robustness and flexibility to the selection and the conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates. The TDAQ system used to date is organised in a three-level selection scheme, including a hardware-based first-level trigger and second- and third-level triggers implemented as separate software systems distributed on separate, commodity hardware nodes. While this architecture was successfully operated well beyond the original design goals, the accumulated experience stimulated interest to explore possible evolutions. We will also be upgrading the hardware of the TDAQ system by introducing new elements to it. For the high-level trigger, the current plan is to deploy a single homogeneous system, which merges the execution of the second and third trigger levels, still separated, on a unique hardware node. Prototyping efforts already demonstrated many benefits to the simplified design. In this paper we report on the design and the development status of this new system.
DOI: 10.1088/1742-6596/396/2/022047
2012
Toolkit for data reduction to tuples for the ATLAS experiment
The final step in a HEP data-processing chain is usually to reduce the data to a 'tuple' form which can be efficiently read by interactive analysis tools such as ROOT. Often, this is implemented independently by each group analyzing the data, leading to duplicated effort and needless divergence in the format of the reduced data. ATLAS has implemented a common toolkit for performing this processing step. By using tools from this package, physics analysis groups can produce tuples customized for a particular analysis but which are still consistent in format and vocabulary with those produced by other physics groups. The package is designed so that almost all the code is independent of the specific form used to store the tuple. The code that does depend on this is grouped into a set of small backend packages. While the ROOT backend is the most used, backends also exist for HDF5 and for specialized databases. By now, the majority of ATLAS analyses rely on this package, and it is an important contributor to the ability of ATLAS to rapidly analyze physics data.
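The backend split described above can be sketched as follows; the interface and class names here are illustrative, not the actual ATLAS package API:

```cpp
#include <string>

// Analysis-facing code talks to an abstract tuple interface; only small
// backend classes know the storage format (hypothetical names throughout).
class ITupleBackend {
public:
    virtual ~ITupleBackend() = default;
    virtual void addColumn(const std::string& name) = 0;
    virtual void fillRow() = 0;
};

class RootBackend : public ITupleBackend {
public:
    void addColumn(const std::string& /*name*/) override { /* create a tree branch */ }
    void fillRow() override { /* TTree::Fill() */ }
};

class Hdf5Backend : public ITupleBackend {
public:
    void addColumn(const std::string& /*name*/) override { /* create an HDF5 dataset */ }
    void fillRow() override { /* append a row */ }
};

// Format-independent reduction code never names a concrete backend.
void writeEvent(ITupleBackend& out) { out.fillRow(); }

int main() {
    RootBackend root;
    root.addColumn("pt");
    writeEvent(root);
    return 0;
}
```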
DOI: 10.1088/1742-6596/898/7/072009
2017
A Roadmap to Continuous Integration for ATLAS Software Development
The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is a powerful, flexible framework for processing new package version requests, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access, and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software and for migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open-source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and provides improved feedback and means for developers to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of Jenkins integration with the ATLAS Nightly System as well as short- and long-term plans for the incorporation of CI practices.
2009
Triggering top quark events
DOI: 10.1051/epjconf/202024505006
2020
GPU Usage in ATLAS Reconstruction and Analysis
With Graphics Processing Units (GPUs) and other kinds of accelerators becoming ever more accessible, and High Performance Computing Centres all around the world using them ever more, ATLAS has to find the best way of making use of such accelerators in much of its computing. Tests with GPUs – mainly with CUDA – have been performed in the past in the experiment. At that time the conclusion was that it was not advantageous for the ATLAS offline and trigger software to invest time and money into GPUs. However, as the usage of accelerators has become cheaper and simpler in recent years, their re-evaluation in ATLAS's offline software is warranted. We show new results of using GPU-accelerated calculations in ATLAS's offline software environment using the ATLAS offline/analysis (xAOD) Event Data Model. We compare the performance and flexibility of a couple of the available GPU programming methods, and show how different memory management setups affect our ability to offload different types of calculations to a GPU efficiently.
DOI: 10.5170/cern-2007-001.319
2006
The octant module of the ATLAS level-1 muon to central trigger processor interface
The Muon to Central Trigger Processor Interface (MUCTPI) of the ATLAS Level-1 trigger receives data from the sector logic modules of the muon trigger at every bunch crossing and calculates the total multiplicity of muon candidates, which is then sent to the Central Trigger Processor where the final Level-1 decision is taken. The MUCTPI system consists of a 9U VME crate with a special backplane and 18 custom designed modules. We focus on the design and implementation of the octant module (MIOCT). Each of the 16 MIOCT modules processes the muon candidates from 13 sectors of one half-octant of the detector and forms the local muon candidate multiplicities for the trigger decision. It also resolves the overlaps between chambers in order to avoid double-counting of muon candidates that are detected in more than one sector. The handling of overlapping sectors is based on Look-Up-Tables (LUT) for maximum flexibility. The MIOCT also sends the information on the muon candidates over the custom backplane via the Readout Driver module to the Level-2 trigger and the DAQ systems when a Level-1 Accept is received. The design is based on state-of-the-art FPGA devices and special attention was paid to low latency in the data transmission and processing.
2006
Lepton pairs from a forbidden M0 transition : Signaling an elusive light neutral boson?
Electron-positron pairs have been observed in the 10.95 MeV 0⁻ → 0⁺ decay in ¹⁶O. This magnetic monopole (M0) transition cannot proceed by gamma-ray decay and is, to first order, forbidden for internal pair creation. However, the transition may also proceed by the emission of a light neutral 0⁻ or 1⁺ boson, which might play a role in the current quest for light dark matter in the Universe.
2014
The Run 2 ATLAS Analysis Event Data Model
DOI: 10.48550/arxiv.1402.6203
2014
Level density and gamma-ray strength function in the odd-odd 238Np
The level density and gamma-ray strength function in the quasi-continuum of ²³⁸Np have been measured using the Oslo method. The level density function follows closely the constant-temperature level density formula and reaches 43 million levels per MeV at Sn = 5.488 MeV of excitation energy. The gamma-ray strength function displays a two-humped resonance at low energy, as also seen in previous investigations of Th, Pa and U isotopes. The structure is interpreted as the scissors resonance and has an average centroid of ωSR = 2.26(5) MeV and a total strength of BSR = 10.8(12) μN², which is in excellent agreement with sum-rule estimates. The scissors resonance is shown to have an impact on the ²³⁷Np(n,γ)²³⁸Np cross section.
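For reference, the constant-temperature level density formula mentioned above has the standard two-parameter form, with temperature T and energy shift E0 as fit parameters (the fitted ²³⁸Np values are not quoted in this abstract):

```latex
% Constant-temperature level density (standard form); T and E_0 are the
% fitted temperature and energy-shift parameters.
\rho(E_x) = \frac{1}{T}\,\exp\!\left(\frac{E_x - E_0}{T}\right)
```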
DOI: 10.22323/1.093.0048
2011
SFrame - A high-performance ROOT-based framework for HEP analysis
In a typical data analysis in high-energy physics a large number of collision events are studied. For each event the reconstruction software of the experiments stores a large number of measured event properties in sometimes complex data objects and formats. Usually this huge amount of initial data is reduced in several analysis steps, selecting a subset of interesting events and observables. In addition, the same selection is applied to simulated Monte-Carlo events and the final results are compared to the data. A fast processing of the events is mandatory for an efficient analysis. In this paper we introduce the SFrame package, a ROOT-based analysis framework that is widely used in the context of ATLAS data analyses. It features (i) consecutive data reduction in multiple user-defined analysis cycles performing a selection of interesting events and observables, making it easy to calculate and store new derived event variables; (ii) a user-friendly combination of data and MC events using weighting techniques; and in particular (iii) a high-speed processing of the events. We study the timing performance of SFrame and find a highly superior performance compared to other analysis frameworks.
DOI: 10.48550/arxiv.1312.0420
2013
A new fission-fragment detector to complement the CACTUS-SiRi setup at the Oslo Cyclotron Laboratory
An array of Parallel Plate Avalanche Counters (PPAC) for the detection of heavy ions has been developed. The new device, NIFF (Nuclear Instrument for Fission Fragments), consists of four individual detectors and covers 60% of 2π. It was designed to be used in conjunction with the SiRi array of ΔE–E silicon telescopes for light charged particles and fits into the CACTUS array of 28 large-volume NaI scintillation detectors at the Oslo Cyclotron Laboratory. The low-pressure gas-filled PPACs are sensitive to fission fragments, but are insensitive to scattered beam particles of light ions or light-ion ejectiles. The PPAC detectors of NIFF have good time resolution and can be used either to select or to veto fission events in in-beam experiments with light-ion beams and actinide targets. The powerful combination of SiRi, CACTUS, and NIFF provides new research opportunities for the study of nuclear structure and nuclear reactions in the actinide region. The new setup is particularly well suited to study the competition of fission and γ decay as a function of excitation energy.
DOI: 10.1109/nssmic.2010.5873865
2010
Tools for trigger aware analyses in ATLAS
In order to search for rare processes, all four Large Hadron Collider experiments have to use advanced triggering methods for selecting and recording the events of interest. For this reason the understanding and evaluation of the trigger performance is one of the most crucial parts of any physics analysis. This paper summarizes the status of some of the software projects in the ATLAS Collaboration meant to help physicists analyze the performance of the online trigger.
2017
QCD cross section measurements with the OPAL and ATLAS detectors
2010
SFrame: A high-performance ROOT-based framework for HEP data analysis
In a typical data analysis in high-energy physics a large number of collision events are studied. For each event the reconstruction software of the experiments stores a large number of measured event properties in sometimes complex data objects and formats. Usually this huge amount of initial data is reduced in several analysis steps, selecting a subset of interesting events and observables. In addition, the same selection is applied to simulated Monte-Carlo events and the final results are compared to the data. A fast processing of the events is mandatory for an efficient analysis. In this paper we introduce the SFrame package, a ROOT-based analysis framework, that is widely used in the context of ATLAS data analyses. It features (i) consecutive data reduction in multiple user-defined analysis cycles performing a selection of interesting events and observables, making it easy to calculate and store new derived event variables; (ii) a user-friendly combination of data and MC events using weighting techniques; and in particular (iii) a high-speed processing of the events. We study the timing performance of SFrame and find a highly superior performance compared to other analysis frameworks.
2009
THE PARIS PROJECT
The PARIS project is an initiative to develop and build a high-efficiency gamma-calorimeter principally for use at SPIRAL2. It is intended to comprise a double shell of scintillators and use the novel scintillator material LaBr3(Ce), which promises a step-change in energy and time resolutions over what is achievable using conventional scintillators. The array could be used in a stand-alone mode, in conjunction with an inner particle detection system, or with high-purity germanium arrays. Its potential physics opportunities as well as initial designs and simulations will be discussed.
2009
Performance study of the level-1 di-muon trigger
DOI: 10.1142/9789812819093_0079
2008
Implementation and performance of the ATLAS Trigger Muon "Vertical Slice"
DOI: 10.1007/s41781-022-00086-2
2022
Constraints on Future Analysis Metadata Systems in High Energy Physics
Abstract In high energy physics (HEP), analysis metadata comes in many forms—from theoretical cross-sections, to calibration corrections, to details about file processing. Correctly applying metadata is a crucial and often time-consuming step in an analysis, but designing analysis metadata systems has historically received little direct attention. Among other considerations, an ideal metadata tool should be easy to use by new analysers, should scale to large data volumes and diverse processing paradigms, and should enable future analysis reinterpretation. This document, which is the product of community discussions organised by the HEP Software Foundation, categorises types of metadata by scope and format and gives examples of current metadata solutions. Important design considerations for metadata systems, including sociological factors, analysis preservation efforts, and technical factors, are discussed. A list of best practices and technical requirements for future analysis metadata systems is presented. These best practices could guide the development of a future cross-experimental effort for analysis metadata tools.
DOI: 10.1109/mascots56607.2022.00026
2022
Modelling Performance Loss due to Thread Imbalance in Stochastic Variable-Length SIMT Workloads
When designing algorithms for single-instruction multiple-thread (SIMT) devices such as general purpose graphics processing units (GPGPUs), thread imbalance is an important performance consideration. Thread imbalance can emerge in iterative applications where workloads are of variable length, because threads processing larger amounts of work will cause threads with less work to idle. This form of thread imbalance influences the design space of algorithms, particularly in terms of processing granularity, but we lack models to quantify its impact on application performance. In this paper, we present a statistical model for quantifying the performance loss due to thread imbalance for iterative SIMT applications with stochastic, variable-length workloads. Our model is designed to operate with minimal knowledge of the implementation details of the algorithm, relying solely on an understanding of the probability distribution of the lengths of the workloads. We validate our model against a synthetic benchmark based on a Monte Carlo simulation of matrix exponentiation, and show that our model achieves nearly perfect accuracy. Compared to empirical data extracted from real hardware, our model maintains a high degree of accuracy, predicting mean performance loss within a margin of 2%.
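The quantity being modelled can be illustrated with a crude Monte Carlo estimate (not the paper's analytical model): for a warp of W threads with i.i.d. workload lengths, the warp runs as long as its slowest thread, so the useful fraction is the ratio of the mean total work to W times the expected maximum. The geometric workload distribution below is an arbitrary assumption:

```cpp
#include <algorithm>
#include <iostream>
#include <random>

// Monte Carlo estimate of thread-imbalance loss for a warp of W threads
// with i.i.d. workload lengths: utilisation = E[sum] / (W * E[max]).
int main() {
    constexpr int W = 32;           // SIMT warp width
    constexpr int trials = 100000;
    std::mt19937 rng(42);
    std::geometric_distribution<int> length(0.1);  // assumed distribution

    double sum_total = 0.0, sum_max = 0.0;
    for (int t = 0; t < trials; ++t) {
        long total = 0;
        int worst = 0;
        for (int i = 0; i < W; ++i) {
            int n = length(rng) + 1;  // at least one iteration per thread
            total += n;
            worst = std::max(worst, n);
        }
        sum_total += total;
        sum_max += worst;
    }
    double utilisation = sum_total / (W * sum_max);
    std::cout << "expected utilisation: " << utilisation << "\n"
              << "performance loss:     " << 1.0 - utilisation << "\n";
    return 0;
}
```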
DOI: 10.1016/j.nuclphysbps.2007.08.136
2007
Implementation and performance of the ATLAS Trigger Muon “Vertical Slice”
The ATLAS (A Toroidal LHC ApparatuS) trigger system is designed to keep high efficiency for interesting events while achieving a rejection of low transverse momentum (pT) physics of about 10⁷, thus reaching the ∼200 Hz data storage capability of the Data Acquisition system. A three-level structure has been implemented for this purpose, as described in this work for the case of the muon trigger system. After describing the implementation, some performance results are presented in terms of final trigger rates, resolutions, efficiencies, background rejection and algorithm latency.
DOI: 10.3360/dis.2007.173
2007
High Momentum Hadron and Jet Production in Photon-Photon Collisions at LEP2
The inclusive production of charged hadrons (e⁺e⁻ → e⁺e⁻ + X) and jets (e⁺e⁻ → e⁺e⁻ + jet + X) has been studied in collisions of quasi-real photons radiated by the LEP beams, at e⁺e⁻ centre-of-mass energies √sₑₑ from 183 to 209 GeV. The differential cross-sections, measured as a function of the transverse momentum and pseudorapidity of the hadrons and jets, are compared to theoretical calculations in next-to-leading order of the strong coupling constant.
DOI: 10.1109/nssmic.2007.4436573
2007
The ATLAS level-1 trigger: Status of the system and experience from commissioning with cosmic ray muons
The ATLAS detector at CERN's Large Hadron Collider (LHC) will be exposed to proton-proton collisions from beams crossing at 40 MHz. A three-level trigger system will select potentially interesting events in order to reduce this rate to 100–200 Hz. A trigger decision is made by the Level-1 Central Trigger Processor (CTP), reducing the incoming rate to less than 100 kHz. The Level-1 decision is based on Calorimeter information and hits in dedicated Muon Trigger detectors. The final Level-1 trigger system is currently being installed in the experiment with completion expected in autumn 2007. Cosmic ray data are regularly recorded as an increasing fraction of the trigger system comes online. We present an overview of the Level-1 trigger system architecture and report on the installation and commissioning process at the ATLAS experimental site. Emphasis is put on the integration of the CTP with the Calorimeter and Muon Trigger systems. We show results from analyses of cosmic ray data recorded in situ and verify, where possible, that the Level-1 trigger meets the requirements and will be ready for data taking.
DOI: 10.1088/1742-6596/1085/3/032033
2018
Modernising ATLAS Software Build Infrastructure
In the last year ATLAS has radically updated its software development infrastructure hugely reducing the complexity of building releases and greatly improving build speed, flexibility and code testing. The first step in this transition was the adoption of CMake as the software build system over the older CMT. This required the development of an automated translation from the old system to the new, followed by extensive testing and improvements. This resulted in a far more standard build process that was married to the method of building ATLAS software as a series of 12 separate projects from Subversion.
2018
The PADME calorimeters for missing mass dark photon searches
2019
New Results from ATLAS & CMS
2019
GPU Usage in ATLAS Reconstruction and Analysis
DOI: 10.1088/1742-6596/53/1/014
2006
B-physics and LVL1 di-muon trigger in the ATLAS experiment at the LHC
Many interesting physics processes in the ATLAS experiment at the LHC will be characterized by the presence of pairs of muons in the final state. For this reason, the ATLAS first-level muon trigger has been designed to allow the selection of di-muon events. However, in order to increase the trigger acceptance of the muon spectrometer, several regions of overlap between the trigger chambers are foreseen in the detector layout. A muon crossing one of these regions may generate two separate triggers, thus producing a false di-muon trigger. The trigger system must therefore be aware of the geometrical overlaps, in order to resolve such fake double triggers. The overlap-solving mechanism of the ATLAS LVL1 muon trigger has been intensively studied with the final detector layout. The overlap flags, needed by the trigger logic to properly handle fake double triggers due to geometrical overlap of the trigger detectors, have been set on a strip-by-strip basis. The chosen method consisted mainly of simulating the propagation of single muons through the ATLAS spectrometer. The simulated response of the trigger system was then analyzed in order to locate the events for which two muon triggers were generated. Two studies have been performed to evaluate the impact of the overlap flags on the performance of the LVL1 muon trigger and its impact on B-physics: a single-muon sample was used to check the proper removal of the fake double triggers, and a di-muon B-physics sample was used to check the amount of real double triggers that are lost due to the overlap-removal mechanism. In addition, the chosen B-physics channel was Λb → Λμμ, a rare decay channel of particular interest for the muon trigger because of its di-muon topology.
DOI: 10.1142/9789812702579_0014
2004
TRIPLE-HUMPED FISSION BARRIER AND CLUSTERIZATION IN THE ACTINIDE REGION
2006
The Scientific Objectives of the SPIRAL2 project
DOI: 10.1016/s0375-9474(04)90024-3
2004
Neutron-skin thickness in neutron-rich isotopes
After a short overview of the methods applied for measuring the neutron-skin thickness, we present the recent experimental results for the neutron-skin thicknesses of the ¹¹²–¹²⁴Sn even-even isotopes and of ²⁰⁸Pb. We have used inelastic alpha scattering to excite the giant dipole resonance (GDR). The cross section of this process depends strongly on ΔRnp/R, the relative neutron-skin thickness. We have also measured the excitation of the spin-dipole resonance (SDR) to deduce the neutron-skin thickness, since the summed L=1 strength of the SDR is sensitive to it. The results obtained are in good agreement with the previous experimental and theoretical ones.
DOI: 10.5170/cern-2005-011.274
2005
ATLAS Level-1 Trigger Timing-In Strategies
The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions at a bunch-crossing rate of 40 MHz. In order to reduce the data rate, a three-level trigger system selects potentially interesting events. Its first level is implemented in electronics and firmware, and aims at reducing the output rate to under 100 kHz. The Central Trigger Processor (CTP) combines information from the calorimeter and muon trigger processors, and makes the final Level-1 Accept (L1A) decision, which is transferred to all sub-detector front-ends. The functioning of the Level-1 Trigger is based on the correct timing of the signals. In this paper we present various strategies for sub-detector timing-in, in particular how to arrive at a decent initial timing setup using test-pulses in standalone mode, and in global mode with the CTP. In addition we describe how the beam pick-up detectors are a powerful tool to further refine the timing with bunches in the LHC machine. In this context we describe new developments on a proposal for precision read-out of the ATLAS beam pick-up detectors with commercial oscilloscopes in order to monitor the phase of the clock with respect to the LHC bunches.
DOI: 10.1142/9789812701749_0009
2005
DECOUPLED PROTON-NEUTRON DISTRIBUTIONS IN ¹⁶C
2005
International Symposium on Exotic Nuclear Systems
DOI: 10.1051/epjconf/202125103006
2021
ATLAS in-file metadata and multi-threaded processing
Processing and scientific analysis of the data taken by the ATLAS experiment requires reliable information describing the event data recorded by the detector or generated in software. ATLAS event processing applications store such descriptive metadata information in the output data files along with the event information. To better leverage the available computing resources during LHC Run 3, the ATLAS experiment has migrated its data processing and analysis software to a multi-threaded framework: AthenaMT. Therefore in-file metadata must support concurrent event processing, especially around input file boundaries. The in-file metadata handling software was originally designed for serial event processing. It grew into a rather complex system over the many years of ATLAS operation. To migrate this system to the multi-threaded environment it was necessary to adopt several pragmatic solutions, mainly because of the shortage of available person-power to work on this project in the early phases of AthenaMT development. In order to simplify the migration, the redundant parts of the code were first cleaned up wherever possible. Next the infrastructure was improved by removing reliance on constructs that are problematic during multi-threaded processing. Finally, the remaining software infrastructure was redesigned for thread safety.
DOI: 10.48550/arxiv.2106.13593
2021
A Common Tracking Software Project
The reconstruction of the trajectories of charged particles, or track reconstruction, is a key computational challenge for particle and nuclear physics experiments. While the tuning of track reconstruction algorithms can depend strongly on details of the detector geometry, the algorithms currently in use by experiments share many common features. At the same time, the intense environment of the High-Luminosity LHC accelerator and other future experiments is expected to put even greater computational stress on track reconstruction software, motivating the development of more performant algorithms. We present here A Common Tracking Software (ACTS) toolkit, which draws on the experience with track reconstruction algorithms in the ATLAS experiment and presents them in an experiment-independent and framework-independent toolkit. It provides a set of high-level track reconstruction tools which are agnostic to the details of the detection technologies and magnetic field configuration and tested for strict thread-safety to support multi-threaded event processing. We discuss the conceptual design and technical implementation of ACTS, selected applications and performance of ACTS, and the lessons learned.
DOI: 10.1142/9789812799753_0058
2001
SUPER- AND HYPERDEFORMED STATES IN THE ACTINIDE REGION
DOI: 10.1556/aph.13.2001.1-3.1
2001
Dedication
DOI: 10.1142/9789812776723_0030
2002
SPECTROSCOPY OF SUPER- AND HYPERDEFORMED ACTINIDE NUCLEI