
Artur Gottmann

DOI: 10.48550/arxiv.2403.14903
2024
Modeling Distributed Computing Infrastructures for HEP Applications
Predicting the performance of various infrastructure design options in complex federated infrastructures with computing sites distributed over a wide area network that support a plethora of users and workflows, such as the Worldwide LHC Computing Grid (WLCG), is not trivial. Due to the complexity and size of these infrastructures, it is not feasible to deploy experimental test-beds at large scales merely for the purpose of comparing and evaluating alternate designs. An alternative is to study the behaviour of these systems using simulation. This approach has been used successfully in the past to identify efficient and practical infrastructure designs for High Energy Physics (HEP). A prominent example is the Monarc simulation framework, which was used to study the initial structure of the WLCG. New simulation capabilities are needed to simulate large-scale heterogeneous computing systems with complex networks, data access, and caching patterns. A modern tool, based on the SimGrid and WRENCH simulation frameworks, to simulate HEP workloads that execute on distributed computing infrastructures is outlined. Studies of its accuracy and scalability are presented using HEP as a case study. Hypothetical adjustments to prevailing computing architectures in HEP are studied, providing insights into the dynamics of a part of the WLCG and identifying candidates for improvement.
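As a hedged illustration of the simulation approach described above (not the actual SimGrid/WRENCH-based tool), the sketch below simulates jobs that fetch remote input over a WAN and compete for cores at a few sites, using the simpy discrete-event library; all site names, core counts, and job parameters are invented for the example.

```python
# Minimal discrete-event sketch of jobs running on distributed sites.
# Illustrative only: it mimics the *idea* of simulating a computing grid,
# not the SimGrid/WRENCH-based tool described in the paper.
# Site names, core counts, and job parameters are invented.
import random
import simpy

def job(env, name, site, cpu_hours, input_gb, wan_gbps):
    transfer_time = input_gb * 8 / wan_gbps  # seconds to fetch remote input
    yield env.timeout(transfer_time)         # network transfer (no caching)
    with site["cores"].request() as core:    # wait for a free core
        yield core
        yield env.timeout(cpu_hours * 3600)  # processing time on one core
    print(f"{env.now / 3600:8.2f} h: {name} finished at {site['name']}")

env = simpy.Environment()
sites = [
    {"name": "T1_DE", "cores": simpy.Resource(env, capacity=100)},
    {"name": "T2_CH", "cores": simpy.Resource(env, capacity=40)},
]
random.seed(42)
for i in range(200):
    site = random.choice(sites)
    env.process(job(env, f"job{i:03d}", site,
                    cpu_hours=random.uniform(1, 8),
                    input_gb=random.uniform(1, 20),
                    wan_gbps=10.0))
env.run()
```

Swapping core counts, link bandwidths, or the site-selection policy in such a model is what makes it possible to compare hypothetical infrastructure designs without deploying a test-bed.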
DOI: 10.1051/epjconf/202429504032
2024
Modeling Distributed Computing Infrastructures for HEP Applications
DOI: 10.1051/epjconf/202429501020
2024
Efficient interface to the GridKa tape storage system
Providing a high-performance and reliable tape storage system is GridKa’s top priority. The GridKa tape storage system was recently migrated from IBM Spectrum Protect (IBM SP) to the High Performance Storage System (HPSS) for LHC and non-LHC HEP experiments. These are two different tape backends, each with its own design and specifics that need to be studied thoroughly. Taking the features and characteristics of HPSS into account, a new approach has been developed for flushing and staging files to and from the tape storage system. This new approach allows better-optimized and more efficient flush and stage operations and leads to a substantial improvement in the overall performance of the GridKa tape storage system. The efficient interface that was developed for IBM SP has been adapted to the HPSS use case to connect the experiments’ access point to the tape storage system. This contribution provides details on these changes and on the results of the Tape Challenge 2022 within the new HPSS tape storage configuration.
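The abstract does not spell out the new flush and stage logic; as a hedged sketch of the generic optimization such interfaces rely on, grouping pending recall requests by tape and ordering them by on-tape position so that each tape is mounted once and read sequentially, one could write (the request fields and data layout are assumptions for illustration, not the actual GridKa/HPSS interface):

```python
# Sketch of the generic idea behind optimized tape staging: group pending
# recall requests by tape and sort by on-tape position, so each tape is
# mounted once and read sequentially instead of seeking back and forth.
# The request fields ('tape', 'position') are assumptions for illustration.
from collections import defaultdict

def plan_stages(requests):
    """requests: list of dicts with 'file', 'tape', 'position'."""
    by_tape = defaultdict(list)
    for req in requests:
        by_tape[req["tape"]].append(req)
    plan = []
    for tape, reqs in by_tape.items():  # one mount per tape
        ordered = sorted(reqs, key=lambda r: r["position"])
        plan.append((tape, [r["file"] for r in ordered]))
    return plan

requests = [
    {"file": "a.root", "tape": "T01", "position": 530},
    {"file": "b.root", "tape": "T02", "position": 12},
    {"file": "c.root", "tape": "T01", "position": 40},
]
for tape, files in plan_stages(requests):
    print(tape, "->", files)  # T01 -> ['c.root', 'a.root'], T02 -> ['b.root']
```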
DOI: 10.1140/epjc/s10052-022-10070-0
2022
Cited 3 times
Punzi-loss: A non-differentiable metric approximation for sensitivity optimisation in the search for new particles
We present a novel implementation of a non-differentiable metric approximation and a corresponding loss-scheduling aimed at the search for new particles of unknown mass in high energy physics experiments. We call the loss-scheduling, based on the minimisation of a figure-of-merit-related function typical of particle physics, a Punzi-loss function, and the neural network that utilises this loss function a Punzi-net. We show that the Punzi-net outperforms standard multivariate analysis techniques and generalises well to mass hypotheses for which it was not trained. This is achieved by training a single classifier that provides a coherent and optimal classification of all signal hypotheses over the whole search space. Our result constitutes a complementary approach to fully differentiable analyses in particle physics. We implemented this work using PyTorch and provide users with full access to a public repository containing all the code and a training example.
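The actual implementation is available in the paper’s public repository; purely as an illustrative sketch, a Punzi-style figure of merit, eps_sig / (a/2 + sqrt(B)) in Punzi’s formulation, can be made differentiable by replacing hard cut-based counts with soft counts from the network’s sigmoid outputs, along the following lines (the smoothing scheme, normalization constants, and toy data here are assumptions, not the authors’ code):

```python
# Differentiable Punzi-style loss sketch (illustrative, not the authors' code).
# Soft counts from sigmoid network outputs replace hard cut-based counts,
# making the Punzi figure of merit eps / (a/2 + sqrt(B)) differentiable.
import torch

def punzi_loss(sig_scores, bkg_scores, n_bkg_expected, a=3.0):
    """sig_scores/bkg_scores: network outputs in (0, 1) after a sigmoid.
    n_bkg_expected: background events expected in data (assumed known)."""
    eps_sig = sig_scores.mean()                     # soft signal efficiency
    eps_bkg = bkg_scores.mean()                     # soft background efficiency
    B = n_bkg_expected * eps_bkg                    # expected background yield
    fom = eps_sig / (a / 2 + torch.sqrt(B + 1e-8))  # Punzi figure of merit
    return -fom                                     # maximise FOM = minimise loss

# Toy usage with a tiny classifier on invented data:
net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
sig = torch.randn(256, 4) + 1.0  # toy signal features
bkg = torch.randn(256, 4)        # toy background features
for _ in range(100):
    opt.zero_grad()
    loss = punzi_loss(net(sig).squeeze(), net(bkg).squeeze(), n_bkg_expected=1e4)
    loss.backward()
    opt.step()
```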
2018
Next-to-leading order reweighting method for simulated processes of gluon fusion Higgs boson production
DOI: 10.5445/ir/1000124886
2020
Global Interpretation of $\tau\tau$ Events in the Context of the Standard Model and Beyond
The nature of the interactions between elementary particles is nowadays successfully described by the Standard Model of particle physics (SM), whose predictions cover a wide range of observed phenomena, including the existence of the Higgs boson, discovered by the two independent experiments ATLAS and CMS. Measurements of its properties are compatible with the SM; however, there are strong reasons to consider it part of an extended Higgs sector, motivating searches for additional heavy Higgs bosons. Until now, measurements of the properties of the observed Higgs boson, and their interpretation in models beyond the Standard Model (BSM) in the framework of effective field theories, have been performed independently from searches for additional heavy Higgs bosons. The main topic of this thesis is the unification of these two analysis approaches, using the example of the H→ττ decay channel, into one consistent, global interpretation of ττ events, based on Run 2 CMS data at a centre-of-mass energy of 13 TeV corresponding to an integrated luminosity of 137 fb⁻¹. First, a measurement of the signal strengths of the observed Higgs boson yields an observed (expected) significance of 6.1 (5.0)σ for the gluon fusion and 1.9 (3.8)σ for the vector boson fusion production channel, establishing a good expected sensitivity to the observed Higgs boson. Then, a classic search for additional heavy Higgs bosons is performed, superseding the current CMS results based on data collected in 2016. In the last step, the two analyses are combined into a consistent interpretation of ττ events in the framework of BSM benchmark scenarios, demonstrating increased sensitivity to deviations between the scenario predictions and the properties of the observed Higgs boson, which leads to increased exclusion power on BSM benchmark scenarios.
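To make the quoted significances concrete: for a single counting experiment, the asymptotic significance of a signal s over a background b is Z = sqrt(2((s+b)ln(1+s/b) − s)). The toy calculation below illustrates the concept only; the thesis result comes from a full binned maximum-likelihood fit over many event categories, and the yields here are invented.

```python
# Asymptotic significance for a single counting experiment (Asimov formula),
# illustrating what an "observed (expected) significance" quantifies.
# The yields below are invented toy numbers, not results from the thesis.
import math

def asimov_significance(s, b):
    """Z = sqrt(2*((s+b)*ln(1+s/b) - s)) for signal s over background b."""
    return math.sqrt(2 * ((s + b) * math.log(1 + s / b) - s))

print(asimov_significance(s=100, b=250))  # ~6.0 sigma for these toy yields
```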
2021
Punzi-loss: A non-differentiable metric approximation for sensitivity optimisation in the search for new particles
We present a novel implementation of a non-differentiable metric approximation and a corresponding loss-scheduling aimed at the search for new particles of unknown mass in high energy physics experiments. We call the loss-scheduling, based on the minimisation of a figure-of-merit-related function typical of particle physics, a Punzi-loss function, and the neural network that utilises this loss function a Punzi-net. We show that the Punzi-net outperforms standard multivariate analysis techniques and generalises well to mass hypotheses for which it was not trained. Our result constitutes a step towards fully differentiable analyses in particle physics. This work is implemented using PyTorch, and we provide users with full access to a public repository containing all the code.