
Dmitry Golubkov


DOI: 10.1007/jhep10(2012)093
2012
Cited 268 times
Measurement of the neutrino velocity with the OPERA detector in the CNGS beam
Abstract The OPERA neutrino experiment at the underground Gran Sasso Laboratory has measured the velocity of neutrinos from the CERN CNGS beam over a baseline of about 730 km. The measurement is based on data taken by OPERA in the years 2009, 2010 and 2011. Dedicated upgrades of the CNGS timing system and of the OPERA detector, as well as a high precision geodesy campaign for the measurement of the neutrino baseline, allowed reaching comparable systematic and statistical accuracies. An arrival time of CNGS muon neutrinos with respect to the one computed assuming the speed of light in vacuum of $\left(6.5 \pm 7.4\,(\mathrm{stat.})\,{}^{+8.3}_{-8.0}\,(\mathrm{sys.})\right)\,\mathrm{ns}$ was measured, corresponding to a relative difference of the muon neutrino velocity with respect to the speed of light of $(v-c)/c = \left(2.7 \pm 3.1\,(\mathrm{stat.})\,{}^{+3.4}_{-3.3}\,(\mathrm{sys.})\right) \times 10^{-6}$. The above result, obtained by comparing the time distributions of neutrino interactions and of protons hitting the CNGS target in 10.5 μs long extractions, was confirmed by a test performed at the end of 2011 using a short-bunch beam that allowed the neutrino time of flight to be measured at the single-interaction level.
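As a quick plausibility check of the quoted central value (a back-of-the-envelope estimate, not the collaboration's analysis), the relative velocity difference is the measured time shift divided by the light travel time over the baseline:

\[
\frac{v - c}{c} \approx \frac{\delta t}{L/c} = \frac{(3.0 \times 10^{8}\,\mathrm{m/s}) \times (6.5 \times 10^{-9}\,\mathrm{s})}{7.3 \times 10^{5}\,\mathrm{m}} \approx 2.7 \times 10^{-6},
\]

in agreement with the value quoted above.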
DOI: 10.1088/1748-0221/12/05/p05011
2017
Cited 26 times
The active muon shield in the SHiP experiment
The SHiP experiment is designed to search for very weakly interacting particles beyond the Standard Model which are produced in a 400 GeV/c proton beam dump at the CERN SPS. An essential task for the experiment is to keep the Standard Model background level to less than 0.1 event after 2×10²⁰ protons on target. In the beam dump, around 10¹¹ muons will be produced per second. The muon rate in the spectrometer has to be reduced by at least four orders of magnitude to avoid muon-induced combinatorial background. A novel active muon shield is used to magnetically deflect the muons out of the acceptance of the spectrometer. This paper describes the basic principle of such a shield, its optimization and its performance.
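For orientation, the deflection at the heart of such a shield can be sized with the standard field-integral relation Δp_T [GeV/c] ≈ 0.3·B[T]·L[m]; the sketch below applies it with illustrative numbers (the field strength, shield length and muon momentum are assumptions, not the published shield parameters):

# Back-of-the-envelope deflection estimate (illustration only; the published
# shield optimization is far more involved). Uses the standard relation
# delta_pT [GeV/c] ~ 0.3 * B [T] * L [m] for a charged particle in a dipole field.
def pt_kick_gev(b_tesla, length_m):
    """Transverse momentum kick from the field integral B*L."""
    return 0.3 * b_tesla * length_m

def deflection_angle_rad(p_gev, b_tesla, length_m):
    """Small-angle deflection of a muon with momentum p_gev."""
    return pt_kick_gev(b_tesla, length_m) / p_gev

# Assumed example values: 1.7 T over 34 m of magnetized iron, 350 GeV muon.
angle = deflection_angle_rad(350.0, 1.7, 34.0)
print(f"kick = {pt_kick_gev(1.7, 34.0):.1f} GeV/c, angle = {angle * 1e3:.0f} mrad")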
DOI: 10.1088/1742-6596/664/6/062005
2015
Cited 15 times
Scaling up ATLAS production system for the LHC Run 2 and beyond: project ProdSys2
The Big Data processing needs of the ATLAS experiment grow continuously, as more data and more use cases emerge. For Big Data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, submitted by the ATLAS workload management system (PanDA) and executed on the Grid. Our experience shows that the rate of task submission has grown exponentially over the years. To scale up the ATLAS production system for new challenges, we started the ProdSys2 project. PanDA has been upgraded with the Job Execution and Definition Interface (JEDI). Patterns in ATLAS data transformation workflows, which are composed of many tasks, provided the basis for a scalable production system framework with template definitions of many-task workflows. These workflows are implemented in the Database Engine for Tasks (DEfT), which generates individual tasks for processing by JEDI. We report on the ATLAS experience with many-task workflow patterns in preparation for the LHC Run 2.
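To make the template idea concrete, here is a minimal sketch (hypothetical names and structures, not the actual ProdSys2 or DEfT API) of how a template definition for a many-task workflow could be expanded into a chain of concrete tasks for a given input dataset:

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    transformation: str    # the software application applied to the input
    input_dataset: str
    output_dataset: str

# One workflow template: an ordered list of (step, transformation) pairs.
WORKFLOW_TEMPLATE = [
    ("simul", "Simulation"),
    ("reco", "Reconstruction"),
]

def expand_template(dataset):
    """Generate the chain of tasks for one input dataset from the template."""
    tasks, current = [], dataset
    for step, transform in WORKFLOW_TEMPLATE:
        output = current + "." + step
        tasks.append(Task(step + ":" + dataset, transform, current, output))
        current = output   # each step consumes the previous step's output
    return tasks

for task in expand_template("mc15.999999.evgen"):   # hypothetical dataset name
    print(task)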
DOI: 10.1088/1742-6596/513/3/032078
2014
Cited 12 times
Task Management in the New ATLAS Production System
This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top-level workflow manager which translates physicists' needs for production-level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.
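The Meta-Task concept can be illustrated with a small dependency-scheduling sketch (assumed structures, not the DEFT schema): inter-dependent tasks are held as a DAG and released to the job-definition layer only once the tasks they depend on have completed.

from graphlib import TopologicalSorter   # Python 3.9+

# Task dependencies within one Meta-Task: each key lists the tasks it waits on.
meta_task = {
    "evgen": set(),
    "simul": {"evgen"},
    "reco": {"simul"},
    "merge": {"reco"},
}

sorter = TopologicalSorter(meta_task)
sorter.prepare()
while sorter.is_active():
    ready = list(sorter.get_ready())       # tasks whose inputs are complete
    print("release to job layer:", ready)  # in ProdSys2 terms: hand to JEDI
    sorter.done(*ready)                    # pretend those tasks finished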
DOI: 10.1051/epjconf/202024504035
2020
Cited 10 times
ATLAS Data Carousel
The ATLAS experiment at CERN’s LHC stores detector and simulation data in raw and derived data formats across more than 150 Grid sites world-wide, currently in total about 200 PB on disk and 250 PB on tape. Data have different access characteristics due to various computational workflows, and can be accessed from different media, such as remote I/O, disk cache on hard disk drives or SSDs. Also, larger data centers provide the majority of offline storage capability via tape systems. For the High-Luminosity LHC (HL-LHC), the estimated data storage requirements are several factors bigger than the present forecast of available resources, based on a flat budget assumption. On the computing side, ATLAS Distributed Computing has been very successful in recent years with high performance and high throughput computing integration and in using opportunistic computing resources for the Monte Carlo simulation. On the other hand, equivalent opportunistic storage does not exist. ATLAS started the Data Carousel project to increase the usage of less expensive storage, i.e. tapes or even commercial storage, so it is not limited to tape technologies exclusively. Data Carousel orchestrates data processing between workload management, data management, and storage services with the bulk data resident on offline storage. The processing is executed by staging and promptly processing a sliding window of inputs onto faster buffer storage, such that only a small percentage of input data are available at any one time. With this project, we aim to demonstrate that this is the natural way to dramatically reduce our storage cost. The first phase of the project started in the fall of 2018 and was devoted to I/O tests of the sites' archiving systems. Phase II now requires a tight integration of the workload and data management systems. Additionally, the Data Carousel studies the feasibility of running multiple computing workflows from tape. The project is progressing very well and the results presented in this document will be used before the LHC Run 3.
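The sliding-window idea can be sketched in a few lines (hypothetical helper functions; the real orchestration spans the workload, data management and storage services): at any moment only a fixed window of inputs occupies buffer disk, and each processed input frees space for the next tape recall.

from collections import deque

def stage_from_tape(name): print("recall from tape:", name)
def process(name):         print("process:", name)
def release_buffer(name):  print("free buffer:", name)

def carousel(inputs, window):
    """Process tape-resident inputs while keeping at most `window` staged."""
    pending, staged = deque(inputs), deque()
    while pending or staged:
        while pending and len(staged) < window:
            name = pending.popleft()
            stage_from_tape(name)   # recall onto faster buffer storage
            staged.append(name)
        name = staged.popleft()
        process(name)               # prompt processing of the staged input
        release_buffer(name)        # free disk so the window can slide

carousel(["raw.%04d" % i for i in range(8)], window=2)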
DOI: 10.1088/1742-6596/396/3/032049
2012
Cited 11 times
ATLAS Grid Data Processing: system evolution and scalability
The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users providing data for physics analysis and other ATLAS main activities.
DOI: 10.1088/1742-6596/608/1/012015
2015
Cited 6 times
Multilevel Workflow System in the ATLAS Experiment
The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager, ProdSys2, generates the actual workflow tasks, and their jobs are executed across more than a hundred distributed computing sites by PanDA, the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEfT) empowers production managers with templated workflow definitions. On the next level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definition tailored to the sites' capabilities. We report on scaling up the production system to accommodate a growing number of requirements from the main ATLAS areas: Trigger, Physics and Data Preparation.
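For illustration (the step names paraphrase the abstract; the code itself is an assumption, not ProdSys2), the Monte Carlo chain can be written down as an ordered list of transformations, each task consuming the dataset produced by the previous one:

# The Monte Carlo chain as an ordered list of transformations; merging and
# filtering steps would be interleaved wherever the chain needs optimizing.
MC_CHAIN = [
    "generate",     # hard-process generation / configuration
    "hadronize",    # signal and minimum-bias (pileup) events
    "simulate",     # energy deposition in the detector
    "digitize",     # electronics response
    "trigger",      # trigger simulation
    "reconstruct",  # event reconstruction
    "ntuple",       # ROOT ntuples for physics analysis
]

def run_chain(dataset):
    for step in MC_CHAIN:
        dataset = dataset + "." + step   # each task defines a new dataset
        print("task:", step, "->", dataset)
    return dataset

run_chain("mc.sample")   # hypothetical dataset name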
DOI: 10.1109/ivmem51402.2020.00010
2020
High Energy Physics Data Popularity: ATLAS Datasets Popularity Case Study
The amount of scientific data generated by the LHC experiments has hit the exabyte scale. These data are transferred, processed and analyzed in hundreds of computing centers. The popularity of data among individual physicists and university groups has become one of the key factors of efficient data management and processing. It was actively used by the experiments during LHC Run 1 and Run 2 for the central data processing, and allowed data placement policies to be optimized and the workload to be spread more evenly over the existing computing resources. Besides the central data processing, the LHC experiments provide storage and computing resources for physics analysis to thousands of users. Taking into account the significant increase of data volume and processing time after the collider upgrade for the High Luminosity Runs (2027-2036), intelligent data placement based on data access patterns becomes even more crucial than at the beginning of the LHC. In this study we provide a detailed exploration of data popularity using ATLAS data samples. In addition, we analyze the geolocations of computing sites where the data were processed, and the locality of the home institutes of users carrying out physics analysis. Cartography visualization based on these data allows the correlation of existing data placement with physics needs, providing a better understanding of data utilization by different categories of users' tasks.
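A toy version of popularity-driven placement (assumed data layout and thresholds, not the ATLAS policy) shows the basic mechanics: count accesses per dataset and let the count determine how many disk replicas the dataset keeps.

from collections import Counter

# (dataset, site) access records as they might come out of a trace archive;
# the names are illustrative placeholders.
access_log = [
    ("data18.AOD", "CERN"), ("data18.AOD", "BNL"), ("data18.AOD", "CERN"),
    ("mc16.DAOD", "DESY"), ("data15.RAW", "CERN"),
]

popularity = Counter(dataset for dataset, _site in access_log)

def target_replicas(n_accesses):
    """Hotter data earns more disk copies; cold data falls back to tape."""
    if n_accesses >= 3:
        return 2
    if n_accesses >= 1:
        return 1
    return 0   # tape only

for dataset, n in popularity.most_common():
    print(dataset, n, "->", target_replicas(n), "disk replica(s)")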
2016
ATLAS production system
DOI: 10.1088/1742-6596/513/3/032101
2014
Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns
During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing on the Grid, their benefits come at a cost and introduce delays, making performance prediction difficult. Reliability Engineering provides a framework for a fundamental understanding of Big Data processing on the Grid; it is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability Engineering approach supported continuous improvements in data reprocessing throughput during LHC data taking: the throughput doubled in 2011 vs. 2010 reprocessing, then quadrupled in 2012 vs. 2011 reprocessing. We present the Reliability Engineering analysis of ATLAS data reprocessing campaigns, providing the foundation needed to scale up Big Data processing technologies beyond the petascale.
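A toy reliability calculation illustrates the trade-off the abstract describes (the model is an assumption, not the paper's analysis): automatic retries raise the probability that a job eventually succeeds, but each retry adds expected attempts, i.e. cost and delay.

def success_prob(p_fail, retries):
    """P(at least one of 1 + retries attempts succeeds)."""
    return 1.0 - p_fail ** (1 + retries)

def expected_attempts(p_fail, retries):
    """Mean number of attempts, capped at 1 + retries."""
    return sum(p_fail ** k for k in range(1 + retries))

# Assumed example: 10% per-job failure rate, up to 3 automatic retries.
print(success_prob(0.10, 3))        # 0.9999: near-certain eventual success
print(expected_attempts(0.10, 3))   # ~1.111 attempts per job on average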
DOI: 10.1134/s1063778821100306
2021
Study of $B_s^0$ Meson Decays to J/ψK⁺K⁻π⁺π⁻ Final State
DOI: 10.48550/arxiv.1508.07174
2015
Unified System for Processing Real and Simulated Data in the ATLAS Experiment
The physics goals of the next Large Hadron Collider run include high precision tests of the Standard Model and searches for new physics. These goals require detailed comparison of data with computational models simulating the expected data behavior. To highlight the role which modeling and simulation plays in future scientific discovery, we report on use cases and experience with a unified system built to process both real and simulated data of growing volume and variety.
DOI: 10.1016/j.procs.2015.11.069
2015
Big Data Processing in the ATLAS Experiment: Use Cases and Experience
Abstract The physics goals of the next Large Hadron Collider run include high precision tests of the Standard Model and searches for new physics. These goals require detailed comparison of data with computational models simulating the expected data behavior. To highlight the role which modeling and simulation plays in future scientific discovery, we report on use cases and experience with a unified system built to process both real and simulated data of growing volume and variety.
2015
The Next Generation ATLAS Production System
2016
Corporate political connections in Russia and their implications for firm-level operational, financial, and investment activities
This dissertation consists of three chapters representing three self-contained essays on the effects of corporate political connections on firm operational, financial, and investment activities. The research is based on a sample of Russian non-state-owned companies operating within the period of 2000-2013.

Chapter 1 investigates the effect of corporate political connections on firm performance and profitability. I find that political connections to the executive branch of the central (federal) government positively affect the connected firm's return on sales, return on assets, return on equity and market-to-book ratio. These improvements are conditioned by better operating performance of the connected firm. At the same time, financial and taxation costs are not seriously affected by political connections. Contrary to the effect of federal ties, connections to regional authorities bring more costs than benefits to the connected firms, with both operating performance and overall performance indicators showing decline in the presence of regional political ties. The latter effect can be explained by the greater costs which regionally connected firms have to bear in order to contribute to the economic development of the regions and provinces to which they are connected. Overall, Chapter 1 provides direct evidence on the effects of corporate political connections on firm profitability, performance, and their basic determinants, also showing that different types of connections affect performance differently.

Chapter 2 examines the effect of corporate political and bank connections on the firm-level cost of debt. I find that corporate connections to banks decrease the cost of debt of a firm. However, this effect works only if a firm has connections to a state-owned bank, not a private bank, and the connections to a state-owned bank have to be maintained through a significant shareholder of the firm, not the CEO or a board member. I also find that corporate connections to the executive branch of the central (federal) government decrease the cost of debt. The latter effect works only if political connections are strong and cohesive enough, i.e. they were formed under circumstances that required a high level of mutual trust and reliability between the parties. Overall, the second chapter provides evidence that political and bank connections do indeed affect the cost of debt and reveals important conditions under which connections can have an impact on this variable.

Chapter 3 investigates the effect of corporate political connections on firm-level acquisitions activity. I find that political connections to the central (federal) government positively affect a firm's propensity to purchase stakes in other firms. This effect works well in the domestic market, but not in foreign markets. It also works well with regard to acquisitions of stakes in the open market, but, ironically, not in the process of privatization. At the same time, I find that political connections to regional governments are negatively associated with the probability of a stake being purchased by the acquirer. The latter effect may be explained by the fact that in the "small world" of regional political and business elites it is risky for participants to violate the regional equilibrium of wealth and power; thus connected firms demonstrate acquisitions activity levels lower than those of the reference group of unconnected firms. Overall, the third chapter provides evidence on the effects of corporate political connections on bidders' acquisitions activity, showing, however, that different types of connections may differently impact a bidder's propensity to acquire stakes in other firms.
2014
PanDA Beyond ATLAS: A Scalable Workload Management System For Data Intensive Science
DOI: 10.1088/1742-6596/798/1/012174
2017
Calibration system of the LHCb hadronic calorimeter
The Hadron Calorimeter of LHCb (HCAL) is one of the four sub-detectors of the experiment's calorimetric system, which also includes the Scintillator Pad Detector (SPD), the Pre-Shower detector (PS), and the electromagnetic calorimeter (ECAL). The main purpose of the HCAL is to provide data to the Level-0 trigger for selecting events with high-transverse-energy hadrons. It is important to have a precise and reliable calibration system which produces results immediately after the calibration run. The LHCb HCAL is equipped with a calibration system based on a 137Cs radioactive source embedded into the calorimeter structure. It allows an absolute calibration to be obtained with good precision and the technical condition of the detector to be monitored.
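The logic of such a source-based calibration can be sketched as follows (the numbers and names are assumptions for illustration, not the LHCb procedure): each cell's measured response to the 137Cs source is compared with a nominal reference to derive a multiplicative gain correction.

REFERENCE_RESPONSE = 100.0   # assumed nominal response to the Cs source (a.u.)

def gain_correction(measured_response):
    """Multiplicative correction restoring a cell to its nominal response."""
    return REFERENCE_RESPONSE / measured_response

# Example: a cell whose scintillator has aged to 92% of nominal light yield.
print(f"correction factor: {gain_correction(92.0):.3f}")   # ~1.087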
2017
The ATLAS Production System Evolution
2017
Predictive analytics as an essential mechanism for situational awareness at the ATLAS Production System
DOI: 10.1088/1742-6596/1085/3/032051
2018
Predictive analytics tools to adjust and monitor performance metrics for the ATLAS Production System
Information such as an estimate of the processing time or the possibility of a system outage (abnormal behaviour) helps in monitoring system performance and predicting its next state. The current cyber-infrastructure of the ATLAS Production System presents computing conditions in which contention for resources among high-priority data analyses happens routinely and might lead to significant workload and data handling interruptions. The inability to monitor and predict the behaviour of the analysis process (its duration) and the state of the system itself motivates the design of built-in situational awareness analytic tools.
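One simple form such a built-in check could take (a plain z-score over historical task durations; an assumption for illustration, not the ProdSys2 tool) is sketched below:

from statistics import mean, stdev

def abnormal(durations_h, new_h, z_cut=3.0):
    """Flag a task whose duration deviates strongly from the history."""
    mu, sigma = mean(durations_h), stdev(durations_h)
    return sigma > 0 and abs(new_h - mu) / sigma > z_cut

history = [5.1, 4.8, 5.5, 5.0, 4.9, 5.2]   # past task durations in hours
print(abnormal(history, 5.3))    # False: within the normal spread
print(abnormal(history, 12.0))   # True: likely abnormal behaviour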
2018
The ATLAS Production System Predictive Analytics service: an approach for intelligent task analysis
DOI: 10.1051/epjconf/201921403007
2019
Advanced Analytics service to enhance workflow control at the ATLAS Production System
Modern workload management systems that are responsible for central data production and processing in High Energy and Nuclear Physics experiments have highly complicated architectures and require a specialized control service for balancing resources and processing components. Such a service represents a comprehensive set of analytical tools, management utilities and monitoring views aimed at providing a deep understanding of internal processes, and is considered an extension of the situational awareness analytic service. Its key points are the analysis of task processing, e.g., selection and regulation of the key task features that most affect its processing; modeling of the life-cycles of processed data for further analysis, e.g., generating guidelines for a particular stage of data processing; and forecasting processes with a focus on data and task states as well as on the management system itself, e.g., to detect the source of any potential malfunction. The prototype of the advanced analytics service will be an essential part of the analytical service of the ATLAS Production System (ProdSys2). The advanced analytics service uses tools such as Time-To-Complete (TTC) estimation for units of processing (i.e., tasks and chains of tasks) to control the processing state and to highlight abnormal operations and executions. The obtained metrics are used in decision-making processes to regulate system behaviour and resource consumption.
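The TTC idea can be illustrated with a toy estimate (an assumed model and field names, not the production service): for each unfinished task, divide the remaining jobs by the observed completion rate and sum over the chain.

def ttc_hours(tasks):
    """Each task reports total/done job counts and an observed jobs-per-hour rate."""
    return sum(
        (t["total_jobs"] - t["done_jobs"]) / t["jobs_per_hour"]
        for t in tasks
        if t["done_jobs"] < t["total_jobs"]
    )

# A chain of tasks executed one after another (illustrative numbers).
chain = [
    {"total_jobs": 1000, "done_jobs": 1000, "jobs_per_hour": 120},  # finished
    {"total_jobs": 500,  "done_jobs": 200,  "jobs_per_hour": 60},
    {"total_jobs": 800,  "done_jobs": 0,    "jobs_per_hour": 100},
]
print(f"estimated TTC: {ttc_hours(chain):.1f} h")   # 300/60 + 800/100 = 13.0 h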
DOI: 10.1134/1.1553505
2003
Contribution of the hadronic component of a virtual photon to the structure function for charm leptoproduction at high x and Q²
DOI: 10.48550/arxiv.hep-ph/0106284
2001
A Contribution of Photon Hadronic Component in the Leptoproduction Charmed Structure Function at Large x and Q^2
We calculated the contribution of the photon hadronic component to the charmed structure function of leptoproduction. The contribution comes from scattering of the c quarks of the virtual photon on the quarks and gluons of the proton. Comparison of our calculations with the measurements of charm production in $\mu^+ p$ scattering by EMC shows that the contribution of the resolved photon can explain the excess of the EMC data over the predictions of the photon-gluon fusion model at large momentum transfers. Thus, one does not need to invoke a non-perturbative admixture of charmed quarks in the proton wave function ("intrinsic charm") to describe such an excess in the EMC data.
1999
Progress report towards an experiment to study atmospheric neutrino oscillations with a massive magnetized iron detector
1998
Towards an experiment to study atmospheric neutrino oscillations with a massive magnetized iron detector (Progress Report)