
Viktor Khristenko

Papers by Viktor Khristenko indexed on OA.mg are listed below.

DOI: 10.1088/1748-0221/12/03/p03017
2017
Cited 9 times
Radiation Hard Plastic Scintillators for a New Generation of Particle Detectors
The radiation hardness of specific scintillating materials used in particle physics experiments is one of the main focuses of research in detector development. This report summarizes the preparation methods, light yield characterization and radiation damage tests of a plastic scintillator with a polysiloxane base and pTP and bis-MSB dopants. The scintillator is shown to be a promising candidate for particle detectors with its intense light output around 400 nm and very little scintillation or transmission loss after proton irradiation of 4 × 10⁵ Gy.
DOI: 10.1088/1748-0221/7/07/p07004
2012
Cited 6 times
Quartz plate calorimeter prototype with wavelength shifting fibers
Quartz plate calorimeters are considered as hadronic calorimeter options for upgrading the Large Hadron Collider experiments. Previous studies have shown that quartz can resist up to 12 MGy of proton irradiation. Uniformly distributed wavelength shifting fibers embedded in the quartz plates are shown to solve the problem of the low visible light yield of the Cherenkov process. Here, we report the performance tests of a 20-layer quartz plate calorimeter prototype constructed with this approach. The prototype was tested at the CERN H2 area in hadronic and electromagnetic configurations, at various pion and electron beam energies. We report the beam test and simulation results of this prototype and discuss future directions for manufacturing radiation-hard wavelength shifting fibers for this type of hadronic calorimeter design.
DOI: 10.1088/1742-6596/1085/4/042030
2018
Cited 6 times
CMS Analysis and Data Reduction with Apache Spark
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at the analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover, these new tools are typically actively developed by large communities, often profiting from industry resources, and under open source licensing. These factors result in a boost for the adoption and maturity of the tools and for the communities supporting them, while at the same time helping to reduce the cost of ownership for the end users. In this talk, we present studies of using Apache Spark for end-user data analysis. We study the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets and the end analysis up to the publication plot. For the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility. The goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We present the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. For the second thrust, we present studies on using Apache Spark for a CMS Dark Matter physics search, investigating Spark's feasibility, usability and performance compared to the traditional ROOT-based analysis.
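As a rough illustration of the reduction step described in this abstract, the sketch below uses PySpark to filter a columnar copy of an event dataset and write a slimmed output. It assumes the input has already been converted to a columnar format such as Parquet; the paths and column names (muon_pt, met, etc.) are hypothetical placeholders, not the actual CMS schema or the project's code.

```python
# Minimal PySpark sketch of a dataset reduction step: read a columnar copy
# of an experiment dataset, apply an event selection, and write a slimmed
# ntuple-like output. Paths and column names are placeholders only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("toy-hep-reduction")
         .getOrCreate())

# Input assumed to be already converted to Parquet (one row per event).
events = spark.read.parquet("/data/cms/converted/events.parquet")

# Event selection: keep events with a high-pT muon and large missing energy.
reduced = (events
           .filter((F.col("muon_pt") > 30.0) & (F.col("met") > 100.0))
           .select("run", "lumi", "event", "muon_pt", "muon_eta", "met"))

# Write the reduced, analysis-ready output.
reduced.write.mode("overwrite").parquet("/data/cms/reduced/ntuple.parquet")

spark.stop()
```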
DOI: 10.1088/1748-0221/9/06/t06005
2014
Cited 5 times
Characterization of 1800 Hamamatsu R7600-M4 PMTs for CMS HF Calorimeter upgrade
The Hadronic Forward calorimeters of the CMS experiment are Cherenkov calorimeters that use quartz fibers and 1728 photomultiplier tubes (PMTs) for readout. The CMS detector upgrade project requires the current Hamamatsu R7525 PMTs to be replaced with 4-anode, high quantum efficiency R7600-M4 PMTs. The new PMTs will improve the detector resolution, as well as the capability to remove fake events caused by signals created in the glass window of the PMT. Here, we report the dark current, anode gain, transit time, transit time spread, pulse width, rise time, and linearity measurements performed on 1800 Hamamatsu R7600-200-M4 PMTs.
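For context on two of the quantities listed in this abstract, the toy example below computes an anode gain (mean single-photoelectron anode charge divided by the electron charge) and a transit time spread (here taken as the standard deviation of arrival times) from simulated data. The numbers are invented and the paper's actual measurement setup is not reproduced here.

```python
# Toy illustration of two PMT characterization quantities, computed from
# simulated single-photoelectron data. All values are made up.
import numpy as np

rng = np.random.default_rng(1)
e_charge = 1.602e-19  # electron charge in C

# Simulated single-photoelectron anode charges (C) and transit times (s)
charges = rng.normal(1.6e-13, 0.4e-13, size=10_000)
transit_times = rng.normal(9.5e-9, 0.3e-9, size=10_000)

gain = charges.mean() / e_charge        # mean output charge per photoelectron
tts = transit_times.std()               # spread of arrival times
print(f"anode gain ~ {gain:.2e}, transit time spread ~ {tts * 1e9:.2f} ns")
```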
DOI: 10.1051/epjconf/202024507035
2020
Cited 5 times
Using HEP experiment workflows for the benchmarking and accounting of WLCG computing resources
Benchmarking of CPU resources in WLCG has been based on the HEP-SPEC06 (HS06) suite for over a decade. It has recently become clear that HS06, which is based on real applications from non-HEP domains, no longer describes typical HEP workloads. The aim of the HEP-Benchmarks project is to develop a new benchmark suite for WLCG compute resources, based on real applications from the LHC experiments. By construction, these new benchmarks are thus guaranteed to have a score highly correlated to the throughputs of HEP applications, and a CPU usage pattern similar to theirs. Linux containers and the CernVM-FS filesystem are the two main technologies enabling this approach, which had been considered impossible in the past. In this paper, we review the motivation, implementation and outlook of the new benchmark suite.
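To make the idea of a throughput-based score concrete, here is a toy sketch that combines per-workload throughput ratios into a single number using a weighted geometric mean. The workload names, reference values and the choice of aggregation are assumptions for illustration, not the HEP-Benchmarks implementation.

```python
# Toy illustration of turning per-workload throughputs into one score.
# The real suite runs containerized LHC workloads; here the workload names,
# reference throughputs and the weighted-geometric-mean aggregation are
# illustrative assumptions only.
import math

def combined_score(throughputs, references, weights=None):
    """Weighted geometric mean of throughput ratios (measured / reference)."""
    names = list(throughputs)
    if weights is None:
        weights = {name: 1.0 for name in names}
    total_weight = sum(weights[name] for name in names)
    log_sum = sum(
        weights[name] * math.log(throughputs[name] / references[name])
        for name in names
    )
    return math.exp(log_sum / total_weight)

# Hypothetical events-per-second throughputs on the machine under test
measured = {"gen-sim": 4.2, "digi-reco": 1.7, "analysis": 55.0}
# Hypothetical throughputs on a reference machine
reference = {"gen-sim": 3.0, "digi-reco": 1.5, "analysis": 40.0}

print(f"combined benchmark score: {combined_score(measured, reference):.2f}")
```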
DOI: 10.1088/1742-6596/1525/1/012040
2020
Cited 3 times
Heterogeneous computing for the local reconstruction algorithms of the CMS calorimeters
The increasing LHC luminosity in Run III and, consequently, the increased number of simultaneous proton-proton collisions (pile-up) pose significant challenges for the CMS experiment. These challenges will affect not only the data taking conditions, but also the data processing environment of CMS, which requires an improvement in the online triggering system to match the required detector performance. In order to mitigate the increasing collision rates and complexity of a single event, various approaches are being investigated. Heterogeneous computing resources, recently becoming prominent and abundant, may perform significantly better for certain types of workflows. In this work, we investigate implementations of common algorithms targeting heterogeneous platforms, such as GPUs and FPGAs. The local reconstruction algorithms of the CMS calorimeters, given their granularity and intrinsic parallelizability, are among the first candidates considered for implementation on such heterogeneous platforms. We will present the current development status and preliminary performance results. Challenges and various obstacles related to each platform, together with the integration into the CMS experiment's framework, will be discussed further.
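The sketch below illustrates, with NumPy, why calorimeter local reconstruction maps well onto parallel hardware: every channel's amplitude can be estimated independently from its own digitized samples. The fixed-weight sum, pulse shapes and channel counts are simplified stand-ins, not the CMS algorithms or their GPU/FPGA implementations.

```python
# Toy sketch of per-channel amplitude reconstruction: each channel is
# processed independently, so the work parallelizes naturally. The weights
# and fake pulses are illustrative, not the CMS reconstruction.
import numpy as np

n_channels, n_samples = 100_000, 10  # roughly one event's worth of channels
rng = np.random.default_rng(0)

# Fake digitized pulses: pedestal + noise + a signal peaked in the middle samples
pedestal = 100.0
pulses = pedestal + rng.normal(0.0, 1.5, size=(n_channels, n_samples))
pulses[:, 4:7] += rng.uniform(0.0, 50.0, size=(n_channels, 1))

# Per-sample weights concentrated around the expected signal peak
weights = np.array([0, 0, 0, 0.1, 0.4, 0.4, 0.1, 0, 0, 0])

# Amplitude estimate for every channel at once; on a GPU each channel
# (or each channel/sample pair) maps naturally onto one thread.
amplitudes = (pulses - pedestal) @ weights
print(amplitudes.shape, float(amplitudes.mean()))
```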
DOI: 10.48550/arxiv.1307.8051
2013
Secondary Emission Calorimetry: Fast and Radiation-Hard
A novel calorimeter sensor for electron, photon and hadron energy measurement based on Secondary Emission (SE) to measure ionization is described, using sheet-dynodes directly as the active detection medium; the shower particles in an SE calorimeter cause direct secondary emission from dynode arrays comprising the sampling or absorbing medium. Data are presented on prototype tests and Monte Carlo simulations. This sensor can be made radiation hard at GigaRad levels, is easily transversely segmentable at the mm scale, and in a calorimeter has energy signal rise times and integration times comparable to or better than plastic scintillation/PMT calorimeters. Applications are mainly in the energy and intensity frontiers.
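As a toy picture of the multiplication process behind an SE sensor, the Monte Carlo below cascades secondary electrons through a stack of dynode stages. The per-stage yield, stage count and Poisson model are illustrative assumptions, not parameters from the paper.

```python
# Toy Monte Carlo of secondary-emission multiplication through a dynode stack.
# Shower particles liberate a few secondary electrons at the first dynode;
# each electron then multiplies stage by stage. The mean per-stage yield
# (delta) and the number of stages are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)

def dynode_cascade(n_primaries=5, n_stages=8, delta=3.0):
    """Return the number of electrons after the last dynode stage."""
    electrons = rng.poisson(delta, size=n_primaries).sum()  # first dynode
    for _ in range(n_stages - 1):
        electrons = rng.poisson(delta, size=electrons).sum()
    return electrons

gains = np.array([dynode_cascade() for _ in range(200)], dtype=float)
print(f"mean electrons out: {gains.mean():.3g}, "
      f"relative spread: {gains.std() / gains.mean():.2f}")
```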
DOI: 10.1051/epjconf/201921406030
2019
Using Big Data Technologies for HEP Analysis
The HEP community is approaching an era where the excellent performance of the particle accelerators in delivering collisions at high rates will force the experiments to record a large amount of information. The growing size of the datasets could potentially become a limiting factor in the capability to produce scientific results timely and efficiently. Recently, new technologies and new approaches have been developed in industry to answer the need to retrieve information as quickly as possible when analyzing PB and EB datasets. Providing the scientists with these modern computing tools will lead to rethinking the principles of data analysis in HEP, making the overall scientific process faster and smoother. In this paper, we present the latest developments and the most recent results on the usage of Apache Spark for HEP analysis. The study aims at evaluating the efficiency of the application of the new tools both quantitatively, by measuring the performance, and qualitatively, by focusing on the user experience. The first goal is achieved by developing a data reduction facility: working together with CERN openlab and Intel, CMS replicates a real physics search using Spark-based technologies, with the ambition of reducing 1 PB of public data collected by the CMS experiment to 1 TB of data in a format suitable for physics analysis, in 5 hours. The second goal is achieved by implementing multiple physics use cases in Apache Spark, using as input preprocessed datasets derived from official CMS data and simulation. By performing different end analyses, up to the publication plots, on different hardware, feasibility, usability and portability are compared to those of a traditional ROOT-based workflow.
DOI: 10.48550/arxiv.1711.00375
2017
CMS Analysis and Data Reduction with Apache Spark
DOI: 10.17077/etd.co8bu9ri
2018
A search for the standard model Higgs boson in the μ+μ− decay channel in pp collisions at √s = 13 TeV with CMS, calibration of the CMS Hadron Forward Calorimeter, and simulations of modern calorimeter systems
2019
arXiv: Using Big Data Technologies for HEP Analysis
2019
A Collection of White Papers from the BDEC2 Workshop in San Diego, CA
DOI: 10.1051/epjconf/202125102042
2021
Exploitation of HPC Resources for data intensive sciences
The Large Hadron Collider (LHC) will enter a new phase beginning in 2027 with the upgrade to the High Luminosity LHC (HL-LHC). The increase in the number of simultaneous collisions, coupled with a more complex structure of a single event, will result in each LHC experiment collecting, storing, and processing exabytes of data per year. The amount of generated and/or collected data greatly outweighs the expected available computing resources. In this paper, we discuss efficient usage of HPC resources as a prerequisite for data-intensive science at exascale. First, we discuss the experience of porting the CMS Hadron and Electromagnetic calorimeter reconstruction code to utilize Nvidia GPUs within the DEEP-EST project; second, we look at the tools and their adoption for benchmarking the variety of resources available at HPC centers. Finally, we touch on one of the most important aspects of the future of HEP: how to handle the flow of petabytes of data to and from computing facilities, be it clouds or HPCs, for exascale data processing in a flexible, scalable and performant manner. These investigations are a key contribution to technical work within the HPC collaboration among CERN, SKA, GEANT and PRACE.
DOI: 10.1080/10611991.2001.11049773
2001
The Reform of Interbudgetary Relations
2002
FISCAL FEDERALISM DEVELOPMENT IN RUSSIA: RESULTS OF THE 1990S
The evolution of intergovernmental fiscal relations in Russia in the 1990s is analysed, including the results of implementing the Concept of intergovernmental fiscal relations reform in the Russian Federation in 1999-2001. On the basis of the centralisation of budget resources and the decentralisation of budget responsibilities, alternative approaches are elaborated for evaluating the results and prospects of the formation of a Russian model of fiscal federalism. The paper argues for a reorientation of policy in the field of intergovernmental fiscal relations towards distinguishing between tax and budget powers and distributing responsibilities among authorities of different levels, in view of the stable proportions of budget resource allocation achieved by 2002. The main points of the implementation of the government Programme for the development of fiscal federalism in the Russian Federation for the period until 2005 are presented.
1977
Induction enhancement of magnetic field of solenoids made of NT-50 alloy at T < 4.2 K