
Dmitri Konstantinov

Here are all the papers by Dmitri Konstantinov that you can download and read on OA.mg.

DOI: 10.1002/mp.14226
2020
Cited 103 times
Report on G4‐Med, a Geant4 benchmarking system for medical physics applications developed by the Geant4 Medical Simulation Benchmarking Group
Geant4 is a Monte Carlo code extensively used in medical physics for a wide range of applications, such as dosimetry, micro- and nanodosimetry, imaging, radiation protection, and nuclear medicine. Geant4 is continuously evolving, so it is crucial to have a system that benchmarks this Monte Carlo code against reference data for medical physics and performs regression testing. To respond to these needs, we developed G4-Med, a benchmarking and regression testing system of Geant4 for medical physics. G4-Med currently includes 18 tests, ranging from the benchmarking of fundamental physics quantities to the testing of Monte Carlo simulation setups typical of medical physics applications. Both electromagnetic and hadronic physics processes and models within the prebuilt Geant4 physics lists are tested. The tests included in G4-Med are executed on the CERN computing infrastructure via the geant-val web application, developed at CERN for Geant4 testing. The physical observables can be compared to reference data for benchmarking and to results of previous Geant4 versions for regression testing. This paper describes the tests included in G4-Med and shows the results derived from the benchmarking of Geant4 10.5 against reference data. Our results indicate that the Geant4 electromagnetic physics constructor G4EmStandardPhysics_option4 gives good agreement with the reference data for all the tests. The QGSP_BIC_HP physics list provides an overall adequate description of the physics involved in hadron therapy, including proton and carbon ion therapy. New tests should be included in the next stage of the project to extend the benchmarking to other physical quantities and application scenarios of interest for medical physics. The results presented and discussed in this paper will aid users in tailoring physics lists to their particular application.
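As a concrete illustration of how a user application selects one of the prebuilt physics lists benchmarked here, a minimal Geant4 sketch follows. The physics-list and run-manager classes are standard Geant4; the commented-out detector and action classes are hypothetical user code that a real application must register before initialization.

// Minimal sketch: registering a prebuilt Geant4 physics list and refining its
// EM part, as discussed in the paper. Not taken from G4-Med itself.
#include "G4RunManager.hh"
#include "QGSP_BIC_HP.hh"                      // hadronic reference physics list
#include "G4EmStandardPhysics_option4.hh"      // most accurate EM constructor per the benchmarks

int main() {
  auto* runManager = new G4RunManager;
  auto* physicsList = new QGSP_BIC_HP;
  physicsList->ReplacePhysics(new G4EmStandardPhysics_option4); // swap in the option4 EM physics
  runManager->SetUserInitialization(physicsList);
  // runManager->SetUserInitialization(new MyDetectorConstruction);   // hypothetical user geometry
  // runManager->SetUserInitialization(new MyActionInitialization);   // hypothetical user actions
  // runManager->Initialize();   // requires the detector and actions above to be registered
  delete runManager;
  return 0;
}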
DOI: 10.1016/j.cpc.2020.107310
2021
Cited 40 times
The HepMC3 event record library for Monte Carlo event generators
In high-energy physics, Monte Carlo event generators (MCEGs) are used to simulate the interactions of high-energy particles. MCEG event records store the information on the simulated particles and their relationships, and thus reflect the simulated evolution of physics phenomena in each collision event. We present the HepMC3 library, a next-generation framework for MCEG event record encoding and manipulation, which builds on the functionality of its widely-used predecessors to enable more sophisticated algorithms for event-record analysis. As compared to previous versions, the event record structure has been simplified, while adding the possibility to encode arbitrary information. The I/O functionality has been extended to support common input and output formats of various HEP MCEGs, including formats used in Fortran MCEGs, the formats established by the HepMC2 library, and binary formats such as ROOT; custom input or output handlers may also be used. HepMC3 is already supported by popular modern MCEGs and can replace the older HepMC versions in many others. Program Title: HepMC 3 Program Files doi: http://dx.doi.org/10.17632/6fpf82rp8b.1 Licensing provisions: GPLv3 Programming language: C++ Nature of problem: The simulation of elementary particle reactions at high energies requires storing and/or modifying information related to the simulation. Solution method: Provide a library that allows for the manipulation and storage of information originating from simulations of elementary-particle reactions at high energies, and that can be interfaced to modern Monte Carlo event generator software as well as to particle detector simulation programs.
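To give a flavour of the library's C++ interface, here is a minimal sketch of building a one-vertex event and writing it out, based on the public HepMC3 API; the particle content and the attribute name are arbitrary illustrations, not part of the paper.

// Minimal HepMC3 usage sketch: build a one-vertex event and write it to an
// ASCII file. Particle content and attribute are arbitrary examples.
#include <memory>
#include "HepMC3/GenEvent.h"
#include "HepMC3/GenParticle.h"
#include "HepMC3/GenVertex.h"
#include "HepMC3/Attribute.h"
#include "HepMC3/Units.h"
#include "HepMC3/WriterAscii.h"

int main() {
  using namespace HepMC3;
  GenEvent evt(Units::GEV, Units::MM);

  // Two incoming beam protons (status 4) and one outgoing particle (status 1).
  auto p1 = std::make_shared<GenParticle>(FourVector(0, 0,  7000, 7000), 2212, 4);
  auto p2 = std::make_shared<GenParticle>(FourVector(0, 0, -7000, 7000), 2212, 4);
  auto p3 = std::make_shared<GenParticle>(FourVector(0, 0,     0, 14000),  25, 1);

  auto v = std::make_shared<GenVertex>();
  v->add_particle_in(p1);
  v->add_particle_in(p2);
  v->add_particle_out(p3);
  evt.add_vertex(v);

  // Arbitrary extra information attached to the event record.
  evt.add_attribute("signal_process_id", std::make_shared<IntAttribute>(123));

  WriterAscii writer("example.hepmc3");
  writer.write_event(evt);
  writer.close();
  return 0;
}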
DOI: 10.1007/s41781-021-00055-1
2021
Cited 38 times
Challenges in Monte Carlo Event Generator Software for High-Luminosity LHC
Abstract We review the main software and computing challenges for the Monte Carlo physics event generators used by the LHC experiments, in view of the High-Luminosity LHC (HL-LHC) physics programme. This paper has been prepared by the HEP Software Foundation (HSF) Physics Event Generator Working Group as an input to the LHCC review of HL-LHC computing, which started in May 2020.
DOI: 10.3997/2214-4609.202439099
2024
Data Pre Processing Techniques in Integrated Asset Modeling
Summary This paper underscores the pivotal role of data preprocessing techniques in the context of integrated asset modeling. Emphasizing the significance of outlier filtering, handling missing values, categorical data encoding, and numerical data standardization, the study highlights their collective impact on model accuracy and robust decision-making. The integration of these techniques ensures a consistent and reliable foundation, enabling efficient resource allocation and adaptability to dynamic conditions. By meticulously preparing raw data, organizations can optimize their asset management strategies, leveraging sophisticated models for informed decision-making in dynamic environments. This comprehensive approach enhances the reliability of integrated asset models, fostering accurate insights crucial for effective and strategic decision support.
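Purely as an illustration of the kind of steps the paper discusses (not code from the paper), here is a small sketch of two of them, outlier filtering and numerical standardization, applied to a single numeric feature:

// Illustrative preprocessing helpers: IQR-based outlier filtering and z-score
// standardization of a numeric feature. Not taken from the paper.
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Drop values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
std::vector<double> filterOutliersIQR(std::vector<double> x) {
  if (x.size() < 4) return x;
  std::sort(x.begin(), x.end());
  const double q1 = x[x.size() / 4];
  const double q3 = x[(3 * x.size()) / 4];
  const double iqr = q3 - q1;
  std::vector<double> kept;
  for (double v : x)
    if (v >= q1 - 1.5 * iqr && v <= q3 + 1.5 * iqr) kept.push_back(v);
  return kept;
}

// Standardize to zero mean and unit variance.
std::vector<double> standardize(const std::vector<double>& x) {
  if (x.empty()) return {};
  const double mean = std::accumulate(x.begin(), x.end(), 0.0) / x.size();
  double var = 0.0;
  for (double v : x) var += (v - mean) * (v - mean);
  const double sd = std::sqrt(var / x.size());
  std::vector<double> z;
  for (double v : x) z.push_back(sd > 0 ? (v - mean) / sd : 0.0);
  return z;
}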
DOI: 10.1051/epjconf/202429505019
2024
The ATLAS experiment software on ARM
With the increased dataset obtained during Run 3 of the LHC at CERN, and the expected further growth of the dataset by more than one order of magnitude for the HL-LHC, the ATLAS experiment is reaching the limits of the current data processing model in terms of traditional CPU resources based on x86_64 architectures, and an extensive program of software upgrades towards the HL-LHC has been set up. The ARM architecture is becoming a competitive and energy-efficient alternative. Some surveys indicate its increased presence in HPCs and commercial clouds, and some WLCG sites have expressed their interest. Chip makers are also developing their next-generation solutions on ARM architectures, sometimes combining ARM and GPU processors in the same chip. Consequently, it is important that the ATLAS software embraces the change and is able to successfully exploit this architecture. We report on the successful porting to ARM of the Athena software framework, which is used by ATLAS for both online and offline computing operations. Furthermore, we report on the successful validation of simulation workflows running on ARM resources. For this we have set up an ATLAS Grid site using ARM-compatible middleware and containers on Amazon Web Services (AWS) ARM resources. The ARM version of Athena is fully integrated in the regular software build system and distributed in the same way as other software releases. In addition, the workflows have been integrated into the HEPscore benchmark suite, which is the planned WLCG-wide replacement of the HepSpec06 benchmark used for Grid site pledges. In the overall porting process we have used resources on AWS, Google Cloud Platform (GCP) and CERN. A performance comparison of different architectures and resources will be discussed.
DOI: 10.1051/epjconf/202125103056
2021
Cited 6 times
Building HEP Software with Spack: Experiences from Pilot Builds for Key4hep and Outlook for LCG Releases
Consistent, efficient software builds and deployments are a common concern for all HEP experiments. This paper describes the evolution of the usage of the Spack package manager in HEP in the context of the LCG stacks and the current Spack-based management of Key4hep software. Whereas previously Key4hep software used Spack only for a thin layer of FCC experiment software on top of the LCG releases, it is now possible to build the complete stack, from system libraries to FCC, iLCSoft and CEPC software packages, with Spack. This pilot build doubles as a prototype for a Spack-based LCG release. The workflows and mechanisms that can be used for this purpose, the potential for improvement, as well as the roadmap towards a complete LCG release in Spack are discussed.
DOI: 10.1088/1742-6596/295/1/012018
2011
Cited 7 times
Preparation of new polarization experiment SPASCHARM at IHEP
A new experiment, SPASCHARM, devoted to a systematic study of polarization phenomena in hadron-hadron interactions in the energy range 10-70 GeV, is under preparation at IHEP (Protvino). The physical observables will be single-spin asymmetries, hyperon polarizations and spin-density matrix elements. A universal setup will detect and identify various neutral and charged particles over the full azimuthal angle and a wide polar angle range. A polarized target is used to measure the SSA. The SPASCHARM sub-detectors are being designed and constructed now. The possibility of obtaining a polarized proton beam for the SPASCHARM experiment from Lambda decays is under study.
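For context (the abstract does not spell it out), the raw single-spin asymmetry extracted with a polarized target is conventionally defined as

$A_N = \frac{1}{P_{\mathrm{target}}}\,\frac{N^{\uparrow} - N^{\downarrow}}{N^{\uparrow} + N^{\downarrow}}$,

where $N^{\uparrow}$ and $N^{\downarrow}$ are the luminosity-normalized event yields for the two target polarization states and $P_{\mathrm{target}}$ is the target polarization; the exact normalization and dilution corrections used by SPASCHARM may differ in detail.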
DOI: 10.1088/1742-6596/1085/3/032037
2018
Cited 7 times
GeantV alpha release
In the fall of 2016, GeantV went through a thorough community evaluation of the project status and of its strategy for sharing the R&D results with the LHC experiments and with the HEP simulation community in general. Following this discussion, GeantV embarked on an ambitious two-year roadmap aiming to deliver a beta version that has most of the final design and several performance features of the final product, partially integrated with some of the experiments' frameworks. The initial GeantV prototype has been updated to a vector-aware concurrent framework, which is able to deliver high-density floating-point computations for most of the performance-critical components, such as propagation in field and physics models. Electromagnetic physics models were adapted to the specific GeantV requirements, aiming for a full demonstration of shower physics performance in the alpha release at the end of 2017. We have revisited and formalized the GeantV user interfaces and helper protocols, allowing the framework to connect to user code and providing recipes for accessing MC truth efficiently and for generating user data in a concurrent environment.
DOI: 10.48550/arxiv.2005.00949
2020
Cited 4 times
GeantV: Results from the prototype of concurrent vector particle transport simulation in HEP
Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider (LHC). In the early 2010s, the projections were that simulation demands would scale linearly with the luminosity increase, compensated only partially by an increase of computing resources. The extension of fast simulation approaches to more use cases, covering a larger fraction of the simulation budget, is only part of the solution due to intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is out of reach using simple optimizations on the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport codes in order to make them benefit from fine-grained parallelism features such as vectorization, but also from increased code and data locality. This paper presents in detail the results and achievements of this R&D, as well as the conclusions and lessons learnt from the beta prototype.
DOI: 10.48550/arxiv.2008.13636
2020
Cited 4 times
HL-LHC Computing Review: Common Tools and Community Software
Common and community software packages, such as ROOT, Geant4 and event generators have been a key part of the LHC's success so far and continued development and optimisation will be critical in the future. The challenges are driven by an ambitious physics programme, notably the LHC accelerator upgrade to high-luminosity, HL-LHC, and the corresponding detector upgrades of ATLAS and CMS. In this document we address the issues for software that is used in multiple experiments (usually even more widely than ATLAS and CMS) and maintained by teams of developers who are either not linked to a particular experiment or who contribute to common software within the context of their experiment activity. We also give space to general considerations for future software and projects that tackle upcoming challenges, no matter who writes it, which is an area where community convergence on best practice is extremely useful.
DOI: 10.1088/1742-6596/898/4/042026
2017
Cited 3 times
Stochastic optimization of GeantV code by use of genetic algorithms
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include the transport of particles in fields, physics models simulating their interactions with matter, and a geometrical modeler library for describing the detector, locating the particles, and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching for the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in the case of resource-expensive or time-consuming evaluations of fitness functions, in order to speed up the convergence of the black-box optimization problem.
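As a rough sketch of the tuning idea (a generic (1+1) evolution strategy, not the specific multivariate operator introduced in the paper), the parameter search could look like the following, with the fitness function standing in for an expensive simulation-throughput measurement:

// Generic (1+1) evolution-strategy sketch for black-box parameter tuning.
// The fitness function below is a cheap stand-in for an expensive evaluation
// such as measuring simulation throughput for a given parameter set.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

double fitness(const std::vector<double>& p) {
  double s = 0.0;
  for (double v : p) s -= v * v;   // toy objective: maximum at the origin
  return s;
}

std::vector<double> tune(std::vector<double> best, int iterations, double sigma) {
  std::mt19937 rng(12345);
  std::normal_distribution<double> gauss(0.0, 1.0);
  double bestScore = fitness(best);
  for (int i = 0; i < iterations; ++i) {
    std::vector<double> candidate = best;
    for (double& p : candidate) p += sigma * gauss(rng);     // Gaussian mutation
    const double score = fitness(candidate);
    const bool improved = score > bestScore;
    if (improved) { best = candidate; bestScore = score; }   // keep only improvements
    sigma *= improved ? 1.2 : 0.98;                          // crude step-size adaptation
  }
  return best;
}

int main() {
  std::vector<double> tuned = tune({3.0, -2.0, 5.0}, 500, 0.5);
  std::printf("tuned[0] = %f\n", tuned[0]);   // tuned parameters would feed the next simulation run
  return 0;
}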
DOI: 10.1051/epjconf/201921405020
2019
Cited 3 times
Building, testing and distributing common software for the LHC experiments
Building, testing and deploying of coherent large software stacks is very challenging, in particular when they consist of the diverse set of packages required by the LHC experiments, the CERN Beams Department and data analysis services such as SWAN. These software stacks include several packages (Grid middleware, Monte Carlo generators, Machine Learning tools, Python modules), all available for a large number of compilers, operating systems and hardware architectures. To address this challenge, we developed an infrastructure around a tool called lcgcmake. Dedicated modules are responsible for building the packages, controlling the dependencies in a reliable and scalable way. The distribution relies on a robust and automatic system, responsible for building and testing the packages, installing them on CernVM-FS and packaging the binaries in RPMs and tarballs. This system is orchestrated through Jenkins on build machines provided by the CERN OpenStack facility. The results are published through user-friendly web pages. In this paper we present an overview of these infrastructure tools and policies. We also discuss the role of this effort within the HEP Software Foundation (HSF). Finally we discuss the evolution of the infrastructure towards container (Docker) technologies and the future directions and challenges of the project.
DOI: 10.1007/s41781-020-00048-6
2021
Cited 3 times
GeantV
Abstract Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider. In the early 2010s, it was projected that simulation demands would scale linearly with increasing luminosity, with only partial compensation from increasing computing resources. The extension of fast simulation approaches to cover more use cases that represent a larger fraction of the simulation budget is only part of the solution, because of intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is not achievable by just applying simple optimizations to the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport code in order to benefit from features of fine-grained parallelism, including vectorization and increased locality of both instruction and data. This paper provides an extensive presentation of the results and achievements of this R&D project, as well as the conclusions and lessons learned from the beta version prototype.
DOI: 10.33920/med-12-2309-05
2023
Clinical and epidemiological analysis of a case of leptospirosis caused by Sejroe serogroup pathogens
Leptospirosis is one of the most common zoonoses in the Russian Federation, including in the Samara Region. Every year, Leptospira pathogens are found in environmental objects. The article presents our own clinical observation of a severe course of the disease caused by leptospires of the Sejroe serogroup, strain 3705, against the background of the novel coronavirus infection COVID-19.
DOI: 10.1051/epjconf/201921402031
2019
Electromagnetic physics vectorization in the GeantV transport framework
The development of the GeantV Electromagnetic (EM) physics package has evolved following two necessary paths towards code modernization. The first phase required the revision of the main electromagnetic physics models and their implementation. The main objectives were to improve their accuracy, extend them to the new high-energy frontier posed by the Future Circular Collider (FCC) programme, and allow a better adaptation to a multi-particle flow. Most of the EM physics models in GeantV have been reviewed from a theoretical perspective and rewritten with vector-friendly implementations, and are now available in scalar mode in the alpha release. The second phase consists of a thorough investigation of the possibility of vectorising the most CPU-intensive parts of the physics code, such as final-state sampling. We have shown the feasibility of implementing electromagnetic physics models that take advantage of SIMD/SIMT architectures, thus obtaining gains in performance. After this phase, the time has come for the GeantV project to take a step forward towards the final proof of concept. This takes shape through the testing of the full simulation chain (transport + physics + geometry) running in vectorized mode. In this paper we present the first benchmark results obtained after vectorizing a full set of electromagnetic physics models.
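To illustrate the kind of data layout that makes such vectorization possible (an illustrative sketch only; GeantV itself builds on the VecCore abstraction library rather than raw loops), consider processing a basket of tracks stored as a structure of arrays:

// Structure-of-arrays track basket: each quantity is stored contiguously, so
// the per-track update below is a simple, branch-free loop that compilers can
// auto-vectorize. Illustrative only; not GeantV code.
#include <cstddef>
#include <vector>

struct TrackBasket {
  std::vector<double> energy;   // kinetic energy per track
  std::vector<double> step;     // step length per track
  std::vector<double> dedx;     // stopping power per track, filled by the physics model
};

void applyEnergyLoss(TrackBasket& b) {
  const std::size_t n = b.energy.size();
  for (std::size_t i = 0; i < n; ++i)
    b.energy[i] -= b.dedx[i] * b.step[i];   // same operation applied lane by lane
}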
DOI: 10.2172/1437300
2018
HEP Software Foundation Community White Paper Working Group - Detector Simulation
A working group on detector simulation was formed as part of the high-energy physics (HEP) Software Foundation's initiative to prepare a Community White Paper that describes the main software challenges and opportunities to be faced in the HEP field over the next decade. The working group met over a period of several months in order to review the current status of the Full and Fast simulation applications of HEP experiments and the improvements that will need to be made in order to meet the goals of future HEP experimental programmes. The scope of the topics covered includes the main components of a HEP simulation application, such as MC truth handling, geometry modeling, particle propagation in materials and fields, physics modeling of the interactions of particles with matter, the treatment of pileup and other backgrounds, as well as signal processing and digitisation. The resulting work programme described in this document focuses on the need to improve both the software performance and the physics of detector simulation. The goals are to increase the accuracy of the physics models and expand their applicability to future physics programmes, while achieving large factors in computing performance gains consistent with projections on available computing resources.
2016
DoSSiER: Database of Scientific Simulation and Experimental Results
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
2006
Selection of Single Top Events with the CMS Detector at LHC
DOI: 10.1051/epjconf/201921402007
2019
Recent progress with the top to bottom approach to vectorization in GeantV
SIMD acceleration can potentially boost application throughput by large factors. Achieving efficient SIMD vectorization for scalar code with complex data flow and branching logic, however, goes well beyond breaking some loop dependencies and relying on the compiler. Since the refactoring effort scales with the number of lines of code, it is important to understand what kind of performance gains can be expected in such complex cases. A couple of years ago we started to investigate a top-to-bottom vectorization approach to particle transport simulation. Percolating vector data down to the algorithms was mandatory, since not all the components can vectorize internally. Vectorizing low-level algorithms is certainly necessary, but not sufficient to achieve relevant SIMD gains. In addition, the overheads of maintaining the concurrent vector data flow and of copying data have to be minimized. In the context of a vectorization R&D for simulation, we developed a framework that allows different categories of scalar and vectorized components to co-exist, dealing with data flow management and real-time heuristic optimizations. The paper describes our approach to coordinating SIMD vectorization at the framework level, making a detailed quantitative analysis of the SIMD gains versus overheads, with a breakdown by component in terms of geometry, physics and magnetic field propagation. We also present the more general context of this R&D work and the goals for 2018.
DOI: 10.1088/1742-6596/513/2/022029
2014
Status and new developments of the Generator Services
The Generator Services (GENSER) provide ready-to-use Monte Carlo generators, compiled on multiple platforms or ready to be compiled, for the LHC experiments. In this paper we discuss the recent developments in the build machinery, which allowed us to fully automate the installation process. The new system is based on, and is integrated entirely with, the LCG external software infrastructure, providing all the external packages needed by the LHC experiments.
2016
DoSSiER: Database of Scientific Simulation and Experimental Results
DOI: 10.18664/1994-7852.145.2014.80604
2014
MODELING OF DISTRIBUTION SYSTEM OF TRAFFIC VOLUMES OF TECHNICAL STATIONS
The article deals with traffic volume modelling at technical stations of Ukraine's railways. The functioning of a sorting yard has been studied, revealing major shortcomings of the existing system of traffic volume control. The lack of efficient methods and state-of-the-art information technologies requires improvements in operation management. The present stage of development and wide application of information technologies requires the implementation of a decision-making system. The article describes a model of the decision support system intended for traffic volume forecasting. The model has been designed on the basis of the mathematical apparatus of the dynamics of averages and differential Kolmogorov equations.
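For orientation only (the abstract does not give the equations), the differential Kolmogorov equations underlying such a model describe the evolution of the state probabilities $p_k(t)$ of a continuous-time Markov chain with transition intensities $\lambda_{jk}$:

$\frac{dp_k(t)}{dt} = \sum_{j \neq k} \lambda_{jk}\, p_j(t) - p_k(t) \sum_{j \neq k} \lambda_{kj}$,

with the states here presumably representing the traffic volumes handled by the station subsystems (an assumption about the paper's state definition).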
2016
DoSSiER: Database of Scientific Simulation and Experimental Results
DOI: 10.1088/1742-6596/898/4/042030
2017
Software aspects of the Geant4 validation repository
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
DOI: 10.1088/1742-6596/898/4/042019
2017
Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project
An intensive R&D and programming effort is required to accomplish the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models, and to validate them against experimental data.
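As an indication of what such an automated consistency check can look like (a generic histogram comparison, not the project's actual test code), one commonly used statistic is a bin-by-bin chi-square between the vectorized and reference distributions:

// Generic chi-square comparison of two binned distributions with Poisson bin
// errors; the result is compared against a threshold chosen for the desired
// confidence level. Illustrative only.
#include <algorithm>
#include <cstddef>
#include <vector>

double chi2(const std::vector<double>& vectorized, const std::vector<double>& reference) {
  double sum = 0.0;
  const std::size_t n = std::min(vectorized.size(), reference.size());
  for (std::size_t i = 0; i < n; ++i) {
    const double diff = vectorized[i] - reference[i];
    const double err2 = vectorized[i] + reference[i];   // Poisson variance of the difference
    if (err2 > 0.0) sum += diff * diff / err2;
  }
  return sum;
}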
DOI: 10.22323/1.282.0171
2017
DoSSiER: Database of Scientific Simulation and Experimental Results
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
DOI: 10.48550/arxiv.0912.5062
2009
The First Stage of Polarization Program SPASCHARM at the Accelerator U-70 of IHEP
The first stage of the proposed polarization program SPASCHARM includes measurements of the single-spin asymmetry (SSA) in exclusive and inclusive reactions with production of stable hadrons and of light meson and baryon resonances. In this study we foresee using a variety of unpolarized beams (pions, kaons, protons and antiprotons) in the energy range of 30-60 GeV. Polarized proton and deuteron targets will be used for revealing the flavor and isotopic-spin dependencies of the polarization phenomena. Neutral and charged particles in the final state will be detected.
DOI: 10.1134/s0020441208020061
2008
A study of the RAMPEX electromagnetic calorimeter stability
DOI: 10.20403/2078-0575-2022-1-63-68
2022
EXPERIMENTAL AND METHODOLOGICAL WORK ON MEASURING THE VALUES OF GRAVITY AND ITS VERTICAL GRADIENT
The article describes the methodology of field work on gravity surveying and the measurement of the vertical gravity gradient using a high-precision gravimeter. The technique for processing the acquired data with the Geosoft Oasis Montaj software package (Canada) is described. Research results, conclusions and recommendations for performing such work using the method of multi-level measurements of gravity values are presented.
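As background (the formula itself is not quoted in the abstract), multi-level measurements estimate the vertical gravity gradient from a finite difference of gravimeter readings taken at two heights $h_1$ and $h_2$ above the same station:

$W_{zz} \approx \frac{g(h_2) - g(h_1)}{h_2 - h_1}$,

which is then compared with the normal free-air gradient, whose magnitude is about 0.3086 mGal/m.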
DOI: 10.1051/epjconf/202024505030
2020
Optimizing Provisioning of LCG Software Stacks with Kubernetes
The building, testing and deployment of coherent large software stacks is very challenging, in particular when they consist of the diverse set of packages required by the LHC experiments, the CERN Beams department and data analysis services such as SWAN. These software stacks comprise a large number of packages (Monte Carlo generators, machine learning tools, Python modules, HEP-specific software), all available for several compilers, operating systems and hardware architectures. Along with several releases per year, development builds are provided each night to allow for quick updates and testing of development versions of packages such as ROOT, Geant4, etc. It also provides the possibility to test new compilers and new configurations. Timely provisioning of these development and release stacks requires a large amount of computing resources. A dedicated infrastructure, based on the Jenkins continuous integration system, has been developed for this purpose. Resources are taken from the CERN OpenStack cloud; Puppet configurations are used to control the environment on virtual machines, which are either used directly as resource nodes or as hosts for Docker containers. Containers are used more and more to optimize the usage of our resources and ensure a consistent build environment while providing quick access to new Linux flavours and specific configurations. In order to add build resources on demand more easily, we investigated the integration of a CERN-provided Kubernetes cluster into the existing infrastructure. In this contribution we present the status of this prototype, focusing on the new challenges faced, such as the integration of these ephemeral build nodes into CERN's IT infrastructure, job priority control, and debugging of job failures.
DOI: 10.48550/arxiv.0712.2691
2007
New Polarization Program at U70 (SPASCHARM Project)
The new polarization program SPASCHARM is being prepared in Protvino. The program has two stages. The first stage is dedicated to single-spin asymmetries in the production of miscellaneous light resonances with the use of a 34 GeV $\pi^-$ beam. Inclusive and exclusive reactions will be studied simultaneously. The second stage is dedicated to single-spin and double-spin asymmetries in charmonium production with the use of a 70 GeV polarized proton beam, which will allow us to understand the charmonium hadronic production mechanism and extract the gluon polarization $\Delta g(x)$ at large $x$.
DOI: 10.22323/1.390.0804
2021
Calibration and Performance of the CMS Electromagnetic Calorimeter in LHC Run 2
Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high-resolution electron and photon energy measurements. The excellent energy resolution is crucial for studies of Higgs boson decays with electromagnetic particles in the final state, as well as searches for very high mass resonances decaying to energetic photons or electrons. The CMS electromagnetic calorimeter (ECAL) is a fundamental instrument for these analyses and its energy resolution is crucial for the Higgs boson mass measurement. Recently the energy response of the calorimeter has been precisely calibrated exploiting the full Run 2 data, aiming at a legacy reprocessing of the data. A dedicated calibration of each detector channel has been performed with physics events, exploiting electrons from W and Z boson decays, photons from π0 decays, and the azimuthally symmetric energy distribution of minimum bias events. The calibration strategies that have been implemented and the excellent performance achieved by the CMS ECAL with the ultimate calibration of Run 2 data, in terms of energy scale stability and energy resolution, are presented.
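As background for the π0-based channel (the formula is not given in the abstract), the calibration exploits the diphoton invariant mass

$m_{\gamma\gamma} = \sqrt{2 E_1 E_2 (1 - \cos\theta_{12})}$,

where $E_1$ and $E_2$ are the reconstructed photon energies and $\theta_{12}$ is their opening angle; the per-channel calibration constants are adjusted so that the reconstructed mass distribution peaks at the nominal π0 mass.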