
G. Franzoni

DOI: 10.1007/s41781-018-0018-8
2019
Cited 114 times
A Roadmap for HEP Software and Computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
DOI: 10.1140/epjcd/s2006-02-002-x
2006
Cited 45 times
Reconstruction of the signal amplitude of the CMS electromagnetic calorimeter
The amplitude of the signal collected from the PbWO4 crystals of the CMS electromagnetic calorimeter is reconstructed by a digital filtering technique. The amplitude reconstruction has been studied with test beam data recorded from a fully equipped barrel supermodule. Issues specific to data taken in the test beam are investigated, and the implementation of the method for CMS data taking is discussed.
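As a rough illustration of the idea only (a minimal numpy sketch with invented numbers, not the CMS weights or implementation), the amplitude can be estimated as a weighted sum of the digitized time samples, with weights chosen so that a constant pedestal cancels:

import numpy as np

# Hypothetical digitized samples from one channel (ADC counts): three pre-pulse
# (pedestal) samples followed by the signal pulse.
samples = np.array([201.0, 199.5, 200.2, 230.1, 305.4, 289.7, 262.3, 241.8, 228.9, 219.6])

# Illustrative weights: the pedestal weights sum to -1 and the signal weights to +1,
# so any constant baseline cancels in the weighted sum.
weights = np.array([-1/3, -1/3, -1/3, 0.05, 0.55, 0.35, 0.05, 0.0, 0.0, 0.0])

amplitude = float(np.dot(weights, samples))  # amplitude estimate above pedestal
print(f"reconstructed amplitude: {amplitude:.1f} ADC counts")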
DOI: 10.1088/1748-0221/16/04/t04002
2021
Cited 14 times
Construction and commissioning of CMS CE prototype silicon modules
As part of its HL-LHC upgrade program, the CMS collaboration is developing a High Granularity Calorimeter (CE) to replace the existing endcap calorimeters. The CE is a sampling calorimeter with unprecedented transverse and longitudinal readout for both electromagnetic (CE-E) and hadronic (CE-H) compartments. The calorimeter will be built with ∼30,000 hexagonal silicon modules. Prototype modules have been constructed with 6-inch hexagonal silicon sensors with cell areas of 1.1 cm², and the SKIROC2-CMS readout ASIC. Beam tests of different sampling configurations were conducted with the prototype modules at DESY and CERN in 2017 and 2018. This paper describes the construction and commissioning of the CE calorimeter prototype, the silicon modules used in the construction, their basic performance, and the methods used for their calibration.
DOI: 10.1140/epjcd/s2005-02-011-3
2006
Cited 31 times
Results of the first performance tests of the CMS electromagnetic calorimeter
Performance tests of some aspects of the CMS ECAL were carried out on modules of the "barrel" sub-system in 2002 and 2003. A brief test with high energy electron beams was made in late 2003 to validate prototypes of the new Very Front End electronics. The final versions of the monitoring and cooling systems, and of the high and low voltage regulation were used in these tests. The results are consistent with the performance targets including those for noise and overall energy resolution, required to fulfil the physics programme of CMS at the LHC.
DOI: 10.1088/1748-0221/3/10/p10007
2008
Cited 27 times
Intercalibration of the barrel electromagnetic calorimeter of the CMS experiment at start-up
Calibration of the relative response of the individual channels of the barrel electromagnetic calorimeter of the CMS detector was accomplished, before installation, with cosmic ray muons and test beams. One fourth of the calorimeter was exposed to a beam of high energy electrons and the relative calibration of the channels, the intercalibration, was found to be reproducible to a precision of about 0.3%. Additionally, data were collected with cosmic rays for the entire ECAL barrel during the commissioning phase. By comparing the intercalibration constants obtained with the electron beam data with those from the cosmic ray data, it is demonstrated that the latter provide an intercalibration precision of 1.5% over most of the barrel ECAL. The best intercalibration precision is expected to come from the analysis of events collected in situ during the LHC operation. Using data collected with both electrons and pion beams, several aspects of the intercalibration procedures based on electrons or neutral pions were investigated.
DOI: 10.1088/1748-0221/5/03/p03010
2010
Cited 18 times
Radiation hardness qualification of PbWO4 scintillation crystals for the CMS Electromagnetic Calorimeter
Ensuring the radiation hardness of PbWO4 crystals was one of the main priorities during the construction of the electromagnetic calorimeter of the CMS experiment at CERN. The production on an industrial scale of radiation hard crystals and their certification over a period of several years represented a difficult challenge both for CMS and for the crystal suppliers. The present article reviews the related scientific and technological problems encountered.
DOI: 10.1051/epjconf/201921406008
2019
Cited 10 times
Anomaly detection using Deep Autoencoders for the assessment of the quality of the data acquired by the CMS experiment
The certification of the CMS experiment data as usable for physics analysis is a crucial task to ensure the quality of all physics results published by the collaboration. Currently, the certification conducted by human experts is labor intensive and based on the scrutiny of distributions integrated on several hours of data taking. This contribution focuses on the design and prototype of an automated certification system assessing data quality on a per-luminosity section (i.e. 23 seconds of data taking) basis. Anomalies caused by detector malfunctioning or sub-optimal reconstruction are difficult to enumerate a priori and occur rarely, making it difficult to use classical supervised classification methods such as feedforward neural networks. We base our prototype on a semi-supervised approach which employs deep autoencoders. This approach has been qualified successfully on CMS data collected during the 2016 LHC run: we demonstrate its ability to detect anomalies with high accuracy and low false positive rate, when compared against the outcome of the manual certification by experts. A key advantage of this approach over other machine learning technologies is the great interpretability of the results, which can be further used to ascribe the origin of the problems in the data to a specific sub-detector or physics objects.
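As a rough sketch of the general technique only (not the authors' model, input features, or thresholds; the data and dimensions below are synthetic), an autoencoder can be trained on luminosity sections certified as good and a section flagged when its reconstruction error exceeds a threshold:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(0)
n_features = 40                                        # hypothetical per-lumisection summary quantities
good = rng.normal(0.0, 1.0, size=(5000, n_features))   # stand-in for sections certified as good

# Small dense autoencoder: compress to a bottleneck and reconstruct the input.
autoencoder = tf.keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(4, activation="relu"),                # bottleneck
    layers.Dense(16, activation="relu"),
    layers.Dense(n_features),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(good, good, epochs=10, batch_size=128, verbose=0)

def anomaly_score(x):
    # Per-lumisection score: mean squared reconstruction error.
    return np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)

threshold = np.quantile(anomaly_score(good), 0.99)      # tolerate ~1% false positives on good data
suspect = rng.normal(1.5, 2.0, size=(10, n_features))   # stand-in for anomalous sections
print(anomaly_score(suspect) > threshold)

In the same spirit, the per-feature reconstruction errors can be inspected to hint at which quantities drive a flagged section, which is the kind of interpretability the abstract refers to.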
DOI: 10.1088/1748-0221/11/04/p04012
2016
Cited 9 times
Beam test evaluation of electromagnetic calorimeter modules made from proton-damaged PbWO4 crystals
The performance of electromagnetic calorimeter modules made of proton-irradiated PbWO4 crystals has been studied in beam tests. The modules, similar to those used in the Endcaps of the CMS electromagnetic calorimeter (ECAL), were formed from 5×5 matrices of PbWO4 crystals, which had previously been exposed to 24 GeV protons up to integrated fluences between 2.1×10¹³ and 1.3×10¹⁴ cm⁻². These correspond to the predicted charged-hadron fluences in the ECAL Endcaps at pseudorapidity η = 2.6 after about 500 fb⁻¹ and 3000 fb⁻¹ respectively, corresponding to the end of the LHC and High Luminosity LHC operation periods. The irradiated crystals have a lower light transmission for wavelengths corresponding to the scintillation light, and a correspondingly reduced light output. A comparison with four crystals irradiated in situ in CMS showed no significant rate dependence of hadron-induced damage. A degradation of the energy resolution and a non-linear response to electron showers are observed in damaged crystals. Direct measurements of the light output from the crystals show the amplitude decreasing and the pulse becoming faster as the fluence increases. The latter is interpreted, through comparison with simulation, as a side-effect of the degradation in light transmission. The experimental results obtained can be used to estimate the long term performance of the CMS ECAL.
DOI: 10.1145/3324884.3415288
2020
Cited 8 times
PerfCI
Software performance testing is an essential quality assurance mechanism that can identify optimization opportunities. Automating this process requires strong tool support, especially in the case of Continuous Integration (CI), where tests need to run completely automatically and it is desirable to provide developers with actionable feedback. A lack of existing tools means that performance testing is normally left out of the scope of CI. In this paper, we propose a toolchain - PerfCI - to pave the way for developers to easily set up and carry out automated performance testing under CI. Our toolchain is based on allowing users to (1) specify performance testing tasks, (2) analyze unit tests on a variety of Python projects ranging from scripts to full-blown Flask-based web services, by extending a performance analysis framework (VyPR), and (3) evaluate performance data to get feedback on the code. We demonstrate the feasibility of our toolchain by using it on a web service running at the Compact Muon Solenoid (CMS) experiment at the world's largest particle physics laboratory --- CERN.
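PerfCI and VyPR define their own interfaces; purely to illustrate the underlying idea of automated performance checks in CI (and not as the toolchain's API), the sketch below is a plain pytest-style test that fails the build when a hypothetical function exceeds an arbitrary time budget:

import statistics
import time

def handler_under_test(payload):
    # Hypothetical unit of work whose performance we want to track in CI.
    return sorted(payload)

def test_handler_meets_time_budget():
    durations = []
    for _ in range(20):                                  # repeat to smooth out noise
        payload = list(range(10_000, 0, -1))
        start = time.perf_counter()
        handler_under_test(payload)
        durations.append(time.perf_counter() - start)
    median = statistics.median(durations)
    # Fail the CI run if the median duration exceeds an (illustrative) 50 ms budget.
    assert median < 0.050, f"median duration {median * 1000:.1f} ms exceeds budget"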
DOI: 10.1016/j.radphyschem.2014.02.018
2014
Cited 7 times
A study of the behavior of bi-oriented PVC exposed to ionizing radiation and its possible use in nuclear applications
The paper discusses whether bi-oriented PVC, obtained by modifying the structure of the polymer chains to enhance the mechanical properties of unplasticized PVC, could successfully replace metallic materials in industrial applications where radioactive fluids are processed and an intense field of ionizing radiation is present. Tests have been carried out in order to study the behavior of a commercial bi-oriented PVC when exposed to ionizing radiation. A numerical simulation allows comparing the effects of radiation expected on the pipe in nuclear industry applications with those resulting from the irradiation tests. Contamination and decontamination tests of bi-oriented PVC in contact with a radioactive solution have been performed as well. Results show that the bi-oriented PVC can withstand high β and γ radiation doses (up to 100 kGy) without significant degradation of its mechanical properties; the bi-orientation of the polymer chains in the bulk of the material is not affected even at much higher doses (250 kGy); and the decontamination of the material is satisfactory. The results suggest that the tested commercial bi-oriented PVC could be considered for nuclear industry applications.
2011
Cited 7 times
LHC Bunch Current Normalisation for the April-May 2010 Luminosity Calibration Measurements
DOI: 10.1007/978-3-030-17465-1_6
2019
Cited 6 times
VyPR2: A Framework for Runtime Verification of Python Web Services
Runtime Verification (RV) is the process of checking whether a run of a system holds a given property. In order to perform such a check online, the algorithm used to monitor the property must induce minimal overhead. This paper focuses on two areas that have received little attention from the RV community: Python programs and web services. Our first contribution is the VyPR runtime verification tool for single-threaded Python programs. The tool handles specifications in our previously introduced Control-Flow Temporal Logic (CFTL), which supports the specification of state and time constraints over runs of functions. VyPR minimally (in terms of reachability) instruments the input program with respect to a CFTL specification and then uses instrumentation information to optimise the monitoring algorithm. Our second contribution is the lifting of VyPR to the web service setting, resulting in the VyPR2 tool. We first describe the necessary modifications to the architecture of VyPR, and then describe our experience applying VyPR2 to a service that is critical to the physics reconstruction pipeline at the CMS Experiment at CERN.
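CFTL specifications and the VyPR instrumentation are specific to the tool; as a loose analogy only (function names and the bound are invented, and this is not VyPR's API), the sketch below checks a single timing property at runtime with a decorator and reports a violation when a monitored call exceeds a bound:

import functools
import time

def must_complete_within(seconds):
    # Report a violation whenever a call to the decorated function exceeds `seconds`.
    def decorate(func):
        @functools.wraps(func)
        def monitored(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > seconds:
                print(f"violation: {func.__name__} took {elapsed:.3f}s (bound {seconds}s)")
            return result
        return monitored
    return decorate

@must_complete_within(0.1)          # illustrative time bound
def upload_conditions(payload):     # hypothetical function of the monitored service
    time.sleep(0.05)
    return len(payload)

upload_conditions([1, 2, 3])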
2019
Cited 6 times
A roadmap for HEP software and computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
DOI: 10.1109/imtc.2004.1351446
2004
Cited 8 times
Design and performance of the cooling system for the electromagnetic calorimeter of CMS
For the physics program of the CMS experiment at the LHC to be carried out successfully, excellent electromagnetic calorimetry is required. Given the thermal properties of the CMS ECAL, keeping the constant term of the energy resolution below 0.5% requires its temperature to be stabilized at 18°C within 0.05°C. A prototype module of ECAL with the final cooling system has been tested at CERN to check its integration with the read-out electronics and verify that it complies with the severe thermal requirements. The thermal performance of the cooling system is reported here.
DOI: 10.1088/1742-6596/664/7/072018
2015
Cited 3 times
Monte Carlo Production Management at CMS
The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the events production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capability to monitor the status and advancement of the events production.
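As a schematic illustration of this kind of service only (not McM's actual code, endpoints, or request identifiers; an in-memory dict stands in for the CouchDB back-end), a minimal CherryPy application exposing a JSON status endpoint could look like this:

import cherrypy

class RequestsService:
    # Toy stand-in for a production-request bookkeeping service (not McM itself).
    def __init__(self):
        # In McM the back-end is CouchDB; a dict stands in for it here.
        self.requests = {"EXA-Campaign2015-00001": {"status": "submitted", "events": 1000000}}

    @cherrypy.expose
    @cherrypy.tools.json_out()
    def status(self, prepid):
        # GET /status?prepid=EXA-Campaign2015-00001 returns the stored record as JSON.
        return self.requests.get(prepid, {"error": "unknown request"})

if __name__ == "__main__":
    cherrypy.quickstart(RequestsService(), "/")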
DOI: 10.1016/j.nima.2010.06.291
2011
Cited 3 times
Performance of CMS ECAL with first LHC data
In the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC), the high resolution Electromagnetic Calorimeter (ECAL), consisting of 75 848 lead tungstate crystals and a silicon/lead preshower, will play a crucial role in the physics program. In preparation for the data taking a detailed procedure was followed to commission the ECAL readout and trigger, and to pre-calibrate, with test beam and cosmic ray data, each channel of the calorimeter to a precision of 2% or less in the central region. The first LHC collisions have been used to complete the detector commissioning and will have to provide the first in situ calibration. In this talk the CMS ECAL status and performance with the first collisions delivered from LHC will be reviewed.
DOI: 10.1109/nssmic.2013.6829574
2013
Cited 3 times
Data preparation for the Compact Muon Solenoid experiment
During the first 3 years of operations at the Large Hadron Collider, the Compact Muon Solenoid detector has collected data across vastly evolving conditions for the center-of-mass energy, the instantaneous luminosity and the event pile-up. The CMS collaboration has followed this evolution in a continuous way, providing the high-quality and prompt data reconstruction necessary for achieving the excellent physics performance required by such a high energy physics experiment. The scientific success of CMS came from keeping constant attention on the key areas: careful preparation and maintenance of the reconstruction algorithms and core software infrastructure; efficient and robust strategies and algorithms for the calibration and alignment of the diverse detector elements; and a continuous, meticulous scrutiny of the data quality and validation of any software infrastructure and detector calibration changes deemed necessary. This contribution covers the major development and operational aspects of the CMS offline workflows during the data taking experience of 2010-2013, underlining their essential role in the main physics achievements and discoveries of the CMS experiment during the last years.
DOI: 10.1088/1742-6596/898/4/042047
2017
Cited 3 times
Functional tests of a prototype for the CMS-ATLAS common non-event data handling framework
Since 2014 the ATLAS and CMS experiments have shared a common vision of the database infrastructure for the handling of the non-event data in forthcoming LHC runs. The wide commonality in the use cases has allowed the two experiments to agree on a common overall design solution that meets the requirements of both. A first prototype was completed in 2016 and has been made available to both experiments. The prototype is based on a web service implementing a REST API with a set of functions for the management of conditions data. In this contribution, we describe the prototype architecture and the tests that have been performed within the CMS computing infrastructure, with the aim of validating the support of the main use cases and of suggesting future improvements.
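As an illustration only (the base URL, endpoint path, tag name, and payload layout are hypothetical, not the prototype's actual REST API), a client consuming such a conditions service might look like:

import requests

BASE_URL = "https://conditions.example.cern.ch/api"   # hypothetical service location

def get_iovs(tag, since):
    # Fetch the intervals of validity (IOVs) of a conditions tag starting from a given run.
    response = requests.get(f"{BASE_URL}/tags/{tag}/iovs", params={"since": since}, timeout=10)
    response.raise_for_status()                        # surface HTTP errors to the caller
    return response.json()                             # e.g. a JSON list of IOV records

# Example call (requires a real service behind BASE_URL):
# iovs = get_iovs("ExampleTag_v1", since=300000)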
DOI: 10.1007/978-3-030-60508-7_4
2020
Cited 3 times
Analysing the Performance of Python-Based Web Services with the VyPR Framework
In this tutorial paper, we present the current state of VyPR, a framework for the performance analysis of Python-based web services. We begin by summarising our theoretical contributions which take the form of an engineer-friendly specification language; instrumentation and monitoring algorithms; and an approach for explanation of property violations. We then summarise the VyPR ecosystem, which includes an intuitive library for writing specifications and powerful tools for analysing monitoring results. We conclude with a brief description of how VyPR was used to improve our understanding of the performance of a critical web service at the CMS Experiment at CERN.
DOI: 10.18682/cdc.vi181.9235
2023
Desarrollo de soluciones para el tratamiento de efluentes en comunidades de artesanos textiles
The identification of effective and natural techniques for the treatment of recalcitrant effluents from the artisanal textile industry, using patent database searches among other information sources, results in a training programme for artisans of the Andean region in Latin America.
DOI: 10.1142/9789812702708_0031
2004
Cited 3 times
PERFORMANCE OF THE COOLING SYSTEM OF ECAL CMS
DOI: 10.48550/arxiv.1711.07051
2017
Deep learning for inferring cause of data anomalies
Daily operation of a large-scale experiment is a resource consuming task, particularly from the perspective of routine data quality monitoring. Typically, data come from different sub-detectors and the global quality of the data depends on the combinatorial performance of each of them. In this paper, the problem of identifying the channels in which anomalies occurred is considered. We introduce a generic deep learning model and prove that, under reasonable assumptions, the model learns to identify the 'channels' which are affected by an anomaly. Such a model could be used for data quality manager cross-checks and assistance, and for identifying good channels in anomalous data samples. The main novelty of the method is that the model does not require ground truth labels for each channel; only a global flag is used. This effectively distinguishes the model from classical classification methods. Applied to CMS data collected in 2010, this approach proves its ability to decompose an anomaly into separate channels.
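As a rough sketch of the idea of learning per-channel anomaly scores from a global flag only (not the authors' architecture; the data, dimensions, and max-pooling aggregation are illustrative assumptions), one can apply a shared scorer to every channel and combine the per-channel scores into a single global prediction:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

rng = np.random.default_rng(1)
n_channels, n_features = 12, 6          # hypothetical channels and per-channel summary features

# Toy data: a sample is globally anomalous (flag = 1) when one randomly chosen channel is shifted.
x_good = rng.normal(0.0, 1.0, size=(2000, n_channels, n_features))
x_bad = rng.normal(0.0, 1.0, size=(2000, n_channels, n_features))
bad_channel = rng.integers(0, n_channels, size=2000)
x_bad[np.arange(2000), bad_channel] += 3.0
x = np.concatenate([x_good, x_bad])
y = np.concatenate([np.zeros(2000), np.ones(2000)])

# A shared scorer is applied to every channel; the scores are combined with a max,
# so training needs only the global flag, never per-channel labels.
scorer = tf.keras.Sequential([layers.Dense(16, activation="relu"),
                              layers.Dense(1, activation="sigmoid")])
inputs = layers.Input(shape=(n_channels, n_features))
channel_scores = layers.TimeDistributed(scorer)(inputs)      # shape (batch, n_channels, 1)
global_score = layers.GlobalMaxPooling1D()(channel_scores)   # shape (batch, 1)

model = Model(inputs, global_score)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=128, verbose=0)

# After training, the intermediate per-channel scores point at the channel responsible.
attribution = Model(inputs, channel_scores)
scores = attribution.predict(x_bad[:1], verbose=0)[0, :, 0]
print("most suspicious channel:", int(np.argmax(scores)), "true:", int(bad_channel[0]))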
DOI: 10.1088/1742-6596/898/3/032034
2017
A Web-based application for the collection, management and release of Alignment and Calibration configurations used in data processing at the Compact Muon Solenoid experiment
The Compact Muon Solenoid (CMS) experiment makes extensive use of alignment and calibration measurements in several data processing workflows: in the High Level Trigger, in the processing of the recorded collisions and in the production of simulated events for data analysis and studies of detector upgrades. A complete alignment and calibration scenario is factored into approximately three hundred records, which are updated independently and can have a time-dependent content, to reflect the evolution of the detector and data taking conditions. Given the complexity of the CMS condition scenarios and the large number (50) of experts who actively measure and release calibration data, in 2015 a novel web-based service was developed to structure and streamline their management. The cmsDbBrowser provides an intuitive and easily accessible entry point for the navigation of existing conditions by any CMS member, for the bookkeeping of record updates and for the actual composition of complete calibration scenarios. This paper describes the design, choice of technologies and the first year of usage in production of the cmsDbBrowser.
DOI: 10.1088/1742-6596/119/2/022018
2008
Data quality monitoring for the CMS electromagnetic calorimeter
The detector performance of the CMS electromagnetic calorimeter is monitored using applications based on the CMS Data Quality Monitoring (DQM) framework and running on the High-Level Trigger Farm as well as on local DAQ systems. The monitorable quantities are organized into hierarchical structures based on the physics content. The information produced is accessible by client applications according to their subscription requests. The client applications process the received quantities, according to pre-defined analyses, making the results immediately available, while also storing the results in a database, and in the form of static web pages, for subsequent studies. We describe here the functionalities of the CMS ECAL DQM applications and report about their use in real environments.
DOI: 10.1016/j.nuclphysbps.2007.11.131
2008
Data Quality Monitoring for the CMS Electromagnetic Calorimeter
One of the aims of the CMS design is to construct and operate a very high quality electromagnetic calorimeter. The detector performance will be monitored using applications based on the CMS Data Quality Monitoring (DQM) framework and running on the High-Level Trigger Farm as well as on local DAQ systems. The monitorable quantities are organized into hierarchical structures based on the physics content. The information produced is delivered to client applications according to their subscription requests. The client applications process the received quantities, according to pre-defined analyses, making the results immediately available, and store the results in a database, and in the form of static web pages, for subsequent studies. We describe here the functionalities of the CMS ECAL DQM applications and report about their use in a real environment.
2006
CMS ECAL intercalibration of ECAL crystals using laboratory measurements
2015
Monte Carlo Management at CMS
The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the events production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capability to monitor the status and advancement of the events production.
2015
Le piccole aperture di Francesco
DOI: 10.1016/j.nuclphysbps.2015.09.144
2016
Dataset definition for CMS operations and physics analyses
Data recorded at the CMS experiment are funnelled into streams, integrated in the HLT menu, and further organised in a hierarchical structure of primary datasets and secondary datasets/dedicated skims. Datasets are defined according to the final-state particles reconstructed by the high level trigger, the data format and the use case (physics analysis, alignment and calibration, performance studies). During the first LHC run, new workflows have been added to this canonical scheme to best exploit the flexibility of the CMS trigger and data acquisition systems. The concepts of data parking and data scouting have been introduced to extend the physics reach of CMS, offering the opportunity of defining physics triggers with extremely loose selections (e.g. a dijet resonance trigger collecting data at 1 kHz). In this presentation, we review the evolution of the dataset definition during LHC Run I, and we discuss the plans for Run II.
DOI: 10.1088/1742-6596/664/4/042017
2015
User and group storage management at the CMS CERN T2 centre
A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access pattern, and long term tape archival. The resource management has been organised around the definition of working groups and the delegation of each group's composition to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.
DOI: 10.1400/237002
2016
Rassegnati o ribelli
DOI: 10.1400/231450
2015
Religioni ed eccidi nella storia
2016
Physics performance and fast turn around: the challenge of calibration and alignment at the CMS experiment during the LHC Run-II
DOI: 10.1400/230548
2015
Giubileo o seconda Perdonanza
DOI: 10.1400/235866
2015
Ho fatto un sogno
DOI: 10.1400/231886
2015
Chiesa cattolica e maschilismo
DOI: 10.1400/226285
2014
L'iceberg della pedofilia
DOI: 10.1400/229567
2015
Gli spazi della tradizione
DOI: 10.1400/171052
2011
Restiamo umani : la parola fatta carne
DOI: 10.1400/164704
2011
Tempo di colloquio ebraico-cristiano
DOI: 10.1400/166647
2011
Rifiutare una vita in cui non ci si riconosce
DOI: 10.1400/174594
2011
La discesa del divino nella carne del servo
2013
PHYSICS PERFORMANCE & DATASET (PPD)
DOI: 10.1400/229336
2011
Ma il rispetto della vita vale solo per quella umana
DOI: 10.1400/173735
2011
Ricordi, speranze, delusioni di un padre conciliare : intervista a Giovanni Franzoni
DOI: 10.1088/1742-6596/219/2/022013
2010
ECAL front-end monitoring in the CMS experiment
The CMS detector at LHC is equipped with a high precision lead tungstate crystal electromagnetic calorimeter (ECAL). The front-end boards and the photodetectors are monitored using a network of DCU (Detector Control Unit) chips located on the detector electronics. The DCU data are accessible through token rings controlled by an XDAQ-based software component. Relevant parameters are transferred to DCS (Detector Control System) and stored into the Condition DataBase. The operational experience from the ECAL commissioning at the CMS experimental cavern is discussed and summarized.
DOI: 10.1016/j.nima.2009.09.028
2010
The electromagnetic calorimeter of CMS: Status and performance with cosmic rays data and first LHC data
The CMS detector at the LHC is ready to take physics data. The high resolution Electromagnetic Calorimeter, which consists of 75848 lead tungstate crystals, will play a crucial role in the physics searches undertaken by CMS. The design and the status of the calorimeter will be presented, together with its performance in tests with beams, cosmic rays, and data from the first LHC beams.
DOI: 10.1400/145158
2010
Nuda fides o Propaganda fide
DOI: 10.1400/127606
2010
Il paradigma Rosarno
DOI: 10.1400/127628
2010
Il vero volto di Eluana
DOI: 10.1088/1742-6596/898/9/092038
2017
Monitoring of the data processing and simulated production at CMS with a web-based service: the Production Monitoring Platform (pMp)
Physics analysis at the Compact Muon Solenoid requires both the production of simulated events and processing of the data collected by the experiment. Since the end of the LHC Run-I in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns. These campaigns emulate different configurations of collision events, the detector, and LHC running conditions. In the same time span, sixteen data processing campaigns have taken place to reconstruct different portions of the Run-I and Run-II data with ever improving algorithms and calibrations. The scale and complexity of the events simulation and processing, and the requirement that multiple campaigns must proceed in parallel, demand that a comprehensive, frequently updated and easily accessible monitoring be made available. The monitoring must serve both the analysts, who want to know which datasets will become available and when, and the central production teams in charge of submitting, prioritizing, and running the requests across the distributed computing infrastructure. The Production Monitoring Platform (pMp) web-based service was developed in 2015 to address those needs. It aggregates information from multiple services used to define, organize, and run the processing requests. Information is updated hourly using a dedicated elastic database, and the monitoring provides multiple configurable views to assess the status of single datasets as well as entire production campaigns. This contribution will describe the pMp development, the evolution of its functionalities, and one and a half years of operational experience.
DOI: 10.1088/1742-6596/898/9/092047
2017
Monitoring of the infrastructure and services used to handle and automatically produce Alignment and Calibration conditions at CMS
The CMS experiment at the CERN LHC has a dedicated infrastructure to handle the alignment and calibration data. This infrastructure is composed of several services, which take on various data management tasks required for the consumption of the non-event data (also called condition data) in the experiment activities. The criticality of these tasks imposes tight requirements on the availability and the reliability of the services executing them. In this scope, a comprehensive monitoring and alarm-generating system has been developed. The system has been implemented based on the Nagios open source industry standard for monitoring and alerting services, and monitors the database back-end, the hosting nodes and key heart-beat functionalities for all the services involved. This paper describes the design, implementation and operational experience with the monitoring system developed and deployed at CMS in 2016.
DOI: 10.1088/1742-6596/898/3/032041
2017
Continuous and fast calibration of the CMS experiment: design of the automated workflows and operational experience
The exploitation of the full physics potential of the LHC experiments requires fast and efficient processing of the largest possible dataset with the most refined understanding of the detector conditions. To face this challenge, the CMS collaboration has set up an infrastructure for the continuous unattended computation of the alignment and calibration constants, allowing for a refined knowledge of the most time-critical parameters already a few hours after the data have been saved to disk. This is the prompt calibration framework which, since the beginning of the LHC Run-I, enables the analysis and the High Level Trigger of the experiment to consume the most up-to-date conditions, optimizing the performance of the physics objects. In Run-II this setup has been further expanded to include even more complex calibration algorithms requiring higher statistics to reach the needed precision. This imposed the introduction of a new paradigm in the creation of the calibration datasets for unattended workflows and opened the door to a further step in performance. The paper reviews the design of these automated calibration workflows, the operational experience in Run-II and the monitoring infrastructure developed to ensure the reliability of the service.
DOI: 10.22323/1.282.0988
2017
Physics performance and fast turn around: the challenge of calibration and alignment at the CMS experiment during the LHC Run-II
The CMS detector at the Large Hadron Collider (LHC) is a very complex apparatus with more than 70 million acquisition channels. To exploit its full physics potential, a very careful calibration of the various components, together with an optimal knowledge of their position in space, is essential. The CMS Collaboration has set up a powerful infrastructure to allow for the best knowledge of these conditions at any given moment. The quick turnaround of these workflows was proven crucial both for the algorithms performing the online event selection and for the ultimate resolution of the offline reconstruction of the physics objects. The contribution will report about the design and performance of these workflows during the operations of the 13 TeV LHC Run-II.
DOI: 10.1400/251953
2017
Eden : il peccato originale, un malinteso
DOI: 10.1400/250982
2017
Cresima e cresimandi : parliamone
DOI: 10.1400/151254
2010
La guerra con le mani pulite
DOI: 10.5281/zenodo.1034833
2017
Machine learning for data certification
DOI: 10.1400/133531
2010
Un'etica condivisa per una società pluralista
2008
Conversione o apostasia
DOI: 10.1400/119808
2009
I crocifissi in carne e ossa che ignoriamo ogni giorno
DOI: 10.1400/113813
2009
I ricordi utili di un padre conciliare
DOI: 10.1400/113847
2009
Sicurezza e vulnerabilità
2009
Il concilio tradito
DOI: 10.1051/epjconf/201921404006
2019
Development and operational experience of the web based application to collect, manage, and release the alignment and calibration configurations for data processing at CMS
The alignment and calibration workflows at the Compact Muon Solenoid (CMS) experiment are fundamental to providing high quality physics data and to maintaining the design performance of the experiment. To facilitate the operational efforts required by the experiment, the alignment and calibration team has developed and deployed a set of web-based applications to search, navigate and prepare a consistent set of calibrations to be consumed in reconstruction of data for physics, accessible through the Condition DB Browser. The Condition DB Browser also hosts various data management tools, including a visualization tool that allows users to easily inspect alignment and calibration contents, a user-defined notification agent for delivering updates on modifications to the database, a logging service for the user and the automatic online-to-offline condition uploads. In this paper we report on the operational experience with this web application from 2017 data taking, with focus on new features and tools incorporated during this period.
DOI: 10.1088/1742-6596/1085/4/042015
2018
Deep learning for inferring cause of data anomalies
Daily operation of a large-scale experiment is a resource consuming task, particularly from the perspective of routine data quality monitoring. Typically, data come from different sub-detectors and the global quality of the data depends on the combinatorial performance of each of them. In this paper, the problem of identifying the channels in which anomalies occurred is considered. We introduce a generic deep learning model and prove that, under reasonable assumptions, the model learns to identify the 'channels' which are affected by an anomaly. Such a model could be used for data quality manager cross-checks and assistance, and for identifying good channels in anomalous data samples. The main novelty of the method is that the model does not require ground truth labels for each channel; only a global flag is used. This effectively distinguishes the model from classical classification methods. Applied to CMS data collected in 2010, this approach proves its ability to decompose an anomaly into separate channels.
DOI: 10.1088/1742-6596/1525/1/012045
2020
Deep learning for certification of the quality of the data acquired by the CMS Experiment
Certifying the data recorded by the Compact Muon Solenoid (CMS) experiment at CERN is a crucial and demanding task as the data are used for publication of physics results. Anomalies caused by detector malfunctioning or sub-optimal data processing are difficult to enumerate a priori and occur rarely, making it difficult to use classical supervised classification. We base our prototype for the automation of this procedure on a semi-supervised approach using deep autoencoders. We demonstrate the ability of the model to detect anomalies with high accuracy, when compared against the outcome of the fully supervised methods. We show that the model offers good interpretability of the results, ascribing the origin of the problems in the data to a specific sub-detector or physics object. Finally, we address the issue of feature dependency on the LHC beam intensity.
DOI: 10.1051/epjconf/202024505013
2020
Analysis Tools for the VyPR Performance Analysis Framework for Python
VyPR (http://pyvypr.github.io/home/) is a framework being developed with the aim of automating as much as possible the performance analysis of Python programs. To achieve this, it uses an analysis-by-specification approach; developers specify the performance requirements of their programs (without any modifications of the source code) and such requirements are checked at runtime. VyPR then provides tools which allow developers to perform detailed analyses of the performance of their code. Such analyses can include determining the common paths taken to reach badly performing parts of code, deciding whether a single path through code led to variations in time taken by future observations, and more. This paper describes the developments that have taken place in the past year on VyPR’s analysis tools to yield a Python shell-based analysis library, and a web-based application. It concludes by demonstrating the use of the analysis tools on the CMS Experiment’s Conditions Upload service.
DOI: 10.48550/arxiv.2012.06336
2020
Construction and commissioning of CMS CE prototype silicon modules
As part of its HL-LHC upgrade program, the CMS Collaboration is developing a High Granularity Calorimeter (CE) to replace the existing endcap calorimeters. The CE is a sampling calorimeter with unprecedented transverse and longitudinal readout for both electromagnetic (CE-E) and hadronic (CE-H) compartments. The calorimeter will be built with ∼30,000 hexagonal silicon modules. Prototype modules have been constructed with 6-inch hexagonal silicon sensors with cell areas of 1.1 cm², and the SKIROC2-CMS readout ASIC. Beam tests of different sampling configurations were conducted with the prototype modules at DESY and CERN in 2017 and 2018. This paper describes the construction and commissioning of the CE calorimeter prototype, the silicon modules used in the construction, their basic performance, and the methods used for their calibration.
DOI: 10.1063/1.2396977
2006
Cosmic ray calibration of the PbWO4 crystal electromagnetic calorimeter of CMS
The Compact Muon Solenoid experiment at the CERN LHC features a high precision PbWO4 crystal electromagnetic calorimeter. Each crystal is first precalibrated with a radioactive source and by means of optical measurements. After assembly, each supermodule (1700 crystals) is exposed to cosmic rays. The comparison between the intercalibration obtained from cosmic muons and that obtained from test beam electrons was performed at the end of 2004 for an initial set of 130 channels and showed that a precalibration with a statistical precision of 1 to 2% can be achieved within approximately one week. An important aspect of the cosmic muon analysis is that it is entirely based on the calorimeter data, without using any external tracking device. We will present the setup and results from the 2004 test as well as recent data recorded on many supermodules.
2005
Measurement of the APD Gain Using Laser Monitoring Data During the 2002 CMS ECAL Test-Beam
DOI: 10.1400/265887
2001
L'ebraismo di Gesù nei Vangeli
2000
Giordano Bruno : attualità di un'eresia