
Giuseppe Bagliesi

DOI: 10.1007/s41781-018-0018-8
2019
Cited 121 times
A Roadmap for HEP Software and Computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
DOI: 10.1016/s0168-9002(00)00182-0
2000
Cited 26 times
New results on silicon microstrip detectors of CMS tracker
Interstrip and backplane capacitances on silicon microstrip detectors with p⁺ strips on an n substrate of 320 μm thickness were measured for pitches between 60 and 240 μm and width-over-pitch ratios between 0.13 and 0.5. Parametrisations of capacitance with respect to pitch and width were compared with data. The detectors were measured before and after being irradiated to a fluence of 4×10¹⁴ protons/cm² of 24 GeV/c momentum. The crystal orientation of the silicon was found to have a relevant influence on the surface radiation damage, favouring the choice of a 〈100〉 substrate. Working at high bias (up to 500 V in CMS) might be critical for the stability of the detector for a small width-over-pitch ratio. The influence of having a metal strip wider than the p⁺ implant has been studied and found to enhance stability.
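The parametrisations referred to above are, in the CMS tracker literature, typically linear dependences of the strip capacitance per unit length on the width-over-pitch ratio w/p. A minimal sketch of such a parametrisation is given below; the coefficients c0 and c1 are illustrative placeholders, not the values fitted in this paper.

```python
def strip_capacitance_pf_per_cm(width_um, pitch_um, c0=0.8, c1=1.6):
    """Linear parametrisation of the strip capacitance per unit length (pF/cm)
    as a function of the width-over-pitch ratio w/p. The coefficients c0 and c1
    are illustrative placeholders, not the values measured in this paper."""
    return c0 + c1 * (width_um / pitch_um)

# Scan the pitch and width-over-pitch ranges quoted in the abstract.
for pitch in (60, 120, 240):              # um
    for w_over_p in (0.13, 0.25, 0.5):
        c = strip_capacitance_pf_per_cm(w_over_p * pitch, pitch)
        print(f"pitch={pitch:3d} um  w/p={w_over_p:.2f}  C ~ {c:.2f} pF/cm")
```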
2000
Cited 23 times
Models of Networked Analysis at Regional Centres for LHC Experiments (MONARC). Phase 2 Report.
DOI: 10.1016/0168-9002(87)90966-1
1987
Cited 19 times
The ALEPH minivertex detector
Vertex detectors allow high precision reconstruction of particle tracks and therefore make possible the investigation of the decay topology of short-lived particles in collider experiments. In the ALEPH experiment at LEP a minivertex detector will be installed. It consists of silicon microstrip detectors arranged on two concentric “cylindrical” surfaces around the interaction point. With this geometry it will be possible to measure the r − ϕ − z coordinates of particles traversing the detector. The expected position resolution is 10 μm in r − ϕ and 20 μm in r − z. For optimum signal processing monolithic CMOS readout electronics are under development. Each chip consists of 60 charge sensitive preamplifiers, multiplexed into one output channel. Fast power switching will reduce heat dissipation. Details about construction and expected device performance will be described.
DOI: 10.1007/s10723-010-9152-1
2010
Cited 12 times
Distributed Analysis in CMS
The CMS experiment expects to manage several PBytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, in order to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations to prepare for CMS distributed analysis are presented, followed by the user experience in their current analysis activities.
DOI: 10.1088/1742-6596/513/3/032040
2014
Cited 7 times
CMS computing operations during run 1
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.
DOI: 10.1016/s0168-9002(97)00750-x
1997
Cited 14 times
Beam test results for single- and double-sided silicon detector prototypes of the CMS central detector
We report the results of two beam tests performed in July and September 1995 at CERN using silicon microstrip detectors of various types: single sided, double sided with small-angle stereo strips, double sided with orthogonal strips, and double sided with pads. The read-out electronics used Preshape32, Premux128 and VA1 chips. The signal-to-noise ratio and the resolution of the detectors were studied for different incident angles of the incoming particles and for different values of the detector bias voltage. The goal of these tests was to check and improve the performance of the prototypes for the CMS Central Detector.
DOI: 10.1088/1742-6596/219/6/062055
2010
Cited 6 times
Debugging data transfers in CMS
The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests was designed and deployed to equip the WLCG tiers which support the CMS virtual organization with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable data set replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS tiers by designing and enforcing a clear procedure to debug problematic links. This procedure moved a link from a debugging phase, in a separate and independent environment, to the production environment once a set of agreed conditions was achieved for that link. The goal was to deliver working transfer routes, one by one, to the CMS data operations team. The preparation, activities and experience of the DDT task force within the CMS experiment are discussed. Common technical problems and challenges encountered during the lifetime of the task force in debugging data transfer links in CMS are explained and summarized.
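The promotion logic described above (a link is exercised in a debugging environment and moved to production only once agreed conditions hold) can be pictured as a small decision function. The sketch below is illustrative; the metric names and threshold values are hypothetical, not the actual DDT commissioning criteria.

```python
from dataclasses import dataclass

@dataclass
class LinkStats:
    """Daily statistics for one source->destination transfer link.
    Field names and thresholds are illustrative, not the real DDT metrics."""
    rate_mb_s: float         # average transfer rate achieved
    success_fraction: float  # fraction of successful transfer attempts
    days_stable: int         # consecutive days meeting the targets

def commissioned(stats: LinkStats,
                 min_rate=20.0, min_success=0.8, min_days=7) -> bool:
    """Promote a link from the debugging environment to production only
    when all agreed conditions have held for long enough."""
    return (stats.rate_mb_s >= min_rate
            and stats.success_fraction >= min_success
            and stats.days_stable >= min_days)

print(commissioned(LinkStats(35.0, 0.95, 10)))  # True: ready for production
print(commissioned(LinkStats(5.0, 0.99, 10)))   # False: stays in debugging
```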
DOI: 10.1088/1742-6596/119/3/032005
2008
Cited 6 times
Reconstruction and identification of tau decays at CMS
Tau leptons play a key role in physics studies at the LHC. Interest in using tau leptons stems from (but is not limited to) their ability to offer a relatively low-background environment, a competitive way of probing new physics, as well as the possibility to explore new physics regions not accessible otherwise. The tau identification and reconstruction algorithms developed for the CMS experiment are described, from the first level of the trigger to the off-line reconstruction and selection.
DOI: 10.1016/0168-9002(88)90599-2
1988
Cited 10 times
Operation of limited streamer tubes with the gas mixture Ar + CO2 + n-pentane
The active detectors of the ALEPH hadron calorimeter at LEP consist of plastic streamer tubes developed in Frascati. The standard gas mixture for the operation of such devices is argon-isobutane (30/70). However, in underground experiments, for safety reasons, one has to reduce the hydrocarbon content. Therefore a study of the behaviour of streamer tubes operated with an Ar/CO2/n-pentane mixture has been performed. The influence of gas composition on efficiency, charge distribution and stability of operation has been investigated, and the results of these tests are presented.
DOI: 10.1007/bf01630588
1987
Cited 10 times
Λc photoproduction and lifetime measurement
A measurement of the lifetime of the Λc baryon photoproduced coherently off a germanium-silicon target is presented. A signal of Λc → ΔK* → pKππ⁰ has been observed and the two different decay diagrams for this process are compared. A sample of 9 Λc decays gives a lifetime of 1.1 (+0.8/−0.4) × 10⁻¹³ s.
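For context, in a fixed-target photoproduction measurement of this kind the proper time of each reconstructed candidate follows from its decay length and momentum via standard relativistic kinematics; the relation below is that general formula, not one specific to this paper, and the quoted lifetime is then extracted from the sample of candidate proper times (typically with a likelihood fit).

```latex
% Proper time of a single candidate from its decay length L, reconstructed
% momentum p and mass m (c is the speed of light):
t \;=\; \frac{L}{\beta\gamma\,c} \;=\; \frac{m\,L}{p\,c},
\qquad
\tau_{\Lambda_c} \;\approx\; \text{value fitted to the distribution of } t \text{ over the decay sample.}
```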
DOI: 10.48550/arxiv.0707.0928
2007
Cited 6 times
Tau tagging at Atlas and CMS
The tau identification and reconstruction algorithms developed for the LHC experiments Atlas and CMS are presented. Reconstruction methods suitable for use at the High Level Trigger and off-line are described in detail.
DOI: 10.48550/arxiv.0902.0180
2009
Cited 4 times
Proceedings of the Workshop on Monte Carlo's, Physics and Simulations at the LHC PART II
These proceedings collect the presentations given at the first three meetings of the INFN "Workshop on Monte Carlo's, Physics and Simulations at the LHC", held at the Frascati National Laboratories in 2006. The first part of these proceedings contains pedagogical introductions to several basic topics of both theoretical and experimental high pT LHC physics. The second part collects more specialised presentations.
2008
Cited 4 times
Track Reconstruction with Cosmic Ray Data at the Tracker Integration Facility
The subsystems of the CMS silicon strip tracker were integrated and commissioned at the Tracker Integration Facility (TIF) in the period from November 2006 to July 2007. As part of the commissioning, large samples of cosmic ray data were recorded under various running conditions in the absence of a magnetic field. Cosmic rays detected by scintillation counters were used to trigger the readout of up to 15% of the final silicon strip detector, and over 4.7 million events were recorded. This document describes the cosmic track reconstruction and presents results on the performance of track and hit reconstruction from dedicated analyses.
DOI: 10.1109/nssmic.2008.4775085
2008
Cited 4 times
The CMS data transfer test environment in preparation for LHC data taking
The CMS experiment is preparing for LHC data taking in several computing preparation activities. For distributed data transfer tests, a traffic load generator infrastructure was designed and deployed in early 2007 to equip the WLCG Tiers which support the CMS Virtual Organization with a means for debugging, load-testing and commissioning data transfer routes among CMS Computing Centres. The LoadTest is based upon PhEDEx as a reliable, scalable dataset replication system. In addition, a Debugging Data Transfers (DDT) Task Force was created to coordinate the debugging of data transfer links in the preparation period and during the Computing Software and Analysis challenge in 2007 (CSA07). The task force aimed to commission the most crucial transfer routes among CMS tiers by designing and enforcing a clear procedure to debug problematic links. This procedure moved a link from a debugging phase, in a separate and independent environment, to a production environment once a set of agreed conditions was achieved for that link. The goal was to deliver working transfer routes, one by one, to Data Operations. The experience with the overall test transfer infrastructure within computing challenges - as in the WLCG Common-VO Computing Readiness Challenge (CCRC’08) - as well as in daily testing and debugging activities is reviewed and discussed, and plans for the future are presented.
DOI: 10.1016/s0168-9002(00)00181-9
2000
Cited 7 times
Performance of CMS silicon microstrip detectors with the APV6 readout chip
We present results obtained with full-size wedge silicon microstrip detectors bonded to APV6 (Raymond et al., Proceedings of the 3rd Workshop on Electronics for LHC Experiments, CERN/LHCC/97-60) readout chips. We used two identical modules, each consisting of two crystals bonded together. One module was irradiated with 1.7×10¹⁴ neutrons/cm². The detectors have been characterized both in the laboratory and by exposing them to a beam of minimum ionizing particles. The results obtained are a good starting point for the evaluation of the performance of the “ensemble” detector plus readout chip in a version very similar to the final production one. We detected the signal from minimum ionizing particles with a signal-to-noise ratio ranging from 9.3 for the irradiated detector up to 20.5 for the non-irradiated detector, provided the parameters of the readout chips are carefully tuned.
DOI: 10.1016/j.nima.2006.09.081
2007
Cited 3 times
First level trigger using pixel detector for the CMS experiment
A proposal for a pixel-based Level 1 trigger for the Super-LHC is presented. The trigger is based on fast track reconstruction using the full pixel granularity exploiting a readout which connects different layers in specific trigger towers. The trigger will implement the current CMS high level trigger functionality in a novel concept of intelligent detector. A possible layout is discussed and implications on data links are evaluated.
DOI: 10.1088/1742-6596/396/3/032041
2012
Towards higher reliability of CMS computing facilities
The CMS experiment has adopted a computing system where resources are distributed worldwide across more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and its capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with a description of the monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites on which to conduct workflows, in order to maximize workflow efficiency. The performance of the sites against these tests during the first years of LHC running is also reviewed.
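The kind of selection described above can be pictured as a rolling evaluation: a site is flagged as usable for workflows when its recent functional-test results stay above an agreed availability threshold. The sketch below is illustrative; the 80% cut and the 7-day window are hypothetical, not the actual Site Readiness definition.

```python
def site_ready(daily_test_results, threshold=0.8, window=7):
    """Evaluate a site from its last `window` days of functional-test results
    (1.0 = all tests passed, 0.0 = all failed). The threshold and window are
    illustrative placeholders, not the real Site Readiness criteria."""
    recent = daily_test_results[-window:]
    availability = sum(recent) / len(recent)
    return availability >= threshold

results = [1.0, 1.0, 0.5, 1.0, 1.0, 1.0, 0.9]
print("ready for workflows" if site_ready(results) else "needs troubleshooting")
```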
DOI: 10.1016/s0168-9002(99)00419-2
1999
Cited 7 times
The R&D program for silicon detectors in CMS
This paper describes the main achievements in the development of radiation-resistant silicon detectors to be used in the CMS tracker. After a general description of the basic requirements for the operation of large semiconductor systems in the LHC environment, the issue of radiation resistance is discussed in detail. Advantages and disadvantages of the different technological options are presented for comparison. Laboratory measurements and test beam data are used to check the performance of several series of prototypes fabricated by different companies. The expected performance of the final detector modules is presented together with preliminary test beam results on system prototypes.
DOI: 10.1016/0168-9002(90)90207-m
1990
Cited 6 times
The combined response of the ALEPH electromagnetic and hadronic calorimeter to pions
The response to pions of an ALEPH electromagnetic calorimeter petal combined with the ALEPH hadron calorimeter prototype has been studied in the energy range between 2 and 30 GeV. The resolution of the combined calorimeters was found to be lower than that for the hadron calorimeter alone at low energies and approached this value at higher energies.
DOI: 10.22323/1.351.0014
2019
Integration of the Italian cache federation within the CMS computing model
The next decades at the HL-LHC will be characterized by a huge increase of both storage and computing requirements (between one and two orders of magnitude). Moreover, we foresee a shift in resource provisioning towards the exploitation of dynamic solutions (on private or public clouds and HPC facilities). In this scenario the computing model of the CMS experiment is pushed towards an evolution for the optimization of the amount of space that is managed centrally and the CPU efficiency of the jobs that run on "storage-less" resources. In particular, the computing resources of the "Tier2" sites layer can, for the most part, be instrumented to read data from a geographically distributed cache storage based on unmanaged resources, reducing in this way the operational effort by a large fraction and generating additional flexibility. The objective of this contribution is to present the first implementation of an INFN federation of cache servers, developed also in collaboration with the eXtreme Data Cloud EU project. The CNAF Tier-1 plus the Bari and Legnaro Tier-2s provide unmanaged storage which has been organized under a common namespace. This distributed cache federation has been seamlessly integrated in the CMS computing infrastructure, while the technical implementation of this solution is based on XRootD, largely adopted in the CMS computing model under the "Any Data, Any Time, Anywhere" project (AAA). The results in terms of CMS workflow performance are shown. In addition, a complete simulation of the effects of the described model under several scenarios, including dynamic hybrid cloud resource provisioning, is discussed. Finally, a plan for the upgrade of such a prototype towards a stable INFN setup seamlessly integrated with the production CMS computing infrastructure is discussed.
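The behaviour of such a cache layer can be summarised as a read-through lookup: a job asks the common namespace for a file, and on a miss the file is fetched from the remote origin (over XRootD in the real system) and kept locally for later jobs. The sketch below is a toy model of that logic under those assumptions, not the actual XRootD/XCache implementation.

```python
class ReadThroughCache:
    """Toy model of a read-through cache federation: serve a file from the
    local cache when present, otherwise fetch it from the remote origin and
    keep a copy. Illustrative only; not the XRootD/XCache code."""

    def __init__(self, fetch_from_origin):
        self._store = {}                  # logical file name -> bytes
        self._fetch = fetch_from_origin   # callable simulating a remote read

    def open(self, lfn):
        if lfn not in self._store:        # cache miss: go to the origin
            self._store[lfn] = self._fetch(lfn)
        return self._store[lfn]           # cache hit (or freshly cached copy)

def remote_read(lfn):
    print(f"reading {lfn} from remote storage")
    return b"event data"

cache = ReadThroughCache(remote_read)
cache.open("/store/data/run1/file.root")  # miss: triggers a remote read
cache.open("/store/data/run1/file.root")  # hit: served from the local cache
```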
DOI: 10.1016/s0168-9002(01)00544-7
2001
Cited 4 times
Optimization of the silicon sensors for the CMS tracker
The CMS experiment at the LHC will comprise a large silicon strip tracker. This article highlights some of the results obtained in the R&D studies for the optimization of its silicon sensors. Measurements of the capacitances and of the high voltage stability of the devices are presented before and after irradiation to the dose expected after the full lifetime of the tracker.
DOI: 10.1140/epjcd/s2004-03-1804-1
2004
CMS high-level trigger selection
The CMS High-Level Trigger (HLT) is based on sets of dedicated commercial processors, whose goal is to reduce the Level-1 trigger rate of 100 kHz to ≈100 Hz, a value compatible with permanent storage of data. In this report the results from the recently published DAQ TDR are presented. The offline code, with small modifications, can be used in the trigger chain soon after Level 1 by making use of regional and conditional analyses. For the benchmark channels considered, the signal efficiencies and background rejection factors, obtained within ≈500 ms on a recent CPU (1 GHz processor), are competitive with full offline analyses done without any timing limitation.
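The numbers quoted above fix the scale of the problem: a 100 kHz input rate with roughly 500 ms of CPU per event implies on the order of 50,000 concurrently busy 1 GHz processors, and an overall rejection factor of about 1000 to reach 100 Hz. A back-of-envelope check using only the figures in the abstract:

```python
l1_rate_hz = 100_000    # Level-1 accept rate quoted in the abstract
hlt_rate_hz = 100       # target output rate to permanent storage
cpu_per_event_s = 0.5   # ~500 ms on a 1 GHz processor per event

rejection_factor = l1_rate_hz / hlt_rate_hz   # ratio of input to output rates
busy_cpus = l1_rate_hz * cpu_per_event_s      # processors needed to keep up

print(f"required rejection factor: {rejection_factor:.0f}")        # -> 1000
print(f"equivalent 1 GHz processors kept busy: {busy_cpus:.0f}")   # -> 50000
```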
DOI: 10.1088/1742-6596/396/3/032012
2012
CMS resource utilization and limitations on the grid after the first two years of LHC collisions
After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for operational performance, and CMS records data at more than 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to ever-larger event sizes and processing times. The CMS computing system has responded admirably to these challenges, but some reoptimization of the computing model has been required to maximize the efficient delivery of data analysis results by the collaboration in the face of increasingly constrained computing resources. We present the current status of the system, describe the recent performance, and discuss the challenges ahead and how CMS intends to meet them.
DOI: 10.1088/1742-6596/898/9/092042
2017
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments increasingly rely on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources and new information sources, and allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.
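A concrete way to picture what such a catalogue stores is a per-site record combining topology and service configuration, validated before being exposed through the APIs. The fields and the validation rule below are hypothetical, not the actual CRIC schema.

```python
from dataclasses import dataclass, field

@dataclass
class SiteRecord:
    """Hypothetical, simplified description of one computing site, roughly the
    kind of record a configuration catalogue stores; not the real CRIC schema."""
    name: str
    tier: int
    country: str
    cpu_cores: int
    disk_tb: float
    services: dict = field(default_factory=dict)  # e.g. {"xrootd": "root://..."}

    def validate(self):
        """Reject obviously inconsistent records before they are published."""
        assert self.tier in (0, 1, 2, 3), f"{self.name}: unknown tier {self.tier}"
        assert self.cpu_cores > 0 and self.disk_tb >= 0, f"{self.name}: bad capacity"

site = SiteRecord("T2_IT_Pisa", tier=2, country="IT",
                  cpu_cores=8000, disk_tb=3500.0,
                  services={"srm": "srm://storage.example.invalid"})
site.validate()  # raises AssertionError if the record is malformed
```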
DOI: 10.1209/0295-5075/5/5/005
1988
Cited 4 times
A Measurement of D⁰ Lifetime
A measurement of the lifetime of D⁰ mesons photoproduced coherently off a germanium target is presented. Signals have been observed for the production of D⁰ into several channels and for D*⁺ → D⁰π⁺. A sample of 58 D⁰ decays gives a lifetime of (3.4 +0.6 −0.5 ± 0.3) · 10⁻¹³ s.
DOI: 10.1109/escience.2018.00082
2018
Distributed and On-demand Cache for CMS Experiment at LHC
In the CMS [1] computing model the experiment owns dedicated resources around the world that, for the most part, are located in computing centers with a well defined Tier hierarchy. The geo-distributed storage is then controlled centrally by CMS Computing Operations. In this architecture data are distributed and replicated across the centers following a pre-placement model which is mostly human controlled. Analysis jobs are then mostly executed on computing resources close to the data location. This avoids wasting CPU on I/O latency, although it does not allow optimal use of the available job slots.
DOI: 10.1016/0168-9002(95)00641-9
1995
Cited 4 times
Performance of a prototype of the CMS central detector
A prototype of the barrel Tracking Detector of the Compact Muon Solenoid (CMS) experiment proposed for LHC was built and tested in a beam and in a magnetic field of up to 3 T. It contained six microstrip gas chambers, 25 cm long, and three double-sided silicon microstrip detectors, 12.5 cm long. We report some preliminary results on the performance of the chambers.
DOI: 10.1016/0168-9002(88)90304-x
1988
Cited 3 times
Monte Carlo simulation of the ALEPH hadron prototype calorimeter
In this paper the capability of GHEISHA as a hadron shower generator is investigated and discussed by comparing the test run data from the ALEPH hadron calorimeter prototype with the Monte Carlo simulation developed in the GEANT3 framework. The results on muon, electron and pion simulation are reported. The longitudinal development of hadron showers is well reproduced and saturation effects are also explained. The resolution of the calorimeter for incident pions as computed by the Monte Carlo method is slightly larger than what is observed.
DOI: 10.1016/s0168-9002(01)01824-1
2002
CMS silicon tracker developments
The CMS silicon tracker consists of 70 m² of microstrip sensors whose design will be finalized at the end of 1999 on the basis of systematic studies of device characteristics as a function of the most important parameters. A fundamental constraint comes from the fact that the detector has to be operated in a very hostile radiation environment with full efficiency. We present an overview of the current results and the prospects for converging on a final set of parameters for the silicon tracker sensors.
DOI: 10.1016/s0168-9002(01)00536-8
2001
The CMS tracker and expected performances
A robust tracking and a detailed vertex reconstruction are essential characteristics of every modern High Energy Physics experiment. This is particularly true in the LHC environment, where thousands of tracks are expected to cross the inner part of the detector each bunch crossing (25 ns). In order to meet these requirements the Compact Muon Solenoid (CMS) collaboration has recently proposed to build a multi-layer full-silicon tracker with a pixel vertex detector close to the beam pipe. This layout is different from the one originally proposed in the CMS Tracker TDR (Technical Design Report, CERN/LHCC 98-6), where an inner silicon tracker surrounded by an outer MSGC tracker was foreseen. In this paper the reasons which led to changing the CMS baseline proposal are explained, and the layout and expected performance of the recently proposed all-silicon tracker are described.
2006
The experimental world
DOI: 10.1007/bf03185592
1999
Comparative study of (111) and (100) crystals and capacitance measurements on Si strip detectors in CMS
For the construction of the silicon microstrip detectors for the Tracker of the CMS experiment, two different substrate choices were investigated: a high-resistivity (6 kΩ cm) substrate with (111) crystal orientation and a low-resistivity (2 kΩ cm) one with (100) crystal orientation. The interstrip and backplane capacitances were measured before and after the exposure to radiation, in a range of strip pitches from 60 μm to 240 μm and for values of the width-over-pitch ratio between 0.1 and 0.5.
DOI: 10.1016/s0168-9002(98)01461-2
1999
The CMS silicon microstrip detectors: research and development
A large quantity of silicon microstrip detectors is foreseen to be used as part of the CMS tracker. A specific research and development program has been carried out with the aim of defining layouts and technological solutions suitable for the use of silicon detectors in a high radiation environment. Results presented here summarise this work across many research areas, such as techniques for device manufacturing, pre- and post-irradiation electrical characterization, silicon bulk defect analysis and simulations, analytical calculations and simulations of system performance, and test beam analysis. As a result of this work we have chosen to use single-sided, AC-coupled, polysilicon-biased, 300 μm thick, p⁺ on n substrate detectors. We are confident that these devices will match the required performance for the CMS tracker provided they can be operated at bias voltages as high as 500 V. Such high-voltage devices have been successfully manufactured and we are now concentrating our efforts on enhancing yield and reliability.
DOI: 10.1016/0168-9002(90)91737-v
1990
Hadron showers in an iron-streamer tube sampling calorimeter
Hadronic showers in an iron-streamer tube sampling calorimeter have been studied for energies ranging between 3 and 25 GeV. Longitudinal and transverse energy distributions have been parametrized and compared with those determined for iron-scintillator calorimeters.
DOI: 10.1007/bf03185593
1999
High-voltage breakdown studies on Si microstrip detectors
The breakdown performance of CMS barrel module prototype detectors and test devices with single- and multi-guard structures was studied before and after neutron irradiation up to 2·10¹⁴ 1 MeV equivalent neutrons. Before irradiation, avalanche breakdown occurred at the guard ring implant edges. We measured 100–300 V higher breakdown voltage values for the devices with a multi-guard structure than for devices with a single guard ring. After irradiation and type inversion the breakdown was smoother than before irradiation, and the breakdown voltage increased to 500–600 V for most of the devices.
DOI: 10.1088/1742-6596/396/4/042003
2012
Optimization of HEP Analysis Activities Using a Tier2 Infrastructure
While the model for a Tier2 is well understood and implemented within the HEP community, a refined design for analysis-specific sites has not been agreed upon as clearly. We aim to describe the solutions adopted at INFN Pisa, the biggest Tier2 in the Italian HEP community. A standard Tier2 infrastructure is optimized for Grid CPU and storage access, while a more interactive use of the resources is beneficial to the final data analysis step. In this step, POSIX file storage access is easier for the average physicist, and has to be provided in a real or emulated way. Modern analysis techniques use advanced statistical tools (like RooFit and RooStats), which can make use of multi-core systems. The infrastructure has to provide or create on demand computing nodes with many cores available, above the existing and less elastic Tier2 flat CPU infrastructure. Finally, the users do not want to have to deal with data placement policies at the various sites, and hence transparent WAN file access, again with a POSIX layer, must be provided, making use of the soon-to-be-installed 10 Gbit/s regional lines. Even if standalone systems with such features are possible and exist, the implementation of an analysis site as a virtual layer over an existing Tier2 requires novel solutions; the ones used in Pisa are described here.
DOI: 10.1088/1742-6596/396/3/032009
2012
Building a Prototype of LHC Analysis Oriented Computing Centers
A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC collider. While Tier2 Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well established concept with years of running experience, sites specialized towards end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user not expert in computing. On the storage side, we are experimenting with storage techniques allowing remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of their processes at the site, and for an efficient support system in case of problems. We report the results of the tests executed on the different subsystems and give a description of the layout of the infrastructure in place at the sites participating in the consortium.
DOI: 10.1016/0168-9002(92)90776-z
1992
The gain monitoring system of the Aleph hadron calorimeter
The gas gain monitoring system of the ALEPH hadron calorimeter is described. The dependence of the charge response on (a) the ratio of pressure to temperature and (b) the gas mixture parameters (Ar/CO2 ratio and isobutane percentage) has been determined. The total gain variation is measured with a precision of about 0.4%.
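The practical use of such a monitoring system is to normalise the measured charge to reference environmental conditions. The sketch below assumes a locally linear dependence of the gain on P/T around a reference point; the functional form, the slope value and the numbers are illustrative only, not the parametrisation used by ALEPH.

```python
def corrected_charge(q_measured, p_over_t, p_over_t_ref, slope):
    """Normalise a measured charge to reference pressure/temperature conditions,
    assuming a locally linear gain dependence on P/T calibrated from monitoring
    data. The functional form and `slope` are illustrative assumptions."""
    gain_ratio = 1.0 + slope * (p_over_t - p_over_t_ref)
    return q_measured / gain_ratio

# Hypothetical numbers: P in mbar, T in K, calibrated slope of -5 per unit of P/T.
print(corrected_charge(q_measured=100.0, p_over_t=3.43, p_over_t_ref=3.40, slope=-5.0))
```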
DOI: 10.1109/nssmic.1995.504319
2002
Double-sided silicon detectors using n-side pad readout for the CMS silicon inner tracker
Double-sided silicon detector prototypes produced for the CMS inner tracker are described which divide the n-side into pads rather than strips. The signal routing to the readout electronics is made on a separate flexible z-print which is glued on the detector and then wire-bonded to the pads.
DOI: 10.1016/s0168-9002(00)00616-1
2000
The CMS silicon tracker
This paper describes the Silicon microstrip Tracker of the CMS experiment at the LHC. It consists of a barrel part with 5 layers and two endcaps with 10 disks each. About 10 000 single-sided equivalent modules have to be built, each one carrying two daisy-chained silicon detectors and their front-end electronics. Back-to-back modules are used to read out the radial coordinate. The tracker will be operated in an environment kept at a temperature of T = −10°C to minimize the radiation damage to the Si sensors. Heavily irradiated detectors will be safely operated thanks to the high-voltage capability of the sensors. Full-size mechanical prototypes have been built to check the system aspects before starting the construction.
DOI: 10.1088/1742-6596/898/9/092014
2017
Grid site availability evaluation and monitoring at CMS
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.
DOI: 10.1088/1742-6596/119/7/072015
2008
Real-time dataflow and workflow with the CMS tracker data
The Tracker detector took data with cosmic rays at the Tracker Integration Facility (TIF) at CERN. First on-line monitoring tasks were executed at the Tracker Analysis Centre (TAC), a dedicated control room at the TIF with limited computing resources. A set of software agents was developed to perform the real-time data conversion into a standard format, to archive data on tape at CERN and to publish them in the official CMS data bookkeeping systems. According to the CMS computing and analysis model, most of the subsequent data processing has to be done in remote Tier-1 and Tier-2 sites, so data were automatically transferred from CERN to the sites interested in analyzing them, currently Fermilab, Bari and Pisa. Official reconstruction in the distributed environment was triggered in real time by using the tool currently used for the processing of simulated events. Automatic end-user analysis of the data was performed in a distributed environment, in order to derive the distributions of important physics variables. The tracker data processing is currently migrating to the CERN Tier-0 as a prototype for the global data taking chain. Tracker data were also registered in the most recent version of the data bookkeeping system, DBS-2, profiting from its new features for handling real data. A description of the dataflow/workflow and of the tools developed is given, together with results on the performance of the real-time chain. Almost 7.2 million events were officially registered, moved, reconstructed and analyzed in remote sites by using the distributed environment.
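The chain of software agents described above amounts to a linear pipeline applied to each recorded file: convert, archive, register, transfer, reconstruct. The toy sketch below only illustrates that ordering; the step names follow the abstract, while the function bodies and the file name are placeholders.

```python
# Toy agents; each stands in for one of the steps named in the abstract.
def convert(path):     print(f"{path}: converted to the standard format")
def archive(path):     print(f"{path}: archived to tape at CERN")
def register(path):    print(f"{path}: registered in the data bookkeeping system")
def transfer(path):    print(f"{path}: transferred to interested Tier-1/Tier-2 sites")
def reconstruct(path): print(f"{path}: official reconstruction triggered")

# Each file flows through the same ordered chain of agents.
pipeline = [convert, archive, register, transfer, reconstruct]
for agent in pipeline:
    agent("cosmics_run_001.dat")  # hypothetical file name
```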
2009
Test of the Inner Tracker Silicon Microstrip Modules
The inner portion of the CMS microstrip Tracker consists of 3540 silicon detector modules; its construction has been under the full responsibility of seven INFN (Istituto Nazionale di Fisica Nucleare) and University laboratories in Italy. In this note the procedures and strategies developed and perfected to qualify the Tracker Inner Barrel and Inner Disk modules for installation are described. In particular, the tests required to select highly reliable detector modules are illustrated and a summary of the results from the full Inner Tracker module test is presented.
2009
Proceedings, Workshop on Monte Carlo's, Physics and Simulations at the LHC, Part II: Frascati, Italy, 2006
DOI: 10.48550/arxiv.0902.0293
2009
Proceedings of the Workshop on Monte Carlo's, Physics and Simulations at the LHC PART I
These proceedings collect the presentations given at the first three meetings of the INFN "Workshop on Monte Carlo's, Physics and Simulations at the LHC", held at the Frascati National Laboratories in 2006. The first part of these proceedings contains pedagogical introductions to several basic topics of both theoretical and experimental high pT LHC physics. The second part collects more specialised presentations.
DOI: 10.22323/1.414.0367
2022
Incorporating creativity and interdisciplinarity in science teaching: The case of Art & Science across Italy
Art&Science across Italy is an INFN/CERN project for Italian high-school students (16-18 y.o.). More than 10,000 students have joined since 2016. Creativity and the capability of vision are common to many disciplines and are involved in artistic and scientific thinking and activities. Scientists and artists are often asked to see and think beyond the perceivable reality, to imagine aspects of things and events which can be better seen from an unusual perspective. The main idea is to put into practice the basic concept of the STEAM field, in which neither STEM nor the arts are privileged over the other, but both are equally in play. Therefore, our aim is to engage high-school students with science using artistic languages, regardless of the students' specific skills or level of knowledge.
DOI: 10.1016/s0168-9002(98)01101-2
1999
Test results of heavily irradiated Si detectors
A large use of silicon microstrip detectors is foreseen for the intermediate part of the CMS tracker. A specific research and development program has been carried out with the aim of finding design layouts and technological solutions that allow silicon microstrip detectors to be reliably used in a high-radiation environment. As a result of this work, single-sided, AC-coupled, polysilicon-biased, 300 μm thick, p⁺ on n substrate detectors were chosen. Irradiation tests have been performed on prototypes up to a fluence of 2×10¹⁴ n/cm². The detector performance does not significantly change if the detectors are biased well above the depletion voltage. S/N is reduced by less than 20%, still enough to ensure good efficiency and space resolution. Multiguard structures have been developed in order to reach high-voltage operation (above 500 V).
DOI: 10.1016/j.nima.2006.09.089
2007
First performance studies of a pixel-based trigger in the CMS experiment
An important tool for the discovery of new physics at the LHC is the design of a low-level trigger with a high power of background rejection. The contribution of the pixel detector to the lowest level trigger at CMS is studied, focusing on low-energy jet identification by matching the information from the calorimeters and the pixel detector. In addition, primary vertex algorithms are investigated. The performance is evaluated in terms of QCD rejection and efficiency for multihadronic jet final states, respectively.
DOI: 10.1109/23.903854
2000
Test results on heavily irradiated silicon detectors for the CMS experiment at LHC
We report selected results of laboratory measurements and beam tests of heavily irradiated microstrip silicon detectors. The detectors were single-sided devices, produced by different manufacturers and irradiated with different sources, for several total ionizing doses and fluences up to 4×10¹⁴ 1-MeV-equivalent neutrons per cm². Strip resistance and capacitance, detector leakage currents and breakdown performance were measured before and after irradiation. Signal-to-noise ratio and detector efficiency were studied in beam tests, for different values of the detector temperature and of the read-out pitch, as a function of the detector bias voltage. The goal of these tests is to optimise the design of the final prototypes for the Silicon Strip Tracker of the CMS experiment at the CERN LHC collider.
2000
Models of Networked Analysis at Regional Centres for LHC Experiments (MONARC), Phase 2 Report, 24th March 2000
DOI: 10.48550/arxiv.hep-ex/9911015
1999
Object oriented data analysis in ALEPH
This article describes the status of the ALPHA++ project of the ALEPH collaboration. The ALEPH data have been converted from Fortran data structures (BOS banks) into C++ objects and stored in an object-oriented database (Objectivity/DB), using tools provided by the RD45 collaboration and the LHC++ software project at CERN. A description of the database setup and of a preliminary version of an object-oriented analysis program is given.
DOI: 10.1016/s0920-5632(99)00565-4
1999
R&D for the CMS silicon tracker
DOI: 10.1016/s0168-9002(98)00831-6
1998
The CMS silicon tracker
The new silicon tracker layout (V4) is presented. The system aspects of the construction are discussed together with the expected tracking performance. Because of the high-radiation environment in which the detectors will operate, particular care has been devoted to the study of the characteristics of heavily irradiated detectors. This includes studies of performance (charge collection, cluster size, resolution, efficiency) as a function of the bias voltage, integrated fluence, incidence angle and temperature.
DOI: 10.1016/s0920-5632(99)00564-2
1999
The silicon microstrip tracker for CMS
The CMS silicon strip tracker involves about 70 m² of instrumented silicon, with approximately 18500 microstrip detectors read out by 5 × 10⁶ electronics channels. It has to satisfy a set of stringent requirements imposed by the environment and by the physics expected at the LHC: low cell occupancy and good resolution, radiation hardness aided by adequate cooling, low mass combined with high stability. These conditions have been incorporated in a highly modular design of the detector modules and their support structures, chosen to facilitate construction and to allow for easy assembly and maintenance.
DOI: 10.1016/s0168-9002(97)01247-3
1998
The CMS silicon tracker at LHC
The paper describes the Silicon Tracking System of the Compact Muon Solenoid (CMS) experiment that is foreseen to collect events from p–p collisions at Ecm = 14 TeV at the CERN future Large Hadron Collider (LHC). The proposed system consists of four layers of silicon microstrip detectors placed between the two inner layers of the pixel detector and the outer microstrip gas chamber system. The barrel part covers the η region up to 1.8, instrumenting the central radial region between 20 and 50 cm. The forward-backward disks extend the coverage up to η = 2.6. This paper reviews the main characteristics and performance of the system, the current status of the R&D activities that we are carrying out, and the status of the milestones we have to fulfil in view of the Technical Design Report expected at the end of the year.
1997
Searches for R-parity violating supersymmetry at LEP-2
DOI: 10.1007/bf03185596
1999
The silicon microstrip tracker for CMS
This paper describes the silicon microstrip tracker of the CMS experiment at the future LHC. The silicon tracker consists of a barrel part with 5 layers and two endcaps with 10 disks each. About 6500 modules will have to be built, each one carrying two daisy-chained silicon sensors and their front-end electronics. The modules have been designed to be as simple and robust as possible. Radiation damage in the silicon sensors is minimized by cooling the whole system down to −10°C. Safe operation after heavy irradiation will be possible due to the high-voltage capability of the sensors. We expect the sensors to have a signal-to-noise ratio of 10 at the end of 10 years of LHC running, which still gives an efficiency of almost 100%.