
I. González Caballero


DOI: 10.1016/j.nima.2023.168103
2023
The Analytical Method algorithm for trigger primitives generation at the LHC Drift Tubes detector
The Compact Muon Solenoid (CMS) experiment prepares its Phase-2 upgrade for the high-luminosity era of LHC operation (HL-LHC). Due to the increase in occupancy, trigger latency and rates, the full electronics of the CMS Drift Tube (DT) chambers will need to be replaced. In the new design, the time bin for the digitization of the chamber signals will be about 1 ns, and the totality of the signals will be forwarded asynchronously to the service cavern at full resolution. The new backend system will be in charge of building the trigger primitives of each chamber. These trigger primitives contain the chamber-level information about the muon candidates' position, direction, and collision time, and are used as input to the L1 CMS trigger. The added functionalities will improve the robustness of the system against ageing. An algorithm based on analytical solutions for reconstructing the DT trigger primitives, called the Analytical Method, has been implemented both as a software C++ emulator and in firmware. Its performance has been estimated using the software emulator with simulated and real data samples, and through hardware implementation tests. Measured efficiencies are 96 to 98% for all qualities, and time and spatial resolutions are close to the ultimate performance of the DT chambers. A prototype chain of the HL-LHC electronics using the Analytical Method for trigger primitive generation was installed during Long Shutdown 2 of the LHC and operated in CMS cosmic data-taking campaigns in 2020 and 2021. Results from this validation step, the so-called Slice Test, are presented.
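As an illustration of the analytical idea behind such trigger primitives, the sketch below fits a straight-line muon segment and a common collision time t0 to four DT hits, assuming known wire positions, layer heights and left/right lateralities. The geometry, numbers and structure are placeholders for illustration and do not reproduce the actual Analytical Method emulator or firmware.

// Illustrative sketch (not the actual CMS emulator/firmware code): given four
// DT hits with known wire positions, layer heights and assumed left/right
// lateralities, the relation
//     xWire_i + lat_i * vDrift * t_i = a + b * z_i + lat_i * vDrift * t0
// is linear in the three unknowns (a, b, t0), so position, slope and the
// collision time can be obtained analytically from a least-squares fit.
#include <array>
#include <cstdio>

struct Hit { double xWire, z, t; int lat; };   // lat = +1 (right) or -1 (left)

struct Segment { double a, b, t0; };           // x = a + b*z, produced at t = t0

constexpr double vDrift = 0.0545;              // mm/ns, typical DT drift velocity

// Solve the 3x3 normal equations with Cramer's rule.
Segment fitSegment(const std::array<Hit, 4>& hits) {
    double M[3][3] = {}, r[3] = {};
    for (const Hit& h : hits) {
        // Design-matrix row for the unknowns (a, b, t0) and its target value.
        const double row[3] = {1.0, h.z, static_cast<double>(h.lat) * vDrift};
        const double y = h.xWire + h.lat * vDrift * h.t;
        for (int i = 0; i < 3; ++i) {
            r[i] += row[i] * y;
            for (int j = 0; j < 3; ++j) M[i][j] += row[i] * row[j];
        }
    }
    auto det3 = [](double A[3][3]) {
        return A[0][0]*(A[1][1]*A[2][2]-A[1][2]*A[2][1])
             - A[0][1]*(A[1][0]*A[2][2]-A[1][2]*A[2][0])
             + A[0][2]*(A[1][0]*A[2][1]-A[1][1]*A[2][0]);
    };
    const double d = det3(M);
    Segment s{};
    double tmp[3][3];
    double* out[3] = {&s.a, &s.b, &s.t0};
    for (int k = 0; k < 3; ++k) {
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) tmp[i][j] = (j == k) ? r[i] : M[i][j];
        *out[k] = det3(tmp) / d;
    }
    return s;
}

int main() {
    // Toy 4-layer super-layer: half-cell staggered wires, drift times in ns.
    std::array<Hit, 4> hits = {{{ 0.0,  0.0, 120.0, +1},
                                {21.0, 13.0, 150.0, -1},
                                { 0.0, 26.0, 140.0, +1},
                                {21.0, 39.0, 170.0, -1}}};
    const Segment seg = fitSegment(hits);
    std::printf("a = %.2f mm, slope = %.3f, t0 = %.1f ns\n", seg.a, seg.b, seg.t0);
}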
DOI: 10.48550/arxiv.cs/0306005
2003
Cited 8 times
The Virtual Monte Carlo
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
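The core idea can be pictured with a small, hypothetical C++ sketch (not the real TVirtualMC interface): the user code depends only on an abstract transport interface, so the concrete engine can be swapped without touching the application.

// Simplified illustration of the VMC concept (not the real TVirtualMC API):
// geometry definition, detector response and I/O code see only the abstract
// layer, so Geant3, Geant4 or FLUKA can be selected without changing user code.
#include <cstdio>
#include <memory>

class VirtualTransport {                 // stand-in for the abstract VMC layer
public:
    virtual ~VirtualTransport() = default;
    virtual void ProcessRun(int nEvents) = 0;
};

class Geant4Engine : public VirtualTransport {   // hypothetical concrete backend
public:
    void ProcessRun(int nEvents) override {
        std::printf("Geant4-like engine: simulating %d events\n", nEvents);
    }
};

// User code only ever sees the interface.
void runSimulation(VirtualTransport& mc) { mc.ProcessRun(100); }

int main() {
    std::unique_ptr<VirtualTransport> mc = std::make_unique<Geant4Engine>();
    runSimulation(*mc);                  // switching engines = changing one line
}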
DOI: 10.1088/1742-6596/396/3/032091
2012
Cited 3 times
Integrating PROOF Analysis in Cloud and Batch Clusters
High Energy Physics (HEP) analyses are becoming more complex and demanding due to the large amount of data collected by the current experiments. The Parallel ROOT Facility (PROOF) provides researchers with an interactive tool to speed up the analysis of huge volumes of data by exploiting parallel processing on both multicore machines and computing clusters. The typical PROOF deployment scenario is a permanent set of cores configured to run the PROOF daemons. However, this approach is incapable of adapting to the dynamic nature of interactive usage. Several initiatives seek to improve the use of computing resources by integrating PROOF with a batch system, such as PROOF on Demand (PoD) or PROOF Cluster. These solutions are currently in production at Universidad de Oviedo and IFCA and are positively evaluated by users. Although they are able to adapt to the computing needs of users, they must comply with the specific configuration, OS and software installed at the batch nodes. Furthermore, they share the machines with other workloads, which may cause disruptions in the interactive service for users. These limitations make PROOF a typical use case for cloud computing. In this work we take advantage of the Cloud Infrastructure at IFCA in order to provide a dynamic PROOF environment where users can control the software configuration of the machines. The PROOF Analysis Framework (PAF) facilitates the development of new analyses and offers transparent access to PROOF resources. Several performance measurements are presented for the different scenarios (PoD, SGE and Cloud), showing a speed improvement closely correlated with the number of cores used.
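In practice, the user-facing side of such a setup is a short ROOT macro. The sketch below, with placeholder file names and a hypothetical selector, shows how a chain of ntuples would be processed through a PROOF session, whether that session is PROOF-Lite, a static cluster or a dynamically created PoD/cloud deployment.

// Minimal ROOT macro sketch of the user-side workflow: open a PROOF session
// (PROOF-Lite here; a cluster or PoD master URL would be used for batch/cloud
// resources) and process a chain of ntuples with a TSelector-based analysis.
// Tree name, file pattern and selector are assumptions for illustration.
#include "TChain.h"
#include "TProof.h"

void runProofAnalysis() {
    TProof::Open("lite://");            // e.g. "user@pod-master" for a PoD session

    TChain chain("Events");             // assumed tree name
    chain.Add("data/sample_*.root");    // assumed input files
    chain.SetProof();                   // route Process() through the PROOF session
    chain.Process("MySelector.C+");     // hypothetical selector, compiled with ACLiC
}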
DOI: 10.1088/1742-6596/331/6/062032
2011
Cited 3 times
CMS Distributed Computing Integration in the LHC sustained operations era
After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless, it is this same need for stability and smooth operations that requires the introduction of features that were considered not strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; an effective process to deploy new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months, as well as the requirements for Grid and Cloud software developers for the future.
DOI: 10.1088/1748-0221/14/12/c12010
2019
Cited 3 times
Study of the effects of radiation on the CMS Drift Tubes Muon Detector for the HL-LHC
The CMS drift tubes (DT) muon detector, built to withstand the expected LHC integrated and instantaneous luminosities, will also be used in the High Luminosity LHC (HL-LHC) at a 5 times larger instantaneous luminosity and, consequently, much higher levels of radiation, reaching about 10 times the LHC integrated luminosity. Initial irradiation tests of a spare DT chamber at the CERN gamma irradiation facility (GIF++), at a large (∼O(100)) acceleration factor, showed ageing effects resulting in a degradation of the DT cell performance. However, full CMS simulations have shown almost no impact on the muon reconstruction efficiency over the full barrel acceptance and for the full integrated luminosity. A second spare DT chamber was moved inside the GIF++ bunker in October 2017. This chamber was irradiated at lower acceleration factors, with only 2 of its 12 layers set at working voltage while the radioactive source was active and the other layers in standby. In this way the non-aged layers are used as a reference and as a precise, unbiased telescope of muon tracks for computing the efficiency of the aged layers of the chamber when these are set at working voltage for measurements. An integrated dose equivalent to two times the expected integrated luminosity of the HL-LHC run has been absorbed by this second spare DT chamber, and the final impact on the muon reconstruction efficiency is under study. Direct inspection of some extracted aged anode wires revealed a melted resistive deposit of material. Investigations of the outgassing of cell materials and of the gas components used at GIF++ are underway. Strategies to mitigate the ageing effects are also being developed. From the long irradiation measurements of the second spare DT chamber, the effects of radiation on the performance of the DTs expected during the HL-LHC run will be presented.
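The efficiency measurement described above amounts to simple bookkeeping: tracks reconstructed with the non-irradiated reference layers predict a crossing point in the layer under test, and the efficiency is the fraction of predictions with a matched hit. A minimal sketch with purely illustrative counts (not measured values):

// Hedged sketch of the hit-efficiency bookkeeping: "expected" counts tracks
// extrapolated from the reference layers into the aged layer, "found" counts
// those with a matched hit there. Numbers are placeholders for illustration.
#include <cmath>
#include <cstdio>

int main() {
    const double expected = 50000.0;  // tracks extrapolated into the aged layer
    const double found    = 46500.0;  // tracks with a matched hit in that layer

    const double eff = found / expected;
    const double err = std::sqrt(eff * (1.0 - eff) / expected);  // binomial error
    std::printf("layer efficiency = %.3f +/- %.3f\n", eff, err);
}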
DOI: 10.1088/1742-6596/331/7/072061
2011
Interactive Analysis using PROOF in a GRID Infrastructure
Current high energy physics experiments aim to explore new territories where new physics is expected. In order to achieve that, a huge amount of data has to be collected and analyzed. The accomplishment of these scientific projects requires computing resources beyond the capabilities of a single user or group, so the data are processed on a grid infrastructure. Despite the reduction applied to the data, the sample used in the last step of the analysis is still large. At this phase, interactivity contributes to a faster optimization of the final cuts in order to improve the results. The Parallel ROOT Facility (PROOF) is intended to speed up this procedure even further, providing the user with analysis results in a shorter time by simultaneously using more cores. Taking advantage of the computing resources and facilities available at the Instituto de Física de Cantabria (IFCA), shared between two major projects, the LHC-CMS Tier-2 and GRID-CSIC, we have developed a setup that integrates PROOF with SGE as local resource management system and GPFS as file system, both common to the grid infrastructure. The setup was also integrated in a similar infrastructure for the LHC-CMS Tier-3 at Universidad de Oviedo, which uses Torque (PBS) as local job manager and Hadoop as file system. In addition, to ease the transition from a sequential analysis code to PROOF, an analysis framework based on the TSelector class is provided. Integrating PROOF in a cluster gives users potential access to thousands of cores (1,680 in the IFCA case). Performance measurements have been done showing a speed improvement closely correlated with the number of cores used.
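The framework mentioned above builds on ROOT's TSelector interface. The following simplified skeleton, with a hypothetical selector name and placeholder analysis content, shows the methods PROOF drives on the client and on the workers; it is a sketch of the general pattern, not the framework's actual classes.

// PROOF calls SlaveBegin once per worker, Process for every event assigned to
// that worker, and Terminate on the client after merging the output list.
#include "TH1F.h"
#include "TSelector.h"

class ExampleSelector : public TSelector {      // hypothetical analysis selector
    TH1F* fHist = nullptr;
public:
    Int_t  Version() const override { return 2; }   // use the entry-based Process()
    void   SlaveBegin(TTree*) override {
        fHist = new TH1F("nhits", "hits per event", 100, 0, 100);
        fOutput->Add(fHist);                    // fOutput is merged by PROOF
    }
    Bool_t Process(Long64_t entry) override {
        // Read the branches needed for this entry and apply the event selection.
        fHist->Fill(static_cast<float>(entry % 100));   // placeholder content
        return kTRUE;
    }
    void   Terminate() override { /* fit, plot or save the merged histograms */ }

    ClassDef(ExampleSelector, 0);               // ROOT dictionary hook for ACLiC
};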
DOI: 10.1088/1742-6596/219/7/072027
2010
The CMS experiment workflows on StoRM based storage at Tier-1 and Tier-2 centers
Approaching LHC data taking, the CMS experiment is deploying, commissioning and operating the building blocks of its grid-based computing infrastructure. The commissioning program includes testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Recently, some of the Tier-1 and Tier-2 centers supporting the collaboration have started to deploy StoRM based storage systems. These are POSIX-based disk storage systems on top of which StoRM implements the Storage Resource Manager (SRM) version 2 interface, allowing for standard-based access from the Grid. In these notes we briefly describe the experience gained so far at the CNAF Tier-1 center and at the IFCA Tier-2 center.
DOI: 10.1088/1742-6596/396/2/022017
2012
A PROOF Analysis Framework
The analysis of the complex LHC data usually follows a standard path that aims at minimizing not only the amount of data but also the number of observables used. After a number of steps of slimming and skimming the data, the remaining few terabytes of ROOT files hold a selection of the events and a flat structure for the variables needed, which can be more easily inspected and traversed in the final stages of the analysis. PROOF arises at this point as an efficient mechanism to distribute the analysis load by taking advantage of all the cores in modern CPUs through PROOF Lite, or by using PROOF Cluster or PROOF on Demand tools to build dynamic PROOF clusters on computing facilities with spare CPUs. However, using PROOF at the level required for a serious analysis introduces some difficulties that may deter new adopters. We have developed the PROOF Analysis Framework (PAF) to facilitate the development of new analyses by uniformly exposing the PROOF-related configuration across technologies and by taking care of the routine tasks as much as possible. We describe the details of the PAF implementation as well as how we succeeded in engaging a group of CMS physicists to use PAF as their daily analysis framework.
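As a hedged illustration of what "uniformly exposing the PROOF-related configuration across technologies" can mean for the user, the sketch below selects between PROOF-Lite, a static cluster and a PoD session through a single mode switch while the analysis code stays unchanged; none of the names below are the real PAF API.

// Hypothetical mode switch: only the connection string differs between the
// PROOF deployments, so the same analysis can run in any of them.
#include <stdexcept>
#include <string>
#include "TProof.h"

enum class ExecMode { Sequential, ProofLite, ProofCluster, ProofOnDemand };

std::string connectionString(ExecMode mode) {
    switch (mode) {
        case ExecMode::ProofLite:     return "lite://";
        case ExecMode::ProofCluster:  return "proof://static-cluster.example";   // placeholder
        case ExecMode::ProofOnDemand: return "user@pod-headnode.example";        // placeholder
        default: throw std::logic_error("no PROOF session in sequential mode");
    }
}

void openSession(ExecMode mode) {
    if (mode == ExecMode::Sequential) return;   // plain local TTree loop instead
    TProof::Open(connectionString(mode).c_str());
}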
DOI: 10.48550/arxiv.physics/0306025
2003
ALICE experience with GEANT4
Since its release in 1999, the LHC experiments have been evaluating GEANT4 in view of adopting it as a replacement for the obsolescent GEANT3 transport Monte Carlo. The ALICE collaboration has decided to perform a detailed physics validation of elementary hadronic processes against experimental data already used in international benchmarks. In one test, proton interactions on different nuclear targets have been simulated, and the distribution of outgoing particles has been compared to data. In a second test, the penetration of quasi-monoenergetic low energy neutrons through a thick shielding has been simulated and again compared to experimental data. In parallel, an effort has been put into the integration of GEANT4 in the AliRoot framework. An overview of the present status of the ALICE GEANT4 simulation and the remaining problems will be presented. This document describes in detail the results of these tests, together with the improvements that the GEANT4 team has made to the program as a result of the feedback received from the ALICE collaboration. We also describe the remaining problems that have been communicated to GEANT4 but not yet addressed.
DOI: 10.1016/j.radphyschem.2020.108747
2020
Irradiation aging of the CMS Drift Tube muon detector
During the High Luminosity LHC, the Drift Tube chambers installed in the CMS detector will need to operate with an integrated dose ten times higher than expected at the LHC, due to the increase in integrated luminosity from 300 fb-1 to 3000 fb-1. Irradiations have been performed to assess the performance of the detector under such conditions and to characterize the radiation aging of the detector. The presented analysis focuses on the behaviour of the high voltage currents and the dose measurements needed to extrapolate the results to High Luminosity conditions, using data from the photon irradiation campaign at GIF++ in 2016 as well as the efficiency analysis from the irradiation campaign started in 2017. Although the observed single-wire gain loss of 70% is very high, the muon reconstruction efficiency is expected to decrease by less than 20% during the full duration of the High Luminosity LHC in the areas under highest irradiation.
DOI: 10.1088/1742-6596/664/3/032009
2015
PROOF Analysis Framework (PAF)
The PROOF Analysis Framework (PAF) has been designed to improve the ability of the physicist to develop software for the final stages of an analysis, where typically simple ROOT Trees are used and where the amount of data involved is on the order of several terabytes. It hides the technicalities of dealing with PROOF, leaving the scientist to concentrate on the analysis. PAF is capable of using available non-specific resources on, for example, local batch systems, remote grid sites or clouds through the integration of other toolkits like PROOF Cluster or PoD. While it has been successfully used on LHC Run-1 data for some key analyses, including the H → WW dilepton channel, the higher instantaneous and integrated luminosity together with the increase of the center-of-mass energy foreseen for LHC Run-2, which will increase the total size of the samples by a factor of 6 to 20, will require PAF to improve its scalability and to reduce latencies as much as possible. In this paper we address the possible problems of processing such big data volumes with PAF and the solutions implemented to overcome them. We also show the improvements made to render PAF more modular and accessible to other communities.
DOI: 10.1088/1742-6596/331/7/072015
2011
Using widgets to monitor the LHC experiments
The complexity of the LHC experiments requires monitoring systems to verify the correct functioning of the different sub-systems and to allow operators to quickly spot problems and issues that may cause loss of information and data. Due to the distributed nature of the collaborations and the different technologies involved, the data that need to be correlated are usually spread over several databases, web pages and monitoring systems. On the other hand, although the complete set of monitorable aspects is known and fixed, the subset that each person needs to monitor is often different for each individual. Therefore, building a unique monitoring tool that suits every single collaborator becomes close to impossible. A modular approach with a set of customizable widgets, small autonomous portions of HTML and JavaScript that can be aggregated to form private or public monitoring web pages, can be a scalable and robust solution, where the information can be provided by a simple and thin set of web services. Among the different widget development toolkits available today, we have chosen the open project UWA (Universal Widget API) because of its portability to the most popular widget platforms (including iGoogle, Netvibes and Apple Dashboard). As an example, we show how this technology is currently being used to monitor parts of the CMS Computing project.
DOI: 10.1088/1742-6596/219/5/052003
2010
Operational experience with CMS Tier-2 sites
In the CMS computing model, more than one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the primary resource for the production of large simulation samples for general use in the experiment. As a result, Tier-2 sites have an interesting mix of organized experiment-controlled activities and chaotic user-controlled activities. CMS currently operates about 40 Tier-2 sites in 22 countries, making the sites a far-flung computational and social network. We describe our operational experience with the sites, touching on our achievements, the lessons learned, and the challenges for the future.
DOI: 10.1088/1742-6596/119/7/072008
2008
A software and computing prototype for CMS muon system alignment
A precise alignment of the Muon System is one of the requirements for CMS to reach its expected performance and cover its physics program. A first prototype of the software and computing tools to achieve this goal was successfully tested during CSA06, the Computing, Software and Analysis Challenge in 2006. Data were exported from the Tier-0 to Tier-1 and Tier-2 centers, where the alignment software was run. Re-reconstruction with new geometry files was also performed at remote sites. The software has also been validated on cosmic data taken during the MTCC in 2006.
DOI: 10.1088/1742-6596/119/5/052008
2008
Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites
An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented.
2018
Big data en el LHC
Since the first proton beams began circulating in CERN's LHC in 2009, the four large detectors located at the collision points have collected a huge volume of data, producing an average of 10 PetaBytes per year. These data must be accessible to all members of the different experiments so that they can be analysed in the shortest possible time. This is clearly what is nowadays known as a big data problem. To solve it, a Computing Model was designed that has evolved over the years, adapting both to the needs and to the available resources for data storage, processing and access. In addition, we address the improvements in the methodologies used in the analyses, aimed at the efficient extraction of knowledge.
DOI: 10.1109/nss/mic42101.2019.9059698
2019
Study of the Effects of Radiation at the CERN Gamma Irradiation Facility on the CMS Drift Tube Muon Detector for HL-LHC
To sustain and extend its discovery potential, the Large Hadron Collider (LHC) will undergo a major upgrade in the coming years, referred to as the High Luminosity LHC (HL-LHC), aimed at increasing its instantaneous luminosity to 5 times the design limit and, consequently, leading to higher levels of radiation, with the goal of collecting 10 times the originally designed integrated luminosity. The drift tube (DT) chambers of the CMS muon detector system are built to proficiently measure and trigger on muons in the harsh radiation environment expected during the HL-LHC era. Ageing studies are performed at CERN's gamma ray irradiation facility (GIF++) by measuring the muon hit efficiency of these detectors under various LHC operation conditions. One such irradiation campaign started in October 2017, when a spare MB2 chamber was moved inside the bunker and irradiated at lower acceleration factors. Two out of twelve layers of the DT chamber were operated while being irradiated with the radioactive source, and their muon hit efficiency was then calculated in coincidence with the other ten layers, which were kept on standby. The chamber absorbed an integrated dose equivalent to two times the expected integrated luminosity of the HL-LHC. Investigations of the outgassing of cell materials and of the gas components used at GIF++ are underway, and strategies to mitigate the ageing effects are also being developed. The effect of radiation on the performance of the DT chamber and its impact on the overall muon reconstruction efficiency expected during the HL-LHC are presented.