
Riccardo Di Maria

Here are all the papers by Riccardo Di Maria that you can download and read on OA.mg.

DOI: 10.5327/1516-3180.141suppl1
2023
Cited 7 times
São Paulo Medical Journal
DOI: 10.1051/epjconf/201921407027
2019
Cited 8 times
Exploiting private and commercial clouds to generate on-demand CMS computing facilities with DODAS
Minimising time and cost is key to exploiting private or commercial clouds. This can be achieved by increasing setup and operational efficiency. Success and sustainability are thus obtained by reducing the learning curve, as well as the operational cost of managing community-specific services running on distributed environments. The main beneficiaries of this approach are communities willing to exploit opportunistic cloud resources. DODAS builds on several EOSC-hub services developed by the INDIGO-DataCloud project and allows on-demand, container-based clusters to be instantiated. These execute software applications to benefit from potentially “any cloud provider”, generating sites on demand with almost zero effort. DODAS provides ready-to-use solutions to implement a “Batch System as a Service” as well as a BigData platform for “Machine Learning as a Service”, offering a high level of customization to integrate specific scenarios. A description of the DODAS architecture will be given, including the CMS integration strategy adopted to connect it with the experiment’s HTCondor Global Pool. Performance and scalability results of DODAS-generated tiers processing real CMS analysis jobs will be presented. The Instituto de Física de Cantabria and Imperial College London use cases will be sketched. Finally, a high-level strategy for optimizing data ingestion in DODAS will be described.
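As an illustration of the "on-demand cluster" idea described in the abstract, the sketch below submits a cluster description to a PaaS orchestrator over REST. It is a minimal sketch only: the orchestrator URL, template file, token handling, and response field are hypothetical placeholders, not the actual DODAS or INDIGO PaaS interfaces.

```python
# Illustrative sketch only: submit a TOSCA-like template describing a
# container-based cluster to a PaaS orchestrator REST endpoint.
# The endpoint URL, template path, token, and "uuid" response field are
# hypothetical placeholders, not the real DODAS API.
import requests

ORCHESTRATOR_URL = "https://orchestrator.example.org/deployments"  # hypothetical
IAM_TOKEN = "..."  # an OAuth2/OIDC access token obtained out of band


def request_cluster(template_path: str, parameters: dict) -> str:
    """Ask the orchestrator to deploy a cluster and return its deployment id."""
    with open(template_path) as f:
        template = f.read()

    resp = requests.post(
        ORCHESTRATOR_URL,
        json={"template": template, "parameters": parameters},
        headers={"Authorization": f"Bearer {IAM_TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["uuid"]  # assumed response field name


if __name__ == "__main__":
    dep_id = request_cluster(
        "htcondor_cluster.yaml",                        # hypothetical template
        {"number_of_workers": 10, "flavor": "m1.large"},
    )
    print("deployment submitted:", dep_id)
```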
DOI: 10.1088/1742-6596/762/1/012013
2016
Elastic Extension of a CMS Computing Centre Resources on External Clouds
After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. Usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS, along the lines followed by other LHC experiments, is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run-II.
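To make the backlog argument concrete, the toy calculation below compares how long a reconstruction backlog takes to drain with on-site capacity alone versus with a temporary cloud extension. All numbers are hypothetical placeholders, not CMS figures.

```python
# Back-of-the-envelope sketch: drain time of a reconstruction backlog with
# on-site slots only versus with a temporary cloud extension.
# All numbers below are hypothetical placeholders.

def drain_time_hours(backlog_jobs: int, slots: int, hours_per_job: float) -> float:
    """Time to clear the backlog assuming full, steady slot occupancy."""
    return backlog_jobs * hours_per_job / slots


backlog = 50_000      # queued reconstruction jobs after a usage peak
site_slots = 4_000    # pledged on-site job slots
cloud_slots = 2_000   # opportunistically provisioned cloud slots
job_length = 2.0      # wall-clock hours per job

print(f"on-site only: {drain_time_hours(backlog, site_slots, job_length):.1f} h")
print(f"with cloud  : {drain_time_hours(backlog, site_slots + cloud_slots, job_length):.1f} h")
```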
DOI: 10.1051/epjconf/202125102056
2021
ESCAPE Data Lake
The European-funded ESCAPE project (Horizon 2020) aims to address computing challenges in the context of the European Open Science Cloud. The project targets Particle Physics and Astronomy facilities and research infrastructures, focusing on the development of solutions to handle Exabyte-scale datasets. The science projects in ESCAPE are in different phases of evolution and present a variety of specific use cases and challenges to be addressed. This contribution describes the shared-ecosystem architecture of services, the Data Lake, fulfilling the needs of the ESCAPE community in terms of data organisation, management, and access. The Pilot Data Lake consists of several storage services operated by the partner institutes and connected through reliable networks, and it adopts Rucio to orchestrate data management and organisation. The results of a 24-hour Full Dress Rehearsal are also presented, highlighting the achievements of the Data Lake model and of the ESCAPE sciences.
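A minimal sketch of the kind of data orchestration the abstract attributes to Rucio, assuming an installed and configured Rucio client (authentication set up out of band); the scope, dataset name, and RSE expression are hypothetical placeholders.

```python
# Minimal sketch, assuming a configured Rucio client (rucio.cfg plus X.509 or
# token credentials). Scope, dataset name, and RSE expression are placeholders.
from rucio.client import Client

client = Client()

# Ask Rucio to keep two replicas of a dataset on storage elements matching an
# RSE expression; Rucio then orchestrates the required transfers.
rule_ids = client.add_replication_rule(
    dids=[{"scope": "escape_cms", "name": "example.dataset"}],
    copies=2,
    rse_expression="tier=1&type=DISK",   # hypothetical RSE tags
    lifetime=7 * 24 * 3600,              # seconds; rule expires after a week
)
print("created rules:", rule_ids)

# Inspect where the files of the dataset are currently replicated.
for rep in client.list_replicas(dids=[{"scope": "escape_cms", "name": "example.dataset"}]):
    print(rep["name"], list(rep["rses"].keys()))
```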
2011
Interpolation inequalities for weak solutions of nonlinear parabolic systems
DOI: 10.1088/1742-6596/898/5/052024
2017
Elastic extension of a local analysis facility on external clouds for the LHC experiments
The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. Usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present the proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, on an external OpenStack infrastructure, for the LHC experiments hosted at the site. We focus on the Cloud Bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on an OpenStack infrastructure are transparently accessed by the LHC Grid tools while at the same time serving as an extension of the farm for local usage.
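A rough sketch of the cloud-bursting control loop described above, using the openstacksdk library: when the local queue backs up, a worker VM is booted whose contextualisation (cloud-init user data) registers it to the batch system at startup. The cloud entry, image, flavor, network, user-data file, and the queue probe are hypothetical placeholders; this is not the DynFarm implementation.

```python
# Illustrative cloud-bursting loop: boot an OpenStack worker VM when the local
# batch queue backs up. Registration to the batch system is delegated to the
# VM's cloud-init user data. Names below are hypothetical placeholders.
import openstack

PENDING_THRESHOLD = 100   # burst when more jobs than this are waiting


def pending_jobs() -> int:
    """Placeholder probe: in practice this would query the batch system
    (e.g. parse the pending-job count from its CLI)."""
    return 250  # hard-coded value for illustration only


def boot_worker(conn, index: int):
    with open("register_worker.cloud-init") as f:   # hypothetical user-data file
        userdata = f.read()
    return conn.create_server(
        name=f"cloud-wn-{index:03d}",
        image="wn-base-image",       # hypothetical image name
        flavor="m1.xlarge",          # hypothetical flavor
        network="batch-net",         # hypothetical network
        userdata=userdata,           # registers the node to the batch system
        wait=True,
    )


if __name__ == "__main__":
    conn = openstack.connect(cloud="external-cloud")   # entry in clouds.yaml
    if pending_jobs() > PENDING_THRESHOLD:
        server = boot_worker(conn, index=1)
        print("worker booted:", server.name, server.id)
```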
DOI: 10.22323/1.270.0023
2017
Elastic Computing from Grid sites to External Clouds
LHC experiments are now in Run-II data taking and approaching new challenges in the operation of the computing facilities in future Runs. Despite having demonstrated the ability to sustain operations at scale during Run-I, it has become evident that the computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded, and not with peak usage. Peaks are frequent, may create large backlogs, and have a direct impact on data reconstruction completion times, hence on data availability for physics analysis. Among others, the CMS experiment has been exploring (since the first Long Shutdown period after Run-I) the access and utilisation of Cloud resources provided by external partners or commercial providers. In this work we present proofs of concept of the elastic extension of a CMS Tier-3 site in Bologna (Italy) on an external OpenStack infrastructure. We start by presenting the experience of a first work on the "Cloud Bursting" of a CMS Grid site using a novel LSF configuration to dynamically register new worker nodes. We then move to a more recent work on a "Cloud Site as-a-Service" prototype, based on more direct access to, and integration of, OpenStack resources in the CMS workload management system. Results with real CMS workflows and future plans are also presented and discussed.
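Once cloud worker nodes are provisioned, one way to check that they have actually joined the experiment's HTCondor pool is to query the collector, as in the sketch below (HTCondor Python bindings). The collector host and the machine-name pattern are hypothetical placeholders, not the real CMS Global Pool settings.

```python
# Minimal sketch using the HTCondor Python bindings: list execute slots whose
# machine name matches the cloud worker naming scheme, to confirm that
# dynamically provisioned nodes joined the pool. Host and pattern are
# hypothetical placeholders.
import htcondor

collector = htcondor.Collector("collector.example.org")   # hypothetical host

ads = collector.query(
    htcondor.AdTypes.Startd,
    constraint='regexp("cloud-wn-", Machine)',
    projection=["Machine", "State", "Activity", "Cpus", "Memory"],
)

for ad in ads:
    print(ad["Machine"], ad["State"], ad["Activity"], ad["Cpus"], ad["Memory"])
print(f"{len(ads)} cloud slots visible in the pool")
```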
DOI: 10.5281/zenodo.6077556
2022
Astronomical data organization, management and access in Scientific Data Lakes
DOI: 10.21747/978-989-9082-12-0/pan
2022
Panoramas
DOI: 10.22323/1.321.0057
2018
Higgs-to-Invisible Searches with the CMS experiment at the LHC
Searches for invisible decays of the Higgs boson are presented using data at a centre-of-mass energy of $\sqrt{s} = 13~\text{TeV}$ collected with the CMS detector at the LHC in 2016. The dataset corresponds to an integrated luminosity of $35.9~\text{fb}^{-1}$. The search channel targets Higgs boson production via vector boson fusion. The results are presented in terms of an upper limit on the branching fraction of the Higgs boson decay to invisible particles, $\mathcal{B}(H\rightarrow \text{inv.})$. The data are in agreement with the contribution of backgrounds from standard model processes. An observed (expected) upper limit on $\mathcal{B}(H\rightarrow \text{inv.})$ at 95\% confidence level is set at 0.53 (0.27) for the cut-and-count approach and at 0.28 (0.21) for the shape-based approach. A combination with other relevant analyses, to further improve the sensitivity to $\mathcal{B}(H\rightarrow \text{inv.})$, is also presented using the 2016 dataset. An observed (expected) upper limit on $\mathcal{B}(H\rightarrow \text{inv.})$ at 95\% confidence level is set at 0.24 (0.18), assuming standard model production rates. This result represents the most sensitive Higgs-to-invisible search, and it is interpreted in the context of Higgs-portal models for dark matter, where upper bounds are placed on the spin-independent dark matter-nucleon cross-section.
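For readers unfamiliar with the limit-setting language, the toy below illustrates what a 95% confidence-level upper limit means in the simplest single-bin counting case. It is not the CMS procedure, which relies on a profile-likelihood (CLs) fit over many bins with systematic uncertainties; the observed count and background expectation are hypothetical.

```python
# Toy illustration of a 95% CL upper limit in a single-bin counting experiment.
# NOT the CMS method (profile-likelihood / CLs fit over many bins with
# systematics); it only illustrates the concept. Numbers are hypothetical.
from scipy.stats import poisson
from scipy.optimize import brentq

n_obs = 95      # hypothetical observed events
b_exp = 90.0    # hypothetical expected background


def p_value(s: float) -> float:
    """Probability of observing <= n_obs events given signal s plus background."""
    return poisson.cdf(n_obs, s + b_exp)


# 95% CL upper limit on the signal yield: the s at which p_value drops to 5%.
s_up = brentq(lambda s: p_value(s) - 0.05, 0.0, 200.0)
print(f"95% CL upper limit on signal yield: {s_up:.1f} events")

# Translating this into a branching-fraction limit would divide by the product
# of signal efficiency, acceptance, cross-section, and integrated luminosity.
```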
DOI: 10.25560/73920
2019
Searches for invisibly decaying Higgs bosons produced through vector boson fusion at 13 TeV and cloud computing for high energy physics with the Compact Muon Solenoid experiment
DOI: 10.1051/epjconf/202125102030
2021
Experience with Rucio in the wider HEP community
Managing the data of scientific projects is an increasingly complicated challenge, which was historically met by developing experiment-specific solutions. However, the ever-growing data rates and requirements of even small experiments make this approach very difficult, if not prohibitive. In recent years, the scientific data management system Rucio has evolved into a successful open-source project that is now being used by many scientific communities and organisations. Rucio is incorporating the contributions and expertise of many scientific projects and is offering common features useful to a diverse research community. This article describes the recent experiences in operating Rucio, as well as contributions to the project, by ATLAS, Belle II, CMS, ESCAPE, IGWN, LDMX, Folding@Home, and the UK’s Science and Technology Facilities Council (STFC).
DOI: 10.1051/epjconf/202125102060
2021
The ESCAPE Data Lake: The machinery behind testing, monitoring and supporting a unified federated storage infrastructure of the exabyte-scale
The EU-funded ESCAPE project aims at enabling a prototype federated storage infrastructure, a Data Lake, that would handle data on the exabyte scale, address the FAIR data management principles, and provide science projects with a unified, scalable data management solution for accessing and analyzing large volumes of scientific data. In this respect, data transfer and management technologies such as Rucio, FTS and GFAL are employed, along with monitoring solutions such as Grafana, Elasticsearch and perfSONAR. This paper presents and describes the technical details behind the machinery of testing and monitoring of the Data Lake; this includes continuous automated functional testing, network monitoring, and the development of insightful visualizations that reflect the current state of the system. Topics that are also addressed include the integration with the CRIC information system as well as the initial support for token-based authentication/authorization using OpenID Connect. The current architecture of these components is provided and future enhancements are discussed.
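A minimal sketch of one functional-test probe of the kind described above: copy a small test file between two storage endpoints with gfal-copy and record the outcome in Elasticsearch for the dashboards. The endpoint URLs, index name, and Elasticsearch host are hypothetical placeholders, and the elasticsearch-py 8.x client is assumed.

```python
# Sketch of a single functional-test probe: run gfal-copy between two storage
# endpoints and index the result in Elasticsearch for monitoring dashboards.
# URLs, index name, and Elasticsearch host are hypothetical placeholders.
import subprocess
import time
from elasticsearch import Elasticsearch  # assumes elasticsearch-py 8.x

SRC = "root://source-se.example.org//escape/testfile"       # hypothetical
DST = "root://dest-se.example.org//escape/testfile.copy"    # hypothetical

start = time.time()
result = subprocess.run(
    ["gfal-copy", "--force", SRC, DST],   # --force overwrites the destination
    capture_output=True,
    text=True,
    timeout=300,
)
elapsed = time.time() - start

doc = {
    "timestamp": int(start),
    "src": SRC,
    "dst": DST,
    "duration_s": round(elapsed, 2),
    "success": result.returncode == 0,
    "stderr": result.stderr[-500:],       # keep only the tail for brevity
}

es = Elasticsearch("https://monitoring.example.org:9200")   # hypothetical host
es.index(index="datalake-functional-tests", document=doc)
print("test result recorded:", doc["success"])
```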
2001
Usage of energy-dispersive analysis in studying rock melts
1999
Modelling of heat flow in the Earth's crust