
David Colling

Here are the papers by David Colling that you can download and read on OA.mg.

DOI: 10.1007/s41781-018-0018-8
2019
Cited 121 times
A Roadmap for HEP Software and Computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
DOI: 10.1016/j.nima.2019.163047
2020
Cited 113 times
The LUX-ZEPLIN (LZ) experiment
We describe the design and assembly of the LUX-ZEPLIN experiment, a direct detection search for cosmic WIMP dark matter particles. The centerpiece of the experiment is a large liquid xenon time projection chamber sensitive to low energy nuclear recoils. Rejection of backgrounds is enhanced by a Xe skin veto detector and by a liquid scintillator Outer Detector loaded with gadolinium for efficient neutron capture and tagging. LZ is located in the Davis Cavern at the 4850' level of the Sanford Underground Research Facility in Lead, South Dakota, USA. We describe the major subsystems of the experiment and its key design features and requirements.
DOI: 10.1140/epjc/s10052-020-8420-x
2020
Cited 44 times
The LUX-ZEPLIN (LZ) radioactivity and cleanliness control programs
Abstract LUX-ZEPLIN (LZ) is a second-generation direct dark matter experiment with spin-independent WIMP-nucleon scattering sensitivity above $1.4 \times 10^{-48}\,\mathrm{cm}^{2}$ for a WIMP mass of $40\,\mathrm{GeV}/c^{2}$ and a 1000-day exposure. LZ achieves this sensitivity through a combination of a large $5.6\,\mathrm{t}$ fiducial volume, active inner and outer veto systems, and radio-pure construction using materials with inherently low radioactivity content. The LZ collaboration performed an extensive radioassay campaign over a period of six years to inform material selection for construction and provide an input to the experimental background model against which any possible signal excess may be evaluated. The campaign and its results are described in this paper. We present assays of dust and radon daughters depositing on the surface of components as well as cleanliness controls necessary to maintain background expectations through detector construction and assembly. Finally, examples from the campaign to highlight fixed contaminant radioassays for the LZ photomultiplier tubes, quality control and quality assurance procedures through fabrication, radon emanation measurements of major sub-systems, and bespoke detector systems to assay scintillator are presented.
DOI: 10.1140/epjc/s10052-011-1722-2
2011
Cited 68 times
Supersymmetry and dark matter in light of LHC 2010 and XENON100 data
We make frequentist analyses of the CMSSM, NUHM1, VCMSSM and mSUGRA parameter spaces taking into account all the public results of searches for supersymmetry using data from the 2010 LHC run and the XENON100 direct search for dark matter scattering. The LHC data set includes ATLAS and CMS searches for $\mathrm{jets} + {\not}E_{T}$ events (with or without leptons) and for the heavier MSSM Higgs bosons, and the upper limit on $\mathrm{BR}(B_s \to \mu^+\mu^-)$ including data from LHCb as well as CDF and DØ. The absence of signals in the LHC data favours somewhat heavier mass spectra than in our previous analyses of the CMSSM, NUHM1 and VCMSSM, and somewhat smaller dark matter scattering cross sections, all close to or within the pre-LHC 68% CL ranges, but does not impact significantly the favoured regions of the mSUGRA parameter space. We also discuss the impact of the XENON100 constraint on spin-independent dark matter scattering, stressing the importance of taking into account the uncertainty in the π-nucleon σ term $\Sigma_{\pi N}$, which affects the spin-independent scattering matrix element, and we make predictions for spin-dependent dark matter scattering. Finally, we discuss briefly the potential impact of the updated predictions for sparticle masses in the CMSSM, NUHM1, VCMSSM and mSUGRA on future $e^+e^-$ colliders.
DOI: 10.1098/rsta.2009.0036
2009
Cited 57 times
GridPP: the UK grid for particle physics
The start-up of the Large Hadron Collider (LHC) at CERN, Geneva, presents a huge challenge in processing and analysing the vast amounts of scientific data that will be produced. The architecture of the worldwide grid that will handle 15 PB of particle physics data annually from this machine is based on a hierarchical tiered structure. We describe the development of the UK component (GridPP) of this grid from a prototype system to a full exploitation grid for real data analysis. This includes the physical infrastructure, the deployment of middleware, operational experience and the initial exploitation by the major LHC experiments.
DOI: 10.1140/epjc/s10052-011-1583-8
2011
Cited 44 times
Frequentist analysis of the parameter space of minimal supergravity
We make a frequentist analysis of the parameter space of minimal supergravity (mSUGRA), in which, as well as the gaugino and scalar soft supersymmetry-breaking parameters being universal, there is a specific relation between the trilinear, bilinear and scalar supersymmetry-breaking parameters, $A_0 = B_0 + m_0$, and the gravitino mass is fixed by $m_{3/2} = m_0$. We also consider a more general model, in which the gravitino mass constraint is relaxed (the VCMSSM). We combine in the global likelihood function the experimental constraints from low-energy electroweak precision data, the anomalous magnetic moment of the muon, the lightest Higgs boson mass $M_h$, B physics and the astrophysical cold dark matter density, assuming that the lightest supersymmetric particle (LSP) is a neutralino. In the VCMSSM, we find a preference for values of $m_{1/2}$ and $m_0$ similar to those found previously in frequentist analyses of the constrained MSSM (CMSSM) and a model with common non-universal Higgs masses (NUHM1). On the other hand, in mSUGRA we find two preferred regions: one with larger values of both $m_{1/2}$ and $m_0$ than in the VCMSSM, and one with large $m_0$ but small $m_{1/2}$. We compare the probabilities of the frequentist fits in mSUGRA, the VCMSSM, the CMSSM and the NUHM1: the probability that mSUGRA is consistent with the present data is significantly less than in the other models. We also discuss the mSUGRA and VCMSSM predictions for sparticle masses and other observables, identifying potential signatures at the LHC and elsewhere.
DOI: 10.1103/physrevd.89.071301
2014
Cited 33 times
Light sterile neutrino sensitivity at the nuSTORM facility
A facility that can deliver beams of electron and muon neutrinos from the decay of a stored muon beam has the potential to unambiguously resolve the issue of the evidence for light sterile neutrinos that arises in short-baseline neutrino oscillation experiments and from estimates of the effective number of neutrino flavors from fits to cosmological data. In this paper, we show that the nuSTORM facility, with stored muons of $3.8\,\mathrm{GeV}/c \pm 10\%$, will be able to carry out a conclusive muon neutrino appearance search for sterile neutrinos and test the LSND and MiniBooNE experimental signals with $10\sigma$ sensitivity, even assuming conservative estimates for the systematic uncertainties. This experiment would add greatly to our knowledge of the contribution of light sterile neutrinos to the number of effective neutrino flavors from the abundance of primordial helium production and from constraints on neutrino energy density from the cosmic microwave background. The appearance search is complemented by a simultaneous muon neutrino disappearance analysis that will facilitate tests of various sterile neutrino models.
DOI: 10.1006/jmcc.2000.1230
2000
Cited 39 times
Immunogold-labeled L-type Calcium Channels are Clustered in the Surface Plasma Membrane Overlying Junctional Sarcoplasmic Reticulum in Guinea-pig Myocytes—Implications for Excitation–contraction Coupling in Cardiac Muscle
Ca(2+) release through ryanodine receptors, located in the membrane of the junctional sarcoplasmic reticulum (SR), initiates contraction of cardiac muscle. Ca(2+) influx through plasma membrane L-type Ca(2+) channels is thought to be an important trigger for opening ryanodine receptors ("Ca(2+)-induced Ca(2+) release"). Optimal transmission of the transmembrane Ca(2+) influx signal to SR release is predicted to involve spatial juxtaposition of L-type Ca(2+) channels to the ryanodine receptors of the junctional SR. Although such spatial coupling has often been implicitly assumed, and data from immunofluorescence microscopy are consistent with its existence, the definitive demonstration of such a structural organization in mammalian tissue is lacking at the electron-microscopic level. To determine the spatial distribution of plasma membrane L-type Ca(2+) channels and their location in relation to underlying junctional SR, we applied two high-resolution immunogold-labeling techniques, label-fracture and cryothin-sectioning, combined with quantitative analysis, to guinea-pig ventricular myocytes. Label-fracture enabled visualization of colloidal gold-labeled L-type Ca(2+) channels in planar freeze-fracture electron-microscopic views of the plasma membrane. Mathematical analysis of the gold label distribution (by the nearest-neighbor distance distribution and the radial distribution function) demonstrated genuine clustering of the labeled channels. Gold-labeled cryosections showed that labeled L-type Ca(2+) channels quantitatively predominated in domains of the plasma membrane overlying junctional SR. These findings provide an ultrastructural basis for functional coupling between L-type Ca(2+) channels and junctional SR and for excitation-contraction coupling in guinea-pig cardiac muscle.
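The clustering argument above rests on two standard point-pattern statistics, the nearest-neighbor distance distribution and the radial distribution function. The paper's analysis code is not reproduced here; the following is only a minimal sketch of how such statistics might be computed for 2D gold-label coordinates, with placeholder coordinates and no edge correction (both assumptions for illustration).

```python
import numpy as np

def nearest_neighbour_distances(points):
    """Distance from each point to its closest other point (points: N x 2 array)."""
    diff = points[:, None, :] - points[None, :, :]      # pairwise displacement vectors
    d = np.sqrt((diff ** 2).sum(-1))                     # pairwise distances
    np.fill_diagonal(d, np.inf)                          # ignore self-distances
    return d.min(axis=1)

def radial_distribution(points, area, r_edges):
    """Crude radial distribution function g(r) for a 2D point pattern (no edge correction)."""
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    d = d[np.triu_indices(n, k=1)]                       # unique pairs only
    counts, _ = np.histogram(d, bins=r_edges)
    r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
    shell_area = 2 * np.pi * r_mid * np.diff(r_edges)    # area of each annulus
    expected = 0.5 * n * (n - 1) * shell_area / area     # pair count for a random pattern
    return r_mid, counts / expected                      # g(r) > 1 indicates clustering

# Illustrative use with placeholder label coordinates (nm) on a 1000 x 1000 nm patch.
rng = np.random.default_rng(0)
labels = rng.uniform(0, 1000, size=(200, 2))
nnd = nearest_neighbour_distances(labels)
r, g = radial_distribution(labels, area=1000 * 1000, r_edges=np.linspace(10, 300, 30))
print(f"median nearest-neighbour distance: {np.median(nnd):.1f} nm")
```

In the paper's setting, the observed distributions would be compared against those expected for a uniform random pattern of the same density to establish genuine clustering.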
DOI: 10.1177/0270467611417852
2011
Cited 23 times
Wind Turbines Make Waves
People who live near wind turbines complain of symptoms that include some combination of the following: difficulty sleeping, fatigue, depression, irritability, aggressiveness, cognitive dysfunction, chest pain/pressure, headaches, joint pain, skin irritations, nausea, dizziness, tinnitus, and stress. These symptoms have been attributed to the pressure (sound) waves that wind turbines generate in the form of noise and infrasound. However, wind turbines also generate electromagnetic waves in the form of poor power quality (dirty electricity) and ground current, and these can adversely affect those who are electrically hypersensitive. Indeed, the symptoms mentioned above are consistent with electrohypersensitivity. Sensitivity to both sound and electromagnetic waves differs among individuals and may explain why not everyone in the same home experiences similar effects. Ways to mitigate the adverse health effects of wind turbines are presented.
DOI: 10.1007/978-3-642-04633-9_3
2009
Cited 14 times
Analyzing the EGEE Production Grid Workload: Application to Jobs Submission Optimization
Grid reliability remains an order of magnitude below that of clusters on production infrastructures. This work aims at improving grid application performance by improving the job submission system. A stochastic model, capturing the behavior of a complex grid workload management system, is proposed. To instantiate the model, detailed statistics are extracted from dense grid activity traces. The model is exploited in a simple job resubmission strategy. It provides quantitative inputs to improve job submission performance and it enables quantifying the impact of faults and outliers on grid operations.
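The abstract does not spell out the resubmission rule itself, so the following is only an illustrative sketch of the kind of timeout-and-resubmit policy such a model can be used to tune: jobs whose latency exceeds a cutoff estimated from previously observed latencies are cancelled and resubmitted. The cutoff-from-quantile choice and the synthetic latency distribution are assumptions, not the paper's fitted model.

```python
import random

def estimate_timeout(observed_latencies, quantile=0.95):
    """Pick a cancel-and-resubmit cutoff from previously observed job latencies."""
    ordered = sorted(observed_latencies)
    return ordered[int(quantile * (len(ordered) - 1))]

def run_with_resubmission(submit_once, timeout, max_attempts=5):
    """Resubmit a job until it completes within `timeout` or attempts run out.

    `submit_once()` stands in for the real submission call; it returns the job's
    latency in seconds, or None for a lost/faulty job (an outlier in the paper's terms).
    """
    for attempt in range(1, max_attempts + 1):
        latency = submit_once()
        if latency is not None and latency <= timeout:
            return attempt, latency
    return max_attempts, None  # gave up

# Toy usage with a synthetic latency distribution (purely illustrative).
def fake_submit():
    if random.random() < 0.05:           # 5% of jobs never return
        return None
    return random.expovariate(1 / 300)   # mean latency of 300 s

history = [fake_submit() or 10_000 for _ in range(1000)]
cutoff = estimate_timeout(history)
print("cutoff:", round(cutoff), "s; example run:", run_with_resubmission(fake_submit, cutoff))
```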
DOI: 10.1007/978-3-642-04633-9
2009
Cited 14 times
Job Scheduling Strategies for Parallel Processing
This book constitutes the revised papers of the 14th International Workshop on Job Scheduling Strategies for Parallel Processing, JSSPP 2009, which was held in Rome, Italy, in May 2009. The 15 revised
DOI: 10.1007/s10723-010-9151-2
2010
Cited 13 times
Optimization of Jobs Submission on the EGEE Production Grid: Modeling Faults Using Workload
It is commonly observed that production Grids are inherently unreliable. The aim of this work is to improve Grid application performance by tuning the job submission system. A stochastic model, capturing the behavior of a complex Grid workload management system, is proposed. To instantiate the model, detailed statistics are extracted from dense Grid activity traces. The model is exploited for optimizing a simple job resubmission strategy. It provides quantitative inputs to improve job submission performance and it enables the impact of faults and outliers on Grid operations to be quantified.
DOI: 10.1007/s10723-005-0150-7
2004
Cited 16 times
The DataGrid Workload Management System: Challenges and Results
DOI: 10.1145/1273360.1273362
2007
Cited 11 times
GRIDCC
The Grid is a concept which allows the sharing of resources between distributed communities, allowing each to progress towards potentially different goals. As adoption of the Grid increases, so do the activities that people wish to conduct through it. The GRIDCC project is a European Union funded project addressing the issues of integrating instruments into the Grid. This increases the need for workflows, and for Quality of Service guarantees upon those workflows, as many of these instruments have real-time requirements. In this paper we present the workflow management service within the GRIDCC project, which is tasked with optimising the workflows and ensuring that they meet the pre-defined QoS requirements specified upon them.
DOI: 10.48550/arxiv.cs/0306072
2003
Cited 12 times
The EU DataGrid Workload Management System: towards the second major release
In the first phase of the European DataGrid project, the 'workload management' package (WP1) implemented a working prototype, providing users with an environment in which to define and submit jobs to the Grid, and able to find and use the "best" resources for these jobs. Application users have now been working with this first release of the workload management system for about a year. The experience acquired, the feedback received from users, and the need to plug in new components implementing new functionalities triggered an update of the existing architecture. A description of this revised and complemented workload management system is given.
DOI: 10.1007/11758532_129
2006
Cited 10 times
Adding Instruments and Workflow Support to Existing Grid Architectures
Many Grid architectures have been developed in recent years. These range from the large community Grids such as LCG and EGEE to single site deployments such as Condor. However, these Grid architectures have tended to focus on the single or batch submission of executable jobs. Application scientists are now seeking to manage and use physical instrumentation on the Grid, integrating these instruments with the computational tasks they already perform. This will require the functionality of current Grid systems to be extended to allow the submission of entire workflows, thus allowing scientists to perform increasingly larger parts of their experiments within the Grid environment. We propose here a set of high level services which may be used on top of these existing Grid architectures, such that the benefits of these architectures may be exploited along with the new functionality of workflows.
DOI: 10.1007/978-0-387-09663-6_21
2009
Cited 8 times
On Quality of Service Support for Grid Computing
Computing Grids are hardware and software infrastructures that support secure sharing and concurrent access to distributed services by a large number of competing users from different virtual organizations. Concurrency can easily lead to overload and resource shortcomings in large-scale Grid infrastructures, as today’s Grids do not offer differentiated services. We propose a framework for supporting quality of service guarantees via both reservation and discovery of best-effort services based on the matchmaking of application requirements and quality of service performance profiles of the candidate services. We illustrate the middleware components needed to support both strict and loose guarantees and the performance assessment techniques for the discovery of suitable services.
DOI: 10.1098/rsta.2012.0073
2013
Cited 6 times
RAPPORT: running scientific high-performance computing applications on the cloud
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
DOI: 10.1088/1742-6596/664/6/062031
2015
Cited 6 times
Using the glideinWMS System as a Common Resource Provisioning Layer in CMS
CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program with more, and more complex, events. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centers were made available to CMS, which further complicated the operations of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer to grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources, the efficiency gains and simplifications associated with using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook of future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
2019
Cited 6 times
A roadmap for HEP software and computing R&D for the 2020s
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
DOI: 10.1088/1742-6596/219/6/062020
2010
Cited 6 times
Real Time Monitor of Grid job executions
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job related information and storing this in a local database. Job related data includes not only job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the client, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java based client, with live job data either being overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
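The decoupling described above (an enquirer periodically dumping the job database to a static XML snapshot so that clients never touch the database directly) is a simple pattern to sketch. The schema, file names and once-per-minute interval below mirror the abstract's description, but all identifiers are illustrative assumptions, not the RTM's actual code.

```python
import sqlite3
import time
import xml.etree.ElementTree as ET

DB_PATH = "rtm_jobs.db"        # illustrative stand-in for the RTM server's local database
XML_PATH = "rtm_snapshot.xml"  # static file served by the web server to the 2D/3D clients

def dump_jobs_to_xml(db_path=DB_PATH, xml_path=XML_PATH):
    """Read current job states from the database and write a static XML snapshot."""
    conn = sqlite3.connect(db_path)
    # In the real system the RTM server populates this table from the LB services.
    conn.execute("CREATE TABLE IF NOT EXISTS jobs "
                 "(job_id TEXT, state TEXT, vo TEXT, ce_queue TEXT, last_update TEXT)")
    rows = conn.execute(
        "SELECT job_id, state, vo, ce_queue, last_update FROM jobs").fetchall()
    conn.close()

    root = ET.Element("jobs", generated=time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()))
    for job_id, state, vo, ce_queue, last_update in rows:
        ET.SubElement(root, "job", id=str(job_id), state=state, vo=vo or "",
                      ce_queue=ce_queue or "", last_update=str(last_update))
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    # The enquirer runs this once a minute; here we produce a single snapshot.
    dump_jobs_to_xml()
```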
DOI: 10.1088/1742-6596/513/3/032019
2014
Cited 5 times
Using the CMS High Level Trigger as a Cloud Resource
The CMS High Level Trigger is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing.
DOI: 10.1109/cit.2007.43
2007
Cited 7 times
Enabling QoS for Service-Oriented Workflow on GRID
As the main computing paradigm for resource-intensive scientific applications, the Grid [1] enables resource sharing and dynamic allocation of computational resources. It promotes access to distributed data, operational flexibility and collaboration, and allows service providers to be distributed both conceptually and physically to meet different requirements. Large-scale grids are normally composed of huge numbers of components from different sites, which increases the need for workflows and for quality of service (QoS) guarantees upon these workflows, as many of these components have real-time requirements. In this paper, we describe our web services based QoS-aware workflow management system (WfMS) from the GridCC project [7] and show how this WfMS can help ensure that workflows meet the pre-defined QoS requirements and optimise them accordingly.
DOI: 10.1088/1742-6596/219/7/072007
2010
Cited 5 times
CMS analysis operations
During normal data taking CMS expects to support potentially as many as 2000 analysis users. Since the beginning of 2008 there have been more than 800 individuals who submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 CMS Tier-2 centres. Supporting a globally distributed community of users on a globally distributed set of computing clusters is a task that requires reconsidering the normal methods of user support for Analysis Operations. In 2008 CMS formed an Analysis Support Task Force in preparation for large-scale physics analysis activities. The charge of the task force was to evaluate the available support tools, the user support techniques, and the direct feedback of users with the goal of improving the success rate and user experience when utilizing the distributed computing environment. The task force determined the tools needed to assess and reduce the number of non-zero exit code applications submitted through the grid interfaces and worked with the CMS experiment dashboard developers to obtain the necessary information to quickly and proactively identify issues with user jobs and data sets hosted at various sites. Results of the analysis group surveys were compiled. Reference platforms for testing and debugging problems were established in various geographic regions. The task force also assessed the resources needed to make the transition to a permanent Analysis Operations task. In this presentation the results of the task force will be discussed as well as the CMS Analysis Operations plans for the start of data taking.
DOI: 10.48550/arxiv.cs/0306007
2003
Cited 8 times
The first deployment of workload management services on the EU DataGrid Testbed: feedback on design and implementation
Application users have now been working for about a year with the standardized resource brokering services provided by the 'workload management' package of the EU DataGrid project (WP1). Understanding, shaping and pushing the limits of the system has provided valuable feedback on both its design and implementation. A digest of the lessons, and "better practices", that were learned, and that were applied towards the second major release of the software, is given.
DOI: 10.1088/1742-6596/664/2/022012
2015
Cited 3 times
The Diverse use of Clouds by CMS
The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows including: Tier 0, production and user analysis. In addition, the CMS High Level Trigger will provide a compute resource comparable in scale to the total offered by the CMS Tier 1 sites, when it is not running as part of the trigger system. During these periods a cloud infrastructure will be overlaid on this resource, making it accessible for general CMS use. Finally, CMS is starting to utilise cloud resources being offered by individual institutes and is gaining experience to facilitate the use of opportunistically available cloud resources.
DOI: 10.1088/1742-6596/664/6/062036
2015
Cited 3 times
The GridPP DIRAC project - DIRAC for non-LHC communities
The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communities.
DOI: 10.1109/tns.2004.839094
2004
Cited 6 times
Scalability tests of R-GMA-based grid job monitoring system for CMS Monte Carlo data production
High-energy physics experiments, such as the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC), have large-scale data processing requirements. The Grid has been chosen as the solution. One important challenge when using the Grid for large-scale data processing is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. The Relational Grid Monitoring Architecture (R-GMA) is a monitoring and information management service for distributed resources based on the GMA of the Global Grid Forum. We report on the first measurements of R-GMA as part of a monitoring architecture to be used for batch submission of multiple Monte Carlo simulation jobs running on a CMS-specific LHC computing grid test bed. Monitoring information was transferred in real time from remote execution nodes back to the submitting host and stored in a database. In scalability tests, the job submission rates supported by successive releases of R-GMA improved significantly, approaching that expected in full-scale production.
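R-GMA presents monitoring data as if it were a single relational database: producers on the worker nodes publish tuples and an archiver on the submitting side collects and stores them for querying. The R-GMA client API itself is not reproduced here; this is only a generic sketch of that producer/archiver pattern, with an SQLite table standing in for the archiver's store and all names chosen for illustration.

```python
import sqlite3
import time

# Archiver side: a relational store of job-monitoring tuples (SQLite as a stand-in).
archive = sqlite3.connect("job_monitoring.db")
archive.execute(
    "CREATE TABLE IF NOT EXISTS job_status "
    "(job_id TEXT, site TEXT, state TEXT, timestamp REAL)"
)

def publish(job_id, site, state):
    """Producer side: what a worker-node wrapper would publish as a tuple."""
    archive.execute(
        "INSERT INTO job_status VALUES (?, ?, ?, ?)",
        (job_id, site, state, time.time()),
    )
    archive.commit()

def jobs_in_state(state):
    """Consumer side: the submitting host queries the archive like any relational table."""
    return archive.execute(
        "SELECT job_id, site FROM job_status WHERE state = ?", (state,)
    ).fetchall()

publish("job-001", "site-A", "Running")
publish("job-002", "site-B", "Done")
print(jobs_in_state("Running"))
```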
DOI: 10.1155/2007/516174
2007
Cited 5 times
GRIDCC: A Real-Time Grid Workflow System with QoS
The over-arching aim of Grid computing is to move computational resources from individual institutions where they can only be used for in-house work, to a more open vision of vast online ubiquitous `virtual computational' resources which support individuals and collaborative projects. A major step towards realizing this vision is the provision of instrumentation – such as telescopes, accelerators or electrical power stations – as Grid resources, and the tools to manage these resources online. The GRIDCC project attempts to satisfy these requirements by providing the following four co-dependent components: a flexible wrapper for publishing instruments as Grid resources; workflow support for the orchestration of multiple Grid resources in a timely manner; the machinery to make reservation agreements on Grid resources; and the facility to satisfy quality of service (QoS) requirements on elements within workflows. In this paper we detail the set of services developed as part of the GRIDCC project to provide the last three of these components. We provide a detailed architecture for these services along with experimental results from load testing experiments. These services are currently deployed as a test-bed at a number of institutions across Europe, and are poised to provide a 'virtual lab' to production-level applications.
2012
Cited 3 times
Kai Michael Töpfer, Signa Militaria. Die römischen Feldzeichen in der Republik und im Prinzipat. Mayence, RGZM, 2011
DOI: 10.1016/s0168-9002(98)00644-5
1998
Cited 9 times
CVD diamond film for neutron counting
A detector capable of measuring fast neutrons based on the proton recoil process has been developed. It consists of a 1 mm thick paraffin wax proton recoil radiator attached to a CVD diamond film charged particle detector. Its radiation hardness and low gamma sensitivity make diamond advantageous over other semiconductor materials as a neutron counter. Measurements have been carried out with an accelerator neutron source and it has been shown that fast 2.85 MeV neutrons can be detected with improved efficiency and good gamma discrimination. Possible applications of this detector are also discussed.
DOI: 10.1109/tns.2004.835905
2004
Cited 5 times
Using the grid for the BaBar experiment
The BaBar experiment has been taking data since 1999. In 2001 the computing group started to evaluate the possibility of evolving toward a distributed computing model in a grid environment. We built a prototype system, based on the European Data Grid (EDG), to submit full-scale analysis and Monte Carlo simulation jobs. Computing elements, storage elements, and worker nodes have been installed at SLAC and at various European sites. A BaBar virtual organization (VO) and a test replica catalog (RC) are maintained in Manchester, U.K., and the experiment is using three EDG testbed resource brokers in the U.K. and in Italy. First analysis tests were performed under the assumption that a standard BaBar software release was available at the grid target sites, using RC to register information about the executable and the produced n-tuples. Hundreds of analysis jobs accessing either Objectivity or Root data files ran on the grid. We tested the Monte Carlo production using a farm of the INFN-grid testbed customized to install an Objectivity database and run BaBar simulation software. First simulation production tests were performed using standard Job Description Language commands and the output files were written on the closest storage element. A package that can be officially distributed to grid sites not specifically customized for BaBar has been prepared. We are studying the possibility of adding a user-friendly interface to access grid services for BaBar.
DOI: 10.1088/1742-6596/664/5/052031
2015
Possibilities for Named Data Networking in HEP
Named Data Networking is a novel networking architecture which places emphasis on the naming and signing of data. Once so named, the location of data sources becomes less important and the emphasis moves from host-to-host transfers to pulling data from the network. Furthermore, data are cached en route to their destination. We believe this approach has interesting possibilities for High Energy Physics (HEP) and report on work we have done in this area including the development of a scalable repository, a ROOT plugin and a small local testbed, the submission of jobs to a grid cluster and some ideas on an authentication system for LHC VOs.
DOI: 10.48550/arxiv.cs/0306027
2003
Cited 5 times
HEP Applications Evaluation of the EDG Testbed and Middleware
Workpackage 8 of the European Datagrid project was formed in January 2001 with representatives from the four LHC experiments, and with experiment independent people from five of the six main EDG partners. In September 2002 WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of ATLAS and CMS in integrating the use of the EDG Testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid 2003.
DOI: 10.1088/1742-6596/513/4/042029
2014
Implementing the data preservation and open access policy in CMS
Implementation of the CMS policy on long-term data preservation, re-use and open access has started. Current practices in providing data additional to published papers and distributing simplified data samples for outreach are promoted and consolidated. The first measures have been taken for analysis and data preservation for the internal use of the collaboration and for open access to part of the data. Two complementary approaches are followed. First, a virtual machine environment that packs all ingredients needed to compile and run a software release with which the legacy data were reconstructed. Second, a validation framework, maintaining the capability not only to read the old raw data, but also to reprocess them with an updated release or convert them to another format, to help ensure long-term reusability of the legacy data.
DOI: 10.1017/s1350482702004073
2002
Cited 5 times
A study of the theory and operation of a resonance fluorescence water vapour sensor for upper tropospheric humidity measurements
Abstract In‐situ methods for measuring upper tropospheric humidity are important for two reasons: (i) they can be used as accurate spot measurements with which to calibrate more extensive data from satellites and other sensors; (ii) they can provide high accuracy measurements from aircraft or balloon with which individual processes of transport, phase change or chemistry can be studied. In either case the accuracy of the in‐situ measurement is of paramount importance. This study compares the performance and accuracy of a resonance fluorescence type of sensor (the Fluorescence Water Vapour Sensor) with a standard frost‐point hygrometer. An intercomparison of these two hygrometers has confirmed a long‐standing difference between these two types of sensors (the FWVS overestimates the water vapour volume mixing ratio by 10–20%, depending on pressure). Testing the FWVS experimentally in the laboratory, along with a modelling study of the sensor has revealed that a significant source of error is due to contamination of the FWVS source emission, and the subsequent underestimation of the oxygen absorption cross‐section. Copyright © 2002 Royal Meteorological Society.
2003
Cited 4 times
Running CMS software on GRID Testbeds
Starting in the middle of November 2002, the CMS experiment undertook an evaluation of the European DataGrid Project (EDG) middleware using its event simulation programs. A joint CMS-EDG task force performed a stress test by submitting a large number of jobs to many distributed sites. The EDG testbed was complemented with additional CMS-dedicated resources. A total of ~ 10000 jobs consisting of two different computational types were submitted from four different locations in Europe over a period of about one month. Nine sites were active, providing integrated resources of more than 500 CPUs and about 5 TB of disk space (with the additional use of two Mass Storage Systems). Descriptions of the adopted procedures, the problems encountered and the corresponding solutions are reported. Results and evaluations of the test, both from the CMS and the EDG perspectives, are described.
DOI: 10.1007/s10723-004-7141-y
2004
Cited 4 times
HEP Applications and Their Experience with the Use of DataGrid Middleware
DOI: 10.1109/cit.2010.272
2010
Adding Standards Based Job Submission to a Commodity Grid Broker
The Condor matchmaker provides a powerful mechanism for optimally matching between user task and resource provider requirements, making the Condor system a good choice for use as a meta-scheduler within the Grid. Integrating Condor within a wider Grid context is possible but only through modification to the Condor source code as each new mechanism for connection to remote resources is defined. In this paper we describe how the emerging standards for job submission and resource description can be integrated into the Condor system, thereby allowing arbitrary remote Grid resources which support these standards to be brokered using Condor.
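The core idea here, symmetric matching of user requirements against resource descriptions, is easy to illustrate outside Condor. The sketch below uses plain Python dictionaries and requirement expression strings in place of ClassAds and the standards-based job descriptions (e.g. JSDL) the paper discusses; it shows the matching logic only and is not Condor's actual interface.

```python
def evaluate(expr, my, target):
    """Evaluate one side's requirement expression with MY (own) and TARGET (other side) attributes."""
    # eval() of a trusted expression string stands in for ClassAd expression evaluation.
    return bool(eval(expr, {"__builtins__": {}}, {"MY": my, "TARGET": target}))

def matchmake(job, resources):
    """Symmetric match: keep resources where both sides' requirements evaluate to True."""
    return [
        r for r in resources
        if evaluate(job["requirements"], job, r) and evaluate(r["requirements"], r, job)
    ]

# Illustrative job and resource descriptions (standing in for JSDL documents / resource adverts).
job = {"vo": "cms", "cpus_needed": 8,
       "requirements": "TARGET['cpus'] >= MY['cpus_needed'] and MY['vo'] in TARGET['supported_vos']"}
resources = [
    {"name": "ce-a", "cpus": 16, "supported_vos": ["cms", "atlas"],
     "requirements": "TARGET['vo'] in MY['supported_vos']"},
    {"name": "ce-b", "cpus": 4, "supported_vos": ["cms"],
     "requirements": "TARGET['vo'] in MY['supported_vos']"},
]
print([r["name"] for r in matchmake(job, resources)])   # expect ['ce-a']
```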
2006
Cited 3 times
GRIDCC - Providing a real-time grid for distributed instrumentation
The GRIDCC project is extending the use of Grid computing to include access to and control of distributed instrumentation. Access to the instruments will be via an interface to a Virtual Instrument Grid Service (VIGS). VIGS is a new concept and its design and implementation, together with middleware that can provide the appropriate Quality of Service (QoS), is a key part of the GRIDCC development plan. An overall architecture for GRIDCC has been defined and some of the application areas, which include distributed power systems, remote control of an accelerator and the remote monitoring of a large particle physics experiment, are briefly discussed.
DOI: 10.1109/escience.2008.148
2008
Adding Standards Based Job Submission to a Commodity Grid Broker
The Condor matchmaker provides a powerful mechanism for matching users' job requirements with resource providers' requirements, in such a manner that the selected pairing not only satisfies both sets of requirements but is optimal for both. This has made the Condor system a good choice for use as a meta-scheduler within the Grid. Integrating Condor within a wider Grid context is possible, but only through modification to the Condor source code. Here we describe how the standards for job submission and resource descriptions can be integrated into the Condor system to allow arbitrary Grid resources which support these standards to be brokered through Condor.
DOI: 10.1088/1748-0221/16/08/p08046
2021
Performance of the MICE diagnostic system
Abstract Muon beams of low emittance provide the basis for the intense, well-characterised neutrino beams of a neutrino factory and for multi-TeV lepton-antilepton collisions at a muon collider. The international Muon Ionization Cooling Experiment (MICE) has demonstrated the principle of ionization cooling, the technique by which it is proposed to reduce the phase-space volume occupied by the muon beam at such facilities. This paper documents the performance of the detectors used in MICE to measure the muon-beam parameters, and the physical properties of the liquid hydrogen energy absorber during running.
DOI: 10.1140/epjc/s10052-022-09991-7
2022
Erratum to: The LUX-ZEPLIN (LZ) radioactivity and cleanliness control programs
DOI: 10.1098/rsta.2012.0094
2013
Processing LHC data in the UK
The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this grid infrastructure the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during 2010 and 2011, the first two significant years of the running of the LHC.
DOI: 10.1109/comswa.2006.1665228
2006
The GRIDCC Project the GRIDCC Collaboration
The GRIDCC project is integrating remote interaction with instruments into the Grid, along with distributed control and real-time interaction. The GRIDCC middleware is being designed with use cases from a very diverse set of applications, and so the GRIDCC architecture provides access to the instruments in as generic a way as possible. The middleware will be validated on a representative subset of these applications. GRIDCC is also developing an adaptable user interface and a mechanism for performing complex workflows in order to increase both the usability and the usefulness of the system. Wherever possible the GRIDCC middleware builds on top of other middleware stacks, allowing the effort to be concentrated on the more novel elements of the project. The GRIDCC project is a collaboration between 10 organizations in 4 different countries and is funded by the European Union.
DOI: 10.1109/nssmic.2004.1462663
2005
Performance of R-GMA Based Grid Job Monitoring System for CMS Data Production
High Energy Physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with stored data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through Grid computing. Management of large Monte Carlo productions (~3000 jobs) or data analyses and the quality assurance of the results requires careful monitoring and bookkeeping, and an important requirement when using the Grid is the ability to monitor transparently the large number of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. We have previously developed a system allowing us to test its performance under a heavy load while using few real Grid resources. We present the latest results on this system and compare them with the data collected while running actual CMS simulation jobs on the LCG2 Grid test bed.
DOI: 10.1109/tns.2004.829575
2004
Efficiency of resource brokering in grids for high-energy physics computing
This paper presents a study of the efficiency of resource brokering in a computational grid constructed for CPU and data intensive scientific analysis. Real data is extracted from the logging records of an in-use resource broker relating to the running of Monte Carlo simulation jobs, and compared to detailed modeling of job processing in a grid system. This analysis uses performance indicators relating to how efficiently the jobs are run, as well as how effectively the available computational resources are being utilized. In the case of a heavily loaded grid, the delays incurred at different stages of brokering and scheduling are studied, in order to determine where the bottlenecks appear in this process. The performance of different grid setups is tested, for instance, homogeneous and heterogeneous resource distribution, and varying numbers of resource brokers. The importance of the speed of the grid information services (IS) is also investigated.
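The efficiency indicators discussed above can be derived from per-job timestamps in broker logging records. The following is a small illustrative calculation; the field names and the indicators chosen (brokering overhead fraction and mean queueing time) are assumptions made for the sketch, not the paper's exact definitions.

```python
from dataclasses import dataclass

@dataclass
class JobRecord:
    """Timestamps (seconds) taken from broker/logging records for one job."""
    submitted: float   # job handed to the resource broker
    scheduled: float   # broker matched the job to a computing element
    started: float     # job started executing on a worker node
    finished: float    # job completed

def brokering_overhead(job):
    """Fraction of the job's total wall time spent before execution starts."""
    total = job.finished - job.submitted
    return (job.started - job.submitted) / total if total > 0 else 0.0

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Purely illustrative records, including one job delayed by a heavily loaded broker.
jobs = [
    JobRecord(0,  40,  600, 4200),
    JobRecord(0,  25,  300, 3900),
    JobRecord(0, 900, 2400, 6000),
]
print("mean brokering overhead: %.1f%%" % (100 * mean(brokering_overhead(j) for j in jobs)))
print("mean time before execution: %.0f s" % mean(j.started - j.submitted for j in jobs))
```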
DOI: 10.1109/nssmic.2005.1596391
2006
Performance of R-GMA for Monitoring Grid Jobs for CMS Data Production
High energy physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through grid computing. Furthermore the production of large quantities of Monte Carlo simulated data provides an ideal test bed for grid technologies and will drive their development. One important challenge when using the grid for data analysis is the ability to monitor transparently the large number of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the grid monitoring architecture of the Global Grid Forum. We have previously developed a system allowing us to test its performance under a heavy load while using few real grid resources. We present the latest results on this system running on the LCG 2 grid test bed using the LCG 2.6.0 middleware release. For a sustained load equivalent to 7 generations of 1000 simultaneous jobs, R-GMA was able to transfer all published messages and store them in a database for 98% of the individual jobs. The failures experienced were at the remote sites, rather than at the archiver's MON box as had been expected.
DOI: 10.1088/1742-6596/396/3/032065
2012
Preparing for long-term data preservation and access in CMS
The data collected by the LHC experiments are unique and present an opportunity and a challenge for a long-term preservation and re-use. The CMS experiment has defined a policy for the data preservation and access to its data and is starting its implementation. This note describes the driving principles of the policy and summarises the actions and activities which are planned in the starting phase of the project.
DOI: 10.1007/978-0-387-09663-6_23
2009
Enabling Scientists Through Workflow and Quality of Service
There is a strong desire within scientific communities to Grid-enable their experiments. This is fueled by the advantages of having remote (collaborative) access to instruments, computational resources and storage. In order to make the scientist's experience as rewarding as possible, two technologies need to be adopted into the Grid paradigm: workflow, to allow the whole scientific process to be automated, and Quality of Service (QoS), to ensure that this automation meets the scientists' needs and expectations. In this chapter we present an end-to-end workflow pipeline which takes a user's design and automates the processes of workflow design, resource selection and reservation through to enacting the workflow on the Grid, thus removing much of the complexity inherent within this process.
DOI: 10.5170/cern-2005-002.959
2005
HEP Applications Experience with the European DataGrid Middleware and Testbed
The European DataGrid (EDG) project ran from 2001 to 2004, with the aim of producing middleware which could form the basis of a production Grid, and of running a testbed to demonstrate the middleware. HEP experiments (initially the four LHC experiments and subsequently BaBar and D0) were involved from the start in specifying requirements, and subsequently in evaluating the performance of the middleware, both with generic tests and through increasingly complex data challenges. A lot of experience has therefore been gained which may be valuable to future Grid projects, in particular LCG and EGEE which are using a substantial amount of the middleware developed in EDG. We report our experiences with job submission, data management and mass storage, information and monitoring systems, Virtual Organisation management and Grid operations, and compare them with some typical Use Cases defined in the context of LCG. We also describe some of the main lessons learnt from the project, in particular in relation to configuration, fault-tolerance, interoperability and scalability, as well as the software development process itself, and point out some areas where further work is needed. We also make some comments on how these issues are being addressed in LCG and EGEE.
2003
HEP Applications Evaluation of the EDG Testbed and Middleware
Workpackage 8 of the European Datagrid project was formed in January 2001 with representatives from the four LHC experiments, and with experiment independent people from five of the six main EDG partners. In September 2002 WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of ATLAS and CMS in integrating the use of the EDG Testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid 2003.
2003
CMS Test of the European DataGrid Testbed
2014
Lucien Bidaine (1908-1996) : peintre arlonais, chantre de la nature
Born in Arlon on 19 June 1908, and a lifelong resident of the town, Lucien Bidaine left his mark on the artistic life of Arlon through his activity within the Académie des Beaux-Arts of the provincial capital, where he taught from 1945 until his retirement in 1976. In that capacity he had as pupils many Luxembourg painters, since he taught both drawing and painting. He had trained by following a three-year course of study at the École Supérieure de Peinture Pierre Logelain in Brussels. His artistic career began in the 1930s. His method was to go out to the subject and make a first study on the spot to serve as an aide-mémoire (Camille Barthelemy worked in the same way, painting gouaches in preparation for his oils). Post-impressionist in inspiration, Lucien Bidaine held many exhibitions both in Belgium, notably in Arlon where he showed his works regularly, and abroad. For a long time he remained the only Arlon painter to organise solo exhibitions. His works reveal a great sensitivity to colour. He took care over the foregrounds of his works, while the distances became more and more indistinct, thereby conveying the effect of light on things. Nature was his main source of inspiration because he felt it could offer a painter every subject he might wish for. He therefore painted mainly landscapes (Arlon and its old quarters, its region, the Semois valley), but also still lifes and portraits. (Source: Dictionnaire des peintres du Luxembourg belge) The exhibition essentially presents his vision of the landscapes and genre scenes of the Arlon region and of the two Luxembourgs. His perfect mastery of colour allowed him to capture the atmosphere of the Luxembourg countryside, the force of the natural elements, and the harshness of the climate and its inhabitants. The Musée Gaspar exhibition is the first devoted to the artist to be accompanied by a publication containing his biography, a stylistic study and an illustrated catalogue of the works on display.
2015
Arlon 1914-1918 : l'occupation du Sud-Luxembourg durant la Première Guerre mondiale
Presentation of the themes developed in the eponymous exhibition held at the Musée Gaspar in Arlon from 1 May to 19 December 2014.
2015
La statuette d'Intarabus de Foy-Noville
Presentation of the correspondence between Léon Parmentier and Franz Cumont concerning an ancient statuette kept by a private owner in Bastogne.
2015
Strukturen des kaiserlichen römischen Reiches: was Verwaltungsgrenzen über eine Randregion aussagen
Presentation of the Roman imperial administrative structures within which lay the territories corresponding to the present-day German-speaking Community of Belgium.
2015
La bienfaisance de Charles Gaspar
A contribution to our knowledge of Charles Gaspar's charitable work in the south of Luxembourg province, particularly during the First World War.
2015
Bibliographie capucine de référence
A reference bibliography on the Capuchins compiled for the exhibition Capucins en Luxembourg (1616-1796) and its associated publication.
2015
Présence capucine luxembourgeoise en Amérique française au XVIIIe siècle
An article presenting the history of the Luxembourg Capuchin presence in Louisiana, including several notable figures such as Father Raphaël de Luxembourg.
DOI: 10.1016/j.nuclphysbps.2015.09.195
2016
Preparations for the public release of high-level CMS data
The CMS Collaboration, in accordance with its commitment to open access and data preservation, is preparing for the public release of up to half of the reconstructed collision data collected in 2010. Efforts at present are focused on the usability of the data in education. The data will be accompanied by example applications tailored for different levels of access, including ready-to-use web-based applications for histogramming or visualising individual collision events and a virtual machine image of the CMS software environment that is compatible with these data. The virtual machine image will contain instructions for using the data with the online applications as well as examples of simple analyses. The novelty of this initiative is two-fold: in terms of open science, it lies in releasing the data in a format that is good for analysis; from an outreach perspective, it is to provide the possibility for people outside CMS to build educational applications using our public data. CMS will rely on services for data preservation and open access being prototyped at CERN with input from CMS and the other LHC experiments.
DOI: 10.1088/1742-6596/664/6/062009
2015
The GridPP DIRAC project: Implementation of a multi-VO DIRAC service
The GridPP consortium provides computing support to many high energy physics projects in the UK. As part of this, GridPP offers access to a large amount of highly distributed resources across the UK for multiple collaborations. The user base supported by GridPP includes hundreds of users spanning multiple virtual organisations with many different computing requirements. In order to provide a common interface to these distributed resources, a centralised DIRAC instance has been set up at Imperial College London. This paper describes the experience gained from deploying this DIRAC instance and the modifications that have been made to support the GridPP use case.
2016
Le Fonds Godefroid Kurth de l'Institut Archéologique du Luxembourg, conservé au Musée Gaspar
2015
Catherine Wolff, L’armée romaine. Une armée modèle ? Paris, CNRS Éditions, 2012
2015
Christoph Schmetterer, Die rechtliche Stellung römischer Soldaten im Prinzipat. Wiesbaden, Harrassowitz, 2012 (Philippika, Marburger altertumskundliche Abhandlungen, 54)
2012
Le trésor de Saint-Mard II : hommage à Jacqueline Lallemand
2011
Les numismates belges au Musée Archéologique Luxembourgeois
2013
Acquisitions de l'Institut Archéologique du Luxembourg (juillet 2012-novembre 2013)
Inventory of the acquisitions of the Institut Archéologique du Luxembourg between July 2012 and November 2013, held at the Musée Gaspar in Arlon.
2012
La typologie "Weiler" des casques romains
2012
Compte rendu de Sites cisterciens d'Europe - Cistercian sites in Europe, Paris, 2012
2011
Compte-rendu de EICH Armin, Die Verwaltung der kaiserzeitlichen römischen Armee. Studien für Hartmut Wolff, Stuttgart, 2010
2012
Compte rendu de Kaï Michael Topfer : Signa militaria. Die römischen Feldzeichen in der Republik und im Prinzipat, Mayence, 2011
2012
Compte rendu de BURNAND (Yves), Primores Galliarum : sénateurs et chevaliers romains originaires de Gaule de la fin de la République au IIIe siècle. IV - Indices, Bruxelles, 2010
2012
Étude de monnaies antiques arlonaises : regain d'intérêt et perspectives
2012
Exposition "Tout un plat ! Faïences fines d'Attert, Arlon et environs"
2012
Découvertes numismatiques récentes à Arlon
2011
Les scènes de banquet funéraire ou Totenmahlreliefs originaires d'Arlon
2013
Spécificités des soldats originaires des provinces germaniques dans l'armée romaine impériale
2011
La chapelle-reposoir au Saint Curé d’Ars, à Waltzing
2012
Réactions à Insus, fils de Vodullus
2011
Arlon : découvertes numismatiques récentes
2011
Les soldats belges dans l'armée romaine à l'époque impériale
2011
Janus et Georges Thinès
2012
De rebus societatis
2013
Croiser les sources, évaluer l'épaisseur historiographique, décaler le regard. Une méthode toujours fructueuse
2011
Catherine Wolff, La campagne de Julien en Perse, 363 ap. J.-C., (Illustoria, HA6) 2010
2011
Armin Eich (Ed.), Die Verwaltung der kaiserzeitlichen römischen Armee. Studien für Hartmut Wolff, (Historia Einzelschriften, 211) 2010
2010
Compte-rendu de RICCI Cecicila, Soldati e veterani nella vita cittadina dell’Italia imperiale, Rome, 2010
2010
Le dessin, outil pédagogique de reconstitution
Through drawing, the Arlon watercolourist Jean Depiesse brings back to life a whole series of bas-reliefs held at the Musée Archéologique Luxembourgeois in Arlon. Several of his works, bequeathed to the Institut Archéologique du Luxembourg, evoke scenes of everyday life in antiquity depicted on the reliefs.
2009
De rebus societatis : du neuf dans nos musées
2008
Compte-rendu de REDDÉ M., BRULET R., FELLMANN R., HAALEBOS J. K., VON SCHNURBEIN S., L'architecture en Gaule romaine : les fortifications militaires, Paris, 2006
2009
Nouveau regard sur le "monstre marin" gallo-romain d'Arlon : un exemple de Capricorne ?
2009
L'intégration de la province de Gaule Belgique dans l'Empire romain
2008
Towards a statistical model of EGEE load
2008
Les Belges dans l'armée romaine à l'époque impériale
2007
Compte-rendu de BURNAND Yves, Primores Galliarum : sénateurs et chevaliers romains originaires de Gaule de la fin de la République au IIIe siècle. I. Méthodologie, Bruxelles, Éditions Latomus, 2005 (Collection Latomus, 290)
2007
L'évolution du paysage d'Orolaunum vicus suite aux troubles du IIIe siècle
DOI: 10.1051/epjconf/201921403046
2019
The LZ UK Data Centre
LUX-ZEPLIN (LZ) is a Dark Matter experiment based at the Sanford Underground Research Facility in South Dakota, USA. It is currently under construction and aims to start data taking in 2020. Its computing model stipulates two independent data centres, one in the USA and one in the UK. Both data centres will hold a complete copy of the experiment’s data and are expected to handle all aspects of data processing and user analysis. Here we discuss the set-up of the UK data centre within the context of the existing UK Grid infrastructure and show that a mature distributed computing system such as the Grid can be extended to serve as a central data centre for a reasonably large non-LHC experiment.
DOI: 10.1051/epjconf/201921404039
2019
An open source data transfer tool kit for research data
We present the prototype of an open source data transfer tool kit. It provides an easy to use 'drag-and-drop' web interface for users to transfer files between institutions that do not have a grid infrastructure in place. The underlying technology leverages standard grid technologies, e.g. automatic generation of X.509 certificates, but remains completely hidden from the user.
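The abstract mentions automatic generation of X.509 certificates that stays hidden from the user, but gives no implementation detail. As a purely illustrative sketch (not the tool kit's actual code, which would obtain CA-signed credentials), here is how a short-lived self-signed X.509 certificate can be produced in Python with the cryptography package.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def make_short_lived_certificate(common_name, hours=12):
    """Generate a key pair and a self-signed certificate valid for a few hours."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=hours))
        .sign(key, hashes.SHA256())
    )
    key_pem = key.private_bytes(serialization.Encoding.PEM,
                                serialization.PrivateFormat.PKCS8,
                                serialization.NoEncryption())
    return key_pem, cert.public_bytes(serialization.Encoding.PEM)

key_pem, cert_pem = make_short_lived_certificate("transfer-user")
print(cert_pem.decode().splitlines()[0])   # -----BEGIN CERTIFICATE-----
```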
DOI: 10.5170/cern-2005-002.1074
2004
Distributed Grid Experiences in CMS DC04