
P. Eerola


DOI: 10.48550/arxiv.1608.07537
2016
Cited 62 times
Updated baseline for a staged Compact Linear Collider
The Compact Linear Collider (CLIC) is a multi-TeV high-luminosity linear e+e- collider under development. For an optimal exploitation of its physics potential, CLIC is foreseen to be built and operated in a staged approach with three centre-of-mass energy stages ranging from a few hundred GeV up to 3 TeV. The first stage will focus on precision Standard Model physics, in particular Higgs and top-quark measurements. Subsequent stages will focus on measurements of rare Higgs processes, as well as searches for new physics processes and precision measurements of new states, e.g. states previously discovered at LHC or at CLIC itself. In the 2012 CLIC Conceptual Design Report, a fully optimised 3 TeV collider was presented, while the proposed lower energy stages were not studied to the same level of detail. This report presents an updated baseline staging scenario for CLIC. The scenario is the result of a comprehensive study addressing the performance, cost and power of the CLIC accelerator complex as a function of centre-of-mass energy and it targets optimal physics output based on the current physics landscape. The optimised staging scenario foresees three main centre-of-mass energy stages at 380 GeV, 1.5 TeV and 3 TeV for a full CLIC programme spanning 22 years. For the first stage, an alternative to the CLIC drive beam scheme is presented in which the main linac power is produced using X-band klystrons.
DOI: 10.48550/arxiv.1812.06018
2018
Cited 55 times
The Compact Linear Collider (CLIC) - 2018 Summary Report
The Compact Linear Collider (CLIC) is a TeV-scale high-luminosity linear $e^+e^-$ collider under development at CERN. Following the CLIC conceptual design published in 2012, this report provides an overview of the CLIC project, its current status, and future developments. It presents the CLIC physics potential and reports on design, technology, and implementation aspects of the accelerator and the detector. CLIC is foreseen to be built and operated in stages, at centre-of-mass energies of 380 GeV, 1.5 TeV and 3 TeV, respectively. CLIC uses a two-beam acceleration scheme, in which 12 GHz accelerating structures are powered via a high-current drive beam. For the first stage, an alternative with X-band klystron powering is also considered. CLIC accelerator optimisation, technical developments and system tests have resulted in an increased energy efficiency (power around 170 MW) for the 380 GeV stage, together with a reduced cost estimate at the level of 6 billion CHF. The detector concept has been refined using improved software tools. Significant progress has been made on detector technology developments for the tracking and calorimetry systems. A wide range of CLIC physics studies has been conducted, both through full detector simulations and parametric studies, together providing a broad overview of the CLIC physics potential. Each of the three energy stages adds cornerstones of the full CLIC physics programme, such as Higgs width and couplings, top-quark properties, Higgs self-coupling, direct searches, and many precision electroweak measurements. The interpretation of the combined results gives crucial and accurate insight into new physics, largely complementary to LHC and HL-LHC. The construction of the first CLIC energy stage could start by 2026. First beams would be available by 2035, marking the beginning of a broad CLIC physics programme spanning 25-30 years.
DOI: 10.1088/1748-0221/16/02/p02027
2021
Cited 33 times
The CMS Phase-1 pixel detector upgrade
The CMS detector at the CERN LHC features a silicon pixel detector as its innermost subdetector. The original CMS pixel detector has been replaced with an upgraded pixel system (CMS Phase-1 pixel detector) in the extended year-end technical stop of the LHC in 2016/2017. The upgraded CMS pixel detector is designed to cope with the higher instantaneous luminosities that have been achieved by the LHC after the upgrades to the accelerator during the first long shutdown in 2013–2014. Compared to the original pixel detector, the upgraded detector has a better tracking performance and lower mass with four barrel layers and three endcap disks on each side to provide hit coverage up to an absolute value of pseudorapidity of 2.5. This paper describes the design and construction of the CMS Phase-1 pixel detector as well as its performance from commissioning to early operation in collision data-taking.
DOI: 10.1016/j.cpc.2004.02.005
2004
Cited 97 times
BCVEGPY: an event generator for hadronic production of the Bc meson
We have written a Fortran program, BCVEGPY, an event generator for the hadronic production of the Bc meson through the dominant hard subprocess gg→Bc(Bc*)+b+c̄. To keep the program compact, we have written the amplitude of the subprocess with the particle helicity technique and made it as symmetric as possible, by decomposing the gluon self-couplings and then applying the symmetries. To check the program, various cross sections of the subprocess have been computed numerically and compared with those in the literature. BCVEGPY is written in a PYTHIA-compatible format, so it is easy to interface with PYTHIA.
Title of program: BCVEGPY
Version: 1.0 (September, 2003)
Catalogue identifier: ADTJ
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTJ
Program obtained from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: any computer with a FORTRAN 77 compiler. The program has been tested on HP-SC45 Sigma-X workgroups, Linux PCs and Windows PCs with Visual Fortran.
Operating systems: UNIX, Linux and Windows
Programming language used: FORTRAN 77
Memory required to execute with typical data: about 2.0 MB
No. of bytes in distributed program, including test data, etc.: 477630
No. of lines in distributed program, including test data, etc.: 66461
Distribution format: tar gzip file
Nature of physical problem: hadronic production of the Bc meson.
Method of solution: an improved helicity approach to the amplitude, together with the symmetries of the amplitude itself, is used to compact the program and save as much CPU time as possible. The code can optionally generate weighted or unweighted events. For jet hadronization, an interface to PYTHIA is provided.
Restrictions on the complexity of the problem: the hadronic production of the Bc meson in the S-wave states, i.e. the pseudoscalar state (¹S₀) and the vector state (³S₁), is included via the 'complete calculation' approach. The hadronic production of the Bc meson in P-wave states has not yet been implemented in BCVEGPY.
Typical running time: this depends on which option one chooses when matching to PYTHIA. Typically, with IDWTUP=1 it takes about 20 hours on a 1.8 GHz Intel P4 machine to generate 1000 events, whereas with IDWTUP=3 it takes only about 40 minutes to generate 10⁶ events.
DOI: 10.1016/0168-9002(93)90663-3
1993
Cited 72 times
The DELPHI Microvertex detector
The DELPHI Microvertex detector, which has been in operation since the start of the 1990 LEP run, consists of three layers of silicon microstrip detectors at average radii of 6.3, 9.0 and 11.0 cm. The 73728 readout strips, oriented along the beam, have a total active area of 0.42 m². The strip pitch is 25 μm and every other strip is read out by low-power charge amplifiers, giving a signal-to-noise ratio of 15:1 for minimum ionizing particles. On-line zero suppression results in an average data size of 4 kbyte for Z⁰ events. After a mechanical survey and an alignment with tracks, the impact parameter uncertainty as determined from hadronic Z⁰ decays is well described by σ² = (69/p_t)² + 24² μm², with p_t in GeV/c. For the 45 GeV/c tracks from Z⁰ → μ⁺μ⁻ decays we find an uncertainty of 21 μm for the impact parameter, which corresponds to a precision of 8 μm per point. The stability during the run is monitored using light spots and capacitive probes. An analysis of tracks through sector overlaps provides an additional check of the stability. The same analysis also results in a value of 6 μm for the intrinsic precision of the detector.
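The quadrature parametrisation quoted above, read as σ(d₀) = √((69/p_t)² + 24²) μm with p_t in GeV/c, can be evaluated with a short sketch (the function name and usage are illustrative, not from the paper):

```python
import math

def impact_parameter_uncertainty_um(pt_gev):
    """Hypothetical helper: quadrature sum of a multiple-scattering term
    falling as 1/pt and a constant asymptotic term,
    sigma(d0) = sqrt((69/pt)^2 + 24^2) micrometres, pt in GeV/c."""
    return math.sqrt((69.0 / pt_gev) ** 2 + 24.0 ** 2)
```

At high momentum the scattering term becomes negligible and the uncertainty approaches the 24 μm asymptotic value; at 1 GeV/c it is dominated by multiple scattering (about 73 μm).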
DOI: 10.1088/1748-0221/3/02/p02013
2008
Cited 44 times
The ATLAS Transition Radiation Tracker (TRT) proportional drift tube: design and performance
A straw proportional counter is the basic element of the ATLAS Transition Radiation Tracker (TRT). Its detailed properties as well as the main properties of a few TRT operating gas mixtures are described. Particular attention is paid to straw tube performance in high radiation conditions and to its operational stability.
DOI: 10.1016/j.nima.2010.04.054
2010
Cited 39 times
Study of energy response and resolution of the ATLAS barrel calorimeter to hadrons of energies from 20 to 350 GeV
A fully instrumented slice of the ATLAS detector was exposed to test beams from the SPS (Super Proton Synchrotron) at CERN in 2004. In this paper, the results of the measurements of the response of the barrel calorimeter to hadrons with energies in the range 20–350 GeV and beam impact points and angles corresponding to pseudo-rapidity values in the range 0.2–0.65 are reported. The results are compared to the predictions of a simulation program using the Geant 4 toolkit.
DOI: 10.1088/1748-0221/3/02/p02014
2008
Cited 38 times
The ATLAS TRT Barrel Detector
The ATLAS TRT barrel is a tracking drift chamber using 52,544 individual tubular drift tubes. It is one part of the ATLAS Inner Detector, which consists of three sub-systems: the pixel detector spanning the radius range 4 to 20 cm, the semiconductor tracker (SCT) from 30 to 52 cm, and the transition radiation tracker (TRT) from 56 to 108 cm. The TRT barrel covers the central pseudo-rapidity region |η| < 1, while the TRT endcaps cover the forward and backward η regions. These TRT systems provide a combination of continuous tracking, with many measurements in individual drift tubes (or straws), and of electron identification based on transition radiation from fibers or foils interleaved between the straws themselves. This paper describes the recently completed construction of the TRT barrel detector, including the quality control procedures used in the fabrication of the detector.
DOI: 10.1088/1748-0221/3/10/p10003
2008
Cited 35 times
The ATLAS TRT end-cap detectors
The ATLAS TRT end-cap is a tracking drift chamber using 245,760 individual tubular drift tubes. It is part of the TRT tracker, which consists of the barrel and two end-caps. The TRT end-caps cover the forward and backward pseudo-rapidity regions 1.0 < |η| < 2.0, while the TRT barrel covers the central region |η| < 1.0. The TRT system provides a combination of continuous tracking, with many measurements in individual drift tubes (or straws), and of electron identification based on transition radiation from fibers or foils interleaved between the straws themselves. Along with the other two sub-systems, the Pixel detector and the Semiconductor Tracker (SCT), the TRT constitutes the ATLAS Inner Detector. This paper describes the recently completed and installed TRT end-cap detectors: their design, assembly, integration, and the acceptance tests applied during construction.
DOI: 10.1088/1748-0221/12/06/p06018
2017
Cited 25 times
P-Type Silicon Strip Sensors for the new CMS Tracker at HL-LHC
The upgrade of the LHC to the High-Luminosity LHC (HL-LHC) is expected to increase the LHC design luminosity by an order of magnitude. This will require silicon tracking detectors with a significantly higher radiation hardness. The CMS Tracker Collaboration has conducted an irradiation and measurement campaign to identify suitable silicon sensor materials and strip designs for the future outer tracker at the CMS experiment. Based on these results, the collaboration has chosen to use n-in-p type silicon sensors and focus further investigations on the optimization of that sensor type. This paper describes the main measurement results and conclusions that motivated this decision.
2005
Cited 32 times
ATLAS computing: Technical Design Report
The ATLAS Computing Model embraces the Grid paradigm and a high degree of decentralization and sharing of computing resources. The required level of computing resources means that off-site facilities will be vital to the operation of ATLAS in a way that was not the case for previous CERN-based experiments. The primary event processing occurs at CERN in a Tier-0 facility. The RAW data is archived at CERN and copied (along with the primary processed data) to the Tier-1 facilities around the world. These facilities archive the raw data, provide the reprocessing capacity, provide access to the various processed versions, and allow scheduled analysis of the processed data by physics analysis groups. Derived datasets produced by the physics groups are copied to the Tier-2 facilities for further analysis. The Tier-2 facilities also provide the simulation capacity for the experiment, with the simulated data housed at Tier-1s. In addition, Tier-2 centres will provide analysis facilities, and some will provide the capacity to produce calibrations based on processing raw data. A CERN Analysis Facility provides an additional analysis capacity, with an important role in the calibration and algorithmic development work. ATLAS has adopted an object-oriented approach to software, based primarily on the C++ programming language, but with some components implemented using FORTRAN and Java. A component-based model has been adopted, whereby applications are built up from collections of plug-compatible components based on a variety of configuration files. This capability is supported by a common framework that provides common data-processing support. This approach results in great flexibility in meeting the basic processing needs of the experiment, and also for responding to changing requirements throughout its lifetime. 
The heavy use of abstract interfaces allows for different implementations to be provided, supporting different persistency technologies, or optimized for the offline or high-level trigger environments. The Athena framework is an enhanced version of the Gaudi framework that was originally developed by the LHCb experiment, but is now a common ATLAS-LHCb project. Major design principles are the clear separation of data and algorithms, and of transient (in-memory) and persistent (in-file) data. All levels of processing of ATLAS data, from high-level trigger to event simulation, reconstruction and analysis, take place within the Athena framework; in this way it is easier for code developers and users to test and run algorithmic code, with the assurance that all geometry and conditions data will be the same for all types of applications (simulation, reconstruction, analysis, visualization). One of the principal challenges for ATLAS computing is to develop and operate a data storage and management infrastructure able to meet the demands of a yearly data volume of O(10 PB) utilized by data processing and analysis activities spread around the world. The ATLAS Computing Model establishes the environment and operational requirements that ATLAS data-handling systems must support, and, together with the operational experience gained to date in test beams and data challenges, provides the primary guidance for the development of the data management systems. The ATLAS Databases and Data Management Project (DB Project) leads and coordinates ATLAS activities in these areas, with a scope encompassing technical databases (detector production, installation and survey data), detector geometry, online/TDAQ databases, conditions databases (online and offline), event data, offline processing configuration and book-keeping, distributed data management, and distributed database and data management services. 
The project is responsible for ensuring the coherent development, integration, and operational capability of the distributed database and data management software and infrastructure for ATLAS across these areas. The ATLAS Computing Model foresees the distribution of raw and processed data to Tier-1 and Tier-2 centres, so as to be able to exploit fully the computing resources that are made available to the Collaboration. Additional computing resources will be available for data processing and analysis at Tier-3 centres and other computing facilities to which ATLAS may have access. A complex set of tools and distributed services, enabling the automatic distribution and processing of the large amounts of data, has been developed and deployed by ATLAS in cooperation with the LHC Computing Grid (LCG) Project and with the middleware providers of the three large Grid infrastructures we use: EGEE, OSG and NorduGrid. The tools are designed in a flexible way, in order to have the possibility to extend them to use other types of Grid middleware in the future. These tools, and the service infrastructure on which they depend, were initially developed in the context of centrally managed, distributed Monte Carlo production exercises. They will be re-used wherever possible to create systems and tools for individual users to access data and compute resources, providing a distributed analysis environment for general usage by the ATLAS Collaboration. The first version of the production system was deployed in summer 2004 and has been used since the second half of 2004. It was used for Data Challenge 2, for the production of simulated data for the 5th ATLAS Physics Workshop (Rome, June 2005) and for the reconstruction and analysis of the 2004 Combined Test-Beam data. 
The main computing operations that ATLAS will have to run comprise the preparation, distribution and validation of ATLAS software, and the computing and data management operations run centrally on Tier-0, Tier-1s and Tier-2s. The ATLAS Virtual Organization will allow production and analysis users to run jobs and access data at remote sites using the ATLAS-developed Grid tools. In the past few years the Computing Model has been tested and developed by running Data Challenges of increasing scope and magnitude, as was proposed by the LHC Computing Review in 2001. We have run two major Data Challenges since 2002 and performed other massive productions in order to provide simulated data to the physicists and to reconstruct and analyse real data coming from test-beam activities; this experience is now useful in setting up the operations model for the start of LHC data-taking in 2007. The Computing Model, together with the knowledge of the resources needed to store and process each ATLAS event, gives rise to estimates of required resources that can be used to design and set up the various facilities. It is not assumed that all Tier-1s or Tier-2s will be of the same size; however, in order to ensure a smooth operation of the Computing Model, all Tier-1s should have broadly similar proportions of disk, tape and CPU, and the same should apply for the Tier-2s. The organization of the ATLAS Software & Computing Project reflects all areas of activity within the project itself. Strong high-level links have been established with other parts of the ATLAS organization, such as the T-DAQ Project and Physics Coordination, through cross-representation in the respective steering boards. The Computing Management Board, and in particular the Planning Officer, acts to make sure that software and computing developments take place coherently across sub-systems and that the project as a whole meets its milestones. 
The International Computing Board assures the information flow between the ATLAS Software & Computing Project and the national resources and their Funding Agencies.
2000
Cited 32 times
$B$ Decays at the LHC
We review the prospects for B-decay physics at the LHC, as discussed in the 1999 workshop on Standard Model physics at the LHC.
DOI: 10.1109/grid.2003.1261711
2004
Cited 30 times
The NorduGrid production grid infrastructure, status and plans
NorduGrid offers reliable grid services for academic users over an increasing set of computing and storage resources spanning the Nordic countries Denmark, Finland, Norway and Sweden. A small group of scientists has already been using NorduGrid as their daily computing utility. In the near future we expect rapid growth both in the number of active users and in the available resources, thanks to the recently launched Nordic grid projects. We report on the present status and short-term plans of the Nordic grid infrastructure and describe the available and foreseen resources, the grid services, and our growing user base.
DOI: 10.1142/9789814537315
1993
Cited 30 times
Physics and Experiments with Linear Colliders
DOI: 10.1016/s0370-2693(01)00238-6
2001
Cited 29 times
QCD signatures of narrow graviton resonances in hadron colliders
We show that the characteristic transverse-momentum (p⊥) spectrum yields valuable information for testing models of narrow graviton resonance production in the TeV range at the LHC. Furthermore, it is demonstrated that in these scenarios the parton-showering formalism agrees with the predictions of NLO matrix-element calculations.
DOI: 10.1007/3-540-44860-8_27
2003
Cited 28 times
The NorduGrid Architecture and Middleware for Scientific Applications
The NorduGrid project operates a production Grid infrastructure in Scandinavia and Finland using its own innovative middleware solutions. The resources range from small test clusters at academic institutions to large farms at several supercomputer centers, and are used for various scientific applications. This talk reviews the architecture and describes the Grid services implemented via the NorduGrid middleware.
DOI: 10.1088/1742-6596/513/6/062047
2014
Cited 14 times
A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system
The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
DOI: 10.1109/tns.2018.2881138
2019
Cited 13 times
Effect of Gamma-Ray Energy on Image Quality in Passive Gamma Emission Tomography of Spent Nuclear Fuel
Gamma-ray images of VVER-440 and SVEA-96 spent nuclear fuel assemblies were reconstructed using the filtered backprojection algorithm from measurements with a passive gamma emission tomography prototype instrument at Finnish nuclear power plants. Image quality evaluation criteria based on line profiles through the reconstructed image are used to evaluate image quality for spent fuel assemblies with different cooling times, and thus different mixtures of gamma-ray emitting isotopes. Image characteristics at the locations of water channels and central fuel pins are compared in two gamma-ray energy windows, 600-700 and >700 keV, for cooling times up to 10 years for SVEA-96 fuel and 24.5 years for VVER-440 fuel. For SVEA-96 fuel, images in the >700-keV gamma-ray energy window present better water-to-fuel contrast for all investigated cooling times. For VVER-440, images in the >700-keV gamma-ray energy window have higher water-to-fuel contrast up to and including a cooling time of 18.5 years, whereas the water-to-fuel contrast of the images taken in the two gamma-ray energy windows is equivalent for a cooling time of 24.5 years. Images reconstructed from higher energy gamma rays such as those in the >700-keV energy window present better water-to-fuel contrast in fuel cooled for up to 20 years and thus have the most potential for missing fuel pin detection.
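The reconstruction method named above, filtered backprojection, can be sketched in a few lines of numpy (a textbook toy, not the instrument's actual code: the geometry, ramp filter and scaling are the standard parallel-beam choices):

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Textbook filtered backprojection: ramp-filter each projection in
    Fourier space, then smear ("backproject") it across the image along
    its viewing angle and sum over all angles."""
    n = sinogram.shape[1]
    freqs = np.abs(np.fft.fftfreq(n))                 # ramp filter |f|
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * freqs, axis=1))
    xs = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n, n))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang) + n / 2.0   # detector coordinate
        recon += np.interp(t.ravel(), np.arange(n), proj,
                           left=0.0, right=0.0).reshape(n, n)
    return recon * np.pi / len(angles_deg)

# Toy check: the analytic sinogram of a uniform disc of radius 12
# (projection 2*sqrt(r^2 - s^2) at every angle) should reconstruct
# to roughly 1 inside the disc and roughly 0 outside.
n, r = 64, 12.0
s = np.arange(n) - n / 2.0
disc_proj = 2.0 * np.sqrt(np.clip(r * r - s * s, 0.0, None))
angles = np.arange(0.0, 180.0, 2.0)
image = fbp(np.tile(disc_proj, (len(angles), 1)), angles)
```

The line-profile criteria used in the paper correspond to examining rows of `image` through known water-channel and fuel-pin positions.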
DOI: 10.1016/s0168-9002(01)00878-6
2001
Cited 26 times
Particle identification using the time-over-threshold method in the ATLAS Transition Radiation Tracker
Test-beam studies of the ATLAS Transition Radiation Tracker (TRT) straw tube performance in terms of electron–pion separation using a time-over-threshold method are described. The test-beam data are compared with Monte Carlo simulations of charged particles passing through the straw tubes of the TRT. For energies below 10 GeV, the time-over-threshold method combined with the standard transition-radiation cluster-counting technique significantly improves the electron–pion separation in the TRT. The use of the time-over-threshold information also provides some kaon–pion separation, thereby significantly enhancing the B-physics capabilities of the ATLAS detector.
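The time-over-threshold idea is simple enough to illustrate with a toy digitised pulse (the pulse shape, sampling step and threshold below are invented for illustration; they are not the TRT's actual parameters):

```python
import numpy as np

def time_over_threshold(samples, dt_ns, threshold):
    # Time-over-threshold: how long the digitised pulse stays above threshold.
    return float(np.count_nonzero(samples > threshold)) * dt_ns

# A heavier ionisation deposit gives a taller and therefore wider pulse,
# so it stays above a fixed threshold for longer. In this toy, the larger
# amplitude stands in for a kaon's higher dE/dx at low momentum.
t = np.arange(0.0, 50.0, 1.0)                 # ns, 1 ns sampling
shape = (t / 5.0) * np.exp(1.0 - t / 5.0)     # unit-peak pulse shape
tot_pion = time_over_threshold(1.0 * shape, 1.0, 0.3)
tot_kaon = time_over_threshold(3.0 * shape, 1.0, 0.3)
```

Because the measured quantity is a duration rather than a peak amplitude, it is cheap to extract from binary discriminator output, which is what makes the method attractive for a straw tracker.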
DOI: 10.1109/mic.2003.1215657
2003
Cited 21 times
Building a production grid in scandinavia
Innovative middleware solutions are key to the NorduGrid testbed, which spans academic institutes and supercomputing centers throughout Scandinavia and Finland and provides continuous grid services to its users.
DOI: 10.1016/j.nima.2009.05.158
2009
Cited 16 times
Study of the response of the ATLAS central calorimeter to pions of energies from 3 to 9 GeV
A fully instrumented slice of the ATLAS central detector was exposed to test beams from the SPS (Super Proton Synchrotron) at CERN in 2004. In this paper, the response of the central calorimeters to pions with energies in the range between 3 and 9 GeV is presented. The linearity and the resolution of the combined calorimetry (electromagnetic and hadronic calorimeters) were measured and compared to the predictions of a detector simulation program using the Geant 4 toolkit.
DOI: 10.1088/1748-0221/11/04/p04023
2016
Cited 10 times
Trapping in proton-irradiated p⁺-n-n⁺ silicon sensors at fluences anticipated at the HL-LHC outer tracker
The degradation of signal in silicon sensors is studied under conditions expected at the CERN High-Luminosity LHC. 200 μm thick n-type silicon sensors are irradiated with protons of different energies to fluences of up to 3·10¹⁵ n_eq/cm². Pulsed red laser light with a wavelength of 672 nm is used to generate electron-hole pairs in the sensors. The induced signals are used to determine the charge collection efficiencies separately for electrons and holes drifting through the sensor. The effective trapping rates are extracted by comparing the results to simulation. The electric field is simulated using Synopsys device simulation assuming two effective defects. The generation and drift of charge carriers are simulated in an independent simulation based on PixelAV. The effective trapping rates are determined from the measured charge collection efficiencies and the simulated and measured time-resolved current pulses are compared. The effective trapping rates determined for both electrons and holes are about 50% smaller than those obtained using standard extrapolations of studies at low fluences and suggest an improved tracker performance over initial expectations.
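The link between an effective trapping rate and a charge collection efficiency can be sketched with a simple toy model (a sketch only: the parameter values and the constant-drift assumption are ours, not the paper's fit results):

```python
import math

def collected_fraction(fluence_neq_cm2, beta_cm2_per_ns=3e-16, drift_time_ns=5.0):
    """Toy trapping model: the effective trapping rate grows linearly with
    fluence, 1/tau = beta * fluence. A carrier surviving with probability
    exp(-t/tau), averaged over a drift of duration t_d, yields a collected
    fraction (tau/t_d) * (1 - exp(-t_d/tau)). beta and t_d are assumed
    values of a typical order of magnitude, not measured ones."""
    if fluence_neq_cm2 <= 0.0:
        return 1.0
    tau_ns = 1.0 / (beta_cm2_per_ns * fluence_neq_cm2)
    x = drift_time_ns / tau_ns
    return (1.0 - math.exp(-x)) / x
```

In this picture, a trapping rate 50% smaller than the extrapolation (as reported above) directly translates into a larger collected fraction at a given fluence, which is why it implies better tracker performance than initially expected.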
DOI: 10.1088/1748-0221/15/03/p03014
2020
Cited 8 times
Beam test performance of prototype silicon detectors for the Outer Tracker for the Phase-2 Upgrade of CMS
A new CMS tracker detector will be installed for operation at the High Luminosity LHC (HL-LHC). This detector comprises modules with two closely spaced parallel sensor plates and front-end ASICs capable of transmitting tracking information to the CMS Level-1 (L1) trigger at the 40 MHz beam crossing rate. The inclusion of tracking information in the L1 trigger decision will be essential for selecting events of interest efficiently at the HL-LHC. The CMS Binary Chip (CBC) has been designed to read out and correlate hits from pairs of tracker sensors, forming so-called track stubs. For the first time, a prototype irradiated module and a full-sized module, both equipped with the version 2 of the CBC, have been operated in test beam facilities. The efficiency of the stub finding logic of the modules for various angles of incidence has been studied. The ability of the modules to reject tracks with transverse momentum less than 2 GeV has been demonstrated. For modules built with irradiated sensors, no significant drop in the stub finding performance has been observed. Results from the beam tests are described in this paper.
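The stub-finding logic described above (correlating hits between the two closely spaced sensors to reject low-pT tracks) can be sketched as follows; the strip coordinates and window width are illustrative, not the CBC's actual configuration:

```python
def find_stubs(bottom_hits, top_hits, window_strips=4):
    """Sketch of the pT-module stub logic: pair each hit in the inner
    (bottom) sensor with the nearest hit in the outer (top) sensor, and
    keep the pair only if the strip offset lies inside the correlation
    window. Low-pT tracks bend more in the magnetic field, producing a
    larger offset between the two sensors, and so fall outside the window."""
    stubs = []
    for b in bottom_hits:
        nearest = min(top_hits, key=lambda t: abs(t - b), default=None)
        if nearest is not None and abs(nearest - b) <= window_strips:
            stubs.append((b, nearest))
    return stubs
```

A nearly straight (high-pT) track with hits at strips 100 and 102 yields a stub, whereas a strongly bent track with an offset of 10 strips is rejected, which is the mechanism behind the 2 GeV transverse-momentum cut demonstrated in the paper.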
DOI: 10.1016/j.nima.2004.01.033
2004
Cited 16 times
Status of design and construction of the Transition Radiation Tracker (TRT) for the ATLAS experiment at the LHC
The ATLAS Inner Detector consists of three sub-systems, the Pixel Detector at the innermost radius, the Semi-Conductor Tracker at intermediate radii, and the Transition Radiation Tracker (TRT) at the outermost radius in front of the electromagnetic calorimeter. The TRT provides a combination of continuous tracking with many projective measurements based on individual drift-tubes (or straws) and of electron identification based on radiator fibres or foils interleaved between the straws themselves. This paper describes the current status of design and construction of the various components of the TRT: the assembly of the barrel modules has recently been completed, that of the end-cap wheels is well underway, and the on-detector front-end electronics is in production. The detector modules and front-end electronics boards will be integrated together over the next year, the barrel and end-cap TRT parts will be assembled and tested with their SCT counterparts during 2005 and installation and commissioning in the ATLAS pit will take place at the end of 2005 and the beginning of 2006.
DOI: 10.1016/j.nima.2003.08.145
2003
Cited 16 times
Aging studies for the ATLAS Transition Radiation Tracker (TRT)
A summary of the aging and material validation studies carried out for the ATLAS Transition Radiation Tracker (TRT) is presented. Particular emphasis is put on the different phenomena observed in straw tubes operating with the chosen Xe/CF4/CO2 mixture. The most serious effects observed are silicon deposition on the anode wire and damage of the anode wire gold plating. Etching phenomena and active radical effects are also discussed. With a careful choice of all materials and components, and with good control of the water contamination in the active gas, the ATLAS TRT will operate reliably for 10 years at the LHC design luminosity. To demonstrate this fully, more work is still needed on the gas system purification elements, in particular to understand their interplay with the active species containing fluorine created in the avalanche process under irradiation.
DOI: 10.1088/1748-0221/3/06/p06007
2008
Cited 12 times
The ATLAS TRT electronics
The ATLAS inner detector consists of three sub-systems: the pixel detector spanning the radius range 4 cm to 20 cm, the semiconductor tracker at radii from 30 to 52 cm, and the transition radiation tracker (TRT), tracking from 56 to 107 cm. The TRT provides a combination of continuous tracking with many projective measurements based on individual drift tubes (or straws) and of electron identification based on transition radiation from fibres or foils interleaved between the straws themselves. This paper describes the on- and off-detector electronics for the TRT as well as the TRT portion of the data acquisition (DAQ) system.
DOI: 10.1016/j.nima.2004.01.013
2004
Cited 13 times
Operation of the ATLAS Transition Radiation Tracker under very high irradiation at the CERN LHC
The ATLAS Transition Radiation Tracker (TRT) performance depends critically on the choice of the active gas and on its properties. The most important operational aspects, which have led to the final choice of the active gas for the operation of the TRT at the LHC design luminosity, are presented. The TRT performance expected at these conditions is reviewed, including pile-up effects at high luminosity.
DOI: 10.1088/1748-0221/16/12/p12014
2021
Cited 6 times
Comparative evaluation of analogue front-end designs for the CMS Inner Tracker at the High Luminosity LHC
Abstract The CMS Inner Tracker, made of silicon pixel modules, will be entirely replaced prior to the start of the High Luminosity LHC period. One of the crucial components of the new Inner Tracker system is the readout chip, being developed by the RD53 Collaboration, and in particular its analogue front-end, which receives the signal from the sensor and digitizes it. Three different analogue front-ends (Synchronous, Linear, and Differential) were designed and implemented in the RD53A demonstrator chip. A dedicated evaluation program was carried out to select the most suitable design to build a radiation tolerant pixel detector able to sustain high particle rates with high efficiency and a small fraction of spurious pixel hits. The test results showed that all three analogue front-ends presented strong points, but also limitations. The Differential front-end demonstrated very low noise, but the threshold tuning became problematic after irradiation. Moreover, a saturation in the preamplifier feedback loop affected the return of the signal to baseline and thus increased the dead time. The Synchronous front-end showed very good timing performance, but also higher noise. For the Linear front-end all of the parameters were within specification, although this design had the largest time walk. This limitation was addressed and mitigated in an improved design. The analysis of the advantages and disadvantages of the three front-ends in the context of the CMS Inner Tracker operation requirements led to the selection of the improved design Linear front-end for integration in the final CMS readout chip.
2000
Cited 16 times
Bottom Production
DOI: 10.1109/tns.2004.829496
2004
Cited 12 times
Recent aging studies for the ATLAS transition radiation tracker
The transition radiation tracker (TRT) is one of the three subsystems of the inner detector of the ATLAS experiment. It is designed to operate for 10 years at the LHC, with integrated charges of ~10 C/cm of wire and radiation doses of about 10 Mrad and 2×10^14 neutrons/cm². These doses translate into unprecedented ionization currents and integrated charges for a large-scale gaseous detector. This paper describes studies leading to the adoption of a new ionization gas regime for the ATLAS TRT. In this new regime, the primary gas mixture is 70% Xe, 27% CO2, 3% O2. It is planned to occasionally flush and operate the TRT detector with an Ar-based ternary mixture, containing a small percentage of CF4, to remove, if needed, silicon pollution from the anode wires. This procedure has been validated in realistic conditions and would require a few days of dedicated operation. This paper covers both performance and aging studies with the new TRT gas mixture.
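For scale, the quoted integrated charge of roughly 10 C per cm of wire can be turned into an average anode current, assuming the common rule of thumb of about 10^7 s of beam per LHC year (an assumption for illustration, not a figure from the paper):

```python
# Sketch: average anode current implied by ~10 C/cm over 10 years of running.
# The operational-seconds figure is an assumed rule of thumb (~1e7 s of beam
# per LHC year), not a number taken from the paper itself.
integrated_charge_c_per_cm = 10.0        # C per cm of wire over the lifetime
operational_seconds = 10 * 1.0e7         # 10 years x ~1e7 s of beam per year

avg_current = integrated_charge_c_per_cm / operational_seconds  # A per cm
print(f"average anode current: {avg_current * 1e9:.0f} nA per cm of wire")  # 100 nA/cm
```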
DOI: 10.1109/tsc.2015.2469292
2018
Cited 7 times
Secure Cloud Connectivity for Scientific Applications
Cloud computing improves utilization and flexibility in allocating computing resources while reducing infrastructural costs. However, in many cases cloud technology is still proprietary and tainted by security issues rooted in the multi-user and hybrid cloud environment. The lack of secure connectivity in a hybrid cloud environment hinders the adoption of clouds by scientific communities that need to scale out their local infrastructure onto publicly available resources for large-scale experiments. In this article, we present a case study of the DII-HEP secure cloud infrastructure and propose an approach to securely scale out a private cloud deployment to public clouds in order to support hybrid cloud scenarios. A challenge in such scenarios is that cloud vendors may offer varying and possibly incompatible ways to isolate and interconnect virtual machines located in different cloud networks. Our approach is tenant driven in the sense that the tenant provides its own connectivity mechanism. We provide a qualitative and quantitative analysis of a number of alternatives to solve this problem. We have chosen one of the standardized alternatives, the Host Identity Protocol, for further experimentation in a production system because it supports legacy applications in a topologically independent and secure way.
DOI: 10.1088/1748-0221/12/05/p05022
2017
Cited 5 times
Test beam performance measurements for the Phase I upgrade of the CMS pixel detector
A new pixel detector for the CMS experiment was built in order to cope with the instantaneous luminosities anticipated for the Phase I Upgrade of the LHC. The new CMS pixel detector provides four-hit tracking with a reduced material budget as well as new cooling and powering schemes. A new front-end readout chip mitigates buffering and bandwidth limitations, and allows operation at low comparator thresholds. In this paper, comprehensive test beam studies are presented, which have been conducted to verify the design and to quantify the performance of the new detector assemblies in terms of tracking efficiency and spatial resolution. Under optimal conditions, the tracking efficiency is 99.95 ± 0.05%, while the intrinsic spatial resolutions are 4.80 ± 0.25 μm and 7.99 ± 0.21 μm along the 100 μm and 150 μm pixel pitch, respectively. The findings are compared to a detailed Monte Carlo simulation of the pixel detector and good agreement is found.
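The quoted resolutions are far better than the naive binary-readout expectation of pitch/√12, thanks to charge sharing between pixels; a quick comparison, using only the numbers from the abstract above:

```python
from math import sqrt

# Binary (single-pixel) readout limit is pitch / sqrt(12); measured values
# are the intrinsic resolutions quoted in the abstract.
for pitch_um, measured_um in [(100.0, 4.80), (150.0, 7.99)]:
    binary_um = pitch_um / sqrt(12)
    print(f"{pitch_um:.0f} um pitch: binary limit {binary_um:.1f} um "
          f"vs measured {measured_um:.2f} um "
          f"(~{binary_um / measured_um:.0f}x better via charge sharing)")
```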
DOI: 10.1088/1748-0221/15/04/p04017
2020
Cited 5 times
Experimental study of different silicon sensor options for the upgrade of the CMS Outer Tracker
During the high-luminosity phase of the LHC (HL-LHC), planned to start in 2027, the accelerator is expected to deliver an instantaneous peak luminosity of up to 7.5×10^34 cm−2 s−1. A total integrated luminosity of 3000 or even 4000 fb−1 is foreseen to be delivered to the general purpose detectors ATLAS and CMS over a decade, thereby increasing the discovery potential of the LHC experiments significantly. The CMS detector will undergo a major upgrade for the HL-LHC, with entirely new tracking detectors consisting of an Outer Tracker and an Inner Tracker. The new tracking system will be exposed to significantly higher radiation than the current tracker, requiring new radiation-hard sensors. CMS initiated an extensive irradiation and measurement campaign starting in 2009 to systematically compare the properties of different silicon materials and design choices for the Outer Tracker sensors. Several test structures and sensors were designed and implemented on 18 different combinations of wafer materials, thicknesses, and production technologies. The devices were electrically characterized before and after irradiation with neutrons, and with protons of different energies, at fluences corresponding to those expected at different radii of the CMS Outer Tracker after 3000 fb−1. The tests performed include studies with β sources, lasers, and beam scans. This paper compares the performance of different options for the HL-LHC silicon sensors with a focus on silicon bulk material and thickness.
DOI: 10.48550/arxiv.physics/0306002
2003
Cited 9 times
The NorduGrid architecture and tools
The NorduGrid project designed a Grid architecture with the primary goal of meeting the requirements of the production tasks of the LHC experiments. While it is meant to be a rather generic Grid system, it puts emphasis on batch processing suitable for problems encountered in High Energy Physics. The NorduGrid architecture implementation uses the Globus Toolkit as the foundation for various components developed by the project. While introducing new services, NorduGrid does not modify the Globus tools, so that the two can co-exist. The NorduGrid topology is decentralized, avoiding a single point of failure. The NorduGrid architecture is thus light-weight, non-invasive and dynamic, while robust and scalable, capable of meeting the most challenging tasks of High Energy Physics.
DOI: 10.1007/978-3-540-75755-9_57
2007
Cited 7 times
Roadmap for the ARC Grid Middleware
The Advanced Resource Connector (ARC), or the NorduGrid middleware, is an open source software solution enabling production quality computational and data Grids, with special emphasis on scalability, stability, reliability and performance. Since its first release in May 2002, the middleware has been deployed and used in production environments. This paper presents the future development directions and plans of the ARC middleware, outlining the software development roadmap.
DOI: 10.1016/j.nima.2004.01.017
2004
Cited 7 times
ATLAS Transition Radiation Tracker test-beam results
Several prototypes of the Transition Radiation Tracker for the ATLAS experiment at the LHC have been built and tested at the CERN SPS accelerator. Results from detailed studies of the straw-tube hit registration efficiency and drift-time measurements, and of the pion and electron spectra with and without radiators, are presented.
DOI: 10.1140/epjc/s10052-017-5115-z
2017
Cited 4 times
Characterisation of irradiated thin silicon sensors for the CMS phase II pixel upgrade
The high luminosity upgrade of the Large Hadron Collider, foreseen for 2026, necessitates the replacement of the CMS experiment’s silicon tracker. The innermost layer of the new pixel detector will be exposed to severe radiation, corresponding to a 1 MeV neutron equivalent fluence of up to Φeq = 2×10^16 cm−2 and an ionising dose of ≈5 MGy after an integrated luminosity of 3000 fb−1. Thin, planar silicon sensors are good candidates for this application, since the degradation of the signal produced by traversing particles is less severe than for thicker devices. In this paper, the results obtained from the characterisation of 100 and 200 μm thick p-bulk pad diodes and strip sensors irradiated up to fluences of Φeq = 1.3×10^16 cm−2 are shown.
DOI: 10.1088/1748-0221/13/03/p03003
2018
Cited 4 times
Test beam demonstration of silicon microstrip modules with transverse momentum discrimination for the future CMS tracking detector
A new CMS Tracker is under development for operation at the High Luminosity LHC from 2026 onwards. It includes an outer tracker based on dedicated modules that will reconstruct short track segments, called stubs, using spatially coincident clusters in two closely spaced silicon sensor layers. These modules allow the rejection of low transverse momentum track hits and reduce the data volume before transmission to the first level trigger. The inclusion of tracking information in the trigger decision is essential to limit the first level trigger accept rate. A customized front-end readout chip, the CMS Binary Chip (CBC), containing stub finding logic has been designed for this purpose. A prototype module, equipped with the CBC chip, has been constructed and operated for the first time in a 4 GeV/c positron beam at DESY. The behaviour of the stub finding was studied for different angles of beam incidence on a module, which allows an estimate of the sensitivity to transverse momentum within the future CMS detector. A sharp transverse momentum threshold around 2 GeV/c was demonstrated, meeting the requirement to reject on-detector a large fraction of the low momentum tracks present in the LHC environment. This is the first realistic demonstration of a silicon tracking module that is able to select data, based on the particle's transverse momentum, for use in a first level trigger at the LHC. The results from this test are described here.
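The pT selection rests on the bend of a track between the two closely spaced sensors: a track of transverse momentum pT (GeV) in field B (T) has curvature radius R = pT/(0.3·B), so two sensors a small distance Δr apart at radius r see a relative hit displacement Δx ≈ 0.3·B·r·Δr/(2·pT). A rough sketch of this relation (B = 3.8 T is the CMS field; the radius and sensor spacing below are assumed illustrative values, not the actual module geometry):

```python
def stub_bend(pt_gev, b_tesla=3.8, r_m=0.5, dr_m=1.8e-3):
    """Transverse displacement (m) between the two sensor layers, small-angle
    approximation. r_m and dr_m are illustrative assumptions."""
    return 0.3 * b_tesla * r_m * dr_m / (2.0 * pt_gev)

# Lower-pT tracks bend more, so a window on the displacement acts as a pT cut.
for pt in (1, 2, 5, 10):
    print(f"pT = {pt:2d} GeV -> bend ~ {stub_bend(pt) * 1e6:.0f} um")
```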
DOI: 10.48550/arxiv.physics/0306013
2003
Cited 6 times
Atlas Data-Challenge 1 on NorduGrid
The first LHC application ever to be executed in a computational Grid environment is the so-called ATLAS Data-Challenge 1, more specifically, the part assigned to the Scandinavian members of the ATLAS Collaboration. Taking advantage of the NorduGrid testbed and tools, physicists from Denmark, Norway and Sweden were able to participate in the overall exercise starting in July 2002 and continuing through the rest of 2002 and the first part of 2003 using solely the NorduGrid environment. This made it possible to distribute input data over a wide area and to rely on the NorduGrid resource discovery mechanism to find an optimal cluster for job submission. During the whole of Data-Challenge 1, more than 2 TB of input data were processed and more than 2.5 TB of output data were produced by more than 4750 Grid jobs.
DOI: 10.1088/1748-0221/14/10/p10017
2019
Cited 3 times
The DAQ and control system for the CMS Phase-1 pixel detector upgrade
In 2017 a new pixel detector was installed in the CMS detector. This so-called Phase-1 pixel detector features four barrel layers in the central region and three disks per end in the forward regions. The upgraded pixel detector requires an upgraded data acquisition (DAQ) system to accept a new data format and larger event sizes. A new DAQ and control system has been developed based on a combination of custom and commercial microTCA parts. Custom mezzanine cards on standard carrier cards provide a front-end driver for readout, and two types of front-end controller for configuration and the distribution of clock and trigger signals. Before the installation of the detector the DAQ system underwent a series of integration tests, including readout of the pilot pixel detector, which was constructed with prototype Phase-1 electronics and operated in CMS from 2015 to 2016, quality assurance of the CMS Phase-1 detector during its assembly, and testing with the CMS Central DAQ. This paper describes the Phase-1 pixel DAQ and control system, along with the integration tests and results. A description of the operational experience and performance in data taking is included.
DOI: 10.1088/1748-0221/16/11/p11028
2021
Cited 3 times
Selection of the silicon sensor thickness for the Phase-2 upgrade of the CMS Outer Tracker
Abstract During the operation of the CMS experiment at the High-Luminosity LHC the silicon sensors of the Phase-2 Outer Tracker will be exposed to radiation levels that could potentially deteriorate their performance. Previous studies had determined that planar float zone silicon with n-doped strips on a p-doped substrate was preferred over p-doped strips on an n-doped substrate. The last step in evaluating the optimal design for the mass production of about 200 m² of silicon sensors was to compare sensors of baseline thickness (about 300 μm) to thinned sensors (about 240 μm), which promised several benefits at high radiation levels because of the higher electric fields at the same bias voltage. This study provides a direct comparison of these two thicknesses in terms of sensor characteristics as well as charge collection and hit efficiency for fluences up to 1.5×10^15 n_eq/cm². The measurement results demonstrate that sensors of about 300 μm thickness will ensure excellent tracking performance even at the highest fluence levels expected for the Phase-2 Outer Tracker.
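The claimed benefit of thinner sensors follows, to first order, from the mean electric field at a fixed bias voltage, E ≈ V/d. A first-order sketch (the bias value is an assumed illustration, and a real irradiated sensor has a non-uniform field profile):

```python
# Uniform-field approximation for a fully depleted sensor: E = V / d.
# The 600 V bias is an assumed illustrative value, not from the paper.
bias_v = 600.0
for d_um in (300.0, 240.0):
    d_cm = d_um * 1e-4
    print(f"{d_um:.0f} um sensor: mean field ~ {bias_v / d_cm:.0f} V/cm")

# The thinner sensor sees a ~25% higher mean field (300/240 = 1.25)
# at the same bias, which helps charge collection after irradiation.
```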
DOI: 10.1016/s0168-9002(01)02030-7
2002
Cited 6 times
Tracking performance of the transition radiation tracker prototype for the ATLAS experiment
A prototype of the Transition Radiation Tracker (TRT) for the ATLAS experiment at the CERN LHC has been built and tested at the CERN SPS. Detailed studies of the drift-time measurements, alignment technique, hit registration efficiency, track and momentum accuracy were performed. A coordinate measurement accuracy of 150 μm for a single TRT drift tube and a momentum resolution of 0.8% for 20 GeV pions in a 1.56 T magnetic field were achieved. The results obtained are in agreement with the expected tracking performance of the ATLAS TRT.
DOI: 10.1016/s0168-9002(03)01389-5
2003
Cited 5 times
An X-ray scanner for wire chambers
The techniques to measure the position of sense wires and field wires, the gas gain and the gas flow rate inside wire chambers using a collimated and filtered X-ray beam are reported. Specific examples are given using barrel modules of the Transition Radiation Tracker of the ATLAS experiment.
DOI: 10.1109/tns.2005.862799
2005
Cited 4 times
Acceptance tests and criteria of the ATLAS transition radiation tracker
The Transition Radiation Tracker (TRT) sits at the outermost part of the ATLAS Inner Detector, encasing the Pixel Detector and the Semi-Conductor Tracker (SCT). The TRT combines charged particle track reconstruction with electron identification capability. This is achieved by layers of xenon-filled straw tubes with periodic radiator foils or fibers providing TR photon emission. The design and choice of materials have been optimized to cope with the harsh operating conditions at the LHC, which are expected to lead to an accumulated radiation dose of 10 Mrad and a neutron fluence of up to 2×10^14 n/cm² after ten years of operation. The TRT comprises a barrel containing 52 000 axial straws and two end-cap parts with 320 000 radial straws. The total of 420 000 electronic channels (two channels per barrel straw) allows continuous tracking with many projective measurements (more than 30 straw hits per track). The assembly of the barrel modules in the US has recently been completed, while the end-cap wheel construction in Russia has reached the 50% mark. After testing at the production sites and shipment to CERN, all modules and wheels undergo a series of quality and conformity measurements. These acceptance tests survey dimensions, wire tension, gas-tightness, high-voltage stability and gas-gain uniformity along each individual straw. This paper gives details on the acceptance criteria and measurement methods. An overview of the most important results obtained to date is also given.
DOI: 10.1371/journal.pone.0188959
2017
Design of a novel instrument for active neutron interrogation of artillery shells
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from [Formula: see text]% to [Formula: see text]% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10^9 n/s.
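The 10% H/N precision target translates into required peak counts via simple Poisson error propagation: for independent counts N_H and N_N, the relative error on the ratio is √(1/N_H + 1/N_N). A sketch of that arithmetic (the equal-counts counting model is an illustrative assumption, not the instrument's actual spectrum):

```python
from math import sqrt

def ratio_rel_err(n_h, n_n):
    """Relative Poisson error on the ratio of two independent peak counts."""
    return sqrt(1.0 / n_h + 1.0 / n_n)

# How many counts per peak (assuming N_H = N_N = n) for a 10% ratio measurement?
n = 1
while ratio_rel_err(n, n) > 0.10:
    n += 1
print(f"need ~{n} counts in each peak for 10% H/N precision")
```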
DOI: 10.1016/s0168-9002(96)00913-8
1996
Cited 6 times
B-physics potential of ATLAS: an update
The B-physics potential of the ATLAS experiment at LHC is described. Simulation results are shown for the measurement of sin 2β, with an emphasis on new tagging techniques. Other CP-violation measurements are described briefly. New limits are shown for the reach of the xs-measurement, resulting from increased statistics and improved fitting methods. Some rare decay modes of B-mesons can be easily seen in ATLAS. Analyses of channels B → μ+μ− (X) are presented here.
DOI: 10.1109/nssmic.2005.1596462
2006
Cited 3 times
Aging Effects in the ATLAS Transition Radiation Tracker and Gas Filtration Studies
The transition radiation tracker (TRT) is one of three particle tracking detectors now under construction for the ATLAS experiment, whose goal is to exploit the exciting new physics potential at CERN's next accelerator, the Large Hadron Collider (LHC). The TRT consists of 370 000 straw proportional tubes of 4 mm diameter with a 30 micron thick anode wire, which will be operated with a Xe/CO2/O2 gas mixture at a high voltage of approximately 1.5 kV. While the construction of the TRT is now well under way, a number of interesting and challenging questions need to be solved with regard to wire aging phenomena, which are induced by pollution originating from very small amounts of silicon-based vacuum materials in some components of the gas system. Finally, a guideline for avoiding aging in wire chamber detectors in high luminosity experiments is given.
DOI: 10.5170/cern-2005-002.765
2005
Cited 3 times
Data management services of NorduGrid
In common grid installations, the services responsible for storing large data chunks, replicating those data and indexing their availability are usually completely decoupled, and the task of synchronizing data is passed either to user-level tools or to separate services (such as spiders), which are subject to failure and usually cannot perform properly if one of the underlying services fails. The NorduGrid Smart Storage Element (SSE) was designed to overcome these problems by combining the most desirable features into one service. It uses HTTPS/G for secure data transfer, Web Services for control (through the same HTTPS/G channel), and can provide information to the indexing services used in middlewares based on the Globus Toolkit. At the moment, those are the Replica Catalog and the Replica Location Service. The modular internal design of the SSE and the power of C++ object-oriented programming make it easy to add support for other indexing services. There are plans to complement it with a Smart Indexing Service capable of resolving inconsistencies, thus creating a robust distributed data storage system.
DOI: 10.1088/1748-0221/17/06/p06039
2022
Beam test performance of a prototype module with Short Strip ASICs for the CMS HL-LHC tracker upgrade
Abstract The Short Strip ASIC (SSA) is one of the four front-end chips designed for the upgrade of the CMS Outer Tracker for the High Luminosity LHC. Together with the Macro-Pixel ASIC (MPA) it will instrument modules containing a strip and a macro-pixel sensor stacked on top of each other. The SSA provides both full readout of the strip hit information when triggered, and, together with the MPA, correlated clusters called stubs from the two sensors for use by the CMS Level-1 (L1) trigger system. Results from the first prototype module consisting of a sensor and two SSA chips are presented. The prototype module has been characterized at the Fermilab Test Beam Facility using a 120 GeV proton beam.
DOI: 10.1088/1742-6596/608/1/012010
2015
An overview of the DII-HEP OpenStack based CMS data analysis
An OpenStack-based private cloud with the Cluster File System has been built and used with both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware, which allows CMS applications to run without changes on the job submission side. Our test results indicate that the adopted approach provides a scalable and resilient solution for managing resources without compromising performance or high availability.
DOI: 10.1016/s0168-9002(01)00199-1
2001
Cited 3 times
An alignment method for the ATLAS end-cap TRT detector using a narrow monochromatic X-ray beam
The end-cap Transition Radiation Tracker (TRT), consisting of 36 modules (wheels), is being constructed as part of the ATLAS Inner Detector at the CERN LHC. This paper describes a method for determining the wire positions inside the straw proportional tubes (SPT), which are the basic building blocks of the ATLAS TRT, with an accuracy of better than 10 μm. The procedure involves moving a narrow monochromatic X-ray beam across the straw and measuring the counting rate as a function of the position of the X-ray beam in the straw. To achieve this goal, a Beam Directing Device (BDD), providing the possibility to direct the X-ray beam in a chosen direction within some solid angle and supplying an accurate angular measurement system, has been constructed. The results of wire position measurements performed using this BDD on a full-scale mechanical prototype end-cap wheel of the TRT are presented in this paper.
DOI: 10.1007/s1010501c0007
2001
Cited 3 times
Searching for physics beyond the Standard Model in the decay B+ → K+K+ ??
DOI: 10.5170/cern-2005-002.673
2005
Experiences with Data Indexing services supported by the NorduGrid middleware
The NorduGrid middleware, ARC, has integrated support for querying and registering to Data Indexing services such as the Globus Replica Catalog and the Globus Replica Location Server. This support makes it possible to use these indexing services for, for example, brokering during job submission, automatic registration of files, and many other purposes. The integrated support is complemented by a set of command-line tools for registering to and querying these Data Indexing services. In this article we describe experiences with these indexing services, both from a daily-work point of view and in production environments such as the ATLAS Data-Challenges 1 and 2. We describe the advantages of such Data Indexing services as well as their shortcomings. Finally, we present a proposal for an extended Smart Indexing Service which should address the shortcomings described; such an indexing service is currently being designed.
DOI: 10.1063/1.1807328
2004
ATLAS B Physics Performance Update
An update of the B physics performance of the ATLAS experiment is presented. After a short review of B production and decays at the LHC, the main features of the ATLAS detector are described and the present construction status is shown. The physics topics presented are precision measurements, rare decays, and B production. It is shown that despite recent changes in the trigger strategy and the initial detector layout, ATLAS retains an excellent potential for measurements of sin2β and rare dimuon decays. A large spectrum of B physics studies is also feasible; here we present examples of precision measurements with Bs0 mesons, Bc mesons, and B production.
DOI: 10.5170/cern-2005-002.1095
2005
Performance of the NorduGrid ARC and the Dulcinea Executor in ATLAS Data Challenge 2
This talk describes the various stages of ATLAS Data Challenge 2 (DC2) as regards the usage of resources deployed via NorduGrid's Advanced Resource Connector (ARC). It also describes the integration of these resources with the ATLAS production system using the Dulcinea executor. ATLAS Data Challenge 2 (DC2), run in 2004, was designed to be a step forward in distributed data processing. In particular, much of the coordination of task assignment to resources was planned to be delegated to the Grid in its different flavours. An automatic production management system was designed to direct tasks to Grids and conventional resources. The Dulcinea executor is the part of this system that provides the interface to the information system and the resource brokering capabilities of the ARC middleware. The executor translates the job definitions received from the supervisor into the extended resource specification language (XRSL) used by the ARC middleware. It also takes advantage of the ARC middleware's built-in support for the Globus Replica Location Server (RLS) for file registration and lookup. NorduGrid's ARC has been deployed on many ATLAS-dedicated resources across the world in order to enable effective participation in ATLAS DC2. This was the first attempt to harness large amounts of strongly heterogeneous resources in various countries for a single collaborative exercise using Grid tools. This talk addresses the various issues that arose during the different stages of DC2 in this environment: preparation, such as ATLAS software installation; deployment of the middleware; and processing. The results and lessons learned are summarized as well.
DOI: 10.1016/s0168-9002(00)00045-0
2000
Cited 3 times
BEAUTY’99 Conference Summary
Investigations of B hadrons are expected to break new ground in measuring CP-violation effects. The series of BEAUTY conferences, originating from the 1993 conference in Liblice, has contributed significantly to developing ideas for CP-violation measurements using B hadrons and to formulating and critically comparing the B-physics experiments. At the '99 conference in Bled we saw the ripening of the field and the first fruit emerging: the Tevatron has produced beautiful B-physics results and more are expected with the next run, while the B-physics experiments at DESY, SLAC and KEK are starting their operation. The longer-term projects at the LHC and the Tevatron have taken shape, and detailed prototyping work is ongoing. Meanwhile, on the phenomenological side, there has been impressive theoretical progress in understanding the 'standard' measurements more deeply and in proposing new signatures. In this summary, I highlight the status of the field as presented at the conference, concentrating on signatures, experiments and R&D programmes.
DOI: 10.1016/j.apradiso.2018.04.027
2018
Thin NaI(Tl) crystals to enhance the detection sensitivity for molten 241Am sources
A thin 5-mm NaI(Tl) scintillator detector was tested with the goal of enhancing the detection efficiency of 241Am gamma and X rays for steelworks operations. The performance of a thin (5 mm) NaI(Tl) detector was compared with a standard 76.2-mm thick NaI(Tl) detector. The 5-mm thick detector crystal results in a 55% smaller background rate at 60 keV compared with the thicker detector, translating into the ability to detect 30% weaker 241Am sources. For a 5 mm thick and 76.2 mm diameter NaI detector in the ladle car tunnel at Outokumpu Tornio Works, the minimum activity of a molten 241Am source that can be detected in 5 s with 95% probability is 9 MBq.
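The scaling from a 55% lower background to roughly 30% weaker detectable sources follows from counting statistics: in a Currie-style detection limit, the minimum detectable signal grows approximately with the square root of the background rate. The following is a minimal sketch of that relation; the function name and the assumption of unchanged detection efficiency at 60 keV are ours, not taken from the paper.

```python
import math

def relative_mda(background_ratio: float) -> float:
    """Counting statistics: the minimum detectable signal scales roughly
    with the square root of the background rate (Currie-style limit),
    assuming the detection efficiency at 60 keV is unchanged."""
    return math.sqrt(background_ratio)

# A 55% smaller background rate leaves 45% of the original background.
ratio = relative_mda(0.45)
print(f"detectable source strength vs. thick detector: {ratio:.2f}")
```

Since sqrt(0.45) ≈ 0.67, sources about 30% weaker than before become detectable, consistent with the numbers quoted in the abstract.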
DOI: 10.1016/0168-9002(94)91069-3
1994
Cited 4 times
B physics in ATLAS
The capabilities of the ATLAS detector for B physics are described. After an introduction outlining the features of the detector relevant for B physics, studies of B event reconstruction are presented. There follows a discussion of CP violation analyses that can be performed, with estimates of the precision that can be obtained. Finally, other B physics topics, including B0s mixing, rare decays and measurements with B baryons, are discussed.
1994
Cited 4 times
Asymmetries in B decays and their experimental control
DOI: 10.5170/cern-2002-003.95
2002
DTMROC-S: Deep submicron version of the readout chip for the TRT detector in ATLAS
A new version of the circuit for the readout of the ATLAS straw tube detector, TRT [1], has been developed in a deep-submicron process. The DTMROC-S is fabricated in a commercial 0.25 μm CMOS IBM technology, with a library hardened by layout techniques [2]. Compared to the previous version of the chip [3], done in a 0.8 μm radiation-hard CMOS, and despite the features added for improving the robustness and testability of the circuit, the deep-submicron technology results in a much smaller chip size, which increases the production yield and lowers the power consumption.
2004
Science on NorduGrid
DOI: 10.1109/nssmic.2002.1239373
2003
Implementation of the DTMROC-S ASIC for the ATLAS TRT detector in a 0.25 μm CMOS technology
The DTMROC-S is a 16-channel front-end chip developed for the signal processing of the ATLAS straw tube detector, TRT. Due to the highly radioactive environment, the chip is fabricated in a commercial 0.25 μm CMOS technology hardened by layout techniques and, in addition, a special methodology was used to improve the circuit's robustness against Single Event Effects (SEE) caused by ionizing particles. Exhaustive internal test features were foreseen to simplify and ensure comprehensive design verification, high fault coverage and throughput. Compared to the previous version of the chip, done in a 0.8 μm radiation-hard CMOS, and despite all the supplementary features, the Deep-Sub-Micron (DSM) technology results in a much smaller chip size, which increases the production yield and lowers the power consumption.
DOI: 10.22323/1.210.0019
2014
Dynamic Provisioning of Resources in a Hybrid Infrastructure
We have built a hybrid Grid/Cloud cluster that allows CMS Grid users to submit jobs in a familiar manner. This cluster functions within the CMS production system in a basic form, and we are now adding more advanced features. In this paper we present the results of using the EMI Argus and EES services to instantiate new virtual machines within the OpenStack Cloud. The Argus service is used to render authorization decisions based on X.509 credentials, and EES interacts with the OpenStack Cloud infrastructure.
2016
The NINS3 Research Project
DOI: 10.1063/1.4826801
2013
Rare B(s)0→μ+μ− decays
This review summarizes the current experimental results on rare B0(s) → μ+μ− decays from the Tevatron experiments CDF and D0, and the LHC experiments ATLAS, CMS and LHCb. The experimental upper limits on the B0s → μ+μ− branching fraction are already quite close to the branching fraction predicted by the Standard Model, and the first observation of the B0s → μ+μ− decay is expected soon. The rare decays B0(s) → μ+μ− are highly suppressed in the Standard Model, and therefore accurate measurements of these branching fractions provide complementary constraints on the free parameters of various extensions of the Standard Model.
DOI: 10.1016/0168-9002(93)90187-m
1993
B-physics with the ATLAS experiment at LHC
The feasibility of measuring CP-violation in B-meson systems at the Large Hadron Collider with the ATLAS experiment has been studied. The decay channel Bd0 → J/ψKS0, combined with the good tracking performance of the detector, was found to result in a sensitivity to CP-violation similar to that of dedicated B-experiments.
2000
Searching for physics beyond the Standard Model in the decay B+ -> K+K+pi-
The observation potential of the decay B+ -> K+K+pi- with the ATLAS detector at LHC is described in this paper. In the Standard Model this decay mode is highly suppressed, while in models beyond the Standard Model it could be significantly enhanced. To improve the selection of the K+K+pi- final state, a charged hadron identification using Time-over-Threshold measurements in the ATLAS Transition Radiation Tracker was developed and used.
DOI: 10.22323/1.084.0178
2010
B physics prospects of CMS with the first LHC data
B physics will be one of the key physics themes at the Large Hadron Collider (LHC). B hadrons are an ideal tool for advancing our current understanding of the flavour sector of the Standard Model (SM), and for searching for effects originating from physics beyond the SM, thanks to their large production rate and the fact that B hadrons are relatively easy to trigger on and identify due to their long lifetime and high mass. The interplay between strong and electroweak effects in the production and decay of B hadrons makes them a unique test ground for both forces. The integrated luminosity collected by the CMS experiment during the first LHC running period 2009-2010 is expected to be of the order of a few hundred pb−1. The first B physics measurements with the CMS experiment include the quarkonia production cross section and polarization, cross sections and lifetimes of exclusive B decays, the b production cross section, and bb̄ correlations. In this paper, some examples of the estimated sensitivities of CMS with the first LHC data, up to an integrated luminosity of about 50 pb−1, are presented.
2009
B physics prospects of CMS with the first LHC data
B physics will be one of the key physics themes at the Large Hadron Collider (LHC). B hadrons are an ideal tool for advancing our current understanding of the flavour sector of the Standard Model (SM), and for searching for effects originating from physics beyond the SM, thanks to their large production rate and the fact that B hadrons are relatively easy to trigger on and identify due to their long lifetime and high mass. The interplay between strong and electroweak effects in the production and decay of B hadrons makes them a unique test ground for both forces. The integrated luminosity collected by the CMS experiment during the first LHC running period 2009-2010 is expected to be of the order of a few hundred pb−1. The first B physics measurements with the CMS experiment include the quarkonia production cross section and polarization, cross sections and lifetimes of exclusive B decays, the b production cross section, and bb̄ correlations. In this paper, some examples of the estimated sensitivities of CMS with the first LHC data, up to an integrated luminosity of about 50 pb−1, are presented.
1998
An investigation of screwiness in hadronic final states from DELPHI
A recent theoretical model by Andersson et al. proposes that soft gluons order themselves in the form of a helix at the end of the QCD cascades. The authors of the model present a measure of the rapidity-azimuthal-angle correlation, which they call screwiness. We searched for such a signal in DELPHI data, and found no evidence for screwiness.
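For orientation, the screwiness observable of Andersson et al. is the event-averaged squared modulus of a helix-sensitive phase sum over the final-state particles. The toy sketch below illustrates the idea only; the particle selection, binning and normalisation conventions of the actual DELPHI analysis are not reproduced.

```python
import cmath
import random

def screwiness(events, omega):
    """Screwiness S(omega) in the form proposed by Andersson et al.:
    the event-averaged squared modulus of sum_j exp(i*(omega*y_j - phi_j)),
    where y_j is the rapidity and phi_j the azimuthal angle of particle j.
    A helix-like ordering of soft gluons would show up as a peak in omega."""
    total = 0.0
    for particles in events:
        z = sum(cmath.exp(1j * (omega * y - phi)) for y, phi in particles)
        total += abs(z) ** 2
    return total / len(events)

# Toy check: particles with phi = omega0 * y lie on a perfect helix,
# so S is maximal at omega = omega0 and smaller elsewhere.
random.seed(1)
omega0 = 0.7
events = [[(y, omega0 * y) for y in (random.uniform(-2, 2) for _ in range(10))]
          for _ in range(200)]
print(screwiness(events, omega0) > screwiness(events, 0.0))
```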
DOI: 10.1016/j.nuclphysbps.2007.05.050
2007
ATLAS Status and First Run Scenarios for B Physics
This article summarizes the status of the ATLAS detector and its commissioning as of autumn 2006, one year before the expected LHC start-up. The initial running scenarios and goals for B physics are presented for the foreseen pilot run with 900 GeV centre-of-mass energy in autumn 2007, as well as for the first physics run in 2008 at the nominal centre-of-mass energy of 14 TeV.
2018
Feedback from teachers on a widening participation intervention: targeting, context and networking.
DOI: 10.48550/arxiv.hep-ex/9611001
1996
ATLAS sensitivity range for the x_s measurement
Previous results on the prospects of the B_s mixing measurement in the ATLAS experiment at LHC are updated. The improved analysis method for the studied decay channels B_s -> D_s pi and B_s -> D_s a_1, combined with the most recent values for the branching ratios and the B_s lifetime, leads to the new ATLAS sensitivity range for the x_s measurement: x_s^{max} = 42. An extensive study is performed in order to estimate how x_s^{max} is influenced by the B-decay proper-time resolution of the vertex detector, as well as by the number of events and by the signal-to-background ratio.
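The role of the proper-time resolution mentioned above can be illustrated with the standard dilution factor for Gaussian smearing: the observed oscillation amplitude is damped by exp(−(x_s σ_t/τ)²/2), so the reachable x_s grows as the vertex resolution improves. This is a hedged toy calculation; the 4% resolution figure is illustrative and not taken from the paper.

```python
import math

def oscillation_dilution(x_s: float, sigma_t_over_tau: float) -> float:
    """Gaussian proper-time smearing damps the observed B_s oscillation
    amplitude by exp(-(x_s * sigma_t / tau)^2 / 2), where x_s = dm_s * tau.
    This is why the vertex-detector resolution bounds the reachable x_s."""
    return math.exp(-0.5 * (x_s * sigma_t_over_tau) ** 2)

# Illustrative numbers only: proper-time resolution ~4% of the B_s lifetime.
for x_s in (20, 42, 60):
    print(x_s, round(oscillation_dilution(x_s, 0.04), 3))
```

The damping is mild at low x_s but severe at high x_s, which is why x_s^{max} is sensitive to the proper-time resolution as well as to statistics and signal-to-background ratio.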
2004
A Step Towards a Computing Grid for the LHC Experiments: ATLAS Data Challenge 1
The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002 whose goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made for the final offline computing environment. A major feature of the first Data Challenge (DC1) was the preparation and the deployment of the software required for the production of large event samples as a worldwide distributed activity. It should be noted that it was not an option to “run the complete production at CERN” even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organising and then carrying out this large-scale production at a significant number of sites around the world had therefore to be faced. However, the benefits of this are manifold: apart from realising the required computing resources, this exercise created worldwide momentum for ATLAS computing as a whole. This report describes in detail the main steps carried out in DC1 and what has been learned from them as a step towards a computing Grid for the LHC experiments.
DOI: 10.5170/cern-2005-002.711
2005
Usage statistics and usage patterns on the NorduGrid : Analyzing the logging information collected on one of the largest production Grids of the world
The Nordic Grid facility (NorduGrid [1]) came into operation during summer 2002, when the Scandinavian ATLAS HEP group started to use the Grid for the ATLAS Data Challenges (DC); it was thus the first Grid ever to contribute to an ATLAS production. Since then, the Grid facility has been in continuous 24/7 operation. NorduGrid is being used by a growing set of active users from various scientific areas including physics, chemistry, biology and informatics. It has made major contributions to the ATLAS Data Challenge 1 [2] and the ongoing Data Challenge 2. The increasing number of resources has made NorduGrid one of the largest production Grids in the world, continuously running more than 3000 CPUs at more than 30 sites. The resources range from small test clusters at academic institutions to large farms at several supercomputer centers, and the NorduGrid software runs on clusters with very different Linux distributions. This presentation gives a short overview of the design and implementation of the NorduGrid middleware and of its logging and monitoring facilities. It is followed by a description of a typical job on NorduGrid and of the information about its parameters which is monitored online and persisted in the logging service.
2003
ATLAS High-Level Trigger, Data Acquisition and Controls Technical Design Report
This Technical Design Report (TDR) for the High-level Trigger (HLT), Data Acquisition (DAQ) and Controls of the ATLAS experiment builds on the earlier documents published on these systems: Trigger Performance Status Report, DAQ, EF, LVL2 and DCS Technical Progress Report, and High-Level Triggers, DAQ and DCS Technical Proposal. Much background and preparatory work relevant to this TDR is referenced in the above documents. In addition, a large amount of detailed technical documentation has been produced in support of this TDR. These documents are referenced in the appropriate places in the following chapters. This section introduces the overall organization of the document. The following sections give an overview of the principal system requirements and functions, as well as a brief description of the principal data types used in the Trigger/DAQ (TDAQ) system. The document has been organized into four parts: Part I — Global View Chapters 2, 3 and 4 address the principal system and experiment parameters which define the main requirements of the HLT, DAQ and Controls system. The global system operations, and the physics requirements and event selection strategy are also addressed. Chapter 5 defines the overall architecture of the system and analyses the requirements of its principal components, while Chapters 6 and 7 address more specific fault-tolerance and monitoring issues. Part II — System Components This part describes in more detail the principal components and functions of the system. Chapter 8 addresses the final prototype design and performance of the Data Flow component. These are responsible for the transport of event data from the output of the detector Read Out Links (ROLs) via the HLT system (where event selection takes place) to mass storage. Chapter 9 explains the decomposition of the HLT into a second level trigger (LVL2) and an Event Filter (EF). 
It details the design of the data flow within the HLT, the specifics of the HLT system supervision, and the design and implementation of the Event Selection Software (ESS). Chapter 10 addresses the Online Software which is responsible for the run control and DAQ supervision of the entire TDAQ and detector systems during data taking. It is also responsible for miscellaneous services such as error reporting, run parameter accessibility, and histogramming and monitoring support. Chapter 11 describes the Detector Control System (DCS), responsible for the control and supervision of all the detector hardware and of the services and the infrastructure of the experiment. The DCS is also the interface point for information exchange between ATLAS and the LHC accelerator. Chapter 12 draws together the various aspects of experiment control detailed in previous chapters and examines several use-cases for the overall operation and control of the experiment, including: data-taking operations, calibration runs, and operations required outside data-taking. Part III — System Performance Chapter 13 addresses the physics selection. The tools used for physics selection are described along with the event-selection algorithms and their performance. Overall HLT output rates and sizes are also discussed. An initial analysis of how ATLAS will handle the first year of running from the point of view of TDAQ is presented. Chapter 14 discusses the overall performance of the HLT/DAQ system from various points of view, namely the HLT performance as evaluated in dedicated testbeds, the overall performance of the TDAQ system in a testbed of ~10% ATLAS size, and functional tests of the system in the detector test beam environment. Data from these various testbeds are also used to calibrate a detailed discrete-event simulation model of data flow in the full-scale system.
Part IV — Organization and Planning Chapter 15 discusses quality-assurance issues and explains the software-development process employed. Chapter 16 presents the system costing and staging scenario. Chapter 17 presents the overall organization of the project and general system-resource issues. Chapter 18 presents the short-term HLT/DAQ work-plan for the next phase of the project as well as the global development schedule up to LHC turn-on in 2007.
2003
Next Generation of Grid Services for The NorduGrid
DOI: 10.22323/1.007.0277
2001
Evaluation of the computing resources required for a Nordic research exploitation of the LHC
DOI: 10.48550/arxiv.hep-ex/0012057
2000
Searching for physics beyond the Standard Model in the decay B+ -> K+K+pi-
The observation potential of the decay B+ -> K+K+pi- with the ATLAS detector at LHC is described in this paper. In the Standard Model this decay mode is highly suppressed, while in models beyond the Standard Model it could be significantly enhanced. To improve the selection of the K+K+pi- final state, a charged hadron identification using Time-over-Threshold measurements in the ATLAS Transition Radiation Tracker was developed and used.
DOI: 10.48550/arxiv.hep-ph/0003142
2000
Bottom Production
We review the prospects for bottom production physics at the LHC.
2002
High transverse momentum physics at the Large Hadron Collider
2002
The NorduGrid: Building a Production Grid in Scandinavia
The aim of the NorduGrid project is to build and operate a production Grid infrastructure in Scandinavia and Finland. The innovative middleware solutions allowed setting up a working testbed, which connects a dynamic set of computing resources by providing uniform access to them. The resources range from small test clusters at academic institutions to large farms at several supercomputer centers. This paper reviews the architecture and describes the Grid services, implemented via the NorduGrid middleware. As an example of a demanding application, a real use case of a High Energy Physics study is described.
DOI: 10.1016/s0010-4655(02)00274-6
2002
Regional research exploitation of the LHC: A case-study of the required computing resources
A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yard-stick. Other input parameters were: assumption for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.
DOI: 10.1007/s1010502cn002
2002
Observation potential of the decays B0s,d → J/ψη
The observation potential of the decays B0s,d → J/ψη with the ATLAS detector at the LHC is described in this paper. At present there exist only upper limits for the branching fractions, but at LHC a clear signal for the decay mode B0s,d → J/ψη is expected. The branching fraction of this decay mode can thus be measured, and other parameters such as the B0s lifetime can be measured as well. The decay mode B0s → J/ψη is analogous to the mode B0s → J/ψφ, which has been studied extensively in view of CP violation measurements. In these two decay modes, the CP asymmetry predicted by the Standard Model is very small, and the observation of a sizeable effect would be a signal of physics beyond the Standard Model. The decay mode J/ψη thus constitutes a cross-check for the mode J/ψφ. Furthermore, the former final state is a CP-eigenstate, so no angular analysis is needed. The reconstruction of η-mesons at LHC experiments has not been addressed before, and therefore the study presented here can also be regarded as an example of the physics prospects with η-mesons at the LHC.
DOI: 10.1016/0168-9002(90)90708-e
1990
A novel method for parton reconstruction in collider experiments
A novel method for reconstructing quark and gluon jets in colliding-beam experiments is introduced. The new generation of detectors with particle tracking and highly granular calorimeters is designed for parton reconstruction rather than single-hadron measurements. A two-way algorithm utilizing independently formed clusters of tracks and energy depositions allows us to identify the initial partons with high precision and efficiency.
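As an illustration of the two-way idea, one can require that a track-based jet and a calorimeter cluster select each other as mutual nearest neighbours in (η, φ) before a parton candidate is formed. The sketch below is our own minimal rendering; the data layout, cut value and function names are hypothetical and not taken from the paper.

```python
import math

def delta_r(a, b):
    """Angular distance between two (eta, phi) directions."""
    deta = a[0] - b[0]
    dphi = (a[1] - b[1] + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return math.hypot(deta, dphi)

def two_way_match(track_jets, calo_jets, max_dr=0.3):
    """Two-way association in the spirit of the method: a parton candidate
    is kept only when a track jet and a calorimeter cluster point at each
    other as mutual nearest neighbours within max_dr."""
    matches = []
    for i, tj in enumerate(track_jets):
        j = min(range(len(calo_jets)), key=lambda k: delta_r(tj, calo_jets[k]))
        i_back = min(range(len(track_jets)),
                     key=lambda k: delta_r(track_jets[k], calo_jets[j]))
        if i_back == i and delta_r(tj, calo_jets[j]) < max_dr:
            matches.append((i, j))
    return matches

# Two track jets, three calorimeter clusters (one unmatched), as (eta, phi).
track_jets = [(0.5, 1.0), (-1.2, 2.5)]
calo_jets = [(0.52, 1.05), (-1.25, 2.45), (2.0, -0.5)]
print(two_way_match(track_jets, calo_jets))
```

The mutual-nearest-neighbour requirement is what makes the association "two-way": a cluster formed from detector noise is unlikely to be the best partner of any track jet, and vice versa.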
DOI: 10.1016/0168-9002(88)90105-2
1988
Fast triggering with a plastic streamer tube calorimeter
We describe the use of the DELPHI Hadron Calorimeter, a plastic streamer tube detector, in conjunction with the DELPHI fast triggers. The essential parameters of this operation are the particle localization capability, affected by the charge diffusion on the graphite-coated cathode and the inherent noise of the streamer tubes. We present results of measurements of these features, performed on an installed calorimeter module, and give estimates of the fast trigger performance.
DOI: 10.1088/0954-3899/19/11/009
1993
A feasibility study of measuring CP-violation in B-decays with a pp collider experiment
The feasibility of measuring CP-violation in B-meson systems at the Large Hadron Collider with a mainstream collider experiment has been studied. The decay channel Bd0 → ψKs0 was investigated in detail by simulating the acceptance and resolution of an experiment proposed to be built for the LHC. Compared with dedicated B-physics experiments planned for the LHC and other accelerators, the sensitivity of a general-purpose detector in measuring CP-violation in ψKs0 final states was found to be similar.
DOI: 10.1142/s0129065792000668
1992
CLASSIFICATION OF THE DECAYS OF THE Z0 INTO b AND c QUARK PAIRS USING A NEURAL NETWORK
A classifier based on a Feed-Forward Neural Network has been used for separating a sample of about 123,500 selected hadronic decays of the Z0, collected by DELPHI during 1991, into three classes according to the flavour of the original quark pair: light quarks (unresolved), cc̄ and bb̄. The classification has been used to compute the partial widths of the Z0 into b and c quark pairs.
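A feed-forward classifier of the kind described reduces each event to a vector of discriminating variables and maps it to probabilities for the three flavour classes. Below is a minimal stdlib-only sketch of such a forward pass; the layer sizes, random weights and the sigmoid/softmax choices are illustrative, and the trained DELPHI network is not reproduced.

```python
import math
import random

def feed_forward(x, w1, b1, w2, b2):
    """One-hidden-layer network of the kind used for flavour separation:
    sigmoid hidden units, softmax over the three output classes
    (light/unresolved, c cbar, b bbar)."""
    hidden = [1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(row, x)) + b)))
              for row, b in zip(w1, b1)]
    logits = [sum(wi * hi for wi, hi in zip(row, hidden)) + b
              for row, b in zip(w2, b2)]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]         # class probabilities, summing to 1

random.seed(0)
n_in, n_hid, n_out = 4, 5, 3             # event variables, hidden units, classes
w1 = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [[random.gauss(0, 1) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out
probs = feed_forward([0.2, -1.0, 0.5, 0.1], w1, b1, w2, b2)
print(sum(probs))
```

In the published analysis the per-event class probabilities are aggregated over the whole sample to extract the Z0 partial widths into b and c quark pairs.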
DOI: 10.1063/1.43347
1992
The DELPHI microvertex detector
The main characteristics of the DELPHI Microvertex Detector are presented. The performance in terms of impact parameter resolution, association efficiency, and ambiguity is evaluated after two years of data taking at LEP.
1992
Classification of the Decays of the Z0 into b and c Quark Pairs Using a Neural Network
1991
Local charge compensation in Z0 hadronic decays
1992
Proceedings of the Conference on Physics and Experiments with Linear Colliders: Saariselkä, Finland, 9-14 September 1991