
J. Fulcher

DOI: 10.1016/s0168-9002(01)00589-7
2001
Cited 320 times
Design and results from the APV25, a deep sub-micron CMOS front-end chip for the CMS tracker
The APV25 is a 128-channel analogue pipeline chip for the readout of silicon microstrip detectors in the CMS tracker at the LHC. Each channel comprises a low noise amplifier, a 192-cell analogue pipeline and a deconvolution readout circuit. Output data are transmitted on a single differential current output via an analogue multiplexer. The chip is fabricated in a standard 0.25 μm CMOS process to take advantage of the radiation tolerance, lower noise and power, and high circuit density. Experimental characterisation of this circuit shows full functionality and good performance both in pre- and post-irradiation (20 Mrad) conditions. The measured noise is significantly reduced compared to earlier APV versions. A description of the design and results from measurements prior to irradiation are presented.
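The pipeline and deconvolution readout mentioned above can be illustrated numerically. The sketch below is not the APV25 circuit itself: it assumes an ideal CR-RC front end with a 50 ns shaping time sampled at the 25 ns bunch-crossing interval, and applies the standard three-weight deconvolution filter from the deconvolution-method literature to show how the shaped pulse is confined to a single bunch crossing.

```python
# Illustrative sketch only (not the on-chip implementation).
import math

TAU = 50.0   # assumed CR-RC shaping time constant of the front end [ns]
DT = 25.0    # LHC bunch-crossing interval [ns]
x = DT / TAU

# Three weights that invert an ideal CR-RC response (deconvolution method)
w1 = math.exp(x - 1.0) / x
w2 = -2.0 * math.exp(-1.0) / x
w3 = math.exp(-1.0 - x) / x

def crrc(t_ns):
    """Ideal CR-RC pulse for unit input charge; peaks at t = TAU."""
    return (t_ns / TAU) * math.exp(-t_ns / TAU) if t_ns > 0 else 0.0

samples = [crrc(n * DT) for n in range(8)]   # pipeline samples, signal arriving at t = 0
deconv = []
for n in range(len(samples)):
    s0 = samples[n]
    s1 = samples[n - 1] if n >= 1 else 0.0
    s2 = samples[n - 2] if n >= 2 else 0.0
    deconv.append(w1 * s0 + w2 * s1 + w3 * s2)

print([round(v, 3) for v in deconv])   # non-zero in a single 25 ns bucket only
```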
DOI: 10.1016/j.physletb.2011.08.023
2011
Cited 66 times
Comparative results on collimation of the SPS beam of protons and Pb ions with bent crystals
New experiments on crystal assisted collimation have been carried out at the CERN SPS with stored beams of 120 GeV/c protons and Pb ions. Bent silicon crystals, 2 mm long with about 170 μrad bend angle and a small residual torsion, were used as primary collimators. In channeling conditions, the beam loss rate induced by inelastic interactions of particles with the crystal nuclei is minimal. The loss reduction was about 6 for protons and about 3 for Pb ions. The lower reduction value for Pb ions can be explained by their considerably larger ionization losses in the crystal. In one of the crystals, the measured fraction of the Pb ion beam halo deflected in channeling conditions was 74%, a value very close to that for protons. The intensity of the off-momentum halo leaking out from the collimation station was measured in the first high dispersion area downstream. The particle population in the shadow of the secondary collimator–absorber was considerably smaller in channeling conditions than for amorphous orientations of the crystal. The corresponding reduction was in the range of 2–5 for both protons and Pb ions.
DOI: 10.1016/j.physletb.2012.07.006
2012
Cited 56 times
Strong reduction of the off-momentum halo in crystal assisted collimation of the SPS beam
A study of crystal assisted collimation has been continued at the CERN SPS for different energies of stored beams using 120 GeV/c and 270 GeV/c protons and Pb ions with 270 GeV/c per charge. A bent silicon crystal used as a primary collimator deflected halo particles by channeling, directing them into the tungsten absorber. A strong correlation between the beam losses in the crystal and the off-momentum halo intensity measured in the first high dispersion (HD) area downstream was observed. In channeling conditions, the beam loss rate induced by inelastic interactions of particles with nuclei is significantly reduced in comparison with the non-oriented crystal. A maximal reduction of beam losses in the crystal larger than 20 was observed with 270 GeV/c protons. The off-momentum halo intensity measured in the HD area was also strongly reduced in channeling conditions. The reduction coefficient was larger than 7 for the case of Pb ions. A strong loss reduction was also detected in regions of the SPS ring far from the collimation area. Simulations showed that the miscut angle between the crystal surface and its crystallographic planes doubled the beam losses in the aligned crystal.
DOI: 10.1088/1748-0221/6/04/p04006
2011
Cited 44 times
Design and performance of a high rate, high angular resolution beam telescope used for crystal channeling studies
A charged particle telescope has been constructed for data taking at high rates in a CERN 400 GeV/c proton beam line. It utilises ten planes of silicon microstrip sensors, arranged as five pairs each measuring two orthogonal coordinates, with an active area of 3.8 × 3.8 cm². The objective was to provide excellent angular and spatial resolution for measuring the trajectories of incident and outgoing particles. The apparatus has a long baseline, of approximately 10 m in each arm, and achieves an angular resolution in the incoming arm of 2.8 μrad and a total angular resolution on the difference of the two arms of 5.2 μrad, with performance limited by multiple scattering in the sensor layers. The sensors are instrumented by a system based on the CMS Tracker electronic readout chain, including analogue signal readout for optimal spatial resolution. The system profits from modified CMS software and hardware to provide a data acquisition capable of peak trigger rates of at least 7 kHz. We describe the sensor readout, electronic hardware and software, together with the measured performance of the telescope during studies of crystal channeling for the UA9 collaboration. Measurements of a previously unobserved periodic movement of the beam are also presented and the significance of such an effect for precise studies such as for channeling is discussed.
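A back-of-envelope estimate shows how the quoted angular resolution relates to point resolution and lever arm; the 20 μm single-plane resolution assumed below is illustrative and not taken from the paper.

```python
# Rough sketch: the angular resolution of a two-point track segment is
# approximately sqrt(2)*sigma_x / L. All inputs except L are assumptions.
import math

sigma_x = 20e-6   # assumed single-plane point resolution [m]
L = 10.0          # baseline of one telescope arm [m] (from the abstract)

theta_arm = math.sqrt(2) * sigma_x / L    # one arm
theta_diff = math.sqrt(2) * theta_arm     # difference of two identical arms
print(f"arm: {theta_arm*1e6:.1f} urad, in-out difference: {theta_diff*1e6:.1f} urad")
# -> ~2.8 urad and ~4.0 urad; the measured 5.2 urad additionally includes
#    multiple scattering in the sensor layers, as the abstract states.
```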
DOI: 10.1016/j.physletb.2014.05.010
2014
Cited 33 times
Observation of focusing of 400 GeV/c proton beam with the help of bent crystals
The results of observation and studies of focusing of a 400 GeV/c proton beam with the help of bent single crystals are presented. Two silicon crystals have been used in the measurements. The focal lengths of the first and second crystals are found to be 1.48 m and 0.68 m, respectively. The mean square size of the horizontal profile at the focus was 3.1 and 4.3 times smaller than at the exit of the crystals.
DOI: 10.1142/s0217751x22300046
2022
Cited 9 times
Feasibility of crystal-assisted collimation in the CERN accelerator complex
Bent silicon crystals mounted on high-accuracy angular actuators were installed in the CERN Super Proton Synchrotron (SPS) and extensively tested to assess the feasibility of crystal-assisted collimation in circular hadron colliders. The adopted layout was exploited and regularly upgraded for about a decade by the UA9 Collaboration. The investigations provided compelling evidence of a strong reduction of beam losses induced by nuclear inelastic interactions in the aligned crystals in comparison with the amorphous orientation. A conceptually similar device, installed in the betatron cleaning insertion of the CERN Large Hadron Collider (LHC), was operated through the complete acceleration and storage cycle and demonstrated a large reduction of the background leaking from the collimation region and radiated into the cold sections of the accelerator and the experimental detectors. The implemented layout and the relevant results of the beam tests performed in the SPS and in the LHC with stored proton and ion beams are extensively discussed.
DOI: 10.1109/nssmic.2000.949881
2002
Cited 45 times
The APV25 0.25 μm CMOS readout chip for the CMS tracker
The APV25 is a chip designed for readout of silicon microstrips in the CMS tracker at the CERN Large Hadron Collider. It is the first major chip for a high energy physics experiment to exploit a modern commercial 0.25 μm CMOS technology. Experimental characterisation of the circuit shows excellent performance before and after irradiation. Automated probe testing of many chips has demonstrated a very high yield. A summary of the design, detailed results from measurements, and probe testing results are presented.
DOI: 10.1016/j.physletb.2013.08.028
2013
Cited 24 times
Optimization of the crystal assisted collimation of the SPS beam
The possibility for optimization of crystal assisted collimation has been studied at the CERN SPS for stored beams of protons and Pb ions with 270 GeV/c per unit charge. A bent silicon crystal used as a primary collimator deflects halo particles in the channeling regime, directing them into a tungsten absorber. In channeling conditions a strong reduction of off-momentum particle numbers produced in the crystal and absorber, which form collimation leakage, has been observed in the first high dispersion (HD) area downstream. The present study shows that the collimation leakage is minimal for some values of the absorber offset relative to the crystal. The optimal offset value is larger for Pb ions because of their considerably larger ionization losses in the crystal, which cause large increases of particle betatron oscillation amplitudes. The optimal absorber offset allows obtaining maximal efficiency of crystal-assisted collimation.
DOI: 10.1088/1748-0221/7/01/c01033
2012
Cited 23 times
The CMS binary chip for microstrip tracker readout at the SLHC
A 130 nm CMOS chip, the CMS Binary Chip (CBC), has been designed for silicon microstrip readout at the SLHC. The CBC has 128 channels, and utilises a binary un-sparsified architecture for chip and system simplicity. It is designed to read out signals of either polarity from short strips (capacitances up to ~ 10 pF) and can sink or source sensor leakage currents up to 1 μA. Details of the design and measured performance are presented.
DOI: 10.1016/j.physletb.2014.04.062
2014
Cited 23 times
Mirroring of 400 GeV/c protons by an ultra-thin straight crystal
Channeling is the confinement of the trajectory of a charged particle in a crystalline solid. Positively charged particles channeled between crystal planes oscillate with a certain oscillation length, which depends on particle energy. A crystal whose thickness is half the oscillation length for planar channeling may act as a mirror for charged particles. If the incident angle of the particle trajectory with the crystal plane is less than the critical angle for channeling, under-barrier particles undergo half an oscillation and exit the crystal with the reversal of their transverse momentum, i.e., the particles are “mirrored” by the crystal planes. Unlike the traditional scheme relying on millimeter-long curved crystals, particle mirroring enables beam steering in high-energy accelerators via interactions with a micrometer-thin straight crystal. The main advantage of mirroring is the interaction with a minimal amount of material along the beam, thereby decreasing unwanted incoherent nuclear interactions. The effectiveness of the mirror effect for ultrarelativistic positive particles has been experimentally proven at external lines of the CERN-SPS. The mirroring effect in a 26.5-μm-thick Si crystal has been studied in an experiment with a narrow beam of 400 GeV/c protons at the CERN-SPS. The reflection efficiency for a quasi-parallel beam is larger than 80%.
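The half-wavelength argument can be checked with textbook-level numbers for silicon; the interplanar spacing and planar potential well depth used below are assumptions, not values taken from the paper.

```python
# Hedged numerical check of the half-wavelength mirror argument, using the
# harmonic approximation of the Si (110) planar potential.
import math

pv = 400e9        # proton momentum*velocity [eV] for 400 GeV/c
d_p = 1.92e-10    # assumed Si (110) interplanar spacing [m]
U0 = 22.7         # assumed planar potential well depth [eV]

osc_length = math.pi * d_p * math.sqrt(pv / (2 * U0))   # channeling oscillation length
theta_c = math.sqrt(2 * U0 / pv)                        # critical channeling angle

print(f"oscillation length ~ {osc_length*1e6:.0f} um, half ~ {osc_length*5e5:.0f} um")
print(f"critical angle ~ {theta_c*1e6:.0f} urad")
# Half a wavelength is ~28 um, close to the 26.5 um crystal used in the test.
```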
DOI: 10.1016/j.physletb.2015.07.040
2015
Cited 21 times
Observation of strong leakage reduction in crystal assisted collimation of the SPS beam
In an ideal two-stage collimation system, the secondary collimator–absorber should be long enough to practically exclude the escape of halo particles with large impact parameters. In the UA9 experiments on crystal assisted collimation of the SPS beam, a 60 cm long tungsten bar is used as the secondary collimator–absorber, which is insufficient for full absorption of the halo protons. Multi-turn simulation studies of the collimation made it possible to select a position for the beam loss monitor downstream of the collimation area where the contribution of particles deflected by the crystal in the channeling regime but emerging from the secondary collimator–absorber is considerably reduced. This allowed observation of a strong reduction of the leakage of halo protons from the SPS beam collimation area, thereby approaching the case of an ideal absorber.
DOI: 10.1088/1748-0221/6/10/t10002
2011
Cited 22 times
The UA9 experimental layout
The UA9 experimental equipment was installed in the CERN-SPS in March 2009 with the aim of investigating crystal assisted collimation in coasting mode. Its basic layout comprises silicon bent crystals acting as primary collimators, mounted inside two vacuum vessels. A movable 60 cm long block of tungsten located downstream, at about 90 degrees phase advance, intercepts the deflected beam. Scintillators, Gas Electron Multiplier chambers and other beam loss monitors measure nuclear loss rates induced by the interaction of the beam halo in the crystal. Roman pots are installed in the path of the deflected particles and are equipped with a Medipix detector to reconstruct the transverse distribution of the impinging beam. Finally, UA9 takes advantage of an LHC-collimator prototype installed close to the Roman pot to help in setting the beam conditions and to analyze the deflection efficiency. This paper describes in detail the hardware installed to study crystal collimation during 2010.
DOI: 10.1016/j.physletb.2011.05.060
2011
Cited 20 times
Observation of parametric X-rays produced by 400 GeV/c protons in bent crystals
Spectral maxima of parametric X-ray radiation (PXR) produced by 400 GeV/c protons in bent silicon crystals aligned with the beam have been observed in an experiment at the H8 external beam of the CERN SPS. The total yield of PXR photons was about 10⁻⁶ per proton. Agreement between calculations and the experimental data shows that the PXR kinematic theory is valid for bent crystals with sufficiently small curvature as used in the experiment. The intensity of PXR emitted from halo protons in a bent crystal used as a primary collimator in a circular accelerator may be considered as a possible tool to control its crystal structure, which is slowly damaged because of irradiation. The intensity distribution of PXR peaks depends on the crystal thickness intersected by the beam, which changes for different orientations of a crystal collimator. This dependence may be used to control crystal collimator alignment by analyzing PXR spectra produced by halo protons.
DOI: 10.5170/cern-2000-010.130
2000
Cited 32 times
The CMS tracker APV25 0.25 μm CMOS readout chip
DOI: 10.1016/j.nimb.2014.08.013
2014
Cited 10 times
Deflection of high energy protons by multiple volume reflections in a modified multi-strip silicon deflector
The effect of multiple volume reflections in one crystal was observed in each of several bent silicon strips for 400 GeV/c protons. This considerably increased the particle deflections. Some particles were also deflected due to channeling in one of the subsequent strips. As a result, the incident beam was strongly spread because of the opposite directions of the deflections. A modified multi-strip deflector, produced by periodic grooves on the surface of a thick silicon plate, was used for these measurements. This technique provides perfect mutual alignment between crystal strips. Such a multi-strip deflector may be effective for collider beam halo collimation, and a study with the CERN SPS circulating beam is planned.
DOI: 10.1140/epjc/s10052-018-5985-8
2018
Cited 10 times
Study of inelastic nuclear interactions of 400 GeV/c protons in bent silicon crystals for beam steering purposes
The inelastic nuclear interaction probability of 400 GeV/c protons interacting with bent silicon crystals was investigated, in particular for both types of crystals installed at the CERN Large Hadron Collider for beam collimation purposes. In comparison to amorphous scattering interaction, in planar channeling this probability is ~36% for the quasi-mosaic type (planes (111)), and ~27% for the strip type (planes (110)). Moreover, the absolute inelastic nuclear interaction probability in the axial channeling orientation, along the ⟨110⟩ axis, was estimated for the first time, finding a value of 0.6% for a crystal 2 mm long along the beam direction, with a bending angle of 55 μrad. This value is more than two times lower with respect to the planar channeling orientation of the same crystal, and increases with the vertical angular misalignment. Finally, the correlation between the inelastic nuclear interaction probability in planar channeling and the silicon crystal curvature is reported.
DOI: 10.1109/tns.2005.860173
2005
Cited 15 times
The CMS tracker readout front end driver
The front end driver (FED) is a 9U 400 mm VME64x card designed for reading out the Compact Muon Solenoid (CMS) silicon tracker signals transmitted by the APV25 analogue pipeline application-specific integrated circuits. The FED receives the signals via 96 optical fibers at a total input rate of 3.4 GB/s. The signals are digitized and processed by applying algorithms for pedestal and common mode noise subtraction. Algorithms that search for clusters of hits are used to further reduce the input rate. Only the cluster data along with trigger information of the event are transmitted to the CMS data acquisition system using the S-LINK64 protocol at a maximum rate of 400 MB/s. All data processing algorithms on the FED are executed in large on-board field programmable gate arrays. Results on the design, performance, testing and quality control of the FED are presented and discussed.
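The processing chain described above (pedestal subtraction, common-mode subtraction, cluster finding) can be sketched as follows; the median common-mode estimator and the thresholds are illustrative assumptions, not the FED firmware algorithm.

```python
# Illustrative sketch of FED-style zero suppression for one APV25 (128 strips).
from statistics import median

def zero_suppress(raw, pedestals, seed_thr=5.0, neigh_thr=2.0):
    """raw, pedestals: per-strip ADC values; thresholds in ADC counts (assumed)."""
    ped_sub = [r - p for r, p in zip(raw, pedestals)]
    cm = median(ped_sub)                        # common-mode estimate (assumption)
    sig = [s - cm for s in ped_sub]
    clusters, current = [], []
    for strip, adc in enumerate(sig):
        if adc > neigh_thr:
            current.append((strip, adc))
        else:
            if current and max(a for _, a in current) > seed_thr:
                clusters.append(current)        # keep clusters containing a seed strip
            current = []
    if current and max(a for _, a in current) > seed_thr:
        clusters.append(current)
    return clusters

# Example: a 3-strip cluster around strip 40 on an otherwise flat baseline
raw = [100.0] * 128
raw[39], raw[40], raw[41] = 103.0, 112.0, 104.0
print(zero_suppress(raw, [100.0] * 128))
```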
DOI: 10.1088/1748-0221/7/08/c08006
2012
Cited 9 times
The CBC microstrip readout chip for CMS at the High Luminosity LHC
The CMS Binary Chip (CBC) is designed for readout of silicon microstrips in the CMS Tracker at the High Luminosity LHC (HL-LHC). Binary, unsparsified readout is well suited to the high luminosity environment, where particle fluences and data rates will be much higher than at the LHC. In September 2011, a module comprising a CBC bonded to a silicon microstrip sensor was tested with 400 GeV protons in the H8 beamline at CERN. Performance was in agreement with expectations. The spatial resolution of the sensor and CBC has been shown to be better than pitch/√12, due to the spatial distribution of one- and two-strip clusters. Large cluster events show consistency with the production of delta rays. At operating thresholds, the hit efficiency has been shown to be approximately 98%, limited by the resolution of the timing apparatus, while the noise occupancy is measured to be below 10⁻⁴. The distribution of charge deposition in the sensor has been reconstructed by measurement of the hit efficiency as a function of comparator threshold, assuming the underlying distribution is a Landau.
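The pitch/√12 benchmark quoted above follows from the variance of a uniform distribution over one strip; the 90 μm pitch in the sketch below is an assumed, illustrative value.

```python
# Quick sketch of the binary single-strip resolution limit.
import math

pitch = 90e-6                       # assumed strip pitch [m]
binary_limit = pitch / math.sqrt(12)
print(f"binary single-strip limit ~ {binary_limit*1e6:.0f} um")
# Charge sharing produces a mix of 1- and 2-strip clusters; interpolating
# the 2-strip clusters is what lets the measured resolution beat this limit.
```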
DOI: 10.1109/nssmic.2015.7581984
2015
Cited 8 times
The CMS Timing and Control Distribution System
The Compact Muon Solenoid (CMS) experiment operating at the CERN (European Laboratory for Nuclear Physics) Large Hadron Collider (LHC) is in the process of upgrading several of its detector systems. Adding more individual detector components brings the need to test and commission those components separately from existing ones so as not to compromise physics data-taking. The CMS Trigger, Timing and Control (TTC) system had reached its limits in terms of the number of separate elements (partitions) that could be supported. A new Timing and Control Distribution System (TCDS) has been designed, built and commissioned in order to overcome this limit. It also brings additional functionality to facilitate parallel commissioning of new detector elements. The new TCDS system and its components will be described and results from the first operational experience with the TCDS in CMS will be shown.
DOI: 10.1016/s0168-9002(02)01356-6
2002
Cited 13 times
Single event upset studies on the CMS tracker APV25 readout chip
The microstrip tracker for the CMS experiment at the CERN Large Hadron Collider will be read out using APV25 chips. During high luminosity running the tracker will be exposed to particle fluxes up to 10⁷ cm⁻² s⁻¹, which raises concerns that the APV25 could occasionally suffer Single Event Upsets (SEUs). The effect of SEUs on the APV25 has been studied to investigate implications for CMS detector operation and, from the viewpoint of detailed circuit operation, to improve the understanding of their origin and what factors affect their magnitude. Simulations were performed to reconstruct the effects created by highly ionising particles striking sensitive parts of the circuits, along with consideration of the underlying mechanisms of charge deposition, collection and their consequences. A model to predict the behaviour of the memory circuits in the APV25 has been developed, and data collected from dedicated experiments using both heavy ions and hadrons have been shown to support it.
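As a rough orientation for how an SEU rate scales with flux, a back-of-envelope estimate is sketched below; the per-bit cross-section and bit count are invented for illustration and are not results from the paper.

```python
# Back-of-envelope scaling only: rate = flux * sigma_bit * n_bits.
flux = 1e7          # particle flux from the abstract [cm^-2 s^-1]
sigma_bit = 1e-14   # assumed per-bit upset cross-section [cm^2]
n_bits = 5000       # assumed number of sensitive memory bits per chip

rate_per_chip = flux * sigma_bit * n_bits        # upsets per second per chip
print(f"~{rate_per_chip:.1e} upsets/s/chip, i.e. one every "
      f"{1/rate_per_chip/3600:.1f} h")
```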
DOI: 10.22323/1.370.0111
2020
Cited 6 times
First measurements with the CMS DAQ and Timing Hub prototype-1
The DAQ and Timing Hub is an ATCA hub board designed for the Phase-2 upgrade of the CMS experiment. In addition to providing high-speed Ethernet connectivity to all back-end boards, it forms the bridge between the sub-detector electronics and the central DAQ, timing, and trigger control systems. One important requirement is the distribution of several high-precision, phase-stable, and LHC-synchronous clock signals for use by the timing detectors. The current paper presents first measurements performed on the initial prototype, with a focus on clock quality. It is demonstrated that the current design provides adequate clock quality to satisfy the requirements of the Phase-2 CMS timing detectors.
DOI: 10.1088/1742-6596/119/2/022008
2008
Cited 7 times
Data acquisition software for the CMS strip tracker
The CMS silicon strip tracker, providing a sensitive area of approximately 200 m² and comprising 10 million readout channels, has recently been completed at the tracker integration facility at CERN. The strip tracker community is currently working to develop and integrate the online and offline software frameworks, known as XDAQ and CMSSW respectively, for the purposes of data acquisition and detector commissioning and monitoring. Recent developments have seen the integration of many new services and tools within the online data acquisition system, such as event building, online distributed analysis, an online monitoring framework, and data storage management. We review the various software components that comprise the strip tracker data acquisition system and the software architectures used for stand-alone and global data-taking modes. Our experiences in commissioning and operating one of the largest ever silicon micro-strip tracking systems are also reviewed.
DOI: 10.1016/j.physletb.2015.02.072
2015
Cited 5 times
Observation of nuclear dechanneling length reduction for high energy protons in a short bent crystal
Deflection of 400 GeV/c protons by a short bent silicon crystal was studied at the CERN SPS. It was shown that the dechanneling probability increases while the dechanneling length decreases with an increase of incident angles of particles relative to the crystal planes. The observation of the dechanneling length reduction provides evidence of the particle population increase at the top levels of transverse energies in the potential well of the planar channels.
DOI: 10.1134/s0021364015100124
2015
Cited 4 times
Comparative results on the deflection of positively and negatively charged particles by multiple volume reflections in a multi-strip silicon deflector
DOI: 10.1016/j.physletb.2014.05.051
2014
Cited 4 times
Corrigendum to: Observation of focusing of 400 GeV/c proton beam with the help of bent crystals [Phys. Lett. B 733 (2014) 366–372]
DOI: 10.5170/cern-2003-006.255
2003
Cited 8 times
The CMS Tracker Front-End Driver
The Front End Driver (FED) is a 9U 400mm VME64x card designed for reading out the CMS silicon tracker signals transmitted by the APV25 analogue pipeline ASICs. The signals are transmitted to each FED via 96 optical fibers at a total input rate corresponding to 3 Gbytes/s. The FED digitizes the signals and processes the data digitally by applying algorithms for pedestal and common mode noise subtraction. The input data rate is reduced using algorithms that search for clusters of hits. Only the cluster data along with trigger information of the event are transmitted to the CMS DAQ system using the SLINK-64 protocol at a maximum rate of 640 Mbytes/s. All data processing algorithms on the FED are executed in large on-board Field Programmable Gate Arrays (FPGA). Two FED cards have been manufactured during the last quarter of 2002. Results on the performance of the FED are presented and discussed.
DOI: 10.1051/epjconf/202024501032
2020
Cited 4 times
40 MHz Level-1 Trigger Scouting for CMS
The CMS experiment will be upgraded for operation at the High-Luminosity LHC to maintain and extend its physics performance under extreme pileup conditions. Upgrades will include an entirely new tracking system, supplemented by a track finder processor providing tracks at Level-1, as well as a high-granularity calorimeter in the endcap region. New front-end and back-end electronics will also provide the Level-1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded Level-1 processors, based on powerful FPGAs, will be able to carry out sophisticated feature searches with resolutions often similar to the offline ones, while keeping pileup effects under control. In this paper, we discuss the feasibility of a system capturing Level-1 intermediate data at the beam-crossing rate of 40 MHz and carrying out online analyses based on these limited-resolution data. This 40 MHz scouting system would provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements and, in some cases, calibrations. It has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the Level-1 accept budget, or with requirements which are orthogonal to “mainstream” physics, such as long-lived particles. We discuss the requirements and possible architecture of a 40 MHz scouting system, as well as some of the physics potential, and results from a demonstrator operated at the end of Run-2 using the Global Muon Trigger data from CMS. Plans for further demonstrators envisaged for Run-3 are also discussed.
DOI: 10.1109/rtc.2016.7543164
2016
Cited 3 times
Performance of the new DAQ system of the CMS experiment for run-2
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of more than 100 GB/s to the High-Level Trigger (HLT) farm. The HLT farm selects and classifies interesting events for storage and offline analysis at an output rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013-2014. The motivation for this upgrade was twofold. Firstly, the compute nodes, networking and storage infrastructure were reaching the end of their lifetimes. Secondly, in order to maintain physics performance with higher LHC luminosities and increasing event pileup, a number of sub-detectors are being upgraded, increasing the number of readout channels as well as the required throughput, and replacing the off-detector readout electronics with a MicroTCA-based DAQ interface. The new DAQ architecture takes advantage of the latest developments in the computing industry. For data concentration 10/40 Gbit/s Ethernet technologies are used, and a 56 Gbit/s Infiniband FDR CLOS network (total throughput ≈ 4 Tbit/s) has been chosen for the event builder. The upgraded DAQ-HLT interface is entirely file-based, essentially decoupling the DAQ and HLT systems. The fully-built events are transported to the HLT over 10/40 Gbit/s Ethernet via a network file system. The collection of events accepted by the HLT and the corresponding metadata are buffered on a global file system before being transferred off-site. The monitoring of the HLT farm and the data-taking performance is based on the Elasticsearch analytics tool. This paper presents the requirements, implementation, and performance of the system. Experience is reported on the first year of operation with LHC proton-proton runs as well as with the heavy ion lead-lead runs in 2015.
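A quick consistency check of the headline throughput figures (the average event size is derived here, not quoted in the abstract):

```python
# Throughput arithmetic for the Run-2 DAQ figures quoted above.
l1_rate = 100e3              # Level-1 accept rate [Hz]
builder_throughput = 100e9   # event-builder throughput [B/s]
hlt_output = 1e3             # HLT output rate [Hz]

event_size = builder_throughput / l1_rate
print(f"implied event size ~ {event_size/1e6:.1f} MB, "
      f"HLT rejection ~ {l1_rate/hlt_output:.0f}x")
```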
DOI: 10.1086/386088
1995
Cited 10 times
The Loyalist Response to the Queen Caroline Agitations
It is curious that the unprecedented agitations in support of the rights of Caroline of Brunswick in 1820–21 have been represented as an “affair.” The word seems first to have been used by G. M. Trevelyan and was promptly seized on by Elie Halevy in his 1923 Histoire du peuple anglais au XIXe siècle. The labeling of this popular ebullience as an “affair” has consequently framed the development of its now not inconsiderable historiography. The episode was initially explained as a diversion from some main line of historical development, be it whiggish or Marxisant. More recently, historians have rescued the agitations from this condescension by showing how the radicals identified the king and the government's treatment of the queen as oppression and corruption at work. Since the common thread running through both whig and Marxisant accounts had been a concentration on the effects of the agitations on reform and radical politics, those attempting to put the episode back fully into their narratives emphasized the same factors. This time, however, it was to show that the agitations were not a diversion from the main line of reform politics. What follows is a further contribution to the process of giving greater attention to the queen's cause when telling the story of mass politics in this period, but one which concentrates on other neglected contexts and phenomena important for the explanation of this popular explosion. In the light of this, it may be necessary to change the way we refer to this episode.
DOI: 10.1109/tns.2017.2663442
2017
Cited 3 times
The New Global Muon Trigger of the CMS Experiment
For the 2016 physics data runs, the L1 trigger system of the Compact Muon Solenoid (CMS) experiment underwent a major upgrade to cope with the increasing instantaneous luminosity of the CERN LHC whilst maintaining a high event selection efficiency for the CMS physics program. Most subsystem specific trigger processor boards were replaced with powerful general purpose processor boards, conforming to the MicroTCA standard, whose tasks are performed by firmware on a field-programmable gate array of the Xilinx Virtex 7 family. Furthermore, the muon trigger system moved from a subsystem centered approach, where each of the three muon detector systems provides muon candidates to the global muon trigger (GMT), to a region-based system, where muon track finders (TFs) combine information from the subsystems to generate muon candidates in three detector regions that are then sent to the upgraded GMT. The upgraded GMT receives up to 108 muons from the processors of the muon TFs in the barrel, overlap, and endcap detector regions. The muons are sorted in two steps and duplicates are identified for removal. The first step treats muons from different processors of a TF in one detector region. Muons from TFs in different detector regions are compared in the second step. An isolation variable is calculated, using energy sums from the calorimeter trigger and added to each of the best eight muons before they are sent to the upgraded global trigger (GT) where the final trigger decision is made. The upgraded GMT algorithm is implemented on a general purpose processor board that uses optical links at 10 Gb/s to receive the input data from the muon TFs and the calorimeter energy sums, and to send the selected muon candidates to the upgraded GT.
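A minimal, single-pass simplification of the sort-and-deduplication scheme described above is sketched below; the matching window and the input format are invented for illustration, and the real algorithm runs as a two-step procedure in FPGA firmware with detector-specific matching criteria.

```python
# Pure illustration of the sort-and-deduplicate idea, not the GMT firmware.
def dedup_and_sort(muons, n_keep=8, match_window=0.1):
    """muons: dicts with pt, eta, phi and the track finder region they came from."""
    selected = []
    for mu in sorted(muons, key=lambda m: m["pt"], reverse=True):
        is_duplicate = any(
            abs(mu["eta"] - s["eta"]) < match_window
            and abs(mu["phi"] - s["phi"]) < match_window
            and mu["tf"] != s["tf"]          # same muon seen by two track finders
            for s in selected
        )
        if not is_duplicate:
            selected.append(mu)
    return selected[:n_keep]

candidates = [
    {"pt": 25.0, "eta": 1.20, "phi": 0.50, "tf": "overlap"},
    {"pt": 24.5, "eta": 1.21, "phi": 0.51, "tf": "endcap"},   # duplicate of the first
    {"pt": 10.0, "eta": -0.30, "phi": 2.00, "tf": "barrel"},
]
print(dedup_and_sort(candidates))
```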
DOI: 10.5170/cern-2004-010.222
2004
Cited 3 times
Performance of the CMS Silicon Tracker Front-End Driver
The CMS Silicon Tracker Front-End Driver (FED) is a 9U 400mm VME64x card which processes the raw data generated within the Silicon Tracker by the APV25 readout ASICs. The processed, zero-suppressed, data is then sent to the Data Acquisition System (DAQ). The first 2 FEDs were made at the beginning of 2003 and since then a further 15 FEDs of this type (FEDv1) have been manufactured. All hardware modifications to the FEDv1 design have now been completed and a new iteration of the board produced, called the FEDv2, which is expected to be the final version. The firmware and software development is close to completion. The performance of a FED in the laboratory is presented.
DOI: 10.1109/nssmic.2004.1462439
2005
Cited 3 times
The CMS tracker readout front end driver
The front end driver is a 9U 400mm VME64x card designed for reading out the CMS silicon tracker signals transmitted by the APV25 analogue pipeline ASICs. The FED receives the signals via 96 optical fibers at a total input rate of 3.4 GBytes/sec. The signals are digitized and processed by applying algorithms for pedestal and common mode noise subtraction. Algorithms that search for clusters of hits are used to further reduce the input rate. Only the cluster data along with trigger information of the event are transmitted to the CMS DAQ system using the S-LINK64 protocol at a maximum rate of 400 Mbytes/sec. All data processing algorithms on the FED are executed in large on-board FPGAs. Results on the design, performance, testing and quality control of the FED are presented and discussed.
DOI: 10.1051/epjconf/201921407017
2019
Experience with dynamic resource provisioning of the CMS online cluster using a cloud overlay
The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier1 Grid sites together. Moreover, it is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and hence can access the remote EOS based storage with a high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also makes it possible to abstract from the different types of machines and their underlying segmented networks. During the LHC technical stop periods, the HLT cloud is set to its static mode of operation where it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, once or more per day. For that, it dynamically follows LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of the fill.
DOI: 10.1016/j.physletb.2015.09.001
2015
Corrigendum to “Observation of strong leakage reduction in crystal assisted collimation of the SPS beam” [Phys. Lett. B 748 (2015) 451–454]
W. Scandale a,b,e, F. Andrisani a, G. Arduini a, M. Butcher a, F. Cerutti a, M. Garattini a, S. Gilardoni a, A. Lechner a, R. Losito a, A. Masi a, A. Mereghetti a, E. Metral a, D. Mirarchi a,j, S. Montesano a, S. Redaelli a, R. Rossi a,e, P. Schoofs a, G. Smirnov a, E. Bagli c, L. Bandiera c, S. Baricordi c, P. Dalpiaz c, G. Germogli c, V. Guidi c, A. Mazzolari c, D. Vincenzi c, G. Claps d, S. Dabagov d,k,l, D. Hampai d, F. Murtas d, G. Cavoto e, F. Iacoangeli e, L. Ludovici e, R. Santacesaria e, P. Valente e, F. Galluccio f, A.G. Afonin g, Yu.A. Chesnokov g, A.A. Durumg, V.A. Maisheev g, Yu.E. Sandomirskiy g, A.A. Yanovich g, A.D. Kovalenko h, A.M. Taratin h,∗, Yu.A. Gavrikov i, Yu.M. Ivanov i, L.P. Lapina i, J. Fulcher j, G. Hall j, M. Pesaresi j, M. Raymond j
DOI: 10.5170/cern-2004-010.375
2004
The Manufacture of the CMS Tracker Front-End Driver
The Front-End Driver (FED) is a 9U 400mm VME64x card designed for reading out the CMS silicon tracker. The FED was designed to maximise the number of channels that could be processed on a single 9U board and has a mixture of optical, analogue (96 ADC channels) and digital, Field Programmable Gate Array (FPGA), components. Nevertheless, a total of 440 FED boards are required to readout the entire tracker. Nearly 20 full-scale prototype 9U FED boards have been produced to date. This paper concentrates on the issues of the large-scale manufacture and assembly of PCBs. It also discusses the issues of production testing of such large and complex electronic cards.
DOI: 10.1088/1748-0221/5/07/p07007
2010
Studies of the CMS tracker at high trigger rate
During the latter months of 2006 and the first half of 2007, the CMS Tracker was assembled and operated at the Tracker Integration Facility at CERN. During this period the performance of the tracker at trigger rates up to 100 kHz was assessed, and a source of high occupancy events was uncovered, diagnosed, and mitigated.
DOI: 10.22323/1.270.0022
2017
Opportunistic usage of the CMS online cluster using a cloud overlay
After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to the grid when it isn’t used to its full capacity for data acquisition. This occurs during the maintenance periods when the LHC is non-operational, which corresponded to 117 days in 2015. During 2016, the aim is to increase the availability of the CMS Online cluster for data processing by making the cluster accessible during the time between two physics collisions while the LHC and beams are being prepared. This is usually the case for a few hours every day, which would vastly increase the computing power available for data processing. Work has already been undertaken to provide this functionality, as an OpenStack cloud layer has been deployed as a minimal overlay that leaves the primary role of the cluster untouched. This overlay also abstracts the different hardware and networks that the cluster is composed of. The operation of the cloud (starting and stopping the virtual machines) is another challenge that has been overcome as the cluster has only a few hours spare during the aforementioned beam preparation. By improving the virtual image deployment and integrating the OpenStack services with the core services of the Data Acquisition on the CMS Online cluster it is now possible to start a thousand virtual machines within 10 minutes and to turn them off within seconds. This document will explain the architectural choices that were made to reach a fully redundant and scalable cloud, with a minimal impact on the running cluster configuration while giving a maximal segregation between the services. It will also present how to cold start 1000 virtual machines 25 times faster, using tools commonly utilised in all data centres.
DOI: 10.1088/1742-6596/119/2/022009
2008
Local reconstruction software for the CMS silicon strip tracker
CMS has a two level trigger system. The first stage is hardware based and provides fast trigger decisions up to a rate of 100 kHz. The second stage, known as the High-Level Trigger, is entirely software based and required to provide a trigger decision within 40 ms and a rejection factor of a thousand to achieve a write-to-disk rate of 100 Hz. One of the most CPU-intensive tasks within the High-Level Trigger is the reconstruction of tracking hits using raw data from the strip tracker. This study profiles the performance of these reconstruction algorithms. Even at low luminosities, the average processing time is 5.5 s, which already exceeds the HLT budget. A new schema, optimised for speed and performance, has been developed to reconstruct hits within regions-of-interest only. For the entire sub-detector, hit reconstruction times are reduced to 140 ms. Since only 10 % of High-Level Trigger events are expected to require track reconstruction, the average contribution per event is then ∼14 ms, i.e. 30 % of the full budget. Regional reconstruction is tested over Z0 → e+e− events, by unpacking in η-ϕ windows of 0.16 × 0.16 around seeds identified in the calorimeter. In this case, only 2 ± 1 % of the silicon strip tracker raw data is reconstructed in 5 ± 3 ms (or an average contribution per event of 0.5 ms) whilst maintaining 99 % of the original dielectron trigger efficiency.
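The timing arithmetic quoted in the abstract can be reproduced directly:

```python
# Reproducing the average-time-per-event arithmetic from the abstract.
hlt_budget_ms = 40.0            # per-event High-Level Trigger budget
full_detector_ms = 140.0        # hit reconstruction for the whole strip tracker
fraction_needing_tracking = 0.10

print(f"full-detector: {full_detector_ms * fraction_needing_tracking:.0f} ms/event "
      f"on average (vs a {hlt_budget_ms:.0f} ms budget)")

regional_ms = 5.0               # unpacking only eta-phi windows around calorimeter seeds
print(f"regional     : {regional_ms * fraction_needing_tracking:.1f} ms/event on average")
```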
DOI: 10.22323/1.313.0075
2018
The FEROL40, a microTCA card interfacing custom point-to-point links and standard TCP/IP
In order to accommodate new back-end electronics of upgraded CMS sub-detectors, a new FEROL40 card in the microTCA standard has been developed. The main function of the FEROL40 is to acquire event data over multiple point-to-point serial optical links, provide buffering, perform protocol conversion, and transmit multiple TCP/IP streams (4×10 Gbps) to the Ethernet network of the aggregation layer of the CMS DAQ (data acquisition) event builder. This contribution discusses the design of the FEROL40 and experience from operation.
DOI: 10.1111/j.1468-2281.1994.tb02315.x
1994
Cited 4 times
Gender, politics and class in the early nineteenth-century English reform movement
Historical Research, Volume 67, Issue 162, February 1994, Pages 57–74. https://doi.org/10.1111/j.1468-2281.1994.tb02315.x. Published online: 12 October 2007.
DOI: 10.5170/cern-2007-001.419
2007
Commissioning and Calibrating the CMS Silicon Strip Tracker
The data acquisition system for the CMS Silicon Strip Tracker (SST) is based around a custom analogue front-end ASIC, an analogue optical link system and an off-detector VME board that performs digitization, zero-suppression and data formatting. A complex procedure is required to optimally configure, calibrate and synchronize the ~10⁷ channels of the SST readout system. We present an overview of this procedure, which will be used to commission and calibrate the SST during the integration, Start-Up and operational phases of the experiment. Recent experiences from the CMS Magnet Test Cosmic Challenge and system tests at the Tracker Integration Facility are also reported. The CMS Silicon Strip Tracker (SST) is unprecedented in terms of its size and complexity, providing a sensitive area of >200 m² and comprising ~10⁷ readout channels. Fig. 1 of the paper shows a schematic of the control and readout systems for the SST. The control system [1] comprises ~300 “control rings” that start and end at the off-detector Front-End Controller (FEC) boards and is responsible for distributing slow control commands, clock and Level-1 triggers to the front-end electronics. The signals are transmitted optically from the FECs to front-end digital optohybrids via digital links, and then electrically via “token rings” of Communication and Control Units (CCUs) to the front-end electronics. The readout system is based around a custom front-end ASIC known as the APV25 chip [2], an analogue optical link system [3] and an off-detector Front-End Driver (FED) processing board [4]. The system comprises 76k APV25 chips, 38k optical fibres (each transmitting data from a pair of APV25 chips) and 440 FEDs. The APV25 chip samples, amplifies, buffers and processes signals from 128 channels of a silicon strip sensor at the LHC collision frequency of 40 MHz. On receipt of a Level-1 trigger, pulse height and bunch-crossing information from pairs of APV25 chips are multiplexed onto a single line and the data are converted to optical signals that are transmitted via analogue fibres to the off-detector FED boards. The FEDs digitize, zero-suppress and format the pulse height data from up to 96 pairs of APV25 chips, before forwarding the resulting event fragments to the CMS event builder (EVB) and online computing farm.
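A quick consistency check of the channel, fibre and FED counts quoted above:

```python
# Consistency check of the readout-system numbers quoted in the text.
apv_chips = 76_000
channels_per_apv = 128
fibres = 38_000
feds = 440
fibres_per_fed = 96

print(f"strips read out : {apv_chips * channels_per_apv:,}")   # ~10 million channels
print(f"chips per fibre : {apv_chips / fibres:.0f}")            # pairs of APV25s per fibre
print(f"FED fibre inputs: {feds * fibres_per_fed:,} (>= {fibres:,} installed fibres)")
```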
DOI: 10.22323/1.343.0129
2019
Design and development of the DAQ and Timing Hub for CMS Phase-2
The CMS detector will undergo a major upgrade for Phase-2 of the LHC program, starting around 2026. The upgraded Level-1 hardware trigger will select events at a rate of 750 kHz. At an expected event size of 7.4 MB this corresponds to a data rate of up to 50 Tbit/s. Optical links will carry the signals from on-detector front-end electronics to back-end electronics in ATCA crates in the service cavern. A DAQ and Timing Hub board aggregates data streams from back-end boards over point-to-point links, provides buffering and transmits the data to the commercial data-to-surface network for processing and storage. This hub board is also responsible for the distribution of timing, control and trigger signals to the back-ends. This paper presents the current development towards the DAQ and Timing Hub and the design of the first prototype, to be used for validation and integration with the first back-end prototypes in 2019-2020.
DOI: 10.1051/epjconf/201921401015
2019
Operational experience with the new CMS DAQ-Expert
The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of about 30k cores and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances of software applications, the DAQ system is complex in itself and failures cannot be completely excluded. Moreover, problems in the readout of the detectors, in the first-level trigger system or in the high-level trigger may provoke anomalous behaviour of the DAQ system which sometimes cannot easily be differentiated from a problem in the DAQ system itself. In order to achieve high data taking efficiency with operators from the entire collaboration and without relying too heavily on the on-call experts, an expert system, the DAQ-Expert, has been developed that can pinpoint the source of most failures and give advice to the shift crew on how to recover in the quickest way. The DAQ-Expert constantly analyzes monitoring data from the DAQ system and the high-level trigger by making use of logic modules written in Java that encapsulate the expert knowledge about potential operational problems. The results of the reasoning are presented to the operator in a web-based dashboard, may trigger sound alerts in the control room and are archived for post-mortem analysis, presented in a web-based timeline browser. We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production.
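The production logic modules are written in Java; the following is only a Python-flavoured sketch of the idea that each module inspects a monitoring snapshot and, when it matches, returns a diagnosis and a suggested recovery. The snapshot field names are invented for illustration.

```python
# Illustrative sketch of a DAQ-Expert-style logic module (not the real code).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Diagnosis:
    name: str
    action: str

def backpressure_module(snapshot: dict) -> Optional[Diagnosis]:
    """Fires when one FED blocks the event builder while the trigger rate is zero."""
    stuck = [fed for fed, frac in snapshot.get("fed_busy_fraction", {}).items()
             if frac > 0.9]
    if snapshot.get("rate_hz", 0) == 0 and stuck:
        return Diagnosis(
            name=f"Backpressure from FED(s) {stuck}",
            action="Suggest recycling the corresponding sub-detector readout.",
        )
    return None

snapshot = {"rate_hz": 0, "fed_busy_fraction": {"FED-123": 0.97, "FED-124": 0.02}}
print(backpressure_module(snapshot))
```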
DOI: 10.1051/epjconf/202024501028
2020
DAQExpert the service to increase CMS data-taking efficiency
The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at the LHC is a complex system responsible for the data readout, event building and recording of accepted events. Its proper functioning plays a critical role in the data-taking efficiency of the CMS experiment. In order to ensure high availability and recover promptly in the event of hardware or software failure of the subsystems, an expert system, the DAQ Expert, has been developed. It aims at improving the data taking efficiency, reducing human error in operations and minimising the on-call expert demand. Introduced at the beginning of 2017, it assists the shift crew and the system experts in recovering from operational faults, streamlining post-mortem analysis and, at the end of Run 2, triggering fully automatic recovery without human intervention. DAQ Expert analyses the real-time monitoring data originating from the DAQ components and the high-level trigger updated every few seconds. It pinpoints data flow problems, and recovers them automatically or after given operator approval. We analyse the CMS downtime in the 2018 run, focusing on what was improved with the introduction of automated recovery, and present the challenges and the design of encoding expert knowledge into automated recovery jobs. Furthermore, we demonstrate the web-based, ReactJS interfaces that ensure an effective cooperation between the human operators in the control room and the automated recovery system. We report on the operational experience with automated recovery.
DOI: 10.5170/cern-2004-010.370
2004
Software and DAQ for the CMS silicon tracker front end driver
DOI: 10.5170/cern-2003-006.119
2003
A testing device for the CMS silicon tracker front end driver cards
A 9U 400mm VME FED Tester card (FT) has been designed for evaluation and production testing of the CMS silicon microstrip tracker Front End Driver (FED). The FT is designed to simulate both the tracker analogue optical signals and the trigger digital signals required by a FED. Each FT can drive up to 24 FED optical input channels. The internal logic of the FT is based on large FPGAs which employ fast digital logic, digital clock managers and memories. Test patterns and real tracker data can be loaded via VME to the memories. DACs operating at 40MHz convert the data to analogue form and drive the on-board CMS tracker Analogue-Opto-Hybrids (AOH) to convert the data to analogue optical format. Hence, they are identical to the signals produced by the CMS tracker. The FT either transmits the clock and trigger information directly to a FED or to the CMS Trigger and Timing Control (TTC) system. Four such cards will be used to fully test a FED. One FT prototype has been manufactured and is currently being used to evaluate the CMS tracker FED. This paper describes the FED Tester design and architecture.
2014
An interview with Dr Jonathan Fulcher
PB: Dr Jonathan Fulcher, thank you for joining us. What is native title? JF: In the Mabo decision the High Court found that the common law recognised that native title exists in Australia. The conc...
DOI: 10.1071/aj13101
2014
Structuring your farmin in an unconventional world
This extended abstract and its presentation explore the legal aspects of good planning and implementation of a successful farmin—from the perspectives of the farmor and the farmee. Among other things, this extended abstract addresses the common and some unusual farmin obligations and structures available to achieve the purpose of the farmin. A common issue is when the tenement interest is transferred—the presentation discusses the implications. The position under relevant tax and state duty rulings is also examined. The presentation further addresses the farmin program and the structuring impacts of force majeure, permit commitments, and overlap in a multistage farmin, and the development of a joint operating agreement (JOA) that could start during or after farmin obligations are completed. The implications of different majority voting percentages in an operating committee under a JOA that starts at about the same time as a farmin in which participating interest shares increase in stages are also briefly examined.
DOI: 10.1071/aj11081
2012
The social licence to operate: how to get it granted
This extended abstract explains key issues associated with the social licence to operate, a concept recently developed to explain the tacit acceptance by the community of large development projects, particularly mining, oil and gas projects. It is essentially a concept characterised by an absence of protest, a tacit acknowledgement of the project’s presence or activity in the community, and an economic engagement by that community in the project’s activities and impacts. As such, it is difficult to measure positively; however, this extended abstract suggests that to achieve the negative milestone of getting and keeping the social licence, a project developer can address several issues: Educating your stakeholders. Building relationships with your stakeholders. Broadly defining your stakeholders. Not seeing the political tick of obtaining valid approvals as the end of the approvals process, but as the beginning of a new phase of stakeholder engagement. Keeping the approvals ministers informed but not involved. Experience suggests that state ministers in all jurisdictions would rather not adjudicate in favour of developers instead of stakeholders, particularly in relation to land acquisition for project footprints. Also, in a legal framework of continuous disclosure and keen press scrutiny, legal compliance can more often than not prove a short-term fix for matters requiring a longer-term focus. Strategies for obtaining land where the fallback is not a legal process, compulsory acquisition or ministerial intervention need to be more actively considered and developed. It is not so much beyond compliance, but enlarging the notion of compliance to encompass the expectations of governments, the community and a broad view of who the project’s stakeholders are. In this way, a social licence to operate can be granted.
2017
Mine Waste Classification and Management
2017
Environmental and Social Impact Assessment
2017
Compensation for Environmental Damage
2017
Remediation, Rehabilitation and Mine Closure
2017
Monitoring, Enforcement and Compliance
DOI: 10.1088/1742-6596/898/3/032028
2017
New operator assistance features in the CMS Run Control System
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
DOI: 10.1017/s004727940000814x
1978
JSP volume 7 issue 4 Cover and Back matter
The results of a major study of hypothermia and cold conditions. Based on an inter-disciplinary enquiry, it analyses findings about the social circumstances and body and environmental temperatures of a national sample of over a thousand old people.
DOI: 10.1088/1742-6596/119/2/022028
2008
Monitoring the CMS strip tracker readout system
The CMS Silicon Strip Tracker at the LHC comprises a sensitive area of approximately 200 m² and 10 million readout channels. Its data acquisition system is based around a custom analogue front-end chip. Both the control and the readout of the front-end electronics are performed by off-detector VME boards in the counting room, which digitise the raw event data and perform zero-suppression and formatting. The data acquisition system uses the CMS online software framework to configure, control and monitor the hardware components and steer the data acquisition. The first data analysis is performed online within the official CMS reconstruction framework, which provides many services, such as distributed analysis, access to geometry and conditions data, and a Data Quality Monitoring tool based on the online physics reconstruction.
DOI: 10.1088/1742-6596/1085/3/032021
2018
DAQExpert - An expert system to increase CMS data-taking efficiency
The efficiency of the Data Acquisition (DAQ) of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor affecting the data-taking efficiency is the experience of the DAQ operator. One of the main responsibilities of the DAQ operator is to carry out the proper recovery procedure in case of a failure of data-taking. At the start of Run 2, understanding the problem and finding the right remedy could take a considerable amount of time (up to many minutes). Operators relied heavily on the support of on-call experts, also outside working hours. Wrong decisions due to time pressure sometimes led to additional overhead in recovery time. To increase the efficiency of CMS data-taking, we developed a new expert system, DAQExpert, which provides shifters with optimal recovery suggestions instantly when a failure occurs. DAQExpert is a web application that analyzes frequently updating monitoring data from all DAQ components and identifies problems based on expert knowledge expressed in small, independent logic modules written in Java. Its results are presented in real time in the control room via a web-based GUI and a sound system, in the form of a short description of the current failure and the steps to recover.
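The sketch below illustrates the logic-module idea in Python (the real DAQExpert modules are written in Java). The monitoring fields and the example module are hypothetical; it only shows the pattern of running many small, independent checks over a monitoring snapshot and collecting recovery suggestions.

    # Hypothetical sketch of independent logic modules over a monitoring snapshot.
    from typing import Dict, List, Optional


    class LogicModule:
        name = "base"

        def check(self, snapshot: Dict) -> Optional[str]:
            """Return a recovery suggestion if this module identifies a problem."""
            raise NotImplementedError


    class DeadTimeFromBackpressure(LogicModule):
        name = "backpressure"

        def check(self, snapshot: Dict) -> Optional[str]:
            # Invented condition: large deadtime while the HLT output rate is zero.
            if snapshot.get("deadtime_percent", 0) > 5 and snapshot.get("hlt_output_rate", 1) == 0:
                return "Backpressure from HLT: check the filter farm, then resync."
            return None


    def analyse(snapshot: Dict, modules: List[LogicModule]) -> List[str]:
        # Run every module on the latest snapshot and collect its suggestions.
        return [s for m in modules if (s := m.check(snapshot)) is not None]


    print(analyse({"deadtime_percent": 12, "hlt_output_rate": 0},
                  [DeadTimeFromBackpressure()]))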
DOI: 10.22323/1.313.0123
2018
CMS DAQ Current and Future Hardware Upgrades up to Post Long Shutdown 3 (LS3) Times
Following the first LHC collisions seen and recorded by CMS in 2009, the DAQ hardware went through a major upgrade during LS1 (2013-2014) and new detectors have been connected during the 2015-2016 and 2016-2017 winter shutdowns. Now, LS2 (2019-2020) and LS3 (2024-mid 2026) are actively being prepared. This paper shows how CMS DAQ hardware has evolved from the beginning and will continue to evolve in order to meet the future challenges posed by the High Luminosity LHC (HL-LHC) and the CMS detector evolution. In particular, post-LS3 DAQ architectures are focused upon.
DOI: 10.48550/arxiv.1806.08975
2018
The CMS Data Acquisition System for the Phase-2 Upgrade
During the third long shutdown of the CERN Large Hadron Collider, the CMS detector will undergo a major upgrade to prepare for Phase-2 of the CMS physics program, starting around 2026. The upgraded CMS detector will be read out at an unprecedented data rate of up to 50 Tb/s, with an event rate of 750 kHz selected by the level-1 hardware trigger and an average event size of 7.4 MB. Complete events will be analyzed by the High-Level Trigger (HLT) using software algorithms running on standard processing nodes, potentially augmented with hardware accelerators. Selected events will be stored permanently at a rate of up to 7.5 kHz for offline processing and analysis. This paper presents the baseline design of the DAQ and HLT systems for Phase-2, taking into account the projected evolution of high-speed network fabrics for event building and distribution, and the anticipated performance of general-purpose CPUs. In addition, some opportunities offered by reading out and processing parts of the detector data at the full LHC bunch crossing rate (40 MHz) are discussed.
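A quick back-of-the-envelope check, using only the numbers quoted in the abstract, shows that the stated level-1 rate and average event size are consistent with the quoted readout bandwidth:

    # Consistency check of the Phase-2 numbers quoted above
    # (750 kHz level-1 rate, 7.4 MB average event size, 7.5 kHz storage rate).
    l1_rate_hz      = 750e3        # events/s accepted by the level-1 trigger
    event_size_B    = 7.4e6        # average event size in bytes
    storage_rate_hz = 7.5e3        # events/s kept by the HLT

    readout_Tbps = l1_rate_hz * event_size_B * 8 / 1e12
    storage_GBps = storage_rate_hz * event_size_B / 1e9

    print(f"event-building throughput ~ {readout_Tbps:.0f} Tb/s")  # ~44 Tb/s, consistent with 'up to 50 Tb/s'
    print(f"permanent storage rate    ~ {storage_GBps:.0f} GB/s")  # ~56 GB/s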
DOI: 10.1051/epjconf/201921401044
2019
Presentation layer of CMS Online Monitoring System
The Compact Muon Solenoid (CMS) is one of the experiments at the CERN Large Hadron Collider (LHC). The CMS Online Monitoring system (OMS) is an upgrade and successor to the CMS Web-Based Monitoring (WBM) system, which is an essential tool for shift crew members, detector subsystem experts, operations coordinators, and those performing physics analyses. The CMS OMS is divided into aggregation and presentation layers. Communication between layers uses RESTful JSON:API-compliant requests. The aggregation layer is responsible for collecting data from heterogeneous sources, storage of transformed and pre-calculated (aggregated) values and exposure of data via the RESTful API. The presentation layer displays detector information via a modern, user-friendly and customizable web interface. The CMS OMS user interface is composed of a set of cutting-edge software frameworks and tools to display non-event data to any authenticated CMS user worldwide. The web interface tree-like component structure comprises (top-down): workspaces, folders, pages, controllers and portlets. A clear hierarchy gives the required flexibility and control for content organization. Each bottom element instantiates a portlet and is a reusable component that displays a single aspect of data, like a table, a plot, an article, etc. Pages consist of multiple different portlets and can be customized at runtime by using a drag-and-drop technique. This is how a single page can easily include information from multiple online sources. Different pages give access to a summary of the current status of the experiment, as well as convenient access to historical data. This paper describes the CMS OMS architecture, core concepts and technologies of the presentation layer.
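As an illustration of the JSON:API-style communication between the layers, the sketch below shows how a portlet-like client might query a resource exposed by the aggregation layer. The endpoint URL, resource name and attributes are invented; only the JSON:API conventions (the top-level data array, attributes objects, page[...] parameters) follow the standard.

    # Hypothetical JSON:API-style request from a presentation-layer component.
    import json
    import urllib.request


    def fetch_runs(base_url: str, page_size: int = 10) -> list:
        # Invented resource and query parameters for illustration only.
        url = f"{base_url}/runs?page[limit]={page_size}&sort=-run_number"
        req = urllib.request.Request(url, headers={"Accept": "application/vnd.api+json"})
        with urllib.request.urlopen(req) as resp:
            doc = json.load(resp)
        # JSON:API responses carry the resources in a top-level 'data' array.
        return [item["attributes"] for item in doc.get("data", [])]


    # fetch_runs("https://example.invalid/api/v1")  # would return a list of run attribute dicts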
2018
A "diminished nation"? The Racial Discrimination Act 1975, the Native Title Act 1993 and constitutional recognition of indigenous Australians
The Native Title Act 1993 (Cth) is discriminatory in its treatment of Indigenous peoples' land interests compared with those of freeholders, particularly in relation to the provision of public infrastructure on their land. Section 7 of the Native Title Act does not permit the Racial Discrimination Act 1975 (Cth) to be used to effect a cure of that defect. Constitutional recognition for Indigenous Australians must address this incongruity. Indigenous Australians want it to be addressed in the referendum questions. Some commentators have warned against addressing it. Father Frank Brennan, for example, believes addressing such discrimination will lead to the failure of the referendum. This article asks if there can be an accommodation of the two views, both of which are compelling in their own way. It also suggests ways in which the Constitution and legislation might be amended to address such discrimination if ultimately the politics of the referendum questions does not allow discrimination to be dealt with in the Constitution in the way many Indigenous people would prefer.
DOI: 10.18429/jacow-pcapac2018-wep17
2019
Extending the Remote Control Capabilities in the CMS Detector Control System with Remote Procedure Call Services
The CMS Detector Control System (DCS) is implemented as a large distributed and redundant system, with applications interacting and sharing data in multiple ways. The CMS XML-RPC is a software toolkit implementing the standard Remote Procedure Call (RPC) protocol, using the Extensible Mark-up Language (XML) and a custom lightweight variant using the JavaScript Object Notation (JSON) to model, encode and expose resources through the Hypertext Transfer Protocol (HTTP). The CMS XML-RPC toolkit complies with the standard specification of the XML-RPC protocol that allows system developers to build collaborative software architectures with self-contained and reusable logic, and with encapsulation of well-defined processes. The implementation of this protocol introduces not only a powerful communication method to operate and exchange data with web-based applications, but also a new programming paradigm to design service-oriented software architectures within the CMS DCS domain. This paper presents details of the CMS XML-RPC implementation in WinCC Open Architecture (OA) Control Language using an object-oriented approach.
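The toolkit itself is implemented in WinCC OA Control Language; purely to illustrate the XML-RPC pattern it builds on, the sketch below uses Python's standard-library XML-RPC server and client. The exposed method, its return values and the port are hypothetical.

    # Minimal XML-RPC illustration using Python's standard library;
    # not the CMS WinCC OA implementation described above.
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy
    import threading


    def get_channel_voltage(channel: str) -> float:
        # Stand-in for a DCS datapoint read; values are invented.
        return {"HV_A1": 299.7, "HV_A2": 300.1}.get(channel, 0.0)


    server = SimpleXMLRPCServer(("localhost", 8081), allow_none=True, logRequests=False)
    server.register_function(get_channel_voltage)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    client = ServerProxy("http://localhost:8081")
    print(client.get_channel_voltage("HV_A1"))   # 299.7
    server.shutdown()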
DOI: 10.7591/9781501728211-002
2019
Acknowledgments
DOI: 10.5170/cern-2007-001.187
2006
Recent Results on the Performance of the CMS Tracker Readout System
The CMS Silicon Tracker comprises a complicated set of hardware and software components that have been thoroughly tested at CERN before final integration of the Tracker. A vertical slice of the full readout chain has been operated under near-final conditions. In the absence of the tracker front-end modules, simulated events have been created within the FED (Front End Driver) and used to test the readout reliability and efficiency of the final DAQ (Data Acquisition). The data are sent over the S-Link 64-bit links to the FRL (Fast Readout Link) modules at rates in excess of 200 MBytes/s per FED, depending on setup and conditions. The current tracker DAQ is fully based on the CMS communication and acquisition tool XDAQ. This paper discusses the setup and results of a vertical slice of the full Tracker final readout system comprising two full crates of FEDs, 30 in total, read out through one full crate of final FRL modules. This test complements previous tests done at Imperial College [3], taking them to the next level in order to prove that a complete crate of FRLs using the final DAQ system, including all software and hardware subcomponents of the final system with the exception of the detector modules themselves, is capable of sustained readout at the desired rates and occupancy of the CMS Tracker. Simulated data are created with varying hit occupancy (1-10%) and Poisson-distributed trigger rates (< 200 kHz), and the resulting behaviour of the system is recorded. Data illustrating the performance of the system and data readout are presented.
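A toy estimate of how the FED output bandwidth scales with hit occupancy and trigger rate, in the spirit of the test described above, is sketched below. The strip count, bytes per hit and header size are assumptions for illustration and do not reflect the real FED data format.

    # Toy per-FED bandwidth estimate; all constants are illustrative assumptions.
    STRIPS_PER_FED = 24576   # assumed number of strips read out by one FED
    BYTES_PER_HIT  = 2       # assumed zero-suppressed payload per hit strip
    HEADER_BYTES   = 400     # assumed fixed per-event overhead


    def mean_bandwidth_MBps(occupancy: float, trigger_rate_hz: float) -> float:
        """Average output bandwidth = mean event fragment size x trigger rate."""
        event_size = HEADER_BYTES + occupancy * STRIPS_PER_FED * BYTES_PER_HIT
        return event_size * trigger_rate_hz / 1e6


    for occ in (0.01, 0.03, 0.10):
        print(f"occupancy {occ:4.0%}: ~{mean_bandwidth_MBps(occ, 100e3):.0f} MB/s "
              f"at a 100 kHz trigger rate")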
DOI: 10.5170/cern-1999-009.513
1999
Single event upset studies on the APV6 front end readout chip
The microstrip tracker for the CMS experiment at the LHC will be read out using radiation-hard APV chips. During high-luminosity running of the LHC the tracker will be exposed to intense particle fluxes, which introduces a concern that the APV25 could occasionally suffer from Single Event Upset (SEU). To evaluate the expected upset rate under these circumstances, the APV25 was run under controlled conditions in a heavy-ion beam. Upset cross-sections of the digital parts of the chip have been measured at 13 values of incident Linear Energy Transfer (LET). A theoretical prediction of both threshold LET and cross-section is presented along with experimental measurements. These data are used to predict the upset rate for the APV25 in the CMS tracker.
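The rate prediction mentioned in the last sentence is typically obtained by folding the measured cross-section as a function of LET with the differential LET spectrum of the particle flux expected in the tracker. The sketch below shows the shape of such a calculation with invented placeholder numbers rather than the measured APV data.

    # Schematic SEU rate estimate: integrate sigma(LET) * dphi/dLET over LET.
    # All numbers are invented placeholders, not the measured APV results.

    # (LET in MeV cm^2/mg, cross-section in cm^2 per chip) - illustrative points
    sigma_vs_let = [(2.0, 0.0), (5.0, 1e-8), (10.0, 5e-8), (20.0, 1e-7)]

    # Differential flux dphi/dLET in particles/(cm^2 s) per unit LET - illustrative
    flux_vs_let = [(2.0, 1e-1), (5.0, 1e-2), (10.0, 1e-3), (20.0, 1e-4)]


    def upset_rate_per_chip(sigma, flux):
        """Trapezoidal integral of sigma(LET) * dphi/dLET over LET."""
        flux_at = dict(flux)
        rate = 0.0
        for (l1, s1), (l2, s2) in zip(sigma, sigma[1:]):
            rate += 0.5 * (s1 * flux_at[l1] + s2 * flux_at[l2]) * (l2 - l1)
        return rate   # upsets per second per chip


    print(f"{upset_rate_per_chip(sigma_vs_let, flux_vs_let):.2e} upsets/s/chip (toy numbers)")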