
H. B. Newman

DOI: 10.1103/physrevlett.107.181802
2011
Cited 605 times
Improved Search for Muon-Neutrino to Electron-Neutrino Oscillations in MINOS
We report the results of a search for $\nu_{e}$ appearance in a $\nu_{\mu}$ beam in the MINOS long-baseline neutrino experiment. With an improved analysis and an increased exposure of $8.2\times10^{20}$ protons on the NuMI target at Fermilab, we find that $2\sin^2(\theta_{23})\sin^2(2\theta_{13})<0.12\ (0.20)$ at 90% confidence level for $\delta\mathord{=}0$ and the normal (inverted) neutrino mass hierarchy, with a best fit of $2\sin^2(\theta_{23})\sin^2(2\theta_{13})\,\mathord{=}\,0.041^{+0.047}_{-0.031}\ (0.079^{+0.071}_{-0.053})$. The $\theta_{13}\mathord{=}0$ hypothesis is disfavored by the MINOS data at the 89% confidence level.
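For context, the quoted combination $2\sin^2(\theta_{23})\sin^2(2\theta_{13})$ appears because, in the leading-order vacuum approximation (a standard formula, not quoted in the abstract), the appearance probability at a baseline $L$ and energy $E$ is

    $P(\nu_\mu \to \nu_e) \approx \sin^2\theta_{23}\,\sin^2(2\theta_{13})\,\sin^2\!\big(1.267\,\Delta m^2_{32}[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]\big)$,

so the factor of 2 simply normalizes the combination such that it reduces to $\sin^2(2\theta_{13})$ for maximal $\theta_{23}$.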
DOI: 10.1103/physrevlett.106.181801
2011
Cited 221 times
Measurement of the Neutrino Mass Splitting and Flavor Mixing by MINOS
Measurements of neutrino oscillations using the disappearance of muon neutrinos from the Fermilab NuMI neutrino beam as observed by the two MINOS detectors are reported. New analysis methods have been applied to an enlarged data sample from an exposure of $7.25 \times 10^{20}$ protons on target. A fit to neutrino oscillations yields values of $|\Delta m^2| = (2.32^{+0.12}_{-0.08})\times10^{-3}\ \mathrm{eV}^2$ for the atmospheric mass splitting and $\sin^2(2\theta) > 0.90$ (90% C.L.) for the mixing angle. Pure neutrino decay and quantum decoherence hypotheses are excluded at 7 and 9 standard deviations, respectively.
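For reference, the two-flavor survival probability underlying such disappearance fits is (a standard approximation, not spelled out in the abstract)

    $P(\nu_\mu \to \nu_\mu) \approx 1 - \sin^2(2\theta)\,\sin^2\!\big(1.267\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]\big)$,

with $L$ the 735 km Fermilab-to-Soudan baseline and $E$ the neutrino energy.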
DOI: 10.1103/physrevlett.110.251801
2013
Cited 204 times
Measurement of Neutrino and Antineutrino Oscillations Using Beam and Atmospheric Data in MINOS
We report measurements of oscillation parameters from $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ disappearance using beam and atmospheric data from MINOS. The data comprise exposures of $10.71\times10^{20}$ protons on target in the $\nu_{\mu}$-dominated beam, $3.36\times10^{20}$ protons on target in the $\bar{\nu}_{\mu}$-enhanced beam, and 37.88 kton yr of atmospheric neutrinos. Assuming identical $\nu$ and $\bar{\nu}$ oscillation parameters, we measure $|\Delta m^2| = (2.41^{+0.09}_{-0.10})\times10^{-3}\ \mathrm{eV}^2$ and $\sin^2(2\theta) = 0.950^{+0.035}_{-0.036}$. Allowing independent $\nu$ and $\bar{\nu}$ oscillations, we measure antineutrino parameters of $|\Delta \bar{m}^2| = (2.50^{+0.23}_{-0.25})\times10^{-3}\ \mathrm{eV}^2$ and $\sin^2(2\bar{\theta}) = 0.97^{+0.03}_{-0.08}$, with minimal change to the neutrino parameters.
DOI: 10.1109/mnet.2005.1383434
2005
Cited 166 times
FAST TCP: from theory to experiments
We describe a variant of TCP, called FAST, that can sustain high throughput and utilization at multigigabits per second over large distances. We present the motivation, review the background theory, summarize key features of FAST TCP, and report our first experimental results.
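As an aside, the published FAST window-update rule can be sketched in a few lines of Python; the parameter values below are illustrative placeholders, not the settings used in the reported experiments.

    # Illustrative sketch of the FAST TCP congestion-window update rule
    # (as published by the FAST TCP authors); alpha and gamma values here
    # are arbitrary examples, not the paper's experimental settings.

    def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
        """Return the next congestion window (in packets).

        w        -- current window
        base_rtt -- minimum observed round-trip time (propagation delay estimate)
        rtt      -- current average round-trip time
        alpha    -- target number of packets buffered in the network
        gamma    -- smoothing factor in (0, 1]
        """
        target = (base_rtt / rtt) * w + alpha          # equilibrium-seeking term
        return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

    if __name__ == "__main__":
        w, base_rtt = 100.0, 0.05                      # 50 ms propagation delay
        for rtt in (0.05, 0.06, 0.08, 0.08, 0.07):     # queueing delay grows, then eases
            w = fast_window_update(w, base_rtt, rtt)
            print(f"rtt={rtt*1e3:.0f} ms  window={w:.1f} packets")

Because the window is driven by delay rather than loss, the update keeps the path close to full utilization without waiting for packet drops, which is what enables multi-gigabit throughput over long distances.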
DOI: 10.1088/1742-6596/1085/2/022008
2018
Cited 120 times
Machine Learning in High Energy Physics Community White Paper
Machine learning is an important applied research area in particle physics, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas in machine learning in particle physics with a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science. The main objective of the document is to connect and motivate these areas of research and development with the physics drivers of the High-Luminosity Large Hadron Collider and future neutrino experiments and identify the resource needs for their implementation. Additionally we identify areas where collaboration with external communities will be of great benefit.
DOI: 10.1103/physrevlett.122.091803
2019
Cited 108 times
Search for Sterile Neutrinos in MINOS and MINOS+ Using a Two-Detector Fit
A search for mixing between active neutrinos and light sterile neutrinos has been performed by looking for muon neutrino disappearance in two detectors at baselines of 1.04 and 735 km, using a combined MINOS and MINOS+ exposure of 16.36×10^{20} protons on target. A simultaneous fit to the charged-current muon neutrino and neutral-current neutrino energy spectra in the two detectors yields no evidence for sterile neutrino mixing using a 3+1 model. The most stringent limit to date is set on the mixing parameter sin^{2}θ_{24} for most values of the sterile neutrino mass splitting Δm_{41}^{2}>10^{-4} eV^{2}.
DOI: 10.1140/epjc/s10052-020-7608-4
2020
Cited 94 times
JEDI-net: a jet identification algorithm based on interaction networks
We investigate the performance of a jet identification algorithm based on interaction networks (JEDI-net) to identify all-hadronic decays of high-momentum heavy particles produced at the LHC and distinguish them from ordinary jets originating from the hadronization of quarks and gluons. The jet dynamics are described as a set of one-to-one interactions between the jet constituents. Based on a representation learned from these interactions, the jet is associated with one of the considered categories. Unlike other architectures, the JEDI-net models achieve their performance without special handling of the sparse input jet representation, extensive pre-processing, particle ordering, or specific assumptions regarding the underlying detector geometry. The presented models give better results with fewer model parameters, offering interesting prospects for LHC applications.
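To make the architecture concrete, here is a minimal NumPy sketch of an interaction-network classifier in the spirit of JEDI-net: every ordered particle pair is an "interaction", an edge network maps each pair to a learned message, messages are summed per receiving particle, and a node network followed by a classifier head produces category scores. The layer sizes and random weights are placeholders, not the trained model described in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(sizes):
        """Random small MLP; returns a list of (weights, biases)."""
        return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
                for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(params, x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.maximum(x, 0.0)                 # ReLU on hidden layers
        return x

    n_particles, n_features, n_classes = 30, 16, 5
    f_R = mlp([2 * n_features, 32, 8])                 # edge (interaction) network
    f_O = mlp([n_features + 8, 32, 8])                 # node network
    phi_C = mlp([8, 16, n_classes])                    # classifier head

    def jedi_net_like(particles):
        """particles: (n_particles, n_features) array for one jet."""
        n = particles.shape[0]
        send, recv = np.nonzero(~np.eye(n, dtype=bool))          # all ordered pairs
        pair_feats = np.concatenate([particles[send], particles[recv]], axis=1)
        messages = forward(f_R, pair_feats)                       # one message per pair
        agg = np.zeros((n, messages.shape[1]))
        np.add.at(agg, recv, messages)                            # sum messages per receiver
        node_out = forward(f_O, np.concatenate([particles, agg], axis=1))
        logits = forward(phi_C, node_out.sum(axis=0, keepdims=True))  # permutation-invariant sum
        z = np.exp(logits - logits.max())
        return (z / z.sum()).ravel()                              # softmax over jet categories

    print(jedi_net_like(rng.normal(size=(n_particles, n_features))))

Note how nothing in the sketch depends on the ordering of the particles or on a fixed detector geometry, which is the property the abstract emphasizes.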
DOI: 10.1016/0168-9002(96)00286-0
1996
Cited 142 times
A study on the properties of lead tungstate crystals
This report summarizes the results of a study on the properties of five large and five small size lead tungstate (PbWO4) crystals. Data are presented on the longitudinal optical transmittance and light attenuation length, light yield and response uniformity, emission spectra and decay time. The radiation resistance of large crystals and possible curing with optical bleaching are discussed. The results of an in-depth materials study, including trace impurity analysis, are also presented. The general conclusion from this investigation is that further research and development is needed to develop fast, radiation-hard PbWO4 crystals for the CMS experiment at the CERN LHC.
DOI: 10.1145/948383.948411
2003
Cited 119 times
Data-intensive e-science frontier research
Large-scale e-science, including high-energy and nuclear physics, biomedical informatics, and Earth science, depends on an increasingly integrated, distributed cyberinfrastructure serving virtual organizations on a global scale.
DOI: 10.1016/0370-2693(76)90369-5
1976
Cited 102 times
Inclusive production of low-momentum charged pions at x = 0 at the CERN intersecting storage rings
Results are reported for the invariant differential cross-section of charged pions produced at x = 0 in proton-proton collisions at the CERN ISR. The range covered is 40 to 400 MeV/c in transverse momentum and 23 to 63 GeV in collision energy. The inclusive cross-sections for π+ and π− increase by 36 ± 2% and 41 ± 2%, respectively, over the ISR energy range, with a somewhat stronger increase at the lowest transverse momenta. The transverse momentum distribution is well described by an exponential in the transverse energy.
DOI: 10.1016/j.cpc.2009.08.003
2009
Cited 85 times
MonALISA: An agent based, dynamic service system to monitor, control and optimize distributed systems
The MonALISA (Monitoring Agents in a Large Integrated Services Architecture) framework provides a set of distributed services for monitoring, control, management and global optimization for large scale distributed systems. It is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are registered as dynamic services. They can be automatically discovered and used by other services or clients. The distributed agents can collaborate and cooperate in performing a wide range of management, control and global optimization tasks using real time monitoring information.
Program title: MonALISA
Catalogue identifier: AEEZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEZ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Caltech License – free for all non-commercial activities
No. of lines in distributed program, including test data, etc.: 147 802
No. of bytes in distributed program, including test data, etc.: 25 913 689
Distribution format: tar.gz
Programming language: Java; additional APIs available in Java, C, C++, Perl and Python
Computer: computing clusters, network devices, storage systems, large scale data intensive applications
Operating system: the MonALISA service is mainly used on Linux; the MonALISA client runs on all major platforms (Windows, Linux, Solaris, MacOS)
Has the code been vectorized or parallelized?: It is a multithreaded application and will efficiently use all the available processors.
RAM: for the MonALISA service the minimum required memory is 64 MB; if the JVM is started allocating more memory this will be used for internal caching. The MonALISA client typically requires 256–512 MB of memory.
Classification: 6.5
External routines: requires Java (JRE or JDK) to run. These external packages are used (they are included in the distribution): JINI, JFreeChart, PostgreSQL (optional).
Nature of problem: to monitor and control distributed computing clusters and grids, the network infrastructure, the storage systems, and the applications used on such facilities. The monitoring information gathered is used for developing the required higher level services, the components that provide decision support and some degree of automated decisions, and for maintaining and optimizing workflow in large scale distributed systems.
Solution method: the MonALISA framework is designed as an ensemble of autonomous self-describing agent-based subsystems which are registered as dynamic services. These services are able to collaborate and cooperate in performing a wide range of distributed information-gathering and processing tasks.
Running time: MonALISA services are designed to run continuously to collect monitoring data and to trigger alarms or take automatic actions when necessary.
References: http://monalisa.caltech.edu.
DOI: 10.1016/0550-3213(76)90313-8
1976
Cited 83 times
Inclusive production of low-momentum charged pions, kaons, and protons at x = 0 at the CERN intersecting storage rings
The inclusive production of low-momentum charged pions, kaons, and protons has been measured at x = 0 over the ISR energy range 23 < √s < 63 GeV. The average increase in the invariant differential cross section is 36 ± 2% for π+, 41 ± 2% for π−, 52 ± 8% for K+, 69 ± 8% for K−, 8 ± 5% for p, and 84 ± for p̄. Pions have been measured in the range 0.04 < pT < 0.4 GeV/c, kaons over 0.1 < pT < 0.3 GeV/c, and nucleons over 0.1 < pT < 0.5 GeV/c.
DOI: 10.1103/physrevlett.105.151601
2010
Cited 80 times
Search for Lorentz Invariance and CPT Violation with the MINOS Far Detector
We searched for a sidereal modulation in the MINOS far detector neutrino rate. Such a signal would be a consequence of Lorentz and CPT violation as described by the standard-model extension framework. It also would be the first detection of a perturbative effect to conventional neutrino mass oscillations. We found no evidence for this sidereal signature, and the upper limits placed on the magnitudes of the Lorentz and CPT violating coefficients describing the theory are an improvement by factors of 20-510 over the current best limits found by using the MINOS near detector.
DOI: 10.1103/physrevlett.108.191801
2012
Cited 73 times
Improved Measurement of Muon Antineutrino Disappearance in MINOS
We report an improved measurement of $\bar{\nu}_{\mu}$ disappearance over a distance of 735 km using the MINOS detectors and the Fermilab Main Injector neutrino beam in a $\bar{\nu}_{\mu}$-enhanced configuration. From a total exposure of $2.95\times10^{20}$ protons on target, of which 42% have not been previously analyzed, we make the most precise measurement of $\Delta\bar{m}^2 = [2.62^{+0.31}_{-0.28}(\mathrm{stat})\pm0.09(\mathrm{syst})]\times10^{-3}\ \mathrm{eV}^2$ and constrain the $\bar{\nu}_{\mu}$ mixing angle $\sin^2(2\bar{\theta}) > 0.75$ (90% C.L.). These values are in agreement with $\Delta m^2$ and $\sin^2(2\theta)$ measured for $\nu_{\mu}$, removing the tension reported in [P. Adamson et al. (MINOS), Phys. Rev. Lett. 107, 021801 (2011)].
DOI: 10.1103/physrevd.102.012010
2020
Cited 47 times
Interaction networks for the identification of boosted $H \to b\bar{b}$ decays
We develop an algorithm based on an interaction network to identify high-transverse-momentum Higgs bosons decaying to bottom quark-antiquark pairs and distinguish them from ordinary jets that reflect the configurations of quarks and gluons at short distances. The algorithm's inputs are features of the reconstructed charged particles in a jet and the secondary vertices associated with them. Describing the jet shower as a combination of particle-to-particle and particle-to-vertex interactions, the model is trained to learn a jet representation on which the classification problem is optimized. The algorithm is trained on simulated samples of realistic LHC collisions, released by the CMS Collaboration on the CERN Open Data Portal. The interaction network achieves a drastic improvement in the identification performance with respect to state-of-the-art algorithms.
DOI: 10.48550/arxiv.cs/0306096
2003
Cited 94 times
MonALISA : A Distributed Monitoring Service Architecture
The MonALISA (Monitoring Agents in A Large Integrated Services Architecture) system provides a distributed monitoring service. MonALISA is based on a scalable Dynamic Distributed Services Architecture which is designed to meet the needs of physics collaborations for monitoring global Grid systems, and is implemented using JINI/JAVA and WSDL/SOAP technologies. The scalability of the system derives from the use of multithreaded Station Servers to host a variety of loosely coupled self-describing dynamic services, the ability of each service to register itself and then to be discovered and used by any other services, or clients that require such information, and the ability of all services and clients subscribing to a set of events (state changes) in the system to be notified automatically. The framework integrates several existing monitoring tools and procedures to collect parameters describing computational nodes, applications and network performance. It has built-in SNMP support and network-performance monitoring algorithms that enable it to monitor end-to-end network performance as well as the performance and state of site facilities in a Grid. MonALISA is currently running around the clock on the US CMS test Grid as well as an increasing number of other sites. It is also being used to monitor the performance and optimize the interconnections among the reflectors in the VRVS system.
DOI: 10.1103/physrevd.81.052004
2010
Cited 69 times
Search for sterile neutrino mixing in the MINOS long-baseline experiment
A search for depletion of the combined flux of active neutrino species over a 735 km baseline is reported using neutral-current interaction data recorded by the MINOS detectors in the NuMI neutrino beam. Such a depletion is not expected according to conventional interpretations of neutrino oscillation data involving the three known neutrino flavors. A depletion would be a signature of oscillations or decay to postulated noninteracting sterile neutrinos, scenarios not ruled out by existing data. From an exposure of $3.18\times10^{20}$ protons on target in which neutrinos of energies between ~500 MeV and 120 GeV are produced predominantly as $\nu_{\mu}$, the visible energy spectrum of candidate neutral-current reactions in the MINOS far detector is reconstructed. Comparison of this spectrum to that inferred from a similarly selected near-detector sample shows that of the portion of the $\nu_{\mu}$ flux observed to disappear in charged-current interaction data, the fraction that could be converting to a sterile state is less than 52% at 90% confidence level (C.L.). The hypothesis that active neutrinos mix with a single sterile neutrino via oscillations is tested by fitting the data to various models. In the particular four-neutrino models considered, the mixing angles $\theta_{24}$ and $\theta_{34}$ are constrained to be less than 11° and 56° at 90% C.L., respectively. The possibility that active neutrinos may decay to sterile neutrinos is also investigated. Pure neutrino decay without oscillations is ruled out at 5.4 standard deviations. For the scenario in which active neutrinos decay into sterile states concurrently with neutrino oscillations, a lower limit is established for the neutrino decay lifetime: $\tau_3/m_3 > 2.1\times10^{-12}$ s/eV at 90% C.L.
DOI: 10.1103/physrevd.81.012001
2010
Cited 59 times
Observation of muon intensity variations by season with the MINOS far detector
The temperature of the upper atmosphere affects the height of primary cosmic ray interactions and the production of high-energy cosmic ray muons which can be detected deep underground. The MINOS far detector at Soudan, MN, USA, has collected over 67 million cosmic ray induced muons. The underground muon rate measured over a period of five years exhibits a 4% peak-to-peak seasonal variation which is highly correlated with the temperature in the upper atmosphere. The coefficient, $\alpha_T$, relating changes in the muon rate to changes in atmospheric temperature was found to be: $\alpha_T = 0.874 \pm 0.009$ (stat.) $\pm 0.010$ (syst.). Pions and kaons in the primary hadronic interactions of cosmic rays in the atmosphere contribute differently to $\alpha_T$ due to the different masses and lifetimes. This allows the measured value of $\alpha_T$ to be interpreted as a measurement of the K/$\pi$ ratio for $E_{p}\gtrsim 7$ TeV of $0.13 \pm 0.08$, consistent with the expectation from collider experiments.
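For reference, $\alpha_T$ relates fractional variations of the underground muon rate to fractional variations of the effective atmospheric temperature, in the convention standard for such analyses (the definition is not spelled out in the abstract):

    $\Delta R_\mu / \langle R_\mu \rangle = \alpha_T\,\Delta T_{\mathrm{eff}} / \langle T_{\mathrm{eff}} \rangle$,

so the observed 4% peak-to-peak modulation of the rate corresponds to roughly a 4.6% swing in $T_{\mathrm{eff}}$ for $\alpha_T \approx 0.87$.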
DOI: 10.1016/0370-2693(85)90524-6
1985
Cited 63 times
New particle searches
PETRA, the e+e− collider, has operated at a maximum CM energy of 46.78 GeV. We update our previous results on new particle searches and set significantly better mass limits on some.
DOI: 10.1103/physrevd.82.051102
2010
Cited 51 times
New constraints on muon-neutrino to electron-neutrino transitions in MINOS
This paper reports results from a search for $\nu_{\mu}\rightarrow\nu_{e}$ transitions by the MINOS experiment based on a $7\times10^{20}$ protons-on-target exposure. Our observation of 54 candidate $\nu_{e}$ events in the far detector with a background of $49.1\pm7.0(\mathrm{stat})\pm2.7(\mathrm{syst})$ events predicted by the measurements in the near detector requires $2\sin^{2}(2\theta_{13})\sin^{2}\theta_{23} < 0.12\ (0.20)$ at the 90% C.L. for the normal (inverted) mass hierarchy at $\delta_{CP}=0$. The experiment sets the tightest limits to date on the value of $\theta_{13}$ for nearly all values of $\delta_{CP}$ for the normal neutrino mass hierarchy and maximal $\sin^{2}(2\theta_{23})$.
DOI: 10.1007/bf02822248
1978
Cited 51 times
Production of deuterons and antideuterons in proton-proton collisions at the CERN ISR
DOI: 10.1145/1048935.1050200
2003
Cited 60 times
Optimizing 10-Gigabit Ethernet for Networks of Workstations, Clusters, and Grids
This paper presents a case study of the 10-Gigabit Ethernet (10GbE) adapter from Intel®. Specifically, with appropriate optimizations to the configurations of the 10GbE adapter and TCP, we demonstrate that the 10GbE adapter can perform well in local-area, storage-area, system-area, and wide-area networks. For local-area, storage-area, and system-area networks in support of networks of workstations, network-attached storage, and clusters, respectively, we can achieve over 7-Gb/s end-to-end throughput and 12-µs end-to-end latency between applications running on Linux-based PCs. For the wide-area network in support of grids, we broke the recently-set Internet2 Land Speed Record by 2.5 times by sustaining an end-to-end TCP/IP throughput of 2.38 Gb/s between Sunnyvale, California and Geneva, Switzerland (i.e., 10,037 kilometers) to move over a terabyte of data in less than an hour. Thus, the above results indicate that 10GbE may be a cost-effective solution across a multitude of computing environments.
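A quick back-of-the-envelope check of the quoted wide-area figure (a simple arithmetic sketch, not from the paper):

    # At a sustained 2.38 Gb/s, moving one terabyte (10^12 bytes) indeed
    # takes under an hour, consistent with the record claim above.
    terabyte_bits = 1e12 * 8            # 8 x 10^12 bits
    rate_bps = 2.38e9                   # 2.38 Gb/s sustained TCP/IP throughput
    seconds = terabyte_bits / rate_bps
    print(f"{seconds:.0f} s  (~{seconds/60:.0f} minutes)")   # about 56 minutes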
DOI: 10.1103/physrevlett.50.2051
1983
Cited 53 times
Model-Independent Second-Order Determination of the Strong-Coupling Constant $\alpha_{s}$
With use of the MARK-J detector at $\sqrt{s}=34.7$ GeV, 21 000 $e^{+}e^{-}\rightarrow\mathrm{hadron}$ events have been collected. By measurement of the asymmetry in angular energy correlations the strong coupling constant $\alpha_{s}=0.13\pm0.01\ (\mathrm{statistical})\pm0.02\ (\mathrm{systematic})$ is determined, in complete second order, and independent of the fragmentation models and QCD cutoff values used.
DOI: 10.1016/0168-9002(91)90493-a
1991
Cited 52 times
A study on radiation damage in doped BGO crystals
We report on a study of the correlation between radiation damage of bismuth germanate (BGO) scintillator crystals and trace impurities in the crystal. The light yield and the absorption spectra of doped BGO crystals were measured, both before and after irradiation. While trace concentrations of Cr, Mn, Fe and Pb in BGO were found to lead to substantial radiation damage, traces of Al, Ca, Cu and Si were found to have no measurable effect. Traces of Co, Ga, Mg and Ni were found to have an effect intermediate between the above two groups. Three radiation-induced absorption bands were observed in a set of BGO crystal samples. Their energy levels were found to be independent of the trace elements in the BGO. They are at 2.3±0.1, 3.0±0.1 and 3.8±0.1 eV, respectively, above the valence band. A brief discussion of the radiation damage mechanism in BGO is presented.
DOI: 10.1103/physrevlett.50.799
1983
Cited 48 times
Search for Top Quark and a Test of Models without Top Quark up to 38.54 GeV at PETRA
With a PETRA energy scan in \ensuremath{\le}30-MeV steps, the continuum production of open top quark up to 38.54 GeV is excluded. Over regions of energy scan from 29.90 to 38.63 GeV limits are set on the product of hadronic branching ratio and electronic width ${B}_{h}{\ensuremath{\Gamma}}_{\mathrm{ee}}$ for toponium to be less than 2.0 keV at the 95% confidence level. By a search for flavor-changing neutral currents in $b$ decay, models without a top quark are excluded.
DOI: 10.1103/physrevd.86.052007
2012
Cited 34 times
Measurements of atmospheric neutrinos and antineutrinos in the MINOS far detector
This paper reports measurements of atmospheric neutrino and antineutrino interactions in the MINOS Far Detector, based on 2553 live-days (37.9 kton-years) of data. A total of 2072 candidate events are observed. These are separated into 905 contained-vertex muons and 466 neutrino-induced rock-muons, both produced by charged-current $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ interactions, and 701 contained-vertex showers, composed mainly of charged-current $\nu_{e}$ and $\bar{\nu}_{e}$ interactions and neutral-current interactions. The curvature of muon tracks in the magnetic field of the MINOS Far Detector is used to select separate samples of $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ events. The observed ratio of $\bar{\nu}_{\mu}$ to $\nu_{\mu}$ events is compared with the Monte Carlo simulation, giving a double ratio of $R^{data}_{\bar{\nu}/\nu}/R^{MC}_{\bar{\nu}/\nu} = 1.03 \pm 0.08 (stat.) \pm 0.08 (syst.)$. The $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ data are separated into bins of $L/E$ resolution, based on the reconstructed energy and direction of each event, and a maximum likelihood fit to the observed $L/E$ distributions is used to determine the atmospheric neutrino oscillation parameters. This fit returns 90% confidence limits of $|\Delta m^{2}| = (1.9 \pm 0.4) \times 10^{-3} eV^{2}$ and $sin^{2} 2\theta > 0.86$. The fit is extended to incorporate separate $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ oscillation parameters, returning 90% confidence limits of $|\Delta m^{2}|-|\Delta \bar{m}^{2}| = 0.6^{+2.4}_{-0.8} \times 10^{-3} eV^{2}$ on the difference between the squared-mass splittings for neutrinos and antineutrinos.
DOI: 10.1145/2832099.2832100
2015
Cited 32 times
Managing scientific data with named data networking
Many scientific domains, such as climate science and High Energy Physics (HEP), have data management requirements that are not well supported by the IP network architecture. Named Data Networking (NDN) is a new network architecture whose service model is better aligned with the needs of data-oriented applications. NDN provides features such as best-location retrieval, caching, load sharing, and transparent failover that would otherwise be painstakingly (re-)implemented by each application using point-to-point semantics in an IP network.
DOI: 10.1103/physrevd.88.072011
2013
Cited 31 times
Search for flavor-changing non-standard neutrino interactions by MINOS
We report new constraints on flavor-changing non-standard neutrino interactions (NSI) using data from the MINOS experiment. We analyzed a combined set of beam neutrino and antineutrino data, and found no evidence for deviations from standard neutrino mixing. The observed energy spectra constrain the NSI parameter to the range $-0.20 < \epsilon_{\mu\tau} < 0.07$ (90% C.L.).
DOI: 10.1145/2620728.2620770
2014
Cited 25 times
Flow-based load balancing in multipathed layer-2 networks using OpenFlow and multipath-TCP
In this paper we address the challenge of traffic optimization for big data flows in layer-2 networks. We present an OpenFlow controller implementation that removes the necessity of a Spanning Tree Protocol, allows for the usage of multiple paths, and enables in-network per-flow load balancing. Moreover, we demonstrate how systems deploying Multipath-TCP can benefit from our solution.
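As a rough illustration of per-flow load balancing over multiple layer-2 paths, one common approach is to pin each flow (identified by its 5-tuple) to one path deterministically, so packets within a flow never reorder while different flows (for example MPTCP subflows) spread across paths. The path names and hash choice below are invented for illustration, not the controller design from the paper.

    import hashlib

    PATHS = ["path-A", "path-B", "path-C"]

    def pick_path(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
        # Hash the flow 5-tuple and map it onto one of the available paths.
        key = f"{src_ip},{dst_ip},{src_port},{dst_port},{proto}".encode()
        digest = hashlib.sha256(key).digest()
        return PATHS[int.from_bytes(digest[:4], "big") % len(PATHS)]

    # Two MPTCP subflows of the same transfer use different source ports,
    # so they can land on different paths and use both simultaneously.
    print(pick_path("10.0.0.1", "10.0.0.2", 40001, 5001))
    print(pick_path("10.0.0.1", "10.0.0.2", 40002, 5001))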
DOI: 10.1146/annurev.ns.44.120194.002321
1994
Cited 43 times
Crystal Calorimeters in Particle Physics
DOI: 10.1016/0168-9002(90)91362-f
1990
Cited 38 times
Radiation resistance and fluorescence of europium doped BGO crystals
We report on a study of the radiation resistance and fluorescence of bismuth germanate scintillation crystals doped with europium (BGO:Eu). The transmission spectrum, the light output and the fluorescence spectrum of BGO:Eu crystals were measured before and after irradiation. The radiation resistance of BGO:Eu crystals was found to increase with the level of europium doping. A red fluorescence emission around 600 nm, with a 1.5 ms decay time, was found for the BGO:Eu samples.
DOI: 10.1029/2008gl036359
2009
Cited 29 times
Sudden stratospheric warmings seen in MINOS deep underground muon data
The rate of high energy cosmic ray muons as measured underground is shown to be strongly correlated with upper‐air temperatures during short‐term atmospheric (10‐day) events. The effects are seen by correlating data from the MINOS underground detector and temperatures from the European Centre for Medium Range Weather Forecasts during the winter periods from 2003–2007. This effect provides an independent technique for the measurement of meteorological conditions and presents a unique opportunity to measure both short and long‐term changes in this important part of the atmosphere.
DOI: 10.1002/0470867167.ch39
2003
Cited 33 times
Data‐Intensive Grids for High‐Energy Physics
This chapter contains sections titled: Introduction: Scientific Exploration at the High-energy Frontier; HEP Challenges: At the Frontiers of Information Technology; Meeting the Challenges: Data Grids as Managed Distributed Systems for Global Virtual Organizations; Emergence of HEP Grids: Regional Centers and Global Databases; HEP Grid Projects; Example Architectures and Applications; Inter-grid Coordination; Current Issues for HEP Grids; A Distributed Server Architecture for Dynamic HEP Grid Services; The Grid-enabled Analysis Environment; Conclusion: Relevance of Meeting These Challenges for Future Networks and Society; Acknowledgements; References.
DOI: 10.1103/physrevlett.53.134
1984
Cited 31 times
Search for New Particles in $e^{+}e^{-}$ Annihilation from 39.79 to 45.52 GeV
We have searched for resonances in the reaction $e^{+}e^{-}\rightarrow \mathrm{hadrons},\ \gamma\gamma,\ \mu\mu,\ \mathrm{and}\ ee$, in the energy range $39.79 < \sqrt{s} < 45.52$ GeV, using the Mark J detector at PETRA. We obtain stringent upper limits on the production of toponium and particles postulated to explain $Z^{0}\rightarrow \mathrm{lepton\ pair}+\gamma$ events observed at the CERN $\bar{p}p$ collider. We also set limits on the mass and coupling constant of excited electrons.
DOI: 10.1103/physrevd.90.012010
2014
Cited 19 times
Observation of muon intensity variations by season with the MINOS near detector
A sample of $1.53\times10^{9}$ cosmic-ray-induced single muon events has been recorded at 225 m water equivalent using the MINOS near detector. The underground muon rate is observed to be highly correlated with the effective atmospheric temperature. The coefficient $\alpha_{T}$, relating the change in the muon rate to the change in the vertical effective temperature, is determined to be $0.428\pm0.003(\text{stat.})\pm0.059(\text{syst.})$. An alternative description is provided by the weighted effective temperature, introduced to account for the differences in the temperature profile and muon flux as a function of zenith angle. Using the latter estimation of temperature, the coefficient is determined to be $0.352\pm0.003(\text{stat.})\pm0.046(\text{syst.})$.
DOI: 10.1103/physrevd.94.111101
2016
Cited 16 times
Constraints on large extra dimensions from the MINOS experiment
We report new constraints on the size of large extra dimensions from data collected by the MINOS experiment between 2005 and 2012. Our analysis employs a model in which sterile neutrinos arise as Kaluza-Klein states in large extra dimensions and thus modify the neutrino oscillation probabilities due to mixing between active and sterile neutrino states. Using Fermilab's NuMI beam exposure of $10.56 \times 10^{20}$ protons-on-target, we combine muon neutrino charged current and neutral current data sets from the Near and Far Detectors and observe no evidence for deviations from standard three-flavor neutrino oscillations. The ratios of reconstructed energy spectra in the two detectors constrain the size of large extra dimensions to be smaller than $0.45\,\mu\text{m}$ at 90% C.L. in the limit of a vanishing lightest active neutrino mass. Stronger limits are obtained for non-vanishing masses.
DOI: 10.1051/epjconf/202429507044
2024
The Global Network Advancement Group: A Next Generation System for the LHC Program and Data Intensive Sciences
This paper presents the rapid progress, vision and outlook across multiple state of the art development lines within the Global Network Advancement Group (GNA-G) and its Data Intensive Sciences and SENSE/AutoGOLE working groups, which are designed to meet the present and future needs and address the challenges of the Large Hadron Collider and other science programs with global reach. Since it was founded in the Fall of 2019 and the working groups were formed in 2020, in partnership with ESnet, Internet2, CENIC, GEANT, ANA, RNP, StarLight, NRP, N-DISE, AmLight, and many other leading research and education networks and network R&D projects, as well as Caltech, UCSD/SDSC, Fermilab, CERN, LBL, and many other leading universities and laboratories, the GNA-G working groups have deployed two virtual circuit and programmable testbeds spanning six continents which support continuous developments aimed at the next generation of programmable networks interworking with the science programs' computing and data management systems. The paper covers examples of recent progress in developing and deploying new methods and approaches in multidomain virtual circuits, flow steering, path selection, load balancing and congestion avoidance, segment routing and machine learning based traffic prediction and optimization.
DOI: 10.1051/epjconf/202429504028
2024
Job CPU Performance comparison based on MINIAOD reading options: Local versus remote
A critical challenge of performing data transfers or remote reads is to be as fast and efficient as possible while, at the same time, keeping the usage of system resources as low as possible. Ideally, the software that manages these data transfers should be able to organize them so that they run up to the hardware limits. Significant portions of LHC analysis use the same datasets, running over each file or dataset multiple times. By utilizing "on-demand" regional caches, we can improve CPU efficiency and reduce wide area network usage. Speeding up user analysis and reducing network usage (and hiding latency from jobs by caching the most essential files on demand) are significant challenges for the HL-LHC, where the data volume increases to the exabyte level. In this paper, we describe our journey and tests with the CMS XCache project (SoCal Cache), comparing job performance and CPU efficiency using different storage solutions (Hadoop, Ceph, local disk, Named Data Networking). We also provide insights into our tests over a wide area network and possible storage and network usage savings.
DOI: 10.1051/epjconf/202429501001
2024
400Gbps benchmark of XRootD HTTP-TPC
Due to the increased demand of network traffic expected during the HL-LHC era, the T2 sites in the USA will be required to have 400Gbps of available bandwidth to their storage solution. With the above in mind we are pursuing a scale test of XRootD software when used to perform Third Party Copy transfers using the HTTP protocol. Our main objective is to understand the possible limitations in the software stack to achieve the target transfer rate; to that end we have set up a testbed of multiple XRootD servers in both UCSD and Caltech which are connected through a dedicated link capable of 400 Gbps end-to-end. Building upon our experience deploying containerized XRootD servers, we use Kubernetes to easily deploy and test different configurations of our testbed. In this work, we will present our experience doing these tests and the lessons learned.
DOI: 10.1051/epjconf/202429501004
2024
A Named Data Networking Based Fast Open Storage System Plugin for XRootD
This work presents the design and implementation of an Open Storage System plugin for XRootD, utilizing Named Data Networking (NDN). This represents a significant step in integrating NDN, a prominent future Internet architecture, with the established data management systems within CMS. We show that this integration enables XRootD to access data in a location transparent manner, reducing the complexity of data management and retrieval. Our approach includes the creation of the NDNc software library, which bridges the existing NDN C++ library with the high-performance NDN-DPDK data-forwarding system. This paper outlines the design of the plugin and preliminary results of data transfer tests using both internal and external 100 Gbps testbed.
DOI: 10.1051/epjconf/202429501044
2024
Predicting Resource Utilization Trends with Southern California Petabyte Scale Cache
A large community of high-energy physicists shares data all around the world, making it necessary to ship a large number of files over wide-area networks. Regional disk caches such as the Southern California Petabyte Scale Cache have been deployed to reduce the data access latency. We observe that about 94% of the requested data volume was served from this cache, without remote transfers, between Sep. 2022 and July 2023. In this paper, we show the predictability of the resource utilization by exploring the trends of recent cache usage. The time-series-based prediction is made with a machine learning approach, and the prediction errors are small relative to the variation in the input data. This work helps in understanding the characteristics of the resource utilization and in planning additional deployments of caches in the future.
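A minimal sketch of the kind of time-series prediction described above: predict the next interval's cache usage from the previous few intervals with an ordinary least-squares fit on lag features. The synthetic series, window length and units are illustrative assumptions; the paper's actual model and features may differ.

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic daily cache usage (TB/day) with a slow oscillation plus noise.
    usage = 50 + 10 * np.sin(np.arange(200) / 10.0) + rng.normal(0, 2, 200)

    k = 6                                            # number of lag features
    X = np.stack([usage[i:i + k] for i in range(len(usage) - k)])
    y = usage[k:]
    X = np.hstack([X, np.ones((len(X), 1))])         # add intercept term

    coef, *_ = np.linalg.lstsq(X[:-20], y[:-20], rcond=None)   # fit on all but last 20 points
    pred = X[-20:] @ coef                                       # predict the held-out tail
    err = np.abs(pred - y[-20:]).mean()
    print(f"mean absolute error on held-out points: {err:.2f} TB/day")

The point of such a model is exactly the claim in the abstract: if recent usage trends are smooth, the prediction error stays small relative to the natural variation of the input data.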
DOI: 10.1051/epjconf/202429501009
2024
Automated Network Services for Exascale Data Movement
The Large Hadron Collider (LHC) experiments distribute data by leveraging a diverse array of National Research and Education Networks (NRENs), where experiment data management systems treat networks as a “blackbox” resource. After the High Luminosity upgrade, the Compact Muon Solenoid (CMS) experiment alone will produce roughly 0.5 exabytes of data per year. NREN Networks are a critical part of the success of CMS and other LHC experiments. However, during data movement, NRENs are unaware of data priorities, importance, or need for quality of service, and this poses a challenge for operators to coordinate the movement of data and have predictable data flows across multi-domain networks. The overarching goal of SENSE (The Software-defined network for End-to-end Networked Science at Exascale) is to enable National Labs and universities to request and provision end-to-end intelligent network services for their application workflows leveraging SDN (Software-Defined Networking) capabilities. This work aims to allow LHC Experiments and Rucio, the data management software used by CMS Experiment, to allocate and prioritize certain data transfers over the wide area network. In this paper, we will present the current progress of the integration of SENSE, Multi-domain end-to-end SDN Orchestration with QoS (Quality of Service) capabilities, with Rucio, the data management software used by CMS Experiment.
DOI: 10.1051/epjconf/202429504036
2024
Scientific Community Transfer Protocols, Tools, and Their Performance Based on Network Capabilities
The efficiency of high energy physics workflows relies on the ability to rapidly transfer data among the sites where the data is processed and analyzed. The best data transfer tools should provide a simple and reliable solution for local, regional, national and in some cases intercontinental data transfers. This work outlines the results of data transfer tool tests in internal and external 100 Gbps testbeds (including simulated latency and packet loss) and compares the results among the existing solutions, while also treating the issue of tuning parameters and methods to help optimize transfer rates. Many tools have been developed to facilitate data transfers over wide area networks. However, few studies have shown the tools' requirements, use cases, and reliability through comparative measurements. Here, we evaluate a variety of high-performance data transfer tools used today in the LHC and other scientific communities, such as FDT, WDT, and NDN, in different environments. Furthermore, the tests reproduce real-world data transfer examples to analyse each tool's strengths and weaknesses, including fault tolerance in the presence of packet loss. By comparing the tools in a controlled environment, we can shed light on their relative reliability and usability for academia and industry. This work also highlights the best tuning parameters for WAN and LAN transfers for maximum performance, in several cases.
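One tuning parameter that dominates WAN transfer performance is the TCP buffer (window) size, which must cover the bandwidth-delay product (BDP) to keep a high-latency path full. A small worked example; the link speed and RTT values are illustrative, not measurements from the paper.

    def bdp_bytes(bandwidth_bps, rtt_s):
        # Bandwidth-delay product: bits in flight on the path, converted to bytes.
        return bandwidth_bps * rtt_s / 8.0

    for rtt_ms in (10, 50, 100):                     # LAN-ish to intercontinental RTTs
        bdp = bdp_bytes(100e9, rtt_ms / 1000.0)      # a 100 Gbps testbed link
        print(f"RTT {rtt_ms:3d} ms -> buffer >= {bdp/1e9:.3f} GB per stream")

This is also why simulated latency and packet loss matter in such tests: an under-sized buffer or a single loss event can cap throughput far below the nominal link rate.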
DOI: 10.1007/978-1-4613-1147-8
1996
Cited 32 times
History of Original Ideas and Basic Discoveries in Particle Physics
The International Conference on the History of Original Ideas and Basic Discoveries, held at the Ettore Majorana Centre for Scientific Culture in Erice, Sicily, July 27-August 4, 1994, brought together sixty of the leading scientists including many Nobel Laureates in high energy physics, principal contributors in other fields of physics such as high T_c superconductivity, particle accelerators and detector instrumentation, and thirty-six talented younger physicists selected from candidates throughout the world.
DOI: 10.48550/arxiv.1807.02876
2018
Cited 15 times
Machine Learning in High Energy Physics Community White Paper
Machine learning has been applied to several problems in particle physics research, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas for machine learning in particle physics. We detail a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science. The main objective of the document is to connect and motivate these areas of research and development with the physics drivers of the High-Luminosity Large Hadron Collider and future neutrino experiments and identify the resource needs for their implementation. Additionally we identify areas where collaboration with external communities will be of great benefit.
DOI: 10.1109/23.682613
1998
Cited 29 times
A study on the radiation hardness of lead tungstate crystals
This report presents recent progress of a study on the radiation damage in lead tungstate (PbWO/sub 4/) crystals. The dose rate dependence of radiation damage in PbWO/sub 4/ has been observed. An optimization of the oxygen compensation through post-growth thermal annealing has led to PbWO/sub 4/ samples with significantly improved radiation hardness. Front irradiation is found to cause a factor of 2 to 6 times less severe damage than uniform irradiation. Lanthanum doping was found not to be a determining factor for PbWO/sub 4/ radiation hardness improvement. Finally, a TEM/EDS analysis revealed that the radiation damage in PbWO/sub 4/ crystals is caused by oxygen vacancies.
DOI: 10.48550/arxiv.hep-ph/0608079
2006
Cited 21 times
CP Studies and Non-Standard Higgs Physics
There are many possibilities for new physics beyond the Standard Model that feature non-standard Higgs sectors. These may introduce new sources of CP violation, and there may be mixing between multiple Higgs bosons or other new scalar bosons. Alternatively, the Higgs may be a composite state, or there may even be no Higgs at all. These non-standard Higgs scenarios have important implications for collider physics as well as for cosmology, and understanding their phenomenology is essential for a full comprehension of electroweak symmetry breaking. This report discusses the most relevant theories which go beyond the Standard Model and its minimal, CP-conserving supersymmetric extension: two-Higgs-doublet models and minimal supersymmetric models with CP violation, supersymmetric models with an extra singlet, models with extra gauge groups or Higgs triplets, Little Higgs models, models in extra dimensions, and models with technicolour or other new strong dynamics. For each of these scenarios, this report presents an introduction to the phenomenology, followed by contributions on more detailed theoretical aspects and studies of possible experimental signatures at the LHC and other colliders.
DOI: 10.1103/physrevlett.54.1750
1985
Cited 21 times
Measurement of Strong-Coupling Constant $\alpha_{s}$ to Second Order for $22 \lesssim \sqrt{s} \lesssim 46.78$ GeV
Using the Mark-J detector at the high-energy $e^{+}e^{-}$ collider PETRA, we compare the data from hadron production with the complete second-order QCD calculation over the energy region 22 to 46.78 GeV. We determine the QCD parameter $\Lambda = 100\pm30^{+60}_{-45}$ MeV, which yields the strong-coupling constant $\alpha_{s}=0.12\pm0.02$ for $\sqrt{s}=44$ GeV.
DOI: 10.1103/physrevd.91.112006
2015
Cited 12 times
Observation of seasonal variation of atmospheric multiple-muon events in the MINOS Near and Far Detectors
We report the first observation of seasonal modulations in the rates of cosmic ray multiple-muon events at two underground sites, the MINOS Near Detector with an overburden of 225 mwe, and the MINOS Far Detector site at 2100 mwe. At the deeper site, multiple-muon events with muons separated by more than 8 m exhibit a seasonal rate that peaks during the summer, similar to that of single-muon events. In contrast and unexpectedly, the rate of multiple-muon events with muons separated by less than 5--8 m, and the rate of multiple-muon events in the smaller, shallower Near Detector, exhibit a seasonal rate modulation that peaks in the winter.
DOI: 10.1109/indis.2018.00007
2018
Cited 12 times
SDN for End-to-End Networked Science at the Exascale (SENSE)
The Software-defined network for End-to-end Networked Science at Exascale (SENSE) research project is building smart network services to accelerate scientific discovery in the era of `big data' driven by Exascale, cloud computing, machine learning and AI. The project's architecture, models, and demonstrated prototype define the mechanisms needed to dynamically build end-to-end virtual guaranteed networks across administrative domains, with no manual intervention. In addition, a highly intuitive `intent' based interface, as defined by the project, allows applications to express their high-level service requirements, and an intelligent, scalable model-based software orchestrator converts that intent into appropriate network services, configured across multiple types of devices. The significance of these capabilities is the ability for science applications to manage the network as a first-class schedulable resource akin to instruments, compute, and storage, to enable well defined and highly tuned complex workflows that require close coupling of resources spread across a vast geographic footprint such as those used in science domains like high-energy physics and basic energy sciences.
DOI: 10.1016/0168-9002(87)90079-9
1987
Cited 19 times
Determination of trace elements in BGO by neutron activation analysis
We report on the determination of trace elements in a set of Bi4Ge3O12 (BGO) crystals by neutron activation analysis. By using doped polycrystalline BGO powders as calibration standards, we have measured the concentrations of Al, Mn, Cu, Co, Cr and Fe in the crystals with a sensitivity of 0.5–500 ppb, and we have determined the segregation coefficients. We have also studied the correlation between the trace elements and the “radiation damage” (color center formation) effect.
DOI: 10.5170/cern-2005-002.750
2004
Cited 19 times
Predicting the Resource Requirements of a Job Submission
Grid computing aims to provide an infrastructure for distributed problem solving in dynamic virtual organizations. It is gaining interest among many scientific disciplines as well as the industrial community. However, current grid solutions still require highly trained programmers with expertise in networking, high-performance computing, and operating systems. One of the big issues in full-scale usage of a grid is matching the resource requirements of job submission to the resources available on the grid. Resource brokers and job schedulers must make estimates of the resource usage of job submissions in order to ensure efficient use of grid resources. We prop ose a prediction engine that will operate as part of a grid scheduler. This prediction engine will provide estimates of the resources required by job submission based upon historical information. This paper presents the need for such a prediction engine and discusses two approaches for history based estimation.
DOI: 10.5170/cern-2005-002.1119
2005
Cited 19 times
Grid Enabled Analysis : Architecture, prototype and status
The Grid Analysis Environment (GAE), which is a continuation of the CAIGEE project [5], is an effort to develop, integrate and deploy a system for distributed analysis. The current focus within the GAE is on the CMS experiment [1]; however, the GAE design abstracts from any specific scientific experiment and focuses on scientific analysis in general. The GAE project does not intend to reinvent services, but rather to integrate existing services into a collaborative system of web services.
DOI: 10.1088/1742-6596/664/5/052033
2015
Cited 10 times
Named Data Networking in Climate Research and HEP Applications
The Computing Models of the LHC experiments continue to evolve from the simple hierarchical MONARC[2] model towards more agile models where data is exchanged among many Tier2 and Tier3 sites, relying on both large scale file transfers with strategic data placement, and an increased use of remote access to object collections with caching through CMS's AAA, ATLAS' FAX and ALICE's AliEn projects, for example. The challenges presented by expanding needs for CPU, storage and network capacity as well as rapid handling of large datasets of file and object collections have pointed the way towards future more agile pervasive models that make best use of highly distributed heterogeneous resources. In this paper, we explore the use of Named Data Networking (NDN), a new Internet architecture focusing on content rather than the location of the data collections. As NDN has shown considerable promise in another data intensive field, Climate Science, we discuss the similarities and differences between the Climate and HEP use cases, along with specific issues HEP faces and will face during LHC Run2 and beyond, which NDN could address.
DOI: 10.1016/s0370-2693(97)01082-4
1997
Cited 21 times
Measurements of mass, width and gauge couplings of the W boson at LEP
We report on measurements of mass and total decay width of the W boson and of triple-gauge-boson couplings, γWW and ZWW, with the L3 detector at LEP. W-pair events produced in e+e− interactions between 161 GeV and 172 GeV centre-of-mass energy are selected in a data sample corresponding to a total luminosity of 21.2 pb−1. The mass and total decay width of the W boson are determined to be $M_W = 80.75^{+0.26}_{-0.27}(\mathrm{exp.}) \pm 0.03\ (\mathrm{LEP})$ GeV and $\Gamma_W = 1.74^{+0.88}_{-0.78}(\mathrm{stat.}) \pm 0.25\ (\mathrm{syst.})$ GeV, respectively. Limits on anomalous triple-gauge-boson couplings, γWW and ZWW, are determined; in particular $-1.5 < \delta_Z < 1.9$ (95% CL), excluding vanishing ZWW coupling at more than 95% confidence level.
DOI: 10.1016/0168-583x(91)95561-q
1991
Cited 19 times
Light yield and surface treatment of barium fluoride crystals
We report on a study of the light yield and surface treatment of barium fluoride (BaF2) scintillation crystals. Using a bialkali photocathode, the photoelectron (p.e.) yield of BaF2 crystals was measured to be 130 p.e./MeV for the fast component and 700 p.e./MeV for the slow component. A somewhat hygroscopic nature of BaF2 was found. Teflon film was found to be the best wrapping material for the BaF2 crystals. The radiation damage of the BaF2 crystals can be fully annealed at 500°C for 3 hours.
DOI: 10.1016/0370-2693(86)90457-0
1986
Cited 17 times
The production and decay of tau leptons
A study of τ-lepton production in the CMS energy region from 14 to 46.8 GeV at PETRA is reported. The cross section, the decay branching ratio into μνν, and the electroweak parameters are determined with a total integrated luminosity of 115 pb−1.
DOI: 10.5170/cern-2005-002.830
2005
Cited 16 times
The Clarens Grid-enabled Web Services Framework : Services and Implementation
DOI: 10.1016/j.astropartphys.2010.10.010
2011
Cited 10 times
Observation in the MINOS far detector of the shadowing of cosmic rays by the sun and moon
The shadowing of cosmic ray primaries by the moon and sun was observed by the MINOS far detector at a depth of 2070 mwe using 83.54 million cosmic ray muons accumulated over 1857.91 live-days. The shadow of the moon was detected at the 5.6 σ level and the shadow of the sun at the 3.8 σ level using a log-likelihood search in celestial coordinates. The moon shadow was used to quantify the absolute astrophysical pointing of the detector to be 0.17 ± 0.12°. Hints of interplanetary magnetic field effects were observed in both the sun and moon shadow.
DOI: 10.1088/1742-6596/396/4/042065
2012
Cited 9 times
The DYNES Instrument: A Description and Overview
Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and leads to complications in the overall process of end-to-end data management.
DOI: 10.1145/3234200.3234208
2018
Cited 9 times
Fine-Grained, Multi-Domain Network Resource Abstraction as a Fundamental Primitive to Enable High-Performance, Collaborative Data Sciences
Recently, a number of multi-domain network resource information and reservation systems have been developed and deployed, driven by the demand and substantial benefits of providing predictable network resources. A major shortcoming of such systems, however, is that they are based on coarse-grained or localized information, resulting in substantial inefficiencies. In this paper, we present Explorer, a simple, novel, highly efficient multi-domain network resource discovery system to provide fine-grained, global network resource information, to support high-performance, collaborative data sciences. The core component of Explorer is the use of linear inequalities, referred to as resource state abstraction (ReSA), as a compact, unifying representation of multi-domain network available bandwidth, which simplifies applications without exposing network details. We develop a ReSA obfuscating protocol and a proactive full-mesh ReSA discovery mechanism to ensure the privacy and scalability of Explorer. We fully implement Explorer and demonstrate its efficiency and efficacy through extensive experiments using real network topologies and traces.
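To illustrate the resource-state-abstraction idea, a domain can expose only linear inequalities over the rates of the flows crossing it (for example shared bottlenecks), and a consumer can reason about feasible allocations without seeing the topology. The two-flow example, capacities and objective below are invented for illustration, not Explorer's actual interface.

    from scipy.optimize import linprog

    # x = (rate of flow 1, rate of flow 2) in Gbps
    # Domain A exposes:  x1 + x2 <= 100   (both flows share one 100 Gbps link)
    # Domain B exposes:  x1      <= 40    (only flow 1 crosses a 40 Gbps link)
    A_ub = [[1.0, 1.0],
            [1.0, 0.0]]
    b_ub = [100.0, 40.0]

    # Maximize total throughput x1 + x2 (linprog minimizes, so negate the objective).
    res = linprog(c=[-1.0, -1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("optimal rates (Gbps):", res.x, "total:", -res.fun)

The inequalities never reveal which links or routers produce them, which is the privacy property the obfuscating protocol is meant to preserve.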
DOI: 10.1016/j.future.2018.09.048
2019
Cited 8 times
Unicorn: Unified resource orchestration for multi-domain, geo-distributed data analytics
As the data volume increases exponentially over time, data-intensive analytics benefits substantially from multi-organizational, geographically-distributed, collaborative computing, where different organizations contribute various yet scarce resources, e.g., computation, storage and networking resources, to collaboratively collect, share and analyze extremely large amounts of data. By analyzing the data analytics trace from the Compact Muon Solenoid (CMS) experiment, one of the largest scientific experiments in the world, and systematically examining the design of existing resource management systems for clusters, we show that the multi-domain, geo-distributed, resource-disaggregated nature of this new paradigm calls for a framework to manage a large set of distributively-owned, heterogeneous resources, with the objective of efficient resource utilization, following the autonomy and privacy of different domains, and that the fundamental challenge for designing such a framework is: how to accurately discover and represent resource availability of a large set of distributively-owned, heterogeneous resources across different domains with minimal information exposure from each domain? Existing resource management systems are designed for single-domain clusters and cannot address this challenge. In this paper, we design Unicorn, the first unified resource orchestration framework for multi-domain, geo-distributed data analytics. In Unicorn, we encode the resource availability for each domain into resource state abstraction, a variant of the network view abstraction extended to accurately represent the availability of multiple resources with minimal information exposure using a set of linear inequalities. We then design a novel, efficient cross-domain query algorithm and a privacy-preserving resource information integration protocol to discover and integrate the accurate, minimal resource availability information for a set of data analytics jobs across different domains. In addition, Unicorn also contains a global resource orchestrator that computes optimal resource allocation decisions for data analytics jobs. We implement a prototype of Unicorn and present preliminary evaluation results to demonstrate its efficiency and efficacy. We also give a full demonstration of the Unicorn system at SuperComputing 2017.
DOI: 10.1109/jsac.2019.2927073
2019
Cited 8 times
Toward Fine-Grained, Privacy-Preserving, Efficient Multi-Domain Network Resource Discovery
Multi-domain network resource reservation systems are being deployed, driven by the demand for and substantial benefits of providing predictable network resources. However, a major limitation of existing systems is their coarse granularity, due to the participating networks' concerns about revealing sensitive information, which can result in substantial inefficiencies. This paper presents Mercator, a novel multi-domain network resource discovery system that provides fine-grained, global network resource information for collaborative sciences. The foundation of Mercator is a resource abstraction through algebraic-expression enumeration (i.e., linear inequalities/equations), as a compact representation of multiple properties of network resources (e.g., bandwidth, delay, and loss rate) in multi-domain networks. In addition, we develop an obfuscating protocol to address privacy concerns by ensuring that no participant can associate the algebraic expressions with the corresponding member networks. We also introduce a super-set projection technique to increase Mercator's scalability. We implement a prototype of Mercator and deploy it in a small federation network. We also evaluate the performance of Mercator through extensive experiments using real topologies and traces. Results show that Mercator 1) efficiently discovers available networking resources in collaborative networks, on average four orders of magnitude faster, and allows fairer allocations of network resources; 2) preserves the member networks' privacy with little overhead; and 3) scales to a collaborative network of 200 member networks.
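The goal of the obfuscating protocol, as stated here, is unlinkability: consumers see the algebraic expressions but cannot tell which member network contributed which. The toy sketch below captures only that goal by stripping origins and shuffling; the actual protocol in the paper is more involved, and all names are invented.

# Toy illustration (not Mercator's protocol): strip domain identity from
# resource expressions and shuffle the aggregate, so consumers see the
# constraints but cannot attribute any expression to a member network.
import random
import uuid

def obfuscate(per_domain_expressions):
    """per_domain_expressions: dict {domain_name: [expression strings]}"""
    anonymized = []
    for exprs in per_domain_expressions.values():
        for expr in exprs:
            # Replace the origin with a random, unlinkable identifier.
            anonymized.append({"id": uuid.uuid4().hex, "expr": expr})
    random.shuffle(anonymized)
    return anonymized

exprs = {
    "domainA": ["x1 + x2 <= 80"],
    "domainB": ["x2 + x3 <= 40", "x3 <= 25"],
}
for item in obfuscate(exprs):
    print(item["id"][:8], item["expr"])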
DOI: 10.1016/0168-9002(89)90369-0
1989
Cited 16 times
Calibration of the L3 BGO electromagnetic calorimeter with a radiofrequency quadrupole accelerator
A new calibration technique based on radiative capture of protons from an RFQ accelerator in a lithium target, which makes use of the resultant high intensity flux of 17.6 MeV photons, has been developed and tested. The technique is capable of calibrating the thousands of BGO crystals in the L3 electromagnetic calorimeter at once, with an absolute accuracy of better than 1% in 1–2 h. Systematic errors in the calibration, which have been studied earlier through Monte Carlo simulations, have been experimentally proven to be small and calculable, and are expected to be much less than 1%. When installed in the L3 experiment at LEP, this system will help ensure that the high resolution of the L3 electromagnetic calorimeter is maintained during running.
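The calibration principle is that the 17.6 MeV photon line deposits a known energy in each crystal, so the fitted peak position fixes the per-crystal conversion from readout counts to energy. The sketch below illustrates only this arithmetic; the channel names and peak values are invented.

# Back-of-the-envelope sketch of the calibration principle: a known-energy
# photon line (17.6 MeV) is reconstructed in each crystal, and the per-crystal
# calibration constant maps ADC counts to energy. Values are invented.
PHOTON_LINE_MEV = 17.6

def calibration_constant(measured_peak_adc):
    """MeV per ADC count for one crystal, from the fitted photon-line peak."""
    return PHOTON_LINE_MEV / measured_peak_adc

peaks = {"crystal_0001": 3520.0, "crystal_0002": 3488.5}  # fitted peak positions
for crystal, peak in peaks.items():
    print(crystal, round(calibration_constant(peak), 6), "MeV/count")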
DOI: 10.1109/icppw.2005.82
2005
Cited 14 times
The Clarens Web Service Framework for Distributed Scientific Analysis in Grid Projects
Large scientific collaborations are moving towards service oriented architectures for implementation and deployment of globally distributed systems. Clarens is a high performance, easy to deploy Web service framework that supports the construction of such globally distributed systems. This paper discusses some of the core functionality of Clarens that the authors believe is important for building distributed systems based on Web services that support scientific analysis.
DOI: 10.1103/physrevd.95.012005
2017
Cited 7 times
Search for flavor-changing nonstandard neutrino interactions using $\nu_{e}$ appearance in MINOS
We report new constraints on flavor-changing nonstandard neutrino interactions from the MINOS long-baseline experiment using $\nu_{e}$ and $\bar{\nu}_{e}$ appearance candidate events from predominantly $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ beams. We used a statistical selection algorithm to separate $\nu_{e}$ candidates from background events, enabling an analysis of the combined MINOS neutrino and antineutrino data. We observe no deviations from standard neutrino mixing, and thus place constraints on the nonstandard interaction matter effect, $|\epsilon_{e\tau}|$, and phase, $(\delta_{CP}+\delta_{e\tau})$, using a 30-bin likelihood fit.
DOI: 10.1016/j.future.2020.04.018
2020
Cited 7 times
Software-Defined Network for End-to-end Networked Science at the Exascale
Domain science applications and workflow processes are currently forced to view the network as an opaque infrastructure into which they inject data and hope that it emerges at the destination with an acceptable Quality of Experience. There is little ability for applications to interact with the network to exchange information, negotiate performance parameters, discover expected performance metrics, or receive status and troubleshooting information in real time. The work presented here is motivated by a vision for a new smart network and smart application ecosystem that will provide a more deterministic and interactive environment for domain science workflows. The Software-Defined Network for End-to-end Networked Science at Exascale (SENSE) system includes a model-based architecture, implementation, and deployment which enables automated end-to-end network service instantiation across administrative domains. An intent-based interface allows applications to express their high-level service requirements, while an intelligent orchestrator and resource control systems allow for custom tailoring of scalability and real-time responsiveness based on individual application and infrastructure operator requirements. This allows science applications to manage the network as a first-class schedulable resource, as is the current practice for instruments, compute, and storage systems. Deployment and experiments on production networks and testbeds have validated SENSE functions and performance. Emulation-based testing verified the scalability needed to support research and education infrastructures. Key contributions of this work include an architecture definition, reference implementation, and deployment. This provides the basis for further innovation of smart network services to accelerate scientific discovery in the era of big data, cloud computing, machine learning and artificial intelligence.
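To illustrate the intent-based style of interaction described here, the sketch below shows a request in which an application states what it needs (endpoints, guaranteed bandwidth, time window) rather than how to provision it. The field names and values are invented for this sketch and are not the SENSE API.

# Illustration only: an "intent"-style service request. An orchestrator would
# negotiate this across administrative domains and return either a confirmed
# reservation or alternatives (e.g., a later time window).
import json

intent = {
    "service": "point-to-point-circuit",
    "endpoints": ["site-A:dtn-1", "site-B:dtn-7"],
    "guaranteed_bandwidth_gbps": 100,
    "schedule": {"start": "2020-05-01T00:00:00Z", "end": "2020-05-01T06:00:00Z"},
    "constraints": {"max_latency_ms": 120},
}

print(json.dumps(intent, indent=2))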
DOI: 10.1016/s0168-9002(96)01030-3
1997
Cited 16 times
Studies of lead tungstate crystal matrices in high energy beams for the CMS electromagnetic calorimeter at the LHC
Using matrices of lead tungstate crystals, energy resolutions better than 0.6% at 100 GeV have been achieved in the test beam in 1995. It has been demonstrated that a lead tungstate electromagnetic calorimeter read out by avalanche photodiodes can consistently achieve the excellent energy resolutions necessary to justify its construction in the CMS detector. The performance achieved has been understood in terms of the properties of the crystals and photodetectors.
2001
Cited 15 times
An International Virtual-Data Grid Laboratory for Data Intensive Science
DOI: 10.1007/978-1-4613-2451-5
1985
Cited 14 times
Electroweak Effects at High Energies
The first Europhysics Study Conference on Electroweak Effects at High Energies was held at the "Ettore Majorana" Centre for Scientific Culture in Erice, Sicily, from February 1–12, 1983.
DOI: 10.1016/j.nima.2014.11.041
2015
Cited 6 times
Precision timing measurements for high energy photons
Particle colliders operating at high luminosities present challenging environments for high energy physics event reconstruction and analysis. We discuss how timing information, with a precision on the order of 10 ps, can aid in the reconstruction of physics events under such conditions. We present calorimeter-based timing measurements from test beam experiments in which we explore the ultimate timing precision achievable for high energy photons or electrons of 10 GeV and above. Using a prototype calorimeter consisting of a 1.7 × 1.7 × 1.7 cm³ lutetium-yttrium oxyorthosilicate (LYSO) crystal cube, read out by micro-channel plate photomultipliers, we demonstrate a time resolution of 33.5 ± 2.1 ps for an incoming beam energy of 32 GeV. In a second measurement, using a 2.5 × 2.5 × 20 cm³ LYSO crystal placed perpendicularly to the electron beam, we achieve a time resolution of 59 ± 11 ps using a beam energy of 4 GeV. We also present timing measurements made using a shashlik-style calorimeter cell made of LYSO and tungsten plates, and demonstrate that the apparatus achieves a time resolution of 54 ± 5 ps for an incoming beam energy of 32 GeV.
DOI: 10.48550/arxiv.2203.04312
2022
Cited 3 times
Dual-Readout Calorimetry for Future Experiments Probing Fundamental Physics
In this White Paper for the 2021 Snowmass process, we detail the status and prospects of dual-readout calorimetry. While all calorimeters allow estimation of the energy deposited in their active material, dual-readout calorimeters aim to provide additional information on the light produced in the sensitive media via, for example, wavelength and polarization, and/or precision timing measurements, allowing an estimation of the shower-by-shower particle content. Utilizing this knowledge of the shower particle content may allow unprecedented energy resolution for hadronic particles and jets and new types of particle flow algorithms. We also discuss the impact that continued development of this kind of calorimetry could have on the precision of Higgs boson property measurements at future colliders.
DOI: 10.5555/510378.510641
2000
Cited 13 times
The MONARC toolset for simulating large network-distributed processing systems
The next generation of High Energy Physics experiments have envisaged the use of network-distributed Petabyte-scale data handling and computing systems of unprecedented complexity. The general concept is that of a Data Grid Hierarchy in which the central facility at the European Laboratory for Particle Physics (CERN) in Geneva will interact with and coherently manage tasks shared by and distributed amongst national (Tier1) Regional Centres situated in the US, Europe, and Asia. CERN and the Tier1 centres will further communicate and task-share with the Tier2 Regional Centres, Tier3 centres serving individual universities or research groups, and thousands of Tier4 desktops and small servers. The design and optimization of systems with this level of complexity requires a realistic description and modeling of the data access patterns, the data flow across the local and wide area networks, and the scheduling and workload presented by hundreds of jobs running concurrently on large scale distributed systems exchanging very large amounts of data. The simulation toolset developed within the Models Of Networked Analysis at Regional Centers (MONARC) project provides a code- and execution-time-efficient design and optimisation framework for large scale distributed systems. A process-oriented approach for discrete event simulation has been adopted because it is well suited to describe various activities running concurrently, as well as the stochastic arrival patterns typical of this class of simulations. Threaded objects or Active Objects provide a natural way to map the specific behaviour of distributed data processing (and the required flows of data across the networks) into the simulation program. The simulation program is based on Java2™ technology because of its support for the methods and techniques needed to develop an efficient and flexible distributed, process-oriented simulation. This includes a convenient set of interactive graphical presentation and analysis tools, which are essential for the development and effective use of the simulation system. The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modelling of complex data access patterns by multiple concurrent users in large scale computing systems in a wide range of possible architectures. A comparison between queuing theory and realistic client-server measurements is also presented.
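MONARC itself is a Java toolset; the minimal Python sketch below only illustrates the underlying discrete-event idea, here for the simplest possible model of jobs queueing to transfer data over a shared FIFO link. All numbers are invented.

# Minimal, generic discrete-event sketch (not MONARC): jobs arrive, queue for a
# link, and transfer a fixed data volume one at a time at the full link rate.
import heapq

def simulate(arrivals_s, volume_gb, link_gbps):
    """Event-driven FIFO link. Returns {job_id: completion_time_s}."""
    service_s = volume_gb * 8.0 / link_gbps          # transfer time per job
    events = [(t, "arrival", i) for i, t in enumerate(arrivals_s)]
    heapq.heapify(events)
    queue, busy, done = [], False, {}
    while events:
        time, kind, job = heapq.heappop(events)
        if kind == "arrival":
            queue.append(job)
        else:                                        # "departure": link frees up
            done[job] = time
            busy = False
        if not busy and queue:
            nxt = queue.pop(0)
            heapq.heappush(events, (time + service_s, "departure", nxt))
            busy = True
    return done

print(simulate([0.0, 1.0, 2.0, 50.0], volume_gb=100, link_gbps=10))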
DOI: 10.1109/tns.2005.852755
2005
Cited 10 times
Distributed computing grid experiences in CMS
The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.
DOI: 10.1109/icppw.2005.76
2005
Cited 9 times
Resource Management Services for a Grid Analysis Environment
Selecting optimal resources for submitting jobs on a computational grid or accessing data from a data grid is one of the most important tasks of any grid middleware. Most modern grid software today satisfies this responsibility and gives a best-effort performance to solve this problem. Almost all decisions regarding scheduling and data access are made by the software automatically, giving users little or no control over the entire process. To solve this problem, a more interactive set of services and middleware is desired that provides users more information about grid weather, and gives them more control over the decision making process. This paper presents a set of services that have been developed to provide more interactive resource management capabilities within the grid analysis environment (GAE) being developed collaboratively by Caltech, NUST and several other institutes. These include a steering service, a job monitoring service and an estimator service that have been designed and written using a common grid-enabled Web services framework named Clarens. The paper also presents a performance analysis of the developed services to show that they have indeed resulted in a more interactive and powerful system for user-centric grid-enabled physics analysis.
DOI: 10.1109/uic-atc.2017.8397409
2017
Cited 5 times
Unicorn: Unified resource orchestration for multi-domain, geo-distributed data analytics
Data-intensive analytics is entering the era of multi-organizational, geographically-distributed, collaborative computing, where different organizations contribute various resources, e.g., sensing, computation, storage and networking resources, to collaboratively collect, share and analyze extremely large amounts of data. This new paradigm calls for a framework to manage a large set of distributively owned, heterogeneous resources, with the fundamental objective of efficient resource utilization while respecting the autonomy and privacy of resource owners. In this paper, we design Unicorn, the first unified framework that accomplishes this goal. The foundation of Unicorn is RSDP, an autonomous, privacy-preserving resource discovery and representation system that provides accurate resource availability information. Its core is a novel abstraction called the resource vector abstraction, which describes resource availability as a set of linear constraints. In addition, Unicorn provides a series of advanced solutions to support automatic, efficient management of resource dynamics on both the supply and demand sides, including an automatic workflow transformer, an intelligent resource demand estimator and an efficient, scalable multi-resource orchestrator. Being the first unified framework for this new paradigm, Unicorn plays a fundamental role in next-generation data-intensive collaborative computing systems.
DOI: 10.1109/wsc.2000.899171
2002
Cited 11 times
The MONARC toolset for simulating large network-distributed processing systems
The next generation of High Energy Physics experiments have envisaged the use of network-distributed Petabyte-scale data handling and computing systems of unprecedented complexity. The design and optimization of systems with this level of complexity requires a realistic description and modeling of the data access patterns, the data flow across the local and wide area networks, and the scheduling and workload presented by hundreds of jobs running concurrently on large scale distributed systems exchanging very large amounts of data. The simulation toolset developed within the "Models Of Networked Analysis at Regional Centers" (MONARC) project provides a code- and execution-time-efficient design and optimisation framework for large scale distributed systems. A process-oriented approach for discrete event simulation has been adopted. Threaded objects or "Active Objects" provide a natural way to map the specific behaviour of distributed data processing (and the required flows of data across the networks) into the simulation program. This simulation program is based on Java2™ technology because of its support for the techniques needed to develop an efficient and flexible distributed, process-oriented simulation. This includes a convenient set of interactive graphical presentation and analysis tools. The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modelling of complex data access patterns by multiple concurrent users in large scale computing systems in a wide range of possible architectures. A comparison between queuing theory and realistic client-server measurements is also presented.
DOI: 10.1109/icws.2005.71
2005
Cited 8 times
JClarens: a Java framework for developing and deploying Web services for grid computing
High energy physics (HEP) and other scientific communities have adopted service oriented architectures (SOA) as part of a larger grid computing effort. This effort involves the integration of many legacy applications and programming libraries into a SOA framework. The grid analysis environment (GAE) (Lingen et al., 2004) is such a service oriented architecture based on the Clarens grid services framework (Steenberg et al., 2004) and is being developed as part of the compact muon solenoid (CMS) experiment at the large hadron collider (LHC) at European Laboratory for Particle Physics (CERN). Clarens provides a set of authorization, access control, and discovery services, as well as XMLRPC and SOAP access to all deployed services. Two implementations of the Clarens Web services framework (Python and Java) offer integration possibilities for a wide range of programming languages. This paper describes the Java implementation of the Clarens Web services framework called 'JClarens' and several Web services of interest to the scientific and grid community that have been deployed using JClarens.
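Since Clarens exposes its services over XML-RPC and SOAP, a minimal XML-RPC service written with Python's standard library gives a feel for the style of remote call involved. The method name and behaviour below are invented for illustration; this is not the Clarens or JClarens API.

# Minimal XML-RPC service using only the Python standard library.
from xmlrpc.server import SimpleXMLRPCServer

def echo(message):
    return "server received: " + message

server = SimpleXMLRPCServer(("localhost", 8080), allow_none=True)
server.register_function(echo, "system.echo")   # expose one illustrative method
print("serving XML-RPC on localhost:8080 ...")
server.serve_forever()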
DOI: 10.1145/3234200.3234207
2018
Cited 5 times
SFP
Interdomain routing using BGP is widely deployed and well understood. The deployment of SDN in BGP domain networks, however, has not been systematically studied. In this paper, we first show that the use-announcement inconsistency is a fundamental mismatch in such a deployment, leading to serious issues including unnecessary blackholes, unnecessarily reduced reachability, and permanent forwarding loops. We then design SFP, the first fine-grained interdomain routing protocol that extends BGP with fine-grained routing, eliminating the aforementioned mismatch. We develop two novel techniques, automatic receiver filtering and on-demand information dissemination, to address the scalability issues brought by fine-grained routing. Evaluating SFP using real network topologies and traces for its intended settings, which are not the global Internet but tens of collaborative domains, we show that SFP can reduce the amount of traffic affected by blackholes and loops by more than 50%, and that our proposed techniques can reduce the amount of signaling between ASes by three orders of magnitude compared with naive fine-grained routing.
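The receiver-filtering idea can be pictured as follows: a sender keeps the filters its neighbour has installed and only forwards fine-grained announcements that match them, cutting update volume. The sketch below is a toy rendering of that idea with invented data structures, not the SFP protocol machinery.

# Toy sketch of receiver filtering for fine-grained routing announcements.
import ipaddress

def matches(announcement, flt):
    """Match if the destination prefix falls inside the filter prefix and the
    (optional) DSCP value agrees."""
    dst_ok = ipaddress.ip_network(announcement["dst"]).subnet_of(
        ipaddress.ip_network(flt["dst"]))
    dscp_ok = flt.get("dscp") is None or announcement.get("dscp") == flt["dscp"]
    return dst_ok and dscp_ok

def disseminate(announcements, receiver_filters):
    return [a for a in announcements if any(matches(a, f) for f in receiver_filters)]

updates = [
    {"dst": "10.1.0.0/24", "dscp": 46, "next_hop": "AS64500"},
    {"dst": "10.2.0.0/24", "dscp": 0,  "next_hop": "AS64501"},
]
filters = [{"dst": "10.1.0.0/16", "dscp": 46}]
print(disseminate(updates, filters))   # only the first update is forwarded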
DOI: 10.1145/2830318.2830320
2015
Cited 4 times
High speed scientific data transfers using software defined networking
The massive data volumes acquired, simulated, processed and analyzed by globally distributed scientific collaborations continue to grow exponentially. One leading example is the LHC program, now at the start of its second three-year data taking cycle, searching for new particles and interactions in a previously inaccessible range of energies, which has experienced a 70% growth in peak data transfer rates over the last 12 months alone. Other major science programs such as LSST and SKA, and other disciplines ranging from earth observation to genomics, are expected to have similar or greater needs than the LHC program within the next decade. The development of new methods for fast, efficient and reliable data transfers over national and global distances, and a new generation of intelligent, software-driven networks capable of supporting multiple science programs with diverse needs for high-volume and/or real-time data delivery, are essential if these programs are to continue to progress and meet their goals. In this paper we describe activities of the Caltech High Energy Physics team and collaborators related to the use of Software Defined Networking to help achieve fast and efficient data distribution and access. Results from Supercomputing 2014 are presented together with our work on the Advanced Network Services for the Experiments project and a new project developing a Next Generation Integrated SDN Architecture, as well as our plans for Supercomputing 2015.
DOI: 10.1109/nssmic.2015.7581951
2015
Cited 4 times
Studies of wavelength-shifting liquid filled quartz capillaries for use in a proposed CMS calorimeter
Studies have been carried out, and are continuing, on the design and construction of a Shashlik detector using radiation-hard quartz capillaries filled with wavelength-shifting liquid to collect the scintillation light from LYSO crystals, for use as a calorimeter in the Phase II CMS upgrade at CERN. The work presented here focuses on the studies of the capillaries and liquids that would best suit the purpose of the detector. Comparisons of various liquids, concentrations, and capillary construction techniques are discussed.
2016
Cited 4 times
Measurement of transverse momentum relative to dijet systems in PbPb and pp collisions at √sNN = 2.76 TeV
DOI: 10.1016/0168-9002(94)01208-3
1995
Cited 12 times
Optical bleaching in situ for barium fluoride crystals
This report presents results of optical bleaching for barium fluoride crystals to be used to construct a precision electromagnetic calorimeter at future hadron colliders. A practical technique for in situ bleaching by illuminating crystals with light from a UV lamp through an optical fiber is established. The light attenuation length of current production BaF2 crystals can be set to around 200 (160) cm under 400 (130) rad/h by using 4.3 (1.1) mW bleaching light from a mercury (xenon) lamp through a ∅0.6 mm fiber. The color center dynamics model proposed in our early publication [1] has also been verified. The technical approach described in this report can be used for other candidate crystals to maintain an adequate light attenuation length in a radiation environment.
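The equilibrium attenuation lengths quoted here arise from a balance between radiation-induced color-center formation and optical bleaching. A rate equation of the generic form below illustrates that balance; this form is only an assumption for orientation, and the quantitative model actually used is the one cited as [1].

$$
\frac{dn}{dt} \;=\; a\,R\,(n_{\max}-n)\;-\;b\,I\,n,
\qquad
n_{\mathrm{eq}} \;=\; \frac{a\,R\,n_{\max}}{a\,R+b\,I},
$$

where $n$ is the color-center density (which sets the light attenuation length), $R$ the dose rate, $I$ the bleaching-light intensity, and $a$, $b$ material constants; raising the bleaching power lowers $n_{\mathrm{eq}}$ and thus maintains a longer attenuation length.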
DOI: 10.1109/23.507152
1996
Cited 12 times
A study of the optical and radiation damage properties of lead tungstate crystals
A study has been made of the optical and radiation damage properties of undoped and niobium doped lead tungstate crystals. Data were obtained on the optical absorbance, the intensity and decay time of the scintillation light output, and the radioluminescence and photoluminescence emission spectra. Radiation damage was studied in several undoped and niobium doped samples using /sup 60/Co gamma ray irradiation. The change in optical absorption and observed scintillation light output was measured as a function of dose up to total cumulative doses on the order of 800 krad. The radiation induced phosphorescence and thermoluminescence was also measured, as well as recovery from damage by optical bleaching and thermal annealing. An investigation was also made to determine trace element impurities in several samples.
DOI: 10.1007/bf02730309
1977
Cited 8 times
Associated multiplicities in proton-proton collisions with a charged hadron at X = 0 at the CERN ISR
DOI: 10.48550/arxiv.cs/0407012
2004
Cited 7 times
A Taxonomy and Survey of Grid Resource Planning and Reservation Systems for Grid Enabled Analysis Environment
The concept of coupling geographically distributed resources to solve large-scale problems is becoming increasingly popular, forming what is popularly called grid computing. Management of resources in the Grid environment becomes complex as the resources are geographically distributed, heterogeneous in nature, and owned by different individuals and organizations, each having their own resource management policies and different access and cost models. Many projects have designed and implemented resource management systems with a variety of architectures and services. In this paper we present the general requirements that a resource management system should satisfy. We also define a taxonomy, based on which a survey of resource management systems in existing Grid projects is conducted to identify the key areas where these systems lack the desired functionality.
DOI: 10.1016/j.future.2004.10.011
2005
Cited 7 times
The DataTAG transatlantic testbed
Wide area network testbeds allow researchers and engineers to test out new equipment, protocols and services in real-life situations, without jeopardizing the stability and reliability of production networks. The Data TransAtlantic Grid (DataTAG) testbed, deployed in 2002 between CERN, Geneva, Switzerland and StarLight, Chicago, IL, USA, is probably the largest testbed built to date. Jointly managed by CERN and Caltech, it is funded by the European Commission, the U.S. Department of Energy and the U.S. National Science Foundation. The main objectives of this testbed are to improve the Grid community's understanding of the networking issues posed by data-intensive Grid applications over transoceanic gigabit networks, design and develop new Grid middleware services, and improve the interoperability of European and U.S. Grid applications in High-Energy and Nuclear Physics. In this paper, we give an overview of this testbed, describe its various topologies over time, and summarize the main lessons learned after two years of operation.
DOI: 10.1109/icws.2004.1314803
2004
Cited 7 times
JClarens: a Java based interactive physics analysis environment for data intensive applications
In this paper we describe JClarens, a Java-based implementation of the Clarens remote data server. JClarens provides Web services for an interactive analysis environment to dynamically access and analyze the tremendous amount of data scattered across various locations. Additionally, this research aims to develop a service-oriented grid-enabled portal (GEP) that provides an interface and access to several grid services, giving a homogeneous and optimized view of the distributed and heterogeneous environment. Besides the platform-independent behavior provided by Java, the use of XML-RPC based Web services enables JClarens to be a language-neutral server and demonstrates interoperability with its Python variant. Extreme care has been taken in the usage and manipulation of various Java libraries to cater to the needs of high performance computing. The overall exercise has yielded a prototype with a strong emphasis on security and virtual organization management (VOM). This provides a common platform to support the development of a larger, more flexible framework, with the future aim of integrating it into a loosely coupled, decentralized, and autonomous framework for a grid-enabled analysis environment (GAE).
DOI: 10.1145/2110217.2110224
2011
Cited 4 times
Scientific data movement enabled by the DYNES instrument
Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, thus complicating the problem of end-to-end data management. Networking solutions provided by R&E-focused organizations often serve as a vital link between these distributed components. Capacity and traffic management are key concerns of these network operators; a delicate balance is required to serve both long-lived, high-capacity network flows and more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable-duration, guaranteed-bandwidth networking channels, has afforded operations staff greater control over traffic demands and has increased the overall quality of service for scientific users. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services. This combination of hardware and software innovation is being deployed across R&E networks in the United States, with end sites located at university campuses. DYNES is peering with international efforts in other countries using similar solutions, increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs it can be integrated into existing data movement software.
DOI: 10.1364/jocn.9.00a162
2017
Cited 4 times
Next-Generation Exascale Network Integrated Architecture for Global Science [Invited]
The next-generation exascale network integrated architecture (NGENIA-ES) is a project specifically designed to accomplish new levels of network and computing capabilities in support of global science collaborations through the development of a new class of intelligent, agile networked systems. Its path to success is built upon our ongoing developments in multiple areas, strong ties among our high energy physics, computer and network science, and engineering teams, and our close collaboration with key technology developers and providers deeply engaged in the national strategic computing initiative (NSCI). This paper describes the building of a new class of distributed systems, our work with the leadership computing facilities (LCFs), the use of software-defined networking (SDN) methods, and the use of data-driven methods for the scheduling and optimization of network resources. Sections I–III present the challenges of data-intensive research and the important ingredients of this ecosystem. Sections IV–VI describe some crucial elements of the foreseen solution and some of the progress so far. Sections VII–IX go into the details of orchestration, software-defined networking, and scheduling optimization. Finally, Section X discusses engagement and partnerships, and Section XI gives a summary.
DOI: 10.1016/s0920-5632(95)80085-9
1995
Cited 10 times
Scintillating crystals in a radiation environment
The unique physics capability of a crystal calorimeter is the result of its superb energy resolution, hermetic coverage and fine granularity. However, its energy resolution can only be maintained in situ if the crystal is sufficiently radiation hard. This report summarizes the performance of large-size crystals (CsI, BaF2, CeF3, BGO and PbWO4) in a radiation environment. Technical approaches to solving the radiation damage problem are discussed, with particular emphasis on crystal quality control in the growing process. An approach to solving the radiation damage problem by implementing optical bleaching in situ is elaborated.
DOI: 10.1016/0370-2693(86)90158-9
1986
Cited 9 times
A measurement of the strong coupling constant αs to complete second order
The strong interaction coupling constant $\alpha_s$ has been measured with a new method, the planar triple energy correlation, in the reaction $e^+e^- \to$ hadrons at center-of-mass energies ranging from 14 GeV to 46.78 GeV. A complete second-order perturbative QCD calculation was used. $\Lambda_{\overline{\mathrm{MS}}} = 110 \pm 30\,^{+70}_{-55}$ MeV is found.
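For orientation, the standard second-order (two-loop) running-coupling expression commonly used to relate a measured $\alpha_s$ at scale $Q$ to $\Lambda_{\overline{\mathrm{MS}}}$, for $n_f$ quark flavors, is shown below; it is quoted here as the textbook form, not taken from the paper itself.

$$
\alpha_s(Q^2) \;=\; \frac{12\pi}{(33-2n_f)\,\ln(Q^2/\Lambda^2)}
\left[\,1 \;-\; \frac{6\,(153-19 n_f)}{(33-2 n_f)^2}\,
\frac{\ln\ln(Q^2/\Lambda^2)}{\ln(Q^2/\Lambda^2)}\,\right],
\qquad \Lambda \equiv \Lambda_{\overline{\mathrm{MS}}} .
$$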
DOI: 10.48550/arxiv.cs/0306001
2003
Cited 7 times
Clarens Client and Server Applications
Several applications have been implemented with access via the Clarens web service infrastructure, including virtual organization management, JetMET physics data analysis using relational databases, and Storage Resource Broker (SRB) access. This functionality is accessible transparently from Python scripts, the Root analysis framework and from Java applications and browser applets.
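Since this functionality is described as transparently accessible from Python scripts, a generic XML-RPC client call illustrates that access pattern, complementing the server-side sketch shown earlier. The endpoint URL and method name are placeholders, not an actual Clarens deployment or API.

# Generic XML-RPC client call from a Python script.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8080/")
print(proxy.system.echo("hello from a Python analysis script"))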
DOI: 10.2172/1163919
2014
Cited 3 times
OLiMPS. OpenFlow Link-layer MultiPath Switching
The OLiMPS project’s goal was the development of an OpenFlow controller application allowing load balancing over multiple switched paths across a complex network topology. The second goal was to integrate the controller with Dynamic Circuit Network systems such as ESnet’s OSCARS. Both goals were achieved successfully, as laid out in this report.
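As a toy illustration of load balancing over multiple switched paths, the sketch below pins each new flow to the currently least-loaded of several precomputed paths. This greedy rule stands in for the controller logic only; it is not the OLiMPS application, and all names and rates are invented.

# Toy multipath load balancing: greedily pin flows to the least-loaded path.
def assign_flows(flows, paths):
    """flows: dict {flow_id: expected_rate_gbps}; paths: list of path names.
    Returns {flow_id: path}, assigning larger flows first."""
    load = {p: 0.0 for p in paths}
    assignment = {}
    for flow_id, rate in sorted(flows.items(), key=lambda kv: -kv[1]):
        best = min(load, key=load.get)        # least-loaded path so far
        assignment[flow_id] = best
        load[best] += rate
    return assignment

flows = {"f1": 8.0, "f2": 3.0, "f3": 5.0, "f4": 2.0}
print(assign_flows(flows, ["path_A", "path_B"]))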
DOI: 10.1016/0168-9002(89)91478-2
1989
Cited 8 times
Calibration of electromagnetic calorimeters in high energy experiments with a Radio Frequency Quadrupole accelerator
A fast, effective calibration technique has been developed for future Superconducting Supercollider (SSC) calorimeters based upon the radiative capture of protons from a pulsed Radio Frequency Quadrupole (RFQ) accelerator in a fluoride target. The intense flux of low energy photons acts as a clean “pulse generator” calibration signal equivalent to 20 GeV or more. This calibration technique has been demonstrated with a bismuth germanate (BGO) detector array, as well as with several barium fluoride detector crystals, to provide a calibration accuracy of 0.5% within two minutes. The SSC calibration has resulted from the development of this novel technique by Caltech over the past five years for the L3 BGO electromagnetic calorimeter, which uses a lithium target for the production of 17.6 MeV low energy photons for the calibration source. Proven techniques have been developed to provide an in situ calibration for all 11 000 BGO detector crystals, with an accuracy of 0.8% obtainable in 1–2 h. The result of the experimental test by using the AccSys RFQ accelerator is reported, as is the preliminary concept of a small storage ring that can be used to compress the output beam pulse from the RFQ accelerator into a target beam pulse of 100 ns or less.
2003
Cited 6 times
The Clarens web services architecture
Clarens is a Grid-enabled web service infrastructure implemented to augment the current batch-oriented Grid services computing model in the Compact Muon Solenoid (CMS) experiment at the LHC. Clarens servers leverage the Apache web server to provide a scalable framework for clients to communicate with services using the SOAP and XML-RPC protocols. The framework provides security, session management, persistent storage and service discovery, relying on existing standards wherever possible instead of inventing new ones. This paper describes the basic architecture of Clarens, while a companion paper describes clients and services that take advantage of this architecture. More information and documentation is also available at the Clarens web page at http://clarens.sourceforge.net.
2003
Cited 6 times
Distributed Heterogeneous Relational Data Warehouse In A Grid Environment
This paper examines how a Distributed Heterogeneous Relational Data Warehouse can be integrated into a Grid environment that will provide physicists with efficient access to large and small object collections drawn from databases at multiple sites. The paper investigates the requirements of Grid-enabling such a warehouse, and explores how these requirements may be met by extensions to existing Grid middleware. We present initial results obtained with a working prototype warehouse of this kind using both SQLServer and Oracle9i, where a Grid-enabled web-services interface makes it easier for web applications to access the distributed contents of the databases securely. Based on the success of the prototype, we propose a framework for using a heterogeneous relational data warehouse through the web-service interface, creating a single Virtual Database System for users. The ability to transparently access data in this way, as shown in the prototype, is likely to be a very powerful facility for HENP and other grid users wishing to collate and analyze information distributed over the Grid.
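A minimal sketch of the "virtual database" idea is to run the same query against each back end and merge the results. The real prototype used SQLServer and Oracle9i behind a Grid-enabled web-services layer; SQLite and the toy schema below are used only to keep the sketch self-contained.

# Minimal federation sketch: one query fanned out to several databases.
import sqlite3

def make_site_db(rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (run INTEGER, n_jets INTEGER)")
    db.executemany("INSERT INTO events VALUES (?, ?)", rows)
    return db

sites = {
    "site_A": make_site_db([(1, 3), (2, 5)]),
    "site_B": make_site_db([(7, 2), (9, 4)]),
}

def federated_query(sql, params=()):
    merged = []
    for site, db in sites.items():
        for row in db.execute(sql, params):
            merged.append((site,) + row)      # tag each row with its origin site
    return merged

print(federated_query("SELECT run, n_jets FROM events WHERE n_jets >= ?", (3,)))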
DOI: 10.1007/978-3-540-30208-7_145
2004
Cited 5 times
Distributed Analysis and Load Balancing System for Grid Enabled Analysis on Hand-Held Devices Using Multi-agents Systems
Handheld devices, while growing rapidly in number, are inherently constrained and lack the capability of executing resource-hungry applications. This paper presents the design and implementation of a distributed analysis and load-balancing system for hand-held devices using a multi-agent system. This system enables low-resource mobile handheld devices to act as potential clients for Grid-enabled applications and analysis environments. We propose a system in which mobile agents transport, schedule, execute and return results for heavy computational jobs submitted by handheld devices. In this way, our system provides a high-throughput computing environment for hand-held devices.
DOI: 10.1109/sc.2018.00008
2018
Cited 3 times
Fine-Grained, Multi-Domain Network Resource Abstraction as a Fundamental Primitive to Enable High-Performance, Collaborative Data Sciences
Multi-domain network resource reservation systems are being deployed, driven by the demand for and substantial benefits of providing predictable network resources. However, a major limitation of existing systems is their coarse granularity, due to the participating networks' concerns about revealing sensitive information, which can result in substantial inefficiencies. This paper presents Mercator, a novel multi-domain network resource discovery system that provides fine-grained, global network resource information for collaborative sciences. The foundation of Mercator is a resource abstraction through algebraic-expression enumeration (i.e., linear inequalities/equations), as a compact representation of the available bandwidth in multi-domain networks. In addition, we develop an obfuscating protocol to address privacy concerns by ensuring that no participant can associate the algebraic expressions with the corresponding member networks. We also introduce a super-set projection technique to increase Mercator's scalability. Finally, we implement Mercator and demonstrate both its efficiency and efficacy through extensive experiments using real topologies and traces.
DOI: 10.22323/1.239.0018
2016
ANSE Update on the Use of Dynamic Circuits in PhEDEx
ANSE (Advanced Network Services for Experiments) is an NSF-funded project which aims to incorporate advanced network-aware tools into the mainstream production workflows of the LHC's two largest experiments: ATLAS and CMS. For CMS, this translates into the integration of bandwidth provisioning capabilities in PhEDEx, its data-transfer management tool. PhEDEx controls the large-scale data flows on the WAN across the experiment, typically handling 1 PB of data per week, spread over 70 sites. This is only set to increase once the LHC resumes operations in 2015. The goal of ANSE is to improve the overall working efficiency of the experiments by allowing more deterministic times to completion for a designated set of data transfers, through the use of end-to-end dynamic virtual circuits with guaranteed bandwidth. Through our work in ANSE, we have enhanced PhEDEx, allowing it to control a circuit's lifecycle based on its own needs. By checking its current workload and its past transfer history on normal links, PhEDEx is now able to make smart use of dynamic circuits, only creating one when it is worth doing so. Different circuit management infrastructures can be used via a plug-in system, making it highly adaptable. In this paper, we present the progress made by ANSE with regard to PhEDEx. We show how our system has evolved since the prototype phase we presented last year, and how it is now able to make use of dynamic circuits as a production-quality service. We describe its updated software architecture and how this mechanism can be refactored and used as a stand-alone system in other software domains (like ATLAS' PanDA). We conclude by describing the remaining work to be done in ANSE (for PhEDEx) and discuss future directions for continued development.
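The "only create a circuit when it is worth doing so" decision can be pictured as a comparison between the estimated completion time on the shared path and on a dedicated circuit, including its setup overhead. The sketch below is an illustrative rule of this kind with invented thresholds; it is not PhEDEx code.

# Illustrative decision rule only: request a dynamic circuit when a
# guaranteed-bandwidth path, including setup overhead, would finish the
# queued transfers sufficiently sooner than the shared path.
def worth_requesting_circuit(backlog_tb, shared_rate_gbps, circuit_rate_gbps,
                             setup_overhead_s=300.0, min_gain_s=1800.0):
    backlog_gbit = backlog_tb * 8000.0
    t_shared = backlog_gbit / shared_rate_gbps
    t_circuit = setup_overhead_s + backlog_gbit / circuit_rate_gbps
    return (t_shared - t_circuit) > min_gain_s

print(worth_requesting_circuit(backlog_tb=50, shared_rate_gbps=5, circuit_rate_gbps=40))  # True
print(worth_requesting_circuit(backlog_tb=1, shared_rate_gbps=5, circuit_rate_gbps=40))   # False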