
Kevin Mcdermott


DOI: 10.5040/9781350394605
2015
Cited 23 times
Communist Czechoslovakia, 1945-89
New Casebook offers a selection of the most lively and innovative contemporary criticism on the four late plays commonly known as Shakespeare's 'Romances': Pericles, Cymbeline, The Winter's Tale and The Tempest. New historicist, Marxist, feminist and psychoanalytic perspectives are all represented, alongside other readings that engage with less familiar issues such as nationalism, topography, religious politics and medico-moral discourse. Alison Thorne's introduction explores the much discussed question of how these plays should be classified generically, and traces the recurrence of certain preconceptions and critical stances in the reception of the Romances from the seventeenth century onwards.
DOI: 10.1093/ofid/ofae175
2024
Relative Vaccine Effectiveness of Cell- vs Egg-Based Quadrivalent Influenza Vaccine Against Test-Confirmed Influenza Over 3 Seasons Between 2017 and 2020 in the United States
Influenza vaccine viruses grown in eggs may acquire egg-adaptive mutations that may reduce antigenic similarity between vaccine and circulating influenza viruses and decrease vaccine effectiveness. We compared cell- and egg-based quadrivalent influenza vaccines (QIVc and QIVe, respectively) for preventing test-confirmed influenza over 3 US influenza seasons (2017-2020).
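Relative vaccine effectiveness (rVE) compares two active vaccines rather than vaccine versus no vaccine; in its simplest unadjusted form it is one minus the ratio of outcome risks in the two vaccinated groups. A toy sketch of that calculation follows, with invented counts; the study itself reports covariate-adjusted estimates.

```python
# Toy, unadjusted illustration of relative vaccine effectiveness (rVE) between
# two vaccines: 100 * (1 - risk_A / risk_B). Counts below are invented; the
# actual study reports covariate-adjusted estimates.

def relative_ve(cases_a, n_a, cases_b, n_b):
    """rVE of vaccine A relative to vaccine B, in percent."""
    risk_a = cases_a / n_a
    risk_b = cases_b / n_b
    return 100.0 * (1.0 - risk_a / risk_b)

print(f"rVE of QIVc vs QIVe: {relative_ve(80, 10_000, 100, 10_000):.1f}%")
```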
DOI: 10.2307/2651222
1998
Cited 35 times
The Comintern: A History of International Communism from Lenin to Stalin
DOI: 10.1007/s40273-023-01272-9
2023
Oral Azacitidine for Maintenance Treatment of Acute Myeloid Leukaemia After Induction Therapy: An Evidence Review Group Perspective of a NICE Single Technology Appraisal
The National Institute for Health and Care Excellence (NICE) invited the manufacturer (Celgene) of oral azacitidine (ONUREG), as part of the Single Technology Appraisal (STA) process, to submit evidence for the clinical effectiveness and cost-effectiveness of oral azacitidine for maintenance treatment of acute myeloid leukaemia (AML) after induction therapy compared with watch-and-wait plus best supportive care (BSC) and midostaurin. Kleijnen Systematic Reviews Ltd, in collaboration with Maastricht University Medical Centre+, was commissioned to act as the independent Evidence Review Group (ERG). This paper summarises the company submission (CS), presents the ERG's critical review on the clinical and cost-effectiveness evidence in the CS, highlights the key methodological considerations and describes the development of the NICE guidance by the Appraisal Committee. In the QUAZAR AML-001 trial, oral azacitidine significantly improved overall survival (OS) versus placebo: median OS gain of 9.9 months (24.7 months versus 14.8 months; hazard ratio (HR) 0.69 (95% CI 0.55-0.86), p < 0.001). The median time to relapse was also better for oral azacitidine, and the incidences of treatment-emergent adverse events (TEAEs) were similar for the two arms. The company excluded two of the comparators listed in the scope, low-dose cytarabine and subcutaneous azacitidine, informed only by clinical expert opinion, leaving only best supportive care (BSC) and midostaurin for the FLT3-ITD and/or FLT3-TKD (FLT3 mutation)-positive subgroup. An indirect treatment comparison (ITC) comparing oral azacitidine to midostaurin as maintenance therapy in the appropriate subgroup demonstrated that the OS and relapse-free survival (RFS) HRs were favourable for oral azacitidine when compared with midostaurin. However, in the only available trial of midostaurin as maintenance treatment in AML that was used for this ITC, subjects were not randomised at the maintenance phase, but at induction, which posed a substantial risk of bias. The revised and final probabilistic incremental cost-effectiveness ratio (ICER) presented by the company, including a commercial arrangement, was £32,480 per quality-adjusted life year (QALY) gained for oral azacitidine versus watch-and-wait plus BSC. Oral azacitidine was dominant versus midostaurin in the FLT3 subgroup. The ERG's concerns included the approach of modelling haematopoietic stem cell transplantation (HSCT), the generalisability of the population and the number of cycles of consolidation therapy pre-treatment in the QUAZAR AML-001 trial to UK clinical practice, and uncertainty in the relapse utility. The revised and final ERG base case resulted in a similar probabilistic ICER of £33,830 per QALY gained versus watch-and-wait plus BSC, but with remaining uncertainty. Oral azacitidine remained dominant versus midostaurin in the FLT3 subgroup. After the second NICE appraisal committee meeting, the NICE Appraisal Committee recommended oral azacitidine (according to the commercial arrangement), within its marketing authorisation, as an option for maintenance treatment for AML in adults who are in complete remission, or complete remission with incomplete blood count recovery, after induction therapy with or without consolidation treatment, and cannot have or do not want HSCT.
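The appraisal hinges on the incremental cost-effectiveness ratio (ICER): the extra cost of the new treatment divided by the extra QALYs it delivers. A minimal sketch of that calculation follows; the cost and QALY inputs are hypothetical placeholders, not figures from the submission.

```python
# Illustrative ICER calculation as used in NICE technology appraisals.
# The cost and QALY figures below are hypothetical placeholders, not values
# from the azacitidine submission.

def icer(cost_new, qaly_new, cost_comparator, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    delta_cost = cost_new - cost_comparator
    delta_qaly = qaly_new - qaly_comparator
    if delta_qaly <= 0:
        # New treatment is dominated (costs more, gains nothing) or dominant;
        # a single ratio is not meaningful in that case.
        return float("inf") if delta_cost > 0 else float("-inf")
    return delta_cost / delta_qaly

if __name__ == "__main__":
    # e.g. new therapy: £80,000 and 3.1 QALYs; comparator: £40,000 and 1.9 QALYs
    print(f"ICER: £{icer(80_000, 3.1, 40_000, 1.9):,.0f} per QALY gained")
```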
DOI: 10.1051/epjconf/201612700010
2016
Cited 8 times
Kalman Filter Tracking on Parallel Architectures
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
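The computational kernel behind this work is the Kalman filter update, applied independently to many track candidates per event, which is what makes it a good target for vectorization. The following numpy sketch shows a batched update in a structure-of-arrays layout; it is an illustration of the idea under simplified assumptions (a 2-component state and a 1-D measurement), not the experiments' code.

```python
import numpy as np

# Minimal sketch (not the experiments' code): a batched Kalman filter update
# applied to N independent track candidates at once, using a structure-of-
# arrays layout so the small-matrix algebra vectorizes over tracks.
# State per track: (position, slope); measurement: position only.

def batched_kf_update(x, P, z, H, R):
    """x: (N,2) states, P: (N,2,2) covariances, z: (N,) hits, H: (1,2), R: scalar."""
    S = np.einsum("ij,njk,lk->nil", H, P, H) + R      # innovation covariance, (N,1,1)
    K = np.einsum("nij,kj->nik", P, H) / S            # Kalman gain, (N,2,1)
    resid = z - np.einsum("ij,nj->ni", H, x)[:, 0]    # innovation, (N,)
    x_new = x + K[:, :, 0] * resid[:, None]
    KH = np.einsum("nij,jk->nik", K, H)
    P_new = P - np.einsum("nij,njk->nik", KH, P)      # (I - K H) P
    return x_new, P_new

rng = np.random.default_rng(0)
N = 10_000                                            # many candidates per event
x = np.zeros((N, 2))
P = np.tile(np.eye(2), (N, 1, 1))
H = np.array([[1.0, 0.0]])
z = rng.normal(size=N)                                # one measured hit per candidate
x, P = batched_kf_update(x, P, z, H, R=0.01)
```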
DOI: 10.1088/1742-6596/664/7/072008
2015
Cited 7 times
Kalman Filter Tracking on Parallel Architectures
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques including Cellular Automata or returning to Hough Transform. The most common track finding techniques in use today are however those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust and are exactly those being used today for the design of the tracking system for HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with Kalman Filter can achieve large speedup both with Intel Xeon and Xeon Phi. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.
DOI: 10.1051/epjconf/201715000006
2017
Cited 6 times
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem in the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
DOI: 10.1088/1742-6596/608/1/012057
2015
Cited 5 times
Traditional Tracking with Kalman Filter on Parallel Architectures
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The most common track finding techniques in use today are however those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. We report the results of our investigations into the potential and limitations of these algorithms on the new parallel hardware.
DOI: 10.1088/1748-0221/15/09/p09030
2020
Cited 5 times
Speeding up particle track reconstruction using a parallel Kalman filter algorithm
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectory of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD architectures that are now prevalent in high-performance hardware. In this paper, we discuss the design and performance of the improved tracking algorithm, referred to as MKFIT. A key piece of the algorithm is the MATRIPLEX library, containing dedicated code to optimally vectorize operations on small matrices. The physics performance of the MKFIT algorithm is comparable to the nominal CMS tracking algorithm when reconstructing tracks from simulated proton-proton collisions within the CMS detector. We study the scaling of the algorithm as a function of the parallel resources utilized and find large speedups both from vectorization and multi-threading. MKFIT achieves a speedup of a factor of 6 compared to the nominal algorithm when run in a single-threaded application within the CMS software framework.
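The MATRIPLEX idea is to store the same element of many small matrices contiguously, so a single vectorized operation sweeps across all tracks at once. A rough Python sketch of that layout follows; the real library is C++ and the class and method names here are purely illustrative.

```python
import numpy as np

# Sketch of a Matriplex-style layout: element (i, j) of all N small matrices
# is stored contiguously, so an operation on element (i, j) becomes a single
# vectorized pass over N tracks. Names and sizes are illustrative only.

class Matriplex:
    def __init__(self, dim, n):
        self.m = np.zeros((dim, dim, n))   # [row, col, track] -- "matrix-major"

    @classmethod
    def from_matrices(cls, mats):          # mats: (n, dim, dim)
        mp = cls(mats.shape[1], mats.shape[0])
        mp.m = np.ascontiguousarray(np.moveaxis(mats, 0, -1))
        return mp

    def multiply(self, other):
        """C = A @ B for all N matrix pairs at once (vectorized over tracks)."""
        out = Matriplex(self.m.shape[0], self.m.shape[2])
        out.m = np.einsum("ikn,kjn->ijn", self.m, other.m)
        return out

n, dim = 16, 3
a = Matriplex.from_matrices(np.random.rand(n, dim, dim))
b = Matriplex.from_matrices(np.random.rand(n, dim, dim))
c = a.multiply(b)   # 16 independent 3x3 products in one vectorized call
```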
DOI: 10.1109/nssmic.2015.7581932
2015
Cited 4 times
Kalman-Filter-based particle tracking on parallel architectures at Hadron Colliders
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. To stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. We report on porting these algorithms to new parallel architectures. Our previous investigations showed that, using optimized data structures, track fitting with Kalman Filter can achieve large speedups both with Intel Xeon and Xeon Phi. We report here our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic experimental environment.
DOI: 10.1088/1742-6596/1085/4/042016
2018
Cited 4 times
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures
Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.
DOI: 10.1007/978-0-230-20478-2
2006
Cited 6 times
Stalin
DOI: 10.1088/1742-6596/898/4/042051
2017
Cited 3 times
Kalman filter tracking on parallel architectures
We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way that is conducive to both multithreading and vectorization. We demonstrate very good performance on Intel Xeon and Xeon Phi architectures, as well as promising first results on Nvidia GPUs.
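The reorganization described above, splitting the combinatorial work into many independent, similarly sized tasks that shared-memory threads can pick up, can be sketched generically. The Python illustration below uses a thread pool over batches of seeds; the actual software is C++ with native threading, and the function names here are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: organize track building as many independent tasks
# (one batch of seeds per task) that shared-memory worker threads pull from
# a pool. The real code is C++; names below are placeholders.

def build_tracks_for_batch(seed_batch):
    # Placeholder for combinatorial Kalman-filter track building on one batch.
    return [f"track from seed {s}" for s in seed_batch]

def chunks(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

seeds = list(range(1000))
with ThreadPoolExecutor(max_workers=8) as pool:
    results = pool.map(build_tracks_for_batch, chunks(seeds, 64))
tracks = [t for batch in results for t in batch]
```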
DOI: 10.1093/biostatistics/kxq038
2010
Cited 3 times
Hypothesis testing for neural cell growth experiments using a hybrid branching process model
Neuron branching patterns can characterize neural cell types and act as markers for neurodegenerative disease and neural development. We develop a hybrid Markovian model for neural branching that extends previously published models by (i) using a discretized gamma model to account for underdispersion in primary branching, (ii) incorporating both bifurcation and trifurcation branching events to accommodate observed data, and (iii) only requiring branch counts and not branching topology as observations, allowing larger numbers of neurons to be sampled than in previous literature. Inference for primary branching is achieved through a gamma generalized linear model. Due to incomplete data, bifurcation and trifurcation probabilities are estimated using an expectation-maximization algorithm, which is shown to give consistent estimates using simulation studies and theoretical arguments. In simulation studies, comparison of standard errors shows no significant loss of accuracy relative to when topological information is available. A unified methodology for testing hypotheses using likelihood ratio tests (LRTs) is developed. The methodology is applied to an experiment where neurons are cocultured with different treatments: growth factor (GF), hypothalamic-astroglial conditioned medium (HY), and combination. The model provides statistically adequate fit at all branching orders. All treatments cause significantly higher branching at primary and secondary orders relative to control (p-value < 0.01), but not at higher branching orders, suggesting genetic regulation by the treatments. Using a computationally feasible lower bound on the LRT, bifurcation probabilities are shown to decrease exponentially with branching order for all treatments except HY (p-value 0.03).
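The hypothesis tests referred to are ordinary likelihood ratio tests: twice the difference in maximized log-likelihood between nested models, compared against a chi-squared reference distribution. A generic sketch follows; the log-likelihood values and degrees of freedom are placeholders, not the paper's fits.

```python
from scipy.stats import chi2

# Generic likelihood ratio test between nested models, as used for the
# branching-rate comparisons. Log-likelihood values here are placeholders.

def lrt(loglik_full, loglik_reduced, df_diff):
    """Return (test statistic, p-value) for a nested-model LRT."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df_diff)

stat, p = lrt(loglik_full=-512.3, loglik_reduced=-516.9, df_diff=1)
print(f"LRT statistic = {stat:.2f}, p = {p:.4f}")
```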
DOI: 10.48550/arxiv.2304.05853
2023
Speeding up the CMS track reconstruction with a parallelized and vectorized Kalman-filter-based algorithm during the LHC Run 3
One of the most challenging computational problems in Run 3 of the Large Hadron Collider (LHC), and more so in the High-Luminosity LHC (HL-LHC), is expected to be finding and fitting charged-particle tracks during event reconstruction. The methods used so far at the LHC and in particular at the CMS experiment are based on the Kalman filter technique. Such methods have been shown to be robust and to provide good physics performance, both in the trigger and offline. In order to improve computational performance, we explored Kalman-filter-based methods for track finding and fitting, adapted for many-core SIMD architectures. This adapted Kalman-filter-based software, called "mkFit", was shown to provide a significant speedup compared to the traditional algorithm, thanks to its parallelized and vectorized implementation. The mkFit software was recently integrated into the offline CMS software framework, in view of its exploitation during Run 3 of the LHC. At the start of the LHC Run 3, mkFit will be used for track finding in a subset of the CMS offline track reconstruction iterations, allowing for significant improvements over the existing framework in terms of computational performance, while retaining comparable physics performance. The performance of the CMS track reconstruction using mkFit at the start of the LHC Run 3 is presented, together with prospects of further improvement in the upcoming years of data taking.
DOI: 10.1051/epjconf/202024502013
2020
Reconstruction of Charged Particle Tracks in Realistic Detector Geometry Using a Vectorized and Parallelized Kalman Filter Algorithm
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD and SIMT architectures that are now prevalent in high-performance hardware. Previously we observed significant parallel speedups, with physics performance comparable to CMS standard tracking, on Intel Xeon, Intel Xeon Phi, and (to a limited extent) NVIDIA GPUs. While early tests were based on artificial events occurring inside an idealized barrel detector, we showed subsequently that our mkFit software builds tracks successfully from complex simulated events (including detector pileup) occurring inside a geometrically accurate representation of the CMS-2017 tracker. Here, we report on advances in both the computational and physics performance of mkFit, as well as progress toward integration with CMS production software. Recently we have improved the overall efficiency of the algorithm by preserving short track candidates at a relatively early stage rather than attempting to extend them over many layers. Moreover, mkFit formerly produced an excess of duplicate tracks; these are now explicitly removed in an additional processing step. We demonstrate that with these enhancements, mkFit becomes a suitable choice for the first iteration of CMS tracking, and eventually for later iterations as well. We plan to test this capability in the CMS High Level Trigger during Run 3 of the LHC, with an ultimate goal of using it in both the CMS HLT and offline reconstruction for the HL-LHC CMS tracker.
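The duplicate-removal step mentioned above is commonly implemented by comparing hit content: if two candidates share more than some fraction of their hits, only the better-scoring one is kept. The sketch below illustrates that general approach under assumed thresholds and scoring; it is not the CMS/mkFit configuration.

```python
# Schematic duplicate-track removal by shared-hit fraction. The threshold and
# the "quality" ordering are illustrative, not the CMS/mkFit configuration.

def remove_duplicates(tracks, max_shared_fraction=0.5):
    """tracks: list of (quality, set_of_hit_ids); keep the best of overlapping pairs."""
    kept = []
    for quality, hits in sorted(tracks, key=lambda t: t[0], reverse=True):
        is_dup = any(
            len(hits & kept_hits) / min(len(hits), len(kept_hits)) > max_shared_fraction
            for _, kept_hits in kept
        )
        if not is_dup:
            kept.append((quality, hits))
    return kept

tracks = [(0.9, {1, 2, 3, 4}), (0.7, {2, 3, 4, 5}), (0.8, {10, 11, 12})]
print(remove_duplicates(tracks))   # the 0.7 candidate is dropped as a duplicate
```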
DOI: 10.1051/epjconf/201921402002
2019
Parallelized and Vectorized Tracking Using Kalman Filters with CMS Detector Geometry and Events
The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making the track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques with proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted for many-core SIMD processors that are becoming dominant in high-performance systems. This paper summarizes the latest extensions to our software that allow it to run on the realistic CMS-2017 tracker geometry using CMSSW-generated events, including pileup. The reconstructed tracks can be validated against either the CMSSW simulation that generated the detector hits, or the CMSSW reconstruction of the tracks. In general, the code’s computational performance has continued to improve while the above capabilities were being added. We demonstrate that the present Kalman filter implementation is able to reconstruct events with comparable physics performance to CMSSW, while providing generally better computational performance. Further plans for advancing the software are discussed.
DOI: 10.1167/7.9.267
2010
Adaptation and contrast constancy in natural images
Many perceptual judgments have a well defined norm that can be biased in predictable ways by adaptation, so that the adapting stimulus appears more neutral and thus induces a negative aftereffect in the original neutral stimulus. We examined how adaptation affects the norm for judgments of contrast in natural images, in order to examine how perceived contrast is calibrated by experience and how norms are established for intensive dimensions like contrast that do not have a qualitatively distinct perceptual null (e.g. as in color or motion). Stimuli were grayscale images of natural scenes with contrast titrated over a wide range. The level in successive images was varied in a forced-choice staircase to find the subjectively correct image, with settings repeated before or after adaptation to the same images at high (150%) or low (50%) contrasts. Subjects could reliably set the images to appear natural and thus had a well defined norm. Surprisingly, adaptation to the 50% contrast images shifted this norm to lower contrasts relative to the pre-adapt settings. That is, the original images appeared to have a higher contrast after adapting to their low-contrast versions than to a zero-contrast field, even though adaptation to any contrast in simple gratings reduces rather than increases perceived contrast (Georgeson, 1985). To directly test for changes in contrast sensitivity in the images, settings were repeated with an asymmetric matching task in which an unadapted image was adjusted to equal the perceived contrast in the adapted image. This showed only a weak effect for the 50% adapt level, implying that the renormalization may primarily reflect short-term shifts in criterion rather than sensitivity. These shifts may be important for understanding how visual appearance is recalibrated for changes in viewing conditions or the observer (cataracts) that alter contrast in the retinal image.
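The forced-choice staircase used to locate the subjectively "correct" contrast can be sketched generically: step the contrast down after a "too high" response, up after a "too low" one, and average the reversal points. The toy simulation below assumes an arbitrary simulated observer and step size; it is not the study's procedure in detail.

```python
import math
import random

# Toy forced-choice staircase (not the study's code): step contrast down after
# a "looks too high" response and up after "looks too low", then average the
# last reversal points to estimate the subjectively natural contrast.
# The simulated observer, step size, and trial count are arbitrary.

def run_staircase(true_norm=1.0, start=1.6, step=0.05, n_trials=80, seed=1):
    rng = random.Random(seed)
    level, reversals, last_dir = start, [], None
    for _ in range(n_trials):
        # Simulated observer: more likely to report "too high" above the norm.
        p_too_high = 1.0 / (1.0 + math.exp(-8.0 * (level - true_norm)))
        direction = -1 if rng.random() < p_too_high else +1
        if last_dir is not None and direction != last_dir:
            reversals.append(level)
        level += direction * step
        last_dir = direction
    tail = reversals[-10:] or [level]
    return sum(tail) / len(tail)

print(f"estimated contrast norm ~ {run_staircase():.2f}")   # close to true_norm = 1.0
```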
DOI: 10.1167/10.7.394
2010
Variations in achromatic settings across the visual field
The stimulus spectrum that appears white shows little change between the fovea and near periphery, despite large changes in spectral sensitivity from differences in macular pigment screening (Beer et al JOV 2005; Webster and Leonard JOSA A 2008). This perceptual constancy could occur if color coding at different regions of the retina is normalized to the local average spectrum. However, local adaptation could instead lead to changes in the achromatic point across the visual field if the spectral characteristics of the world itself vary across space. Natural scenes in fact include significant spatial variations in chromaticity because of factors such as the spectral differences between earth and sky. We asked whether there might be corresponding differences in achromatic loci in upper and lower visual fields. Observers dark adapted and then viewed a 25 cd/m2 2-deg spot flashed repeatedly for 0.5 sec on and 3.5 sec off on a black background. The chromaticity of the spot was adjusted to appear achromatic by using a pair of buttons that varied chromaticities in terms of the CIE u′v′ coordinates. Settings were repeated while observers fixated dim markers so that the spot fell at a range of eccentricities spanning +60 deg along the vertical meridian. Achromatic settings did not change systematically with location, and in particular did not show a blue to yellow-green trend consistent with outdoor scenes. This could indicate that observers are primarily adapted to environments with more stationary color statistics (e.g. indoor settings) or that achromatic loci are also calibrated by retinally non-local processes.
DOI: 10.1167/9.14.64
2009
Predicting the color environment from uniform color spaces
Adaptation is thought to match visual coding for the environment by adjusting sensitivity so that the average responses across channels are equated. However, the characteristics of the color environment to which we are routinely adapted remain uncertain. We estimated the color statistics of the standard color environment by using a model of color coding and adaptation to predict which stimulus color distributions would be required to yield different perceptually uniform color spaces. The model was based on adapting responses in the cones and in multiple post-receptoral channels tuned to different color-luminance directions, with color and lightness based on the vector sum of the channel responses. Color spaces included the Munsell and CIE uniform spaces. For each space we calculated the relative channel sensitivities and response functions that would approximate equal perceptual steps within a given space, and then from this estimated the relative contrast and principal axes of the color distributions that would lead to these sensitivities, assuming that adaptation scales sensitivity so that all channels give the same average response to the adapting distribution. This analysis allows us to characterize the ratios of luminance and chromatic contrast that would be required to account for the perceptual balance of color for a given perceptually uniform metric and for given assumptions about adaptation and color coding, and we compare these derived distributions to actual measurements of the color statistics of scenes. For example, cone-opponent responses to uniform color metrics like the Munsell space are elongated along the blue-yellow axis, implying that channels tuned to blue-yellow variations are less sensitive because they are exposed to stronger stimulation by the environment, and this blue-yellow bias is a characteristic of many natural color environments.
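The adaptation rule assumed in the model, rescaling each channel's gain so that its average response to the adapting distribution matches a reference environment, is simple to state in code. A minimal sketch follows with made-up channel responses; the model's actual channels and reference statistics are not reproduced here.

```python
import numpy as np

# Minimal sketch of the adaptation rule assumed in the model: each channel's
# gain is rescaled so its mean response to the adapting stimulus distribution
# equals the mean response in a reference environment. Responses are made up.

def adapt_gains(responses_adapt, responses_reference):
    """responses_*: (n_stimuli, n_channels) arrays of channel responses."""
    mean_adapt = np.abs(responses_adapt).mean(axis=0)
    mean_ref = np.abs(responses_reference).mean(axis=0)
    return mean_ref / mean_adapt            # per-channel gain factors

rng = np.random.default_rng(0)
reference = rng.normal(scale=1.0, size=(500, 6))                    # 6 color-luminance channels
blue_yellow_biased = reference * np.array([1, 1, 2.5, 2.5, 1, 1])   # stronger blue-yellow input
gains = adapt_gains(blue_yellow_biased, reference)
print(gains)   # channels driven harder by the environment end up with lower gain
```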
2015
Effects of low velocity impacts on basaltoids
DOI: 10.1057/9781137368928.0006
2015
De-Stalinising Eastern Europe
DOI: 10.48550/arxiv.1409.8213
2014
Traditional Tracking with Kalman Filter on Parallel Architectures
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The most common track finding techniques in use today are however those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. We report the results of our investigations into the potential and limitations of these algorithms on the new parallel hardware.
2011
Analyzing Potential Tracking Algorithms for the Upgrade to the Silicon Tracker of the Compact Muon Solenoid
The research performed revolves around creating tracking algorithms for the proposed ten-year upgrade to the silicon tracker for the Compact Muon Solenoid (CMS), one of two main detectors for the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. The proposed upgrade to the silicon tracker for CMS will use high-speed electronics to trace particle trajectories so that they can be used immediately in a trigger system. The additional information will be combined with other sub-detectors in CMS to distinguish interesting events from background, enabling the good events to be read out by the detector. The algorithms would be implemented directly into the Level-1 trigger, i.e. the first trigger in a two-trigger system, to be used in real time. Specifically, by analyzing computer generated stable particles over various ranges of transverse momentum and the various tracks they produce, we created and tested various simulated trigger algorithms that would hopefully be used in hardware. As one algorithm has proved very effective, the next step is to test this algorithm against simulated events with an environment equivalent to SLHC luminosities.
DOI: 10.48550/arxiv.1711.06571
2017
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures
Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.
DOI: 10.1093/ehr/cen028
2008
Power and the People: A Social History of Central European Politics, 1945-56
This impressive volume seeks to introduce both an undergraduate and a more specialist audience to the recent innovative approaches to the social history of post-war Central Europe. It includes chapters by established and younger scholars drawn from five countries—Austria, Germany, Hungary, Slovakia and the UK. This geographical spread in itself is a most welcome departure from the more usual Anglo-American focus. The book is divided into four sections: workers, ethnic and linguistic minorities, youth, and women, each consisting of four essays. In addition, the editors have contributed a valuable lucid introduction in which they make the eminently reasonable claim that the historiography has concentrated disproportionately on the strictly political and ideological aspects of the emerging Cold War. While certainly not eschewing ‘history from above’ and the difficult issue of the interaction between ‘high politics’ and ‘popular politics’, this anthology shifts our attention towards the social history of the region with an emphasis on labour, generational and gender tensions. An important sub-theme is the cultural impact of ‘Americanisation’ and consumerism on both capitalist and, to a lesser extent, socialist societies. In terms of sources, several authors make good use of the newly accessible party and state archives in the former Soviet bloc countries.
DOI: 10.1167/9.14.85
2009
Observer vs. environmental variations in color appearance
Adaptation strongly affects visual appearance, and thus how the world looks to an individual depends on which world they are adapted to. Previously we developed a model to simulate how color appearance should vary within a single standard observer when they are adapted to different color environments. Here we compare these variations to the converse case of different observers — with different spectral sensitivities — that are adapted to the same environments. The adaptation was modeled as gain changes in the cones and in multiple post-receptoral channels tuned to different color-luminance directions. For each channel sensitivity is adjusted so that the average response within the target environment equals the mean response to a reference environment, and images are then rendered based on the adapted channel responses. Spectral sensitivities for different observers are simulated based on estimates of normal variations in color vision. Changes in color appearance were modeled by variations in the foci for basic color terms within the Munsell palette of the World Color Survey, based on determining the stimulus coordinates that would generate equivalent color-luminance angles in the responses of different observers. Comparisons across different environments allows us to assess the relative influence of observer vs. environmental variations in shaping color appearance, and also the extent to which adaptation can compensate for sensitivity differences in observers. For example, we show that chromatic and contrast adaptation cannot undo the effects of sensitivity changes and thus predict incomplete color constancy across observers (in the same way that adaptation cannot completely discount a change in illumination and thus leads to imperfect color constancy within an observer). We also extend the model to predict color appearance in color-deficient observers. Simulations of what individuals with visual deficiencies can see typically filter the images to adjust for the changes in color or contrast sensitivity of the observer, but may not capture how images actually appear if these sensitivity losses are compensated by processes such as adaptation. In our simulations colors are again scaled so that the average signals are again equated across channels, but should appear less determinant in color-deficient observers because this scaling affects both signal and noise. Our model thus predicts why observers with reduced sensitivity may nevertheless perceive the world as perceptually balanced.
DOI: 10.1201/9781003211471-78
2022
Heat production patterns in commercial turkey production houses
A study was carried out with the objective of measuring the sensible and evaporative heat production in a commercial turkey barn. Heat production was measured directly and indirectly and the two methods compared. Heat losses from Large White hens from 2 to 94 days of age were determined for nine 24-hour monitoring periods. Diurnal patterns were also investigated. Maximum heat production was 36.4 and 35.0 W/bird measured directly and indirectly, respectively. Direct calorimetry measured 6% higher than indirect averaged over all the runs. Low temperatures during the night caused increased heat production.
DOI: 10.1007/978-3-030-98271-3_1
2022
Czechoslovakia and Eastern Europe in the Era of Normalisation
2007
Bridging the gaps, naturally: ICOET 2007, May 20-25, 2007, Little Rock, Arkansas
DOI: 10.2172/1668396
2020
Parallelization for HEP Reconstruction
in porting existing serial algorithms to many-core devices. Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to/from the parallel devices.
DOI: 10.1088/1742-6596/1525/1/012078
2020
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures with the CMS Detector
In the High-Luminosity Large Hadron Collider (HL-LHC), one of the most challenging computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction. The methods currently in use at the LHC are based on the Kalman filter. Such methods have been shown to be robust and to provide good physics performance, both in the trigger and offline. In order to improve computational performance, we explored Kalman-filter-based methods for track finding and fitting, adapted for many-core SIMD (single instruction, multiple data) and SIMT (single instruction, multiple thread) architectures. Our adapted Kalman-filter-based software has obtained significant parallel speedups using such processors, e.g., Intel Xeon Phi, Intel Xeon SP (Scalable Processors) and (to a limited degree) NVIDIA GPUs. Recently, an effort has started towards the integration of our software into the CMS software framework, in view of its exploitation for Run 3 of the LHC. Prior reports have shown that our software allows for some significant improvements over the existing framework in terms of computational performance with comparable physics performance, even when applied to realistic detector configurations and event complexity. Here, we demonstrate that in such conditions physics performance can be further improved with respect to our prior reports, while retaining the improvements in computational performance, by making use of the knowledge of the detector and its geometry.
DOI: 10.48550/arxiv.1906.11744
2019
Speeding up Particle Track Reconstruction in the CMS Detector using a Vectorized and Parallelized Kalman Filter Algorithm
Building particle tracks is the most computationally intense step of event reconstruction at the LHC. With the increased instantaneous luminosity and associated increase in pileup expected from the High-Luminosity LHC, the computational challenge of track finding and fitting requires novel solutions. The current track reconstruction algorithms used at the LHC are based on Kalman filter methods that achieve good physics performance. By adapting the Kalman filter techniques for use on many-core SIMD architectures such as the Intel Xeon and Intel Xeon Phi and (to a limited degree) NVIDIA GPUs, we are able to obtain significant speedups and comparable physics performance. New optimizations, including a dedicated post-processing step to remove duplicate tracks, have improved the algorithm's performance even further. Here we report on the current structure and performance of the code and future plans for the algorithm.
2018
Parallelized and Vectorized Tracking Using Kalman Filters with CMS Detector Geometry and Events
2020
Reconstruction of Charged Particle Tracks in Realistic Detector Geometry Using a Vectorized and Parallelized Kalman Filter Algorithm
DOI: 10.48550/arxiv.2101.11489
2021
Parallelizing the Unpacking and Clustering of Detector Data for Reconstruction of Charged Particle Tracks on Multi-core CPUs and Many-core GPUs
We present results from parallelizing the unpacking and clustering steps of the raw data from the silicon strip modules for reconstruction of charged particle tracks. Throughput is further improved by concurrently processing multiple events using nested OpenMP parallelism on CPU or CUDA streams on GPU. The new implementation along with earlier work in developing a parallelized and vectorized implementation of the combinatoric Kalman filter algorithm has enabled efficient global reconstruction of the entire event on modern computer architectures. We demonstrate the performance of the new implementation on Intel Xeon and NVIDIA GPU architectures.
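The throughput gain from concurrently processing multiple events can be illustrated independently of the OpenMP/CUDA specifics: overlap the per-event unpacking and clustering work across a pool of workers. The generic Python sketch below makes that point with toy stand-ins for both stages; the real implementation uses nested OpenMP parallelism on CPUs and CUDA streams on GPUs.

```python
from concurrent.futures import ProcessPoolExecutor

# Generic illustration of overlapping per-event work (toy "unpack" and
# "cluster" stages) across several events at once. The real implementation
# uses nested OpenMP parallelism on CPUs and CUDA streams on GPUs.

def unpack(raw_event):
    return [b ^ 0xFF for b in raw_event]                      # placeholder byte-level unpacking

def cluster(hits):
    return [hits[i:i + 4] for i in range(0, len(hits), 4)]    # placeholder clustering

def process_event(raw_event):
    return cluster(unpack(raw_event))

if __name__ == "__main__":
    events = [bytes(range(64)) for _ in range(32)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        clustered = list(pool.map(process_event, events))
    print(f"processed {len(clustered)} events concurrently")
```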
2000
The Comintern: A History of International Communism from Lenin to Stalin (Russian-language edition)
DOI: 10.2307/20631739
1997
Before Their Time
DOI: 10.1177/088636878301500207
1983
Exploring Job Guarantees