
V. Krutelyov


DOI: 10.1051/epjconf/202429503019
2024
Generalizing mkFit and its Application to HL-LHC
mkFit is an implementation of the Kalman filter-based track reconstruction algorithm that exploits both thread- and data-level parallelism. In the past few years the project transitioned from the R&D phase to deployment in the Run-3 offline workflow of the CMS experiment. The CMS tracking performs a series of iterations, targeting reconstruction of tracks of increasing difficulty after removing hits associated to tracks found in previous iterations. mkFit has been adopted for several of the tracking iterations, which contribute to the majority of reconstructed tracks. When tested in the standard conditions for production jobs, speedups in track pattern recognition are on average of the order of 3.5x for the iterations where it is used (3-7x depending on the iteration). Multiple factors contribute to the observed speedups, including vectorization and a lightweight geometry description, as well as improved memory management and single precision. Efficient vectorization is achieved with both the icc and the gcc (default in CMSSW) compilers and relies on a dedicated library for small matrix operations, Matriplex, which has recently been released in a public repository. While the mkFit geometry description already featured levels of abstraction from the actual Phase-1 CMS tracker, several components of the implementation were still tied to that specific geometry. We have further generalized the geometry description and the configuration of the run-time parameters, in order to enable support for the Phase-2 upgraded tracker geometry for the HL-LHC and potentially other detector configurations. The implementation strategy and high-level code changes required for the HL-LHC geometry are presented. Speedups in track building from mkFit imply that track fitting becomes a comparably time-consuming step of the tracking chain. Prospects for an mkFit implementation of the track fit are also discussed.
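The Matriplex idea referenced in this abstract (vectorizing over many small matrices at once rather than within one matrix) can be illustrated with a short sketch. This is a simplified structure-of-arrays illustration in NumPy, not the actual Matriplex API, which is a C++ template library:

```python
import numpy as np

# Matriplex-style layout (simplified sketch): store N small 3x3 matrices as
# one array of shape (3, 3, N), so element (i, j) of all N matrices sits
# contiguously and one vectorized operation processes all N in lockstep.
rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((3, 3, N))
B = rng.standard_normal((3, 3, N))

# Multiply all N matrix pairs at once: the explicit loops run only over the
# small matrix dimensions (size 3), while the SIMD-friendly axis (N) is
# handled by NumPy's vectorized elementwise arithmetic.
C = np.zeros((3, 3, N))
for i in range(3):
    for j in range(3):
        for k in range(3):
            C[i, j] += A[i, k] * B[k, j]

# Cross-check against a per-matrix reference multiplication.
ref = np.einsum('ikn,kjn->ijn', A, B)
print(np.allclose(C, ref))
```

In the real library the innermost axis maps onto hardware vector registers, which is what makes operations on many 6x6 track-state matrices vectorize well.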
DOI: 10.1051/epjconf/202429502019
2024
Line Segment Tracking in the High-luminosity LHC
The Large Hadron Collider (LHC) will be upgraded to the High-Luminosity LHC, increasing the number of simultaneous proton-proton collisions (pileup, PU) severalfold. The harsher PU conditions lead to exponentially increasing combinatorics in charged particle tracking, placing a large demand on computing resources. The projected computing resources required exceed the computing budget with the current algorithms running on single-thread CPUs. Motivated by the rise of heterogeneous computing in high-performance computing centers, we present Line Segment Tracking (LST), a highly parallelizable algorithm that can run efficiently on GPUs and is being integrated into the central software of the CMS experiment. The use of the Alpaka framework for the algorithm implementation allows better portability of the code to different types of commercial parallel processors, giving the experiment flexibility in which processors to purchase in the future. To verify that the computational performance is similar to a native solution, the Alpaka implementation is compared with a CUDA one on an NVIDIA Tesla V100 GPU. The algorithm creates short track segments in parallel, and progressively forms higher-level objects by linking segments that are consistent with genuine physics track hypotheses. The computing and physics performance are on par with the latest, multi-CPU versions of existing CMS tracking algorithms.
DOI: 10.1016/j.nima.2006.06.011
2006
Cited 27 times
The timing system for the CDF electromagnetic calorimeters
We report on the design and performance of the electromagnetic calorimeter timing readout system (EMTiming) for the Collider Detector at Fermilab (CDF). The system will be used in searches for rare events with high-energy photons to verify that the photon is in time with the event collision, to reject cosmic-ray and beam-halo backgrounds, and to allow direct searches for new, heavy, long-lived neutral particles that decay to photons. The installation and commissioning of all 862 channels were completed in Fall 2004 as part of an upgrade to the Run II version of the detector. Using in situ data, including electrons from W→eν and Z→ee decays, we measure the energy threshold for a time to be recorded to be 3.8±0.3 GeV (1.9±0.1 GeV) in the central (plug) portion of the detector. Similarly, for the central (plug) portion we measure a timing resolution of 600±10 ps (610±10 ps) for electrons above 10 GeV (6 GeV). There are very few system pathologies such as recording a time when no energy is deposited, or recording a second, fake time for a single energy deposit.
DOI: 10.1088/1748-0221/15/09/p09030
2020
Cited 5 times
Speeding up particle track reconstruction using a parallel Kalman filter algorithm
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectory of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD architectures that are now prevalent in high-performance hardware. In this paper, we discuss the design and performance of the improved tracking algorithm, referred to as MKFIT. A key piece of the algorithm is the MATRIPLEX library, containing dedicated code to optimally vectorize operations on small matrices. The physics performance of the MKFIT algorithm is comparable to the nominal CMS tracking algorithm when reconstructing tracks from simulated proton-proton collisions within the CMS detector. We study the scaling of the algorithm as a function of the parallel resources utilized and find large speedups both from vectorization and multi-threading. MKFIT achieves a speedup of a factor of 6 compared to the nominal algorithm when run in a single-threaded application within the CMS software framework.
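The Kalman filter loop this abstract builds on alternates a predict step with a measurement update. A minimal one-dimensional sketch with hypothetical noise values is shown below; the real MKFIT code propagates full multi-parameter track states through detector material, which this toy omits:

```python
# Minimal 1D Kalman filter (illustrative sketch only): smooth noisy position
# measurements by alternating predict and update steps.
def kalman_1d(measurements, q=0.01, r=0.5):
    # q: process-noise variance, r: measurement-noise variance (assumed values)
    x, p = measurements[0], 1.0   # initial state estimate and its covariance
    estimates = [x]
    for z in measurements[1:]:
        # Predict: state carried forward, covariance inflated by process noise.
        p = p + q
        # Update: blend prediction with measurement z via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

est = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0])
print(round(est[-1], 3))
```

Track building repeats this update across detector layers, trying several candidate hits per layer; MKFIT's contribution is running many such candidates in parallel with vectorized matrix algebra.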
2008
Cited 3 times
Searches for Large Extra Dimensions at the Tevatron
The presence of extra dimensions can be probed in high energy collisions via the production or exchange of gravitons. The former corresponds to signatures with missing energy while the latter corresponds to modifications of the final state spectra. Here I review results of analyses performed by the CDF and D0 Collaborations on ppbar collisions at 1.96 TeV in signatures sensitive to large extra dimensions. These include analyses of photon+(missing transverse energy) and jet+(missing transverse energy) as signatures of graviton production as well as analyses of dilepton and diboson final states sensitive to graviton exchange.
DOI: 10.48550/arxiv.2304.05853
2023
Speeding up the CMS track reconstruction with a parallelized and vectorized Kalman-filter-based algorithm during the LHC Run 3
One of the most challenging computational problems in Run 3 of the Large Hadron Collider (LHC), and even more so in the High-Luminosity LHC (HL-LHC), is expected to be finding and fitting charged-particle tracks during event reconstruction. The methods used so far at the LHC, and in particular at the CMS experiment, are based on the Kalman filter technique. Such methods have been shown to be robust and to provide good physics performance, both in the trigger and offline. In order to improve computational performance, we explored Kalman-filter-based methods for track finding and fitting, adapted for many-core SIMD architectures. This adapted Kalman-filter-based software, called "mkFit", was shown to provide a significant speedup compared to the traditional algorithm, thanks to its parallelized and vectorized implementation. The mkFit software was recently integrated into the offline CMS software framework, in view of its exploitation during Run 3 of the LHC. At the start of LHC Run 3, mkFit will be used for track finding in a subset of the CMS offline track reconstruction iterations, allowing for significant improvements over the existing framework in terms of computational performance, while retaining comparable physics performance. The performance of the CMS track reconstruction using mkFit at the start of LHC Run 3 is presented, together with prospects of further improvement in the upcoming years of data taking.
DOI: 10.48550/arxiv.2312.11728
2023
Generalizing mkFit and its Application to HL-LHC
mkFit is an implementation of the Kalman filter-based track reconstruction algorithm that exploits both thread- and data-level parallelism. In the past few years the project transitioned from the R&D phase to deployment in the Run-3 offline workflow of the CMS experiment. The CMS tracking performs a series of iterations, targeting reconstruction of tracks of increasing difficulty after removing hits associated to tracks found in previous iterations. mkFit has been adopted for several of the tracking iterations, which contribute to the majority of reconstructed tracks. When tested in the standard conditions for production jobs, speedups in track pattern recognition are on average of the order of 3.5x for the iterations where it is used (3-7x depending on the iteration). Multiple factors contribute to the observed speedups, including vectorization and a lightweight geometry description, as well as improved memory management and single precision. Efficient vectorization is achieved with both the icc and the gcc (default in CMSSW) compilers and relies on a dedicated library for small matrix operations, Matriplex, which has recently been released in a public repository. While the mkFit geometry description already featured levels of abstraction from the actual Phase-1 CMS tracker, several components of the implementation were still tied to that specific geometry. We have further generalized the geometry description and the configuration of the run-time parameters, in order to enable support for the Phase-2 upgraded tracker geometry for the HL-LHC and potentially other detector configurations. The implementation strategy and high-level code changes required for the HL-LHC geometry are presented. Speedups in track building from mkFit imply that track fitting becomes a comparably time-consuming step of the tracking chain.
2019
Speeding up Particle Track Reconstruction in the CMS Detector using a Vectorized and Parallelized Kalman Filter Algorithm
Building particle tracks is the most computationally intense step of event reconstruction at the LHC. With the increased instantaneous luminosity and associated increase in pileup expected from the High-Luminosity LHC, the computational challenge of track finding and fitting requires novel solutions. The current track reconstruction algorithms used at the LHC are based on Kalman filter methods that achieve good physics performance. By adapting the Kalman filter techniques for use on many-core SIMD architectures such as the Intel Xeon and Intel Xeon Phi and (to a limited degree) NVIDIA GPUs, we are able to obtain significant speedups and comparable physics performance. New optimizations, including a dedicated post-processing step to remove duplicate tracks, have improved the algorithm's performance even further. Here we report on the current structure and performance of the code and future plans for the algorithm.
DOI: 10.1051/epjconf/202024502013
2020
Reconstruction of Charged Particle Tracks in Realistic Detector Geometry Using a Vectorized and Parallelized Kalman Filter Algorithm
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD and SIMT architectures that are now prevalent in high-performance hardware. Previously we observed significant parallel speedups, with physics performance comparable to CMS standard tracking, on Intel Xeon, Intel Xeon Phi, and (to a limited extent) NVIDIA GPUs. While early tests were based on artificial events occurring inside an idealized barrel detector, we showed subsequently that our mkFit software builds tracks successfully from complex simulated events (including detector pileup) occurring inside a geometrically accurate representation of the CMS-2017 tracker. Here, we report on advances in both the computational and physics performance of mkFit, as well as progress toward integration with CMS production software. Recently we have improved the overall efficiency of the algorithm by preserving short track candidates at a relatively early stage rather than attempting to extend them over many layers. Moreover, mkFit formerly produced an excess of duplicate tracks; these are now explicitly removed in an additional processing step. We demonstrate that with these enhancements, mkFit becomes a suitable choice for the first iteration of CMS tracking, and eventually for later iterations as well. We plan to test this capability in the CMS High Level Trigger during Run 3 of the LHC, with an ultimate goal of using it in both the CMS HLT and offline reconstruction for the HL-LHC CMS tracker.
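The duplicate-removal step mentioned in this abstract can be sketched as a pairwise shared-hit comparison. The shared-hit-fraction criterion and the numbers below are illustrative assumptions, not the actual selection used in mkFit/CMS:

```python
# Remove duplicate track candidates by shared-hit fraction (hypothetical
# criterion for illustration; the real CMS selection is more elaborate).
def remove_duplicates(tracks, max_shared=0.5):
    # tracks: list of (quality, set_of_hit_ids); keep best candidates first.
    tracks = sorted(tracks, key=lambda t: t[0], reverse=True)
    kept = []
    for quality, hits in tracks:
        # Drop a candidate if it shares too many hits with any kept track.
        if all(len(hits & h) / len(hits) <= max_shared for _, h in kept):
            kept.append((quality, hits))
    return kept

tracks = [
    (0.9, {1, 2, 3, 4}),
    (0.8, {1, 2, 3, 9}),   # shares 3/4 hits with the first: a duplicate
    (0.7, {5, 6, 7, 8}),   # disjoint hits: a distinct track
]
print(len(remove_duplicates(tracks)))
```

Processing candidates in quality order ensures the better-measured track survives when two candidates describe the same particle.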
DOI: 10.1051/epjconf/201921402002
2019
Parallelized and Vectorized Tracking Using Kalman Filters with CMS Detector Geometry and Events
The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making the track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques with proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted for many-core SIMD processors that are becoming dominant in high-performance systems. This paper summarizes the latest extensions to our software that allow it to run on the realistic CMS-2017 tracker geometry using CMSSW-generated events, including pileup. The reconstructed tracks can be validated against either the CMSSW simulation that generated the detector hits, or the CMSSW reconstruction of the tracks. In general, the code’s computational performance has continued to improve while the above capabilities were being added. We demonstrate that the present Kalman filter implementation is able to reconstruct events with comparable physics performance to CMSSW, while providing generally better computational performance. Further plans for advancing the software are discussed.
DOI: 10.1016/s0370-2693(01)00336-7
2001
Cited 4 times
Prospect for searches for gluinos and squarks at the Tevatron Tripler
We examine the discovery potential for SUSY new physics at a pp̄ collider upgrade of the Tevatron with √s = 5.4 TeV and luminosity L ≃ 4×10^32 cm^-2 s^-1 (the Tripler). We consider the reach for gluinos (g) and squarks (q) using experimental signatures with large missing transverse energy (MET): jets + MET and 1ℓ + jets + MET (where ℓ = electron or muon), within the framework of minimal supergravity. The Tripler's strongest reach for the gluino is 1060 GeV in the jets + MET channel and 1140 GeV in the 1ℓ + jets + MET channel for 30 fb^-1 of integrated luminosity (approximately two years of running time). This is to be compared with the Tevatron, where the reach is 440 (460) GeV in the jets + MET channel for 15 (30) fb^-1 of integrated luminosity.
2007
Measurement of the Inclusive Jet Cross Section using the k_T algorithm in pp̄ Collisions at √s = 1.96 TeV with the CDF II Detector
DOI: 10.3360/dis.2008.118
2008
Searches for Large Extra Dimensions at the Tevatron
DOI: 10.2172/878965
2005
A Combination of CDF and D0 limits on the branching ratio of B0(s)(d) ---> mu+ mu- decays
The authors combine the results of CDF and D0 searches for the rare decays B_s^0 → μ+μ− and B_d^0 → μ+μ−. The experiments use 364 pb^-1 and 300 pb^-1 of data, respectively. The limits on the branching ratios are obtained by normalizing the estimated sensitivity to the decay B+ → J/ψ K+, taking into account the fragmentation ratios f_u/f_s(d). The combined results exclude branching ratios of BR(B_s^0 → μ+μ−) > 1.5×10^-7 and BR(B_d^0 → μ+μ−) > 4.0×10^-8 at 95% confidence level. These are the most stringent limits on these decays at the present time.
DOI: 10.1088/1742-6596/898/4/042023
2017
Impact of tracker layout on track reconstruction with high pileup
High luminosity operation of the LHC is expected to deliver proton-proton collisions to experiments with an average number of proton-proton interactions reaching 200 in every bunch crossing. Reconstruction of charged particle tracks with current algorithms, in this environment, dominates reconstruction time and is increasingly computationally challenging. We discuss the importance of taking computing costs into account as a critical part of future tracker designs in HEP as well as the importance of algorithms used.
2008
Searches for New Physics in gamma + missing E(T) events at CDF Run II
DOI: 10.48550/arxiv.0810.3320
2008
Searches for New Physics in Events with a photon and a missing energy at CDF Run II
The addition of the EMTiming system, installed to provide time measurements of the electromagnetic calorimeter signals, has significantly increased the sensitivity of CDF to events with a photon and missing energy. Here I review recent searches in this signature performed by CDF using data from proton-antiproton collisions at a center-of-mass energy of 1.96 TeV. They provide new constraints on models with large extra dimensions and with gauge-mediated supersymmetry breaking.
2008
Searches for New Physics in Events at CDF Run II
DOI: 10.48550/arxiv.2207.08207
2022
Line Segment Tracking in the HL-LHC
The major challenge posed by the high instantaneous luminosity in the High Luminosity LHC (HL-LHC) motivates efficient and fast reconstruction of charged particle tracks in a high pile-up environment. While there have been efforts to use modern techniques like vectorization to improve the existing classic Kalman Filter based reconstruction algorithms, Line Segment Tracking takes a fundamentally different approach by doing a bottom-up reconstruction of tracks. Small track stubs from adjoining detector regions are constructed, and then those track stubs that are consistent with typical track trajectories are successively linked. Since the production of these track stubs is localized, they can be made in parallel, which lends itself to using architectures like GPUs and multi-core CPUs to take advantage of the parallelism. The algorithm is implemented in the context of the CMS Phase-2 Tracker and runs on NVIDIA Tesla V100 GPUs. Good physics and timing performance have been obtained, and stepping stones for the future are elaborated.
DOI: 10.48550/arxiv.0807.0645
2008
Searches for Large Extra Dimensions at the Tevatron
The presence of extra dimensions can be probed in high energy collisions via the production or exchange of gravitons. The former corresponds to signatures with missing energy while the latter corresponds to modifications of the final state spectra. Here I review results of analyses performed by the CDF and D0 Collaborations on ppbar collisions at 1.96 TeV in signatures sensitive to large extra dimensions. These include analyses of photon+(missing transverse energy) and jet+(missing transverse energy) as signatures of graviton production as well as analyses of dilepton and diboson final states sensitive to graviton exchange.
DOI: 10.48550/arxiv.2209.13711
2022
Segment Linking: A Highly Parallelizable Track Reconstruction Algorithm for HL-LHC
The High Luminosity upgrade of the Large Hadron Collider (HL-LHC) will produce particle collisions with up to 200 simultaneous proton-proton interactions. These unprecedented conditions will create a combinatorial complexity for charged-particle track reconstruction whose computational cost is expected to surpass the projected computing budget using conventional CPUs. Motivated by this, and taking into account the prevalence of heterogeneous computing in cutting-edge High Performance Computing centers, we propose an efficient, fast and highly parallelizable bottom-up approach to track reconstruction for the HL-LHC, along with an associated implementation on GPUs, in the context of the Phase 2 CMS outer tracker. Our algorithm, called Segment Linking (or Line Segment Tracking), takes advantage of localized track stub creation, combining individual stubs to progressively form higher-level objects that are subject to kinematical and geometrical requirements compatible with genuine physics tracks. The local nature of the algorithm makes it ideal for parallelization under the Single Instruction, Multiple Data paradigm, as hundreds of objects can be built simultaneously. The computing and physics performance of the algorithm has been tested on an NVIDIA Tesla V100 GPU, already yielding efficiency and timing measurements that are on par with the latest, multi-CPU versions of existing CMS tracking algorithms.
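The bottom-up linking this abstract describes can be sketched as two stages: build short segments from hits in adjacent layers, then link consecutive segments whose directions are compatible. The toy below uses a straight-line slope cut in one coordinate as a stand-in for the full kinematical and geometrical selections, and all names and numbers are illustrative:

```python
# Toy Segment Linking sketch: hits are (layer, x) pairs; a segment joins hits
# in adjacent layers; two segments sharing an endpoint link if slopes agree.
def build_segments(hits):
    by_layer = {}
    for layer, x in hits:
        by_layer.setdefault(layer, []).append((layer, x))
    segs = []
    for layer in sorted(by_layer):
        for a in by_layer.get(layer, []):
            for b in by_layer.get(layer + 1, []):
                segs.append((a, b))   # every adjacent-layer pairing
    return segs

def link_segments(segs, max_dslope=0.1):
    links = []
    for s1 in segs:
        for s2 in segs:
            if s1[1] == s2[0]:        # s2 starts where s1 ends
                slope1 = s1[1][1] - s1[0][1]
                slope2 = s2[1][1] - s2[0][1]
                if abs(slope1 - slope2) < max_dslope:
                    links.append((s1, s2))
    return links

# One straight track through layers 1-3, plus an off-track hit in layer 2.
hits = [(1, 0.0), (2, 1.0), (2, 5.0), (3, 2.0)]
segs = build_segments(hits)
print(len(segs), len(link_segments(segs)))
```

Because each segment (and each linking decision) depends only on local hits, the two loops map naturally onto independent GPU threads, which is the parallelism the paper exploits.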
DOI: 10.1088/1742-6596/2375/1/012005
2022
Segment Linking: A Highly Parallelizable Track Reconstruction Algorithm for HL-LHC
The High Luminosity upgrade of the Large Hadron Collider (HL-LHC) will produce particle collisions with up to 200 simultaneous proton-proton interactions. These unprecedented conditions will create a combinatorial complexity for charged-particle track reconstruction whose computational cost is expected to surpass the projected computing budget using conventional CPUs. Motivated by this, and taking into account the prevalence of heterogeneous computing in cutting-edge High Performance Computing centers, we propose an efficient, fast and highly parallelizable bottom-up approach to track reconstruction for the HL-LHC, along with an associated implementation on GPUs, in the context of the Phase 2 CMS outer tracker. Our algorithm, called Segment Linking (or Line Segment Tracking), takes advantage of localized track stub creation, combining individual stubs to progressively form higher-level objects that are subject to kinematical and geometrical requirements compatible with genuine physics tracks. The local nature of the algorithm makes it ideal for parallelization under the Single Instruction, Multiple Data paradigm, as hundreds of objects can be built simultaneously. The computing and physics performance of the algorithm has been tested on an NVIDIA Tesla V100 GPU, already yielding efficiency and timing measurements that are on par with the latest, multi-CPU versions of existing CMS tracking algorithms.
DOI: 10.48550/arxiv.0707.2820
2007
Rare decays of heavy flavor at the Tevatron
In this report I review recent results in the field of rare decays at the Tevatron CDF II and D0 experiments. The presentation is focused on rare decays of charm and bottom mesons with two muons in the final state. This includes improvements over the previously available limits on the following branching ratios: $B(D^+ \to \pi^+ \mu^+ \mu^-) < 4.7 \times 10^{-6}$, $B(B_s^0 \to \phi \mu^+ \mu^-) < 3.2 \times 10^{-6}$, $B(B_s^0 \to \mu^+ \mu^-) < 1 \times 10^{-7}$, and $B(B_d^0 \to \mu^+ \mu^-) < 3 \times 10^{-8}$, all at the 90% confidence level. Also reported are the first direct observation of $D_s^+ \to \phi \pi^+ \to \mu^+ \mu^- \pi^+$ with a significance above background of over 7 standard deviations, and evidence of $D^+ \to \phi \pi^+ \to \mu^+ \mu^- \pi^+$ with a significance of 3.1 standard deviations and $B(D^+ \to \phi \pi^+ \to \mu^+ \mu^- \pi^+) = (1.75 \pm 0.7 \pm 0.5) \times 10^{-6}$.
2007
Searches for a Dark Matter Candidate in Particle Physics Experiments at the Fermilab Tevatron
DOI: 10.1088/1742-6596/1525/1/012078
2020
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures with the CMS Detector
In the High-Luminosity Large Hadron Collider (HL-LHC), one of the most challenging computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction. The methods currently in use at the LHC are based on the Kalman filter. Such methods have been shown to be robust and to provide good physics performance, both in the trigger and offline. In order to improve computational performance, we explored Kalman-filter-based methods for track finding and fitting, adapted for many-core SIMD (single instruction, multiple data) and SIMT (single instruction, multiple thread) architectures. Our adapted Kalman-filter-based software has obtained significant parallel speedups using such processors, e.g., Intel Xeon Phi, Intel Xeon SP (Scalable Processors) and (to a limited degree) NVIDIA GPUs. Recently, an effort has started towards the integration of our software into the CMS software framework, in view of its exploitation for Run 3 of the LHC. Prior reports have shown that our software in fact allows for significant improvements over the existing framework in terms of computational performance with comparable physics performance, even when applied to realistic detector configurations and event complexity. Here, we demonstrate that in such conditions the physics performance can be further improved with respect to our prior reports, while retaining the improvements in computational performance, by making use of the knowledge of the detector and its geometry.
DOI: 10.48550/arxiv.1906.11744
2019
Speeding up Particle Track Reconstruction in the CMS Detector using a Vectorized and Parallelized Kalman Filter Algorithm
Building particle tracks is the most computationally intense step of event reconstruction at the LHC. With the increased instantaneous luminosity and associated increase in pileup expected from the High-Luminosity LHC, the computational challenge of track finding and fitting requires novel solutions. The current track reconstruction algorithms used at the LHC are based on Kalman filter methods that achieve good physics performance. By adapting the Kalman filter techniques for use on many-core SIMD architectures such as the Intel Xeon and Intel Xeon Phi and (to a limited degree) NVIDIA GPUs, we are able to obtain significant speedups and comparable physics performance. New optimizations, including a dedicated post-processing step to remove duplicate tracks, have improved the algorithm's performance even further. Here we report on the current structure and performance of the code and future plans for the algorithm.
2018
Parallelized and Vectorized Tracking Using Kalman Filters with CMS Detector Geometry and Events
2006
Rare decays of heavy flavor at the Tevatron
In this report the author reviews recent results in the field of rare decays at the Tevatron CDF II and D0 experiments. The presentation is focused on rare decays of charm and bottom mesons with two muons in the final state. This includes improvements over the previously available limits on the following branching ratios: B(D+ → π+μ+μ−) < 4.7×10^-6, B(B_s^0 → φμ+μ−) < 3.2×10^-6, B(B_s^0 → μ+μ−) < 1×10^-7, and B(B_d^0 → μ+μ−) < 3×10^-8, all at the 90% confidence level. Also reported are the first direct observation of D_s^+ → φπ+ → μ+μ−π+ with a significance above background of over 7 standard deviations and evidence of D+ → φπ+ → μ+μ−π+ with a significance of 3.1, and B(D+ → φπ+ → μ+μ−π+) = (1.75 ± 0.7 ± 0.5)×10^-6.
2006
Search for Heavy, Long-Lived Particles that Decay to Photons at CDF
2020
Reconstruction of Charged Particle Tracks in Realistic Detector Geometry Using a Vectorized and Parallelized Kalman Filter Algorithm
2004
B-physics: New states, rare decays and branching ratios in CDF
DOI: 10.2172/878932
2005
Search for Supersymmetry using rare B^0_{s(d)} → μ+μ− decays at CDF Run II
A search for the rare decays B_s^0 → μ+μ− and B_d^0 → μ+μ− has been performed in pp̄ collisions at √s = 1.96 TeV using 364 pb^-1 of data collected by the CDF II experiment at the Fermilab Tevatron Collider. The rate of each decay is sensitive to contributions from physics beyond the Standard Model (SM). No events pass the optimized selection requirements, consistent with the SM expectation. The resulting upper limits on the branching ratios are B(B_s^0 → μ+μ−) < 1.5×10^-7 and B(B_d^0 → μ+μ−) < 3.8×10^-8 at the 90% confidence level. The limits are used to exclude some parameter space for several supersymmetric models.
DOI: 10.1016/j.nuclphysbps.2005.01.032
2005
B-physics: new states, rare decays and branching ratios in CDF
We present results and prospects for searches for rare B and D meson decays with final-state dimuons, including B_s^0 → μ+μ−, B_d^0 → μ+μ−, and D^0 → μ+μ−. Upper limits on the branching fractions are compared to previous CDF measurements, recent results from the B factories and theoretical expectations. We also report on new measurements of production and decay properties of the X(3872) particle, discovered in 2003 by the Belle Collaboration. New results on the measurement of the relative branching fraction B(B+ → J/ψπ+)/B(B+ → J/ψK+) for the Cabibbo-suppressed decay B+ → J/ψπ+ are also presented. The presented results are based on the analyses of 70 to 220 pb^-1 of data collected by the CDF II detector in pp̄ collisions at √s = 1.96 TeV at the Fermilab Tevatron.
2005
Search for supersymmetry using rare neutral B(s(d)) meson decays to muon+ muon- at CDF run II
DOI: 10.48550/arxiv.hep-ex/0508058
2005
A Combination of CDF and D0 Limits on the Branching Ratio of B_s(d) to mu+ mu- Decays
We combine the results of CDF and D0 searches for the rare decays B_s to mu+ mu- and B_d to mu+ mu-. The experiments use 364 pb-1 and 300 pb-1 of data, respectively. The limits on the branching ratios are obtained by normalizing the estimated sensitivity to the decay B+ to J/psi K+, taking into account the fragmentation ratios f_u/f_s(d). The combined results exclude branching ratios of BR(B_s to mu+ mu-) > 1.5x10-7 and BR(B_d to mu+ mu-) > 4.0x10-8 at 95% confidence level. These are the most stringent limits on these decays at the present time.
DOI: 10.48550/arxiv.2101.11489
2021
Parallelizing the Unpacking and Clustering of Detector Data for Reconstruction of Charged Particle Tracks on Multi-core CPUs and Many-core GPUs
We present results from parallelizing the unpacking and clustering steps of the raw data from the silicon strip modules for reconstruction of charged particle tracks. Throughput is further improved by concurrently processing multiple events using nested OpenMP parallelism on CPU or CUDA streams on GPU. The new implementation along with earlier work in developing a parallelized and vectorized implementation of the combinatoric Kalman filter algorithm has enabled efficient global reconstruction of the entire event on modern computer architectures. We demonstrate the performance of the new implementation on Intel Xeon and NVIDIA GPU architectures.
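The clustering step mentioned in this abstract amounts to grouping adjacent fired silicon strips into hit candidates. A serial sketch of that inner kernel is shown below (illustrative only; the parallelized version described in the paper distributes modules and events across CPU threads or CUDA streams):

```python
# Cluster adjacent fired silicon strips into hit candidates (serial sketch;
# the parallel implementation processes many modules/events concurrently).
def cluster_strips(fired):
    clusters = []
    for strip in sorted(fired):
        if clusters and strip == clusters[-1][-1] + 1:
            clusters[-1].append(strip)   # contiguous: extend current cluster
        else:
            clusters.append([strip])     # gap: start a new cluster
    return clusters

# The centroid of each cluster approximates the particle crossing position.
fired = [3, 4, 5, 9, 12, 13]
clusters = cluster_strips(fired)
centroids = [sum(c) / len(c) for c in clusters]
print(clusters, centroids)
```

Because each module's strips can be clustered independently, the work parallelizes naturally across modules, which is what makes nested event-level and module-level parallelism effective here.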
2000
Prospect of Searches for Supersymmetric Gluons and Quarks at Tevatron and Tripler