
T. Liu

Here are all the papers by T. Liu that you can download and read on OA.mg.

DOI: 10.48550/arxiv.2402.00795
2024
LLMs learn governing principles of dynamical systems, revealing an in-context neural scaling law
Pretrained large language models (LLMs) are surprisingly effective at performing zero-shot tasks, including time-series forecasting. However, understanding the mechanisms behind such capabilities remains highly challenging due to the complexity of the models. In this paper, we study LLMs' ability to extrapolate the behavior of dynamical systems whose evolution is governed by principles of physical interest. Our results show that LLaMA 2, a language model trained primarily on texts, achieves accurate predictions of dynamical system time series without fine-tuning or prompt engineering. Moreover, the accuracy of the learned physical rules increases with the length of the input context window, revealing an in-context version of the neural scaling law. Along the way, we present a flexible and efficient algorithm for extracting probability density functions of multi-digit numbers directly from LLMs.
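As a rough illustration of how a digit-level probability can be read off an autoregressive model, here is a minimal Python sketch; the `lm_logprob_fn` callable and the uniform toy model are placeholders for a real LLM call and this is not the paper's extraction algorithm.

```python
import math

def next_digit_probs(prompt, lm_logprob_fn):
    """P(d | prompt) for d = '0'..'9', renormalized over the ten digit tokens.
    lm_logprob_fn(prompt) stands in for a model call returning a dict that
    maps candidate next tokens to log-probabilities."""
    logprobs = lm_logprob_fn(prompt)
    digit_lp = {d: logprobs.get(d, float("-inf")) for d in "0123456789"}
    norm = math.log(sum(math.exp(lp) for lp in digit_lp.values()))
    return {d: math.exp(lp - norm) for d, lp in digit_lp.items()}

def number_probability(prefix, digits, lm_logprob_fn):
    """P(digit string | prefix) via the chain rule over single-digit tokens."""
    p, prompt = 1.0, prefix
    for d in digits:
        p *= next_digit_probs(prompt, lm_logprob_fn)[d]
        prompt += d
    return p

# Toy stand-in model: uniform over digits (replace with real LLM log-probs).
uniform_lm = lambda prompt: {d: math.log(0.1) for d in "0123456789"}
print(round(number_probability("0.4", "21", uniform_lm), 4))  # 0.01
```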
DOI: 10.1109/animma.2011.6172856
2011
Cited 14 times
A new variable-resolution Associative Memory for high energy physics
We describe an important advancement for the Associative Memory device (AM). The AM is a VLSI processor for pattern recognition based on Content Addressable Memory (CAM) architecture. The AM is optimized for on-line track finding in high-energy physics experiments. Pattern matching is carried out by finding track candidates in coarse resolution “roads”. A large AM bank stores all trajectories of interest, called “patterns”, for a given detector resolution. The AM extracts roads compatible with a given event during detector read-out. Two important variables characterize the quality of the AM bank: its “coverage” and the level of fake roads. The coverage, which describes the geometric efficiency of a bank, is defined as the fraction of tracks that match at least one pattern in the bank. Given a certain road size, the coverage of the bank can be increased just by adding patterns to the bank, while the number of fakes, unfortunately, is roughly proportional to the number of patterns in the bank. Moreover, as the luminosity increases, the fake rate increases rapidly because of the increased silicon occupancy. To counter that, we must reduce the width of our roads. If we decrease the road width using the current technology, the system will become very large and extremely expensive. We propose an elegant solution to this problem: the “variable resolution patterns”. Each pattern and each detector layer within a pattern will be able to use the optimal width, but we will use a “don't care” feature (inspired by ternary CAMs) to increase the width when that is more appropriate. In other words, we can use patterns of variable shape. As a result we reduce the number of fake roads, while keeping the efficiency high and avoiding excessive bank size due to the reduced width. We describe the idea, the implementation in the new AM design and the implementation of the algorithm in the simulation. Finally, we show the effectiveness of the “variable resolution patterns” idea using simulated high occupancy events in the ATLAS detector.
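To make the don't-care idea concrete, here is a small Python sketch of ternary matching with masked low-order bits; the bit widths, layer count and example values are invented for illustration and do not come from the AMchip design.

```python
def layer_matches(hit_bin, pattern_value, dont_care_bits):
    """A hit matches a pattern layer if it agrees on all bits above the
    ignored low-order 'don't care' bits (i.e. a wider effective road)."""
    mask = ~((1 << dont_care_bits) - 1)
    return (hit_bin & mask) == (pattern_value & mask)

def pattern_matches(hits_per_layer, pattern):
    """pattern: list of (value, dont_care_bits), one entry per detector layer."""
    return all(
        any(layer_matches(h, v, dc) for h in layer_hits)
        for layer_hits, (v, dc) in zip(hits_per_layer, pattern)
    )

# Toy example: 3 layers, second layer uses 1 don't-care bit (double width).
pattern = [(0b1010, 0), (0b0110, 1), (0b0011, 0)]
hits = [[0b1010], [0b0111], [0b0011]]   # 0b0111 still matches layer 2
print(pattern_matches(hits, pattern))   # True
```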
DOI: 10.1109/nssmic.2011.6154467
2011
Cited 12 times
Associative memory design for the fast track processor (FTK) at ATLAS
We propose a new generation of VLSI processors for pattern recognition, based on associative memory architecture, optimized for online track finding in high-energy physics experiments. We describe the architecture, the technology studies and the prototype design of a new associative memory project: it maximizes the pattern density on the ASIC, minimizes the power consumption and improves the functionality for the fast tracker processor proposed to upgrade the ATLAS trigger at LHC. The track reconstruction in high-energy physics experiments requires large online computing power. The Fast Tracker for ATLAS triggers [1] is an evolution of the Silicon Vertex Tracker (SVT) in CDF [2], [3]. The Fast Tracker is an online processor that tackles and solves the full track reconstruction problem at a hadron collider. The SVT track fitting system approaches the offline tracking precision with a processing time of the order of tens of microseconds, compatible with 30 kHz input event rates. This task is performed with negligible time delay by a Content Addressable Memory (CAM), also called Associative Memory (AM), i.e., a device that compares the event hits in parallel with all the stored pre-calculated low-resolution track candidates (patterns), and returns the addresses of the matching patterns. A second processor receives the matching patterns and their related full-resolution hits to perform the final track fitting (Track Fitter, TF). A critical figure of merit for the AM-based track reconstruction system is the number of patterns that can be stored in the bank. For the SVT upgrade [2], [4], we developed a version of the AM chip (AMchip03) [5] using a 180 nm technology.
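A minimal software model of the AM matching step described above, returning the addresses of roads whose layers contain hits; the majority-logic threshold, bank contents and hit values below are made up for the example.

```python
def matching_roads(event_hits, bank, min_layers):
    """Return addresses of patterns ('roads') fired by the event.

    event_hits : list of sets, one set of coarse hit positions per layer
    bank       : dict {address: tuple with one road position per layer}
    min_layers : majority-logic threshold (e.g. all layers, or all but one)
    """
    matches = []
    for address, road in bank.items():
        fired = sum(road[i] in event_hits[i] for i in range(len(road)))
        if fired >= min_layers:
            matches.append(address)
    return matches

bank = {0: (3, 7, 2, 9), 1: (4, 7, 2, 8)}
event = [{3, 5}, {7}, {2}, {1, 9}]
print(matching_roads(event, bank, min_layers=4))  # [0]
```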
DOI: 10.1088/1748-0221/7/10/c10002
2012
Cited 6 times
FTK: a Fast Track Trigger for ATLAS
We describe the design and expected performance of the Fast Tracker Trigger (FTK) system for the ATLAS detector at the Large Hadron Collider. The FTK is a highly parallel hardware system designed to operate at the Level 1 trigger output rate. It is designed to provide global tracks reconstructed in the inner detector, with resolution comparable to the full offline reconstruction, as input to the Level 2 trigger processing. The hardware system is based on associative memories for pattern recognition and fast FPGAs for track reconstruction. The FTK is expected to dramatically improve the performance of track-based isolation and b-tagging with little to no dependence on pile-up interactions.
DOI: 10.1016/j.nima.2007.08.030
2007
Cited 8 times
On-line tracking processors at hadron colliders: The SVT experience at CDF II and beyond
The Silicon Vertex Trigger (SVT) provides the CDF experiment with a powerful tool for fast and precise track finding and fitting at trigger level. The system enhances the experiment's reach on B-physics and large PT-physics coupled to b quarks. We review the main design features and the performance of the SVT with particular attention to the recent upgrade that improved its capabilities. Finally, we will focus on additional improvements of the functionality of such a system in a more general experimental context.
DOI: 10.1109/tns.2009.2016420
2009
Cited 7 times
Level-2 Calorimeter Trigger Upgrade at CDF
The CDF Run II level 2 calorimeter trigger is implemented in hardware and is based on a simple algorithm that was used in Run I. This system has worked well for Run II at low luminosity. As the Tevatron instantaneous luminosity increases, the limitation due to this simple algorithm starts to become clear. As a result, some of the most important jet and MET (missing ET) related triggers have large growth terms in cross section at higher luminosity. In this paper, we present an upgrade of the L2CAL system which makes the full calorimeter trigger tower information directly available to the level 2 decision CPU. This upgrade is based on the Pulsar, a general purpose VME board developed at CDF and already used for upgrading both the level 2 global decision crate and the level 2 silicon vertex tracking. The upgrade system allows more sophisticated algorithms to be implemented in software and both level 2 jets and MET can be made nearly equivalent to offline quality, thus significantly improving the performance and flexibility of the jet and MET related triggers. This is a natural expansion of the already-upgraded level 2 trigger system, and is a big step forward to improve the CDF triggering capability at level 2. This paper describes the design, the hardware and software implementation and the performance of the upgrade system.
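As a toy example of the kind of software clustering that access to the full tower information enables, here is a greedy seeded-cone sketch in Python; the seed threshold, cone radius and tower values are invented, and this is not the actual CDF Level 2 algorithm.

```python
import math

def cone_cluster(towers, seed_et=3.0, radius=0.7):
    """Greedy seeded-cone clustering over (eta, phi, Et) trigger towers;
    a crude stand-in for the offline-like clustering run on the L2 CPUs."""
    remaining = sorted(towers, key=lambda t: t[2], reverse=True)
    jets = []
    while remaining and remaining[0][2] >= seed_et:
        seed = remaining[0]
        def dr(t):
            dphi = (t[1] - seed[1] + math.pi) % (2 * math.pi) - math.pi
            return math.hypot(t[0] - seed[0], dphi)
        inside = [t for t in remaining if dr(t) < radius]
        et = sum(t[2] for t in inside)
        eta = sum(t[0] * t[2] for t in inside) / et   # Et-weighted centroid
        jets.append((round(eta, 3), seed[1], et))
        remaining = [t for t in remaining if t not in inside]
    return jets

towers = [(0.1, 1.0, 10.0), (0.2, 1.1, 5.0), (2.0, -2.0, 1.0)]
print(cone_cluster(towers))   # one jet built from the two nearby towers
```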
DOI: 10.1109/nssmic.2016.8069898
2016
Cited 5 times
VIPRAM_L1CMS: A 2-tier 3D architecture for pattern recognition for track finding
In HEP tracking trigger applications, flagging an individual detector hit is not important. Rather, the path of a charged particle through many detector layers is what must be found. Moreover, given the increased luminosity projected for future LHC experiments, this type of track finding will be required within the Level 1 Trigger system. This means that future LHC experiments require not just a chip capable of high-speed track finding but also one with a high-speed readout architecture. VIPRAM_L1CMS is a 2-tier vertically integrated chip designed to fulfill these requirements. It is a complete pipelined Pattern Recognition Associative Memory (PRAM) architecture including pattern recognition, result sparsification, and readout for Level 1 trigger applications in CMS, with 15-bit wide detector addresses and eight detector layers included in the track finding. Pattern recognition is based on classic Content Addressable Memories with a Current Race Scheme to reduce timing complexity and a 4-bit Selective Precharge to minimize power consumption. VIPRAM_L1CMS uses a pipelined set of priority-encoded binary readout structures to sparsify and read out active road flags at frequencies of at least 100 MHz. VIPRAM_L1CMS is designed to work directly with the Pulsar2b architecture.
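The sparsifying readout can be modeled in a few lines of Python; this is only a behavioral sketch of a priority-encoded readout, not the pipelined hardware implementation.

```python
def readout_active_roads(road_flags):
    """Software model of a priority-encoded readout: on each 'clock' the
    lowest-numbered active road flag is emitted and then cleared, so only
    fired roads consume readout cycles (sparsification)."""
    flags = list(road_flags)
    out = []
    while any(flags):
        addr = flags.index(True)   # priority encoder: lowest active address
        out.append(addr)
        flags[addr] = False        # clear the flag once it has been read out
    return out

print(readout_active_roads([False, True, False, False, True]))  # [1, 4]
```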
DOI: 10.1088/1748-0221/13/01/c01035
2018
Cited 4 times
A compressed sensing X-ray camera with a multilayer architecture
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.
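A minimal sketch of the sampling side of ROPS, assuming a sparse frame stored as a NumPy array; the sampling fraction and frame contents are invented, and the compressed-sensing reconstruction step (e.g. an l1-based solver) is not shown.

```python
import numpy as np

def random_pixel_sample(image, fraction, seed=0):
    """Randomly sample a fraction of the pixels (a software stand-in for
    random on-board pixel sampling); returns sampled indices and values."""
    rng = np.random.default_rng(seed)
    n = image.size
    idx = rng.choice(n, size=int(fraction * n), replace=False)
    return idx, image.ravel()[idx]

# Toy sparse 'X-ray frame': a few bright pixels on a dark background.
frame = np.zeros((64, 64))
frame[10, 20] = frame[40, 5] = 1000.0
idx, values = random_pixel_sample(frame, fraction=0.25)
print(f"kept {len(idx)} of {frame.size} pixels")
```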
DOI: 10.1016/j.nima.2019.05.018
2019
Cited 4 times
A high-performance track fitter for use in ultra-fast electronics
This article describes a new charged-particle track fitting algorithm designed for use in high-speed electronics applications such as hardware-based triggers in high-energy physics experiments. Following a novel technique designed for fast electronics, the positions of the hits on the detector are transformed before being passed to a linearized track parameter fit. This transformation results in fitted track parameters with a very linear dependence on the hit positions. The approach is demonstrated in a representative detector geometry based on the CMS detector at the Large Hadron Collider. The fit is implemented in FPGAs, optimized for track-fitting throughput, and obtains excellent track parameter performance. Such an algorithm is potentially useful in any high-speed track-fitting application.
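A minimal NumPy sketch of a linearized fit with precomputed constants; the toy data are exactly linear by construction, and the paper's hit-position transformation and FPGA implementation are not reproduced here.

```python
import numpy as np

def train_linear_fit(hits, params):
    """Precompute linearized fit constants C, q such that params ~ C @ hits + q.

    hits   : (N, n_coords) matrix of (possibly pre-transformed) hit positions
    params : (N, n_par) matrix of true track parameters for the same tracks
    """
    X = np.hstack([hits, np.ones((hits.shape[0], 1))])   # affine term
    coeff, *_ = np.linalg.lstsq(X, params, rcond=None)
    return coeff[:-1].T, coeff[-1]                        # C, q

def fit_track(C, q, hit_coords):
    """Evaluate the linearized fit: a few multiply-accumulates per track."""
    return C @ hit_coords + q

# Toy example with an exactly linear relation between hits and parameters.
rng = np.random.default_rng(1)
hits = rng.normal(size=(500, 6))
true_C = rng.normal(size=(2, 6))
true_q = np.array([0.1, -0.2])
params = hits @ true_C.T + true_q
C, q = train_linear_fit(hits, params)
print(np.allclose(fit_track(C, q, hits[0]), params[0]))   # True
```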
DOI: 10.1109/rtc.2010.5750337
2010
Cited 4 times
The Fast Track real time processor and its impact on muon isolation, tau and b-jet online selections at ATLAS
As the LHC luminosity is ramped up to 3×10^34 cm^-2 s^-1 and beyond, the high rates, multiplicities, and energies of particles seen by the detectors will pose a unique challenge. Only a tiny fraction of the produced collisions can be stored on tape and immense real-time data reduction is needed. An effective trigger system must maintain high trigger efficiencies for the physics we are most interested in, and at the same time suppress the enormous QCD backgrounds. This requires massive computing power to minimize the online execution time of complex algorithms. A multi-level trigger is an effective solution for an otherwise impossible problem. The Fast Tracker (FTK) is a proposed upgrade to the current ATLAS trigger system that will operate at full Level-1 output rates and provide high quality tracks reconstructed over the entire detector by the start of processing in Level-2. FTK solves the combinatorial challenge inherent to tracking by exploiting the massive parallelism of associative memories that can compare inner detector hits to millions of pre-calculated patterns simultaneously. The tracking problem within matched patterns is further simplified by using pre-computed linearized fitting constants and leveraging fast DSPs in modern commercial FPGAs. Overall, FTK is able to compute the helix parameters for all tracks in an event and apply quality cuts in less than 100 μs. The system design is defined and studied with respect to high transverse momentum (high-pT) Level-2 objects: b-jets, tau-jets, and isolated leptons. We test FTK algorithms using the ATLAS full simulation with WH events up to 3×10^34 cm^-2 s^-1 luminosity and compare FTK results with the offline tracking capability. We present the architecture and the reconstruction performance for the mentioned high-pT Level-2 objects.
DOI: 10.1016/j.nima.2006.09.022
2006
Cited 5 times
Real time secondary vertexing at CDF
The Online Silicon Vertex Tracker (SVT) is the trigger processor dedicated to the 2-D reconstruction of charged particle trajectories at Level 2 of the CDF trigger. As the Tevatron luminosity rises, multiple interactions increase the complexity of events and thus the SVT processing time, reducing the amount of data CDF can record. The SVT upgrade aims to increase the SVT processing power in order to restore the original CDF data acquisition capability at high luminosity. In this paper we review the tracking algorithms implemented in the SVT and we report on the first step in the SVT upgrade.
DOI: 10.1109/nssmic.2004.1462389
2005
Cited 4 times
CDF Level 2 Trigger Upgrade - The Pulsar Project
The CDF data acquisition and trigger system is being upgraded to significantly increase the bandwidth for the upcoming high-luminosity running of the Tevatron Collider (Run IIb). This paper focuses on the upgrade for the level 2 (L2) trigger decision crate. This crate is at the heart of the L2 trigger system and has to interface with many different subsystems both upstream and downstream. The challenge of this upgrade is to have a uniform design to be able to interface with many different data paths upstream, merge and process the data at high speed for fast L2 trigger decision making, and minimize the impact on the running CDF experiment during the commissioning phase. In order to meet this challenge, the design philosophy of the upgrade is to use one type of general purpose motherboard, with a few powerful modern FPGAs and SRAMs, to interface any user data with any industrial standard link through the use of mezzanine cards. This general purpose motherboard, named "Pulsar" (PULSer And Recorder), is fully self-testable at board level as well as at system level. CERN S-LINK is chosen to allow Pulsar to communicate with commodity processors via high bandwidth, low latency S-LINK-to-PCI cards. Knowledge gained by using S-LINK at CDF will be transferable to and from the LHC community.
DOI: 10.1088/1742-6596/513/1/012002
2014
Many-core applications to online track reconstruction in HEP experiments
Interest in parallel architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of Graphic Processing Units (GPUs) and the Intel Many Integrated Core architecture (MIC) when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. We use as benchmark a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for online track reconstruction, the SVT algorithm, as a realistic test case for low-latency trigger systems using new computing architectures for LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to/from the parallel devices.
DOI: 10.1109/nssmic.2012.6551422
2012
Applications of GPUs to online track reconstruction in HEP experiments
One of the most important issues that particle physics experiments at hadron colliders have to solve is real-time selection of the most interesting events. Typical collision frequencies do not allow all events to be written to tape for offline analysis, and in most cases, only a small fraction can be saved. The most commonly used strategy is based on two or three selection levels, with the low level ones usually exploiting dedicated hardware to decide within a few to ten microseconds if the event should be kept or not. This strict time requirement has made the usage of commercial devices inadequate, but recent improvements to Graphics Processing Units (GPUs) have substantially changed the conditions. Thanks to their highly parallel, multi-threaded, multicore architecture with remarkable computational power and high memory bandwidth, these commercial devices may be used in scientific applications, among which the event selection system (trigger) in particular may benefit, even at low levels. This paper describes the results of an R&D project to study the performance of GPU technology for low latency applications, such as HEP fast tracking trigger algorithms. On two different setups, we measure the latency to transfer data to/from the GPU, exploring the timing of different I/O technologies on different GPU models. We then describe the implementation and the performance of a track fitting algorithm which mimics the CDF Silicon Vertex Tracker. These studies provide performance benchmarks to investigate the potential and limitations of GPUs for future real-time applications in HEP experiments.
DOI: 10.1088/1748-0221/18/02/c02031
2023
An FPGA-based readout chip emulator for the CMS ETL detector upgrade
We present an FPGA-based readout chip emulator board for the CMS Endcap Timing Layer (ETL) detector upgrade. The emulator board uses an Intel Cyclone 10 GX FPGA to emulate the digital functions of four Endcap Layer Readout Chips (ETROCs). Based on the actual ETROC design, the firmware is implemented and verified. The emulator board is being used for the ETROC digital design verification and system development.
DOI: 10.48550/arxiv.2306.00392
2023
Coneheads: Hierarchy Aware Attention
Attention networks such as transformers have achieved state-of-the-art performance in many domains. These networks rely heavily on the dot product attention operator, which computes the similarity between two points by taking their inner product. However, the inner product does not explicitly model the complex structural properties of real world datasets, such as hierarchies between data points. To remedy this, we introduce cone attention, a drop-in replacement for dot product attention based on hyperbolic entailment cones. Cone attention associates two points by the depth of their lowest common ancestor in a hierarchy defined by hyperbolic cones, which intuitively measures the divergence of two points and gives a hierarchy aware similarity score. We test cone attention on a wide variety of models and tasks and show that it improves task-level performance over dot product attention and other baselines, and is able to match dot-product attention with significantly fewer parameters. Our results suggest that cone attention is an effective way to capture hierarchical relationships when calculating attention.
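A generic attention routine with a pluggable similarity makes the "drop-in replacement" point concrete; the dot-product score shown is standard, while the hyperbolic entailment-cone score from the paper is not reproduced here.

```python
import numpy as np

def attention(Q, K, V, score_fn):
    """Generic attention: scores[i, j] = score_fn(Q[i], K[j]), softmax over j.

    Dot-product attention uses score_fn = lambda q, k: q @ k / sqrt(d);
    cone attention would swap in a hyperbolic entailment-cone similarity."""
    scores = np.array([[score_fn(q, k) for k in K] for q in Q])
    scores -= scores.max(axis=1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

d = 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, d)), rng.normal(size=(5, d)), rng.normal(size=(5, d))
out = attention(Q, K, V, lambda q, k: q @ k / np.sqrt(d))
print(out.shape)   # (3, 4)
```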
DOI: 10.1038/s41467-023-41813-6
2023
Macroscopic waves, biological clocks and morphogenesis driven by light in a giant unicellular green alga
A hallmark of self-organisation in living systems is their capacity to stabilise their own dynamics, often appearing to anticipate and act upon potential outcomes. Caulerpa brachypus is a marine green alga consisting of differentiated organs resembling leaves, stems and roots. While an individual can exceed a metre in size, it is a single multinucleated giant cell. Thus Caulerpa presents the mystery of morphogenesis on macroscopic scales in the absence of cellularization. The experiments reported here reveal self-organised waves of greenness — chloroplasts — that propagate throughout the alga in anticipation of the day-night light cycle. Using dynamical systems analysis we show that these waves are coupled to a self-sustained oscillator, and demonstrate their entrainment to light. Under constant conditions light intensity affects the natural period and drives transition to temporal disorder. Moreover, we find distinct morphologies depending on light temporal patterns, suggesting waves of chlorophyll could link biological oscillators to metabolism and morphogenesis in this giant single-celled organism.
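The entrainment behavior can be illustrated with a generic forced phase oscillator (an Adler-type equation); the periods and coupling strength below are illustrative choices, not parameters fitted in the paper.

```python
import math

def forced_phase_oscillator(omega0, omega_drive, K, t_end=500.0, dt=0.01):
    """Euler-integrate dtheta/dt = omega0 + K*sin(omega_drive*t - theta) and
    return the final phase lag (omega_drive*t - theta); a lag that settles to
    a constant indicates entrainment of the oscillator to the drive."""
    theta, t = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        theta += dt * (omega0 + K * math.sin(omega_drive * t - theta))
        t += dt
    return omega_drive * t - theta

# A ~26 h natural period driven by a 24 h light-dark cycle with modest coupling:
# the lag settles near asin((omega_d - omega0) / K), i.e. the oscillator locks.
omega0, omega_d = 2 * math.pi / 26, 2 * math.pi / 24
print(round(forced_phase_oscillator(omega0, omega_d, K=0.1), 3))
```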
DOI: 10.1109/nssmicrtsd49126.2023.10338516
2023
A self-certifying FPGA based pixel readout chip test system for CMS ETL detector upgrade
This talk presents a flexible, self-certifying FPGA-based pixel readout test system for testing the Endcap Timing Read-Out Chip (ETROC) being developed for the CMS MIP Timing Detector (MTD). The system includes an FPGA-based emulator and a test system. The test system can take data from the emulator or from up to four ETROC test boards in beam telescope mode and is compatible with both ETROC1 and ETROC2 boards. A Python-based GUI simplifies configuration and calibration. The system provides an efficient and reliable solution for testing ETROC chips and can be extended to other readout chips with a similar architecture.
DOI: 10.1109/nssmicrtsd49126.2023.10337992
2023
The ETROC2 prototype for CMS MTD Endcap Timing Layer (ETL) upgrade
The Endcap Timing ReadOut Chip (ETROC) is designed to process LGAD signals with a time resolution down to about 40-50 ps per hit. The ETROC2 is the first full-size (16×16) prototype design, with the front end based on and scaled up from the ETROC1 (4×4). The readout designs at the pixel and global level and the system interfaces are all new and are compatible with the final chip specifications in terms of functionality. The ETROC2 is intended as a learning chip and as a stepping stone to the ETROC3, which is intended as the pre-production design. The ETROC2 design and test results will be presented.
DOI: 10.1364/iprsn.2011.ituc1
2011
Large-Scale Monolithic Integration of PM-QPSK Modulation Architecture in 500Gb/s Transmitters
In this talk, we describe the monolithic integration of 10 InP-based phase-modulated transmitter channels employing polarization multiplexing and quadrature phase-shift keying coherent modulation format to provide an aggregate 500Gb/s bandwidth on a single chip.
DOI: 10.1088/1748-0221/10/04/c04032
2015
ATCA-based ATLAS FTK input interface system
The first stage of the ATLAS Fast TracKer (FTK) is an ATCA-based input interface system, where hits from the entire silicon tracker are clustered and organized into overlapping η-ϕ trigger towers before being sent to the tracking engines. First, FTK Input Mezzanine cards receive hit data and perform clustering to reduce data volume. Then, the ATCA-based Data Formatter system will organize the trigger tower data, sharing data among boards over full mesh backplanes and optic fibers. The board and system level design concepts and implementation details, as well as the operation experiences from the FTK full-chain testing, will be presented.
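A software stand-in for the clustering step, grouping adjacent pixel hits and keeping one centroid per cluster; the connectivity rule and example hits are invented for illustration, not taken from the FTK Input Mezzanine firmware.

```python
def cluster_hits(hits):
    """Group adjacent hits (8-connected pixels) into clusters and return one
    centroid per cluster, reducing the data volume sent downstream."""
    hits = set(hits)
    clusters = []
    while hits:
        stack = [hits.pop()]
        cluster = []
        while stack:
            x, y = stack.pop()
            cluster.append((x, y))
            for nb in [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]:
                if nb in hits:
                    hits.remove(nb)
                    stack.append(nb)
        cx = sum(x for x, _ in cluster) / len(cluster)
        cy = sum(y for _, y in cluster) / len(cluster)
        clusters.append((cx, cy))
    return clusters

print(cluster_hits([(0, 0), (0, 1), (1, 1), (10, 10)]))  # two clusters
```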
DOI: 10.1109/nssmic.2013.6829552
2013
Applications of many-core technologies to on-line event reconstruction in High Energy Physics experiments
Interest in many-core architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of many-core devices when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. We use as benchmark a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for online track reconstruction, the SVT algorithm, as a realistic test case for low-latency trigger systems using new computing architectures for LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. We measure the performance of different architectures (Intel Xeon Phi and AMD GPUs, in addition to NVidia GPUs) and different software environments (OpenCL, in addition to NVidia CUDA). Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to/from the many-core devices.
DOI: 10.1016/j.nima.2008.08.035
2009
The CDF level 2 calorimetric trigger upgrade
CDF II upgraded the calorimeter trigger to cope with the higher detector occupancy due to the increased Tevatron instantaneous luminosity (~2.8×10^32 cm^-2 s^-1). While the original system was implemented in custom hardware and provided to the L2 trigger a limited-quality jet clustering, performed using a reduced-resolution measurement of the transverse energy in the calorimeter trigger towers, the upgraded system provides offline-quality jet reconstruction of the full-resolution calorimeter data. This keeps the dependence of the trigger rates on the instantaneous luminosity better under control and improves the efficiency and purity of the trigger selections. The upgraded calorimeter trigger uses the general purpose VME board Pulsar, developed at CDF II and already widely used to upgrade the L2 tracking and L2 decision systems. A battery of Pulsars is used to merge and send the calorimeter data to the L2 CPUs, where software-implemented algorithms perform offline-like clustering. In this paper we review the design and the performance of the upgraded system.
DOI: 10.1109/rtc.2007.4382864
2007
Level-2 Calorimeter Trigger Upgrade at CDF
The CDF Run II level-2 calorimeter trigger is implemented in hardware and is based on a simple algorithm used in Run I. This system has worked well for Run II at low luminosity. However, as the Tevatron instantaneous luminosity increases, the limitation due to the simple algorithm starts to become clear. In this paper, we will present an upgrade path to the level-2 calorimeter trigger system at CDF. This upgrade approach is based on the Pulsar board, a general purpose VME board developed at CDF and used for upgrading both the level-2 tracking and the Level-2 global decision crate. This paper will describe the design, hardware and software implementation, as well as the advantages of this approach over the existing system.
DOI: 10.1109/nssmic.2006.354160
2006
Level-2 calorimeter Trigger Upgrade at CDF
The CDF Run II Level-2 calorimeter trigger is implemented in hardware and is based on an algorithm used in Run I. This system ensured good performance at the low luminosity obtained during Tevatron Run II. However, as the Tevatron instantaneous luminosity increases, the limitations of the current system due to the algorithm start to become clear. In this paper, we will present an upgrade of the Level-2 calorimeter trigger system at CDF. The upgrade is based on the Pulsar board, a general purpose VME board developed at CDF and used for upgrading both the Level-2 tracking and the Level-2 global decision crate. This paper will describe the design, hardware and software implementation, as well as the advantages of this approach over the existing system.
DOI: 10.1088/1748-0221/10/03/c03015
2015
Thermal Analysis for the proto-VIPRAM00 chip
Thermal analysis has been essential in designing reliable ICs. This becomes even more critical when multiple thin dies are stacked together in a 3D integration. This paper presents our work on thermal modeling, analysis, and simulations on the first 2D prototype of the Vertically Integrated PRAM (proto-VIPRAM00) chip. We propose a sub-circuit-block level thermal simulation approach using a Fourier heat flow model, where one CAM cell in proto-VIPRAM00 is used as the unit heat source. This approach significantly reduces the simulation time and computing resources while providing efficient and accurate thermal/temperature simulations in both 2D and 3D IC scenarios.
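A generic Fourier-law toy illustrates the kind of steady-state calculation involved: a Jacobi relaxation of the 2D heat equation with a single unit heat source and fixed-temperature boundaries. The grid size, conductivity and boundary conditions are arbitrary choices and this is not the proto-VIPRAM00 thermal model.

```python
import numpy as np

def steady_state_temperature(nx, ny, source, k=1.0, iters=5000):
    """Jacobi relaxation of the steady 2D heat equation k*laplacian(T) = -q
    with T = 0 on the boundary; `source` maps grid cell (i, j) -> power q."""
    T = np.zeros((ny, nx))
    q = np.zeros((ny, nx))
    for (i, j), power in source.items():
        q[i, j] = power
    for _ in range(iters):
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:] + q[1:-1, 1:-1] / k)
    return T

# One cell acting as the unit heat source, as in the approach described above.
T = steady_state_temperature(32, 32, source={(16, 16): 1.0})
print(np.unravel_index(T.argmax(), T.shape))  # hottest cell is the source: (16, 16)
```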
DOI: 10.1109/imws-amp.2016.7588355
2016
A study on plasma diagnostics by quasi-optical resonator method
A quasi-optical resonator system for non-intrusive plasma diagnostics is presented. The test system includes a xenon lamp, a quasi-optical resonant cavity, a vector network analyzer and a mobile platform. The change in resonance frequency and quality factor before and after adding plasma to the resonant cavity is measured with the vector network analyzer. According to perturbation theory, the plasma parameters can then be obtained by calculation. By changing the xenon lamp's position in the quasi-optical cavity, a wider range of concentrations can be calculated at different electric field strengths. The reliability of the quasi-optical system for plasma diagnostics is verified by analyzing the results.
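For a uniform, collisionless plasma the standard small-perturbation relation links the resonance shift to the electron density; the sketch below inverts that first-order formula, with the filling factor treated as an input. The numbers in the example are illustrative and the paper's exact treatment may differ.

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19
E_MASS = 9.1093837015e-31
EPS0 = 8.8541878128e-12

def electron_density_from_shift(f0, delta_f, fill_factor=1.0):
    """Invert the first-order cavity-perturbation relation
        delta_f / f0 ~ fill_factor * n_e * e^2 / (2 * eps0 * m_e * (2*pi*f0)^2)
    for the electron density n_e; the field-weighted filling factor is an
    input rather than computed from the cavity mode."""
    omega = 2 * math.pi * f0
    return 2 * EPS0 * E_MASS * omega**2 * delta_f / (f0 * fill_factor * E_CHARGE**2)

# Example: a 35 GHz resonance shifted by 2 MHz with a 10% filling factor.
print(f"{electron_density_from_shift(35e9, 2e6, fill_factor=0.1):.2e} m^-3")
```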
DOI: 10.1515/9783110790948-fm
2022
Frontmatter
DOI: 10.1109/rtc.2007.4382822
2007
The SVT Bypass for a Forward Lepton wide coverage in the CDF Trigger
The Silicon Vertex Trigger (SVT) [1,2] at CDF is made of two pipelined processors: the Associative Memory (AM) [3,4], finding low-precision tracks (roads), and the Track Fitter (TF), refining the track quality with high-precision fits. We propose to extend the use of the SVT, now mostly focused on B-physics, to high-pT physics as a tracker in the forward/backward region. The upgraded SVT structure is easily improved by working on the firmware, or by connecting the existing general purpose FPGA-based SVT boards, named Pulsars, with other Pulsars in a Lego-like structure. In particular, SVT can easily extend the prompt-lepton acceptance by providing silicon-only tracks where the drift-chamber coverage is poor or missing (pseudorapidity larger than 1). Since prompt leptons from high-pT events do not require a precise impact parameter measurement, we don't need to measure these tracks with the maximum silicon detector resolution. We enlarge the use of the AM to detect tracks above a defined pT threshold. We propose a bypass that brings the new thin roads found by the AM directly to the Level-2 CPUs. While the slower full-resolution path (TF) will have to digest the normal AM road production, four new Pulsars will deliver the new roads from the AM to the L2 CPUs. All the hardware exists and only needs to be assembled. We present the bypass architecture, the forward-track quality, and their possible use in Higgs triggers. The system timing is estimated from simulation on real data and from measurements on a test stand.