
P. Sphicas


DOI: 10.1016/0370-2693(95)01435-7
1996
Cited 47 times
Transverse momentum spectra of charged particles in pp̄ collisions at √s = 630 GeV
We have analysed a sample of 2.36 million minimum bias events produced in pp̄ collisions at √s = 630 GeV in the UA1 experiment at the CERN collider. We have studied the production of charged particles with transverse momenta (pT) up to 25 GeV/c. The results are in agreement with QCD predictions. The rise of 〈pT〉 with charged particle multiplicity may be related to a changing production of low-pT particles.
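
For orientation, inclusive charged-particle pT spectra in this regime were conventionally fitted with a power-law form; the expression below is the standard parametrization of that era, quoted as background rather than from this paper:

E \frac{\mathrm{d}^3\sigma}{\mathrm{d}p^3} = \frac{A\, p_0^{\,n}}{(p_0 + p_T)^{n}}

where A, p0 and n are fit parameters.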
DOI: 10.1016/0370-2693(92)90874-4
1992
Cited 42 times
Higher order Bose-Einstein correlations in pp̄ collisions at √s=630 and 900 GeV
Higher order Bose-Einstein correlations, up to fifth order, of particles produced in proton-antiproton collisions are presented using UA1 data at √s=630 and 900 GeV. The results are compared with theoretical calculations to investigate the primary assumptions for the parametrization of the correlation functions.
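
For background (standard definitions, stated here for orientation rather than quoted from the paper): the n-particle correlation functions in such analyses are normalized inclusive densities, with the two-particle case commonly parametrized in terms of the invariant four-momentum difference Q, for example in the Gaussian form:

R_n(p_1,\ldots,p_n) = \frac{\rho_n(p_1,\ldots,p_n)}{\rho_1(p_1)\cdots\rho_1(p_n)}, \qquad R_2(Q) = 1 + \lambda\, e^{-r^2 Q^2},

where Q^2 = -(p_1 - p_2)^2, r is an effective source radius and λ the chaoticity parameter.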
DOI: 10.1007/s1010502cn001
2002
Cited 20 times
High transverse momentum physics at the Large Hadron Collider
This note summarizes many detailed physics studies done by the ATLAS and CMS Collaborations for the LHC, concentrating on processes involving the production of high mass states. These studies show that the LHC should be able to elucidate the mechanism of electroweak symmetry breaking and to study a variety of other topics related to physics at the TeV scale. In particular, a Higgs boson with couplings given by the Standard Model is observable in several channels over the full range of allowed masses. Its mass and some of its couplings will be determined. If supersymmetry is relevant to electroweak interactions, it will be discovered and the properties of many supersymmetric particles studied. Other new physics, such as the existence of massive gauge bosons and extra dimensions, can be searched for, extending existing limits by an order of magnitude or more.
DOI: 10.1007/bf01558391
1993
Cited 21 times
The influence of Bose-Einstein correlations on intermittency in pp̄ collisions at √s = 630 GeV
The influence of Bose-Einstein correlations on the rise of factorial moments is small in the 1-dimensional phase space given by the pseudorapidity η, where the 2-body correlation function is dominated by unlike-sign particle correlations. In contrast, the influence is dominant in the higher dimensional phase space. This is shown by using correlation integrals. They exhibit clear power law dependences on the four-momentum transfer Q² for all orders investigated (i = 2–5). When searching for the origin of this behaviour, we found that the Bose-Einstein ratio itself shows a steep rise for Q² → 0, compatible with a power law.
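
For reference, the factorial moments behind such intermittency analyses are (standard definitions, given for orientation rather than quoted from the paper):

F_i(\delta) = \frac{\langle n(n-1)\cdots(n-i+1)\rangle}{\langle n\rangle^{i}} \;\propto\; \delta^{-\phi_i} \quad (\delta \to 0),

where n is the multiplicity in a phase-space cell of size δ; the correlation-integral method used in the paper replaces the cell counting by an analogous power law in the four-momentum difference Q².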
DOI: 10.1007/bf01589705
1992
Cited 20 times
Multifractal analysis of minimum bias events in √s = 630 GeV p̄p collisions
A search for multifractal structures, in analogy with multifractal theories, is performed on UA1 minimum bias events. A downward concave multifractal spectral function, f(α) (where α is the Lipschitz-Hölder exponent), indicates that there are self-similar cascading processes, governing the evolution from the quark to the hadron level, in the final states of hadronic interactions. f(α) is an accurate measure of the bin-to-bin fluctuations of any observable. It is shown that the most sensitive comparison between data and the Monte Carlo models GENCL and PYTHIA 4.8 can be made using f(α). It is found that these models do not fully reproduce the behaviour of the data.
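
The spectral function f(α) comes from the standard multifractal formalism, sketched here for orientation (not quoted from the paper): with p_j the fraction of particles falling in bin j of size δ, one forms the moments

G_q(\delta) = \sum_j p_j^{\,q} \sim \delta^{\tau(q)}, \qquad \alpha(q) = \frac{\mathrm{d}\tau}{\mathrm{d}q}, \qquad f(\alpha) = q\,\alpha - \tau(q),

so f(α) is the Legendre transform of the mass exponents τ(q).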
DOI: 10.1088/1748-0221/12/01/c01095
2017
Cited 9 times
The CMS Barrel Muon trigger upgrade
The increase of luminosity expected at the LHC during Phase 1 will impose tighter constraints for rate reduction in order to maintain high efficiency in the CMS Level-1 trigger system. The TwinMux system is the early layer of the muon barrel region that concentrates the information from different subdetectors: Drift Tubes, Resistive Plate Chambers and the Outer Hadron Calorimeter. It arranges the slow optical trigger links from the detector chambers into faster links (10 Gbps) that are sent in multiple copies to the track finders. Results from collision runs, which confirm the satisfactory operation of the trigger system up to the output of the barrel track finder, are shown.
DOI: 10.1016/s0010-4655(01)00261-2
2001
Cited 15 times
Event Builder and Level 3 trigger at the CDF experiment
The Event Builder and Level 3 trigger systems of the CDF experiment at Fermilab are required to process about 300 events per second, with an average event size of ∼200 kB. In the event building process the event is assembled from 15 sources supplying event fragments with roughly equal sizes of 12–16 kB. In the subsequent commercial processor-based Level 3 trigger, the events are reconstructed and trigger algorithms are applied. The CPU power required for filtering such a high data throughput exceeds 45 000 MIPS. To meet these requirements a distributed and scalable architecture has been chosen. It is based on commodity components: VME-based CPUs for the data readout, an ATM switch for the event building and Pentium-based personal computers running the Linux operating system for the event processing. Event flow through ATM is controlled by a reflective memory ring. The roughly homogeneous distribution of the expected load allows the use of 100 Mbps Ethernet for event distribution and collection within the Level 3 system. Preliminary results from a test system obtained during the last year are presented.
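
As a quick consistency check of the figures quoted above, the aggregate throughput and fragment sizes follow directly; a minimal sketch, with numbers from the abstract and variable names of our own choosing:

# Rough consistency check of the CDF Event Builder figures quoted above.
event_rate_hz = 300          # required event rate: 300 events per second
event_size_bytes = 200e3     # average event size of ~200 kB
n_sources = 15               # readout sources supplying fragments

aggregate = event_rate_hz * event_size_bytes     # total event-building throughput
fragment = event_size_bytes / n_sources          # average fragment size per source

print(f"aggregate throughput: {aggregate / 1e6:.0f} MB/s")  # ~60 MB/s
print(f"average fragment:     {fragment / 1e3:.1f} kB")     # ~13 kB, within 12-16 kB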
DOI: 10.1109/23.846167
2000
Cited 11 times
The CMS event builder demonstrator based on Myrinet
The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large and high performance event building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator which has been set up to study a small-scale (8×8) event builder based on a Myrinet switch. Measurements are presented on throughput, overhead and scaling for various traffic conditions. Results are shown on event building with a push architecture.
2003
Cited 9 times
Run Control and Monitor System for the CMS Experiment
V. Brigljevic, G. Bruno, E. Cano, S. Cittolin, A. Csilling, D. Gigi, F. Glege, R. Gomez-Reino, M. Gulmini, J. Gutleber, C. Jacobs, M. Kozlovszky, H. Larsen, I. Magrans, F. Meijers, E. Meschi, S. Murray, A. Oh, L. Orsini, L. Pollet, A. Racz, D. Samyn, P. Scharff-Hansen, C. Schwick, P. Sphicas (CERN, European Organization for Nuclear Research, Geneva, Switzerland; individual authors also at INFN, Laboratori Nazionali di Legnaro, Legnaro, Italy, and the University of Athens, Greece); L. Berti, G. Maron, G. Rorato, N. Toniolo, L. Zangrando (INFN, Laboratori Nazionali di Legnaro, Legnaro, Italy); M. Bellato, S. Ventura (INFN, Sezione di Padova, Padova, Italy); S. Erhan (University of California, Los Angeles, California, USA)
DOI: 10.1016/s0010-4655(01)00264-8
2001
Cited 8 times
The CMS event builder demonstrator and results with Myrinet
The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large and high performance event building network. Several switch technologies are currently being evaluated in order to compare different architectures for the event builder. One candidate is Myrinet. This paper describes the demonstrator which has been set up to study a small-scale (16×16) event builder based on PCs running Linux connected to Myrinet and Ethernet switches. A detailed study of the Myrinet switch performance has been performed for various traffic conditions, including the behaviour of composite switches. Results from event building studies are presented, including measurements on throughput, overhead and scaling. Traffic shaping techniques have been implemented and their effect on the event building performance has been investigated. The paper reports on the performance and maximum event rate obtained using custom software (not described here) for the Myrinet control program and for the low-level communication layer, implemented in a driver for Linux. A high-performance sender is emulated by keeping a dummy buffer resident in the network interface and moving from the host only the first 64 bytes used by the event-building protocol. An approximate scaling with N is presented, assuming a balanced system in which each source sends data to all destinations at the same average rate.
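
One common traffic-shaping scheme in such event builders is a round-robin ("barrel shifter") assignment of destinations, which prevents several sources from sending to the same destination at once. The toy sketch below illustrates the idea only; it is not the demonstrator's software, and all names are ours:

# Toy N x N event builder with round-robin ("barrel shifter") traffic
# shaping: in time slot t, source s sends to destination (s + t) % N,
# so every destination receives from exactly one source per slot.
N = 4  # scaled-down switch size (the demonstrator used 16 x 16)

def schedule(num_slots):
    for t in range(num_slots):
        assignments = [(s, (s + t) % N) for s in range(N)]
        dests = [d for _, d in assignments]
        assert len(set(dests)) == N  # conflict-free: each destination used once
        yield t, assignments

for t, assignments in schedule(3):
    print(f"slot {t}: " + ", ".join(f"src{s}->dst{d}" for s, d in assignments))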
DOI: 10.48550/arxiv.cs/0306110
2003
Cited 7 times
Run Control and Monitor System for the CMS Experiment
The Run Control and Monitor System (RCMS) of the CMS experiment is the set of hardware and software components responsible for controlling and monitoring the experiment during data-taking. It provides users with a "virtual counting room", enabling them to operate the experiment and to monitor detector status and data quality from any point in the world. This paper describes the architecture of the RCMS with particular emphasis on its scalability through a distributed collection of nodes arranged in a tree-based hierarchy. The current implementation of the architecture in a prototype RCMS used in test beam setups, detector validations and DAQ demonstrators is documented. A discussion of the key technologies used, including Web Services, and the results of tests performed with a 128-node system are presented.
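
A minimal sketch of the tree-based control hierarchy described above, with a command issued at the root fanned out to every leaf; class and method names are illustrative, not the RCMS API:

# Minimal sketch of a tree-structured run-control hierarchy: a command
# issued at the root is propagated down to every subordinate node.
class ControlNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "halted"

    def command(self, cmd):
        # Apply the state transition locally, then fan out to children.
        self.state = {"configure": "configured", "start": "running",
                      "stop": "halted"}.get(cmd, self.state)
        for child in self.children:
            child.command(cmd)

root = ControlNode("rcms", [
    ControlNode("tracker", [ControlNode("fed-0"), ControlNode("fed-1")]),
    ControlNode("muon",    [ControlNode("dt-0")]),
])
root.command("configure")
root.command("start")
print(root.state, root.children[0].children[0].state)  # running running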
DOI: 10.1109/23.846119
2000
Cited 6 times
Event-building and PC farm based level-3 trigger at the CDF experiment
In the technical design report the event building process at Fermilab's CDF experiment is required to function at an event rate of 300 events/sec. The events are expected to have an average size of 150 kB and are assembled from fragments of 16 readout locations. The fragment size from the different locations varies between 12 kB and 16 kB. Once the events are assembled they are fed into the Level-3 trigger, which is based on processors running programs to filter events using the full event information. Each event requires processing on the order of one second on a Pentium II processor. The architecture design is driven by cost and is therefore based on commodity components: VME processor modules running VxWorks for the readout, an ATM switch for the event building, and Pentium PCs running Linux as the operating system for the Level-3 event processing. Pentium PCs are also used to receive events from the ATM switch and further distribute them to the processing nodes over multiple 100 Mbps Ethernets. Studies with a prototype of up to 10 VME readout modules and up to 4 receiving PCs are presented. This system is also a candidate for the CMS experiment at CERN.
DOI: 10.1088/1748-0221/18/02/c02039
2023
An ATCA processor for Level-1 trigger primitive generation and readout of the CMS barrel muon detectors
An ATCA processor was designed to instrument the first layer of the CMS Barrel Muon Trigger. The processor receives and processes DT and RPC data and produces muon track segments. Furthermore, it provides readout for the DT detector. The ATCA processor is based on a Xilinx XCVU13P FPGA, receives data via 10 Gbps optical links and transmits track segments via 25 Gbps optical links. The processor is instrumented with a Zynq UltraScale+ SoM connected to an SSD, which provides the necessary resources for enhanced monitoring and control. The design of the board as well as results on its performance are presented.
DOI: 10.1016/s0920-5632(03)01424-5
2003
Cited 4 times
Forward look at LHC physics
We provide a brief summary of the potential of the Large Hadron Collider (LHC) to provide a crucial next step in our understanding of nature. Indeed, the physics potential of the LHC is enormous: among currently approved projects in high energy physics, it uniquely has sufficient energy and luminosity to probe in detail the TeV energy scale relevant to electroweak symmetry breaking. A Higgs boson with Standard-Model couplings can be studied in detail. It is observable in several decay modes over the full range of currently allowed masses, while its mass and a few of its couplings can be determined. Moreover, if supersymmetry is relevant to electroweak interactions, it will be discovered and the properties of many supersymmetric particles will be studied. Other new physics, such as the existence of massive gauge bosons and extra dimensions, can be searched for, extending the current reach by about an order of magnitude.
DOI: 10.1088/1748-0221/11/03/c03038
2016
The CMS Level-1 Trigger Barrel Track Finder
The design and performance of the upgraded CMS Level-1 Trigger Barrel Muon Track Finder (BMTF) is presented. Monte Carlo simulation data as well as cosmic ray data from a CMS muon detector slice test have been used to study in detail the performance of the new track finder. The design architecture is based on twelve MP7 cards, each of which uses a Xilinx Virtex-7 FPGA and can receive and transmit data at 10 Gbps on 72 input and 72 output fibers. Following the CMS Trigger Upgrade TDR, the BMTF receives trigger primitives computed from both RPC and DT data and transmits a number of muon candidates to the upgraded Global Muon Trigger. Results from detailed comparisons between the BMTF algorithm and a C++ emulator are also presented. The new BMTF will be commissioned for data taking in 2016.
2003
Cited 3 times
The CMS Event Builder
The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragments of 8 data sources. The second stage combines the 64 super-fragments into full events. This architecture allows installation of the second stage of the event builder in steps, with the overall throughput scaling linearly with the number of switches in the second stage. Possible implementations of the components of the event builder are discussed and the expected performance of the full event builder is outlined.
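
The two-stage scheme (fragments into super-fragments, super-fragments into full events) lends itself to a very small illustrative sketch. The code below is a toy model of the data flow only, scaled down from the 64-way design; all names and sizes are ours:

# Toy two-stage event builder: stage 1 concentrates the fragments of one
# readout group into a super-fragment; stage 2 merges super-fragments
# into a full event. Fragment payloads are dummy stand-ins for real data.
SOURCES_PER_SUPERFRAG = 8
N_SUPERFRAGS = 4  # scaled down from the 64 of the real design

def stage1(fragments):
    # First level of data concentration: build one super-fragment.
    return b"".join(fragments)

def stage2(super_fragments):
    # Second stage: combine all super-fragments into the full event.
    return b"".join(super_fragments)

event_fragments = [
    [bytes([g]) * 16 for _ in range(SOURCES_PER_SUPERFRAG)]  # 16-byte dummies
    for g in range(N_SUPERFRAGS)
]
super_frags = [stage1(group) for group in event_fragments]
full_event = stage2(super_frags)
print(len(full_event))  # 4 groups * 8 fragments * 16 bytes = 512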
DOI: 10.1016/j.nima.2005.03.034
2005
Feasibility study of a XML-based software environment to manage data acquisition hardware devices
A software environment to describe configuration, control and test systems for data acquisition hardware devices is presented. The design follows a model that enforces a comprehensive use of an extensible markup language (XML) syntax to describe both the code and the associated data. A feasibility study of this software, carried out for the CMS experiment at CERN, is also presented. It is based on a number of standalone applications for different hardware modules, and on the design of a hardware management system to remotely access these heterogeneous subsystems through a uniform web service interface.
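
As an illustration of driving hardware access from an XML description, the sketch below parses a made-up device file; the schema is invented for this example and is not the one used in the study:

import xml.etree.ElementTree as ET

# Hypothetical XML description of a DAQ module: type, slot and registers.
config = """
<crate>
  <module type="ttc" slot="4">
    <register name="control" address="0x00" value="0x1"/>
    <register name="status"  address="0x04"/>
  </module>
</crate>
"""

root = ET.fromstring(config)
for module in root.findall("module"):
    print(f"module {module.get('type')} in slot {module.get('slot')}:")
    for reg in module.findall("register"):
        # A value attribute would be written to the device; its absence
        # marks a read-only register in this invented schema.
        line = f"  register {reg.get('name')} @ {reg.get('address')}"
        if reg.get("value"):
            line += f" <- {reg.get('value')}"
        print(line)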
DOI: 10.1109/23.846157
2000
Cited 3 times
A software approach for readout and data acquisition in CMS
Traditional systems dominated by performance constraints tend to neglect other qualities such as maintainability and configurability. Object-orientation allows one to encapsulate the technology differences in communication sub-systems and to provide a uniform view of the data transport layer to the systems engineer. We applied this paradigm to the design and implementation of intelligent data servers in the Compact Muon Solenoid (CMS) data acquisition system at CERN, to easily exploit the physical communication resources of the available equipment. CMS is a high-energy physics experiment under study that incorporates a highly distributed data acquisition system. This paper outlines the architecture of one part, the so-called Readout Unit, and shows how the object-oriented approach can be exploited in systems with specific data rate requirements. A C++ streams communication layer with zero-copy functionality has been established for UDP, TCP, DLPI and specific Myrinet and VME bus communication on the VxWorks real-time operating system. This software provides performance close to the hardware channel and hides communication details from the application programmers.
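
The encapsulation idea reads roughly as follows in code; this is a Python stand-in for the paper's C++ streams layer, under our own illustrative names, not the actual CMS software:

import socket
from abc import ABC, abstractmethod

# Hide transport differences behind one interface, in the spirit of the
# uniform data-transport view described above.
class Transport(ABC):
    @abstractmethod
    def send(self, data: bytes) -> None:
        ...

class TcpTransport(Transport):
    def __init__(self, host, port):
        self.sock = socket.create_connection((host, port))
    def send(self, data):
        self.sock.sendall(data)

class UdpTransport(Transport):
    def __init__(self, host, port):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    def send(self, data):
        self.sock.sendto(data, self.addr)

# Application code depends only on Transport.send(); the concrete
# technology (TCP, UDP, or anything else) is chosen at construction time.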
DOI: 10.1016/0010-4655(89)90245-2
1989
Cited 3 times
The third level trigger and output event unit of the UA1 data-acquisition system
The upgraded UA1 experiment utilizes twelve 3081/E emulators for its third-level trigger system. The system is interfaced to VME, and is controlled by 68000-microprocessor VME boards on the input and output. The output controller communicates with an IBM 9375 mainframe via the CERN-IBM developed VICI interface. The events selected by the emulators are output on IBM 3480 cassettes. The user interface to this system is based on a series of Macintosh personal computers connected to the VME bus. These Macs are also used for developing software for the emulators and for monitoring the entire system. The same configuration has also been used for offline event reconstruction. A description of the system, together with details of both the online and offline modes of operation and an evaluation of its performance, is presented.
DOI: 10.1109/rtc.2005.1547433
2005
The 2 Tbps "data to surface" system of the CMS data acquisition
The data acquisition system of the CMS experiment, at the CERN LHC collider, is designed to build 1 MB events at a sustained rate of 100 kHz and to provide sufficient computing power to filter the events by a factor of 1000. The data to surface (D2S) system is the first layer of the data acquisition, interfacing the underground subdetector readout electronics to the surface event builder. It collects the 100 GB/s input data from a large number of front-end cards (650), implements a first stage of event building by combining multiple sources into larger-size data fragments, and transports them to the surface for the full event building. The data to surface system can operate at a maximum rate of 2 Tbps. This paper describes the layout, reconfigurability and production validation of the D2S system, which is to be installed by December 2005.
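
The quoted figures can be cross-checked directly; a minimal sketch, with numbers from the abstract and variable names of our own choosing:

# Consistency check of the D2S figures quoted above.
event_size_bytes = 1e6    # 1 MB built events
l1_rate_hz = 100e3        # 100 kHz Level-1 accept rate

input_bytes_per_s = event_size_bytes * l1_rate_hz
print(f"input: {input_bytes_per_s / 1e9:.0f} GB/s")         # 100 GB/s, as quoted
print(f"in bits: {input_bytes_per_s * 8 / 1e12:.1f} Tbps")  # 0.8 Tbps, within 2 Tbps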
2005
HYPERDAQ - WHERE DATA ACQUISITION MEETS THE WEB
HyperDAQ was conceived to give users access to distributed data acquisition systems easily. To achieve that, we marry two well-established technologies: the World Wide Web and Peer-to-Peer systems. An embedded HTTP protocol engine turns an executable program into a browsable Web application that can reflect its internal data structures using a data serialization package. While the Web is based on static hyperlinks to destinations known at the time of page creation, HyperDAQ creates links to data content providers dynamically. Peer-to-Peer technology enables adaptive navigation from one application to another depending on the lifetime of application modules. Traditionally, distributed systems give the user a single point of access. We take a radically different approach. Every node may become an access point from which the whole system can be explored.
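
The core idea, an application embedding an HTTP engine so its internal data structures become browsable, can be sketched in a few lines. This toy uses Python's standard library and is not the actual HyperDAQ/XDAQ implementation; all names are illustrative:

import http.server
import json
import threading

# Some internal application state that we want to make browsable.
internal_state = {"run": 42, "events_built": 1337, "state": "running"}

class StateHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Serialize the internal data structure and serve it over HTTP.
        body = json.dumps(internal_state).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

server = http.server.HTTPServer(("localhost", 0), StateHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"browsable at http://localhost:{server.server_port}/")
# (The server lives only as long as the hosting application runs.)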
DOI: 10.1109/nssmic.2003.1351855
2003
The CMS high level trigger
The High Level Trigger (HLT) system of the CMS experiment will consist of a series of reconstruction and selection algorithms designed to reduce the Level-1 trigger accept rate of 100 kHz to the 100 Hz forwarded to permanent storage. The HLT operates on events assembled by an event builder collecting detector data from the CMS front-end system at full granularity and resolution. The HLT algorithms will run on a farm of commodity PCs, the filter farm, with a total expected computational power of 10⁶ SpecInt95. The farm software, responsible for collecting, analyzing, and storing event data, consists of components from the data acquisition and the offline reconstruction domains, extended with the necessary glue components and implementations of the interfaces between them. The farm is operated and monitored by the DAQ control system and must provide near-real-time feedback on the performance of the detector and the physics quality of the data. In this paper, the architecture of the HLT farm is described, and the design of the various software components is reviewed. The status of software development is presented, with a focus on integration issues. The physics and CPU performance of current reconstruction and selection algorithm prototypes is summarized in relation to the projected parameters of the farm, taking into account the requirements of the CMS physics program. Finally, results from a prototype test stand and plans for the deployment of the final system are discussed.
2014
Status of HEP after the LHC Run 1
In the past 20 years, the Standard Model (SM) of elementary particles and their interactions has provided an unfailing and remarkably accurate description of all experiments with and without high-energy accelerators, establishing that we understand the physics of the very small up to energy scales of 100 GeV. The Large Hadron Collider of CERN, and its experiments, were conceived to probe the physics of the next frontier, that of the TeV energy scale. True to their charge, the experiments have delivered hundreds of significant and often beautiful measurements, along with the discovery of what looks like the first fundamental scalar particle. The triumph of the Standard Model is complete, especially since no signal has emerged from the intense searches for "new physics", at least so far. The field is now at a crossroads: the existence of a Higgs boson opens a set of questions, while the evidence, both direct and indirect, that there is physics beyond the SM is still strong and convincing. The talk presents a broad-brush picture of how Run 1 of the LHC has shaped the field of High Energy Physics, along with why expectations are still so very high.
2003
Using XDAQ in Application Scenarios of the CMS Experiment
XDAQ is a generic data acquisition software environment that emerged from a rich set of use-cases encountered in the CMS experiment. They cover not only the deployment for multiple sub-detectors and the operation of different processing and networking equipment, but also a distributed collaboration of users with different needs. The use of the software in various application scenarios has demonstrated the viability of the approach. We discuss two applications: the tracker local DAQ system for front-end commissioning and the muon chamber validation system. The description is completed by a brief overview of XDAQ.
DOI: 10.48550/arxiv.physics/0306150
2003
The CMS Event Builder
The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragments of 8 data sources. The second stage combines the 64 super-fragments into full events. This architecture allows installation of the second stage of the event builder in steps, with the overall throughput scaling linearly with the number of switches in the second stage. Possible implementations of the components of the event builder are discussed and the expected performance of the full event builder is outlined.
DOI: 10.5170/cern-2000-010.565
2000
Readout unit prototypes for the CMS DAQ system
In the context of developing a DAQ prototype system for the CMS experiment at CERN, we have designed a Readout Unit (RU). The RU is a part of the DAQ Readout Column and has up to 512 MB of memory. It is capable of handling event data at 400 MB/s. This unit is based on PCI (PMC) modularity and exhibits a reconfigurable structure. This paper describes the hardware implementation of the RU and its components.
1998
Application of PC's and Linux to the CDF Run II level-3 trigger
For Run II, the CDF Level-3 trigger must provide a sustained input bandwidth of at least 45 MBytes/sec and will require processing power of at least 45 000 MIPS to perform the necessary reconstruction and filtering of events. We present a distributed, scalable architecture using commodity hardware running the Linux operating system. I/O- and CPU-intensive functions are separated into two types of nodes: "converter" nodes receive event fragments via ATM from Level-2 computers and distribute complete events to "processor" nodes via multiple fast Ethernets. We present results from a small-scale prototype roughly equivalent to a 1/16th vertical slice of the final system. With this hardware we have demonstrated the capability of sustained I/O rates of 15 MBytes/sec, more than three times the required baseline performance. We discuss PC hardware and Linux software issues and modifications for real-time performance.
1993
Measurement of the angle gamma
1988
Search for a High-Mass Resonance Decaying to Jets in Proton-Antiproton Collisions.