
S. Padhi

Here are all the papers by S. Padhi that you can download and read on OA.mg.

DOI: 10.1007/jhep01(2014)164
2014
Cited 294 times
First look at the physics case of TLEP
The discovery by the ATLAS and CMS experiments of a new boson with mass around 125 GeV and with measured properties compatible with those of a Standard-Model Higgs boson, coupled with the absence of discoveries of phenomena beyond the Standard Model at the TeV scale, has triggered interest in ideas for future Higgs factories. A new circular e+e− collider hosted in an 80 to 100 km tunnel, TLEP, is among the most attractive solutions proposed so far. It has a clean experimental environment, produces high luminosity for top-quark, Higgs boson, W and Z studies, accommodates multiple detectors, and can reach energies up to the $\mathrm{t}\overline{\mathrm{t}}$ threshold and beyond. It will enable measurements of the Higgs boson properties and of Electroweak Symmetry-Breaking (EWSB) parameters with unequalled precision, offering exploration of physics beyond the Standard Model in the multi-TeV range. Moreover, being the natural precursor of the VHE-LHC, a 100 TeV hadron machine in the same tunnel, it builds up a long-term vision for particle physics. Altogether, the combination of TLEP and the VHE-LHC offers, for a great cost effectiveness, the best precision and the best search reach of all options presently on the market. This paper presents a first appraisal of the salient features of the TLEP physics potential, to serve as a baseline for a more extensive design study.
DOI: 10.1088/0954-3899/39/10/105005
2012
Cited 294 times
Simplified models for LHC new physics searches
This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the "Topologies for Early LHC Searches" workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ~50-500 pb-1 of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.
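To make the parametrization concrete, here is a minimal Python sketch of a simplified model reduced to its defining data, a few masses and a cross-section, together with the standard yield estimate N = σ · L · ε that connects those parameters to a search. All names and numbers are illustrative placeholders, not values from the document.

```python
# A minimal sketch (not from the paper) of the simplified-model idea:
# a small set of new-particle masses plus a production cross-section.
# All numbers below are illustrative placeholders, not measured values.
from dataclasses import dataclass, field

@dataclass
class SimplifiedModel:
    name: str
    masses_gev: dict            # particle name -> mass in GeV
    cross_section_pb: float     # production cross-section in pb
    branching_ratios: dict = field(default_factory=dict)  # decay -> BR

    def expected_events(self, luminosity_pb: float, efficiency: float) -> float:
        """Expected signal yield: N = sigma * L * (acceptance x efficiency)."""
        return self.cross_section_pb * luminosity_pb * efficiency

# Example: a gluino-neutralino topology with placeholder numbers.
model = SimplifiedModel(
    name="gluino-neutralino",
    masses_gev={"gluino": 800.0, "neutralino": 100.0},
    cross_section_pb=0.05,
    branching_ratios={"gluino -> qq + neutralino": 1.0},
)
print(model.expected_events(luminosity_pb=500.0, efficiency=0.3))
```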
DOI: 10.1109/csie.2009.950
2009
Cited 242 times
The Pilot Way to Grid Resources Using glideinWMS
Grid computing has become very popular in big and widespread scientific communities with high computing demands, like high energy physics. Computing resources are being distributed over many independent sites with only a thin layer of grid middleware shared between them. This deployment model has proven to be very convenient for computing resource providers, but has introduced several problems for the users of the system, the three major being the complexity of job scheduling, the non-uniformity of compute resources, and the lack of good job monitoring. Pilot jobs address all the above problems by creating a virtual private computing pool on top of grid resources. This paper presents both the general pilot concept, as well as a concrete implementation, called glideinWMS, deployed in the Open Science Grid.
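The pilot concept lends itself to a compact illustration. The toy loop below (plain Python, not glideinWMS code, which is built on Condor) shows the two steps the abstract highlights: validate the worker node first, then pull user jobs from a central virtual pool until the pilot's lifetime expires.

```python
# A conceptual sketch of the pilot ("glidein") idea; function names and
# the in-memory job queue are illustrative stand-ins.
import queue, time

central_queue = queue.Queue()           # user jobs waiting in the virtual pool
for i in range(3):
    central_queue.put(f"user-job-{i}")

def validate_environment() -> bool:
    """Pilot step 1: check the worker node before accepting any real work."""
    # Real pilots verify software availability, disk space, etc.
    return True

def run_pilot(lifetime_s: float) -> None:
    """Pilot step 2: pull user jobs sequentially until the lifetime expires."""
    if not validate_environment():
        return                          # a bad node wastes only the pilot, not a user job
    deadline = time.time() + lifetime_s
    while time.time() < deadline:
        try:
            job = central_queue.get_nowait()
        except queue.Empty:
            break
        print(f"running {job} on this worker node")

run_pilot(lifetime_s=1.0)
```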
DOI: 10.1140/epjc/s10052-014-3174-y
2014
Cited 237 times
Squark and gluino production cross sections in $pp$ collisions at $\sqrt{s} = 13, 14, 33$ and $100$ TeV
We present state-of-the-art cross section predictions for the production of supersymmetric squarks and gluinos at the upcoming LHC run with a centre-of-mass energy of $\sqrt{s} = 13$ and $14$ TeV, and at potential future $pp$ colliders operating at $\sqrt{s} = 33$ and $100$ TeV. The results are based on calculations which include the resummation of soft-gluon emission at next-to-leading logarithmic accuracy, matched to next-to-leading order supersymmetric QCD corrections. Furthermore, we provide an estimate of the theoretical uncertainty due to the variation of the renormalisation and factorisation scales and the parton distribution functions.
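For orientation, the scale uncertainty quoted in such predictions is conventionally obtained by varying the renormalisation and factorisation scales around a central scale and taking the envelope; the following is a common prescription, not necessarily the exact one used in the paper:

```latex
% Conventional scale-variation envelope (a common convention; the paper's
% exact prescription may differ):
\Delta\sigma_{\text{scale}} =
  \max_{\mu_R,\,\mu_F \in \{\mu_0/2,\;\mu_0,\;2\mu_0\}} \sigma(\mu_R,\mu_F)
  \;-\;
  \min_{\mu_R,\,\mu_F \in \{\mu_0/2,\;\mu_0,\;2\mu_0\}} \sigma(\mu_R,\mu_F)
```

with the central scale $\mu_0$ typically chosen near the average mass of the produced sparticles.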
DOI: 10.1007/jhep04(2014)117
2014
Cited 78 times
SUSY simplified models at 14, 33, and 100 TeV proton colliders
Results are presented for a variety of SUSY Simplified Models at the 14 TeV LHC as well as a 33 and 100 TeV proton collider. Our focus is on models whose signals are driven by colored production. We present projections of the upper limit and discovery reach in the gluino-neutralino (for both light and heavy flavor decays), squark-neutralino, and gluino-squark Simplified Model planes. Depending on the model a jets + $ E_T^{\mathrm{miss}} $ , mono-jet, or same-sign di-lepton search is applied. The impact of pileup is explored. This study utilizes the Snowmass backgrounds and combined detector. Assuming 3000 fb−1 of integrated luminosity, a gluino that decays to light flavor quarks can be discovered below 2.3 TeV at the 14 TeV LHC and below 11 TeV at a 100 TeV machine.
DOI: 10.1007/jhep02(2012)075
2012
Cited 65 times
Interpreting LHC SUSY searches in the phenomenological MSSM
We interpret within the phenomenological MSSM (pMSSM) the results of SUSY searches published by the CMS collaboration based on the first ~1 fb^-1 of data taken during the 2011 LHC run at 7 TeV. The pMSSM is a 19-dimensional parametrization of the MSSM that captures most of its phenomenological features. It encompasses, and goes beyond, a broad range of more constrained SUSY models. Performing a global Bayesian analysis, we obtain posterior probability densities of parameters, masses and derived observables. In contrast to constraints derived for particular SUSY breaking schemes, such as the CMSSM, our results provide more generic conclusions on how the current data constrain the MSSM.
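As a toy illustration of such a global Bayesian scan, the sketch below runs a one-parameter Metropolis-Hastings sampler with a placeholder likelihood; the actual pMSSM analysis explores 19 dimensions against real experimental constraints.

```python
# Toy Metropolis-Hastings sampler, reduced to one illustrative "mass"
# parameter. Prior and likelihood are placeholders, not the paper's.
import math, random

def log_prior(m: float) -> float:
    """Flat prior on a mass parameter within [100, 2000] GeV."""
    return 0.0 if 100.0 <= m <= 2000.0 else -math.inf

def log_likelihood(m: float) -> float:
    """Placeholder: pretend data prefer m ~ 600 GeV with width 150 GeV."""
    return -0.5 * ((m - 600.0) / 150.0) ** 2

def metropolis(n_steps: int, start: float = 500.0, step: float = 50.0):
    samples, m = [], start
    lp = log_prior(m) + log_likelihood(m)
    for _ in range(n_steps):
        proposal = m + random.gauss(0.0, step)
        lp_new = log_prior(proposal) + log_likelihood(proposal)
        if math.log(random.random()) < lp_new - lp:   # accept/reject step
            m, lp = proposal, lp_new
        samples.append(m)
    return samples

posterior = metropolis(10_000)
print(sum(posterior) / len(posterior))   # posterior mean of the toy mass
```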
DOI: 10.48550/arxiv.1605.04692
2016
Cited 53 times
Les Houches 2015: Physics at TeV Colliders Standard Model Working Group Report
This Report summarizes the proceedings of the 2015 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) the new PDF4LHC parton distributions, (III) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (IV) a host of phenomenological studies essential for comparing LHC data from Run I with theoretical predictions and projections for future measurements in Run II, and (V) new developments in Monte Carlo event generators.
DOI: 10.1109/ted.2023.3339086
2024
A Proposal for Optimization of Spacer Engineering at Sub-5-nm Technology Node for JL-TreeFET: A Device to Circuit Level Implementation
This article explores for the first time the effect of different spacer materials on the junctionless (JL) TreeFET for the IRDS sub-5-nm technology node. The study evaluates the influence of various spacer materials (Air, SiO2, Si3N4, Al2O3, HfO2, and TiO2) on dc and analog/RF performance, considering both single-k and dual-k spacers while maintaining a fixed Lg = 8 nm and spacer lengths Lext = 5 and 7 nm. The single-k spacer analysis demonstrated the best dc performance for the TiO2 spacer at Lext = 7 nm, with Ion/Ioff > 10^8, a subthreshold swing (SS) of ~61 mV/dec, and a drain-induced barrier lowering (DIBL) of ~63 mV/V. The analog parameters Av, gain frequency product (GFP), and gain transconductance frequency product (GTFP) improved significantly, by ~58.7%, ~70.9%, and ~55.95% for the TiO2 spacer at a normalized drain current of 10 nA. However, the RF parameters, such as fT and fMAX, deteriorate by ~64.5% and ~62.6%, respectively. To further optimize device performance, four dual-k spacer configurations (HfO2 + Air, HfO2 + SiO2, TiO2 + Air, and TiO2 + SiO2) are explored. Specifically, with an inner high-k spacer length (Lsp,hk) of Lext/2, notable enhancements of ~69.5%, ~17.5%, ~27.4%, ~37.7%, and ~36.06% are achieved in Av, fT, fMAX, GFP, and GTFP, respectively, making the TiO2 + Air dual-k spacer suitable for analog/RF applications. The dc performance of this combination is also the best among all configurations. Decreasing Lsp,hk from Lext/2 to Lext/6 degrades both the dc and the analog/RF performance. Furthermore, when the best-optimized device (TiO2 + Air) is used to implement a CMOS inverter circuit, a voltage gain of ~12 V/V and a delay of 5.32 ps are achieved. Overall, this article pioneers the incorporation of spacer analysis in the JL-TreeFET, unveiling its potential for pushing the boundaries of performance and efficiency in modern semiconductor devices.
DOI: 10.38124/ijisrt/ijisrt24mar1984
2024
Artificial Intelligence Powered Voice to Text and Text to Speech Recognition Model – A Powerful Tool for Student Comprehension of Tutor Speech
Speech-to-Text and Text-to-Speech are both NLP (natural language processing) powered models which transform speech to text and vice versa, broadening the scope of learning for the parties involved. Over the past few years, students have increasingly moved abroad for quality education and better financial aid, but the accent gap between students and tutors reduces student comprehension. Our work addresses this problem: with its state-of-the-art STT (speech-to-text) and TTS (text-to-speech) software, it intends to ease the students' learning curve. The key targets of this work are international students and individuals with disabilities. It can also be used to transcribe meetings, quickly converting discussion points into text, and companies can use the model to transcribe call recordings and then perform sentiment analysis and similar activities. This research aims to give a detailed walkthrough of the product as it stands and to provide details regarding all its aspects: the various tech stacks used, the implementation of those technologies, the reports shown to the different end users, and the workflow of the product.
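A minimal sketch of the STT + TTS round trip described above, using two common third-party Python packages (speech_recognition and pyttsx3) as stand-ins for the product's unnamed tech stack; the audio file name is a placeholder.

```python
# Illustrative STT + TTS round trip; the real product's stack is not
# specified in the abstract, and "lecture.wav" is a placeholder file.
import speech_recognition as sr
import pyttsx3

def transcribe(wav_path: str) -> str:
    """Speech-to-text: convert a tutor's recorded speech to text."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)   # free web API; needs network

def speak(text: str) -> None:
    """Text-to-speech: read the transcript back out loud."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    transcript = transcribe("lecture.wav")
    print(transcript)
    speak(transcript)
```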
DOI: 10.1103/physrevd.88.115010
2013
Cited 42 times
Electroweakinos in the light of the Higgs boson
Given the increasingly more stringent bounds on supersymmetry (SUSY) from the LHC searches, we are motivated to explore the situation in which the only accessible SUSY states are the electroweakinos (charginos and neutralinos). In the minimal SUSY framework, we systematically study the three general scenarios classified by the relative size of the gaugino mass parameters $M_1$ and $M_2$ and the Higgsino mass parameter $\mu$, with six distinctive cases, four of which would naturally result in a compressed spectrum of nearly degenerate lightest supersymmetric particles. We present the relevant decay branching fractions and provide insightful understanding about the decay modes in connection with the Goldstone-boson equivalence theorem. We show the cross sections for electroweakino pair production at the LHC and International Linear Collider and emphasize the unique signals involving the Standard Model-like Higgs boson as a new search reference. The electroweakino signal from pair production and subsequent decay to the $Wh/Zh$ ($h \rightarrow b\overline{b}$) final state may yield a sensitivity of 95% C.L. exclusion ($5\sigma$ discovery) to the mass scale $M_2$, $\mu \sim$ 350-400 GeV (220-270 GeV) at the 14 TeV LHC with a luminosity of 300 fb$^{-1}$. Combining with all the other decay channels, the 95% C.L. exclusion ($5\sigma$ discovery) may be extended to $M_2$, $\mu \sim$ 480-700 GeV (320-500 GeV). At the ILC, the electroweakinos could be readily discovered once the kinematical threshold is crossed, and their properties could be thoroughly studied.
DOI: 10.2172/1128171
2013
Cited 31 times
Snowmass Energy Frontier Simulations
This document describes the simulation framework used in the Snowmass Energy Frontier studies for future Hadron Colliders. An overview of event generation with Madgraph5 along with parton shower and hadronization with Pythia6 is followed by a detailed description of pile-up and detector simulation with Delphes3. Details of event generation are included in a companion paper cited within this paper. The input parametrization is chosen to reflect the best object performance expected from the future ATLAS and CMS experiments; this is referred to as the "Combined Snowmass Detector". We perform simulations of pp interactions at center-of-mass energies √s = 14, 33, and 100 TeV with 0, 50, and 140 additional pp pile-up interactions. The object performance with multi-TeV pp collisions are studied for the first time using large pile-up interactions.
DOI: 10.2172/1128125
2013
Cited 30 times
Methods and Results for Standard Model Event Generation at $\sqrt{s}$ = 14 TeV, 33 TeV and 100 TeV Proton Colliders (A Snowmass Whitepaper)
This document describes the novel techniques used to simulate the common Snowmass 2013 Energy Frontier Standard Model backgrounds for future hadron colliders. The purpose of many Energy Frontier studies is to explore the reach of high luminosity data sets at a variety of high energy colliders. The generation of high statistics samples which accurately model large integrated luminosities for multiple center-of-mass energies and pile-up environments is not possible using an unweighted event generation strategy; an approach which relies on event weighting was necessary. Even with these improvements in efficiency, extensive computing resources were required. This document describes the specific approach to event generation using Madgraph5 to produce parton-level processes, followed by parton showering and hadronization with Pythia6, and pile-up and detector simulation with Delphes3. The majority of Standard Model processes for pp interactions at √s = 14, 33, and 100 TeV with 0, 50, and 140 additional pile-up interactions are publicly available.
DOI: 10.47893/ijica.2011.1004
2011
Cited 27 times
Web Usage Mining: A Survey on Pattern Extraction from Web Logs
As the size of web increases along with number of users, it is very much essential for the website owners to better understand their customers so that they can provide better service, and also enhance the quality of the website. To achieve this they depend on the web access log files. The web access log files can be mined to extract interesting pattern so that the user behaviour can be understood. This paper presents an overview of web usage mining and also provides a survey of the pattern extraction algorithms used for web usage mining.
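As a small worked example of pattern extraction from access logs (illustrative, not one of the surveyed algorithms), the sketch below parses Common Log Format lines and counts the most frequent consecutive page pairs per client; the log path is a placeholder.

```python
# Toy navigation-pattern extraction from a web access log.
import re
from collections import Counter, defaultdict

# Common Log Format: ip ident user [date] "METHOD path HTTP/x" status size
LINE = re.compile(r'(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)[^"]*"')

def frequent_pairs(log_path: str, top: int = 5):
    visits = defaultdict(list)                 # client IP -> ordered pages
    with open(log_path) as fh:
        for line in fh:
            m = LINE.match(line)
            if m:
                ip, page = m.groups()
                visits[ip].append(page)
    pairs = Counter()
    for pages in visits.values():
        pairs.update(zip(pages, pages[1:]))    # consecutive page pairs
    return pairs.most_common(top)

print(frequent_pairs("access.log"))            # placeholder log file
```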
DOI: 10.1016/j.mejo.2024.106139
2024
A novel step architecture based negative capacitance (SNC) FET: Design and circuit level analysis
This study investigates the effects of temperature on RF/analog and linearity parameters of a 3 nm technology node Step-Negative Capacitance FinFET (SNC-FinFET) for the first time. The SNC-FinFET exhibits superior performance compared to the conventional step architecture, with an enhancement of 7.2% in ION (ON-current), 73.58% in IOFF (OFF-current), an excellent SS (subthreshold swing) of 57.51 mV/decade, and a barrier rise of 52.38%. These results follow from a prior DC analysis in which the electrical parameters such as ION, IOFF, SS, and threshold voltage (VT) were extracted. A dimensional analysis was then performed in terms of gate length (LG), fin widths (Wfin1 and Wfin2), and fin heights (Hfin1 and Hfin2); the optimized dimensions for this study are LG = 18 nm, Wfin1 = 5 nm, Wfin2 = 3 nm, Hfin1 = 50 nm, and Hfin2 = 14 nm. Using the same SNC-FinFET with the optimized dimensions, varying the temperature from 250 K to 400 K degrades the RF/analog characteristics while showing the opposite trend for linearity, with improvements of 40.36%, 67.42%, 49.59%, and 62.31% in the 2nd-order harmonic, 3rd-order harmonic, 2nd-order voltage intercept point, and 3rd-order voltage intercept point, respectively. The proposed device is also analyzed at the circuit level, followed by a noise-margin calculation in which a 46.36% improvement in noise margin is observed at T = 400 K.
DOI: 10.48550/arxiv.1206.2892
2012
Cited 19 times
Supersymmetry production cross sections in pp collisions at sqrt{s} = 7 TeV
This document emerged from work that started in January 2012 as a joint effort by the ATLAS, CMS and LPCC supersymmetry (SUSY) working groups to compile state-of-the-art cross section predictions for SUSY particle production at the LHC. We present cross sections for various SUSY processes in pp collisions at $\sqrt{s} =7$ TeV, including an estimate of the theoretical uncertainty due to scale variation and the parton distribution functions. Further results for higher LHC centre-of-mass energies will be collected at https://twiki.cern.ch/twiki/bin/view/LHCPhysics/SUSYCrossSections. For squark and gluino production, which dominate the inclusive SUSY cross section, we employ calculations which include the resummation of soft gluon emission at next-to-leading logarithmic accuracy, matched to next-to-leading order (NLO) SUSY-QCD. In all other cases we rely on NLO SUSY-QCD predictions.
DOI: 10.2172/1336627
2013
Cited 18 times
Snowmass Energy Frontier Simulations using the Open Science Grid (A Snowmass 2013 whitepaper)
Snowmass is a US long-term planning study for the high-energy community by the American Physical Society's Division of Particles and Fields. For its simulation studies, opportunistic resources are harnessed using the Open Science Grid infrastructure. Late binding grid technology, GlideinWMS, was used for distributed scheduling of the simulation jobs across many sites mainly in the US. The pilot infrastructure also uses the Parrot mechanism to dynamically access CvmFS in order to ascertain a homogeneous environment across the nodes. This report presents the resource usage and the storage model used for simulating large statistics Standard Model backgrounds needed for Snowmass Energy Frontier studies.
DOI: 10.1109/jiot.2020.2965103
2020
Cited 11 times
Reinforcement-Learning-Empowered MLaaS Scheduling for Serving Intelligent Internet of Things
Machine learning (ML) has been embedded in many Internet of Things (IoT) applications (e.g., smart home and autonomous driving). Yet it is often infeasible to deploy ML models on IoT devices due to resource limitation. Thus, deploying trained ML models in the cloud and providing inference services to IoT devices becomes a plausible solution. To provide low-latency ML serving to massive IoT devices, a natural and promising approach is to use parallelism in computation. However, existing ML systems (e.g., Tensorflow) and cloud ML-serving platforms (e.g., SageMaker) are service-level-objective (SLO) agnostic and rely on users to manually configure the parallelism at both request and operation levels. To address this challenge, we propose a region-based reinforcement learning (RRL)-based scheduling framework for ML serving in IoT applications that can efficiently identify optimal configurations under dynamic workloads. A key observation is that the system performance under similar configurations in a region can be accurately estimated by using the system performance under one of these configurations due to their correlation. We theoretically show that the RRL approach can achieve fast convergence speed at the cost of performance loss. To improve the performance, we propose an adaptive RRL algorithm based on Bayesian optimization to balance the convergence speed and the optimality. The proposed framework is prototyped and evaluated on the Tensorflow Serving system. Extensive experimental results show that the proposed approach can outperform state-of-the-art approaches by finding near-optimal solutions over eight times faster while reducing inference latency up to 88.9% and reducing SLO violation up to 91.6%.
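The key observation, that one measured configuration can stand in for its whole region, can be sketched in a few lines. The latency model below is synthetic and the probing loop is a simplification; the actual framework layers reinforcement learning and Bayesian optimization on top of this idea.

```python
# Region-based probing over (request, operation) parallelism configurations.
# measured_latency is a synthetic stand-in for real system measurements.
import random

def measured_latency(request_par: int, op_par: int) -> float:
    """Synthetic latency (ms) for a (request, operation) parallelism pair."""
    base = 100.0 / request_par + 80.0 / op_par + 2.0 * (request_par + op_par)
    return base + random.uniform(-1.0, 1.0)     # measurement noise

def best_region(grid_size: int = 16, region: int = 4):
    """Probe one configuration per region; return the most promising region."""
    scores = {}
    for r0 in range(1, grid_size + 1, region):
        for o0 in range(1, grid_size + 1, region):
            probe = (r0 + region // 2, o0 + region // 2)   # region centre
            scores[(r0, o0)] = measured_latency(*probe)
    return min(scores, key=scores.get)

print(best_region())   # search exhaustively only inside this region next
```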
DOI: 10.1007/s10052-002-0957-3
2002
Cited 25 times
A phenomenological interpretation of open charm production at HERA in terms of the semi-hard approach
In the framework of the semi-hard ( $k_{\rm t}$ -factorization) approach, we analyze the various charm production processes in the kinematic region covered by the HERA experiments.
DOI: 10.1007/s10723-010-9152-1
2010
Cited 12 times
Distributed Analysis in CMS
The CMS experiment expects to manage several Pbytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, supporting a wide community of thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations to get prepared for CMS distributed analysis are presented, followed by the user experience in their current analysis activities.
DOI: 10.48550/arxiv.1311.0299
2013
Cited 10 times
New Particles Working Group Report of the Snowmass 2013 Community Summer Study
This report summarizes the work of the Energy Frontier New Physics working group of the 2013 Community Summer Study (Snowmass).
DOI: 10.1088/1742-6596/513/3/032040
2014
Cited 7 times
CMS computing operations during run 1
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.
DOI: 10.1088/1742-6596/219/7/072013
2010
Cited 7 times
Use of glide-ins in CMS for production and analysis
With the evolution of various grid federations, the Condor glide-ins represent a key feature in providing a homogeneous pool of resources using late-binding technology. The CMS collaboration uses the glide-in based Workload Management System, glideinWMS, for production (ProdAgent) and distributed analysis (CRAB) of the data. The Condor glide-in daemons traverse to the worker nodes, submitted via Condor-G. Once activated, they preserve the Master-Worker relationships, with the worker first validating the execution environment on the worker node before pulling the jobs sequentially until the expiry of their lifetimes. The combination of late-binding and validation significantly reduces the overall failure rate visible to CMS physicists. We discuss the extensive use of the glideinWMS since the computing challenge, CCRC-08, in order to prepare for the forthcoming LHC data-taking period. The key features essential to the success of large-scale production and analysis on CMS resources across major grid federations, including EGEE, OSG and NorduGrid are outlined. Use of glide-ins via the CRAB server mechanism and ProdAgent, as well as first hand experience of using the next generation CREAM computing element within the CMS framework is discussed.
DOI: 10.47893/ijica.2011.1007
2011
Cited 6 times
A Review Of Trends In Research On Web Mining
In recent years the growth of the World Wide Web exceeded all expectations. Today there are several billions of HTML documents, pictures and other multimedia files available via the internet, and the number is still rising. But considering the impressive variety of the web, retrieving interesting content has become a very difficult task. The World Wide Web is therefore a fertile area for data mining research. Web mining is a research topic which combines two active research areas: Data Mining and the World Wide Web. Web mining research relates to several research communities, such as Database, Information Retrieval, Artificial Intelligence and Visualization. This paper reviews the research and application issues in web mining, besides providing an overall view of the field.
DOI: 10.1088/1742-6596/219/6/062036
2010
Cited 6 times
Scalability and interoperability within glideinWMS
Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.
DOI: 10.1109/ccgrid.2018.00040
2018
Cited 6 times
Addressing the Challenges of Executing a Massive Computational Cluster in the Cloud
A major limitation for time-to-science can be the lack of available computing resources. Depending on the capacity of resources, executing an application suite with hundreds of thousands of jobs can take weeks when resources are in high demand. We describe how we dynamically provision a large scale high performance computing cluster of more than one million cores utilizing Amazon Web Services (AWS). We discuss the trade-offs, challenges, and solutions associated with creating such a large scale cluster with commercial cloud resources. We utilize our large scale cluster to study a parameter sweep workflow composed of message-passing parallel topic modeling jobs on multiple datasets. At peak, we achieve a simultaneous core count of 1,119,196 vCPUs across nearly 50,000 instances, and are able to execute almost half a million jobs within two hours utilizing AWS Spot Instances in a single AWS region. Our solutions to the challenges and trade-offs have broad application to the lifecycle management of similar clusters on other commercial clouds.
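One building block of such dynamic provisioning is a Spot capacity request; a hedged boto3 sketch is shown below. The AMI ID, instance type, key name, and count are placeholders, and the paper's real system orchestrates far more (multi-AZ placement, fleet sizing, retries).

```python
# Hedged sketch of requesting Spot capacity with boto3. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    InstanceCount=100,                      # one small slice of ~50k instances
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0", # placeholder AMI with the job stack
        "InstanceType": "c5.18xlarge",      # placeholder many-core type
        "KeyName": "cluster-key",           # placeholder key pair
    },
)
for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```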
DOI: 10.1088/1742-6596/219/7/072007
2010
Cited 5 times
CMS analysis operations
During normal data taking CMS expects to support potentially as many as 2000 analysis users. Since the beginning of 2008 there have been more than 800 individuals who submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 CMS Tier-2 centres. Supporting a globally distributed community of users on a globally distributed set of computing clusters is a task that requires reconsidering the normal methods of user support for Analysis Operations. In 2008 CMS formed an Analysis Support Task Force in preparation for large-scale physics analysis activities. The charge of the task force was to evaluate the available support tools, the user support techniques, and the direct feedback of users with the goal of improving the success rate and user experience when utilizing the distributed computing environment. The task force determined the tools needed to assess and reduce the number of non-zero exit code applications submitted through the grid interfaces and worked with the CMS experiment dashboard developers to obtain the necessary information to quickly and proactively identify issues with user jobs and data sets hosted at various sites. Results of the analysis group surveys were compiled. Reference platforms for testing and debugging problems were established in various geographic regions. The task force also assessed the resources needed to make the transition to a permanent Analysis Operations task. In this presentation the results of the task force will be discussed as well as the CMS Analysis Operations plans for the start of data taking.
DOI: 10.1007/978-981-13-3329-3_44
2018
Cited 5 times
Static Task Scheduling Heuristic Approach in Cloud Computing Environment
Scheduling assigns a task to a resource with a starting and ending time, whereas mapping assigns the resource to the task without specifying the start time. Mapping can be performed under several conditions: either the set of scheduled tasks is known or it is not. If it is known, one only needs to choose a method so that the tasks are mapped correctly; otherwise, varying circumstances must be considered. The proposed article focuses on how to choose a mapping algorithm when the tasks are scheduled, and experimentally analyzes different algorithms to determine which performs best under different conditions.
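As an example of the heuristics such comparisons typically include, here is a sketch of the classic min-min mapping heuristic (a representative choice; the chapter does not necessarily use this exact algorithm). The expected-execution-time matrix is illustrative.

```python
# Min-min static mapping heuristic over an expected-time-to-compute matrix.
def min_min(etc):
    """etc[t][m] = estimated time of task t on machine m. Returns a mapping."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines           # machine available times
    unmapped = set(range(n_tasks))
    mapping = {}
    while unmapped:
        # For each unmapped task find its earliest completion time over all
        # machines, then commit the task whose earliest completion is smallest.
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unmapped for m in range(n_machines)),
            key=lambda x: x[2],
        )
        mapping[t] = m
        ready[m] = finish
        unmapped.remove(t)
    return mapping, max(ready)           # task->machine mapping and makespan

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17]]  # toy numbers
print(min_min(etc))
```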
DOI: 10.48550/arxiv.1309.7342
2013
Cited 4 times
Electroweakino Searches: A Comparative Study for LHC and ILC (A Snowmass White Paper)
We make a systematic and comparative study for the LHC and ILC for the electroweakino searches in the Minimal Supersymmetric Standard Model. We adopt a general bottom-up approach and scan over the parameter regions for all three cases of the lightest supersymmetric particle being Bino-, Wino-, and Higgsino-like. The electroweakino signal from pair production and subsequent decay to the Wh (h to b\bar b) final state may yield a sensitivity of 95% C.L. exclusion (5sigma discovery) to the mass scale M_2, mu ~ 250-400 GeV (200-250 GeV) at the 14 TeV LHC with a luminosity of 300 fb^{-1}. Combining with all the other decay channels, the 95% C.L. exclusion (5sigma discovery) may be extended to M_2, mu ~ 480-700 GeV (320-500 GeV). At the ILC, the electroweakinos could be readily discovered once the kinematical threshold is crossed, and their properties could be thoroughly studied.
DOI: 10.1007/978-981-13-3600-3_9
2019
Cited 4 times
Optimization of Cloud Datacenter Using Heuristic Strategic Approach
Task scheduling is extremely challenging: it is difficult to utilize resources in the best possible manner while keeping response time low and throughput high. Scheduling can be designed around different criteria under several rules and regulations, which amount to an agreement between cloud users and cloud providers. The problem has attracted much attention and is hard because cloud resources are heterogeneous, with varying capacities and functionalities; minimizing the makespan is therefore a challenging issue. The scheduling algorithm must emphasize not only appropriate but also efficient resource utilization. The performance of the proposed algorithm is estimated on the basis of load balancing of tasks over the nodes and makespan time. A scheduling algorithm enhances system performance by maximizing CPU utilization, reducing turnaround time, and maximizing throughput; tasks are scheduled either statically, with the available resources allocated at compile time, or dynamically. The prime objective of the task scheduling approach in the cloud is to minimize task completion time, task waiting time, and makespan, and to optimize the utilization of resources.
DOI: 10.1109/e-science.2007.47
2007
Cited 5 times
Large-Scale ATLAS Simulated Production on EGEE
In preparation for first data at the LHC, a series of Data Challenges, of increasing scale and complexity, have been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable continuous production, as is required in the immediate run-up to first data, and thereafter. Here, we discuss the experience of the production done on EGEE resources, using submission based on the gLite WMS, CondorG and a system using Condor Glide-ins. The overall walltime efficiency of around 90% is largely independent of the submission method, and the dominant source of wasted CPU comes from data handling issues. The efficiency of grid job submission is significantly worse than this, and the glide-in method benefits greatly from factorising this out.
DOI: 10.1109/nssmic.2008.4774771
2008
Cited 4 times
The commissioning of CMS computing centres in the worldwide LHC computing Grid
The computing system of the CMS experiment uses distributed resources from more than 60 computing centres worldwide. Located in Europe, America and Asia, these centres are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behavior of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use their network to transfer data, services relevant for CMS and the capability to sustain the various CMS computing workflows (Monte Carlo simulation, event reprocessing and skimming, data analysis) at the required scale. This contribution describes in detail the procedure to rate CMS sites depending on their performance, including the complete automation of the program, the description of monitoring tools, and its impact in improving the overall reliability of the Grid from the point of view of the CMS computing system.
DOI: 10.1088/1742-6596/331/6/062032
2011
Cited 3 times
CMS Distributed Computing Integration in the LHC sustained operations era
After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless it is the same need of stability and smooth operations that requires the introduction of features that were considered not strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks on the infrastructure; increased automation to reduce the manpower needed for operations; effective process to deploy in production new releases of the software tools. We present the work of the CMS Distributed Computing Integration Activity that is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements to Grid and Cloud software developers for the future.
2013
Cited 3 times
Working Group Report: New Particles, Forces, and Dimensions
DOI: 10.1088/1742-6596/219/6/062022
2010
Cited 3 times
The CREAM-CE: First experiences, results and requirements of the four LHC experiments
In terms of the gLite middleware, the current LCG-CE used by the four LHC experiments is about to be deprecated. The new CREAM-CE service (Computing Resource Execution And Management) has been approved to replace the previous service. CREAM-CE is a lightweight service created to handle job management operations at the CE level. It is able to accept requests both via the gLite WMS service and via direct submission for transmission to the local batch system.
DOI: 10.1063/5.0143069
2023
Flexible manufacturing system with Industry 4.0
DOI: 10.1088/1742-6596/2603/1/012024
2023
The Effect of Interconnecting Ribbons in External Resistance of Crystalline Silicon Photovoltaic Modules
The interconnecting ribbons in commercial crystalline silicon (c-Si) photovoltaic (PV) modules contribute significantly to the external resistance of the modules. This external resistance plays a major role in resistive losses, so measures should be taken to minimize it. A wider ribbon blocks more of the incoming light to the solar cells, and selecting the wrong ribbon configuration can lead to breakage in fingers, which further increases resistive losses in the modules. In this paper, an analytical approach using a resistive network model is proposed which effectively describes the effect of ribbon dimensions on both the blockage area and the external resistance of the modules. The model helps quantify the resistive effects in PV modules. The results show that the number of busbars and the width of the busbars in a solar cell play a significant role in choosing the right ribbon configuration. The proposed approach is useful for determining the optimum ribbon configuration in newly manufactured c-Si PV modules.
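As a worked illustration of the resistive-network viewpoint (standard formulas, not the paper's full model), the series resistance of one ribbon and the cell area it shades are:

```latex
% Standard resistance formula for a single ribbon plus a rough shading
% estimate; the paper's full resistive-network model combines many such
% elements.
R_{\text{ribbon}} = \rho \,\frac{L}{w\,t},
\qquad
A_{\text{shaded}} \approx n_{\text{bb}} \, w \, L_{\text{cell}}
```

where ρ is the ribbon resistivity, L, w, and t are its length, width, and thickness, n_bb is the number of busbars, and L_cell the cell edge length. Widening w lowers R_ribbon but grows A_shaded, which is precisely the trade-off the paper quantifies.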
DOI: 10.48550/arxiv.2311.07888
2023
RoboSense At Edge: Detecting Slip, Crumple and Shape of the Object in Robotic Hand for Teleoperations
Slip and crumple detection is essential for performing robust manipulation tasks with a robotic hand (RH), as in remote surgery, and it has been one of the challenging problems in the robotic manipulation community. In this work, we propose a machine learning (ML) based technique to detect slip and crumple as well as the shape of an object currently held in the robotic hand. The proposed ML model detects the slip, crumple, and shape using the force/torque exerted and the angular positions of the actuators present in the RH. The model would be integrated into the loop of a robotic hand (RH) and haptic glove (HG), helping to reduce latency in teleoperation.
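A minimal sketch of the classification task on synthetic data; the random-forest choice and the feature layout are assumptions, since the abstract does not commit to a specific model.

```python
# Predict a grasp state (stable / slip / crumple) from force-torque and
# joint-angle readings. Data and labelling rule are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Columns: Fx, Fy, Fz, Tx, Ty, Tz, plus 4 actuator angles (radians).
X = rng.normal(size=(n, 10))
# Synthetic rule: large tangential force -> slip, low grip torque -> crumple.
y = np.where(np.abs(X[:, 0]) > 1.0, "slip",
    np.where(X[:, 3] < -1.0, "crumple", "stable"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```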
DOI: 10.1109/nssmic.2009.5401636
2009
Use of late-binding technology for workload management system in CMS
Condor glidein-based workload management system (glideinWMS) has been developed and integrated with distributed physics analysis and Monte Carlo (MC) production system at Compact Muon Solenoid (CMS) experiment. The late-binding between the jobs and computing element (CE), and the validation of WorkerNode (WN) environment help significantly reduce the failure rate of Grid jobs. For CPU-consuming MC data production, opportunistic grid resources can be effectively explored via the extended computing pool built on top of various heterogeneous Grid resources. The Virtual Organization (VO) policy is embedded into the glideinWMS and pilot job configuration. GSI authentication, authorization and interfacing with gLExec allows a large user basis to be supported and seamlessly integrated with Grid computing infrastructure. The operation of glideinWMS at CMS shows that it is a highly available and stable system for a large VO of thousands of users and running tens of thousands of user jobs simultaneously. The enhanced monitoring allows system administrators and users to easily track the system-level and job-level status.
DOI: 10.22323/1.398.0842
2022
Application of Quantum Machine Learning to HEP Analysis at LHC Using Quantum Computer Simulators and Quantum Computer Hardware
Machine learning enjoys widespread success in High Energy Physics (HEP) analyses at the LHC. However, the ambitious HL-LHC program will require much more computing resources over the next two decades. Quantum computing may offer a speed-up for HEP analyses at the HL-LHC and can be a new computational paradigm for big-data analyses in High Energy Physics. We have successfully employed three methods, (1) the Variational Quantum Classifier (VQC) method, (2) the Quantum Support Vector Machine Kernel (QSVM-kernel) method, and (3) the Quantum Neural Network (QNN) method, for two LHC flagship analyses: ttH (Higgs production in association with two top quarks) and H->mumu (Higgs decay to two muons, the second-generation fermions). We address the progressive improvements in performance from method (1) to method (3). We present our experiences and results of a study on LHC High Energy Physics data analyses with the IBM Quantum Simulator and Quantum Hardware (using the IBM Qiskit framework), the Google Quantum Simulator (using the Google Cirq framework), and the Amazon Quantum Simulator (using the Amazon Braket cloud service). The work is in the context of a qubit platform (a gate-model quantum computer). Given the present limitations of hardware access, the different quantum machine learning methods are studied on simulators and the results are compared with classical machine learning methods (BDT, classical Support Vector Machine, and classical Neural Network). Furthermore, we apply quantum machine learning on IBM quantum hardware to compare performance between the quantum simulator and quantum hardware. The work is performed by an international and interdisciplinary collaboration with the Department of Physics and Department of Computer Sciences of the University of Wisconsin, the CERN Quantum Technology Initiative, IBM Research Zurich, the IBM T.J. Watson Research Center, the Fermilab Quantum Institute, the BNL Computational Science Initiative, the State University of New York at Stony Brook, and Quantum Computing and AI Research of Amazon Web Services. This work pioneers a close collaboration of academic institutions with industrial corporations in the High Energy Physics analysis effort. Although the size of event samples in future HL-LHC physics and the limited number of qubits pose some challenges to quantum machine learning studies for High Energy Physics, more advanced quantum computers with a larger number of qubits, reduced noise, and improved running time (as envisioned by IBM and Google) may outperform classical machine learning in both classification power and speed. Although the era of efficient quantum computing may still be years away, we have made promising progress and obtained preliminary results in applying quantum machine learning to High Energy Physics: a proof of principle.
DOI: 10.48550/arxiv.1310.1190
2013
Review on Fragment Allocation by using Clustering Technique in Distributed Database System
Considerable progress has been made in the last few years in improving the performance of distributed database systems. The development of fragment allocation models in distributed databases is becoming difficult due to the complexity of the huge number of sites and their communication considerations. Under such conditions, simulation of clustering and data allocation are adequate tools for understanding and evaluating the performance of data allocation in distributed databases. Clustering sites and fragment allocation are key challenges in distributed database performance, and are considered to be efficient methods that have a major role in reducing the data transferred and accessed during the execution of applications. In this paper, a review of fragment allocation using clustering techniques in distributed database systems is given.
DOI: 10.47893/ijcsi.2011.1004
2011
An Efficient Algorithm for Mining of Frequent Items Using an Incremental Model
Data mining is a part of the knowledge discovery in databases (KDD) process. As technology advances, floods of data are produced and shared by many applications such as wireless sensor networks or Web click streams. This calls for extracting useful information and knowledge from streams of data. In this paper, we propose an efficient algorithm in which, at any time, the current frequencies of all frequent item sets can be immediately produced. The current frequency of an item set in a stream is defined as its maximal frequency over all possible windows in the stream from any point in the past until the current state. The experimental results show that the proposed algorithm not only maintains a small summary of information per item set but also consumes less memory than existing algorithms for mining frequent item sets over recent data streams.
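The max-frequency definition is easy to state in code. The brute-force sketch below recomputes over all windows ending at the current state for clarity; the paper's contribution is an incremental summary that avoids exactly this rescan.

```python
# Max-frequency of a single item: maximal frequency over every window that
# ends at the current state. Brute force, for illustration only.
def max_frequency(stream, item):
    best = 0.0
    hits = 0
    # Walk backwards so each prefix of the walk is a window ending "now".
    for length, x in enumerate(reversed(stream), start=1):
        hits += x == item
        best = max(best, hits / length)
    return best

stream = list("abaabbba")
print(max_frequency(stream, "a"))   # max over all windows ending at the last tick
```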
DOI: 10.1088/1742-6596/219/7/072051
2010
A Grid job monitoring system
This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow in detail their jobs in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax powered web interface having a look-and-feel and control similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509 authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.
DOI: 10.22323/1.070.0043
2009
The commissioning of CMS computing centres in the WLCG Grid
2019
Petabytes to science
Paper published on arXiv to raise awareness and start discussions about data access in the era of large astronomical surveys.
DOI: 10.1142/9789812704962_0038
2003
HEAVY QUARK PRODUCTION AT HERA WITH BFKL AND CCFM DYNAMICS
In the framework of the semi-hard ($k_t$-factorization) approach, we analyze the various charm production processes in the kinematic region covered by the HERA experiments.
DOI: 10.4304/jcp.9.5.1041-1046
2014
A Protocol For Energy Efficient Mechanism For Wireless Sensor Network With Symmetric Cluster Formation
Network scalability can be achieved by grouping sensor nodes into a clustering hierarchy, with a cluster head acting as the leader of each cluster. Many clustering schemes have been used in wireless sensor networks to achieve stable clustering in mobile environments. In this paper, we propose an extension of the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol, an energy-efficient mechanism for wireless sensor networks with symmetric cluster formation, which we call the Highest Energy Clustering Hierarchy (HECH) protocol. Simulation studies show the correctness and effectiveness of our protocol.
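A minimal sketch of the highest-energy election idea follows; the geographic bucketing used to form clusters is an invented stand-in, since the abstract does not specify HECH's cluster-formation rule.

```python
# Toy sketch: within each cluster, the node with the most residual energy becomes
# the cluster head. Cluster assignment is a simple x-coordinate bucketing (assumed).
import random

nodes = [{"id": i, "x": random.random(), "y": random.random(),
          "energy": random.uniform(0.2, 1.0)} for i in range(20)]

def elect_heads(nodes, n_clusters=4):
    heads = []
    for c in range(n_clusters):
        members = [n for n in nodes if int(n["x"] * n_clusters) == c]
        if members:
            # highest residual energy wins the election in this cluster
            heads.append(max(members, key=lambda n: n["energy"]))
    return heads

for h in elect_heads(nodes):
    print("head", h["id"], "energy", round(h["energy"], 2))
```

Re-running the election each round naturally rotates the head role away from depleted nodes, which is the intuition behind energy-aware extensions of LEACH.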
2014
First Look at the Physics Case of the FCC-ee (TLEP)
2013
A Comparison of Future Proton Colliders Using SUSY Simplified Models: A Snowmass Whitepaper
We present a summary of results for SUSY Simplified Model searches at future proton colliders: the 14 TeV LHC as well as a 33 TeV proton collider and a 100 TeV proton collider. Upper limits and discovery significances are provided for the gluino-neutralino (for both light and heavy flavor decays), squark-neutralino, and gluino-squark Simplified Model planes. Events are processed with the Snowmass combined detector and Standard Model backgrounds are computed using the Snowmass samples. We place emphasis on comparisons between different collider scenarios, along with the lessons learned regarding the impact of systematic errors and pileup. More details are provided in a companion paper.
2011
Searches for supersymmetry in final states with leptons or photons and missing energy
We present the results of searches for supersymmetry in various topologies that lead to final states with jets and missing transverse momentum together with one or more isolated leptons, one or two photons, or a photon and a lepton. The searches are performed using data collected by the CMS experiment at the LHC in pp collisions at a center-of-mass energy of 7 TeV. Various data-driven techniques used to measure the Standard Model backgrounds are discussed. The results are interpreted in the CMSSM framework.
DOI: 10.1088/1742-6596/331/7/072051
2011
Measuring and understanding computer resource utilization in CMS
Significant funds are expended in order to make CMS data analysis possible across Tier-2 and Tier-3 resources worldwide. Here we review how CMS monitors operational success in using those resources, identifies and understands problems, monitors trends, provides feedback to site operators and software developers, and generally accumulates quantitative data on the operational aspects of CMS data analysis. This includes data transfers, data distribution, use of data and software releases for analysis, failure analysis and more.
2013
Review on Fragment Allocation by using Clustering Technique in Distributed Database System
Considerable progress has been made in recent years in improving the performance of distributed database systems. Developing fragment-allocation models for distributed databases is becoming difficult because of the large number of sites and their communication costs. Under such conditions, simulation of clustering and data allocation is an adequate tool for understanding and evaluating data-allocation performance in distributed databases. Clustering sites and allocating fragments are key challenges for distributed-database performance, and are efficient methods that play a major role in reducing the data transferred and accessed during application execution. This paper reviews fragment allocation using clustering techniques in distributed database systems.
DOI: 10.47893/ijica.2011.1009
2011
Minimal Energy Efficient Routing (MEER) Protocol using GSP For Sensor Network
The most important criterion when designing a wireless sensor network is energy consumption [5,6,7], and many schemes have been proposed to address energy conservation [1,2,8]. Routing schemes with minimal energy consumption are likewise an important consideration. In this paper, we propose an energy-saving scheme, the Minimal Energy Efficient Routing (MEER) protocol, which uses GSP (Gossip-based Sleep Protocol) to achieve energy efficiency in sensor networks. We compare our work with the existing GSP scheme [1] and show the correctness and effectiveness of our protocol through simulation studies.
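For intuition about the gossip-based sleeping that MEER builds on, here is a minimal sketch, assuming a fixed sleep probability and illustrative per-round energy costs (neither is specified in the abstract): each round, every node independently sleeps with the gossip probability, trading a little connectivity for energy.

```python
# Toy sketch of the GSP idea: per round, each node sleeps with probability p_sleep;
# sleeping nodes spend far less energy. All numeric parameters are assumptions.
import random

def gsp_round(nodes, p_sleep=0.3, cost_awake=1.0, cost_asleep=0.1):
    for n in nodes:
        asleep = random.random() < p_sleep
        n["energy"] -= cost_asleep if asleep else cost_awake
        n["asleep"] = asleep  # a routing layer would skip sleeping nodes

nodes = [{"id": i, "energy": 100.0} for i in range(10)]
for _ in range(50):
    gsp_round(nodes)
print(sum(n["energy"] for n in nodes) / len(nodes))  # mean residual energy after 50 rounds
```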
2011
Web Usage Mining in Soft Computing Framework: A Review and State of the Art
This study presents the role of soft-computing techniques (artificial neural networks (ANN), fuzzy logic (FL) and genetic algorithms (GA)) in the area of web usage mining. In recent years the growth of the World Wide Web has exceeded all expectations: today there are several billion HTML documents, pictures and other multimedia files available via the internet, and the number is still rising. Given the impressive variety of the web, however, retrieving interesting content has become a very difficult task, making the World Wide Web a fertile area for data mining research. Web mining is a research topic that combines two active research areas: data mining and the World Wide Web. Web mining research relates to several communities, such as databases, information retrieval, artificial intelligence and visualization. This paper also reviews the research and application issues in web mining.
DOI: 10.22323/1.134.0271
2012
Searches for supersymmetry in final states with leptons or photons and missing energy
We present the results of searches for supersymmetry in various topologies that lead to final states with jets and missing transverse momentum together with one or more isolated leptons, one or two photons, or a photon and a lepton. The searches are performed using data collected by the CMS experiment at the LHC in pp collisions at a center-of-mass energy of 7 TeV. Various data-driven techniques used to measure the Standard Model backgrounds are discussed. The results are interpreted in the CMSSM framework.
2011
Searches for supersymmetry in final states with leptons or photons and missing energy
We present the results of searches for supersymmetry in various topologies that lead to final states with jets and missing transverse momentum together with one or more isolated leptons, one or two photons, or a photon and a lepton. The searches are performed using data collected by the CMS experiment at the LHC in pp collisions at a center-of-mass energy of 7 TeV. Various data-driven techniques used to measure the Standard Model backgrounds are discussed. The results are interpreted in the CMSSM framework.
2011
Searches for supersymmetry in final states with leptons or photons and missing energy
We present the results of searches for supersymmetry in various topologies that lead to final states with jets and missing transverse momentum together with one or more isolated leptons, one or two photons, or a photon and a lepton. The searches are performed using data collected by the CMS experiment at the LHC in pp collisions at a center-of-mass energy of 7 TeV. Various data-driven techniques used to measure the Standard Model backgrounds are discussed. The results are interpreted in the CMSSM framework.
DOI: 10.48550/arxiv.1308.1636
2013
Methods and Results for Standard Model Event Generation at $\sqrt{s}$ = 14 TeV, 33 TeV and 100 TeV Proton Colliders (A Snowmass Whitepaper)
This document describes the novel techniques used to simulate the common Snowmass 2013 Energy Frontier Standard Model backgrounds for future hadron colliders. The purpose of many Energy Frontier studies is to explore the reach of high-luminosity data sets at a variety of high-energy colliders. The generation of high-statistics samples that accurately model large integrated luminosities for multiple center-of-mass energies and pile-up environments is not possible with an unweighted event-generation strategy; an approach relying on event weighting was therefore necessary. Even with these improvements in efficiency, extensive computing resources were required. This document describes the specific approach to event generation using Madgraph5 to produce parton-level processes, followed by parton showering and hadronization with Pythia6, and pile-up and detector simulation with Delphes3. The majority of Standard Model processes for pp interactions at $\sqrt{s}$ = 14, 33, and 100 TeV with 0, 50, and 140 additional pile-up interactions are publicly available.
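To illustrate why event weighting matters for steeply falling spectra, the following toy sketch (an assumed exponential pT spectrum, not a Snowmass process) compares a flat sampler carrying per-event weights against direct unweighted sampling: the weighted sample populates the high-pT tail with many small-weight entries, where the unweighted sample has almost none.

```python
# Toy comparison of weighted vs unweighted event generation for a falling spectrum.
# The exponential "cross section" and all parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
lo, hi, n = 20.0, 500.0, 100_000

# Weighted: sample pT flat, carry the spectrum value as a per-event weight
pt_w = rng.uniform(lo, hi, size=n)
w = np.exp(-pt_w / 50.0)

# Unweighted: sample pT directly from the falling spectrum (truncated exponential)
pt_u = rng.exponential(50.0, size=5 * n)
pt_u = pt_u[(pt_u > lo) & (pt_u < hi)][:n]

tail = (400.0, 500.0)
print("weighted entries in tail bin:  ", ((pt_w > tail[0]) & (pt_w < tail[1])).sum())
print("unweighted entries in tail bin:", ((pt_u > tail[0]) & (pt_u < tail[1])).sum())
```

The weighted tail bin holds thousands of entries versus a handful for the unweighted sample, which is the statistical leverage that made the multi-energy, multi-pile-up Snowmass samples feasible.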
DOI: 10.48550/arxiv.1309.1057
2013
Snowmass Energy Frontier Simulations
This document describes the simulation framework used in the Snowmass Energy Frontier studies for future hadron colliders. An overview of event generation with Madgraph5, along with parton showering and hadronization with Pythia6, is followed by a detailed description of pile-up and detector simulation with Delphes3. Details of event generation are included in a companion paper cited within this paper. The input parametrization is chosen to reflect the best object performance expected from the future ATLAS and CMS experiments; this is referred to as the "Combined Snowmass Detector". We perform simulations of $pp$ interactions at center-of-mass energies $\sqrt{s} =$ 14, 33, and 100 TeV with 0, 50, and 140 additional $pp$ pile-up interactions. The object performance with multi-TeV $pp$ collisions is studied for the first time using large pile-up interactions.
DOI: 10.48550/arxiv.1310.0077
2013
A Comparison of Future Proton Colliders Using SUSY Simplified Models: A Snowmass Whitepaper
We present a summary of results for SUSY Simplified Model searches at future proton colliders: the 14 TeV LHC as well as a 33 TeV proton collider and a 100 TeV proton collider. Upper limits and discovery significances are provided for the gluino-neutralino (for both light and heavy flavor decays), squark-neutralino, and gluino-squark Simplified Model planes. Events are processed with the Snowmass combined detector and Standard Model backgrounds are computed using the Snowmass samples. We place emphasis on comparisons between different collider scenarios, along with the lessons learned regarding the impact of systematic errors and pileup. More details are provided in a companion paper.
2013
SUSY Simplified Models at 14, 33, and 100 TeV Proton Colliders
DOI: 10.48550/arxiv.1111.2733
2011
Searches for supersymmetry in final states with leptons or photons and missing energy
We present the results of searches for supersymmetry in various topologies that lead to final states with jets and missing transverse momentum together with one or more isolated leptons, one or two photons, or a photon and a lepton. The searches are performed using data collected by the CMS experiment at the LHC in pp collisions at a center-of-mass energy of 7 TeV. Various data-driven techniques used to measure the Standard Model backgrounds are discussed. The results are interpreted in the CMSSM framework.
2002
Associated D* and Dijet Production at HERA as a Test of the $k_T$-Factorization Approach
In the framework of the semi-hard ($k_T$-factorization) approach, we analyze the various charm production processes in the kinematic region covered by the HERA experiments.
DOI: 10.23956/ijarcsse/v7i7/0174
2017
Smart Driving System for Improving Traffic Flow
Traffic congestion on road networks is one of the most significant problems faced in almost all urban areas. Driving under congestion forces frequent idling, acceleration and braking, which increase energy consumption and wear and tear on vehicles. Traffic flow can be improved by maneuvering vehicles more efficiently. An Adaptive Cruise Control (ACC) system in a car automatically detects its leading vehicle and adjusts the headway using both the throttle and the brake. Conventional ACC systems are not suitable in congested traffic because of their response delay, so smart technologies that improve traffic flow, throughput and safety are needed. To achieve a safe inter-vehicle distance, improve safety and avoid congestion, the limits of human perception of traffic conditions and human reaction characteristics must be analyzed; moreover, erroneous human driving can generate shockwaves that cause traffic-flow instabilities. In this paper, to achieve a tight inter-vehicle distance and improved throughput, we consider a Cooperative Adaptive Cruise Control (CACC) system, which is then implemented in a Smart Driving System. For better performance, wireless communication is used to exchange information between individual vehicles. By introducing vehicle-to-vehicle (V2V) and vehicle-to-roadside-infrastructure (V2R) communications, a vehicle receives information not only from its preceding and following vehicles but also from the vehicles ahead of and behind those, enabling it to follow its predecessor at a closer distance under tighter control.
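To illustrate the car-following core of the CACC idea, here is a minimal sketch, assuming an ideal V2V channel (the leader's position and speed are known exactly) and invented controller gains and time-gap; it is not the paper's controller, only the standard gap-and-relative-speed feedback pattern such systems build on.

```python
# Toy CACC step: accelerate based on the spacing error relative to a constant
# time-gap policy plus the speed difference to the leader. Gains are assumptions.
def cacc_step(follower, leader, headway=1.0, kp=0.5, kv=0.8, dt=0.1):
    gap = leader["pos"] - follower["pos"]
    desired = headway * follower["vel"]          # constant time-gap spacing policy
    accel = kp * (gap - desired) + kv * (leader["vel"] - follower["vel"])
    follower["vel"] = max(0.0, follower["vel"] + accel * dt)
    follower["pos"] += follower["vel"] * dt

leader = {"pos": 30.0, "vel": 20.0}
follower = {"pos": 0.0, "vel": 25.0}
for _ in range(100):                             # simulate 10 s at dt = 0.1 s
    leader["pos"] += leader["vel"] * 0.1
    cacc_step(follower, leader)
print(round(leader["pos"] - follower["pos"], 1))  # settles near headway * speed = 20 m
```

Feeding the controller information from vehicles further ahead, as the V2V/V2R scheme above describes, is what lets the platoon damp shockwaves instead of amplifying them.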
2010
The pilot way to Grid resources using glideinWMS
2019
Cloud-Based Remote Healthcare System Environment
2018
Scheduling Issues and Analysis under Distributed Computing Environment
DOI: 10.3204/desy-thesis-2004-012
2004
Charm jets in photoproduction at HERA
DOI: 10.1142/9789812702722_0017
2004
OPEN CHARM AND BEAUTY PRODUCTION
DOI: 10.5170/cern-2005-002.1014
2004
Installing and Operating a Grid Infrastructure at DESY.
DESY is one of the world's leading centres for research with particle accelerators and synchrotron light. At the hadron-electron collider HERA, three experiments are currently taking data and will be operated until 2007. Since the end of August 2004 the DESY Production Grid has been operated on the basis of the recent LCG-2 release. Its Grid infrastructure is used for all DESY Grid activities, including national and international Grid projects. The HERA experiments are adapting their Monte Carlo production schemes to the Grid.
2005
Full simulation of Higgs boson decays into four leptons for ATLAS at the LHC
2002
Charm fragmentation and dijet angular distributions
Charm fragmentation and dijet angular distributions have been measured in $D^*$ photoproduction at HERA. Charm fragmentation, and its property of universality, is evaluated through a measurement of $P_v$, the ratio of vector to (vector + pseudoscalar) mesons. Angular distributions of dijets, with at least one of the jets associated with a $D^{*\pm}$ meson, have been measured for samples enriched in direct or resolved photon events. The differential cross section shows a steep rise for resolved events in the photon direction, providing strong evidence that the bulk of the resolved photon cross section is due to the charm content of the photon. The shallower rise for direct events, as well as for resolved photon events in the proton direction, is consistent with quark-exchange diagrams.
DOI: 10.1142/9789812777157_0027
2002
HEAVY-FLAVOURED JETS AT HERA
Heavy-flavoured jets have been studied in photoproduction at HERA. This includes the first measurement of dijet angular distributions in D* photoproduction and the ratio of the vector/(vector + pseudoscalar) production rate for charm mesons.
DOI: 10.48550/arxiv.hep-ex/0207023
2002
Charm fragmentation and dijet angular distributions
Charm fragmentation and dijet angular distributions have been measured in $D^*$ photoproduction at HERA. Charm fragmentation, and its property of universality, is evaluated through a measurement of $P_v$, the ratio of vector to (vector + pseudoscalar) mesons. Angular distributions of dijets, with at least one of the jets associated with a $D^{*\pm}$ meson, have been measured for samples enriched in direct or resolved photon events. The differential cross section shows a steep rise for resolved events in the photon direction, providing strong evidence that the bulk of the resolved photon cross section is due to the charm content of the photon. The shallower rise for direct events, as well as for resolved photon events in the proton direction, is consistent with quark-exchange diagrams.