
Sanjeev Kumar

Here are the papers by Sanjeev Kumar that you can download and read on OA.mg.

DOI: 10.1145/1454115.1454128
2008
Cited 3,058 times
The PARSEC benchmark suite
This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
DOI: 10.1103/physrevlett.89.142001
2002
Cited 258 times
Observation of Double cc̄ Production in e⁺e⁻ Annihilation at √s ≈ 10.6 GeV
We report the observation of prompt J/ψ via double cc̄ production from the e⁺e⁻ continuum. In this process one cc̄ pair fragments into a J/ψ meson while the remaining pair either produces a charmonium state or fragments into open charm. Both cases have been experimentally observed. We find cross sections of σ[e⁺e⁻ → J/ψ ηc(γ)] × B(ηc → ≥4 charged) = (0.033 +0.007 −0.006 ± 0.009) pb and σ(e⁺e⁻ → J/ψ D*⁺X) = (0.53 +0.19 −0.15 ± 0.14) pb, and infer σ(e⁺e⁻ → J/ψ cc̄)/σ(e⁺e⁻ → J/ψ X) = 0.59 +0.15 −0.13 ± 0.12. These results are obtained from a 46.2 fb⁻¹ data sample collected near the Υ(4S) resonance with the Belle detector at the KEKB collider.
DOI: 10.1145/1024393.1024415
2004
Cited 207 times
Dynamic tracking of page miss ratio curve for memory management
Memory can be efficiently utilized if the dynamic memory demands of applications can be determined and analyzed at run-time. The page miss ratio curve (MRC), i.e., the page miss rate vs. memory size curve, is a good performance-directed metric to serve this purpose. However, dynamically tracking the MRC at run time is challenging in systems with virtual memory because not every memory reference passes through the operating system (OS). This paper proposes two methods to dynamically track the MRC of applications at run time. The first method uses a hardware MRC monitor that can track the MRC at fine time granularity. Our simulation results show that this monitor has negligible performance and energy overheads. The second method is an OS-only implementation that can track the MRC at coarse time granularity. Our implementation results on Linux show that it adds only 7-10% overhead. We have also used the dynamic MRC to guide both memory allocation for multiprogramming systems and memory energy management. Our real system experiments on Linux with applications including Apache Web Server show that the MRC-directed memory allocation can speed up the applications' execution/response time by up to a factor of 5.86 and reduce the number of page faults by up to 63.1%. Our execution-driven simulation results with SPEC2000 benchmarks show that the MRC-directed memory energy management can improve the Energy * Delay metric by 27-58% over previously proposed static and dynamic schemes.
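The miss ratio curve the abstract refers to can be built offline from a page reference trace using LRU stack (reuse) distances. The sketch below is a minimal C++ illustration of that idea only; it is not the paper's hardware monitor or OS implementation, and the trace and names are made up for the example.

```cpp
#include <cstdint>
#include <iostream>
#include <list>
#include <unordered_map>
#include <vector>

// Minimal LRU stack-distance sketch: for each page reference, the stack
// distance is the number of distinct pages touched since its last use.
// With an LRU-managed memory of m pages, a reference misses iff its
// stack distance is >= m, so a distance histogram yields the full MRC.
int main() {
    std::vector<uint64_t> trace = {1, 2, 3, 1, 4, 2, 5, 1, 3, 2};   // toy page trace

    std::list<uint64_t> lru;                                        // most recent at front
    std::unordered_map<uint64_t, std::list<uint64_t>::iterator> pos;
    std::vector<uint64_t> hist(trace.size() + 1, 0);                // hist[d] = refs at distance d
    uint64_t cold = 0;                                              // first-touch misses

    for (uint64_t page : trace) {
        auto it = pos.find(page);
        if (it == pos.end()) {
            ++cold;
        } else {
            uint64_t d = std::distance(lru.begin(), it->second);    // O(n) here; trees make it O(log n)
            ++hist[d];
            lru.erase(it->second);
        }
        lru.push_front(page);
        pos[page] = lru.begin();
    }

    // MRC: miss ratio as a function of memory size m (in pages).
    for (size_t m = 1; m <= lru.size(); ++m) {
        uint64_t misses = cold;
        for (size_t d = m; d < hist.size(); ++d) misses += hist[d];
        std::cout << "m=" << m << " pages -> miss ratio "
                  << double(misses) / trace.size() << "\n";
    }
}
```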
DOI: 10.1145/1122971.1123003
2006
Cited 204 times
Hybrid transactional memory
High performance parallel programs are currently difficult to write and debug. One major source of difficulty is protecting concurrent accesses to shared data with an appropriate synchronization mechanism. Locks are the most common mechanism but they have a number of disadvantages, including possibly unnecessary serialization, and possible deadlock. Transactional memory is an alternative mechanism that makes parallel programming easier. With transactional memory, a transaction provides atomic and serializable operations on an arbitrary set of memory locations. When a transaction commits, all operations within the transaction become visible to other threads. When it aborts, all operations in the transaction are rolled back. Transactional memory can be implemented in either hardware or software. A straightforward hardware approach can have high performance, but imposes strict limits on the amount of data updated in each transaction. A software approach removes these limits, but incurs high overhead. We propose a novel hybrid hardware-software transactional memory scheme that approaches the performance of a hardware scheme when resources are not exhausted and gracefully falls back to a software scheme otherwise.
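As a rough illustration of the hybrid idea (try the hardware path first, fall back to software when it fails), here is a hedged sketch that uses Intel TSX RTM intrinsics for the hardware attempt and a single global flag as a stand-in "software" fallback. This is not the paper's actual HyTM design; it assumes RTM-capable hardware and a compiler invoked with -mrtm, and the retry count is arbitrary.

```cpp
#include <immintrin.h>   // _xbegin/_xend/_xabort (requires -mrtm and TSX-capable hardware)
#include <atomic>
#include <thread>

// Global fallback lock standing in for a full software TM path.
std::atomic<bool> fallback_locked{false};

// Run `body` atomically: try a few hardware transactions, then fall back to the lock.
template <typename Fn>
void atomic_region(Fn body, int max_hw_retries = 3) {
    for (int i = 0; i < max_hw_retries; ++i) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            // Read (and thus subscribe to) the fallback lock inside the transaction:
            // if a software-path thread holds it now or takes it later, we abort.
            if (fallback_locked.load(std::memory_order_relaxed)) _xabort(0xff);
            body();
            _xend();            // commit: all transactional writes become visible at once
            return;
        }
        // `status` encodes the abort cause (conflict, capacity, explicit abort); just retry here.
    }
    // Software fallback: unbounded in footprint, but fully serialized in this toy sketch.
    while (fallback_locked.exchange(true, std::memory_order_acquire))
        std::this_thread::yield();
    body();
    fallback_locked.store(false, std::memory_order_release);
}
```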
DOI: 10.1109/iiswc.2008.4636090
2008
Cited 194 times
PARSEC vs. SPLASH-2: A quantitative comparison of two multithreaded benchmark suites on Chip-Multiprocessors
The PARSEC benchmark suite was recently released and has been adopted by a significant number of users within a short amount of time. This new collection of workloads is not yet fully understood by researchers. In this study we compare the SPLASH-2 and PARSEC benchmark suites with each other to gain insights into differences and similarities between the two program collections. We use standard statistical methods and machine learning to analyze the suites for redundancy and overlap on Chip-Multiprocessors (CMPs). Our analysis shows that PARSEC workloads are fundamentally different from SPLASH-2 benchmarks. The observed differences can be explained with two technology trends, the proliferation of CMPs and the accelerating growth of world data.
DOI: 10.14778/1454159.1454171
2008
Cited 194 times
Efficient implementation of sorting on multi-core SIMD CPU architecture
Sorting a list of input numbers is one of the most fundamental problems in the field of computer science in general and high-throughput database applications in particular. Although literature abounds with various flavors of sorting algorithms, different architectures call for customized implementations to achieve faster sorting times. This paper presents an efficient implementation and detailed analysis of MergeSort on current CPU architectures. Our SIMD implementation with 128-bit SSE is 3.3X faster than the scalar version. In addition, our algorithm performs an efficient multiway merge, and is not constrained by the memory bandwidth. Our multi-threaded, SIMD implementation sorts 64 million floating point numbers in less than 0.5 seconds on a commodity 4-core Intel processor. This measured performance compares favorably with all previously published results. Additionally, the paper demonstrates performance scalability of the proposed sorting algorithm with respect to certain salient architectural features of modern chip multiprocessor (CMP) architectures, including SIMD width and core-count. Based on our analytical models of various architectural configurations, we see excellent scalability of our implementation with SIMD width scaling up to 16X wider than current SSE width of 128-bits, and CMP core-count scaling well beyond 32 cores. Cycle-accurate simulation of Intel's upcoming x86 many-core Larrabee architecture confirms scalability of our proposed algorithm.
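To make the SIMD merging idea concrete, here is a hedged sketch of a 4+4 in-register bitonic merge step with SSE intrinsics, the kind of kernel a SIMD MergeSort builds its wider merges from. It assumes SSE4.1 (for _mm_blend_ps) and is an illustration only, not the paper's code.

```cpp
#include <smmintrin.h>   // SSE4.1: _mm_blend_ps

// Merge two ascending 4-float vectors a and b into an ascending 8-float
// sequence (lo = 4 smallest, hi = 4 largest) using a 3-stage bitonic network.
static inline void bitonic_merge_4x4(__m128 a, __m128 b, __m128& lo, __m128& hi) {
    // Reverse b so that (a, reversed b) forms one bitonic sequence of length 8.
    b = _mm_shuffle_ps(b, b, _MM_SHUFFLE(0, 1, 2, 3));

    // Stage 1: compare elements 4 apart; lo1 holds the 4 smallest, hi1 the 4 largest.
    __m128 lo1 = _mm_min_ps(a, b);
    __m128 hi1 = _mm_max_ps(a, b);

    // Stage 2: within each half, compare elements 2 apart.
    __m128 lo1s = _mm_shuffle_ps(lo1, lo1, _MM_SHUFFLE(1, 0, 3, 2));               // [l2,l3,l0,l1]
    __m128 hi1s = _mm_shuffle_ps(hi1, hi1, _MM_SHUFFLE(1, 0, 3, 2));
    __m128 lo2  = _mm_blend_ps(_mm_min_ps(lo1, lo1s), _mm_max_ps(lo1, lo1s), 0xC); // mins in lanes 0-1, maxes in 2-3
    __m128 hi2  = _mm_blend_ps(_mm_min_ps(hi1, hi1s), _mm_max_ps(hi1, hi1s), 0xC);

    // Stage 3: compare adjacent elements.
    __m128 lo2s = _mm_shuffle_ps(lo2, lo2, _MM_SHUFFLE(2, 3, 0, 1));               // [x1,x0,x3,x2]
    __m128 hi2s = _mm_shuffle_ps(hi2, hi2, _MM_SHUFFLE(2, 3, 0, 1));
    lo = _mm_blend_ps(_mm_min_ps(lo2, lo2s), _mm_max_ps(lo2, lo2s), 0xA);          // min,max,min,max
    hi = _mm_blend_ps(_mm_min_ps(hi2, hi2s), _mm_max_ps(hi2, hi2s), 0xA);
}
```

A wider merge of two sorted runs repeatedly loads the next 4-wide block from whichever run has the smaller head and feeds it through this kernel, which is why the approach scales naturally with SIMD width.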
DOI: 10.1145/1250662.1250683
2007
Cited 168 times
Carbon
Chip multiprocessors (CMPs) are now commonplace, and the number of cores on a CMP is likely to grow steadily. However, in order to harness the additional compute resources of a CMP, applications must expose their thread-level parallelism to the hardware. One common approach to doing this is to decompose a program into parallel "tasks" and allow an underlying software layer to schedule these tasks to different threads. Software task scheduling can provide good parallel performance as long as tasks are large compared to the software overheads.
DOI: 10.1007/s12043-017-1373-4
2017
Cited 107 times
Invited review: Physics potential of the ICAL detector at the India-based Neutrino Observatory (INO)
The upcoming 50 kt magnetized iron calorimeter (ICAL) detector at the India-based Neutrino Observatory (INO) is designed to study the atmospheric neutrinos and antineutrinos separately over a wide range of energies and path lengths. The primary focus of this experiment is to explore the Earth matter effects by observing the energy and zenith angle dependence of the atmospheric neutrinos in the multi-GeV range. This study will be crucial to address some of the outstanding issues in neutrino oscillation physics, including the fundamental issue of neutrino mass hierarchy. In this document, we present the physics potential of the detector as obtained from realistic detector simulations. We describe the simulation framework, the neutrino interactions in the detector, and the expected response of the detector to particles traversing it. The ICAL detector can determine the energy and direction of the muons to a high precision, and in addition, its sensitivity to multi-GeV hadrons increases its physics reach substantially. Its charge identification capability, and hence its ability to distinguish neutrinos from antineutrinos, makes it an efficient detector for determining the neutrino mass hierarchy. In this report, we outline the analyses carried out for the determination of neutrino mass hierarchy and precision measurements of atmospheric neutrino mixing parameters at ICAL, and give the expected physics reach of the detector with 10 years of runtime. We also explore the potential of ICAL for probing new physics scenarios like CPT violation and the presence of magnetic monopoles.
DOI: 10.18280/ijsdp.180131
2023
Cited 13 times
Information Technology, Food Service Quality and Restaurant Revisit Intention
In this article, we determine whether there is a link between information technology (IT) use in ensuring food service quality and revisit intention. We examined how the use of IT applications in food service affects revisit intention to a hotel's food outlet. To conduct the study, we used the 29-item DINESERV: A Tool for Measuring Service Quality in Restaurants. The DINESERV questionnaire helps restaurateurs gauge customer satisfaction, identify problems, and find solutions. The 29-item questionnaire includes five service-quality categories: assurance, empathy, reliability, responsiveness, and tangibles. It is meant to help operators gauge what consumers expect from a restaurant. We collected 280 responses from guests visiting Bangladesh's five-star hotels' food service outlets and tested the proposed correlations using PLS-SEM. This study showed that IT application use in determining food service quality does not correlate directly with revisit intention, but that it influences guest confidence, which greatly influences revisit intention.
DOI: 10.1109/jproc.2008.917729
2008
Cited 111 times
Convergence of Recognition, Mining, and Synthesis Workloads and Its Implications
This paper examines the growing need for a general-purpose "analytics engine" that can enable next-generation processing platforms to effectively model events, objects, and concepts based on end-user input, and accessible datasets, along with an ability to iteratively refine the model in real-time. We find such processing needs at the heart of many emerging applications and services. This processing is further decomposed in terms of an integration of three fundamental compute capabilities: recognition, mining, and synthesis (RMS). The set of RMS workloads is examined next in terms of usage, mathematical models, numerical algorithms, and underlying data structures. Our analysis suggests a workload convergence that is analyzed next for its platform implications. In summary, a diverse set of emerging RMS applications from market segments like graphics, gaming, media-mining, unstructured information management, financial analytics, and interactive virtual communities presents a relatively focused, highly overlapping set of common platform challenges. A general-purpose processing platform designed to address these challenges has the potential for significantly enhancing users' experience and programmer productivity.
DOI: 10.1103/physrevd.76.013002
2007
Cited 95 times
Phenomenology of two-texture zero neutrino mass matrices
The generic predictions of two-texture zero neutrino mass matrices in the flavor basis have been examined especially in relation to the degeneracies between mass matrices within a class and interesting constraints on the neutrino parameters have been obtained. It is shown that the knowledge of the octant of θ₂₃, the sign of cos δ, and the neutrino mass hierarchy can be used to lift these degeneracies.
DOI: 10.1007/s11523-023-00965-7
2023
Cited 8 times
A Phase Ib Study Assessing the Safety, Tolerability, and Efficacy of the First-in-Class Wee1 Inhibitor Adavosertib (AZD1775) as Monotherapy in Patients with Advanced Solid Tumors
Adavosertib (AZD1775) is a first-in-class, selective, small-molecule inhibitor of Wee1. The safety, tolerability, pharmacokinetics, and efficacy of adavosertib monotherapy were evaluated in patients with various solid-tumor types and molecular profiles. Eligible patients had the following: confirmed diagnosis of ovarian cancer (OC), triple-negative breast cancer (TNBC), or small-cell lung cancer (SCLC); previous treatment for metastatic/recurrent disease; and measurable disease. Patients were grouped into six matched cohorts based on tumor type and presence/absence of biomarkers and received oral adavosertib 175 mg twice a day on days 1-3 and 8-10 of a 21-day treatment cycle. Eighty patients received treatment in the expansion phase; median total treatment duration was 2.4 months. The most common treatment-related adverse events (AEs) were diarrhea (56.3%), nausea (42.5%), fatigue (36.3%), vomiting (18.8%), and decreased appetite (12.5%). Treatment-related grade ≥ 3 AEs and serious AEs were reported in 32.5% and 10.0% of patients, respectively. AEs led to dose interruptions in 22.5%, reductions in 11.3%, and discontinuations in 16.3% of patients. One patient died following serious AEs of deep vein thrombosis (treatment related) and respiratory failure (not treatment related). Objective response rate, disease control rate, and progression-free survival were as follows: 6.3%, 68.8%, 4.5 months (OC BRCA wild type); 3.3%, 76.7%, 3.9 months (OC BRCA mutation); 0%, 69.2%, 3.1 months (TNBC biomarker [CCNE1/MYC/MYCL1/MYCN] non-amplified [NA]); 0%, 50%, 2 months (TNBC biomarker amplified); 8.3%, 33.3%, 1.3 months (SCLC biomarker NA); and 0%, 33.3%, 1.2 months (SCLC biomarker amplified). Adavosertib monotherapy was tolerated and demonstrated some antitumor activity in patients with advanced solid tumors. ClinicalTrials.gov identifier NCT02482311; registered June 2015.
DOI: 10.5152/dir.2022.21233
2023
Cited 7 times
Catheters in vascular interventional radiology: an illustrated review
The catheter is an invaluable tool for interventional radiologists. In 1929, Dr. Werner Forssmann demonstrated the catheterization of the pulmonary artery with a simple rubber catheter by performing an angiogram through the ante-cubital vein [1]. Today, a variety of catheters are available in the armamentarium of the interventional radiologist to suit different needs. However, the literature lacks a comprehensive compilation of the properties, types, and uses of catheters. In this review, we aim to describe the characteristics, properties, and uses of the common angiographic catheters used in vascular interventions.

Properties of catheters: A catheter is a flexible hollow tube that can be inserted into a duct, body cavity, or vessel. It consists of a hub at the rear end and a distal tubular shaft. The shaft can be straight or molded into different curved shapes (primary, secondary, or tertiary curves) and can have a tapered or non-tapered tip (Figure 1). Catheterization is the process of inserting a catheter. Angiographic catheters are the most important tool in any vascular intervention. They are introduced through a sheath placed at the vascular access site. Wires introduced via these catheters are navigated to enter the target vessels. Once the catheters are inside the vessel, they can be used to conduct diagnostic angiography of the intended vascular territory and as a conduit for the delivery of balloons and stents for endovascular intervention at the intended location. An ideal catheter should have strength, good torque control, radiopacity, flexibility, an atraumatic tip, and low surface friction for good trackability over a guidewire [2].

Construction: (i) Surface coating: surface coatings can modify the catheter's friction coefficient, thrombogenicity, or antimicrobial properties. (ii) Outer layer: angiographic catheters can be made of polyethylene, polyurethane, nylon, polytetrafluoroethylene, silicone, polyvinyl chloride, or a combination of these materials. Their respective properties, advantages, and disadvantages are discussed in Table 1 [3,4]. The coefficient of friction on the luminal side is important for easy passage of the wire and for achieving high flow rates of contrast during angiography. Conversely, a low coefficient of friction on the catheter's outer surface helps its trackability,
DOI: 10.1109/aisc56616.2023.10084978
2023
Cited 7 times
Post Pandemic Cyber Attacks Impacts and Countermeasures: A Systematic Review
COVID-19 is one of the deadliest pandemics of this century and affected the whole world. As COVID-19 spread, governments had to impose lockdowns that pushed people to adopt new habits such as social distancing, working from home, and hand washing, and countries had to shut down industries, businesses, and public transport. While doctors were occupied saving lives, cyber criminals were busy taking advantage of the situation, creating another, silent pandemic: a cyber-security pandemic. During this period, with overloaded ICT infrastructure, cyberspace attracted more attackers and the number of attacks and threats increased exponentially. This is one of the most rapidly growing global challenges for industry as well as for everyday life. In this paper, a systematic survey and review of recent trends in cyber-security attacks during and after the COVID-19 pandemic, and of their countermeasures, is presented. The relevant information has been collected from trusted sources, the impact landscape is discussed along with the importance of cyber-security education, and future research challenges are highlighted.
DOI: 10.1016/j.stress.2023.100334
2024
Microbes mediated induced systemic response in plants: A review
Biotic stress affects economically important crop species and leads to quality and yield losses. Plants can respond to pathogen attack by synthesizing compounds that inhibit or reduce disease incidence. Plants live in close association with microbial communities. Microbes and their metabolites impact the health of plants by supplying mineral nutrients, modulating hormones, and protecting against pathogenic organisms. Induced systemic response is one of the major mechanisms employed by microbes in biocontrol. Beneficial microbes release certain compounds as elicitors in the rhizospheric region which are perceived by plant roots as signals that increase the defense and resistance of plants against phytopathogens. Phytohormones such as ethylene, jasmonic acid, and salicylic acid are involved in regulating these induced defense responses. The present review highlights the negative impact of biotic stress on plants and how induced systemic response arises in plants, further discussing the role of microbial elicitors in induced systemic response, their molecular mechanisms and hormonal regulation, and drawing the attention of the scientific community to exploring new microbial elicitors as disease-control alternatives.
DOI: 10.48550/arxiv.2403.09685
2024
A space-time gauge theory for modeling ductile damage and its NOSB peridynamic implementation
Local translational and scaling symmetries in space-time are exploited for modelling ductile damage in metals and alloys over wide ranges of strain rate and temperature. The invariant energy density corresponding to the ductile deformation is constructed through the gauge invariant curvature tensor by imposing a Weyl-like condition. The energetics of the plastic deformation is brought in through the gauge compensating field that emerges due to local translation. Invariance of the energy density under the local action of translation and scaling is preserved through minimally replaced space-time gauge covariant operators. Minimal replacement introduces two non-trivial gauge compensating fields pertaining to local translation and scaling. These are used to describe ductile damage, including plastic flow and micro-crack evolution in the material. A space-time pseudo-Riemannian metric is used to lay out the kinematics in a finite-deformation setting. Recognizing the available insights in classical theories of viscoplasticity, we also establish a correspondence of the gauge compensating field due to spatial translation with Kröner's multiplicative decomposition of the deformation gradient. Thermodynamically consistent coupling between viscoplasticity and ductile damage is ensured through an appropriate degradation function. Non-ordinary state-based (NOSB) peridynamics (PD) discretization of the model is used for numerical implementation. The model's viability is tested in reproducing a few experimentally known facts, viz., strain rate locking in the stress-strain response, whose origin is traced to a nonlinear microscopic inertia term arising out of the space-time translation symmetry. Finally, we solve 2D and axisymmetric deformation problems to qualitatively validate the model's viability. The NOSB peridynamics axisymmetric formulation in a finite deformation setup is also presented.
DOI: 10.1103/physrevlett.91.221801
2003
Cited 89 times
Observation of B∓ → ρ∓ρ⁰ Decays
We report the first observation of the charmless vector-vector decay process B∓ → ρ∓ρ⁰. The measurement uses a 78 fb⁻¹ data sample collected with the Belle detector at the KEKB asymmetric e⁺e⁻ collider operating at the Υ(4S) resonance. We obtain a branching fraction of B(B∓ → ρ∓ρ⁰) = [31.7 ± 7.1(stat) +3.8 −6.7(syst)] × 10⁻⁶. An analysis of the ρ helicity-angle distributions gives a longitudinal polarization fraction of ΓL/Γ = 0.95 ± 0.11(stat) ± 0.02(syst). We also measure the direct-CP-violating asymmetry ACP(B∓ → ρ∓ρ⁰) = 0.00 ± 0.22(stat) ± 0.03(syst).
DOI: 10.1145/1024393.1024425
2004
Cited 86 times
Performance directed energy management for main memory and disks
Much research has been conducted on energy management for memory and disks. Most studies use control algorithms that dynamically transition devices to low power modes after they are idle for a certain threshold period of time. The control algorithms used in the past have two major limitations. First, they require painstaking, application-dependent manual tuning of their thresholds to achieve energy savings without significantly degrading performance. Second, they do not provide performance guarantees. In one case, they slowed down an application by 835%. This paper addresses these two limitations for both memory and disks, making memory/disk energy-saving schemes practical enough to use in real systems. Specifically, we make three contributions: (1) We propose a technique that provides a performance guarantee for control algorithms. We show that our method works well for all tested cases, even with previously proposed algorithms that are not performance-aware. (2) We propose a new control algorithm, Performance-directed Dynamic (PD), that dynamically adjusts its thresholds periodically, based on available slack and recent workload characteristics. For memory, PD consumes the least energy, when compared to previous hand-tuned algorithms combined with a performance guarantee. However, for disks, PD is too complex and its self-tuning is unable to beat previous hand-tuned algorithms. (3) To improve on PD, we propose a simple, optimization-based, threshold-free control algorithm, Performance-directed Static (PS). PS periodically assigns a static configuration by solving an optimization problem that incorporates information about the available slack and recent traffic variability to different chips/disks. We find that PS is the best or close to the best across all performance-guaranteed disk algorithms, including hand-tuned versions.
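The threshold-style control that the abstract critiques can be pictured in a few lines of code. The sketch below is a generic idle-threshold power-mode controller with a crude slack check in the spirit of a performance guarantee; it is not the paper's PD or PS algorithm, and all state names, latencies, and thresholds are illustrative assumptions.

```cpp
#include <cstdint>

// Toy power-mode controller for one memory device or disk: after `threshold_us`
// of idleness it drops to a lower power state; a slack budget caps how much
// wake-up latency we are willing to add, mimicking a performance guarantee.
struct DevicePowerController {
    enum class State { Active, Standby, PowerDown };

    uint64_t threshold_us;       // idle time before transitioning down (hand-tuned in older schemes)
    int64_t  slack_budget_us;    // remaining delay we may still impose on the application

    State state = State::Active;
    uint64_t idle_us = 0;

    void on_idle_tick(uint64_t dt_us) {
        idle_us += dt_us;
        if (state == State::Active && idle_us >= threshold_us)
            state = State::Standby;                       // cheap low-power mode
        if (state == State::Standby && idle_us >= 4 * threshold_us
            && slack_budget_us > 5000)                    // only go deeper while slack remains
            state = State::PowerDown;
    }

    // An access wakes the device; the wake-up latency is charged against the slack.
    uint64_t on_access() {
        uint64_t wakeup_us = (state == State::PowerDown) ? 5000
                           : (state == State::Standby)  ? 500 : 0;
        slack_budget_us -= wakeup_us;
        if (slack_budget_us < 0) threshold_us *= 2;       // too slow: back off, save less energy
        state = State::Active;
        idle_us = 0;
        return wakeup_us;
    }
};
```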
DOI: 10.1103/physrevc.81.014601
2010
Cited 65 times
Effect of the symmetry energy on nuclear stopping and its relation to the production of light charged fragments
We present a complete systematics (excitation function, impact parameter, system size, isospin asymmetry, and equation of state dependences) of global stopping and fragment production for heavy-ion reactions in the energy range between 50 and 1000 MeV/nucleon in the presence of symmetry energy and an isospin-dependent cross section. It is observed that the degree of stopping depends weakly on the symmetry energy and strongly on the isospin-dependent cross section. However, the symmetry energy and isospin-dependent cross section have an effect of the order of more than 10% on the emission of light charged particles (LCPs). This means that nuclear stopping and LCPs can be used as a tool to extract information about the isospin-dependent cross section. Interestingly, LCP emission in the presence of symmetry energy is found to be highly correlated with the global stopping.
DOI: 10.1103/physrevc.81.014611
2010
Cited 63 times
Elliptical flow and isospin effects in heavy-ion collisions at intermediate energies
The elliptical flow of fragments is studied for different systems at incident energies between 50 and 1000 MeV/nucleon using the isospin-dependent quantum molecular dynamics (IQMD) model. Our findings reveal that elliptical flow shows a transition from positive (in-plane) to negative (out-of-plane) values in the midrapidity region at a certain incident energy known as the transition energy. This transition energy is found to depend on the model ingredients, size of the fragments, and composite mass of the reacting system as well as on the impact parameter of the reaction. A reasonable agreement is observed for the excitation function of elliptical flow between the data and our calculations. Interestingly, the transition energy is found to exhibit a power-law mass dependence.
DOI: 10.1103/physrevd.84.077301
2011
Cited 54 times
Implications of a class of neutrino mass matrices with texture zeros for nonzero θ₁₃
A class of neutrino mass matrices with texture zeros realizable using the group $Z_3$ within the framework of type (I+II) seesaw mechanism naturally admits a non-zero $\theta_{13}$ and allows for deviations from maximal mixing. The phenomenology of this model is reexamined in the light of recent hints for non-zero $\theta_{13}$.
DOI: 10.1088/1748-0221/8/11/p11003
2013
Cited 48 times
Hadron energy response of the Iron Calorimeter detector at the India-based Neutrino Observatory
The results of a Monte Carlo simulation study of the hadron energy response of the magnetized Iron CALorimeter detector, ICAL, proposed to be located at the India-based Neutrino Observatory (INO), are presented. Using a GEANT4 model of the ICAL detector, interactions of atmospheric neutrinos with target nuclei are simulated. The detector response to hadrons propagating through it is investigated using the hadron hit multiplicity in the active detector elements. The detector response to charged pions of fixed energy is studied first, followed by the average response to the hadrons produced in atmospheric neutrino interactions using events simulated with the NUANCE event generator. The shape of the hit distribution is observed to fit the Vavilov distribution, which reduces to a Gaussian at high energies. In terms of the parameters of this distribution, we present the hadron energy resolution as a function of hadron energy, and the calibration of hadron energy as a function of the hit multiplicity. The energy resolution for hadrons is found to range from 85% (at 1 GeV) to 36% (at 15 GeV).
DOI: 10.1007/978-981-16-7952-0_1
2022
Cited 15 times
A Review of Physical Unclonable Functions (PUFs) and Its Applications in IoT Environment
This paper describes various studies of physical unclonable functions (PUFs), motivates the use of physical unclonable functions over conventional security mechanisms, and compares them in many aspects. It categorizes physical unclonable functions as strong physical unclonable functions and weak physical unclonable functions. For communication in a network, an authentication scheme for nodes, server, router, and network gateway is presented, and the communication procedure is explained and presented through an architecture. The paper also explains the problems faced by smart devices due to attacks on security. Finally, it reviews various emerging concepts of physical unclonable functions and their advancement.
DOI: 10.1016/j.biosx.2022.100218
2022
Cited 14 times
Development and recent advancement in microfluidics for point of care biosensor applications: A review
Capillaries are small microscopic channels found widely in nature. They carry blood in animals and food, water, and nutrients in plants. Mimicking these capillaries, scientists developed microchannels and named the related field of study microfluidics. These microscopic channels/capillaries have wide applications in the area of biomedical instrumentation. Microfluidics works on the combined principles of fluid dynamics, biology, chemistry, microelectronics, physics, and material science. Artificial microfluidic channels or capillaries can be fabricated using various techniques such as xurography, laser cutting, photolithography, injection moulding, and Fast Lithographic Activation of Sheets (FLASH). Capillaries have tremendous applications, especially in biosensors: from sample collection to the detection of various biomolecules such as nucleic acids, proteins, carbohydrates, lipids, and other metabolites. This work presents a comprehensive review of the journey of capillary development, the milestones achieved, and recent advancements in the area of microfluidics-based biosensors. This review is intended to meet the requirements of researchers engaged in designing, simulating, and fabricating capillaries for the desired applications. Special insights are given into the opportunities and challenges in capillary development.
DOI: 10.1007/s12223-023-01092-6
2023
Cited 6 times
Plant endophytes: unveiling hidden applications toward agro-environment sustainability
DOI: 10.1109/65.953234
2001
Cited 81 times
Expanding confidence in network simulations
Networking engineers increasingly depend on simulation to design and deploy complex, heterogeneous networks. Similarly, networking researchers increasingly depend on simulation to investigate the behavior and performance of new protocol designs. Despite such widespread use of simulation, today there exists little common understanding of the degree of validation required for various applications of simulation. Further, only limited knowledge exists regarding the effectiveness of known validation techniques. To investigate these issues, in May 1999 DARPA and NIST organized a workshop on Network Simulation Validation. This article reports on discussions and consensus about issues that arose at the workshop. We describe best current practices for validating simulations and for validating TCP models across various simulation environments. We also discuss interactions between scale and model validation and future challenges for the community.
DOI: 10.1103/physrevlett.87.101801
2001
Cited 74 times
Measurement of Branching Fractions for B → ππ, Kπ, and KK̄ Decays
We report measurements of the branching fractions for $B^0\to\pi^+\pi^-$, $K^+\pi^-$, $K^+K^-$ and $K^0\pi^0$, and $B^+\to\pi^+\pi^0$, $K^+\pi^0$, $K^0\pi^+$ and $K^+\bar{K}{}^0$. The results are based on 10.4 fb$^{-1}$ of data collected on the $\Upsilon$(4S) resonance at the KEKB $e^+e^-$ storage ring with the Belle detector, equipped with a high momentum particle identification system for clear separation of charged $\pi$ and $K$ mesons. We find ${\cal B}(B^0\to\pi^+\pi^-) =(0.56^{+0.23}_{-0.20}\pm 0.04)\times 10^{-5}$, ${\cal B}(B^0\to K^+\pi^-) =(1.93^{+0.34 +0.15}_{-0.32 -0.06})\times 10^{-5}$, ${\cal B}(B^+\to K^+\pi^0) =(1.63^{+0.35 +0.16}_{-0.33 -0.18})\times 10^{-5}$, ${\cal B}(B^+\to K^0\pi^+) =(1.37^{+0.57 +0.19}_{-0.48 -0.18})\times 10^{-5}$, and ${\cal B}(B^0\to K^0\pi^0) =(1.60^{+0.72 +0.25}_{-0.59 -0.27})\times 10^{-5}$, where the first and second errors are statistical and systematic. We also set upper limits of ${\cal B}(B^+\to\pi^+\pi^0)<1.34\times 10^{-5}$, ${\cal B}(B^0\to K^+K^-)<0.27\times 10^{-5}$, and ${\cal B}(B^+\to K^+\bar{K}{}^0)<0.50\times 10^{-5}$ at the 90% confidence level.
DOI: 10.1145/279358.279404
1998
Cited 73 times
Exploiting spatial locality in data caches using spatial footprints
Modern cache designs exploit spatial locality by fetching large blocks of data called cache lines on a cache miss. Subsequent references to words within the same cache line result in cache hits. Although this approach benefits from spatial locality, less than half of the data brought into the cache gets used before eviction. The unused portion of the cache line negatively impacts performance by wasting bandwidth and polluting the cache by replacing potentially useful data that would otherwise remain in the cache. This paper describes an alternative approach to exploit spatial locality available in data caches. On a cache miss, our mechanism, called Spatial Footprint Predictor (SFP), predicts which portions of a cache block will get used before getting evicted. The high accuracy of the predictor allows us to exploit spatial locality exhibited in larger blocks of data yielding better miss ratios without significantly impacting the memory access latencies. Our evaluation of this mechanism shows that the miss rate of the cache is improved, on average, by 18% in addition to a significant reduction in the bandwidth requirement.
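A minimal way to picture the footprint idea: track which words of each resident line are actually touched, and remember that bit-vector (keyed, say, by the PC of the missing access) so the next miss from the same PC can fetch only the predicted-useful portion. The sketch below illustrates that bookkeeping only; it is not the paper's predictor organization, and the key choice and table sizes are assumptions.

```cpp
#include <bitset>
#include <cstdint>
#include <unordered_map>

// Words per cache line (e.g., 64-byte line, 8-byte words).
constexpr int kWords = 8;
using Footprint = std::bitset<kWords>;

struct LineState {
    uint64_t  pc;      // PC of the access that caused the miss for this line
    Footprint used;    // which words were touched while the line was resident
};

// Predictor table: last observed footprint for each miss-causing PC.
std::unordered_map<uint64_t, Footprint> predictor;
// Currently resident lines, keyed by line address.
std::unordered_map<uint64_t, LineState> resident;

// On a miss, predict which words to fetch (default: whole line if the PC is unseen).
Footprint predict(uint64_t miss_pc) {
    auto it = predictor.find(miss_pc);
    return it != predictor.end() ? it->second : Footprint().set();
}

// Record every access to a resident line.
void on_access(uint64_t line_addr, int word, uint64_t miss_pc, bool is_miss) {
    if (is_miss) resident[line_addr] = {miss_pc, Footprint()};
    resident[line_addr].used.set(word);
}

// On eviction, train the predictor with the footprint actually observed.
void on_evict(uint64_t line_addr) {
    auto it = resident.find(line_addr);
    if (it == resident.end()) return;
    predictor[it->second.pc] = it->second.used;
    resident.erase(it);
}
```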
DOI: 10.1103/physrevlett.88.052001
2002
Cited 68 times
Production of Prompt Charmonia in e⁺e⁻ Annihilation at √s ≈ 10.6 GeV
The production of prompt J/ψ, ψ(2S), χc1, and χc2 is studied using a 32.4 fb⁻¹ data sample collected with the Belle detector at Υ(4S) and at 60 MeV below the resonance. The yield of prompt J/ψ mesons in the Υ(4S) sample is compatible with that of continuum production; we set an upper limit B(Υ(4S) → J/ψ X) < 1.9 × 10⁻⁴ at the 95% confidence level, and find σ(e⁺e⁻ → J/ψ X) = 1.47 ± 0.10 ± 0.13 pb. The cross sections for prompt ψ(2S) and direct J/ψ are measured. The J/ψ momentum spectrum, production angle distribution, and polarization are studied.
DOI: 10.1016/j.nuclphysb.2007.06.030
2007
Cited 67 times
Phenomenological implications of a class of neutrino mass matrices
The generic predictions of two-texture zero neutrino mass matrices of class A in the flavor basis have been reexamined especially in relation to the degeneracy between mass matrices of types A1 and A2 and interesting constraints on the neutrino parameters have been obtained. It is shown that the octant of θ23 and the quadrant of the Dirac-type CP-violating phase δ can be used to lift this degeneracy.
DOI: 10.1016/j.physletb.2005.05.008
2005
Cited 64 times
Study of the baryon–antibaryon low-mass enhancements in charmless three-body baryonic B decays
The angular distributions of the baryon-antibaryon low-mass enhancements seen in the charmless three-body baryonic B decays B+ -> p pbar K+, B0 -> p pbar Ks, and B0 -> p Lambdabar pi- are reported. A quark fragmentation interpretation is supported, while the gluonic resonance picture is disfavored. Searches for the Theta+ and Theta++ pentaquarks in the relevant decay modes and possible glueball states G with 2.2 GeV/c2 < M-ppbar < 2.4 GeV/c2 in the ppbar systems give null results. We set upper limits on the products of branching fractions, B(B0 -> Theta+ p)\times B(Theta+ -> p Ks) < 2.3 \times 10^{-7}, B(B+ -> Theta++ pbar) \times B(Theta++ -> p K+) < 9.1 \times 10^{-8}, and B(B+ -> G K+) \times B(G -> p pbar) < 4.1 \times 10^{-7} at the 90% confidence level. The analysis is based on a 140 fb^{-1} data sample recorded on the Upsilon(4S) resonance with the Belle detector at the KEKB asymmetric-energy e+e- collider.
DOI: 10.1103/physrevc.78.064602
2008
Cited 53 times
Medium mass fragment production due to momentum dependent interactions
The roles of system size and momentum-dependent effects are analyzed in multifragmentation by simulating symmetric reactions of Ca+Ca, Ni+Ni, Nb+Nb, Xe+Xe, Er+Er, Au+Au, and U+U at incident energies between 50 MeV/nucleon and 1000 MeV/nucleon and over full impact parameter zones. Our detailed study reveals that there exists a system size dependence when the reaction is simulated with momentum-dependent interactions. This dependence exhibits a mass power-law behavior.
DOI: 10.1172/jci155860
2022
Cited 12 times
Indoxyl sulfate in uremia: an old idea with updated concepts
Patients with end-stage kidney disease (ESKD) have increased vascular disease. While protein-bound molecules that escape hemodialysis may contribute to uremic toxicity, specific contributing toxins remain ambiguous. In this issue of the JCI, Arinze et al. explore the role of tryptophan metabolites in chronic kidney disease-associated (CKD-associated) peripheral arterial disease. The authors used mouse and zebrafish models to show that circulating indoxyl sulfate (IS) blocked endothelial Wnt signaling, which impaired angiogenesis. Plasma levels of IS and other tryptophan metabolites correlated with adverse peripheral vascular disease events in humans. These findings suggest that lowering IS may benefit individuals with CKD and ESKD.
DOI: 10.1109/isca.1998.694794
2002
Cited 61 times
Exploiting spatial locality in data caches using spatial footprints
Modern cache designs exploit spatial locality by fetching large blocks of data called cache lines on a cache miss. Subsequent references to words within the same cache line result in cache hits. Although this approach benefits from spatial locality, less than half of the data brought into the cache gets used before eviction. The unused portion of the cache line negatively impacts performance by wasting bandwidth and polluting the cache by replacing potentially useful data that would otherwise remain in the cache. This paper describes an alternative approach to exploit spatial locality available in data caches. On a cache miss, our mechanism, called Spatial Footprint Predictor (SFP), predicts which portions of a cache block will get used before getting evicted. The high accuracy of the predictor allows us to exploit spatial locality exhibited in larger blocks of data yielding better miss ratios without significantly impacting the memory access latencies. Our evaluation of this mechanism shows that the miss rate of the cache is improved, on average, by 18% in addition to a significant reduction in the bandwidth requirement.
DOI: 10.1103/physrevd.76.013006
2007
Cited 52 times
Texture 4 zero Fritzsch-like lepton mass matrices
For Majorana or Dirac neutrinos, using Fritzsch-like texture 4 zero mass matrices with parallel texture structures for the charged leptons and the Dirac neutrino mass matrix (MνD), detailed predictions for cases pertaining to normal/inverted hierarchy as well as degenerate scenarios of neutrino masses have been carried out. The inverted hierarchy as well as degenerate scenarios seem to be ruled out at 3σ C.L. for both Majorana and Dirac neutrinos. For normal hierarchy, Jarlskog's rephasing invariant parameter J, the CP violating Dirac-like phase δ, and the effective neutrino mass ⟨mee⟩ have been calculated. For this case, lower limits of mν1 and θ13 would have implications for the nature of neutrinos.
DOI: 10.1103/physrevd.94.036004
2016
Cited 26 times
Zeros in the magic neutrino mass matrix
We study the phenomenological implications of the presence of two zeros in a magic neutrino mass matrix. We find that only two such patterns of the neutrino mass matrix are experimentally acceptable. We express all the neutrino observables as functions of one unknown phase φ and two known parameters Δm²₁₂ and r = Δm²₁₂/Δm²₂₃. In particular, we find sin²θ₁₃ = (2/3) r/(1 + r). We also present a mass model for the allowed textures based upon the group A₄ using the type I+II seesaw mechanism.
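As a quick numerical check of the quoted relation (the oscillation values below are current best-fit numbers brought in as an outside assumption, not quantities from the abstract):

```latex
\sin^2\theta_{13} \;=\; \frac{2}{3}\,\frac{r}{1+r},
\qquad
r \equiv \frac{\Delta m^2_{12}}{\Delta m^2_{23}}
\approx \frac{7.4\times 10^{-5}\ \mathrm{eV}^2}{2.5\times 10^{-3}\ \mathrm{eV}^2}
\approx 0.030
\;\;\Longrightarrow\;\;
\sin^2\theta_{13} \approx \frac{2}{3}\cdot\frac{0.030}{1.030} \approx 0.019 .
```

This lands close to the measured sin²θ₁₃ ≈ 0.02, which is why the texture is phenomenologically interesting.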
DOI: 10.1016/s0370-2693(01)01483-6
2002
Cited 48 times
Measurement of B(B̄⁰ → D⁺ℓ⁻ν̄) and determination of |Vcb|
We present a measurement of the branching fraction for the semileptonic B decay B̄⁰ → D⁺ℓ⁻ν̄, where ℓ⁻ can be either an electron or a muon. We find Γ(B̄⁰ → D⁺ℓ⁻ν̄) = (13.79 ± 0.76 ± 2.51) ns⁻¹, and the resulting branching fraction B(B̄⁰ → D⁺ℓ⁻ν̄) = (2.13 ± 0.12 ± 0.39)%, where the first error is statistical and the second systematic. We also investigate the B̄⁰ → D⁺ℓ⁻ν̄ form factor and the implications of the result for |Vcb|. From a fit to the differential decay distribution we obtain the rate normalization |Vcb|F_D(1) = (4.11 ± 0.44 ± 0.52) × 10⁻². Using a theoretical calculation of F_D(1), the Cabibbo-Kobayashi-Maskawa matrix element |Vcb| = (4.19 ± 0.45 ± 0.53 ± 0.30) × 10⁻² is obtained, where the last error comes from the theoretical uncertainty of F_D(1). The results are based on a data sample of 10.2 fb⁻¹ recorded at the Υ(4S) resonance with the Belle detector at the KEKB e⁺e⁻ collider.
DOI: 10.1007/s00024-008-0298-8
2008
Cited 37 times
A Fourth Order Accurate SH-Wave Staggered Grid Finite-difference Algorithm with Variable Grid Size and VGR-Stress Imaging Technique
DOI: 10.1103/physrevc.84.044620
2011
Cited 33 times
Probing the density dependence of the symmetry energy via multifragmentation at subsaturation densities
Symmetry energy for asymmetric nuclear matter at subsaturation densities was investigated in the framework of an isospin-dependent quantum molecular dynamics model. A single ratio of neutrons and protons is compared with the experimental data of Famiano et al. [Phys. Rev. Lett. 97, 052701 (2006)]. We have also performed a comparison for the double ratio with experimental results as well as with different theoretical results from the Boltzmann-Uehling-Uhlenbeck (1997), Isospin-dependent Boltzmann-Uehling-Uhlenbeck (2004), Boltzmann-Nordheim-Vlasov, and Improved Quantum Molecular Dynamics models. It is found that the double ratio predicts the softness of the symmetry energy, which is slightly underestimated in the single ratio. Furthermore, the study of the single ratio is extended to different kinds of fragments, while the double ratio is extended to different neutron-rich isotopes of Sn.
DOI: 10.1088/1361-6463/acc5f5
2023
Cited 3 times
Switching of electromagnetic induced transparency in terahertz metasurface
We demonstrate functional switching of electromagnetically induced transparency (EIT) in a terahertz (THz) metasurface. We first simulated and fabricated two metasurfaces that have a slight difference in their unit cell design. THz time domain spectroscopy of the fabricated metasurfaces shows that the two metasurfaces have almost similar transmission spectra, but one of them possesses EIT while the second does not. To implement functional switching of EIT, we show numerically that the characteristics of both metasurfaces can be achieved by a single hybrid metasurface containing a phase change material, Ge₂Sb₂Te₅ (GST). GST has a large contrast in THz material properties between its crystalline and amorphous phases, and its phase can be rapidly interchanged by external stimuli. We incorporated GST in the unit cell and show that a phase change of the GST portion at a specific location in the metasurface unit cell modulates the transmission spectra, working as an EIT switch. EIT in the metasurface is attributed to the coupling of two opposite-phase bright resonance modes supported by the unit cell. The group delay of the transmitted THz radiation indicates that the THz wave slows down significantly at the EIT frequency. The dynamic interplay between two different responses within a single hybrid metasurface can have applications in biosensors, THz buffers, modulators, and other functional THz communication devices.
DOI: 10.1103/physrevlett.89.231801
2002
Cited 43 times
Radiative B Meson Decays into Kπγ and Kππγ Final States
We report observations of radiative B meson decays into the K⁺π⁻γ and K⁺π⁻π⁺γ final states. In the B⁰ → K⁺π⁻γ channel, we present evidence for decays via an intermediate tensor meson state with a branching fraction of B(B⁰ → K*₂(1430)⁰γ) = [1.3 ± 0.5(stat) ± 0.1(syst)] × 10⁻⁵. We measure the branching fraction B(B⁺ → K⁺π⁻π⁺γ) = [2.4 ± 0.5(stat) +0.4 −0.2(syst)] × 10⁻⁵, in which the B⁺ → K*⁰π⁺γ and B⁺ → K⁺ρ⁰γ channels dominate. The analysis is based on a data set of 29.4 fb⁻¹ recorded by the Belle experiment at the KEKB collider.
DOI: 10.1016/s0370-2693(02)02374-2
2002
Cited 42 times
Study of B→ρπ decays at Belle
This Letter describes a study of B meson decays to the pseudoscalar-vector final state ρπ using 31.9 × 10⁶ BB̄ events collected with the Belle detector at KEKB. The branching fractions B(B⁺ → ρ⁰π⁺) = (8.0 +2.3 −2.0 +0.7 −0.7) × 10⁻⁶ and B(B⁰ → ρ±π∓) = (20.8 +6.0 −6.3 +2.8 −3.1) × 10⁻⁶ are obtained. In addition, a 90% confidence level upper limit of B(B⁰ → ρ⁰π⁰) < 5.3 × 10⁻⁶ is reported.
DOI: 10.1103/physrevlett.88.031802
2002
Cited 41 times
Observation of B⁺ → χc0K⁺
Using a sample of 31.3 × 10⁶ BB̄ pairs collected with the Belle detector at the Υ(4S) resonance, we make the first observation of the charged B meson decay to χc0 and a charged kaon. The measured branching fraction is B(B⁺ → χc0K⁺) = (6.0 +2.1 −1.8 ± 1.1) × 10⁻⁴, where the first error is statistical, and the second is systematic.
DOI: 10.1145/1250662.1250690
2007
Cited 36 times
Physical simulation for animation and visual effects
We explore the emerging application area of physics-based simulation for computer animation and visual special effects. In particular, we examine its parallelization potential and characterize its behavior on a chip multiprocessor (CMP). Applications in this domain model and simulate natural phenomena, and often direct visual components of motion pictures. We study a set of three workloads that exemplify the span and complexity of physical simulation applications used in a production environment: fluid dynamics, facial animation, and cloth simulation. They are computationally demanding, requiring from a few seconds to several minutes to simulate a single frame; therefore, they can benefit greatly from the acceleration possible with large scale CMPs.
DOI: 10.1142/s0217732307021767
2007
Cited 34 times
NEUTRINO PARAMETER SPACE FOR A VANISHING ee ELEMENT IN THE NEUTRINO MASS MATRIX
The consequences of a texture zero at the ee entry of the neutrino mass matrix in the flavor basis, which also implies a vanishing effective Majorana mass for neutrinoless double beta decay, have been studied for Majorana neutrinos. The neutrino parameter space under this condition has been constrained in the light of all available neutrino data including the CHOOZ bound on θ₁₃.
DOI: 10.1145/1273440.1250683
2007
Cited 33 times
Carbon
Chip multiprocessors (CMPs) are now commonplace, and the number of cores on a CMP is likely to grow steadily. However, in order to harness the additional compute resources of a CMP, applications must expose their thread-level parallelism to the hardware. One common approach to doing this is to decompose a program into parallel "tasks" and allow an underlying software layer to schedule these tasks to different threads. Software task scheduling can provide good parallel performance as long as tasks are large compared to the software overheads. We examine a set of applications from an important emerging domain: Recognition, Mining, and Synthesis (RMS). Many RMS applications are compute-intensive and have abundant thread-level parallelism, and are therefore good targets for running on a CMP. However, a significant number have small tasks for which software task schedulers achieve only limited parallel speedups. We propose Carbon, a hardware technique to accelerate dynamic task scheduling on scalable CMPs. Carbon has relatively simple hardware, most of which can be placed far from the cores. We compare Carbon to some highly tuned software task schedulers for a set of RMS benchmarks with small tasks. Carbon delivers significant performance improvements over the best software scheduler: on average for 64 cores, 68% faster on a set of loop-parallel benchmarks, and 109% faster on a set of task-parallel benchmarks.
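To see why small tasks are hard for software schedulers, consider a deliberately simple shared task queue in C++: every enqueue and dequeue pays for synchronization, so when a task body is only a few hundred cycles long the locking dominates. This is a generic illustration of the overhead Carbon targets, not the paper's scheduler or its hardware queues.

```cpp
#include <deque>
#include <functional>
#include <mutex>
#include <optional>

// A minimal software task pool: one global queue, one lock.
// Each push/pop costs a lock round-trip, which is negligible for large
// tasks but dominates when tasks are only a few hundred cycles long.
class TaskQueue {
public:
    using Task = std::function<void()>;

    void push(Task t) {
        std::lock_guard<std::mutex> g(m_);
        q_.push_back(std::move(t));
    }

    std::optional<Task> pop() {
        std::lock_guard<std::mutex> g(m_);
        if (q_.empty()) return std::nullopt;
        Task t = std::move(q_.front());
        q_.pop_front();
        return t;
    }

private:
    std::mutex m_;
    std::deque<Task> q_;
};

// Worker loop: each thread drains the shared queue until it is empty.
void worker(TaskQueue& q) {
    while (auto t = q.pop()) (*t)();
}
```

Production software schedulers use per-thread deques with work stealing to reduce this contention; the point of the sketch is that even then, per-task software overhead sets a floor that tiny tasks cannot amortize.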
DOI: 10.1103/physrevc.85.024620
2012
Cited 24 times
Sensitivity of neutron to proton ratio toward the high density behavior of the symmetry energy in heavy-ion collisions
The symmetry energy at sub- and supra-saturation densities is of great importance in understanding the exact nature of asymmetric nuclear matter as well as neutron stars, but it is poorly known, especially at supra-saturation densities. We demonstrate here whether the neutron to proton ratios from different kinds of fragments are able to determine the supra-saturation behavior of the symmetry energy. For this purpose, a series of Sn isotopes is simulated at different incident energies using the Isospin Quantum Molecular Dynamics (IQMD) model with either a soft or a stiff symmetry energy. It is found that the single neutron to proton ratio from free nucleons as well as from LCPs is sensitive to the symmetry energy, the incident energy, and the isospin asymmetry of the system. However, for the double neutron to proton ratio, this is true only for the free nucleons. It is thus possible to study the high density behavior of the symmetry energy by using the neutron to proton ratio from free nucleons.
DOI: 10.1007/978-3-319-26832-3_64
2015
Cited 22 times
AMRITA_CEN-NLP@SAIL2015: Sentiment Analysis in Indian Language Using Regularized Least Square Approach with Randomized Feature Learning
The present work was done as part of the shared task in Sentiment Analysis in Indian Languages (SAIL 2015), under the constrained category. The task is to classify Twitter data into three polarity categories: positive, negative, and neutral. For training, Twitter datasets were provided in three languages: Hindi, Bengali, and Tamil. In this shared task, ours is the only team that participated in all three languages. Each dataset contained three separate categories of Twitter data, namely positive, negative, and neutral. The proposed method used binary word-presence features and statistical features generated from SentiWordNet. Due to the sparse nature of the generated features, the input features were mapped to a random Fourier feature space to obtain a separation, and a linear classification was performed using the regularized least squares method. The proposed method identified more negative tweets in the test data provided for the Hindi and Bengali languages. In the test tweets for the Tamil language, positive tweets were identified more often than the other two polarity categories. Due to the lack of language-specific and sentiment-oriented features, neutral tweets were identified less often, which also caused misclassifications in all three polarity categories. This motivates us to take our research in this area forward with the proposed method.
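For reference, the combination described here is usually written as below: the standard (Rahimi-Recht) random Fourier feature map followed by a ridge-regularized least-squares fit. This is the textbook form, not necessarily the authors' exact kernel, bandwidth, or regularizer.

```latex
% Random Fourier feature map approximating a Gaussian (RBF) kernel
z(x) = \sqrt{\tfrac{2}{D}}\,\bigl[\cos(\omega_1^{\top}x + b_1),\,\dots,\,\cos(\omega_D^{\top}x + b_D)\bigr]^{\top},
\qquad \omega_i \sim \mathcal{N}(0,\sigma^{-2} I),\quad b_i \sim \mathrm{U}[0, 2\pi]

% Regularized least squares on the mapped features Z = [z(x_1),\dots,z(x_n)]^{\top}
\hat{w} \;=\; \arg\min_{w}\ \lVert Z w - y \rVert_2^{2} + \lambda \lVert w \rVert_2^{2}
        \;=\; (Z^{\top} Z + \lambda I)^{-1} Z^{\top} y
```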
DOI: 10.1103/physrevd.109.015004
2024
Neutrino phenomenology in a model with generalized CP symmetry within type-I seesaw framework
We investigate the consequences of generalized $CP$ (GCP) symmetry within the context of the two Higgs doublet model (2HDM), specifically focusing on the lepton sector. Utilizing the type-I seesaw framework, we study an intriguing connection between the Dirac Yukawa couplings originating from both Higgs fields, leading to a reduction in the number of independent Yukawa couplings and simplifying the scalar and Yukawa sectors when compared to the general 2HDM. The constraints coming from $CP$ symmetry of class three (CP3) result in two right-handed neutrinos having equal masses and lead to a diagonal right-handed Majorana neutrino mass matrix. Notably, $CP$ symmetry experiences a soft breaking due to the phase associated with the vacuum expectation value of the second Higgs doublet. The model aligns well with observed charged lepton masses and neutrino oscillation data, explaining both masses and mixing angles, and yields distinct predictions for normal and inverted neutrino mass hierarchies. It features a novel interplay between the atmospheric mixing angle θ₂₃ and the neutrino mass hierarchy: the angle θ₂₃ is below maximal for the normal hierarchy and above maximal for the inverted hierarchy. Another interesting feature of the model is inherent $CP$ violation for the inverted hierarchy.
DOI: 10.21203/rs.3.rs-3857192/v1
2024
A novel Approach in MRI Signal Processing for Unveiling the Intricacies of Brain Axonal Organization
Abstract This article introduces an innovative methodology to unveil the intricacies of white matter fiber pathways in the brain using diffusion MRI. Relying on the rationale that traditional methods observe a significant decrease in signal intensity values in the direction of higher diffusivity, our novel approach strategically opts for diffusion-sensitizing gradient directions (dSGDs, representing the directions along which signals are generated) aligned with reduced signal intensities. By treating these chosen directions as maximum diffusivity directions, we generate uniformly distributed gradient directions (GDs) around them, which are subsequently employed in the reconstruction process. This approach overcomes drawbacks present in existing methods, such as the uniform gradient directions (UGDs) approach, which exhibits gradient direction redundancy, and the adaptive gradient direction (AGDs) approach, requiring solving the linear system twice per voxel. Our method simultaneously addresses both limitations, offering a more efficient and streamlined process. The effectiveness of our proposed methodology is rigorously evaluated through simulations and experiments involving real data, showcasing its superior performance in uncovering the complex white matter fiber pathways in the brain.
DOI: 10.22271/27889289.2024.v4.i1a.109
2024
Performance of agriculture sector in Uttar Pradesh, India: District level analysis
DOI: 10.48550/arxiv.2402.16491
2024
On Lepton Flavor Violation and Dark Matter in Scotogenic model with Trimaximal Mixing
We examine the Scotogenic model employing the TM$_2$ mixing matrix, $U_{\text{TM}_2}$, for neutrinos and parameterize the Yukawa coupling matrix $y$ based on the diagonalization condition for the neutrino mass matrix, $m_{\nu}$. Our investigation centers on analyzing the relic density of cold dark matter ($\Omega h^2$) and possible lepton flavor violation (LFV) in the model. In the analysis we take into consideration the respective experimental constraints on $\Omega h^2$ and LFV alongside neutrino oscillation data. In the second part, we extend the analysis by incorporating an extended magic symmetry in $m_\nu$, enabling us to completely determine the Yukawa coupling matrix $y$. We observe a notable exclusion of the effective Majorana mass $|m_{ee}|$ parameter space by the cosmological bound on the sum of neutrino masses, particularly for the normal hierarchy, while the inverted hierarchy scenario is excluded by the constraints coming from the extended magic symmetry. These findings shed light on the interplay among the Scotogenic model, TM$_2$ mixing, and extended magic symmetry, offering insights into the permitted parameter space and hierarchy exclusion.
DOI: 10.5194/egusphere-egu24-14480
2024
Primary production and dinitrogen fixation in a subtropical inland saline environment
For over half a century, scientists have endeavoured to measure the rates of primary production (PP) and dinitrogen (N2) fixation in a diverse range of inland waters, spanning from freshwater to saline. Saline and fresh waters make up nearly equivalent portions of the world's inland waters, emphasizing their significance within our continental landscapes. Lakes play a crucial role in global biogeochemical processes and are fundamental for essential ecosystem functions and services. Nonetheless, swift alterations in lakes (i.e., salinization of freshwater ecosystems) have been recognized on a global scale due to shifts in climate and increasing human interventions, posing risks to the valuable services these habitats offer. While considerable research on saline lakes has occurred in past years across Africa, Australia, and North America, a substantial amount remains to be explored in Asian lakes and beyond, necessitating investigation into these unique ecosystems worldwide. The current study explores rates of PP and N2 fixation within a subtropical saline lake (Sambhar, India) along with its neighbouring brine reservoir and salt pans. Incubation experiments were performed to estimate the PP and N2 fixation rates using 13C and 15N tracer techniques. The study reveals that PP and N2 fixation rates were higher in the lake than in the adjacent brine reservoir. Concentrations of particulate and dissolved forms of carbon and nitrogen were also higher in the lake than in the brine reservoir. The salt pans, however, showed large variation in PP, while N2 fixation rates were quite low; the highest concentrations of particulate and dissolved carbon and nitrogen were also found in the salt pans. The high uptake rates in the lake and salt pans may be attributed to higher biomass and nutrient concentrations than in the brine reservoir. The differences in the rates are possibly due to variations in salinity, temperature, nutrient concentrations, and runoff to the lake, which can affect primary producers and potentially lead to shifts in community structure and biodiversity in the different systems. This study provides insights into the complex interactions of PP and N2 fixation rates with environmental parameters in a subtropical saline environment.
DOI: 10.1016/j.nuclphysb.2024.116520
2024
Neutrino mass matrices with generalized CP symmetries and texture zeros
We investigate the properties of neutrino mass matrices that incorporate texture zeros and generalized CP symmetries associated with tribimaximal mixing. By combining these approaches, we derive predictive neutrino mass matrices and explore their implications for mass hierarchies, mixing angles, and CP-violating phases. We find that the three angles defining the generalized CP symmetries have narrow allowed ranges. We also obtain distinct correlations between the three mixing angles and the CP-violating phases that distinguish the various texture patterns from one another. Moreover, we compute the effective neutrino mass for neutrinoless double beta decay and the sum of neutrino masses. Our results highlight the predictability and testability of neutrino mass matrices with generalized CP symmetry.
DOI: 10.1504/ijsami.2024.137659
2024
Economic sustainability of organic farming: an empirical study on farmer's prospective
DOI: 10.9734/bpi/mono/978-81-972870-7-7
2024
Chronobiology: Unlocking the Mysteries of the Biological Clock
DOI: 10.1055/s-0044-1782667
2024
Exoscopic Minimally Invasive Excision of Intradural Extramedullary Tumor
Abstract Spinal tumors extending up to two levels can be removed using minimally invasive techniques. A microscope is traditionally used as a visualization tool with the tubular-retractor system. An exoscope is a newer optical tool with improved digital resolution, panoramic view, and better ergonomics for surgery. A 36-year-old who presented with paraparesis was diagnosed with intradural extramedullary tumor in the T7-T8 region. A complete tumor resection was possible using tubular retractors and exoscope. The patient recovered clinically. We document our surgical experience and present an edited video of the surgery. The key steps and nuances are described in the audio timeline. The authors acknowledge the feasibility of performing this surgery via a minimally invasive method using an exoscope.
DOI: 10.1145/301453.301477
1999
Cited 42 times
Evaluating synchronization on shared address space multiprocessors
Sanjeev Kumar, Dongming Jiang, Rohit Chandra, and Jaswinder Pal Singh. Evaluating synchronization on shared address space multiprocessors: methodology and performance. In SIGMETRICS '99: Proceedings of the 1999 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, May 1999, pages 23–34.
DOI: 10.1016/s0370-2693(01)01476-9
2002
Cited 36 times
Determination of |Vcb| using the semileptonic decay B̄0→D∗+e−ν̄
We present a measurement of the Cabibbo–Kobayashi–Maskawa (CKM) matrix element |Vcb| using a 10.2 fb−1 data sample recorded at the ϒ(4S) resonance with the Belle detector at the KEKB asymmetric e+e− storage ring. By extrapolating the differential decay width of the B̄0→D∗+e−ν̄ decay to the kinematic limit at which the D∗+ is at rest with respect to the B̄0, we extract the product of |Vcb| with the normalization of the decay form factor F(1), |Vcb|F(1)=(3.54±0.19±0.18)×10−2, where the first error is statistical and the second is systematic. A value of |Vcb|=(3.88±0.21±0.20±0.19)×10−2 is obtained using a theoretical calculation of F(1), where the third error is due to the theoretical uncertainty in the value of F(1). The branching fraction B(B̄0→D∗+e−ν̄) is measured to be (4.59±0.23±0.40)×10−2.
DOI: 10.1103/physrevc.86.051601
2012
Cited 20 times
Understanding the symmetry energy using data from the ALADIN-2000 Collaboration taken at the GSI Large Neutron Detector
The present study deals with the extraction of the symmetry energy from heavy-ion collisions at intermediate energies. Using the isospin quantum molecular dynamics (IQMD) model, the dependence of the sum of the charge number for fragments with $Z \ge 2$ ($Z_{\mathrm{bound}}$) on the multiplicity of neutrons ($M_n$) from the projectile spectator fragmentation of $^{124}$Sn and $^{124}$La at 600 MeV/nucleon is compared with the experimental results of the ALADIN-2000 Collaboration. The comparison suggests a soft symmetry energy. In addition, the sensitivities of the $Z_{\mathrm{bound}}$ dependence of the proton multiplicity ($M_p$) and of the neutron-to-proton single [$R(n/p)$] and double [$R_D(n/p)$] ratios to the symmetry energy are also examined. The $Z_{\mathrm{bound}}$ dependence of $R(n/p)$ is found to be the observable most sensitive to the symmetry energy. The ALADIN Collaboration should extend the results for $R(n/p)$ in the near future.
DOI: 10.1140/epjc/s10052-015-3374-0
2015
Cited 18 times
The sensitivity of the ICAL detector at India-based Neutrino Observatory to neutrino oscillation parameters
The India-based Neutrino Observatory will host a 50 kt magnetized iron calorimeter (ICAL) detector that will be able to detect muon tracks and hadron showers produced by charged-current muon neutrino interactions in the detector. The ICAL experiment will be able to determine the precision of atmospheric neutrino mixing parameters and neutrino mass hierarchy using atmospheric muon neutrinos through the earth matter effect. In this paper, we report on the sensitivity for the atmospheric neutrino mixing parameters ( $$\sin ^{2}\theta _{23}$$ and $$|\Delta m^{2}_{32}|$$ ) and octant sensitivity for the ICAL detector using the reconstructed neutrino energy and muon direction as observables. We apply realistic resolutions and efficiencies obtained by the ICAL collaboration with a GEANT4-based simulation to reconstruct neutrino energy and muon direction. Our study shows that using neutrino energy and muon direction as observables for a $$\chi ^{2}$$ analysis, the ICAL detector can measure $$\sin ^{2}\theta _{23}$$ and $$|\Delta m^{2}_{32}|$$ with 13 % and 4 % uncertainties at 1 $$\sigma $$ confidence level and can rule out the wrong octant of $$\theta _{23}$$ with 2 $$\sigma $$ confidence level for 10 years of exposure.
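The abstract does not spell out the $\chi^{2}$ definition used in the analysis; a commonly adopted Poissonian form with systematic pulls $\xi_k$, which analyses of this kind typically use, looks like the following (the exact binning and pull treatment in the paper may differ).

```latex
\chi^{2} = \min_{\{\xi_k\}} \left\{
  2\sum_{i \in \text{bins}} \left[ N_i^{\text{th}}(\xi) - N_i^{\text{obs}}
  + N_i^{\text{obs}}\,\ln\!\frac{N_i^{\text{obs}}}{N_i^{\text{th}}(\xi)} \right]
  + \sum_{k}\xi_k^{2} \right\}
```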
DOI: 10.1088/1748-0221/11/10/t10004
2016
Cited 18 times
Dose rate effects in the radiation damage of the plastic scintillators of the CMS hadron endcap calorimeter
We present measurements of the reduction of light output by plastic scintillators irradiated in the CMS detector during the 8 TeV run of the Large Hadron Collider and show that they indicate a strong dose rate effect. The damage for a given dose is larger for lower dose rate exposures. The results agree with previous measurements of dose rate effects, but are stronger due to the very low dose rates probed. We show that the scaling with dose rate is consistent with that expected from diffusion effects.
DOI: 10.1007/s13534-019-00121-z
2019
Cited 17 times
A hybrid method for fundamental heart sound segmentation using group-sparsity denoising and variational mode decomposition
Segmentation of the fundamental heart sounds S1 and S2 is important for automated monitoring of cardiac activity, including diagnosis of heart diseases. This paper proposes a novel hybrid method for S1 and S2 heart sound segmentation using group-sparsity denoising and the variational mode decomposition (VMD) technique. In the proposed method, the measured phonocardiogram (PCG) signals are denoised using a group-sparsity algorithm by exploiting the group sparse (GS) property of PCG signals. The denoised GS-PCG signals are then decomposed into modes with specific spectral characteristics using the VMD algorithm. The appropriate mode for further processing is selected based on mode central frequencies and mode energy. This is followed by extraction of the Hilbert envelope (HEnv) and thresholding on the selected mode to segment the S1 and S2 heart sounds. The performance advantage of the proposed method is verified using PCG signals from benchmark databases, namely eGeneralMedical, Littmann, Washington, and Michigan. The proposed hybrid algorithm achieved a sensitivity of 100%, positive predictivity of 98%, accuracy of 98%, and a detection error rate of 1.5%. The promising results suggest that the proposed approach can be considered for automated heart sound segmentation.
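The group-sparsity denoising and VMD stages are not reproduced here; the sketch below only illustrates the final Hilbert-envelope-plus-thresholding stage on a purely synthetic PCG-like signal. The sampling rate, burst shapes, and threshold are assumptions for demonstration.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 2000                                   # Hz, assumed sampling rate
t = np.arange(0, 3, 1 / fs)
# toy PCG stand-in: "S1"-like and "S2"-like bursts once per second
pcg = np.zeros_like(t)
for start in np.arange(0.1, 3, 1.0):        # S1-like bursts
    pcg += np.exp(-((t - start) ** 2) / 1e-3) * np.sin(2 * np.pi * 60 * t)
for start in np.arange(0.45, 3, 1.0):       # S2-like bursts
    pcg += 0.7 * np.exp(-((t - start) ** 2) / 1e-3) * np.sin(2 * np.pi * 80 * t)

envelope = np.abs(hilbert(pcg))             # Hilbert envelope (HEnv)
threshold = 0.3 * envelope.max()            # assumed relative threshold
peaks, _ = find_peaks(envelope, height=threshold, distance=int(0.2 * fs))
print("candidate S1/S2 locations (s):", t[peaks])
```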
DOI: 10.21275/v5i2.nov161007
2016
Cited 16 times
Role of Big Data and Analytics in Smart Cities
The aim of this paper is to study the real potential of using Big Data analytics in smart cities. In this work, we studied cases across the globe where decision makers are using Big Data analytics as a tool for building smart cities. The paper covers how linkages among the Internet of Things, machine-to-machine communication, Big Data, and smart cities can support predictive analytics that benefits human wellbeing. The paper focuses on two main areas, smart grids and traffic congestion management, where Big Data analytics can be useful for decision makers and city planners. The report includes various pilot projects currently under way for making cities smarter, along with their benefits to human wellbeing. It also considers the various challenges that can be encountered while implementing Big Data solutions in smart cities.
DOI: 10.1016/j.jisa.2020.102560
2020
Cited 14 times
A novel image encryption algorithm using chaotic compressive sensing and nonlinear exponential function
This paper presents an optically secure image cryptosystem in the compressed-sensing (CS) domain by using a chaos driven nonlinear function. First, the original image is divided into non-overlapping blocks of uniform size, and then the discrete cosine transform (DCT) is used for obtaining sparse representation of these blocks. A logistic map-based circulant sensing measurement matrix is used to compress the sparse representations of the blocks. This partial cipher is further processed for encryption using a nonlinear function based on a dynamic invertible exponential function. The decorrelation of values is done by applying the Arnold 2-D map on the cipher. The seed values to the logistic equation and the parameters of the Arnold map are based on the original image, which makes the proposed algorithm robust in terms of plain text sensitivity. Not only does the algorithm achieve a considerable amount of security, but it also conceals the original image identity by the compression introduced, thereby reducing the storage and transmission cost. The decryption of the cipher is done using a smooth l0 approximation, giving high-quality reconstruction results. The extensive simulation results and performance analysis illustrates the excellent security and reconstruction results of the proposed scheme in comparison with the existing algorithms.
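A rough sketch of the front end described above: block DCT sparsification followed by a logistic-map-driven circulant measurement matrix. The block size, compression ratio, and map parameters are illustrative assumptions; the nonlinear exponential encryption and Arnold decorrelation stages are not shown.

```python
import numpy as np
from scipy.linalg import circulant
from scipy.fft import dctn

def logistic_sequence(x0, r, n, burn_in=500):
    # iterate the logistic map x -> r*x*(1-x), discarding transients
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

block = np.random.default_rng(0).random((16, 16))   # stand-in image block
sparse_block = dctn(block, norm="ortho")            # sparsifying DCT

n = block.size                                      # 256 coefficients
m = n // 4                                          # assumed compression ratio
phi = circulant(2 * logistic_sequence(0.37, 3.99, n) - 1)[:m, :]  # chaotic circulant sensing matrix
measurements = phi @ sparse_block.ravel()           # compressed measurements
print(measurements.shape)
```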
DOI: 10.1109/com-it-con54601.2022.9850629
2022
Cited 7 times
Sentiment Analysis of Covid19 Vaccines Tweets Using NLP and Machine Learning Classifiers
Sentiment Analysis (SA) is an approach for detecting subjective information such as thoughts, outlooks, reactions, and emotional state. The majority of previous SA work treats it as a text-classification problem that requires labelled input to train the model. However, obtaining a tagged dataset is difficult, and most of the time it must be labelled by hand. Another concern is that the lack of cross-domain portability makes it hard to reuse the same labelled data across applications, so data has to be classified manually for each domain. This research applies sentiment analysis to evaluate a full COVID-19 vaccine Twitter dataset. The work involves lexicon analysis using NLP libraries such as neattext and textblob, and multi-class classification using BERT. This work evaluates and compares the results of the machine learning algorithms.
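A minimal sketch of the lexicon-analysis step using TextBlob polarity scores; the decision thresholds are assumptions, and the BERT classification stage is not shown.

```python
from textblob import TextBlob

def lexicon_polarity(tweet, pos_thresh=0.1, neg_thresh=-0.1):
    # TextBlob polarity lies in [-1, 1]; the thresholds here are assumed
    score = TextBlob(tweet).sentiment.polarity
    if score > pos_thresh:
        return "positive"
    if score < neg_thresh:
        return "negative"
    return "neutral"

print(lexicon_polarity("The vaccine rollout was quick and painless"))
```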
DOI: 10.1109/com-it-con54601.2022.9850656
2022
Cited 7 times
Artificial Intelligence And Smart Cities: A Bibliometric Analysis
Smart cities use pioneering technologies like machine learning (ML), artificial intelligence (AI), and the Internet of Things (IoT) to connect people and places. This report includes a bibliometric analysis of works related to AI technologies used in smart city applications in order to provide a comprehensive perspective. The study's goal is to identify the most common AI approaches utilized in smart city contexts, as well as how the smart city AI research area changes over time. To investigate the problem, we used both qualitative and quantitative methodologies. To discover relevant publications published in scientific journals, we used the Scopus database. This research looks at 197 publications published between 2013 and 2021. We used the Bibliometrix library and the biblioshiny package, built on R Studio, for the bibliometric analysis. Our findings reveal that every tier of a smart city project employs a diverse set of AI technologies. Several supervised and unsupervised machine learning methods are used across the instrumentation, middleware, and application layers. According to the bibliometric study, AI and IoT in the context of smart cities is a rapidly emerging research field. Researchers from all over the world have expressed a strong desire to explore and collaborate in this interdisciplinary topic.
DOI: 10.1007/s13246-022-01207-2
2023
An iterative algorithm for computing gradient directions for white matter fascicles detection in brain MRI
DOI: 10.1016/j.str.2023.04.010
2023
Molecular basis of SARS-CoV-2 Omicron variant evasion from shared neutralizing antibody response
Understanding the molecular features of neutralizing epitopes is important for developing vaccines/therapeutics against emerging SARS-CoV-2 variants. We describe three monoclonal antibodies (mAbs) generated from COVID-19 recovered individuals during the first wave of the pandemic in India. These mAbs had publicly shared near germline gene usage and potently neutralized Alpha and Delta, poorly neutralized Beta, and failed to neutralize Omicron BA.1 SARS-CoV-2 variants. Structural analysis of these mAbs in complex with trimeric spike protein showed that all three mAbs bivalently bind spike with two mAbs targeting class 1 and one targeting a class 4 receptor binding domain epitope. The immunogenetic makeup, structure, and function of these mAbs revealed specific molecular interactions associated with the potent multi-variant binding/neutralization efficacy. This knowledge shows how mutational combinations can affect the binding or neutralization of an antibody, which in turn relates to the efficacy of immune responses to emerging SARS-CoV-2 escape variants.
DOI: 10.1007/978-3-031-28053-5_11
2023
Opportunities and Challenges of the Homestay Family Business Concept in the Indian Tourism Sector: A Viewpoint Study
The chapter provides an overview of the potential opportunities for homestay family businesses in India and highlights some major challenges associated with the homestay concept. Homestays have long been popular in tourist areas, offering a welcoming and comfortable atmosphere for travelers who are far from home. Homeowners typically run homestay businesses with vacant space or a room for rent. The homestay business in India offers many opportunities and challenges. This study is based on the authors' perspective and on secondary data analysis of the current situation and the rising role of homestays. It offers the authors' viewpoint on the increased demand for homestays, supported by secondary data from multiple scholarly studies and reports from various government agencies. The research examines the significant opportunities (environmental, infrastructural, tourism-related, social, and economic), the challenges associated with homestays, and the role of homestays in promoting tourism in India. Finally, the authors provide some recommendations that will aid the rise of the homestay concept in the country. The contemporary extent of homestay research is limited, since little past research has been published in the literature. This study is based on the authors' perception and secondary data; upcoming research could build on it and lead to quantitative analysis. Researchers can collect questionnaires from various homestay stakeholders to learn about the real problems and opportunities in this area.
DOI: 10.1007/979-8-8688-0029-0
2024
Architecting a Modern Data Warehouse for Large Enterprises
This book provides an in-depth understanding of how to build modern cloud-native data warehouses with Azure and AWS.
DOI: 10.1142/s021759082443001x
2024
BOARD GENDER DIVERSITY AND FINANCIAL INCLUSION: EVIDENCE FROM THE GLOBAL MICROFINANCE INDUSTRY
This study examines the effect of board gender diversity within Microfinance Institutions (MFIs) on their ability to acquire new borrowers, a key indicator of progress toward achieving the financial inclusion agenda of the Sustainable Development Goals (SDGs). Utilizing an unbalanced panel dataset consisting of 1,450 unique MFIs operating in 106 countries over the period of 2010–2018, this study deployed various econometric models, including the Pooled Ordinary Least Squares (POLS), Random Effects Model (REM), and Fixed Effects Model (FEM). Rigorous measures, including endogeneity-corrected techniques, alternative proxies for board gender diversity such as the BLAU index and sub-sample analyses were applied to ensure the reliability and robustness of our results. The study’s findings indicated a positive association between board gender diversity and financial inclusion within MFIs. However, the statistical significance of these outcomes varied depending on the specific analytical techniques, sub-samples, and alternative proxies used during the research. Overall, this study offers implications for practitioners and policymakers, encouraging women’s participation in the boardrooms of MFIs to advance the financial inclusion agenda of the SDGs.
DOI: 10.1007/s11227-024-05920-5
2024
Auto-localization algorithm for mobile sensor nodes in wireless sensor networks
DOI: 10.1007/s12223-024-01147-2
2024
Microbial nanotechnology for agriculture, food, and environmental sustainability: Current status and future perspective
DOI: 10.4108/eetsis.5457
2024
Evaluating Performance of Conversational Bot Using Seq2Seq Model and Attention Mechanism
The chatbot utilizes a sequence-to-sequence (Seq2Seq) model with an attention mechanism in order to interpret and address user inputs effectively. The overall pipeline consists of data gathering, data preprocessing, the Seq2Seq model, and training and tuning. Data preprocessing involves cleaning irrelevant data before converting the text into a numerical format. The Seq2Seq model comprises two components: an encoder and a decoder. The encoder and decoder, together with the attention mechanism, handle dialogue management, which enables the model to answer the user in the most accurate and relevant manner; the output generated by the bot is in natural language. Once the Seq2Seq model is built, it is trained on the preprocessed data, minimizing the loss between the predicted output and the ground-truth output. Performance is computed using metrics such as perplexity, BLEU score, and ROUGE score on a held-out validation set. To meet non-functional requirements, the system needs to maintain a response time under one second with an accuracy target exceeding 90%.
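The abstract does not say which attention variant or framework the authors used; purely for illustration, here is a minimal NumPy sketch of dot-product attention over encoder states for a single decoder step.

```python
import numpy as np

def dot_product_attention(decoder_state, encoder_states):
    # decoder_state: shape (d,); encoder_states: shape (T, d)
    scores = encoder_states @ decoder_state        # alignment score per source step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over source positions
    context = weights @ encoder_states             # attention-weighted context vector
    return context, weights

enc = np.random.default_rng(1).normal(size=(5, 8))   # 5 source steps, hidden size 8
dec = np.random.default_rng(2).normal(size=8)
context, attn = dot_product_attention(dec, enc)
print(attn.round(3), context.shape)
```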
DOI: 10.1063/5.0190130
2024
On the denoising of periodic and aperiodic ECG signals from a discretized reaction diffusion heart model using variational mode decomposition
Power line interference (PLI), which lies in the range of 50 Hz or 60 Hz, is one of the unavoidable noise sources in the electrocardiogram (ECG) signal. An ECG signal corrupted with PLI noise may lead to poor diagnosis of unhealthy heart conditions. In this work, we propose a methodology employing variational mode decomposition (VMD) for removing PLI noise from various types of numerically simulated ECG signals obtained from healthy as well as different abnormal heart conditions. An existing mathematical model developed using a discretized reaction-diffusion model is used to numerically simulate normal as well as various abnormal ECG signals. The model consists of a set of mutually coupled nonlinear oscillators represented as a four-component ordinary differential equation (ODE) system. PLI noise is manually added to the numerically simulated ECG signals representing different arrhythmia conditions. The corrupted abnormal ECG signal is decomposed into intrinsic mode functions (IMFs) using VMD. Frequency spectrum analysis is then performed, and the IMFs corresponding to the PLI noise are identified and removed to recover the original ECG signal. From the result analysis, it is observed that the proposed methodology employing VMD can remove power line noise very effectively from healthy as well as various abnormal ECG signals.
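VMD is not part of SciPy, so the sketch below only illustrates the surrounding steps: locating the PLI peak in the spectrum of a toy corrupted signal and then removing it, with an IIR notch filter standing in for discarding the PLI-dominated VMD mode. The sampling rate and signal shapes are assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 500                                    # Hz, assumed sampling rate
t = np.arange(0, 4, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8 * t)   # toy stand-in
noisy = ecg_like + 0.5 * np.sin(2 * np.pi * 50 * t)                        # add 50 Hz PLI

# frequency spectrum analysis: locate the interference peak above the ECG band
freqs = np.fft.rfftfreq(noisy.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(noisy))
mask = freqs > 20
pli_freq = freqs[mask][np.argmax(spectrum[mask])]

b, a = iirnotch(pli_freq, Q=30, fs=fs)      # notch stand-in for removing the PLI mode
cleaned = filtfilt(b, a, noisy)
print(f"identified PLI at {pli_freq:.1f} Hz")
```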
DOI: 10.7324/jabb.2024.166024
2024
Probiotic formulations for human health: Current research and future perspective
Probiotics are living microorganisms known for their beneficial properties and have been extensively researched and utilized in various products worldwide. These microorganisms have essential nutritional needs and exhibit significant functional qualities. Probiotics have been employed to enhance the well-being of both animals and humans by influencing the balance of microorganisms in the intestines. Several probiotic strains, such as Bifidobacterium and Lactobacilli, have been identified and studied for their potential to mitigate the incidence of gastrointestinal (GI) infections or as a therapeutic approach for treating such infections. With the rise of microbiota displaying resistance and tolerance to traditional medications and antibiotics, the effectiveness of drugs has diminished. Several probiotic strains have been identified to possess notable properties, including potent anti-inflammatory and anti-allergic effects. Consequently, introducing beneficial bacterial species into the GI tract offers an appealing approach to restoring microbial balance and preventing diseases. Furthermore, probiotics have demonstrated the capacity to inhibit the action of the intestinal bacterial enzymes responsible for synthesizing colonic carcinogens. Probiotics offer a promising preventive and therapeutic advancement, but further research is required to better understand their specific impact on intestinal health. Probiotics can also exert a direct influence on other microorganisms, including pathogens, which is crucial in preventing and treating infections and restoring the balance of microorganisms in the GI tract. The present review deals with probiotic formulations, their mechanisms, and their role in human health.
DOI: 10.1109/access.2024.3392592
2024
Short-Term Load Forecasting Using Combination of Linear and Non-linear Models
DOI: 10.1016/j.biopsych.2024.02.323
2024
88. Atypical Resting-State Electroencephalogram Theta-Beta Ratio in Autistic Adults: Preliminary Results
DOI: 10.1016/s0370-2693(02)02373-0
2002
Cited 33 times
Observation of B→DKK decays
The B→D(∗)K−K(∗)0 decays have been observed for the first time. The branching fractions of the B→D(∗)K−K(∗)0 decay modes are measured. Significant signals are found for the B→D(∗)K−K∗0 and B−→D0K−K0S decay modes. The invariant mass and polarization distributions for the K−K∗0 and K−K0S subsystems have been studied. For the K−K∗0 subsystem these distributions agree well with those expected for two-body B→D(∗)a1−(1260) decays, with a1−(1260)→K−K∗0. The analysis was done using 29.4 fb−1 of data collected with the Belle detector at the e+e− asymmetric collider KEKB.
DOI: 10.1088/0256-307x/27/6/062504
2010
Cited 20 times
Systematic Study on System Size Dependence of Global Stopping: Role of Momentum-Dependent Interactions and Symmetry Energy
Using the isospin-dependent quantum molecular dynamical model, we systematically study the role of symmetry energy with and without momentum-dependent interactions on the global nuclear stopping. We simulate the reactions by varying the total mass of the system from 80 to 394 at different beam energies from 30 to 1000 MeV/nucleon over central and semi-central geometries. The nuclear stopping is found to be sensitive towards the momentum-dependent interactions and symmetry energy at low incident energies. The momentum-dependent interactions are found to weaken the finite size effects in nuclear stopping.
DOI: 10.1109/mvip.2012.6428771
2012
Cited 19 times
A robust watermarking method based on Compressed Sensing and Arnold scrambling
Watermarking is an information-hiding technique used for authentication and copyright protection. In this paper, a new watermarking scheme is proposed that uses both compressed sensing and Arnold scrambling for efficient data compression and encryption. Compressive sensing aims at the reconstruction of a sparse signal using a small number of linear measurements. The compressed measurements are then encrypted using the Arnold transform. The proposed encryption scheme is computationally more secure against the investigated attacks on digital multimedia signals.
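A minimal sketch of the Arnold scrambling step on a square block (the compressed-sensing measurement stage is omitted); the block size and iteration count are assumptions.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    # Arnold cat map on an N x N array: (x, y) -> (x + y, x + 2y) mod N
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square block"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

block = np.arange(64).reshape(8, 8)        # stand-in for compressed measurements
print(arnold_scramble(block, iterations=3))
```

Because the map is invertible modulo N, descrambling simply applies the inverse transform the same number of times.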
DOI: 10.1016/j.cagd.2015.01.001
2015
Cited 15 times
Reconstruction of free-form space curves using NURBS-snakes and a quadratic programming approach
In this study, we propose a robust algorithm for reconstructing free-form space curves in space using a Non-Uniform Rational B-Spline (NURBS)-snake model. Two perspective images of the required free-form curve are used as the input and a nonlinear optimization process is used to fit a NURBS-snake on the projected data in these images. Control points and weights are treated as decision variables in the optimization process. The Levenberg–Marquardt optimization algorithm is used to optimize the parameters of the NURBS-snake, where the initial solution is obtained using a two-step procedure. This makes the convergence faster and it stabilizes the optimization procedure. The curve reconstruction problem is reduced to a problem that comprises stereo reconstruction of the control points and computation of the corresponding weights. Several experiments were conducted to evaluate the performance of the proposed algorithm and comparisons were made with other existing approaches.
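As a heavily simplified stand-in for the NURBS-snake fit (a quadratic Bezier curve in a single 2D view rather than a NURBS with weights fitted to stereo projections), the following sketch shows control-point fitting with SciPy's Levenberg-Marquardt solver; the data and initial guess are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# noisy samples of a planar curve, standing in for digitized image points
t_samples = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)
target = np.column_stack([t_samples, np.sin(np.pi * t_samples)])
target += rng.normal(scale=0.01, size=target.shape)

p0, p2 = target[0], target[-1]                 # endpoints fixed to the data

def residuals(p1_flat):
    p1 = p1_flat.reshape(2)
    t = t_samples[:, None]
    curve = (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2   # quadratic Bezier
    return (curve - target).ravel()

fit = least_squares(residuals, x0=np.array([0.5, 0.5]), method="lm")
print("fitted middle control point:", fit.x)
```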
DOI: 10.1109/isca.2008.38
2008
Cited 20 times
Atomic Vector Operations on Chip Multiprocessors
The current trend is for processors to deliver dramatic improvements in parallel performance while only modestly improving serial performance. Parallel performance is harvested through vector/SIMD instructions as well as multithreading (through both multithreaded cores and chip multiprocessors). Vector parallelism can be more efficiently supported than multithreading, but is often harder for software to exploit. In particular, code with sparse data access patterns cannot easily utilize the vector/SIMD instructions of mainstream processors. Hardware to scatter and gather sparse data has previously been proposed to enable vector execution for these codes. However, on multithreaded architectures, a number of applications spend significant time on atomic operations (e.g., parallel reductions), which cannot be vectorized using previously proposed schemes. This paper proposes architectural support for atomic vector operations (referred to as GLSC) that addresses this limitation. GLSC extends scatter-gather hardware to support atomic memory operations. Our experiments show that the GLSC provides an average performance improvement on a set of important RMS kernels of 54% for 4-wide SIMD.
DOI: 10.1103/physrevc.82.024610
2010
Cited 16 times
Experimental balance energies and isospin-dependent nucleon-nucleon cross-sections
The effect of different isospin-dependent cross-sections on directed flow is studied for a variety of systems (for which experimental balance energies are available) using an isospin-dependent quantum molecular dynamics (IQMD) model. We show that balance energies are sensitive to isospin-dependent cross sections for light systems, while almost no effect exists for heavier nuclei. A reduced cross-section $\sigma=0.9\,\sigma_{\mathrm{NN}}$ with a stiff equation of state is able to explain the experimental balance energies in most of the systems. A power law behavior is also given for the mass dependence of the balance energy, which also follows the $N/Z$ dependence.
DOI: 10.1103/physrevd.82.013010
2010
Cited 16 times
Unitarity constraints on trimaximal mixing
When the neutrino mass eigenstate $\nu_2$ is trimaximally mixed, the mixing matrix is called trimaximal. The middle column of the trimaximal mixing matrix is identical to tribimaximal mixing and the other two columns are subject to unitarity constraints. This corresponds to a mixing matrix with four independent parameters in the most general case. Apart from the two Majorana phases, the mixing matrix has only one free parameter in the $CP$ conserving limit. Trimaximality results in interesting interplay between mixing angles and $CP$ violation. A notion of maximal $CP$ violation naturally emerges here: $CP$ violation is maximal for maximal 2-3 mixing. Similarly, there is a natural constraint on the deviation from maximal 2-3 mixing which takes its maximal value in the $CP$ conserving limit.
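The trimaximality condition referred to above can be stated compactly: the second column of the mixing matrix coincides with the tribimaximal one (up to phase conventions), i.e.

```latex
|U_{e2}|^{2} = |U_{\mu 2}|^{2} = |U_{\tau 2}|^{2} = \frac{1}{3},
\qquad
\bigl(U_{e2},\, U_{\mu 2},\, U_{\tau 2}\bigr)^{T} = \frac{1}{\sqrt{3}}\,(1,\,1,\,1)^{T}.
```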
DOI: 10.1093/cid/ciac224
2022
Cited 5 times
Facilitating Safe Discharge Through Predicting Disease Progression in Moderate Coronavirus Disease 2019 (COVID-19): A Prospective Cohort Study to Develop and Validate a Clinical Prediction Model in Resource-Limited Settings
Abstract Background In locations where few people have received coronavirus disease 2019 (COVID-19) vaccines, health systems remain vulnerable to surges in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections. Tools to identify patients suitable for community-based management are urgently needed. Methods We prospectively recruited adults presenting to 2 hospitals in India with moderate symptoms of laboratory-confirmed COVID-19 to develop and validate a clinical prediction model to rule out progression to supplemental oxygen requirement. The primary outcome was defined as any of the following: SpO2 &lt; 94%; respiratory rate &gt; 30 BPM; SpO2/FiO2 &lt; 400; or death. We specified a priori that each model would contain three clinical parameters (age, sex, and SpO2) and 1 of 7 shortlisted biochemical biomarkers measurable using commercially available rapid tests (C-reactive protein [CRP], D-dimer, interleukin 6 [IL-6], neutrophil-to-lymphocyte ratio [NLR], procalcitonin [PCT], soluble triggering receptor expressed on myeloid cell-1 [sTREM-1], or soluble urokinase plasminogen activator receptor [suPAR]), to ensure the models would be suitable for resource-limited settings. We evaluated discrimination, calibration, and clinical utility of the models in a held-out temporal external validation cohort. Results In total, 426 participants were recruited, of whom 89 (21.0%) met the primary outcome; 257 participants comprised the development cohort, and 166 comprised the validation cohort. The 3 models containing NLR, suPAR, or IL-6 demonstrated promising discrimination (c-statistics: 0.72–0.74) and calibration (calibration slopes: 1.01–1.05) in the validation cohort and provided greater utility than a model containing the clinical parameters alone. Conclusions We present 3 clinical prediction models that could help clinicians identify patients with moderate COVID-19 suitable for community-based management. The models are readily implementable and of particular relevance for locations with limited resources.
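The abstract does not state which model family was used; as an illustration only, assuming a logistic regression on age, sex, SpO2, and one biomarker (NLR), a sketch with scikit-learn on fully synthetic data might look like this. The coefficients, split, and outcome simulation are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 400
# hypothetical predictors: age (years), sex (0/1), SpO2 (%), NLR (ratio)
X = np.column_stack([
    rng.normal(50, 15, n),        # age
    rng.integers(0, 2, n),        # sex
    rng.normal(96, 2, n),         # SpO2
    rng.lognormal(1.2, 0.5, n),   # neutrophil-to-lymphocyte ratio
])
# synthetic outcome loosely driven by age, low SpO2, and high NLR (illustration only)
logit = 0.03 * (X[:, 0] - 50) - 0.4 * (X[:, 2] - 96) + 0.3 * np.log(X[:, 3])
y = rng.random(n) < 1 / (1 + np.exp(-(logit - 1.5)))

dev, val = slice(0, 260), slice(260, None)       # temporal-style development/validation split
model = LogisticRegression(max_iter=1000).fit(X[dev], y[dev])
auc = roc_auc_score(y[val], model.predict_proba(X[val])[:, 1])
print(f"validation c-statistic: {auc:.2f}")
```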
DOI: 10.1109/65.484230
1996
Cited 25 times
On multicast support for shared-memory-based ATM switch architecture
Because of its superior performance characteristics in terms of cell loss and throughput for a given memory space, shared-memory ATM switches have gained significant importance in handling bursty traffic in ATM networks. Because of its increased effective load, multicast traffic requires an even greater increase in switching capacity than a shared-memory-based ATM switching system can provide. In order to support multicast operations with a memory-based switching system, the replication and storage methods used for multicast ATM cells for switching purposes become important. Various ways used to support multicast operation with a shared-memory-based ATM switching system have been categorized into several classes. Performance evaluation shows that the class of single-write single-read multicast scheme with output mask (CSWSR-w-OM) implementation overcomes the memory bottleneck involved with replication of multicast cells. It also provides superior performance for a given memory space employed in the shared-memory switching system in comparison to other classes of multicast schemes.
DOI: 10.1145/1037949.1024415
2004
Cited 22 times
Dynamic tracking of page miss ratio curve for memory management
Memory can be efficiently utilized if the dynamic memory demands of applications can be determined and analyzed at run-time. The page miss ratio curve(MRC), i.e. page miss rate vs. memory size curve, is a good performance-directed metric to serve this purpose. However, dynamically tracking MRC at run time is challenging in systems with virtual memory because not every memory reference passes through the operating system (OS).This paper proposes two methods to dynamically track MRC of applications at run time. The first method is using a hardware MRC monitor that can track MRC at fine time granularity. Our simulation results show that this monitor has negligible performance and energy overheads. The second method is an OS-only implementation that can track MRC at coarse time granularity. Our implementation results on Linux show that it adds only 7--10% overhead.We have also used the dynamic MRC to guide both memory allocation for multiprogramming systems and memory energy management. Our real system experiments on Linux with applications including Apache Web Server show that the MRC-directed memory allocation can speed up the applications' execution/response time by up to a factor of 5.86 and reduce the number of page faults by up to 63.1%. Our execution-driven simulation results with SPEC2000 benchmarks show that the MRC-directed memory energy management can improve the Energy * Delay metric by 27--58% over previously proposed static and dynamic schemes.
DOI: 10.1109/jsac.2003.810513
2003
Cited 21 times
The sliding-window packet switch: a new class of packet switch architecture with plural memory modules and decentralized control
Shared-memory based packet switches are known to provide the best possible throughput performance for bursty data traffic in high-speed packet networks and internets compared with other buffering strategies under conditions of identical memory resources deployed in the switch. However, scaling of shared-memory packet switches to a larger size has been restricted mainly due to the physical limitations imposed by the memory-access speed and the centralized control for switching functions in shared-memory switches. A new scalable architecture for a shared-memory packet switch, called the sliding-window (SW) switch, is proposed to overcome these limitations. The SW switch introduces a new class of switching architecture, where physically separate multiple memory modules are logically shared among all the ports of the switch, and the control is decentralized. The SW switch alleviates the bottleneck caused by the centralized control of switching functions in large shared-memory switches. Decentralized switching functions enable the SW switch to operate in a pipeline fashion to enhance scalability and switching capacity compared with that of previously known classes of shared-memory switch architecture.
DOI: 10.1016/0550-3213(82)90353-4
1982
Cited 20 times
Charm production in 400 GeV/c proton-emulsion interactions
5032 proton-emulsion interactions at 400 GeV/c momentum have been carefully scrutinized for production and decay of charged charm particles. In order to detect these decays, shower tracks from 3056 stars have been followed to a maximum length of 1 mm and those from 1976 stars up to 2 mm. A total of 23 three-prong charm-like candidates have been recorded in the forward cone. The background due to γ-overlap on a shower track, trident/pseudo-trident production and secondary interactions is estimated to be 15. Attributing the signal of 8 events to Λc+ and assuming the branching ratio of Λc+ → 3 prong to be 0.6 and τΛc to be 10−13 sec we obtain the production cross section to be 106±39μb/nucleon. Out of these 8 events one example of semileptonic decay of Λc+ is seen.
DOI: 10.1088/1361-6471/aaf55e
2018
Cited 12 times
Two simple textures of the magic neutrino mass matrix
The tri-bimaximal (TBM) mixing predicts a vanishing θ13, which can be attributed to the inherent μ − τ symmetry of TBM mixing. We break this μ − τ symmetry by adding a complex magic matrix with one variable to a TBM neutrino mass matrix with one vanishing eigenvalue. We present two such textures and study their phenomenological implications.
DOI: 10.1016/s2211-9477(12)70009-2
2012
Cited 12 times
Diabetic foot
Diabetes is a leading cause of morbidity and one of the causes of mortality worldwide. Between 2% and 10% of persons with diabetes may develop a lower extremity ulcer during the course of the disease, and foot ulceration is the precursor to approximately 85% of lower extremity amputations. Diabetic foot ulcers have multiple risk factors, among which diabetic peripheral neuropathy plays a central role. A careful history and examination of the neurological, vascular, and musculoskeletal systems are important. Ulcers are classified according to the Modified Wagner Classification System or the University of Texas system. A few bedside tests, such as the ankle brachial index, vibration perception threshold, pedopodgraph, and hot and cold sensidometer, are important tools for risk categorization of the lower limb. Management is according to the risk category. Infection in the diabetic foot is usually polymicrobial; treatment is based on culture and sensitivity, and empirical antimicrobial therapy is guided by an audit of the culture and sensitivity results of diabetic foot wounds from the previous year. Patient education is an effective tool for the prevention of diabetic foot ulcers and further limb loss.
DOI: 10.1504/ijbic.2020.108593
2020
Cited 9 times
PSO-MoSR: a PSO-based multi-objective software remodularisation
The quality of the modular structure of a software system strongly affects the success of a software project. Software remodularisation, which is used to improve the software structure, is a complex task and involves the optimisation of multiple conflicting aspects. To address the optimisation of multiple objectives, many metaheuristic optimisation algorithms have been designed. The customisation of these algorithms to real-world multi-objective software remodularisation problems is a challenging task. In this article, particle swarm optimisation (PSO), a widely used metaheuristic technique, is customised, and a PSO-based multi-objective software remodularisation (PSO-MoSR) approach is proposed to address the multi-objective optimisation issues of software remodularisation. The effectiveness of the proposed PSO-MoSR is evaluated through several experiments modularising 17 real-world software systems.
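PSO-MoSR's actual objectives (modularisation quality metrics) and its multi-objective handling are not given in the abstract; purely to illustrate the underlying velocity and position updates, below is a minimal generic single-objective PSO loop with a toy objective standing in for a modularisation score. All parameter values are assumptions.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    vel = np.zeros((n_particles, dim))                     # particle velocities
    pbest = pos.copy()                                     # personal bests
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# toy objective standing in for a (single) modularisation quality score
best, val = pso_minimize(lambda x: np.sum((x - 1.0) ** 2), dim=5)
print(best.round(3), val)
```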
DOI: 10.1016/j.physletb.2007.09.013
2007
Cited 15 times
CP violation in two zero texture neutrino mass matrices
It has been shown that the neutrino mass matrices with two texture zeros in the charged lepton basis predict non-zero 1–3 mixing and are necessarily CP violating with one possible exception in class C for maximal mixing.
DOI: 10.1103/physrevd.79.033011
2009
Cited 13 times
CP-odd weak basis invariants and texture zeros
We construct the $CP$-odd weak basis invariants from the neutrino mass matrix in a weak basis, in which the charged lepton mass matrix is diagonal, and find the necessary and sufficient conditions for $CP$ conservation. We study the interrelationships between different $CP$-odd weak basis invariants to examine their implications for the Dirac- and Majorana-type $CP$ violating phases for the phenomenologically allowed Frampton-Glashow-Marfatia texture zero structures of the neutrino mass matrix.
DOI: 10.1140/epjc/s10052-012-1940-2
2012
Cited 10 times
Four zero texture fermion mass matrices in SO(10) GUT
We attempt the integration of the phenomenologically successful four zero texture of fermion mass matrices with the renormalizable SO(10) GUT. The resulting scenario is found to be highly predictive. Firstly, we examine the phenomenological implications of a class of the lepton mass matrices with parallel texture structures and obtain interesting constraints on the parameters of the charged lepton and the neutrino mass matrices. We combine these phenomenological constraints with the constraints obtained from SO(10) GUT to reduce the number of the free parameters and to further constrain the allowed ranges of the free parameters. The solar/atmospheric mixing angles obtained in this analysis are in fairly good agreement with the data.
DOI: 10.1109/iccv.2013.201
2013
Cited 10 times
Non-convex P-Norm Projection for Robust Sparsity
In this paper, we investigate the properties of the Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as hard-thresholding-based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.
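The paper's non-convex Lp projection algorithm is not reproduced here; for orientation, below is the standard sorting-based Euclidean projection onto the non-negative simplex under an L1 (sum) constraint, the kind of L1 projection the abstract compares against. The target sum z is an assumption.

```python
import numpy as np

def project_simplex_l1(v, z=1.0):
    # Euclidean projection of v onto {x : x >= 0, sum(x) = z}
    u = np.sort(v)[::-1]                      # sort in descending order
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - z) / j > 0)[0][-1]   # last index satisfying the KKT condition
    theta = (css[rho] - z) / (rho + 1)
    return np.maximum(v - theta, 0.0)

print(project_simplex_l1(np.array([0.4, 1.2, -0.3]), z=1.0))   # -> [0.1, 0.9, 0.0]
```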
DOI: 10.1016/j.jksuci.2014.12.010
2017
Cited 10 times
NURBS-based geometric inverse reconstruction of free-form shapes
In this study, a geometric inverse algorithm is proposed for reconstructing free-form shapes (curves and surfaces) in space from their arbitrary perspective images using a Non-Uniform Rational B-Spline (NURBS) model. In particular, NURBS model is used to recover information about the required shape in space. An optimization problem is formulated to fit the NURBS to the digitized data in the images. Control points and weights are treated as the decision variables in the optimization process. The 3D shape reconstruction problem is reduced to a problem that comprises stereo reconstruction of control points and computation of the corresponding weights. The correspondence between the control points in the two images is obtained using a third image. The performance of the proposed algorithm was validated by taking several examples based on the synthetic and real images. Comparisons were made with a point-based method in terms of various types of errors.
DOI: 10.1103/physrevd.95.093005
2017
Cited 10 times
Search for the differences in atmospheric neutrino and antineutrino oscillation parameters at the INO-ICAL experiment
In this paper, we present a study to measure the differences between the atmospheric neutrino and anti-neutrino oscillations in the Iron-Calorimeter detector at the India-based Neutrino Observatory experiment. Charged Current $\nu_{\mu}$ and $\overline{\nu}_{\mu}$ interactions with the detector under the influence of earth matter effect have been simulated for ten years of exposure. The observed $\nu_{\mu}$ and $\overline{\nu}_{\mu}$ events spectrum are separately binned into direction and energy bins, and a $\chi^{2}$ is minimised with respect to each bin to extract the oscillation parameters for $\nu_{\mu}$ and $\overline{\nu}_{\mu}$ separately. We then present the ICAL sensitivity to confirm a non-zero value of the difference in atmospheric mass squared of neutrino and anti-neutrino i.e. $|\Delta m^{2}_{32}|-|\Delta\overline{m^{2}}_{32}|$.