
Wen Guan


DOI: 10.1007/s41781-019-0026-3
2019
Cited 99 times
Rucio: Scientific Data Management
Rucio is an open-source software framework that provides scientific collaborations with the functionality to organize, manage, and access their data at scale. The data can be distributed across heterogeneous data centers at widely distributed locations. Rucio was originally developed to meet the requirements of the high-energy physics experiment ATLAS, and is now being continuously extended to support the LHC experiments and other diverse scientific communities. In this article, we detail the fundamental concepts of Rucio, describe the architecture along with implementation details, and give operational experience from production usage.
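The declarative, rule-based model behind such systems can be illustrated with a small sketch: a rule names a dataset, a replica count, and an expression over storage-endpoint attributes, and the system works out which endpoints satisfy it. The Python below is a minimal conceptual sketch only; all names (Rule, RSES, resolve) are illustrative and are not Rucio's actual API.

from dataclasses import dataclass

@dataclass
class Rule:
    dataset: str          # dataset identifier, e.g. "user.alice:higgs_ntuples"
    copies: int           # number of replicas the rule demands
    rse_expression: str   # expression over storage-endpoint attributes

RSES = {
    "CERN_DATADISK": {"tier": 1, "type": "DATADISK"},
    "BNL_DATADISK":  {"tier": 1, "type": "DATADISK"},
    "LRZ_SCRATCH":   {"tier": 2, "type": "SCRATCHDISK"},
}

def matching_rses(expression: str) -> list[str]:
    # Toy matcher: "type=DATADISK" selects endpoints with that attribute.
    key, value = expression.split("=")
    return [name for name, attrs in RSES.items() if str(attrs.get(key)) == value]

def resolve(rule: Rule) -> list[str]:
    candidates = matching_rses(rule.rse_expression)
    if len(candidates) < rule.copies:
        raise RuntimeError("not enough endpoints satisfy the rule")
    return candidates[: rule.copies]

rule = Rule("user.alice:higgs_ntuples", copies=2, rse_expression="type=DATADISK")
print(resolve(rule))   # -> ['CERN_DATADISK', 'BNL_DATADISK']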
DOI: 10.1088/2632-2153/abc17d
2021
Cited 69 times
Quantum machine learning in high energy physics
Abstract Machine learning has been used in high energy physics (HEP) for a long time, primarily at the analysis level with supervised classification. Quantum computing was postulated in the early 1980s as a way to perform computations that would not be tractable with a classical computer. With the advent of noisy intermediate-scale quantum computing devices, more quantum algorithms are being developed with the aim of exploiting the capacity of the hardware for machine learning applications. An interesting question is whether there are ways to apply quantum machine learning to HEP. This paper reviews the first generation of ideas that use quantum machine learning on problems in HEP and provides an outlook on future applications.
DOI: 10.1103/physrevresearch.3.033221
2021
Cited 41 times
Application of quantum machine learning using the quantum kernel algorithm on high energy physics analysis at the LHC
Quantum machine learning could possibly become a valuable alternative to classical machine learning for applications in high energy physics by offering computational speedups. In this study, we employ a support vector machine with a quantum kernel estimator (QSVM-Kernel method) for a recent LHC flagship physics analysis: ttH (Higgs boson production in association with a top quark pair). In our quantum simulation study using up to 20 qubits and up to 50,000 events, the QSVM-Kernel method performs as well as its classical counterparts on three different platforms: Google TensorFlow Quantum, IBM Quantum, and Amazon Braket. Additionally, using 15 qubits and 100 events, the application of the QSVM-Kernel method on IBM superconducting quantum hardware approaches the performance of a noiseless quantum simulator. Our study confirms that the QSVM-Kernel method can use the large dimensionality of the quantum Hilbert space to replace the classical feature space in realistic physics data sets.
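The core mechanic of a quantum kernel method can be sketched classically: encode each feature vector in a quantum state, take pairwise state overlaps as the kernel, and hand the resulting Gram matrix to an ordinary SVM. The sketch below is a deliberate simplification, not the paper's circuit: it simulates the simplest encoding, a product state of single-qubit Ry(x_i) rotations, whose fidelity kernel factorizes in closed form, and lets scikit-learn's precomputed-kernel SVC do the rest.

import numpy as np
from sklearn.svm import SVC

def fidelity_kernel(X, Z):
    # |<psi(x)|psi(z)>|^2 for product states of single-qubit Ry(x_i) rotations:
    # each qubit contributes cos^2((x_i - z_i)/2), so the kernel factorizes.
    diff = X[:, None, :] - Z[None, :, :]
    return np.prod(np.cos(diff / 2.0) ** 2, axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(200, 4))        # 4 "qubits" of angle-encoded features
y = (X.sum(axis=1) > 2 * np.pi).astype(int)     # toy labels

K = fidelity_kernel(X, X)                       # Gram matrix, train vs. train
clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))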
DOI: 10.1088/1361-6471/ac1391
2021
Cited 40 times
Application of quantum machine learning using the quantum variational classifier method to high energy physics analysis at the LHC on IBM quantum computer simulator and hardware with 10 qubits
One of the major objectives of the experimental programs at the Large Hadron Collider (LHC) is the discovery of new physics. This requires the identification of rare signals in immense backgrounds. Using machine learning algorithms greatly enhances our ability to achieve this objective. With the progress of quantum technologies, quantum machine learning could become a powerful tool for data analysis in high energy physics. In this study, using IBM gate-model quantum computing systems, we employ the quantum variational classifier method in two recent LHC flagship physics analyses: ttH (Higgs boson production in association with a top quark pair, probing the Higgs boson couplings to the top quark) and H → μ+μ− (Higgs boson decays to two muons, probing the Higgs boson couplings to second-generation fermions). We have obtained early results with 10 qubits on the IBM quantum simulator and the IBM quantum hardware. With small training samples of 100 events on the quantum simulator, the quantum variational classifier method performs similarly to classical algorithms such as SVM (support vector machine) and BDT (boosted decision tree), which are often employed in LHC physics analyses. On the quantum hardware, the quantum variational classifier method has shown promising discrimination power, comparable to that on the quantum simulator. This study demonstrates that quantum machine learning has the ability to differentiate between signal and background in realistic physics datasets. We foresee the use of quantum machine learning in future high-luminosity LHC physics analyses, including measurements of the Higgs boson self-couplings and searches for dark matter.
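At its core, a variational classifier trains circuit parameters by gradient descent on a loss computed from circuit outputs, with gradients commonly obtained via the parameter-shift rule. The toy sketch below is a deliberate simplification, not the paper's 10-qubit setup: it simulates a single qubit whose model output is <Z> = cos(x + theta), for which the parameter-shift rule is exact.

import numpy as np

# Toy variational classifier on one simulated qubit: Ry(x) data encoding
# followed by a trainable Ry(theta); the model output is <Z> = cos(x + theta).
def expectation(theta, x):
    return np.cos(x + theta)

def loss(theta, X, y):
    return np.mean((expectation(theta, X) - y) ** 2)

def grad(theta, X, y):
    # Parameter-shift rule: d<Z>/dtheta = [<Z>(theta + pi/2) - <Z>(theta - pi/2)] / 2
    shift = (expectation(theta + np.pi / 2, X) - expectation(theta - np.pi / 2, X)) / 2
    return np.mean(2 * (expectation(theta, X) - y) * shift)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 300)
y = np.sign(np.cos(X + 0.8))          # labels +-1 produced by a hidden angle

theta, lr = 0.0, 0.5
for _ in range(100):
    theta -= lr * grad(theta, X, y)
print("learned angle:", theta)         # moves toward the hidden angle 0.8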
DOI: 10.1016/j.ejmech.2023.115284
2023
Cited 8 times
Design, synthesis, and biological evaluation of diaryl heterocyclic derivatives targeting tubulin polymerization with potent anticancer activities
A series of diaryl heterocyclic analogues were designed and synthesized as tubulin polymerization inhibitors. Among them, compound 6y showed the highest antiproliferative activity against the HCT-116 colon cancer cell line, with an IC50 value of 2.65 μM. Compound 6y also effectively inhibited tubulin polymerization in vitro (IC50 of 10.9 μM) and induced HCT-116 cell cycle arrest in the G2/M phase. In addition, compound 6y exhibited high metabolic stability in human liver microsomes (T1/2 = 106.2 min). Finally, 6y was also effective in suppressing tumor growth in an HCT-116 mouse colon model without apparent toxicity. Collectively, these results suggest that 6y represents a new class of tubulin inhibitors deserving further investigation.
DOI: 10.1016/j.ejmech.2023.115195
2023
Cited 6 times
Advances in the development of phosphodiesterase-4 inhibitors
Phosphodiesterase 4 (PDE4) hydrolyzes cyclic adenosine monophosphate (cAMP) and plays a vital role in many biological processes. PDE4 inhibitors have been widely studied as therapeutics for the treatment of various diseases, including asthma, chronic obstructive pulmonary disease (COPD), and psoriasis. Many PDE4 inhibitors have progressed to clinical trials, and some have been approved as therapeutic drugs. However, the development of PDE4 inhibitors for the treatment of COPD or psoriasis has been hampered by side effects such as emesis. Herein, this review summarizes advances in the development of PDE4 inhibitors over the last ten years, focusing on PDE4 sub-family selectivity, dual-target drugs, and therapeutic potential. Hopefully, this review will contribute to the development of novel PDE4 inhibitors as potential drugs.
DOI: 10.1016/j.apenergy.2023.121503
2023
Cited 5 times
Federated deep contrastive learning for mid-term natural gas demand forecasting
Accurate mid-term gas demand forecasting is crucial for gas companies and policymakers in achieving reliable gas supply plans, managing supply contracts, and operating efficiently to meet increasing gas demand. However, mid-term gas demand forecasting faces the problems of data paucity, caused by the low frequency of collecting monthly data, and heterogeneous consumption patterns across usage categories. This paper proposes a novel Federated Contrastive pretraining - Local Clustered Finetuning paradigm (FedCon-LCF) by integrating federated learning, deep contrastive learning, and clustering approaches. The proposed method can utilize data from multiple gas companies to overcome data paucity in a privacy-preserving way, and high-performance forecasting can be achieved by local clustered regression that accounts for the heterogeneous patterns. An improved hierarchical contrastive loss and a multi-scale regression loss are integrated to develop the Forecasting-Oriented Contrastive Learning model (FOCL), which can effectively extract information and generate fine-grained representations of time series for accurate forecasting. The proposed method is evaluated on a dataset collected from 11 gas companies in 11 different Chinese cities, with a total of 17,648 clients over 10 usage categories. The proposed method outperforms the benchmark LSTM model with an average improvement of 25.30% in MSE and 16.52% in MAE for 3-month-ahead, 6-month-ahead, 9-month-ahead, and 12-month-ahead gas demand forecasting.
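The federated ingredient is easiest to see in isolation. Below is a minimal federated-averaging sketch, assuming a plain linear model and synthetic data rather than the paper's FedCon-LCF architecture: each company fits locally, and only model weights, never raw consumption records, leave the premises.

import numpy as np

def local_update(w, X, y, lr=0.01, steps=50):
    # A few local gradient-descent steps on this company's private data.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0, 0.7])
companies = []
for _ in range(11):                              # 11 companies, as in the paper
    X = rng.normal(size=(120, 3))
    y = X @ w_true + 0.1 * rng.normal(size=120)
    companies.append((X, y))

w_global = np.zeros(3)
for _ in range(20):                              # communication rounds
    locals_ = [local_update(w_global.copy(), X, y) for X, y in companies]
    w_global = np.mean(locals_, axis=0)          # FedAvg aggregation
print("recovered weights:", np.round(w_global, 2))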
DOI: 10.1016/j.biopha.2023.115412
2023
Cited 4 times
Design, synthesis and anti-inflammatory activity study of lansiumamide analogues for treatment of acute lung injury
Acute lung injury (ALI) is an inflammation-mediated respiratory disease with a high mortality rate. Medications based on anti-inflammatory small molecules have been demonstrated in phase I and II clinical trials to considerably reduce ALI mortality. In this study, two series of lansiumamide analogues were designed, synthesized, and evaluated for anti-inflammatory activity for ALI treatment. We found that compound 8n exhibited the best anti-inflammatory activity, inhibiting LPS-induced expression of the proinflammatory cytokines interleukin-6 (IL-6) and interleukin-1β (IL-1β) in Raw264.7 cells and activating the Nrf2/HO-1 pathway. Furthermore, we discovered in an LPS-induced ALI mouse model that compound 8n significantly reduced the infiltration of inflammatory cells into lung tissue, thereby protecting lung tissues and improving ALI. Additionally, our mouse model study revealed that compound 8n had a good expectorant effect. These results consistently support that lansiumamide analogue 8n represents a new class of anti-inflammatory agents with potential as a lead compound for further development into a therapeutic drug for ALI treatment.
DOI: 10.47294/ksbda.25.1.2
2024
The Influence of Dynamic Principles of Motion Graphics on Users' Visual Search Efficiency
This study uses the motion graphics of dynamic advertisements as its research medium to examine how the dynamic principles of motion graphic design affect users' visual search efficiency. Using a single-factor experimental design, it investigates the effects of playback speed, direction of graphic motion, the area ratio of dynamic to static graphics, and Z-axis depth level on users' visual search efficiency. A portable eye-tracking device collected participants' gaze data while they searched dynamic interfaces, and the data were analyzed with descriptive statistics, independent-samples t-tests, and one-way ANOVA. The results show that playback speed, direction of graphic motion, and the area ratio of dynamic to static graphics all significantly affect users' visual search efficiency, whereas Z-axis depth level has no significant effect. According to the results, a 1x playback speed effectively attracts users' visual attention and can help improve visual search efficiency, and motion integrated across multiple directions is most helpful for attracting visual attention and improving search efficiency. In addition, dynamic graphics whose area is larger than that of the static graphics effectively attract visual attention, although in terms of search efficiency the dynamic-to-static area ratio made little difference. These findings are expected to provide useful guidance for improving visual cognition and user experience in motion graphic design and dynamic advertising.
DOI: 10.1039/d3ob01848b
2024
Thiosuccinimide enabled S–N bond formation to access <i>N</i>-sulfenylated sulfonamide derivatives with synthetic diversity
A thiosuccinimide-enabled S-N cross-coupling strategy has been established for the intermolecular N-sulfenylation of clinically approved sulfa drugs under additive-free conditions. This approach features simple operation, high chemoselectivity for sulfenylating the phenylamino group of sulfonamides, wide substrate scope, and easy scale-up, affording N-sulfenylated products in moderate to excellent yields (up to 90%). In addition, we also found that this transformation can be realized in a one-pot manner by employing readily available thiols as starting materials, and the obtained sulfonamide derivatives are capable of various late-stage functionalizations, including oxidation, arylation, benzylation, and methylation.
DOI: 10.48550/arxiv.2401.04724
2024
A parametrically programmable delay line for microwave photons
Delay lines capable of storing quantum information are crucial for advancing quantum repeaters and hardware efficient quantum computers. Traditionally, they are physically realized as extended systems that support wave propagation, such as waveguides. But such delay lines typically provide limited control over the propagating fields. Here, we introduce a parametrically addressed delay line (PADL) for microwave photons that provides a high level of control over the dynamics of stored pulses, enabling us to arbitrarily delay or even swap pulses. By parametrically driving a three-waving mixing superconducting circuit element that is weakly hybridized with an ensemble of resonators, we engineer a spectral response that simulates that of a physical delay line, while providing fast control over the delay line's properties and granting access to its internal modes. We illustrate the main features of the PADL, operating on pulses with energies on the order of a single photon, through a series of experiments, which include choosing which photon echo to emit, translating pulses in time, and swapping two pulses. We also measure the noise added to the delay line from our parametric interactions and find that the added noise is much less than one photon.
DOI: 10.1007/s41781-024-00114-3
2024
PanDA: Production and Distributed Analysis System
Abstract The Production and Distributed Analysis (PanDA) system is a data-driven workload management system engineered to operate at the LHC data processing scale. The PanDA system provides a solution for scientific experiments to fully leverage their distributed heterogeneous resources, showcasing scalability, usability, flexibility, and robustness. The system has successfully proven itself through nearly two decades of steady operation in the ATLAS experiment, addressing intricate requirements such as diverse resources distributed worldwide at about 200 sites, thousands of scientists analyzing the data remotely, a volume of processed data beyond the exabyte scale, dozens of scientific applications to support, and data processing consuming several billion hours of computing usage per year. PanDA's flexibility and scalability make it suitable for the High Energy Physics community and wider science domains at the exascale. Beyond High Energy Physics, PanDA's relevance extends to other big data sciences, as evidenced by its adoption in the Vera C. Rubin Observatory and the sPHENIX experiment. As the significance of advanced workflows continues to grow, PanDA has transformed into a comprehensive ecosystem, effectively tackling challenges associated with emerging workflows and evolving computing technologies. The paper discusses PanDA's prominent role in the scientific landscape, detailing its architecture, functionality, deployment strategies, project management approaches, results, and evolution into an ecosystem.
DOI: 10.1051/epjconf/202429504026
2024
Integrating the PanDA Workload Management System with the Vera C. Rubin Observatory
The Vera C. Rubin Observatory will produce an unprecedented astronomical data set for studies of the deep and dynamic universe. Its Legacy Survey of Space and Time (LSST) will image the entire southern sky every three to four days and produce tens of petabytes of raw image data and associated calibration data over the course of the experiment’s run. More than 20 terabytes of data must be stored every night, and annual campaigns to reprocess the entire dataset since the beginning of the survey will be conducted over ten years. The Production and Distributed Analysis (PanDA) system was evaluated by the Rubin Observatory Data Management team and selected to serve the Observatory’s needs due to its demonstrated scalability and flexibility over the years, for its Directed Acyclic Graph (DAG) support, its support for multi-site processing, and its highly scalable complex workflows via the intelligent Data Delivery Service (iDDS). PanDA is also being evaluated for prompt processing where data must be processed within 60 seconds after image capture. This paper will briefly describe the Rubin Data Management system and its Data Facilities (DFs). Finally, it will describe in depth the work performed in order to integrate the PanDA system with the Rubin Observatory to be able to run the Rubin Science Pipelines using PanDA.
DOI: 10.1088/1742-6596/664/6/062065
2015
Cited 10 times
The ATLAS Event Service: A new approach to event processing
The ATLAS Event Service (ES) implements a new fine-grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot-market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine-grained outputs that give ES-based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabilities, its architecture and the highly scalable tools and technologies employed in its implementation, and its applications in ATLAS processing on HPCs, commercial cloud resources, volunteer computing, and grid resources.
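The resilience argument is structural: if work is handed out in small event ranges and each output is shipped as soon as it is produced, losing a worker forfeits at most one range. A minimal conceptual sketch of that dispatch pattern, with illustrative names only, not the Event Service's actual implementation:

import queue, threading

ranges = queue.Queue()
for start in range(0, 1000, 50):                 # 20 ranges of 50 events each
    ranges.put((start, start + 50))

outputs, lock = [], threading.Lock()

def worker():
    while True:
        try:
            lo, hi = ranges.get_nowait()         # pull the next event range
        except queue.Empty:
            return
        result = sum(range(lo, hi))              # stand-in for event processing
        with lock:                                # "stream" the output right away
            outputs.append(((lo, hi), result))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(f"{len(outputs)} ranges processed")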
DOI: 10.3390/gels9080659
2023
Carboxymethyl Chitosan/Sodium Alginate/Chitosan Quaternary Ammonium Salt Composite Hydrogel Supported 3J for the Treatment of Oral Ulcer
Oral ulcer is a common inflammatory disease of the oral mucosa, causing severe burning pain and great inconvenience to daily life. In this study, compound 3J with anti-inflammatory activity was synthesized beforehand. Following that, an intelligent composite hydrogel loaded with 3J was designed with sodium alginate, carboxymethyl chitosan, and chitosan quaternary ammonium salt as the skeleton, and its therapeutic effect on a rat oral ulcer model was investigated. The results show that the composite hydrogel has a dense honeycomb structure, which is conducive to drug loading and wound ventilation, and is biodegradable. It has certain antibacterial effects and good anti-inflammatory activity. When loaded with 3J, it reduced levels of TNF-α and IL-6 in inflammatory cells by up to 50.0%. It has excellent swelling and water-retention properties, with a swelling rate of up to 765.0% in a pH 8.5 environment. The presence of a large number of quaternary ammonium, carboxyl, and hydroxyl groups gives it markedly different swelling behavior in different pH environments, demonstrating dual pH sensitivity, which helps it adapt to the highly dynamic changes of the oral environment. Compared with a single hydrogel or drug treatment, the drug-loaded hydrogel has a better effect on the treatment of oral ulcers.
DOI: 10.1039/c8cp02720j
2018
Cited 8 times
Zero-thermal-hysteresis magnetocaloric effect induced by magnetic transition at a morphotropic phase boundary in Heusler Ni<sub>50</sub>Mn<sub>36</sub>Sb<sub>14−x</sub>In<sub>x</sub> alloys
Enhanced MCE with zero thermal hysteresis is achieved in Ni<sub>50</sub>Mn<sub>36</sub>Sb<sub>14−x</sub>In<sub>x</sub> by constructing a MPB-involved phase diagram.
DOI: 10.1051/epjconf/202125102007
2021
Cited 5 times
An intelligent Data Delivery Service for and beyond the ATLAS experiment
The intelligent Data Delivery Service (iDDS) has been developed to cope with the huge increase in computing and storage resource usage expected in the coming LHC data-taking runs. iDDS has been designed to intelligently orchestrate workflow and data management systems, decoupling data pre-processing, delivery, and main processing in various workflows. It is an experiment-agnostic service built around a workflow-oriented structure to work with existing and emerging use cases in ATLAS and other experiments. Here we present the motivation for iDDS, its design schema and architecture, use cases and current status, and plans for the future.
DOI: 10.22323/1.367.0049
2019
Cited 5 times
Application of Quantum Machine Learning to High Energy Physics Analysis at LHC using IBM Quantum Computer Simulators and IBM Quantum Computer Hardware
The ambitious HL-LHC program will require enormous computing resources in the next two decades. A burning question is whether quantum computers can address the ever-growing demand for computing resources in High Energy Physics in general, and physics at the LHC in particular. Using IBM Quantum Computer Simulators and Quantum Computer Hardware, we have successfully employed the Quantum Support Vector Machine method (QSVM) for a ttH (H to two photons) analysis, probing the Higgs coupling to top quarks at the LHC.
DOI: 10.1088/1742-6596/664/9/092025
2015
Cited 4 times
Fine grained event processing on HPCs with the ATLAS Yoda system
High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine-grained, event-level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring that the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
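The MPI master-client pattern the abstract describes can be sketched in a few lines with mpi4py: rank 0 hands out event ranges on demand, and workers keep requesting work until they receive a stop signal. This is an illustrative sketch of the pattern, not Yoda's code; run it with something like "mpiexec -n 4 python sketch.py".

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:                                     # master
    work = [(i, i + 100) for i in range(0, 2000, 100)]
    status = MPI.Status()
    active = comm.Get_size() - 1
    while active:
        comm.recv(source=MPI.ANY_SOURCE, tag=1, status=status)   # work request
        item = work.pop() if work else None                       # None = stop
        comm.send(item, dest=status.Get_source(), tag=2)
        if item is None:
            active -= 1
else:                                             # worker
    while True:
        comm.send(rank, dest=0, tag=1)            # ask for an event range
        rng = comm.recv(source=0, tag=2)
        if rng is None:
            break
        _ = sum(range(*rng))                      # stand-in for event processing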
DOI: 10.1088/1742-6596/898/5/052010
2017
Cited 4 times
Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.
DOI: 10.1088/1742-6596/898/6/062019
2017
Cited 4 times
Experiences with the new ATLAS Distributed Data Management System
The ATLAS Distributed Data Management (DDM) system has evolved drastically in the last two years, with the Rucio software fully replacing the previous system before the start of LHC Run-2. The ATLAS DDM system now manages more than 250 petabytes spread across 130 storage sites and can handle file transfer rates of up to 30 Hz. In this paper, we discuss the experience acquired in developing, commissioning, running, and maintaining such a large system. First, we describe the general architecture of the system, our integration with external services like the WLCG File Transfer Service, and the evolution of the system over its first years of production. Then, we show the performance of the system, describe the integration of new technologies such as object stores, and outline some new developments, which mainly focus on performance and automation.
DOI: 10.1051/epjconf/201921403054
2019
Cited 4 times
The next generation PanDA Pilot for and beyond the ATLAS experiment
The Production and Distributed Analysis system (PanDA) is a pilot-based workload management system that was originally designed for the ATLAS Experiment at the LHC and for use with grid sites. Since the coming LHC data-taking runs will require more resources than grid computing alone can provide, the various LHC experiments are engaged in an ambitious program to extend the computing model to include opportunistically used resources such as High Performance Computers (HPCs), clouds, and volunteer computers. To this end, PanDA is being extended beyond grids and ATLAS to be used on the new types of resources as well as by other experiments. A key new component is being developed, the next generation PanDA Pilot (Pilot 2). Pilot 2 is a complete rewrite of the original PanDA Pilot, which has been used in the ATLAS Experiment for over a decade. The new Pilot architecture follows a component-based approach, which improves system flexibility, enables clear workflow control, and evolves the system according to modern functional use cases, facilitating future feature requests from new and old PanDA users. This paper describes Pilot 2, its architecture, and its place in the PanDA hierarchy. Furthermore, its ability to be used either as a command-line tool or through APIs is explained, as well as how its workflows and components are being streamlined for usage on both grids and opportunistically used resources for and beyond the ATLAS experiment.
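A component-based pilot can be pictured as a pipeline of pluggable stages, so that supporting a new resource type means swapping a component rather than rewriting the pilot. A minimal conceptual sketch, with illustrative names that are not Pilot 2's actual module layout:

def stage_in(job):
    job["inputs_ready"] = True; return job       # fetch input data

def execute_payload(job):
    job["result"] = f"processed {job['task']}"; return job

def stage_out(job):
    job["outputs_shipped"] = True; return job    # ship outputs to storage

PIPELINE = [stage_in, execute_payload, stage_out]

def run_pilot(job, pipeline=PIPELINE):
    for component in pipeline:    # components can be replaced per resource type
        job = component(job)
    return job

print(run_pilot({"task": "simulation"}))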
DOI: 10.11650/twjm/1500405182
2008
Cited 3 times
MULTIPLE POSITIVE SOLUTIONS FOR p-LAPLACIAN FUNCTIONAL DYNAMIC EQUATIONS ON TIME SCALES
In this paper we consider the following boundary value problems for $p$-Laplacian functional dynamic equations on time scales: $$\left[\Phi_p(u^{\triangle}(t))\right]^{\nabla} + a(t)f(u(t),u(\mu(t))) = 0, \quad t\in (0,T)_{\mathbf{T}},$$ subject to either $$u_0(t)=\varphi(t),\ t\in[-r,0]_{\mathbf{T}}, \qquad u(0)-B_0(u^{\triangle}(\eta))=0, \qquad u^{\triangle}(T)=0,$$ or $$u_0(t)=\varphi(t),\ t\in[-r,0]_{\mathbf{T}}, \qquad u^{\triangle}(0)=0, \qquad u(T)+B_1(u^{\triangle}(\eta))=0.$$ Existence criteria for at least three positive solutions are established by using the well-known Leggett-Williams fixed-point theorem. An example is also given to illustrate the main results.
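For context (this is the standard notation of the field, not spelled out in the abstract): $\Phi_p$ denotes the one-dimensional $p$-Laplacian,
$$\Phi_p(u) = |u|^{p-2}u, \qquad p > 1, \qquad \Phi_p^{-1} = \Phi_q \ \text{ where } \ \tfrac{1}{p} + \tfrac{1}{q} = 1,$$
and $u^{\triangle}$, $u^{\nabla}$ are the delta and nabla derivatives on the time scale $\mathbf{T}$.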
DOI: 10.57262/ade028-0304-217
2023
Multiple positive bound state solutions for fractional Schrödinger-Poisson system with critical growth
In this paper, we deal with the following critical fractional Schrödinger-Poisson system without subcritical perturbation $$ \begin{cases} (-\Delta)^{s} u + V(x)u + K(x)\phi u=|u|^{2_{s}^{*}-2}u, & x\in\mathbb{R}^{3},\\ (-\Delta)^{t}\phi=K(x)u^{2}, & x\in\mathbb{R}^{3}, \end{cases} $$ where $s\in(\frac{3}{4},1)$, $t\in(0,1)$, and $2^{\ast}_{s}=\frac{6}{3-2s}$ is the critical Sobolev exponent. When $V(x)$ is positive and bounded from below, combining variational methods with Brouwer degree theory, we investigate the existence and multiplicity of positive bound state solutions to this system. The results obtained in this paper extend and improve some recent works in which the potential function $V(x)$ may vanish at infinity.
DOI: 10.25236/ajcis.2023.060108
2023
Research on how to optimize data structures with C++ language
As one of the most fundamental high-level programming languages, C++ has the advantages of simple attributes, convenient use, freedom from compilation-environment restrictions, few syntax restrictions, and portability across different operating systems, which makes migration easy. Since its advent, the C language has been popular among programming enthusiasts. Through continuous improvement over its long development, it has formed a complete theoretical system and plays a pivotal role among programming languages. This paper focuses on how to optimize data structures with the C++ language.
DOI: 10.1002/lpor.202370016
2023
Terahertz Semiconductor Dual‐Comb Sources with Relative Offset Frequency Cancellation (Laser Photonics Rev. 17(4)/2023)
Terahertz Semiconductor Dual-Comb Sources with Relative Offset Frequency Cancellation In article number 2200418, J.C. Cao, Heping Zeng, Hua Li, and colleagues present a self-reference approach to improve the stability of terahertz semiconductor-based quantum cascade laser dual-comb sources. The proposed method can entirely remove the common dual-comb carrier offset frequency noise and thereby significantly improve the long-term stability of dual-comb sources emitting in the terahertz frequency range. The self-referenced dual-comb sources show potential for spectroscopic applications.
DOI: 10.48550/arxiv.2306.11249
2023
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
Spatio-temporal predictive learning is a learning paradigm that enables models to learn spatial and temporal patterns by predicting future frames from given past frames in an unsupervised manner. Despite remarkable progress in recent years, a lack of systematic understanding persists due to the diverse settings, complex implementation, and difficult reproducibility. Without standardization, comparisons can be unfair and insights inconclusive. To address this dilemma, we propose OpenSTL, a comprehensive benchmark for spatio-temporal predictive learning that categorizes prevalent approaches into recurrent-based and recurrent-free models. OpenSTL provides a modular and extensible framework implementing various state-of-the-art methods. We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectories, human motion, driving scenes, traffic flow, and weather forecasting. Based on our observations, we provide a detailed analysis of how model architecture and dataset properties affect spatio-temporal predictive learning performance. Surprisingly, we find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models. Thus, we further extend the common MetaFormers to boost recurrent-free spatio-temporal predictive learning. We open-source the code and models at https://github.com/chengtan9907/OpenSTL.
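The recurrent-free idea the abstract highlights can be shown in miniature: stack the T past frames along the channel axis and map them to the T future frames with a single feed-forward network, with no per-step recurrence. A conceptual PyTorch sketch, not OpenSTL's actual model code:

import torch
import torch.nn as nn

T, C, H, W = 10, 1, 64, 64                       # clip length, channels, size

model = nn.Sequential(
    nn.Conv2d(T * C, 64, 3, padding=1), nn.GELU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.GELU(),
    nn.Conv2d(64, T * C, 3, padding=1),          # emit all future frames at once
)

past = torch.randn(8, T * C, H, W)               # batch of 8 clips, frames stacked
future = model(past)                              # (8, T*C, H, W), one forward pass
loss = nn.functional.mse_loss(future, torch.randn_like(future))
loss.backward()
print(future.shape)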
DOI: 10.54254/2754-1169/18/20230084
2023
The Development Status of the Fan Economy from the Perspective of Consuming Psychology in the Era of New Media
The widespread use of new media and the Internet has altered people's consumption behaviors, and brand management and product promotion strategies have improved accordingly. The conventional consumption model primarily satisfies people's fundamental material necessities. With the growth of the social economy, people's consumption has expanded to include spiritual and emotional demands, and the new era's consumption paradigm has shifted so that consumer demand drives industry production. Media and business are investigating consumers' emotional consumption and are able to capture people's need to find emotional solace through consumption, using the fan economy to shape the emotional resonance between a brand's offerings and its customers and to build a strong social presence for the brand. This paper examines the fan economy's current state of development under new media from the perspective of consumer psychology. Through research and a review of the relevant literature, it identifies the key elements that influence customers' purchasing decisions in today's society, and considers how the evolution of the Internet has changed the consumer market and customer behavior. The article's main goal is to demonstrate that, in judging whether an action is worthwhile, customers are now more interested in emotional value than in purely material or economic advantages.
DOI: 10.48550/arxiv.2312.04921
2023
Integrating the PanDA Workload Management System with the Vera C. Rubin Observatory
The Vera C. Rubin Observatory will produce an unprecedented astronomical data set for studies of the deep and dynamic universe. Its Legacy Survey of Space and Time (LSST) will image the entire southern sky every three to four days and produce tens of petabytes of raw image data and associated calibration data over the course of the experiment's run. More than 20 terabytes of data must be stored every night, and annual campaigns to reprocess the entire dataset since the beginning of the survey will be conducted over ten years. The Production and Distributed Analysis (PanDA) system was evaluated by the Rubin Observatory Data Management team and selected to serve the Observatory's needs due to its demonstrated scalability and flexibility over the years, for its Directed Acyclic Graph (DAG) support, its support for multi-site processing, and its highly scalable complex workflows via the intelligent Data Delivery Service (iDDS). PanDA is also being evaluated for prompt processing where data must be processed within 60 seconds after image capture. This paper will briefly describe the Rubin Data Management system and its Data Facilities (DFs). Finally, it will describe in depth the work performed in order to integrate the PanDA system with the Rubin Observatory to be able to run the Rubin Science Pipelines using PanDA.
1871
Biographical sketches and incidents of the Taiping Rebellion
DOI: 10.1016/j.nuclphysbps.2007.11.147
2008
CMS Monte Carlo production operations in a distributed computing environment
Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.
DOI: 10.1088/1742-6596/119/5/052019
2008
CMS Monte Carlo production in the WLCG computing grid
Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG).
DOI: 10.1051/epjconf/201921404016
2019
Grid production with the ATLAS Event Service
ATLAS has developed and previously presented a new computing architecture, the Event Service, that allows real-time delivery of fine-grained workloads which process dispatched events (or event ranges) and immediately stream outputs. The principal aim was to profit from opportunistic resources such as commercial clouds, supercomputing, and volunteer computing, and otherwise unused cycles on clusters and grids. During the development and deployment phase, its utility on the grid and conventional clusters for the exploitation of otherwise unused cycles also became apparent. Here we describe our experience commissioning the Event Service on the grid in the ATLAS production system. We study its performance compared with standard simulation production. We describe the integration with the ATLAS data management system to ensure scalability and compatibility with object stores. Finally, we outline the remaining steps towards a fully commissioned system.
DOI: 10.1051/epjconf/201921404034
2019
Towards an Event Streaming Service for ATLAS data processing
The ATLAS experiment at the LHC is gradually transitioning from the traditional file-based processing model to dynamic workflow management at the event level with the ATLAS Event Service (AES). The AES assigns fine-grained processing jobs to workers and streams out the data in quasi-real time, ensuring fully efficient utilization of all resources, including the most volatile. The next major step in this evolution is the possibility of intelligently streaming the input data itself to workers. The Event Streaming Service (ESS) is now in development to asynchronously deliver only the input data required for processing when it is needed, protecting the application payload from WAN latency without creating expensive long-term replicas. In the current prototype implementation, ESS processes run on compute nodes in parallel to the payload, reading the input event ranges remotely over the network and replicating them in small input files that are passed to the application. In this contribution, we present the performance of the ESS prototype for different types of workflows in comparison to tasks accessing remote data directly. Based on the experience gained with the current prototype, we are now moving to the development of a server-side component of the ESS. The service can evolve progressively into a powerful Content Delivery Network-like capability for data streaming, ultimately enabling the delivery of 'virtual data' generated on demand.
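The sidecar pattern described here, a prefetcher that pulls event ranges remotely and materializes each as a small local file for the payload, is easy to sketch. The Python below is a conceptual illustration with invented names, not the ESS prototype itself:

import threading, queue, tempfile, os, json

ready = queue.Queue(maxsize=4)                    # bounded prefetch buffer

def fetch_range(lo, hi):
    return list(range(lo, hi))                    # stand-in for a remote read

def prefetcher():
    for lo in range(0, 500, 100):
        events = fetch_range(lo, lo + 100)
        fd, path = tempfile.mkstemp(suffix=".json")
        with os.fdopen(fd, "w") as f:
            json.dump(events, f)
        ready.put(path)                           # hand a small input file over
    ready.put(None)                               # end of stream

threading.Thread(target=prefetcher, daemon=True).start()

while (path := ready.get()) is not None:          # payload consumes local files,
    with open(path) as f:                         # shielded from WAN latency
        events = json.load(f)
    os.remove(path)
    print(f"processed {len(events)} events")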
DOI: 10.1088/1742-6596/762/1/012027
2016
Scaling up ATLAS Event Service to production levels on opportunistic computing platforms
Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and the ATLAS High Level Trigger farm between data-taking periods. Because of specific aspects of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires the workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.
DOI: 10.22323/1.398.0842
2022
Application of Quantum Machine Learning to HEP Analysis at LHC Using Quantum Computer Simulators and Quantum Computer Hardware
Machine learning enjoys widespread success in High Energy Physics (HEP) analyses at the LHC. However, the ambitious HL-LHC program will require much more computing resources in the next two decades. Quantum computing may offer speed-ups for HEP physics analyses at the HL-LHC and can be a new computational paradigm for big data analyses in High Energy Physics. We have successfully employed three methods for two LHC flagship analyses, ttH (Higgs production in association with two top quarks) and H->mumu (Higgs decay to two muons, second-generation fermions): (1) the Variational Quantum Classifier (VQC) method, (2) the Quantum Support Vector Machine Kernel (QSVM-kernel) method, and (3) the Quantum Neural Network (QNN) method. We address the progressive improvements in performance from method (1) to method (3). We present our experiences and results of a study on LHC High Energy Physics data analyses with the IBM Quantum Simulator and Quantum Hardware (using the IBM Qiskit framework), the Google Quantum Simulator (using the Google Cirq framework), and the Amazon Quantum Simulator (using the Amazon Braket cloud service). The work is in the context of a qubit platform (a gate-model quantum computer). Taking into account the present limitations of hardware access, the different quantum machine learning methods are studied on simulators and the results are compared with classical machine learning methods (BDT, classical Support Vector Machine, and classical Neural Network). Furthermore, we apply quantum machine learning on IBM quantum hardware to compare performance between the quantum simulator and the quantum hardware. The work is performed by an international and interdisciplinary collaboration with the Department of Physics and Department of Computer Sciences of the University of Wisconsin, the CERN Quantum Technology Initiative, IBM Research Zurich, the IBM T.J. Watson Research Center, the Fermilab Quantum Institute, the BNL Computational Science Initiative, the State University of New York at Stony Brook, and Quantum Computing and AI Research of Amazon Web Services. This work pioneers a close collaboration of academic institutions with industrial corporations in the High Energy Physics analysis effort. Though the size of event samples in future HL-LHC physics and the limited number of qubits pose some challenges to quantum machine learning studies for High Energy Physics, more advanced quantum computers with larger numbers of qubits, reduced noise, and improved running time (as envisioned by IBM and Google) may outperform classical machine learning in both classification power and speed. Although the era of efficient quantum computing may still be years away, we have made promising progress and obtained preliminary results in applying quantum machine learning to High Energy Physics, as a proof of principle.
DOI: 10.1007/s40843-022-2260-6
2022
Controlled synthesis of ultrathin metallic MoO2 nanosheets and their van der Waals contact applications
Van der Waals contact through assembly of two-dimensional (2D) semiconductors with metallic materials by van der Waals forces is considered one of the most promising methods to solve the contact problem in 2D-material-based electronics. However, previous studies have mostly focused on semiconductor materials, while the preparation and properties of metallic materials have been less studied. In this paper, we report the controlled synthesis of metallic layered MoO2 flakes with thicknesses of 3.5 to 106.8 nm using the chemical vapor deposition method. X-ray diffraction, scanning tunneling microscopy, and transmission electron microscopy were used to characterize the fabricated MoO2 nanoplates. The results indicate that the samples have a monoclinic crystal structure with high crystal quality and stability. Electrical characterization reveals the excellent conduction behavior of thin MoO2 flakes, with a conductivity exceeding 10^6 S m^-1, comparable to those of graphene and some metals. In addition, we explored the contact applications of thin MoO2 flakes in a MoS2 field-effect transistor (FET) by introducing MoO2 flakes as a van der Waals contact. High carrier mobility combined with an optimized Schottky barrier height was achieved in the designed MoS2 FET. This study provides new insights into the preparation as well as application of metallic materials and is expected to promote the development of 2D-material-based electronics.
DOI: 10.1088/1742-6596/219/7/072014
2010
Distributed analysis with PROOF in ATLAS collaboration
The Parallel ROOT Facility – PROOF – is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF can be configured to work with centralized storage systems, but it is especially effective together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. It works efficiently on different types of hardware and scales well from a multi-core laptop to large computing farms. From that point of view it is well suited for both large central analysis facilities and Tier 3 type analysis farms. PROOF can be used in interactive or batch-like regimes. The interactive regime allows the user to work with typically distributed data from the ROOT command prompt and get real-time feedback on analysis progress and intermediate results. We discuss our experience with PROOF in the context of ATLAS Collaboration distributed analysis. In particular, we discuss PROOF performance in various analysis scenarios and in multi-user, multi-session environments. We also describe PROOF integration with the ATLAS distributed data management system and the prospects of running PROOF on geographically distributed analysis farms.
DOI: 10.1109/csie.2009.30
2009
Web Service Grid Resource Management System
Accessing distributed computational resources and managing users' jobs is a key issue for a Grid framework supporting a large distributed computing environment. The Web Service Grid Resource Management System (WSGRMS) is a scheduling system that provides an easy way to manage distributed computational resources and an efficient way to process a large number of user requests for computing. It is built on Web Services to meet the grid's requirements for robustness and scalability. Based on agents, it implements a virtual cluster system. In this paper, we describe the architecture and implementation of WSGRMS and discuss how it is integrated into the LCG/OSG Grid.
DOI: 10.1088/1742-6596/1085/3/032031
2018
Global heterogeneous resource harvesting: the next-generation PanDA Pilot for ATLAS
The Production and Distributed Analysis system (PanDA), used for workload management in the ATLAS Experiment at the LHC for over a decade, has in recent years expanded its reach to diverse new resource types such as HPCs, and innovative new workflows such as the Event Service. PanDA meets the heterogeneous resources it harvests in the PanDA Pilot, which has embarked on a next-generation reengineering to efficiently integrate and exploit the new platforms and workflows. The new modular architecture is the product of a year of design and prototyping in conjunction with the design of a completely new component, Harvester, that will mediate a richer flow of control and information between Pilot and PanDA. Harvester will enable more intelligent and dynamic matching between processing tasks and resources, with an initial focus on HPCs, simplifying the operator and user view of a PanDA site but internally leveraging deep information gathering on the resource to accrue detailed knowledge of a site's capabilities and dynamic state to inform the matchmaking. This paper will give an overview of the new Pilot architecture, how it will be used in and beyond ATLAS, its relation to Harvester, and the work ahead.
DOI: 10.1051/epjconf/202024504015
2020
Towards an Intelligent Data Delivery Service
The ATLAS Event Streaming Service (ESS) at the LHC is an approach to preprocess and deliver data for the Event Service (ES), which implements a fine-grained approach to ATLAS event processing. The ESS allows one to asynchronously deliver only the input events required by ES processing, with the aim of decreasing data traffic over the WAN and improving overall data-processing throughput. A prototype of the ESS was developed to deliver streaming events to fine-grained ES jobs. Based on it, an intelligent Data Delivery Service (iDDS) is under development to decouple the "cold format" and the processing format of the data, which also opens the opportunity to include the production systems of other HEP experiments. Here we first present the ESS model view and its motivations for the iDDS system, and then present the iDDS schema, architecture, and applications.
2019
Application of Quantum Machine Learning to High Energy Physics Analysis at LHC using IBM Quantum Computer Simulators and IBM Quantum Computer Hardware
DOI: 10.22323/1.390.0930
2021
Application of Quantum Machine Learning to High Energy Physics Analysis at LHC using IBM Quantum Computer Simulators and IBM Quantum Computer Hardware
One of the major objectives of the experimental programs at the LHC is the discovery of new physics. This requires the identification of rare signals in immense backgrounds. Using machine learning algorithms greatly enhances our ability to achieve this objective. With the progress of quantum technologies, quantum machine learning could become a powerful tool for data analysis in high energy physics. In this study, using IBM gate-model quantum computing systems, we employ the quantum variational classifier method and the quantum kernel estimator method in two recent LHC flagship physics analyses: ttH (Higgs boson production in association with a top quark pair) and H → μ+μ− (Higgs boson decays to two muons). We have obtained early results with 10 qubits on the IBM quantum simulator and the IBM quantum hardware. On the quantum simulator, the quantum machine learning methods perform similarly to classical algorithms such as SVM (support vector machine) and BDT (boosted decision tree), which are often employed in LHC physics analyses. On the quantum hardware, the quantum machine learning methods have shown promising discrimination power, comparable to that on the quantum simulator. This study demonstrates that quantum machine learning has the ability to differentiate between signal and background in realistic physics datasets.
2016
Integration Of PanDA Workload Management System With Supercomputers for ATLAS
2016
Simplified pilot module development and testing within the ATLAS PanDA Pilot 2.0 Project
DOI: 10.14257/ijhit.2016.9.3.14
2016
Towards Unified Business Process Modeling and Verification for Role-based Resource-oriented Service Composition
With the prevalence of ubiquitous computing, big data, and the Internet of Things in cloud computing environments, it is important for service composition to consider the collaboration, heterogeneity, and isolation of multi-tenant applications together with information security and privacy. Current methods need to be readdressed to cope with cross-organizational, multi-role, knowledge-intensive service composition in an integrated way. Based on the modeling and verification theories of hierarchical colored Petri nets, a resource-oriented collaborative workflow model, its resource control model, and a joint modeling and verification method are proposed, presenting a unified solution that bridges the gap between the traditional structure-oriented workflow execution model and the resource-oriented workflow domain model, taking into account the underlying roles, tasks, resources, and their association and coordination at both design time and runtime. In our approach, a business process is divided into three layers: the backbone top-level process, the task fulfillment sub-process, and the task execution sub-process, in order to reduce the complexity of model verification. In addition, this paper gives in-depth discussions on the fine control of implicit parallel and multi-threaded process executions. Finally, case studies show that the proposed methods are applicable not only to the modeling and verification of traditional task-oriented workflows but also to knowledge- or data-intensive workflows.
DOI: 10.1109/icitec.2014.7105627
2014
A goal-driven and content-oriented planning system for knowledge-intensive service composition
With the rapid development of Cloud computing, social computing, and Big Data, service composition in support of increasingly knowledge-intensive innovation activities across enterprises and organizations faces the new challenge of dealing with diversified user needs and changeable process structures. Automatic service composition mainly makes use of artificial intelligence planning techniques to address these challenges. But current research focuses on separate aspects of these challenges, and special effort should be made to put goals, domain knowledge, and activities into a unified perspective. This paper proposes a goal-driven, content-oriented automatic service composition method based on Hierarchical Task Networks. First, a unified planning-domain knowledge modeling approach is given covering goals, contents, tasks, and their relationships; then a planning framework and corresponding algorithms are designed by extending Simple Hierarchical Ordered Planner 2; finally, a loosely coupled, extensible, and flexible planning system is implemented, and experiments demonstrate its feasibility and features.
DOI: 10.1088/1742-6596/331/1/012009
2011
Data-oriented scheduling for PROOF
The Parallel ROOT Facility - PROOF - is a distributed analysis system optimized for I/O-intensive analysis tasks on HEP data. With the LHC entering the analysis phase, PROOF has become a natural ingredient for computing farms at the Tier 3 level. These analysis facilities will typically be used by a few tens of users, and can also be federated into a sort of analysis cloud corresponding to the Virtual Organization of the experiment. Proper scheduling is required to guarantee fair resource usage, to enforce priority policies, and to optimize throughput. In this paper we discuss an advanced priority system that we are developing for PROOF. The system has been designed to automatically adapt to unknown task lengths, to take into account data location and availability (including distribution across geographically separated sites), and the default {group, user} priorities. In this system, every element - user, group, dataset, job slot, and storage - gets its own priority, and those priorities are dynamically linked with each other. In order to tune the interplay between the various components, we have designed and started implementing a simulation application that can model various types and sizes of PROOF clusters. In this application a monitoring package records all such changes so that we can easily understand and tune performance. We will discuss the status of our simulation and show examples of the results we expect from it.
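One simple way dynamically linked priorities can work is to let a user's effective priority combine a static group weight with the user's recent consumption, so heavy users decay and idle users recover. The formula below is purely illustrative, not the priority function the paper develops:

def effective_priority(group_weight, recent_usage, total_usage, decay=0.7):
    # Effective priority drops as the user's share of recent usage grows.
    share = recent_usage / total_usage if total_usage else 0.0
    return group_weight * (1.0 - decay * share)

users = {"alice": (10.0, 300.0), "bob": (10.0, 50.0), "carol": (5.0, 0.0)}
total = sum(used for _, used in users.values())
for name, (weight, used) in users.items():
    print(name, round(effective_priority(weight, used, total), 2))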
2011
Automatic penetration test framework based on unified data format mechanism
BackTrack 4 is a highly regarded penetration test platform. It contains a large, up-to-date collection of security tools, but it cannot work efficiently without supporting data. We propose a penetration test framework (PTF) with a unified data format mechanism, which can accomplish penetration testing automatically and efficiently. Tools are invoked automatically for information detection, vulnerability assessment, and report creation. Real network experiments show that the PTF can greatly enhance the effectiveness of penetration tests using BackTrack 4.
DOI: 10.1088/1742-6596/898/6/062002
2017
Production experience with the ATLAS Event Service
The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms.
DOI: 10.48550/arxiv.2203.13498
2022
Strain-dependent structural and electronic reconstructions in long-wavelength WS$_{2}$ moiré superlattices
In long-wavelength moiré superlattices of stacked transition metal dichalcogenides (TMDs), structural reconstruction ubiquitously occurs, which has been reported to significantly impact their electronic properties. However, a complete microscopic understanding of the interplay between lattice reconstruction and the alteration of electronic properties, and of their further response to external perturbations in reconstructed TMD moiré superlattices, is still lacking. Here, using scanning tunneling microscopy (STM) and scanning tunneling spectroscopy (STS) combined with first-principles calculations, we study the strain-dependent structural reconstruction and its correlated electronic reconstruction in a long-wavelength H-type WS$_{2}$ moiré superlattice at the nanometer scale. We observe that long-wavelength WS$_{2}$ moiré superlattices experiencing strong atomic reconstruction transform into a hexagonal array of screw dislocations separating large H-stacked domains. Both the geometry and the moiré wavelength of the superlattice are dramatically tuned by external intralayer heterostrain in our experiment. Remarkably, the STS measurements further demonstrate that the location of the K point in the conduction band is modulated sensitively by strain-induced lattice deformation at the nanometer scale in this system, with the maximum energy shift reaching up to 300 meV. Our results highlight that intralayer strain plays a vital role in determining the structural and electronic properties of TMD moiré superlattices.
DOI: 10.48550/arxiv.2206.10187
2022
Self-Referenced Terahertz Semiconductor Dual-Comb Sources
Employing two frequency combs with a slight difference in repetition frequencies, the dual-comb source shows unique advantages in high-precision spectroscopy, imaging, ranging, communications, etc. In the terahertz (THz) frequency range, the electrically pumped quantum cascade laser (QCL) offers the possibility of realizing a compact dual-comb source due to its semiconductor-based chip-scale configuration. Although the active stabilization of a THz QCL dual-comb source was demonstrated by phase locking one of the dual-comb lines, the full stabilization of all dual-comb lines is still challenging. Here, we propose a self-reference method to obtain a fully stabilized dual-comb signal on a pure THz QCL platform. Without using any external locking components, we filter out one dual-comb line and beat it with the whole dual-comb signal, which eliminates the common carrier offset frequency noise and reduces the dual-comb repetition frequency noise. It is experimentally demonstrated that the self-reference technique can significantly improve the long-term stability of the dual-comb signal. A record "maxhold" linewidth of 14.8 kHz (60 s time duration) is obtained by implementing the self-reference technique, while without the self-reference the dual-comb lines show a "maxhold" linewidth of 2 MHz (15 s time duration). The method provides the simplest way to improve the long-term stability of THz QCL dual-comb sources, which can be further adopted for high-precision measurements.
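The arithmetic behind the self-referencing can be checked numerically: every dual-comb line carries the same fluctuating carrier-offset frequency, so beating one filtered-out line against all the others leaves pure multiples of the repetition-frequency difference. The values below are arbitrary and for illustration only:

import numpy as np

rng = np.random.default_rng(0)
df_rep = 10e6                                     # dual-comb line spacing (Hz)
n = np.arange(20)

for _ in range(3):
    f_offset = 1e6 * rng.normal()                 # common offset, drifts shot to shot
    lines = f_offset + n * df_rep                 # dual-comb RF lines
    reference = lines[5]                          # the one filtered-out line
    beats = np.abs(lines - reference)             # mixing products
    # All beats are integer multiples of df_rep: the offset has dropped out.
    print(np.allclose(beats / df_rep, np.round(beats / df_rep)))   # -> True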
DOI: 10.22323/1.414.0218
2022
An intelligent Data Delivery Service for and beyond the ATLAS experiment
The intelligent Data Delivery Service (iDDS) has been developed to cope with the huge increase in computing and storage resource usage expected in the coming LHC data-taking runs. It has been designed to intelligently orchestrate workflows and data management systems, decoupling data pre-processing, delivery, and primary processing in large-scale workflows. It is an experiment-agnostic service that has been deployed to serve data carousel (orchestrating efficient processing of tape-resident data), machine learning hyperparameter optimization, active learning, and other complex multi-stage workflows defined via DAG (Directed Acyclic Graph), CWL (Common Workflow Language), and other descriptions, including a growing number of analysis workflows. We first summarize the deployed use cases, then focus on new improvements and use cases under development in ATLAS, the Rubin Observatory, and sPHENIX, together with future efforts.
2007
Qinghai Power Grid Harmonics Wave Situation and Its Management Status
Qinghai's single-industry economic structure has produced an unbalanced load structure in the Qinghai power grid: the non-linear loads of the many aluminum and ferrosilicon plants cause severe harmonic pollution, so harmonic supervision should be strengthened to ensure the safe and economical operation of the power grid and electrical equipment. The paper analyzes the harmonic situation of the Qinghai power grid and proposes detailed measures for strengthening harmonic management.
DOI: 10.1109/escience.2018.00083
2018
Fine-Grained Processing Towards HL-LHC Computing in ATLAS
During the LHC's Run 2, ATLAS has been developing and evaluating new fine-grained approaches to workflows and dataflows able to better utilize computing resources in terms of storage, processing, and networks. The compute-limited physics of ATLAS has driven the collaboration to aggressively harvest opportunistic cycles from what are often transiently available resources, including HPCs, clouds, volunteer computing, and grid resources in transitional states. Fine-grained processing (with typically a few minutes' granularity, corresponding to one event for the present ATLAS full simulation) enables agile workflows with a light footprint on the resource, such that cycles can be more fully and efficiently utilized than with conventional workflows processing O(GB) files per job.
2018
Artificial Intelligence on Remote Sensing Data for Precision Agriculture Applications
DOI: 10.1051/epjconf/202024503025
2020
Harnessing the power of supercomputers using the PanDA Pilot 2 in the ATLAS Experiment
The unprecedented computing resource needs of the ATLAS experiment at the LHC have motivated the Collaboration to become a leader in exploiting High Performance Computers (HPCs). To meet the requirements of HPCs, the PanDA system has been equipped with two new components, Pilot 2 and Harvester, that were designed with HPCs in mind. While Harvester is a resource-facing service which provides resource provisioning and workload shaping, Pilot 2 is responsible for payload execution on the resource. The presentation focuses on Pilot 2, which is a complete rewrite of the original PanDA Pilot used by ATLAS and other experiments for well over a decade. Pilot 2 has a flexible and adaptive design that allows plugins to be defined with streamlined workflows. In particular, it has plugins for specific hardware infrastructures (HPC/GPU clusters) as well as for dedicated workflows defined by the needs of an experiment. Examples of dedicated HPC workflows are discussed, in which the Pilot either uses an MPI application for processing fine-grained event-level services under the control of the Harvester service, or acts as an MPI application itself and runs a set of jobs in an ensemble. In addition to describing the technical details of these workflows, results are shown from its deployment on Titan (OLCF) and other HPCs in ATLAS.
DOI: 10.22323/1.364.0116
2020
Application of Quantum Machine Learning to High Energy Physics Analysis at LHC using IBM Quantum Computer Simulators and IBM Quantum Computer Hardware
The ambitious HL-LHC program will require enormous computing resources in the next two decades. A burning question is whether quantum computers can address the ever-growing demand for computing resources in High Energy Physics in general, and physics at the LHC in particular. Using IBM Quantum Computer Simulators and Quantum Computer Hardware, we have successfully employed the Quantum Support Vector Machine method (QSVM), applying quantum machine learning to a ttH (H to two photons) analysis, probing the Higgs coupling to top quarks at the LHC.