
Meng Xiao

Here are all the papers by Meng Xiao that you can download and read on OA.mg.

DOI: 10.1016/j.bspc.2013.06.001
2013
Cited 135 times
Sleep stages classification based on heart rate variability and random forest
An alternative technique for sleep stage classification based on heart rate variability (HRV) is presented in this paper. A simple subject-specific scheme and a more practical subject-independent scheme were designed to classify wake, rapid eye movement (REM) sleep, and non-REM (NREM) sleep. Forty-one HRV features extracted from the RR-interval sequences of 45 healthy subjects were trained and tested with the random forest (RF) method. Among the features, 25 were newly proposed or applied to sleep study for the first time. For the subject-independent classifier, all features were normalized with a fractile-value-based method developed by the authors. In addition, the importance of each feature for sleep staging was assessed by RF, and the appropriate number of features was explored. The subject-specific classifier achieved a mean accuracy of 88.67% with a Cohen's kappa statistic κ of 0.7393, while for the subject-independent classifier the accuracy and κ dropped to 72.58% and 0.4627, respectively. Some of the newly proposed HRV features performed more effectively than the conventional ones. The proposed method could be used as an alternative or aiding technique for rough and convenient sleep stage classification.
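The abstract mentions a fractile-value-based normalization for the subject-independent scheme but does not specify it. A minimal sketch, assuming a generic quantile normalization (center on the median, scale by an inter-quantile range), might look like:

```python
def fractile_normalize(values, lower_q=0.25, upper_q=0.75):
    """Normalize a feature vector by its fractiles (quantiles):
    center on the median and scale by the inter-quantile range.
    Generic sketch only; the paper's exact fractile scheme is
    not given in the abstract."""
    ordered = sorted(values)
    n = len(ordered)

    def quantile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        return ordered[lo] * (1 - frac) + ordered[hi] * frac

    median = quantile(0.5)
    spread = quantile(upper_q) - quantile(lower_q)
    if spread == 0:
        spread = 1.0  # avoid division by zero on constant features
    return [(v - median) / spread for v in values]
```

Quantile-based scaling of this kind is robust to the outliers that are common in RR-interval data, which is one plausible reason to prefer it over z-score normalization across subjects.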
DOI: 10.1103/physrevd.94.055023
2016
Cited 106 times
Constraining anomalous Higgs boson couplings to the heavy-flavor fermions using matrix element techniques
In this paper we investigate anomalous interactions of the Higgs boson with heavy fermions, employing shapes of kinematic distributions. We study the processes $pp \to t\bar{t} + H$, $b\bar{b} + H$, $tq+H$, and $pp \to H\to\tau^+\tau^-$, and present applications of event generation, re-weighting techniques for fast simulation of anomalous couplings, as well as matrix element techniques for optimal sensitivity. We extend the MELA technique, which proved to be a powerful matrix element tool for Higgs boson discovery and characterization during Run I of the LHC, and implement all analysis tools in the JHU generator framework. A next-to-leading order QCD description of the $pp \to t\bar{t} + H$ process allows us to investigate the performance of MELA in the presence of extra radiation. Finally, projections for LHC measurements through the end of Run III are presented.
DOI: 10.3390/w15020319
2023
Cited 10 times
Research on the Uplift Pressure Prediction of Concrete Dams Based on the CNN-GRU Model
Dam safety is considerably affected by seepage, and uplift pressure is a key indicator of dam seepage; accurate prediction of uplift pressure trends can therefore improve dam hazard forecasting. In this study, a convolutional neural network (CNN)-gated recurrent unit (GRU) uplift pressure prediction model was developed, combining the CNN's feature-extraction capability with the GRU's ability to learn correlations in time-series data. The model's performance was then verified using a dam as an example. The results showed that the mean absolute errors (MAEs) of the CNN-GRU model were 0.1554, 0.0398, 0.2306, and 0.1827, and the root mean square errors (RMSEs) were 0.1903, 0.0548, 0.2916, and 0.2127. The prediction performance was better than that of the particle swarm optimization-back propagation (PSO-BP), artificial bee colony optimization-support vector machine (ABC-SVM), GRU, long short-term memory network (LSTM), and CNN-LSTM models. The method improves the utilization rate of dam safety monitoring results and has engineering utility for safe dam operations.
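The GRU half of the CNN-GRU model learns temporal correlations through gated recurrence. As an illustration of the gating only (the paper's actual architecture and dimensions are not given in the abstract), a single standard GRU cell step in NumPy:

```python
import numpy as np

def gru_step(x, h_prev, params):
    """One step of a standard GRU cell (Cho et al. formulation).
    x: (input_dim,), h_prev: (hidden_dim,). `params` holds the
    weight matrices; shapes here are illustrative, not the paper's."""
    Wz, Uz, Wr, Ur, Wh, Uh = (params[k] for k in "Wz Uz Wr Ur Wh Uh".split())
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde           # blended new state

# tiny usage: run 5 steps; the hidden state keeps its shape and stays bounded
rng = np.random.default_rng(0)
params = {k: rng.standard_normal((4, 3) if k.startswith("W") else (4, 4)) * 0.1
          for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}
h = np.zeros(4)
for t in range(5):
    h = gru_step(rng.standard_normal(3), h, params)
```

In a CNN-GRU pipeline of the kind the abstract describes, the vector `x` at each step would be the CNN's feature map for one monitoring time window rather than raw noise as in this toy usage.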
DOI: 10.1145/3638059
2024
Traceable Group-Wise Self-Optimizing Feature Transformation Learning: A Dual Optimization Perspective
Feature transformation aims to reconstruct an effective representation space by mathematically refining the existing features. It serves as a pivotal approach to combat the curse of dimensionality, enhance model generalization, mitigate data sparsity, and extend the applicability of classical models. Existing research predominantly focuses on domain knowledge-based feature engineering or learning latent representations. However, these methods, while insightful, lack full automation and fail to yield a traceable and optimal representation space. An indispensable question arises: Can we concurrently address these limitations when reconstructing a feature space for a machine learning task? Our initial work took a pioneering step towards this challenge by introducing a novel self-optimizing framework. This framework leverages the power of three cascading reinforced agents to automatically select candidate features and operations for generating improved feature transformation combinations. Despite the impressive strides made, there was room for enhancing its effectiveness and generalization capability. In this extended journal version, we advance our initial work from two distinct yet interconnected perspectives: 1) We propose a refinement of the original framework, which integrates a graph-based state representation method to capture the feature interactions more effectively and develop different Q-learning strategies to alleviate Q-value overestimation further. 2) We utilize a new optimization technique (actor-critic) to train the entire self-optimizing framework in order to accelerate the model convergence and improve the feature transformation performance. Finally, to validate the improved effectiveness and generalization capability of our framework, we perform extensive experiments and conduct comprehensive analyses. 
These provide empirical evidence of the strides made in this journal version over the initial work, solidifying our framework's standing as a substantial contribution to the field of automated feature transformation. To improve reproducibility, we have released the associated code and data via the GitHub link https://github.com/coco11563/TKDD2023_code.
DOI: 10.1007/s11433-014-5598-7
2014
Cited 75 times
First dark matter search results from the PandaX-I experiment
We report on the first dark-matter (DM) search results from PandaX-I, a low-threshold dual-phase xenon experiment operating at the China JinPing Underground Laboratory. In the 37-kg liquid xenon target with 17.4 live-days of exposure, no DM particle candidate event was found. This result sets a stringent limit for low-mass DM particles and disfavors the interpretation of previously reported positive experimental results. The minimum upper limit, 3.7 × 10⁻⁴⁴ cm², for the spin-independent isoscalar DM-particle-nucleon scattering cross section is obtained at a DM-particle mass of 49 GeV/c² at 90% confidence level.
DOI: 10.1016/j.ceja.2022.100427
2023
Cited 8 times
Enhanced coagulation of covalent composite coagulant with potassium permanganate oxidation for algae laden water treatment: Algae and extracellular organic matter removal
The enhanced coagulation performance for algae and extracellular organic matter (EOM) removal using novel Al/Fe-based covalently bonded composite coagulants (CAFM) coupled with potassium permanganate oxidation was evaluated in the current study. For the combined CAFM coagulation and potassium permanganate oxidation process, two dosing sequences of oxidant and coagulant, potassium permanganate pre-oxidation and simultaneous oxidation, were compared for algae and EOM removal. Polyaluminum chloride (PAC) and polyacrylamide (PAM) were selected as commonly used coagulants for comparison with CAFM. The results show that 21.1% of UV254, 99.6% of OD686, and 98.2% of turbidity were removed from EOM algae-laden water at a CAFM dosage of 5.4 mg/L and a KMnO4 dosage of 1 mg/L. The UV254 removal rate increased by 10% with potassium permanganate oxidation treatment compared with CAFM coagulation alone. In addition, the pre-oxidation method exhibited better extracellular organic matter removal efficiency than the simultaneous oxidation method. Furthermore, SEM, FTIR, XRD, and XPS were employed to study the physical and chemical structure of the flocs, whereas ICP-MS, EEM, GC-MS, and GPC were used to analyze water quality before and after coagulation and oxidation treatment in the mechanism exploration. This study proposes a new process for algal bloom control and organic pollution reduction that helps ensure the effluent quality of water treatment plants, providing a research basis for practice.
DOI: 10.1103/physrevd.102.056022
2020
Cited 34 times
New features in the JHU generator framework: Constraining Higgs boson properties from on-shell and off-shell production
We present an extension of the JHUGen and MELA framework, which includes an event generator and library for the matrix element analysis. It enables simulation, optimal discrimination, reweighting techniques, and analysis of a bosonic resonance and the triple and quartic gauge boson interactions with the most general anomalous couplings. The new features, which become especially relevant at the current stage of LHC data taking, are the simulation of gluon fusion and vector boson fusion in the off-shell region, associated $ZH$ production at NLO QCD including the $gg$ initial state, and the simulation of a second spin-zero resonance. We also quote translations of the anomalous coupling measurements into constraints on dimension-six operators of an effective field theory. Some of the new features are illustrated with projections for experimental measurements with the full LHC and HL-LHC datasets.
DOI: 10.1016/j.epsr.2021.107604
2022
Cited 12 times
Saturated load forecasting based on clustering and logistic iterative regression
Saturated load forecasting is the basis of power grid planning, and its accuracy directly affects the utility delivered to consumers. However, saturated load forecasting suffers from insufficient training data. To address this challenge, this paper proposes an improved load forecasting method that combines clustering with logistic iterative regression. Before regression, the historical load is clustered using fuzzy C-means. From the unsaturated historical data in each cluster, the three unknown parameters of the logistic regression model are estimated: Neyman-Fisher factorization is used to obtain an unbiased sufficient statistic for one parameter, and least squares is employed to solve for the other two. The predictive model is then obtained by selecting the model with the lowest error rate, the optimal cluster number is determined by the average absolute percentage error, and the optimal load forecasting model is determined from the optimal cluster number. Simulations show that this method improves the accuracy of load forecasting compared with other methods.
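The logistic growth curve underlying saturated load forecasting has the form y(t) = K / (1 + exp(-a(t - b))), where K is the saturation level. A pure-Python sketch that evaluates the curve and fits its three parameters by a coarse least-squares grid search (a stand-in for the paper's Neyman-Fisher and least-squares estimation, which the abstract does not detail):

```python
import math

def logistic(t, K, a, b):
    """Saturated-load growth curve: approaches K as t grows."""
    return K / (1.0 + math.exp(-a * (t - b)))

def fit_logistic(ts, ys, K_grid, a_grid, b_grid):
    """Coarse grid search minimizing squared error; a simple
    substitute for the paper's parameter estimation scheme."""
    best, best_err = None, float("inf")
    for K in K_grid:
        for a in a_grid:
            for b in b_grid:
                err = sum((logistic(t, K, a, b) - y) ** 2
                          for t, y in zip(ts, ys))
                if err < best_err:
                    best, best_err = (K, a, b), err
    return best

# synthetic example: data generated from K=100, a=0.5, b=10
ts = list(range(20))
ys = [logistic(t, 100, 0.5, 10) for t in ts]
K, a, b = fit_logistic(ts, ys, [80, 100, 120], [0.3, 0.5, 0.7], [8, 10, 12])
```

With unsaturated historical data (observations only on the rising part of the S-curve), K is weakly identified, which is why the paper resorts to sufficient statistics for one parameter before solving for the others.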
DOI: 10.3389/fgene.2022.972899
2022
Cited 11 times
Construction of five cuproptosis-related lncRNA signature for predicting prognosis and immune activity in skin cutaneous melanoma
Cuproptosis is a newly discovered mechanism of programmed cell death, and its unique pathway of regulating cell death is thought to have a distinctive role in understanding cancer progression and guiding cancer therapy. However, this regulation has not yet been studied in skin cutaneous melanoma (SKCM). In this study, data on SKCM patients were downloaded from the TCGA database. We screened cuproptosis-related genes from published papers and identified the lncRNAs associated with them. We applied univariate/multivariate and LASSO Cox regression algorithms and finally identified five cuproptosis-related lncRNAs (VIM-AS1, AC012443.2, MALINC1, AL354696.2, HSD11B1-AS1) for constructing a prognosis prediction model. Reliability and validity tests indicated that the model could distinguish the prognosis and survival of SKCM patients well. Next, immune microenvironment, immunotherapy, and functional enrichment analyses were performed. In conclusion, this study is the first analysis based on cuproptosis-related lncRNAs in SKCM and aims to open up new directions for SKCM therapy.
DOI: 10.1137/1.9781611977653.ch87
2023
Cited 4 times
Traceable Automatic Feature Transformation via Cascading Actor-Critic Agents
Feature transformation for AI is an essential task to boost the effectiveness and interpretability of machine learning (ML). Feature transformation aims to transform original data to identify an optimal feature space that enhances the performance of a downstream ML model. Existing studies either combine preprocessing, feature selection, and generation skills to empirically transform data, or automate feature transformation by machine intelligence, such as reinforcement learning. However, existing studies suffer from: 1) high-dimensional non-discriminative feature space; 2) inability to represent complex situational states; 3) inefficiency in integrating local and global feature information. To fill the research gap, we propose a novel group-wise cascading actor-critic perspective to develop the AI construct of automated feature transformation. Specifically, we formulate the feature transformation task as an iterative, nested process of feature generation and selection, where feature generation is to generate and add new features based on original features, and feature selection is to remove redundant features to control the size of the feature space. Our proposed framework has three technical aims: 1) efficient generation; 2) effective policy learning; 3) accurate state perception. For efficient generation, we develop a tailored feature clustering algorithm and accelerate generation by feature group-group crossing. For effective policy learning, we propose a cascading actor-critic learning strategy that learns state-passing agents to select candidate feature groups and operations for fast feature generation. Such a strategy can effectively learn policies when the original feature size is large, along with an exponentially growing feature generation action space, in which classic Q-value estimation methods fail.
For accurate state perception of the feature space, we develop a state comprehension method considering not only pointwise feature information but also pairwise feature-feature correlations. Finally, we present extensive experiments and case studies to illustrate 24.7% improvements in F1 scores compared with state-of-the-art methods, as well as robustness on high-dimensional data.
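The nested generate-then-select loop described above can be sketched with a toy stand-in: cross feature pairs with arithmetic operators, then keep only the features most correlated with the target (the reinforced agents and state representation of the paper are not modeled):

```python
import itertools, math

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for constant inputs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def transform_step(features, target, max_features=5):
    """One generate-and-select iteration: cross every feature pair
    with + and *, then keep the features most correlated with the
    target. A toy stand-in for the paper's cascading agents."""
    generated = dict(features)
    for (na, fa), (nb, fb) in itertools.combinations(features.items(), 2):
        generated[f"({na}+{nb})"] = [a + b for a, b in zip(fa, fb)]
        generated[f"({na}*{nb})"] = [a * b for a, b in zip(fa, fb)]
    ranked = sorted(generated.items(),
                    key=lambda kv: abs(pearson(kv[1], target)),
                    reverse=True)
    return dict(ranked[:max_features])  # selection bounds the space size

# usage: target = x1 * x2, so the crossed feature should rank first
feats = {"x1": [1, 2, 3, 4], "x2": [4, 3, 2, 1]}
target = [a * b for a, b in zip(feats["x1"], feats["x2"])]
kept = transform_step(feats, target, max_features=2)
```

Exhaustive pair crossing like this grows quadratically, which is exactly the action-space explosion the paper's group-wise agents and learned policies are designed to tame.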
2013
Cited 32 times
Physics at a High-Luminosity LHC with ATLAS
The physics accessible at the high-luminosity phase of the LHC extends well beyond that of the earlier LHC program. This white paper, submitted as input to the Snowmass Community Planning Study 2013, contains preliminary studies of selected topics, spanning from Higgs boson studies to new particle searches and rare top quark decays. They illustrate the substantially enhanced physics reach with an increased integrated luminosity of 3000 fb⁻¹, and motivate the planned upgrades of the LHC machine and ATLAS detector.
DOI: 10.1137/1.9781611977653.ch75
2023
Cited 3 times
NEEDED: Introducing Hierarchical Transformer to Eye Diseases Diagnosis
With the development of natural language processing (NLP) techniques, automatic diagnosis of eye diseases using ophthalmology electronic medical records (OEMR) has become possible. It aims to evaluate the condition of each of a patient's eyes separately, and we formulate it as a particular multi-label classification task in this paper. Although there are a few related studies on other diseases, automatic diagnosis of eye diseases exhibits unique characteristics. First, descriptions of both eyes are mixed up in OEMR documents, with both free text and templated asymptomatic descriptions, resulting in sparsity and clutter of information. Second, OEMR documents contain multiple parts of descriptions and have long document lengths. Third, it is critical to provide explainability for the disease diagnosis model. To overcome these challenges, we present an effective automatic eye disease diagnosis framework, NEEDED. In this framework, a preprocessing module is integrated to improve the density and quality of information. Then, we design a hierarchical transformer structure for learning the contextualized representations of each sentence in the OEMR document. For the diagnosis part, we propose an attention-based predictor that enables traceable diagnosis by obtaining disease-specific information. Experiments on a real dataset and comparison with several baseline models show the advantage and explainability of our framework.
DOI: 10.1103/physrevd.104.055045
2021
Cited 12 times
Probing the CP structure of the top quark Yukawa coupling: Loop sensitivity versus on-shell sensitivity
The question of whether the Higgs boson is connected to additional CP violation is one of the driving forces behind precision studies at the Large Hadron Collider. In this work, we investigate the CP structure of the top quark Yukawa interaction, one of the most prominent places to search for new physics, through Higgs boson loops in top quark pair production. We calculate the electroweak corrections including arbitrary CP mixtures at next-to-leading order in the Standard Model Effective Field Theory. This approach of probing Higgs boson degrees of freedom relies on the large $t\bar{t}$ cross section and the excellent perturbative control. In addition, we consider all direct probes with on-shell Higgs boson production in association with a single top quark or a top quark pair. This allows us to contrast loop sensitivity with on-shell sensitivity in these fundamentally different process dynamics. We find that loop sensitivity in $t\bar{t}$ production and on-shell sensitivity in $t\bar{t}H$ and $tH$ provide complementary handles over a wide range of parameter space.
DOI: 10.1016/j.envres.2021.110895
2021
Cited 11 times
Analysis on historical flood and countermeasures in prevention and control of flood in Daqing River Basin
The Daqing River Basin has long been seriously threatened by floods, and the construction of the Xiong'an New Area places higher requirements on the basin's flood control system. Research on the characteristics of the flood control system in the Daqing River Basin, the causes of its floods, and its historical floods was conducted to gain a clear understanding of the deficiencies of the existing flood control system and to identify countermeasures for improving the basin's flood control capacity, taking the development process of the flood control system into consideration. The flood of August 1963 (also called the 63.8 Flood) was taken as the major research object, and its causes, flood process, the regulation and storage by reservoirs and depressions, and the resulting flood disaster were analyzed, along with the deficiencies it exposed in the basin's flood control system. According to the research, the major problems of the current flood control system of the Daqing River Basin are concentrated on Baiyang Lake, where blocked internal water transmission and insufficient drainage capacity make it difficult to cope with floods exceeding the design level. Large changes in water level would also destroy the ecological balance of Baiyang Lake itself. In addition, during the construction of the new district, some flood storage and detention areas around Baiyang Lake will lose their regulation and storage capability, which will increase the difficulty of flood control in the downstream areas. Combining the existing flood control system with the construction of the Xiong'an New Area, flood control concepts such as developing the Xiaoguan flood diversion way, rebuilding the new housing hub, and strengthening the dredging of the downstream rivers are put forward.
DOI: 10.1016/j.cca.2023.117240
2023
Development and clinical evaluation of an online automated quality control system for improving laboratory quality management
We developed an efficient online automated quality control (AUTO QC) system and tested its feasibility on automatic laboratory assembly lines. AUTO QC is based on previously developed quality control software (Smart QC) and purpose-designed adaptable consumables. We applied the system to two assembly lines in our laboratory. Using third-party quality control samples, we evaluated the impact of the online AUTO QC system on out-of-control rate, biosecurity risk, turnaround time (TAT), and cost. AUTO QC significantly decreased the occurrence rate of the Westgard quality control rules 1-3s/2-2s/R-4s and 1-2s, which represent out-of-control and warning events, respectively. The out-of-control rates were reduced by 58%, and the potential biosecurity risk of the samples decreased by 90%. The AUTO QC implementation also reduced the median TAT (by 7 min), the number of full-time employees required, and the cost of the quality control samples (by 45%). The total laboratory AUTO QC system can improve the quality and stability of QC testing and reduce cost.
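The Westgard multirules cited in the abstract are standard and easy to state in code. A minimal checker using the textbook rule definitions (not the authors' Smart QC software; the R-4s check here uses a common consecutive-pair approximation):

```python
def westgard_flags(z_scores):
    """Evaluate a sequence of QC z-scores (deviations from the target
    mean in SD units) against common Westgard rules. Textbook rule
    definitions, not the authors' Smart QC implementation.
    Returns the set of triggered rules."""
    flags = set()
    for z in z_scores:
        if abs(z) > 2:
            flags.add("1-2s")  # single result beyond 2 SD: warning
        if abs(z) > 3:
            flags.add("1-3s")  # single result beyond 3 SD: reject
    for a, b in zip(z_scores, z_scores[1:]):
        # two consecutive results beyond 2 SD on the same side: reject
        if (a > 2 and b > 2) or (a < -2 and b < -2):
            flags.add("2-2s")
        # range between consecutive results exceeds 4 SD: reject
        # (R-4s is often defined within a run; consecutive pairs are
        # a simplification used here)
        if abs(a - b) > 4:
            flags.add("R-4s")
    return flags
```

A checker like this is what an automated QC pipeline would run after each control measurement, escalating 1-2s warnings into rejections only when a confirming rule also fires.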
DOI: 10.1007/s41365-023-01360-7
2023
NνDEx-100 conceptual design report
DOI: 10.1117/12.3026630
2024
Fast response method of power supply resources based on simulated annealing algorithm
Demand response has become an important component of power supply systems. To achieve rapid response of power supply resources and promote their efficient utilization, a rapid response method based on the simulated annealing algorithm is proposed. Using simulated annealing and considering the constraints on power supply resources, the single-period and multi-period response behaviors of the power system are analyzed. The potential characteristics of power supply resources are extracted by combining characteristic samples with potential grades, and rapid response is realized based on the effective-value and time characteristics of each task. Experimental results show that the response task completion rate of the proposed method is 95.7%, and that on both normal and abnormal data the best fitness is reached within 40 iterations. The proposed method is thus efficient and adaptable when handling tasks, better ensuring task completion and response and further improving the stability and adaptability of the power supply system to meet growing energy demand.
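Simulated annealing accepts a worse candidate with probability exp(-Δ/T) while the temperature T decays, which lets the search escape local minima before settling. A generic minimizer sketch in Python (the paper's power-dispatch constraints and task model are not reproduced):

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95,
                        steps=2000, seed=42):
    """Generic simulated annealing minimizer. Uphill moves are
    accepted with probability exp(-delta/T); T decays geometrically.
    The power-dispatch constraints from the paper are not modeled."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        delta = fy - fx
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# usage: minimize a 1-D quadratic with local perturbation moves
cost = lambda x: (x - 3.0) ** 2
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best, f_best = simulated_annealing(cost, neighbor, x0=-10.0)
```

For a dispatch problem, `x` would be a vector of resource allocations and `neighbor` a constraint-preserving perturbation; the cooling schedule controls how quickly the search commits to a region.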
DOI: 10.2139/ssrn.4769661
2024
Enhancing Signal Recognition Accuracy in Delay-Based Optical Reservoir Computing: A Comparative Analysis of Training Algorithms
DOI: 10.32629/ajn.v5i1.1765
2024
Lifestyle in Breast Care and Evaluation of Health Education Effects
This paper discusses the evaluation of lifestyle and health education effects in breast care. It emphasizes the importance of health education in improving patients' quality of life, reducing complications, improving the ability to perform activities of daily living, relieving patients' negative emotions, and improving compliance with functional exercise. The implementation methods of health education are introduced in detail, including systematic health education plans, health education promotion activities, and continuous health education. Lifestyle adjustments are discussed, including healthy eating habits, regular exercise, and wearing suitable underwear. The importance of psychological and social support is also addressed, including reducing psychological stress, improving endocrine function, and the effects of peer-support health education. Finally, the readability of breast cancer-related health education materials was assessed. These strategies and methods help to improve individual health awareness and behavior and to promote health.
DOI: 10.3390/biology13040281
2024
Toxicity of Ammonia Stress on the Physiological Homeostasis in the Gills of Litopenaeus vannamei under Seawater and Low-Salinity Conditions
Ammonia is a major water quality factor influencing the survival and health of shrimp, among which the gill is the main effector organ for ammonia toxicity. In this study, we chose two types of Litopenaeus vannamei that were cultured in 30‰ seawater and domesticated in 3‰ low salinity, respectively, and then separately subjected to ammonia stress for 14 days under seawater and low-salinity conditions, of which the 3‰ low salinity-cultured shrimp were domesticated from the shrimp cultured in 30‰ seawater after 27 days of gradual salinity desalination. In detail, this study included four groups, namely the SC group (ammonia-N 0 mg/L, salinity 30‰), SAN group (ammonia-N 10 mg/L, salinity 30‰), LC group (ammonia-N 0 mg/L, salinity 3‰), and LAN group (ammonia-N 10 mg/L, salinity 3‰). The ammonia stress lasted for 14 days, and then the changes in the morphological structure and physiological function of the gills were explored. The results show that ammonia stress caused the severe contraction of gill filaments and the deformation or even rupture of gill vessels. Biochemical indicators of oxidative stress, including LPO and MDA contents, as well as T-AOC and GST activities, were increased in the SAN and LAN groups, while the activities of CAT and POD and the mRNA expression levels of antioxidant-related genes (nrf2, cat, gpx, hsp70, and trx) were decreased. In addition, the mRNA expression levels of the genes involved in ER stress (ire1 and xbp1), apoptosis (casp-3, casp-9, and jnk), detoxification (gst, ugt, and sult), glucose metabolism (pdh, hk, pk, and ldh), and the tricarboxylic acid cycle (mdh, cs, idh, and odh) were decreased in the SAN and LAN groups; the levels of electron-transport chain-related genes (ndh, cco, and coi), and the bip and sdh genes were decreased in the SAN group but increased in the LAN group; and the level of the ATPase gene was decreased but the cytc gene was increased in the SAN and LAN groups. 
The mRNA expression levels of osmotic regulation-related genes (nka-β, ca, aqp, and clc) were decreased in the SAN group, while the level of the ca gene was increased in the LAN group; the nka-α gene was decreased in both groups. The results demonstrate that ammonia stress can disturb the physiological homeostasis of the shrimp gills, possibly by damaging tissue morphology and affecting redox balance, ER function, apoptosis, detoxification, energy metabolism, and osmoregulation.
DOI: 10.1109/tdei.2022.3214616
2022
Cited 6 times
Effect of Cyclic Olefin Copolymer on Dielectric Performance of Polypropylene Films for Capacitors
In this article, aiming at the high conduction loss and low breakdown strength of polypropylene (PP) film in high-temperature environments, we attempt to improve the dielectric properties of PP films by blending in cyclic olefin copolymer (COC). The two materials are shown to have good interfacial compatibility. COC has little effect on the crystal form and dielectric loss of the PP/COC film because it is an amorphous polymer without polar groups. At 105 °C, a low proportion of COC decreases the dc conductivity and increases the breakdown strength. Since the glass transition temperature of COC is much higher than that of PP, the dimensional stability of the PP/COC film at high temperatures is improved. The cyclic structure of COC restricts the movement of PP molecular chains in the free volume, which plays an important role in inhibiting carrier migration and improving the breakdown properties. This method provides a feasible route to improving the high-temperature resistance of metallized film capacitors (MFCs).
DOI: 10.1109/icassp43922.2022.9747116
2022
Cited 5 times
Deformable Convolution Dense Network for Compressed Video Quality Enhancement
Unlike traditional video quality enhancement, the goal of compressed video quality enhancement is to reduce the artifacts introduced by video compression. Existing multi-frame methods for compressed video quality enhancement rely heavily on optical flow, which is both inefficient and limited in performance. In this paper, a Multi-frame Residual Dense Network (MRDN) with deformable convolution is developed to improve the quality of compressed video by utilizing high-quality frames to compensate for low-quality frames. Specifically, the proposed network consists of a Motion Compensation (MC) module and a Quality Enhancement (QE) module, which compensate and enhance the quality of the input frame, respectively. Besides, a novel edge enhancement loss is applied to the enhanced frame to strengthen edge structure during training. Finally, experimental results on a public benchmark show that our method outperforms state-of-the-art methods for the compressed video quality enhancement task.
DOI: 10.1016/j.est.2022.105309
2022
Cited 5 times
Non-droop-control-based cascaded superconducting magnetic energy storage/battery hybrid energy storage system
Existing parallel-structured superconducting magnetic energy storage (SMES)/battery hybrid energy storage systems (HESSs) exhibit shortcomings, including transient switching instability and weak capability for continuous fault compensation. Under continuous faults and long-term power fluctuations, the SMES part of existing SMES/battery HESSs runs out of energy and fails to respond quickly to subsequent disturbances. This paper proposes a new cascaded topology for an SMES/battery HESS. It retains the advantages of conventional HESS schemes in fast response (~5 ms) and large energy capacity. Moreover, it avoids the use of droop control and thereby resolves the transient dynamic problem during switchover between the battery and SMES. In the cascaded HESS, both energy storages activate simultaneously when a power variation is detected, with the SMES compensating only the initial power insufficiency before the full-power start of the battery. The energy stored in the SMES can thus be sustainably utilized and remains available for multiple repeated power compensations. A simulation is established to evaluate the performance of the HESS as an uninterruptible power source, with comparisons against three other cases in terms of response time, energy capacity, and capital cost. Finally, a scaled-down experiment is tested, and performance comparisons are discussed to verify the technical advancement of the cascaded HESS.
DOI: 10.48550/arxiv.2207.11280
2022
Cited 5 times
PanGu-Coder: Program Synthesis with Function-Level Language Modeling
We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming language solutions given a natural language problem description. We train PanGu-Coder using a two-stage strategy: the first stage employs Causal Language Modelling (CLM) to pre-train on raw programming language data, while the second stage uses a combination of Causal Language Modelling and Masked Language Modelling (MLM) training objectives that focus on the downstream task of text-to-code generation, training on loosely curated pairs of natural language program definitions and code functions. Finally, we discuss PanGu-Coder-FT, which is fine-tuned on a combination of competitive programming problems and code with continuous integration tests. We evaluate PanGu-Coder with a focus on whether it generates functionally correct programs and demonstrate that it achieves equivalent or better performance than similarly sized models, such as CodeX, while attending to a smaller context window and training on less data.
DOI: 10.1007/s13246-017-0612-9
2017
Cited 12 times
A principal component analysis based data fusion method for ECG-derived respiration from single-lead ECG
DOI: 10.1088/1361-6501/aca81b
2022
Cited 4 times
Hierarchical dispersion Lempel–Ziv complexity for fault diagnosis of rolling bearing
Abstract: The fault information of rolling bearings is generally contained in vibration signals. How to efficiently unearth fault information from the raw signals is the key to detecting and evaluating the health condition of mechanical equipment. Therefore, a hierarchical dispersion Lempel–Ziv complexity (HDLZC) feature extraction method is developed in this paper to improve the accuracy of fault diagnosis. In this method, dispersion theory first addresses the deficiency of Lempel–Ziv complexity and obtains more fault features from the raw signal. Second, the hierarchical extraction of high- and low-frequency components from the time series improves the ability to describe dynamic features. Simulations and experiments both demonstrate the superiority of HDLZC. The experimental results reveal that this method is significantly better than multiscale dispersion Lempel–Ziv complexity, hierarchical Lempel–Ziv complexity, multiscale dispersion entropy, and multiscale permutation entropy in extracting fault information.
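A minimal sketch of the two ingredients, assuming a simplified rank-based symbolization in place of the paper's normal-CDF dispersion mapping, and the classic Lempel–Ziv (1976) phrase-counting complexity:

```python
def symbolize(x, classes=3):
    """Simplified dispersion-style symbolization: rank-map each sample into
    `classes` equal-probability symbols (the paper uses a normal-CDF mapping;
    quantile/rank binning is used here for brevity)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    sym = [""] * len(x)
    for rank, idx in enumerate(order):
        sym[idx] = str(rank * classes // len(x))
    return "".join(sym)

def lz_complexity(s):
    """Number of distinct phrases in the classic Lempel-Ziv (1976) parsing."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the phrase while it already occurs in the preceding text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c
```

A regular signal such as "aaaa" parses into few phrases (low complexity), while an irregular one parses into many, which is the property the fault-feature extraction exploits.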
DOI: 10.1680/jcien.22.00172
2023
Rapid construction of modular buildings for emergencies: a case study from Hong Kong, China
A new wave of the Covid-19 pandemic struck Hong Kong in February 2022. It led to construction of a temporary 1000-bed hospital and 10 000-bed isolation and treatment facility on an island site in just 51 days using factory-made modules. To achieve such rapid construction, module assembly was carried out at a separate site between the factories and site. Several new modular construction technologies were also developed, including adjustable base supports, large-span roof modules, universal safety barriers and an intelligent cloud platform for construction management. But to enable sustainable construction of such emergency buildings in future, further studies on demolition, recycling and relocation of modular buildings need to be carried out in the post-pandemic era.
DOI: 10.1016/j.geothermics.2023.102654
2023
Study on the thermal - mechanical properties and heat transfer characteristics of heat storage functional backfill body
Heat storage functional backfill can realize the synergistic mining of associated geothermal resources while ensuring the safe mining of deep coal resources. Studying the thermal-mechanical properties and heat transfer characteristics of the heat storage functional backfill body (HSFBB) is therefore of great significance. In this paper, paraffin (PA) and expanded graphite (EG) were first used to prepare a composite phase change material (CPCM), and its optimal ratio was determined. Then, coal gangue, tailings, CPCM, and cement were used to prepare the HSFBB, and the influence of CPCM content on the thermal and mechanical properties of the HSFBB was determined. Finally, the influence of CPCM on the heat transfer characteristics of the HSFBB was analyzed by means of numerical simulation. The results showed that the optimum mass fraction of EG in the CPCM was 10%. The heat stored per unit volume of the HSFBB was proportional to the CPCM content, while the thermal conductivity showed the opposite trend. With increasing CPCM content, the strength and stiffness of the HSFBB weakened, the pore and overall deformation increased, and the HSFBB showed obvious ductility characteristics. CPCM content and latent heat had a significant influence on the overall temperature rise rate of the HSFBB and on the temperature rise process at different locations. However, the phase change interval had only a weak influence on the temperature rise process of the HSFBB far away from the heat source.
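The claim that stored heat per unit volume scales with CPCM content follows from a simple energy balance, sketched below with invented placeholder constants (not the paper's measured density, heat capacity, or latent heat):

```python
def stored_heat_j(v_m3, pcm_frac, rho=2000.0, cp=900.0, dT=20.0, latent=150e3):
    """Heat stored in a backfill volume: sensible heat of the whole body plus
    latent heat of the phase-change fraction. All constants are illustrative.

    rho    : bulk density [kg/m^3]
    cp     : specific heat [J/(kg K)]
    dT     : temperature rise [K]
    latent : latent heat of the CPCM [J/kg]
    """
    sensible = rho * v_m3 * cp * dT
    latent_part = rho * v_m3 * pcm_frac * latent
    return sensible + latent_part
```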
DOI: 10.48550/arxiv.2309.13618
2023
Reinforcement-Enhanced Autoregressive Feature Transformation: Gradient-steered Search in Continuous Space for Postfix Expressions
Feature transformation aims to generate a new pattern-discriminative feature space from the original features to improve downstream machine learning (ML) task performance. However, the discrete search space for the optimal feature grows explosively with combinations of features and operations from low-order to high-order forms. Existing methods, such as exhaustive search, expansion reduction, evolutionary algorithms, reinforcement learning, and iterative greedy search, suffer from this large search space, and overly emphasizing efficiency in algorithm design usually sacrifices stability or robustness. To fundamentally fill this gap, we reformulate discrete feature transformation as a continuous-space optimization task and develop an embedding-optimization-reconstruction framework. This framework includes four steps: 1) reinforcement-enhanced data preparation, aiming to prepare high-quality transformation-accuracy training data; 2) feature transformation operation sequence embedding, intending to encapsulate the knowledge of the prepared training data within a continuous space; 3) gradient-steered optimal embedding search, dedicated to uncovering potentially superior embeddings within the learned space; and 4) transformation operation sequence reconstruction, striving to reproduce the feature transformation solution that pinpoints the optimal feature space.
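Step 4 reconstructs a postfix (reverse Polish) operation sequence over the original features; evaluating such a sequence on a data row can be sketched as follows (tokens and the operator set are illustrative, not the paper's):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def eval_postfix(tokens, row):
    """Evaluate a postfix feature-transformation sequence on one data row.

    tokens: e.g. ["f0", "f1", "+", "f2", "*"] meaning (f0 + f1) * f2
    row:    mapping from feature name to value
    """
    stack = []
    for t in tokens:
        if t in OPS:
            b, a = stack.pop(), stack.pop()   # note operand order
            stack.append(OPS[t](a, b))
        else:
            stack.append(row[t])
    return stack[0]
```

Postfix form is convenient here precisely because it is unambiguous without parentheses, so a decoded token sequence maps directly to one transformed feature.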
2003
Cited 17 times
Recent Progress in Camera Self-Calibration
We present a review and classification of camera self-calibration techniques developed in recent years. Compared with traditional calibration techniques, self-calibration does not require a calibration object with known 3D geometry, but only needs point correspondences from images to solve for the intrinsic parameters. Our main focus is on dealing with the pinhole camera model with either constant or varying intrinsic parameters. In the end, we also give a brief review of self-calibration techniques under other camera models.
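The intrinsic parameters that self-calibration recovers define the pinhole projection; a minimal sketch with hypothetical focal lengths and principal point (plain nested lists stand in for a matrix library):

```python
def project(K, X):
    """Project a camera-frame 3D point X = (x, y, z) to pixel coordinates
    using the 3x3 intrinsic matrix K = [[fx, s, cx], [0, fy, cy], [0, 0, 1]]."""
    p = [sum(K[r][c] * X[c] for c in range(3)) for r in range(3)]
    return p[0] / p[2], p[1] / p[2]   # perspective divide

# Hypothetical intrinsics: fx = fy = 100 px, principal point (320, 240), no skew
K = [[100, 0, 320], [0, 100, 240], [0, 0, 1]]
```

Self-calibration estimates the entries of K (constant or varying across views) purely from point correspondences, without a known 3D calibration object.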
DOI: 10.1016/j.future.2016.08.002
2017
Cited 9 times
A hybrid index for temporal big data
Temporal indexes provide an important way to accelerate query performance over temporal big data. However, current temporal indexes do not support the variety of queries very well, and it is hard to balance the efficiency of query execution against that of index construction and maintenance. In this paper, we propose a novel segmentation-based hybrid B+-tree index, called SHB+-tree, for temporal big data. First, the temporal data stored in the temporal table are separated into fragments in time order. In each segment, the hybrid index is constructed by integrating a temporal index and an object index, which share the underlying temporal data. The performance of construction and maintenance is improved by employing a segmented storage strategy and bottom-up index construction for each part of the hybrid index. Experimental results on a benchmark data set verify the effectiveness and efficiency of the proposed method.
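The segmentation idea can be sketched as follows (class and function names are mine, not the paper's API): records are split into time-ordered segments, and a temporal range query probes only the segments whose time span overlaps the query:

```python
import bisect

class Segment:
    def __init__(self, records):
        """records: list of (time, obj) pairs, already sorted by time."""
        self.records = records
        self.t_min, self.t_max = records[0][0], records[-1][0]
        self.times = [t for t, _ in records]

    def query(self, t1, t2):
        lo = bisect.bisect_left(self.times, t1)
        hi = bisect.bisect_right(self.times, t2)
        return [obj for _, obj in self.records[lo:hi]]

def range_query(segments, t1, t2):
    out = []
    for seg in segments:
        if seg.t_max >= t1 and seg.t_min <= t2:   # skip non-overlapping segments
            out.extend(seg.query(t1, t2))
    return out
```

Because segments are filled in time order, new data only ever touches the last segment, which is what makes bottom-up construction and cheap maintenance possible.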
DOI: 10.3390/wevj6030782
2013
Cited 8 times
Fast Charging Method Based on Estimation of Ion Concentrations using a Reduced Order of Electrochemical Thermal Model for Lithium Ion Polymer Battery
The comparatively long charging time of batteries in electric and hybrid vehicles is one of the barriers to mass commercialization of these vehicles. Typical charging methods use a constant current (CC) with constant voltage (CV), or pulsed or tapered currents. Theoretically, the charging time can be reduced by increasing the amplitude of the charging current, which, however, accelerates cell degradation and reduces lifespan. The relationship between the charging current and degradation is not well understood. Studies of ion transport and chemical reactions using a computational model developed in our laboratory reveal that a high charging current causes an excess of ions at the surface of electrode particles because of the slow diffusion of ions in the solid electrodes. The excess lithium ions react with electrons and form a thin irreversible layer, called lithium plating. Lithium plating not only reduces ionic conductivity but also contributes to the growth of dendrites and, potentially, internal short circuits. In this paper, a new charging algorithm is proposed that is based on an electrochemical and thermal model whose order is drastically reduced to facilitate real-time operation. The model, called the Reduced Order electrochemical thermal Model (ROM), is fully validated against a pouch-type lithium polymer battery and used to dynamically estimate the ion concentration at the surface of the particles. Based on the estimated ion concentration, a new control algorithm is derived that determines the amplitude and duration of the charging current. The ROM calculates at least tenfold faster than the original full-order model. Simulation and experimental results show that the charging time can be reduced to 60-70% of that of classical CC/CV charging while preventing excess ions and slowing the degradation of cell capacity.
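The control idea, choosing the charging current each step so that the *estimated* surface concentration stays below a plating threshold, can be sketched as follows. This is not the authors' ROM: the first-order surface model and all coefficients below are invented stand-ins.

```python
def charge_step(c_surf, i_max, c_limit, k_diff=0.2, dt=1.0):
    """One control step: pick a current, then update the estimated
    surface concentration (normalized units; coefficients are illustrative).

    Returns (current, new surface concentration).
    """
    headroom = max(0.0, c_limit - c_surf)
    i = min(i_max, headroom * 5.0)            # taper current as c_surf -> c_limit
    # surface conc. rises with current, relaxes by diffusion into the bulk
    c_new = c_surf + dt * (0.1 * i - k_diff * c_surf)
    return i, c_new
```

Early in the charge the headroom is large, so the loop behaves like CC charging at `i_max`; near the threshold it tapers automatically, which is the qualitative behavior the abstract describes.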
DOI: 10.1360/n032016-00161
2017
Cited 6 times
Self-propelling mini-motor and its applications in supramolecular self-assembly and energy conversion
Figure 1. Electric-field-driven motion of objects. (a) AC-driven motion of a solid object on a water surface [35]; (b) DC-driven motion of a gel in solution [36] (color figure online). Release and bursting can also achieve the directional motion of a conductor in an electric field. Kuhn et al. [37] at the University of Bordeaux, France, applied a direct current to an aqueous system containing a conductor; at this point
DOI: 10.1109/lawp.2022.3197449
2022
Cited 3 times
Miniaturized Lossy-Layer Scheme For Designing a Frequency Selective Rasorber
This letter proposes a miniaturized frequency selective rasorber (FSR) based on parallel-plate capacitors that can be applied in antenna systems to delay the appearance of grating lobes. Compared with traditional solutions, the lossy-layer unit topology is rearranged to balance a more efficient capacitor C1, fewer resistive lumped elements, and a large series inductance L1. Furthermore, the lossless layer is constructed using a third-order frequency selective surface with parallel-plate capacitors. These approaches achieve a rasorber unit size of only 0.035 λL, where λL is the wavelength of the lowest absorption frequency. Under normal incidence, a 1 dB transmission band with a bandwidth of 40.9% is obtained at 6 GHz, whereas an absorption rate of over 90% covers 1.5 to 4.05 GHz. Moreover, the FSR exhibits high stability for dual polarization and oblique incidence up to 30°. Prototypes were fabricated and measured, verifying the feasibility of the miniaturized design.
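As a back-of-envelope aid to the series L1-C1 branch mentioned above, a series LC branch resonates at f = 1/(2π√(LC)); the component values in the example are purely illustrative, not the fabricated design's:

```python
import math

def series_resonance_hz(l_h, c_f):
    """Resonance frequency of a series LC branch, f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_h * c_f))

# Illustrative values only: 1 nH in series with 1 pF resonates near 5 GHz
f0 = series_resonance_hz(1e-9, 1e-12)
```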
DOI: 10.1080/00150193.2019.1611104
2019
Cited 4 times
A wide-band piezoelectric harvester based on cantilever beam
Harvesting energy from human motion has been seen as an attractive approach to obtaining renewable electric energy to power wearable sensors. Building on previous studies, a wide-band piezoelectric harvester based on a cantilever beam with multiple masses and beams is proposed in this article. Numerical simulation in MATLAB is used to analyze how the length of the beam and the size of the mass influence the output voltage of the harvester, and the finite element method (FEM) is used to study how the size, location, number, and spacing of the masses affect the frequency, bandwidth, and output voltage. The improved harvester not only possesses a wider resonant band but also generates a higher output voltage. The optimum output power reaches 0.75 mW at the resonant frequency of 11 Hz with a load resistance of 510 kΩ.
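As a quick consistency check on the reported figures, 0.75 mW delivered into a 510 kΩ resistive load implies an RMS output voltage of V = √(P·R), i.e. roughly 20 V:

```python
import math

# P = V^2 / R  =>  V = sqrt(P * R)
P_w = 0.75e-3      # reported optimum output power [W]
R_ohm = 510e3      # reported load resistance [ohm]
V_rms = math.sqrt(P_w * R_ohm)   # implied RMS voltage across the load
```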
DOI: 10.1049/iet-gtd.2020.0600
2020
Cited 4 times
Topology identification in distribution networks based on alternating optimisation
IET Generation, Transmission & Distribution, Volume 14, Issue 24, pp. 5851-5857
Topology identification in distribution networks based on alternating optimisation
Renhai Feng, Wanqi Yuan, Meng Xiao (corresponding author, tjuxiaomeng@tju.edu.cn), Zheng Zhao, and Qiulin Wang; School of Electrical and Information Engineering, Tianjin University (Zheng Zhao: School of Microelectronics, Tianjin University), Tianjin, People's Republic of China
First published: 04 August 2020, https://doi.org/10.1049/iet-gtd.2020.0600
Abstract: Topology identification (TI) is an essential problem in the distribution network due to the exponentially growing power grid size in recent years. In this study, it is reformulated as a regularised alternating convex optimisation problem. Then an application based on the current injection model is proposed. Compared to the traditional norm-optimising algorithm, which may lead to overfitting, a new norm-minimisation problem with norm regularisation is proposed to solve the trade-off problem with non-convex constraints. The proposed method reduces the size of the training data set compared with the traditional TI method. Simulation results show that the recovery performance of the proposed algorithm is superior to the traditional one in the additive white Gaussian noise scenario.
Nomenclature graph of distribution network (DN) index set for distribution transformers (DTs) index set for meters index set for power lines between the ith DT and the jth meter M number of DTs N number of meters power line between the ith DT and the jth meter k number of measurements or samples per node T time interval of meter readings in a particular minute topology matrix (TM) between DTs and meters nodal admittance matrix (AM) Hadamard product of and row vectors of row vectors of current flowing matrix from the ith DT to the jth meter diagonal matrix of row vectors of voltages matrix between the ith DT and the jth meter value of recovery performance error-covariance matrix of synchronisation errors error-covariance matrix of line losses error-covariance matrix of random errors transpose of vectors or matrices estimated value of vectors or matrices mean value of vectors or matrices 1 Introduction Nowadays, topology identification (TI) is applied in many engineering fields, such as image processing, microwave communication networks, and distribution network (DN). With the access to new energy sources, the scale of network interconnection continues to expand. In such a scenario, power outages had caused huge economic losses and adverse social impacts. To prevent power outage and calculate the budget more accurately, the role of TI in DN continues to evolve. Since power outage will cause the topology to recovered improperly, TI gives us a good way to quickly locate the fault. In recent years, TI in large-scale DN based on measurement information has become a new active safety prevention. The accurate TI method becomes a prerequisite for maintaining the normal operation of DN. Based on the above discussion, lots of attempts are made to solve the TI problem. In [1], a DN application is proposed to validate a common information model represented grid topology based on power profiles measured by grid monitoring devices. 
To overcome the prohibitive computational complexity of exhaustive search methods, Babakmehr et al. [2] proposed a sparse algorithm modified clustered orthogonal matching pursuit to improve the efficiency of matrix recovery. In [3], the authors reconstruct the grid topology by principal component analysis and its graph-theoretic interpretation from energy measurements. In [4, 5], the authors apply the proposed linear optimisation algorithm to identify the phase connectivity and grid topology based on energy measurements, respectively. In [6], a novel two-stage method is proposed to identify the connection relationships between distribution transformers (DTs) and feeders. Unfortunately, low resource utilisation is highlighted in these methods. To improve TI recovery efficiency, approaches based on time series of measurements are proposed. In [7], a time series signature verification method is employed to identify the grid topology using synchronised voltage phasor data. In [8], the proposed algorithm grouping users according to their phases then evaluates the Pearson correlation coefficient of voltage-time series effectively. In [9], a new method of TI is proposed, which adopted multi-sampling of power injections by micro-synchronous phasor measurement unit ( PMU) to construct the variance model of branch voltage deviation. However, analysis of PMU error factors and the maximum time limit of data sample size is missing. Previous methods use sample data without considering the effects of complexity. To alleviate the complexity of calculation, some researchers proposed new methods based on topology simplification. Vill and Rosin [10] described a method for TI in rural low-voltage feeders based on the connection point placement and branching. Yang and Zhou [11] proposed to reduce the order of the incidence matrix using node elimination. However, its complexity of nodes elimination still needs to be reduced. 
In [12], symmetric node elimination is proposed, which further reduced the order of the adjacency matrix. Therefore, a topology matrix (TM) representing a two-layer topology structure is proposed in this paper. It reduces the order of traditional to ). In this paper, an H-regularised alternating optimisation (HAO) method is proposed based on voltage and current measurements to overcome the above limitations. Based on the previous study of TM and admittance matrix (AM) recovery, this paper proposes an alternating -norm optimisation paradigm with -norm regularisation and Hadamard product constraints. Recovery performance in [2] is defined as the ability to recover TM and AM. Different from the previous study, both TM and AM can be evaluated explicitly based on HAO. Simulation results show that the proposed HAO method can be implemented online due to its few sample requirements. The main contributions of this paper are listed as follows: (1) A traditional multi-layer DN structure is reformulated into a two-layer TI problem with admittance measurement and additive noise to fully explore the special problem structure in this TI application. (2) A new regularisation method called H-regularisation is proposed to achieve better recovery performance while reducing the required sample size. (3) The original impedance matrix is reformulated as a Hadamard product of TM and AM by introducing redundant variables to focus our main concern on topology and also pose an advance in TI recovery performance. (4) However, under normal conditions, objective function in (22) will not converge by alternating optimisation, (4) and (16) are used to restrict the TM in a feasible set during the alternating procedure. According to the simulation results, HAO converges well. The remainder of this paper is organised as follows: system model is given in Section 2. The HAO method is proposed in Section 3. Simulation results and conclusions are presented in Sections 4 and 5, respectively. 
2 System model 2.1 Graph of the DN The topology of DN is the connection pattern between substations, feeders, DTs, and meters. Since the connection between the household smart meter and DT is unique, the structure of the DN can be simplified as a bus. As is shown in Fig. 1. Fig. 1Open in figure viewerPowerPoint Bus-type grid connection model To study the TI problem based on the separate ammeter, the bus structure is transformed into a two-layer tree structure. Assuming the number of DTs and meters are M and N, respectively. The bus-type network can be reformulated as a tree as indicated in Fig. 2. Fig. 2Open in figure viewerPowerPoint Two-layer tree-type grid connection model As shown in Fig. 2, a power system can be represented as a undirectional connected graph, , is the index set for DTs, is the index set for meters, and is a set of indexes for power lines between the ith DT and the jth meter. The connectivity of ( , ) is defined as TM. k samples are obtained for voltage and current measurements from the meter at regular time interval T. Let be an indicator, which determines the connectivity between the ith DT and the jth meter ((1)) Therefore, can be defined as TM, which represents the connectivity between DTs and meters. As is shown in (2) ((2)) It can be seen from Fig. 2 that each meter is connected to a specific DT. The features about can be written as for each and for each . Therefore, is sparse. Equation (3) can be obtained based on such sparseness ((3)) It can be observed from (3) that, different row vectors in are orthogonal, which is denoted as ((4)) where and are two different row vectors in . Thus, (4) is called as row orthogonal constraint (ROC). 2.2 Current injection model Power line model [13] is illustrated in Fig. 3. Fig. 3Open in figure viewerPowerPoint Power line model As is discussed in Section 2.1, since there is an impedance on the power line between the ith DT and the jth meter. 
It is feasible to recover AM using the current injection model [14]. Matrix is called nodal AM, describing a DN of N power lines. The elements in can be represented as follows: ((5)) It is known that and . As a result, can be defined. Due to its symmetry, . If exists, Appendix Lemma 1 illustrates the existence of and Re[ [15]. Next, AM between the ith DT and the jth meter can be written as follows: ((6)) Assuming k sampling time points, let be the vector of current injected and be the vector of nodal voltages at time k. The AM relates the node voltages and injected currents according to [14] ((7)) It is useful to rewrite (7) in a matrix-vector format, where we have ((8)) In (8), currents and voltages are from and . is randomly distributed in a certain range according to the practical DNs. As is proved in Lemma 1, can be represented as . Once the meter has a voltage value, it indicates that the connection between the jth meter and the ith transformer is valid. 2.3 Hadamard product Different from the traditional multi-layer DN topology, the problem solves the topology with a two-layer. To focus on topology, we introduce redundant variables TM to AM. Given the necessity of studying this particular structure, the related theorem of Hadamard product is as follows: Theorem 1.Hadamard product is a type of matrix operation. We define two matrices and , they posses the same order, and , then matrix is called Hadamard product of . Based on Theorem 1, a new matrix is defined as ((9)) Next, the Hadamard product is transformed into a new product operation ((10)) If we rewrite (10) in a vector format, variables in (10) are written as ((11)) ((12)) ((13)) where it is easy to find that , , and . Then, (10) can be denoted as ((14)) As a result, our goal is to design a fast method exploiting the structure of . In fact, and when exists. Correct values of admittance can be obtained from C. 
According to the above analysis, a modified current injection equation is denoted as ((15)) As is discussed in Section 2.1, a constraint about can be written as ((16)) Similarly, and are two different orthogonal row vectors in , (16) is another ROC. 2.4 Noise reduction To be practical, errors in the measurements, such as synchronisation errors, line losses are modelled as follows [3]: (i) Synchronisation errors: A smart meter records readings based on its internal clock, which may be out of synchronisation with respect to the standard clock. Although synchronisation errors are expected to be tiny, its effect will grow with time series. It can be modelled as a zero-mean Gaussian distribution as follows: ((17)) where is an -dimensional matrix due to the uncorrelated errors. (ii) Line losses: Other important sources of error are line losses. Since power lines from DTs to customers have a certain amount of electrical resistance, some of the transferred energy is lost as heat. These losses vary with temperature, loads, and age of feeders. Under this circumstance, line losses errors can be modelled as follows: ((18)) where is an -dimensional matrix. Next, summing over and , which are modelled as a vector of additive white Gaussian noise (AWGN) [2], we reach the following equation for each individual node: ((19)) where is a stationary Gaussian sequence with zero mean and a covariance matrix, and it is assumed that the AWGN is independent of the state vectors. The usually accounts for 1–5% in practical measurements. 3 HAO-topology identification 3.1 -norm and LASSO According to Section 2, our goal is to recover and , so we can define the following optimisation problem [5]. In view of a two-layer DN topology studied in this paper, new constraints (14) and (16) are used in (20) ((20)) This optimisation problem can be solved by minimising -norm. Once is recovered from (20), we can separate and according to (14) to achieve our purpose. 
In many instances [16], matrix is ill-conditioned and is not accurately observable, thus numerical instability will arise in solving . To increase the numerical stability [2], the classical regularisation method LASSO is proposed. Therefore, the reformulated problem can be defined as follows: ((21)) where is the regularisation parameter employed to avoid large deviation from the optimal solution. Similarly, and can be separated from (14). To compare with the initial data, the recovery performance of TM and AM can be calculated. Algorithm (see Fig. 4) of LASSO is as follows. Fig. 4Open in figure viewerPowerPoint Algorithm 1: topology identification with LASSO 3.2 H-regularisation model In response to LASSO, we make some new changes. It is known that will apparently influence the curve fitting. To search the best solution to , (22) can be defined as follows: ((22)) Then, we find the derivative of c in (23) ((23)) Let (23) be 0 and (23) can be simplified as ((24)) It is easy to observe that the values of are , so (24) can be transformed into (25) ((25)) where and are proposed in Sections 2.4 and 2.2. After calculating the value of and adding ROCs of (4) and (16), (21) is transformed into the HAO method ((26)) As can be seen in (26), C is replaced by H as a regularisation term compared with (21). The algorithm of HAO is shown in Fig. 5. Fig. 5Open in figure viewerPowerPoint Algorithm 2: topology identification with HAO 4 Simulation results In Section 3, we will show the recovery performance of different methods. To make the simulation close to the real situation, we add errors modelled as AWGN mentioned in Section 2.4. 4.1 Simulation platform configuration After the above analysis, the algorithms can be demonstrated in the larger DNs. The network is built by using random number generators in . The simulation example of the proposed two-layer structure of DN is used as a general representation. 
Assuming there are 100 meters connected to four DTs in a certain area with meter readings generated from a uniform distribution and voltage phase differences are generated from a random distribution. After the data set is generated, several simulations are conducted by using a different data set. Finally, 1– AWGN has been added to measurement vectors randomly. Through these simulations to complete the recovery of TI (Windows 10, AMD FX-6300 3.50 GHz processor, 8 GB of RAM). The algorithm is then applied to the data set of different lengths, the test is carried out under k is 0–100. 4.2 Results and analysis After initialisation, we start model fitting. Before analysing the following figures. Since one meter can only be connected to one DT, if a recovered column vector of TM has only one element for each , and , for each , then is regarded as a successful recovery, otherwise it is a failed recovery, and so on. It can be denoted as ((27)) where , and represent the number of successful recovery columns in and . Thus, and can be defined as the recovery performance of TM and AM. Moreover, to obtain a more accurate recovery performance, (8) is divided into two parts: ((28)) and can be recovered, respectively, and then add them to compare with the initial data to obtain and by (27). Fig. 6 shows and of -norm [5] with 1 and AWGN. When AWGN is , and are extremely poor, so it is not shown in the results. Fig. 6Open in figure viewerPowerPoint of -norm As can be shown from Fig. 6, in the case of AWGN, is slightly better than AM. However, when AWGN accounts for , is up to about , is . The experiment shows that AM is sensitive to AWGN interference. To reduce the impact of AWGN and obtain better and , the method LASSO is proposed [2]. Fig. 7 indicates that and of LASSO with 1 and AWGN. Fig. 7Open in figure viewerPowerPoint of LASSO It can be shown in Fig. 7, is apparently better than -norm when AWGN accounts for a relatively larger ratio. 
The of LASSO is less sensitive to AWGN than -norm. However, when AWGN is , is even worse than -norm. As is proposed in (26), is taken as a regularisation term to reach the best and . Fig. 8 indicates and of HAO, when there are 1 and AWGN, respectively. Fig. 8Open in figure viewerPowerPoint of HAO As shown in Fig. 8, in the case of AWGN, of HAO is apparently better than -norm and LASSO. When AWGN accounts for , of HAO can reach 1 when . However, of -norm can reach 1 when and LASSO even cannot reach 1 through the whole experiment. Meanwhile, of HAO is also the best. In practice, is often used to identify faults in the DN structure. Next, we compare of the above methods with 1 and AWGN in the separate experiment. In Fig. 9, of HAO shows a significant advantage in the whole experiment when AWGN is . As is shown in Fig. 10, when measurement errors are larger, HAO is least affected by AWGN and is also the best. With the experiment progressing, it shows an absolute advantage than -norm and LASSO. The experiment indicates that HAO is more effective than traditional methods -norm and LASSO to deal with the two-layer DN structure. Fig. 9Open in figure viewerPowerPoint with AWGN Fig. 10Open in figure viewerPowerPoint with AWGN To show the superiority of HAO in dealing with the specific topology structure, we perform data fitting on and . As can be seen in Fig. 11, HAO reconstructs topology faster than LASSO. As a result, the fitting accuracy of the HAO method is better than LASSO. However, to deal with the topology structure proposed in the paper, ROCs of (4) and (16) are applied to HAO. Next, we try to find the feasibility of HAO without ROCs. Fig. 12 shows that the data fitting of HAO without ROCs. The root mean square error of LASSO, HAO, and HAO without ROCs are 0.0483, 0.0228, and 0.1036, respectively. The results indicate that ROCs play an important role in topology construction between and . Fig. 11Open in figure viewerPowerPoint Data fitting of LASSO and HAO Fig. 
12: Data fitting of HAO without ROCs

5 Conclusion

In this paper, we introduce a new alternating optimisation method, HAO, and compare it with the traditional LASSO method, which uses a fixed value of the regularisation coefficient. The improvements made in this paper are: (i) we find an optimal regularisation coefficient; (ii) we use a different regularisation term; and (iii) the Hadamard product is used to separate a compound matrix so that TM and AM can be recovered simultaneously. As shown in Section 4, the improved method achieves superior recovery performance and data fitting, and can achieve full recovery of TM and AM with fewer samples than the traditional methods. As a result, the HAO method proposed for the two-layer structure performs best. There are a number of directions for future work: addressing different connectivity models, better optimisation techniques, tighter constraints, and experiments with a larger number of meter measurements in both noiseless and large-noise settings. The most notable shortcoming of this paper is that multi-layer network structures are usually encountered in practice; how to extend the two-layer approach to a multi-layer structure requires further research.

6 Acknowledgments

This work was supported by the Project Foundation of Shenzhen City (JCYJ20160422093217170) and the Natural Science Foundation of China (61601309).

8 Appendix

Lemma 1. Consider a matrix Y satisfying the stated conditions. Then the inverses in question exist.

Proof. The reversibility proof is based on contradiction. Assume there is a non-zero vector in the nullspace of Y. Then we have (29), from which it can be concluded that (30). Multiplying the second equality in (32) shows that the first term is non-negative under the stated assumptions; therefore it must be zero. From this we find that (31). Multiplying the first equality in (32) yields a further identity.
Its first term is non-negative due to the stated assumptions, and its second term is non-positive due to (33); therefore both terms must be zero, and hence the corresponding component vanishes. Substituting back into the second equality of (32) shows that the remaining component vanishes as well. This is a contradiction, since the vector was assumed to be non-zero. To prove the second portion, note that the stated relation holds, and hence (32). Using the second equality, one term can be expressed through the other; inserting this expression into the first equality gives (33). Notice that the resulting matrix is positive definite, hence invertible. Therefore, we have proved the existence of the inverses and the reversibility of Y. □

7 References

1 Stefan, M., Faschang, M., Cejka, S., et al.: 'Distribution grid topology validation and identification by graph-based load profile analysis'. 2018 IEEE Int. Conf. on Industrial Technology (ICIT), Lyon, France, 2018, pp. 1195–1200
2 Babakmehr, M., Simões, M.G., Wakin, M.B., et al.: 'Compressive sensing-based topology identification for smart grids', IEEE Trans. Ind. Inf., 2016, 12, (2), pp. 532–543
3 Pappu, S.J., Bhatt, N., Pasumarthy, R., et al.: 'Identifying topology of low voltage distribution networks based on smart meter data', IEEE Trans. Smart Grid, 2018, 9, (5), pp. 5113–5122
4 Arya, V., Seetharam, D., Kalyanaraman, S., et al.: 'Phase identification in smart grids'. 2011 IEEE Int. Conf. on Smart Grid Communications (SmartGridComm), Brussels, Belgium, October 2011, pp. 25–30
5 Arya, V., Jayram, T., Pal, S., et al.: 'Inferring connectivity model from meter measurements in distribution networks'. Proc. Fourth Int. Conf. on Future Energy Systems, Berkeley, CA, USA, 2013, pp. 173–182
6 Chen, Y., Chen, J., Jiao, H., et al.: 'Two-stage topology identification method for distribution network via clustering correction'. 2019 IEEE Innovative Smart Grid Technologies-Asia (ISGT Asia), Chengdu, China, 2019, pp. 280–284
7 Cavraro, G., Arghandeh, R.: 'Power distribution network topology detection with time-series signature verification method', IEEE Trans.
Power Syst., 2018, 33, (4), pp. 3500–3509
8 Zhang, M., Luan, W., Guo, S., et al.: 'Topology identification method of distribution network based on smart meter measurements'. 2018 China Int. Conf. on Electricity Distribution (CICED), Sep. 2018, pp. 372–376
9 Rui, M., Kun, Y., Xingying, C.: 'Topology identification in distribution network based on power injection measurements'. 2017 2nd Int. Conf. on Power and Renewable Energy (ICPRE), Chengdu, China, 2017, pp. 477–484
10 Vill, K., Rosin, A.: 'Identification of Estonian weak low voltage grid topologies'. 2017 IEEE Int. Conf. on Environment and Electrical Engineering and 2017 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I&CPS Europe), Milan, Italy, 2017, pp. 1–5
11 Yang, D., Zhou, S., et al.: 'A novel method for power grid topology identification based on incidence matrix simplification', East China Electr. Power, 2014, 42, (11), pp. 2254–2259
12 Zhang, S., Yan, Y., Bao, W., et al.: 'Network topology identification algorithm based on adjacency matrix'. 2017 IEEE Innovative Smart Grid Technologies-Asia (ISGT-Asia), Auckland, New Zealand, 2017, pp. 1–5
13 Wood, A.J., Wollenberg, B.F.: 'Power generation, operation, and control' (Wiley, Hoboken, NJ, USA, 2nd edn.)
14 Ardakanian, O., et al.: 'On identification of distribution grids', IEEE Trans. Control Netw. Syst., 2019, 6, (3), pp. 950–960, doi: 10.1109/TCNS.2019.2891002
15 Bazrafshan, M., Gatsis, N.: 'Comprehensive modeling of three-phase distribution systems via the bus AM', IEEE Trans. Power Syst., 2018, 33, (2), pp. 2015–2029, doi: 10.1109/TPWRS.2017.2728618
16 Mei, G., Wu, X., Wang, Y., et al.: 'Compressive-sensing-based structure identification for multilayer networks', IEEE Trans. Cybern., 2018, 48, (2), pp. 754–764

Volume 14, Issue 24, Special Issue: Advanced Data-Analytics for Power System Operation, Control and Enhanced Situational Awareness, December 2020, pp. 5851–5857
2015
Cited 3 times
[Effect of extraction and non-extraction treatment on frontal smiling esthetics: a meta-analysis].
To evaluate the effectiveness of tooth extraction and non-extraction orthodontic treatment on frontal smiling esthetics. A literature search was performed using the Wanfang database, the Chinese Biological Literature database, China National Knowledge Infrastructure, the Chinese Scientific Journals Database of VIP, Medline and the Cochrane Library, dating from the establishment of the databases to 31st August, 2014. Weighted mean difference (WMD) was calculated and meta-analysis was performed with Review Manager 5.2. A total of 8 controlled studies were included. The results of the meta-analysis showed no significant difference between extraction and non-extraction treatment on subjective evaluation of smile esthetics [5.74-7.05 for extraction; 5.53-7.02 for non-extraction; WMD=0.09, 95%CI (-0.28, 0.46), P=0.64], buccal corridor [0.12-0.19 for extraction; 0.11-0.18 for non-extraction; WMD=0.01, 95%CI (-0.00, 0.02), P=0.09], maxillary visual arch width [26.3-52.17 mm for extraction; 25.43-52.37 mm for non-extraction; WMD=-0.13, 95%CI (-1.01, 0.75), P=0.77] and smile height [5.7-10.39 mm for extraction; 5.4-9.97 mm for non-extraction; WMD=0.38, 95%CI (-0.27, 1.03), P=0.25]. Based on the results of this meta-analysis, it cannot be concluded from the present clinical evidence that extraction treatment affects frontal smiling esthetics. Given the small sample size and potential heterogeneity, more well-designed prospective studies should be performed in the future.
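The pooling step behind the reported WMDs can be sketched with standard inverse-variance fixed-effect meta-analysis (Review Manager also supports random-effects pooling; the per-study numbers below are hypothetical, not the studies' data):

```python
import math

def fixed_effect_wmd(studies):
    """Inverse-variance fixed-effect pooling of weighted mean differences.

    `studies` is a list of (mean_difference, standard_error) tuples, one
    per study. Each study is weighted by 1/SE^2; the pooled standard
    error is sqrt(1 / sum of weights). Returns the pooled WMD and its
    95% confidence interval.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Hypothetical per-study mean differences and standard errors
studies = [(0.10, 0.30), (0.05, 0.25), (0.15, 0.40)]
wmd, (lo, hi) = fixed_effect_wmd(studies)
print(f"WMD = {wmd:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval that straddles zero, as in the abstract's results, is what corresponds to the "no significant difference" conclusion.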
DOI: 10.2147/rmhp.s377936
2022
Influence of Different Protection States on the Mental Fatigue of Nurses During the COVID-19 Pandemic
COVID-19 has brought greater workload pressures to the medical field, such as medical staff being required to wear personal protective equipment (PPE). While PPE can protect the safety of staff during the pandemic, it can also accelerate the accumulation of fatigue among operators. This study explores the influence of different protection states on the mental fatigue of nurses. In this study, 10 participants (5 males and 5 females) were randomly selected among applicants to monitor mental fatigue during the nurses' daily work in four different PPE states (low temperature and low protection; low temperature and high protection; high temperature and low protection; high temperature and high protection). The NASA subjective mental fatigue scale was used for subjective evaluation. Reaction time, attention concentration, attention distribution, memory, and main task completion time were used for objective evaluation. The results demonstrated a significant difference in the effects of different protection states on mental fatigue. The state of high temperature and high protection had the greatest influence on mental fatigue, the state of low temperature and low protection had the least, and states of high (low) temperature and low (high) protection had intermediate effects on mental fatigue. Furthermore, the correlation between the subjective and objective fatigue indices was analyzed using a multiple regression model. This study clarified the influence of different protection states on the mental fatigue of nurses, and verified that nurses require more time and energy to complete the same work as before under high protection states. It provides a basis for evaluating the mental fatigue of nurses in the unique period of the COVID-19 pandemic and specific ideas for optimizing the nursing process.
2004
Cited 6 times
State of the Art and Trends in Database Research
This paper discusses the state of the art, the challenge problems that we face, and the future trends in database research field. It covers the hot topics such as information integration, stream data management, sensor database technology, XML data management, data grid, self-adaptation, moving object management, small-footprint database, and user interface.
2002
Cited 7 times
BEAM PHYSICS ISSUES FOR A POSSIBLE 2ND GENERATION LHC IR
We consider a possible 2nd generation IR for the LHC that would be designed to achieve at least a factor of two increase in luminosity beyond that reached with the original IRs. We discuss the optics issues, in particular those associated with larger bore quadrupoles with Nb3Sn superconductor, field quality issues, energy deposition issues and the limitations imposed on energy and luminosity reach.
DOI: 10.48550/arxiv.2301.03394
2023
The suppression of Finite Size Effect within a Few Lattices
Boundary modes localized on the boundaries of a finite-size lattice experience a finite-size effect (FSE) that can result in unwanted couplings, crosstalk, and the formation of gaps even in topological boundary modes. It is commonly believed that the FSE decays exponentially with the size of the system and thus requires many lattice sites before eventually becoming negligibly small. Here we identify a special type of FSE of some boundary modes that apparently vanishes at particular wave vectors along the boundary; the number of wave vectors where the FSE vanishes equals the number of lattice sites across the strip. We analytically demonstrate this type of FSE and its peculiar feature in a simple model. We also provide a physical system, consisting of a plasmonic sphere array, in which this FSE is present. Our work points to the possibility of almost arbitrary tuning of the FSE, which facilitates unprecedented manipulation of the coupling strength between modes or channels, such as the integration of multiple waveguides and photonic non-abelian braiding.
DOI: 10.21203/rs.3.rs-2532367/v1
2023
Individualized active surveillance for carbapenem-resistant microorganisms using Xpert Carba-R in intensive care units: A single center, before-after study
Abstract Background Carbapenem antibiotics are widely used in intensive care units (ICU), and the prevalence of carbapenem-resistant microorganisms (CRO) has increased, forming a major threat to inpatients that urgently requires improved surveillance. This study aimed to assess the role of individualized active surveillance of carbapenem resistance genes on CRO risk. Methods A total of 3,765 patients were admitted to the ICU of Zhongnan Hospital of Wuhan University between 2020 and 2022 (March 2020 to February 2021 in the first period and March 2021 to February 2022 in the second period). The presence of carbapenem resistance genes was monitored using Xpert Carba-R, and CRO incidence was assigned as the investigated outcome. Results Of 3,765 patients, 390 manifested the presence of CRO, representing a prevalence of 10.36%. Active surveillance was associated with a lower CRO risk (odds ratio [OR]: 0.77; 95% confidence interval [CI]: 0.62–0.95; P = 0.013), especially for carbapenem-resistant Acinetobacter + carbapenem-resistant Pseudomonas aeruginosa (OR: 0.79; 95% CI: 0.62–0.99; P = 0.043), carbapenem-resistant Klebsiella pneumoniae (OR: 0.56; 95% CI: 0.40–0.79; P = 0.001), and carbapenem-resistant Enterobacteriaceae (OR: 0.65; 95% CI: 0.47–0.90; P = 0.008). However, active surveillance was not associated with risk of carbapenem-resistant Acinetobacter (P = 0.140), carbapenem-resistant Pseudomonas aeruginosa (P = 0.161), carbapenem-resistant Enterobacteriaceae (except CRKP) (P = 0.259), or ICU stay (P = 0.743).
Moreover, there were significant differences between positive and negative active surveillance in high-risk patients with a CRO-positive culture (P < 0.001) or microorganism-positive culture (P < 0.001), time between ICU admission and CRO positivity (P < 0.001), length of hospital stay before surveillance (P = 0.002), carbapenem antibiotic use in the 90 days before surveillance (P = 0.001), corticosteroid use in the 90 days prior to surveillance (P = 0.028), and surgery in the 90 days before surveillance (P = 0.003). Conclusions Individualized active surveillance using Xpert Carba-R may be associated with a reduction in the overall CRO incidence in the ICU, especially for carbapenem-resistant Acinetobacter + carbapenem-resistant Pseudomonas aeruginosa, carbapenem-resistant Klebsiella pneumoniae, and carbapenem-resistant Enterobacteriaceae. Further prospective studies should be performed to verify these conclusions and guide further management of patients in the ICU.
DOI: 10.1117/12.2672641
2023
Operation and maintenance management and emergency response of data protection system in market supervision
Based on research into trends in market supervision scheduling automation, this study analyzes the function of establishing an operation and maintenance supervision system for market supervision scheduling automation, expounds its application value, puts forward the key points of its design, and provides the key points of its application. It is hoped that, through evaluation of the dispatching automation operation and maintenance supervision system, its design and application will be strengthened, and that a system suitable for the actual development of market supervision will be established, helping the popularization and application of such systems in market supervision.
2023
[Evaluation of the effect of simultaneous neuralized iliac bone flap on the preservation of lower lip and chin sensation during mandibular reconstruction].
To evaluate the effect of reconstructing the inferior alveolar nerve and preserving the sensation of the lower lip and chin when repairing mandibular defects with a simultaneously neuralized iliac bone flap. Patients with continuous mandibular defects requiring reconstruction were randomly assigned to the innervated (IN) group and the control (CO) group by random number table. In the IN group, the deep circumflex iliac artery and recipient vessels were anastomosed microscopically during mandible reconstruction, and the ilioinguinal nerve, mental nerve (MN) and inferior alveolar nerve (IAN) were anastomosed at the same time. In the CO group, only vascular anastomosis was performed, without nerve reconstruction. During the operation, nerve electrical activity after nerve anastomosis was detected with a nerve monitor, and the sensory recovery of the lower lip was recorded by two-point discrimination (TPD), current perception threshold (CPT) and Touch Test sensory evaluator (TTSE) tests. SPSS 26.0 was used for data analysis. According to the inclusion and exclusion criteria, a total of 20 patients were included, with 10 in each group. All flaps survived in both groups, no serious complications such as flap crisis occurred, and no obvious complications occurred at the donor site. The TPD, CPT and TTSE tests all indicated that the degree of postoperative hypoesthesia in the IN group was less (P<0.05). Simultaneous nerve anastomosis of a vascularized iliac bone flap can effectively preserve the sensation of the lower lip and improve patients' postoperative quality of life. It is a safe and effective technique.
DOI: 10.1038/s41598-023-36321-y
2023
Individualized active surveillance for carbapenem-resistant microorganisms using Xpert Carba-R in intensive care units
Carbapenem antibiotics are widely used in the ICU, and the prevalence of carbapenem-resistant microorganisms (CRO) has increased. This study aimed to assess the role of individualized active surveillance of carbapenem resistance genes, using Xpert Carba-R, on CRO risk. A total of 3,765 patients were admitted to the ICU of Zhongnan Hospital of Wuhan University between 2020 and 2022. The presence of carbapenem resistance genes was monitored using Xpert Carba-R, and CRO incidence was assigned as the investigated outcome. Of 3,765 patients, 390 manifested the presence of CRO, representing a prevalence of 10.36%. Active surveillance using Xpert Carba-R was associated with a lower CRO risk (odds ratio [OR]: 0.77; 95% confidence interval [CI] 0.62-0.95; P = 0.013), especially for carbapenem-resistant Acinetobacter + carbapenem-resistant Pseudomonas aeruginosa (OR: 0.79; 95% CI 0.62-0.99; P = 0.043), carbapenem-resistant Klebsiella pneumoniae (OR: 0.56; 95% CI 0.40-0.79; P = 0.001), and carbapenem-resistant Enterobacteriaceae (OR: 0.65; 95% CI 0.47-0.90; P = 0.008). Individualized active surveillance using Xpert Carba-R may be associated with a reduction in the overall CRO incidence in the ICU. Further prospective studies should be performed to verify these conclusions and guide further management of patients in the ICU.
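Odds ratios of the kind reported here can be reproduced from a 2x2 exposure-outcome table with the standard Woolf (log-OR) confidence interval; the counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Woolf 95% CI for a 2x2 table:

        exposed (e.g. surveilled):   a events, b non-events
        unexposed (not surveilled):  c events, d non-events
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) by Woolf's method
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts: 10/100 events with exposure, 20/100 without
or_, lo, hi = odds_ratio(10, 90, 20, 80)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An OR below 1 with a CI that excludes 1, as in the abstract's 0.77 (0.62-0.95), is what indicates a protective association.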
DOI: 10.48550/arxiv.2306.16893
2023
Traceable Group-Wise Self-Optimizing Feature Transformation Learning: A Dual Optimization Perspective
Feature transformation aims to reconstruct an effective representation space by mathematically refining the existing features. It serves as a pivotal approach to combat the curse of dimensionality, enhance model generalization, mitigate data sparsity, and extend the applicability of classical models. Existing research predominantly focuses on domain knowledge-based feature engineering or learning latent representations. However, these methods, while insightful, lack full automation and fail to yield a traceable and optimal representation space. An indispensable question arises: Can we concurrently address these limitations when reconstructing a feature space for a machine-learning task? Our initial work took a pioneering step towards this challenge by introducing a novel self-optimizing framework. This framework leverages the power of three cascading reinforced agents to automatically select candidate features and operations for generating improved feature transformation combinations. Despite the impressive strides made, there was room for enhancing its effectiveness and generalization capability. In this extended journal version, we advance our initial work from two distinct yet interconnected perspectives: 1) We propose a refinement of the original framework, which integrates a graph-based state representation method to capture the feature interactions more effectively and develop different Q-learning strategies to alleviate Q-value overestimation further. 2) We utilize a new optimization technique (actor-critic) to train the entire self-optimizing framework in order to accelerate the model convergence and improve the feature transformation performance. Finally, to validate the improved effectiveness and generalization capability of our framework, we perform extensive experiments and conduct comprehensive analyses.
DOI: 10.48550/arxiv.2309.04051
2023
Observation of Hybrid-Order Topological Pump in a Kekule-Textured Graphene Lattice
Thouless charge pumping provides an effective route for realizing topological particle transport. To date, first-order and higher-order topological pumps, exhibiting transitions of edge-bulk-edge and corner-bulk-corner states, respectively, have been observed in a variety of experimental platforms. Here, we propose the concept of a hybrid-order topological pump, which involves a transition of bulk, edge, and corner states simultaneously. More specifically, we consider a Kekulé-textured graphene lattice that features a tunable phase parameter. The finite sample with zigzag boundaries, where the corner configuration is abnormal and inaccessible by repeating unit cells, hosts topological responses at both the edges and corners. The former is protected by a nonzero winding number, while the latter can be explained by a nontrivial vector Chern number. Using acoustic experiments, we verify these nontrivial boundary landmarks and visualize the consequent hybrid-order topological pumping process directly. This work deepens our understanding of higher-order topological phases and broadens the scope of topological pumps.
DOI: 10.1016/j.annonc.2023.10.700
2023
464P The prevalence and correlates of frailty and pre-frailty in elderly patients with breast cancer: A cross-sectional study from China
The frailty status of elderly Chinese breast cancer patients has not been fully assessed. The aim of this study was to assess the prevalence of frailty and pre-frailty and associated factors in elderly Chinese patients with breast cancer. This is a prospective cross-sectional registry study. Breast cancer patients aged over 65 years were classified into robust (0 points), pre-frailty (1-2 points) and frailty (3-5 points) using the Frailty Screening Scale. The HADS, the sleep and pain subscale of the EORTC QLQ-C30 and the Charlson Comorbidity Index (CCI) were used to assess associated factors. Logistic regression model was used to analyse the factors related to frailty and pre-frailty. 481 elderly breast cancer patients treated at our hospital from October 2021 to November 2022 were finally analysed. The median age was 69 years. 76.7% of patients were in early-stage. The proportions of patients receiving surgery, chemotherapy, radiotherapy and endocrine therapy were 91.1%, 56.5%, 30.1%, and 65.3%, respectively. 75 (15.6%) patients met the diagnostic criteria for frailty and 257 (53.4%) patients were in pre-frailty state. Multivariate logistic regression analysis showed that independently associated risk factors for frailty included advanced tumours, more comorbidities, anxiety, insomnia and pain (all P<0.05). The latter three factors were also independently associated with pre-frailty (all P<0.05). 
Age, BMI, molecular subtype (not shown) and treatment were not significantly associated with frailty and pre-frailty (all P>0.05) (Table).

Table 464P: Multivariable analysis

                    Robust vs pre-frailty        Robust vs frailty
                    OR                   P       OR                   P
Age                 0.98 (0.94-1.03)     0.51    1.00 (0.92-1.09)     0.98
Stage: Early        -                    -       -                    -
Stage: Advanced     1.51 (0.79-2.87)     0.21    12.95 (4.38-38.31)   <0.01
Surgery             1.31 (0.57-2.99)     0.53    2.02 (0.51-8.09)     0.32
Chemotherapy        0.93 (0.56-1.52)     0.76    0.59 (0.26-1.36)     0.22
Radiotherapy        0.79 (0.47-1.33)     0.38    0.68 (0.27-1.69)     0.40
Endocrine therapy   1.11 (0.58-2.11)     0.76    0.48 (0.16-1.47)     0.20
CCI                 1.17 (0.98-1.41)     0.09    2.07 (1.39-3.08)     <0.01
Anxiety             4.00 (1.09-14.66)    0.04    7.45 (1.33-41.63)    0.02
Depression          1.12 (0.45-2.77)     0.81    0.52 (0.12-2.22)     0.38
Insomnia            1.01 (1.00-1.02)     0.01    1.02 (1.00-1.03)     0.01
Pain                1.02 (1.01-1.03)     0.01    1.03 (1.01-1.05)     <0.01

Frailty is not prevalent in elderly breast cancer patients, but most are in a pre-frailty state. Before making clinical decisions for these patients, it is necessary to evaluate the associated factors and to make appropriate interventions for controllable factors (such as anxiety, insomnia, comorbidities, etc.) to improve patients' tolerance of and compliance with tumour treatment.
DOI: 10.1109/rtss59052.2023.00054
2023
Brief Industry Paper: Evaluating Robustness of Deep Learning-Based Recommendation Systems Against Hardware Errors: A Case Study
Deep learning-based recommendation systems (DLRMs) are industry-scale recommendation models developed by Meta, designed to make use of both categorical and numerical inputs to make personalized recommendations. To serve billions of users in real time, DLRMs rely on high-performance hardware and accelerators within our data centers, optimizing for execution latency and recommendation quality. However, continuous technology scaling, expanding workloads, and increasing hardware heterogeneity can lead to an increased risk of hardware errors. Addressing this risk often involves introducing extra design redundancy, which can pose a non-negligible overhead in performance and latency. In this paper, we present a case study of evaluating DLRM robustness against hardware errors by performing an extensive error-injection campaign on a DLRM. Our findings unveil that the DLRM is notably robust to hardware errors, and that its embedding tables show especially strong robustness. Additionally, we explore a software-level error mitigation technique, activation clipping, which improves DLRM robustness further. This industrial case study of understanding and improving DLRM robustness can enable the system to continue to deliver timely recommendations even in the presence of hardware challenges, or reduce the latency overhead posed by design redundancy, enhancing overall recommendation system performance.
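Activation clipping, the mitigation explored in the paper, simply clamps each activation into a plausible range so that a value corrupted by a hardware fault cannot dominate downstream computation; the function name and the [-10, 10] range below are illustrative, not Meta's production setting.

```python
def clip_activations(acts, lo=-10.0, hi=10.0):
    """Clamp each activation into a plausible range.

    A single bit flip in a float's exponent can blow a value up by many
    orders of magnitude; clipping bounds the damage so that one corrupted
    activation cannot swamp the scores computed from it.
    """
    return [min(max(a, lo), hi) for a in acts]

# Two healthy activations pass through unchanged; two corrupted ones
# (huge magnitudes typical of exponent bit flips) are bounded.
print(clip_activations([0.3, -1.2, 6.5e12, -4.0e9]))
```

In a real model this clamp would be applied per layer with ranges calibrated from observed activation statistics rather than fixed constants.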
DOI: 10.1088/1742-6596/1865/4/042143
2021
Research on Combating Epidemics based on Differential Equations and Cellular Automata
Abstract The COVID-19 epidemic has swept the world, bringing panic, anxiety, and unease to everyone. The epidemic has not only put many countries in economic crisis, but may also put some small countries in danger of disappearing forever, and even make it impossible for large numbers of people to return home. Therefore, the World Health Organization (WHO) and countries around the world need to join hands and work together to further explore ways to effectively control the epidemic. We analyze the relevant data of the four disease states (susceptible, latent, asymptomatic, and recovered persons) and establish a differential equation model; combining the results of question 1, we obtain the trend of the number of asymptomatic infections (first rising and then falling) and specific figures: it peaked at about 2,733 people on the 42nd day after the outbreak, dropped to 10 people about 70 days after the outbreak, and decreased to 0 at about 80 days. We then simulated the spatial distribution of asymptomatic infections based on data reported by the Beijing Municipal Health Commission, combined with a cellular automaton model: a clustered state appeared in the first 10 days, gradually dispersing over time, with the distribution becoming roughly even after about 50 days. Finally, we summarize the applicable fields of this model in daily life and give effective, targeted suggestions for epidemic prevention and risk reduction.
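A compartmental differential-equation model of the kind described above can be sketched with a basic SEIR system (susceptible, exposed/latent, infectious, recovered) integrated by forward Euler; the rate parameters below are illustrative, not the values fitted in the paper.

```python
def simulate_seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of a basic SEIR compartment model.

    Dynamics (all variables are population fractions):
        s' = -beta*s*i          new exposures
        e' =  beta*s*i - sigma*e   latency with mean 1/sigma days
        i' =  sigma*e - gamma*i    infectious for mean 1/gamma days
        r' =  gamma*i
    Returns the (s, e, i, r) trajectory sampled every dt days.
    """
    s, e, i, r = s0, e0, i0, r0
    history = [(s, e, i, r)]
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        de = beta * s * i - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
        history.append((s, e, i, r))
    return history

# Illustrative run: the infectious fraction rises, peaks, then falls,
# mirroring the rise-then-fall trend described in the abstract.
hist = simulate_seir(beta=0.5, sigma=0.2, gamma=0.1,
                     s0=0.99, e0=0.01, i0=0.0, r0=0.0, days=160)
peak_i = max(i for _, _, i, _ in hist)
```

A finer integrator (e.g. Runge-Kutta) or a stochastic cellular-automaton layer, as in the paper, would refine this sketch without changing its qualitative shape.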
DOI: 10.1109/icd.2018.8468442
2018
Simulation of Interface Charge Behaviors in HVDC Cable Accessory Based on Bipolar Carrier Transportation Model
DOI: 10.48550/arxiv.1906.02754
2019
Simplified Template Cross Sections - Stage 1.1
Simplified Template Cross Sections (STXS) have been adopted by the LHC experiments as a common framework for Higgs measurements. Their purpose is to reduce the theoretical uncertainties that are directly folded into the measurements as much as possible, while at the same time allowing for the combination of the measurements between different decay channels as well as between experiments. We report the complete, revised definition of the STXS kinematic bins (stage 1.1), which are to be used for the upcoming measurements by the ATLAS and CMS experiments using the full LHC Run 2 datasets. The main focus is on the three dominant Higgs production processes, namely gluon-fusion, vector-boson fusion, and in association with a vector boson. We also comment briefly on the treatment of other production modes.
DOI: 10.48550/arxiv.2203.10922
2022
Who Should Review Your Proposal? Interdisciplinary Topic Path Detection for Research Proposals
The peer merit review of research proposals has been the major mechanism to decide grant awards. Nowadays, research proposals have become increasingly interdisciplinary. It has been a longstanding challenge to assign proposals to appropriate reviewers. One of the critical steps in reviewer assignment is to generate accurate interdisciplinary topic labels for proposals. Existing systems mainly collect topic labels manually reported by discipline investigators. However, such human-reported labels can be non-accurate and incomplete. What role can AI play in developing a fair and precise proposal review system? In this evidential study, we collaborate with the National Science Foundation of China to address the task of automated interdisciplinary topic path detection. For this purpose, we develop a deep Hierarchical Interdisciplinary Research Proposal Classification Network (HIRPCN). We first propose a hierarchical transformer to extract the textual semantic information of proposals. We then design an interdisciplinary graph and leverage GNNs to learn representations of each discipline in order to extract interdisciplinary knowledge. After extracting the semantic and interdisciplinary knowledge, we design a level-wise prediction component to fuse the two types of knowledge representations and detect interdisciplinary topic paths for each proposal. We conduct extensive experiments and expert evaluations on three real-world datasets to demonstrate the effectiveness of our proposed model.
DOI: 10.1002/cbdv.202200218
2022
Neuroprotective Alkamides from the Aerial Parts of<i>Achillea alpina</i>L.
Three new alkamides, achilleamide B-D (1-3) along with five known alkamides (4-8) were isolated from the aerial parts of Achillea alpina L. Structures were elucidated by spectroscopic analysis. Modified Mosher's method and electronic circular dichroism (ECD) calculations were introduced for the absolute configuration of 3. The neuroprotective effects of all the compounds were evaluated by 6-hydroxydopamine (6-OHDA)-induced cell death in human neuroblastoma SH-SY5Y cells, with concentration for 50 % of maximal effect (EC50 ) values of 3.16-24.75 μM, and the structure-activity relationship was conducted.
DOI: 10.3389/fnbot.2022.971205
2022
An online human–robot collaborative grinding state recognition approach based on contact dynamics and LSTM
Collaborative state recognition is a critical issue for physical human-robot collaboration (PHRC). This paper proposes a contact dynamics-based state recognition method to identify the human-robot collaborative grinding state. The main idea of the proposed approach is to distinguish between the human-robot contact and the robot-environment contact. To achieve this, dynamic models of both these contacts are first established to identify the difference in dynamics between the human-robot contact and the robot-environment contact. Considering the reaction speed required for human-robot collaborative state recognition, feature selections based on Spearman's correlation and random forest recursive feature elimination are conducted to reduce data redundancy and computational burden. Long short-term memory (LSTM) is then used to construct a collaborative state classifier. Experimental results illustrate that the proposed method can achieve a recognition accuracy of 97% in a period of 5 ms and 99% in a period of 40 ms.
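The Spearman-correlation screening step mentioned above can be sketched as Pearson correlation computed on ranks; this minimal version assigns ranks naively (no averaging of tied ranks) and the function name is illustrative.

```python
def spearman_rho(x, y):
    """Spearman rank correlation between two equal-length sequences.

    Used here in the spirit of the paper's feature selection: features
    whose |rho| against the contact label is near zero can be dropped
    before training the classifier. Ties are ranked in encounter order
    rather than averaged, so this is exact only for tie-free data.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Any monotone relationship scores +/-1 regardless of its shape:
print(spearman_rho([1.0, 2.0, 3.0, 4.0], [1.0, 4.0, 9.0, 16.0]))  # 1.0
```

The recursive-feature-elimination stage would then repeatedly retrain a random forest and drop the least important of the surviving features.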
DOI: 10.1109/bmei.2012.6512919
2012
Multiscale entropy based analysis of HRV during sleep
The potential application of multiscale entropy (MSE) to analyzing heart rate variability (HRV) during different sleep stages, i.e., wake, rapid eye movement (REM) sleep, light sleep, and deep sleep, was investigated. Five-minute RR sequences from the four sleep stages were analyzed with MSE at three scales (MSE1, MSE2 and MSE3). The results demonstrated that the complexity of heart rate (HR) in non-REM (NREM) sleep is significantly higher (p<0.01) than in wake and REM, and that the complexity becomes higher as sleep deepens (p<0.01). MSE3, with r=0.25*STD, proved to be the most powerful entropy measure for classifying the different sleep stages.
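The MSE computation summarized above has two steps per scale: coarse-grain the RR series, then compute sample entropy. A minimal numpy sketch (the tolerance r is expressed as a fraction of the series' standard deviation, matching the r=0.25*STD convention used for MSE3; function names and the m=2 default are conventional choices, not taken from the paper):

```python
import numpy as np

def coarse_grain(x, scale):
    """MSE step 1: average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log of the conditional probability that template vectors
    matching for m points (Chebyshev distance <= r*std) also match for m+1 points."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def match_pairs(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        # count matching pairs, excluding self-matches on the diagonal
        return (np.sum(d <= tol) - len(templ)) / 2

    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

MSE at scale s is then `sample_entropy(coarse_grain(rr, s))`.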
DOI: 10.1007/978-981-10-6232-2_50
2017
An ECG-Derived Respiration Method Based on Signal Reconstruction of R, S Amplitudes and Filtering
An ECG-derived respiration (EDR) algorithm based on signal reconstruction and filtering is presented and applied to derive respiratory signals from single-lead ECG. The ECG features, R-peak amplitude, S-peak amplitude, and R-peak position, are used to reconstruct the signal by cubic spline interpolation. The EDR signal is finally obtained by applying a Kaiser filter to the reconstructed signal. The method is evaluated on data from the MIT-BIH polysomnographic database and validated against a "gold-standard" respiratory signal obtained from simultaneously recorded respiration data. The correlation coefficient (C) and magnitude-squared coherence coefficient (MSC) are used to assess performance. The statistical differences between the presented method and EDR methods based on wavelets and empirical mode decomposition (EMD) are significant, indicating that the algorithm introduced in this article outperforms the others in extracting respiratory signals from single-lead ECGs.
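A sketch of the reconstruction step under stated assumptions: hypothetical R-peak positions and amplitudes are interpolated with a cubic spline onto an even 4 Hz grid (the paper additionally uses S-peak amplitudes and applies a Kaiser filter afterwards, both omitted here).

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical R-peak sample positions (fs = 250 Hz) and beat-to-beat R amplitudes.
fs = 250.0
r_pos = np.array([200, 410, 615, 830, 1040, 1255])
r_amp = np.array([1.02, 0.97, 0.90, 0.95, 1.01, 0.98])

# Reconstruct an evenly sampled EDR surrogate: cubic-spline interpolation of the
# R amplitudes over beat times, resampled at 4 Hz (a common EDR rate).
t_beats = r_pos / fs
t_edr = np.arange(t_beats[0], t_beats[-1], 0.25)
edr = CubicSpline(t_beats, r_amp)(t_edr)
```

A Kaiser-window band-pass around typical respiratory frequencies would then be applied to `edr`.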
DOI: 10.48550/arxiv.1709.02363
2017
Topologically Charged Nodal Surface
We report the existence of a topologically charged nodal surface, a band degeneracy on a two-dimensional surface in momentum space that carries a topological charge. We develop a Hamiltonian for such a charged nodal surface and show that it can be implemented in a tight-binding model as well as in an acoustic metamaterial. We also identify a topological phase transition through which the charge of the nodal surface changes by absorbing or emitting an integer number of Weyl points. Our result indicates that in band theory, topologically charged objects are not restricted to zero dimensions, as in a Weyl point, pointing to previously unexplored opportunities for the design of topological materials.
2017
Advances in application of plant growth regulators in walnut production.
2009
Comparison of capsanthin extraction methods
Solvent extraction and supercritical CO2 fluid extraction of capsanthin were compared to determine the method giving the highest extraction rate and color value. The results show that the optimum process is supercritical CO2 fluid extraction, with an extraction time of 2 h, an extraction temperature of 50 °C, an extraction pressure of 22 MPa, and a separation temperature of 50 °C, yielding an extraction rate of 5.19%.
2019
Simplified Template Cross Sections - Stage 1.1
Simplified Template Cross Sections (STXS) have been adopted by the LHC experiments as a common framework for Higgs measurements. Their purpose is to reduce the theoretical uncertainties that are directly folded into the measurements as much as possible, while at the same time allowing for the combination of the measurements between different decay channels as well as between experiments. We report the complete, revised definition of the STXS kinematic bins (stage 1.1), which are to be used for the upcoming measurements by the ATLAS and CMS experiments using the full LHC Run 2 datasets. The main focus is on the three dominant Higgs production processes, namely gluon-fusion, vector-boson fusion, and in association with a vector boson. We also comment briefly on the treatment of other production modes.
DOI: 10.1109/pac.2001.987258
2002
Beam-beam interactions at the Tevatron in Run IIa
The Tevatron in Run IIa will operate with three trains of twelve bunches each. The impact of the long-range interactions on beam stability will be more significant than in Run I. We study these beam-beam interactions (head-on and long-range) with particle tracking using two different codes. The model includes machine nonlinearities such as the field errors of the Interaction Region quadrupoles and the chromaticity sextupoles. Tune footprints and dynamic apertures are calculated for different bunches in a train.
DOI: 10.22323/1.395.0504
2021
Reconstruction of antinucleus-annihilation events in the GAPS experiment
The General Antiparticle Spectrometer (GAPS) experiment is designed to detect low-energy (< 0.25 GeV/n) cosmic-ray antinuclei as indirect signatures of dark matter. Several beyond-the-Standard-Model scenarios predict a large antideuteron flux from dark matter decay or annihilation compared to the astrophysical background. The GAPS experiment will perform such measurements using long-duration balloon flights over Antarctica, beginning in the 2022/23 austral summer. The experimental apparatus consists of ten planes of Si(Li) detectors surrounded by a time-of-flight system made of plastic scintillators. The detection of the primary antinucleus relies on the reconstruction of its annihilation products: the low-energy antinucleus is captured by an atom of the detector material, forming an exotic atom that de-excites by emitting characteristic X-rays. Finally, the antinucleus undergoes nuclear annihilation, producing a "star" of pions and protons emitted from the annihilation vertex. Several algorithms were developed to determine the annihilation vertex position and to reconstruct the topology of the primary and secondary particles. An overview of the event reconstruction techniques and their performance, based on detailed Monte Carlo simulation studies, is presented in this contribution.
DOI: 10.1007/978-3-319-27122-4_20
2015
SHB+-Tree: A Segmentation Hybrid Index Structure for Temporal Data
Temporal indexes provide an important way to accelerate query performance in temporal databases. However, current temporal indexes cannot support a variety of queries well, and it is hard to balance query-execution efficiency against the cost of index construction and maintenance. This paper proposes SHB+-Tree, a novel segmented hybrid index for temporal data. First, the temporal data stored in a temporal table is separated into fragments according to time order. In each segment, a hybrid index is constructed that combines a temporal index and an object index sharing the same temporal data. By employing a segmented storage strategy and bottom-up index construction for every part of the hybrid index, the approach greatly improves construction and maintenance performance. Experimental results on a benchmark data set verify the effectiveness and efficiency of the proposed method.
DOI: 10.1039/9781782623991-00273
2016
Chapter 10. Superhydrophobic/Superhydrophilic Property in Functionally Cooperated Smart Device
Surfaces with special wettability are commonly seen in nature. By imitating them, scientists have fabricated a wide variety of materials that show great potential for applications, especially in smart devices. A smart device is often a complex system consisting of two or more functional materials or surfaces that realize a given intention in an orderly manner; we call such a system a functionally cooperated smart device. In this chapter, we review the applications of superhydrophobic/superhydrophilic properties in functionally cooperated smart devices.
2016
Sea-surface electromagnetic scattering calculation based on parallel two-scale ray tracing
2016
Effect of sowing date on the root quality of Platycodon grandiflorum grown in Henan Province
2016
Nonlinear dynamic behavior and stability prediction of gas-lubricated hydrodynamic bearings
2016
Effects of occupational exposure to coal tar pitch on workers' health
2016
Hyperbolic Weyl points in reciprocal chiral metamaterials
2016
Effect of the poling process on resistive switching in Au/BiFeO3/SrRuO3 structures
DOI: 10.3760/cma.j.issn.1009-9158.2016.04.013
2016
Evaluation of clinical application of different microbial automated inoculation systems
Objectives: To study the performance of different microbial automated inoculation systems and to evaluate the Probact microbial automated inoculation and incubation system (Probact system) and its applications in the clinical microbiology laboratory. Methods: A total of 160 clinical specimens, including respiratory secretions (n=61), urine (n=49), and feces (n=50), submitted to the Clinical Microbiology Laboratory of Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, from February 2015 to April 2015 were evaluated. The specimens were processed with the conventional manual method, the Probact automated inoculation system, and the PREVI Isola Inoculator. The number of bacterial species recovered, the number of effectively isolated colonies, the total number of colonies recovered per plate, and the time to process the 160 specimens were compared across the three methods. The Wilcoxon signed-rank test and the Kruskal-Wallis rank sum test were used for statistical analysis. Results: The Probact system recovered significantly fewer bacterial species (respiratory specimens 3.41±1.40, urine 1.92±0.86, and feces 1.16±0.79) than the Isola Inoculator (respiratory specimens 3.75±1.29, urine 2.24±0.97, and feces 1.92±0.72) (P=0.006, 0.011, <0.001). Compared to the manual method, the Probact system recovered fewer bacterial species from respiratory specimens (manual 3.85±1.38) but more from feces (manual 0.80±0.81) (P<0.001); there was no significant difference for urine (manual 1.84±1.23) (P=0.266). As for the number of isolated colonies, the Probact system (respiratory specimens 12.16±7.72, urine 2.71±4.24, and feces 5.40±5.04) had significantly smaller numbers than the Isola Inoculator (respiratory specimens 16.56±5.76, urine 4.35±4.89, and feces 8.40±3.70) (P<0.001, 0.007, 0.003).
However, both systems isolated more colonies than the manual method (respiratory specimens 11.30±8.42, urine 2.67±4.34, and feces 1.90±3.90), and the difference was significant for fecal specimens (P<0.001). Regarding the total number of colonies recovered, the Isola Inoculator recovered more than the Probact system for fecal specimens, but there were no significant differences for respiratory or urine specimens (P=0.524, 0.738). Compared with the manual method, the Probact system recovered significantly more colonies for respiratory and fecal specimens (P<0.001). The total time for processing the 160 specimens was shortest for the manual method (281 min), followed by the Probact system (419 min) and the Isola Inoculator (495 min). Conclusions: The performance of the Probact system is better than the manual method but not superior to the Isola Inoculator. The Probact system can meet clinical needs in terms of full automation and standardization of specimen inoculation, and it prevents the processing bias introduced by laboratory staff using the manual method. (Chin J Lab Med, 2016, 39: 262-266) Key words: Automation; Bacteriological techniques; Culture techniques
DOI: 10.1109/bcgin.2012.156
2012
The Homogeneity and Heterogeneity Hypothesis Test of the Relative Risk
Relative risk (RR) plays an important role in identifying and assessing diseases in case-control association studies, but the homogeneity and heterogeneity of the RR are often ignored in M-stratum 2×2 table and cohort (2×k table) studies. This paper proposes a hypothesis test for the homogeneity and heterogeneity of the RR based on Mantel-Haenszel (M-H) methods.
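A common way to test RR homogeneity across M strata is an inverse-variance-weighted Q statistic on the stratum log relative risks; this numpy sketch is one standard formulation, not necessarily the exact M-H-based statistic the paper proposes.

```python
import numpy as np

def rr_homogeneity_Q(strata):
    """Cochran-style Q statistic for homogeneity of relative risks across strata.
    Each stratum is (a, n1, b, n0): exposed events/total, unexposed events/total.
    Under homogeneity, Q is approximately chi-square with (#strata - 1) df."""
    log_rr, w = [], []
    for a, n1, b, n0 in strata:
        lr = np.log((a / n1) / (b / n0))
        var = 1 / a - 1 / n1 + 1 / b - 1 / n0   # delta-method variance of log RR
        log_rr.append(lr)
        w.append(1 / var)
    log_rr, w = np.array(log_rr), np.array(w)
    pooled = np.sum(w * log_rr) / np.sum(w)     # inverse-variance pooled log RR
    return float(np.sum(w * (log_rr - pooled) ** 2))
```

Identical stratum risks give Q = 0; heterogeneous strata inflate Q.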
DOI: 10.4028/www.scientific.net/amr.765-767.2668
2013
Comprehensive Spectral Analysis of HRV during Sleep and their Application in Sleep Staging
Most studies of the spectral features of HRV during sleep divide the total frequency band roughly into low frequency (LF, 0.04~0.15 Hz) and high frequency (HF, 0.15~0.4 Hz) and are limited to a few measures such as the power in LF and HF or their ratio. To make fuller use of HRV, more comprehensive spectral features are evaluated in this paper. LF is further divided into true LF (0.04~0.1 Hz) and medium frequency (0.1~0.15 Hz). Spectral power, mean frequency, and spectral entropy of the different bands, fractal dimension, and the peak in HF (20 measures in total) were calculated for wake, REM, light sleep, and deep sleep. The significance of each feature across sleep stages was evaluated. The random forest method was adopted for sleep staging and for ranking feature importance. The results suggest that almost all the newly proposed features show significant differences across sleep stages and can notably improve sleep stage classification performance. Our study provides new features for ECG-based sleep stage classification.
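The per-band measures described above (band power, mean frequency, spectral entropy) can be sketched as follows given a precomputed PSD; the band edges follow the paper's LF/MF/HF split, and the entropy is normalized to [0, 1] (the function name is illustrative, not from the paper).

```python
import numpy as np

# Band edges from the paper: true LF, medium frequency, HF (in Hz).
BANDS = {"LF": (0.04, 0.1), "MF": (0.1, 0.15), "HF": (0.15, 0.4)}

def band_features(psd, freqs, band):
    """Power, mean frequency, and normalized spectral entropy of a PSD within a band."""
    m = (freqs >= band[0]) & (freqs < band[1])
    p, f = psd[m], freqs[m]
    power = p.sum()
    mean_freq = (p * f).sum() / power
    q = p / power                                       # in-band spectrum as a distribution
    entropy = -(q * np.log(q)).sum() / np.log(len(q))   # 1.0 for a flat spectrum
    return power, mean_freq, entropy
```

The PSD itself could come from any standard estimator (e.g. a Welch periodogram of the resampled RR series).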
DOI: 10.1109/csa.2013.55
2013
A Survey on GPU Techniques in Astronomical Data Processing
When the GeForce 256 was issued in 1999, NVIDIA first proposed the concept of the GPU (Graphics Processing Unit), and a large number of complex application requirements have since made the whole industry flourish. This paper first presents the development of the GPU, then briefly introduces and compares two popular GPU development platforms, CUDA (Compute Unified Device Architecture) C and OpenCL (Open Computing Language), and finally summarizes successful applications of GPU computing in astronomy. In the conclusions, it also notes that problems concerning branch prediction capability, larger caches and shared memory, and thread granularity may be addressed in further research on GPU computing.
DOI: 10.2991/icacsei.2013.87
2013
A Novel Level Set Method for Ultrasonic Cardiogram Segmentation Based On Chan-Vese Model
The level set method based on the Chan-Vese (C-V) model is widely used in image processing and computer vision. However, the C-V model has drawbacks when processing ultrasonic cardiogram (UCG) images: accuracy is affected by noise and speckle, and the re-initialization required during level set evolution causes numerical errors and is time consuming. Therefore, a novel level set method based on the C-V model is proposed in this paper. First, the C-V partial differential equation (PDE) is improved. Second, three signed-distance penalizing energy functions are analyzed and compared, and the one that best forces the level set function (LSF) to stay close to a signed distance function is chosen. Experimental results show that the proposed method not only eliminates the effect of speckle on UCG image segmentation but also reduces the computational cost and avoids the numerical errors caused by re-initialization. Moreover, the obtained contour curve is much smoother.
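For context, the best-known signed-distance penalty of this kind (Li et al.'s distance-regularization term; whether it is among the three compared here is an assumption) penalizes deviation of the LSF gradient magnitude from 1:

```latex
P(\phi) \;=\; \int_{\Omega} \frac{1}{2}\left(\lvert \nabla\phi \rvert - 1\right)^{2} \, d\mathbf{x}
```

Minimizing this term during evolution keeps φ close to a signed distance function (for which |∇φ| = 1 everywhere), removing the need for periodic re-initialization.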
2011
An Analysis of the Employment Situation of Students Majoring Statistics Based on Web Survey
At present, the employment of college students is a major problem for college teaching and student affairs. Because analyses of the employment situation of statistics majors are lacking, the employment situation is studied quantitatively here using online recruitment data. The characteristics of employing organizations and the types of talent they need are analyzed from objective data, with an appropriate analytical method and convincing results. The paper provides a new way for educators to research the employment situation of college students and the demands of employers.
2011
Ecosystem Reconstruction Model of Degraded Grasslands in Mixed Cropland——Grassland Zone in Northern Shaanxi Province
Based on a qualitative description of the driving factors for grassland degradation in the mixed cropland-grassland region of Northern Shaanxi Province, we chose the dominant factors that were closely correlated with grassland degeneration and relatively easy to acquire, and identified the most dominant factors affecting grassland degradation using principal component analysis and multiple linear regression analysis. For those selected factors, we propose prevention measures to control grassland degeneration in Northern Shaanxi Province, and we suggest concrete countermeasures based on moisture conditions and the severity of degradation. Finally, we propose a model for reconstructing the degraded ecosystem in Northern Shaanxi Province.
2012
Synthesis of high-molecular-weight linear poly(p-phenylene sulfide) (HMW PPS) resin and study of solvent and additive recovery
2013
Search for the Higgs boson decaying to four leptons in the ATLAS detector at LHC leading to the observation of a new particle compatible with the Higgs Boson
This thesis concerns the observation of a new particle in the search for the Higgs boson decaying into two Z bosons, which themselves decay into four leptons, with the ATLAS detector at the LHC. The data used were collected by the ATLAS experiment during 2011 and 2012 and correspond to a luminosity of 4.8 fb⁻¹ at a center-of-mass energy of 7 TeV and 5.8 fb⁻¹ at 8 TeV. The characteristics of this boson are compatible with those of the Standard Model Higgs boson with a mass of 126.5 GeV. A detailed study of the data-driven estimation of the backgrounds from the Z+jets and t-tbar channels is presented. The analysis is updated using all the data collected in 2011 and 2012, confirming the presence of a Higgs boson at this mass. The thesis also contains performance studies related to the muon spectrometer: - The precision of the measurement of the wire positions of the MDT drift chambers, obtained using X-ray tomograph data, is exploited to improve muon reconstruction. - The muon term in the calculation of the missing transverse energy is optimized and validated. - An improvement of muon reconstruction in the forward region (pseudorapidity |η| > 2.5) is presented, which combines tracks reconstructed in the spectrometer with tracks formed from pixel hits; this combination improves the resolution on the muon impact parameter. - The impact of radiative photon emission by muons on the reconstruction of Z bosons and on the Higgs-to-four-lepton analysis is presented.
2013
Collapsibility of odds ratios for a continuous outcome variable
The sign of an association measure between two variables may be strongly affected, and even reversed, after marginalization over a background variable; this is the well-known Yule-Simpson paradox. Odds ratios are strongly collapsible over a background variable if they remain unchanged no matter how the background variable is partially pooled. In this paper, we first give definitions and notation for odds ratios between a dichotomous explanatory variable and a continuous response variable. We then present conditions for simple collapsibility of odds ratios. Finally, necessary and sufficient conditions are given for strong collapsibility of odds ratios for a continuous outcome variable.
2010
Evaluation of medium and long term efficacy of interventional therapy in 60 patients with hysteromyoma
Objective: To evaluate the medium- and long-term efficacy and safety of uterine artery embolization in the treatment of hysteromyoma. Methods: 60 patients with hysteromyoma underwent uterine artery embolization from January 2007 to January 2008; the volumes of the uterus and hysteromyoma were measured before operation and in the third, sixth, and twelfth months after operation, and the changes were recorded. Results: The volumes of the uterus and hysteromyoma decreased gradually in the third, sixth, and twelfth months after operation (P<0.05). Clinical symptoms were relieved after operation, especially in patients with menstrual changes and increased menstrual volume. There was no significant difference in FSH, LH, or estrogen levels before and after operation, and no severe adverse reactions occurred. Conclusion: Uterine artery embolization is effective with few complications in the treatment of hysteromyoma and is worth popularizing.
2010
Moving Target Detection and Tracking Based on Frame Difference Method Combined Mean-shift
The traditional mean-shift algorithm is simple and fast, but it is only semi-automatic: the search window selecting the target must be determined at the initial frame, and the kernel bandwidth is fixed and cannot adapt in real time to changes in target size, so the target is easily lost during tracking. Here the frame difference method is applied first to detect the target and obtain the target window; mean-shift tracking is then integrated, and whether to acquire a new target template is decided by setting the relative amount r of ρ^(y). The result is automatic mean-shift tracking that adapts to changes in target size. Experiments indicate that this method achieves high tracking accuracy and good real-time performance.
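The detection stage described above, differencing consecutive frames and thresholding to obtain the initial target window, can be sketched in numpy (the threshold value and function name are illustrative; a mean-shift tracker would then be initialized on the returned box).

```python
import numpy as np

def detect_moving_target(prev, curr, thresh=25):
    """Frame-difference detection: threshold |curr - prev| and return the
    bounding box (x0, y0, x1, y1) of changed pixels, or None if nothing moved."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    if not diff.any():
        return None
    ys, xs = np.nonzero(diff)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1
```

The box would seed the mean-shift search window, and re-detection can refresh the template when the match score ρ^(y) degrades, as the paper describes.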
DOI: 10.3969/j.issn.1006-267x.2017.05.018
2017
Effects of low-molecular-weight chitosan oligosaccharides on laying performance, egg quality, serum biochemical indices, cecal microbial counts, and splenic interleukin-2 and tumor necrosis factor-α gene expression in laying hens
This experiment investigated the effects of dietary supplementation with different levels of low-molecular-weight (1,000 u) chitosan oligosaccharides (COS) on laying performance, egg quality, serum biochemical indices, cecal microbial counts, and splenic interleukin-2 (IL-2) and tumor necrosis factor-α (TNF-α) gene expression in laying hens. Six hundred 58-week-old Hy-Line Brown laying hens of similar body weight and laying rate were randomly divided into 4 groups, with 5 replicates per group and 30 hens per replicate. The control group was fed a basal diet, and the treatment groups were fed the basal diet supplemented with 300, 600, or 900 mg/kg COS. The pre-trial period was 7 d and the trial period 42 d. The results showed that: 1) the laying rates of the 300, 600, and 900 mg/kg COS groups were higher than that of the control group by 4.52% (P 0.05) and 4.08% (P>0.05). 2) At the end of weeks 3 and 6, the Haugh units of eggs in the 600 and 900 mg/kg COS groups were higher than the control by 6.87%, 6.69% and 6.47%, 6.60%, respectively (P < 0.05). 3) Compared with the control, 600 and 900 mg/kg COS significantly reduced serum glucose and cholesterol contents and aspartate aminotransferase activity (P < 0.05). 4) Compared with the control, 600 and 900 mg/kg COS significantly increased cecal Bifidobacterium and Lactobacillus counts (P < 0.05) and significantly decreased cecal Staphylococcus aureus counts (P < 0.05). 5) Compared with the control, 300 and 600 mg/kg COS significantly increased splenic IL-2 mRNA expression (P < 0.05), and 600 mg/kg COS significantly increased splenic TNF-α mRNA expression (P < 0.05). In conclusion, dietary COS increased the laying rate and Haugh unit, modulated the gut microbiota, and enhanced immunity in laying hens; the optimal supplementation level is 600 mg/kg.
2017
New Higgs boson results from CMS
DOI: 10.12783/dteees/edep2017/15573
2017
Application of Resin in Advanced Treatment of Coal Chemical Industry Wastewater
The adsorption behavior of NDA-150 resin toward coal chemical industry wastewater was studied under different conditions by static adsorption tests. The results showed that NDA-150 resin had a good adsorptive effect on coal chemical wastewater: after treatment, the removal rate of chemical oxygen demand (COD) exceeded 82%. The used resin could be regenerated, after which the COD removal rate still reached 75%, effectively realizing the advanced treatment of coal chemical industry wastewater.
2009
Research of the Patients with Vascular Cognitive Impairment no Dementia in Terms of Neuropsychology and the Clinical Study of Huperzine A in the Treatment
Aim: To investigate the neuropsychological characteristics of patients with vascular cognitive impairment no dementia (VCIND) and to observe the efficacy of huperzine A in treating VCIND. Methods: 64 patients with VCIND and 42 normal controls were examined with neuropsychological tests, including the mini-mental state examination (MMSE) and the clock drawing test (CDT). The 64 patients with VCIND were randomly divided into 2 groups, a huperzine A treated group and a control group, each treated for 8 weeks; MMSE and CDT were re-examined in the 4th and 8th weeks. Results: ① The CDT and MMSE scores in the VCIND group were significantly lower than those in the control group (P<0.01). The scores on MMSE subitems including time orientation, place orientation, calculation ability, short-term memory, language repetition, reading comprehension, language expression, and figure copying were lower in patients with VCIND than in normal subjects (all P<0.01). ② After 8 weeks, the MMSE and CDT scores of the huperzine A treated group were more improved than at baseline and than in the control group. Conclusion: ① The combination of MMSE and CDT can be used for early detection of cognitive impairment in VCIND patients. ② Huperzine A can improve the cognitive function of patients with VCIND.
2009
STUDY ON MAKING THE BEST GEL OF AMIDATED LOW METHOXYL PECTINS
High-methoxyl pectins (HMP) were used as the raw material to prepare amidated low-methoxyl pectins (ALMP). The breaking pressure of the gel was used as the index of gel quality. The results showed that the best gel was obtained with 1.4% ALMP, 50 mg Ca2+/g ALMP, and 30% sucrose dissolved in deionized water at pH 3.6.
DOI: 10.1016/j.bse.2022.104381
2022
Chemical constituents from the aerial parts of Achillea alpina and their chemotaxonomic significance
Phytochemical investigation of the aerial parts of Achillea alpina led to the isolation of sixteen compounds, including six sesquiterpenes (1-6), two polyacetylenic alcohols (7-8), three lignans (9-11), and five fatty acids (12-16). Their structures were identified by spectroscopic analysis (UV, IR, MS and NMR) and by comparison with data reported in the literature. Among them, five compounds (10-14) were obtained from the family Compositae for the first time, six compounds (1-3, 7, 15 and 16) were reported from the genus Achillea for the first time, and three compounds (4, 5 and 8) were obtained from this plant for the first time. The chemotaxonomic significance of these constituents is also discussed.
DOI: 10.48550/arxiv.2206.14398
2022
Active Coding Piezoelectric Metasurfaces
The manipulation of acoustic waves plays an important role in a wide range of applications. Currently, acoustic wave manipulation typically relies on either acoustic metasurfaces or phased array transducers. The elements of metasurfaces are designed and optimized for a target frequency, which thus limits their bandwidth. Phased array transducers, suffering from high-cost and complex control circuits, are usually limited by the array size and the filling ratio of the control units. In this work, we introduce active coding piezoelectric metasurfaces; demonstrate commonly implemented acoustic wave manipulation functionalities such as beam steering, beam focusing and vortex beam focusing, acoustic tweezers; and eventually realize ultrasound imaging. The information coded on the piezoelectric metasurfaces herein is frequency independent and originates from the polarization directions, pointing either up or down, of the piezoelectric materials. Such a piezoelectric metasurface is driven by a single electrode and acts as a controllable active sound source, which combines the advantages of acoustic metasurfaces and phased array transducers while keeping the devices structurally simple and compact. Our coding piezoelectric metasurfaces can lead to potential technological innovations in underwater acoustic wave modulation, acoustic tweezers, biomedical imaging, industrial non-destructive testing and neural regulation.
DOI: 10.48550/arxiv.2207.12092
2022
Nonlinearity enabled higher-dimensional exceptional topology
The role of nonlinearity on topology has been investigated extensively in Hermitian systems, while nonlinearity has only been used as a tuning knob in a PT symmetric non-Hermitian system. Here, in our work, we show that nonlinearity plays a crucial role in forming topological singularities of non-Hermitian systems. We provide a simple and intuitive example by demonstrating with both theory and circuit experiments an exceptional nexus (EX), a higher-order exceptional point with a hybrid topological invariant (HTI), within only two coupled resonators with the aid of nonlinear gain. Phase rigidities are constructed to confirm the HTI in our nonlinear system, and the anisotropic critical behavior of the eigenspectra is verified with experiments. Our findings lead to advances in the fundamental understanding of the peculiar topology of nonlinear non-Hermitian systems, possibly opening new avenues for applications.
DOI: 10.2139/ssrn.4197400
2022
Identification of Novel Strains of &lt;i&gt;Pasteurella multocida&lt;/i&gt;, an Important Pathogen of &lt;i&gt;Marmota himalayana&lt;/i&gt; Found on China's Qinghai-Tibet Plateau
Three novel capsule-type strains of the highly pathogenic bacterium Pasteurella multocida were isolated for the first time from the tissues of Marmota himalayana found dead of natural causes on China's Qinghai-Tibet plateau. Two strains were identified as subspecies multocida and one as subspecies septica. A whole-genome phylogeny constructed from 3,262 core SNPs demonstrated a cluster distinct from other strains. GS2020-X2, AKS2021-HT3 and GS2020-R1 were of the L3 type, and AKS2021-HT67 may represent a new lipopolysaccharide genotype. Infected animals had sepsis and systemic organ lesions. Ten single clones were able to kill experimental mice within 24 hours. Serum IL-12p70, IL-6, TNF-α and IL-10 levels were elevated at different time points (P<0.05), showing evidence of a cytokine storm. P. multocida is an important pathogen of Marmota himalayana that die of natural causes, and contact with Marmota himalayana may serve as a source of P. multocida infection in humans. Funding Information: This work was supported by the National Sci-Tech Key Project (grant nos. 2018ZX10713-003-002 and 2018ZX10713-001-002). Declaration of Interests: The authors declare no competing interests. Ethics Approval Statement: All methods were carried out in accordance with the protocols for laboratory animal use and proper care and approved by the Animal Care and Use Committee of China CDC.
DOI: 10.2139/ssrn.4090433
2022
Point-of-Need Quantitation of 2,4-Dichlorophenoxyacetic Acid Using a Ratiometric Fluorescent Nanoprobe and a Smartphone-Based Sensing System
DOI: 10.1109/icd53806.2022.9863536
2022
Improved breakdown performances of PP films based on molecular chain and aggregate structure design
In this paper, branched structures are grafted onto the long molecular chain to prepare long-chain branched polypropylene (LCBPP), and PP/LCBPP composites are obtained by physical blending. The change in crystalline morphology caused by the long-chain branched structures is investigated. On a laboratory platform, the dielectric properties of PP composite films with different proportions of LCBPP are tested, and the trap energy level of the PP/LCBPP composite film is calculated from the SPD curves. The molecular chain also affects the aggregate structure to a certain extent: simulation shows that a spherulite distribution with many small, uniformly dispersed spherulites helps homogenize the overall electric field inside the film, leading to optimized dielectric properties. Results show that at 25 °C a low proportion of LCBPP is beneficial to increasing the breakdown strength, while at 85 °C a high proportion of LCBPP plays the decisive role in suppressing local field distortion and improving the breakdown properties.
DOI: 10.48550/arxiv.2209.08044
2022
Self-Optimizing Feature Transformation
Feature transformation aims to extract a good representation (feature) space by mathematically transforming existing features. It is crucial for addressing the curse of dimensionality, enhancing model generalization, overcoming data sparsity, and expanding the applicability of classic models. Current research focuses on domain knowledge-based feature engineering or learning latent representations; nevertheless, these methods are not entirely automated and cannot produce a traceable and optimal representation space. When rebuilding a feature space for a machine learning task, can these limitations be addressed concurrently? In this extension study, we present a self-optimizing framework for feature transformation. To achieve better performance, we improve on the preliminary work by (1) obtaining an advanced state representation that lets reinforced agents comprehend the current feature set better; and (2) resolving Q-value overestimation in reinforced agents so they learn unbiased and effective policies. Finally, to make the experiments more convincing than the preliminary work, we add an outlier detection task with five datasets, evaluate various state representation approaches, and compare different training strategies. Extensive experiments and case studies show that our approach is more effective than and superior to prior methods.
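One standard remedy for the Q-value overestimation this work addresses is the double-Q target: the online estimate selects the next action while a separate target estimate evaluates it. A toy numpy sketch (whether the paper uses exactly this form is an assumption):

```python
import numpy as np

def double_q_target(q_online, q_target, reward, gamma=0.99):
    """Double-Q target: the online network picks the next action,
    the target network scores it, reducing max-operator overestimation."""
    a_star = int(np.argmax(q_online))
    return reward + gamma * q_target[a_star]

def single_q_target(q_target, reward, gamma=0.99):
    """Standard target: max over the same (target) estimates -- biased upward
    whenever the estimates are noisy."""
    return reward + gamma * np.max(q_target)
```

When the two estimates disagree on the best action, the double-Q target is never larger than the single-Q one.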
DOI: 10.1515/9783110790948-fm
2022
Frontmatter
DOI: 10.48550/arxiv.2209.13519
2022
Hierarchical Interdisciplinary Topic Detection Model for Research Proposal Classification
The peer merit review of research proposals has been the major mechanism for deciding grant awards. However, research proposals have become increasingly interdisciplinary, and it has been a longstanding challenge to assign interdisciplinary proposals to appropriate reviewers so that they are evaluated fairly. One of the critical steps in reviewer assignment is generating accurate interdisciplinary topic labels for proposal-reviewer matching. Existing systems mainly rely on topic labels manually supplied by principal investigators; however, such human-reported labels can be inaccurate and incomplete, and collecting them is labor-intensive and time-consuming. What role can AI play in developing a fair and precise proposal-reviewer assignment system? In this study, we collaborate with the National Science Foundation of China to address the task of automated interdisciplinary topic path detection. For this purpose, we develop a deep Hierarchical Interdisciplinary Research Proposal Classification Network (HIRPCN). Specifically, we first propose a hierarchical transformer to extract the textual semantic information of proposals. We then design an interdisciplinary graph and leverage GNNs to learn representations of each discipline in order to extract interdisciplinary knowledge. After extracting the semantic and interdisciplinary knowledge, we design a level-wise prediction component that fuses the two types of knowledge representations and detects interdisciplinary topic paths for each proposal. We conduct extensive experiments and expert evaluations on three real-world datasets to demonstrate the effectiveness of our proposed model.
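The level-wise prediction idea described above — walking down a discipline hierarchy, conditioning each level's label choice on the levels chosen so far — can be sketched in a toy form. Everything here (dimensions, random weights, the concatenation-based fusion, three levels with four candidate labels each) is illustrative, not the HIRPCN architecture itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy level-wise prediction over a 3-level discipline hierarchy with
# 4 candidate labels per level. All weights are random placeholders.
d = 8
semantic = rng.normal(size=d)               # stand-in for the text encoder output
label_emb = {lvl: rng.normal(size=(4, d))   # stand-in for discipline embeddings
             for lvl in range(3)}
W = {lvl: rng.normal(size=(2 * d, 4)) for lvl in range(3)}

path, parent = [], np.zeros(d)
for lvl in range(3):
    # Fuse the proposal's semantic vector with the embedding of the
    # label picked at the previous level, then predict this level.
    fused = np.concatenate([semantic, parent])
    probs = softmax(fused @ W[lvl])
    choice = int(np.argmax(probs))
    path.append(choice)
    parent = label_emb[lvl][choice]          # condition the next level
```

The resulting `path` is one topic path from coarse to fine; the conditioning on `parent` is what makes the prediction level-wise rather than flat.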
DOI: 10.48550/arxiv.2209.13912
2022
Hierarchical MixUp Multi-label Classification with Imbalanced Interdisciplinary Research Proposals
Funding agencies largely rely on topic matching between domain experts and research proposals to assign reviewers. As proposals become increasingly interdisciplinary, it is challenging to profile the interdisciplinary nature of a proposal and, thereafter, to find expert reviewers with an appropriate set of expertise. An essential step in solving this challenge is to accurately model and classify the interdisciplinary labels of a proposal. Existing methodological and application-related literature, such as textual classification and proposal classification, is insufficient for jointly addressing three key issues unique to interdisciplinary proposal data: (1) the hierarchical structure of a proposal's discipline labels, from coarse-grained to fine-grained, e.g., from information science to AI to fundamentals of AI; (2) the heterogeneous semantics of the main textual parts, which play different roles in a proposal; and (3) the imbalance between the numbers of non-interdisciplinary and interdisciplinary proposals. Can we address all three issues simultaneously in understanding a proposal's interdisciplinary nature? In response to this question, we propose a hierarchical MixUp multi-label classification framework, which we call H-MixUp. H-MixUp leverages a transformer-based semantic information extractor and a GCN-based interdisciplinary knowledge extractor for the first and second issues, and develops a fused training method of Word-level MixUp, Word-level CutMix, Manifold MixUp, and Document-level MixUp to address the third issue.
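All of the MixUp variants listed above share the same core operation, introduced by Zhang et al.'s MixUp: convex-combine two training examples and their labels with a Beta-distributed weight, which is how the framework synthesizes extra interdisciplinary examples from an imbalanced pool. A minimal sketch of that core operation (the toy embeddings and the `alpha` value are illustrative, not the paper's):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard MixUp: convex combination of two examples and their
    one-hot label vectors, weighted by lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy "document embeddings" with one-hot discipline labels.
x1, y1 = np.ones(4), np.array([1.0, 0.0])
x2, y2 = np.zeros(4), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x1, y1, x2, y2)
```

The mixed label `y_mix` is a soft distribution over both disciplines, which is exactly the kind of "partially interdisciplinary" target that mitigates the class imbalance.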
DOI: 10.36347/sjebm.2022.v09i10.001
2022
Data Analysis of Student Enrollment of Cooperation Programs between Shenzhen Polytechnic and City University of Seattle
This paper briefly analyzes data on students enrolled in the cooperation programs between Shenzhen Polytechnic (SZPT) and City University of Seattle (CityU) from 2012 to 2016. A total of 105 students from SZPT have transferred to CityU and completed their undergraduate studies there since 2015. At CityU, the transfer students have learned cutting-edge knowledge, improved their English proficiency, and experienced multiple cultures and advanced technology, greatly expanding their international horizons. The SZPT & CityU cooperation programs provide broader work and life prospects for program students.
DOI: 10.48550/arxiv.2212.06562
2022
A Universal Mirror-stacking Approach for Constructing Topological Bound States in the Continuum
Bound states in the continuum (BICs) are counter-intuitive localized states with eigenvalues embedded in the continuum of extended states. Recently, nontrivial band topology has been exploited to enrich BIC physics, resulting in topological BICs (TBICs) with extraordinary robustness against perturbations and disorder. Here, we propose a simple but universal mirror-stacking approach that turns nontrivial bound states of any topological monolayer model into TBICs. Physically, the mirror-stacked bilayer Hamiltonian decouples into two independent subspaces of opposite mirror parity, each of which directly inherits the energy spectrum and band topology of the original monolayer. By tuning the interlayer couplings, the topological bound state of one subspace can move into and out of the continuum of the other subspace continuously, without hybridization. As representative examples, we construct one-dimensional first-order and two-dimensional higher-order TBICs and demonstrate them unambiguously in acoustic experiments. Our findings will expand the research implications of both topological materials and BICs.
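The decoupling argument at the heart of the abstract can be checked numerically in the simplest setting: for a mirror-stacked bilayer with site-by-site interlayer coupling t, the block Hamiltonian [[H, tI], [tI, H]] is diagonalized by symmetric and antisymmetric layer combinations into H + tI and H - tI, so each mirror sector carries a rigidly shifted copy of the monolayer spectrum. A small sketch with an illustrative 4-site SSH-like monolayer (the hopping and coupling values are arbitrary choices, not the paper's acoustic parameters):

```python
import numpy as np

# Monolayer: a 4-site SSH-like chain with alternating hoppings.
t1, t2 = 0.5, 1.0
H = np.array([[0,  t1, 0,  0],
              [t1, 0,  t2, 0],
              [0,  t2, 0,  t1],
              [0,  0,  t1, 0]], dtype=float)

t_inter = 0.3              # uniform interlayer coupling (illustrative)
I = np.eye(4)

# Mirror-stacked bilayer: two copies of H coupled site-by-site.
H_bilayer = np.block([[H, t_inter * I],
                      [t_inter * I, H]])

# In the even/odd mirror-parity basis the bilayer decouples into
# H + t_inter*I and H - t_inter*I: each sector inherits the
# monolayer spectrum, shifted rigidly by +/- t_inter.
ev_bilayer = np.sort(np.linalg.eigvalsh(H_bilayer))
ev_sectors = np.sort(np.concatenate([
    np.linalg.eigvalsh(H + t_inter * I),
    np.linalg.eigvalsh(H - t_inter * I)]))
assert np.allclose(ev_bilayer, ev_sectors)
```

Tuning `t_inter` slides the two sector spectra past each other, which is the mechanism the paper uses to move a topological bound state of one parity sector into the continuum of the other without hybridization.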