
U. Rajendra Acharya

Here are all the papers by U. Rajendra Acharya that you can download and read on OA.mg.

DOI: 10.1016/j.compbiomed.2020.103792
2020
Cited 2,011 times
Automated detection of COVID-19 cases using deep neural networks with X-ray images
The novel coronavirus 2019 (COVID-19), which first appeared in Wuhan city of China in December 2019, spread rapidly around the world and became a pandemic. It has had a devastating effect on daily life, public health, and the global economy. It is critical to detect positive cases as early as possible so as to prevent the further spread of this epidemic and to quickly treat affected patients. The need for auxiliary diagnostic tools has increased, as no accurate automated toolkits are available. Recent findings obtained using radiology imaging techniques suggest that such images contain salient information about the COVID-19 virus. Application of advanced artificial intelligence (AI) techniques coupled with radiological imaging can be helpful for the accurate detection of this disease, and can also help overcome the problem of a lack of specialized physicians in remote villages. In this study, a new model for automatic COVID-19 detection using raw chest X-ray images is presented. The proposed model provides accurate diagnostics for binary classification (COVID vs. No-Findings) and multi-class classification (COVID vs. No-Findings vs. Pneumonia). Our model produced a classification accuracy of 98.08% for binary classes and 87.02% for multi-class cases. The DarkNet model was used in our study as a classifier for the you-only-look-once (YOLO) real-time object detection system. We implemented 17 convolutional layers and introduced different filtering on each layer. Our model (available at https://github.com/muhammedtalo/COVID-19) can be employed to assist radiologists in validating their initial screening, and can also be employed via the cloud to immediately screen patients.
DOI: 10.1016/j.compbiomed.2017.09.017
2018
Cited 1,168 times
Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals
An electroencephalogram (EEG) is a commonly used ancillary test to aid in the diagnosis of epilepsy. The EEG signal contains information about the electrical activity of the brain. Traditionally, neurologists employ direct visual inspection to identify epileptiform abnormalities. This technique can be time-consuming, is limited by technical artifacts, yields variable results depending on the reader's expertise level, and is limited in identifying abnormalities. Therefore, it is essential to develop a computer-aided diagnosis (CAD) system that automatically distinguishes the classes of these EEG signals using machine learning techniques. This is the first study to employ a convolutional neural network (CNN) for analysis of EEG signals. In this work, a 13-layer deep convolutional neural network (CNN) algorithm is implemented to detect normal, preictal, and seizure classes. The proposed technique achieved an accuracy, specificity, and sensitivity of 88.67%, 90.00%, and 95.00%, respectively.
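Accuracy, sensitivity, and specificity figures like those quoted above are derived from a confusion matrix over the predicted classes. Below is a minimal pure-Python sketch with a hypothetical 3-class matrix; the counts are invented for illustration and are not the paper's data:

```python
def per_class_metrics(cm):
    """Compute (sensitivity, specificity) for each class of a square
    confusion matrix cm[true][predicted]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    metrics = []
    for c in range(n):
        tp = cm[c][c]                              # correctly predicted as c
        fn = sum(cm[c]) - tp                       # class c missed
        fp = sum(cm[r][c] for r in range(n)) - tp  # others predicted as c
        tn = total - tp - fn - fp
        metrics.append((tp / (tp + fn), tn / (tn + fp)))
    return metrics

# Hypothetical counts for three classes (e.g. normal, preictal, seizure)
cm = [[90, 5, 5],
      [4, 92, 4],
      [2, 3, 95]]
metrics = per_class_metrics(cm)
```

Overall accuracy would simply be the trace of the matrix divided by the total count.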
DOI: 10.1016/j.compbiomed.2017.08.022
2017
Cited 957 times
A deep convolutional neural network model to classify heartbeats
The electrocardiogram (ECG) is a standard test used to monitor the activity of the heart. Many cardiac abnormalities manifest in the ECG, including arrhythmia, a general term that refers to an abnormal heart rhythm. The basis of arrhythmia diagnosis is the identification of normal versus abnormal individual heartbeats, and their correct classification into different diagnoses, based on ECG morphology. Heartbeats can be sub-divided into five categories, namely non-ectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown beats. It is challenging and time-consuming to distinguish these heartbeats on the ECG as these signals are typically corrupted by noise. We developed a 9-layer deep convolutional neural network (CNN) to automatically identify 5 different categories of heartbeats in ECG signals. Our experiment was conducted on original and noise-attenuated sets of ECG signals derived from a publicly available database. This set was artificially augmented to even out the number of instances of the 5 classes of heartbeats and filtered to remove high-frequency noise. The CNN was trained using the augmented data and achieved an accuracy of 94.03% and 93.47% in the diagnostic classification of heartbeats in original and noise-free ECGs, respectively. When the CNN was trained with highly imbalanced data (the original dataset), its accuracy was reduced to 89.07% and 89.3% in noisy and noise-free ECGs. When properly trained, the proposed CNN model can serve as a tool for screening ECGs to quickly identify different types and frequencies of arrhythmic heartbeats.
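The augmentation step described above evens out class counts before training. One common way to do this is random oversampling of the minority classes; the sketch below is an illustrative stand-in, not the paper's actual augmentation pipeline:

```python
import random

def oversample_to_balance(beats, labels, seed=0):
    """Randomly duplicate minority-class samples until every class
    has as many instances as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(beats, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        out_x.extend(xs + [rng.choice(xs) for _ in range(target - len(xs))])
        out_y.extend([y] * target)
    return out_x, out_y

# Toy data: class "N" dominates, "V" and "F" are rare
beats = [[0.1], [0.2], [0.3], [0.4], [0.5]]
labels = ["N", "N", "N", "V", "F"]
bx, by = oversample_to_balance(beats, labels)
```

In practice the duplicated beats would also be perturbed (shifted, scaled, or noise-injected) so the network does not see exact copies.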
DOI: 10.1016/j.cmpb.2018.04.005
2018
Cited 748 times
Deep learning for healthcare applications based on physiological signals: A review
We have cast the net into the ocean of knowledge to retrieve the latest scientific research on deep learning methods for physiological signals. We found 53 research papers on this topic, published from 01.01.2008 to 31.12.2017. An initial bibliometric analysis shows that the reviewed papers focused on electromyogram (EMG), electroencephalogram (EEG), electrocardiogram (ECG), and electrooculogram (EOG) signals. These four categories were used to structure the subsequent content review. During the content review, we found that deep learning performs better than classic analysis and machine classification methods on large and varied datasets. Deep learning algorithms try to develop the model by using all the available input. This review depicts the various deep learning algorithms applied to date; in the future, such methods are likely to be applied to more healthcare areas to improve the quality of diagnosis.
DOI: 10.1016/j.compbiomed.2018.09.009
2018
Cited 575 times
Arrhythmia detection using deep convolutional neural network with long duration ECG signals
This article presents a new deep learning approach for cardiac arrhythmia (17 classes) detection based on long-duration electrocardiography (ECG) signal analysis. Cardiovascular disease prevention is one of the most important tasks of any health care system, as about 50 million people worldwide are at risk of heart disease. Although automatic analysis of the ECG signal is very popular, current methods are not satisfactory. The goal of our research was to design a new deep learning method to efficiently and quickly classify cardiac arrhythmias. The described research is based on 1000 ECG signal fragments from the MIT-BIH Arrhythmia database for one lead (MLII) from 45 persons. An approach based on the analysis of 10-s ECG signal fragments (rather than a single QRS complex) is applied (on average, 13 times fewer classifications/analyses). A complete end-to-end structure was designed instead of the hand-crafted feature extraction and selection used in traditional methods. Our main contribution is the design of a new 1D-Convolutional Neural Network model (1D-CNN). The proposed method is 1) efficient, 2) fast (real-time classification), 3) non-complex, and 4) simple to use (combined feature extraction, selection, and classification in one stage). The deep 1D-CNN achieved an overall recognition accuracy of 91.33% across the 17 cardiac arrhythmia classes, with a classification time of 0.015 s per sample. Compared to current research, our results are among the best to date, and our solution can be implemented in mobile devices and cloud computing.
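Working on 10-s fragments rather than single QRS complexes reduces the number of classifications per recording. Assuming a 360 Hz sampling rate (standard for the MIT-BIH database), a fragment extractor might look like this hedged sketch:

```python
def segment_ecg(signal, fs=360, seconds=10):
    """Split a long ECG recording into non-overlapping fragments of
    `seconds` duration; trailing samples that do not fill a whole
    fragment are dropped."""
    step = fs * seconds
    return [signal[i:i + step]
            for i in range(0, len(signal) - step + 1, step)]

# A fake 65-second flat recording sampled at 360 Hz
ecg = [0.0] * (65 * 360)
fragments = segment_ecg(ecg)   # 6 complete 10-s fragments, tail discarded
```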
DOI: 10.1016/j.compbiomed.2018.06.002
2018
Cited 495 times
Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats
Arrhythmia is a cardiac conduction disorder characterized by irregular heartbeats. Abnormalities in the conduction system can manifest in the electrocardiographic (ECG) signal. However, it can be challenging and time-consuming to visually assess ECG signals due to their very low amplitudes. Implementing an automated system in the clinical setting can potentially help expedite the diagnosis of arrhythmia and improve accuracy. In this paper, we propose an automated system using a combination of a convolutional neural network (CNN) and long short-term memory (LSTM) for the diagnosis of normal sinus rhythm, left bundle branch block (LBBB), right bundle branch block (RBBB), atrial premature beats (APB), and premature ventricular contraction (PVC) on ECG signals. The novelty of this work is that we used ECG segments of variable length from the MIT-BIH Arrhythmia PhysioBank database. The proposed system demonstrated high classification performance in handling variable-length data, achieving an accuracy of 98.10%, sensitivity of 97.50%, and specificity of 98.70% using a ten-fold cross-validation strategy. Our proposed model can aid clinicians in detecting common arrhythmias accurately on routine screening ECG.
DOI: 10.1007/s00521-018-3689-5
2018
Cited 394 times
A deep learning approach for Parkinson’s disease diagnosis from EEG signals
DOI: 10.1016/j.cogsys.2018.12.007
2019
Cited 368 times
Application of deep transfer learning for automated brain abnormality classification using MR images
Magnetic resonance imaging (MRI) is the most common imaging technique used to detect abnormal brain tumors. Traditionally, MRI images are analyzed manually by radiologists to detect abnormal conditions in the brain. Manual interpretation of a huge volume of images is time-consuming and difficult. Hence, computer-based detection helps in accurate and fast diagnosis. In this study, we propose an approach that uses deep transfer learning to automatically classify normal and abnormal brain MR images. The convolutional neural network (CNN) based ResNet34 model is used as the deep learning model. We have used current deep learning techniques such as data augmentation, an optimal learning rate finder, and fine-tuning to train the model. The proposed model achieved a 5-fold classification accuracy of 100% on 613 MR images. Our developed system is ready to be tested on larger databases and can assist radiologists in their daily screening of MR images.
DOI: 10.1016/j.ins.2018.01.051
2018
Cited 345 times
Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images
Glaucoma progressively affects the optic nerve and may cause partial or complete vision loss. Raised intraocular pressure is the only factor which can be modified to prevent blindness from this condition. Accurate early detection and continuous screening may prevent vision loss. Computer-aided diagnosis (CAD) is a non-invasive technique which can detect glaucoma in its early stage using digital fundus images. Developing such a system requires a large and diverse database in order to reach optimum performance. This paper proposes a novel CAD tool for the accurate detection of glaucoma using a deep learning technique. An eighteen-layer convolutional neural network (CNN) is effectively trained in order to extract robust features from the digital fundus images. Finally, these features are classified into normal and glaucoma classes during testing. We achieved the highest accuracy of 98.13% using 1426 (589 normal and 837 glaucoma) fundus images. Our experimental results demonstrate the robustness of the system, which can be used as a supplementary tool for clinicians to validate their decisions.
DOI: 10.1016/j.patrec.2017.03.023
2017
Cited 323 times
A new approach to characterize epileptic seizures using analytic time-frequency flexible wavelet transform and fractal dimension
The identification of seizure activities in non-stationary electroencephalography (EEG) is a challenging task. Seizure detection by human inspection of EEG signals is error-prone, inaccurate, and time-consuming. Several attempts have been made to develop automatic systems to assist neurophysiologists in identifying epileptic seizures accurately. The proposed study brings forth a novel automatic approach to detect epileptic seizures using the analytic time-frequency flexible wavelet transform (ATFFWT) and fractal dimension (FD). The ATFFWT has inherently attractive features such as shift-invariance, a tunable oscillatory attribute, and flexible time-frequency covering, favorable for the analysis of non-stationary and transient signals. We have used the ATFFWT to decompose EEG signals into the desired subbands. Following the ATFFWT decomposition, we calculate the FD for each subband. Finally, the FDs of all subbands are fed to the least-squares support vector machine (LS-SVM) classifier. Ten-fold cross-validation has been used to obtain stable and reliable performance and to avoid overfitting of the model. In this study, we investigate various classification problems (CPs) pertaining to different classes of EEG signals, including the following popular CPs: (i) ictal versus normal, (ii) ictal versus inter-ictal, and (iii) ictal versus non-ictal. The proposed model is found to outperform all existing models in terms of classification sensitivity (CSE), as it achieves perfect 100% sensitivity for the seven CPs investigated by us. A prominent attribute of the proposed system is that, although the model employs only one set of discriminating features (FD) for all CPs, it yields promising classification accuracy. Since the proposed model attains perfect classification performance, it can assist clinicians in diagnosing seizures accurately in less time. Further, the proposed system seems useful and attractive, especially in the rural areas of developing countries where there is a shortage of experienced clinicians and of expensive machines like functional magnetic resonance imaging (fMRI).
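Fractal dimension can be estimated in several ways; the abstract does not name the exact estimator, but Higuchi's algorithm is a standard choice for EEG subbands. A pure-Python sketch, offered only as an illustration of the feature family:

```python
import math

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal:
    the slope of log(curve length L(k)) against log(1/k)."""
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # k down-sampled curves
            n_m = (n - 1 - m) // k              # steps along this curve
            if n_m < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, n_m + 1))
            lengths.append(dist * (n - 1) / (n_m * k) / k)
        log_k.append(math.log(1.0 / k))
        log_l.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) vs log(1/k)
    mk = sum(log_k) / len(log_k)
    ml = sum(log_l) / len(log_l)
    return (sum((a - mk) * (b - ml) for a, b in zip(log_k, log_l))
            / sum((a - mk) ** 2 for a in log_k))

# A straight line has fractal dimension 1
line = [0.01 * i for i in range(1000)]
fd_line = higuchi_fd(line)
```

A more irregular (noisier) signal yields an FD closer to 2, which is what makes the feature discriminative between EEG classes.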
DOI: 10.1016/j.patrec.2019.02.016
2019
Cited 308 times
Classification of myocardial infarction with multi-lead ECG signals and deep CNN
Myocardial infarction (MI), commonly known as heart attack, causes irreversible damage to heart muscles and even leads to death. Rapid and accurate diagnosis of MI is critical to avoid death. Blood tests and electrocardiogram (ECG) signals are used to diagnose acute MI. However, for an increase in blood enzyme values, a certain time must pass after the attack. This time lag may delay MI diagnosis. Hence, ECG diagnosis is still very important. Manual ECG interpretation requires expertise and is prone to inter-observer variability. Therefore, computer aided diagnosis may be useful in automatic detection of MI on ECG. In this study, a deep learning model with an end-to-end structure on the standard 12-lead ECG signal for the diagnosis of MI is proposed. For this purpose, the most commonly used technique, convolutional neural network (CNN) is used. Our trained CNN model with the proposed architecture yielded impressive accuracy and sensitivity performance over 99.00% for MI diagnosis on all ECG lead signals. Thus, the proposed model has the potential to provide high performance on MI detection which can be used in wearable technologies and intensive care units.
DOI: 10.1371/journal.pone.0216456
2019
Cited 277 times
SleepEEGNet: Automated sleep stage scoring with sequence to sequence deep learning approach
Electroencephalogram (EEG) is a common base signal used to monitor brain activity and diagnose sleep disorders. Manual sleep stage scoring is a time-consuming task for sleep experts and is limited by inter-rater reliability. In this paper, we propose an automatic sleep stage annotation method called SleepEEGNet using a single-channel EEG signal. SleepEEGNet is composed of deep convolutional neural networks (CNNs) to extract time-invariant features and frequency information, and a sequence-to-sequence model to capture the complex and long short-term context dependencies between sleep epochs and scores. In addition, to reduce the effect of the class imbalance problem present in the available sleep datasets, we applied novel loss functions to obtain an equal misclassification error for each sleep stage while training the network. We evaluated the performance of the proposed method on different single-EEG channels (i.e., the Fpz-Cz and Pz-Oz EEG channels) from the PhysioNet Sleep-EDF datasets published in 2013 and 2018. The evaluation results demonstrate that the proposed method achieved the best annotation performance compared to the current literature, with an overall accuracy of 84.26%, a macro F1-score of 79.66%, and κ = 0.79. Our developed model can be applied to other sleep EEG signals and aid sleep specialists in arriving at an accurate diagnosis. The source code is available at https://github.com/SajadMo/SleepEEGNet.
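One simple way to equalize each sleep stage's contribution to the training loss, in the spirit of the class-imbalance handling described above, is inverse-frequency class weighting. This is an illustrative stand-in, not SleepEEGNet's exact loss:

```python
def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so that rare
    stages (e.g. N1) contribute as much to the loss as common ones."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    total = len(labels)
    n_classes = len(counts)
    return {y: total / (n_classes * c) for y, c in counts.items()}

# Hypothetical epoch labels: stage W dominates, N1 is rare
stages = ["W"] * 60 + ["N1"] * 10 + ["N2"] * 30
weights = inverse_frequency_weights(stages)
```

With these weights, each class's total weighted count is identical, so a uniformly misclassifying model is penalized equally per class.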
DOI: 10.1016/j.compbiomed.2019.01.013
2019
Cited 274 times
Cascaded LSTM recurrent neural network for automated sleep stage classification using single-channel EEG signals
Automated evaluation of a subject's neurocognitive performance (NCP) is a relevant topic in neurological and clinical studies. NCP represents the mental/cognitive human capacity to perform a specific task. It is difficult to develop study protocols in which the subject's NCP changes in a known, predictable way. Sleep involves time-varying NCP and can therefore be used to develop novel NCP techniques. Accurate analysis and interpretation of human sleep electroencephalographic (EEG) signals is needed for proper NCP assessment. In addition, sleep deprivation may cause prominent cognitive risks in performing many common activities such as driving or controlling a generic device; therefore, sleep scoring is a crucial part of the process. In the sleep cycle, the first stage of non-rapid eye movement (NREM) sleep, or stage N1, is the transition between wakefulness and drowsiness and is relevant for the study of NCP. In this study, a novel cascaded recurrent neural network (RNN) architecture based on long short-term memory (LSTM) blocks is proposed for the automated scoring of sleep stages using EEG signals derived from a single channel. Fifty-five time- and frequency-domain features were extracted from the EEG signals and fed to feature reduction algorithms to select the most relevant ones. The selected features constituted the inputs to the LSTM networks. The cascaded architecture is composed of two LSTM RNNs: the first network performed 4-class classification (i.e., the five sleep stages with stages N1 and REM merged into a single stage) with a classification rate of 90.8%, and the second obtained a recognition performance of 83.6% for 2-class classification (i.e., N1 vs. REM). The overall percentage of correct classification for five sleep stages is found to be 86.7%. The objective of this work is to improve classification performance for sleep stage N1, as a first step of NCP assessment, while at the same time obtaining satisfactory classification results for the other sleep stages.
DOI: 10.1016/j.cmpb.2019.05.004
2019
Cited 254 times
A new approach for arrhythmia classification using deep coded features and LSTM networks
For the diagnosis of arrhythmic heart problems, electrocardiogram (ECG) signals should be recorded and monitored. The long-term signal records obtained are analyzed by expert cardiologists. Devices such as the Holter monitor have limited hardware capabilities. For improved diagnostic capacity, it would be helpful to detect arrhythmic signals automatically. In this study, a novel approach is presented as a candidate solution for these issues. A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long short-term memory (LSTM) classifiers are employed to automatically recognize arrhythmias using ECG features, which are deeply coded with the CAE network. Based upon the coded ECG signals, both the storage requirement and classification time were considerably reduced. In experimental studies conducted with the MIT-BIH arrhythmia database, ECG signals were compressed at an average 0.70% percentage root mean square difference (PRD) rate, and an accuracy of over 99.0% was observed. One of the significant contributions of this study is that the proposed approach can significantly reduce the time required when using LSTM networks for data analysis. Thus, a novel and effective approach was proposed for both ECG signal compression and high-performance automatic recognition, with very low computational cost.
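The PRD figure quoted above measures reconstruction fidelity after compression; its standard definition is straightforward to sketch (the signal here is synthetic, not the paper's data):

```python
import math

def prd(original, reconstructed):
    """Percentage root-mean-square difference between an original
    signal and its reconstruction from compressed features."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

# Synthetic beat with a tiny constant reconstruction error
sig = [math.sin(0.1 * i) for i in range(500)]
recon = [s + 0.001 for s in sig]
error = prd(sig, recon)
```

A PRD of 0.70% as reported above means the reconstructed beats deviate from the originals by well under one percent of the signal energy.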
DOI: 10.1016/j.patrec.2020.03.011
2020
Cited 228 times
Automated invasive ductal carcinoma detection using deep transfer learning with whole-slide images
Advances in artificial intelligence technologies have made it possible to obtain more accurate and reliable results using digital images. Due to the advances in digital histopathological images obtained using whole slide image (WSI) scanners, automated analysis of digital images by computer support systems has become interesting. In particular, deep learning architectures are one of the preferred approaches in the analysis of digital histopathology images. Deeper networks trained on large amounts of image data are adapted for different tasks using the transfer learning technique. In this study, automated detection of invasive ductal carcinoma (IDC), the most common subtype of breast cancer, is proposed using the deep transfer learning technique. We used the deep learning pre-trained models ResNet-50 and DenseNet-161 for the IDC detection task. A public histopathology dataset containing 277,524 image patches was used in our experimental studies. As a result of training on the last layers of the pre-trained deep networks, the DenseNet-161 model yielded an F-score of 92.38% and a balanced accuracy value of 91.57%. Similarly, we obtained an F-score of 94.11% and a balanced accuracy value of 90.96% using the ResNet-50 architecture. In addition, our developed model was validated using the publicly available BreakHis breast cancer dataset and obtained promising results in classifying magnification-independent histopathology images into benign and malignant classes. Our developed system obtained the highest classification performance compared to state-of-the-art techniques and is ready to be tested with more diverse huge databases.
DOI: 10.1016/j.compbiomed.2018.07.001
2018
Cited 221 times
Automated detection of atrial fibrillation using long short-term memory network with RR interval signals
Atrial Fibrillation (AF), either permanent or intermittent (paroxysmal AF), increases the risk of cardioembolic stroke. Accurate diagnosis of AF is obligatory for the initiation of effective treatment to prevent stroke. Long-term cardiac monitoring improves the likelihood of diagnosing paroxysmal AF. We used a deep learning system to detect AF beats in Heart Rate (HR) signals. The data was partitioned with a sliding window of 100 beats. The resulting signal blocks were directly fed into a deep Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The system was validated and tested with data from the MIT-BIH Atrial Fibrillation Database. It achieved 98.51% accuracy with 10-fold cross-validation (20 subjects) and 99.77% with blindfold validation (3 subjects). The proposed system structure is straightforward, because there is no need for information reduction through feature extraction. All the complexity resides in the deep learning system, which gets the entire information from a signal block. This setup leads to robust performance on unknown data, as measured with the blindfold validation. The proposed Computer-Aided Diagnosis (CAD) system can be used for long-term monitoring of the human heart. To the best of our knowledge, the proposed system is the first to incorporate deep learning for AF beat detection.
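Partitioning the RR-interval stream with a window of 100 beats can be sketched as follows; this assumes non-overlapping blocks (the paper's exact stride is not stated in this abstract), and the RR values are fabricated:

```python
def partition_rr(rr_intervals, window=100):
    """Partition an RR-interval sequence into blocks of `window`
    beats, as would be fed to the LSTM; the incomplete tail block
    is discarded."""
    return [rr_intervals[i:i + window]
            for i in range(0, len(rr_intervals) - window + 1, window)]

rr = [0.8] * 950          # fake 950-beat recording (RR in seconds)
blocks = partition_rr(rr) # nine complete 100-beat blocks
```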
DOI: 10.1007/s10489-018-1179-1
2018
Cited 208 times
Deep convolutional neural network for the automated diagnosis of congestive heart failure using ECG signals
DOI: 10.1016/j.cmpb.2019.104992
2019
Cited 208 times
A new machine learning technique for an accurate diagnosis of coronary artery disease
Coronary artery disease (CAD) is one of the most common diseases around the world. An early and accurate diagnosis of CAD allows timely administration of appropriate treatment and helps to reduce mortality. Herein, we describe an innovative machine learning methodology that enables accurate detection of CAD and apply it to data collected from Iranian patients. We first tested ten traditional machine learning algorithms, and then the three best-performing algorithms (three types of SVM) were used in the rest of the study. To improve the performance of these algorithms, data preprocessing with normalization was carried out. Moreover, a genetic algorithm and particle swarm optimization, coupled with stratified 10-fold cross-validation, were used twice: for optimization of classifier parameters and for parallel selection of features. The presented approach enhanced the performance of all traditional machine learning algorithms used in this study. We also introduced a new optimization technique called the N2Genetic optimizer (a new genetic training). Our experiments demonstrated that N2Genetic-nuSVM achieved an accuracy of 93.08% and an F1-score of 91.51% when predicting CAD outcomes among the patients included in the well-known Z-Alizadeh Sani dataset. These results are competitive and comparable to the best results in the field. We showed that machine learning techniques optimized by the proposed approach can lead to highly accurate models intended for both clinical and research use.
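Stratified 10-fold cross-validation, used above to keep class proportions stable across folds, can be sketched in pure Python (a minimal version without shuffling; real libraries also shuffle within each class):

```python
def stratified_kfold(labels, k=10):
    """Return k folds of sample indices such that each fold roughly
    preserves the overall class proportions."""
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        # deal each class's indices round-robin across the folds
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return folds

# Hypothetical cohort: 60 CAD patients, 40 normal subjects
labels = ["CAD"] * 60 + ["normal"] * 40
folds = stratified_kfold(labels, k=10)
```

Each fold then serves once as the test set while the other nine train the model, and the ten scores are averaged.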
DOI: 10.1016/j.compmedimag.2019.101673
2019
Cited 208 times
Convolutional neural networks for multi-class brain disease detection using MRI images
Brain disorders may cause loss of some critical functions such as thinking, speech, and movement. Early detection of brain diseases therefore helps to secure the best timely treatment. One of the conventional methods used to diagnose these disorders is the magnetic resonance imaging (MRI) technique. Manual diagnosis of brain abnormalities is time-consuming, and it is difficult to perceive the minute changes in MRI images, especially in the early stages of abnormalities. Proper selection of features and classifiers to obtain the highest performance is a challenging task. Hence, deep learning models have been widely used for medical image analysis over the past few years. In this study, we employed the AlexNet, Vgg-16, ResNet-18, ResNet-34, and ResNet-50 pre-trained models to automatically classify MR images into normal, cerebrovascular, neoplastic, degenerative, and inflammatory disease classes. We also compared their classification performance against state-of-the-art architectures. We obtained the best classification accuracy of 95.23% ± 0.6 with the ResNet-50 model among the five pre-trained models. Our model is ready to be tested with huge MRI databases of brain abnormalities. The outcome of the model will also help clinicians to validate their findings after manual reading of the MRI images.
DOI: 10.1016/j.knosys.2012.08.011
2013
Cited 201 times
Automated diagnosis of Coronary Artery Disease affected patients using LDA, PCA, ICA and Discrete Wavelet Transform
Coronary Artery Disease (CAD) is the narrowing of the blood vessels that supply blood and oxygen to the heart. The electrocardiogram (ECG) is an important cardiac signal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insights into the state of health and nature of the disease afflicting the heart. However, it is very difficult to perceive the subtle changes in ECG signals which indicate a particular type of cardiac abnormality. Hence, we have used the heart rate signals derived from the ECG for the diagnosis of cardiac health. In this work, we propose a methodology for the automatic detection of normal and Coronary Artery Disease conditions using heart rate signals. The heart rate signals are decomposed into frequency sub-bands using the Discrete Wavelet Transform (DWT). Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA) were applied to the set of DWT coefficients extracted from particular sub-bands in order to reduce the data dimension. The selected sets of features were fed into four different classifiers: Support Vector Machine (SVM), Gaussian Mixture Model (GMM), Probabilistic Neural Network (PNN), and K-Nearest Neighbor (KNN). Our results showed that ICA coupled with the GMM classifier resulted in the highest accuracy of 96.8%, sensitivity of 100%, and specificity of 93.7%, compared to the other data reduction techniques (PCA and LDA) and classifiers. Overall, compared to previous techniques, our proposed strategy is more suitable for the diagnosis of CAD, with higher accuracy.
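A single DWT decomposition level splits a signal into approximation (low-frequency) and detail (high-frequency) sub-bands. Here is a sketch using the Haar wavelet, the simplest case, purely as an illustration; the abstract does not commit to a particular mother wavelet:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns
    (approximation, detail) coefficients for an even-length signal."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal), 2)]
    return approx, detail

hr = [72, 74, 71, 75, 73, 73, 70, 76]   # toy heart-rate samples
approx, detail = haar_dwt(hr)
```

Deeper decomposition repeats the step on the approximation coefficients; because the transform is orthonormal, the sub-bands together preserve the signal's energy, which is what makes their coefficients usable as features for PCA/LDA/ICA reduction.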
DOI: 10.1016/j.future.2018.08.044
2019
Cited 195 times
Characterization of focal EEG signals: A review
Epilepsy is a common neurological condition that can occur in anyone at any age. Electroencephalogram (EEG) signals of non-focal (NF) and focal (F) types contain brain activity information that can be used to identify areas affected by seizures. Generally, F EEG signals are recorded from the epileptic part of the brain, while NF EEG signals are recorded from brain regions unaffected by epilepsy. It is essential to correctly detect F EEG signals, when and where they occur, as focal epilepsy can be successfully treated by surgical means. However, all EEG signals are complex and require highly trained personnel for right interpretation. To overcome the associated challenges, in this study a computer-aided detection (CAD) system to aid in the detection of F EEG signals has been developed, and the performance of nonlinear features for differentiating F and NF EEG signals is compared. Moreover, it is noted that nonlinear features can effectively capture concealed patterns and rhythms contained in the EEG signals. Overall, it was found that the CAD system will be useful to clinicians in providing an accurate and objective paradigm for localization of the epileptogenic area.
DOI: 10.1007/s00521-018-03980-2
2019
Cited 191 times
Novel deep genetic ensemble of classifiers for arrhythmia detection using ECG signals
Heart disease is one of the most serious health problems in today's world: over 50 million people around the world have cardiovascular disease. Our proposed work is based on 744 segments of ECG signal obtained from the MIT-BIH Arrhythmia database (strongly imbalanced data) for one lead (modified lead II), from 29 people. In this work, we used long-duration (10 s) ECG signal segments (13 times fewer classifications/analyses). The spectral power density was estimated based on Welch's method and the discrete Fourier transform to strengthen the characteristic ECG signal features. Our main contribution is the design of a novel three-layer (48 + 4 + 1) deep genetic ensemble of classifiers (DGEC). The developed method is a hybrid which combines the advantages of: (1) ensemble learning, (2) deep learning, and (3) evolutionary computation. The novel system was developed by the fusion of three normalization types, four Hamming window widths, four classifier types, stratified tenfold cross-validation, genetic feature (frequency component) selection, layered learning, genetic optimization of classifier parameters, and a new genetic layered training (expert vote selection) to connect the classifiers. The developed DGEC system achieved a recognition sensitivity of 94.62% (40 errors/744 classifications), accuracy of 99.37%, and specificity of 99.66%, with a classification time of 0.8736 s per sample, in detecting 17 arrhythmia ECG classes. The proposed model can be applied in cloud computing or implemented in mobile devices to evaluate cardiac health immediately with high precision.
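Welch's method, named above as the spectral feature extractor, estimates power spectral density by averaging windowed periodograms over overlapping signal segments. A small pure-Python illustration with an O(n²) DFT (fine for short segments; a real system would use an FFT); the parameters and test signal are invented for the example:

```python
import cmath
import math

def welch_psd(x, seg_len=64, overlap=32):
    """Welch power spectral density estimate: average Hamming-windowed
    periodograms over overlapping segments of the signal."""
    win = [0.54 - 0.46 * math.cos(2 * math.pi * n / (seg_len - 1))
           for n in range(seg_len)]
    wnorm = sum(w * w for w in win)
    psd = [0.0] * (seg_len // 2 + 1)
    count = 0
    for start in range(0, len(x) - seg_len + 1, seg_len - overlap):
        seg = [x[start + n] * win[n] for n in range(seg_len)]
        for k in range(seg_len // 2 + 1):       # naive DFT per bin
            X = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / seg_len)
                    for n in range(seg_len))
            psd[k] += abs(X) ** 2 / wnorm
        count += 1
    return [p / count for p in psd]

# A pure tone completing 8 cycles per 64-sample segment should
# concentrate its power at frequency bin 8
sig = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spectrum = welch_psd(sig)
peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
```

Averaging over segments trades frequency resolution for a lower-variance estimate, which is why Welch's method is preferred over a single periodogram for noisy physiological signals.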
DOI: 10.1016/j.cmpb.2022.107161
2022
Cited 188 times
Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022)
Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as a black box. This lack of trust remains the main reason for their low use in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in a model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify areas of healthcare that require more attention from the XAI research community. Multiple journal databases were thoroughly searched using the PRISMA 2020 guidelines. Studies that did not appear in Q1 journals, which are highly credible, were excluded. In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, GradCAM, LRP, fuzzy classifiers, EBM, CBR, rule-based systems, and others. We discovered that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.
DOI: 10.1016/j.compbiomed.2020.103726
2020
Cited 185 times
Application of deep learning techniques for heartbeats detection using ECG signals-analysis and review
Deep learning models have become a popular means of classifying electrocardiogram (ECG) data. Investigators have used a variety of deep learning techniques for this application. Herein, a detailed examination of deep learning methods for ECG arrhythmia detection is provided. Approaches used by investigators are examined, and their contributions to the field are detailed. For this purpose, journal papers have been surveyed according to the methods used. In addition, various deep learning models and experimental studies are described and discussed. A five-class ECG dataset containing 100,022 beats was then utilized for further analysis of deep learning techniques. The constructed models were examined with this dataset, and results are presented. This study therefore provides information concerning deep learning approaches used for arrhythmia classification, and suggestions for further research in this area.
DOI: 10.1016/j.knosys.2019.104923
2019
Cited 168 times
Automated arrhythmia detection using novel hexadecimal local pattern and multilevel wavelet transform with ECG signals
Electrocardiography (ECG) is widely used for arrhythmia detection nowadays. Machine learning methods combined with signal processing algorithms have been used for the automated diagnosis of cardiac health using ECG signals. In this article, the discrete wavelet transform (DWT) coupled with a novel 1-dimensional hexadecimal local pattern (1D-HLP) technique is employed for automated arrhythmia detection. The ECG signals of 10 s duration are subjected to DWT and decomposed up to five levels. The 1D-HLP extracts 512-dimensional features from each of the five low-pass filter levels. These extracted features are then concatenated to obtain a 512 × 6 = 3072-dimensional feature set. The fused features are subjected to the neighborhood component analysis (NCA) feature reduction technique to obtain 64, 128, and 256 features. Finally, these features are fed to a 1-nearest neighbor (1NN) classifier for classification with 4 distance metrics, namely city block, Euclidean, Spearman, and cosine. We have obtained a classification accuracy of 95.0% in classifying 17 arrhythmia classes using the MIT-BIH Arrhythmia ECG dataset. Our results show that the proposed method is superior to previously reported classical ensemble learning and deep learning methods for arrhythmia detection using ECG signals.
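The exact hexadecimal coding of the 1D-HLP descriptor is defined in the paper itself. As a simpler, generic illustration of the same family of techniques, a classic 1-D local binary pattern histogram (the function name and neighbourhood size here are illustrative assumptions, not the paper's method) can be computed as:

```python
import numpy as np

def lbp_1d_histogram(x, radius=4):
    """Normalized histogram of 1-D local binary patterns: each sample is
    compared against its `radius` left and `radius` right neighbours,
    giving an 8-bit code (256 histogram bins) when radius=4."""
    x = np.asarray(x, dtype=float)
    codes = []
    for i in range(radius, len(x) - radius):
        neighbours = np.concatenate([x[i - radius:i], x[i + 1:i + 1 + radius]])
        bits = (neighbours >= x[i]).astype(int)          # threshold at the center sample
        codes.append(int("".join(map(str, bits)), 2))    # bits -> pattern code
    hist = np.bincount(codes, minlength=2 ** (2 * radius))
    return hist / hist.sum()
```

Such a histogram, computed per wavelet sub-band and concatenated, mirrors the fused-feature construction described in the abstract.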
DOI: 10.1016/j.compbiomed.2020.104129
2021
Cited 166 times
The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis
Recently, deep learning frameworks have rapidly become the main methodology for analyzing medical images. Due to their powerful learning ability and advantages in dealing with complex patterns, deep learning algorithms are ideal for image analysis challenges, particularly in the field of digital pathology. The variety of image analysis tasks in the context of deep learning includes classification (e.g., healthy vs. cancerous tissue), detection (e.g., lymphocytes and mitosis counting), and segmentation (e.g., nuclei and glands segmentation). The majority of recent machine learning methods in digital pathology have a pre- and/or post-processing stage which is integrated with a deep neural network. These stages, based on traditional image processing methods, are employed to make the subsequent classification, detection, or segmentation problem easier to solve. Several studies have shown how the integration of pre- and post-processing methods within a deep learning pipeline can further increase the model's performance when compared to the network by itself. The aim of this review is to provide an overview of the types of methods that are used within deep learning frameworks either to optimally prepare the input (pre-processing) or to improve the results of the network output (post-processing), focusing on digital pathology image analysis. Many of the techniques presented here, especially the post-processing methods, are not limited to digital pathology but can be extended to almost any image analysis field.
DOI: 10.3390/e17085218
2015
Cited 163 times
An Integrated Index for the Identification of Focal Electroencephalogram Signals Using Discrete Wavelet Transform and Entropy Measures
The dynamics of brain area influenced by focal epilepsy can be studied using focal and non-focal electroencephalogram (EEG) signals. This paper presents a new method to detect focal and non-focal EEG signals based on an integrated index, termed the focal and non-focal index (FNFI), developed using discrete wavelet transform (DWT) and entropy features. The DWT decomposes the EEG signals up to six levels, and various entropy measures are computed from approximate and detail coefficients of sub-band signals. The computed entropy measures are average wavelet, permutation, fuzzy and phase entropies. The proposed FNFI developed using permutation, fuzzy and Shannon wavelet entropies is able to clearly discriminate focal and non-focal EEG signals using a single number. Furthermore, these entropy measures are ranked using different techniques, namely the Bhattacharyya space algorithm, Student's t-test, the Wilcoxon test, the receiver operating characteristic (ROC) and entropy. These ranked features are fed to various classifiers, namely k-nearest neighbour (KNN), probabilistic neural network (PNN), fuzzy classifier and least squares support vector machine (LS-SVM), for automated classification of focal and non-focal EEG signals using the minimum number of features. The identification of the focal EEG signals can be helpful to locate the epileptogenic focus.
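Permutation entropy, one of the entropy measures named in the abstract above, is a standard quantity with a well-known definition (Bandt-Pompe). A compact numpy sketch (the order and delay defaults are the usual textbook choices, not values stated in the abstract) might be:

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy: the Shannon entropy of the
    distribution of ordinal patterns of `order` consecutive samples."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    windows = np.array([x[i:i + (order - 1) * delay + 1:delay] for i in range(n)])
    # Double argsort turns each window into its ordinal (rank) pattern
    ranks = windows.argsort(axis=1).argsort(axis=1)
    _, counts = np.unique(ranks, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) / np.log2(math.factorial(order)))
```

A monotonic signal contains a single ordinal pattern and yields entropy 0, while noise approaches 1, which is why such measures separate regular from irregular EEG dynamics.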
DOI: 10.1016/j.artmed.2019.07.006
2019
Cited 162 times
Automated detection of schizophrenia using nonlinear signal processing methods
Examination of the brain's condition with the electroencephalogram (EEG) can be helpful in predicting abnormalities and cerebral activities. The purpose of this study was to develop an Automated Diagnostic Tool (ADT) to investigate and classify EEG signal patterns into normal and schizophrenia classes. The ADT implements a sequence of events, such as EEG series splitting, non-linear feature mining, t-test-assisted feature selection, classification, and validation. The proposed ADT is employed to evaluate 19-channel EEG signals collected from normal and schizophrenia class volunteers. A dataset was created by splitting the raw 19-channel EEG into sequences of 6250 sample points, which was helpful to produce 1142 patterns of the normal and schizophrenia classes. Non-linear feature extraction was then implemented to mine 157 features from each EEG pattern, from which 14 principal features were identified based on significance. Finally, signal classification with Decision Tree (DT), Linear Discriminant analysis (LD), k-Nearest Neighbour (KNN), Probabilistic Neural Network (PNN), and Support Vector Machine (SVM) classifiers with various kernels was implemented. The experimental outcome showed that the SVM with Radial Basis Function (SVM-RBF) offered a superior average performance value of 92.91% on the considered EEG dataset, compared to the other classifiers implemented in this work.
DOI: 10.1007/s10916-019-1345-y
2019
Cited 161 times
Automated Depression Detection Using Deep Representation and Sequence Learning with EEG Signals
DOI: 10.1016/j.cogsys.2018.07.004
2018
Cited 152 times
An efficient compression of ECG signals using deep convolutional autoencoders
Advances in information technology have facilitated the retrieval and processing of biomedical data. Especially with wearable technologies and mobile platforms, we are able to follow our healthcare data, such as electrocardiograms (ECG), in real time. However, the hardware resources of these technologies are limited. For this reason, the optimal storage and safe transmission of personal health data are critical. This study proposes a new deep convolutional autoencoder (CAE) model for compressing ECG signals. In this paper, a deep network structure of 27 layers consisting of encoder and decoder parts is designed. In the encoder section of this model, the signals are reduced to low-dimensional vectors; and in the decoder section, the signals are reconstructed. The deep learning approach provides the representations of the low and high levels of signals in the hidden layers of the model. Hence, the original signal can be reconstructed with minimal loss. Very different from traditional linear transformation methods, the deep compression approach can learn to handle different ECG records automatically. The performance was evaluated on an experimental data set comprising 4800 ECG fragments from 48 unique clinical patients. The compression rate (CR) of the proposed model was 32.25, and the average PRD value was 2.73%. These favourable observations suggest that our deep model can allow secure data transfer in a low-dimensional form to remote medical centers. We present an effective compression approach that can potentially be used in wearable devices, e-health applications, telemetry and Holter systems.
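The reconstruction quality above is reported as the percentage root-mean-square difference (PRD), a standard compression metric. A small numpy sketch of PRD and compression ratio (the sample values in the usage are illustrative, not from the paper) could be:

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference between a signal and its
    reconstruction (lower is better)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compression_ratio(n_original, n_compressed):
    """Ratio of original sample count to compressed code length."""
    return n_original / n_compressed
```

A PRD of 2.73% at a compression ratio of about 32, as reported in the abstract, indicates near-lossless reconstruction from a 32-fold smaller code.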
DOI: 10.1016/j.knosys.2015.02.011
2015
Cited 150 times
Automated diagnosis of coronary artery disease using tunable-Q wavelet transform applied on heart rate signals
Coronary artery disease (CAD) is the narrowing of coronary arteries leading to inadequate supply of nutrients and oxygen to the heart muscles. Over time, the condition can weaken the heart muscles and may lead to heart failure, arrhythmias and even sudden cardiac death. Hence, the early diagnosis of CAD can save lives and prevent the risk of stroke. The electrocardiogram (ECG) depicts the state of the heart and can be used to detect CAD. Small changes in the ECG signal indicate a particular disease. It is very difficult to decipher these minute changes in the ECG signal, as it is prone to artifacts and noise. Hence, we detect the R peaks from the ECG and use heart rate signals for our analysis. The manual inspection of heart rate signals is time consuming, taxing and prone to errors due to fatigue. Hence, a decision support system independent of human intervention can yield accurate, repeatable results. In this paper, we present a new method for diagnosis of CAD using tunable-Q wavelet transform (TQWT) based features extracted from heart rate signals. The heart rate signals are decomposed into various sub-bands using TQWT for better diagnostic feature extraction. The nonlinear feature called centered correntropy (CC) is computed on the decomposed detail sub-bands. Then principal component analysis (PCA) is performed on these CC features to reduce the feature dimensionality. These clinically significant features are subjected to a least squares support vector machine (LS-SVM) with different kernel functions for automated diagnosis. The experimental results demonstrate better classification accuracy, sensitivity, specificity and Matthews correlation coefficient using the Morlet wavelet kernel function with optimized kernel and regularization parameters. Also, we have developed a novel CAD Risk index using significant features to discriminate the two classes using a single number. Our proposed methodology is well suited to classifying normal and CAD heart rate signals, and can aid clinicians while screening CAD patients.
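The paper computes centered correntropy on TQWT sub-bands; the TQWT itself is beyond a short sketch, but the centered correntropy measure with a Gaussian kernel can be sketched generically in numpy (the kernel width below is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def centered_correntropy(x, y, sigma=1.0):
    """Centered correntropy with a Gaussian kernel: the mean kernel value
    on paired samples minus its mean over all cross-sample pairs."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    k = lambda d: np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    paired = np.mean(k(x - y))                   # correntropy estimate
    cross = np.mean(k(x[:, None] - y[None, :]))  # kernel mean over all pairs
    return float(paired - cross)
```

Subtracting the all-pairs kernel mean removes the baseline similarity of the marginal distributions, leaving a nonlinear measure of sample-wise dependence.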
DOI: 10.3389/fnins.2019.01325
2020
Cited 148 times
Automated Detection of Autism Spectrum Disorder Using a Convolutional Neural Network
Background: Convolutional Neural Networks (CNN) have provided significant achievements in different machine learning tasks such as speech recognition, image classification, and automotive software engineering, together with some substantial applications in neuroscience. This impressive progress is largely due to a combination of algorithmic breakthroughs, computation resource improvements, and access to a large amount of data. Method: In this paper, we focused on the diagnosis of autism spectrum disorder (ASD) via CNN using a large brain imaging dataset. We classified ASD patients using the most common resting-state functional magnetic resonance imaging (fMRI) data, represented by a multi-site database known as the Autism Brain Imaging Data Exchange (ABIDE). The proposed approach was able to classify individuals with autism against typical controls based on patterns of functional connectivity. The outcome measures are the accuracy, sensitivity, and specificity of the prediction of ASD from control subjects. Results: The experimental results indicate that our proposed model, with 70.22% diagnostic accuracy in classification of ASD, outperforms previous works on the ABIDE I dataset for the CC400 functional parcellation atlas of the brain. It was also shown that the number of parameters used in our CNN model is fewer than in the best-known study on ASD classification, which reduces the training time. The existing best-known method had a huge number of parameters, 19,961,200, in its final stage, whereas we reduced this to 4,398,802 parameters. The sensitivity and specificity were also measured in this study as part of our report.
DOI: 10.1016/j.artmed.2019.101789
2020
Cited 146 times
Comprehensive electrocardiographic diagnosis based on deep learning
Cardiovascular disease (CVD) is the leading cause of death worldwide, and coronary artery disease (CAD) is a major contributor. Early-stage CAD can progress if undiagnosed and left untreated, leading to myocardial infarction (MI) that may induce irreversible heart muscle damage, resulting in heart chamber remodeling and eventual congestive heart failure (CHF). Electrocardiography (ECG) signals can be useful to detect established MI, and may also be helpful for early diagnosis of CAD. For the latter especially, the ECG perturbations can be subtle and potentially misclassified during manual interpretation and/or when analyzed by traditional algorithms found in ECG instrumentation. For automated diagnostic systems (ADS), deep learning techniques are favored over conventional machine learning techniques, due to the automatic feature extraction and selection processes involved. This paper highlights various deep learning algorithms exploited for the classification of ECG signals into CAD, MI, and CHF conditions. The Convolutional Neural Network (CNN), followed by combined CNN and Long Short-Term Memory (LSTM) models, appear to be the most useful architectures for classification. A 16-layer LSTM model was developed in our study and validated using 10-fold cross-validation. A classification accuracy of 98.5% was achieved. Our proposed model has the potential to be a useful diagnostic tool in hospitals for the classification of abnormal ECG signals.
DOI: 10.1016/j.compbiomed.2019.103346
2019
Cited 145 times
Machine learning-based coronary artery disease diagnosis: A comprehensive review
Coronary artery disease (CAD) is the most common cardiovascular disease (CVD) and often leads to a heart attack. It annually causes millions of deaths and billions of dollars in financial losses worldwide. Angiography, which is invasive and risky, is the standard procedure for diagnosing CAD. Alternatively, machine learning (ML) techniques have been widely used in the literature as fast, affordable, and noninvasive approaches for CAD detection. The results that have been published on ML-based CAD diagnosis differ substantially in terms of the analyzed datasets, sample sizes, features, location of data collection, performance metrics, and applied ML techniques. Due to these fundamental differences, achievements in the literature cannot be generalized. This paper conducts a comprehensive and multifaceted review of all relevant studies that were published between 1992 and 2019 for ML-based CAD diagnosis. The impacts of various factors, such as dataset characteristics (geographical location, sample size, features, and the stenosis of each coronary artery) and applied ML techniques (feature selection, performance metrics, and method) are investigated in detail. Finally, the important challenges and shortcomings of ML-based CAD diagnosis are discussed.
DOI: 10.1016/j.compbiomed.2021.104949
2021
Cited 143 times
Deep learning for neuroimaging-based diagnosis and rehabilitation of Autism Spectrum Disorder: A review
Accurate diagnosis of Autism Spectrum Disorder (ASD) followed by effective rehabilitation is essential for the management of this disorder. Artificial intelligence (AI) techniques can aid physicians to apply automatic diagnosis and rehabilitation procedures. AI techniques comprise traditional machine learning (ML) approaches and deep learning (DL) techniques. Conventional ML methods employ various feature extraction and classification techniques, but in DL, the process of feature extraction and classification is accomplished intelligently and integrally. DL methods for diagnosis of ASD have been focused on neuroimaging-based approaches. Neuroimaging techniques are non-invasive disease markers potentially useful for ASD diagnosis. Structural and functional neuroimaging techniques provide physicians with substantial information about the structure (anatomy and structural connectivity) and function (activity and functional connectivity) of the brain. Due to the intricate structure and function of the brain, proposing optimum procedures for ASD diagnosis with neuroimaging data without exploiting powerful AI techniques like DL may be challenging. In this paper, studies conducted with the aid of DL networks to distinguish ASD are investigated. Rehabilitation tools provided for supporting ASD patients utilizing DL networks are also assessed. Finally, we will present important challenges in the automated detection and rehabilitation of ASD and propose some future works.
DOI: 10.1007/s00521-018-3889-z
2018
Cited 141 times
A deep convolutional neural network model for automated identification of abnormal EEG signals
DOI: 10.1016/j.ins.2018.07.063
2018
Cited 139 times
Computer-aided diagnosis of atrial fibrillation based on ECG Signals: A review
Arrhythmia is a type of disorder that affects the pattern and rate of the heartbeat. Among the various arrhythmia conditions, atrial fibrillation (AF) is the most prevalent. AF is associated with a chaotic, and frequently fast, heartbeat. Moreover, AF increases the risk of cardioembolic stroke and other heart-related problems such as heart failure. Thus, it is necessary to screen for AF and receive proper treatment before the condition progresses. To date, electrocardiogram (ECG) feature analysis is the gold standard for the diagnosis of AF. However, because they are time-varying, AF ECG signals are difficult to interpret. The ECG signals are often contaminated with noise. Further, manual interpretation of ECG signals may be subjective, time-consuming, and susceptible to inter-observer variabilities. Various computer-aided diagnosis (CADx) methods have been proposed to remedy these shortcomings. In this paper, different CADx systems developed by researchers are discussed. Also, the potentials of the CADx system are highlighted.
DOI: 10.1016/j.compbiomed.2018.04.025
2018
Cited 136 times
An accurate sleep stages classification system using a new class of optimally time-frequency localized three-band wavelet filter bank
Sleep-related disorders cause diminished quality of life in human beings. Sleep scoring, or sleep staging, is the process of classifying the various sleep stages, which helps to assess the quality of sleep. The identification of sleep stages using electroencephalogram (EEG) signals is an arduous task. Just by looking at an EEG signal, one cannot determine the sleep stages precisely. Sleep specialists may make errors in identifying sleep stages by visual inspection. To mitigate erroneous identification and to reduce the burden on doctors, a computer-aided EEG-based system can be deployed in hospitals to help identify the sleep stages correctly. Several automated systems based on the analysis of polysomnographic (PSG) signals have been proposed. A few sleep stage scoring systems using EEG signals have also been proposed. However, there is still a need for a robust and accurate portable system developed using a large dataset. In this study, we have developed a new single-channel EEG-based sleep-stage identification system using a novel set of wavelet-based features extracted from a large EEG dataset. We employed a novel three-band time-frequency localized (TBTFL) wavelet filter bank (FB). The EEG signals are decomposed using three-level wavelet decomposition, yielding seven sub-bands (SBs). This is followed by the computation of discriminating features, namely log-energy (LE), signal fractal dimensions (SFD), and signal sample entropy (SSE), from all seven SBs. The extracted features are ranked and fed to the support vector machine (SVM) and other supervised learning classifiers. In this study, we have considered five different classification problems (CPs): two-class (CP-1), three-class (CP-2), four-class (CP-3), five-class (CP-4), and six-class (CP-5). The proposed system yielded accuracies of 98.3%, 93.9%, 92.1%, 91.7%, and 91.5% for CP-1 to CP-5, respectively, using the 10-fold cross-validation (CV) technique.
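Among the features above, signal sample entropy quantifies the (ir)regularity of a sub-band. A compact numpy sketch of the standard sample-entropy computation (the embedding dimension and tolerance defaults are the usual textbook choices, not values stated in the abstract) is:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log of the conditional probability that sequences
    matching for m points (within tolerance r * std) also match at m + 1."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def n_matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(t)) / 2   # drop self-matches
    b, a = n_matches(m), n_matches(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")
```

Regular signals (e.g., a strict alternation) score near zero, while noise scores high, which is what makes the measure discriminative across sleep stages.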
DOI: 10.1016/j.yebeh.2018.09.030
2018
Cited 132 times
Automated seizure prediction
In the past two decades, significant advances have been made on automated electroencephalogram (EEG)-based diagnosis of epilepsy and seizure detection. A number of innovative algorithms have been introduced that can aid in epilepsy diagnosis with a high degree of accuracy. In recent years, the frontiers of computational epilepsy research have moved to seizure prediction, a more challenging problem. While antiepileptic medication can result in complete seizure freedom in many patients with epilepsy, up to one-third of patients living with epilepsy will have medically intractable epilepsy, where medications reduce seizure frequency but do not completely control seizures. If a seizure can be predicted prior to its clinical manifestation, then there is potential for abortive treatment to be given, either self-administered or via an implanted device administering medication or electrical stimulation. This will have a far-reaching impact on the treatment of epilepsy and patients' quality of life. This paper presents a state-of-the-art review of recent efforts and journal articles on seizure prediction. The technologies developed for epilepsy diagnosis and seizure detection are being adapted and extended for seizure prediction. The paper ends with some novel ideas for seizure prediction using the increasingly ubiquitous machine learning technology, particularly deep neural network machine learning.
DOI: 10.1016/j.compbiomed.2018.12.012
2019
Cited 130 times
Automated beat-wise arrhythmia diagnosis using modified U-net on extended electrocardiographic recordings with heterogeneous arrhythmia types
Abnormality of the cardiac conduction system can induce arrhythmia - abnormal heart rhythm - that can frequently lead to other cardiac diseases and complications, and is sometimes life-threatening. These conduction system perturbations can manifest as morphological changes on the surface electrocardiographic (ECG) signal. Assessment of these morphological changes can be challenging and time-consuming, as ECG signal features are often low in amplitude and subtle. The main aim of this study is to develop an automated computer aided diagnostic (CAD) system that can expedite the process of arrhythmia diagnosis, as an aid to clinicians to provide appropriate and timely intervention to patients. We propose an autoencoder of ECG signals that can diagnose normal sinus beats, atrial premature beats (APB), premature ventricular contractions (PVC), left bundle branch block (LBBB) and right bundle branch block (RBBB). Apart from the first, the rest are morphological beat-to-beat elements that characterize and constitute complex arrhythmia. The novelty of this work lies in how we modified the U-net model to perform beat-wise analysis on heterogeneously segmented ECGs of variable lengths derived from the MIT-BIH arrhythmia database. The proposed system has demonstrated self-learning ability in generating class activation maps, and these generated maps faithfully reflect the cardiac conditions in each ECG cardiac cycle. It has attained a high classification accuracy of 97.32% in diagnosing cardiac conditions, and 99.3% for R peak detection using a ten-fold cross validation strategy. Our developed model can help physicians to screen ECG accurately, potentially resulting in timely intervention of patients with arrhythmia.
DOI: 10.1016/j.compbiomed.2021.104418
2021
Cited 130 times
Uncertainty quantification in skin cancer classification using three-way decision-based Bayesian deep learning
Accurate automated medical image recognition, including classification and segmentation, is one of the most challenging tasks in medical image analysis. Recently, deep learning methods have achieved remarkable success in medical image classification and segmentation, clearly becoming the state-of-the-art methods. However, most of these methods are unable to provide uncertainty quantification (UQ) for their output, often being overconfident, which can lead to disastrous consequences. Bayesian Deep Learning (BDL) methods can be used to quantify uncertainty of traditional deep learning methods, and thus address this issue. We apply three uncertainty quantification methods to deal with uncertainty during skin cancer image classification. They are as follows: Monte Carlo (MC) dropout, Ensemble MC (EMC) dropout and Deep Ensemble (DE). To further resolve the remaining uncertainty after applying the MC, EMC and DE methods, we describe a novel hybrid dynamic BDL model, taking into account uncertainty, based on the Three-Way Decision (TWD) theory. The proposed dynamic model enables us to use different UQ methods and different deep neural networks in distinct classification phases. So, the elements of each phase can be adjusted according to the dataset under consideration. In this study, the two best UQ methods (i.e., DE and EMC) are applied in two classification phases (the first and second phases) to analyze two well-known skin cancer datasets, preventing one from making overconfident decisions when it comes to diagnosing the disease. The accuracy and the F1-score of our final solution are, respectively, 88.95% and 89.00% for the first dataset, and 90.96% and 91.00% for the second dataset. Our results suggest that the proposed TWDBDL model can be used effectively at different stages of medical image analysis.
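The MC-dropout and ensemble methods named above all reduce to the same post-processing step: aggregating class probabilities from several stochastic forward passes and scoring the disagreement. A minimal numpy sketch of that aggregation (this is a generic illustration of predictive entropy, not the paper's full three-way-decision pipeline) might be:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Predictive uncertainty from T stochastic forward passes.
    mc_probs has shape (T, n_classes); returns the mean class probabilities
    and the entropy (in nats) of that mean predictive distribution."""
    mc_probs = np.asarray(mc_probs, dtype=float)
    mean_p = mc_probs.mean(axis=0)
    # Small epsilon guards log(0) for classes no pass ever predicted
    entropy = float(-np.sum(mean_p * np.log(mean_p + 1e-12)))
    return mean_p, entropy
```

High entropy flags the inputs on which the passes disagree; in a three-way-decision scheme such cases would be deferred to the next classification phase rather than decided immediately.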
DOI: 10.1016/j.asoc.2019.105740
2019
Cited 122 times
Application of new deep genetic cascade ensemble of SVM classifiers to predict the Australian credit scoring
In recent decades, credit scoring has become a very important analytical resource for researchers and financial institutions around the world. It helps to boost both profitability and risk control, since bank credit plays a significant role in the banking industry. In this study, a novel approach based on a deep genetic cascade ensemble of different support vector machine (SVM) classifiers (called Deep Genetic Cascade Ensembles of Classifiers (DGCEC)) is applied to the Statlog Australian data. The proposed approach is a hybrid model which merges the benefits of: (a) evolutionary computation, (b) ensemble learning, and (c) deep learning. The proposed approach comprises a novel 16-layer genetic cascade ensemble of classifiers, having: two types of SVM classifiers, normalization techniques, feature extraction methods, three types of kernel functions, parameter optimizations, and the stratified 10-fold cross-validation method. The general architecture of the proposed approach consists of ensemble learning, deep learning, layered learning, supervised training, feature (attribute) selection using a genetic algorithm, optimization of the parameters of all classifiers using a genetic algorithm, and a new genetic layered training technique (for the selection of classifiers). Our developed model achieved the highest prediction accuracy of 97.39%. Hence, our proposed approach can be employed in the banking system to evaluate the bank credits of applicants and aid bank managers in making correct decisions.
DOI: 10.1002/jmv.26699
2020
Cited 116 times
Risk factors prediction, clinical outcomes, and mortality in COVID‐19 patients
Abstract Preventing communicable diseases requires understanding the spread, epidemiology, clinical features, progression, and prognosis of the disease. Early identification of risk factors and clinical outcomes might help in identifying critically ill patients, providing appropriate treatment, and preventing mortality. We conducted a prospective study in patients with flu‐like symptoms referred to the imaging department of a tertiary hospital in Iran between March 3, 2020, and April 8, 2020. Patients with COVID‐19 were followed up after two months to check their health condition. The categorical data between groups were analyzed by Fisher's exact test and continuous data by Wilcoxon rank‐sum test. Three hundred and nineteen patients (mean age 45.48 ± 18.50 years, 177 women) were enrolled. Fever, dyspnea, weakness, shivering, C‐reactive protein, fatigue, dry cough, anorexia, anosmia, ageusia, dizziness, sweating, and age were the most important symptoms of COVID‐19 infection. Traveling in the past 3 months, asthma, taking corticosteroids, liver disease, rheumatological disease, cough with sputum, eczema, conjunctivitis, tobacco use, and chest pain did not show any relationship with COVID‐19. To the best of our knowledge, a number of factors associated with mortality due to COVID‐19 have been investigated for the first time in this study. Our results might be helpful in early prediction and risk reduction of mortality in patients infected with COVID‐19.
DOI: 10.1016/j.ins.2019.12.045
2020
Cited 105 times
DGHNL: A new deep genetic hierarchical network of learners for prediction of credit scoring
Credit scoring (CS) is an effective and crucial approach used for risk management in banks and other financial institutions. It provides appropriate guidance on granting loans and reduces risks in the financial area. Hence, companies and banks are trying to use novel automated solutions to deal with the CS challenge to protect their own finances and customers. Nowadays, different machine learning (ML) and data mining (DM) algorithms are used to improve various aspects of CS prediction. In this paper, we introduce a novel methodology, named Deep Genetic Hierarchical Network of Learners (DGHNL). The proposed methodology comprises different types of learners, including Support Vector Machines (SVM), k-Nearest Neighbors (kNN), Probabilistic Neural Networks (PNN), and fuzzy systems. The Statlog German (1000 instances) credit approval dataset available in the UCI machine learning repository is used to test the effectiveness of our model in the CS domain. Our DGHNL model encompasses five kinds of learners, two kinds of data normalization procedures, two feature extraction methods, three kinds of kernel functions, and three kinds of parameter optimizations. Furthermore, the model applies deep learning, ensemble learning, supervised training, layered learning, genetic selection of features (attributes), genetic optimization of learner parameters, and a novel genetic layered training (selection of learners) approach, used along with the stratified 10-fold cross-validation (CV) training-testing method. The novelty of our approach relies on a proper flow and fusion of information (the DGHNL structure and its optimization). We show that the proposed DGHNL model with a 29-layer structure is capable of achieving a prediction accuracy of 94.60% (54 errors per 1000 classifications) for the Statlog German credit approval data. This is the best prediction performance for this well-known credit scoring dataset, compared to the existing work in the field.
DOI: 10.1016/j.bbe.2021.11.004
2022
Cited 105 times
Transfer learning techniques for medical image analysis: A review
Medical imaging is a useful tool for disease detection, and diagnostic imaging technology has enabled early diagnosis of medical conditions. Manual image analysis methods are labor-intensive and susceptible to intra- as well as inter-observer variability. Automated medical image analysis techniques can overcome these limitations. In this review, we investigated Transfer Learning (TL) architectures for automated medical image analysis. We discovered that TL has been applied to a wide range of medical imaging tasks, such as segmentation, object identification, disease categorization, and severity grading, to name a few. We could establish that TL provides high-quality decision support and requires less training data when compared to traditional deep learning methods. These advantageous properties arise from the fact that TL models have already been trained on large generic datasets and a task-specific dataset is only used to customize the model. This eliminates the need to train the models from scratch. Our review shows that AlexNet, ResNet, VGGNet, and GoogleNet are the most widely used TL models for medical image analysis. We found that these models transfer well to medical images, and task-specific customization refines this ability, making TL models useful tools for medical image analysis.
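The transfer-learning workflow described above (freeze the generically pre-trained backbone, train only a task-specific head) can be illustrated without any deep learning framework. The toy "model" below is a hypothetical stand-in, not the review's actual architecture: layers are dicts with a `trainable` flag, and the update step touches only the head.

```python
def build_pretrained_model():
    # Toy stand-in for a network pre-trained on a large generic dataset.
    backbone = [{"name": f"conv{i}", "w": [0.5] * 3, "trainable": False}
                for i in range(4)]
    head = [{"name": "fc", "w": [0.0] * 3, "trainable": True}]  # task-specific layer
    return backbone + head

def sgd_step(model, grads, lr=0.1):
    """Update only the trainable (task-specific) layers; frozen layers keep
    the representations learned on the generic dataset."""
    for layer in model:
        if layer["trainable"]:
            layer["w"] = [w - lr * g for w, g in zip(layer["w"], grads)]

model = build_pretrained_model()
sgd_step(model, grads=[1.0, 1.0, 1.0])
```

After the step, the backbone weights are untouched while the head has moved, which is why TL needs far less task-specific data than training from scratch.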
DOI: 10.1016/j.compbiomed.2021.104697
2021
Cited 104 times
Applications of deep learning techniques for automated multiple sclerosis detection using magnetic resonance imaging: A review
Multiple Sclerosis (MS) is a brain disease that causes visual, sensory, and motor problems, with a detrimental effect on the functioning of the nervous system. In order to diagnose MS, multiple screening methods have been proposed so far; among them, magnetic resonance imaging (MRI) has received considerable attention among physicians. MRI modalities provide physicians with fundamental information about the structure and function of the brain, which is crucial for the rapid diagnosis of MS lesions. Diagnosing MS using MRI is time-consuming, tedious, and prone to manual errors. Research on computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) for diagnosing MS involves both conventional machine learning and deep learning (DL) methods. In conventional machine learning, the feature extraction, feature selection, and classification steps are carried out by trial and error; in DL, on the contrary, these steps rely on deep layers whose values are learned automatically. In this paper, a complete review of automated MS diagnosis methods performed using DL techniques with MRI neuroimaging modalities is provided. Initially, the steps involved in various CADS proposed using MRI modalities and DL techniques for MS diagnosis are investigated. The important preprocessing techniques employed in various works are analyzed. Most of the published papers on MS diagnosis using MRI modalities and DL are presented. The most significant challenges facing, and future directions of, automated diagnosis of MS using MRI modalities and DL techniques are also provided.
DOI: 10.1016/j.compbiomed.2022.105554
2022
Cited 88 times
An overview of artificial intelligence techniques for diagnosis of Schizophrenia based on magnetic resonance imaging modalities: Methods, challenges, and future works
Schizophrenia (SZ) is a mental disorder that typically emerges in late adolescence or early adulthood. It reduces the life expectancy of patients by 15 years. Abnormal behavior, perception of emotions, social relationships, and reality perception are among its most significant symptoms. Past studies have revealed that SZ affects the temporal and anterior lobes and hippocampal regions of the brain. Also, an increased volume of cerebrospinal fluid (CSF) and decreased volumes of white and gray matter can be observed due to this disease. Magnetic resonance imaging (MRI) is the popular neuroimaging technique used to explore structural/functional brain abnormalities in SZ disorder, owing to its high spatial resolution. Various artificial intelligence (AI) techniques have been employed with advanced image/signal processing methods to accurately diagnose SZ. This paper presents a comprehensive overview of studies conducted on the automated diagnosis of SZ using MRI modalities. First, an AI-based computer-aided diagnosis system (CADS) for SZ diagnosis and its relevant sections are presented. Then, the most important conventional machine learning (ML) and deep learning (DL) techniques used in diagnosing SZ are introduced. A comprehensive comparison between ML and DL studies is also made in the discussion section. The most important challenges in diagnosing SZ are then addressed, and future works in diagnosing SZ using AI techniques and MRI modalities are recommended. Results, conclusions, and research findings are presented at the end.
DOI: 10.1016/j.compbiomed.2021.104548
2021
Cited 76 times
Automated ASD detection using hybrid deep lightweight features extracted from EEG signals
Autism spectrum disorder is a common group of conditions affecting about one in 54 children. Electroencephalogram (EEG) signals from children with autism have a common morphological pattern which makes them distinguishable from normal EEG. We have used this type of signal to design and implement an automated autism detection model. We propose a hybrid lightweight deep feature extractor to obtain high classification performance. The system was designed and tested with a large EEG dataset that contained signals from autism patients and normal controls. (i) A new signal-to-image conversion model is presented in this paper. Features are extracted from the EEG signal using the one-dimensional local binary pattern (1D_LBP), and the generated features are used as input to the short-time Fourier transform (STFT) to generate spectrogram images. (ii) The deep features of the generated spectrogram images are extracted using a combination of pre-trained MobileNetV2, ShuffleNet, and SqueezeNet models. This method is named the hybrid deep lightweight feature generator. (iii) A two-layered ReliefF algorithm is used for feature ranking and feature selection. (iv) The most discriminative features are fed to various shallow classifiers, developed using a 10-fold cross-validation strategy, for automated autism detection. A support vector machine (SVM) classifier reached 96.44% accuracy based on features from the proposed model. The results strongly indicate that the proposed hybrid deep lightweight feature extractor is suitable for autism detection using EEG signals. The model is ready to serve as part of an adjunct tool that aids neurologists during autism diagnosis in medical centers.
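The 1D local binary pattern in step (i) encodes each sample by comparing it with its neighbours. A common formulation (used here as an assumed illustration; the paper may differ in neighbourhood size or bit order) takes four neighbours on each side and sets a bit when the neighbour is at least as large as the centre:

```python
def one_d_lbp(signal, radius=4):
    """1D local binary pattern: compare each centre sample with `radius`
    neighbours on either side; a neighbour >= centre sets the corresponding bit."""
    codes = []
    for i in range(radius, len(signal) - radius):
        centre = signal[i]
        neighbours = signal[i - radius:i] + signal[i + 1:i + radius + 1]
        code = 0
        for bit, n in enumerate(neighbours):
            if n >= centre:
                code |= 1 << bit
        codes.append(code)
    return codes

print(one_d_lbp([1, 2, 3, 4, 5, 4, 3, 2, 1]))  # peak centre: no neighbour qualifies
```

Each code is an 8-bit texture descriptor; histograms of these codes (or, as in the paper, their STFT spectrograms) become the classifier input.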
DOI: 10.1016/j.cmpb.2021.105941
2021
Cited 74 times
Automated detection of conduct disorder and attention deficit hyperactivity disorder using decomposition and nonlinear techniques with EEG signals
Attention deficit hyperactivity disorder (ADHD) often presents with conduct disorder (CD). There is currently no objective laboratory test or diagnostic method to discern between ADHD and CD, and diagnosis is made more difficult because ADHD is a common neuro-developmental disorder that often presents with other co-morbid difficulties, in particular conduct disorder, which carries a high degree of associated behavioural challenges. A novel automated system (AS) is proposed as a convenient supplementary tool to support clinicians in their diagnostic decisions. To the best of our knowledge, we are the first group to develop an automated classification system to classify ADHD, CD, and ADHD+CD classes using brain signals. The empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods were employed to decompose the electroencephalogram (EEG) signals. Autoregressive modelling coefficients and relative wavelet energy were then computed on the signals. Various nonlinear features were extracted from the decomposed coefficients. Adaptive synthetic sampling (ADASYN) was then employed to balance the dataset. The significant features were selected using the sequential forward selection method. The highly discriminatory features were subsequently fed to an array of classifiers. The highest accuracy of 97.88% was achieved with the K-Nearest Neighbour (KNN) classifier. The proposed system was developed using a ten-fold cross-validation strategy on EEG data from 123 children. To the best of our knowledge, this is the first study to develop an AS for the classification of ADHD, CD, and ADHD+CD classes using EEG signals. Our AS can potentially be used as a web-based application with a cloud system to aid the clinical diagnosis of ADHD and/or CD, thus supporting faster and accurate treatment for the children. It is important to note that testing with larger data is required before the AS can be employed for clinical applications.
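Sequential forward selection, as used above, greedily adds the feature that most improves a score until no candidate helps. Below is a minimal sketch; the feature names and the scoring function (per-feature utilities with a redundancy penalty) are hypothetical stand-ins, not the paper's actual features or classifier score.

```python
def sequential_forward_selection(features, score_fn, k):
    """Greedily add the feature that most improves score_fn; stop early
    when no remaining candidate improves the current score."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        if score_fn(selected + [best]) <= score_fn(selected):
            break  # no candidate improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical per-feature utilities with a penalty for sets larger than 2.
utility = {"ApEn": 0.5, "LLE": 0.4, "RWE": 0.3, "AR1": 0.1}
score = lambda subset: sum(utility[f] for f in subset) - 0.15 * max(0, len(subset) - 2)
chosen = sequential_forward_selection(utility, score, k=4)
```

In the paper the score would be a classifier's cross-validated accuracy rather than this toy additive utility.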
DOI: 10.1007/s10479-021-04006-2
2021
Cited 73 times
Handling of uncertainty in medical data using machine learning and probability theory techniques: a review of 30 years (1991–2020)
Understanding the data and reaching accurate conclusions are of paramount importance in the present era of big data. Machine learning and probability theory methods have been widely used for this purpose in various fields. One critically important yet less explored aspect is capturing and analyzing uncertainties in the data and model. Proper quantification of uncertainty provides valuable information needed to obtain an accurate diagnosis. This paper reviews related studies conducted in the last 30 years (from 1991 to 2020) in handling uncertainties in medical data using probability theory and machine learning techniques. Medical data is especially prone to uncertainty due to the presence of noise, so clean, noise-free medical data is very important for accurate diagnosis, and the sources of noise in medical data need to be known to address this issue. Physicians prescribe the diagnosis and treatment plan based on the medical data they obtain; hence, uncertainty is a growing concern in healthcare, and there is limited knowledge on how to address these problems. Our findings indicate that several challenges remain in handling uncertainty in raw medical data and new models. In this work, we have summarized various methods employed to overcome this problem. Nowadays, various novel deep learning techniques have been proposed to deal with such uncertainties and improve the performance in decision making.
DOI: 10.1016/j.compbiomed.2022.105550
2022
Cited 72 times
Explainable detection of myocardial infarction using deep learning models with Grad-CAM technique on ECG signals
Myocardial infarction (MI) accounts for a high number of deaths globally. In acute MI, accurate electrocardiography (ECG) is important for timely diagnosis and intervention in the emergency setting. Machine learning is increasingly being explored for automated computer-aided ECG diagnosis of cardiovascular diseases. In this study, we developed DenseNet and CNN models for the classification of healthy subjects and patients with ten classes of MI based on the location of myocardial involvement. ECG signals from the Physikalisch-Technische Bundesanstalt database were pre-processed, and the ECG beats were extracted using an R peak detection algorithm. The beats were then fed to the two models separately. While both models attained high classification accuracies (more than 95%), DenseNet is the preferred model for the classification task due to its lower computational complexity and the higher classification accuracy it achieves through feature reusability. An enhanced class activation mapping (CAM) technique called Grad-CAM was subsequently applied to the outputs of both models to enable visualization of the specific ECG leads and portions of ECG waves that were most influential for the predictive decisions made by the models for the 11 classes. It was observed that Lead V4 was the most activated lead in both the DenseNet and CNN models. Furthermore, this study has also established the different leads and parts of the signal that get activated for each class. This is the first study to report the features that influenced the classification decisions of deep models for multiclass classification of MI and healthy ECGs. This study therefore contributes significantly to the medical field: with some level of visible explainability of the inner workings of the models, the developed DenseNet and CNN models may garner needed clinical acceptance and have the potential to be implemented for ECG triage of MI diagnosis in hospitals and remote out-of-hospital settings.
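The Grad-CAM computation itself is compact: each feature map is weighted by the average gradient of the class score with respect to that map, the weighted maps are summed, and a ReLU keeps only positive evidence. The numpy sketch below uses toy 2x2 maps in place of real network activations and gradients:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map by its average gradient (alpha_k),
    sum the weighted maps, then keep only positive evidence (ReLU)."""
    weights = gradients.mean(axis=(1, 2))             # one alpha_k per map
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over maps
    return np.maximum(cam, 0)                         # ReLU

# Two 2x2 feature maps: gradients favour the first map, suppress the second.
acts = np.array([[[1.0, 0.0], [0.0, 1.0]],
                 [[0.0, 2.0], [2.0, 0.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],
                  [[-1.0, -1.0], [-1.0, -1.0]]])
cam = grad_cam(acts, grads)
```

For ECG, the resulting map is upsampled onto the beat so the influential leads and wave segments become visible.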
DOI: 10.1016/j.cmpb.2022.106646
2022
Cited 59 times
Automated emotion recognition: Current trends and future perspectives
Human emotions greatly affect the actions of a person. Automated emotion recognition has applications in multiple domains such as health care, e-learning, and surveillance. The development of computer-aided diagnosis (CAD) tools has led to the automated recognition of human emotions. This review paper provides an insight into various methods employed using electroencephalogram (EEG), facial, and speech signals coupled with multi-modal emotion recognition techniques; we have reviewed most of the state-of-the-art papers published on this topic. This study considered the various emotion recognition (ER) models proposed between 2016 and 2021, and the papers were analysed based on the methods employed, the classifier used, and the performance obtained. There is a significant rise in the application of deep learning techniques for ER; they have been widely applied to EEG, speech, facial expression, and multimodal features to develop accurate ER models. Our study reveals that most of the proposed machine and deep learning-based systems have yielded good performance for automated ER in a controlled environment. However, high performance for ER still needs to be achieved in uncontrolled environments.
DOI: 10.1016/j.inffus.2022.09.023
2023
Cited 49 times
UncertaintyFuseNet: Robust uncertainty-aware hierarchical feature fusion model with Ensemble Monte Carlo Dropout for COVID-19 detection
The COVID-19 (Coronavirus disease 2019) pandemic has become a major global threat to human health and well-being. Thus, the development of computer-aided detection (CAD) systems that are capable of accurately distinguishing COVID-19 from other diseases using chest computed tomography (CT) and X-ray data is of immediate priority. Such automatic systems are usually based on traditional machine learning or deep learning methods. Differently from most of the existing studies, which used either CT scan or X-ray images in COVID-19-case classification, we present a new, simple but efficient deep learning feature fusion model, called UncertaintyFuseNet, which is able to accurately classify large datasets of both of these image types. We argue that the uncertainty of the model's predictions should be taken into account in the learning process, even though most of the existing studies have overlooked it. We quantify the prediction uncertainty in our feature fusion model using the effective Ensemble Monte Carlo Dropout (EMCD) technique. A comprehensive simulation study has been conducted to compare the results of our new model to the existing approaches, evaluating the performance of competing models in terms of Precision, Recall, F-Measure, Accuracy, and ROC curves. The obtained results prove the efficiency of our model, which provided a prediction accuracy of 99.08% and 96.35% for the considered CT scan and X-ray datasets, respectively. Moreover, our UncertaintyFuseNet model was generally robust to noise and performed well with previously unseen data. The source code of our implementation is freely available at: https://github.com/moloud1987/UncertaintyFuseNet-for-COVID-19-Classification.
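Monte Carlo Dropout quantifies uncertainty by keeping dropout active at inference, averaging the class probabilities over many stochastic forward passes, and reporting the predictive entropy of the mean. The sketch below is a minimal illustration in which a hypothetical `noisy_softmax` stands in for a dropout-perturbed classifier head; it is not the UncertaintyFuseNet architecture itself.

```python
import math, random

def mc_dropout_predict(forward, x, passes=50):
    """Average class probabilities over stochastic forward passes and
    report predictive entropy of the mean as the uncertainty estimate."""
    sums = None
    for _ in range(passes):
        p = forward(x)
        sums = p if sums is None else [a + b for a, b in zip(sums, p)]
    mean = [s / passes for s in sums]
    entropy = -sum(p * math.log(p) for p in mean if p > 0)
    return mean, entropy

random.seed(0)
def noisy_softmax(logits):  # hypothetical dropout-perturbed classifier head
    a = [v + random.uniform(-0.05, 0.05) for v in logits]
    z = [math.exp(v) for v in a]
    return [v / sum(z) for v in z]

mean, entropy = mc_dropout_predict(noisy_softmax, [2.0, 0.0, 0.0])
```

High entropy flags inputs (e.g. noisy or out-of-distribution scans) on which the model should defer to a human reader.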
DOI: 10.1016/j.cmpb.2021.106609
2022
Cited 47 times
Interpretation of radiomics features–A pictorial review
Radiomics is an emerging field that has opened new windows for precision medicine. It involves the extraction of a large number of quantitative features from medical images, features which may be difficult to detect visually. Underlying tumor biology can change the physical properties of tissues, which affect the patterns of image pixels and the radiomics features. The main advantage of radiomics is that it can characterize the whole tumor non-invasively, even from a single sampling of an image; it can therefore be likened to a "digital biopsy". Physicians need to know about radiomics features to determine how their values correlate with the appearance of lesions and diseases. Indeed, physicians need practical references to grasp the basics and concepts of each radiomics feature without knowing its sophisticated mathematical formula. In this review, commonly used radiomics features are illustrated with practical examples to help physicians in their routine diagnostic procedures.
DOI: 10.1016/j.inffus.2023.101898
2023
Cited 38 times
Application of data fusion for automated detection of children with developmental and mental disorders: A systematic review of the last decade
Mental health is a basic need for a sustainable and developing society. The prevalence and financial burden of mental illness have increased globally, especially in response to community and worldwide pandemic events. Children suffering from such mental disorders find it difficult to cope with educational, occupational, personal, and societal developments, and treatments are not accessible to all. Advancements in technology have resulted in much research examining the use of artificial intelligence to detect or identify characteristics of mental illness. Therefore, this paper presents a systematic review of nine developmental and mental disorders (Autism spectrum disorder, Attention deficit hyperactivity disorder, Schizophrenia, Anxiety, Depression, Dyslexia, Post-traumatic stress disorder, Tourette syndrome, and Obsessive-compulsive disorder) prominent in children and adolescents. Our paper focuses on the automated detection of these developmental and mental disorders using physiological signals, and presents a detailed discussion of signal analysis, feature engineering, and decision-making, with their advantages, future directions, and challenges, across the papers published on mental disorders of children. We have presented the details of the dataset descriptions, validation techniques, features extracted, and decision-making models. The challenges and future directions raise open research questions on signal availability, uncertainty, explainability, and hardware implementation resources for signal analysis and machine or deep learning models. Finally, the main findings of this study are presented in the conclusion section.
DOI: 10.1016/j.compbiomed.2023.106676
2023
Cited 37 times
An explainable and interpretable model for attention deficit hyperactivity disorder in children using EEG signals
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that affects a person's sleep, mood, anxiety, and learning. Early diagnosis and timely medication can help individuals with ADHD perform daily tasks without difficulty. Electroencephalogram (EEG) signals can help neurologists detect ADHD by examining the changes occurring in them. EEG signals are complex, non-linear, and non-stationary, and it is difficult to find the subtle differences between ADHD and healthy control EEG signals visually. Also, decisions from existing machine learning (ML) models do not guarantee similar performance (they are unreliable). This paper explores a combination of variational mode decomposition (VMD) and the Hilbert transform (HT), called VMD-HT, to extract hidden information from EEG signals. Forty-one statistical parameters extracted from the absolute value of the analytical mode functions (AMF) have been classified using the explainable boosting machine (EBM) model. The interpretability of the model is tested using statistical analysis and performance measurement. The importance of the features, channels, and brain regions has been identified using glass-box and black-box approaches. The model's local and global explainability has been visualized using Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Partial Dependence Plots (PDP), and Morris sensitivity. To the best of our knowledge, this is the first work that explores the explainability of model predictions in ADHD detection, particularly for children. Our results show that the explainable model has provided an accuracy of 99.81%, a sensitivity of 99.78%, a specificity of 99.84%, an F-1 measure of 99.83%, a precision of 99.87%, a false detection rate of 0.13%, and a Matthews correlation coefficient, negative predictive value, and critical success index of 99.61%, 99.73%, and 99.66%, respectively, in detecting ADHD automatically with ten-fold cross-validation.
The model has provided an area under the curve of 100%, while detection rates of 99.87% and 99.73% have been obtained for ADHD and HC, respectively. The model shows that the interpretability and explainability of the frontal region is highest compared to the pre-frontal, central, parietal, occipital, and temporal regions. Our findings have provided important insight into the developed model, which is highly reliable, robust, interpretable, and explainable, helping clinicians detect ADHD in children. Early and rapid ADHD diagnosis using robust explainable technologies may reduce the cost of treatment and lessen the number of patients undergoing lengthy diagnosis procedures.
DOI: 10.1016/j.knosys.2022.110190
2023
Cited 31 times
Automated accurate detection of depression using twin Pascal’s triangles lattice pattern with EEG Signals
Electroencephalogram (EEG)-based major depressive disorder (MDD) machine learning detection models can objectively differentiate MDD from healthy controls but are limited by high complexity or low accuracy. This work presents a self-organized, computationally lightweight, handcrafted classification model for accurate MDD detection using a reference subject-based validation strategy. We used the public Multimodal Open Dataset for Mental Disorder Analysis (MODMA) comprising 128-channel EEG signals from 24 MDD and 29 healthy control (HC) subjects. The input EEG was decomposed using a multilevel discrete wavelet transform with the Daubechies 4 mother wavelet function into eight low- and high-level wavelet bands. We used a novel Twin Pascal's Triangles Lattice Pattern (TPTLP) comprising an array of 25 values to extract local textural features from the raw EEG signal and subbands. For each overlapping signal block of length 25, two walking paths that traced the maximum and minimum L1-norm distances from v1 to v25 of the TPTLP were dynamically generated to extract features. Forty statistical features were also extracted in parallel per run. We employed neighborhood component analysis for feature selection, a k-nearest neighbor classifier to obtain 128 channel-wise prediction vectors, iterative hard majority voting to generate 126 voted vectors, and a greedy algorithm to determine the best overall model result. Our generated model attained the best channel-wise and overall model accuracies: 76.08% (for Channel 1) and 83.96% (voted from the top 13 channels) using the leave-one-subject-out (LOSO) cross-validation (CV) strategy, and 100% using the 10-fold CV strategy, outperforming other published models developed using the same (MODMA) dataset.
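The multilevel DWT step above repeatedly splits the signal into an approximation (low-pass) and a detail (high-pass) band, then recurses on the approximation. For brevity the sketch below uses the orthonormal Haar wavelet instead of the paper's Daubechies 4; the structure of the decomposition is the same.

```python
import math

def haar_step(x):
    """One DWT level: orthonormal Haar averages (approximation)
    and differences (detail) over non-overlapping pairs."""
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def multilevel_dwt(x, levels):
    """Return the detail subband of each level plus the final approximation."""
    bands = []
    for _ in range(levels):
        x, d = haar_step(x)
        bands.append(d)
    bands.append(x)
    return bands

bands = multilevel_dwt([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0], levels=2)
```

Because the transform is orthonormal, the total signal energy is preserved across the subbands, so band-wise features remain comparable.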
DOI: 10.1016/j.cmpb.2022.107308
2023
Cited 28 times
Uncertainty quantification in DenseNet model using myocardial infarction ECG signals
Myocardial infarction (MI) is a life-threatening condition diagnosed acutely on the electrocardiogram (ECG). Several sources of error, such as noise, can impair automated ECG diagnosis; therefore, quantification and communication of model uncertainty are essential for reliable MI diagnosis. A Dirichlet DenseNet model that could analyze out-of-distribution data and detect misclassification of MI and normal ECG signals was developed. The DenseNet model was first trained with the pre-processed MI ECG signals (from the best lead, V6) acquired from the Physikalisch-Technische Bundesanstalt (PTB) database, using the reverse Kullback-Leibler (KL) divergence loss. The model was then tested with newly synthesized ECG signals with added em and ma noise samples. Predictive entropy was used as an uncertainty measure to determine the misclassification of normal and MI signals. Model performance was evaluated using four uncertainty metrics: uncertainty sensitivity (UNSE), uncertainty specificity (UNSP), uncertainty accuracy (UNAC), and uncertainty precision (UNPR); the classification threshold was set at 0.3. The UNSE of the DenseNet model was low but increased over the studied noise range (-6 to 24 dB), indicating that the model grew more confident in classifying the signals as they became less noisy. The model became more certain in its predictions from SNR values of 12 dB and 18 dB onwards, yielding UNAC values of 80% and 82.4% for em and ma noise signals, respectively. UNSP and UNPR values were close to 100% for em and ma noise signals, indicating that the model was self-aware of what it did and did not know. Through this work, it has been established that the model is reliable, as it was able to convey when it was not confident in the diagnostic information it was presenting. Thus, the model is trustworthy and can be used in healthcare applications, such as the emergency diagnosis of MI on ECGs.
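The four uncertainty metrics can be computed from two parallel labelings of each prediction: whether it was actually correct, and whether its predictive entropy exceeded the threshold. The definitions below are a plausible reading of the metric names (flagged-and-wrong counts as a "true uncertain"), not necessarily the paper's exact formulation:

```python
def uncertainty_metrics(correct, entropies, threshold=0.3):
    """Treat predictions with entropy above `threshold` as uncertain and
    score how well uncertainty flags align with actual misclassification."""
    uncertain = [e > threshold for e in entropies]
    tu = sum(u and not c for u, c in zip(uncertain, correct))  # flagged & wrong
    fu = sum(u and c for u, c in zip(uncertain, correct))      # flagged & right
    tc = sum(not u and c for u, c in zip(uncertain, correct))  # certain & right
    fc = sum(not u and not c for u, c in zip(uncertain, correct))
    unse = tu / max(tu + fc, 1)          # wrong predictions flagged uncertain
    unsp = tc / max(tc + fu, 1)          # right predictions left certain
    unac = (tu + tc) / len(correct)      # flag agrees with correctness
    unpr = tu / max(tu + fu, 1)          # flagged predictions that are wrong
    return unse, unsp, unac, unpr

# Two confident correct predictions, two uncertain wrong ones.
m = uncertainty_metrics([True, True, False, False], [0.1, 0.2, 0.9, 0.8])
```

A model that is "self-aware of what it did and did not know" scores near 100% on all four.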
DOI: 10.1016/j.inffus.2023.03.022
2023
Cited 28 times
Epilepsy detection in 121 patient populations using hypercube pattern from EEG signals
Epilepsy is one of the most commonly seen neurologic disorders worldwide and typically causes seizures. Electroencephalography (EEG) is widely used in seizure diagnosis. To detect epilepsy automatically, various machine learning (ML) models have been introduced in the literature, but the EEG signal datasets used for epilepsy detection are relatively small. Our main objective is to present a large EEG signal dataset and investigate the detection ability of a new hypercube pattern-based framework using the EEG signals. This study collected a large EEG signal dataset (10,356 EEG signals) from 121 participants. We proposed a new information fusion-based feature engineering framework to obtain high classification performance from this dataset. The dataset consists of 35 channels, and our proposed feature engineering model extracts features from each channel. A new hypercube-based feature extractor has been proposed to generate two feature vectors in the feature extraction phase. Various statistical parameters of the signals have been used to create a feature vector. Multilevel discrete wavelet transform (MDWT) has been applied to develop a multileveled feature extraction function, and seven feature vectors have been extracted. In this work, we have extracted 245 (= 35 × 7) feature vectors, and the most valuable features from these vectors have been selected using the neighborhood component analysis (NCA) selector. Finally, these selected features were fed to a k-nearest neighbors (kNN) classifier with the leave-one-subject-out (LOSO) cross-validation (CV) strategy, and the results were voted/fused to obtain the highest classification performance. We attained 87.78% classification accuracy by voting on these vectors and 79.07% with LOSO CV on the EEG signals. The proposed fusion-based feature engineering model achieved satisfactory classification performance on this large EEG signal dataset for epilepsy detection.
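The voting/fusion step above combines channel-wise prediction vectors by hard majority vote. A common variant (used in several of the papers listed here, and sketched below under the assumption that channels are pre-ranked by accuracy) votes over the top-k channels for increasing k and keeps every fused result so the best one can be selected afterwards:

```python
from collections import Counter

def iterative_hard_majority_vote(channel_preds):
    """Fuse accuracy-ranked channel-wise prediction vectors: majority-vote
    over the top-k channels for k = 3..n and keep every fused result."""
    fused = []
    for k in range(3, len(channel_preds) + 1):
        votes = []
        for sample in zip(*channel_preds[:k]):
            votes.append(Counter(sample).most_common(1)[0][0])
        fused.append(votes)
    return fused

# Three samples predicted by five (accuracy-ranked) hypothetical channels.
preds = [[1, 0, 1], [1, 0, 0], [0, 0, 1], [1, 1, 1], [0, 0, 1]]
fused = iterative_hard_majority_vote(preds)
```

A final greedy pass then picks the fused vector with the best agreement against the validation labels.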
DOI: 10.1016/j.compbiomed.2023.107441
2023
Cited 27 times
Application of uncertainty quantification to artificial intelligence in healthcare: A review of last decade (2013–2023)
Uncertainty estimation in healthcare involves quantifying and understanding the inherent uncertainty or variability associated with medical predictions, diagnoses, and treatment outcomes. In this era of Artificial Intelligence (AI) models, uncertainty estimation becomes vital to ensure safe decision-making in the medical field. Therefore, this review focuses on the application of uncertainty techniques to machine and deep learning models in healthcare. A systematic literature review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our analysis revealed that Bayesian methods were the predominant technique for uncertainty quantification in machine learning models, with Fuzzy systems being the second most used approach. Regarding deep learning models, Bayesian methods emerged as the most prevalent approach, finding application in nearly all aspects of medical imaging. Most of the studies reported in this paper focused on medical images, highlighting the prevalent application of uncertainty quantification techniques using deep learning models compared to machine learning models. Interestingly, we observed a scarcity of studies applying uncertainty quantification to physiological signals. Thus, future research on uncertainty quantification should prioritize investigating the application of these techniques to physiological signals. Overall, our review highlights the significance of integrating uncertainty techniques in healthcare applications of machine learning and deep learning models. This can provide valuable insights and practical solutions to manage uncertainty in real-world medical data, ultimately improving the accuracy and reliability of medical diagnoses and treatment recommendations.
DOI: 10.1007/s10278-023-00789-x
2023
Cited 21 times
PatchResNet: Multiple Patch Division–Based Deep Feature Fusion Framework for Brain Tumor Classification Using MRI Images
DOI: 10.1016/j.compbiomed.2013.05.024
2013
Cited 128 times
Automated identification of normal and diabetes heart rate signals using nonlinear measures
Diabetes mellitus (DM) affects a considerable number of people in the world, and the number of cases is increasing every year. Due to a strong link to the genetic basis of the disease, it is extremely difficult to cure. However, it can be controlled to prevent severe consequences, such as organ damage. Therefore, diabetes diagnosis and monitoring of its treatment are very important. In this paper, we have proposed a non-invasive diagnosis support system for DM. The system determines whether or not diabetes is present by assessing the cardiac health of a patient using heart rate variability (HRV) analysis. This analysis was based on nine nonlinear features derived from: Approximate Entropy (ApEn), the largest Lyapunov exponent (LLE), detrended fluctuation analysis (DFA), and recurrence quantification analysis (RQA). Clinically significant measures were used as input to classification algorithms, namely AdaBoost, decision tree (DT), fuzzy Sugeno classifier (FSC), the k-nearest neighbor algorithm (k-NN), probabilistic neural network (PNN), and support vector machine (SVM). Ten-fold stratified cross-validation was used to select the best classifier. AdaBoost, with least squares (LS) as the weak learner, performed better than the other classifiers, yielding an average accuracy of 90%, sensitivity of 92.5%, and specificity of 88.7%.
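Approximate Entropy, the first of the nonlinear features above, measures how often patterns of length m that match within tolerance r continue to match at length m+1; regular signals score near zero. A direct (O(n^2)) implementation of the standard definition is shown below; the parameter values are the conventional defaults, not necessarily the paper's.

```python
import math

def approximate_entropy(x, m=2, r=0.2):
    """ApEn(m, r): regularity statistic; low values mean a predictable series."""
    def phi(m):
        n = len(x) - m + 1
        templates = [x[i:i + m] for i in range(n)]
        total = 0.0
        for t in templates:
            # Count templates within Chebyshev distance r (self-match included).
            matches = sum(
                max(abs(a - b) for a, b in zip(t, u)) <= r for u in templates
            )
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)

# A perfectly regular series has zero approximate entropy.
apen = approximate_entropy([5.0] * 20)
```

In practice r is usually set to a fraction (e.g. 0.2) of the series' standard deviation so the tolerance scales with the HRV signal's amplitude.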
DOI: 10.1016/j.compbiomed.2018.09.008
2018
Cited 127 times
Parkinson's disease: Cause factors, measurable indicators, and early diagnosis
Parkinson's disease (PD) is a neurodegenerative disease of the central nervous system caused by the loss of dopaminergic neurons. It is classified as a movement disorder, as patients with PD present with tremor, rigidity, postural changes, and a decrease in spontaneous movements. Comorbidities including anxiety, depression, fatigue, and sleep disorders are observed prior to the diagnosis of PD. Gene mutations, exposure to toxic substances, and aging are considered causative factors of PD, even though its genesis is unknown. This paper reviews PD etiologies, progression, and in particular measurable indicators of PD such as neuroimaging and electrophysiology modalities. In addition to gene therapy, neuroprotective, pharmacological, and neural transplantation treatments, researchers are actively aiming to identify biological markers of PD with the goal of early diagnosis. Neuroimaging modalities used together with advanced machine learning techniques offer a promising path for early detection and intervention in PD patients.
DOI: 10.1016/j.bbe.2018.05.005
2018
Cited 123 times
Use of features from RR-time series and EEG signals for automated classification of sleep stages in deep neural network framework
Sleep is a physiological activity during which the human body restores itself from various diseases. It is necessary to get a sufficient amount of sleep to have sound physiological and mental health. Nowadays, due to our hectic lifestyle, the amount of sound sleep is reduced. It is very difficult to decipher the various stages of sleep manually. Hence, an automated system may be useful to detect the different stages of sleep. This paper presents a novel method for the classification of sleep stages based on the RR-time series and the electroencephalogram (EEG) signal. The method uses an iterative filtering (IF) based multiresolution analysis approach for the decomposition of the RR-time series into intrinsic mode functions (IMFs). The delta (δ), theta (θ), alpha (α), beta (β) and gamma (γ) waves are evaluated from the EEG signal using band-pass filtering. The recurrence quantification analysis (RQA) and dispersion entropy (DE) based features are evaluated from the IMFs of the RR-time series. The dispersion entropy and variance features are evaluated from the different bands of the EEG signal. The RR-time series features and the EEG features, coupled with a deep neural network (DNN), are used for the classification of sleep stages. The simulation results demonstrate that our proposed method has achieved an average accuracy of 85.51%, 94.03% and 95.71% for the classification of ‘sleep vs wake’, ‘light sleep vs deep sleep’ and ‘rapid eye movement (REM) vs non-rapid eye movement (NREM)’ sleep stages, respectively.
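The band-pass step that separates the EEG rhythms can be sketched with simple FFT masking; the band edges and the sampling rate below are conventional assumptions for illustration, not the paper's exact filters.

```python
import numpy as np

# Conventional EEG rhythm band edges in Hz (assumed, not from the paper).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def extract_band(signal, fs, low, high):
    """Keep only FFT bins inside [low, high) Hz and invert the transform."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return np.fft.irfft(spectrum * mask, n=len(signal))

# Demo: a 10 Hz (alpha) plus 2 Hz (delta) mixture; the alpha filter
# should recover the 10 Hz component almost exactly.
fs = 128                       # assumed sampling rate for the demo
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 2 * t)
alpha = extract_band(x, fs, *BANDS["alpha"])
residual = float(np.max(np.abs(alpha - np.sin(2 * np.pi * 10 * t))))
```

Real pipelines typically use FIR/IIR band-pass filters rather than hard FFT masks, but the masking version shows the band-splitting idea in a few lines.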
DOI: 10.1016/j.future.2018.05.001
2018
Cited 119 times
Age-related Macular Degeneration detection using deep convolutional neural network
Age-related Macular Degeneration (AMD) is an eye condition that affects the elderly, and its prevalence is rising with the aging of the population. Therefore, early detection is necessary to prevent vision impairment in the elderly. However, organizing a comprehensive eye screening to detect AMD in the elderly is laborious and challenging. To address this need, we have developed a fourteen-layer deep Convolutional Neural Network (CNN) model to automatically and accurately diagnose AMD at an early stage. The performance of the model was evaluated using the blindfold and ten-fold cross-validation strategies, for which accuracies of 91.17% and 95.45%, respectively, were achieved. This new model can be utilized in rapid eye screening for early detection of AMD in the elderly. It is cost-effective and highly portable; hence, it can be utilized anywhere.
DOI: 10.3390/e19090488
2017
Cited 115 times
Automated Diagnosis of Myocardial Infarction ECG Signals Using Sample Entropy in Flexible Analytic Wavelet Transform Framework
Myocardial infarction (MI) is a silent condition that irreversibly damages the heart muscles. It expands rapidly and, if not treated timely, continues to damage the heart muscles. An electrocardiogram (ECG) is generally used by the clinicians to diagnose MI patients. Manual identification of the changes introduced by MI is a time-consuming and tedious task, and there is also a possibility of misinterpretation of the changes in the ECG. Therefore, a method for automatic diagnosis of MI using ECG beats with the flexible analytic wavelet transform (FAWT) method is proposed in this work. First, the segmentation of ECG signals into beats is performed. Then, FAWT is applied to each ECG beat, which decomposes them into subband signals. Sample entropy (SEnt) is computed from these subband signals and fed to the random forest (RF), J48 decision tree, back propagation neural network (BPNN), and least-squares support vector machine (LS-SVM) classifiers to choose the highest performing one. We have achieved the highest classification accuracy of 99.31% using the LS-SVM classifier. We have also incorporated Wilcoxon and Bhattacharya ranking methods and observed no improvement in the performance. The proposed automated method can be installed in the intensive care units (ICUs) of hospitals to aid the clinicians in confirming their diagnosis.
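A minimal sketch of the sample entropy (SEnt) feature, assuming the standard template-matching definition; in the paper it is computed on FAWT subband signals, whose decomposition is not reproduced here.

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy sketch: -ln of the ratio of (m+1)-length to m-length
    template matches, self-matches excluded. r is the tolerance as a
    fraction of the standard deviation."""
    n = len(x)
    mu = sum(x) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    tol = r * sd

    def matches(m):
        # Count template pairs of length m within Chebyshev distance tol.
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= tol:
                    c += 1
        return c

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A perfectly periodic series should look far more regular (lower SEnt)
# than a pseudo-random one.
random.seed(1)
sent_regular = sample_entropy([1.0, 2.0] * 50)
sent_irregular = sample_entropy([random.random() for _ in range(100)])
```

The O(n²) pairwise loop is fine for short ECG-beat subbands; for long records a vectorized or KD-tree implementation is the usual choice.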
DOI: 10.1016/j.compbiomed.2017.06.017
2017
Cited 113 times
Iterative variational mode decomposition based automated detection of glaucoma using fundus images
Glaucoma is one of the leading causes of permanent vision loss. It is an ocular disorder caused by increased fluid pressure within the eye. The clinical methods available for the diagnosis of glaucoma require skilled supervision. They are manual, time consuming, and out of reach of common people. Hence, there is a need for an automated glaucoma diagnosis system for mass screening. In this paper, we present a novel method for the automated diagnosis of glaucoma using digital fundus images. The variational mode decomposition (VMD) method is used in an iterative manner for image decomposition. Various features, namely Kapoor entropy, Renyi entropy, Yager entropy, and fractal dimensions, are extracted from the VMD components. The ReliefF algorithm is used to select the discriminatory features, and these features are then fed to the least squares support vector machine (LS-SVM) for classification. Our proposed method achieved classification accuracies of 95.19% and 94.79% using three-fold and ten-fold cross-validation strategies, respectively. This system can aid ophthalmologists in confirming their manual reading of classes (glaucoma or normal) using fundus images.
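The Renyi entropy feature can be sketched from a normalized histogram as below; the order `alpha` and bin count are illustrative assumptions, and the paper computes it on VMD components of the fundus image rather than raw values.

```python
import numpy as np

def renyi_entropy(values, alpha=2.0, bins=16):
    """Renyi entropy of order alpha from a normalized histogram of the
    input values (a generic sketch, not the paper's exact feature)."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    if alpha == 1.0:                       # limit case: Shannon entropy
        return float(-(p * np.log(p)).sum())
    return float(np.log((p ** alpha).sum()) / (1.0 - alpha))

# Sanity checks: a uniform spread maximizes entropy, a constant minimizes it.
h_uniform = renyi_entropy(np.linspace(0.0, 1.0, 1600))
h_constant = renyi_entropy(np.ones(100))
```

For a flat distribution over `bins` cells the value is ln(bins) regardless of `alpha`, which is a convenient check on any implementation.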
DOI: 10.1016/j.cmpb.2018.07.012
2018
Cited 112 times
Computer-aided diagnosis of glaucoma using fundus images: A review
Glaucoma is an eye condition which leads to permanent blindness when the disease progresses to an advanced stage. It occurs due to inappropriate intraocular pressure within the eye, resulting in damage to the optic nerve. Glaucoma does not exhibit any symptoms in its nascent stage and thus, it is important to diagnose it early to prevent blindness. Fundus photography is widely used by ophthalmologists to assist in the diagnosis of glaucoma and is cost-effective. The morphological features of the disc that are characteristic of glaucoma are clearly seen in the fundus images. However, manual inspection of the acquired fundus images may be prone to inter-observer variation. Therefore, a computer-aided detection (CAD) system is proposed to make an accurate, reliable and fast diagnosis of glaucoma based on the optic nerve features of fundus imaging. In this paper, we reviewed existing techniques to automatically diagnose glaucoma. The use of CAD is very effective in the diagnosis of glaucoma and can assist the clinicians to alleviate their workload significantly. We have also discussed the advantages of employing state-of-the-art techniques, including deep learning (DL), when developing the automated system. The DL methods are effective in glaucoma diagnosis. Novel DL algorithms with big data availability are required to develop a reliable CAD system. Such techniques can be employed to diagnose other eye diseases accurately.
DOI: 10.1007/s00521-016-2756-z
2016
Cited 110 times
A novel Parkinson’s Disease Diagnosis Index using higher-order spectra features in EEG signals
DOI: 10.1016/j.cogsys.2018.07.010
2018
Cited 107 times
An automated diagnosis of depression using three-channel bandwidth-duration localized wavelet filter bank with EEG signals
Depression is a mental illness. If not diagnosed and treated quickly, it can affect one’s mood and quality of life. Modern life is stressful and fast paced, owing to which depression has emerged as a major source of mental health disorder. The electroencephalogram (EEG) signals, which are used to diagnose depression, are non-stationary, non-linear and complex. Their visual interpretation is difficult and takes time. This makes computer-aided depression diagnosis systems highly desirable for the early detection of depression. This study aims towards the development of a depression detection system using EEG based measures. We propose a computer aided depression diagnosis system using a newly designed bandwidth-duration localized (BDL) three-channel orthogonal wavelet filter bank (TCOWFB) and EEG signal for the detection of depression. The EEG signal is decomposed into seven wavelet sub-bands (WSBs) using an optimal six-length TCOWFB. The logarithm of L2 norm (LL2N) of six detailed WSBs and one approximate WSB are used as discriminating features. These features are used in the classification of normal or depression EEG signals by applying them to the least square support vector machine (LS-SVM). The proposed system attained the perfect value of 1 for the area under the curve (AUC) of the receiver operating characteristic (ROC) using seven features. The proposed system with ten-fold cross validation (CV) strategy attained an average classification accuracy (ACA) of 99.58%. The proposed model obtained better ACA than the existing automated depression diagnosis systems (ADDS) and a perfect AUC-ROC. Hence, it can be used in a clinical setup to diagnose the depression disorder accurately in less time, without any subjectivity due to human intervention.
DOI: 10.1016/j.cmpb.2019.04.032
2019
Cited 105 times
A review of automated sleep stage scoring based on physiological signals for the new millennia
Sleep is an important part of our life. That importance is highlighted by the multitude of health problems which result from sleep disorders. Detecting these sleep disorders requires an accurate interpretation of physiological signals. A prerequisite for this interpretation is an understanding of the way in which sleep stage changes manifest themselves in the signal waveform. With that understanding, it is possible to build automated sleep stage scoring systems. Apart from their practical relevance for automating sleep disorder diagnosis, these systems provide a good indication of the amount of sleep stage related information communicated by a specific physiological signal. This article provides a comprehensive review of automated sleep stage scoring systems created since the year 2000. The systems were developed for Electrocardiogram (ECG), Electroencephalogram (EEG), Electrooculogram (EOG), and a combination of signals. Our review shows that all of these signals contain information for sleep stage scoring. The result is important, because it allows us to shift our research focus away from information extraction methods to systemic improvements, such as patient comfort, redundancy, safety and cost.
DOI: 10.1016/j.compbiomed.2019.103387
2019
Cited 105 times
Automated detection of diabetic subject using pre-trained 2D-CNN models with frequency spectrum images extracted from heart rate signals
In this study, a deep-transfer learning approach is proposed for the automated diagnosis of diabetes mellitus (DM), using heart rate (HR) signals obtained from electrocardiogram (ECG) data. Recent progress in deep learning has contributed significantly to improvement in the quality of healthcare. In order for deep learning models to perform well, large datasets are required for training. However, a difficulty in the biomedical field is the lack of clinical data with expert annotation. A recent, commonly implemented technique to train deep learning models using small datasets is to transfer the weighting, developed from a large dataset, to the current model. This deep learning transfer strategy is generally employed for two-dimensional signals. Herein, the weighting of models pre-trained using two-dimensional large image data was applied to one-dimensional HR signals. The one-dimensional HR signals were then converted into frequency spectrum images, which were utilized for application to well-known pre-trained models, specifically: AlexNet, VggNet, ResNet, and DenseNet. The DenseNet pre-trained model yielded the highest classification average accuracy of 97.62%, and sensitivity of 100%, to detect DM subjects via HR signal recordings. In the future, we intend to further test this developed model by utilizing additional data along with cloud-based storage to diagnose DM via heart signal analysis.
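The 1-D-to-2-D conversion idea can be sketched with a log-magnitude short-time Fourier transform, which turns an HR signal into a frequency-spectrum "image" of the kind a pre-trained 2-D CNN could consume. The frame and hop sizes below are made-up; the paper's exact conversion and the pre-trained networks are not reproduced.

```python
import numpy as np

def spectrum_image(signal, frame=64, hop=32):
    """Hypothetical sketch: windowed frames -> rFFT magnitudes -> log
    compression, giving a (frequency x time) matrix usable as an image."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame + 1, hop)]
    mags = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(mags).T        # rows: frequency bins, columns: time

# Toy HR-like signal: 512 samples of a slow oscillation.
sig = np.sin(2 * np.pi * 0.1 * np.arange(512))
img = spectrum_image(sig)
```

Once the signal is a 2-D array, it can be resized and replicated across three channels to match the input expected by ImageNet-pretrained models such as those named in the abstract.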
DOI: 10.7785/tcrt.2012.500214
2011
Cited 104 times
Cost-Effective and Non-Invasive Automated Benign & Malignant Thyroid Lesion Classification in 3D Contrast-Enhanced Ultrasound Using Combination of Wavelets and Textures: A Class of ThyroScan™ Algorithms
Ultrasound has great potential to aid in the differential diagnosis of malignant and benign thyroid lesions, but interpretative pitfalls exist and the accuracy is still poor. To overcome these difficulties, we developed and analyzed a range of knowledge representation techniques, which are a class of ThyroScan™ algorithms from Global Biomedical Technologies Inc., California, USA, for automatic classification of benign and malignant thyroid lesions. The analysis is based on data obtained from twenty nodules (ten benign and ten malignant) taken from 3D contrast-enhanced ultrasound images. Fine needle aspiration biopsy and histology confirmed malignancy. Discrete Wavelet Transform (DWT) and texture algorithms are used to extract relevant features from the thyroid images. The resulting feature vectors are fed to three different classifiers: K-Nearest Neighbor (K-NN), Probabilistic Neural Network (PNN), and Decision Tree (DeTr). The performance of these classifiers is compared using Receiver Operating Characteristic (ROC) curves. Our results show that the combination of DWT and texture features coupled with K-NN resulted in good performance measures, with an area under the ROC curve of 0.987, a classification accuracy of 98.9%, a sensitivity of 98%, and a specificity of 99.8%. Finally, we have proposed a novel integrated index called Thyroid Malignancy Index (TMI), which is made up of texture features, to diagnose benign or malignant nodules using just one index. We hope that this TMI will help clinicians in a more objective detection of benign and malignant thyroid lesions.
DOI: 10.1016/j.cmpb.2020.105604
2020
Cited 100 times
Classification of heart sound signals using a novel deep WaveNet model
The high mortality rate and increasing prevalence of heart valve diseases globally warrant the need for rapid and accurate diagnosis of such diseases. Phonocardiogram (PCG) signals are used in this study due to the low cost of obtaining the signals. This study classifies five types of heart sounds, namely normal, aortic stenosis, mitral valve prolapse, mitral stenosis, and mitral regurgitation. We have proposed a novel in-house developed deep WaveNet model for automated classification of five types of heart sounds. The model is developed using a total of 1000 PCG recordings belonging to five classes with 200 recordings in each class. We have achieved a training accuracy of 97% for the classification of heart sounds into five classes. The highest classification accuracy of 98.20% was achieved for the normal class. The developed model was validated with a 10-fold cross-validation, thus affirming its robustness. The study results clearly indicate that the developed model is able to classify five types of heart sounds accurately. The developed system can be used by cardiologists to aid in the detection of heart valve diseases in patients.
DOI: 10.1016/j.knosys.2015.02.005
2015
Cited 95 times
Computer-aided diagnosis of diabetic subjects by heart rate variability signals using discrete wavelet transform method
Diabetes Mellitus (DM), a chronic lifelong condition, is characterized by increased blood sugar levels. As there is no cure for DM, the major focus lies on controlling the disease. Therefore, DM diagnosis and treatment is of great importance. The most common complications of DM include retinopathy, neuropathy, nephropathy and cardiomyopathy. Diabetes causes cardiovascular autonomic neuropathy that affects the Heart Rate Variability (HRV). Hence, in the absence of other causes, the HRV analysis can be used to diagnose diabetes. The present work aims at developing an automated system for classification of normal and diabetes classes by using the heart rate (HR) information extracted from the Electrocardiogram (ECG) signals. The spectral analysis of HRV recognizes patients with autonomic diabetic neuropathy, and gives an earlier diagnosis of impairment of the Autonomic Nervous System (ANS). Significant correlations with the impaired ANS are observed of the HRV spectral indices obtained by using the Discrete Wavelet Transform (DWT) method. Herein, in order to diagnose and detect DM automatically, we have performed DWT decomposition up to 5 levels, and extracted the energy, sample entropy, approximation entropy, kurtosis and skewness features at various detailed coefficient levels of the DWT. We have extracted relative wavelet energy and entropy features up to the 5th level of DWT coefficients extracted from HR signals. These features are ranked by using various ranking methods, namely, Bhattacharyya space algorithm, t-test, Wilcoxon test, Receiver Operating Curve (ROC) and entropy. The ranked features are then fed into different classifiers, that include Decision Tree (DT), K-Nearest Neighbor (KNN), Naïve Bayes (NBC) and Support Vector Machine (SVM). Our results have shown maximum diagnostic differentiation performance by using a minimum number of features. 
With our system, we have obtained an average accuracy of 92.02%, sensitivity of 92.59% and specificity of 91.46%, by using DT classifier with ten-fold cross validation.
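The mechanics of the five-level DWT decomposition and relative wavelet energy can be sketched with the Haar wavelet; this is a stand-in, since the mother wavelet used in the paper is not given in the abstract.

```python
import math

def haar_dwt(signal, levels=5):
    """Haar DWT sketch: repeatedly split the approximation into pairwise
    sums (approximation) and differences (detail), both scaled by 1/sqrt(2).
    Signal length should be divisible by 2**levels."""
    approx, details = list(signal), []
    for _ in range(levels):
        a = [(approx[2 * i] + approx[2 * i + 1]) / math.sqrt(2)
             for i in range(len(approx) // 2)]
        d = [(approx[2 * i] - approx[2 * i + 1]) / math.sqrt(2)
             for i in range(len(approx) // 2)]
        details.append(d)
        approx = a
    return approx, details

def relative_wavelet_energy(approx, details):
    """Energy of each subband as a fraction of the total coefficient energy."""
    energies = [sum(c * c for c in band) for band in details]
    energies.append(sum(c * c for c in approx))
    total = sum(energies)
    return [e / total for e in energies]

# Toy HR-like series of length 64 (= 2**6), so 5 levels are possible.
x = [float((i * 7) % 5) for i in range(64)]
approx, details = haar_dwt(x)
rwe = relative_wavelet_energy(approx, details)
coeff_energy = sum(c * c for d in details for c in d) + sum(c * c for c in approx)
signal_energy = sum(v * v for v in x)
```

Because the Haar transform is orthonormal, the coefficient energy equals the signal energy (Parseval), so the relative energies form a proper distribution over subbands.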
DOI: 10.1016/j.eswa.2016.06.038
2016
Cited 93 times
An efficient automated technique for CAD diagnosis using flexible analytic wavelet transform and entropy features extracted from HRV signals
Coronary Artery Disease (CAD) causes the most deaths among all types of heart disorders. Early detection of CAD can save many human lives. Therefore, we have developed a new technique which is capable of detecting CAD using Heart Rate Variability (HRV) signals. These HRV signals are decomposed into sub-band signals using the Flexible Analytic Wavelet Transform (FAWT). Then, two nonlinear parameters, namely the K-Nearest Neighbour (K-NN) entropy estimator and Fuzzy Entropy (FzEn), are extracted from the decomposed sub-band signals. Ranking methods, namely Wilcoxon, entropy, Receiver Operating Characteristic (ROC) and the Bhattacharya space algorithm, are implemented to optimize the performance of the designed system. The proposed methodology showed better performance using the entropy ranking technique. The Least Squares-Support Vector Machine (LS-SVM) with Morlet wavelet and Radial Basis Function (RBF) kernels obtained the highest classification accuracy of 100% for the diagnosis of CAD. The developed novel algorithm can be used to design an expert system for the automatic diagnosis of CAD using Heart Rate (HR) signals. Our system can be used in hospitals, polyclinics and community screening to aid the cardiologists in their regular diagnosis.
DOI: 10.1016/j.compbiomed.2017.11.018
2018
Cited 93 times
Automated localization and segmentation techniques for B-mode ultrasound images: A review
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid in computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Then, insight into the localization and segmentation of tissues is provided, both in the case in which the organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed, because the desired boundaries are too fine to locate from within the entire ultrasound frame. Subsequently, examples of some main techniques found in the literature are shown, including but not limited to shape priors, superpixel and classification, local pixel statistics, active contours, edge-tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode based segmentation, such as the integration of RF information, the employment of higher frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data, are discussed.
DOI: 10.1159/000504292
2019
Cited 93 times
Artificial Intelligence Techniques for Automated Diagnosis of Neurological Disorders
Authors have been advocating the research ideology that a computer-aided diagnosis (CAD) system, trained on large amounts of patient data, physiological signals and images through the adroit integration of advanced signal processing and artificial intelligence (AI)/machine learning techniques, can in an automated fashion assist neurologists, neurosurgeons, radiologists, and other medical providers to make better clinical decisions. This paper presents a state-of-the-art review of research on automated diagnosis of 5 neurological disorders in the past 2 decades using AI techniques: epilepsy, Parkinson's disease, Alzheimer's disease, multiple sclerosis, and ischemic brain stroke using physiological signals and images. Recent research articles on different feature extraction methods, dimensionality reduction techniques, feature selection, and classification techniques are reviewed. Key Message: CAD systems using AI and advanced signal processing techniques can assist clinicians in analyzing and interpreting physiological signals and images more effectively.
DOI: 10.1016/j.infrared.2014.06.001
2014
Cited 91 times
Application of infrared thermography in computer aided diagnosis
The invention of thermography, in the 1950s, posed a formidable problem to the research community: What is the relationship between disease and heat radiation captured with Infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship. This effort was aided by advances in processing techniques, improved sensitivity and spatial resolution of thermal sensors. However, despite this progress, fundamental issues with this imaging modality still remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the changes in heat radiation and in radiation pattern which indicate disease are minute. On a technical level, this poses high requirements on image capturing and processing. On a more abstract level, these problems lead to inter-observer variability and, on an even more abstract level, to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the performance results. As a consequence, we aim to promote thermography based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that the diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality and this will pave the way for integrated health care systems which maximize the quality of patient care.
DOI: 10.1016/j.knosys.2018.07.019
2018
Cited 90 times
MMSFL-OWFB: A novel class of orthogonal wavelet filters for epileptic seizure detection
Optimal filters with minimal bandwidth are highly desirable in many applications, such as communication and biomedical signal processing. In this study, we design optimally frequency-localized orthogonal wavelet filters and evaluate their performance on electroencephalogram (EEG) signals for automated detection of epileptic seizures. The paper presents a novel method for designing optimal orthogonal wavelet filter banks (OWFB) with the objective of minimizing their frequency spreads. The designed wavelet filter also possesses the desired degree of regularity. The regularity condition has been imposed analytically so as to satisfy the constraint accurately. We propose a novel semi-definite programming (SDP) formulation which does not involve any parametrization. The solution of the SDP yields the optimal orthogonal wavelet filter for the given filter length. We have developed an automated diagnosis system that identifies epileptic seizure EEG signals using features obtained from the designed minimally mean squared frequency localized (MMSFL) OWFB. We have tested the performance of the proposed model using two independent EEG databases in order to ensure the consistency and robustness of the model. Interestingly, the proposed MMSFL-OWFB feature-based model exhibits a ceiling level of performance, with classification accuracy ≥ 99% in classifying seizure (ictal) and seizure-free (non-ictal) EEG signals for both databases. Our developed system can be employed in hospitals and community care settings to aid the epileptologists in the accurate diagnosis of seizures.
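The defining conditions of a two-band orthogonal wavelet filter bank (unit norm, double-shift orthogonality, lowpass/highpass orthogonality, and a DC gain of √2 for regularity) can be checked numerically. The classic Daubechies-2 filter below is a stand-in; the paper's MMSFL-OWFB coefficients are not reproduced in the abstract.

```python
import math

# Daubechies-2 lowpass filter coefficients (a stand-in example).
s3 = math.sqrt(3)
h = [c / (4 * math.sqrt(2)) for c in (1 + s3, 3 + s3, 3 - s3, 1 - s3)]

# The highpass filter is the "alternating flip" of the lowpass.
g = [(-1) ** n * h[len(h) - 1 - n] for n in range(len(h))]

def double_shift_inner(a, b, shift):
    # <a[n], b[n + 2*shift]> over the overlapping support.
    return sum(a[n] * b[n + 2 * shift] for n in range(len(a) - 2 * shift))

unit_norm = double_shift_inner(h, h, 0)     # orthonormality: should be 1
double_shift = double_shift_inner(h, h, 1)  # double-shift orthogonality: 0
cross = double_shift_inner(h, g, 0)         # lowpass vs highpass: 0
dc_gain = sum(h)                            # regularity (one zero moment): sqrt(2)
```

An optimization-based design such as the paper's would treat these same equalities as constraints and minimize a frequency-spread objective over the remaining degrees of freedom.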
DOI: 10.1016/j.compbiomed.2018.07.005
2018
Cited 90 times
A novel automated diagnostic system for classification of myocardial infarction ECG signals using an optimal biorthogonal filter bank
Myocardial infarction (MI), also referred to as heart attack, occurs when there is an interruption of blood flow to parts of the heart, due to the acute rupture of atherosclerotic plaque, which leads to damage of heart muscle. The heart muscle damage produces changes in the recorded surface electrocardiogram (ECG). The identification of MI by visual inspection of the ECG requires expert interpretation, and is difficult as the ECG signal changes associated with MI can be short in duration and low in magnitude. Hence, errors in diagnosis can delay the initiation of appropriate medical treatment. To lessen the burden on doctors, an automated ECG based system can be installed in hospitals to help identify MI changes on ECG. In the proposed study, we develop a single-channel, single-lead ECG based MI diagnostic system validated using noisy and clean datasets. The raw ECG signals are taken from the Physikalisch-Technische Bundesanstalt database. We design a novel two-band optimal biorthogonal filter bank (FB) for analysis of the ECG signals. We present a method to design a novel class of two-band optimal biorthogonal FBs in which not only the product filter but also the analysis lowpass filter is a halfband filter. The filter design problem is posed as a constrained convex optimization problem in which the objective function is a convex combination of multiple quadratic functions, and the regularity and perfect reconstruction conditions are imposed in the form of linear equalities. ECG signals are decomposed into six subbands (SBs) using the newly designed wavelet FB. Following this, discriminating features, namely fuzzy entropy (FE), signal fractal dimensions (SFD), and Renyi entropy (RE), are computed from all six SBs. The features are fed to the k-nearest neighbor (KNN) classifier. The proposed system yields an accuracy of 99.62% for the noisy dataset and an accuracy of 99.74% for the clean dataset, using the 10-fold cross validation (CV) technique.
Our MI identification system is robust and highly accurate. It can thus be installed in clinics for detecting MI.
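The halfband product-filter condition for perfect reconstruction can be verified numerically. The classic LeGall 5/3 biorthogonal pair below is a stand-in for the paper's optimized design (where the analysis lowpass filter is itself also halfband).

```python
# Perfect-reconstruction check for a two-band biorthogonal filter bank:
# the product filter P(z) = H0(z) F0(z) must be halfband, i.e. all of its
# even-offset coefficients around the centre vanish.
h0 = [-1/8, 1/4, 3/4, 1/4, -1/8]   # analysis lowpass (LeGall 5/3)
f0 = [1/2, 1.0, 1/2]               # synthesis lowpass

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

p = convolve(h0, f0)               # product filter, length 7
centre = (len(p) - 1) // 2
halfband_ok = all(abs(p[k]) < 1e-12 for k in range(len(p))
                  if k != centre and (k - centre) % 2 == 0)
```

For this pair the product filter comes out as [-1/16, 0, 9/16, 1, 9/16, 0, -1/16], the textbook halfband filter; a design like the paper's adds regularity and spread objectives on top of this same equality structure.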
DOI: 10.1016/j.compbiomed.2017.06.022
2017
Cited 89 times
A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images
Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons. The convolution process produces textons. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system.
DOI: 10.1016/j.compbiomed.2020.103632
2020
Cited 89 times
Automated detection of heart valve diseases using chirplet transform and multiclass composite classifier with PCG signals
Heart valve diseases (HVDs) are a group of cardiovascular abnormalities which, if not treated in a timely manner, can lead to blood clots, congestive heart failure, stroke, and sudden cardiac death. Hence, the detection of HVDs at an initial stage is very important in cardiovascular engineering to reduce the mortality rate. In this article, we propose a new approach for the detection of HVDs using phonocardiogram (PCG) signals. The approach uses the Chirplet transform (CT) for the time–frequency (TF) based analysis of the PCG signal. The local energy (LEN) and local entropy (LENT) features are evaluated from the TF matrix of the PCG signal. A multiclass composite classifier, formulated based on the sparse representation of the test PCG instance for each class and the distances from the nearest-neighbor PCG instances, is used for the classification of HVDs such as mitral regurgitation (MR), mitral stenosis (MS), aortic stenosis (AS), and healthy classes (HC). The experimental results show that the proposed approach has sensitivity values of 99.44%, 98.66%, and 96.22%, respectively, for the AS, MS and MR classes. The classification results of the proposed CT-based features are compared with existing approaches for the automated classification of HVDs. The proposed approach has obtained the highest overall accuracy as compared to existing methods using the same database. The approach can be considered for the automated detection of HVDs in Internet of Medical Things (IoMT) applications.
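Generic sketches of local energy and local entropy features from a time-frequency matrix; the paper's exact definitions over the Chirplet TF matrix may differ.

```python
import numpy as np

def tf_local_features(tf_matrix):
    """Local energy (per time slice) and a Shannon-type entropy of the
    normalized TF energy distribution; an illustrative sketch only."""
    energy = np.abs(tf_matrix) ** 2
    col_energy = energy.sum(axis=0)     # energy in each time column
    p = energy / energy.sum()           # TF energy as a distribution
    p = p[p > 0]
    local_entropy = float(-(p * np.log(p)).sum())
    return col_energy, local_entropy

# Toy TF matrices: a single impulse vs. a flat energy distribution.
impulse = np.zeros((8, 8))
impulse[2, 3] = 5.0
_, h_impulse = tf_local_features(impulse)
col_energy, h_uniform = tf_local_features(np.ones((8, 8)))
```

The two extremes behave as expected: an impulse-like TF matrix has zero entropy, while a flat 8x8 matrix reaches the maximum ln(64), so the entropy feature measures how concentrated the PCG energy is in the TF plane.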
DOI: 10.1016/j.compbiomed.2018.06.011
2018
Cited 87 times
Application of an optimal class of antisymmetric wavelet filter banks for obstructive sleep apnea diagnosis using ECG signals
Obstructive sleep apnea (OSA) is a sleep disorder caused by interruption of breathing, resulting in insufficient oxygen to the human body and brain. If OSA is detected and treated at an early stage, the possibility of severe health impairment can be mitigated. Therefore, an accurate automated OSA detection system is indispensable. Generally, an OSA based computer-aided diagnosis (CAD) system employs multi-channel, multi-modal physiological signals. However, there is a great need for a low-power, portable OSA-CAD system based on a single-channel bio-signal which can be used at home. In this study, we propose a single-channel electrocardiogram (ECG) based OSA-CAD system using a new class of optimal biorthogonal antisymmetric wavelet filter banks (BAWFB). In this class of filter bank, all filters are of even length. The filter bank design problem is transformed into a constrained optimization problem wherein the objective is to minimize either the frequency-spread for a given time-spread, or the time-spread for a given frequency-spread. The optimization problem is formulated as a semi-definite programming (SDP) problem, in which the objective function (time-spread or frequency-spread) and the constraints of perfect reconstruction (PR) and zero moments (ZM) are incorporated in their time-domain matrix formulations. The global solution of the SDP is obtained using an interior point algorithm. The newly designed BAWFB is used for the classification of OSA using ECG signals taken from PhysioNet's Apnea-ECG database. ECG segments of 1 min duration are decomposed into six wavelet subbands (WSBs) by employing the proposed BAWFB. Then, the fuzzy entropy (FE) and log-energy (LE) features are computed from all six WSBs. The FE and LE features are classified into normal and OSA groups using least squares support vector machine (LS-SVM) with a 35-fold cross-validation strategy.
The proposed OSA detection model achieved average classification accuracy, sensitivity, specificity, and F-score of 90.11%, 90.87%, 88.88%, and 0.92, respectively. The performance of the model is better than that of existing works detecting OSA using the same database. Thus, the proposed automated OSA detection system is accurate, cost-effective, and ready to be tested on a larger database.
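As a rough, simplified sketch of the feature-extraction step described above (not the authors' implementation; the epsilon guard and the exact fuzzy membership function are illustrative assumptions), the per-subband features might be computed like this:

```python
import math

def log_energy(subband):
    """Log-energy feature: sum of the logs of squared sample values.
    A small epsilon guards against log(0)."""
    eps = 1e-12
    return sum(math.log(x * x + eps) for x in subband)

def fuzzy_membership(dist, r, n=2):
    """Exponential fuzzy membership function of the kind used inside
    fuzzy entropy: similarity decays smoothly with template distance."""
    return math.exp(-(dist ** n) / r)

# One log-energy value per wavelet subband gives part of the feature vector
subbands = [[0.5, -0.2, 0.1], [1.0, 0.9, -1.1]]
features = [log_energy(sb) for sb in subbands]
```

In the paper, one FE and one LE value per subband (six subbands, so twelve features per 1-min segment) are fed to the LS-SVM.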
DOI: 10.1016/j.cogsys.2018.12.001
2019
Cited 87 times
A novel machine learning approach for early detection of hepatocellular carcinoma patients
Liver cancer is a common type of cancer among individuals worldwide, and hepatocellular carcinoma (HCC) is its malignant form. It has a high impact on patients' lives, and detecting it early can reduce the number of annual deaths. This study proposes a new machine learning approach to detect HCC using data from 165 patients. Ten well-known machine learning algorithms are employed. In the preprocessing step, a normalization approach is used. The genetic algorithm coupled with a stratified 5-fold cross-validation method is applied twice: first for parameter optimization and then for feature selection. In this work, a support vector machine (SVM) (type C-SVC) with the new 2-level genetic optimizer (genetic training) and feature selection yielded the highest accuracy and F1-score of 0.8849 and 0.8762, respectively. Our proposed model can be tested on larger databases and can aid clinicians.
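The stratified 5-fold split used for both the optimization and feature-selection passes can be illustrated with a minimal sketch (the label counts below are illustrative, not the actual HCC class distribution):

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Split sample indices into k folds, preserving class proportions
    by dealing each class's shuffled indices round-robin."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    for lab, idxs in by_class.items():
        rng.shuffle(idxs)
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)
    return folds

# 165 patients with imbalanced binary labels (illustrative split)
labels = [1] * 60 + [0] * 105
folds = stratified_kfold(labels, k=5)
```

Each fold then keeps the same positive/negative ratio as the full dataset, which matters when classes are imbalanced.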
DOI: 10.7785/tcrt.2012.500381
2014
Cited 86 times
A Review on Ultrasound-Based Thyroid Cancer Tissue Characterization and Automated Classification
In this paper, we review studies that developed Computer Aided Diagnostic (CAD) systems for the automated classification of thyroid cancer into benign and malignant types. Specifically, we discuss the different types of features used to study and analyze the differences between benign and malignant thyroid nodules. These features can be broadly categorized into (a) sonographic features from the ultrasound images, and (b) non-clinical features extracted from the ultrasound images using statistical and data mining techniques. We also present a brief description of the classifiers commonly used in ultrasound-based CAD systems. We then review the studies that used features based on the ultrasound images for thyroid nodule classification and highlight their limitations. Finally, we discuss and review the techniques used in studies that relied on non-clinical features for thyroid nodule classification and report the classification accuracies obtained in these studies.
DOI: 10.1016/j.knosys.2020.105949
2020
Cited 84 times
A novel method for sentiment classification of drug reviews using fusion of deep and machine learning techniques
Nowadays, the development of new computer-based technologies has led to a rapid increase in the volume of user-generated textual content on websites. Patient-written medical and healthcare reviews are among the most valuable and useful textual content on social media, yet they have not been studied extensively by researchers in the fields of natural language processing (NLP) and data mining. These reviews offer insights into the interaction of patients with doctors, treatment, and their satisfaction or frustration with the delivery of healthcare services. In this study, we propose two deep fusion models based on three-way decision theory to analyze drug reviews. The first fusion model, a 3-way fusion of one deep model with a traditional learning algorithm (3W1DT), uses a deep learning method as the primary classifier and a traditional learning method as the secondary classifier, which is invoked when the deep method's confidence in classifying a test sample is low. In the second proposed deep fusion model, a 3-way fusion of three deep models with a traditional model (3W3DT), three deep models and one traditional model are trained on the entire training data and each classifies the test sample individually; the most confident classifier is then selected to classify the test drug review. Our results on the Drugs.com review dataset show that both proposed methods, 3W1DT and 3W3DT, outperformed the traditional and deep learning methods by 4%, and that 3W3DT outperformed 3W1DT by 2%, in terms of accuracy and F1-measure.
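A minimal sketch of the 3W1DT fusion rule, under the assumption that "low confidence" means the deep model's top class probability falls below a threshold (the threshold value and function name are illustrative, not from the paper):

```python
def fuse_3w1dt(deep_probs, traditional_pred, threshold=0.7):
    """Three-way decision: accept the deep model's label when its top
    class probability clears the threshold; otherwise defer to the
    traditional classifier (the 'boundary region' case)."""
    top_class = max(range(len(deep_probs)), key=lambda c: deep_probs[c])
    if deep_probs[top_class] >= threshold:
        return top_class
    return traditional_pred

# A confident deep prediction wins; an uncertain one falls back
a = fuse_3w1dt([0.9, 0.1], traditional_pred=1)    # deep is confident
b = fuse_3w1dt([0.55, 0.45], traditional_pred=1)  # deep is unsure
```

3W3DT generalizes this by letting the single most confident of four trained models (three deep, one traditional) decide.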
DOI: 10.1016/j.bbe.2018.04.004
2018
Cited 83 times
Automated diagnosis of atrial fibrillation ECG signals using entropy features extracted from flexible analytic wavelet transform
Atrial fibrillation (AF) is the most common type of sustained arrhythmia. Electrocardiogram (ECG) signals are widely used to diagnose AF, and automated diagnosis can aid clinicians in making a more accurate assessment. Hence, in this work, we propose a decision support system for AF using a novel nonlinear approach based on the flexible analytic wavelet transform (FAWT). First, we extracted 1000 ECG samples from long-duration ECG signals. Then, log energy entropy (LEE) and permutation entropy (PEn) were computed from the sub-band signals obtained using FAWT. The LEE and PEn features were extracted from different frequency bands of the FAWT. We found that the LEE features showed better classification results than PEn. The LEE features obtained maximum accuracy, sensitivity, and specificity of 96.84%, 95.8%, and 97.6%, respectively, with the random forest (RF) classifier. Our system can be deployed in hospitals to assist cardiac physicians in their diagnosis.
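Permutation entropy, one of the two sub-band features used here, can be computed in a few lines; this is a generic textbook implementation, not the authors' code:

```python
import math
from collections import Counter

def permutation_entropy(signal, order=3, normalize=True):
    """Permutation entropy: Shannon entropy of the distribution of
    ordinal patterns among consecutive groups of `order` samples."""
    n = len(signal) - order + 1
    patterns = Counter(
        tuple(sorted(range(order), key=lambda i: signal[t + i]))
        for t in range(n)
    )
    pe = -sum((c / n) * math.log2(c / n) for c in patterns.values())
    if normalize:
        pe /= math.log2(math.factorial(order))  # scale into [0, 1]
    return pe

# A monotone ramp has a single ordinal pattern, hence zero entropy
ramp_pe = permutation_entropy([1, 2, 3, 4, 5, 6], order=3)
```

Irregular rhythms such as AF tend to produce more diverse ordinal patterns in the sub-band signals, raising the entropy.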
DOI: 10.1016/j.knosys.2017.06.026
2017
Cited 81 times
Automated characterization of coronary artery disease, myocardial infarction, and congestive heart failure using contourlet and shearlet transforms of electrocardiogram signal
Undiagnosed coronary artery disease (CAD) progresses rapidly and leads to myocardial infarction (MI) by reducing blood flow to the cardiac muscles. Timely diagnosis of MI and its location is important; otherwise, the infarct may expand and impair left ventricular (LV) function. Thus, if CAD and MI are not picked up by the electrocardiogram (ECG) during a diagnostic test, they can lead to congestive heart failure (CHF). Therefore, in this paper, the characterization of three cardiac abnormalities, namely CAD, MI, and CHF, is compared. The performance of the novel algorithms is based on contourlet and shearlet transformations of the ECG signals. The continuous wavelet transform (CWT) is performed on normal, CAD, MI, and CHF ECG beats to obtain scalograms. Subsequently, contourlet and shearlet transformations are applied to the scalograms to obtain the respective coefficients. Entropies and first- and second-order statistical features, namely mean (Mni), min (Mini), max (Mxi), standard deviation (Dsti), average power (Pavgi), inter-quartile range (IQRi), Shannon entropy (Eshi), mean Tsallis entropy (Emtsi), kurtosis (Kuri), mean absolute deviation (MADi), and mean energy (Ωmi), are extracted from the contourlet and shearlet coefficients. Only significant features are selected using the improved binary particle swarm optimization (IBPSO) feature selection method. The selected features are ranked using analysis of variance (ANOVA) and ReliefF techniques. The highly ranked features are fed to decision tree (DT) and K-nearest neighbor (KNN) classifiers. The proposed method achieved accuracy, sensitivity, and specificity of (i) 99.55%, 99.93%, and 99.24% using the contourlet transform, and (ii) 99.01%, 99.82%, and 98.75% using the shearlet transform. Of the two proposed techniques, the contourlet transform method performed marginally better than the shearlet transform technique in classifying the four classes.
The proposed CWT combined with the contourlet-based technique can be implemented in hospitals to speed up the diagnosis of three different cardiac abnormalities using a single ECG test, minimizing the unnecessary diagnostic tests required to confirm the diagnosis.
DOI: 10.1016/j.patrec.2020.02.010
2020
Cited 78 times
Association between work-related features and coronary artery disease: A heterogeneous hybrid feature selection integrated with balancing approach
Coronary artery disease (CAD) is a leading cause of death worldwide and is associated with high healthcare expenditure. Researchers are motivated to apply machine learning (ML) for quick and accurate detection of CAD. The performance of automated systems depends on the quality of the features used. Clinical CAD datasets contain different features with varying degrees of association with CAD. To extract such features, we developed a novel hybrid feature selection algorithm called heterogeneous hybrid feature selection (2HFS). In this work, we used the Nasarian CAD dataset, in which workplace and environmental features are considered in addition to other clinical features. The synthetic minority over-sampling technique (SMOTE) and adaptive synthetic sampling (ADASYN) are used to handle the imbalance in the dataset. Decision tree (DT), Gaussian naive Bayes (GNB), random forest (RF), and XGBoost classifiers are used, with the 2HFS-selected features as their input. Our results show that the proposed feature selection method yielded a classification accuracy of 81.23% with SMOTE and the XGBoost classifier. We also tested our approach on other well-known CAD datasets, obtaining accuracies of 83.94%, 81.58%, and 92.58% for the Hungarian, Long-beach-va, and Z-Alizadeh Sani datasets, respectively. Hence, our experimental results confirm the effectiveness of the proposed feature selection algorithm compared with existing state-of-the-art techniques for the development of automated CAD systems.
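The SMOTE idea used above to balance the dataset — oversampling the minority class by interpolating between nearby minority samples — can be sketched as follows (the helper name and toy data are illustrative; real SMOTE implementations such as imbalanced-learn's add more machinery):

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    minority point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new_pts = smote_like(minority, n_new=4)
```

ADASYN follows the same interpolation principle but biases generation toward minority samples that are harder to learn.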
DOI: 10.1016/j.patrec.2020.03.009
2020
Cited 78 times
Automated detection of abnormal EEG signals using localized wavelet filter banks
Epilepsy is a neural disorder of the central nervous system (CNS) in which brain activity sometimes becomes abnormal, which may lead to seizures, loss of awareness, unusual sensations, and abnormal behavior. Electroencephalograms (EEGs) are widely used to detect epilepsy accurately. However, the interpretation of a particular type of abnormality in the EEG signal is subjective and may vary from clinician to clinician. Visual inspection of the EEG signal, observing changes in frequency or amplitude in long-duration recordings, is an arduous task for clinicians and may lead to erroneous classification of EEGs. The proposed methodology focuses on automated detection of epilepsy using a novel stop-band energy (SBE) minimized orthogonal wavelet filter bank. Using wavelet decomposition, we obtain subbands (SBs) of the EEG signals. Subsequently, fuzzy entropy, the logarithm of the squared norm, and the fractal dimension are computed for each SB. Different combinations of the extracted features were supplied to various classifiers for the classification of normal and abnormal EEG signals. In the proposed method, we used a single-channel EEG dataset from Temple University Hospital, the most substantial publicly available EEG dataset, containing EEG recordings of 2130 distinct subjects. Our proposed system obtained the highest classification accuracies (CACC) of 78.4% and 79.34% during training and evaluation, respectively, using the SVM classifier. We achieved the highest F1-score of 0.88.
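The abstract does not specify which fractal dimension estimator is used; as an illustrative example, the common Katz estimator for a 1-D signal is:

```python
import math

def katz_fd(signal):
    """Katz fractal dimension of a 1-D signal: compares the curve's
    total length with its planar extent from the first sample."""
    n = len(signal) - 1
    # Total curve length: sum of distances between successive samples
    L = sum(math.hypot(1.0, signal[i + 1] - signal[i]) for i in range(n))
    # Planar extent: maximum distance from the first sample
    d = max(math.hypot(i, signal[i] - signal[0]) for i in range(1, n + 1))
    return math.log10(n) / (math.log10(d / L) + math.log10(n))

line_fd = katz_fd([0.0, 1.0, 2.0, 3.0])  # a straight line gives FD = 1
```

Irregular, seizure-like activity produces longer, more convoluted curves and hence a larger fractal dimension than smooth background EEG.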
DOI: 10.1016/j.cmpb.2020.105740
2020
Cited 78 times
Accurate deep neural network model to detect cardiac arrhythmia on more than 10,000 individual subject ECG records
Cardiac arrhythmia, an abnormal heart rhythm, is a common clinical problem in cardiology. Detection of arrhythmia on an extended-duration electrocardiogram (ECG) is based on initial algorithmic software screening, with final visual validation by cardiologists. It is a time-consuming and subjective process. Therefore, fully automated computer-assisted detection systems with a high degree of accuracy have an essential role in this task. In this study, we proposed an effective deep neural network (DNN) model to detect different rhythm classes from a new ECG database. Our DNN model was designed for high performance on all ECG leads. The proposed model, which includes both representation learning and sequence learning tasks, showed promising results on all 12-lead inputs. Convolutional layers and sub-sampling layers were used in the representation learning phase. The sequence learning part comprises a long short-term memory (LSTM) unit placed after the representation learning layers. We evaluated two different class scenarios, reduced rhythms (seven rhythm types) and merged rhythms (four rhythm types), according to the records in the database. Our trained DNN model achieved 92.24% and 96.13% accuracy for the reduced and merged rhythm classes, respectively. Recently, deep learning algorithms have been found to be useful because of their high performance. The main challenge is the scarcity of appropriate training and testing resources, because model performance depends on the quality and quantity of case samples. In this study, we used a new public arrhythmia database comprising more than 10,000 records and constructed an efficient DNN model for the automated detection of arrhythmia using these records.
DOI: 10.1016/j.asoc.2016.04.036
2016
Cited 75 times
Application of Gabor wavelet and Locality Sensitive Discriminant Analysis for automated identification of breast cancer using digitized mammogram images
Breast cancer is one of the prime causes of death in women. Early detection may help to improve the survival rate to a great extent. Mammography is considered one of the most reliable methods to prescreen for breast cancer. However, reading mammograms is laborious and taxing for radiologists, and prone to intra- and inter-observer variability errors. Computer Aided Diagnosis (CAD) helps to obtain a fast, consistent, and reliable diagnosis. This paper presents an automated classification of normal, benign, and malignant breast tissue using digitized mammogram images. The proposed method uses the Gabor wavelet for feature extraction and Locality Sensitive Discriminant Analysis (LSDA) for dimensionality reduction. The reduced features are ranked by their F-values and fed to Decision Tree (DT), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), k-Nearest Neighbor (k-NN), Naïve Bayes Classifier (NBC), Probabilistic Neural Network (PNN), Support Vector Machine (SVM), AdaBoost, and Fuzzy Sugeno (FSC) classifiers one by one, to select the highest performing classifier using the minimum number of features. The proposed method is evaluated using 690 mammogram images taken from the benchmark Digital Database for Screening Mammography (DDSM) dataset. Our method achieved mean accuracy, sensitivity, and specificity of 98.69%, 99.34%, and 98.26%, respectively, for the k-NN classifier using eight features with 10-fold cross-validation. This system can be employed in hospitals and polyclinics to help clinicians cross-verify their manual diagnosis.
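A Gabor wavelet is a Gaussian envelope modulated by a sinusoidal carrier; a minimal sketch of the real part of a 2-D Gabor kernel (isotropic envelope and illustrative parameters; the paper's exact filter bank settings are not given in the abstract) is:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a
    cosine carrier oriented at angle theta (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
```

Convolving a mammogram with a bank of such kernels at several orientations and wavelengths yields texture responses from which the classification features are derived.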
DOI: 10.1016/j.compbiomed.2021.104428
2021
Cited 72 times
Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals
Emotion is interpreted as a psycho-physiological process associated with the personality, behavior, motivation, and character of a person. The objective of affective computing is to recognize different types of emotions for human-computer interaction (HCI) applications. Spatiotemporal brain electrical activity is measured using multi-channel electroencephalogram (EEG) signals, and automated emotion recognition from such signals is an exciting research topic in cognitive neuroscience and affective computing. This paper proposes a rhythm-specific multi-channel convolutional neural network (CNN) based approach for automated emotion recognition using multi-channel EEG signals. The delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) rhythms of the EEG signal for each channel are extracted using band-pass filters. The EEG rhythms from the selected channels, coupled with a deep CNN, are used for the emotion classification tasks low-valence (LV) vs. high-valence (HV), low-arousal (LA) vs. high-arousal (HA), and low-dominance (LD) vs. high-dominance (HD). The deep CNN architecture considered in the proposed work has eight convolutional layers, three average-pooling layers, four batch-normalization layers, three spatial dropout layers, two dropout layers, one global average-pooling layer, and three dense layers. We validated our model using three publicly available databases: DEAP, DREAMER, and DASPS. The results reveal that the proposed multivariate deep CNN approach coupled with the β-rhythm obtained accuracies of 98.91%, 98.45%, and 98.69% for the LV vs. HV, LA vs. HA, and LD vs. HD classification strategies, respectively, on the DEAP database with a 10-fold cross-validation (CV) scheme. Similarly, accuracies of 98.56%, 98.82%, and 98.99% were obtained for the LV vs. HV, LA vs. HA, and LD vs. HD classification schemes, respectively, using the deep CNN and θ-rhythm.
The proposed multi-channel rhythm-specific deep CNN classification model obtained an average accuracy of 57.14% using the α-rhythm and trial-specific CV on the DASPS database. Moreover, for the 8-quadrant emotion classification strategy, the deep CNN based classifier obtained an overall accuracy of 24.37% using the γ-rhythms of the multi-channel EEG signals. Our deep CNN model can be used for real-time automated emotion recognition applications.
DOI: 10.1016/j.compbiomed.2020.104057
2020
Cited 70 times
HAN-ECG: An interpretable atrial fibrillation detection model using hierarchical attention networks
Atrial fibrillation (AF) is one of the most prevalent cardiac arrhythmias; it affects the lives of many people around the world and is associated with a five-fold increased risk of stroke and mortality. As with other problems in the healthcare domain, artificial intelligence (AI)-based models have been used to detect AF from patients' ECG signals. Cardiologist-level performance in detecting this arrhythmia is often achieved by deep learning-based methods; however, they suffer from a lack of interpretability. In other words, these approaches are unable to explain the reasons behind their decisions. The lack of interpretability is a common challenge for the wide application of machine learning (ML)-based approaches in healthcare, and it limits clinicians' trust in such methods. To address this challenge, we propose HAN-ECG, an interpretable bidirectional-recurrent-neural-network-based approach for the AF detection task. HAN-ECG employs three levels of attention mechanism to provide a multi-resolution analysis of the ECG patterns leading to AF. The patterns detected by this hierarchical attention model facilitate interpretation of the neural network's decision process by identifying the parts of the signal that contributed most to the final detection. Experimental results on two AF databases demonstrate that our proposed model performs better than the existing algorithms. Visualization of the attention layers illustrates that our model bases its decisions on waves and heartbeats that are clinically meaningful for the detection task (e.g., the absence of P-waves and irregular R-R intervals for AF).
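The core of an attention level like those in HAN-ECG is a softmax-weighted pooling of hidden states, so that salient waves or beats dominate the pooled context vector; a simplified sketch (not the actual HAN-ECG code, which learns the scores from the hidden states) is:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(hidden_states, scores):
    """Attention pooling: weight each hidden state by its softmax score
    and sum, so that high-scoring elements dominate the context vector.
    The weights themselves serve as the interpretability signal."""
    weights = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# Three 'heartbeat' states; the second receives a much higher score
ctx = attend([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]], [0.0, 5.0, 0.0])
```

Stacking such levels (wave, heartbeat, rhythm) yields the multi-resolution attention maps that the paper visualizes.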
DOI: 10.1016/j.knosys.2020.106547
2021
Cited 69 times
Automated accurate speech emotion recognition system using twine shuffle pattern and iterative neighborhood component analysis techniques
Speech emotion recognition is one of the challenging research issues in knowledge-based systems, and various methods have been recommended to reach high classification capability. To achieve high classification performance in speech emotion recognition, a nonlinear multi-level feature generation model is presented based on a cryptographic structure. The novelty of this work is the use of a cryptographic structure called a shuffle box for feature generation, together with iterative neighborhood component analysis for feature selection. The proposed method has three main stages: (i) multi-level feature generation using the tunable Q wavelet transform (TQWT), (ii) feature generation with the twine shuffle pattern (twine-shuf-pat), and (iii) selection of discriminative features using iterative neighborhood component analysis (INCA), followed by classification. The TQWT is a multi-level wavelet transformation method used to generate high-level, medium-level, and low-level wavelet coefficients. The proposed twine-shuf-pat technique is used to extract features from the decomposed wavelet coefficients, and the INCA feature selector is employed to select the most significant among them. The performance of the model is validated using four public speech emotion databases (RAVDESS Speech, Emo-DB (Berlin), SAVEE, and EMOVO). Our twine-shuf-pat and INCA based method yielded classification accuracies of 87.43%, 90.09%, 84.79%, and 79.08% on the RAVDESS, Emo-DB (Berlin), SAVEE, and EMOVO corpora, respectively, with a 10-fold cross-validation strategy. A mixed database created from the four public speech emotion databases yielded a classification accuracy of 80.05%. Our speech emotion model is ready to be tested on larger databases and can be used in healthcare applications.
DOI: 10.1016/j.compbiomed.2021.104457
2021
Cited 69 times
Automated detection of coronary artery disease, myocardial infarction and congestive heart failure using GaborCNN model with ECG signals
Cardiovascular diseases (CVDs) are the main cause of death globally, with coronary artery disease (CAD) being the most important. Timely diagnosis and treatment of CAD is crucial to reduce the incidence of CAD complications such as myocardial infarction (MI) and ischemia-induced congestive heart failure (CHF). Electrocardiogram (ECG) signals are the most commonly employed diagnostic screening tool to detect CAD. In this study, an automated system (AS) was developed for the categorization of ECG signals into normal, CAD, MI, and CHF classes using a convolutional neural network (CNN) and a unique GaborCNN model. Weight balancing was used to handle the imbalanced dataset. High classification accuracies of more than 98.5% were obtained by both the CNN and GaborCNN models for the 4-class classification of normal, CAD, MI, and CHF. The GaborCNN is the preferred model due to its good performance and reduced computational complexity compared with the CNN model. To the best of our knowledge, this is the first study to propose a GaborCNN model for the automated categorization of normal, CAD, MI, and CHF classes using ECG signals. Our proposed system is ready to be validated on a bigger database and has the potential to aid clinicians in screening for CVDs using ECG signals.
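Weight balancing as mentioned above typically means inverse-frequency class weights applied in the training loss; a minimal sketch with illustrative class counts (not the paper's actual dataset sizes):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: weight_c = N / (K * n_c), so
    that rare classes contribute more per sample to the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# e.g. many 'normal' beats and few CHF beats (illustrative counts)
weights = balanced_class_weights(['normal'] * 80 + ['cad'] * 10 +
                                 ['mi'] * 6 + ['chf'] * 4)
```

With these weights, misclassifying a rare CHF beat costs the model far more than misclassifying a common normal beat, counteracting the class imbalance.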
DOI: 10.1111/exsy.12485
2019
Cited 63 times
Hybrid particle swarm optimization for rule discovery in the diagnosis of coronary artery disease
Coronary artery disease (CAD) is one of the major causes of mortality worldwide. Knowledge about the risk factors that increase the probability of developing CAD can help to understand the disease better and assist in its treatment. Recently, modern computer-aided approaches have been used for the prediction and diagnosis of diseases. Swarm intelligence algorithms like particle swarm optimization (PSO) have demonstrated great performance in solving different optimization problems. As rule discovery can be modelled as an optimization problem, it can be solved by means of an evolutionary algorithm like PSO. An approach for discovering classification rules for CAD is proposed. The work is based on a real-world CAD dataset and aims at detecting the disease by producing accurate and effective rules. The proposed algorithm is a hybrid binary-real PSO, which combines categorical and numerical encoding of a particle and uses a different approach for calculating particle velocities. The rules were developed from randomly generated particles, which take random values in the range of each attribute in the rule. Two different feature selection methods, based on multi-objective evolutionary search and PSO, were applied to the dataset, and the most relevant features were selected by the algorithms. The accuracy of two different rule sets was evaluated: the rule set with 11 features obtained more accurate results than the rule set with 13 features. Our results show that the proposed approach is able to produce effective rules with the highest accuracy for the detection of CAD.
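The standard PSO velocity and position update underlying the hybrid algorithm can be sketched as follows (the coefficients w, c1, c2 are typical defaults, not the paper's tuned values, and the real-valued update shown here omits the paper's binary/categorical encoding):

```python
import random

def pso_step(position, velocity, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, rng=random.Random(0)):
    """One particle-swarm update: inertia (w) plus stochastic pulls
    toward the particle's personal best and the swarm's global best."""
    new_v, new_x = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        vi = (w * v
              + c1 * rng.random() * (pb - x)
              + c2 * rng.random() * (gb - x))
        new_v.append(vi)
        new_x.append(x + vi)
    return new_x, new_v

# A particle at the origin is pulled toward its bests
x1, v1 = pso_step([0.0, 0.0], [0.0, 0.0],
                  pbest=[1.0, 1.0], gbest=[2.0, 2.0])
```

In rule discovery, each particle encodes a candidate rule (attribute ranges and categories), and rule quality on the training data serves as the fitness that defines pbest and gbest.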
DOI: 10.1016/j.patrec.2019.04.014
2019
Cited 61 times
A new method to identify coronary artery disease with ECG signals and time-Frequency concentrated antisymmetric biorthogonal wavelet filter bank
The excessive deposition of plaque on the inner walls of the arteries causes coronary artery disease (CAD). It can be detected through morphological changes in electrocardiogram (ECG) signals. Manual analysis of ECG signals is inefficient, as it is laborious and vulnerable to errors. CAD diagnostic tests also demand physical effort from patients, which elderly and physically challenged patients may not manage readily. An automated coronary artery disease diagnosis system can help to overcome these problems. In this study, ECG segments of different durations (2 s and 5 s) are employed for the analysis of CAD. We propose the use of a recently developed optimally time-frequency concentrated (OTFC) even-length biorthogonal wavelet filter bank (BWFB) for automatically identifying CAD. Fuzzy entropy (FE) and log-energy (LogE) features were extracted from the OTFC decomposed coefficients. Using the Gaussian support vector machine (GSVM) classifier, an average classification accuracy of 99.53% was achieved with 10-fold cross-validation (CV). The average sensitivity and specificity obtained are 98.64% and 99.70%, respectively, with a Matthews correlation coefficient (MCC) of 0.983. The classification performance of the proposed model surpasses most state-of-the-art models. The method presented in this paper can be of great help to clinicians and cardiologists in validating their diagnosis. Our model is economical, robust, and accurate in diagnosing CAD.
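The Matthews correlation coefficient reported above is computed directly from the binary confusion-matrix counts:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient: a balanced single-number
    summary of a binary confusion matrix, in [-1, 1]."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

perfect = mcc(tp=50, tn=50, fp=0, fn=0)  # flawless classifier
```

Unlike accuracy, the MCC stays low for a classifier that merely predicts the majority class, which is why it is a useful companion metric for imbalanced CAD/normal data.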
DOI: 10.1016/j.compbiomed.2020.104095
2021
Cited 61 times
Coronary artery disease detection using artificial intelligence techniques: A survey of trends, geographical differences and diagnostic features 1991–2020
While coronary angiography is the gold standard diagnostic tool for coronary artery disease (CAD), but it is associated with procedural risk, it is an invasive technique requiring arterial puncture, and it subjects the patient to radiation and iodinated contrast exposure. Artificial intelligence (AI) can provide a pretest probability of disease that can be used to triage patients for angiography. This review comprehensively investigates published papers in the domain of CAD detection using different AI techniques from 1991 to 2020, in order to discern broad trends and geographical differences. Moreover, key decision factors affecting CAD diagnosis are identified for different parts of the world by aggregating the results from different studies. In this study, all datasets that have been used for the studies for CAD detection, their properties, and achieved performances using various AI techniques, are presented, compared, and analyzed. In particular, the effectiveness of machine learning (ML) and deep learning (DL) techniques to diagnose and predict CAD are reviewed. From PubMed, Scopus, Ovid MEDLINE, and Google Scholar search, 500 papers were selected to be investigated. Among these selected papers, 256 papers met our criteria and hence were included in this study. Our findings demonstrate that AI-based techniques have been increasingly applied for the detection of CAD since 2008. AI-based techniques that utilized electrocardiography (ECG), demographic characteristics, symptoms, physical examination findings, and heart rate signals, reported high accuracy for the detection of CAD. In these papers, the authors ranked the features based on their assessed clinical importance with ML techniques. The results demonstrate that the attribution of the relative importance of ML features for CAD diagnosis is different among countries. More recently, DL methods have yielded high CAD detection performance using ECG signals, which drives its burgeoning adoption.