**4. Effect of noise in ECG signals and importance of data preprocessing**

Noise is an undesirable signal that disrupts the original message signal and alters its parameters. Noise distorts the message and hinders it from being understood as intended. When loud, distracting noise disrupts the communication assimilation process, comprehension suffers.

*Deep Learning Algorithms for Efficient Analysis of ECG Signals to Detect Heart Disorders DOI: http://dx.doi.org/10.5772/intechopen.103075*

No signal is entirely free of noise. Noise can distort a signal or mask it outright, and the distortion is most noticeable at sensitive receivers. Both analog and digital systems suffer from noise, which diminishes their performance: in analog systems, noise degrades the quality of the received signal, while in digital systems it reduces overall performance because corrupted data packets must be retransmitted or additional coding must be applied to recover the data after an error. The most prevalent and evident issue produced by signal noise is distortion of the processed signal, which causes the equipment to interpret or display the process state inaccurately. Unusually severe signal noise can cause an apparent signal loss. Most modern electronic devices incorporate noise filtering; however, in excessively noisy environments this filtering may not be sufficient, leaving the device with no usable signal and no connection.
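
To make the degradation concrete, the signal-to-noise ratio (SNR) quantifies how strongly noise corrupts a signal. The following Python sketch (the sine "message", amplitude, and sampling rate are illustrative values, not taken from the chapter) adds white Gaussian noise to a clean signal and measures the resulting SNR in decibels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1 Hz sine "message" sampled at 250 Hz (a common ECG rate).
fs = 250
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 1.0 * t)

# Additive white Gaussian noise, one of the high-frequency noises discussed below.
noise = 0.2 * rng.standard_normal(t.size)
noisy = signal + noise

def snr_db(clean, corrupted):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    noise_power = np.mean((corrupted - clean) ** 2)
    signal_power = np.mean(clean ** 2)
    return 10 * np.log10(signal_power / noise_power)

print(f"SNR: {snr_db(signal, noisy):.1f} dB")
```

Halving the noise amplitude raises the SNR by about 6 dB, which is why even modest analog filtering before digitization pays off.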

The presence of noise can make it difficult or impossible to identify a representative ECG signal, and noise in the ECG can lead to incorrect interpretation. Noise in the ECG signal falls into two main categories. High-frequency noises include electromyogram noise, additive white Gaussian noise, and power line interference; power line interference distorts the amplitude, duration, and shape of the low-amplitude local waves of the ECG signal. Low-frequency noise includes baseline wander, which alters the ECG signal's ST segment and low-frequency components.

Noise can be reduced by keeping the signal wires as short as possible and routing them away from electrical machinery. Using differential inputs cancels noise that is common to both wires. Noise can also be reduced by filtering the signal or by using an integrating A/D converter to suppress mains-frequency interference.
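
A minimal sketch of the filtering approach mentioned above, using SciPy. The 50 Hz mains frequency, the cutoff frequencies, and the filter orders are assumptions chosen for illustration (60 Hz mains and other cutoffs are equally valid):

```python
import numpy as np
from scipy import signal as sig

fs = 360  # sampling rate in Hz; an assumption matching the MIT-BIH recordings

# Synthetic example: a slow wave standing in for the ECG, plus 50 Hz
# power line interference.
t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.0 * t)
contaminated = ecg_like + 0.5 * np.sin(2 * np.pi * 50.0 * t)

# Notch filter centred on the mains frequency to remove power line interference.
b, a = sig.iirnotch(w0=50.0, Q=30.0, fs=fs)
filtered = sig.filtfilt(b, a, contaminated)

# High-pass filter to suppress baseline wander (content below ~0.5 Hz).
b_hp, a_hp = sig.butter(2, 0.5, btype="highpass", fs=fs)
cleaned = sig.filtfilt(b_hp, a_hp, filtered)
```

`filtfilt` applies each filter forward and backward, which avoids phase distortion — important when wave onsets and offsets must stay in place for delineation.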

Various ECG denoising techniques [32] are used to reduce noise in the signal. They include EMD-based models, deep-learning-based models, wavelet-based models, sparsity-based models, Bayesian-filter-based models, hybrid models, and the discrete wavelet transform.

The discrete wavelet transform is a digital signal processing technique that can suppress electrical noise, achieving a higher signal-to-noise ratio than lock-in amplifier equipment. A discrete wavelet transform decomposes a signal into a number of sets, each set comprising a time series of coefficients that describes the signal's evolution over time in the corresponding frequency band.
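
The decomposition idea can be illustrated with a minimal hand-rolled one-level Haar wavelet transform. Real denoising pipelines typically use a library such as PyWavelets with deeper decompositions and data-driven thresholds; the threshold below is arbitrary and purely illustrative:

```python
import numpy as np

def haar_dwt(x):
    """One level of the discrete wavelet transform with the Haar wavelet.

    Splits the signal into approximation coefficients (low-frequency band)
    and detail coefficients (high-frequency band), each half the length.
    """
    x = np.asarray(x, dtype=float)
    assert x.size % 2 == 0, "length must be even for this minimal sketch"
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # smooth trend
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # rapid fluctuations
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfectly reconstructs the original signal."""
    evens = (approx + detail) / np.sqrt(2)
    odds = (approx - detail) / np.sqrt(2)
    return np.column_stack([evens, odds]).ravel()

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)

# Denoising sketch: zero out small detail coefficients (hard thresholding),
# then reconstruct a smoothed signal.
d_thresh = np.where(np.abs(d) > 1.5, d, 0.0)
denoised = haar_idwt(a, d_thresh)
```

Because the high-frequency content is isolated in the detail coefficients, thresholding them removes rapid fluctuations while leaving the overall trend of the signal intact.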

The process of converting raw data into a comprehensible format is known as data preprocessing. Raw data is rarely suitable for direct use, so preprocessing is a key stage in data mining: before applying machine learning or data mining methods, the data must be of high quality. Preprocessing is likewise a necessary and significant step in every brain-computer interface-based application. It checks the accuracy, completeness, believability, consistency, interpretability, and timeliness of the data, assists with the removal of undesirable artifacts, and prepares the data for subsequent processing.
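
Two of these checks — completeness and normalization — can be sketched in a few lines. The specific steps below are illustrative assumptions, not a pipeline prescribed by the chapter:

```python
import numpy as np

def preprocess(segments):
    """Basic preprocessing sketch: drop incomplete records, then
    z-score-normalize each segment so downstream models see
    zero-mean, unit-variance input."""
    # Completeness check: discard any segment containing missing samples.
    complete = [s for s in segments if not np.isnan(s).any()]
    normalized = []
    for s in complete:
        s = np.asarray(s, dtype=float)
        normalized.append((s - s.mean()) / s.std())
    return normalized

raw = [np.array([1.0, 2.0, 3.0, 4.0]),
       np.array([1.0, np.nan, 3.0, 4.0])]  # second record is incomplete
ready = preprocess(raw)
```

In practice one would often interpolate short gaps rather than discard whole records; dropping them keeps this sketch minimal.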

### **5. State-of-the-art techniques**

Peimankar et al. [33] proposed a deep learning model for real-time segmentation of heartbeats that could be used in real-time telehealth diagnostic systems. The proposed technique integrates a CNN and an LSTM model to predict and analyze the onset, peak, and offset of the heartbeat waveforms: the P wave, QRS complex, T wave, and no wave. The proposed model is known as the DENS-ECG model. Using 5-fold cross-validation, the model is trained and evaluated on a dataset of 105 ECGs, each 15 min long, and attains an average sensitivity and accuracy of 97.95% and 95.68%, respectively. In addition, the method is evaluated on an unseen dataset to assess how robust its QRS detection is, achieving a sensitivity of 99.61% and accuracy of 99.52%. These results illustrate the combined CNN-LSTM model's adaptability and accuracy in delineating ECG signals. The accuracy of the proposed DENS-ECG model in recognizing ECG waveforms leaves the door open for cardiologists to apply the algorithm in-house to evaluate ECG recordings and diagnose cardiac arrhythmias. The model is shown in **Figure 6**.
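
The sensitivity and accuracy figures quoted above are standard classification metrics and can be computed from per-sample predictions as follows; the toy labels are purely illustrative:

```python
import numpy as np

def sensitivity_and_accuracy(y_true, y_pred, positive):
    """Sensitivity = TP / (TP + FN); accuracy = correct / total."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == positive) & (y_pred == positive))
    fn = np.sum((y_true == positive) & (y_pred != positive))
    sensitivity = tp / (tp + fn)
    accuracy = np.mean(y_true == y_pred)
    return sensitivity, accuracy

# Toy per-sample waveform labels ("QRS" vs. "none"), not real model output.
truth = ["QRS", "QRS", "none", "QRS", "none"]
pred  = ["QRS", "none", "none", "QRS", "QRS"]
sens, acc = sensitivity_and_accuracy(truth, pred, positive="QRS")
```

For multi-class delineation (P, QRS, T, no wave), the paper's average sensitivity corresponds to computing this per class and averaging.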

In **Figure 6**, noise reduction refers to filtering the ECG signals to reduce noise and remove baseline wander. In the segmentation step, the ECG signals are divided into 1000-sample chunks that are fed into the model as input. The segmented ECG signals are then split into a testing set and a non-testing set. The model uses 5-fold cross-validation to obtain a more trustworthy performance estimate. It consists of eight layers, including an input layer, three 1D convolution layers, two BiLSTM layers, and a dropout layer, and it is trained with the Adam optimization algorithm, which differs substantially from plain stochastic gradient descent (SGD) and achieved higher performance on the validation set. The trained model is tested on 26 unseen test records from the QTDB dataset to assess the classifier's performance. Furthermore, the model is evaluated for QRS detection on the previously unused MITDB dataset.
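
The segmentation and cross-validation splitting steps can be sketched as follows. The 1000-sample chunk size follows the paper; the rule of dropping a trailing partial chunk and the round-robin fold assignment are assumptions of this sketch:

```python
import numpy as np

def segment(record, length=1000):
    """Split a 1-D ECG record into non-overlapping fixed-length chunks,
    discarding any trailing partial chunk."""
    n_chunks = record.size // length
    return record[: n_chunks * length].reshape(n_chunks, length)

def five_fold_indices(n_segments, fold, k=5):
    """Indices for one cross-validation split: fold `fold` is held out
    for testing, the remaining segments form the non-testing set."""
    idx = np.arange(n_segments)
    test = idx[idx % k == fold]
    train = idx[idx % k != fold]
    return train, test

# Synthetic record standing in for one ECG recording.
record = np.random.default_rng(1).standard_normal(5432)
chunks = segment(record)
train, test = five_fold_indices(len(chunks), fold=0)
```

Each of the five folds serves as the held-out set exactly once, so every segment contributes to both training and evaluation across the full procedure.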

**Figure 6.** *Flowchart of the proposed DENS-ECG model [33].*

Jambukia et al. [34] presented an overview of ECG classification into arrhythmia categories, noting that classification of electrocardiogram (ECG) signals plays a crucial role in monitoring heart diseases, since early and precise diagnosis of arrhythmia types is essential for tracking cardiac disorders and selecting the best treatment option for a patient. The survey outlines the challenges of ECG classification and provides a comprehensive overview of preprocessing approaches, ECG databases, feature extraction techniques, ANN-based classifiers, and performance measures for evaluating the classifiers' accuracy. According to the survey, many researchers have worked on ECG signal classification, using different preprocessing techniques, feature extraction techniques, and classifiers; the majority used the MIT-BIH arrhythmia database. A. Dallali et al. used DWT to extract the RR interval, normalized it with the z-score, and classified ECG beats using FCM, achieving 99.05% accuracy. The RR interval and R-point position are two characteristics retrieved using DWT; FCM was used for pre-classification, while a 3-layer MLPNN performed the final classification, reaching a reported accuracy of 99.99%.
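
The RR-interval extraction and z-score normalization used in the pipeline described above can be sketched as follows; the R-peak positions and sampling rate are hypothetical values for illustration:

```python
import numpy as np

# Hypothetical R-peak sample positions at an assumed 360 Hz sampling rate.
fs = 360
r_peaks = np.array([100, 390, 672, 961, 1245])

# RR intervals in seconds: time differences between successive R peaks.
rr = np.diff(r_peaks) / fs

# Z-score normalization: each interval expressed in standard deviations
# from the mean, as in the Dallali et al. pipeline described above.
rr_z = (rr - rr.mean()) / rr.std()
```

Normalizing the RR intervals removes inter-patient differences in resting heart rate, so a classifier sees rhythm *variability* rather than absolute rate.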

Saadatnejad et al. [35] proposed an ECG classification model intended for continuous cardiac monitoring on wearable devices with limited processing resources. The model is demonstrated in **Figure 7** in detail. Incoming digitized ECG data are first split into heartbeats, their RR intervals are computed, and wavelet characteristics are extracted. The ECG signal and the extracted characteristics are then fed into two RNN-based models that classify every heartbeat, and the two outputs are combined into the final categorization for each beat. The proposed method meets the timing requirements for continuous, real-time execution on wearable devices. Unlike many compute-intensive deep-learning-based techniques, it is both accurate and lightweight, allowing wearable devices with modest processing capability to perform continuous monitoring with accurate LSTM-based ECG classification at negligible computational expense while running indefinitely.

**Figure 7.** *The proposed algorithm of LSTM-based ECG classification model [35].*

**Figure 8.** *The DNN architecture used for ECG classification [36].*

Ribeiro et al. [36] proposed an end-to-end DNN capable of accurately identifying six ECG abnormalities in S12L-ECG examinations, with diagnostic performance comparable to that of medical residents and students. The DNN was trained on data from the Clinical Outcomes in Digital Electrocardiology study, which included over 2 million labeled tests analyzed by the Telehealth Network of Minas Gerais. The DNN surpassed cardiology residents in detecting six different types of abnormalities in 12-lead ECG recordings, with F1 scores over 80% and specificity exceeding 95%. These results suggest that DNN-based ECG analysis, previously tested in a single-lead scenario, generalizes well to 12-lead examinations, bringing the technology closer to practical use. The model has the potential to enable more accurate automated diagnosis and better clinical practice. Even though professional assessment of complex and borderline cases remains essential in this future scenario, automatic interpretation by a DNN algorithm may increase the population's access to this fundamental and valuable diagnostic test. **Figure 8** shows the deep learning model used in this work.

In **Figure 8**, Conv, BN, and Dense denote the convolution, batch normalization, and fully connected layers, whereas ReLU and *σ* represent the activation layers, namely the rectified linear unit and the sigmoid, respectively. ResBlk is the residual block; the internal architecture of each such block is shown in detail below the main architecture. In the residual block, the dropout layer represents dropout regularization.
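
The skip connection that defines a residual block can be shown in a few lines of NumPy. The stand-in linear transform below replaces the real Conv → BN → ReLU → Dropout → Conv path purely so the skip-connection idea stays visible:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, transform):
    """Skeleton of a residual block: the input skips around the transformed
    path and is added back before the final activation. In the actual DNN
    the transform is a stack of convolution, batch normalization, and
    dropout layers; here it is a stand-in linear map for illustration."""
    return relu(transform(x) + x)

rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal((4, 4))  # small random weights, illustrative only
x = rng.standard_normal(4)
y = residual_block(x, lambda v: w @ v)
```

Because the identity path bypasses the transform, gradients flow directly through the addition, which is what lets such networks train at the depths used in this architecture.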
