**6. Discussion**

*Sports Science and Human Health - Different Approaches*

than abnormal, and this is because of the imbalanced dataset. Therefore, to reduce the potential over-fitting of the features, the number of features used in the training process was reduced. The training dataset was re-trained with the reduced feature set (15 features), and all evaluation parameters were calculated. **Table 7** summarizes the evaluation measures for identifying the best algorithm with feature selection.

Comparing **Table 7** with **Table 6**, the overall accuracy was reduced, and the accuracies for classifying both normal and abnormal cases were also reduced, even though the same algorithms performed best after feature reduction. Therefore, it can be said that the features used for classification are already optimized and cannot be reduced.

While attempting to improve the best performing algorithms by tuning their hyperparameters, it was observed that the performance of the ensemble algorithm could be improved. Two important parameters were optimized for the ensemble algorithm: "Distance" and "Number of neighbors."
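As a concrete illustration of what tuning these two parameters involves, the sketch below grid-searches the number of neighbors and the distance-weighting option of a minimal KNN classifier, using leave-one-out accuracy as the selection criterion. This is a hypothetical pure-Python example, not the chapter's MATLAB pipeline; the toy dataset and parameter grid are invented for illustration.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k, weighted):
    """Classify x by its k nearest training points; optionally weight votes by 1/distance."""
    nearest = sorted((math.dist(p, x), label) for p, label in zip(train_X, train_y))[:k]
    votes = Counter()
    for d, label in nearest:
        votes[label] += 1.0 / (d + 1e-9) if weighted else 1.0
    return votes.most_common(1)[0][0]

def loo_accuracy(X, y, k, weighted):
    """Leave-one-out accuracy for one (number of neighbors, weighting) setting."""
    hits = 0
    for i in range(len(X)):
        rest_X = X[:i] + X[i + 1:]
        rest_y = y[:i] + y[i + 1:]
        hits += knn_predict(rest_X, rest_y, X[i], k, weighted) == y[i]
    return hits / len(X)

# Toy 2-D dataset: two separable clusters (hypothetical data, not the chapter's).
X = [(0, 0), (0, 1), (1, 0), (1, 1), (5, 5), (5, 6), (6, 5), (6, 6)]
y = ["normal"] * 4 + ["abnormal"] * 4

# Grid search over the two tuned parameters: number of neighbors and distance weighting.
best = max(
    ((k, w) for k in (1, 3, 5) for w in (False, True)),
    key=lambda kw: loo_accuracy(X, y, *kw),
)
print(best, loo_accuracy(X, y, *best))
```

In tools such as MATLAB's Classification Learner, the same search is performed automatically (e.g., by Bayesian optimization) rather than by the exhaustive grid shown here.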

**Table 6.** *Performance measures of the three best performing algorithms for the full feature set.*

| Items | Fine KNN | Weighted KNN | Ensemble subspace discriminant |
|---|---|---|---|
| Accuracy (%) | 94.63 | 93.72 | 93.17 |
| Accuracy: abnormal (%) | 88, 12 | 85, 15 | 87, 13 |
| Accuracy: normal (%) | 96.6, 3.4 | 97, 3 | 95, 5 |
| Error (%) | 5.37 | 6.28 | 6.83 |
| Sensitivity (%) | 96.32 | 95.24 | 95.67 |
| Specificity (%) | 89.34 | 88.72 | 85.49 |
| Precision (%) | 96.62 | 96.54 | 95.29 |
| FPR (%) | 10.66 | 11.28 | 14.51 |
| F1 score (%) | 96.46 | 95.88 | 95.48 |
| MCC (%) | 85.34 | 82.7 | 81.5 |

**Table 7.** *Performance matrix for the three best performing algorithms on the reduced feature set.*

| Items | Fine KNN | Weighted KNN | Ensemble subspace discriminant |
|---|---|---|---|
| Accuracy (%) | 92.36 | 92.02 | 92.89 |
| Accuracy: abnormal (%) | 84, 16 | 82, 18 | 83, 17 |
| Accuracy: normal (%) | 95, 5 | 95, 5 | 96, 4 |
| Error (%) | 7.64 | 7.98 | 7.11 |
| Sensitivity (%) | 94.85 | 94.30 | 94.77 |
| Specificity (%) | 84.52 | 84.62 | 86.71 |
| Precision (%) | 95.08 | 95.22 | 95.90 |
| FPR (%) | 15.48 | 15.38 | 13.29 |
| F1 score (%) | 94.96 | 94.76 | 95.33 |
| MCC (%) | 79.17 | 78.09 | 80.42 |
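All of the measures reported in Tables 6 and 7 follow directly from the binary confusion matrix (true/false positive and negative counts for the abnormal/normal classes). A minimal sketch of the standard formulas, using hypothetical counts rather than the chapter's actual confusion matrices:

```python
import math

def binary_metrics(tp, fn, fp, tn):
    """Standard binary-classification measures from confusion-matrix counts."""
    total = tp + fn + fp + tn
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    prec = tp / (tp + fp)                      # precision
    return {
        "accuracy": (tp + tn) / total,
        "error": (fp + fn) / total,            # error = 1 - accuracy
        "sensitivity": sens,
        "specificity": spec,
        "precision": prec,
        "fpr": fp / (fp + tn),                 # false positive rate = 1 - specificity
        "f1": 2 * prec * sens / (prec + sens), # harmonic mean of precision and recall
        "mcc": (tp * tn - fp * fn)
               / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

# Hypothetical counts (not taken from the chapter's experiments).
m = binary_metrics(tp=90, fn=10, fp=5, tn=95)
print({k: round(v, 4) for k, v in m.items()})
```

Note that MCC, unlike accuracy, penalizes a classifier that does well only on the majority class, which is why it drops more sharply than accuracy in the tables above when abnormal-class performance degrades.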

This chapter summarizes the findings of four different applications of wearable devices to tackle four critical clinical problems. The smart wearable devices reported here can help patients in different settings to manage their diseases, which will reduce the need for frequent hospital visits and can elevate their standard of living. The results of each case study are summarized below.

The FSR-based smart insole can acquire high-quality vGRF for different gait cycles. The flexible piezoelectric sensors performed poorly in calibration because of their sensitivity to three-dimensional forces, which requires special force calibration machines to control the applied force in the x, y, and z directions. Therefore, piezoelectric sensors cannot be used as a substitute for FSRs in the smart insole application.

For predicting systolic and diastolic blood pressure (SBP and DBP) from the PPG signal, feature selection combined with Gaussian process regression (GPR) produced the best results: ReliefF with GPR performed best for estimating SBP, while correlation-based feature selection (CFS) with GPR performed best for DBP. This optimized approach can estimate SBP and DBP with RMSEs of 6.74 and 3.57, respectively.
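To make the regression stage concrete (the chapter's actual pipeline first applied ReliefF or CFS feature selection to PPG features, which is omitted here), the sketch below implements the GP posterior mean with a zero-mean prior and an RBF kernel, then reports the RMSE on synthetic data. Every dataset, function name, and parameter value in it is hypothetical and chosen for illustration only.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gpr_fit_predict(X_train, y_train, X_test, noise=1e-4, length_scale=1.0):
    """GP regression posterior mean at X_test (zero-mean prior, RBF kernel)."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train, length_scale)
    return K_star @ np.linalg.solve(K, y_train)

# Hypothetical 1-D feature -> target data standing in for selected PPG features -> SBP.
rng = np.random.default_rng(0)
X = np.linspace(0, 6, 30).reshape(-1, 1)
y = np.sin(X[:, 0]) + 0.01 * rng.standard_normal(30)

pred = gpr_fit_predict(X, y, X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse)
```

The `noise` term doubles as both the assumed observation noise and a jitter that keeps the kernel matrix well-conditioned; in a real pipeline it would be fitted (e.g., by maximizing the marginal likelihood) rather than fixed.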

The extended modified B-distribution (EMBD) showed the best performance in classifying ST elevation and T-wave inversion in the heart attack detection case study using ECG signals. The variance of the results across different iterations was lowest for the EMBD technique (**Table 3**); thus, EMBD was the most robust for heart attack detection with noisy ECG data.

Heart sound signals were classified most accurately by the "Fine Tree" classifier among 22 different algorithms [three decision trees, two discriminant analyses, six support vector machines (SVMs), six k-nearest neighbors (KNNs), and five ensemble classifiers]. Feature reduction did not help in improving the performance. It was observed that the best performing algorithms could be improved further by optimizing their hyperparameters.
