**5. Role of ML for TES and TMA systems and enhancing PCM reliability**

ML has had a significant impact on the performance of TES platforms, playing a pivotal role in predictive modeling for both TES and TMA. A 2020 study by Li et al. exemplifies the use of ML algorithms to predict energy demand patterns with high precision. These models draw on diverse data sources, such as weather conditions, user behavior, and historical consumption records, to anticipate energy requirements. ML enables TES systems to store and release energy with high accuracy, minimizing energy wastage and improving overall system efficiency; the result is a TES solution that is both more sustainable and more cost-effective. ML is also applied to real-time temperature control within TES systems: it continuously monitors temperature fluctuations and fine-tunes the energy storage and release processes accordingly. This dynamic adaptability allows TES systems to perform optimally as conditions change, enhancing their reliability and overall performance. ML modules can also be deployed to fortify TES systems, taking on essential roles in fault detection, resilience, and robustness.

In real time, ML algorithms can identify potential faults or irregularities within TES systems. ML models are adept at detecting anomalies and can promptly trigger preventive maintenance or system adjustments, which minimizes downtime and improves system resilience. ML models are also valuable for selecting the most suitable PCMs: by analyzing the thermal properties of candidate materials, ML-based optimization increases energy storage capacity and improves overall TES system performance. The integration of TES with renewable energy sources, an integral aspect of modern energy systems, is likewise facilitated by ML. ML models predict renewable energy generation patterns, enabling TES systems to adapt and store surplus energy efficiently. This integration ensures a reliable and uninterrupted energy supply, promoting the sustainability of renewable sources. Finally, ML models play a substantial role in managing TES systems within the broader energy grid: ML algorithms analyze grid data and make real-time decisions on energy charging and discharging. This grid-management capability helps balance energy supply and demand while reducing stress on the grid [87, 88].
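As a minimal sketch of the fault-detection idea described above (not a system from the cited studies), the snippet below flags temperature readings that deviate sharply from a rolling baseline; the window size, threshold, and synthetic temperature trace are illustrative assumptions:

```python
import numpy as np

def detect_anomalies(temps, window=20, z_thresh=3.0):
    """Flag indices whose reading deviates from the rolling mean of the
    preceding `window` samples by more than z_thresh standard deviations."""
    temps = np.asarray(temps, dtype=float)
    flags = []
    for i in range(window, len(temps)):
        ref = temps[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(temps[i] - mu) > z_thresh * sigma:
            flags.append(i)
    return flags

# Synthetic storage-tank temperature trace with one injected fault spike
rng = np.random.default_rng(0)
trace = 55.0 + 0.5 * rng.standard_normal(200)  # nominal ~55 degC
trace[150] += 10.0                             # simulated sensor/heater fault
faults = detect_anomalies(trace)
```

Flagged indices (here the injected spike at sample 150) would be the events that trigger the preventive maintenance or control adjustments described above.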

#### **5.1 Reducing supercooling of PCM using CFT combined with ML**

In addition to precise control of TES system functionality, ML models can also be used to address the problem of supercooling. Integrating CFT with suitable forecasting methods such as ML has the potential to enhance the reliability and consistency of CFT. A previous study showed that the degree of supercooling of LiNO3·3H2O was reduced to 1°C over 800 charging and discharging cycles, with less than 6% degradation in energy storage capacity [89]. Successful implementation of CFT techniques requires forecasting models that provide accurate control of the melting process and robust predictions across different operating conditions. Hence, the history of PCM charging and discharging needs to be incorporated.

## *A Review on Phase Change Materials for Sustainability Applications by Leveraging Machine… DOI: http://dx.doi.org/10.5772/intechopen.114380*

One potential approach for implementing CFT involves achieving a specific melting fraction of the PCM, such as 90% melt fraction, with a high degree of accuracy and precision. For this technique to succeed, it is crucial to predict in advance when a given mass of PCM will reach a target melt fraction, such as 90 or 95%, at which point the melting process should cease and solidification should commence. The prediction of the time required to reach the target melt fraction must be robust to variations in environmental conditions, especially across multiple cycles of melting and solidification. Analytical and numerical models, which rely on energy and enthalpy balance approaches, often prove inadequate for providing reliable forecasts: they can be overly sensitive to minor fluctuations in environmental conditions, slight variations in process parameters, small deviations in experimental procedures, and measurement uncertainties and experimental errors.

In such situations, ML algorithms prove to be well-suited for the precise prediction of melt fractions, a task that is often challenging for theoretical models due to their inherent complexity and sensitivity to transients. Theoretical models struggle to fully capture the system's behavior and predict the coupled and nonlinear thermal-hydraulic dynamics at the system level. ML techniques, on the other hand, rely on analyzing statistical variations within experimental data, especially during transient system states. The effectiveness of ML can be significantly enhanced when a substantial amount of "training data" is accessible for the algorithms to learn from [32, 90].
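The data-driven approach can be illustrated with a toy regressor for the time-to-target-melt-fraction problem. The sketch below uses a least-squares fit as a crude stand-in for the neural-network models discussed later; the constant-power melting model, property values, and noise levels are all invented assumptions, not data from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(1)

def time_to_melt_fraction(power_kw, target=0.9, latent_kj_kg=200.0, mass_kg=1.0):
    """Idealized constant-power melting: time (s) to reach `target` melt fraction."""
    return target * mass_kg * latent_kj_kg / power_kw

# Toy dataset: feature = inverse heating power (noisy sensor reading),
# label = measured time to 90 % melt fraction (with process noise)
powers = rng.uniform(0.5, 2.0, 100)
noisy_powers = powers + 0.02 * rng.standard_normal(100)
X = np.column_stack([1.0 / noisy_powers, np.ones(100)])  # bias column
y = time_to_melt_fraction(powers) + 1.0 * rng.standard_normal(100)

# Least-squares fit, then predict the melt time at 1 kW
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
t_pred = float(np.array([1.0, 1.0]) @ coef)
```

A controller would stop heating once the predicted time elapses, which is the cessation point the CFT scheme needs.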

#### **5.2 Artificial neural network principles as the basis of ML**

The concept of artificial neural networks (ANNs) draws inspiration from information processing in biological neural systems, such as the human brain [91]. Unlike structured programming methods, which rely on explicit instructions, ANNs operate by detecting patterns and relationships in data, learning from prior experience or training data. This approach to forecasting allows ANNs to discern connections between parameters without requiring an in-depth understanding of the underlying system [92]. One common implementation of an ANN is the fully connected multilayer perceptron (MLP), consisting of nodes, often called "neurons," organized in sequential layers: input, hidden, and output. Neurons in adjacent layers are linked by weighted feedforward connections; the input layer has one neuron per input parameter, and the output layer's neuron count is determined by the number of output parameters. The hidden layers, positioned between the input and output layers, perform the intermediate computations that allow the network to solve problems, mimicking the functioning of the brain. A schematic of the MLP network is shown in **Figure 7**.

Depending on the length of the output vector, the output layer may have more than one node. The middle layers are known as hidden layers, and each layer is connected to the previous layer by connectors. A single node is characterized by two entities, (1) a bias, b, and (2) an activation function, f, while each connector carries a weight, w. In each neuron, the activation function acts on the input $a_n$ received from the nodes in the preceding layer. The output value $a_{n+1}$ from one layer then serves as the input for the subsequent layer. This sequential relationship can be expressed mathematically as shown in the following equation.

$$\mathbf{a}_{n+1} = f\left(\mathbf{w}^T \mathbf{a}_n + \mathbf{b}\right) \tag{1}$$
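Equation (1) amounts to a few lines of NumPy. In the sketch below, the layer sizes, random weights, and choice of tanh activation are arbitrary illustrations:

```python
import numpy as np

def layer_forward(a_n, W, b, f=np.tanh):
    """One MLP layer, Eq. (1): a_{n+1} = f(W^T a_n + b)."""
    return f(W.T @ a_n + b)

rng = np.random.default_rng(0)
a0 = rng.standard_normal(3)        # input vector, 3 features
W1 = rng.standard_normal((3, 4))   # weights: 3 inputs -> 4 hidden neurons
b1 = np.zeros(4)                   # hidden-layer biases
W2 = rng.standard_normal((4, 1))   # weights: 4 hidden -> 1 output neuron
b2 = np.zeros(1)

hidden = layer_forward(a0, W1, b1)                     # hidden activations
output = layer_forward(hidden, W2, b2, f=lambda x: x)  # linear output node
```

Chaining `layer_forward` calls in this way is exactly the sequential relationship of Eq. (1): each layer's output becomes the next layer's input.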

The development of ANN models includes three major stages including data training, result validation, and efficacy testing [32]. Initially, during the data training phase, weights and biases are initialized randomly, and subsequently, outputs are generated based on the training data. Following this, in the result validation stage,

**Figure 7.** *Schematic of the MLP network [87].*

the produced output is compared with the actual output, utilizing a cost function to compute the extent of disparity or error between them. The specific cost function employed during the result validation phase is the sum of squared error (SSE), as represented in the following equation.

$$\text{SSE} = \sum_{i=1}^{N} \left( p_i - a_i \right)^2 \tag{2}$$

Here, N is the number of output values, $p_i$ is the value predicted by the ANN, and $a_i$ is the actual value. The SSE error is propagated backward from the output layer to the input layer, implementing a gradient descent algorithm. This computational method iteratively adjusts the biases and weights to minimize the error between the predicted and actual values over multiple numerical iterations. This feedback mechanism continues until the desired level of error reduction is achieved. In the final stage, the efficacy of the ANN model is evaluated on a separate test dataset that was not used during the data training and result validation stages.
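The training stage described above can be sketched end-to-end in NumPy. The toy target function, layer sizes, learning rate, and iteration count below are arbitrary illustrations, not values from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, (200, 1))
Y = np.sin(X)

# One hidden layer of 16 tanh neurons, linear output (cf. Figure 7)
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr, n = 0.01, len(X)

sse0 = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).sum())
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    P = H @ W2 + b2                    # predictions p_i
    err = P - Y                        # p_i - a_i
    sse = float((err ** 2).sum())      # Eq. (2)
    # Backpropagate the SSE gradient and take a gradient-descent step
    dP = 2.0 * err
    dW2, db2 = H.T @ dP, dP.sum(0)
    dH = (dP @ W2.T) * (1.0 - H ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ dH, dH.sum(0)
    W2 -= lr * dW2 / n; b2 -= lr * db2 / n
    W1 -= lr * dW1 / n; b1 -= lr * db1 / n
```

After training, the SSE is far below its initial value, which is the error-reduction feedback loop the text describes; a held-out test set would then be used for the final efficacy check.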

In previous applications, ANN techniques have been employed to predict the specific heat capacity of molten salt nanofluids used in TES applications, particularly in concentrated solar power (CSP) and solar thermal power generation [93]. Another category of ANNs, known as radial basis function neural networks (RBF-NNs), is particularly well regarded for modeling material properties, including the thermophysical properties of nanofluids. RBF-NNs, first introduced by Broomhead and Lowe, also consist of three layers: the input layer, hidden layer, and output layer [94]. Their primary function is function approximation. The key distinction between an MLP model and an RBF-NN model lies in the computation process within the neurons.

In the RBF-NN model, the activation function is a radial basis function ϕ, which operates based on the Euclidean norm between the neuron's center and the input vector. A connection weight is then applied to the result to produce outputs, as represented mathematically in the following equation.


$$y(\mathbf{x}) = \sum_{i=1}^{n} w_i \, \phi\left(\lVert \mathbf{x} - \mathbf{x}_i \rVert\right) \tag{3}$$
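Equation (3) can be sketched with a Gaussian basis, φ(r) = exp(−γr²); the basis choice, γ, and the toy fitting target below are illustrative assumptions:

```python
import numpy as np

def rbf_forward(x, centers, weights, gamma=10.0):
    """Eq. (3): y(x) = sum_i w_i * phi(||x - x_i||), Gaussian phi."""
    r = np.linalg.norm(centers - x, axis=1)  # Euclidean norm to each center
    return float(weights @ np.exp(-gamma * r ** 2))

# Ten centers on [0, 1]; fit the output weights by least squares to a toy target
centers = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
Xs = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
ys = np.sin(2.0 * np.pi * Xs[:, 0])
Phi = np.exp(-10.0 * (Xs - centers.T) ** 2)  # design matrix of phi values
w, *_ = np.linalg.lstsq(Phi, ys, rcond=None)

y_hat = rbf_forward(np.array([0.25]), centers, w)  # target: sin(pi/2) = 1
```

Only the output weights are fit here, which is the computational shortcut that distinguishes RBF-NNs from MLPs: the hidden "neurons" are fixed radial functions around their centers.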

Prior research has demonstrated the viability of predicting melt fraction from temperature measurements using RBF-NN models [95]. A fully trained MLP model enables real-time prediction of the time required to achieve a predefined melt fraction (e.g., 90%) from temperature measurements at any moment during a melting cycle. This predictive capability helps to optimize the utilization of thermal energy storage capacity while eliminating the need for supercooling to initiate nucleation during solidification. Additionally, ANN-based methods help predict the material properties of PCMs and forecast their transient performance, particularly their temperature responses in battery thermal management [96]. MLP/ANN-based predictive models can therefore significantly enhance the reliability of PCM TES platforms while simultaneously improving their overall systemic and thermodynamic efficiencies.

#### **5.3 ML approaches used for examining PCM reliability**

ML techniques can be utilized to assess the reliability of PCMs to improve the efficiency of TES systems. **Table 3** presents various ML approaches and their respective purposes in this context [97, 98].

#### **5.4 Recent developments in ML approaches for PCM-based TES systems**

ML approaches offer versatile solutions adaptable to future challenges in TES systems. Ongoing advancements involve the development of sophisticated ML models capable of predicting PCM behavior with increased precision, encompassing complex phase change processes and accounting for various influencing factors. The fusion of ML with artificial intelligence (AI) is being leveraged to tackle supercooling issues by creating intelligent control systems that actively manage phase transitions. Furthermore, the evolution of ML algorithms aims to provide comprehensive reliability assessments, including the prediction of long-term PCM system behavior and early detection of potential failure modes. Advanced ML systems enable autonomous and proactive maintenance of PCM-based TES systems, detecting anomalies, scheduling maintenance tasks, and optimizing performance without human intervention. ML is also contributing to the rapid discovery and design of innovative PCM materials, enhancing thermal properties and system reliability. Additionally, ML plays a pivotal role in integrating PCM-based TES systems with smart grids and renewable energy sources, optimizing the interaction between energy storage and generation to bolster grid stability and energy efficiency. Advanced ML techniques also facilitate evaluating the environmental impact of PCM materials and components and assessing the long-term sustainability of TES solutions. Finally, real-time monitoring and control systems built on ML-based sensing enable effective management of PCM-based TES systems [99, 100].

#### **5.5 Optimizing PCM selection and configuration with ML**

#### **Table 3.**

*Machine learning approaches and their intended functions.*

The integration of ML with PCM optimization represents a significant advancement in the development of TES systems. By utilizing data science, this method addresses the complex task of selecting the ideal PCM, factoring in thermal conductivity, melting point, latent heat capacity, and material compatibility [101]. Recent strides in ML, particularly through deep learning and reinforcement learning, have demonstrated substantial success in predicting PCM thermophysical behaviors under varying conditions, thus surpassing the limitations of traditional selection methods. The process involves training ML models on extensive datasets, including PCM properties and performance metrics across different scenarios, enriched with computational fluid dynamics (CFD) and finite element analysis (FEA) simulations [102]. These models, notably convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel at detecting intricate patterns and correlations, accurately forecasting PCM responses [103]. Additionally, the deployment of optimization algorithms, such as genetic algorithms (GAs) and particle swarm optimization (PSO), alongside ML models identifies PCM combinations that optimize thermal efficiency while minimizing costs and environmental impact. This approach not only boosts the thermal effectiveness of TES systems but also aligns with sustainable design principles, marking a forward leap toward a more sustainable, energy-efficient future in thermal management solutions.
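As a toy illustration of the selection step, the sketch below uses a brute-force weighted score in place of the GA/PSO optimizers mentioned above; the candidate names, property values, and scoring weights are invented placeholders, not real material data:

```python
# Hypothetical candidate PCMs: (k [W/m.K], T_melt [degC], latent heat [kJ/kg],
# cost [$/kg]) -- every value is an illustrative placeholder, not a measurement
candidates = {
    "PCM-A": (0.2, 28.0, 180.0, 2.0),
    "PCM-B": (0.5, 35.0, 220.0, 5.0),
    "PCM-C": (0.4, 30.0, 200.0, 3.0),
}

def score(props, target_melt=30.0, weights=(1.0, 1.0, 1.0, 0.5)):
    """Higher is better: reward conductivity and latent heat; penalize
    melting-point mismatch with the application and material cost."""
    k, t_melt, latent, cost = props
    wk, wt, wl, wc = weights
    return (wk * k + wl * latent / 100.0
            - wt * abs(t_melt - target_melt) - wc * cost)

best = max(candidates, key=lambda name: score(candidates[name]))
```

A GA or PSO run replaces the exhaustive `max` with a guided search over a much larger design space (including PCM mixtures), but the objective it optimizes has exactly this multi-criterion shape.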

