*Dynamic Data Assimilation - Beating the Uncertainties*

the classrooms, labs, library and entrance gates. This overall improved the students' response towards academics [30]. Face recognition technology is based on deep CNN models. The process can be performed using both supervised and unsupervised approaches, but supervised methodologies are generally preferred. Face recognition takes its input from a video or an image, converts it to greyscale, and performs detection by comparing the greyscale features, one by one, against the pixel values. CNN models give higher accuracy than earlier techniques, overcoming problems such as light intensity and expressions with the help of models trained on more training samples [31–33].

**7.2 Recurrent neural network (RNN)**

RNNs are used for tasks that require consecutive, sequential inputs. Initially, RNNs were trained using backpropagation. An RNN processes one element of the input at a time, in sequence, keeping a state vector in its hidden nodes that implicitly contains information about all the past elements of the sequence. RNNs are dynamic and fairly powerful systems, but during training the backpropagated gradients either shrink or grow at each time step, so over many steps they may vanish or explode. Unrolled, an RNN is a deep feedforward network in which all layers share the same weights. RNNs struggle to store information for long periods, a deficiency known as the problem of long-term dependencies. To address this shortcoming, an approach with explicit memory, known as long short-term memory (LSTM), has been introduced. In this method, particular hidden nodes are used to store input information for much longer times. LSTM is well recognised for its high-quality performance in speech recognition systems [1, 27, 28].

Apple's Siri, Amazon's Alexa, Microsoft's Cortana, and Google's Assistant are the most popular voice-recognition tools; they are used for making phone calls, playing reminders and alarms, providing driving directions, and much more. These speech recognizers are built on LSTM-RNN architectures, which give the models the ability to deal with long-distance patterns and make them suitable for learning long-span relations. The models are trained end-to-end, and the output is attained [34, 35]. A few other applications of RNN models are keyphrase recognition, meteorological data updating, and speech-to-text [35–38]. The Massachusetts Institute of Technology (MIT) performed an interesting simulated study on self-driving cars, whose framework was also developed on a deep reinforced model [39].

**8. Examples of ANN model using Python**

**8.1 Supervised ANN model**

A simple ANN model was developed using Python. The model was designed using a supervised CNN methodology for image classification. Images of apples and oranges were collected for training and validating the model. For training, 20 images were collected for each fruit, making a total of 40 images; for validation, 10 more images were collected for each, making a total of 20. The data for the supervised process was arranged with a separate folder for each stage, i.e., training and validation. In a folder named 'Training', the images of each fruit were placed in separate sub-folders bearing their name titles, i.e., 'Apple' and 'Orange', and the validation folder was arranged in the same way.

*8.1.1 Number of epochs per run*

The effect of increasing the number of epochs, for each run, is shown in **Table 2**. The effectiveness of the output is measured by the % accuracy and % loss for different numbers of epochs. The number of hidden layers was kept constant for each run.

**Table 2** clearly shows that increasing the number of epochs refines the output, increasing the accuracy and decreasing the data loss. The model gave a correct prediction of the fruit classification in all the runs.
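The chapter's own CNN code is given in Appendix A and is not reproduced here. As a minimal, hedged illustration of the epochs experiment, the sketch below trains a tiny one-layer "network" (logistic regression, NumPy only) on synthetic two-class data standing in for the apple/orange features, and evaluates it after an increasing number of epochs; the data, learning rate, and epoch counts are illustrative assumptions, not the chapter's values.

```python
import numpy as np

# Minimal illustration (not the chapter's CNN): accuracy rises and loss
# falls as the number of training epochs increases, as in Table 2.
rng = np.random.default_rng(0)

# Synthetic stand-in for the two fruit classes: two Gaussian clusters.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train(epochs, lr=0.1):
    """Full-batch gradient descent on a sigmoid unit for `epochs` passes."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
        w -= lr * X.T @ (p - y) / len(y)         # gradient descent step
        b -= lr * np.mean(p - y)
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    acc = np.mean((p > 0.5) == y)
    return acc, loss

for epochs in (1, 5, 20, 100):
    acc, loss = train(epochs)
    print(f"epochs={epochs:3d}  accuracy={acc:.2%}  loss={loss:.3f}")
```

Running the loop shows the same qualitative trend the table reports: more epochs, higher accuracy, lower loss (the toy numbers will not match Table 2).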

*8.1.2 Number of hidden layers*

The effect of increasing the number of hidden layers, for each run, is shown in **Table 3**. The effectiveness of the output is measured by the % accuracy and % loss for various numbers of hidden layers. The number of epochs was kept constant for each run.

**Table 3** clearly shows that increasing the number of hidden layers improves the model's effectiveness, increasing the accuracy and decreasing the data loss. The model gave one wrong prediction when there were 2 hidden layers; with more hidden layers, the model began to predict correctly.
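As a hedged sketch of the hidden-layers experiment (again NumPy only, not the chapter's CNN), the snippet below trains a small fully connected network with a varying number of hidden layers on a nonlinearly separable toy problem; the dataset, layer width, and learning rate are illustrative assumptions, and the toy accuracies will not reproduce Table 3.

```python
import numpy as np

# Minimal illustration: the same training loop run with 1, 2 and 3 hidden
# layers, mirroring the structure of the Table 3 experiment.
rng = np.random.default_rng(1)

# Toy data: points inside a circle are class 1, outside are class 0.
X = rng.uniform(-1, 1, (400, 2))
y = (np.hypot(X[:, 0], X[:, 1]) < 0.6).astype(float)

def train(n_hidden_layers, width=8, epochs=800, lr=0.5):
    sizes = [2] + [width] * n_hidden_layers + [1]
    Ws = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]
    bs = [np.zeros(b) for b in sizes[1:]]
    for _ in range(epochs):
        # Forward pass: tanh hidden units, sigmoid output.
        acts = [X]
        for W, b in zip(Ws[:-1], bs[:-1]):
            acts.append(np.tanh(acts[-1] @ W + b))
        z = (acts[-1] @ Ws[-1] + bs[-1]).ravel()
        out = 1.0 / (1.0 + np.exp(-z))
        # Backward pass: cross-entropy gradient, propagated layer by layer.
        delta = ((out - y) / len(y))[:, None]
        for i in range(len(Ws) - 1, -1, -1):
            gW = acts[i].T @ delta
            gb = delta.sum(axis=0)
            if i > 0:
                delta = (delta @ Ws[i].T) * (1 - acts[i] ** 2)  # tanh'
            Ws[i] -= lr * gW
            bs[i] -= lr * gb
    return np.mean((out > 0.5) == y)

results = {}
for layers in (1, 2, 3):
    results[layers] = train(layers)
    print(f"hidden layers={layers}  accuracy={results[layers]:.2%}")
```

The design choice mirrors the chapter's test: everything is held constant except the depth, so any change in accuracy is attributable to the number of hidden layers.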

*8.1.3 Overall summary*

The output window from the model is shown in **Figure 13**. It can be seen that the model successfully predicted the correct output ('Apple'). The accuracy of the model increased with each epoch, from almost 37 to 89%, while the data loss decreased correspondingly. The program code for this model is given in Appendix A.


**Table 2.** *Output summary for increasing number of epochs.*


**Table 3.** *Output summary for increasing number of hidden layers.*


**Acknowledgements**

We would like to acknowledge UTP.

**Conflict of interest**

There is no conflict of interest.

**9. Conclusions**

The operation of an ANN model simulates the human brain, and ANNs fall under the knowledge domain of AI. The popularity of ANN models increased in the early 1990s, and many studies have been carried out since. The basic ANN model has three main layers, and the main processing is performed in the middle layer, known as the hidden layer. The output of an ANN model depends strongly on the characteristics and functions carried by its hidden layer. Of the feedforward and feedback networks, the latter propagates the error back until it becomes minimal, for more effective results. ANN models can perform supervised as well as unsupervised learning, depending on the task. DL algorithms are popular among researchers because of their effective outputs on large data. CNN and RNN are two renowned deep networks, and they have been used for various applications. The output accuracy of an ANN model depends strongly on the number of hidden layers and the number of epochs.

In this era of automation, AI plays an important role, and most daily-use applications are based on the architecture of ANN models. ANN technology, combined with other advanced AI knowledge areas, is making life easier in almost every domain. This evolution of DNN models has led to the creation of Sophia the Robot (Hanson Robotics); the journey is on-going.

**Figure 13.** *Output summary of CNN model.*

**8.2 Unsupervised ANN model**

A simple unsupervised ANN model was developed in Python for the colour quantization of an image, adopting the Self-Organising Map (SOM) methodology. SOMs are primarily used for feature detection.

Two different images of houses were selected for colour quantization by the SOM model. Separate tests were conducted with each image, keeping the model conditions the same. In each test, the developed SOM model reduced the image to a smaller set of distinct colours and produced a new image from them: the model learned the colours in the image and then used the same colours to reconstruct it. The pictorial views of each output are shown in **Figure 14**.
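The chapter's SOM implementation is given in Appendix B and is not reproduced here. As a minimal, hedged sketch of SOM colour quantization, the snippet below trains a small one-dimensional SOM (NumPy only) on a synthetic RGB "image" and then replaces each pixel with its best-matching node's colour; the grid size, learning-rate schedule, and synthetic image are illustrative assumptions.

```python
import numpy as np

# Minimal SOM colour-quantization sketch: a row of nodes learns the
# dominant colours of an image, then every pixel is replaced by the
# colour of its best-matching node, reducing the distinct colours.
rng = np.random.default_rng(2)

# Synthetic 32x32 RGB "image" drawn from a handful of base colours.
base = np.array([[0.9, 0.1, 0.1], [0.1, 0.8, 0.2],
                 [0.2, 0.3, 0.9], [0.9, 0.9, 0.2]])
pixels = base[rng.integers(0, 4, 32 * 32)] + rng.normal(0, 0.05, (32 * 32, 3))

n_nodes = 8                                        # colours after quantization
weights = rng.uniform(0, 1, (n_nodes, 3))          # node colours (the "map")

for t in range(2000):
    p = pixels[rng.integers(len(pixels))]          # random training pixel
    d = np.linalg.norm(weights - p, axis=1)
    bmu = np.argmin(d)                             # best-matching unit
    lr = 0.5 * (1 - t / 2000)                      # decaying learning rate
    sigma = max(n_nodes / 2 * (1 - t / 2000), 0.5) # decaying neighbourhood
    dist = np.abs(np.arange(n_nodes) - bmu)        # 1-D grid distance
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))    # neighbourhood function
    weights += lr * h[:, None] * (p - weights)     # pull nodes towards pixel

# Quantize: map every pixel to its best-matching node's colour.
bmus = np.argmin(np.linalg.norm(pixels[:, None] - weights, axis=2), axis=1)
quantized = weights[bmus]
print("distinct colours after quantization:", len(np.unique(bmus)))
```

Reshaping `quantized` back to the image dimensions would give the reconstructed image described above, built entirely from the colours the map has learned.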

**Figure 14.** *Pictorial output of SOM model.*

*Data Processing Using Artificial Neural Networks DOI: http://dx.doi.org/10.5772/intechopen.91935*

**Figure 15.** *Output window of SOM model.*

*8.2.1 Overall summary*

It can be seen in the output results that for each test the model detected the distinct colours and using the same colours it reproduced that image. The output window from the model is shown in **Figure 15**. The program code for this model is given in Appendix B.
