### **3.8 The LSTM\_MULTV\_ED\_10 model**

This model is a multivariate version of LSTM\_UNIV\_ED\_10. It uses the stock price records of the past two weeks (i.e., 10 days) and includes all five attributes, i.e., *open*, *high*, *low*, *close*, and *volume*. Hence, the input data shape for the model is (10, 5). We use a batch size of 16 while training the model over 20 epochs. **Figure 8** depicts the architecture of the multivariate encoder-decoder LSTM model.
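A minimal Keras sketch of this architecture follows. The (10, 5) input shape, batch size, and epoch count are taken from the text; the 200-unit encoder and decoder LSTM layers, the two time-distributed dense layers (100 and 1 nodes), and the five-step output horizon are assumptions, chosen to be consistent with the parameter total of 505801 reported in the text.

```python
# Sketch of LSTM_MULTV_ED_10 in Keras. Only the (10, 5) input shape is stated
# in the text; the layer widths and the 5-step output horizon are assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

def build_lstm_multv_ed_10(n_steps_in=10, n_features=5, n_steps_out=5):
    model = Sequential([
        LSTM(200, input_shape=(n_steps_in, n_features)),  # encoder: 164800 params
        RepeatVector(n_steps_out),      # repeat the encoding for each output step
        LSTM(200, return_sequences=True),                 # decoder: 320800 params
        TimeDistributed(Dense(100, activation='relu')),   # 20100 params
        TimeDistributed(Dense(1)),                        # 101 params
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

model = build_lstm_multv_ed_10()
print(model.count_params())  # 505801
```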

### **Table 7.**

*The number of parameters in the model LSTM\_UNIV\_ED\_10.*

*Design and Analysis of Robust Deep Learning Models for Stock Price Prediction DOI: http://dx.doi.org/10.5772/intechopen.99982*

### **Figure 8.**

*The schematic architecture of the model LSTM\_MULTV\_ED\_10.*

### **Table 8.**

*The number of parameters in the model LSTM\_MULTV\_ED\_10.*

**Table 8** shows the number of parameters in the LSTM\_MULTV\_ED\_10 model. The computation of the parameters for this model is identical to that for the model LSTM\_UNIV\_ED\_50, except for the first LSTM layer. The number of parameters in the first LSTM (i.e., the encoder) layer of this model is different because the parameter count depends on the number of features in the input data. The number of parameters in the encoder LSTM layer, *lstm\_1*, of the model is computed as follows: 4 \* [(200 + 5) \* 200 + 200] = 164800. The total number of parameters for the model is found to be 505801.
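The layer-by-layer arithmetic can be verified directly. The decoder-side counts below follow the computation described for the earlier encoder-decoder models (a 200-unit decoder LSTM fed by the repeated 200-dimensional encoding, followed by dense layers of 100 and 1 nodes), which is an assumption consistent with the stated total.

```python
# LSTM layer parameters: 4 * [(units + input_dim) * units + units]
def lstm_params(units, input_dim):
    return 4 * ((units + input_dim) * units + units)

encoder = lstm_params(200, 5)      # multivariate input with 5 features
decoder = lstm_params(200, 200)    # fed by the repeated 200-dim encoding
dense_1 = 200 * 100 + 100          # time-distributed dense layer, 100 nodes
dense_2 = 100 * 1 + 1              # final dense layer, 1 node
print(encoder)                                 # 164800
print(encoder + decoder + dense_1 + dense_2)   # 505801
```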

### **3.9 The LSTM\_UNIV\_CNN\_10 model**

This model is a modified version of the LSTM\_UNIV\_ED\_10 model, in which a dedicated CNN block carries out the encoding operation. While CNNs are, in general, poor at learning from sequential data, we exploit the power of a one-dimensional CNN in extracting important features from time-series data. After the feature extraction is done, the extracted features are provided as the input to an LSTM block. The LSTM block decodes the features and makes robust forecasts of the future values in the sequence. The CNN block consists of two convolutional layers, each of which has a feature map size of 64 and a kernel size of 3. The input data shape is (10, 1), as the model uses univariate data of the target variable over the past two weeks (i.e., 10 days). The output shape of the first convolutional layer is (8, 64); the value 8 is arrived at using the computation (10 - 3 + 1), while 64 refers to the dimension of the feature space.

Similarly, the output shape of the second convolutional layer is (6, 64). A max-pooling layer follows, which halves the temporal dimension. Hence, the output data shape of the max-pooling layer is (3, 64). The max-pooling layer's output is flattened into a one-dimensional array of size 3 \* 64 = 192. The flattened vector is fed into the decoder LSTM block consisting of 200 nodes. The decoder architecture remains identical to the decoder block of the LSTM\_UNIV\_ED\_10 model. We train the model over 20 epochs with a batch size of 16. The structure and the data flow of the model are shown in **Figure 9**.
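The pipeline above can be sketched in Keras as follows. The input shape, filter counts, kernel size, pooling factor, and 200-unit decoder follow the text; the five-step output horizon is an assumption.

```python
# Sketch of LSTM_UNIV_CNN_10 in Keras: a 1D CNN encoder followed by an
# LSTM decoder. The 5-step output horizon is an assumption.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, MaxPooling1D, Flatten,
                                     RepeatVector, LSTM, TimeDistributed, Dense)

def build_lstm_univ_cnn_10(n_steps_in=10, n_steps_out=5):
    model = Sequential([
        Conv1D(64, 3, activation='relu',
               input_shape=(n_steps_in, 1)),   # output shape (8, 64)
        Conv1D(64, 3, activation='relu'),      # output shape (6, 64)
        MaxPooling1D(pool_size=2),             # output shape (3, 64)
        Flatten(),                             # 3 * 64 = 192 features
        RepeatVector(n_steps_out),             # one copy per output step
        LSTM(200, return_sequences=True),      # decoder
        TimeDistributed(Dense(100, activation='relu')),
        TimeDistributed(Dense(1)),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

print(build_lstm_univ_cnn_10().count_params())  # 347209
```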

### **Figure 9.**

*The schematic architecture of the model LSTM\_UNIV\_CNN\_10.*

### **Table 9.**

*The number of parameters in the model LSTM\_UNIV\_CNN\_10.*

**Table 9** presents the computation of the number of parameters in the model LSTM\_UNIV\_CNN\_10. The input layer, the max-pooling layer, the flatten operation, and the repeat vector layer do not involve any learning, and hence they have no parameters. The number of parameters in the first convolutional layer is computed as follows: (3 \* 1 + 1) \* 64 = 256. For the second convolutional layer, the number of parameters is computed as: (3 \* 64 + 1) \* 64 = 12352. The number of parameters for the LSTM layer is computed as follows: 4 \* [(200 + 192) \* 200 + 200] = 314400. For the first dense layer, the number of parameters is computed as: (200 \* 100 + 100) = 20100. Finally, the number of parameters in the second dense layer is computed as: (100 \* 1 + 1) = 101. The total number of parameters in the model is found to be 347209.
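These counts can be checked with a few lines of arithmetic; the convolutional counts use the usual (kernel \* input\_channels + 1) \* filters rule, and the LSTM count uses the same formula as in the preceding sections.

```python
def lstm_params(units, input_dim):
    return 4 * ((units + input_dim) * units + units)

conv_1 = (3 * 1 + 1) * 64     # kernel 3, 1 input channel, 64 filters
conv_2 = (3 * 64 + 1) * 64    # kernel 3, 64 input channels, 64 filters
lstm = lstm_params(200, 192)  # decoder fed by the repeated 192-dim vector
dense_1 = 200 * 100 + 100
dense_2 = 100 * 1 + 1
print(conv_1, conv_2, lstm)                        # 256 12352 314400
print(conv_1 + conv_2 + lstm + dense_1 + dense_2)  # 347209
```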

### **3.10 The LSTM\_UNIV\_CONV\_10 model**

This model is a modification of the LSTM\_UNIV\_CNN\_10 model, in which the convolution operations of the CNN encoder and the decoding operations of the LSTM sub-module are integrated for every step of the output sequence. This encoder-decoder model is also known as the Convolutional-LSTM model [58]. The integrated model reads sequential input data, performs convolution operations on the data without any explicit CNN block, and decodes the extracted features using a dedicated LSTM block. The Keras framework contains a class, *ConvLSTM2D*, which is capable of performing two-dimensional convolution operations [58]. This two-dimensional ConvLSTM class is tweaked here to process one-dimensional univariate data. The architecture of the model LSTM\_UNIV\_CONV\_10 is represented in **Figure 10**.
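A sketch of one common way to apply ConvLSTM2D to one-dimensional data: the 10-step univariate window is reshaped into two subsequences of five steps each, with a height of 1, so the two-dimensional kernel (1, 3) effectively convolves along a single dimension. The subsequence split and the five-step output horizon are assumptions, chosen to be consistent with the 192-dimensional flattened feature vector and the parameter counts given in the text.

```python
# Sketch of LSTM_UNIV_CONV_10 in Keras. The 2-subsequence reshape and the
# 5-step output horizon are assumptions consistent with the text's counts.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (ConvLSTM2D, Flatten, RepeatVector,
                                     LSTM, TimeDistributed, Dense)

def build_lstm_univ_conv_10(n_seq=2, n_steps=5, n_steps_out=5):
    model = Sequential([
        # Each sample: 2 subsequences x (1 row x 5 steps) x 1 channel.
        ConvLSTM2D(64, kernel_size=(1, 3), activation='relu',
                   input_shape=(n_seq, 1, n_steps, 1)),  # output (1, 3, 64)
        Flatten(),                                       # 1 * 3 * 64 = 192
        RepeatVector(n_steps_out),
        LSTM(200, return_sequences=True),                # decoder
        TimeDistributed(Dense(100, activation='relu')),
        TimeDistributed(Dense(1)),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

print(build_lstm_univ_conv_10().count_params())  # 384777
```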

### **Figure 10.**

*The schematic architecture of the model LSTM\_UNIV\_CONV\_10.*

### **Table 10.**

*Computation of the number of parameters in the model LSTM\_UNIV\_CONV\_10.*

The computation of the number of parameters for the LSTM\_UNIV\_CONV\_10 model is shown in **Table 10**. While the input layer, the flatten operation, and the repeat vector layer do not involve any learning, the other layers include trainable parameters. The number of parameters in the convolutional LSTM layer (i.e., *conv\_lst\_m2d*) is computed as follows: 4 \* *x* \* [*k* \* (1 + *x*) + 1] = 4 \* 64 \* [3 \* (1 + 64) + 1] = 50176, where *x* is the number of filters and *k* is the kernel size. The number of parameters in the LSTM layer is computed as follows: 4 \* [(200 + 192) \* 200 + 200] = 314400. The number of parameters in the first time-distributed dense layer is computed as: (200 \* 100 + 100) = 20100. The computation for the final dense layer is as follows: (100 \* 1 + 1) = 101. The total number of parameters involved in the model LSTM\_UNIV\_CONV\_10 is 384777.
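As before, the arithmetic can be checked directly; the first function encodes the 4 \* *x* \* [*k* \* (channels + *x*) + 1] rule used in the text, with one input channel.

```python
def conv_lstm_params(filters, kernel, channels):
    # 4 * x * [k * (channels + x) + 1], with x filters and kernel size k
    return 4 * filters * (kernel * (channels + filters) + 1)

def lstm_params(units, input_dim):
    return 4 * ((units + input_dim) * units + units)

conv_lstm = conv_lstm_params(64, 3, 1)   # 50176
lstm = lstm_params(200, 192)             # 314400
dense_1 = 200 * 100 + 100                # 20100
dense_2 = 100 * 1 + 1                    # 101
print(conv_lstm + lstm + dense_1 + dense_2)  # 384777
```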
