#### **5.1 Max-pooling frequency response**

The first aspect analyzed was the max-pooling frequency response. As explained earlier, the max-pooling operation was reduced to a 1D signal to make the results easier to interpret, and multiple random signals were tested in search of a pattern. **Figure 9** shows some of the frequency responses obtained during these tests. Response (a) is irregular, with peaks and valleys that follow no pattern, while response (b) resembles a low-pass filter, (c) a multi-band-pass filter, and (d) a band-stop filter. However, most of the values are above 0 dB, which means that max-pooling acts more like an amplifier than a filter. In summary, the obtained responses show a dynamic behavior that depends on the input signal: different signals trigger different frequency responses.

#### **Figure 9.**

*Different max-pooling frequency responses behaving like (a) an irregular filter/amplifier, (b) a low-pass filter/amplifier, (c) a multiband filter/amplifier, and (d) a stopband filter/amplifier.*

Additionally, although max-pooling minimizes some frequency values, its main action is to maximize certain signal harmonics. This behavior is consistent, but the response differs for every signal; there are, however, signals that produce poor responses, and that is a disadvantage.
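The chapter does not list the measurement code, so the following is a minimal NumPy sketch of one plausible way to reproduce this kind of test: apply non-overlapping 1D max-pooling to a random signal, upsample the result back to the input length with a zero-order hold so the spectra are directly comparable, and take the ratio of output to input magnitude spectra in dB. The window size, signal length, and zero-order-hold upsampling are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def max_pool_1d(x, size=2):
    """Non-overlapping 1D max-pooling with window `size`."""
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

def frequency_response_db(x, size=2):
    """Estimate an input-dependent 'frequency response' for max-pooling:
    pool the signal, upsample it back with a zero-order hold, and compare
    output and input magnitude spectra in dB."""
    x = x[: len(x) // size * size]              # trim so lengths match
    y = np.repeat(max_pool_1d(x, size), size)   # pool, then zero-order hold
    X = np.abs(np.fft.rfft(x))
    Y = np.abs(np.fft.rfft(y))
    eps = 1e-12                                  # avoid division by zero
    return 20 * np.log10((Y + eps) / (X + eps))

rng = np.random.default_rng(0)
for trial in range(4):   # several random signals yield several responses
    response = frequency_response_db(rng.standard_normal(1024))
    print(f"trial {trial}: mean {response.mean():.2f} dB, "
          f"max {response.max():.2f} dB")
```

Because max-pooling is nonlinear, each random input yields a different curve, which matches the dynamic, signal-dependent behavior described above; values above 0 dB correspond to the amplifier-like action.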

#### **5.2 Models accuracy**

The test results are concentrated in **Tables 2** and **3**: the standard pooling methods appear in the first table and the wavelet methods in the second. Both tables are organized with one method per column and are divided into two sections according to the training process: the first section groups the 20-iteration experiments and the second the 40-iteration experiments.

The first part of the results is shown in **Table 2**. The most relevant, though expected, observation is that max-pooling has the highest accuracy values. Among the standard methods, max-pooling is the state of the art, as previous studies suggest [22].

#### **Table 2.**

*Accuracy results for conventional pooling methods.*

#### **Table 3.**

*Accuracy results for the wavelet pooling methods.*

On the other hand, the mix-pooling method by channels has the worst result in the table. However, when the mixing is done per 2×2 region, it reaches the second-highest accuracy, just after max-pooling. This points out that processing small sections of an image preserves details better than taking the image as a whole.
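The chapter does not include the mix-pooling implementation; the sketch below illustrates the two mixing granularities under the common interpretation that mix-pooling randomly selects between max- and average-pooling. The function name, the 0.5 selection probability, and the (C, H, W) layout are assumptions for illustration.

```python
import numpy as np

def mixed_pool_2x2(x, per="region", rng=None):
    """Mixed max/average pooling over non-overlapping 2x2 windows.

    x   : array of shape (C, H, W) with even H and W.
    per : "channel" draws one max-vs-average choice per channel;
          "region" draws one choice per 2x2 window (the finer-grained
          variant discussed in the text).
    """
    rng = rng or np.random.default_rng()
    C, H, W = x.shape
    win = (x.reshape(C, H // 2, 2, W // 2, 2)
             .transpose(0, 1, 3, 2, 4)
             .reshape(C, H // 2, W // 2, 4))
    if per == "channel":
        choose_max = rng.random((C, 1, 1)) < 0.5    # one coin flip per channel
    else:
        choose_max = rng.random((C, H // 2, W // 2)) < 0.5  # per 2x2 region
    return np.where(choose_max, win.max(-1), win.mean(-1))
```

The per-region variant makes an independent decision for every 2×2 window, which is consistent with the observation that processing small sections preserves detail better.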

In **Table 3**, each section uses three wavelet functions with different vanishing moments: Haar, Daubechies 4, and Daubechies 6. There is one wavelet function per row, and the proposed model is indicated by an asterisk.

From this second part of the results, an improvement from the standard to the wavelet methods can be noticed. Even max-pooling improves, going from 99.2% as a stand-alone version to 99.4% as a hybrid method. If we observe the wavelet methods without the max-pooling combination, the proposed method using random selection by 2×2 region is the most consistent and reaches the second-highest accuracy (99.3%). In a previous study [22], this dataset gave the best results for max-pooling methods; therefore, it is expected that the two hybrid versions with max-pooling reach the highest accuracy (99.4%). As explained before, this may change drastically with different datasets, since max-pooling is signal-dependent; the proposed method, in contrast, reaches only the second-highest value, but it does so consistently, invariant to signal effects.
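The proposed method is defined earlier in the chapter; purely as an illustration of the idea, the sketch below shows one plausible reading of "random selection by 2×2 region" using PyWavelets. A single-level 2D DWT produces one approximation and three detail coefficients per 2×2 input region, and one of the four is selected at random for each output position. The subband-selection rule and the uniform probabilities are assumptions, not the authors' exact formulation.

```python
import numpy as np
import pywt

def random_wavelet_pool(x, wavelet="haar", rng=None):
    """Illustrative wavelet pooling with random selection per 2x2 region:
    take a single-level 2D DWT and, at every output position, pick the
    coefficient of one of the four subbands at random.

    x : 2D array (one channel) with even height and width.
    """
    rng = rng or np.random.default_rng()
    cA, (cH, cV, cD) = pywt.dwt2(x, wavelet)   # each of shape (H/2, W/2)
    stack = np.stack([cA, cH, cV, cD])         # (4, H/2, W/2)
    pick = rng.integers(0, 4, size=cA.shape)   # independent draw per region
    return np.take_along_axis(stack, pick[None], axis=0)[0]
```

Other wavelets from the tables can be selected through the `wavelet` argument (PyWavelets names Daubechies filters 'dbN' by their number of vanishing moments).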

It is also interesting that the wavelet model with the lowest accuracy is the lifting scheme with all the coefficients fixed; even most of the standard methods get better results than this approach. In this model, all the filters are always present and there is no changing effect at all: the behavior is fully predictable, so there is no non-linearity that helps prevent overfitting or that selects different attributes in every iteration.
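For contrast, a fixed-coefficient lifting step is fully deterministic. The classic Haar lifting step below is shown only as an illustration of this point; the chapter's lifting model may use different predict/update filters.

```python
import numpy as np

def haar_lifting_step(x):
    """Classic Haar lifting with fixed coefficients: split into even/odd
    samples, predict the odd half from the even half, then update the
    even half. x is a 1D array of even length. The filters never change,
    so the output is the same deterministic function of the input on
    every forward pass."""
    even, odd = x[0::2], x[1::2]
    detail = odd - even           # predict step: fixed predictor
    approx = even + detail / 2    # update step: fixed updater
    return approx, detail
```

With no random coefficient selection, every forward pass extracts the same attributes, which is consistent with the lower accuracy reported for this variant.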

