182 Advances in Wavelet Theory and Their Applications in Engineering, Physics and Technology

Fig. 3. Schematic representation of a signal being decomposed at multiple levels.

**2.3.2 Synthesis or reconstruction**

Since the multiresolution analysis process is iterative, it can theoretically be continued indefinitely. In practice, the decomposition can proceed only until a single detail, consisting of one sample, remains. The maximum number of decomposition levels for a signal having *N* samples is therefore given by log<sub>2</sub> *N*.

The synthesis, or reconstruction, process obtains the original signal from the wavelet coefficients generated by the analysis (decomposition) process. While the analysis process involves filtering and sub-sampling, the synthesis process performs the reverse sequence: over-sampling and filtering. The filters used in the synthesis process are called reconstruction filters, *g*(*k*) being the low-pass filter and *h*(*k*) the high-pass filter. Figure 4 shows the reconstruction scheme from a single decomposition stage.

Fig. 4. Reconstruction scheme from a single decomposition stage.

It is observed from Figure 4 that, to retrieve the original signal, it is necessary to reconstruct both details and approximations. Details are obtained by over-sampling the *cD* coefficients and then filtering with *h*(*k*); approximations are obtained by over-sampling the *cA* coefficients and then filtering with *g*(*k*). The original signal is then obtained by:

*S* = *A* + *D* (6)

The scheme presented in Figure 4 can be extended to a multi-level decomposition.

**3. Probabilistic neural network**

The structure of a Probabilistic Neural Network (PNN) is similar to that of a feed-forward network. The main difference is that the activation function is no longer the sigmoid; it is replaced by a class of functions that includes, in particular, the exponential function. The main advantages of the PNN are that it requires only one step for training and that its decision surfaces approach the Bayes-optimal decision boundaries as the number of training samples increases. Furthermore, the shape of the decision surface can be as complex as necessary, or as simple as desired (Specht, 1990).
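A minimal sketch of this idea, using a Gaussian kernel in the role of the exponential activation. The function name `pnn_classify`, the smoothing parameter `sigma`, and the toy two-class data are illustrative assumptions, not taken from the text:

```python
import math

def pnn_classify(x, patterns_by_class, sigma=0.5):
    """Return the class whose stored training patterns give the highest
    average kernel response for the input vector x (PNN-style decision)."""
    def kernel(a, b):
        # Gaussian (exponential) activation on the squared distance;
        # sigma is an assumed smoothing parameter.
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2.0 * sigma ** 2))
    scores = {
        label: sum(kernel(x, p) for p in patterns) / len(patterns)
        for label, patterns in patterns_by_class.items()
    }
    return max(scores, key=scores.get)

# "Training" is a single step: every sample is simply stored per class.
training_set = {
    "class_1": [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1)],
    "class_2": [(2.0, 2.0), (1.9, 2.1), (2.2, 1.8)],
}

label = pnn_classify((0.05, 0.1), training_set)
```

Note that there is no iterative weight update: storing the samples *is* the one-step training, which is exactly the trade-off discussed next.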

The main drawback of the PNN is that all samples used for the training process must be stored and used in the classification of new patterns. However, given the availability of high-density memories, storage of the training samples should not be a problem. In addition, the PNN's processing speed when classifying new patterns is quite satisfactory, and can even be several times faster than back-propagation, as reported by Maloney et al. (1989).
