**Image Processing for Spider Classification**

Jaime R. Ticay-Rivas, Marcos del Pozo-Baños, Miguel A. Gutiérrez-Ramos, William G. Eberhard, Carlos M. Travieso and Alonso B. Jesús

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/50341

#### **1. Introduction**


As defined by UNESCO [22]: "Biological diversity or biodiversity is defined as the diversity of all living forms at different levels of complexity: genes, species, ecosystems and even landscapes and seascapes. Biodiversity is shaped by climatic conditions, the properties of soils and sediments, evolutionary processes and human action. Biodiversity can be greatly enhanced by human activities; however, it can also be adversely impacted by such activities due to unsustainable use or by more profound causes linked to our development models."

It is clear that climate change and biodiversity are interconnected. Biodiversity is impacted by climate change, but it also makes an important contribution to both climate-change mitigation and adaptation through the ecosystem services that it supports. Therefore, conserving and sustainably managing biodiversity is crucial for addressing climate change.

Biodiversity conservation is an urgent environmental issue that demands special attention. It is as critical to humans as it is to the other life forms on Earth. The countries of the world acknowledge that species research is crucial in order to obtain and develop the right methods and tools to understand and protect biodiversity. Thus, biodiversity conservation has become a top priority for researchers [1].

In this sense, a big effort is being carried out by the scientific community in order to study the huge biodiversity present on the planet. Sadly, spiders have been one of the most neglected groups in conservation biology [2]. These arachnids are plentiful and ecologically crucial in almost every terrestrial and semi-terrestrial habitat [3] [4] [5]. Moreover, they present a series of extraordinary qualities, such as the ability to react to environmental changes and anthropogenic impacts [5] [6].

Several works have studied spiders' behavior. Some of them focus on the way spiders build their webs as a source of information for species identification [7] [8]. Artificial intelligence systems have proven to be of incalculable value for these studies: [9] proposed a model of spider behavior which simulates how a specific spider species builds its web, and [10] recorded how spiders build their webs in a controlled scenario for further spatiotemporal analysis.

©2012 Ticay-Rivas et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Because spider webs carry an incredible amount of information, this chapter presents a study of their use for the automatic classification of spider species. In particular, computer vision and artificial intelligence techniques are used for this aim. Moreover, the information contained in the web alone proves to be enough to perform the spider species classification. This is, to the authors' knowledge, a novel approach to this problem.


The remainder of this chapter is organized as follows. First, the database is briefly presented. Section 3 explains how the images were preprocessed in order to extract the spider webs from the background. The feature extraction and classification techniques are introduced in sections 4 and 5. Next, the experiments and results are shown in detail. Finally, the conclusions derived from the results are presented.

#### **2. Database**

The database contains spider web images of four different species: *Allocyclosa*, *Anapisona Simoni*, *Micrathena Duodecimspinosa* and *Zosis Geniculata*. The classes have, respectively, 28, 41, 39 and 42 images, which makes a total of 150 images. Some examples can be seen in Figure 1. Since the web images were taken in both controlled and uncontrolled environments, the lighting conditions and background differ between classes.

**Figure 1.** Spider web samples of the four species: *a) Allocyclosa. b) Anapisona Simoni. c) Micrathena Duodecimspinosa. d) Zosis Geniculata.*

The images that correspond to *Allocyclosa* were assigned to Class 1 (C1). These were taken in a natural night-time environment. The camera flash, in conjunction with the dark background, enhanced the spider webs, resulting in a set of images of good quality for the processing stage.



Class 2 (C2) corresponds to the *Anapisona Simoni* images. These spider webs were built in a controlled environment, with the singularity that they were built as tents, causing overlapping threads and light reflections. This made the processing stage far more complex than for C1.

Class 3 (C3) is composed of *Micrathena Duodecimspinosa* images. These images were also taken in a natural environment, but during the day. This scenario, with the presence of sunlight and natural elements such as leaves and tree branches, required a more complex treatment as well.

Finally, Class 4 (C4) corresponds to the *Zosis Geniculata* images. Again, they were taken in a controlled environment, allowing the capture of images with a black background and uniform light.

The technical features of the images from each class are summarized in Table 1.

| Class | Number of samples | Size (pixels) | Bit depth (color/gray) |
|-------|-------------------|---------------|------------------------|
| C1    | 28                | 1024x768      | 24 (true color)        |
| C2    | 41                | 2240x1488     | 8 (gray scale)         |
| C3    | 39                | 2240x1488     | 24 (true color)        |
| C4    | 42                | 2216x2112     | 24 (true color)        |

**Table 1.** Technical features of the database.

#### **3. Preprocessing**

As can be seen in Figure 1, the spider web images were taken in both controlled and uncontrolled environments. Thus, the preprocessing step was vital in order to isolate the spider webs and remove possible background effects from the system.

Image processing techniques were employed in order to isolate the spider webs from light reflections and from background elements such as leaves or tree branches. Once this processing was applied, a new normalized database was obtained.

#### **3.1. Spider web selection**

Since the image collection was not taken for this research, the images contain information that does not provide valid data for the study of the spiders, namely any element external to the spiderwebs. This is why the region of interest of each image was manually selected, as represented in figure 2.

Once the spider web has been selected, an adjustment of the aspect ratio was necessary in order to obtain a square image. This will be explained in detail in the following section.

**Figure 2.** Spider web selection.
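A minimal sketch of how such a crop-and-square step could look, assuming a Pillow image and a manually chosen bounding box (the `box` coordinates and the output `size` are illustrative values, not taken from the chapter):

```python
from PIL import Image

def crop_roi_to_square(img: Image.Image, box: tuple[int, int, int, int],
                       size: int = 512) -> Image.Image:
    """Crop a manually selected ROI and pad it to a square of `size` pixels."""
    roi = img.crop(box)                         # box = (left, upper, right, lower)
    side = max(roi.size)                        # longest side of the ROI
    square = Image.new(img.mode, (side, side))  # black canvas, as in the C1/C4 images
    # Center the ROI on the square canvas so its proportions are preserved.
    square.paste(roi, ((side - roi.width) // 2, (side - roi.height) // 2))
    return square.resize((size, size))
```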

#### **3.2. Image contrast**

To enhance the contours of the cobweb's threads, an increase of the color contrast was first applied. A spatial filtering was then applied to enhance or attenuate details, in order to obtain a better visual interpretation and to prepare the data for the next preprocessing step. With this filtering, the value of each pixel is modified according to its neighbors' values, transforming the original gray levels so that they become more similar to, or more different from, the corresponding neighboring pixels.

In general, the convolution of an image *f* of dimensions *MxN* with a mask *h* of dimensions *mxn* is given by (1):

$$g(x, y) = \sum_{s=-a}^{a} \sum_{t=-b}^{b} f(x+s, y+t)\, h(s, t) \tag{1}$$


where *f*(*x* + *s*, *y* + *t*) are the pixel values of the selected block, *h*(*s*, *t*) are the mask coefficients and *g*(*x*, *y*) is the filtered image. The block dimensions are defined by *m* = 2*a* + 1 and *n* = 2*b* + 1. The effect of applying contrast-enhancement filtering to a gray image can be observed in figure 3.
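The following sketch applies Eq. (1) with an assumed 3x3 contrast-enhancement mask (the chapter does not specify the exact coefficients), using `scipy.ndimage.correlate`, whose indexing matches Eq. (1) directly:

```python
import numpy as np
from scipy.ndimage import correlate

# 3x3 sharpening mask (a = b = 1, so m = n = 3); an assumed standard kernel.
h = np.array([[ 0, -1,  0],
              [-1,  5, -1],
              [ 0, -1,  0]], dtype=float)

def contrast_filter(f: np.ndarray) -> np.ndarray:
    """Apply Eq. (1): g(x, y) = sum_s sum_t f(x+s, y+t) h(s, t)."""
    g = correlate(f.astype(float), h, mode='reflect')
    return np.clip(g, 0, 255)  # keep the result in the 8-bit gray range
```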

**Figure 3.** a) Original Image b) Contrast enhanced

#### **3.3. Image binarization**

The binarization process transforms the image to a black and white format in a way that does not change the essential properties of the image. Equation 2 defines the binarization process, where *f*(*x*, *y*) is the original image and *g*(*x*, *y*) the obtained image:

$$g(x, y) = \begin{cases} 0 & f(x, y) < \text{threshold} \\ 1 & f(x, y) \ge \text{threshold} \end{cases} \tag{2}$$

This threshold is computed using the well-known Otsu's method [25], which assigns each pixel to a group by computing the optimal threshold value from which to carry out that assignment.
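A minimal sketch of Eq. (2) with Otsu's threshold, using scikit-image's `threshold_otsu`:

```python
import numpy as np
from skimage.filters import threshold_otsu

def binarize(f: np.ndarray) -> np.ndarray:
    """Eq. (2) with the threshold chosen by Otsu's method [25]."""
    threshold = threshold_otsu(f)              # maximizes between-class variance
    return (f >= threshold).astype(np.uint8)   # 1 = web-thread candidates
```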

#### **3.4. Image denoising**


Once the image has been binarized, a denoising process was used aiming to eliminate any irrelevant information. To achieve this goal, two specific techniques were applied: *Wiener Filtering* and *Morphological Operations*.

Wiener filtering applies spatial filtering based on statistical methods in order to reduce noise and smooth shapes. It gradually smooths the image in the areas where the noise is very apparent, while keeping the areas where details are present and the noise is less apparent. The Wiener filter adapts to the local image variance.

In this work, the algorithm *wiener2* [21] was used to compute the local mean and the variance around each pixel of the image *a*:

$$\mu = \frac{1}{NM} \sum_{\eta_1, \eta_2 \in \eta} a(\eta_1, \eta_2) \tag{3}$$

$$\sigma^2 = \frac{1}{NM} \sum_{\eta_1, \eta_2 \in \eta} a^2(\eta_1, \eta_2) - \mu^2 \tag{4}$$

where *η* is the *NxM* local neighborhood of each pixel in the image *a*. A 2*x*2 block was chosen heuristically, i.e., this was the configuration that provided the best visual effect. *Wiener2* then filters the image using these estimates, where *b* is the resulting image:

$$b(\eta_1, \eta_2) = \mu + \frac{\sigma^2 - \nu^2}{\sigma^2} \left(a(\eta_1, \eta_2) - \mu\right) \tag{5}$$

If the noise variance *ν*<sup>2</sup> is not given, *wiener2* uses the average of all the local estimated variances.
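A sketch of this adaptive filter following Eqs. (3)-(5); this re-implementation mimics MATLAB's *wiener2* under stated assumptions, computing the local statistics with a uniform filter over the 2x2 block chosen above:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wiener2_like(a: np.ndarray, block=(2, 2), noise=None) -> np.ndarray:
    """Adaptive Wiener filter in the spirit of Eqs. (3)-(5)."""
    a = a.astype(float)
    mu = uniform_filter(a, size=block)                    # Eq. (3): local mean
    sigma2 = uniform_filter(a * a, size=block) - mu ** 2  # Eq. (4): local variance
    if noise is None:
        noise = sigma2.mean()            # average of the local variance estimates
    sigma2 = np.maximum(sigma2, noise)   # avoid negative gains in flat regions
    return mu + (sigma2 - noise) / sigma2 * (a - mu)      # Eq. (5)
```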

On the other hand, morphological operations are transformations that modify the structure or shape of the objects in the image based on their geometry and shape, simplifying the image. These techniques can be used to denoise an image, to extract features, or to process specific regions.

An illustrative example of these operations is shown in figures 4 and 5, where noise and projections (in the inner circle) are removed, obtaining more uniform boundaries.
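A possible realization of these two cleaning steps with scikit-image; the minimum object size and the structuring-element radius are assumed values, not taken from the chapter:

```python
import numpy as np
from skimage.morphology import remove_small_objects, binary_closing, disk

def clean_binary_web(bw: np.ndarray, min_size: int = 20) -> np.ndarray:
    """Morphological denoising: drop isolated pixels (cf. figure 4) and
    smooth thread contours (cf. figure 5)."""
    bw = remove_small_objects(bw.astype(bool), min_size=min_size)
    return binary_closing(bw, disk(1))  # fill small gaps along the contours
```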

The resulting image after denoising can be observed in figure 6.

#### **3.5. Center of the spiderwebs**

Finally, the center of the spiderwebs was used as the source of discriminative information to classify the spiders. Thus, once the images were preprocessed, this center area was selected to form the experimental database. Figures 7 and 8 show the center of the spiderwebs for each species.
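A sketch of this selection, assuming the center is approximated by the centroid of the binarized web; the chapter's exact window size is not stated, so `fraction` is illustrative:

```python
import numpy as np

def central_region(bw: np.ndarray, fraction: float = 0.25) -> np.ndarray:
    """Extract a square window around the web's center of mass (cf. figure 7)."""
    ys, xs = np.nonzero(bw)
    cy, cx = int(ys.mean()), int(xs.mean())   # centroid of the web pixels
    half = int(min(bw.shape) * fraction / 2)  # assumed half-window size
    return bw[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
```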

**Figure 4.** Example of applying morphological operations: elimination of isolated pixels.

**Figure 5.** Example of applying morphological operations: smoothing of contours.

**Figure 6.** Image after denoising: a) original image, b) binarized image.

**Figure 7.** Selecting the center of the spiderwebs.

**Figure 8.** Resulting center of each species.

#### **4. Feature extractors**

In general, feature extraction refers to the process of obtaining numerical measures of images, such as area, radius, perimeter, etc. It also refers to the process of transforming a set of original features of dimension *m* into another set of characteristics, usually of dimension *n* < *m*; such methods are termed transformed-domain techniques. This is the concept used in the present work.

Two well known techniques were used: Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT). These were selected as they have been successfully used in other biometric studies. Besides being able to reduce the dimensionality of data and reduce the computational requirements of the classifier stage, these techniques can improve the generalization of the information and the system's success rate.


#### **4.1. Wavelet transform**

The discrete wavelet transform (DWT) is based on the idea of decomposing a signal in terms of displaced and dilated versions of a finite wave called the mother wavelet. The wavelet transform is a preprocessing and feature extraction technique which can be applied directly to the spiderweb images. The DWT is defined in [24] as follows:

$$C[j, k] = \sum_{n \in \mathbb{Z}} f[n]\, \psi_{j,k}[n] \tag{6}$$

where *ψj,k* is the transform function:

$$\psi_{j,k}[n] = 2^{-j/2} \cdot \psi[2^{-j} n - k] \tag{7}$$

In wavelet analysis it is common to evaluate the results as approximations and details. The approximations are the low-frequency components of the signal, and the details are the high-frequency components. For many signals, the most important information is contained in the low frequencies; this content is what gives identity to the signal. The filtering process used in this work to obtain the approximations and details of the one-dimensional DWT is shown in figure 9.

**Figure 9.** Algorithm for the Discrete Wavelet Transform.

The application of different mother families in the preprocessing (artifact elimination) and in the feature extraction yields a set of good, discriminative parameters. Unlike the Fourier transform, the wavelet transform can be implemented on many bases. The different categories of wavelets (continuous, discrete, orthogonal, etc.) and the various types of wavelet functions within each category provide a wide range of options for analyzing a signal. This allows the selection of base functions whose shape better approximates the characteristics of the signal to be represented or analyzed. In this work, the families Daubechies 1 (db1), Biorthogonal 3.7 (bior3.7) and Discrete Meyer (dmey) were used.
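A minimal sketch of the DWT feature extraction with PyWavelets, using the wavelet families named above; keeping the approximation coefficients as the feature vector is an assumption consistent with the text:

```python
import numpy as np
import pywt

def dwt_features(img: np.ndarray, wavelet: str = 'db1') -> np.ndarray:
    """One-level 2-D DWT; 'db1', 'bior3.7' and 'dmey' are the families used."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    return cA.ravel()  # low-frequency approximation carries the web's identity
```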

#### **4.2. Discrete Cosine Transform**

The Discrete Cosine Transform (DCT) was applied to eliminate noise and high-frequency details [23]. Besides, this transform has a good energy-compaction property that produces uncorrelated coefficients, and the base vectors of the DCT depend only on the order of the transformation selected, not on the statistical properties of the input data. Another important aspect of the DCT is its capacity to quantize the coefficients using quantization values that are chosen visually. This transformation has gained wide acceptance in digital image processing, as there is a high correlation among the elements of a conventional image.
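A corresponding sketch of the DCT feature extraction; the 2-D transform is computed separably with SciPy, and the first *M* x *M* low-order coefficients are kept (the cut-off *M* is the parameter varied in section 6):

```python
import numpy as np
from scipy.fft import dct

def dct_features(img: np.ndarray, M: int = 8) -> np.ndarray:
    """2-D DCT via separable 1-D transforms; keep the MxM low-order block."""
    D = dct(dct(img.astype(float), axis=0, norm='ortho'),
            axis=1, norm='ortho')
    return D[:M, :M].ravel()  # energy compaction: low orders dominate
```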

#### **5. Classification: Support Vector Machine**

Once the images were transformed into a set of features, the classification stage tried to produce an answer to the spider identification problem. In this work, the well-known Support Vector Machine (SVM) technique was used.

The SVM is a method of structural risk minimization (SRM) derived from the statistical learning theory developed by Vapnik and Chervonenkis [17]. It belongs to the group of supervised learning methods for pattern recognition, and it is used for classification and regression analysis.

Based on characteristic points called Support Vectors (SVs), the SVM uses a hyperplane or a set of hyperplanes to divide the space into zones enclosing a common class. By labeling these zones, the system is able to identify the membership of a testing sample. The interesting aspect of the SVM is that it is able to do so even when the problem is not linearly separable. This is achieved by projecting the problem into a higher-dimensional space where the classes are linearly separable. The projection is performed by an operator known as a kernel, and this technique is called the kernel trick [18] [19]. The use of hyperplanes to divide the space gives rise to margins, as shown in figure 10.

In this work, the LS-SVM of Suykens et al. [20] was used along with the Radial Basis Function kernel (RBF kernel). The regularization parameter and the bandwidth of the RBF function were automatically optimized using the validation results obtained from 10 iterations of a hold-out cross-validation process. Two samples from each class (from the training set) were used for testing and the remaining ones for training, as we saw that the number of training samples has a big impact on the optimal LS-SVM parameters. Once the optimal parameters were found, they were used to retrain the LS-SVM using all available training samples.

#### **6. Experiments and results**

To sum up, the proposed system normalized all the images to 10x10 pixels. This system used the first M features obtained from the DCT projection of the spider web images, and the outcome of the DWT transformation of the spider web images, as inputs for an RBF-kernel LS-SVM with regularization and kernel parameters. The former parameter (the number of features) was varied during experimentation, while the latter two (the regularization parameter and the RBF bandwidth) were optimized automatically as described in section 5.
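The following sketch mirrors this protocol under stated assumptions: scikit-learn's RBF-kernel `SVC` stands in for the authors' LS-SVM, and `C`/`gamma` play the roles of the regularization and kernel parameters:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def evaluate(X: np.ndarray, y: np.ndarray,
             C: float = 1.0, gamma='scale', iterations: int = 10) -> float:
    """Hold-out cross-validation in the spirit of the chapter's protocol."""
    scores = []
    for seed in range(iterations):
        # Two samples per class held out for testing (8 total for 4 classes).
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=8, stratify=y, random_state=seed)
        clf = SVC(kernel='rbf', C=C, gamma=gamma).fit(X_tr, y_tr)
        scores.append(clf.score(X_te, y_te))
    return float(np.mean(scores))
```

In practice, `C` and `gamma` would be swept over a grid and the pair with the best mean validation score kept, after which the classifier is retrained on all available training samples, as the chapter describes.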

