**4. LP character segmentation algorithm based on 2D Haar WT edge detector**

In image processing, edge detection is a key pre-processing step for identifying the presence of objects in images. This is achieved by identifying the boundary regions of an object. Several robust edge detection techniques are widely reported in the literature, from early work by Canny (Canny, 1986) to some of the most recent, such as Palacios (Palacios et al., 2011). However, in custom applications, such as embedded ANPR systems where both real-time performance and LP recognition success are demanded, choosing a good edge detector that balances these two factors is important.

The proposed algorithm is based on the 2D Haar WT edge detector, which is shown to enhance image edges and improve LP region detection in Musoromy (Musoromy et al., 2010). The algorithm used for LP region detection and extraction explained in Section 3.1 is adapted to perform LP character segmentation. The main reasons for adapting the Haar WT for character segmentation are:

- The ability of the Haar WT to detect three types of edges using a single filter, while traditional methods such as Sobel would require more than one mask for the operation
- The simplicity of the algorithm and its suitability in real-time applications
The following sections describe the LP character segmentation algorithm based on a 2D Haar WT edge detector starting with the WT definition.

#### **4.1 Wavelet Transform**

In image processing, we can define a function f(x,y) as an image signal and Ψ(x,y) as a wavelet. A wavelet is a function Ψ ∈ L²(ℝ) used to localise a given function such as f(x,y) in both translation (u) and scale (s). The family of wavelets is obtained by translating and scaling an individual wavelet in time (t), as given in equation (1) (Mallat, 1999):

$$
\Psi\_{\left(u,s\right)}\left(t\right) = \frac{1}{\sqrt{s}} \Psi\left(\frac{t-u}{s}\right) \tag{1}
$$

Wavelets are useful in transforming signals from one domain to another, giving useful information for easier analysis, hence the term Wavelet Transform (WT), which can be defined as:

$$\text{Wf}\left(u,s\right) = \int\_{-\infty}^{+\infty} f(t) \frac{1}{\sqrt{s}} \Psi^\*\left(\frac{t-u}{s}\right) dt \tag{2}$$

This represents a Continuous WT (CWT) of a function f at scale s > 0 and translation u ∈ ℝ, i.e. a 1D transform. When processing an image, we can apply this wavelet in the x direction, where Ψ ∈ L²(ℝ), as follows:

$$\text{Wf}\left(u,s\right) = \int\_{-\infty}^{+\infty} f(x) \frac{1}{\sqrt{s}} \Psi^\*\left(\frac{x-u}{s}\right) dx \tag{3}$$

The x and y directions can represent the rows and columns of an image f(x,y) ∈ L²(ℝ²), and therefore we can also apply the CWT in 2D using a wavelet Ψ ∈ L²(ℝ²) as (Palacios et al., 2011):

$$\text{W}\_{(s)}f\left(u,v\right) = \int \int f(x,y) \frac{1}{s} \Psi^\*\left(\frac{x-u}{s}, \frac{y-v}{s}\right) dx\,dy \tag{4}$$

We can rewrite equation (4) with dilation factor s as

$$
\Psi\_{(s)}(x,y) = \frac{1}{s} \,\Psi\left(\frac{x}{s}, \frac{y}{s}\right) \tag{5}
$$

and, defining $\Psi\_s^{\Theta}(x,y) = \Psi\_s^\*(-x,-y)$, we can express the transform as a convolution:

$$\text{W}\_{(s)}f\left(u,v\right) = f \ast \Psi\_s^{\Theta}\left(u,v\right) \tag{6}$$

The large number of coefficients produced by the CWT makes it necessary to sample signals discretely in order to simplify the signal analysis process and to enable use in real-time applications such as image processing. This process is technically known as the discrete wavelet transform (DWT).
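Equation (2) can be checked numerically. The sketch below is our own illustration (the helper names `haar_wavelet` and `cwt_coefficient` do not appear in the chapter): it approximates the integral with a Riemann sum, using the Haar wavelet defined later in Section 4.3, and shows that a wavelet atom centred on a step edge responds strongly while one over a flat region gives zero.

```python
import numpy as np

def haar_wavelet(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def cwt_coefficient(f, u, s, t):
    """Approximate Wf(u, s) = integral of f(t) (1/sqrt(s)) psi*((t-u)/s) dt
    by a Riemann sum (the Haar wavelet is real, so psi* = psi)."""
    dt = t[1] - t[0]
    psi = haar_wavelet((t - u) / s) / np.sqrt(s)
    return np.sum(f * psi) * dt

t = np.linspace(0, 1, 1000, endpoint=False)
f = np.where(t < 0.5, 0.0, 1.0)           # step edge at t = 0.5
print(cwt_coefficient(f, u=0.25, s=0.5, t=t))  # atom straddles the edge: large
print(cwt_coefficient(f, u=0.0, s=0.25, t=t))  # atom over a flat region: ~0
```

The same mechanism, applied along image rows and columns, is what makes the WT an edge detector.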

#### **4.2 Discrete Wavelet Transform**


Discrete wavelet transform (DWT), or fast wavelet transform (FWT), is a specialised case of sub-band filtering, where the DWT of a sampled signal of size N is computed at dyadic scales s = 2<sup>j</sup> (Mallat, 1999). Using the wavelet equation:

$$
\Psi\_j[n] = \frac{1}{\sqrt{s}} \Psi\left(\frac{n}{s}\right) \tag{7}
$$

The DWT can also be written as a circular convolution, where:

$$
\Psi\_j^{\Theta}[n] = \Psi\_j^\*[-n] \tag{8}
$$

The convolution of signal f and the wavelet is written as follows:

$$\text{Wf}\left[n,s\right] = \sum\_{m=0}^{N-1} f[m] \Psi\_j^\*[m-n] = f \ast \Psi\_j^{\Theta}[n] \tag{9}$$
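Equation (9) can be sketched directly. This is our own minimal illustration (the name `dwt_circular` is ours): a discrete Haar-like wavelet of two-sample support is circularly convolved with a signal, and the coefficients peak exactly where the signal jumps.

```python
import numpy as np

def dwt_circular(f, psi_j):
    """Wf[n, s] = sum_m f[m] psi_j*[m - n], indices taken modulo N (equation 9)."""
    N = len(f)
    return np.array([sum(f[m] * np.conj(psi_j[(m - n) % N]) for m in range(N))
                     for n in range(N)])

# Discrete Haar-like wavelet at the finest scale: two-sample difference.
f = np.array([2.0, 2.0, 5.0, 5.0, 5.0, 2.0, 2.0, 2.0])
psi = np.zeros(8)
psi[0], psi[1] = 1 / np.sqrt(2), -1 / np.sqrt(2)

w = dwt_circular(f, psi)
print(w)  # large magnitudes at the two jumps of f, zero on the flat runs
```

Here each coefficient reduces to (f[n] − f[n+1])/√2, i.e. a normalised first difference, which is why the Haar DWT doubles as an edge detector.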

Calculation of the DWT is done using a filter bank, which can be a series of cascaded digital filters. Implementing the DWT using filter banks entails passing the sampled signal through high-pass and low-pass filters simultaneously to produce detail and approximation coefficients respectively (Qureshi, 2005). The high-frequency coefficients are obtained similarly to equation (9) as follows:

$$\text{W}\_{\text{High}}f\left[n,s\right] = \sum\_{m=0}^{N-1} f\left[m\right] \Psi\_j^\*\left[m-n\right] = f \ast \Psi\_j^{\Theta}\left[n\right] \tag{10}$$

The low frequencies are contained in equation (12), in the computation of the periodic scaling filter, where the scaling function in equation (11) is sampled with scale z and integer k (Mallat, 1999). Let $\Phi^{\Theta}[n] = \Phi\_k^\*[-n]$ be the corresponding convolution kernel:

$$\Phi\_k[n] = \frac{1}{\sqrt{s}} \Phi\left(\frac{n}{s}\right) \tag{11}$$


$$\text{W}\_{\text{Low}}f\left[n,z\right] = \sum\_{m=0}^{N-1} f\left[m\right] \Phi\_k^\*\left[m-n\right] = f \ast \Phi^{\Theta}\left[n\right] \tag{12}$$

The high-pass filter $h\_{HP}[n]$ is formed from the low-pass filter $h\_{LP}[n]$ using the following equation (Qureshi, 2005):

$$h\_{HP}\left[n\right] = (-1)^n\, h\_{LP}\left[N-1-n\right], \qquad n = 0, \ldots, N-1 \tag{13}$$

where h is the filter and N is the number of taps in the low-pass filter. If the length N of analysis low-pass filter is 4, and

$$\mathbf{h\_{LP}} = \left\{ \mathbf{h\_0}, \mathbf{h\_1}, \mathbf{h\_2}, \mathbf{h\_3} \right\} \tag{14}$$

Applying equation (13), we obtain:

$$h\_{HP} = \left\{ h\_3, -h\_2, h\_1, -h\_0 \right\} \tag{15}$$
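The alternating-flip rule of equation (13) can be verified against equation (15) for any 4-tap filter. The sketch below is our own check (the name `qmf_highpass` is ours); the Daubechies-4 analysis taps are used purely as an example filter, they are not discussed in the chapter.

```python
import numpy as np

def qmf_highpass(h_lp):
    """Form the high-pass filter from the low-pass filter via equation (13):
    h_HP[n] = (-1)^n * h_LP[N - 1 - n]."""
    N = len(h_lp)
    return np.array([(-1) ** n * h_lp[N - 1 - n] for n in range(N)])

# Example 4-tap low-pass filter: the Daubechies-4 analysis taps.
s3 = np.sqrt(3.0)
h_lp = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
h_hp = qmf_highpass(h_lp)

# Matches equation (15): {h3, -h2, h1, -h0}.
print(np.allclose(h_hp, [h_lp[3], -h_lp[2], h_lp[1], -h_lp[0]]))  # True
```

A useful side effect of the alternating flip is that $h\_{LP}$ and $h\_{HP}$ are exactly orthogonal, so the two sub-bands carry complementary information.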

To analyse the DWT, the input signal f(x,y)[n] is passed through both filters explained in equations (10) and (12) to give the filtered output y[n]. The output is then decimated, or down-sampled, by a factor of two (Qureshi, 2005). Decimation means every other sample is taken from an input to form an output, such that:

$$\mathbf{y}[\mathbf{n}] = f\_{(x,y)}[2n] \tag{16}$$

The analysis of DWT with the resulting coefficients is shown in figure 1.

Fig. 1. Single level DWT (analysis stage of f(x,y)) (Mallat, 1999)
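The analysis stage of figure 1 can be sketched for the Haar case as follows. This is a minimal illustration of ours (function name `analysis_stage` is hypothetical), using plain zero-padded convolution for the borders: build $h\_{HP}$ from $h\_{LP}$ with equation (13), filter, then decimate as in equation (16).

```python
import numpy as np

def analysis_stage(f, h_lp):
    """Single-level DWT analysis (figure 1): filter with h_LP and h_HP,
    then keep every other sample (equation 16)."""
    N = len(h_lp)
    h_hp = np.array([(-1) ** n * h_lp[N - 1 - n] for n in range(N)])  # eq. (13)
    approx = np.convolve(f, h_lp)[1::2]  # low-pass  -> approximation coefficients
    detail = np.convolve(f, h_hp)[1::2]  # high-pass -> detail coefficients
    return approx, detail

h_lp = np.array([1.0, 1.0]) / np.sqrt(2.0)  # Haar analysis low-pass filter
f = np.array([4.0, 4.0, 8.0, 8.0])
a, d = analysis_stage(f, h_lp)
print(a)  # pair averages scaled by sqrt(2)
print(d)  # pair differences: all zero for this piecewise-constant signal
```

The approximation branch is a half-size smoothed copy of the input, and the detail branch is non-zero only where the signal changes, which is the property the edge detector in Section 4.4 exploits.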

The 2D DWT of an image function f(x,y) of size M x N can be written using the wavelet functions in equations (17) and (18) (Mallat, 1999):

$$\varphi\_{j\_0,m,n}\left(x,y\right) = 2^{\frac{j\_0}{2}}\, \varphi\left(2^{j\_0}x - m,\, 2^{j\_0}y - n\right) \tag{17}$$

$$\Psi\_{j,m,n}^{i}(x,y) = 2^{\frac{j}{2}}\, \Psi^{i}(2^{j}x - m,\, 2^{j}y - n) \tag{18}$$

The DWT of f(x,y) is then as follows:


$$\text{W}\_{\varphi}\left(j\_0,m,n\right) = \frac{1}{\sqrt{MN}} \sum\_{x=0}^{M-1} \sum\_{y=0}^{N-1} f\left(x,y\right) \varphi\_{j\_0,m,n}\left(x,y\right) \tag{19}$$

$$\text{W}\_{\Psi}^{i}\left(j,m,n\right) = \frac{1}{\sqrt{MN}} \sum\_{x=0}^{M-1} \sum\_{y=0}^{N-1} f\left(x,y\right) \Psi\_{j,m,n}^{i}\left(x,y\right) \tag{20}$$

where i= {1, 2, 3}.

At the end of the analysis stage, the transformed image can be reconstructed back to the original image, or to a new image, using the inverse DWT (IDWT). The reconstruction is a process of upsampling the wavelet coefficients by a factor of two and passing them through reversed low-pass ($g\_{LP}$) and high-pass ($g\_{HP}$) filters simultaneously (Qureshi, 2005). The reconstruction to an original image is demonstrated in figure 2.

Fig. 2. Single level IDWT (reconstruction of f(x,y)) (Mallat, 1999)
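The analysis/synthesis round trip of figures 1 and 2 can be illustrated for the Haar case. The sketch below is our own minimal version, not the authors' implementation: the pairwise form sidesteps border handling, and the synthesis step shows the upsample-and-filter structure of the IDWT, which for Haar reduces to interleaving sums and differences.

```python
import numpy as np

def haar_analysis(f):
    """Single-level orthonormal Haar DWT analysis (figure 1), pairwise form."""
    f = np.asarray(f, dtype=float)
    a = (f[0::2] + f[1::2]) / np.sqrt(2.0)  # low-pass + decimate
    d = (f[0::2] - f[1::2]) / np.sqrt(2.0)  # high-pass + decimate
    return a, d

def haar_synthesis(a, d):
    """Single-level IDWT (figure 2): upsample by two and filter with the
    reversed filters g_LP, g_HP; for Haar this interleaves (a+d)/sqrt(2)
    and (a-d)/sqrt(2)."""
    f = np.empty(2 * len(a))
    f[0::2] = (a + d) / np.sqrt(2.0)
    f[1::2] = (a - d) / np.sqrt(2.0)
    return f

f = np.array([3.0, 1.0, 0.0, 4.0, 7.0, 7.0, 2.0, 6.0])
a, d = haar_analysis(f)
print(np.allclose(haar_synthesis(a, d), f))  # True: perfect reconstruction
```

Perfect reconstruction is what allows Section 4.4 to modify individual sub-bands and rebuild an edge image from what remains.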

#### **4.3 2D Haar WT**

There are countless wavelets available in the wavelet family, with more being reported in the wavelet literature (Mallat, 1999). For this application, we are interested in the simplest yet efficient DWT. The Haar is the first and simplest WT in the family of wavelets (Haar, 1911). The Haar WT is derived starting with the Haar wavelet function, defined as:

$$\Psi(\mathbf{x}) = \begin{cases} 1 & 0 \le \mathbf{x} < \frac{1}{2} \\ -1 & \frac{1}{2} \le \mathbf{x} < 1 \\ 0 & \text{Otherwise} \end{cases} \tag{21}$$


and in 1D:

$$
\Psi\_{j,k}(x) = \Psi(2^{j}x - k) \tag{22}
$$

Its scaling function φ(x) can be defined as:

$$\varphi(x) = \begin{cases} 1 & 0 \le x < 1 \\ 0 & \text{Otherwise} \end{cases} \tag{23}$$

The Haar matrix can be obtained using the wavelets defined in equations (17) to (20) and applying the formula in equation (13) to form the high-pass filter from the low-pass filter. The simplest Haar 2x2 matrix, when N is 2, is as follows:

$$\mathbf{H}\_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \tag{24}$$

and when N is 4, the Haar 4x4 matrix is as follows:

$$\mathbf{H}\_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix} \tag{25}$$

The Haar WT filter can be derived by transformation, for example transforming H2 to:

$$\mathbf{H}\_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \tag{26}$$
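A quick property check of our own: the normalised matrix in equation (26) is orthonormal, so its transpose is its inverse; this is why the same filters, reversed, reconstruct the signal exactly in the IDWT.

```python
import numpy as np

# Normalised Haar matrix of equation (26); rows are the low-pass and
# high-pass Haar filters.
H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2.0)

print(np.allclose(H2 @ H2.T, np.eye(2)))  # True: H2 is orthonormal
```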

The 2D Haar WT is computed similarly, as shown in equations (14) to (17). The result of applying a single-level 2D Haar WT to an image is a decomposition of the image into four bands: a low-pass filtered approximation "low-low" (LL) subimage, which is a smaller version of the input image, and three high-pass filtered detail subimages, "low-high" (LH), "high-low" (HL) and "high-high" (HH). The subbands are shown in figure 3 and the corresponding resulting images are shown in figure 4. In addition, images can also be decomposed at different levels with a series of cascading filter banks to produce a multiresolution representation (Mallat, 1989).
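The four-band split of figure 3 can be sketched directly with NumPy. The function below is our own minimal implementation (name `haar_dwt2` is ours): the Haar filters are applied along rows and then columns, and the band naming follows this chapter's convention that LH holds vertical detail (conventions vary across texts). A vertical bar in the test image puts its detail energy in the LH band only.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar WT: filter rows then columns, yielding quarter-size
    LL, LH, HL, HH subimages (figure 3). Image sides must be even."""
    img = np.asarray(img, dtype=float)
    # Rows: low-pass (pair sums) and high-pass (pair differences).
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2.0)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)
    # Columns: repeat on both branch outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2.0)  # approximation
    lh = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2.0)  # vertical detail
    hl = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2.0)  # horizontal detail
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2.0)  # diagonal detail
    return ll, lh, hl, hh

# A white vertical bar on black: detail energy lands in the LH band only.
img = np.zeros((8, 8))
img[:, 3:5] = 255.0
ll, lh, hl, hh = haar_dwt2(img)
print(np.abs(lh).sum() > 0 and np.abs(hl).sum() == 0)  # True
```

For LP character segmentation this selectivity matters: character strokes are dominated by vertical and horizontal transitions, so they concentrate in LH and HL.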


Fig. 3. An image decomposed into four bands using the 2D Haar WT

Fig. 4. Single level Haar WT decomposition (enhanced for display); the top left image is the LL, the top right image is the LH, the bottom left image is the HL and the bottom right image is the HH.

#### **4.4 2D Haar WT based edge detector**

The main advantage of applying a 2D DWT such as Haar to an image is that it decomposes it into four subimages, as seen in figure 4, which is a mathematically less intensive operation and more suitable for our application. The suitable edges for our application are obtained by applying a 2D Haar WT (2x2) on an image f(x,y) to obtain high and low frequency subimages, as shown by the following equation:

$$f(x,y) \xrightarrow{\text{DWT}} a\_{LL}(x,y) + d\_{LH}(x,y) + d\_{HL}(x,y) + d\_{HH}(x,y) \tag{27}$$

where d and a are the detailed and approximate components. The low frequency subimage ($a\_{LL}(x,y)$) and the "high-high" ($d\_{HH}(x,y)$) subimage are then removed from equation (27) to give the vertical ($d\_{LH}(x,y)$) and horizontal ($d\_{HL}(x,y)$) components ($d\_{HV}(x,y)$). At this stage, the edges can be computed using reconstruction through the use of the wavelet transform modulus of $d\_{LH}(x,y)$ and $d\_{HL}(x,y)$, followed by the calculations of

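The band-discarding step of the edge detector can be sketched end to end. The code below is our own self-contained illustration with a hand-rolled orthonormal Haar transform pair (function names are ours), not the authors' DSP implementation: decompose, zero the LL and HH bands, and reconstruct so that only the vertical and horizontal detail, i.e. the edges, survive.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar WT into LL, LH (vertical), HL (horizontal), HH."""
    img = np.asarray(img, dtype=float)
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2.0)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)
    return ((lo[0::2] + lo[1::2]) / np.sqrt(2.0),   # LL
            (hi[0::2] + hi[1::2]) / np.sqrt(2.0),   # LH
            (lo[0::2] - lo[1::2]) / np.sqrt(2.0),   # HL
            (hi[0::2] - hi[1::2]) / np.sqrt(2.0))   # HH

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2: undo the column step, then the row step."""
    lo = np.empty((2 * ll.shape[0], ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + hl) / np.sqrt(2.0), (ll - hl) / np.sqrt(2.0)
    hi[0::2], hi[1::2] = (lh + hh) / np.sqrt(2.0), (lh - hh) / np.sqrt(2.0)
    img = np.empty((lo.shape[0], 2 * lo.shape[1]))
    img[:, 0::2], img[:, 1::2] = (lo + hi) / np.sqrt(2.0), (lo - hi) / np.sqrt(2.0)
    return img

img = np.zeros((8, 8))
img[2:5, 3:6] = 255.0  # bright block, deliberately off the dyadic grid
ll, lh, hl, hh = haar_dwt2(img)

# Discard a_LL and d_HH, keep the vertical/horizontal detail, reconstruct.
edges = np.abs(haar_idwt2(np.zeros_like(ll), lh, hl, np.zeros_like(hh)))
print(edges.max() > 0)  # True: only the block's boundary survives
```

Thresholding this edge map (or taking the wavelet transform modulus of $d\_{LH}$ and $d\_{HL}$, as the chapter describes) yields the character-boundary image used for segmentation.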