**Applying the Technology of Wireless Sensor Network in Environment Monitoring**

Constantin Volosencu

*"Politehnica" University of Timisoara, Romania*

#### **1. Introduction**

This chapter presents some considerations on applying concepts such as estimation, fault detection and diagnosis, the theory of distributed parameter systems and artificial intelligence to environment monitoring, based on the modern technology of wireless sensor networks. All these concepts allow the treatment of the large, complex, non-linear and multivariable systems of the environment by learning and extrapolation. The environment may be seen as a complex ensemble of different distributed parameter systems, described by partial differential equations.

Sensor networks (Akyildiz et al., 2002) have large and successful applications in monitoring the environment, being capable, as a distributed sensor, of measuring over a large area the physical variables which characterize the environment, and also of communicating the measured values over long distances from the distributed parameter environmental processes. Many papers and books have been published in recent years on using sensor networks in environment monitoring. Some related work is surveyed as follows. The paper (Cuiyun et al., 2006) presents research on the changes of the urban spatial thermal environment, for sustainable urban development and to improve the quality of the human habitation environment. The urban thermal phenomenon is revealed using thermal remote sensing imagery, based on the instantaneous radiant temperature of the land surfaces. An architecture of a sensor network for the environment is presented in (Lan et al., 2008). Environmental pollution and meteorological processes may be studied using various kinds of environmental sensor networks. Modern intelligent sensor networks comprise automatic sensor nodes and communication systems, which send their data to a sensor network server, where these data are integrated with other environmental information. The paper (Giannopoulos et al., 2009) presents the design and implementation of a wireless sensor network for monitoring environmental variables and evaluates its effectiveness. It has applications in monitoring environment variables such as temperature, humidity, barometric pressure, soil moisture and ambient light, for research in agriculture, habitat monitoring, weather monitoring and so on. In order to improve the capacity of environmental sensor networks, different techniques may be used.
The paper (Talukder et al., 2008) uses model predictive control for optimal resource management in environmental sensor networks, with application to spatiotemporal events of a coastal monitoring and forecast system. The paper (Dardari et al., 2007)



presents an application to the estimation of atmospheric pressure using a wireless sensor network which is randomly distributed. The estimation error is discussed and a design criterion is proposed. The author has contributions in the field of monitoring distributed parameter systems based on sensor networks and estimation using adaptive-network-based fuzzy inference (Volosencu, 2010), (Volosencu & Curiac, 2010).

Using modern intelligent wireless sensor networks, multivariable estimation techniques may be applied in environment monitoring, with the environment seen as a set of distributed parameter systems. Based on these concepts, environment monitoring becomes easier and better performing (Fig. 1).

The chapter presents a methodology for using the above-mentioned topics in environmental monitoring, as follows:
- principles and technical data of modern sensor networks;
- examples of distributed parameter systems, with their mathematical models, useful in environment description;
- examples of modeling and simulation of environmental temperature variation;
- technical data of the sensor network used in practical experiments;
- a case study of environmental temperature estimation based on auto-regression and a neuro-fuzzy inference engine.

Fig. 1. Scientific domains for environmental monitoring

The most important domains of application are: the processes of heat conduction, with propagation of heat in an anisotropic medium, propagation of heat in a porous medium, and processes of heat transfer between a solid wall and a flow of hot gas; applications related to the electricity domain, such as electrostatic charges in the atmosphere; the motion of fluids; the processes of cooling and drying; and phenomena of diffusion. Other applications are the growth of gas particles in a fluid and temperature modification in the air mass.

The chapter presents a short survey of the main characteristics of the above topics involved in environmental monitoring, some principles and technical data of modern sensor networks, and some examples of distributed parameter systems, with their mathematical models, useful in environment description. The second section presents some equations useful in modelling environmental processes. The third section presents some estimation algorithms useful in environment monitoring, for future estimation of changes in the physical variables of the medium. The fourth section presents some examples of modelling and simulation of environmental temperature variation. The fifth section presents technical data of the sensor network used in practical experiments. The sixth section presents the monitoring structure, the monitoring method and the estimation mechanism. The seventh section presents an example of an expert system useful in environment monitoring, based on environment knowledge. The eighth section presents a case study. The ninth section presents a technical solution for implementing the monitoring system based on virtual instrumentation. The main results and future perspectives are presented in the conclusion.

#### **2. Equations for environmental systems**



#### **2.1 Primary physical and mathematical models**

The environmental systems, which are complex heterogeneous ensembles of distributed parameter systems, may be described using partial differential equations. These equations are used to formulate problems involving functions of several variables, such as the propagation of sound or heat, electrostatics, electrodynamics and fluid flow. Some examples of distributed parameter systems are presented as follows (Rosculet & Craiu, 1979). Diverse categories of systems have specific characteristics that are important in their investigation, simulation, prediction, monitoring and diagnosis. One of the most important domains of application is the process of heat conduction, with propagation of heat in an anisotropic medium. In the field of fluid motion there are: plane motion of viscous fluids, flow of viscous fluids in a medium such as a tube, and flow of gases. The processes of cooling and drying are also met in environmental systems. Phenomena of diffusion include: diffusion flow in chemical reactions, flame diffusion, and the density distribution of particle loading by meteorites. Other applications in the environment include: estimation of the height of the snow-covered ice of the arctic seas, motion of underground waters, the growth of gas particles in a fluid, and temperature modification in the air mass. For some of the above processes, equations are given as follows.

The temperature of the object is a function θ(P, *t*) of the time moment *t* and of the point P in space. If different points of the object have different temperatures, θ(P, *t*) ≠ const., then a heat transfer will take place, from the warmer parts to the less warm parts. The vector grad θ has its direction along the normal to the level surface θ = const., in the sense of increase. The law of heat propagation through an object in which there are no heat sources is:

$$
c \rho \frac{\partial \theta}{\partial t} = \frac{\partial}{\partial x} \left( k \frac{\partial \theta}{\partial x} \right) + \frac{\partial}{\partial y} \left( k \frac{\partial \theta}{\partial y} \right) + \frac{\partial}{\partial z} \left( k \frac{\partial \theta}{\partial z} \right) \tag{1}
$$

The heat sources in the object have a distribution given by the function:

$$F(t, P) = F(t, \mathbf{x}, \mathbf{y}, z) \tag{2}$$

If the object is homogeneous, *a*² = *k*/(*c*ρ) = const., and the equation (1) is written:

$$\frac{1}{a^2} \frac{\partial \theta}{\partial t} = \frac{\partial}{\partial \mathbf{x}} \left( \frac{\partial \theta}{\partial \mathbf{x}} \right) + \frac{\partial}{\partial y} \left( \frac{\partial \theta}{\partial y} \right) + \frac{\partial}{\partial z} \left( \frac{\partial \theta}{\partial z} \right) \tag{3}$$

The initial conditions or the boundary conditions have physical significance. The initial condition is given by the equation:

$$\theta(x, y, z, t)\big|_{t=0} = f(x, y, z) \tag{4}$$
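As an illustration (a sketch, not taken from the chapter), the one-dimensional form of equation (3) with the initial condition (4) can be integrated with an explicit finite-difference scheme; the diffusivity, domain length and initial profile below are arbitrary choices:

```python
import numpy as np

# Explicit (forward-time, centered-space) integration of the 1-D form of (3):
#   (1/a^2) dθ/dt = d²θ/dx²,  with θ(x, 0) = f(x) as in (4).
# Illustrative values only: a², L and f(x) are assumptions, not from the chapter.
a2 = 1.0                      # a² = k/(c·ρ) for a homogeneous object
L = 1.0                       # object length
n = 51                        # grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / a2         # step within the stability bound dt <= dx²/(2·a²)

x = np.linspace(0.0, L, n)
theta = np.sin(np.pi * x)     # initial condition f(x)

for _ in range(500):
    lap = (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dx**2
    theta[1:-1] += dt * a2 * lap   # θ^{k+1} = θ^k + a²·dt·(second space difference)
    theta[0] = theta[-1] = 0.0     # fixed boundary temperatures

print(theta.max())  # the peak decays: heat flows from warmer to cooler parts
```

The decaying maximum reflects the physical statement above: with no heat sources, temperature differences are smoothed out over time.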

The flow of viscous fluids in a rectilinear medium may be analyzed with the following equations. Consider a rectilinear medium conducting a viscous liquid.


The axis of the medium, seen as a tube, is *Oz*. Let us consider the movement of a part of the liquid between two transversal sections *z*1 and *z*1+*h*. If A is the transversal section area, supposed to be constant, and ρ is the fluid density, the movement equation is

$$
\rho A h \frac{\partial v}{\partial t} = A(p\_1 - p\_2) - R \tag{5}
$$

where *p*1 and *p*2 are the pressures in the two sections and *R* is the force on the tube wall. If *v* is the fluid speed in the direction of Oz axis, *v* is independent of *z* if the liquid is incompressible

$$
v = v(x, y, t) \tag{6}
$$

The partial differential equation is

$$
\rho \frac{\partial v}{\partial t} = -\frac{\partial p}{\partial z} + \mu \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right) \tag{7}
$$

if the pressure *p* is constant

$$\frac{1}{a}\frac{\partial v}{\partial t} = \left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) \tag{8}$$

which is the equation of heat propagation in the plane, where *a* = μ/ρ.

For the plane motion of viscous fluids, let us consider an incompressible, viscous fluid of constant density ρ in plane movement. If (*v*x, *v*y) are the speed components at the point P(*x*, *y*) of the plane at the time moment *t*, the movement equations are

$$\begin{aligned} \frac{\partial \upsilon\_x}{\partial t} + \upsilon\_x \frac{\partial \upsilon\_x}{\partial x} + \upsilon\_y \frac{\partial \upsilon\_x}{\partial y} &= -\frac{1}{\rho} \frac{\partial p}{\partial x} + \nu \Delta \upsilon\_x \\ \frac{\partial \upsilon\_y}{\partial t} + \upsilon\_x \frac{\partial \upsilon\_y}{\partial x} + \upsilon\_y \frac{\partial \upsilon\_y}{\partial y} &= -\frac{1}{\rho} \frac{\partial p}{\partial y} + \nu \Delta \upsilon\_y \end{aligned} \tag{9}$$

where *p* is the pressure at this point and ν is the kinematic viscosity coefficient. To the equations (9) the equation of continuity is added

$$\frac{\partial v\_x}{\partial \mathbf{x}} + \frac{\partial v\_y}{\partial y} = 0 \tag{10}$$

The stream function ψ is introduced

$$v_x = -\frac{\partial \psi}{\partial y}, \quad v_y = \frac{\partial \psi}{\partial x} \tag{11}$$
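A quick numerical check (a sketch, not part of the chapter; the sample function ψ below is an arbitrary smooth choice) shows that velocities derived from a stream function, with the usual opposite-sign convention, satisfy the continuity equation (10):

```python
import math

# Check that v_x = -dψ/dy, v_y = +dψ/dx gives dv_x/dx + dv_y/dy = 0:
# the mixed partial derivatives of ψ cancel for any smooth ψ(x, y).
def psi(x, y):
    # Arbitrary smooth stream function, illustrative only.
    return math.sin(x) * math.cos(y) + x * y**2

eps = 1e-5

def v_x(x, y):   # v_x = -dψ/dy, central difference
    return -(psi(x, y + eps) - psi(x, y - eps)) / (2 * eps)

def v_y(x, y):   # v_y = +dψ/dx, central difference
    return (psi(x + eps, y) - psi(x - eps, y)) / (2 * eps)

# Divergence at an arbitrary point: vanishes up to floating-point error.
x0, y0 = 0.7, -0.3
div = ((v_x(x0 + eps, y0) - v_x(x0 - eps, y0)) / (2 * eps)
       + (v_y(x0, y0 + eps) - v_y(x0, y0 - eps)) / (2 * eps))
print(abs(div))
```

The two nested central differences build the same mixed-difference stencil with opposite signs, so the divergence cancels almost exactly.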

The analysis of non-stationary heat transfer in the underground is needed when a series of problems arise in the calculation of heat losses under non-stationary heat-exchange conditions. For determining the non-stationary heat losses in the underground, the following equation is used


$$\frac{\partial \theta}{\partial t} = a \left[ \frac{\partial^2 \theta\_s}{\partial x^2} + \frac{\partial^2 \theta\_s}{\partial y^2} \right] \tag{12}$$

where θ is the temperature of the material, θs is the soil temperature, *t* is the time, *x*, *y* are the Cartesian coordinates and *a* is a coefficient characterizing soil thermal diffusion. Often in practice it is of great importance to analyze the flow of gases, to establish the pressure at a certain point of a medium. The non-stationary flow of a gas is defined by the system

$$\begin{aligned} \frac{\partial p}{\partial x} &= \rho \left( \frac{\partial v}{\partial t} + \frac{\lambda v^2}{2d} \right) \\ \frac{\partial \rho}{\partial t} &= \frac{\partial (\rho v)}{\partial x} \end{aligned} \tag{13}$$

where *p* is the pressure, *v* is the speed related to a section, *d* is the medium diameter and λ is the friction coefficient.

A method used to determine the ice height of the arctic seas is radiometry. Radiometry is based on the registration of heat radiation, whose intensity varies with the temperature and the radiation coefficient of the objects. The value of the radiation characterizes the relation between ice heights in their different stages. The temperature of the ice surface is determined from the heat equation, which describes the heat distribution in snow and ice

$$
c_j \rho_j \frac{\partial \theta_j}{\partial t} = \lambda_j \frac{\partial^2 \theta_j}{\partial z^2}, \ j = 1, 2, 3 \tag{14}
$$

where *c*j is the specific heat, ρj is the density, λj is the thermal conductivity coefficient, θj is the temperature, *t* is the time and *z* is the height coordinate. The indices *j* = 1, 2, 3 correspond to the three media: air, snow and ice. At the frontiers between the media, equilibrium conditions hold.

#### **2.2 General equations for modeling environment systems seen as distributed parameter systems**

Distributed parameter systems have general mathematical models, continuous in time and space, in the form of partial differential equations of parabolic or hyperbolic type:

$$\frac{\partial \theta}{\partial t} = c\_1 \nabla (c\_2 \nabla \theta) + c\_3 \theta + Q \tag{15}$$

$$\frac{\partial^2 \theta}{\partial t^2} = c\_1 \nabla(c\_2 \nabla \theta) + c\_3 \theta + Q \tag{16}$$

where the variable θ(ζ, *t*) depends on the time *t* ≥ 0 and on the space coordinate ζ ∈ V, where ζ is *x* for one axis, (*x*, *y*) for two axes or (*x*, *y*, *z*) for three axes; *c*1, *c*2 and *c*3 are coefficients, which may also be time-variant, and *Q*(ζ, *t*) is an exterior excitation, variable in time and space. So, in the general case, an implicit equation may be written:

$$f\left(\frac{\partial\theta}{\partial t}, \frac{\partial^2\theta}{\partial t^2}, \frac{\partial\theta}{\partial \zeta}, \frac{\partial^2\theta}{\partial \zeta^2}, \dots\right) = 0\tag{17}$$


We may consider that the variable θ is measured as the sample θ(*i*, *k*), θik = θ(ζi, *t*k), *i* ∈ V, at equal time intervals of value *h* = *t*k+1 − *t*k, called the sample period, in a sampling procedure with digital equipment, at the sample time moments *t*k = *k*·*h*.

The first and the second derivatives in space may be approximated with small variations in space. For the *x*-axis we may write the following equations:

$$\frac{\partial \theta}{\partial x} = \frac{\theta_{i+1}^k - \theta_i^k}{x_{i+1} - x_i} = \frac{\theta_{i+1}^k - \theta_i^k}{l_p} \tag{24}$$

$$\frac{\partial^2 \theta}{\partial x^2} = \frac{\theta_{i+1}^k - 2\theta_i^k + \theta_{i-1}^k}{l_p^2} \tag{25}$$

where *l*p is the space step between two measurement points. The same equations may be written also for the *y*-axis and the *z*-axis. Of course, an equation with the variables written in vectors could also be written. For the above equation, a linear approximate system of differential equations of first degree may be used:

$$\frac{d\Theta}{dt} = A\Theta + BQ \tag{27}$$

where, this time, Θ is a vector containing the values of the variable θ(ζ, *t*) in different points of the space and at different time moments.

Combining the equations (17, 22, 24) in equation (15), a system of equations with differences results for the parabolic equation:

$$f_p(\theta_i^{k+1}, \theta_i^k, \theta_{i+1}^k, \theta_{i-1}^k) = 0 \tag{28}$$

and, combining the equations (17, 23, 25) in equation (16), an equivalent system with differences results as a model for the hyperbolic equation:

$$f_h(\theta_i^{k+1}, \theta_i^k, \theta_i^{k-1}, \theta_{i+1}^k, \theta_{i-1}^k) = 0 \tag{29}$$

Taking account of equations (28, 29), several estimation algorithms may be developed, based on the discrete models of the partial differential equations.
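As a numerical sketch (with assumed values, not taken from the chapter), the vector approximation dΘ/dt = AΘ + BQ can be formed for a one-dimensional parabolic model, with A as the finite-difference Laplacian matrix:

```python
import numpy as np

# Sketch of the vector form dΘ/dt = A·Θ + B·Q for a 1-D parabolic model.
# A is the tridiagonal second-difference operator scaled by a²/l_p²;
# n, l_p, a² and the initial samples are illustrative assumptions.
n = 10            # interior grid points (e.g. sensor locations)
l_p = 0.1         # space step between measurement points
a2 = 1.0          # diffusion coefficient a²

A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) * a2 / l_p**2
B = np.eye(n)                                # excitation enters each node directly
Q = np.zeros(n)                              # no exterior excitation in this sketch

theta = np.random.default_rng(0).random(n)   # initial field samples θ_i^0
dt = 0.25 * l_p**2 / a2                      # explicit Euler step within stability
for _ in range(400):
    theta = theta + dt * (A @ theta + B @ Q)  # Euler step of dΘ/dt = AΘ + BQ

print(np.abs(theta).max())  # field decays toward zero with zero boundaries
```

Writing the discretized model in this matrix form makes the later estimation step natural: the state vector Θ collects exactly the samples a sensor network delivers.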
For the partial differential equations (1, 2) some boundary conditions may be imposed to establish a solution. When the variable value on the boundary is specified, there are Dirichlet conditions:

$$
c_4 \theta = q \tag{18}
$$

And, when the variable flux and transfer coefficient are specified, there are Neumann conditions:

$$
c_5 \nabla \theta + c_6 \theta = 0 \tag{19}
$$

In the practical application case studies, boundary and initial conditions are imposed on the equation (1):

$$\begin{aligned} \theta(0, t) &= \theta_{\zeta 0}, \ t \in [0, T], \quad \theta(\zeta, 0) = 0, \ \zeta \in [0, l], \\ \theta(l, t) &= \theta_{\zeta l}, \ t \in [0, T] \end{aligned} \tag{20}$$

A system with finite differences may be associated to the equations (1) and (2). For this purpose the space S is divided into small dimension pieces *l*p:

$$l_p = l / n \tag{21}$$

In each small piece Spi, *i* = 1,…,*n*, of the space S, the variable could be measured at each moment *t*k, using a sensor from the sensor network, in a characteristic point Pi(ζi) of coordinate ζi. Let θik be the variable value in the point Pi(ζi) at the moment *t*k.

The points of the space in which the phenomenon is happening are denoted Pi, with the coordinates *z*i. For a bi-dimensional space in a coordinate system xOy, *z*i = (*x*i, *y*i). The phenomenon, as a distributed system, is monitored with a sensor network with *n* sensors Si, *i* = 1,…,*n*, placed in *n* points Pi of the space, as in Fig. 2.


Fig. 2. Space monitoring scheme
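The monitoring scheme of Fig. 2 can be sketched as a data structure: *n* sensors at points Pi sample the field at moments *t*k, producing a matrix of readings θik. The synthetic field, coordinates and sample period below are assumptions for illustration only:

```python
import numpy as np

# n sensors S_i at points P_i = (x_i, y_i) sample a field θ(P, t) at
# moments t_k = k·h; readings[k, i] holds the sample θ_i^k.
rng = np.random.default_rng(1)
n = 8                           # number of sensors
points = rng.random((n, 2))     # assumed sensor coordinates in the unit square
h = 0.5                         # sample period
K = 20                          # number of sampling moments

def field(x, y, t):
    # Synthetic decaying temperature field, illustrative only.
    return 20.0 + 5.0 * np.exp(-t) * np.sin(np.pi * x) * np.sin(np.pi * y)

t_k = h * np.arange(K)
readings = np.array([[field(x, y, t) for x, y in points] for t in t_k])

print(readings.shape)   # (K, n): one row of sensor samples per moment t_k
```

The readings matrix is exactly the discrete data set that the finite-difference approximations below operate on.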

It is a generally known method to approximate the derivatives of a variable with small variations. In the equation with partial derivatives there are derivatives of first order in time, and derivatives of first and second order in space. So, theoretically, we may approximate the derivatives of the variable in time with small variations in time, with the following relations:


$$\frac{\partial \theta}{\partial t} = \frac{\theta\_i^{k+1} - \theta\_i^k}{t\_{k+1} - t\_k} \tag{22}$$

$$\frac{\partial^2 \theta}{\partial t^2} = \frac{\theta\_i^{k+1} - 2\theta\_i^k + \theta\_i^{k-1}}{\left(t\_{k+1} - t\_k\right)^2} \tag{23}$$

The first and the second derivatives in space may be approximated with small variations in space to obtain the following relations. For the *x*-axis we may write the following equations:

$$\frac{\partial \theta}{\partial x} = \frac{\theta\_i^k - \theta\_{i-1}^k}{l\_p} \tag{24}$$

$$\frac{\partial^2 \theta}{\partial x^2} = \frac{\theta\_{i+1}^k - 2\theta\_i^k + \theta\_{i-1}^k}{l\_p^2} \tag{25}$$

The same equations may also be written for the *y*- and *z*-axes. Of course, an equation with the variables collected in vectors could be written.
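Relations (22), (24) and (25) can be checked numerically; the sketch below applies the three stencils to samples of an assumed field $\theta(x, t) = \sin(\pi x)e^{-t}$ (illustrative data, not the chapter's measurements):

```python
import numpy as np

# Numerical check of the stencils (22), (24) and (25) on samples of an
# assumed field theta(x, t) = sin(pi*x) * exp(-t).
l, n, h = 1.0, 100, 0.01        # segment length, pieces l_p = l/n, sample period
lp = l / n
x = np.linspace(0.0, l, n + 1)
theta_k = np.sin(np.pi * x)                  # samples at t_k = 0
theta_k1 = np.sin(np.pi * x) * np.exp(-h)    # samples at t_{k+1} = h

# (22): first derivative in time, forward difference
dtheta_dt = (theta_k1 - theta_k) / h
# (24): first derivative in space, backward difference
dtheta_dx = (theta_k[1:] - theta_k[:-1]) / lp
# (25): second derivative in space, central difference
d2theta_dx2 = (theta_k[2:] - 2.0 * theta_k[1:-1] + theta_k[:-2]) / lp**2

# for this field d2theta/dx2 = -pi**2 * theta, so the stencil should agree
# up to the O(lp**2) truncation error
err = np.max(np.abs(d2theta_dx2 + np.pi**2 * theta_k[1:-1]))
```

For this smooth field the central stencil (25) matches the analytic second derivative to within its quadratic truncation error, which is the accuracy the sensor-grid models below inherit.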

We may consider that the variable is measured as the sample $\theta\_i^k = \theta(z\_i, t\_k)$, $i = 1, \dots, n$, at equal time intervals of value:

$$h = t\_{k+1} - t\_k \tag{26}$$

called the sample period, in a sampling procedure with digital equipment, at the sample time moments $t\_k = k \cdot h$.

For the above equation, a linear approximating system of first-order differential equations may be used:

$$\frac{d\Psi}{dt} = A\Psi + BQ\tag{27}$$

where, this time, $\Psi$ is a vector containing the values of the variable $\theta(\zeta, t)$ in different points of the space and at different time moments.

Combining the equations (17, 22, 24) in equation (15), a system of equations with differences results for the parabolic equation:

$$f\_p(\theta\_i^k, \theta\_{i-1}^k, \theta\_i^{k+1}, \theta\_{i+1}^k) = 0 \tag{28}$$

and, combining the equations (17, 23, 25) in equation (16), an equivalent system with differences results as a model for the hyperbolic equation:

$$f\_h(\theta\_i^k, \theta\_{i-1}^k, \theta\_{i+1}^k, \theta\_i^{k+1}, \theta\_i^{k-1}, \theta\_{i-1}^{k-1}, \theta\_{i+1}^{k-1}) = 0 \tag{29}$$

Taking account of equations (28, 29), it is obvious that several estimation algorithms may be developed, based on the discrete models of the partial derivative equations. These estimation algorithms are presented in the following.
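To make the difference system (28) concrete: assuming the 1-D heat equation $\partial\theta/\partial t = a\,\partial^2\theta/\partial x^2$ as the parabolic model (the chapter's equations (15)–(17) are not repeated in this excerpt), solving $f\_p = 0$ for $\theta\_i^{k+1}$ gives the familiar explicit update, sketched here:

```python
import numpy as np

# Solving f_p = 0 in (28) for theta_i^{k+1}, assuming the 1-D heat equation
# dtheta/dt = a * d2theta/dx2 as the parabolic model, gives the explicit
# update theta_i^{k+1} = theta_i^k + r*(theta_{i+1}^k - 2*theta_i^k + theta_{i-1}^k)
# with r = a * h / lp**2.

def parabolic_step(theta, r):
    """One explicit time step; the boundary values are held fixed."""
    nxt = theta.copy()
    nxt[1:-1] = theta[1:-1] + r * (theta[2:] - 2.0 * theta[1:-1] + theta[:-2])
    return nxt

a, lp, h = 1.0, 0.1, 0.004      # r = 0.4 <= 0.5 keeps the scheme stable
r = a * h / lp**2
theta = np.zeros(11)             # 11 nodes spanning a segment of length 10*lp
theta[0] = 1.0                   # hot left margin (Dirichlet condition)
for _ in range(200):
    theta = parabolic_step(theta, r)
# theta now approximates the linear steady-state profile between the margins
```

After enough steps the profile relaxes to the straight line joining the two fixed margins, which is the steady state of the 1-D heat equation.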


#### **3. Algorithms of estimation**

#### **3.1 Parabolic systems**

*Estimation algorithm 1*. It estimates the value of the variable $\theta\_i^{k+1}$ at the moment $t\_{k+1}$, measuring the values of the variables $\theta\_{i-1}^k$, $\theta\_{i+1}^k$, $\theta\_i^k$ at the anterior moment $t\_k$:

$$\theta\_i^{k+1} = f\_1(\theta\_{i-1}^k, \theta\_{i+1}^k, \theta\_i^k) \tag{30}$$

This is a multivariable estimation algorithm, based on the adjacent nodes.

*Estimation algorithm 2*. It estimates the value of the variable $\theta\_i^{k+1}$ at the moment $t\_{k+1}$, measuring the values of the same variable $\theta\_i^k$, $\theta\_i^{k-1}$, $\theta\_i^{k-2}$, $\theta\_i^{k-3}$, but at four anterior moments $t\_k$, $t\_{k-1}$, $t\_{k-2}$ and $t\_{k-3}$:

$$\theta\_i^{k+1} = f\_2(\theta\_i^k, \theta\_i^{k-1}, \theta\_i^{k-2}, \theta\_i^{k-3}) \tag{31}$$

This is an autoregressive algorithm, using the values from the same node.

#### **3.2 Hyperbolic systems**

*Estimation algorithm 3*. It estimates the value of the variable $\theta\_i^{k+1}$ at the moment $t\_{k+1}$, measuring the values of the variables $\theta\_{i-1}$, $\theta\_{i+1}$, $\theta\_i$ at the anterior time moments $t\_k$ and $t\_{k-1}$:

$$\theta\_i^{k+1} = f\_3(\theta\_{i-1}^k, \theta\_{i+1}^k, \theta\_i^k, \theta\_{i-1}^{k-1}, \theta\_{i+1}^{k-1}, \theta\_i^{k-1}) \tag{32}$$

This is a multivariable estimation algorithm, based on the adjacent nodes and two anterior time moments.

*Estimation algorithm 4*. It estimates the value of the variable $\theta\_i^{k+1}$ at the moment $t\_{k+1}$, measuring the values of the same variable (the same node) $\theta\_i^k$, $\theta\_i^{k-1}$, $\theta\_i^{k-2}$, $\theta\_i^{k-3}$, $\theta\_i^{k-4}$, $\theta\_i^{k-5}$, but at six anterior moments $t\_k$, $t\_{k-1}$, $t\_{k-2}$, $t\_{k-3}$, $t\_{k-4}$ and $t\_{k-5}$:

$$\theta\_i^{k+1} = f\_4(\theta\_i^k, \theta\_i^{k-1}, \theta\_i^{k-2}, \theta\_i^{k-3}, \theta\_i^{k-4}, \theta\_i^{k-5}) \tag{33}$$
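The four estimators (30)–(33) differ only in which past samples enter the regressor; a generic sketch makes that explicit. Here `theta[k][i]` is a hypothetical sample store (node `i` at moment $t\_k$) and `f` is a placeholder function still to be fitted, as the chapter does later with ANFIS:

```python
# Regressor builders for the four estimators; interior nodes only
# (1 <= i <= n-2), and k large enough for the required history.

def regressor_alg1(theta, i, k):
    # (30): adjacent nodes at the previous moment t_k
    return [theta[k][i - 1], theta[k][i + 1], theta[k][i]]

def regressor_alg2(theta, i, k):
    # (31): the same node at four previous moments
    return [theta[k - j][i] for j in range(4)]

def regressor_alg3(theta, i, k):
    # (32): adjacent nodes at the two previous moments t_k and t_{k-1}
    return (regressor_alg1(theta, i, k) +
            [theta[k - 1][i - 1], theta[k - 1][i + 1], theta[k - 1][i]])

def regressor_alg4(theta, i, k):
    # (33): the same node at six previous moments
    return [theta[k - j][i] for j in range(6)]

def estimate(f, regressor, theta, i, k):
    """One-step-ahead estimate theta_i^{k+1} = f(regressor)."""
    return f(regressor(theta, i, k))
```

Only the regressor changes between the parabolic and hyperbolic families; the fitting machinery downstream can stay the same.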

#### **4. Modeling and simulation**

Environment behavior may be modeled with the equations from the above paragraph. Using these models, some analyses in the time and space domains may be accomplished. Some transient characteristics of the temperature are presented there for 101 samples. The nodes-and-meshes structure for a sensor network with a reduced number of sensors, in this case 13, is presented in Fig. 3.

Fig. 3. Nodes and meshes for heat transfer in plane

The temperature variation in 3D is presented in Fig. 4, at a certain time moment.

Fig. 4. Temperature variation in space


Temperature isotherms in plane are presented in Fig. 5.

Identical characteristics may be obtained for other distributed parameter systems involved in environmental modeling.

Fig. 5. Temperature isotherms

#### **5. Sensor network**

Modern sensors are smart, small, lightweight and portable devices, with a communication infrastructure intended to monitor and record specific parameters like temperature, humidity, pressure, wind direction and speed, illumination intensity, vibration intensity, sound intensity, power-line voltage, chemical concentrations and pollutant levels at diverse locations. The number of sensors in a network may reach hundreds or thousands of ad hoc tiny sensor nodes spread across different areas. Thus, the network actively participates in creating a smart environment. With them we may develop low-cost wireless platforms, including integrated radios and microprocessors. The sensors are adequate for autonomous operation in highly dynamic environments such as distributed parameter systems. We may add sensors when others fail. They require distributed computation and communication protocols. They ensure scalability, where quality can be traded for system lifetime. They ensure Internet connections via satellite.

The structure of a modern sensor is presented in Fig. 6.


#### Fig. 6. The structure of a modern sensor

The constructive and functional representation of a sensor network is presented in Fig. 7.

Fig. 7. Sensor network

The sensor $S\_A$ measures the temperature $\theta\_A$ in a point of this space.

There have been used in practice: a Memsic eKo Outdoor Wireless Monitoring System with 4 eKo sensor nodes EN2100, an eKo base radio EB2110 and an eKo gateway with built-in eKoView web application. The eKo wireless sensor nodes form a wireless mesh network with a communication range of several hundred meters, accepting up to four sensor inputs. They are powered by solar cells or rechargeable batteries. The eKo base radio provides the connection between the eKo sensor nodes and the eKo gateway, via a USB interface for data transfer. An eKo weather station sensor suite has been used, with wind speed, wind direction, rain gauge, ambient temperature/humidity, barometric pressure and solar radiation. Each node has a temperature and humidity sensor to measure the ambient relative humidity and air temperature and to calculate the dew point. The base station is wireless, with computing, energy and communication resources, acting as an access gate between the sensor nodes and the end user. The sensor nodes have two components. The processor/radio modules activate the low-power measuring system.

Fig. 8. Components of the sensor network used in practice

They work at the frequency of 2.4 GHz. The sensor network is also provided with software for data acquisition, which reads data from a database. The sensor network works in real time, with a driver which ensures data acquisition from the base station.

#### **6. Monitoring application**

#### **6.1 Monitoring structure**


The estimation model describes the evolution of a variable measured over the same sample period as a non-linear function of past evolutions. This kind of system evolves due to its "non-linear memory", generating internal dynamics. The estimation model definition is:

$$y(t) = f(u\_1(t), \dots, u\_n(t))\tag{34}$$

where $u(t)$ is a vector of the series under investigation (in our case, the series of values measured by the sensors from the network):

$$u = \begin{bmatrix} u\_1 & u\_2 & \dots & u\_n \end{bmatrix}^T \tag{35}$$

and $f$ is the non-linear estimation function of the non-linear regression, $n$ being the order of the regression. By convention, all the components $u\_1(t), \dots, u\_n(t)$ of the multivariable time series $u(t)$ are assumed to be zero mean. The function $f$ may be estimated when the time series $u(t), u(t-1), \dots, u(t-n)$ is known (recursive parameter estimation), or it may predict a future value when the function $f$ and the past values $u(t-1), \dots, u(t-n)$ are known (AR prediction). The method uses the time series of measured data provided by each sensor and relies on an (auto-)regressive multivariable predictor placed in the base stations, as presented in Fig. 9.

Fig. 9. Estimation and detection structure
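As a stand-in for the non-linear $f$ of (34), a linear least-squares AR model illustrates the same fit-then-predict mechanics on an assumed zero-mean series (the chapter's $f$ is an ANFIS network):

```python
import numpy as np

# Fit an AR(n) predictor by least squares, then make a one-step prediction,
# mirroring the parameter-estimation / prediction split described above.

def fit_ar(u, n):
    """Least-squares AR(n): predict u[t] from u[t-1], ..., u[t-n]."""
    rows = [u[t - n:t][::-1] for t in range(n, len(u))]
    coef, *_ = np.linalg.lstsq(np.array(rows), u[n:], rcond=None)
    return coef

def predict(coef, history):
    """One-step prediction from the n most recent values, newest first."""
    return float(np.dot(coef, history))

t = np.arange(300)
u = np.sin(2.0 * np.pi * t / 50.0)   # zero-mean series, as assumed in the text
coef = fit_ar(u, n=4)
y_hat = predict(coef, u[-1:-5:-1])   # estimate of the next sample u[300]
```

A sinusoid satisfies an exact second-order recursion, so the fitted model predicts the next sample essentially exactly; real sensor series leave a residual, which is precisely what the detection step below thresholds.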

The principle of the estimation is the following: the sensor nodes will be identified by comparing their output values $\theta\_A(t)$ with the values $y(t)$ predicted using past/present values provided by the same sensors or by adjacent sensors ($\theta\_{adj}$). After this initialization, at every time instant $t$ the estimated values are computed relying only on past values $\theta\_A(t-1), \dots, \theta\_A(0)$, and both parameter estimation and prediction are used. First, the parameters of the function $f$ are estimated by training on measured values, with a training algorithm such as back-propagation. After that, the present values $\theta\_A(t)$ measured by the sensor nodes may be compared with their estimated values $y(t)$ by computing the errors:

$$e\_A(t) = \left| \theta\_A(t) - y(t) \right| \tag{36}$$

If these errors are higher than the imposed thresholds at the sensor measuring points, a fault occurs. Here, based on a database containing the known models, on a knowledge-based system, we may see the case as a multi-agent system, which can perform critics, learning and changes, taking decisions based on node analysis from the network topology. Two parameters can influence the decision: the type of the distributed parameter system which offers the data measured by the sensors, and the computing limitations. Because both of them are a priori known, an off-line methodology is proposed. Realistic values are situated between 3 and 6.
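The residual test (36) with a per-node threshold can be sketched as follows; the threshold name `eps_A` is an assumed symbol, since the excerpt does not fix one:

```python
# Flag the sensor nodes whose residual |theta_A(t) - y(t)| exceeds the
# imposed threshold eps_A (assumed name for the per-node threshold).

def detect_faults(measured, predicted, eps_A):
    """Return the indices of nodes whose residual exceeds eps_A."""
    return [node for node, (m, p) in enumerate(zip(measured, predicted))
            if abs(m - p) > eps_A]

measured = [20.1, 20.3, 27.9, 20.2]    # node 2 drifts away from its estimate
predicted = [20.0, 20.4, 20.2, 20.1]   # values y(t) from the predictor
faulty = detect_faults(measured, predicted, eps_A=3.0)
```

With these illustrative numbers only node 2 exceeds the threshold and is reported as faulty.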

#### **6.2 Estimator mechanism**

The estimator is a non-linear one, described by the function $y = f(u\_1, u\_2, \dots, u\_n)$, using adaptive-network-based fuzzy inference (ANFIS). Its general structure is presented in Fig. 10.

Fig. 10. The estimator input-output general structure

The number of inputs depends on the estimation algorithm, on the specific position in space of the measuring points, on the conditions of determination. The ANFIS procedure is well known and it may use a hybrid learning algorithm to identify the membership function parameters of the adaptive system. A combination of least-squares and back-propagation gradient descent methods may be used for training membership function parameters, modeling a given set of input/output data.
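A minimal sketch of the hybrid idea: here the Gaussian premise parameters are kept fixed and only the constant rule consequents are solved by least squares (the batch half of the ANFIS hybrid; the gradient-descent half that tunes the premises is omitted, and the data are assumed):

```python
import numpy as np

# Zero-order Sugeno estimator: fixed Gaussian premises, consequents by
# least squares -- a simplified stand-in for the ANFIS hybrid training.

def firing_strengths(x, centers, sigma):
    """Normalized firing strength of each single-input rule."""
    w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
    return w / w.sum(axis=1, keepdims=True)

def fit_consequents(x, y, centers, sigma):
    """Least-squares solve for the constant consequent of each rule."""
    W = firing_strengths(x, centers, sigma)
    p, *_ = np.linalg.lstsq(W, y, rcond=None)
    return p

def sugeno_predict(x, centers, sigma, p):
    return firing_strengths(x, centers, sigma) @ p

x = np.linspace(0.0, 1.0, 200)
y = np.sin(np.pi * x)                  # assumed training data
centers = np.linspace(0.0, 1.0, 9)     # fixed premise membership centers
sigma = 0.1
p = fit_consequents(x, y, centers, sigma)
y_hat = sugeno_predict(x, centers, sigma, p)
```

Because the output is linear in the consequents once the premises are fixed, this half of the training reduces to one linear least-squares solve, which is what makes the hybrid algorithm fast.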

#### **6.3 Monitoring method**

The following method is in accordance with the objectives of monitoring defined distributed parameter systems from practical applications in the real world, such as heat distribution or wave propagation. These systems have a known mathematical model, a partial differential equation, as a primary model from physics, with well-defined boundary and initial conditions for the system in practice. These represent the basic knowledge for a reference model from real data observation. The primary physical model must be meshed, in order to obtain a mathematical model as a multi-input multi-output state-space model. Unstructured meshes may be generated. The sensors must be placed in the field according to the meshes, structured under the form of nodes and triangles. A scenario for practical applications can be chosen and simulated. The simulation and the practical measurements produce transient regime characteristics. Those transient characteristics are due to the system dynamics in a training process; in steady state we cannot train the neural model. On these transient characteristics, seen as time series, the estimation algorithms may be applied. ANFIS is used to implement the non-linear estimation algorithms. With these algorithms, future states of the process may be estimated. Possible faults in the system are chosen and strategies for detection may be developed, to identify and to diagnose them, based on the state estimation.

In practice, applying the method presumes the following steps:

- placing a sensor network in the field of the distributed parameter system;
- acquiring data in time from the sensor nodes, for the system variables;
- using measured data to determine an estimation model based on ANFIS;
- using measured data to estimate the future values of the system variables;
- imposing an error threshold for the system variables;
- comparing the measured data with the estimated values;
- if the determined error is greater than the threshold, a fault occurs;
- diagnosing the fault, based on estimated data, determining its place in the sensor network and in the distributed parameter system field.

## **7. Expert system**


#### **7.1 Process knowledge**

Knowledge that may be determinate from measurements upon the process variables made using sensor networks is as it follows:


sample period *h*;


at two onsecutive time moments *t* and *t*-*h*. The speed of the difference in space is given the speed of the space displacement in a sense in which the phenomenon is happening.

We may use also the variables obtained as estimation, as it follows.


^

Applying the Technology of Wireless Sensor Network in Environment Monitoring 111

true. When an exert system is developed for monitoring distributed parameter systems, it is

There is presented a basic case study consisting in a heat distribution flux through a plane square surface of dimensions l=1, with Dirichlet boundary conditions as constant

> *h r*

*nk q g* 

() ( ) *C k Qh ext <sup>t</sup>*

where is the density of the medium, C is the thermal (heat) capacity, *k* is the thermal conductivity, coefficient of heat conduction, *Q* is the heat source, *h* is the convective heat transfer coefficient, ext is the external temperature. Relative values are chosen for the

In the case of study, a small sensor network with only 13 nodes had been used in laboratory tests. The number of sensor is equivalent to a reduced number of nodes and meshes, as it is

In the case study, we are choosing the nodes 8, 13, 12 5 and 11 in order to apply the

The transient characteristics of the temperature (in relative values) are presented in Fig. 12,

The transient characteristics of the 12th and 13th nods are the same, so they are plotted one

 

(39)

with *r*=0, and a Neumann boundary condition as a flux temperature from a source

estimation method. These nodes are marked with bold characters on figure.

over the other, and in the Fig. 12 there are only four characteristics instead of five.

*C*=1, *Q*=10, *k*=1.

(37)

(38)

necessary to test both, to see what it is happening in the field.

where *q* is the heat transfer coefficient *q*=0, *g*=0, *h*=1.

The heat equation, of a parabolic type, is:

Fig. 11. Sensor network position in the field

in the position scheme from Fig. 11.

**8. Case study** 

equation parameters:

for 101 samples.

temperature on three margins:

time moments *t* and *t*-*h*: *h htt dt td iii* )-(-)()( ^ ^^ , where the discrete time approximation is used, for a constant sample period *h*;

the estimated difference in space ^ *dij* , from two values of two adjacent sensor variables: ^ - ^ ^ *ij i j* () () () *t tt* , given by the estimators Ei and Ej, for the points Pi and Pj; The estimated difference in space is given the estimated sense in which the phenomenon is estimated to take place.


*dt h* the difference of estimates in space is given the speed of the estimate of the space displacement in a sense in which the phenomenon is estimated to happen.

Some errors between the estimates and the actual variables may be introduced: - ^ *ve vv* the error at the process value; - ^ *se ss* - the error in speed of phenomenon happening in some field point; - ^ *de dd* -the error in space difference of two adjacent points and - ^ *sd e sd sd* - the error of speed of phenomenon propagation in space.

In order to make estimations, we may use the values provided by the sensors.

#### **7.2 Expert system structure**

For these process variables *v*, *s*, *d*, *sd* and for the estimated variables ^^^ ^ *v s d sd* ,, , some values may be defined as negative N and positive P or around zero Z, with some degrees: small S, medium M or big B. So, we may have the following combinations put on an axis: NB, NM, NS, Z, PS, PM, PB. To emphasize a non-linear character of the process, the usage of only three fuzzy values is recommended.

The reasoning is as it follows: -If the derivatives are negative, we may say the phenomenon is decreasing. -If the derivative are positive, the phenomenon is increasing; -If the differences are negative, the phenomenon sense is opposite from the two sensors and measuring points. - If the speed of the difference is positive, the space becomes to be not homogenous, something is happening in the space between the two sensors.

The expert system is developed using a backward chaining. Some rules from the rule base for this expert system are: (1) IF *v* is Z THEN the process is supressed (cf = 10 %); (2) IF *v* is NOT Z THEN the process is NOT supressed (cf = 90 %); (3) IF s is Z THEN the process is NOT in course (cf = 10 %); (4) IF *s* is NOT Z THEN the process is in course (cf = 90 %), and so on. Many other rules may be developed according to the above considerations.

The application may be framed among the so-called "goal driven methods". In real distributed parameter systems there are phenomena with small certainty whose opposite seems to be true. When an expert system is developed for monitoring distributed parameter systems, it is necessary to test both a hypothesis and its opposite, to see what is happening in the field.
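A toy illustration of how such rules with certainty factors might be evaluated is given below; the membership width and the way certainty factors are scaled by the fuzzy degree are our own assumptions, not the chapter's implementation:

```python
# Toy sketch of rules (1)-(4) above with certainty factors (values illustrative).

def fuzzify_zero(x, width=0.5):
    """Membership degree of x in the fuzzy value Z (around zero), triangular."""
    return max(0.0, 1.0 - abs(x) / width)

def evaluate(v, s):
    """Return (conclusion, certainty) pairs; each rule and its opposite is tested."""
    mu_v = fuzzify_zero(v)
    mu_s = fuzzify_zero(s)
    return [
        ("process suppressed", mu_v * 0.10),            # rule (1)
        ("process NOT suppressed", (1 - mu_v) * 0.90),  # rule (2)
        ("process NOT in course", mu_s * 0.10),         # rule (3)
        ("process in course", (1 - mu_s) * 0.90),       # rule (4)
    ]

for conclusion, cf in evaluate(v=0.8, s=0.4):
    print(f"{conclusion}: cf={cf:.2f}")
```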

#### **8. Case study**

We present a basic case study consisting of a heat distribution flux through a plane square surface of dimension l = 1, with Dirichlet boundary conditions as constant temperature on three margins:

$$
h\theta = r \tag{37}
$$

with *r* = 0, and a Neumann boundary condition as a temperature flux from a source

$$\mathbf{n} \cdot (k\nabla\theta) + q\theta = g \tag{38}$$

where *q* is the heat transfer coefficient; here *q* = 0, *g* = 0, *h* = 1. The heat equation, of parabolic type, is:

$$
\rho C \frac{\partial \theta}{\partial t} = \nabla \cdot (k \nabla \theta) + Q + h\_{\theta} (\theta\_{ext} - \theta) \tag{39}
$$

where $\rho$ is the density of the medium, *C* is the thermal (heat) capacity, *k* is the thermal conductivity (coefficient of heat conduction), *Q* is the heat source, $h_\theta$ is the convective heat transfer coefficient and $\theta_{ext}$ is the external temperature. Relative values are chosen for the equation parameters: *C* = 1, *Q* = 10, *k* = 1.
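A minimal finite-difference sketch of Eq. (39) on the unit square can illustrate the setup; note this is our own explicit-Euler discretization with an illustrative grid and time step (the chapter uses a mesh-based model), and the convective exchange term is dropped for simplicity:

```python
import numpy as np

# Explicit finite-difference sketch of eq. (39) on the unit square, with
# rho = C = k = 1, Q = 10, convective term dropped (assumption of this sketch).
# Three Dirichlet edges held at 0 (eq. 37) and one zero-flux Neumann edge (eq. 38).
n, dx, dt, steps = 21, 1.0 / 20, 1e-4, 2000   # illustrative grid and step
Q = 10.0
theta = np.zeros((n, n))
for _ in range(steps):
    lap = np.zeros_like(theta)
    lap[1:-1, 1:-1] = (theta[2:, 1:-1] + theta[:-2, 1:-1] +
                       theta[1:-1, 2:] + theta[1:-1, :-2] -
                       4 * theta[1:-1, 1:-1]) / dx**2
    theta[1:-1, 1:-1] += dt * (lap[1:-1, 1:-1] + Q)   # forward Euler step
    theta[0, :] = theta[1, :]                          # Neumann edge: zero flux
    theta[-1, :] = theta[:, 0] = theta[:, -1] = 0.0    # Dirichlet edges at 0
print(theta.max())   # interior temperature rises toward a positive steady state
```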

In the case study, a small sensor network with only 13 nodes was used in laboratory tests. The number of sensors corresponds to a reduced number of nodes and meshes, positioned as in the scheme from Fig. 11.

In the case study we choose the nodes 8, 13, 12, 5 and 11 in order to apply the estimation method. These nodes are marked with bold characters in the figure.

The transient characteristics of the temperature (in relative values) are presented in Fig. 12, for 101 samples.

The transient characteristics of the 12th and 13th nodes are the same, so they are plotted one over the other; hence Fig. 12 shows only four characteristics instead of five.

Fig. 11. Sensor network position in the field


Fig. 12. Transient characteristics

As an example we present the estimation for the 5th node. It is the node of the estimated variable, based on the first recursive algorithm:

$$
\theta\_{5}^{k+1} = f(\theta\_{8}^{k}, \theta\_{13}^{k}, \theta\_{12}^{k}, \theta\_{11}^{k}) \tag{40}
$$

The fuzzy inference system structure is presented in Fig. 13.

Fig. 13. FIS structure

A short description of ANFIS and its function-approximation property follows. The number of inputs depends on the algorithm type. For the 1st and 2nd algorithms there are 4 inputs, because of the first-order time derivative of the parabolic model. For the 3rd and 4th algorithms there are 6 inputs, because of the second-order time derivative of the hyperbolic model. The ANFIS procedure may use a hybrid learning algorithm to identify the membership function parameters of a single-output, Sugeno-type fuzzy inference system. A combination of least-squares and back-propagation gradient descent methods may be used to train the membership function parameters, modelling a given set of input/output data.

In the inference method, *and* may be implemented with product or minimum, *or* with maximum or summation, implication with product or minimum, and aggregation with maximum or arithmetic mean. The first layer is the input layer. The second layer represents the input membership or fuzzification layer: the neurons represent fuzzy sets used in the antecedents of the fuzzy rules and determine the membership degrees of the input, the activation function representing the membership functions. The 3rd layer represents the fuzzy rule base layer: each neuron corresponds to a single fuzzy rule from the rule base. The inference is in this case the sum-prod inference method, the conjunction of the rule antecedents being made with product. The weights of the 3rd and 4th layers are the normalized degrees of confidence of the corresponding fuzzy rules; these weights are obtained by training in the learning process. The 4th layer represents the output membership functions, the activation function being the output membership function. The 5th layer represents the defuzzification layer, with a single output, and the defuzzification method is the centre of gravity.
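The layered computation just described can be sketched for a simplified zero-order Sugeno system as follows; the Gaussian membership centres and widths, and the constant rule consequents, are illustrative values, not the trained ANFIS parameters of the case study:

```python
import math

# Simplified zero-order Sugeno inference pass: Gaussian input memberships
# (layer 2), product conjunction (layer 3), normalization and weighted
# output (layers 4-5). All parameter values below are illustrative.

def gauss(x, c, sigma):
    """Membership degree of input x in a Gaussian fuzzy set."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sugeno(inputs, rules):
    """rules: list of (centres, sigmas, constant consequent)."""
    w = [math.prod(gauss(x, c, s) for x, c, s in zip(inputs, cs, ss))
         for cs, ss, _ in rules]                 # rule firing strengths
    total = sum(w)                               # normalization
    return sum(wi * out for wi, (_, _, out) in zip(w, rules)) / total

rules = [
    ((0.0, 0.0), (1.0, 1.0), 0.0),  # IF x1 around 0 AND x2 around 0 THEN y = 0
    ((1.0, 1.0), (1.0, 1.0), 1.0),  # IF x1 around 1 AND x2 around 1 THEN y = 1
]
print(sugeno((0.5, 0.5), rules))   # midway input gives a midway output
```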

The comparison transient characteristics for training and testing output data are presented in Fig. 14.

Fig. 14. Comparison between training and testing output

The two characteristics are plotted on the same graph, to show that there is no significant difference between them: one for the training data and one, plotted with \*, for the FIS output. The difference between the training case and the testing case is very small; the plotting marks of the two characteristics fall on the same points. The average testing error is $2.017 \cdot 10^{-5}$. The number of training epochs was 3. If a fault appears at a sensor, for example at the time moment of the 50th sample, an error occurs in the estimation, as shown in Fig. 15.

Fig. 15. Error at the fifth node for a fault in the network

Detection of this error is equivalent to detecting a fault at this sensor or, from another point of view, at the place of the monitored sensor in the space of the distributed parameter system and in the heat flow around the sensor.
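The fault-detection step, a residual between measured and estimated values compared against a threshold, can be sketched as below; the threshold value and the signals are illustrative:

```python
# Residual-based fault detection sketch (threshold chosen for illustration).

def detect_faults(measured, estimated, threshold=0.2):
    """Flag sample indices where |measurement - estimate| exceeds the threshold."""
    return [k for k, (m, e) in enumerate(zip(measured, estimated))
            if abs(m - e) > threshold]

measured  = [1.00, 1.02, 1.01, 1.60, 1.01]   # node readings; fault at sample 3
estimated = [1.00, 1.01, 1.02, 1.01, 1.00]   # estimates from adjacent nodes
print(detect_faults(measured, estimated))    # -> [3]
```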


### **9. Implementation using virtual instrumentation**

Virtual instrumentation, based on National Instruments technology, has been used for sensor network monitoring. A virtual instrument for sensor network monitoring was built on a personal computer [11]. It includes: data acquisition and processing, an estimator, a data base, a results table and an Excel data base. The control panel is presented in Fig. 16.

Fig. 16. The control panel of the virtual instrument

The block diagram of the virtual instrument for sensor network monitoring is presented in Fig. 17.

Fig. 17. The block diagram of the virtual instrument

The block diagram is built using sub-VIs, input-output virtual instruments and estimation sub-VIs. In this block diagram, the rules may be introduced and computed using inference and confidence factors. The driver assures data manipulation with a very small delay. Long-distance monitoring is possible using a web page, presented in Fig. 18.


Fig. 18. Web page for monitoring

#### **10. Conclusion**


This chapter presents some considerations on environmental monitoring using sensor networks and estimation techniques based on ANFIS, one of the main tools of artificial intelligence.

Four algorithms for estimation and one method for fault detection and diagnosis of distributed parameter systems are presented. The algorithms are based on non-linear exogenous models with regression and auto-regression. The first use the values provided by the adjacent nodes of the sensor network; the second use the values from earlier time moments of the same node. The adaptive-network-based fuzzy inference scheme (ANFIS) is used for system identification based on time series data acquired from an autonomous wireless intelligent sensor network.

An application of expert systems for environment monitoring is presented, based on distributed parameter system theory, with exemplification on the process of heat transfer. It uses the knowledge of the distributed parameter system, the measured variables acquired from the system using a sensor network, and estimates obtained with estimation techniques. The sensor network is seen as a distributed sensor, placed in the measuring field of the distributed parameter system. The positioning of the sensors in the field may be done according to the optimal nodes and triangular meshes of a model and simulation of the environmental process based on distributed parameter system theory. An example of generated meshes and estimated temperature is presented.

The method shows how to use all these concepts for fault detection and diagnosis in environment systems, based on the measured values provided by the sensors and the estimated values computed by the ANFIS estimator: an error is calculated and the fault is detected by a decision taken after a threshold comparison. The usage of virtual instrumentation on personal computers offers a good user interface. This methodology can be implemented efficiently on sensor network base stations, so no other hardware resources are needed.

The research results are presented in the frame of a practical case study, with tests that validate the theory. The key point of the chapter is the development of a methodology for environment monitoring based on several combined concepts: estimation techniques, the theory of


distributed parameter systems, expert systems and wireless sensor networks. A negative aspect is the lack of information related to the error of the measured data in different practical environment applications. In the future, research may be done in order to answer this question related to the accuracy of measurements for different practical cases. Future applications could include computing interpolative values in inaccessible places of the sensor area, the control of distributed parameter systems, and others.

#### **11. Acknowledgement**

This work was made in the frame of the CNCSIS – UEFISCSU, PNII –IDEI\_PCE\_ID923 grant.

#### **12. References**

Akyildiz I.F., Su W., Sankarasubramaniam Y. & Cayirci E. (2002). Wireless Sensor Networks: A Survey, *Computer Networks*, Vol. 38, No. 4, March 2002.

Cuiyun W., Naiang W., Xibao X., Feng Z. & Yinzhou H. (2006). Study on Spatial Thermal Environment in Lanzhou City Based on Remote Sensing and GIS, *IEEE Int. Conf. on Geoscience and Remote Sensing Symposium*, July 31, 2006, Denver, pp. 2466-2468.

Dardari D., Conti A., Buratti C. & Verdone R. (2007). Mathematical Evaluation of Environmental Monitoring Estimation Error through Energy-Efficient Wireless Sensor Networks, *IEEE Transactions on Mobile Computing*, Vol. 6, Issue 7, July 2007, pp. 790-802.

Giannopoulos N., Goumopoulos & Kameas A. (2009). Design Guidelines for Building a Wireless Sensor Network for Environmental Monitoring, *PCI'09, 13th Panhellenic Conf. on Informatics*, 10-12 Sept. 2009, Corfu, pp. 148-152.

Lan S., Qilong M. & Du J. (2008). Architecture of Wireless Sensor Networks for Environmental Monitoring, *GRS 2008 Int. Workshop on Geoscience and Remote Sensing*, 21-22 Dec. 2008, Shanghai, Vol. 1, pp. 579-582.

Rosculet M.N. & Craiu M. (1979). *Differential applicative equations*, RSR Academy Publishing, Bucharest.

Talukder A., Panangadan A. & Herrington A.T. (2008). Autonomous Adaptive Resource Management in Sensor Network Systems for Environmental Monitoring, *2008 IEEE Aerospace Conf.*, 1-8 March 2008, Big Sky, MT, pp. 1-9.

Volosencu C. (2010). Environmental Monitoring Based on Sensor Networks and Artificial Intelligence, *Development, Energy, Environment, Economics (DEEE '10)*, Puerto De La Cruz, Tenerife, Nov. 30 - Dec. 2, 2010, pp. 79-83.

Volosencu C. (2010). Algorithms for estimation in distributed parameter systems based on sensor networks and ANFIS, *WSEAS Transactions on Systems*, Vol. 9, Issue 3, March 2010, pp. 283-294.

## **Tracking Players in Indoor Sports Using a Vision System Inspired in Fuzzy and Parallel Processing**

Catarina B. Santiago<sup>1</sup>, Lobinho Gomes<sup>1</sup>, Armando Sousa<sup>1</sup>, Luis Paulo Reis<sup>2</sup> and Maria Luisa Estriga<sup>3</sup>
<sup>1</sup>*Faculty of Engineering, University of Porto and INESC-Porto, Porto*
<sup>2</sup>*School of Engineering, University of Minho and LIACC, University of Porto, Porto*
<sup>3</sup>*Faculty of Sports, University of Porto and CIFI2D, Porto*
*Portugal*

#### **1. Introduction**

Sports are an important part of today's society and there is an increasing interest by the sports community in having mechanisms that allow them to better understand the dynamics of teams (their own and their opponents'). This information is frequently extracted manually by operators that, after the game, visualize game recordings (frequently TV footage) and perform hand annotation, which is a time-consuming and error-prone task. There is a clear need for automatic mechanisms and methodologies which allow performing these tasks much faster and systematically. The importance of such systems was first highlighted in the late 80s by Franks et al. (Franks & Nagelkerke, 1988; Franks et al., 1987). In this chapter, we present an automatic and intelligent visual system for detecting and tracking handball players based on two cameras that cover the entire playing area. The methodology includes the identification of foreground pixels using dynamic background subtraction and the definition of colour subspaces for each team using a Fuzzy inspired model that allows detecting the players based on the colour properties of their clothes. Player tracking is further improved by using one Kalman Filter per player (object to track). The resulting information is aggregated in an undistorted image view of the entire field that is very interesting and meaningful to the target end-user. The generation of the video is a demanding computational task that takes advantage of parallel computing. The resulting videos are interesting and include several important pieces of information for the human end-user. Tests were conducted on videos collected during the Handball Portuguese SuperCup competition, where the best six Portuguese teams competed for the 2010/11 trophy.

The chapter structure is as follows: the next section presents relevant information on image segmentation methodologies, parallel processing and implementations of automatic visual systems for detecting and tracking players, and describes the main methodologies used. Section 3 discusses the proposed architecture principles, providing an overview of the methodology used. Section 4 presents the results achieved and, finally, Section 5 concludes the chapter with the main conclusions.
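The per-player tracking idea mentioned above, one Kalman filter per detected player, can be sketched with a constant-velocity model; the matrices, frame rate and noise levels below are illustrative assumptions, not the authors' tuning:

```python
import numpy as np

# One constant-velocity Kalman filter per tracked player (illustrative values).
dt = 1 / 25                                          # assumed frame period
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)    # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)    # only position is observed
Q = 1e-2 * np.eye(4)                                 # process noise (assumed)
R = 1e-1 * np.eye(2)                                 # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for a player position measurement z = (px, py)."""
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (np.asarray(z, float) - H @ x)       # update with innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in [(1.0, 2.0), (1.1, 2.1), (1.2, 2.2)]:       # detections over 3 frames
    x, P = kf_step(x, P, z)
print(x[:2])   # filtered player position approaches the measurements
```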

Category Description

Using a Vision System Inspired in Fuzzy and Parallel Processing

Table 1. Overview of image segmentation categories

Fuzzy inspired categorization for colour calibration.

displacements during time.

**2.2 Parallel processing**

interesting.

mixture of both situations) then

Histogram Thresholding Determine the peaks or modes of the multi-dimensional histogram of a colour image. Feature Space Clustering Groups the image feature space into a set of meaningful

<sup>119</sup> Tracking Players in Indoor Sports

Region based Includes region growing, Watershed transform and

Edge Detection Segment the image by finding the edges of

Neural Networks Allow parallel processing and the incorporation of

Optical flow (Barron & Thacker, 2005) is based on the fact that when an object moves in front of a camera, there is a corresponding change on the image, however it assumes small

Following the tendency, we propose, for the video segmentation step, a methodology that combines background subtraction for detecting foreground regions, region growing and a

Initially, computing power was pure sequential processing, that is, sequential operations over time in a single dedicated processor. The hunger for usefulness and processing power lead to advanced solutions such as concurrency and distributed computing. Given large "workloads", concurrency is interleaving processing over time for the multiple "workloads" (even in a single processor). Distributed computing is sharing "workloads" (or parts) over different processors linked over a network. Recent advances in computer architecture introduced multi core architectures that enable true parallel processing of several "workloads" in parallel in the same computer. The challenge remains unaltered: to maximize the usefulness of a computer systems and harness as much computing power as possible, possibly circumventing technical limitations of an architecture, whilst maximizing performance but still keeping cost

When considering a single complex (large) computational job, parallel processing starts by: • (i) dividing work into several pieces (this can be done by the programmer, at run time or a

recognition, classification or clustering.

among them.

texture or motion).

image processing.

Roberts (Roberts, 1963)). Fuzzy These methods allow classes and regions to have a certain

groups or classes based on intensity, colour or texture characteristics of pixels and not on the spatial relation

region split and merge. These methods try to divide the image domain based on the fact that adjacent pixels in a same region have similar visual features (colour, intensity,

each region using one of the well known edge detectors (Canny (Canny, 1986), Sobel (Sobel, 1970),

uncertainty and ambiguity which is generally the case in

non-linearities. They can be used either to pattern

#### **2. Related research**

This section focus on the three main areas involved on the implementation of the system, and therefore presents some of the most common used methodologies for video/image segmentation and for parallel processing as well as an overview of systems that are able to detect and track players in indoor team sports.

#### **2.1 Video/image segmentation**

Video and inherently image segmentation is the first step and probably the most critical step in any vision system. Video segmentation can be subdivided into temporal segmentation and spatial segmentation.

Temporal segmentation corresponds to segmenting the video into meaningful temporal sequences. This kind of segmentation is usually used as the first step of video annotation and tries to segment the video taking into account similarities/dissimilarities between successive frames (Koprinska & Carrato, 2001). On the other hand, spatial segmentation analyses the content of each frame, and divides it into homogeneous regions that correspond to independent objects. The focus of this work is more on spatial segmentation, therefore for a detailed survey on temporal video segmentation please refer to (Koprinska & Carrato, 2001). Spatial video segmentation may be performed using the methodologies that are used for image segmentation and further enhanced using the temporal characteristics of video. In addition, when performing colour analysis there is also the need to choose a colour space.

Regarding colour image segmentation, a detailed survey is provided by (Cheng et al., 2001). As they state, most of the existing colour image segmentation methodologies have their origins on grey scale image segmentation with the addition of a proper colour space choice. The main categories of image segmentation methodologies (Cheng et al., 2001) are summarized on Table 1.

Nowadays there is a tendency to apply techniques from different categories in order to achieve better results. A good example of this tendency is the JSEG algorithm (Deng & Manjunath, 2001), which initially clusters colours into several representative classes, then replaces each pixel by its corresponding colour class label and, at the end, applies a region-growing process directly to the class map in order to identify homogeneous regions.

In videos, contrary to static images, besides the physical (x and y coordinates) and colour information there is also the time component. Using this property it is possible to segment images based on motion over time. There are two main approaches to perform this task: background subtraction and optical flow.

Background subtraction is usually used in situations where a more or less fixed background exists, and subdivides the image into foreground and background regions. Several background subtraction techniques have been proposed in the literature. The main issue in these methods is obtaining a good estimate of the background.

The simplest method to model the background is to use a single static image without objects; however, its efficiency is low, because it does not take into account changes such as lighting effects or shadows that may occur in the background. More robust methods include estimating the background model using a moving average (Heikkila & Silvén, 1999), a median, or even a mixture of Gaussians (Grimson et al., 1998).
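The moving-average idea above can be sketched in a few lines: the background estimate is blended with each new frame, and foreground pixels are those that differ from the estimate by more than a threshold. This is a minimal illustrative sketch (pure Python, grey-scale frames as nested lists; the `alpha` and `threshold` values are hypothetical), not the cited implementations, which operate on real images.

```python
def update_background(background, frame, alpha=0.05):
    """Blend the current frame into the moving-average background estimate."""
    return [[(1 - alpha) * b + alpha * f
             for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """Mark pixels that differ from the background by more than threshold."""
    return [[abs(f - b) > threshold
             for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

background = [[10, 10], [10, 10]]   # empty scene (2x2 toy "image")
frame = [[12, 200], [11, 10]]       # a bright object appears at (0, 1)
mask = foreground_mask(background, frame)
background = update_background(background, frame)
```

Because the background is updated slowly (small `alpha`), gradual lighting changes are absorbed into the model while fast-moving objects keep showing up as foreground.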




Table 1. Overview of image segmentation categories

Optical flow (Barron & Thacker, 2005) is based on the fact that when an object moves in front of a camera there is a corresponding change in the image; however, it assumes small displacements over time.

Following this tendency, we propose, for the video segmentation step, a methodology that combines background subtraction for detecting foreground regions, region growing, and a Fuzzy-inspired categorization for colour calibration.
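The region-growing step mentioned above (and used by JSEG on its class map) amounts to expanding from a seed pixel over 4-connected neighbours with the same class label. A minimal sketch, assuming a small integer class map (all names and the toy input are hypothetical):

```python
from collections import deque

def grow_regions(class_map):
    """Label 4-connected regions of equal class label (region growing)."""
    h, w = len(class_map), len(class_map[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue                      # pixel already belongs to a region
            cls = class_map[sy][sx]
            labels[sy][sx] = next_label
            queue = deque([(sy, sx)])
            while queue:                      # grow from the seed pixel
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and class_map[ny][nx] == cls):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels

class_map = [[0, 0, 1],
             [0, 1, 1]]
labels = grow_regions(class_map)
```

Each homogeneous region receives its own label, so connected groups of same-class pixels become candidate objects for the later detection stages.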

#### **2.2 Parallel processing**

Initially, computing power was purely sequential processing, that is, sequential operations over time in a single dedicated processor. The hunger for usefulness and processing power led to advanced solutions such as concurrency and distributed computing. Given large "workloads", concurrency is the interleaving of processing over time for multiple "workloads" (even in a single processor). Distributed computing is the sharing of "workloads" (or parts of them) over different processors linked over a network. Recent advances in computer architecture introduced multi-core architectures that enable true parallel processing of several "workloads" in the same computer. The challenge remains unaltered: to maximize the usefulness of a computer system and harness as much computing power as possible, possibly circumventing the technical limitations of an architecture, whilst maximizing performance but still keeping costs attractive.

When considering a single complex (large) computational job, parallel processing starts by:

• (i) dividing the work into several pieces (this can be done by the programmer, at run time, or a mixture of both situations), then

• (ii) "transferring" work parts to collaborators (several processors inside the same computer or not), and later

• (iii) completing the job by assembling all intermediate results into a meaningful final solution.
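The divide/transfer/assemble scheme can be sketched with Python's standard thread pool (a shared-memory analogy of the scheme; the chunk size and worker count are arbitrary choices for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # (ii) each collaborating worker processes one piece of the job
    return sum(x * x for x in chunk)

data = list(range(100))
# (i) divide the work into several pieces
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

# (ii) hand the pieces to workers, then (iii) assemble the partial results
with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(process_chunk, chunks))
result = sum(partial)
```

The final `sum(partial)` is step (iii): the intermediate results are combined into the meaningful final solution.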

<sup>121</sup> Tracking Players in Indoor Sports

Using a Vision System Inspired in Fuzzy and Parallel Processing



There are a number of complexities with parallel computing:

• Management of parallelization - the additional computational cost (overhead) to start and maintain parallel execution (sharing program and data, managing requests, extra memory required, etc.);

• Communication overhead - the additional computational cost to communicate with several other processors (examples: transmission of initial relevant data or fundamental intermediate calculations);

• Synchronization problems - the explicit need for sequential operations caused by interdependence among several work parts or the use of shared resources;

• Load imbalancing - the difficulty of sharing workloads as even amounts of work, with the likely problem that, if one of the intermediate calculations takes longer to process than the others, optimization is not as perfect as one could wish for.


In parallel computing, the scalability curve is defined as the speed-up in processing (performance gain) over the number of available processors. Generically, it is not likely at all that a job done on a single processor in a time *t1* could be done on *N* processors in *t1/N* of that time; that is, the time to solve a problem on *N* processors is very frequently *tN > t1/N*. By adding too many processors (large *N*), the scalability curve will drop heavily when too many work parts are generated and the computational overhead of parallel processing outweighs the benefits of having many "workers" (processors). Additionally, adding processors will likely have a severe financial impact on the final overall cost of the computational solution.
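The shape of the scalability curve can be illustrated with a toy timing model in the spirit of Amdahl's law: a serial fraction that never parallelizes, a parallel fraction divided by *N*, and a per-worker coordination overhead. All the constants below are hypothetical, chosen only to make the drop-off visible:

```python
def parallel_time(t1, n, serial_fraction=0.05, overhead_per_worker=0.02):
    """Toy estimate of t_N: serial part + parallel part + coordination cost."""
    return (t1 * serial_fraction
            + t1 * (1 - serial_fraction) / n
            + overhead_per_worker * n)

t1 = 10.0
times = {n: parallel_time(t1, n) for n in (1, 4, 16, 64, 256)}
# t_N falls at first, then the overhead term dominates and t_N rises again
```

With these constants the estimated time improves up to a point (here around 16 processors) and then worsens as the overhead term `overhead_per_worker * n` overtakes the shrinking parallel term, which is exactly the drop in the scalability curve described above.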

OpenMP (OpenMP, 2011) is a technological framework for a cross-platform, shared-memory application programming interface (API) for taking advantage of the run-time distribution of work parts over several processors (Chapman et al., 2007). OpenMP (OMP) was introduced formally in 1997 and is maintained by the Architecture Review Board (ARB), a consortium of industry and academia, and its licence is essentially free. Advertised benefits include, among others, good scalability, implicit communication and programming at a (somewhat) high level. The technological limitations for performance include transferring code and data among several types of memory and assigning work parts (threads) to available resources (processors).

CUDA (NVIDIA, 2011) stands for Compute Unified Device Architecture and is a software platform for massively parallel high-performance computing using NVidia's video graphics hardware. The CUDA software toolkit is proprietary to NVidia, essentially allows free use, and was formally introduced in 2006. A large portion of the (recent, high-end) hardware from that manufacturer is usable with CUDA. The intent is to turn the resources of the video card into a number of generic processors and memory (Halfhill, 2008). The actual number of CPUs and amount of memory available for generic use in the video card is hardware dependent and a highly volatile issue over time. Sometimes 16 or more independent processors are available for custom parallel programming (working frequencies frequently below 1 GHz) and about 512 MB of memory is also frequently usable. These resources are usable if the video card is used for showing 2D images.


Communication with the video card has inherent technological limitations, as code and data must be transferred into the video card, an "external" element when compared to CPUs and memory systems. While using the video card(s) as generic processors presents limitations tied to hardware changes, it is most interesting for applications sharing lots of data, with the added benefit of freeing up the main processor(s) for other tasks. High-end video cards have risen in complexity, performance and cost, and may currently be more expensive than the "main" general-purpose CPU of the computer.

The reader should be aware that, at the time of writing this article, parallel programming is still in its infancy. A promising framework exists, however: OpenCL (Open Computing Language) is an open, royalty-free standard for general-purpose parallel programming of heterogeneous systems across different hardware (Group, 2011b). OpenCL provides a uniform programming environment for software developers to write efficient, portable code for high-performance computing using a diverse mix of multi-core CPUs, GPUs, etc. OpenCL is recent: it was first introduced in late 2009 by the Khronos Group (Group, 2011a), a consortium of companies including Intel, AMD, ATI, ARM and NVidia, among others. Its performance will likely be inferior to pure CUDA, as OpenCL calls CUDA when possible. These topics are currently issues of large interest in the scientific and technological communities because they promise great performance benefits; the most important topics and dominant toolkits were, however, briefly addressed in this section.

Prospective users of parallel computing should take into consideration that there is a non-negligible effort in learning the concepts at stake, and an even greater one in porting large conventional algorithms to take advantage of parallel processing. While OpenMP is not far from common programming techniques, CUDA's approach of offering massive parallelism over common data requires a different way of thinking, and most likely the new algorithm will demand very large portions of code to be written almost from scratch. As in other optimization techniques, profiling the application is of great interest to find which parts of the code take more time to execute and which part(s) will benefit from parallel computing.
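The profiling step suggested above can be done with standard tooling before any porting effort. A minimal sketch using Python's built-in profiler (the two profiled functions are hypothetical stand-ins for an application's hot and cold code paths):

```python
import cProfile
import io
import pstats

def hot_spot():
    # stands in for the expensive part of the application
    return sum(i * i for i in range(50_000))

def cold_spot():
    # stands in for cheap bookkeeping code
    return sum(range(100))

profiler = cProfile.Profile()
profiler.enable()
hot_spot()
cold_spot()
profiler.disable()

# Rank functions by cumulative time to see where parallelism would pay off
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
```

The report ranks functions by time spent, so the candidates for parallelization (here, `hot_spot`) surface at the top before any code is rewritten.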

#### **2.3 Player detection and tracking using vision systems**

Although there are devices to detect and track players using methodologies other than vision (Radio Frequency Identifiers, Local Position Measurement (Stelzer & Fischer, 2004) or Global Positioning Systems), these methodologies are beyond the scope of this paper and will not be addressed.

The sport that has received the most attention from the scientific community is soccer; nevertheless, the focus of this section will be on the work developed for indoor sports, since our object of study is handball and also because the challenges posed by indoor and outdoor sports are quite different. Outdoor sports usually receive more attention from the media, and therefore it is possible to use images provided by TV broadcast cameras; however, the light conditions are much worse and the environment is less controllable. On the other hand, indoor sports usually need a dedicated camera system and, despite being a more controlled environment, players tend to be closer to each other (the playing area is smaller), which brings added difficulties since merging and occlusion situations occur more often.

For indoor team sports it is possible to find works on basketball, handball and indoor soccer, using either broadcast video footage or dedicated cameras placed at strategic places, and several methodologies for player detection and tracking.


Concerning basketball games, we can find (Hu et al., 2011) using broadcast videos. Players are detected by extracting the field (through dominant colour detection) and generating a player mask. Afterwards, players are tracked in image coordinates using a CamShift-based tracking algorithm and their positions are converted into real-world coordinates using an automatic calibration methodology. Results are presented for 48 consecutive frames and show high precision and recall percentages, 91.38% and 91.34%, respectively. However, occlusion and merging between players of the same team affect the detection and are not taken into consideration.

A very promising project is the Autonomous Production of Images based on Distributed and Intelligent Sensing (APIDIS) (APIDIS, 2008), which equipped a basketball court with an acquisition network composed of microphones, conventional and (arrays of) omnidirectional cameras in order to provide a basketball dataset. This dataset was used by (Alahi et al., 2009; Delannay et al., 2009).

Taking advantage of the setup flexibility and the number of cameras, (Alahi et al., 2009) were able to minimize occlusion and merging problems and determine the 3D positions of the players. The player detection is performed via a sparsity-constrained binary occupancy map based on severely degraded silhouettes. These silhouettes are obtained through a basic background subtraction method. Results show that the usage of more than one camera can tremendously increase both the precision and recall rates for player detection, from 57% and 62% in a single-camera system to 76% and 72% if an omnidirectional camera is added.

(Delannay et al., 2009) detect the players from the foreground activity masks determined on multiple views. They take into consideration that players occupy a 3D space and therefore sum the cumulative projections of the multiple views' foreground activity masks on a set of planes that are parallel to the ground plane. Afterwards, regions with larger projection values are considered to be a player and scanned for digits in order to detect the player's number. The tracking propagation is performed frame by frame and is based on the Munkres general assignment algorithm (Munkres, 1957). They compare their methodology with methods that project the activity masks into a single plane and conclude that projecting into multiple planes increases the detection rate and minimizes shadow effects.
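Frame-to-frame propagation of this kind assigns each new detection to an existing track by minimizing a total matching cost (for example, predicted-position-to-detection distance). The Munkres (Hungarian) algorithm solves this in polynomial time; as a minimal, hypothetical sketch of the same idea, a brute-force version over all permutations suffices for tiny instances:

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustive minimum-cost assignment (use the Munkres algorithm
    for realistic problem sizes; this is O(n!) and only for illustration)."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best_perm, best

# cost[i][j]: hypothetical distance between track i's prediction and detection j
cost = [[1.0, 5.0, 9.0],
        [4.0, 2.0, 8.0],
        [7.0, 6.0, 3.0]]
assignment, total = best_assignment(cost)
```

Here each track is matched to its nearest plausible detection while keeping the assignment one-to-one, which is exactly what prevents two tracks from grabbing the same player.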

We could find two works (Kristan et al., 2009; Monier et al., 2009) that explore both basketball and handball, use closed-world assumptions, and use two dedicated cameras placed at the ceiling of the sports hall. Placing the cameras at the ceiling has the advantage of minimizing occlusion problems.

In the first work, players are detected by applying a background mask image obtained by thresholding the differences between the background (obtained by a simple method, for example a median filter) and the current frame, with a dynamic threshold specific to each player. The tracking algorithm is formulated as a closed-world problem based on a bootstrap particle filter, and each tracker is initialized manually. Multiple-player tracking is achieved by partitioning the world into several Voronoi cells, one for each player, which are updated at each time step. Results indicate low failure rates per player per minute, less than 1.1 in the worst test case.
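Partitioning the field into one Voronoi cell per player simply means that every location is assigned to the tracker of the nearest player position, so each tracker only operates inside its own cell. A minimal sketch of that assignment (the player coordinates are hypothetical, in arbitrary field units):

```python
def voronoi_cell(point, players):
    """Index of the player whose position is closest to `point`
    (i.e., the Voronoi cell the point falls into)."""
    px, py = point
    dists = [(x - px) ** 2 + (y - py) ** 2 for x, y in players]
    return dists.index(min(dists))

players = [(2.0, 3.0), (10.0, 3.0), (6.0, 12.0)]   # current player positions
cell_a = voronoi_cell((1.0, 1.0), players)   # nearest to player 0
cell_b = voronoi_cell((9.0, 4.0), players)   # nearest to player 1
cell_c = voronoi_cell((6.0, 10.0), players)  # nearest to player 2
```

Recomputing the cells at each time step, as the cited work does, keeps the partition consistent with the players' movement and stops neighbouring trackers from competing for the same foreground pixels.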

The second work follows some of the assumptions of (Kristan et al., 2009); however, the foreground pixels are detected by creating a dynamic background model rather than acting on the threshold level or generating a background mask, and template matching is afterwards applied to track the players. The templates for each player are manually initialized by the user, and the tracking is performed by searching in a region of interest determined by image resolution and players' speed restrictions. Multiple tracking is also achieved by partitioning the world into Voronoi regions where each tracker acts. The fusion between the two images is performed in real-world coordinates. Results indicate average correction rates between 0.0019 and 0.00677 corrections/frame/player.

Regarding indoor soccer, there are the works of (Needham & Boyle, 2001) and (Kasiri-Bidhendi & Safabakhsh, 2009). (Needham & Boyle, 2001) use a single stationary camera system and apply a multiple-object condensation algorithm. Initially, a bounding box of each player is detected; then, through a propagation algorithm, the fitness of each bounding box is evaluated and adjusted. The prediction stage of the condensation algorithm takes into consideration position estimates from a Kalman filter. They report trajectory-based results compared with hand-annotated values and indicate distance errors of less than 1 meter, which is quite a high value given that a minimum indoor soccer field measures 15x25 meters.

On the other hand, (Kasiri-Bidhendi & Safabakhsh, 2009) use TV broadcast images. Initially, the background colour is determined through clustering; afterwards, the entire field region is extracted using the Mahalanobis distance between pixels in the frame and the background colour distribution. Once the image is background-free, the lines of the field are extracted using a Canny edge operator. The remainder of the information stored in the image corresponds to the players and the ball, which are further refined using physical constraints of area and roundness. The player and ball tracking is based on level-set contours. By using this tracking technique, occluded players are tracked by a single contour instead of two during the occlusion period and are again correctly detected after splitting.

This work follows (Santiago et al., 2011), which presents a methodology based on colour to detect handball players. In addition, we have included a dynamic background model to take light changes into consideration and a Fuzzy-inspired methodology to calibrate the team colours. Moreover, a vector of Kalman filters is used for the tracking process and a HyperVideo with the resulting information is generated.
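A "vector of Kalman filters" means one independent filter per tracked quantity, each alternating predict and update steps. As a minimal, hypothetical sketch (not our actual implementation), a 1D constant-velocity filter for one coordinate of one player could look like this, with process noise `q` and measurement noise `r` chosen arbitrarily:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of one player."""
    def __init__(self, pos, q=0.1, r=1.0):
        self.x = [pos, 0.0]                        # state: position, velocity
        self.p = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
        self.q, self.r = q, r

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + v * dt, v]                   # constant-velocity motion
        p = self.p
        self.p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q,
                   p[0][1] + dt * p[1][1]],
                  [p[1][0] + dt * p[1][1],
                   p[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        k0 = self.p[0][0] / (self.p[0][0] + self.r)   # Kalman gain (position)
        k1 = self.p[1][0] / (self.p[0][0] + self.r)   # Kalman gain (velocity)
        innov = z - self.x[0]                         # measurement residual
        self.x = [self.x[0] + k0 * innov, self.x[1] + k1 * innov]
        p = self.p
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        return self.x[0]

# one filter per tracked player coordinate; feed it noisy measured positions
track = Kalman1D(pos=0.0)
for z in (1.0, 2.1, 2.9, 4.2):
    track.predict()
    track.update(z)
```

The predict step supplies a position estimate even when a detection is missed (e.g., during occlusion), and the update step blends the new measurement with that prediction according to the gains.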

#### **3. Architecture**



#### **3.1 Engineering solution**

Defining a vision system able to identify and track the players in a team game is quite challenging due to the dynamic and spatial characteristics of the game itself. Indoor team games are usually very dynamic, with frequent physical contact among players and rapid movements; a player can, for example, reach velocities of more than 5 m/s. These characteristics impose a careful choice of the system's architecture, which includes not only choosing the cameras and their disposition but also defining the software design.

In order to minimize crowd interference, merging and occlusion situations, and to have a good view of the entire field, we propose using two cameras placed at the ceiling of the sports hall, each covering one half of the field (this solution was also adopted by (Kristan et al., 2009; Monier et al., 2009)).

The cameras chosen are GigEthernet cameras (DFK 31BG03.H model from Imaging Source), which allow placing the recording unit far away from the cameras while maintaining signal integrity; they have a resolution of 1024x768 pixels and can supply images at 30 frames per second. The images collected from the cameras are shown in Fig. 1.

We propose a two-module software system: one module responsible for acquiring the images from the two cameras (Acquisition System) and another responsible for the offline processing of the two video streams (Processing System). This last module is responsible for detecting and tracking the players as well as for generating a HyperVideo, which consists of a unified image of the two streams with the positions of the players. Additionally, it generates log files with the players' positions so that they can be used by the sports community to perform game analysis and infer game statistics. Figure 2 presents a scheme of the proposed architecture.

Fig. 2. System's architecture.

#### **3.2 Player detection**

Following previous work (Santiago et al., 2011), the player detection is achieved through colour identification and is composed of three steps. The first step consists of the colour calibration of each team using a region growing method allied with a Fuzzy-inspired categorization methodology. This calibration is responsible for subdividing the colour space into subspaces, which are not necessarily disjoint since there may be colours that are common to both teams (for example, many teams have uniforms with white stripes).

Afterwards, the user manually indicates the location of each player on the field by clicking on the images. Although some works perform this initialization automatically, we chose this approach because both handball and basketball allow unlimited player substitutions, and manual initialization makes it possible to always give the same number to a player and also to discard a player when he/she leaves the field.

The second step consists of detecting foreground regions through a dynamic background subtraction method, which uses an empty image of the field and a dynamic threshold that is continuously and locally updated at each new frame (as will be described later in this chapter).

After the foreground pixels are identified, their colour is compared against the colour subspaces and classified into one of the teams. In case there is a belonging tie between teams, the adjacent pixels are searched in order to break the tie. Additionally, the teams' colour subspaces are updated with new information.

Finally, pixels are aggregated to form blobs and categorized as player or non-player according to size and density restrictions. The centre of mass of the blob is considered the player's position, which is afterwards transformed into real world coordinates (court coordinates) using the cameras' homographies.

Fig. 1. A single frame from the two video streams.

#### **3.2.1 Colour calibration**

The colour calibration is performed under the supervision of the user and is achieved using a region growing method (Santiago et al., 2011).

Let us define the colour subspace *Sc* as the set of RGB colour triplets that are tagged as having the colours of the vests of team *c*. The initial colour seeds *C*(*xs*, *ys*) for each colour subspace *Sc* are set manually by using the mouse to click on the objects that will be segmented; afterwards, the surrounding pixels' colours *C*(*xa*, *ya*) are agglomerated around these seeds using colour distance criteria. The colour expansion is performed in the HSL (Hue, Saturation and Luminance) colour space in order to minimize the effects of shadows and light variations.

The region growing is performed in all directions (using an 8-neighbour mask *n*8) in a recursive way until reaching a pixel whose colour is farther from the seed than a global threshold (*CThresG*) or farther from its previous neighbour *C*(*xp*, *yp*) than a local threshold (*CThresL*), according to the following definition (both thresholds are user definable):

$$C(x_a, y_a) \in S_c \Leftrightarrow \forall (x_a, y_a) \in n_8(x_p, y_p) : \Delta(C(x_a, y_a), C(x_p, y_p)) < C_{ThresL} \,\land\, \Delta(C(x_a, y_a), C(x_s, y_s)) < C_{ThresG}$$

where

• *C*(*x*, *y*) is the HSL colour of the pixel at location (*x*, *y*),
• *n*8(*x*, *y*) are the eight neighbours of the pixel at location (*x*, *y*),
• Δ(*C*1, *C*2) is a configurable weighted distance function involving the HSL components of colours *C*1 and *C*2.
During the colour expansion, each colour value is attributed a belonging degree to the subspace being calibrated. This value is stored in a lookup table that contains, for each colour triplet, the belonging degree to each subspace. Although the expansion is performed in the HSL colour space, the colour lookup table is built in the RGB (red, green, blue) colour space, as are the remaining image processing operations.
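The growing condition above can be sketched as a breadth-first expansion. This is a minimal illustration, not the authors' implementation: the image layout, the seed handling and the `dist` function (standing in for the configurable weighted HSL distance Δ) are hypothetical stand-ins.

```python
from collections import deque

def grow_region(img, seed, thres_local, thres_global, dist):
    """Grow a colour subspace from a user-clicked seed pixel.

    img is indexed as img[y][x]; dist stands in for the configurable
    weighted HSL distance Delta.  A neighbour joins the subspace while
    its distance to the pixel it was reached from stays below
    thres_local (CThresL) AND its distance to the original seed stays
    below thres_global (CThresG).
    """
    h, w = len(img), len(img[0])
    seed_colour = img[seed[1]][seed[0]]
    subspace = {seed}
    queue = deque([seed])
    while queue:
        xp, yp = queue.popleft()
        prev_colour = img[yp][xp]
        for dx in (-1, 0, 1):          # 8-neighbour mask n8
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                xa, ya = xp + dx, yp + dy
                if not (0 <= xa < w and 0 <= ya < h) or (xa, ya) in subspace:
                    continue
                c = img[ya][xa]
                if dist(c, prev_colour) < thres_local and \
                   dist(c, seed_colour) < thres_global:
                    subspace.add((xa, ya))
                    queue.append((xa, ya))
    return subspace
```

A queue-based expansion is used instead of literal recursion so that large vest regions cannot overflow the call stack; the membership test is exactly the two-threshold condition of the definition above.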

The fuzzy belonging *μ*() of the colour *C*() of a pixel *P* with coordinates (*xP*, *yP*) to a given colour subspace *Sc* is *μSc*(*C*(*xP*, *yP*)) and can assume four levels: when the colour does not belong to the subspace, *μ*() = 0 and *B*() = *C*0 (by default, before the calibration takes place, all colours are categorized with no belong degree to every subspace); for a low belong degree to the colour subspace, *μ*() = 0.5 and *B*() = *CL*; for a full belong degree to the colour subspace, *μ*() = 1 and *B*() = *CF*; and for a full belong degree with the characteristic of also being a colour seed, *μ*() = 1 and *B*() = *CS*. The *BSc* function maps the four colour categories (*C*0, *CL*, *CF* and *CS*) into the fuzzy belonging according to Table 2.


| Colour | *BSc* | *μSc* |
|---|---|---|
| Not the colour | *C*0 | 0 |
| Resembles the colour | *CL* | 0.5 |
| Is the colour | *CF* | 1 |
| Is a seed colour | *CS* | 1 |

Table 2. Mapping of *BSc* to fuzzy belonging *μ*().

In order to determine the belonging degree of the colour triplet the following rules are applied sequentially during the region growing process:

• **Rule1** - if the pixel was assigned to the subspace, is physically quite close to the initial seed pixel and its colour distance to the initial seed pixel is less than one fifth of the maximum allowed distance for the growing process (less than 1/5 of *CThresG*), then it is also assumed to be a seed pixel with a full belonging degree.

$$B_{S_c}(C(x_a, y_a)) = C_S \Leftarrow C(x_a, y_a) \in S_c \,\land\, \Delta((x_a, y_a), (x_s, y_s)) < \frac{1}{5}C_{ThresG}$$

• **Rule2** - if the colour distance to the initial seed pixel is less than two fifths of the maximum allowed distance for the growing process, then the pixel is categorized with a full belonging degree but without being a seed.

$$B_{S_c}(C(x_a, y_a)) = C_F \Leftarrow C(x_a, y_a) \in S_c \,\land\, \Delta(C(x_a, y_a), C(x_s, y_s)) < \frac{2}{5}C_{ThresG}$$

• **Rule3** - otherwise, if the pixel obeys the region growing conditions, it is categorized with a low belong degree (*CL*).

By the end of the calibration process the colour space is subdivided into subspaces, which are not necessarily disjoint since the same colour can belong to different subspaces. The motivation for allowing non-disjoint subspaces is that teams frequently share colours; for example, uniforms with white stripes are common, and thus the exact same colour may belong to the two opposing teams. Hence the Fuzzy Logic inspired methodology; implementation issues, however, make it unattractive to allow continuous degrees of belonging. The belonging degrees attributed to each colour triplet, as will be seen later, make it possible to break ties but also to generate dynamic subspaces that can adapt (either grow or shrink) during the game. Subspaces do not have, nor ever create, any predefined shape, as they are created from user-selected seeds on the image that have been grown in the user-selected video frames and in colour space.
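As a rough sketch of how Rules 1-3 and the Table 2 mapping could translate into code (the function and flag names are ours, not the chapter's; `near_seed_pixel` abstracts the "physically quite close" test, which the chapter does not quantify):

```python
MU = {"C0": 0.0, "CL": 0.5, "CF": 1.0, "CS": 1.0}  # Table 2: B_Sc -> mu

def categorise(in_subspace, near_seed_pixel, dist_to_seed, thres_global,
               passes_growth):
    """Assign the category of a colour triplet during region growing.

    Rule 1: subspace member, physically near the seed pixel and within
            CThresG/5 in colour distance -> seed colour (CS).
    Rule 2: subspace member within 2*CThresG/5 of the seed colour -> CF.
    Rule 3: otherwise, if the growing conditions hold -> CL.
    """
    if in_subspace and near_seed_pixel and dist_to_seed < thres_global / 5:
        return "CS"
    if in_subspace and dist_to_seed < 2 * thres_global / 5:
        return "CF"
    if passes_growth:
        return "CL"
    return "C0"
```

The rules are tried in order, so a colour that qualifies as a seed is never downgraded to a plain full-belong entry.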

#### **3.2.2 Background subtraction**

Since the background is more or less static, due to the semi-controlled environment of an indoor game, the subtraction is performed using an empty image of the viewed scene and only the threshold used to distinguish between foreground and background pixels is dynamic and specific for each pixel.

The background subtraction is performed in the RGB colour space because tests showed that, for some pixels, a small difference between the RGB colour components of the background and of the processed images corresponded to a large difference in the Hue component (HSL colour space). In fact, non-linear colour spaces suffer from the non-removable singularity problem, as stated by (Cheng et al., 2001).

Also, in order to shorten the processing time, the subtraction is executed locally and not on the entire image. In other words, only predefined regions, which are defined by the Kalman Filter predictive stage, undergo this process.

The threshold applied to each pixel is only updated if the pixel is classified as background; otherwise its value remains unchanged. The update obeys Eq. 1 and the value is never allowed to go below 4% or above 23.5% of the entire colour range (0-255) for each colour component. These values were obtained experimentally.

$$\sigma^{c}_{t+1}(x, y) = \begin{cases} \alpha\,(I^{c}_{t}(x, y) - B^{c}(x, y)) + (1 - \alpha)\,\sigma^{c}_{t}(x, y), & \text{if } I_{t}(x, y) \in B(x, y) \\ \sigma^{c}_{t}(x, y), & \text{otherwise} \end{cases} \tag{1}$$

where

• *σ* is the threshold of the pixel at position (*x*, *y*), time *t*+1 and colour component *c*,
• *I* is the colour intensity of the pixel at position (*x*, *y*), time *t* and colour component *c*,
• *B* is the background colour intensity of the pixel at position (*x*, *y*) and colour component *c*,
• *α* is a learning constant, set to 0.02 in our specific case.

Pixels whose colour difference to the background image is less than the respective threshold are labelled as background; the others are labelled as foreground.
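A minimal sketch of the per-pixel threshold update of Eq. 1 follows. It assumes the absolute colour difference is what is compared against the threshold; the 4%/23.5% bounds and α = 0.02 are the values quoted above.

```python
ALPHA = 0.02                 # learning constant alpha from the text
SIGMA_MIN = 0.04 * 255       # lower bound, ~4% of the colour range
SIGMA_MAX = 0.235 * 255      # upper bound, ~23.5% of the colour range

def update_threshold(sigma, intensity, background):
    """Per-pixel, per-colour-component threshold update (Eq. 1).

    The pixel counts as background when its absolute difference to the
    background image is below the current threshold; only then is the
    threshold adapted, and the result is always clamped to the
    experimentally determined 4%-23.5% band of the 0-255 range.
    """
    diff = abs(intensity - background)
    if diff < sigma:                       # classified as background
        sigma = ALPHA * diff + (1 - ALPHA) * sigma
    return min(max(sigma, SIGMA_MIN), SIGMA_MAX)
```

Because foreground pixels leave the threshold untouched, a player standing still does not slowly absorb himself into the background model.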

#### **3.2.3 Team identification**

After the foreground pixels are identified, their colour is compared against the colour lookup table that resulted from the calibration process (Section 3.2.1) and classified into one of the teams. Since the same colour can belong to different teams (subspaces), it may occur that a pixel is classified into more than one team. To break this tie, information not only from the belonging degree itself but also from adjacent pixels that have already been classified is used.

Spatial information is used by counting the number of adjacent pixels that belong to each team; if the count of the team with the highest value is at least 1.5 times that of the other team, the pixel is assigned a weight of 2 for that team, otherwise it is assigned a weight of 1 for both teams. Colour calibration information is used by adding to the previous weight the corresponding fuzzy belong values (as shown in Table 2).

The team with the highest final weight is the one assigned to the pixel. This way, it is possible that, although the belonging degree of a pixel's colour to one team is higher than its belonging degree to the other team, the latter still wins due to the neighbourhood characteristics.
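The tie-break just described can be sketched as follows; this is a simplified reading (neighbour counts and the 1.5 dominance ratio from the text, with the fuzzy belong values of Table 2 added on top), and the function and parameter names are ours.

```python
def break_tie(neigh_a, neigh_b, mu_a, mu_b):
    """Tie-break for a pixel whose colour belongs to both team subspaces.

    neigh_a / neigh_b: already-classified adjacent pixels per team;
    mu_a / mu_b: fuzzy belong degrees of the pixel's colour (Table 2).
    The dominant neighbourhood (at least 1.5x the other) earns weight 2,
    otherwise both teams start with weight 1; the fuzzy belong value is
    then added and the highest total wins.
    """
    w_a = w_b = 1.0
    if neigh_a >= 1.5 * neigh_b and neigh_a > neigh_b:
        w_a = 2.0
    elif neigh_b >= 1.5 * neigh_a and neigh_b > neigh_a:
        w_b = 2.0
    return ("A", w_a + mu_a) if w_a + mu_a >= w_b + mu_b else ("B", w_b + mu_b)
```

Note how a dominant neighbourhood (weight 2 vs. 1) can outvote a higher belong degree, which is exactly the behaviour described above.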

Additionally, if the winning team has a full belong to that colour triplet and corresponds to a seed colour (*B*pq " *CS*) then a region growing process is triggered and the colour lookup table that contains the information concerning the colour subspaces is updated. This auto expansion is more restrictive than the one performed during the manual initialization and is performed at time intervals (*texpans*), an adjustable setting. This setting may change to take

, where

*Hw* "

, where

» –

*f* 0 *cx* 0 *f cy* 00 1

• *f* is the focal length,

**3.3 Player tracking**

, where

fi fl » –

• *<sup>r</sup>*<sup>2</sup> " p*xd* ´ *xc*q<sup>2</sup> ` p*yd* ´ *yc*q2,

• p*xu*, *yu*q are the undistorted coordinates, • p*xd*, *yd*q are the distorted coordinates,

into world coordinates (x) according to Eq.5.

The H matrix is defined according to Eq.6.

• *cx* and *cy* are the coordinates of the optical center,

enables the information fusion between cameras.

• *Tx*, *Ty*, *Tz* are the translations on x, y and z directions.

• *φ*, *ω* and *α* are the rotations around the x, y and x axis, respective,

• p*xc*, *yc*q are the coordinates of the center of distortion of the lens, • *k*1, *k*<sup>2</sup> and *k*<sup>3</sup> are the radial coefficients for barrel distortion.

Using a Vision System Inspired in Fuzzy and Parallel Processing

Fig. 3 illustrates the images before and after removing the barrel effect for the two cameras. Once the barrel effect is removed from the players' positions, it is possible to apply the pinhole camera model in order to obtain the world coordinates of the players. This model uses intrinsic parameters (K) and extrinsic parameters (R and T) to map image coordinates (X)

<sup>129</sup> Tracking Players in Indoor Sports

The coordinates are projected at 1.2m from the ground which corresponds to a best effort of the height of the center of mass of an average person, when seen in most frequent positions of the field. This projection allows to have a more correct measure of the players' positions and

The player tracking is based on a vector of Kalman filters (Kalman & Others, 1960; Welch & Bishop, 2002) (one per player) with state *xk* (Eq. 7) and measure *zk* (Eq. 8) at instant time *k*.

*xk* " r *xyvx vy* s

*zk* " r *x y* s

And modelled according to the following linear stochastic difference equations (Eq. 9, 10).

• *x* and *y* are the player center of mass position in real world coordinates,

• *vx* and *vy* are the player velocity in real world coordinates.

cos *φ* cos *α* sin *ω* sin *φ* cos *α* ´ cos *ω* sin *α* cos *ω* sin *φ* cos *α* ` sin *ω* sin *ω* sin *α Tx* cos *φ* sin *α* sin *ω* sin *φ* sin *α* ` cos *ω* cos *α* cos *ω* sin *φ* sin *α* ´ sin *ω* cos *α Ty* ´ sin *φ* sin *ω* cos *φ* cos *ω* cos *φ Tz*

*x* " *K*r*R*|*T*s*X* ô *x* " *HwX* (5)

*<sup>T</sup>* (7)

*<sup>T</sup>* (8)

*xk* " *Axk*´<sup>1</sup> ` *wk*´<sup>1</sup> (9) *zk* " *Hxk* ` *vk* (10)

fi fl (6)

into consideration the speed at which light changes in the pavilion and yields better overall performance to the application.

In order for this update to add not only colour triples to the subspaces but also to remove them (otherwise subspaces would grow too much), each colour triplet has associated a persistence (*pSc* p*R*, *G*, *B*q) to that subspace. Colours with lower belonging have lower persistence and colours with higher belonging have higher persistence. The initial persistence given to the colour is proportional to the time between auto expansions according to Eq.2.

$$\begin{cases} p\_{\mathcal{S}\_{\boldsymbol{\ell}}}(\mathcal{R}, \mathcal{G}, \mathcal{B}) &= \frac{1}{8} t\_{\text{expans}} \\ p\_{\mathcal{S}\_{\boldsymbol{\ell}}}(\mathcal{R}, \mathcal{G}, \mathcal{B}) &= \frac{1}{4} t\_{\text{expans}} \\ p\_{\mathcal{S}\_{\boldsymbol{\ell}}}(\mathcal{R}, \mathcal{G}, \mathcal{B}) &= \infty \end{cases}, \text{if } B\_{\mathcal{S}\_{\boldsymbol{\ell}}}(\mathcal{R}, \mathcal{G}, \mathcal{B}) = \mathcal{C}\_{\mathcal{F}} \tag{2}$$

The persistence is maximum (with the values defined in Eq. 2) when the colour is added to the subspace and diminishes whenever it is not detected in a frame, however seed colours have infinite persistence and will therefore always remain in the subspace. Whenever the persistence value reaches zero the colour triplet is removed from the subspace.

With the introduction of this dynamic it is possible to have mutable subspaces that adapt to light changes either occurring at different regions of the same frame or between frames.

At the same time the foreground pixels are classified, they are also aggregated horizontally to form run length encoding (RLE) structures characterized by the y, *xmin* and *xmax* positions of the RLE. Finally the RLEs are merged vertically to form blobs. A full description of this pixel aggregation can be found in (Santiago et al., 2011) with the particularity that small horizontal RLEs are ignored and do not pass to the vertical merging process in order to minimize noise. The blobs resulting from this pixel aggregation are further refined according to size and colour density constraints. Therefore, blobs that are too small or too large or blobs that have low colour density are discarded as being players. The colour density is measured as the percentage of pixels inside the bounding box of the blob that belong to the team divided by the total number of pixels of the bounding box. The remainder blobs are considered players that belong to a given team (*Sc*) and have an p*x*, *y*q position on image and world coordinates. The position on image coordinates is calculated as being the center of mass of the blob according to Eq.3 .

$$\left(\mathbf{x}\_{\text{Cfl}\_{\text{Ibb}}}, y\_{\text{Cfl}\_{\text{Ibb}}}\right) = \left(\frac{\sum\_{\mathbf{x}} \sum\_{\mathbf{y}} \mu\_{\text{Sc}}(\mathbf{C}(\mathbf{x}, \mathbf{y})) \ge 1}{\sum\_{\mathbf{x}} \sum\_{\mathbf{y}} \mu\_{\text{Sc}}(\mathbf{C}(\mathbf{x}, \mathbf{y}))}\right) \cdot \frac{\sum\_{\mathbf{x}} \sum\_{\mathbf{y}} \mu\_{\text{Sc}}(\mathbf{C}(\mathbf{x}, \mathbf{y})) \,\,\mathrm{y}}{\sum\_{\mathbf{x}} \sum\_{\mathbf{y}} \mu\_{\text{Sc}}(\mathbf{C}(\mathbf{x}, \mathbf{y}))}\,\tag{3}$$

where *c* is the team the blob belongs to.
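Eq. 3 amounts to a membership-weighted center of mass; a minimal sketch follows, where the `Pixel` struct is an assumption of this example and `mu` holds the belonging degree μ*Sc*(*C*(*x*, *y*)) of the pixel's colour.

```cpp
#include <utility>
#include <vector>

// One blob pixel with its degree of belonging to the team subspace.
struct Pixel { int x, y; double mu; };

// Eq. 3: centre of mass of a blob, weighted by the fuzzy belonging degrees.
std::pair<double, double> centerOfMass(const std::vector<Pixel>& blob) {
    double sx = 0.0, sy = 0.0, sw = 0.0;
    for (const auto& p : blob) {
        sx += p.mu * p.x;
        sy += p.mu * p.y;
        sw += p.mu;
    }
    if (sw == 0.0) return {0.0, 0.0};  // empty / zero-membership blob
    return {sx / sw, sy / sw};
}
```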

The world coordinates<sup>1</sup> are obtained by first removing the barrel effect produced by the lens (only the radial effect was considered, since the tangential component can, in most cases, be neglected) using Eq. 4. The unknowns in these equations (*k*1, *k*2, *k*3, *xc* and *yc*) are determined using the information extracted from the field lines.

$$\begin{cases} \mathbf{x}\_{\rm u} = \mathbf{x}\_{d} + (\mathbf{x}\_{d} - \mathbf{x}\_{c})(k\_{1}r^{2} + k\_{2}r^{4} + k\_{3}r^{6})\\ y\_{\rm u} = y\_{d} + (y\_{d} - y\_{c})(k\_{1}r^{2} + k\_{2}r^{4} + k\_{3}r^{6}) \end{cases} \tag{4}$$

<sup>1</sup> We would like to thank professor Paulo Costa and Paulo Malheiros from the University of Porto, Faculty of Engineering for the specific camera calibration software.

where (*xu*, *yu*) are the undistorted coordinates, (*xd*, *yd*) the distorted (observed) coordinates, (*xc*, *yc*) the centre of distortion, *r* the distance from (*xd*, *yd*) to (*xc*, *yc*), and *k*1, *k*2, *k*3 the radial distortion coefficients.
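Eq. 4 can be sketched directly, assuming *r* is measured from the distortion centre; the field-line calibration that yields *k*1, *k*2, *k*3, *xc* and *yc* is outside this snippet.

```cpp
// Eq. 4: radial undistortion. r^2 is the squared distance of the distorted
// point (xd, yd) from the distortion centre (xc, yc); k1..k3 come from the
// field-line calibration described in the text.
void undistort(double xd, double yd, double xc, double yc,
               double k1, double k2, double k3,
               double& xu, double& yu) {
    double r2 = (xd - xc) * (xd - xc) + (yd - yc) * (yd - yc);
    double f = k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;  // k1 r^2 + k2 r^4 + k3 r^6
    xu = xd + (xd - xc) * f;
    yu = yd + (yd - yc) * f;
}
```

With all coefficients zero the mapping is the identity, which is a convenient sanity check.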



Fig. 3. (a) and (b) left image before and after removing the barrel effect distortion. (c) and (d) right image before and after removing the barrel effect.

Fig. 3 illustrates the images before and after removing the barrel effect for the two cameras. Once the barrel effect is removed from the players' positions, it is possible to apply the pinhole camera model in order to obtain the world coordinates of the players. This model uses intrinsic parameters (K) and extrinsic parameters (R and T) to map world coordinates (**X**) into image coordinates (**x**) according to Eq. 5.

$$\mathbf{x} = \mathbf{K}[R|T]\mathbf{X} \Leftrightarrow \mathbf{x} = H\_W \mathbf{X} \tag{5}$$

The H matrix is defined according to Eq.6.

$$H_w = \begin{bmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\phi\cos\alpha & \sin\omega\sin\phi\cos\alpha - \cos\omega\sin\alpha & \cos\omega\sin\phi\cos\alpha + \sin\omega\sin\alpha & T_x \\ \cos\phi\sin\alpha & \sin\omega\sin\phi\sin\alpha + \cos\omega\cos\alpha & \cos\omega\sin\phi\sin\alpha - \sin\omega\cos\alpha & T_y \\ -\sin\phi & \sin\omega\cos\phi & \cos\omega\cos\phi & T_z \end{bmatrix} \tag{6}$$

where *f* is the focal length, (*cx*, *cy*) the principal point, ω, φ and α the rotation angles around the *x*, *y* and *z* axes, and (*Tx*, *Ty*, *Tz*) the translation vector.
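Eqs. 5 and 6 can be sketched as building the 3×4 matrix *Hw* = K[R|T] and applying it to a homogeneous world point. The rotation terms below transcribe Eq. 6 (a Rz(α)·Ry(φ)·Rx(ω) composition); the function names are illustrative assumptions.

```cpp
#include <array>
#include <cmath>

using Mat34 = std::array<std::array<double, 4>, 3>;

// Build H_w = K [R|T] from the intrinsic and extrinsic parameters of Eq. 6.
Mat34 buildHw(double f, double cx, double cy,
              double omega, double phi, double alpha,
              double Tx, double Ty, double Tz) {
    double cw = std::cos(omega), sw = std::sin(omega);
    double cp = std::cos(phi),   sp = std::sin(phi);
    double ca = std::cos(alpha), sa = std::sin(alpha);
    // Extrinsic part [R|T] exactly as written out in Eq. 6.
    Mat34 rt = {{{cp * ca, sw * sp * ca - cw * sa, cw * sp * ca + sw * sa, Tx},
                 {cp * sa, sw * sp * sa + cw * ca, cw * sp * sa - sw * ca, Ty},
                 {-sp,     sw * cp,                cw * cp,                Tz}}};
    double K[3][3] = {{f, 0.0, cx}, {0.0, f, cy}, {0.0, 0.0, 1.0}};
    Mat34 h{};  // zero-initialised
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 3; ++k)
                h[i][j] += K[i][k] * rt[k][j];
    return h;
}

// Eq. 5: project a world point (X, Y, Z) to pixel coordinates (u, v).
void project(const Mat34& h, double X, double Y, double Z, double& u, double& v) {
    double x = h[0][0] * X + h[0][1] * Y + h[0][2] * Z + h[0][3];
    double y = h[1][0] * X + h[1][1] * Y + h[1][2] * Z + h[1][3];
    double w = h[2][0] * X + h[2][1] * Y + h[2][2] * Z + h[2][3];
    u = x / w;
    v = y / w;
}
```

Obtaining a player's world position from a pixel is the inverse problem: with the 1.2 m height constraint mentioned next, the mapping becomes an invertible homography.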


The coordinates are projected at 1.2 m from the ground, which corresponds to a best-effort estimate of the height of the center of mass of an average person in the positions most frequently seen on the field. This projection gives a more correct measure of the players' positions and enables the information fusion between cameras.

#### **3.3 Player tracking**

The player tracking is based on a vector of Kalman filters (Kalman & Others, 1960; Welch & Bishop, 2002) (one per player) with state *xk* (Eq. 7) and measure *zk* (Eq. 8) at time instant *k*.

$$\mathbf{x}\_k = \begin{bmatrix} \mathbf{x} \ y \ v\_x \ v\_y \ \end{bmatrix}^T \tag{7}$$

$$z\_k = \begin{bmatrix} \ x \ y \ \end{bmatrix}^T \tag{8}$$

where (*x*, *y*) is the player's position in world coordinates and (*vx*, *vy*) its velocity.


The system is modelled according to the following linear stochastic difference equations (Eqs. 9 and 10).

$$\mathbf{x}\_{k} = A\mathbf{x}\_{k-1} + w\_{k-1} \tag{9}$$

$$z\_k = H\mathbf{x}\_k + v\_k \tag{10}$$


Where A is the state model matrix and assumes the form of Eq. 11 (where Δ*t* is the time between frames, in this case 1/30 s), H is the observation model matrix (Eq. 12) and the random variables *wk* and *vk* represent the process and measurement noise. The usage of real world coordinates allows for a transparent tracking between the two video streams.

$$A = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{11}$$

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \tag{12}$$

Whenever the user indicates a player (using the mouse), a new Kalman filter is added to the vector with the real world position of the player and a default velocity of 0 m/s. Afterwards, the players' locations on the subsequent frames are predicted using Eq. 9. The area around the predicted measure is searched, according to the process explained in section 3.2.3, to generate a measure (*zk*) to update the estimate.

In addition, since the players' velocity is not constant throughout the game, each player's velocity is updated at each frame.

By predicting the position of the players on the subsequent frames it is possible to reduce the computational cost, because only a few regions of the entire image are searched for players.
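The per-player constant-velocity filter of Eqs. 7–12 can be sketched as below. The noise levels `q` and `r` are illustrative assumptions (the chapter does not give its Q and R values); the 2×2 innovation covariance is inverted analytically.

```cpp
#include <array>

// Minimal constant-velocity Kalman filter: state [x y vx vy] (Eq. 7),
// measurement [x y] (Eq. 8), dt = 1/30 s as in Eq. 11.
struct PlayerKf {
    std::array<double, 4> x{};                  // state estimate
    std::array<std::array<double, 4>, 4> P{};   // estimate covariance
    double dt = 1.0 / 30.0, q = 1e-2, r = 1e-1; // illustrative noise levels

    void init(double px, double py) {           // new player: default velocity 0 m/s
        x = {px, py, 0.0, 0.0};
        for (int i = 0; i < 4; ++i) P[i][i] = 1.0;
    }

    void predict() {                            // x_k = A x_{k-1} (Eq. 9); P = A P A^T + Q
        x[0] += dt * x[2]; x[1] += dt * x[3];
        for (int j = 0; j < 4; ++j) { P[0][j] += dt * P[2][j]; P[1][j] += dt * P[3][j]; }
        for (int i = 0; i < 4; ++i) { P[i][0] += dt * P[i][2]; P[i][1] += dt * P[i][3]; }
        for (int i = 0; i < 4; ++i) P[i][i] += q;
    }

    void update(double zx, double zy) {         // fold in the measurement z_k (Eq. 10)
        double s00 = P[0][0] + r, s01 = P[0][1], s10 = P[1][0], s11 = P[1][1] + r;
        double det = s00 * s11 - s01 * s10;     // S = H P H^T + R, inverted analytically
        double i00 = s11 / det, i01 = -s01 / det, i10 = -s10 / det, i11 = s00 / det;
        double y0 = zx - x[0], y1 = zy - x[1];  // innovation
        double K[4][2];                         // Kalman gain K = P H^T S^-1
        for (int i = 0; i < 4; ++i) {
            K[i][0] = P[i][0] * i00 + P[i][1] * i10;
            K[i][1] = P[i][0] * i01 + P[i][1] * i11;
        }
        for (int i = 0; i < 4; ++i) x[i] += K[i][0] * y0 + K[i][1] * y1;
        auto Pn = P;                            // P = (I - K H) P
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                Pn[i][j] -= K[i][0] * P[0][j] + K[i][1] * P[1][j];
        P = Pn;
    }
};
```

When no blob is found near the prediction, `update` is simply skipped and the predicted state stands in for the measurement, which is the gap-filling behaviour exploited in the results section.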

#### **3.4 Generation of human meaningful video**


This chapter is primarily concerned with the generation of a single complete image video stream, which is of the utmost importance for the possible human end-users of our system, for example a sports scientist, educator or coach.

Generating a single, high quality, "undistorted" video stream is a complex task due to the usage of complex optical systems (wide angle zoom lenses) and the need for accuracy of the system, which involves dealing with two sets of intrinsic and extrinsic camera parameters for the so-called pinhole models of the cameras. High accuracy mapping of the image pixels onto the real world is needed, with the added difficulty of covering large real world areas. This is an even greater task given that high resolution and high frame rate are of interest, thus producing large amounts of data (2 cameras @ 1024 x 768 resolution, RGB colour depth @ 30 fps). By using advanced camera calibration techniques, the two different sets of parameters were found, thus allowing accurate image-to-world mapping (on separate images), as seen in Fig. 3.

In order to produce the "undistorted" Human Meaningful Video stream, the first task is to optimize the algorithm used. Firstly, a pair of "static", off-line created Look Up Tables (LUTs) map real world pixels into their origin in the original images. Mapping non-overlapped image areas is not complex if all data is available; additional processing is necessary for the overlapped parts of the image in order to get a human meaningful image (with neither repeated nor cut objects). LUTs are very useful because they exchange complex mathematical operations with memory storage for the repeated computations, which is very interesting in terms of general performance and of parallel computing in particular.
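The LUT idea can be sketched as follows. This is an illustrative reconstruction, not the chapter's code: `SrcRef` and `mapPixel` are assumed names, with `mapPixel` standing in for the real undistortion/homography model evaluated once, off-line, per output pixel.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// For every output ("world") pixel, record which source pixel of which camera
// it comes from; the per-frame work is then a plain table lookup instead of
// repeating the camera mathematics.
struct SrcRef { std::uint8_t cam; std::uint16_t x, y; };

std::vector<SrcRef> buildLut(int outW, int outH, SrcRef (*mapPixel)(int, int)) {
    std::vector<SrcRef> lut(static_cast<std::size_t>(outW) * outH);
    for (int oy = 0; oy < outH; ++oy)
        for (int ox = 0; ox < outW; ++ox)
            lut[static_cast<std::size_t>(oy) * outW + ox] = mapPixel(ox, oy);
    return lut;
}
```

At run time, each output pixel is filled by indexing the LUT and copying the referenced source pixel, a loop that parallelizes trivially.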

One of the issues to be solved is to show an interesting view of the overlapped portions of the images shown in Fig. 1. This is an issue because near the centre line of the handball field, where the images overlap, the same player is seen tilted in opposite directions by the two cameras (as shown in Fig. 4). The strategy is to always take complete objects from one of the cameras; this is always possible because the largest object is smaller in image size than the overlap in the images (see Fig. 4). It is not computationally immediate, though, because objects and background have to be identified and reconstructed.

Fig. 4. Same player over time, transitioning from one camera to the other.


The implementation was built in C++, under Windows and Microsoft Visual C++, using the previously mentioned frameworks OpenMP (OpenMP, 2011) and Compute Unified Device Architecture (CUDA) (NVIDIA, 2011). An example of the processed image is shown in Fig. 5.

Fig. 5. Single frame from the "undistorted" human meaningful video.

#### **4. Results**

#### **4.1 Player detection and tracking**

In order to validate the approach, the system was mounted at the public sports hall of Portimão to film the games of the Portuguese SuperCup. The video footage collected validated the engineering solution, since we were able to cover the entire field with good resolution and a good overlapped zone, as shown in Fig. 1.

Initially two distinct teams (team A and team B; examples of both teams can be seen in Fig. 6(b)) were calibrated as explained in section 3.2.1. The original seeds were selected by clicking on the players of both teams, which resulted in the initial colour subspaces illustrated in Fig. 6(a).

After 1200 frames and a time between expansions (*texpans*) of 30 frames (which corresponds to 34 expansions), it is possible to verify that the teams' colour subspaces have updated and grown around the initial seeds, resulting in the new colour subspaces of Fig. 7.

Table 3 provides an overview of the colour space dimensions during the auto calibration process. @F represents the frame number (recall that the auto expansion occurs at every 30*th* frame).

The table clearly illustrates that from the initial colour subspaces (Fig. 6(a)) to the final (Fig. 7) the number of colour triplets that are seeds (*CS*) increases so that the colour subspaces can adapt to the colour conditions of that team through the entire field and through time.

In addition, when the colour subspace is more condensed, triplets that resemble the colour (*CL*) and that are the colour (*CF*) tend to exist in higher numbers (Team B) than when the colour seeds are more spread across the entire colour space (Team A).


Fig. 6. (a) Initial colour subspaces. Green dots correspond to team A and red dots to team B. Lighter dots are seed colours (*CS*), intermediate are team colour (*CF*) and the darkest resemble the team colour (*CL*). (b) Examples of players from team A (up) and team B (down).

Fig. 7. Final colour subspaces after 34 auto expansions (team A in green and team B in red).


| Team | *B* | @F1 | @F5 | @F31 | @F35 | @F61 | @F301 | @F421 | @F691 | @F811 | @F1021 |
|------|------|-----|-----|------|------|------|-------|-------|-------|-------|--------|
| A | *CL* | 34 | 28 | 12 | 10 | 10 | 10 | 3 | 1 | 4 | 0 |
| A | *CF* | 6 | 6 | 6 | 6 | 8 | 4 | 9 | 11 | 3 | 7 |
| A | *CS* | 39 | 39 | 44 | 44 | 51 | 81 | 92 | 105 | 109 | 112 |
| B | *CL* | 2 | 2 | 1 | 0 | 1 | 2 | 4 | 7 | 7 | 3 |
| B | *CF* | 6 | 6 | 6 | 6 | 6 | 11 | 18 | 42 | 44 | 43 |
| B | *CS* | 8 | 8 | 12 | 12 | 12 | 25 | 39 | 103 | 116 | 149 |

Table 3. Evolution of the number of colour triples that belong to each team and the respective belonging degree. The expansion is performed at every 30*th* frame (*texpans* = 30).

It is also possible to verify the effect of colour persistence: the number of colour triplets that are *CL* decreases on Team A from frame 1 to frame 5 (the persistence of these colours is 3.75, since *pSc* = (1/8) *texpans*); such triplets also tend to have a more erratic behaviour, because they belong to the periphery of the colour subspace and therefore do not appear so often. Triplets with *CF* decrease on Team A from frame 61 to frame 301 and on Team B from frame 811 to frame 1021.


However, it is possible to verify that the initial seed choice influences how well the colour subspace adapts to the environment conditions. In fact, the initial seeds for team A resulted in a faster adaptation: the colour subspace growth is higher at the initial expansions and tends to stabilize, while for team B there is still a reasonable growth even at frame 1021. The initial seed choice is a well-known problem of region growing methods.

Comparing the results with and without the Fuzzy-inspired model of colour expansion, it is possible to verify that the players' detection achieves better results with the mutable colour subspaces, as depicted in Fig. 8.

These images show the players' detection at frame 762, where the green crosses correspond to players detected from team A and the purple crosses correspond to players detected from team B. The green and red highlighted pixels correspond to pixels that have been labelled as belonging to one of the teams (green correspond to team A and red to team B).

Analysing the two images, it is possible to verify that using the Fuzzy-inspired auto expansion model all fielders from both teams were detected (Fig. 8(b)), while using the initial colour subspaces (Fig. 6(a)) the system is unable to detect four players of team B (Fig. 8(a)). In addition, the detected area of the players is larger with the Fuzzy model, which allows a better measure of the player's center of mass.

Moreover, the detection rate increases along with the colour subspace updates, as illustrated in Table 4. This increase is more visible on players from team B, since its initial colour subspace did not reflect the colour properties of the team so well.


| Team | Player | 1-100 | 101-200 | 201-300 | 301-400 | 401-500 | 501-600 | 601-700 | 701-800 |
|------|--------|-------|---------|---------|---------|---------|---------|---------|---------|
| A | 1 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| A | 2 | 1.00 | 1.00 | 1.00 | 0.95 | 1.00 | 1.00 | 1.00 | 1.00 |
| A | 3 | 1.00 | 1.00 | 1.00 | 0.96 | 1.00 | 1.00 | 1.00 | 1.00 |
| B | 1 | 0.00 | 0.27 | 0.00 | 0.70 | 0.63 | 1.00 | 0.93 | 1.00 |
| B | 2 | 0.03 | 0.35 | 0.33 | 0.49 | 0.92 | 0.90 | 0.90 | 1.00 |
| B | 3 | 0.13 | 0.43 | 0.24 | 0.56 | 1.00 | 1.00 | 0.98 | 0.99 |

Table 4. Player detection of three players from teams A and B from frame 1 until frame 800.

The usage of Kalman filters to perform the tracking allows not only to make the processing time shorter, but also to minimize missed detections, because the predictive stage (Eq. 9) determines the next position of the player (taking into account the model of the player's movement). This position is used on the next frame to search for the blob and, in case no measure is found, the predicted value is used as the position of the player. Table 5 shows how the tracking of players from team B is improved using the Kalman filter.


| Player | 1-100 | 101-200 | 201-300 | 301-400 | 401-500 | 501-600 | 601-700 | 701-800 |
|--------|-------|---------|---------|---------|---------|---------|---------|---------|
| 1 | 0.00 | 0.79 | 0.15 | 1.00 | 1.00 | 1.00 | 0.95 | 1.00 |
| 2 | 0.16 | 0.83 | 0.57 | 0.90 | 1.00 | 0.98 | 0.93 | 1.00 |
| 3 | 0.74 | 0.86 | 0.75 | 1.00 | 1.00 | 1.00 | 0.99 | 1.00 |

Table 5. Detection improvement using the Kalman filter for the three players of team B from Table 4.

Additionally, by performing the tracking in real world coordinates it is possible to track the players between cameras in a transparent way. Figure 9 illustrates players from team B crossing the middle line and being identified initially by the left camera and afterwards by the right camera, without the need for user intervention.


Fig. 8. Results of the player detection at frame 762: (a) without colour auto expansion and (b) with colour auto expansion.

#### **4.2 Generation of human meaningful video**

In order to ascertain the importance of parallel processing in this application, the same algorithm was ported to the OpenMP and CUDA frameworks, yielding the results shown in Table 6. Naturally, parallel processing is heavily dependent on hardware, and results are shown for two laptop solutions, only one of which is able to run the proprietary CUDA toolkit for GPU processing.
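As a hedged illustration of the OpenMP side of such a port (this is not the chapter's code), a data-parallel per-pixel loop needs only one worksharing pragma; the loop body here is a stand-in for the real per-pixel LUT copy.

```cpp
#include <cstddef>
#include <vector>

// Output pixels are independent, so a single pragma spreads the loop over all
// cores. When OpenMP is not enabled, the pragma is ignored and the loop runs
// serially, which is what lets the same source run on both laptops.
void applyGain(std::vector<float>& pixels, float gain) {
    const std::ptrdiff_t n = static_cast<std::ptrdiff_t>(pixels.size());
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < n; ++i)
        pixels[i] *= gain;  // stand-in for the real per-pixel work
}
```

The CUDA version of the same loop would move the buffer to the GPU and give each thread one (or a strided set of) pixel indices.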


Fig. 10. Execution times for several parallel computing techniques (see Table 6).


Fig. 9. Tracking players between cameras. (a) Players are being tracked by the left camera. (b) Some players start to be tracked by the right camera.


| Laptop | OMP ✗ CUDA ✗ | OMP ✓ CUDA ✗ | OMP ✗ CUDA ✓ | OMP ✓ CUDA ✓ |
|--------|---------------|---------------|---------------|---------------|
| 1 (slower CPU, no CUDA) | 145.1+42.1 = 187.2 ms (5.3 fps) | 137.9+34.5 = 172.4 ms (5.8 fps) | — | — |
| 2 (faster CPU, CUDA-capable) | 91.1+69.7 = 160.8 ms (6.2 fps) | 40.7+28.6 = 69.3 ms (14.42 fps) | 91.1+22.2 = 113.3 ms (8.83 fps) | 40.7+21.5 = 62.5 ms (16.02 fps) |

Table 6. Parallel computing execution times. The parcels (a+b) are: a - other algorithms; b - undistorting and correctly joining the incoming images. ✗ indicates without and ✓ with the corresponding framework.

By studying Table 6 and Figure 10, it can be seen that a better single CPU improves performance only marginally - a CPU running with a clock at least 1.49 times faster yields about a 17% performance gain. This is probably due to the large amount of data being manipulated in this application - the limitation lies in the memory subsystem, not in pure single-CPU power. The same results also demonstrate the advantage of parallel computing with joint usage of OpenMP (OMP) and CUDA, with a performance gain of a little over 2.6 times on the same computer. Even larger gains were initially expected, but the complexity of the algorithm that selects the objects to show on the overlapping portion of the initial images considerably limited the overall performance. As recent processors offer high-frequency dual/quad (or more) cores, the interest of using 16 much slower processors must also be weighed - but using the GPU processors of the video card frees up the main CPU for other tasks.
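The arithmetic behind these claims can be checked directly against the measured times reported above (the values below are hardcoded from those measurements):

```python
# Per-frame execution times in ms, taken from the timing measurements.
no_parallel_slow_cpu = 187.2   # older machine, no OMP, no CUDA
no_parallel_fast_cpu = 160.8   # newer machine, no OMP, no CUDA
full_parallel_fast = 62.5      # newer machine, OMP + CUDA

# A ~1.49x faster clock alone buys only a modest frame-rate gain:
cpu_only_gain = no_parallel_slow_cpu / no_parallel_fast_cpu
print(f"better CPU alone: {cpu_only_gain:.2f}x")   # 1.16x, i.e. about 16-17%

# OpenMP + CUDA on the same computer gives the reported ~2.6x speedup:
parallel_gain = no_parallel_fast_cpu / full_parallel_fast
print(f"OMP + CUDA on same machine: {parallel_gain:.2f}x")   # 2.57x
```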

The previous study relates to producing a single frame of the human-meaningful video stream with parallel computing. Future work includes using distributed computing to produce the stream; for example, using 2 computers to produce alternating frames is expected to yield excellent performance gains (almost halving the processing time). This expectation is justified because few dependencies exist among consecutive frames; as such, parallel processing is most effective, and the "slow" transmission over the network should not introduce significant performance loss when compared with the benefits of doubling the computational power available for processing. A similar approach (several frames at once) would also be possible on a single computer, but would likely not be as interesting, as most available resources are already fully used most of the time. As mentioned earlier, actual performance gains are very much hardware dependent.
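The alternating-frame idea can be sketched as a round-robin schedule (a sketch only, not the authors' implementation; the node count and helper names are assumptions):

```python
# Sketch of the proposed distributed scheme: frame i is processed by
# node i % NUM_NODES. Consecutive frames are nearly independent, so the
# nodes can work concurrently on different frames.
NUM_NODES = 2

def assign_frames(frame_ids, num_nodes=NUM_NODES):
    """Map each processing node to the list of frames it will handle."""
    schedule = {n: [] for n in range(num_nodes)}
    for i in frame_ids:
        schedule[i % num_nodes].append(i)
    return schedule

print(assign_frames(range(8)))
# {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}

# Ignoring network overhead, two nodes nearly halve the effective
# per-frame time (62.5 ms -> ~31 ms, i.e. ~32 fps).
effective_ms = 62.5 / NUM_NODES
print(f"{effective_ms:.1f} ms/frame, {1000 / effective_ms:.0f} fps")
```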

In order to really have a meaningful video for humans, the found objects (handball players, etc.) have to be marked onto the image, a task that was not parallelized.

#### **5. Conclusions**


This chapter presented an automatic vision system for detecting and tracking players in indoor games. The main objective was to implement a system that could be adaptive and take into consideration the light changes (and subsequent colour changes) that frequently occur in sports pavilions (for example due to windows, clouds, sun orientation, ...). The ultimate goal is to provide better information to sports agents and to improve sporting quality.

Players are identified by colour segmentation. The techniques used include foreground detection and classification of team colours using a Fuzzy-inspired categorization model that allows a single colour to belong to both teams simultaneously. The colour subspaces have no particular shape and grow or shrink over time in order to take into account spatial and temporal changes of the recognized colours.
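A minimal sketch of this kind of Fuzzy-style classification follows; the distance measure, radii and thresholds are illustrative assumptions, not the chapter's exact model. Each team keeps a set of reference colours, and a pixel receives a membership degree in [0, 1] for each team, so one colour may belong to both teams at once:

```python
import math

def membership(pixel, refs, radius=200.0):
    """Fuzzy membership degree of an RGB pixel in a team's colour subspace."""
    d = min(math.dist(pixel, r) for r in refs)   # distance to closest reference
    return max(0.0, 1.0 - d / radius)            # decays linearly with distance

team_a = [(200, 30, 30)]        # reddish kit (assumed values)
team_b = [(30, 30, 200)]        # bluish kit (assumed values)

pixel = (120, 30, 120)          # purple-ish: between the two kits
mu_a, mu_b = membership(pixel, team_a), membership(pixel, team_b)
print(mu_a > 0 and mu_b > 0)    # True: this colour belongs to both teams

# Subspaces adapt over time: colours classified with high confidence are
# added as new references, growing the region that team's subspace covers.
def grow(refs, pixel, mu, threshold=0.7):
    if mu >= threshold:
        refs.append(tuple(pixel))
    return refs
```

Because each team's subspace is just a set of reference points plus a decay rule, it has no fixed geometric shape and can expand or contract as references are added or dropped, matching the adaptive behaviour described above.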

Player tracking is improved by making use of a Kalman Filter per player, and the resulting information is organized and shown in a single undistorted image view of the entire field, adequate for human studies. Although the system is multi-camera, the use of parallel processing allows efficient generation of this final image.
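As a sketch of a per-player tracker, a generic constant-velocity Kalman Filter on image coordinates can be written as follows (the chapter's actual state model and noise settings are not specified here, so these matrices are assumptions):

```python
import numpy as np

# Constant-velocity Kalman Filter for one player: state = [x, y, vx, vy].
dt = 1.0                       # one frame between updates
F = np.array([[1, 0, dt, 0],   # state transition: position += velocity * dt
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01           # process noise (assumed)
R = np.eye(2) * 1.0            # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict + update cycle; z is the measured (x, y) of the player."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# A player moving right at ~1 px/frame: after a few frames the filter
# recovers the velocity, which lets it predict through short occlusions.
x, P = np.zeros(4), np.eye(4) * 10.0
for t in range(1, 6):
    x, P = kf_step(x, P, np.array([float(t), 0.0]))
print(x[2])   # estimated vx, close to 1
```

When a player is temporarily occluded, running only the predict half of `kf_step` keeps an estimate of where the player should reappear, which is the behaviour the tracker relies on.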



The proposed methodology was validated with real game footage filmed at an important Portuguese handball championship. Results show that, using simple features such as colour, combined with a powerful tracking tool (the Kalman Filter), it is possible to detect and track players throughout the game area with very limited user intervention - initial colour calibration by the user and little more. The usage of adaptive colour subspaces generated by a Fuzzy-inspired methodology makes it possible to better define the teams' colour properties during the game and to increase detection rates.

#### **5.1 Future work**

Future work includes exploring methodologies to automatically adjust the time between subspace expansions during the game, according to both the colour subspace dynamics and the detection rate; incorporating more information into the Kalman Filter model, in order to make it more robust to long merging and occlusion situations; and extending the benefits of parallel processing to both the detection and the tracking.

Additionally, interesting game sequences are intended to be detected automatically (taking into consideration the players' positions) and the generated video tagged accordingly. This video is to be made "active" (searchable, "jumpable", ...) and integrated into a human interface navigation tool, to allow easy use of the extracted information by the final end users.

#### **6. Acknowledgments**

We would like to thank Fundação Calouste Gulbenkian for the support given through a PhD scholarship with ref. 104410.

#### **7. References**


Alahi, A., Boursier, Y., Jacques, L. & Vandergheynst, P. (2009). Sport players detection and tracking with a mixed network of planar and omnidirectional cameras, *Distributed Smart Cameras, 2009. ICDSC 2009. Third ACM/IEEE International Conference on*, IEEE, pp. 1–8.

APIDIS (2008). Autonomous Production of Images based on Distributed and Intelligent Sensing. URL: *http://www.apidis.org/index.htm*

Barron, J. & Thacker, N. (2005). Tutorial: Computing 2D and 3D optical flow, *Tina Memo Internal* (2004-12).

Canny, J. (1986). A computational approach to edge detection, *IEEE Trans. Pattern Anal. Mach. Intell.* 8: 679–698.

Chapman, B., Jost, G. & Van Der Pas, R. (2007). *Using OpenMP: portable shared memory parallel programming*, Vol. 10, The MIT Press.

Cheng, H., Jiang, X., Sun, Y. & Wang, J. (2001). Color image segmentation: advances and prospects, *Pattern Recognition* 34(12): 2259–2281.

Delannay, D., Danhier, N. & De Vleeschouwer, C. (2009). Detection and recognition of sports (wo)men from multiple views, *Distributed Smart Cameras, 2009. ICDSC 2009. Third ACM/IEEE International Conference on*, IEEE, pp. 1–7.

Deng, Y. & Manjunath, B. (2001). Unsupervised segmentation of color-texture regions in images and video, *IEEE Transactions on Pattern Analysis and Machine Intelligence* 23(8): 800–810.

Franks, I. M. & Nagelkerke, P. (1988). The Use of Computer Interactive Video in Sport Analysis, *Ergonomics* 31(11): 1593–1603.

Franks, I., Willison, G. E. & Goodman, D. (1987). Analysing a team sport with the aid of computers, *Canadian Journal of Sport Sciences* 12(2): 120–125.

Grimson, W., Stauffer, C., Romano, R. & Lee, L. (1998). Using adaptive tracking to classify and monitor activities in a site, *Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on*, IEEE, pp. 22–29.

Group, K. (2011a). The Khronos Group Inc. URL: *http://www.khronos.org/*

Group, K. (2011b). OpenCL - the open standard for parallel programming of heterogeneous systems. URL: *http://www.khronos.org/opencl/*

Halfhill, T. (2008). Parallel Processing with CUDA: Nvidia's High-Performance Computing Platform Uses Massive Multithreading, *Microprocessor Report* 22: 1–8.

Heikkila, J. & Silvén, O. (1999). A real-time system for monitoring of cyclists and pedestrians, *Visual Surveillance, 1999. Second IEEE Workshop on, (VS'99)*, IEEE, pp. 74–81.

Hu, M., Chang, M., Wu, J. & Chi, L. (2011). Robust Camera Calibration and Player Tracking in Broadcast Basketball Video, *Multimedia, IEEE Transactions on* 13(2): 266–279.

Kalman, R. & Others (1960). A new approach to linear filtering and prediction problems, *Journal of Basic Engineering* 82(1): 35–45.

Kasiri-Bidhendi, S. & Safabakhsh, R. (2009). Effective tracking of the players and ball in indoor soccer games in the presence of occlusion, *Computer Conference, 2009. CSICC 2009. 14th International CSI*, IEEE, pp. 524–529.

Koprinska, I. & Carrato, S. (2001). Temporal video segmentation: A survey, *Signal Processing: Image Communication* 16(5): 477–500.

Kristan, M., Perš, J., Perše, M. & Kovačič, S. (2009). Closed-world tracking of multiple interacting targets for indoor-sports applications, *Computer Vision and Image Understanding* 113(5): 598–611.

Monier, E., Wilhelm, P. & Ruckert, U. (2009). Template matching based tracking of players in indoor team sports, *2009 Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC)*, pp. 1–6.

Munkres, J. (1957). Algorithms for the Assignment and Transportation Problems, *Journal of the Society for Industrial and Applied Mathematics* 5(1): 32–38.

Needham, C. & Boyle, R. (2001). Tracking multiple sports players through occlusion, congestion and scale, *British Machine Vision Conference*, BMVA, pp. 93–102.

NVIDIA (2011). CUDA Zone. URL: *http://www.nvidia.com/object/cuda_home_new.html*

OpenMP (2011). OpenMP.org. URL: *http://openmp.org/wp/*

**7**

**Logistics Services and Intelligent Security Control for Transport Companies**

José F. Díez-Higuera, Francisco J. Díaz-Pernas, Miriam Antón-Rodríguez, David González-Ortega and Mario Martínez-Zarzuela

*Department of Signal Theory, Communications and Telematics Engineering, Telecommunications Engineering School, Valladolid University, Valladolid, Spain*

#### **1. Introduction**

In recent years, both the growth of urban areas and trade expansion have increased freight traffic on public roads. This increase in traffic affects transport companies, which suffer long delays in the delivery of goods. Therefore, they experience a degradation in service quality while operating costs increase. Intelligent transportation systems have shown their potential to significantly improve the management and operation of existing transportation systems (Siwek, 1998). Transport companies can benefit from using this technology to improve the quality of their services and to optimize the use of their resources. In order to increase the safety of their workers, these systems may also include monitoring of the driver's fatigue level, an important aspect considering that driver fatigue causes 60% of fatal accidents in which trucks are involved (AWAKE Consortium, 2004).

The developed system meets these requirements by means of a corporate web site application where the whole telematics infrastructure of the service provider is hosted. The system has a fully customizable, user-friendly interface based on Joomla!, which integrates a real-time geographic information system, in addition to other functions such as customer management and dispatch scheduling for different user profiles (administrators, operators, managers, …) in a company and different company profiles (backbone, retail, multimodal, etc.). Besides the usual real-time route guidance and communication functions, the on-board system incorporates an intelligent security control, which monitors the driver's fatigue and alertness level in real time. This functionality is accomplished by a computer vision system based on both face tracking and gesture recognition techniques. Collected data are processed to avoid accidents caused by driver distractions or sleepiness.

#### **1.1 Outsourcing**

One of the most complex problems that companies with transport and/or delivery activities must address nowadays is having suitable information to improve some of their tasks (operational efficiency and management, customer services, outputs, and other factors) by managing and monitoring the different operational variables that transport operations need. Outsourcing is the usual solution to minimize costs with no loss of efficiency. On the other


