**2. Motion estimation**

The term "optic flow" was coined by James Gibson after the Second World War, when he was working on the development of tests for pilots in the U.S. Air Force (Gibson, 1947). His basic principles provided the foundation for much of the work in computer vision 30 years later (Marr, 1982). In later publications, Gibson defined the gradient of image deformation along the motion of the observer, a concept that came to be called "flow".


When an object moves in front of, for example, a camera, its two-dimensional projection moves with the object, as shown in Figure 1. The projection of the three-dimensional motion vectors onto the two-dimensional detector is called the apparent field of velocities, or image flow. The observer has no direct access to the velocity field, since optical sensors provide luminance distributions, not speeds, and the motion vectors may be decidedly different from this luminance distribution.

Fig. 1. Projection from the 3D world to the 2D detector surface.

The term "apparent" refers to one of the capital problems in computer vision: the real velocity field is never available, only a two-dimensional field called the motion field. However, it is possible to calculate the movement of local regions of the luminance distribution, known as the optical flow field, which provides only an approximation to the actual field of velocities (Verri & Poggio, 1989).

As an example, it is easy to see that the optical flow can differ from the velocity field. As shown in Figure 2, a sphere rotating under constant illumination produces no change in the luminance of the image: the optical flow is zero everywhere, in contradiction with the velocity field. The reverse situation arises with a static scene and a moving light source: the velocity field is zero everywhere, even though the changing luminance induces a non-zero optical flow. One additional example is the rotating striped pole that used to announce barbershops, where the velocity field is perpendicular to the flow.

Fig. 2. Difference between the velocity field and the optical flow.

Despite these atypical situations, within certain limits it is possible to recover a motion estimate. The motion algorithms described in this chapter recover the flow as an approximation of the projected velocity field. They deliver a two-dimensional array of vectors that can feed several high-level tasks; the literature reports several applications in this regard (Nakayama, 1985; Mitiche, 1996).

Optical flow estimation is an ill-conditioned problem, since it is based on imaging three dimensions onto a two-dimensional detector. The projection removes information, and its recovery is not trivial. Recovering the flow is therefore an ill-posed problem: there are infinitely many velocity fields that may cause the observed changes in the luminance distribution and, in addition, infinitely many three-dimensional movements that can generate a given field of velocities.

Thus, it is necessary to consider a number of cues or signals to restore the flow. The so-called aperture problem (Wallach, 1976; Horn & Schunck, 1981; Adelson & Bergen, 1985) appears when the two-dimensional velocity is measured using only local measurements: only the component of the velocity perpendicular to an edge can be recovered. Recovering the full velocity forces the addition of external conditions, which usually require obtaining information from a finite neighbourhood area, see Figure 3a. The region must be large enough to contain a solution, for example a corner. However, collecting information across a region increases the probability of also taking in different motion contours and hence of corrupting the results, so a trade-off is needed; this last question is known as the general aperture problem (Ong & Spann, 1999).

**2.1 State of the art in real-time motion estimation**

There are many algorithms and architectures frequently used for real-time optical flow estimation, emanating from artificial intelligence, signal theory, robotics, psychology and biology. The literature is extensive, and it is not the purpose of this section to explain all the algorithms; the state of the art will be reviewed as descriptively as possible, for the sake of clarity, in order to justify the real-time implementations presented at the end of this chapter.

We can classify motion estimation models into three different categories:

• Correlation-based methods. They work by comparing the position of image structure between adjacent frames and inferring the speed of the change at each location. They are probably the most intuitive methods (Oh, 2000).

• Differential or gradient methods. They derive the velocity from the spatial and temporal derivatives of the image intensity; the speed is obtained as a ratio of these measures (Baker and Matthews, 2004; Lucas & Kanade, 2001).

• Energy methods. They are based on filters constructed with a response oriented in space and time so as to be tuned to certain speeds. The structures used in this processing are parallel filter banks that are activated for a range of values (Huang, 1995).

The different approaches to motion estimation are each appropriate for a given application. According to the sampling theorem (Nyquist, 2006), a signal must be sampled at a rate at least twice the highest frequency present in it; in motion estimation this translates into ensuring that the motion between two frames is small compared with the scale of the input pattern.
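The aperture problem can be made concrete with a short numerical sketch (illustrative code, not from the chapter): from a single local measurement, the brightness-constancy constraint Ix·u + Iy·v + It = 0 is one equation in two unknowns, so only the velocity component along the spatial gradient, the so-called normal flow, is determined.

```python
import numpy as np

def normal_flow(ix, iy, it):
    """Normal component of the velocity from one local measurement.

    Only the projection of (u, v) onto the gradient (ix, iy) is
    recoverable; assumes a non-zero gradient at this pixel.
    """
    g2 = ix * ix + iy * iy
    return (-it * ix / g2, -it * iy / g2)

# A vertical grating translated horizontally by 1 pixel/frame:
x = np.arange(64, dtype=float)
frame0 = np.cos(2 * np.pi * x / 16)[None, :].repeat(64, axis=0)
frame1 = np.cos(2 * np.pi * (x - 1) / 16)[None, :].repeat(64, axis=0)

# Central-difference derivative estimates at an interior pixel (r, c):
r, c = 32, 36
ix = (frame0[r, c + 1] - frame0[r, c - 1]) / 2.0
iy = (frame0[r + 1, c] - frame0[r - 1, c]) / 2.0
it = frame1[r, c] - frame0[r, c]

u_n, v_n = normal_flow(ix, iy, it)
# The grating has no vertical structure (iy == 0), so the recovered
# flow is (1, 0); any motion (1, v) would produce the same frames.
```

Because the vertical component is invisible here, a diagonal motion (1, 1) of this grating would be indistinguishable from the horizontal motion (1, 0): exactly the ambiguity of Figure 3a.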

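A minimal sketch of the correlation-based category, assuming a full-search block matcher with a sum-of-absolute-differences criterion (function names and parameter values are illustrative, not from the chapter):

```python
import numpy as np

def block_match(f0, f1, block=8, search=4):
    """For each block of f0, exhaustively search a +/-search window in
    f1 and keep the displacement with the smallest SAD."""
    h, w = f0.shape
    vectors = {}
    for y in range(search, h - block - search, block):
        for x in range(search, w - block - search, block):
            ref = f0[y:y + block, x:x + block]
            best, best_dv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = f1[y + dy:y + dy + block, x + dx:x + dx + block]
                    sad = np.abs(ref - cand).sum()  # sum of absolute differences
                    if best is None or sad < best:
                        best, best_dv = sad, (dx, dy)
            vectors[(x, y)] = best_dv
    return vectors

rng = np.random.default_rng(0)
f0 = rng.random((48, 48))
f1 = np.roll(f0, (1, 2), axis=(0, 1))  # global translation (dx, dy) = (2, 1)
v = block_match(f0, f1)
# every interior block should report the displacement (2, 1)
```

Full search is the simplest and most expensive variant; real-time implementations typically prune the candidate set with coarse-to-fine or patterned searches.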
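The differential category can be sketched in the spirit of Lucas & Kanade (again, illustrative code, not the chapter's implementation): brightness constancy gives one equation per pixel, and assuming a constant velocity over a small window yields an overdetermined least-squares system, which also resolves the aperture ambiguity whenever the window contains gradients in more than one orientation.

```python
import numpy as np

def lucas_kanade(f0, f1, y, x, win=7):
    """Least-squares velocity at (y, x) from a win x win neighbourhood."""
    half = win // 2
    # Central spatial differences and a two-frame temporal difference:
    ix = (np.roll(f0, -1, axis=1) - np.roll(f0, 1, axis=1)) / 2.0
    iy = (np.roll(f0, -1, axis=0) - np.roll(f0, 1, axis=0)) / 2.0
    it = f1 - f0
    sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    # Stack Ix*u + Iy*v = -It over the window and solve in least squares:
    A = np.stack([ix[sl].ravel(), iy[sl].ravel()], axis=1)
    b = -it[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# A smooth blob translating by (dx, dy) = (1.0, 0.5):
yy, xx = np.mgrid[0:64, 0:64]
def blob(cy, cx):
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 50.0)

f0 = blob(32.0, 32.0)
f1 = blob(32.5, 33.0)
u, v = lucas_kanade(f0, f1, 35, 35)  # estimate near the blob's flank
```

A single linearization step like this is only accurate for sub-pixel to few-pixel motions, which is why practical versions iterate and use image pyramids.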
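The energy category can be sketched with a bank of quadrature space-time filters, simplified here to one spatial dimension: each Gabor pair is oriented in the x-t volume so as to be tuned to one candidate speed, and the speed whose pair collects the most energy is reported. All parameter values are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def gabor_pair(speed, size=11, sigma=3.0, freq=0.125):
    """Quadrature space-time Gabor pair tuned to a given speed."""
    t = np.arange(size) - size // 2
    x = np.arange(size) - size // 2
    T, X = np.meshgrid(t, x, indexing="ij")
    env = np.exp(-(X ** 2 + T ** 2) / (2 * sigma ** 2))
    phase = 2 * np.pi * freq * (X - speed * T)  # constant along the motion
    return env * np.cos(phase), env * np.sin(phase)

def motion_energy(volume, speeds):
    """Pick the candidate speed whose filter pair maximises total energy."""
    size = 11
    energies = []
    for s in speeds:
        gc, gs = gabor_pair(s, size)
        e = 0.0
        # valid correlation of the quadrature pair with the x-t volume
        for ti in range(volume.shape[0] - size + 1):
            for xi in range(volume.shape[1] - size + 1):
                patch = volume[ti:ti + size, xi:xi + size]
                e += (patch * gc).sum() ** 2 + (patch * gs).sum() ** 2
        energies.append(e)
    return speeds[int(np.argmax(energies))]

# A sinusoidal pattern drifting at 2 pixels/frame in an x-t volume:
x = np.arange(64)
volume = np.stack(
    [np.cos(2 * np.pi * 0.125 * (x - 2.0 * t)) for t in range(20)]
)
best = motion_energy(volume, [-2, -1, 0, 1, 2, 3])
```

Squaring and summing the quadrature pair makes the response independent of the stimulus phase, which is what lets such filter banks run in parallel over a range of tuned values.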
