**2.1 State of the art in real-time motion estimation**

There are many algorithms and architectures frequently used for real-time optical flow estimation, drawing on artificial intelligence, signal theory, robotics, psychology and biology. The literature is extensive, and it is not the purpose of this section to explain every algorithm. Instead, the state of the art is reviewed as descriptively as possible, for the sake of clarity, in order to justify the real-time implementations presented at the end of this chapter.

We can classify motion estimation models into three different categories:

- Matching methods (correlation and tracking), which search for the displacement that best pairs features or regions of consecutive frames.
- Gradient (differential) methods, based on the spatio-temporal derivatives of the image brightness.
- Energy-based methods, built on the outputs of velocity-tuned spatio-temporal filters.

The different approaches to motion estimation are each appropriate for particular applications. According to the sampling theorem (Nyquist, 2006), a signal must be sampled at a rate at least twice the highest frequency contained in that signal. This ensures that the motion between two frames is small compared to the scale of the input pattern.
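As an illustrative sketch (our own, not from the chapter), the temporal frequency induced by a translating sinusoidal pattern is the product of its spatial frequency and its per-frame displacement, so the sampling condition bounds the largest displacement that can be estimated without temporal aliasing. The function names and parameters below are assumptions for the sake of the example:

```python
def max_alias_free_displacement(spatial_freq_cpp: float) -> float:
    """Largest per-frame displacement (pixels) of a sinusoid of
    `spatial_freq_cpp` cycles/pixel that stays below the temporal
    Nyquist limit of 0.5 cycles/frame."""
    return 0.5 / spatial_freq_cpp

def is_alias_free(spatial_freq_cpp: float, displacement_px: float) -> bool:
    # Temporal frequency (cycles/frame) = spatial frequency * velocity.
    return spatial_freq_cpp * displacement_px < 0.5

# A pattern with a 10-pixel wavelength (0.1 cycles/pixel) tolerates
# displacements of up to 5 pixels per frame.
print(max_alias_free_displacement(0.1))  # -> 5.0
print(is_alias_free(0.1, 8.0))           # -> False (temporal aliasing)
```

This makes explicit why fine-grained (small-scale) input patterns tolerate only small inter-frame motion.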

Real-Time Motion Processing Estimation Methods in Embedded Systems 269

When this theorem is no longer fulfilled, the phenomenon of sub-sampling, or aliasing, appears. In space-time images, this phenomenon produces incorrect inclinations or structures unrelated to each other; as an example of temporal aliasing, the propeller of a plane can appear to rotate in the direction opposite to its true motion, as shown in Figure 3b. In short, long displacements cannot be estimated from input patterns with small scales. In addition to this problem, we have the aperture problem, discussed previously. These two problems (aliasing and aperture) together make up the general correspondence problem, as shown in Figure 3.

Fig. 3. (3a, left) The so-called "aperture problem". (3b, right) The "aliasing problem". The two problems together form the "correspondence problem" of motion estimation.

Therefore, the movement of the input patterns does not always correspond to features of consecutive frames in an unambiguous manner. The physical correspondence may be undetectable because of the aperture problem, a lack of texture (as in the example of Figure 2), long displacements between frames, and so on. Likewise, the apparent motion can lead to a false correspondence. For such situations it is possible to use matching algorithms (tracking and correlation), although there is currently much debate about the advantages and disadvantages of these techniques compared with those based on gradient and motion energy.

Correlation methods are less sensitive to changes in lighting, and they are able to estimate long displacements that do not satisfy the sampling theorem (Yacoob & Davis, 1999). However, they are extremely sensitive to periodic structures, which produce multiple local minima, and when the aperture problem arises their responses are unpredictable.

Alternatively, gradient- and energy-based methods are better in efficiency and accuracy; they are able to estimate the normal (perpendicular) component of the optical flow in the presence of the aperture problem.

Machine vision typically uses CCD cameras with a discrete frame rate; varying this rate modifies the displacement between frames, and if these shifts are too large, gradient methods fail (since the continuity of the space-time volume is broken). Although it is possible to use anti-aliasing spatial smoothing to avoid temporal aliasing (Christmas, 1998; Zhang & Wu, 2001), the counterpart is a degradation of spatial information. Therefore, for a given spatial resolution, one has to sample at a high temporal frequency (Yacoob & Davis, 1999).

On the other hand, it is quite common for real-time optical flow algorithms to share the functional architecture shown in Figure 4, organized as a hierarchical process.

Fig. 4. Functional real-time architecture found in most optical flow algorithms

First, the sequence of images, or a temporal buffer of them, is filtered to obtain basic measures through convolutions, the Fast Fourier Transform (FFT), pattern extraction, arithmetic, and so on.

These measures are then recombined through various methods to reach a basic estimate of velocity (usually incomplete and deficient at these early stages). Subsequently, the final flow estimation is obtained by imposing a set of constraints on the process and its results. These constraints are generated by assumptions about the nature of the flow or the scene (such as rigid-body restrictions), but even with them, the retrieved information is often not robust enough to yield a unique solution for the optical flow field.

At the beginning of this section it was explained that optical flow is estimated from the observable changes in the pattern of luminance over time. In the case of non-detectable movement, such as a rotating sphere of uniform brightness (Figure 2), the estimated optical flow is zero everywhere, even though the actual velocities are not zero. A further caveat is that there is no unique image motion that justifies a given change in the observed brightness; visual motion measurement is therefore often ambiguous and always has to be associated with a number of physical interpretations.

**2.2 Basic movement patterns**

Despite the difficulties in the recovery of flow, biological systems work surprisingly well in real-time. In the same way that these systems have specialized mechanisms to detect color and stereopsis, they also have mechanisms devoted to visual motion (Albright, 1993). As in other areas of research in computer vision, models of such natural systems are built in order to formalize bio-inspired solutions.

Thanks to psychophysical and neurophysiological studies, it has been possible to build models that extract motion from a sequence of images; these biological models are usually complex and perform poorly at high speed in real-time. One of the first models based on a real-time bio-inspired visual sensor was proposed by Reichardt (Reichardt, 1961). The detector consists of a pair of receptive fields, as shown in Figure 5, where the first signal is delayed with respect to the second before the two are nonlinearly combined by multiplication.

Receptors 1 and 2 (shown as edge detectors) are spaced a distance ΔS apart, and a delay is imposed on each signal; the resulting signals C1 and C2 are then combined via multiplication. In the final stage, the result of the first half of the detector is subtracted from that of the second, which increases the directional selectivity of the detector. The sensors shown in Figure 5 are Laplacian detectors, although it is possible to use any spatial filter or feature detector.
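The Reichardt correlation scheme described in this section can be sketched in a few lines. This is our own minimal 1-D illustration, not code from the chapter: each half-detector multiplies the delayed output of one receptor with the undelayed output of the other, and subtracting the mirrored half yields a direction-selective, signed response:

```python
def reichardt_response(signal_1, signal_2, delay=1):
    """Minimal 1-D Reichardt correlator (illustrative sketch).

    signal_1, signal_2: time series from two receptors spaced dS apart.
    Each half-detector multiplies the delayed output of one receptor
    with the undelayed output of the other; subtracting the mirrored
    half-detector produces a signed, direction-selective response.
    """
    n = len(signal_1)
    response = []
    for t in range(delay, n):
        half_a = signal_1[t - delay] * signal_2[t]  # motion from 1 to 2
        half_b = signal_2[t - delay] * signal_1[t]  # motion from 2 to 1
        response.append(half_a - half_b)
    return response

# A pulse passing receptor 1 first and receptor 2 one step later gives
# a positive net response; the reverse order gives a negative one.
s1 = [0, 1, 0, 0]
s2 = [0, 0, 1, 0]
print(sum(reichardt_response(s1, s2)))  # -> 1 (preferred direction)
print(sum(reichardt_response(s2, s1)))  # -> -1 (null direction)
```

Note that the response also depends on contrast, not only on velocity, which is one of the known limitations of this elementary detector.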
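To make the contrast between the matching and gradient approaches discussed in this section concrete, here is a minimal 1-D sum-of-absolute-differences block matcher. It is an illustrative sketch with names of our own choosing, not an implementation from the chapter; unlike gradient methods, exhaustive matching can recover displacements larger than the temporal-sampling limit, at the cost of ambiguity on periodic structures:

```python
def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized 1-D blocks.
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def match_displacement(frame_0, frame_1, start, size, max_disp):
    """Find the displacement of frame_0[start:start+size] in frame_1
    by exhaustive SAD search over [-max_disp, +max_disp] (1-D sketch)."""
    ref = frame_0[start:start + size]
    best_d, best_cost = 0, float("inf")
    for d in range(-max_disp, max_disp + 1):
        pos = start + d
        if pos < 0 or pos + size > len(frame_1):
            continue  # candidate block would fall outside the frame
        cost = sad(ref, frame_1[pos:pos + size])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# An edge shifted 3 pixels to the right between frames is recovered,
# even though such a jump may violate the gradient methods' limit.
f0 = [0, 0, 9, 9, 0, 0, 0, 0, 0, 0]
f1 = [0, 0, 0, 0, 0, 9, 9, 0, 0, 0]
print(match_displacement(f0, f1, 2, 2, 4))  # -> 3
```

On a periodic pattern several displacements would give the same zero cost, which is exactly the multiple-local-minima weakness of correlation methods noted above.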
