**2.3.2 Different general methods for constraining the MCE and improving optical flow measures**

Due to the lack of information and spatial structure in the image, it is not easy to estimate a sufficiently dense velocity field.

To correct this problem, several restrictions are applied, for example, that nearby points move in a similar way. The general philosophy is that the original flow field, once estimated, is iteratively regularized with respect to the smoothness constraint.

The first such constraint was proposed by Horn and Schunck (Horn & Schunck, 1981). Optical flow resulting from global constraints is quite robust, due to the combination of results, and is also pleasing to the human eye. Two of its biggest drawbacks are its iterative nature, which requires large amounts of time and computing resources, and the fact that motion discontinuities are not handled properly, so erroneous results are produced in the regions surrounding motion edges. To address these latter gaps, other techniques have been proposed that use global statistics, such as Markov random fields (Heitz & Bouthemy, 1993).
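As an illustration of this global smoothing constraint, the Horn and Schunck iteration can be sketched as follows. This is a minimal NumPy sketch with simplified derivative estimates and periodic boundaries; it is not any of the cited implementations, and the parameter values are arbitrary:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Iteratively regularize the flow field under the smoothness constraint.

    I1, I2 : consecutive grayscale frames (float arrays).
    alpha  : weight of the smoothness term.
    """
    # Crude finite-difference derivatives (real systems use better filters).
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1

    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # Local mean flow from the 4-neighborhood (periodic boundaries).
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Jacobi-style update derived from the Euler-Lagrange equations.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

The sweep over the whole image at every iteration is precisely the time and resource cost mentioned above.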

All MCE estimation techniques impose significant restrictions on a neighborhood over which the flow is assumed constant. To meet this requirement when there are multiple movement patterns, this neighborhood has to be as small as possible, but at the same time it must be large enough to gather information and avoid the aperture problem. A trade-off is therefore needed.

A variety of models use estimations over this neighborhood, such as least squares. Using a quadratic objective function assumes a Gaussian residual error; but if there are multiple motions in the neighborhood, these errors can no longer be considered Gaussian. Even if the errors were independent (a very common situation), the error distribution is better modeled as bimodal.
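For instance, the least-squares estimate over a neighborhood can be sketched as below (a hypothetical helper, assuming precomputed derivative images and a single translational motion, in the spirit of Lucas & Kanade, 1981). The size of the returned residual is one way to notice that the single-motion, Gaussian-error assumption has broken down:

```python
import numpy as np

def lk_flow_at(Ix, Iy, It, y, x, r=2):
    """Least-squares flow at pixel (y, x) from a (2r+1)^2 neighborhood,
    assuming one translational motion across the whole window."""
    # Stack one gradient constraint Ix*u + Iy*v = -It per neighborhood pixel.
    A = np.stack([Ix[y - r:y + r + 1, x - r:x + r + 1].ravel(),
                  Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()], axis=1)
    b = -It[y - r:y + r + 1, x - r:x + r + 1].ravel()
    # A large residual suggests multiple motions (non-Gaussian errors).
    flow, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    return flow, residual
```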

There are approximate models that can be incorporated into the range of flow techniques being presented; these approaches also model spatial variations of multiple movements. The neighborhood integration techniques, as mentioned, assume that the image motion is purely translational in local regions. More elaborate models (such as the affine model) can thus extend the range of motion captured and provide additional constraints. These methods recast the MCE into an error function to be resolved or minimized by least squares (Campani & Verri, 1992; Bergen & Bart, 1992; Gupta & Kanal, 1995, 1997; Giaccone & Jones, 1997, 1998).
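A sketch of such a parametric extension, fitting the six parameters of an affine flow u = a1 + a2·x + a3·y, v = a4 + a5·x + a6·y to the MCE by least squares. This is illustrative code under those generic assumptions, not any of the cited formulations:

```python
import numpy as np

def affine_flow(Ix, Iy, It):
    """Fit an affine motion model to the gradient constraint
    Ix*u + Iy*v + It = 0 over the whole region by least squares."""
    h, w = Ix.shape
    yg, xg = np.mgrid[0:h, 0:w]
    # Each row: [Ix, Ix*x, Ix*y, Iy, Iy*x, Iy*y] . (a1..a6) = -It
    A = np.stack([Ix, Ix * xg, Ix * yg, Iy, Iy * xg, Iy * yg],
                 axis=-1).reshape(-1, 6)
    b = -It.ravel()
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params  # (a1, a2, a3, a4, a5, a6)
```

Compared with the purely translational window, the six parameters let one region describe rotation, shear, and divergence, which is the extended "range of motion" referred to above.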

Gradient methods behave poorly with large displacements between frames, since the temporal sampling of the image sequence is insufficient and the time-derivative measures become inaccurate. As a workaround, it is possible to use larger spatial filters than in the early models (Christmas, 1998).

The use of a multi-scale Gaussian pyramid can handle large movements between frames and fill the gaps in large regions of uniform texture, with coarse-scale motion estimates used as seeds for the finer scales (Zhang, 2001).
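The coarse-to-fine idea can be sketched as follows. The helper names and the per-level estimator are hypothetical, and a full implementation would also warp the second frame by the upsampled flow before refining at each level:

```python
import numpy as np

def pyramid(img, levels=3):
    """Gaussian pyramid by separable blur-and-subsample."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    k /= k.sum()
    pyr = [img]
    for _ in range(levels - 1):
        blurred = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, img)
        blurred = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, blurred)
        img = blurred[::2, ::2]  # subsample by 2
        pyr.append(img)
    return pyr  # pyr[-1] is the coarsest level

def coarse_to_fine(I1, I2, estimate_flow, levels=3):
    """Estimate flow at the coarsest scale, then upsample and refine.

    estimate_flow(a, b) -> (du, dv) is any per-level estimator,
    e.g. a Horn-Schunck or Lucas-Kanade step.
    """
    p1, p2 = pyramid(I1, levels), pyramid(I2, levels)
    u = v = np.zeros_like(p1[-1])
    for a, b in zip(reversed(p1), reversed(p2)):
        if u.shape != a.shape:
            # Upsample the previous estimate; displacements double per level.
            u = 2 * np.kron(u, np.ones((2, 2)))[:a.shape[0], :a.shape[1]]
            v = 2 * np.kron(v, np.ones((2, 2)))[:a.shape[0], :a.shape[1]]
        du, dv = estimate_flow(a, b)
        u, v = u + du, v + dv
    return u, v
```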

The use of temporal multi-scale processing (Yacoob & Davis, 1999) also allows the accurate estimation of a range of different movements, but this method requires a sampling rate high enough to reduce movements to about one pixel/frame.

Real-Time Motion Processing Estimation Methods in Embedded Systems 277

The schemes discussed so far treat the calculation of optical flow as a separate problem for each frame, without any feedback (the motion results of one frame do not inform the analysis of the following one). Giaccone and Jones (Giaccone & Jones, 1998, 1999) designed an architecture capable of dealing with multiple motions (while keeping temporal consistency) that segments moving regions by a least-squares method.

This algorithm has proven robust within given speed limits and also compares well with similar models. Its computational cost is dominated by the generation of a projected image: a PAL-sized frame requires about 40 seconds/image on a SPARC 4/670MP.

However, this temporal-consistency constraint is used only sporadically today. Objects in the real world obey physical laws of motion such as inertia and gravity, so there is predictability in their behavior, and it is at the very least surprising that most real-time algorithms do not implement flow-based feedback.

For this purpose, one can exploit the fact that an additional constraint equation can be built from the velocity field and used in the next iteration, treating the problem as an evolutionary phenomenon. Probabilistic or Bayesian models (Simoncelli & Heeger, 1991) offer an alternative that incorporates real-world information and updates the results of previous estimates by integrating temporal information.

We have seen that the perception of motion can be modeled as an orientation in space-time, where gradient methods extract this orientation through ratios of oriented filters. The so-called motion-energy models are often based on, or similar in many respects to, gradient models, since both use filter banks to obtain this space-time orientation and hence the motion. The main difference is that the filters used in energy models are designed to match space-time orientations directly, rather than being combined as a ratio of filters. The design of space-time oriented filters is usually carried out in the frequency domain.

Motion-energy methods are biologically plausible, but their implementations carry a high computational overhead due to the large number of filtering operations required, which makes real-time implementation difficult. Unlike gradient methods, the resulting velocity is not obtained explicitly but only through a population of solutions, these being Bayesian models.

One advantage is that bimodal velocity measurements, such as invisible movements, can be handled by these structures (Simoncelli & Heeger, 1991). The correct interpretation of the processed results is not an easy task when dealing with models of a probabilistic nature. Interesting optimizations have been developed to increase the speed of these methods, using Reichardt detectors (Franceschini *et al*., 1992) as support.

**3. Case studies of real-time implementation performed**

Several case studies are presented of the real-time implementations carried out in recent years by the author of the present chapter and by other authors:

• One implementation of the Horn and Schunck algorithm does not provide optimal overall results in software, but it is efficient and the model is capable of operating in real time. The design uses a recursive implementation of the smoothness constraint, applying one smoothing iteration per frame.

• There is also an implementation of the Horn and Schunck algorithm (Horn & Schunck, 1981) by Cobos (Cobos *et al*., 1998) on an FPGA platform, but with the same drawback noted earlier regarding the reliability of the model, and only barely reaching real time.

• The ASSET-2 algorithm is feature-based and has been implemented to run in real time using custom hardware (Smith, 1995). The algorithm is simple: it determines the position of edges and corners, trying to solve the correspondence problem between temporally neighboring frames. The system was implemented on a PowerPC with custom hardware to extract the features in real time. It does not provide dense results, its outputs being sparse, but it clusters groups of similar velocities to segment objects according to their movement.

• Niitsuma *et al*. (Niitsuma & Maruyama, 2004) apply correlation-based optical flow and, through the joint operation of a stereoscopic system, also measure the distance of moving objects. The system is based on a Virtex-2 XC2V6000 clocked at 68 MHz and delivers real-time results at a resolution of 640×480 points.

• Tomasi and Díaz (Díaz *et al*., 2006) implemented a real-time system based on the Lucas and Kanade algorithm (Lucas & Kanade, 1981; Díaz *et al*., 2006) with satisfactory performance results. This algorithm belongs to the so-called gradient family and is used as a didactic introduction to optical flow in most universities. It exhibits a favorable ratio between performance and implementation effort. The problems with this implementation are its sensitivity to abrupt changes in illumination and its heavy dependence on the aperture problem. Its advantage is the extensive documentation and the experience gathered with this algorithm (more than 25 years) by the scientific community. Later, in 2010 and 2011, Tomasi implemented a fully real-time multimodal system mixing motion estimation and binocular disparity (Tomasi *et al*., 2010, 2011), combining low-level and mid-level vision primitives.

• Botella *et al*. implemented a robust gradient-based optical flow real-time system and its extension to mid-level vision combining orthogonal variant moments (Botella *et al*., 2009, 2010, 2011). Block-matching motion estimation has also been accelerated to real time by González and Botella (González *et al*., 2011). All these models will be analyzed thoroughly in this chapter.

**3.1 Multichannel gradient Model**

The multichannel gradient model (McGM), developed by Johnston (Johnston *et al*., 1995, 1996), has recently been implemented; it was selected for its robustness and bio-inspiration. The model pursues many goals, such as invariance to illumination, static patterns, contrast, and noisy environments. Additionally, it is robust against failures, accounts for some optical illusions (Anderson *et al*., 2003), and detects second-order motion (Johnston, 1994).
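Both the gradient family (including the McGM) and the energy family rest on banks of space-time oriented filters. A minimal sketch of one quadrature (even/odd) filter pair tuned to a space-time orientation, with all parameter values and the tiny support chosen purely for illustration:

```python
import numpy as np

def motion_energy(seq, fx=0.25, ft=0.25):
    """Phase-invariant energy of a space-time patch `seq` (t along axis 0,
    x along axis 1, 9x9 support) for a filter pair tuned to spatial
    frequency fx and temporal frequency ft, i.e. velocity -ft/fx."""
    t, x = np.mgrid[-4:5, -4:5]            # small space-time support
    env = np.exp(-(x**2 + t**2) / 8.0)     # Gaussian envelope
    phase = 2 * np.pi * (fx * x + ft * t)  # oriented carrier
    even = env * np.cos(phase)             # quadrature pair
    odd = env * np.sin(phase)
    # Squared-and-summed responses: invariant to stimulus phase.
    return (seq * even).sum() ** 2 + (seq * odd).sum() ** 2
```

A bank of such filters covering many orientations, followed by a population read-out (or a Bayesian stage), is exactly the large filtering load that makes these methods expensive compared with gradient-filter ratios.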
