**4. Computational resources and throughput of the real-time systems**

In this section, we analyse each of these real-time motion estimation systems in terms of the computational resources required and the throughput obtained. The resources used by the implementations described in Sections 3.1 and 3.2 are shown in Tables 1-3, accounting for Slices and Block RAM percentages. MC is the maximum delay, in number of clock cycles, needed by each module implemented. Finally, Throughput gives the kilopixels per second (Kpps) of each implementation.
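As a quick sanity check on these figures, a module's throughput can be estimated from its clock frequency and its MC value, assuming the pipeline delivers one pixel every MC cycles. This is our reading of the tables, not a formula from the chapter, and the helper names below are ours:

```python
def throughput_kpps(freq_mhz: float, mc: int) -> float:
    """Kilopixels per second sustained by a module that needs
    `mc` clock cycles per pixel at `freq_mhz` MHz (assumed model)."""
    return freq_mhz * 1e6 / mc / 1e3

def frames_per_second(kpps: float, width: int, height: int) -> float:
    """Frame rate achievable at a given resolution for a given Kpps."""
    return kpps * 1e3 / (width * height)

# Temporal FIR filtering module of Table 1: 63 MHz with MC = 13
print(round(throughput_kpps(63, 13)))           # ~4846 Kpps, as in Table 1
# 4546 Kpps at 120x96 resolution
print(round(frames_per_second(4546, 120, 96)))  # ~395 fps, as in Table 5
```

The same arithmetic reproduces most of the Throughput/Frequency pairs reported in the tables (e.g. 38 MHz with MC = 19 gives 2000 Kpps).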

Table 1 shows this set of parameters for the McGM optical flow implementation (the case of Section 3.1), whereas Tables 2 and 3 deal with the multimodal sensor treated in Section 3.2. For this last case, the implementation of each one of the orthogonal variant moments is reported separately (Table 2), and the whole implementation, with all the modules working together, is reported in Table 3.

Table 4 concerns the implementation explained in Section 3.3. The FPGA used is an Altera Cyclone II EP2C35F672C6 with an embedded NIOS II processor core, hosted on a DE2 platform.

286 Real-Time Systems, Architecture, Scheduling, and Application

Motion Estimation (Low-Level)

|  | FIR Temporal Filtering (I) | FIR Spatial Filtering (II) | Steering (III) | Product & Taylor (IV) | Quotient (V) | Primitives (VI) |
|---|---|---|---|---|---|---|
| Slices (%) | 190 (1%) | 1307 (7%) | 1206 (6%) | 3139 (19%) | 3646 (20%) | 2354 (12%) |
| MC | 13 | 17 | 19 | 23 | 21 | 19 |
| Block RAM (%) | 1% | 31% | 2% | 13% | 16% | 19% |
| Throughput (Kpixels/s)/Freq. (MHz) | 4846/63 | 3235/55 | 2526/48 | 1782/41 | 1695/39 | 2000/38 |

Table 1. Slices, memory, number of cycles and performance for Low-level Optical flow scheme.

Orthogonal Variant Moments (Low-Level)

|  | Area (MI) | LX (MII) | LY (MIII) | PX (MIV) | PY (MV) |
|---|---|---|---|---|---|
| Slices (%) | 321 (2%) | 1245 (7%) | 1245 (7%) | 658 (4%) | 658 (4%) |
| MC | 7 | 11 | 11 | 5 | 5 |
| Block RAM (%) | 1% | 4% | 4% | 3% | 3% |
| Throughput (Kpixels/sec)/Freq. (MHz) | 4546/49 (whole scheme) |  |  |  |  |

Table 2. Slices, memory, number of cycles and performance for Orthogonal Moment scheme.

Multimodal Bioinspired Sensor (Mid & Low-Level)

|  | Low-level Vision Stage (Optical flow) | Low-level Vision Stage (Orthogonal Variant Moments) | Tracking & Segmentation Unit (Mid-Level) | COMPLETE Mid-level and Low-level Vision |
|---|---|---|---|---|
| Slices (%) | 11842 (65%) | 4127 (24%) | 1304 (6%) | 17710 (97%) |
| Block RAM (%) | 80% | 15% | 4% | 99% |
| MC (limiting) | 29 | 11 | 18 | 29 |
| Throughput (Kpixels/s)/Freq. (MHz) | 2000/38 | 4546/49 | 2000/38 | 2000/38 |

Table 3. Resources needed and performance for Low and Mid-Level vision. Multimodal system.

Real-Time Motion Processing Estimation Methods in Embedded Systems 287

Motion Estimation (Low-Level)

| Method, Processor / Quality | Logic Cells | Embedded DSPs (9 × 9) | Total memory bits |
|---|---|---|---|
| FST, e / Quality I | 4925 (15%) | 0 (0%) | 62720 (13%) |
| FST, e / Quality II | 7708 (23%) | 28 (40%) | 62720 (13%) |
| FST, e / Quality III | 14267 (43%) | 39 (56%) | 62720 (13%) |
| FST, s / Quality I | 5808 (18%) | 0 (0%) | 99200 (20%) |
| FST, s / Quality II | 8501 (26%) | 20 (28%) | 99200 (20%) |
| FST, s / Quality III | 15627 (47%) | 32 (48%) | 99200 (20%) |
| FST, f / Quality I | 6488 (20%) | 0 (0%) | 134656 (28%) |
| FST, f / Quality II | 9225 (30%) | 32 (48%) | 134656 (28%) |
| FST, f / Quality III | 16396 (50%) | 43 (61%) | 134656 (28%) |

Table 4. Resources needed and performance for Low level vision Block matching system.

The parameters used to measure the resources for this technology are Logic Cells and Embedded DSPs. For this implementation, we have used different embedded NIOS II microprocessors (E, S and F, for "economic", "standard" and "full", each with different characteristics (González *et al*., 2011)), and for each of these microprocessors we have run different partitions of the code, distinguishing between "Quality I, II or III" depending on the part of the code that runs on the microprocessor itself and the part that is accelerated in hardware using the C2H compiler (Altera, 2011).

For the multimodal sensor, Table 5 summarises the frame rates obtained at several resolutions.

|  | Low-level Vision Stage (Optical flow) | Low-level Vision Stage (Orthogonal Variant Moments) | COMPLETE Mid-level and Low-level Vision |
|---|---|---|---|
| Resolution 120x96 | 174 frames/sec | 395 frames/sec | 174 frames/sec |
| Resolution 320x240 | 26 frames/sec | 59 frames/sec | 26 frames/sec |
| Resolution 640x480 | 14 frames/sec | 28 frames/sec | 14 frames/sec |
| Throughput | 2000 Kpixels/sec | 4546 Kpixels/sec | 2000 Kpixels/sec |

Table 5. Throughput in terms of Kpps and fps for the embedded sensor.

For the block matching system, it is also possible to reach real-time performance (see Section 3.3 and Table 6). Its throughput ranges from 12 Megapixels per second (Mpps) with basic quality (most of the source code running on the NIOS II microprocessor) to 30 Mpps (most of the source code accelerated with the C-to-hardware compiler), which means between 40 and 98 fps at 640x480 resolution.
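The fps range quoted for the block matching system follows directly from dividing the pixel throughput by the frame size; a minimal sketch (the function name is ours):

```python
def fps_at_vga(pixels_per_second: float) -> float:
    """Frame rate at 640x480 resolution for a given pixel throughput."""
    return pixels_per_second / (640 * 480)

print(round(fps_at_vga(12e6)))  # 39 fps: basic quality, mostly software
print(round(fps_at_vga(30e6)))  # 98 fps: mostly hardware-accelerated
```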

Table 6. Throughput in terms of Kpps and fps for the block matching technique.

**6. Acknowledgment**

This work is partially supported by the Spanish project TIN 2008-00508/MIC. The authors want to thank Prof. Alan Johnston and Dr. Jason Dale from University College London (UK) for their great help with the Multichannel Gradient Model (McGM), explained partially in this chapter.

**7. References**

Accame M., De Natale F. G. B., Giusto D. (1998). High Performance Hierarchical Block-based Motion Estimation for Real-Time Video Coding, *Real-Time Imaging*, 4, pp. 67-79.

Adelson E. H., Bergen J. R. (1985). Spatiotemporal Energy Models for the Perception of Motion, *Journal of the Optical Society of America A*, 2 (2), pp. 284-299.

Albright T. D. (1993). Cortical Processing of Visual Motion, In Miles & Wallman (eds.), *Visual Motion and its Role in the Stabilisation of Gaze*, pp. 177-201.

AlphaData RC1000 product (2006). http://www.alpha-data.com/adc-rc1000.html.

Altera (2011). *C to Hardware*.

Anandan P., Bergen J. R., Hanna K. J., Hingorani R. (1993). Hierarchical Model-based Motion Estimation, In M. I. Sezan and R. L. Lagendijk (eds.), *Motion Analysis and Image Sequence Processing*, Kluwer Academic Publishers.

Anderson A., McOwan P. W. (2003). Humans deceived by predatory stealth strategy camouflaging motion, *Proceedings of the Royal Society*, B 720, pp. 18-20.

Arias-Estrada M., Tremblay M., Poussart D. (1996). A Focal Plane Architecture for Motion Computation, *Real-Time Imaging*, 2, pp. 351-360.

Arnspang J. (1993). Motion Constraint Equations Based on Constant Image Irradiance, *Image and Vision Computing*, 11 (9), pp. 577-587.

Baker S., Matthews I. (2004). Lucas-Kanade 20 Years On: A Unifying Framework, *International Journal of Computer Vision*, 56 (3), pp. 221-255.

Beare R., Bouzerdoum A. (1999). Biologically inspired Local Motion Detector Architecture, *Journal of the Optical Society of America A*, 16 (9), pp. 2059-2068.

Bergen J. R., Burt P. J. (1992). A Three-Frame Algorithm for Estimating Two-Component Image Motion, *IEEE Trans. on Pattern Analysis and Machine Intelligence*, 14 (9), pp. 886-895.

Campani M., Verri A. (1992). Motion Analysis from First-Order Properties of Optical Flow, *CVGIP: Image Understanding*, 56 (1), pp. 90-107.

Johnston A., Clifford C. W. (1994). A Unified Account of Three Apparent Motion Illusions, *Vision Research*, vol. 35, pp. 1109-1123.

Johnston A., Clifford C. W. (1995). Perceived Motion of Contrast-modulated Gratings: Prediction of the McGM and the role of Full-Wave rectification, *Vision Research*, 35, pp. 1771-1783.

Johnston A., McOwan P. W., Benton C. P. (2003). Biological computation of image motion from flows over boundaries, *Journal of Physiology, Paris*, vol. 97, pp. 325-334.

Mikami A., Newsome W. T., Wurtz R. H. (1986). Motion selectivity in macaque visual cortex. I. Mechanisms of direction and speed selectivity in extrastriate area MT, *J. Neurophysiol.*, vol. 55, pp. 1308-1327.

Teh C.-H., Chin R. T. (1988). On image analysis by the methods of moments, *IEEE Trans. Pattern Anal. Mach. Intelligence*, vol. 10 (4), pp. 496-513.

Wee C.-Y., Paramesran R., Takeda F. (2004). New computational methods for full and subset Zernike moments, *Inform. Sci.*, vol. 159 (3-4), pp. 203-220.
