**5. Conclusion**

This chapter has addressed the problem of implementing real-time motion estimation in embedded systems. Representative techniques and systems from several families have been described, each contributing solutions and approximations to a question that remains open due to the ill-posed nature of the motion constraint equation.
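The ill-posedness mentioned above can be stated concretely. Under the standard brightness constancy assumption, the motion constraint equation relates the spatial and temporal image derivatives to the unknown flow components:

```latex
I_x u + I_y v + I_t = 0
```

This is a single linear equation in the two unknowns $(u, v)$ at each pixel, so only the flow component normal to the local gradient can be recovered (the aperture problem); additional assumptions, such as smoothness in gradient-based models or local translation in block matching, are required to obtain a unique solution.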

An overview has also been given of different implementations capable of computing real-time motion estimation in embedded systems, both those delivering only low-level primitives (gradient-based model and block matching, respectively) and those delivering mid-level vision primitives (optical flow combined with orthogonal variant moments). Table 7 summarizes the implemented methods with respect to the machine vision domain, the final performance obtained, the robustness of the implementation, and the complexity of the final system.
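As an illustration of one of the low-level primitives mentioned above, block matching can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD) between a reference block and candidate blocks in the next frame. This is a generic sketch, not the chapter's hardware implementation; the function name and the `block`/`search` parameters are illustrative.

```python
import numpy as np

def block_match(prev, curr, y, x, block=8, search=4):
    """Exhaustive-search block matching: return the displacement (dy, dx)
    that minimizes the SAD between the block at (y, x) in `prev` and
    candidate blocks in `curr` within a +/- `search` window."""
    ref = prev[y:y + block, x:x + block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            # Skip candidates that fall outside the frame.
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                continue
            cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

The exhaustive search is what makes this family attractive for hardware: the SAD computations for all candidates are independent and map naturally onto parallel datapaths.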

The systems described are scalable and modular: the visual primitives involved (the number of moments) can be chosen, as well as the bit-width of the filters and computations in the low-level vision stage (optical flow). The architecture can process different visual channels concurrently, so the system described opens the door to the on-chip implementation of complex bio-inspired algorithms.

The implementations shown offer robustness and real-time performance for applications in which the luminance varies significantly or the environment is noisy, such as industrial settings, sports complexes, and animal or robotic tracking, among others.


