**1. Introduction**



Multi-sensory information is a generic concept, since such information is of concern in all robotic systems where information processing is central. In such systems, redundant sensors are necessary to enhance the accuracy of the resulting action, where not only the number of sensors but also their resolution can vary, because the sensors deliver information with different sampling times. The sampling can be regular, with a constant sampling rate, or irregular. Different sensors can have different merits depending on their individual operating conditions, and such diverse information can be a valuable gain for accurate as well as reliable autonomous robot manipulation via its dynamics and kinematics. The challenge in this case is the unification of the common information from the various sensors in such a way that the resulting information is enhanced for the desired action. One might note that such unification is a challenge in the sense that the common information is, in general, in different formats and of different sizes, with different merits. The different qualities may involve different sensor accuracies due to various random measurement errors.

Autonomous robotics constitutes an important branch of robotics, and autonomous robotics research is widely reported in the literature, e.g. (Oriolio, Ulivi et al. 1998; Beetz, Arbuckle et al. 2001; Wang and Liu 2004). In this branch of robotics, continuous information from the environment is obtained by sensors and processed in real time. Accurate and reliable information driving the robot is essential for safe navigation along a trajectory that is in general not prescribed in advance. The reliability of this information is achieved by means of both physical and analytical redundancy of the sensors. The accuracy is obtained by coordinating the sensory information from the redundant sensors in a multisensor system.
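As a minimal illustration of such coordination of redundant sensors with different accuracies (a generic sketch, not taken from this chapter), two independent measurements of the same quantity can be combined by inverse-variance weighting, so that the more accurate sensor dominates the fused estimate; the sensor values and variances below are hypothetical.

```python
import numpy as np

def fuse(measurements, variances):
    """Minimum-variance fusion of independent measurements of one quantity.

    Each measurement is weighted by the inverse of its error variance, so
    a more accurate sensor contributes more to the fused estimate, and the
    fused variance is smaller than that of any single sensor.
    """
    m = np.asarray(measurements, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)       # normalized inverse-variance weights
    fused = np.sum(w * m)                 # fused estimate
    fused_var = 1.0 / np.sum(1.0 / v)     # error variance of the fused estimate
    return fused, fused_var

# Two redundant range sensors observing the same distance (metres):
# a precise one (variance 0.01) and a coarse one (variance 0.09).
est, var = fuse([2.02, 2.30], [0.01, 0.09])
print(est, var)   # the fused estimate lies close to the precise sensor
```

This is the standard static special case of the minimum-variance optimality criterion used later in the chapter for the multiresolutional fusion.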
This coordination is carried out by combining information from different sensors into an ultimate measurement outcome, and this is generally termed sensor fusion. Since data are the basic elements of information, sometimes, to emphasize this point, the fusion process is articulated with data as *data fusion*, where *sensor fusion* is considered a synonym. Some examples are as follows.

"Data fusion is the process by which data from a multitude of sensors is used to yield an optimal estimate of a specified state vector pertaining to the observed system." (Richardson and Marsh 1988)

"Data fusion deals with the synergistic combination of information made available by various knowledge sources such as sensors, in order to provide a better understanding of a given scene." (Abidi and Gonzales 1992)



"The problem of sensor fusion is the problem of combining multiple measurements from sensors into a single measurement of the sensed object or attribute, called the *parameter*." (McKendall and Mintz 1992; Hsin and Li 2006)

The ultimate aim of information processing as fusion is to enable the system to estimate the state of the environment, which in the present case is the state of the robot's environment. A similar study dealing with this challenge, namely a multiresolutional filter application for spatial information fusion in robot navigation, has been reported earlier (Ciftcioglu 2008), where data fusion is carried out using several data sets obtained from wavelet decomposition rather than from individual sensors. In contrast with that earlier work, the present work considers the fusion of sensory information from different sensors in a multi-sensor environment. Sensors generally have different characteristics with different merits. For instance, a sensor can have a wide frequency range with a relatively poor signal-to-noise ratio, or vice versa; the response time of the sensor determines the frequency range. Moreover, sensors can operate in a synchronized or non-synchronized manner with respect to the sampling intervals at which they deliver their measurement outcomes. Such concerns can be categorized as matters of *sensor management*, although sensor management is more related to the positioning of the sensors in a measurement system. In the present work, data fusion, sensor fusion and sensor management issues are commonly referred to as sensor fusion. The novelty of the research is the enhanced estimation of spatial sensory information in autonomous robotics by means of multiresolutional levels of information with respect to the sampling time intervals of the different sensors. The coordinated outcome of such redundant information reflects the various merits of these sensors, yielding an enhanced estimate of position and, more generally, of the state of the environment. To consider a general case, the sensors are operated independently, without a common synchronizing sampling command, for instance.

The multiresolutional information is obtained from sensors having different resolutions, and this multiple information is synergistically combined by means of the inverse wavelet transformation developed for this purpose in this work. Although wavelet-based information fusion is used in different applications (Hong 1993; Hsin and Li 2006), its application in robotics is not common in the literature. One of the peculiarities of the research is the application of wavelet-based dynamic filtering with the concept of multiresolution, as the multiresolution concept is closely tied to the discrete wavelet transform. The multiresolutional dynamic filtering is central to the study, together with Kalman filtering, which has desirable features for fusion. Therefore the vector wavelet decomposition is explained in some detail. For the information fusion process, extended Kalman filtering is used; it is also explained in some detail, emphasizing its central role in the fusion process. In an autonomous robot trajectory, the angular velocity is not a measurable quantity and has to be estimated from the measurable state variables so that the obstacle avoidance problem is taken care of. Real-time angular velocity estimation is a critical task in autonomous robotics, and from this viewpoint the multiresolutional sensor-based spatial information fusion process by Kalman filtering is particularly desirable for enhanced robot navigation performance. In particular, the multiresolutional sensors provide diversity in the information subject to the fusion process; in this way, information of different quality, with respective merits, is synergistically combined.

The motivation of this research is the use of a vision robot for an architectural design and the architectural artifacts therein from the viewpoint of human perception, namely to investigate the perceptual variations in human observation without bias. Similar perception-centered research carried out by a human can have an inherent bias due to the interests and background of that human. In this respect, a robot can be viewed as an impartial observer with emulated human perception. Therefore, in this research, the sensory information is treated as the robot's visual perception, as an emulation of that of a human. A theory of human perception from the viewpoint of perception quantification and computation has been presented earlier (Ciftcioglu, Bittermann et al. 2007; Ciftcioglu 2008). The robot can be a physical artifact autonomously wandering in an architectural environment, or alternatively a virtual robot wandering in a virtual-reality environment. Both cases are equally valid utilization options in the realm of perceptual robotics in architecture. Apart from our interest in the human perception of architectural artifacts as motivation, the present research is equally of interest to adjacent robotics research areas such as social robots, which are closely related to perception robots. Namely, thanks to the advancements in robotics, today social robots are more and more penetrating social life as an aid to many human endeavors. With the rapid progress in robotics and the evolution of hardware and software systems, many advanced social, service and surveillance mobile robots have come into realization in recent decades; see, for instance, *http://spectrum.ieee.org/robotics*. One of the essential merits of such robots is the ability to detect and track people in view in real time, for example in a care center. A social robot should be able to keep an eye on the persons in view and keep tracking the persons of concern for probable interaction (Bellotto and Hu 2009). A service robot should be aware of the people around it and track a person of concern to provide useful services. A surveillance robot can monitor persons in the scene for the identification of probable misbehavior.

For such tasks, detecting and tracking multiple persons in often crowded and cluttered scenes, in the public domain or in a working environment, is needed. In all these challenging scenarios, perceptual mobile robotics can make a substantial contribution to the functionality of this special variety of robots in view of two main aspects. One aspect is vision, which is not the subject matter of this work. The other aspect is sensor-data fusion for effective information processing, which is the subject matter of this research, where Kalman filtering is the main machinery, as it is a common approach in mobile robotics for optimal information processing.

The further organization of the present work is as follows. After the description of Kalman filtering and the wavelet transform in some detail, a detailed description of the optimal fusion of information from different multiresolutional levels is presented. The optimality is based on minimum fusion estimation error variance. Finally, the autonomous robot implementation is described with computer experiments, whose results are illustrated by means of both true and estimated trajectories, demonstrating effective multisensor-based, multiresolutional fusion. The work is concluded with a brief discussion and conclusions.

**2. Kalman filter** 

**2.1 Description of the system dynamics** 

Kalman filtering theory and its applications are well treated in the literature (Jazwinski 1970; Gelb 1974; Kailath 1981; Maybeck 1982; Brown 1983; Sorenson 1985; Mendel 1987; Grewal and Andrews 2001; Simon 2006). In order to apply Kalman filtering to a robot movement, the system dynamics must be described by a set of differential equations in state-space form, in general.
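To make the state-space setting concrete, a generic discrete-time sketch can be given (this is not the chapter's own robot model; the constant-velocity model, noise covariances and numbers below are hypothetical). The state is x = [position, velocity], only the position is measured, and a standard linear Kalman filter performs the predict/update cycle.

```python
import numpy as np

# Discrete-time state-space model for a robot moving at roughly constant
# velocity along one axis; state x = [position, velocity], position measured.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix
H = np.array([[1.0, 0.0]])              # measurement matrix
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.04]])                  # measurement noise covariance (std 0.2)

def kalman_step(x, P, z):
    """One predict/update cycle of the standard linear Kalman filter."""
    # Predict: propagate the state and its covariance through the model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z.
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate a true trajectory and filter its noisy position measurements.
rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])           # true initial position and velocity
x_est, P = np.zeros(2), np.eye(2)       # filter starts with no knowledge
for _ in range(100):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, 0.2, size=1)
    x_est, P = kalman_step(x_est, P, z)
```

The extended Kalman filter used later in the chapter follows the same cycle, with F and H replaced by Jacobians of nonlinear state and measurement functions evaluated at the current estimate.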
