*Geographic Information Systems in Geospatial Intelligence*

(HMM). The Viterbi algorithm decodes the HMM and selects the most likely map segment. The projection of the vehicle's state onto the selected map segment is used as a supplementary position update to the integration filter. The solution framework has been developed and tested on a land-based vehicular platform. The results show reliable mitigation of biased GNSS positions and accurate map segment selection in complex intersections, forks, and joins. In contrast to common existing adaptive Kalman filter methods, this solution does not depend on redundant pseudoranges and residuals, which makes it suitable for use with arbitrary noise characteristics and varied integration schemes.
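
Viterbi decoding over candidate map segments can be sketched as follows. This is a minimal illustration, not the authors' implementation: the segment set, transition model, and emission model (`trans_prob`, `emis_prob`) are hypothetical placeholders supplied by the caller, and all probabilities are assumed nonzero.

```python
import math

def viterbi_segments(observations, segments, trans_prob, emis_prob):
    """Select the most likely sequence of map segments (hidden states)
    for a sequence of position observations, via Viterbi decoding.
    Probabilities are assumed strictly positive (log-domain scoring)."""
    V = [{s: math.log(emis_prob(observations[0], s)) for s in segments}]
    back = [{}]
    for obs in observations[1:]:
        scores, ptrs = {}, {}
        for s in segments:
            # best predecessor segment for state s
            prev, score = max(
                ((p, V[-1][p] + math.log(trans_prob(p, s))) for p in segments),
                key=lambda t: t[1])
            scores[s] = score + math.log(emis_prob(obs, s))
            ptrs[s] = prev
        V.append(scores)
        back.append(ptrs)
    # backtrack from the best final state
    state = max(V[-1], key=V[-1].get)
    path = [state]
    for ptrs in reversed(back[1:]):
        state = ptrs[state]
        path.append(state)
    return path[::-1]
```

The last decoded segment is the one onto which the vehicle state would be projected for the supplementary position update.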

**4.3 Navigation based on a compass-based navigation control law**

Urban environments present a challenging scenario for autonomous driving [6]. The proposed solution enables autonomous navigation of urban roadways with a minimal a priori map or GPS. Localization is achieved by a Kalman filter extended with odometry, compass, and sparse landmark measurement updates. Navigation is accomplished by a compass-based navigation control law. Experiments validate the simulated results and demonstrate that, for given conditions, an expected range can be found for a given success rate.
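
The localization scheme above can be sketched as an odometry prediction step followed by a scalar Kalman correction of the heading from the compass; landmark updates would follow the same predict/correct pattern. This is a simplified sketch under assumed function names, not the paper's filter.

```python
import math

def predict(pose, d, dtheta):
    # odometry prediction: move distance d along the current heading,
    # then rotate by dtheta
    x, y, th = pose
    return (x + d * math.cos(th), y + d * math.sin(th), th + dtheta)

def fuse_heading(th, var, z, z_var):
    # scalar Kalman update of the heading estimate (variance var)
    # with a compass measurement z (variance z_var)
    k = var / (var + z_var)                  # Kalman gain
    return th + k * (z - th), (1 - k) * var  # corrected mean and variance
```

For example, fusing a heading estimate of 0 rad (variance 1) with a compass reading of 0.2 rad (variance 1) moves the estimate halfway, to 0.1 rad, and halves the variance.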

The architecture contains steering and speed controllers, an object tracker, a path generator, a pose estimator, and a navigation algorithm, using sensors that allow real-time control. High-level localization is provided by the pose estimator, which uses only odometry measurements, compass measurements, and sparse map-based measurements. The sparse map-based measurements, generated by computer vision methods, compare raw camera images to landmark images contained within a sparse map. The roadway scene includes lane line markings, road signs, traffic lights, and other sensed objects. The scene information and the inertial pose estimate are fed into the navigation algorithm to determine the best route to the target. This navigation scheme is provided by a compass-based navigation control law.
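
A compass-based navigation control law of this kind typically steers in proportion to the error between the compass heading and the bearing to the next waypoint. The sketch below, with assumed gain and saturation parameters, illustrates the idea; it is not the control law from [6].

```python
import math

def steering_command(pose, goal, gain=1.0, max_steer=0.5):
    """Steer proportionally to the heading error between the vehicle's
    compass heading and the bearing to the goal, saturated at max_steer."""
    x, y, heading = pose
    bearing = math.atan2(goal[1] - y, goal[0] - x)
    # wrap the heading error into [-pi, pi]
    err = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    return max(-max_steer, min(max_steer, gain * err))
```

A vehicle already pointing at the goal gets a zero command; a 45° heading error saturates at the steering limit.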

**5. Space navigation systems**

Common navigation technologies assume navigation on a two-dimensional (2D), flat land area. Navigation in three dimensions (3D) is much more complicated, requiring, at a minimum, new technologies to complement the existing 2D navigation technologies.

**5.1 Autonomous navigation of micro aerial vehicles**

In this section we present a low-computational-cost state estimation method enabling autonomous flight of micro aerial vehicles [7]. All estimation and control tasks are solved on board, in real time, on a simple computational unit. The state estimator fuses observations from an inertial measurement unit, an optical-flow smart camera, and a time-of-flight range sensor. The smart camera provides optical flow measurements and odometry estimation, avoiding the need for onboard image processing and remaining usable during flight times of several minutes. Based on the estimated vehicle state, a nonlinear controller operating on the special Euclidean group SE(3) can drive a quadrotor platform in 3D space while guaranteeing asymptotic stability of the 3D position and heading. The approach is validated through simulations and experimental results.
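
One common way to fuse these three sensors, shown here as a hedged sketch rather than the estimator from [7], is a complementary-filter step: integrate the IMU acceleration for a high-rate but drifting velocity, then correct velocity with the metric optical flow (flow rate times height above ground) and altitude with the time-of-flight range. The function name and blending factor `alpha` are assumptions for illustration.

```python
def fuse_state(v_est, h_est, accel, flow, tof, dt, alpha=0.98):
    """One complementary-filter fusion step.
    v_est, h_est : current velocity (m/s) and altitude (m) estimates
    accel        : IMU horizontal acceleration (m/s^2)
    flow         : optical flow rate (rad/s) from a downward camera
    tof          : time-of-flight range measurement (m)
    """
    v_pred = v_est + accel * dt                # IMU prediction (drifts)
    v_new = alpha * v_pred + (1 - alpha) * flow * h_est  # flow correction
    h_new = alpha * h_est + (1 - alpha) * tof            # range correction
    return v_new, h_new
```

The blend keeps the IMU's bandwidth while the flow and range measurements bound the drift, which is what permits flight times of several minutes on a simple computational unit.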

**5.2 Vision-based navigation for micro helicopters**

Weiss [8] developed a vision-based navigation system for micro helicopters operating in large and unknown environments. It is based on vision-based methods and a sensor fusion approach for state estimation and self-calibration of the sensors, accounting for their differing availability during flight. This is enabled by an onboard camera, a real-time motion sensor, and vision algorithms. The onboard camera and a multi-sensor fusion framework together estimate, at the same time, the vehicle's pose and the inter-sensor calibration for continuous operation. The system runs in time linear in the number of keyframes captured in a previously visited area. To maintain constant computational complexity, improve performance, and increase scalability and reliability, the computationally expensive vision part is replaced by the final calculated camera pose.
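
The linear-time behavior comes from scanning the stored keyframes of a previously visited area; once the best match is found, only its final calculated camera pose is reused, replacing the expensive vision pipeline. The sketch below uses a hypothetical keyframe record (`descriptor`, `pose`) to illustrate that lookup; it is not Weiss's implementation.

```python
def closest_keyframe(query_descriptor, keyframes):
    """Linear scan over stored keyframes: cost grows linearly with the
    number of keyframes captured in the previously visited area.
    Returns the stored pose of the best-matching keyframe."""
    best, best_dist = None, float("inf")
    for kf in keyframes:
        # squared Euclidean distance between appearance descriptors
        d = sum((a - b) ** 2 for a, b in zip(query_descriptor, kf["descriptor"]))
        if d < best_dist:
            best, best_dist = kf, d
    return best["pose"] if best else None
```

Caching the matched keyframe's pose this way keeps the per-frame cost of revisits constant, since the full vision computation is skipped.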
