**2.3 The visual navigation**

Visual navigation uses imaging sensors to accomplish the navigation task. In this chapter, visual navigation means applying visual sensors and the corresponding algorithms to an autonomous robot vehicle or an unmanned aerial vehicle for navigation purposes. In many cases, visual navigation serves as a direction-guidance or obstacle-avoidance device. Without loss of generality, the imaging sensors used in visual navigation include the visible light camera, the infrared or near-infrared camera, the multispectral camera, the laser imaging camera [35], and even the ghost imaging (i.e., quantum imaging) camera [36].

The imaging system can be realized as a monocular vision system [37], a binocular vision system, a multi-sensor vision system, or a depth vision system [38]. In engineering practice, the narrow-field camera, the wide-field camera, and the panoramic camera can all be used. **Table 4** presents the popular visual navigation techniques. Different from the traditional navigation methods, visual navigation captures 2D image data; thus, image processing and pattern recognition techniques can be used for its navigation information analysis and estimation. For example, region-based matching, feature-based matching (point, line, or region features), and semantics-based matching can all be developed [39].
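The feature-based matching mentioned above can be illustrated with a minimal sketch: descriptors extracted from two images are paired by nearest-neighbour distance, and an ambiguity (ratio) test discards unreliable pairs. This is only an illustrative toy with synthetic data; the helper name `match_features` and the descriptor dimensions are made up here and do not come from any mission software.

```python
# Illustrative sketch of feature-based matching (hypothetical data):
# descriptors from two images are compared by nearest-neighbour search,
# and a ratio test rejects ambiguous correspondences.
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j) where desc_a[i] matches desc_b[j]."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy example: 8-dimensional descriptors; desc_a holds noisy copies of
# two rows of desc_b, so the expected matches are (0, 2) and (1, 0).
rng = np.random.default_rng(0)
desc_b = rng.normal(size=(5, 8))
desc_a = desc_b[[2, 0]] + rng.normal(scale=0.01, size=(2, 8))
print(match_features(desc_a, desc_b))  # [(0, 2), (1, 0)]
```

In a real pipeline the descriptors would come from a detector such as a corner or blob extractor; only the matching step is sketched here.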

Clearly, visual navigation has many application advantages. First, the imaging data contain abundant detail, they are easy for people to understand, and their processing results are intuitive. Second, the imaging sensor can work in a flexible mode, either active or passive. The active mode means the imaging system projects light onto the observed target [40] to improve its visibility; the projected light can be visible light, infrared light, or even a laser in certain spectral bands. The passive mode means the imaging system projects no light into the exterior environment. Third, the navigation accuracy is high: in many cases, image-based navigation can reach centimeter-level precision or better. This excellent performance allows it to be used in some complex terrain environments. Obviously, visual navigation also has some drawbacks. For example, its data processing load is very large. Compared with the data processing of inertial navigation, the computation of image processing algorithms is really complex; as a result, visual navigation needs a processing unit with high data processing performance and large power consumption. Another problem is that the visual sensor is sensitive to the environmental light [41]; all kinds of atmospheric phenomena can create great negative influences on its processing result.


| No. | The name | The basic principle | The application |
|---|---|---|---|
| 1 | The sextant | The angle between any two typical objects can be used as a navigation reference | The celestial sextant instrument |
| 2 | The celestial sensor | The spatial positions of some planets can be used as a navigation reference | The star sensors, such as the X-ray pulsar sensor, the asteroid sensor, the Mars sensor, the sun sensor, and the planetary photographic techniques, and so on |
| 3 | The horizon sensor | The horizons of some planets can be used as a navigation reference | The Earth horizon detection device, the Mars horizon detection device, and so on |

#### **Table 3.**

*A kind of classification of the celestial navigation technique.*

| No. | The name | The basic principle | The application |
|---|---|---|---|
| 1 | The map-based navigation | The digital map construction and the feature matching | The navigation method in a known environment; it can be implemented by the absolute positioning and the relative positioning |
| 2 | The optical flow-based navigation | The feature detection and the motion information estimation | The translation motion estimation method and the rotation motion estimation method |
| 3 | The appearance-based navigation | The image feature matching | The terrain matching and the landmark matching |
| 4 | The integrated visual navigation | The information fusion of different visual navigation techniques | The visible light camera and the infrared camera-based navigation, the visible light camera and the laser radar-based navigation, and the visible light camera and the depth camera-based navigation |

#### **Table 4.**

*A kind of classification of the visual navigation technique.*
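The translation and rotation motion estimation at the heart of optical-flow-based visual navigation can be illustrated with a minimal sketch: given matched image points before and after camera motion, a least-squares rigid fit (the Kabsch/Procrustes solution) recovers the 2D rotation and translation. This is a generic textbook construction shown for illustration, not a specific rover or prober algorithm; the function name and test data are invented here.

```python
# Illustrative sketch: recover a 2D rigid motion (rotation R, translation t)
# from matched point sets via the least-squares Kabsch/Procrustes solution.
import numpy as np

def estimate_rigid_motion(p, q):
    """Find rotation R and translation t such that R @ p_i + t ~= q_i."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)   # centroids
    H = (p - cp).T @ (q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Toy data: rotate the unit square by 10 degrees and shift it.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
q = p @ R_true.T + np.array([0.3, -0.2])
R, t = estimate_rigid_motion(p, q)
print(np.allclose(R, R_true), np.allclose(t, [0.3, -0.2]))  # True True
```

With noisy flow vectors the same fit gives the best rigid motion in the least-squares sense, which is why this decomposition underlies the translation and rotation estimation methods listed above.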

*Autonomous Navigation for Mars Exploration DOI: http://dx.doi.org/10.5772/intechopen.92093*



**2.4 The integrated navigation**

| No. | The integrated navigation method | The application |
|---|---|---|
| 1 | The inertial navigation and the celestial navigation | Three accelerometers, three gyroscopes, and the star sensors (the observation from orbit) |
| 2 | The inertial navigation and the visual navigation | Three accelerometers, three gyroscopes, the visible light camera, and the laser radar |
| 3 | The visual navigation and the celestial navigation | The visible light camera, the laser radar, and the star recognition sensor (the observation from ground) |
| 4 | The inertial navigation, the celestial navigation, and the visual navigation | Three accelerometers, three gyroscopes, the star sensors, and the celestial body feature recognition sensor |

#### **Table 5.**

*A kind of classification of the integrated navigation technique.*


The integrated navigation uses a combination of two or more of the basic navigation methods above to realize high-precision direction guidance and attitude estimation of the carrier. Its key techniques include computer-based feature extraction, information fusion, and time synchronization [42]. Integrated navigation can make up for the disadvantages of any single navigation method, realize seamless navigation, and achieve an information-redundant design of the navigation system [43]; its system reliability and navigation accuracy can thus be improved a lot. Currently, the most successful integrated navigation is the inertial plus Global Positioning System (GPS)-based navigation method, that is, the combination of inertial navigation and satellite navigation. Regarding the autonomous navigation techniques for deep space exploration, **Table 5** presents the popular integrated navigation methods, and any two of the basic navigation methods can be combined for the integrated navigation application. Integrated navigation also has some drawbacks: for example, its complex system design, big size, and high power consumption are all problems for the spacecraft application.
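The information fusion mentioned above can be illustrated by the simplest possible example: a one-dimensional Kalman filter that dead-reckons position from an inertial velocity and then corrects it with an absolute fix from a second sensor (for instance, a visual landmark measurement). This is only a toy sketch under assumed noise variances, not a spacecraft filter design; all names and numbers are invented for illustration.

```python
# Illustrative sketch of sensor fusion for integrated navigation:
# a 1-D Kalman filter blends inertial dead reckoning with absolute fixes.

def kalman_step(x, P, v, dt, z, q=0.01, r=0.25):
    """One predict/update cycle for position x with variance P.
    v: velocity from the inertial unit; z: absolute position fix;
    q, r: assumed process and measurement noise variances."""
    # Predict: dead-reckon with the inertial velocity; uncertainty grows.
    x_pred = x + v * dt
    P_pred = P + q
    # Update: blend in the absolute fix, weighted by the Kalman gain.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                 # initial position estimate and variance
fixes = [1.05, 2.02, 2.97]      # noisy absolute fixes while moving at v = 1
for z in fixes:
    x, P = kalman_step(x, P, v=1.0, dt=1.0, z=z)
# The fused estimate tracks the true trajectory while the variance shrinks.
print(round(x, 2), P < 1.0)     # 3.01 True
```

The same predict/correct structure, in higher dimensions and with attitude states, is what the feature extraction, fusion, and time synchronization techniques in the text serve.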

**2.5 Some other autonomous navigation methods**

Some other autonomous navigation methods can also be used for Mars exploration. First, radio-based navigation [44] is one choice. Traditionally, radio-based navigation uses satellites or beacons on the ground to provide the prober with guidance information; clearly, this method is not an autonomous technique. However, if we regard the satellites and the prober as a whole, meaning the Mars exploration prober system includes both the lander and the navigation satellites, radio-based navigation can be considered. Second, the bionic navigation method [45] is another choice. Bionic navigation imitates animals' sensory processing to accomplish navigation. The stereoscopic vision, the auditory




*Mars Exploration - A Step Forward*


