**2. Literature review**

Many efforts have been made to improve the positioning accuracy of robotic systems in the past few decades. An effective way to reduce inaccuracy is to measure it with sensors and compensate for it through a feedback control loop. Many metrology techniques have been investigated and applied to different kinds of data capture. Among them, three classes of methods have gained the most popularity in recent research, namely, vision-based methods, tactile-based methods, and vision-tactile integrated methods. In this section, we briefly review those approaches and their applications.

*Role of Uncertainty in Model Development and Control Design for a Manufacturing Process DOI: http://dx.doi.org/10.5772/intechopen.104780*

### **2.1 Vision-based methods**

Vision-based methods have been widely developed in recent years and used to determine the position and orientation of target objects in robotic systems. Zhu et al. discussed Abbe errors in a 2D vision system for robotic drilling, where four laser displacement sensors were used to improve the accuracy of the vision-based measurement system [10]. Liu et al. proposed a visual servoing method for positioning in aircraft digital assembly [11, 12]. With measurements from two CCD cameras and four distance sensors, the proposed method can accurately align the positioner's ball-socket with the ball-head fixed on the aircraft structure in finite time.

In addition to the aforementioned applications, many contributions have been made to vision-based methods in robotic manipulation. However, most of those studies focused on the grasp success rate of the end-effector without sufficient analysis of positioning accuracy. Du et al. published a study on robotic grasp detection by visually localizing the object and estimating its pose [13]. Avigal et al. proposed a 6-DoF grasp planning method using fast 3D reconstruction and a grasp quality convolutional neural network (CNN) [14]. Wu et al. proposed an end-to-end solution for visual learning [15].

### **2.2 Tactile-based methods**

In addition, with the development of tactile sensors in the last few years, tactile-based methods have attracted growing attention in the robotic positioning domain. Tactile sensors can reveal the contact state between the end-effector and the object in robotic manipulation, and this contact state can be used to determine an object's relative orientation and position with respect to the gripper. Li et al. designed a GelSight tactile sensor and generated tactile maps for different poses of a small object in the gripper [16]. They studied localization and manipulation control for a specific USB connector insertion task. Dong et al. studied a tactile-based insertion task for dense box packing with two GelSlim fingers, which were used to estimate the object's pose error based on a neural network [17]. Furthermore, Hogan et al. developed a tactile-based feedback loop to control a dual-palm robotic system for dexterous manipulation [18]. These tactile-based methods can only realize relatively accurate positioning of the tool with respect to the end-effector; the positioning of the robot manipulator itself is not addressed.

### **2.3 Vision-tactile integrated methods**

Vision sensing can provide rich environmental information over a wide measurement range, while tactile sensing can provide more detailed local information in robotic manipulation. Therefore, vision–tactile integrated methods have emerged. Fazeli et al. proposed a hierarchical learning method for complex manipulation skills with multisensory fusion of vision and touch [19]. Gregorio et al. developed a manipulation system for automatic electric wire insertion performed by an industrial robot with a camera and tactile sensors mounted on a commercial gripper [20].

According to the above analysis, all of these integrated sensory applications have achieved accurate robotic manipulation in tasks such as insertion, and their performance has been verified in experiments. However, the error space in those references is usually small, and none of them has considered all translational and rotational errors in 6 DoFs. Moreover, tactile-based or vision-tactile integrated methods increase the cost of mass manufacturing, because tactile sensors are more expensive to purchase and maintain than visual sensors. Motivated by this, our work explores the capability of vision-based methods and designs a new method to improve positioning accuracy in the robot manipulation system.
