**4. Experimental results analysis and discussion**

Usually, when a robotic arm moves to a specified position, it vibrates and shakes because of undesired external forces or loads. Therefore, upon reaching the working position, it often stops for a short time until the shaking ends and the arm becomes stable. In this study, the end effector of the robotic arm is measured in three dimensions during this standstill time, and the morphological information of the end effector is recorded and compared with the prebuilt database to find the point cloud closest to the current pose and obtain the initial transformation matrix. The position and orientation variations of the end effector are then calculated and displayed as error diagrams in the measurement software's graphical user interface (GUI), as shown in **Figure 32**.
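
The comparison against the prebuilt database amounts to a nearest-neighbor search over the stored descriptors. A minimal sketch of this lookup is given below, assuming the per-view descriptors are stored as fixed-length NumPy vectors alongside each template's view transform; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def find_initial_pose(query_descriptor, db_descriptors, db_transforms):
    """Return the view transform of the database template whose descriptor
    is closest (Euclidean distance) to the query descriptor.

    query_descriptor: (D,) descriptor of the measured point cloud
    db_descriptors:   (N, D) descriptors of the N template views
    db_transforms:    (N, 4, 4) transform associated with each view
    """
    distances = np.linalg.norm(db_descriptors - query_descriptor, axis=1)
    best = int(np.argmin(distances))
    # The matched view's transform serves as the initial transformation
    # matrix for the subsequent fine registration.
    return db_transforms[best], distances[best]
```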

### **4.1 Database generation**

Coarse registration requires that a database of templates for the objects to be matched is created before measurement. In this experiment, the 3D point cloud of the robot arm end effector at the working position was used as the model point cloud, a virtual camera recorded template point clouds from different viewing angles, and the regional surface area descriptors of these templates were calculated and stored in the database as the basis for subsequent comparison.

Step 1: Move the robot arm to the working position, as shown in **Figure 33**.

Step 2: Obtain the 3D reconstructed point cloud of the robotic arm's end effector with the scanning probe. **Figure 34** shows the measurement range of the scanning probe, **Figure 35** shows the image of the robotic arm captured by the camera, and **Figure 36** shows the raw data of the 3D reconstructed point cloud.

Step 3: Preprocess the point cloud data. **Figure 37** shows the point cloud after noise removal, and **Figure 38** shows the downsampled point cloud (a code sketch of this step appears after Step 5).

Step 4: Create a multiview template point cloud from the model point cloud, as shown in **Figure 39a–h**.

Step 5: Calculate the regional surface area descriptor (RSAD) of each template point cloud, as shown in **Figure 40a–h**.
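
As a concrete illustration of Step 3, the sketch below performs the two preprocessing operations with the Open3D library; the file name, voxel size, and outlier-filter parameters are assumptions for illustration, not the values used in the experiment.

```python
import open3d as o3d

# Load the raw reconstructed point cloud (file name is illustrative).
pcd = o3d.io.read_point_cloud("end_effector_raw.ply")

# Noise removal: drop points that lie far from their local neighborhood
# (statistical outlier filter; parameters are assumptions).
pcd_clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Downsampling: voxel grid for a uniform point density (voxel size assumed, in mm).
pcd_down = pcd_clean.voxel_down_sample(voxel_size=1.0)
```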

### **4.2 Experimental results and discussion**

To verify the actual performance of the developed method, a robotic arm with an end effector was repeatedly moved 100 times to observe the variations in its position and orientation. As shown in **Figure 41a** and **b**, the robotic arm moves repeatedly between position A and position B. In the test, the developed 3D scanner was integrated with the system to measure the 3D point cloud of the robotic arm's end effector whenever the arm reaches position B.

**Figure 32.** *Pose variation detection during robotic arm operation.*

**Figure 33.** *The robot arm is moved to the work position.*

**Figure 34.** *Measurement FOV of the probe.*

**Figure 35.** *Image of robot arm end effector captured by the measurement probe.*

**Figure 36.** *Reconstructed point cloud of robot arm end effector.*

**Figure 37.** *Point cloud after outlier removal.*


In the test, the 6DOF variation of the robotic arm's end effector is computed from the measured point cloud whenever the arm moves from position A to B and stops at position B. **Figure 42a** shows the image captured using the developed 3D scanner when the robotic arm reaches position B, and **Figure 42b** shows the original reconstructed point cloud. **Figure 43** shows the point-cloud registration results before and after alignment without referencing any target.
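
The 6DOF variation can be read directly off the 4 × 4 transformation matrix returned by the registration. A minimal sketch follows; the ZYX (yaw-pitch-roll) Euler convention is an assumption, since the text does not specify which convention is used.

```python
import numpy as np

def pose_variation(T):
    """Split a 4x4 registration transform into a translation vector and
    roll/pitch/yaw angles in degrees (ZYX Euler convention assumed)."""
    translation = T[:3, 3]               # x, y, z variation
    R = T[:3, :3]
    pitch = np.arcsin(-R[2, 0])          # rotation about the y-axis
    roll = np.arctan2(R[2, 1], R[2, 2])  # rotation about the x-axis
    yaw = np.arctan2(R[1, 0], R[0, 0])   # rotation about the z-axis
    return translation, np.degrees([roll, pitch, yaw])
```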

**Figures 44** and **45** show the dynamic variations in position and angular orientation, respectively, obtained over 100 tests. For the robot end effector, the averaged position variations along the x-, y-, and z-axes are 0.039 mm, 0.003 mm, and 0.005 mm, respectively, while the averaged orientation variations are 0.009°, 0.029°, and 0.009°, respectively. These results indicate that the tested robot end effector achieved micrometer-scale positioning accuracy even in the worst case.

**Figure 38.** *Point cloud after downsampling.*

**Figure 39.** *Multiview template point clouds.*

**Figure 40.** *RSAD of the corresponding template point cloud.*

**Figure 41.** *Different measured positions of the tested robotic arm with its end effector.*

The experimental results also show that the translational error along the x-axis and the rotational error about the y-axis may increase significantly with the operation time of the robotic arm. This finding allows inspection and maintenance of the robot to be scheduled in time to ensure robust manufacturing operation and avoid potentially catastrophic damage.

**Figure 42.** *(a) Image captured using the developed 3D scanner when the robotic arm reaches position B. (b) Reconstructed point cloud at position B.*

**Figure 43**. *Point-cloud registration before (left) and after (right) alignment without using any calibration artifact.*

**Figure 44.** *Dynamic variations in position of the robot end effector along the x-, y-, and z-axes.*

**Figure 45.** *Dynamic variations in angular orientation of the robot end effector about the x-, y-, and z-axes, defined as the roll, pitch, and yaw angular errors.*


### **4.3 Error analysis of robot arm pose variation**

The sources of error in the analysis of the robot's pose variation include the structured-light measurement probe, the pose detection algorithm, and the robotic arm itself. The positioning error of the robot arm is known from the technical specifications provided by the original manufacturer: its positioning repeatability is 25 μm. In contrast, the reconstruction error of the measurement probe and the error of the pose detection algorithm must be determined experimentally.

Ideal point cloud data are used as the benchmark to quantify how these error sources affect the measurement results. The principle is to transform known point cloud data through a known transformation matrix and then recover the pose variation using the proposed pose detection algorithm. The algorithm yields the transformation matrix between the transformed and the original point cloud data; comparing this matrix with the known transformation matrix quantifies the algorithm error. The error of the structured-light measurement probe is then obtained by repeatedly measuring the pose variation of the robot arm at a fixed position; the error remaining after deducting the algorithm error can be regarded as the error of the structured-light measurement probe.

The translation error $T_{err}$, defined as the norm of the difference between the translation vector of the known transformation matrix $T_{true}$ and the translation vector of the transformation matrix $T_{alg}$ obtained using the proposed pose detection algorithm [20], is expressed as:

$$T_{err} = \left\| T_{alg} - T_{true} \right\| \tag{9}$$
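
In code, Eq. (9) reduces to the Euclidean norm of the difference between the two translation vectors. A short sketch, assuming both transforms are 4 × 4 homogeneous matrices:

```python
import numpy as np

def translation_error(T_alg, T_true):
    """Translation error per Eq. (9): the norm of the difference between
    the translation vectors of the estimated and the known transform."""
    return np.linalg.norm(T_alg[:3, 3] - T_true[:3, 3])
```

Averaging this quantity over the 30 known transformations described below yields the reported mean algorithm error.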

The measured point cloud is transformed in 3D space using known transformation matrices to quantify the algorithm error, as shown in **Figure 46**.

**Figure 46.** *Original (red) and transformed (yellow) point clouds.*

The object measured is a 3D-printed hammer. The red point cloud is the original point cloud, while the yellow one is obtained by transforming the original in 3D space using 30 different known transformation matrices. **Figure 47** shows the translation errors obtained using the proposed pose detection algorithm. The average translation error over the 30 transformations is 0.007 mm.

System uncertainty of the 3D structured-light measurement probe causes the measurement results for the same object to deviate. To detect this, the fixed robotic arm was scanned 30 times by the measurement probe, and the pose variation was computed for each scan. For a fixed robot arm, the true pose variation is zero, so the variation remaining after deducting the algorithm error can be regarded as the error attributed to the structured-light measurement probe. **Figure 48** shows the translation error obtained for the 30 scans. The average translation error contributed by the structured-light measurement probe was 46 μm.
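
A sketch of this deduction, assuming the 30 per-scan translation magnitudes are collected in an array; the numeric comment uses the values reported above.

```python
import numpy as np

def probe_error(measured_variations, mean_algorithm_error):
    """For a fixed arm the true pose variation is zero, so the mean
    measured variation minus the mean algorithm error is attributed
    to the structured-light measurement probe."""
    return np.mean(measured_variations) - mean_algorithm_error

# With the reported algorithm error of 0.007 mm, the probe contribution of
# 0.046 mm implies a mean measured variation of about 0.053 mm over the 30 scans.
```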

**Table 3** summarizes the error budget of the robot arm pose variations. As can be seen, the error comes mainly from the structured-light measurement probe: light perception by the image sensor in the probe may vary for the same object even under identical conditions, causing differences in the measurement results. The experiments detected an average pose variation of 46 μm from this source, accounting for 58.5% of the total error.

**Figure 47.** *Translation errors obtained using the proposed pose detection algorithm.*

**Figure 48.** *Translation error contributed by the structured-light measurement probe.*


**Table 3.** *Error budget analysis of pose variation of the robotic arm.*

| Error source | Average pose variation | Share of total error |
|---|---|---|
| Structured-light measurement probe | 46 μm | 58.5% |
| Robotic arm positioning repeatability | 25 μm | 31.8% |
| Pose detection algorithm | 7 μm | 9.6% |

The next major source of error is the robotic arm itself: system errors lead to a difference between the position specified by the controller and the position actually reached. According to the original technical specifications, this pose variation is 25 μm, which accounts for 31.8% of the total error. A minor source of error is the pose detection algorithm, whose pose variation is attributed to floating-point and matrix calculation errors; an average pose variation of 7 μm was detected, accounting for 9.6% of the total error.
