**1. Introduction**

Many works related to manipulator arms have been developed since the first patent for an anthropomorphic industrial robot was filed in 1973 [1]. These robots were conceived to carry out complex industrial tasks that involved heavy physical loads or risks for workers, and they were usually associated with the pursuit of improved quality and productivity [2–4]. Currently, these robots are associated with high-accuracy tasks such as classification, welding, object manipulation, and assembly.

Artificial vision is a topic that has gained strength in recent years, as can be seen in several works where it is applied to accuracy and classification problems [5, 6]. Nevertheless, artificial vision is not commonly used in developments with manipulator arms; normally, the robot programmer plans a given task, which is then executed cyclically [7]. However, some authors present approaches where different tasks are implemented using artificial vision. An example is presented in [8], where artificial vision is shown as an alternative to sensors for guaranteeing the safety of manipulator arms against collisions. In [9], it is shown how an object-manipulator robot able to distinguish between different pieces of cutlery was designed; the work was divided into two stages: the first stage includes the recognition of the objects through computer vision algorithms and the computation of their coordinates, and the second stage is the movement of the robot to the found coordinates using the forward kinematics together with a gradient descent method. A similar approach was shown in [10], where the authors presented a prototype system that classifies the rocks feeding the grinding process in a concentration plant; in this case, the researchers used a 2D artificial vision system that estimates the size of the rocks from their area and distinguishes them accordingly. Nevertheless, their results are ambiguous because the image acquisition was performed statically in a dynamic system.
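The two-stage scheme attributed to [9], forward kinematics plus gradient descent on the positioning error, can be sketched as follows. This is an illustrative example for a hypothetical two-link planar arm, not the code of the cited work; the link lengths, learning rate, and target point are assumptions.

```python
import numpy as np

# Hypothetical 2-link planar arm; link lengths are illustrative assumptions.
L1, L2 = 0.5, 0.4  # link lengths in metres

def forward_kinematics(q):
    """End-effector position (x, y) for joint angles q = [q1, q2]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian of the forward kinematics."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_gradient_descent(target, q0, lr=0.5, tol=1e-4, max_iter=2000):
    """Minimise 0.5 * ||f(q) - target||^2 by gradient descent on q."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        err = forward_kinematics(q) - target
        if np.linalg.norm(err) < tol:
            break
        # gradient of 0.5*||err||^2 with respect to q is J(q)^T err
        q -= lr * jacobian(q).T @ err
    return q

q = ik_gradient_descent(target=np.array([0.6, 0.3]), q0=[0.2, 0.2])
print(forward_kinematics(q))  # should be close to the target [0.6, 0.3]
```

For a reachable target, the descent converges to one of the two elbow configurations that solve the positioning problem; which one depends on the initial guess `q0`.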

**2.1 Robot arm**

**Figure 1.**
*Methodology description: hardware and software.*

**2.2 Software-robot link**

**Figure 2.**
*Welding robot Miller.*

An MR-2000 Miller welding robot manufactured in 1995 is used for the development project, as shown in **Figure 2**. The robot had spent several years deteriorating, and at the time of the evaluation, several faults were found in the motherboard and in the encoders of the servomotors. This made it impossible to use the original control system.

*Implementation of an Artificial Vision System for Welding in the Retrofitting Process…*

*DOI: http://dx.doi.org/10.5772/intechopen.88360*

The arm has the morphology of a palletizing robot, six degrees of freedom, and an approximate weight of 353 lb (160 kg). Due to the robot's poor condition, the control panel was completely rebuilt; the most representative components required for these modifications are listed in **Table 1**. Taking as reference the palletizing-robot architecture presented in [11], the control of the heaviest parts of the robot (base, big forward arm, and big back arm) is designed so that three high-power servomotors (1.0 kW) are responsible for the global robot movements. Another three 0.12 kW servomotors control the remaining parts, which are more focused on the orientation of the final tool (forearm, wrist, and gripper). The control also includes computational hardware running the Linux OS and the vision system for locating and controlling the arm.

An intensive search was made to select the most appropriate software for controlling the arm; in the process, we found that several of the programs in this field were created by people who could not find a suitable existing option for this kind of application;

In the case of welding robot arms, integrating technologies such as artificial vision and computational intelligence with an industrial robot makes it possible to adapt to multiple environments and different types of processes, which is very attractive. The main reasons for this implementation are to reduce production times and to increase the efficiency of the welding process. Developments such as those discussed in this chapter gain strength because most of the technologies currently available are very expensive for small and medium companies.

The aim of this chapter is to include an artificial vision system in the retrofitting process of a robotic arm. For this purpose, an old robot manipulator supplied by a company was used and its behavior was checked. The general review of the system showed that it could not make complex moves due to failures, associated mainly with the encoders of the servomotors. A new control panel was designed and, in addition, open-source software was implemented to integrate the welding process with artificial vision in order to locate the workpieces spatially. In this section, the robot arm, the camera, and the welding system are described in greater depth. The methodology of each stage of the project is explained, based on the robot arm kinematics and the generation of G code from the intersection points of the pieces.

A work similar to the one proposed in this chapter is presented in [1], where the retrofitting of an industrial robot is developed. That work is especially related to ours because the authors modernized an old robot arm and, in addition, carried out a comparative study of the integration of numerical-control machine controllers using LinuxCNC and Mach3/MatLab. They concluded that LinuxCNC is the better solution because it needs fewer hardware resources than the Mach3/MatLab alternative and its license is free of cost; however, that work does not use artificial vision in the positioning of the tool.
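The step of turning located intersection points into a G-code program can be sketched as follows. This is a minimal illustration, not the chapter's actual generator: the function name, feed rate, retract height, and point values are assumptions, and a real welding program would also handle torch on/off and orientation.

```python
# Hypothetical sketch: convert a list of (x, y, z) weld-path points, such as
# the intersection points located by the vision system, into a minimal G-code
# program in the dialect accepted by LinuxCNC.

def points_to_gcode(points, feed_rate=300.0, safe_z=20.0):
    """Return G-code: rapid approach above the first point, then feed moves."""
    lines = ["G21", "G90"]  # millimetre units, absolute coordinates
    x0, y0, _ = points[0]
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f} Z{safe_z:.3f}")  # rapid approach
    for x, y, z in points:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} F{feed_rate:.0f}")
    lines.append(f"G0 Z{safe_z:.3f}")  # retract the tool
    lines.append("M2")  # program end
    return "\n".join(lines)

# Illustrative seam: three corner points of an L-shaped weld path, in mm.
seam = [(10.0, 5.0, 0.0), (40.0, 5.0, 0.0), (40.0, 25.0, 0.0)]
print(points_to_gcode(seam))
```

Emitting `G1` (linear feed) moves between consecutive points keeps the torch at the programmed feed rate along the seam, while `G0` is reserved for the approach and retract motions where no welding takes place.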
