*Welding - Modern Topics*

The communication with modern power sources is serial<sup>2</sup>, and information is obtained or sent in digital format.

<sup>2</sup> The information bits are sent one by one through a communication channel.

The old welding power sources can have an interface with digital and analogical inputs and outputs to allow monitoring and control of the source. These power sources need a dedicated data acquisition and control system to convert the analog values into digital information and vice versa. Using this system, the user can monitor and control the power source operation partially or fully. To design or select the system, the sampling time, the resolution and range of the analogical inputs and outputs, and the range and type of the digital inputs and outputs should be considered, based on the power source characteristics and application requirements.

Some manufacturers offer hardware interfaces to translate the information from serial protocol to digital and analogical inputs and outputs. If a serial protocol cannot be used, these interfaces can help to get and send information to power sources. These systems can be slower, less accurate, and less efficient than a serial protocol because of the sequential conversions from analogical to digital (on the source), from digital to analogical (on the interface), and from analogical to digital again (on the acquisition system). The ROB 5000 of Fronius, shown in **Figure 2**, is an example [9].

**Figure 2.** *Fronius interface to translate the information between serial protocol and data acquisition system with digital and analogical inputs and outputs.*

**2.2 Measurement of welding speed, welding angles, and orbital angle**

Other variables that must be measured to control robotic welding processes are those that characterize the relative movement and position between the piece and torch. These variables can be obtained from the position control system of the robot, but many times an initial or absolute reference is necessary.

The relative movement between the piece and torch is defined by the *welding speed*. Sometimes, the torch is coupled to the robot manipulator and moves while the piece is fixed. Other systems fix the torch and move the piece. More complex robotic systems move both the torch and the piece. In all cases, it is important to consider the relative speed between the torch and the piece to obtain the welding speed. This speed can be calculated using the torch and piece speeds obtained from the robotic system. In plane welding, the speed can be expressed in length per time unit, such as *mm/s*.

In orbital welding, the *orbital angle* is used to indicate the torch position in the polar coordinate system with the origin on the pipe axis. This measurement is very important because the welding conditions can be different for each orbital position. Its value must be obtained from the robotic system too. In this case, it is possible to express the welding speed in orbital angle per time unit, such as *°/s*.

**Figure 3.** *Welding speed, torch angles, and orbital angle (adapted from [10]).*

**2.3 Measurement of contact tip-to-work distance**

The *contact tip-to-work distance (CTWD)* can be obtained from the robot control system, related to the torch or piece position, or it can be measured with a laser distance sensor. Other variables, such as *electrical stick-out* (or *electrode extension*) and *arc length* (see **Figure 4**), need more complex measurement procedures because they depend on the fusion rate. All these variables are expressed in length units, such as *mm*.

**Figure 4.** *Contact tip-to-work distance and electrical stick-out (or electrode extension) differences (adapted from [11]).*

The first method to obtain the *CTWD* is more economical but has low accuracy. The zero references of the robotic system are calibrated with the workpiece position, but surface variations and the thickness of the material can affect the accuracy of the value. In arc welding processes, small variations in CTWD can affect the electric resistance of the arc; consequently, the welding current and the heat input can change significantly. For example, a reduction of the arc length causes an increase of heat input (and more current, because it reduces the equivalent resistance of the arc), which makes the wire electrode melt more quickly and thereby restore the original arc length.

**Figure 5.** *Time of flight measurement principle (adapted from [12]).*


*Online Measurements in Welding Processes DOI: http://dx.doi.org/10.5772/intechopen.91771*


**Figure 6.** *Phase comparison measuring principle (adapted from [12]).*

**Figure 7.** *Triangulation measuring principle (adapted from [13]).*


A laser sensor can be more accurate, but the measurement point needs to be selected correctly. This sensor measures the distance to a point on the surface of the base metal of the piece, and it has three basic principles of operation: time of flight, phase comparison, or triangulation method.


**Figure 8.** *Contact tip-to-work distance measurement [8].*

In the time of flight measurement principle, shown in **Figure 5**, a laser diode produces short pulses which are projected onto the target. The light reflected from the target is recorded by the sensor element. The time of flight of the light pulse to the target and back determines the measured distance. The integrated electronics in the sensor compute the distance using the time of flight. Sensors using this principle are not sensitive to external light.
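The time-of-flight relation reduces to a one-line computation: the pulse covers the sensor-to-target path twice at the speed of light. A minimal sketch (illustrative values, not from the source):

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def tof_distance(round_trip_time_s: float) -> float:
    """Distance from a time-of-flight laser pulse: the pulse travels
    to the target and back, so the one-way distance is half the total
    path length covered at the speed of light."""
    return C * round_trip_time_s / 2.0

# A pulse that returns after 100 ns corresponds to roughly 15 m.
print(tof_distance(100e-9))
```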

In the phase comparison measuring principle, a high-frequency modulated laser light with low amplitude is transmitted to the target, as shown in **Figure 6**. The phase relationship between the transmitted and received signals changes with the distance of the object. Sensors using this principle operate with high accuracy for measurement distances up to 150 m.
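The measured phase shift maps linearly onto distance within half a modulation wavelength; beyond that, the reading wraps around. A sketch under these textbook assumptions (modulation frequency and phase values are illustrative, not from the source):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_distance(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Distance from the phase shift between transmitted and received
    modulated laser light. The result is only unambiguous within half
    a modulation wavelength, c / (2 * f_mod); longer distances alias
    back into that interval."""
    unambiguous_range_m = C / (2.0 * f_mod_hz)
    fraction = (phase_shift_rad % (2.0 * math.pi)) / (2.0 * math.pi)
    return fraction * unambiguous_range_m

# 10 MHz modulation gives a ~15 m unambiguous range;
# a pi/2 phase shift then corresponds to about 3.75 m.
print(phase_distance(math.pi / 2.0, 10e6))
```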

In the triangulation method, shown in **Figure 7**, the laser beam is projected and reflected from a target surface to a collection lens. The lens focuses an image of the spot on a linear array camera. The camera views the measurement range from an angle at the center of the measurement range. The position of the spot image on the pixels of the camera is then processed to determine the distance to the target. The camera integrates the light falling on it, so long exposure times allow greater sensitivity to weak reflections.
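In the simplest triangulation geometry the distance follows from similar triangles between the baseline and the spot's image offset on the sensor. The sketch below assumes an idealized parallel-axis arrangement (all values illustrative, not from the source):

```python
def triangulation_distance(baseline_m: float, focal_length_m: float,
                           spot_offset_m: float) -> float:
    """Idealized laser triangulation via similar triangles, assuming
    the laser beam runs parallel to the camera's optical axis at a
    known baseline. The spot's image offset on the sensor shrinks as
    the target moves away: z = f * b / x."""
    return focal_length_m * baseline_m / spot_offset_m

# 50 mm baseline, 16 mm lens, spot imaged 2 mm off-axis -> 0.4 m range.
print(triangulation_distance(0.050, 0.016, 0.002))
```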

Most laser distance sensors emit in the red wavelengths (close to 658 *nm*), but in recent years blue-violet lasers with a shorter wavelength (close to 405 *nm*) have been used in welding processes and other processes that work with red-hot glowing metals. The shorter wavelength provides higher optical resolution and noise reduction, so blue-violet laser sensors enable more reliable measurements in these processes than red laser sensors.

Other research methods use the voltage and current feedback signals from the welding process, computing the minimum resistance during the short-circuit period and using this value to estimate the *CTWD* after applying a correction factor for the duration of the short circuit. The effect of wire feed speed, actual CTWD, and shielding gas on the correction factor is determined experimentally [14].
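A minimal sketch of this idea follows. The linear resistance-to-CTWD mapping and all numeric values are purely illustrative; in [14] the correction is an experimentally determined function of wire feed speed, actual CTWD, and shielding gas:

```python
def estimate_ctwd_mm(voltages_v, currents_a, k_mm_per_ohm):
    """Sketch of a minimum-resistance CTWD estimate.
    voltages_v / currents_a: samples acquired during one short-circuit
    period. k_mm_per_ohm: placeholder linear correction factor; the
    real correction in [14] is determined experimentally."""
    r_min = min(v / i for v, i in zip(voltages_v, currents_a) if i > 0)
    return k_mm_per_ohm * r_min

# Illustrative samples: minimum resistance is 1.5 V / 150 A = 10 mOhm.
print(estimate_ctwd_mm([2.0, 1.5, 1.8], [100.0, 150.0, 120.0], 1500.0))
```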

The *CTWD* can also be calculated from the bead profile obtained with a laser profilometer. The distance between the torch contact tip and the sensor reference is fixed and known, and the CTWD is the difference between the profile baseline value and this distance, as shown in **Figure 8**.
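This baseline subtraction is a one-liner; the sketch below uses illustrative distances, not values from the source:

```python
def ctwd_from_profile_mm(baseline_mm: float, tip_to_sensor_mm: float) -> float:
    """CTWD from a laser profilometer scan: the sensor-to-workpiece
    distance read on the profile baseline, minus the fixed, known
    offset between the torch contact tip and the sensor reference."""
    return baseline_mm - tip_to_sensor_mm

# Illustrative values: 95 mm baseline reading, 80 mm fixed offset.
print(ctwd_from_profile_mm(95.0, 80.0))  # -> 15.0
```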

**2.4 Measurement of welding joint and weld bead geometry**

The quality of arc welding is commonly described by the geometry of the molten weld pool and the weld bead because the mechanical properties of the welding joint are reflected in these geometry characteristics. The geometry of the weld bead is a set of parameters defined in the design stage, and to achieve the required quality, it should be measured and controlled throughout the process. The parameters or variables which define the most important characteristics of the geometry of the weld bead (including the weld pool) are the *weld bead width*, the *weld bead reinforcement*, and the *weld bead depth* or *weld bead penetration*, as shown in **Figure 9**.


The visual information of the molten weld pool is used by expert operators to control the welding process in manual welding. In automatic welding processes, this information can be used to improve the control behavior and achieve the desired quality in the welding joint. In arc welding processes, the geometry parameters are governed by many factors, such as *welding current*, *welding voltage*, *wire feed speed*, *welding speed*, and the *contact tip-to-work distance*. Then, for successful control of geometric variables, it is necessary to provide feedback to the control system.


**Figure 9.** *Geometric dimensions of the weld bead: (a) cross-section, (b) top view, (c) side view, and (d) side view of a longitudinal cut [7].*

**Figure 10.** *Two-dimensional laser triangulation principle [7].*


For feedback implementation, it is important to know that the adverse environmental conditions of the process, in the vicinity of the electric arc and the molten pool, would damage measuring instruments that require physical contact with the surface of the piece. These conditions make measuring the weld bead geometry a difficult task using conventional measuring principles. Instead, noncontact techniques have been developed and employed successfully.

For this purpose, vision sensing is a promising solution. One common method uses a laser beam that draws one or more lines on the surface to be measured; the image is filtered to keep only the wavelength emitted by the laser, and a camera or matrix image sensor captures the line created by the laser. Subsequently, using image processing algorithms and triangulation techniques, a profile of the piece with the required information is obtained.

This system, referred to as a laser scanner or profile sensor, is shown in **Figure 10**. In it, the optical system projects the diffusely reflected light of this laser line onto a highly sensitive sensor matrix. From this matrix image and the angle between the camera and the laser diode (*α*), the controller calculates the distance information (z-axis) and the position along the laser line (x-axis). These measured values are then projected in a two-dimensional coordinate system that is fixed for the sensor. To obtain three-dimensional measurement values, the sensor or workpiece can be moved in a controlled manner.
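The stacking of 2D profiles into a 3D measurement while the sensor moves can be sketched as follows. The constant travel speed, scan rate, and profile values are illustrative assumptions, not from the source:

```python
def profiles_to_cloud(profiles, travel_speed_mm_s, scan_rate_hz):
    """Stack successive 2D laser-scanner profiles into a 3D point
    cloud. Each profile is a list of (x, z) pairs in the sensor frame;
    moving the sensor at a constant speed along y turns the scan
    index into a y coordinate."""
    step = travel_speed_mm_s / scan_rate_hz  # y spacing between scans
    cloud = []
    for k, profile in enumerate(profiles):
        y = k * step
        for x, z in profile:
            cloud.append((x, y, z))
    return cloud

profiles = [[(0.0, 10.0), (1.0, 10.2)], [(0.0, 10.1), (1.0, 10.3)]]
print(profiles_to_cloud(profiles, travel_speed_mm_s=5.0, scan_rate_hz=100.0))
```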

Another way to obtain a three-dimensional profile is by using a laser pattern with a dot matrix or line grill, as shown in **Figure 11**. The two-dimensional information is processed using triangulation techniques to obtain a three-dimensional profile. In [15], the weld pool surface deformation is obtained from the projection pattern using the deformation of the lines (see **Figure 11b**) or the distance between points (see **Figure 11c**). Other implementations of this method are shown in [16–18]. In these methods, the uncertainty in recognizing the real position is a problem. For this reason, a point of the dot pattern is intentionally missed to serve as a reference.

**Figure 11.** *Three-dimensional profile of the weld pool surface using structured light and triangulation method: (a) diagram of measure system, (b) reflected image using line laser pattern, and (c) reflected image using dot laser matrix (adapted from [15]).*


The same measuring principle can also be used to obtain a profile of the weld joint before joining. This allows the implementation of algorithms to define or adjust the trajectory to be followed by the torch (seam-tracking algorithms). Similarly, it is possible to estimate the amount of material required (deposition rate) for the formation of a bead with the desired dimensions [7].

The use of video cameras to measure the weld bead width and reinforcement is possible too, as is shown in [19–21], but these methods need optimal light conditions and are difficult to apply in the industrial environment.

These principles cannot be applied to penetration measurement, so this variable must be estimated in a different way. Due to the complexity of the measurement methods, a separate section is dedicated to estimation methods for this magnitude.

**2.5 Measurement of weld bead depth or penetration**

Total penetration in welding processes is important to ensure weld quality. When total penetration takes place, the molten weld pool crosses to the bottom side of the workpiece, as shown in **Figure 12**. The depth of penetration of the weld bead can be determined by nondestructive testing techniques such as ultrasound or X-ray. However, the portability and robustness of these traditional instruments make them a poor option for the harsh conditions of the process and for the development of an online measuring system. Because of that, many research works attempt to detect total or partial penetration with other methods, but few obtain a value useful for the control system algorithm.

**Figure 12.** *Total and partial penetration in the weld bead [7].*

**Figure 13.** *Indirect sensing technologies used to monitor the bead and weld pool [7].*

Measurement of the backside width of the weld pool can be used to ensure total penetration, but this method is difficult because of the reduced space, limited movements, and restricted access, among other conditions, that make it impossible to place the sensors under the weld pool.

The front side of the weld pool offers information about the total penetration status. This information includes the *temperature of arc and weld pool*, *arc voltage*, *arc light*, *arc sound*, *geometry parameters*, *oscillation frequency*, and *resonance frequency*, among others. For obtaining this information, indirect sensing technologies are used. These measuring technologies can be classified as conventional, vision, and multi-sensor or sensor fusion technologies, as shown in **Figure 13**.

Conventional sensing technologies monitor parameters closely related to the weld bead and weld pool geometry. These include ultrasound, infrared thermography, weld pool oscillation, arc sound, and X-ray, among others. Vision sensing technologies obtain these features as skilled welders do; they can be divided into two-dimensional and three-dimensional sensing. These methods are applied to obtain the geometric shape of the weld pool and weld bead with good results. Sensor fusion integrates several sensing technologies in the same monitoring system. **Figure 14** shows the literature review statistics, obtained in [7], about the use of indirect monitoring technologies to estimate the weld bead geometry.

*2.5.1 Modeling and estimating*

Some of these measuring methods need a model to obtain the desired information from the process. Estimators built on representative models have been used successfully under specific conditions, but it is imperative to create a model that can be easily programmed and fed into the control system. The model must have satisfactory precision in the prediction of the depth of the weld bead and cover all of the positions used in the welding work. It is very useful if it also represents a wide range of material thicknesses, but this is not always possible.

The research about modeling the weld bead depth tries to relate this variable with the welding current intensity, welding voltage, wire feed speed, and welding speed. Documents analyzed in [7] show that the research is done mainly in horizontal or flat welding to obtain static models, and that researchers mostly use artificial intelligence algorithms. The most used methods to estimate the weld bead geometry are artificial neural networks, fuzzy logic, and their combinations. This group represents 28% of the works, as shown in **Figure 15**. Image processing and statistical techniques such as multiple regression analysis, least squares, or factorial design are also frequently used. All these make up 58% of the total of the publications found.

**Figure 14.** *Literature review statistics about indirect monitoring technologies used to obtain measurements of the weld bead geometry [7].*

**Figure 15.** *Literature review statistics about analysis method used to estimate the weld bead geometry [7].*
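The neural-network estimators mentioned above amount, at prediction time, to a simple forward pass from process parameters to a geometry value. The sketch below is an untrained, one-hidden-layer perceptron with placeholder weights (a real model is fitted to experimental macrograph data; all numbers here are illustrative):

```python
import math

def mlp_depth_estimate(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer perceptron mapping
    normalized process parameters (e.g., current, voltage, wire feed
    speed, travel speed) to a bead-depth estimate. Weights are
    placeholders, not a trained model."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

w_hidden = [[0.5, 0.0, 0.0, 0.0], [0.0, 0.5, 0.0, 0.0]]
b_hidden = [0.0, 0.0]
w_out, b_out = [1.0, 1.0], 0.0
print(mlp_depth_estimate([1.0, 1.0, 0.0, 0.0], w_hidden, b_hidden, w_out, b_out))
```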

as shown in **Figure 16**, but it is very common to find more than one way of working

• Information level represents the input magnitudes and zones that are

*Classification according to the relationships of the input data (adapted from [24]).*

• The fusion of information is the algorithms used to obtain the fused

• Fused information is the final result, for example, the weld bead depth.

**Figure 16** shows four levels of the fusion process. The following list exemplifies

measured by sensors, for example, the weld bead and weld pool dimensions.

• Source of information is the sensors installed on the process, for example, the

information, for example, an image processing and neural network algorithms.

The *complementary fusion* is about fusing incomplete information that is obtained starting from different sources. This is the case in that several sensors are measuring different parts of an atmosphere or phenomenon, covering a bigger area, and allowing a more complete and more global vision of the process. For example, you can combine the weld pool thermographic information (*A*) obtained from an infrared camera (*S1*) and weld bead dimensions (*B*) obtained from a video camera (*S2*)

In *competitive or redundant fusion*, all the sensors are monitoring the same area, working redundantly and competitively. These sensors can have similar or different measurement principles. An example is the dimensional information (*B*) about the weld bead obtained from two video cameras (*S2* and *S3*) used to calculate the weld

*Cooperative or coordinated fusion* uses the information from independent several sensors to obtain new information, for example, the combination of the information of the weld bead (*C*) obtained from a vision system (*S4*) and pyrometer (*S5*) to

A cooperative sensor fusion algorithm is used in [22] to obtain an estimator of the weld bead depth *D*^ for GMAW process. The developed algorithm tries to obtain information about the amount and spatial distribution of the energy supplied to the workpiece to estimate the depth of the weld bead using a thermographic camera and welding electric current measurements. The fusion algorithm is based on a perceptron neural network that combines the infrared features *T* of the weld molten pool, the welding current in the actual *i nT* ð Þ and previous sample *i nT* ð Þ � *T* ,

on the same group of sensors.

*Online Measurements in Welding Processes DOI: http://dx.doi.org/10.5772/intechopen.91771*

**Figure 16.**

welding process parameters:

video and thermographic cameras.

to calculate the weld pool dimensions (*a+b*).

estimate the weld bead penetration value (*c*).

bead dimensions (*b*).

**91**

In the welding process, the thermal energy is supplied through an electric current and stored in the material. Due to thermal inertia, dynamic models can be a better representation of the process. In these models, the historical values of the welding parameters are also selected as inputs. Dynamic behavior is essential to estimate the current and future state form to the past state. Despite this, [7] shows dynamic models in a minority. The main cause of the selection of the static model is the difficulty in obtaining a continuous data set of the weld bead depth. One way to obtain it is to perform a longitudinal cut and a macrographic analysis on the weld bead with an image processing algorithm as made in [22]. In the traditional crosssection cut, it is not possible to obtain enough information to make a dynamic model.

#### *2.5.2 Sensor fusion*

Measurements obtained by several sensors or measuring systems can be used to estimate the values of the same or other magnitudes. Techniques that combine different sensing technologies or information sources to obtain better sensing results are called sensor fusion (or sensory data fusion). Sensor fusion is a multilevel process that needs a model to combine the information and to describe the static or dynamic behavior of the process.

There are applications in different fields, such as aerial and ground navigation of mobile robots, environmental monitoring systems, visual sensor networks, medicine, security, fault detection, and quality control, among others, as shown in [23]. This is a relatively young research area, but it is already the third most used method for indirect monitoring of the welding process, as shown in **Figure 14**. In recent years, these methods have been studied to achieve effective sensing of the weld bead during welding.

Sensor fusion can be classified in different ways, as discussed in [24]. One of the most representative classifications for welding processes is *according to the relationships of the input data*, which defines how the information from the different sensors is related. It can be complementary, competitive or redundant, or cooperative, as shown in **Figure 16**, but it is very common to find more than one way of working on the same group of sensors.

**Figure 16.** *Classification according to the relationships of the input data (adapted from [24]).*

**Figure 16** shows the four levels of the fusion process. The following examples relate them to welding process parameters:


*Complementary fusion* merges incomplete information obtained from different sources. This is the case when several sensors measure different parts of an environment or phenomenon, covering a bigger area and giving a more complete, global view of the process. For example, the weld pool thermographic information (*A*) obtained from an infrared camera (*S1*) can be combined with the weld bead dimensions (*B*) obtained from a video camera (*S2*) to calculate the weld pool dimensions (*a+b*).

In *competitive or redundant fusion*, all the sensors monitor the same area, working redundantly and competitively. These sensors can share a measurement principle or use different ones. An example is the dimensional information (*B*) about the weld bead obtained from two video cameras (*S2* and *S3*), which is used to calculate the weld bead dimensions (*b*).
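As an illustration of competitive fusion, a common way to merge two redundant measurements is inverse-variance weighting. This minimal sketch (function name and noise figures are assumptions, not taken from the chapter) fuses two width readings of the same bead:

```python
def fuse_redundant(b2, var2, b3, var3):
    """Fuse two redundant width measurements of the same weld bead
    (e.g., from cameras S2 and S3) with inverse-variance weighting:
    the less noisy sensor gets the larger weight."""
    w2, w3 = 1.0 / var2, 1.0 / var3
    b = (w2 * b2 + w3 * b3) / (w2 + w3)
    var = 1.0 / (w2 + w3)      # fused variance is smaller than either input
    return b, var
```

The fused estimate always has lower variance than either sensor alone, which is the practical payoff of redundancy.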

*Cooperative or coordinated fusion* uses the information from several independent sensors to obtain new information, for example, combining the weld bead information (*C*) obtained from a vision system (*S4*) and a pyrometer (*S5*) to estimate the weld bead penetration value (*c*).

A cooperative sensor fusion algorithm is used in [22] to build an estimator of the weld bead depth *D̂* for the GMAW process. The algorithm extracts information about the amount and spatial distribution of the energy supplied to the workpiece, using a thermographic camera and welding current measurements, to estimate the depth of the weld bead. The fusion algorithm is based on a perceptron neural network that combines the infrared features *T* of the weld molten pool, the welding current at the current sample *i(nT)* and at the previous sample *i(nT − T)*, and the previous depth estimation *D̂(nT − T)*. The symbol *T* is the sample time and *n* is the sample number. These previous values allow the network to capture the dynamic behavior of the process.

The artificial neural network has 8 neurons in the input layer, 12 neurons in the hidden layer, and 1 in the output layer. The activation function is the hyperbolic tangent sigmoid transfer function. The network is trained with experimental measurements of the input and output parameters using the backpropagation algorithm. A block diagram is shown in **Figure 17**.

**Figure 17.** *Weld bead depth estimator block diagram, based on an artificial neural network, developed in [22].*
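A forward pass of such a network can be sketched with NumPy. The weights below are random placeholders standing in for the values that backpropagation training would produce, and the input ordering is an assumption based on the eight quantities named in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
# Layer sizes from the text: 8 inputs (Tp, Tb, Tw, Ta, Tv, i(nT),
# i(nT - T), previous depth estimate), 12 hidden neurons, 1 output.
W1, b1 = rng.normal(size=(12, 8)) * 0.1, np.zeros(12)
W2, b2 = rng.normal(size=(1, 12)) * 0.1, np.zeros(1)

def estimate_depth(x):
    """One forward pass of the 8-12-1 perceptron with hyperbolic
    tangent sigmoid activations; the untrained weights here are
    stand-ins for the values found by backpropagation."""
    h = np.tanh(W1 @ x + b1)        # hidden layer
    return np.tanh(W2 @ h + b2)[0]  # scalar (scaled) depth estimate
```

Because one input is the previous estimate *D̂(nT − T)*, the trained network would be run recurrently, feeding each output back into the next sample's input vector.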

The thermographic matrix supplied by the thermographic camera is processed with a moving average filter to obtain the thermographic peak *Tp*, base plane *Tb*, thermographic curve width *Tw*, thermographic area *Ta*, and thermographic volume *Tv*. The thermographic image is taken with the weld pool area as the physical reference, but the welding arc and the electrode are also included. These features, for one sample, are shown in **Figure 18**.

**Figure 18.** *Features extracted from an infrared image: (a) three-dimensional curve, (b) front view of the curve, (c) intersection between the boundary plane and the three-dimensional curve, and (d) calculation of the width, area, and thermographic volume.*

The base plane is calculated as the average of 10% of the values on the left and right sides, as shown in **Figure 18b**. The boundary plane is 10% above the base plane. The thermographic area is the sum of active pixels in the intersection between the thermographic surface and the boundary plane, and the thermographic volume is the sum of the thermographic values within that intersection, shown in **Figure 18d**.
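The feature extraction just described can be prototyped directly on the thermographic matrix. This sketch follows the stated rules (outer 10% of columns averaged for the base plane, boundary plane 10% above it); the exact pixel conventions of [22] may differ:

```python
import numpy as np

def thermo_features(T):
    """Extract the features described above from one thermographic
    frame T. Assumes positive temperature values, so the boundary
    plane can be taken as 1.1 times the base plane."""
    peak = T.max()                                   # Tp
    k = max(1, T.shape[1] // 10)                     # outer 10% of columns
    base = np.concatenate([T[:, :k], T[:, -k:]], axis=1).mean()  # Tb
    boundary = base * 1.10                           # plane 10% above Tb
    active = T > boundary                            # intersection mask
    area = int(active.sum())                         # Ta: active pixels
    volume = float(T[active].sum())                  # Tv: values inside
    width = int(active.any(axis=0).sum())            # Tw: columns crossed
    return peak, base, area, volume, width
```

Like the original method, this uses only comparisons and sums, so it maps well onto embedded devices.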

This algorithm was optimized for implementation in embedded devices.

**Figure 19.** *Results obtained in the estimation of the weld bead depth in [22]: (a) weld bead depth longitudinal profile and (b) model response and experimental measurements. In the sample axis, each value corresponds to a position in the piece.*

A contribution of the method is the minimization of errors when multiple inflection points are found, because it does not use the second derivative to calculate the thermographic width. In addition, the volume is computed from the actual thermographic curve instead of the ideal Gaussian curve that most studies use as an approximation, which requires more complex equations. This approach uses only addition operations, simplifying the calculations and improving model accuracy.

The weld bead depth profile for network training was obtained with an image processing algorithm applied to the macrographic picture of a longitudinal cut of the piece. **Figure 19a** shows 610 measurements of the weld bead depth (yellow line) and the base metal surface (red line). The model has a fit of 0.99844, a performance or mean square error (MSE) of 7.61 × 10⁻⁴, and an estimation error of less than 0.05 mm (less than 5% of full range). The error curves in **Figure 19b** show a model response that represents the behavior of the process with great accuracy.

It can be noticed that accurate sensing results were obtained based on multisensor information fusion technology, thanks to richer weld pool information and effective information fusion techniques.
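The two figures of merit quoted for the model, the fit and the MSE, can be reproduced for any measured/estimated profile pair. Here the fit is taken to be the linear correlation coefficient, one common definition of the regression fit (an assumption, since the chapter does not define the term):

```python
import numpy as np

def fit_and_mse(measured, estimated):
    """Compare a measured depth profile with the model output using
    the fit (here: Pearson correlation between the two curves) and
    the mean square error."""
    r = np.corrcoef(measured, estimated)[0, 1]
    mse = np.mean((measured - estimated) ** 2)
    return r, mse
```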

**3. Evolution and comparison of the techniques and methods used for measurement and estimation of the geometry of weld bead**

An analysis made in [7] about techniques and methods used for measurement and estimation of the geometry of the weld bead found publications and patents

