**Meet the editor**

Dr. Luigi Cocco is a professional engineer with more than 10 years of experience in the automotive industry. He has worked in the Electrical and Electronic R&D and Supplier Quality departments of Italian racing and luxury vehicle manufacturers such as the Ferrari F1 team, Maserati S.p.A., and Automobili Lamborghini. He is currently the electrical/electronic project lead for a new Alfa Romeo premium car. After his master's degree in Telecommunication Engineering, he received his PhD in Information Engineering; he has published several papers and conference contributions on electronic measurements and digital signal processing. *Modern Metrology Concerns* was the first book edited by Luigi Cocco for InTech, in 2012.

### Contents

Roman Nevshupa and Marcello Conte

Chapter 7 **Optical Pressure Measurement Principle System 157**
Limin Gao, Ruiyu Li and Bo Liu

Chapter 8 **Self-Calibration of Two-Dimensional Precision Metrology Systems 185**
Chuxiong Hu and Yu Zhu

Chapter 9 **Innovative Theoretical Approaches Used for RF Power Amplifiers in Modern HDTV Systems 211**
Daniel Discini Silveira, Marcos Paulo de Souza Silva, Marcel Veloso Campos and Maurício Silveira

Section 4 **Soft Metrology 253**

Chapter 10 **Objectifying the Subjective: Fundaments and Applications of Soft Metrology 255**
Laura Rossi

## Preface

Given the incessant growth of technology and the ever higher complexity of engineering systems, one of the crucial requirements for confidently steering both scientific and industrial challenges is to identify an appropriate measurement approach. A general process can be considered effective and under control if the following elements are consciously and cyclically managed: a numerical target, adequate tools, output analysis, and corrective actions. The role of metrology is to rigorously harmonize this virtuous circle, providing guidance in terms of instruments, standards, and techniques to improve the robustness and accuracy of the results. This book is designed to offer an interdisciplinary experience of the science of measurement, not only covering high-level measurement strategies but also supplying analytical details and experimental setups. The topics treated in this work were selected to collect relevant advances in some key areas of metrology; the text is composed of 10 chapters and is targeted toward academic as well as professional readers.

The volume is divided into four sections:

#### **Microwave and Terahertz Spectrum**

*Chapter 1* provides a background on radio frequency and microwave power measurements; the working principle of the primary power standard is described, followed by discussions of the direct comparison transfer technique and performance evaluation/uncertainty estimation. Some recent developments in waveguide microcalorimeters are mentioned.

*Chapter 2* presents the typical architecture of measurement systems based on THz technology for scientific and industrial applications; the most common sources and detectors routinely used for the manipulation of T-waves are described, and THz system performance and measurements are discussed in terms of issues and solutions.

#### **Biomedical and Agriculture**

*Chapter 3* recommends a flowchart for selecting the right model of measurement uncertainty in the medical laboratory in order to produce more realistic results; four approaches are presented, and an example is given for a single test for easier understanding.

*Chapter 4* proposes an alternative method for speech recognition based on the electromyography signal generated by the human speech articulation system; common methods for speech recognition are based on the analysis of an acoustic signal, with limitations in noisy or silent environments.

*Chapter 5* describes the foundations of the electrical conductivity techniques nowadays available for the assessment of soil salinity in agriculture; dedicated models, the main commercial sensors, examples of practical applications, and future trends in this technology are presented.

#### **Force, Pressure, Dimensions and Signals**

*Chapter 6* summarizes some recent advances in the development of force measuring systems for high-temperature nano-indentation and ultrahigh vacuum tribometry; an example of a force measuring head for an ultrahigh vacuum system for the tribological characterization of materials and lubricants is shown.

*Chapter 7* explains an innovative technique to measure the surface pressure distribution of aerodynamic components using pressure-sensitive paint, comparing it with the traditional method in terms of complexity, cost, timing, and performance.

*Chapter 8* studies a least-squares-based self-calibration of two-dimensional precision metrology systems and a holistic self-calibration algorithm, comparing it with the traditional standard process in terms of accuracy, robustness, and simplicity; a practical self-calibration procedure is provided.

*Chapter 9* suggests theoretical and numerical approaches that can be used for modeling nonlinear effects in the design of power amplifiers, which are widely used in modern industrial production lines of high-resolution digital TV systems.

#### **Soft Metrology**

*Chapter 10* focuses on soft measurements (subject to human perception and interpretation); a theoretical overview of the main concepts and a mathematical model to measure a "soft measurand" through a dedicated index (IPER, Influence on Performance Index) are proposed.

I would like to express my very great appreciation to all the authors for their valuable contributions; I would also like to thank my fiancée Rosy for all her support and patience.

> **Luigi Cocco**
> R&D Electronics, Maserati S.p.A.
> Modena, Italy


### **Microwave Power Measurements: Standards and Transfer Techniques**

Xiaohai Cui, Yu Song Meng, Yueyan Shan and Yong Li

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/60442

#### **Abstract**

In this chapter, precision power measurement, which is probably the most important area in RF and microwave metrology, will be discussed. Firstly, the background of RF and microwave power measurements and standards will be introduced. Secondly, the working principle of the primary power standard (i.e., the microcalorimeter) will be described, followed by a discussion of the direct comparison transfer technique. Finally, there will be some discussion of the performance evaluation and uncertainty estimation for microwave power measurements.

**Keywords:** Direct comparison transfer, Microcalorimeter, Primary standard, RF and microwave power, Thermistor mount

#### **1. Introduction**

Recently, there has been growing interest in higher-frequency bands such as microwave and millimeter-wave applications, which are becoming a promising solution for satellite communications [1, 2] and millimeter-wave mobile backhauling [3]. For the proper deployment of these applications and services, accurate and reliable signal power measurements are essential for system designers. Normally, end users (i.e., system designers) rely heavily on a conventional power detector and power meter combination, or on a spectrum analyzer, for microwave and millimeter-wave power measurements. These measuring instruments have to be properly calibrated, with traceability to the International System of Units (SI), to assure the quality of measurement results as required by industry.

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

As stated in ref. [4], the traceability of measuring instruments shall be achieved by means of an unbroken chain of calibrations or comparisons linking to relevant primary standards of the SI units of measurement, as illustrated in **Figure 1**. The link to the SI unit can be realized by a primary standard developed and maintained by a national metrology institute (NMI), such as the National Institute of Metrology (NIM) of China or the National Metrology Centre (NMC), A\*STAR, of Singapore. For RF, microwave, and millimeter-wave measurements and standards, power measurement has been recognized as one of the primary areas [5], and probably the most important research area, by the NMIs. For simplicity, in the rest of this chapter "microwave measurement" will be synonymous with "RF, microwave, and millimeter-wave measurement."

**Figure 1.** Typical traceability chain of RF, microwave, and millimeter-wave power measurements (NMIs: national metrology institutes).

In the following, we will firstly give a background on microwave power measurements and standards. Secondly, a primary power standard (i.e., a microcalorimeter) will be discussed, with recent developments at NIM, China. This will be followed by a discussion of the working principle of the microcalorimeter measurement system. The direct comparison transfer technique will then be introduced, together with some improvements at NMC, A\*STAR, of Singapore. Finally, performance evaluation and uncertainty estimation for microwave power measurements will be discussed.

#### **2. Background of Power Measurements and Standards**

Basically, microwave power can be measured by the combination of a power detector and a power meter, as shown in **Figure 2**. The power detector is a key instrument for power measurements, and its function is to convert high-frequency (i.e., RF, microwave, and millimeter-wave or higher) power to a direct current (DC) or low-frequency signal that a power meter can measure with a display. Different working principles and fabrication techniques have led to several power detectors that have been widely used in commercial applications.

**Figure 2.** Microwave power measurement using the combination of a power detector and a power meter.

#### **2.1. Commercial Power Detectors**


Three main types of power detectors are commercially available, based on the bolometric element, the thermoelectric element, and the diode, respectively. Broadly, their working principles are the change of a temperature-sensitive resistance heated by the absorbed power (bolometric), the thermoelectric voltage generated by heating a thermocouple junction (thermoelectric), and the rectification of the microwave signal into a DC output (diode).
It is noted that each type of power detector indicated above has its own strengths and weaknesses for its application. In the early days [5], the diode detector was very sensitive to ambient temperature and also had poor linearity, and it was therefore rarely adopted as a transfer standard. Most of the NMIs have continuously used bolometric detectors (i.e., bolometers) as transfer standards, since they are very nearly linear when used with a primary power standard (e.g., a microcalorimeter [6]) through the DC substitution technique. Bolometric detectors can also offer extremely high long-term stability with very low measurement uncertainty [7, 8].

However, a bolometric detector normally has a narrow dynamic range and limited power capability (e.g., a power range from 10 μW to 10 mW [7]). Additionally, its production has been discontinued, following a new industry trend toward other types of power sensors (e.g., diodes and thermocouples). Some NMIs have therefore attempted to use thermoelectric detectors (e.g., thermocouples) as transfer standards [9], which are linear with better sensitivity and dynamic range. A performance comparison between bolometric and thermoelectric sensors has recently been reported in ref. [10] using the same microcalorimeter. The results revealed that the two power standards (bolometric and thermoelectric sensors) in the comparison can be considered equivalent.

In the following, the bolometric detector as the transfer standard will be introduced, since it has been widely used by most of the NMIs, including NIM of China and NMC, A\*STAR, of Singapore. Its calibration using a microcalorimeter will be the focus of the description.

#### **2.2. Reference Power Standard: Bolometer**

Bolometers have very high reliability and have been used as the reference power standards in most of the NMIs, together with a microcalorimeter. A bolometer consists of a small temperature-sensitive resistor; it operates by changing its own resistance following a change in its temperature resulting from the incident microwave power dissipated in the bolometric element.

Two types of bolometers have been commonly used, namely, barretter and thermistor. The barretter is a thin metal wire with a positive temperature coefficient of resistance, and the thermistor is a small bead of semiconductor material with a negative temperature coefficient of resistance [11]. It is noted that the thermistor is more sensitive than the barretter due to a much greater temperature coefficient, but it has a slower response time due to its larger thermal time constant [12].
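As a rough numerical illustration of the thermistor's negative temperature coefficient (this sketch is not from the chapter; the B-parameter model and all parameter values are illustrative assumptions):

```python
import math

def thermistor_resistance(T, R0=200.0, T0=298.15, B=3400.0):
    """B-parameter model of an NTC thermistor: resistance falls as the
    bead warms up. R0 is the resistance at reference temperature T0 (K);
    B is the material constant. All values here are hypothetical."""
    return R0 * math.exp(B * (1.0 / T - 1.0 / T0))

# Absorbed microwave power warms the bead, so its resistance drops;
# this is the change the self-balancing bridge (Section 3.1) compensates.
print(thermistor_resistance(298.15))  # 200.0 at the reference temperature
print(thermistor_resistance(308.15) < thermistor_resistance(298.15))  # True
```

The steep exponential dependence is why the thermistor is more sensitive than the barretter, at the cost of a larger thermal time constant.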

The thermistor is therefore more popularly used. Typically, a thermistor bead has a diameter of around 0.05–0.5 mm, with a small (15–100 μm diameter) metal wire embedded inside. A waveguide or coaxial termination that houses a thermistor with an internal matching circuit to obtain specified impedance conditions (e.g., 100 Ω or 200 Ω) with appropriate DC bias power applied [11] is called a thermistor mount. The schematic diagram of a popular type of waveguide thermistor mount is shown in **Figure 3**, and some samples of waveguide thermistor mounts used at NIM, China, are shown in **Figure 4**.

**Figure 3.** One popular type of waveguide thermistor mount.

**Figure 4.** Waveguide thermistor mounts.

#### **2.3. Primary Power Standard: Microcalorimeter**


At present, calorimeters are accepted as the basis of primary standards for microwave power measurements and calibrations within the NMIs and standards laboratories [5]. Among several different types of calorimeters (e.g., dry load calorimeters and flow calorimeters [8]), microcalorimeters [13, 14] have been popularly used. The microcalorimeter technique is based on the DC substitution method, and its traceability is established through the principle of "thermal effect equivalence." It allows the experimental determination of the effective efficiency of thermal power sensors (i.e., bolometric and thermoelectric sensors).

**Figure 5.** Prototype of a waveguide microcalorimeter with twin-line structure.

**Figure 5** presents the design and configuration of a waveguide microcalorimeter, which is China's national primary power standard, developed and maintained at NIM, China, for thermistor mount [device under test (DUT) in this case] measurements [15–18] through the DC substitution technique (i.e., the applied microwave power is compensated by an appropriate reduction of DC power). As shown in **Figure 5**, the design of the waveguide microcalorimeter at NIM, China, is based on a twin-line structure with a symmetrically located inactive mount (i.e., a "dummy" thermistor mount) as the temperature reference. A thermopile is attached in between to monitor the temperature difference when the microcalorimeter is in operation with a thermistor mount (DUT). With identical DUT and dummy mounts, nearly the same thermal transmission paths can be achieved, producing almost identical responses to the ambient temperature at both terminals of the thermopile. This twin-line design makes the microcalorimeter less affected by the ambient temperature; more specifically, it effectively reduces the influence of long-term ambient temperature drift on the measurement results.

The core part of the microcalorimeter, as shown in **Figure 5**, consists of a base extension, two thermal isolation sections (TIS), and two interface plates. The DUT is attached to a standard waveguide flange on the interface plate with screws that pass through all three core components. The TIS is about 6 mm thick and is made of gold-coated ABS plastic so that the waveguide section has little loss. A thermistor has been embedded into the TIS for monitoring its temperature rise due to unexpected power consumption within the thermal isolation waveguide. A sample of the fabricated WR-22 waveguide microcalorimeter at NIM, China, is shown in **Figure 6**.

**Figure 6.** Fabricated WR-22 waveguide microcalorimeter at NIM, China.

#### **3. The Working Principle of Microcalorimeter**

The microcalorimeter is used to measure the effective efficiency *ηe* of a thermistor mount, which serves as the reference standard for power measurements. The effective efficiency *ηe* of a bolometer unit (e.g., a thermistor mount) is defined as the ratio of the change in the DC-substituted power *Psub* to the total microwave power *Prf* dissipated within the bolometer unit, as specified in ref. [11]. That is,

$$
\eta_e = \frac{P_{sub}}{P_{rf}}.\tag{1}
$$

It is noted that, in practice, the effective efficiency *ηe* of a thermistor mount is determined using the DC substitution technique with a microcalorimeter, in conjunction with a self-balancing bridge circuit.

#### **3.1. DC Substitution Technique**

The DC substitution technique is implemented by automatically reducing the DC bias power to keep the operating resistance of the thermistor constant when microwave power is applied to the thermistor mount. Ideally, if the applied microwave power is totally absorbed by the thermistor element and the thermistor has the same thermal reaction to DC and microwave power, then *Psub* = *Prf* and thereby *ηe* = 1.

However, in practice there are losses in the input transmission line, the mount structure, and elsewhere. For example, as shown in **Figure 7**, some unwanted power may be consumed in the wall of the thermistor mount (*Pw*) and within the thermal isolation waveguide (*Pi*), besides the net power *Prft* dissipated in the thermistor bead. In practice the net power *Prft* is very difficult or impossible to measure directly and is represented by the compensated DC power (i.e., the DC-substituted power *Psub*). These effects can result in a measurement error, which is normally characterized as the mount efficiency. Moreover, the thermistor bead has a different thermal reaction and power distribution for DC and microwave powers, which can cause a microwave-to-DC (or RF-to-DC) substitution error. Both the mount efficiency and the microwave-to-DC substitution error shall be considered in the correction factor *g* of a microcalorimeter for accurate effective efficiency determination.

**Figure 7.** Main power absorptions when calibrating a thermistor mount using a microcalorimeter.

The DC substitution requires a self-balancing bridge circuit to work with the thermistor mount to keep its operating resistance *RT* constant when the microwave power is applied. **Figure 8** shows a typical self-balancing bridge circuit for maintaining *RT*. The initial resistance *RT* of a thermistor is normally 200 Ω (or 100 Ω for different types of thermistor mounts) with a DC bias. When microwave power is fed to the thermistor, *RT* will change due to the temperature rise of the thermistor, and the DC bias power has to be reduced to balance the bridge circuit. It is noted that the reduction in DC bias power is proportional to the microwave power *Prft* applied to the thermistor bead.

**Figure 8.** An example of a self-balancing bridge circuit for monitoring the resistance change in a thermistor mount.

#### **3.2. Operation of a Microcalorimeter**

A Type IV power meter has been fabricated at NIM, China, for realizing the DC substitution technique with a self-balancing bridge circuit inside. It works with the thermistor mount in a closed loop to keep *RT* constant. The internal circuit of the NIM-manufactured Type IV power meter for calibrating the thermistor mount is shown in **Figure 9**.

**Figure 9.** Internal circuit diagram of the NIM-manufactured Type IV power meter for measuring the thermistor mount.

**Figure 10.** Schematic diagram for microcalorimeter measurements inside a water bath.

**Figure 10** presents a complete operational setup for thermistor mount measurements and calibrations using a microcalorimeter at NIM, China. The microcalorimeter is sealed within a watertight housing and placed inside a water bath. The water bath has very good thermal stability, with a temperature fluctuation of less than 1 mK. During the measurements, the signal source, digital voltmeter (DVM), nanovoltmeter (NVM), DC reference (DC Ref), and Type IV power meter are controlled by a computer for automation.

The measurement system is used to determine the DC bias voltages (*V*1 and *V*2) and thermopile outputs (*e*1 and *e*2) when the applied microwave power is off/on and the system reaches the thermal equilibrium. A typical output curve from the thermopile is also shown in **Figure 10** as a reference when the applied microwave power is off/on. With *V*1, *V*2, *e*1, *e*2, and correction factor *g* of a microcalorimeter, the effective efficiency *ηe* can be determined.

#### **3.3. Measurement and Calibration Model**

From the definition, the effective efficiency *ηe* of the thermistor mount is

$$
\eta_e = \frac{P_{sub}}{P_{rf}} = \frac{P_{sub}}{P_{sub} + P_{dw}}.\tag{2}
$$

Here, the total microwave power *Prf* dissipated within the thermistor mount includes the DC-substituted power *Psub* on the thermistor bead and the total loss *Pdw* (including the loss *Pw* on the wall of the thermistor mount and some portion of unsubstituted power). The DC-substituted power *Psub* can be estimated from the DC bias voltages *V*1 and *V*2, as

$$P_{sub} = \frac{V_1^2 - V_2^2}{R}.\tag{3}$$
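As a quick numeric sketch of Eq. (3) (the voltages below are made-up illustrative values, not measurement data from the chapter):

```python
def substituted_power(V1, V2, R=200.0):
    """DC-substituted power per Eq. (3): the drop in DC bias power needed
    to re-balance the bridge once microwave power is applied.
    V1: DC bias voltage with RF off; V2: with RF on; R: operating resistance."""
    return (V1**2 - V2**2) / R

# Hypothetical readings: roughly 1 mW of substituted power at 200 ohm.
P_sub = substituted_power(1.0, 0.894, 200.0)
print(round(P_sub * 1000, 3))  # substituted power in mW
```
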

The loss *Pdw* contributes to the thermopile output change (Δ*e* = *e*2 − *e*1) due to the temperature rise. The thermopile output change Δ*e* has the following relationship:

$$
\Delta e \propto \left( P_{dw} + c P_i \right).\tag{4}
$$

The coefficient *c* is the weighted thermal factor accounting for the effect of the loss *Pi* within the thermal isolation waveguide on the thermopile output. If the loss is uniformly distributed along the axial direction of the thermal isolation waveguide (the typical case), *c* = 0.5.
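One way to see the value *c* = 0.5 (a sketch under the assumption, not stated explicitly in the chapter, that the thermopile weights heat dissipated at normalized axial position *x* along the TIS linearly, from 0 at the reference end to 1 at the DUT end): a uniformly distributed loss then contributes the average weight

$$
c = \int_0^1 x \, dx = \frac{1}{2}.
$$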

Taking all the contributions from heat dissipated at different locations of a microcalorimeter (such as the mount wall and the TIS) into its correction factor *g*, the effective efficiency *ηe* of the thermistor mount at each frequency of interest can be calculated using the following recommendation [14]:

$$\eta_e = g\,\frac{1 - \left(\frac{V_2}{V_1}\right)^2}{\frac{e_2}{e_1} - \left(\frac{V_2}{V_1}\right)^2} = g\,\eta_{e,unc}.\tag{5}$$

Here, *ηe,unc* is the uncorrected effective efficiency, and *g* is the correction factor, which is the most important characteristic of a microcalorimeter. Several techniques have been proposed in refs. [17–20] for evaluating the correction factor *g* in order to determine the effective efficiency of the reference standard (the thermistor mount) accurately. It is noted that sometimes the calibration factor *K* of the thermistor mount is of interest for applications; it can be derived from the effective efficiency *ηe* as

$$K = \eta\_e \left( 1 - \left| \Gamma \right|^2 \right). \tag{6}$$

Here, Γ is the input reflection coefficient of the thermistor mount.
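As a concrete illustration, eqs. (3), (5), and (6) can be evaluated numerically. The sketch below assumes a 200 Ω thermistor operating resistance and invented bias/thermopile readings; none of these values come from the chapter.

```python
def effective_efficiency(V1, V2, e1, e2, g, R=200.0):
    """Effective efficiency of a thermistor mount, eqs. (3) and (5).

    V1, V2 : DC bias voltages without/with microwave power applied (V)
    e1, e2 : corresponding thermopile outputs (V)
    g      : microcalorimeter correction factor
    R      : thermistor operating resistance (ohm) -- an assumed value
    """
    P_sub = (V1 ** 2 - V2 ** 2) / R          # eq. (3), shown for reference
    r = (V2 / V1) ** 2
    eta_unc = (1.0 - r) / (e2 / e1 - r)      # uncorrected effective efficiency
    return g * eta_unc                       # eq. (5)

def calibration_factor(eta_e, gamma_mag):
    """Calibration factor from effective efficiency, eq. (6)."""
    return eta_e * (1.0 - gamma_mag ** 2)

# Invented readings, purely for illustration
eta = effective_efficiency(V1=4.60, V2=4.05, e1=100e-6, e2=102.5e-6, g=1.001)
K = calibration_factor(eta, gamma_mag=0.05)
```

Because the mismatch magnitude |Γ| enters only through 1 − |Γ|², the calibration factor is always slightly below the effective efficiency.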

#### **4. Transfer Technique: Direct Comparison**

The parameter of a reference power standard such as a thermistor mount can be transferred to the DUT sensor by means of the direct comparison transfer technique, which was proposed and summarized by the National Institute of Standards and Technology (NIST) of the USA [21, 22]. **Figure 11** presents the basic idea of the direct comparison transfer for waveguide microwave power sensor calibration.


12 New Trends and Developments in Metrology


**Figure 11.** Calibration of a waveguide power sensor by means of direct comparison transfer using a coupler.

The system consists of a microwave synthesizer and a three-port directional coupler, which is used to minimize the source mismatch [23]. As shown in **Figure 11**, a monitoring power sensor with a meter is connected to Port 3 of the coupler. The effective efficiency *ηDUT* and the calibration factor *KDUT* of a DUT sensor are measured by alternately connecting a reference power standard (e.g., a thermistor mount with effective efficiency *ηSTD* and calibration factor *KSTD*) and the DUT to Port 2 of the coupler. For the setup shown in **Figure 11**, the connectors of the DUT and the reference standard are the same. It is noted that for coaxial applications, the coupler is replaced with a three-port power splitter.

The calibration factor *KDUT* of the DUT sensor can be determined through

$$K\_{DUT} = K\_{STD} \times \frac{P\_{DUT}}{P\_{STD}} \times \frac{P\_{3STD}}{P\_{3DUT}} \times \frac{\left| 1 - \Gamma\_{DUT} \Gamma\_{EG} \right|^2}{\left| 1 - \Gamma\_{STD} \Gamma\_{EG} \right|^2}. \tag{7}$$

Here, *PDUT* and *P*3*DUT* are the powers measured at Port 2 using the DUT sensor and at Port 3 using a monitoring power sensor, respectively. *PSTD* and *P*3*STD* are the corresponding powers measured at Port 2 using the reference standard (e.g., a thermistor mount) and at Port 3 using the same monitoring power sensor. Γ*STD* is the input reflection coefficient of the reference standard, and Γ*DUT* is the input reflection coefficient of the DUT. Γ*EG* is the equivalent source match term of Port 2 [8] and is equal to

$$
\Gamma\_{EG} = S\_{22} - \frac{S\_{21} S\_{32}}{S\_{31}},
\tag{8}
$$

where *Sij* (*i*, *j* = 1, 2, or 3) are the scattering parameters (S-parameter) of the directional coupler. A more detailed description of eq. (7) can be obtained in refs. [24, 25], and the derivation of eq. (8) can be found in ref. [6].
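The direct comparison transfer of eqs. (7) and (8) amounts to a few complex-arithmetic steps, sketched below with invented S-parameters and reflection coefficients (the chapter gives no numerical example):

```python
def equivalent_source_match(S21, S22, S31, S32):
    """Equivalent source match of Port 2, eq. (8): G_EG = S22 - S21*S32/S31."""
    return S22 - S21 * S32 / S31

def k_dut_direct(K_std, P_dut, P_std, P3_std, P3_dut, G_dut, G_std, G_eg):
    """Calibration factor of the DUT sensor by direct comparison, eq. (7)."""
    mismatch = abs(1 - G_dut * G_eg) ** 2 / abs(1 - G_std * G_eg) ** 2
    return K_std * (P_dut / P_std) * (P3_std / P3_dut) * mismatch

# Invented coupler data: through path S21, coupling S31, leakage S32
G_eg = equivalent_source_match(S21=0.95, S22=0.05, S31=0.10, S32=0.001)
K_dut = k_dut_direct(K_std=0.95, P_dut=0.98e-3, P_std=1.00e-3,
                     P3_std=1.00e-3, P3_dut=1.00e-3,
                     G_dut=0.03, G_std=0.02, G_eg=G_eg)
```

In practice the S-parameters and reflection coefficients are complex; Python's built-in complex type and `abs()` handle that case without any change to the code.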

However, sometimes a DUT sensor has a connector that does not match that of the reference standard, and then an adaptor has to be used. Measurement models with an adaptor at the DUT/reference standard have been proposed in refs. [25–27] and are briefly introduced below.

#### **4.1. Calibration Scenario with an Adaptor before Reference Standard**

This is the application scenario in which an adaptor is connected between the reference standard and Port 2 of the coupler, while the DUT sensor is alternately connected to Port 2 directly. The calibration factor *KDUT* of the DUT sensor can be calculated with

$$K\_{DUT} = K\_{STD} \times \frac{P\_{DUT}}{P\_{STD}} \times \frac{P\_{3STD}}{P\_{3DUT}} \times \left| \frac{S\_{21A} \left(1 - \Gamma\_{DUT} \Gamma\_{EG}\right)}{1 - \Gamma\_{STD} S\_{22A} - \Gamma\_{EG} \Gamma\_{A-STD}} \right|^2. \tag{9}$$

Here, Γ*A−STD* = *S*11*A* + Γ*STD* *S*21*A* *S*12*A* − Γ*STD* *S*22*A* *S*11*A*, where *SlmA* (*l*, *m* = 1 or 2) are the S-parameters of adaptor A. **Figure 12** presents a typical measurement setup in which a coaxial-to-waveguide adaptor is used before a waveguide reference standard (a thermistor mount, as shown in **Figure 12(a)**).

**Figure 12.** Calibration of a DUT sensor by means of direct comparison transfer with an adaptor before the reference standard.

#### **4.2. Calibration Scenario with an Adaptor before DUT Sensor**

This is the application scenario in which an adaptor is connected between the DUT sensor and Port 2 of the coupler, while the reference standard is alternately connected to Port 2 directly. The calibration factor *KDUT* of the DUT sensor can be calculated with

$$K\_{DUT} = K\_{STD} \times \frac{P\_{DUT}}{P\_{STD}} \times \frac{P\_{3\,\text{STD}}}{P\_{3\,\text{DUT}}} \times \left| \frac{1 - \Gamma\_{DUT} S\_{22A} - \Gamma\_{EG} \Gamma\_{A-DUT}}{S\_{21A} \left(1 - \Gamma\_{STD} \Gamma\_{EG} \right)} \right|^2. \tag{10}$$

Here, Γ*A−DUT* = *S*11*A* + Γ*DUT* *S*21*A* *S*12*A* − Γ*DUT* *S*22*A* *S*11*A*. The mathematical model [eq. (10)] was first derived using signal flow graphs together with the non-touching loop rule in ref. [25]. It was later investigated in ref. [27] through an analysis of the physical measurement processes, and a consistent mathematical model was obtained.
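Both adaptor scenarios share the term Γ*A−X* = *S*11*A* + Γ*X**S*21*A**S*12*A* − Γ*X**S*22*A**S*11*A*. The sketch below (illustrative function names, no values from the text) implements eqs. (9) and (10) side by side:

```python
def gamma_through_adaptor(S_A, G_x):
    """G_A-X = S11A + Gx*S21A*S12A - Gx*S22A*S11A for a two-port adaptor."""
    S11, S12, S21, S22 = S_A
    return S11 + G_x * S21 * S12 - G_x * S22 * S11

def k_dut_adaptor_before_std(K_std, p_ratio, p3_ratio, S_A, G_dut, G_std, G_eg):
    """Eq. (9): adaptor between the reference standard and Port 2.
    p_ratio = P_DUT/P_STD, p3_ratio = P_3STD/P_3DUT."""
    S11, S12, S21, S22 = S_A
    num = S21 * (1 - G_dut * G_eg)
    den = 1 - G_std * S22 - G_eg * gamma_through_adaptor(S_A, G_std)
    return K_std * p_ratio * p3_ratio * abs(num / den) ** 2

def k_dut_adaptor_before_dut(K_std, p_ratio, p3_ratio, S_A, G_dut, G_std, G_eg):
    """Eq. (10): adaptor between the DUT sensor and Port 2."""
    S11, S12, S21, S22 = S_A
    num = 1 - G_dut * S22 - G_eg * gamma_through_adaptor(S_A, G_dut)
    den = S21 * (1 - G_std * G_eg)
    return K_std * p_ratio * p3_ratio * abs(num / den) ** 2

# Sanity check: an ideal (lossless, matched) adaptor reduces both to eq. (7)
ideal = (0.0, 1.0, 1.0, 0.0)        # (S11, S12, S21, S22)
k9 = k_dut_adaptor_before_std(0.95, 1.0, 1.0, ideal, 0.0, 0.0, 0.0)
k10 = k_dut_adaptor_before_dut(0.95, 1.0, 1.0, ideal, 0.0, 0.0, 0.0)
```

Note that eq. (10) is the mirror image of eq. (9): the adaptor ratio simply moves from the numerator to the denominator when the adaptor follows the DUT instead of the reference standard.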

The proposed model was successfully used in ref. [27] to calibrate a high-sensitivity (low power range) power sensor with an attenuator (which can be treated as a two-port adaptor with high loss) between the DUT and Port 2 of the coupler. Good agreement with the manufacturer's data was achieved.

#### **5. Performance Evaluation and Uncertainty Estimation**

In this section, the evaluation of measurement uncertainty is briefly introduced following the *Guide to the Expression of Uncertainty in Measurement* (GUM) [28]. The GUM method is accepted and used in most current routine calibration work at NMIs and standards laboratories. This is followed by an example of calibrating waveguide thermistor mounts, with uncertainty evaluation, in an international comparison.

#### **5.1. Estimation of Measurement Uncertainty**

To evaluate the measurement uncertainty, the GUM shall be followed. According to the GUM, there are two methods to evaluate the standard uncertainty *u*(*xi*) associated with a physical quantity *xi* in the measurements, namely, *Type A Evaluation* and *Type B Evaluation*.

*Type A Evaluation* is a method of evaluating the standard uncertainty through the statistical analysis of a series of observations. It is normally referred to as "repeatable" measurement uncertainty. *Type B Evaluation* is a method of evaluating the standard uncertainty from other information including previous measurement data, specifications from manufacturers, data provided in calibration and other certificates, and uncertainties assigned to reference data taken from handbooks.

For evaluating the uncertainty of a measurand *y* from the standard uncertainty information of other physical quantities (*x*1, *x*2,…, *xN*) with *y* = *f*(*x*1, *x*2,…, *xN*), the combined standard uncertainty *uc*(*y*) associated with *y* is adopted. According to the *Law of Propagation of Uncertainty* (LPU) in the GUM [28], *uc*(*y*) can be estimated from the standard uncertainties of *x*1, *x*2,…, *xN*, as

$$u\_c(y) = \sqrt{\sum\_{i=1}^{N} \left[\frac{\partial f}{\partial x\_i}\right]^2 u^2\left(x\_i\right) + 2\sum\_{i=1}^{N-1} \sum\_{j=i+1}^{N} \frac{\partial f}{\partial x\_i} \frac{\partial f}{\partial x\_j} u\left(x\_i, x\_j\right)},\tag{11}$$

where *u*(*xi*, *xj*) is the covariance between *xi* and *xj*.

The expanded uncertainty *U*, which defines an interval about the result of a measurement within which the measurand may be expected to lie, can be estimated through *U* = *kuc*. Here, *k* is the coverage factor, equal to 2 for a one-dimensional physical quantity at a level of confidence of approximately 95% assuming a Gaussian distribution. However, for a complex-valued physical quantity (e.g., an S-parameter), the coverage factor *k* for 95% coverage probability is around 2.45 [29, 30].
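Eq. (11) can be applied numerically whenever the model function *f* is available in code; the sensitivity coefficients are then obtained by finite differences. The helper below is a generic sketch (the function name and interface are mine, not from the GUM), with covariances optional:

```python
import math

def combined_uncertainty(f, x, u, cov=None, h=1e-6):
    """Combined standard uncertainty u_c(y) for y = f(x1, ..., xN), eq. (11).

    f   : callable taking a list of N floats
    x   : best estimates [x1, ..., xN]
    u   : standard uncertainties [u(x1), ..., u(xN)]
    cov : optional dict {(i, j): u(xi, xj)} of covariances (zero if omitted)
    """
    def deriv(i):
        xp, xm = list(x), list(x)
        step = h * max(1.0, abs(x[i]))
        xp[i] += step
        xm[i] -= step
        return (f(xp) - f(xm)) / (2 * step)   # central finite difference

    c = [deriv(i) for i in range(len(x))]      # sensitivity coefficients
    var = sum((c[i] * u[i]) ** 2 for i in range(len(x)))
    for (i, j), uij in (cov or {}).items():
        var += 2 * c[i] * c[j] * uij           # correlated contributions
    return math.sqrt(var)

# Example: y = x1 * x2, then U = k * u_c with k = 2 (~95 %, Gaussian)
uc = combined_uncertainty(lambda v: v[0] * v[1], [10.0, 2.0], [0.1, 0.05])
U = 2 * uc
```

For uncorrelated inputs the double sum vanishes and the familiar root-sum-of-squares form remains, as in the example above.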

#### **5.2. Performance Evaluation in an International Comparison**

**CCEM.RF-K25.W Key Comparison**: The precision measurement capabilities of the NIM-fabricated WR-22 microcalorimeter have been validated and demonstrated in a key comparison (CCEM.RF-K25.W) on high-frequency power in the frequency range 33–50 GHz. The comparison is an exercise to establish the metrological equivalence of the signatory NMIs' standards, as stated in the Mutual Recognition Arrangement (MRA) of the Bureau International des Poids et Mesures (BIPM).

In the CCEM.RF-K25.W comparison, the effective efficiency and calibration factor of the travelling standards (Hughes Model 45772H-1100), shown in **Figure 13**, were compared. The effective efficiency of the travelling standard at each frequency of interest was determined by measuring the heating of the mount in the microcalorimeter during the DC substitution. As introduced previously, a Type IV power meter was used as the bolometer bridge. The correction factor *g* of NIM's microcalorimeter was characterized through measurements in which a foil short was inserted between the test port and the DUT in the microcalorimeter [16].

**Figure 13.** Participation in the international key comparison, CCEM.RF-K25.W.

**Figure 14.** Calibration factor for the travelling standard Hughes Model 45772H-1100 (SN 216) at 33.0 GHz [31].

**Figure 14** presents the measured calibration factor *K* for the travelling standard Hughes Model 45772H-1100 (SN 216) at 33.0 GHz. From the results shown in **Figure 14**, it can be observed that NIM's microcalorimeter has a good measurement capability, and very good equivalence with the results reported by other NMIs has been achieved. An example of the NIM uncertainty budget at 33.0 GHz is shown in **Table 1** as a reference.


**Table 1.** An example of NIM uncertainty budget at 33.0 GHz [31].

#### **6. Summary**

In this chapter, we mainly focused on microwave power measurements and standards. Primary power standards (e.g., the microcalorimeter) and reference standards (e.g., thermistor mounts) have been discussed. Recent developments of the waveguide microcalorimeter at NIM, China, and further applications of the direct comparison transfer technique at NMC, A\*STAR of Singapore, have been reported. This was followed by an introduction to the uncertainty evaluation for calibrating a WR-22 waveguide thermistor power sensor during an international comparison.

Furthermore, we have attempted to calibrate a WR-15 (50–75 GHz) waveguide thermistor mount using the direct comparison transfer technique [32]. Preliminary results show good performance, and further improvements are planned for the near future.

#### **Acknowledgements**

This work was supported in part by the National Science and Technology Supporting Program "Wireless Communication Power Measurement Standard and Traceability Technology Research" of China under Grant No. 2014BAK02B02 and the Agency for Science, Technology and Research (A\*STAR) of Singapore under Grant No. 0920170078.

#### **Author details**

Xiaohai Cui1\*, Yu Song Meng2\*, Yueyan Shan2\* and Yong Li1

\*Address all correspondence to: cuixh@nim.ac.cn, meng\_yusong@nmc.a-star.edu.sg, and shan\_yueyan@nmc.a-star.edu.sg

1 National Institute of Metrology, Beijing, China

2 National Metrology Centre, Agency for Science, Technology and Research (A\*STAR), Singapore

#### **References**


[1] Panagopoulos AD, Arapoglou PM, Cottis PG. Satellite Communications at Ku, Ka, and V Bands: Propagation Impairments and Mitigation Techniques. IEEE Communications Surveys & Tutorials 2004; 6(3) 2–14.

[2] Lee YH, Natarajan V, Meng YS, Yeo JX, Ong JT. Cloud attenuation on Ka-band satellite link in the tropical region: Preliminary results and analysis. In: 2014 IEEE Antennas and Propagation Society International Symposium, 6–11 July 2014, Memphis, TN, USA.

[3] Dehos C, González JL, De Domenico A, Kténas D, Dussopt L. Millimeter-Wave Access and Backhauling: The Solution to the Exponential Data Traffic Increase in 5G Mobile Communications Systems? IEEE Communications Magazine 2014; 52(9) 88–95.

[4] ISO/IEC 17025:2005. General Requirements for the Competence of Testing and Calibration Laboratories. Geneva, Switzerland; 2005.

[5] Estin AJ, Juroshek JR, Marks RB, Clague FR, Allen JW. Basic RF and Microwave Measurements: A Review of Selected Programs. Metrologia 1992; 29(2) 125–151.

[6] Fantom A. Radio Frequency and Microwave Power Measurement. London, UK: Peter Peregrinus Ltd.; 1990.


[20] Judaschke R, Ruhaak J. Determination of the Correction Factor of Waveguide Microcalorimeters in the Millimeter-Wave Range. IEEE Transactions on Instrumentation and Measurement 2009; 58(4) 1104–1108.

[21] Weidman MP. Direct Comparison Transfer of Microwave Power Sensor Calibration. NIST Technical Note 1379; 1996.

[22] Ginley R. A Direct Comparison System for Measuring Radio Frequency Power (100 kHz to 18 GHz). Measure 2006; 1(4) 46–49.

[23] Engen GF. Amplitude Stabilization of a Microwave Signal Source. IRE Transactions on Microwave Theory and Techniques 1958; 6(2) 202–206.

[24] Shan Y, Cui X. RF and microwave power sensor calibration by direct comparison transfer. In: Cocco L. (ed.) Modern Metrology Concerns. Rijeka: InTech; 2012. P175–200.

[25] Shan Y, Meng YS, Lin Z. Generic Model and Case Studies of Microwave Power Sensor Calibration Using Direct Comparison Transfer. IEEE Transactions on Instrumentation and Measurement 2013; 62(6) 1834–1839.

[26] Kang TW, Kim JH, Kwon JY, Park JI, Lee DJ. Direct comparison technique using a transfer power standard with an adapter and its uncertainty. In: Conference on Precision Electromagnetic Measurements Digest, CPEM2012, 1–6 July 2012, Washington, DC, USA.

[27] Meng YS, Shan Y. Measurement and Calibration of a High-Sensitivity Microwave Power Sensor with an Attenuator. Radioengineering 2014; 23(4) 1055–1060.

[28] BIPM, IEC, IFCC, ILAC, ISO, et al. Evaluation of measurement data – guide to the expression of uncertainty in measurement. In: JCGM 100:2008 (GUM 1995 with Minor Corrections). Joint Committee for Guides in Metrology; 2008.

[29] Ridler NM, Salter MJ. An Approach to the Treatment of Uncertainty in Complex S-Parameter Measurements. Metrologia 2002; 39(3) 295–302.

[30] Meng YS, Shan Y. Measurement Uncertainty of Complex-Valued Microwave Quantities. Progress In Electromagnetics Research 2013; 136 421–433.

[31] Judaschke R. CCEM.RF-K25.W — RF power from 33 GHz to 50 GHz in waveguide. Final Report (Draft B); 2014.

[32] Cui X, Meng YS, Shan Y, Yuan W, Ma C, Li Y. Evaluation and validation of a national WR-15 (50 to 75 GHz) power measurement system. In: Digest of the 84th ARFTG Microwave Measurement Conference, 2–5 December 2014, Boulder, Colorado.

#### **Chapter 2**

### **THz Measurement Systems**

Leopoldo Angrisani, Giovanni Cavallo, Annalisa Liccardo, Gian Paolo Papari and Antonello Andreone

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/63734

#### **Abstract**

The terahertz (THz) frequency region is often defined as the last unexplored area of the electromagnetic spectrum. Over the past few years, full access to it has been the objective of intense research efforts. Progress in this area has played an important role in opening up the possibility of using THz electromagnetic radiation (T-waves) in science and in real-world applications. T-waves are not perceptible by the human eye, are not ionizing, and have the ability to cross many non-conducting materials such as paper, fabrics, wood, plastic, and organic tissues. Moreover, the use of THz radiation allows non-destructive analysis of the materials under investigation, both by studying their "fingerprint" via spectroscopic measurements and by high-resolution spatial imaging, exploiting the see-through capability of T-waves. Such technology can be applied in diverse areas, spanning from biology to the chemical, pharmaceutical, and environmental sciences. In this chapter, we will present the typical architecture of measurement systems based on THz technology, detailing the parameters that define their performance, the measurement methods, and the related errors and uncertainty, and focusing at the end on the use of time-domain spectroscopy for the evaluation of different material properties in this specific frequency region.

**Keywords:** THz measurements, THz technology, time domain spectroscopy, imaging, metrological characterization

#### **1. Introduction to THz technology**

The terahertz (THz) spectrum refers to the frequency domain ranging approximately from 100 GHz to 10 THz, corresponding to wavelengths from 3 mm to 30 μm (T-waves). The lower limit is the microwave region, where mobile and satellite systems operate, and the upper limit is the far infrared, widely used for optical communications. **Figure 1** shows the characteristics of the T-waves in the electromagnetic (EM) spectrum.

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
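The quoted band limits follow directly from λ = *c*/*f*; a two-line numerical check (using only the limits stated above):

```python
def wavelength_m(freq_hz, c=299_792_458.0):
    """Free-space wavelength (m) for a given frequency (Hz)."""
    return c / freq_hz

lo = wavelength_m(100e9)   # lower THz limit, ~3 mm
hi = wavelength_m(10e12)   # upper THz limit, ~30 um
```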

**Figure 1.** Position of the THz waves in the electromagnetic spectrum.

The terahertz frequency region is often defined as the last unexplored area of the EM spectrum, since it represents an area of convergence between electronics and photonics that presently lacks a mature technology. Since T-waves are located between microwaves and far-infrared waves, there are two enabling technologies that can be considered for their full exploitation: electronics and photonics. Over the past 20 years, full access to and exploitation of this frequency window have been the objective of intense research efforts both in academia and in industry, in order to close the so-called THz gap. T-waves present many important properties with a potentially shattering impact both in science and in many real-world applications. First, they are not perceptible by the human eye, are not ionizing, and have the ability to cross many common non-conducting materials such as paper, fabrics, wood, plastic, and organic tissues [1, 2]. Then, in terms of energy, they give access to the rotational and vibrational modes of many molecules and macromolecules. These modes can be observed as absorption peaks in the THz spectra, providing the "fingerprint" of unknown compounds via spectroscopic measurements. Finally, the use of THz radiation allows contactless and non-destructive analysis of the materials under investigation, by spatial imaging operations [3–5] with higher resolution than microwaves and millimeter waves. THz science can be applied in many different areas of interest, from biology to physical, chemical, pharmaceutical, and environmental research, within a broad range of industries including the medical, security, cultural heritage, manufacturing, and aerospace sectors.

The objective of this chapter is to provide readers who are not familiar with this breakthrough technology with a brief review of the typical architectures of measurement systems and of the sources and detectors that are commonly used, looking also at possible applications. Successively, we will provide details on the parameters that define the performance of a THz system, the measurement methods, and the related errors and uncertainty, focusing at the end on the use of time-domain spectroscopy for the evaluation of different material properties in this specific frequency region.

#### **2. Overview of THz sources**


Since its early beginning, one of the main hurdles for the development of THz technology was a lack of solid-state signal sources, rather than detectors. Since T-waves are located between microwaves and far-infrared waves, there are two enabling technologies that can be considered for their emission: electronics and photonics.

One can roughly group THz sources into two major operational families based on the emission mode and the operating frequency: the continuous wave (CW) and the pulsed (or time domain – TD) mode [1, 6].

Most CW systems have been developed from the electronics side, in particular the microwave field. The typical way to generate THz emission, in fact, is to scale up the frequency of microwave sources by using frequency multipliers. These CW systems usually operate in the lower frequency range of the THz band, with maximum frequencies around 0.8 THz. Still, there are some systems able to emit at frequencies as high as 5 THz, for example backward-wave oscillators and quantum cascade lasers. Nevertheless, CW systems can be realized from the photonics side too, since down-conversion is possible by mixing two lasers that work at different frequencies [6].



**Figure 2.** Characteristics of the two major operational families for the THz sources: continuous wave and time domain.

These systems are capable of operating at a single frequency, and their emission is continuous or modulated up to GHz frequencies. Therefore, CW systems are intrinsically narrowband and often have limited tunability, with a high spectral resolution (∼100 MHz) that is very useful for gas-phase spectroscopy. Moreover, CW systems typically provide higher output power than pulsed sources. They can be passive or active; in the first case, the system detects the radiation emitted by the sample or body, whereas in the second case, the system illuminates the sample and detects the reflected or transmitted radiation. CW systems can be used in telecommunications and non-destructive evaluation (NDE) applications.

In pulsed or time-domain (TD) systems, the distinctive element is an optical-to-THz signal conversion technology, based on the generation and detection of an electromagnetic transient with a duration of a few picoseconds by means of ultrafast pulsed lasers [6]. The short pulse is composed of many frequencies, which can be accessed through a Fourier transform of the pulse. Contrary to what happens in CW systems, pulsed systems can only be active, are broadband in nature, and do not benefit from continuous emission, so they are ideal for studying ultrafast phenomena and for general-purpose spectroscopic applications.
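The claim that a picosecond transient contains a broadband THz spectrum is easy to verify numerically. The sketch below builds a hypothetical 0.5 ps Gaussian pulse (all parameters invented, not taken from the chapter) and inspects its Fourier transform:

```python
import numpy as np

dt = 10e-15                              # 10 fs sampling step
t = np.arange(-5e-12, 5e-12, dt)         # 10 ps observation window
pulse = np.exp(-(t / 0.5e-12) ** 2)      # ~0.5 ps Gaussian transient

spectrum = np.abs(np.fft.rfft(pulse))    # amplitude spectrum of the pulse
freqs = np.fft.rfftfreq(len(t), dt)      # frequency axis (Hz)

# Frequencies where the amplitude stays above 10 % of its peak:
band = freqs[spectrum > 0.1 * spectrum.max()]
print(f"10 % bandwidth extends to about {band.max() / 1e12:.1f} THz")
```

Shortening the transient widens the accessible band, which is why sub-picosecond lasers are the key enabling element of TD systems.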

In **Figure 2**, the major differences between CW and pulsed systems are summarized.

In the following, the most commonly used THz sources will be described, with some emphasis on those based on electro-optical conversion, since they form the base for the development of coherent systems operating in the time domain.

#### **2.1. Backward-wave oscillator (BWO)**

The backward-wave oscillator is one of the most important and successful sources based on up-conversion, allowing the frequency of microwave sources to be extended to the THz range using harmonic generation. The mechanism of a BWO is very similar to that of a travelling-wave amplifier, with the difference that it is a slow-wave structure, deliberately designed to provide feedback. In particular, a BWO resembles a sophisticated high-vacuum diode, where the cathode is heated by a low-voltage heater and emits electrons accelerated by a high-voltage field toward the anode. The electrons are collimated by a uniform external magnetic field and pass over the comb-like slow-wave structure. This mechanism produces the conditions required for the transfer of the kinetic energy of the electrons to an electromagnetic wave that builds up from noise fluctuations. The most important advantage of BWOs is their tunability; the tuning rate is ∼10 MHz/V for low-frequency devices, rising to ∼100 MHz/V for devices operating at 1 THz or above. In addition, the output power of each BWO varies quite rapidly with frequency, and the useful tuning range is approximately ±10% around the centre frequency [7].

#### **2.2. Quantum cascade laser (QCL)**

The quantum cascade laser belongs to the semiconductor-based THz sources. Over the years, the enormous progress in the field of nanotechnology is making QCL the most employed source in the CW family. In typical semiconductor lasers, light is generated by the recombi‐ nation of electrons in the conduction band with holes in the valence band, separated by the gap of the active material. In QCLs, the presence of coupled quantum wells (QWs) splits the conduction band by quantum confinement into a number of distinct sub-bands. In fact, the structure of a QLC is composed of semiconducting layered QWs from hundreds to thousands (like InGaAs/AlGaAs) [8]. Applied electric field, lifetimes, and tunneling probabilities of each level are fundamental to obtain population inversion between two sub-bands in a series of identical repeat units. The output radiation frequency is defined by the energy spacing (or QW thickness) of the lasing sub-bands. The active regions are connected with injector/collector structures allowing electrical transport through injection of carriers into the upper laser level and extraction of carriers from the lower laser level [9, 10].

#### **2.3. Stimulated terahertz amplitude radiation (STAR) emitters**

These systems are capable to operate at a single frequency, and their emission is continuous or modulated up to GHz frequencies. Therefore, CW systems are intrinsically narrowband and have often a limited tunability with a high spectral resolution (∼100 MHz) very useful for gasphase spectroscopy. Moreover, CW systems typically provide output power higher than pulsed sources. They can be passive or active; in the first case, the system detects the radiation emitted by the sample or body, whereas in the second case, the system illuminates the sample and detects the reflected or transmitted radiation. CW systems can be used in telecommuni‐


24 New Trends and Developments in Metrology


Stimulated terahertz amplified radiation (STAR) emitters are compact, technologically simple, all-solid-state emitters based on superconducting devices. Coherent electromagnetic waves are generated at THz frequencies because of the Josephson effect between one or more superconductor–insulator–superconductor (SIS) junctions. Biased by a DC voltage V, a Josephson junction is essentially a two-level system with an energy difference of 2eV. As Cooper pairs tunnel, EM waves are emitted from the junction. The radiation from a single junction, however, is only about 1 pW, and the frequency is below the THz range, limited in conventional low-Tc superconductors by the small superconducting energy gap. The radiation power can be enhanced by fabricating an artificial array of Josephson junctions. Nevertheless, the crucial aspect is the coherent emission of EM waves, which requires synchronized oscillations of the individual elements. The synchronization is easier to achieve for the so-called intrinsic Josephson junctions (IJJs) that are densely packed inside high-quality single crystals of a high-transition-temperature (Tc) superconductor, usually Bi2Sr2CaCu2O8+δ (BSCCO). IJJs are formed naturally in BSCCO, where the Bi-Sr-O layers between the superconducting CuO2 layers act as a non-conducting barrier of nanoscale thickness [11]. The main feature rendering STAR emitters so attractive is the nature of the emitted radiation, which is fairly robust, reasonably intense (∼μW), and characterized by high spectral purity. Furthermore, the line width is so sharp that its observation is limited by the resolution of conventional spectrometers (0.25 cm−1). Moreover, the frequency of the THz radiation can be tuned considerably, approximately up to 10–15% of the central frequency, by varying the bias voltage applied to the stack of N IJJs, even if the tunable frequency range strongly differs from sample to sample, depending on the preparation conditions [12].
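The emission frequency follows directly from the AC Josephson relation: a junction biased at voltage V radiates at f = 2eV/h, consistent with the 2eV level splitting described above. A minimal numerical sketch (the bias value is an illustrative assumption):

```python
# AC Josephson relation: a junction biased at voltage V emits at f = 2eV/h.
E_CHARGE = 1.602176634e-19  # electron charge [C]
H_PLANCK = 6.62607015e-34   # Planck constant [J*s]

def josephson_frequency_thz(bias_mv):
    """Emission frequency in THz for a bias voltage given in millivolts."""
    return 2 * E_CHARGE * (bias_mv * 1e-3) / H_PLANCK / 1e12

# An illustrative 2 mV bias puts the emission near 1 THz.
print(round(josephson_frequency_thz(2.0), 2))  # → 0.97
```

Per millivolt of bias this amounts to roughly 0.48 THz, which is why a few millivolts across each junction suffice to reach the THz range.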

#### **2.4. Photoconductive antenna (PCA)**

A photoconductive antenna is the most commonly used source in THz-TDS systems. It consists of a semiconductor substrate on which two metallic electrodes are deposited, separated by a gap of a few microns. The substrate is typically a direct III-V semiconductor such as GaAs or low-temperature-grown GaAs; sometimes doped silicon is adopted. In a PCA, photocarriers are produced by the laser pulse and then accelerated by a bias field applied across the gap. In order to generate photo-induced free carriers, the photons of the laser pulse must have an energy higher than the bandgap of the semiconductor. If the laser is focused at the gap between the electrodes, the photo-induced carriers are accelerated by the bias field across the gap, which generates a current. The amplitude of the current is a function of time, and owing to the pulsed nature of the laser beam, the time derivative of the current generates the THz pulse [13, 14]:

$$E_{\mathrm{THz}} = \frac{Ae}{4\pi\varepsilon_0 c^2 z} \frac{\partial N(t)}{\partial t}\, \mu E_b \tag{1}$$

where *A* is the area under beam illumination, *e* is the electron charge, *ε*<sub>0</sub> is the vacuum permittivity, *c* is the light velocity, *z* is the pulse penetration inside the semiconductor, *N* is the photocarrier density, *μ* is the carrier mobility, and *E*<sub>b</sub> is the bias field. The main contribution to the photocurrent comes from the electrons, since their mobility is often higher than that of holes. In **Figure 3**, the photoconducting antenna is excited by a fs laser pulse. The dipole structure is biased at a voltage V<sub>bias</sub> to increase the THz signal emitted from the device.

**Figure 3.** Sketch of the THz emission mechanism induced in a photoconductive antenna by a fs laser pulse.

The performance of a PCA can be appreciated by considering some fundamental aspects: the semiconductor bandgap, the carrier lifetime and mobility, the antenna gap, and the bias field. Finally, PCA structures can be resonant or non-resonant. The former generate THz radiation around a central frequency, which depends on the gap distance, while the latter have variable gap distances and provide broader-band THz emission [13, 15].
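Equation (1) can be exercised numerically to see how the radiated field scales with the operating parameters; all values below are illustrative assumptions, not measured data:

```python
import math

E_CHARGE = 1.602e-19   # electron charge [C]
EPS0 = 8.854e-12       # vacuum permittivity [F/m]
C_LIGHT = 2.998e8      # speed of light [m/s]

def e_thz(area, dN_dt, mobility, e_bias, z):
    """Far-field THz amplitude from Eq. (1):
    E_THz = (A e / 4 pi eps0 c^2 z) * (dN/dt) * mu * E_b."""
    return (area * E_CHARGE) / (4 * math.pi * EPS0 * C_LIGHT**2 * z) \
        * dN_dt * mobility * e_bias

# Assumed illustrative values: 100 um^2 illuminated area, carrier generation
# rate ~1e27 m^-3 s^-1, GaAs-like mobility 0.85 m^2/(V*s), 4 MV/m bias
# field, ~1 um optical penetration depth.
field = e_thz(area=100e-12, dN_dt=1e27, mobility=0.85, e_bias=4e6, z=1e-6)

# Doubling the bias field doubles the radiated amplitude: Eq. (1) is linear
# in E_b, which is why PCA emitters are driven at the highest bias the gap
# can sustain without breakdown.
```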

#### **2.5. Electro-optical conversion (EOC)**


Another type of THz source is based on electro-optical rectification. This is a nonlinear optical effect in which THz waves are generated as a result of a difference-frequency process between the frequency components contained in the femtosecond laser pulse, occurring in materials with a non-vanishing higher-order susceptibility.

Mathematically, the polarization induced by the electric field associated with the optical pulse can be expressed in power series [16]:

$$
\overline{P}(\overline{r},t) = \underline{\chi}^{(1)}\overline{E}(\overline{r},t) + \underline{\chi}^{(2)} : \overline{E}(\overline{r},t)\overline{E}(\overline{r},t) + \underline{\chi}^{(3)} : \overline{E}(\overline{r},t)\overline{E}(\overline{r},t)\overline{E}(\overline{r},t) + \dots \tag{2}
$$

EO rectification comes from the second term in the previous equation. If the incident light is a plane wave, the polarization induced by the second-order susceptibility can be expressed as:

$$\overline{P}^{(2)}(t) = 2\,\underline{\chi}^{(2)} : \int_{0}^{\infty}\!\!\int_{0}^{\infty} \overline{E}(\omega+\Omega)\,\overline{E}^{*}(\omega)\, e^{-j\Omega t}\, d\Omega\, d\omega \tag{3}$$

Here, Ω is the difference between two frequency components of the laser pulse, whereas *χ*<sup>(2)</sup> is the second-order susceptibility, which depends on the crystalline structure of the material. The radiated electric field due to the EO-induced polarization can be expressed as follows [17, 18]:

$$\overline{E}_{\mathrm{THz}} \propto \frac{\partial J(t)}{\partial t} = \frac{\partial^2 P(t)}{\partial t^2} = \chi^{(2)} \frac{\partial^2 E_{\mathrm{laser}}^2(t)}{\partial t^2} \tag{4}$$

It is worth emphasizing that in EO rectification, no bias is necessary to realize THz generation (**Figure 4**). For a given material, the radiation efficiency and bandwidth are affected by factors such as thickness, laser pulse duration, absorption and dispersion, crystal orientation, and phase-matching conditions [19].

**Figure 4.** Sketch of the electro-optical mechanism.
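A minimal numerical sketch of this mechanism, assuming a Gaussian intensity envelope for the pump pulse: per Eq. (4), the radiated transient follows the second time derivative of the envelope, producing the characteristic single-cycle THz waveform.

```python
import numpy as np

# Optical rectification sketch: the THz transient follows the second time
# derivative of the pulse intensity envelope. Pulse parameters are assumed.
t = np.linspace(-1e-12, 1e-12, 4001)        # time axis [s]
tau = 100e-15                               # assumed ~100 fs pulse duration
envelope_sq = np.exp(-(t / tau) ** 2)       # |E_laser(t)|^2, Gaussian

e_thz = np.gradient(np.gradient(envelope_sq, t), t)  # ~ chi(2) d^2(E^2)/dt^2
e_thz /= np.abs(e_thz).max()                # normalize

# The result is a single-cycle waveform: a central lobe at t = 0 flanked by
# two lobes of opposite sign, as typically recorded in THz-TDS traces.
```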

#### **2.6. Photomixing for the generation of THz radiation**

The term "photomixing" describes the generation of T-waves as a difference frequency in a nonlinear element. In the case of the THz region, it is necessary to use two IR or two visible laser photons whose frequency difference lies in this range. Basically, a THz photomixer consists of two independently tunable sources (usually, solid-state diode lasers) illuminating a photoconductive antenna (PCA) and yielding the desired difference frequency by heterodyning [20], as shown in **Figure 5**.

**Figure 5.** Operation of a THz photomixer.

The device thus serves as a coherent terahertz wave emitter. By changing the temperature or the operating current of the laser diodes, the beat frequency can be straightforwardly adjusted.

The bandwidth of the photomixer depends on the spectral width of the employed diodes; in particular, in order to enlarge the bandwidth or to shift it to higher frequencies, diodes with different central frequencies are necessary [21].
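A quick numeric check of the principle: the generated T-wave sits at the difference of the two optical frequencies. The wavelengths below are assumed, illustrative values.

```python
C_LIGHT = 2.998e8  # speed of light [m/s]

def beat_frequency_thz(lambda1_nm, lambda2_nm):
    """Difference frequency (in THz) of two lasers, given their wavelengths."""
    f1 = C_LIGHT / (lambda1_nm * 1e-9)
    f2 = C_LIGHT / (lambda2_nm * 1e-9)
    return abs(f1 - f2) / 1e12

# Two diode lasers only 2 nm apart near 850 nm already beat in the THz range.
print(round(beat_frequency_thz(850.0, 852.0), 2))  # → 0.83
```

This illustrates why modest detuning of ordinary near-IR diode lasers is sufficient to span the low-THz band.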

**Table 1** summarizes the major strengths and weaknesses for the operation of the sources reported above in the THz region.


**Table 1.** List of pros and cons for different THz sources.

#### **3. Overview of THz detectors**


In general, detectors are transducers converting an input signal into some convenient form that can be observed, recorded, and analyzed. In the case of THz technology, the signal is an electromagnetic wave whose amplitude and phase both hold important information. According to the nature of the EM wave, detectors can be grouped into two classes: incoherent or direct detectors, which detect the amplitude only, and coherent detectors, which detect both amplitude and phase.

The performance of a THz detector depends on a number of parameters, some of which are correlated with one another. The most important are as follows: bandwidth (the spectral range over which the detector responds), responsivity (the input–output gain of the detector system), noise characteristics (characterized by the noise equivalent power, NEP, that is, the signal power required to yield a signal-to-noise ratio of unity at the output of the detector in a 1 Hz bandwidth), dynamic range, response time, and sensitive area.
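The NEP definition translates directly into a minimum detectable power for a given post-detection bandwidth. A small sketch (the bandwidth and SNR target are assumed values; 100 pW/√Hz is a typical pyroelectric NEP):

```python
import math

def min_detectable_power(nep_w_per_sqrt_hz, bandwidth_hz, snr=1.0):
    """Smallest signal power reaching the requested SNR.
    NEP is defined for SNR = 1 in a 1 Hz bandwidth, so
    P_min = SNR * NEP * sqrt(bandwidth)."""
    return snr * nep_w_per_sqrt_hz * math.sqrt(bandwidth_hz)

# A detector with NEP = 100 pW/sqrt(Hz) read out over a 100 Hz bandwidth:
p_min = min_detectable_power(100e-12, 100.0)
# p_min ≈ 1e-9 W, i.e. about 1 nW for SNR = 1.
```

Narrowing the post-detection bandwidth (e.g., with lock-in averaging) lowers the detectable power only as the square root of the bandwidth, which is why detector NEP remains the dominant figure of merit.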

#### **3.1. Thermoelectric detectors**

The photon energy of a THz wave is relatively low and comparable to the phonon energy of a crystal; for this reason, it can be detected as thermal energy. In principle, the response of such thermoelectric detectors is very slow (on the order of milliseconds) because of the thermoelectric reaction of the crystal, but it is usually ultra-broadband.

The most commonly used thermoelectric detectors are as follows: Golay cell, pyroelectric, and bolometer.

The Golay cell is a gas-cell detector in which an IR-absorptive gas is encapsulated, and its thermal expansion produced by THz waves is optically detected. The pyroelectric detector is a photovoltaic detector made with a dielectric material, exhibiting temperature-sensitive surface polarization and high sensitivity for THz imaging. A typical value of NEP is 100 pW/√Hz. Finally, the bolometer is a temperature-sensitive semiconductor, such as Si or Ge, and detects THz radiation as the change in resistivity caused by the heating due to absorption of the THz radiation. The bolometer can operate at cryogenic temperatures, ultimately minimizing the background noise level. Typical values of NEP are 10<sup>−16</sup> W/√Hz at 0.3 K and 1 pW/√Hz at 300 K.

#### *3.1.1. Golay detectors*

The mechanism of a modern version of this detector, with its main features, is shown in **Figure 6**.

The main component is a sealed cell where a gas having a low thermal conductivity, usually xenon, is inserted. One side of the cell ends with a window that allows the transmission of the THz radiation in the adopted frequency interval. The other side instead is closed with a flexible mirror. The cell is completed with a thin absorbing metallic film whose impedance nearly matches that exhibited by free space. The metallic film absorbs the THz radiation thus heating the surrounding gas and causing a slight movement of the mirror. This displacement is subsequently converted into an electrical signal. In particular, a lens system is exploited to concentrate the light emitted by the source through a grid towards the mirror; then, the mirror reflects the light back to the detector through the grid. If a displacement of the mirror occurs, the reflected image of the grid is distorted, and thus, the amount of light reaching the light detector changes. For THz frequencies, the most useful window materials are high-density polyethylene (HDPE), high-resistivity Si, crystalline quartz, and diamond [22].

**Figure 6.** Structure of a Golay cell.


#### *3.1.2. Pyroelectric detectors*

A pyroelectric detector (or pyrometer) is an AC thermal sensor characterized by a frequency response spanning a large frequency range, which includes the THz region. It is based on the pyroelectric effect exhibited by a thin, permanently poled ferroelectric crystal (e.g., LiTaO<sub>3</sub>), in which the instantaneous polarization depends on the rate of change of the crystal temperature. Pyrometers are commercially available, either as single devices or as arrays, for the entire IR and THz spectral regions. Besides being very sensitive, they have many advantages, including being relatively cheap and rugged, with room-temperature operation. Their most useful property is that, with appropriate design of the associated amplifier, they can exhibit response times ranging from milliseconds to less than a nanosecond [23]. In **Figure 7**, the scheme of a pyroelectric detector and its equivalent circuit model is presented.

**Figure 7.** Pyroelectric detector and equivalent representation of electric circuit diagram.

#### *3.1.3. Bolometer detectors*

Semiconducting bolometers are among the most important THz detectors. A design for a bolometer is shown in **Figure 8**. It consists of a small chip of doped semiconductor, Si or Ge. Typical doping concentrations are 10<sup>16</sup> cm<sup>−3</sup> for Ge and 10<sup>18</sup> cm<sup>−3</sup> for Si. The detector element is suspended in vacuum by two thin lead wires between the electrical contacts, which provide the electrical connections as well as the thermal link to the heat sink. Two aspects determine the optimum level of doping: (i) the temperature coefficient of the resistance should be large, and (ii) the bolometer should have a resistance that allows for efficient coupling to a low-noise amplifier [24].

**Figure 8.** Simple design of a semiconducting bolometer.

A specific bolometer is the superconducting hot-spot air bolometer (SHAB), where the detector element consists of a microscopic narrow Nb or NbN strip creating a free-standing bridge structure on top of a substrate. When a voltage bias is applied to the structure, this produces the formation of a hot spot in the middle of the strip where the incoming radiation energy is dissipated, thereby switching from the superconducting to the normal state. The overall effect is a modulation of the suspended strip resistance, with a consequent modulation of the current flowing because of the voltage bias. From the recorded current, one can extract a measure of the THz radiation [25].

#### **3.2. Photoconductive antenna**

A photoconductive antenna can also be used to detect THz waves. The structure is very similar to that of a PCA for emission. In this case, the PCA acting as a detector measures the photocurrent produced when the carriers generated by the probe beam across the antenna gap are driven by the THz electric field. When no electric field is present, the photocarriers produced by the laser pulse move randomly and no current is observed. On the contrary, when the THz wave irradiates the gap, it generates an electric field separating electrons from holes, and therefore a current, proportional to the amplitude of the electric field, is observed. It is important to emphasize that PCAs for detection and PCAs for emission are designed differently. The narrower the gap, the lower the electric field required to obtain an appreciable current; therefore, PCAs for detection exhibit narrower gaps (∼10 μm) compared with the typical gaps of PCAs for emission (>50 μm) [14, 26, 27]. The factors affecting the performance of a PCA are similar to those for the emitter: semiconductor bandgap, carrier lifetime and mobility, and antenna gap [14].

#### **3.3. Electro-optical sampling (EO)**


Electro-optical (EO) sampling is based on the Pockels effect, in which the application of an electric field to a material induces or modifies its birefringence. In other words, the Pockels effect is a change in the refractive index, or birefringence, that depends linearly on the electric field. It is important to note that the Pockels effect can be observed only in crystals with no inversion symmetry, such as those belonging to the zinc-blende group, like ZnTe [19, 27].

Using this detection method, the THz electric field is sensed by measuring the change in the birefringence properties of the crystal caused by the field itself. These changes can be measured by analyzing the polarization properties of an optical probe beam passing through the crystal. The most common setup to measure the THz waveform with EO sampling is a balanced measurement approach, as shown in **Figure 9**.

The operating principle is as follows. An optical probe beam with linear polarization first passes through a polarizer and then propagates within the EO crystal. A quarter-wave plate (QWP) is positioned just after the EO crystal in order to modify the probe beam's ellipticity; moreover, a Wollaston prism is used to split the elliptical polarization into its two perpendicular components. A photodiode assembly mounted in a differential configuration detects the two polarization intensities. In the absence of THz radiation impinging on the EO crystal, the probe beam ellipticity can be adjusted in such a way that both polarization intensities are equal; as a consequence, the net current flowing from the differential photodiodes is zero. On the contrary, when the THz wave is present, the birefringence of the EO crystal is modified by the electric field, which changes the ellipticity of the probe beam accordingly. As a result, the balance between the two polarizations is broken, and the photodiode assembly generates a net current whose intensity is proportional to the amplitude of the electric field associated with the impinging THz wave.
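The balanced scheme can be sketched numerically: the two photodiode intensities differ by a term proportional to sin(Δφ), where the retardation Δφ induced via the Pockels effect is linear in the THz field. The proportionality constant below is an arbitrary, assumed value.

```python
import numpy as np

def balanced_signal(e_thz_field, k=1e-6):
    """Net differential photodiode current for a THz field sample.
    k (assumed, rad*m/V): retardation induced per unit field."""
    dphi = k * e_thz_field                 # Pockels retardation, linear in E
    i_plus = 0.5 * (1.0 + np.sin(dphi))    # photodiode 1 intensity
    i_minus = 0.5 * (1.0 - np.sin(dphi))   # photodiode 2 intensity
    return i_plus - i_minus                # = sin(dphi) ~ dphi for small fields

fields = np.array([0.0, 1e3, 2e3])         # THz field samples [V/m]
out = balanced_signal(fields)
# out[0] is exactly zero (balanced, no THz field); for small retardations the
# output grows linearly with the field, so out[2] is about twice out[1].
```

The balanced configuration cancels common-mode laser intensity noise, which is why it dominates over single-diode readout in EO sampling.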

**Table 2** summarizes the major strengths and weaknesses for the operation of the detectors reported above in the THz region.


| Detector | Advantages | Drawbacks |
|---|---|---|
| **Golay cell** | ✓ Broad spectral response ✓ High sensitivity ✓ Large sensing area | ✗ Slow response ✗ Bulky ✗ Fragile ✗ Limited dynamic range |
| **Pyrometer** | ✓ Broad spectral response ✓ Compact ✓ Inexpensive ✓ High dynamic range | ✗ Slow response ✗ Large NEP ✗ Microphonic response |
| **Bolometer** | ✓ Broad spectral response ✓ Highest sensitivity ✓ Lowest NEP | ✗ Low operation temperature ✗ Expensive ✗ Complex readout |
| **Electro-optical crystal** | ✓ Broad spectral response ✓ Fast response ✓ Moderate sensitivity | ✗ Relatively expensive |
| **Photoconductive antenna** | ✓ Fast response ✓ Compact ✓ Moderate sensitivity | ✗ Small sensing area ✗ Low detection current ✗ Relatively expensive |

**Table 2.** List of pros and cons for different THz detectors.

#### **4. THz systems**

As already described in Section 2, the operation of THz systems can be schematically divided into continuous wave and pulsed mode.

Frequency-domain (THz-CW) systems work at a fixed frequency, which depends on the type of THz emitter. As an example, **Figure 10** shows the case of a QCL source coupled to a thermal detector (a pyrometer or a bolometer). Between the THz emitter and the detector, there is an ellipsoidal mirror that collimates the laser beam. In this configuration, the reference signal produced by the source and the output signal recorded by the detector are sent to a lock-in amplifier in order to provide coherent detection [28].

**Figure 10.** Sketch of a THz-CW system. A QCL is used as THz source and a bolometer or a pyrometer as THz detector.
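The coherent (lock-in) detection step can be sketched as follows: multiplying the noisy detector output by the reference and averaging recovers the signal amplitude even well below the noise floor. Modulation frequency, noise level, and amplitudes are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1_000_000)           # 1 s record
f_mod = 1e3                                    # assumed modulation frequency [Hz]
ref = np.sin(2 * np.pi * f_mod * t)            # reference from the source

signal = 0.01 * ref                            # weak detector signal
noisy = signal + rng.normal(0.0, 0.1, t.size)  # noise 10x the signal amplitude

# Demodulate: multiply by the reference and low-pass (here: average).
# Since <ref^2> = 1/2, the factor 2 recovers the amplitude 0.01.
amplitude = 2.0 * np.mean(noisy * ref)
```

Averaging over many modulation cycles rejects noise at all frequencies except a narrow band around f_mod, which is the essence of coherent detection.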

In the following, attention will be focused primarily on the characteristics and performance of THz measurement systems working in the time domain (THz-TD). A typical architecture is shown in **Figure 11**.

**Figure 11.** Typical scheme of a THz-TD system in transmission mode.

A beam splitter divides the ultrafast optical pulse generated by an ultrafast laser into two beams, referred to as the probe and the pump, respectively. Pulsed T-ray radiation is stimulated at the emitter by the optical pump beam via either charge transport or the optical rectification effect, according to the specific type of emitter exploited. A suitable set of lenses and a pair of parabolic mirrors are adopted in this configuration to collimate and focus the diverging T-ray beam on the sample of interest. A similar combination of lenses and mirrors is then needed to recollimate the T-ray beam passed through the sample and focus it onto the receiver. At the receiver side, the originally split probe beam acts as an optical gate for the T-ray receiver; the optical gate signal has a shorter time duration than the arriving T-ray pulse. It is worth noting that proper synchronization between the gating and the T-ray pulse is mandatory to ensure coherent detection of the T-ray signal at each time instant. By carefully controlling the optical delay line by means of the micromotion of a mechanical stage, it is possible to achieve a complete temporal scan of the T-ray signal [29].
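The scanning principle can be sketched numerically: stepping the delay line samples the field E(τ) point by point, and a Fourier transform of the recorded trace yields the amplitude spectrum, as anticipated in the discussion of pulsed systems. The synthetic waveform parameters below are assumed, illustrative values.

```python
import numpy as np

dt = 20e-15                                  # assumed 20 fs delay step
delays = np.arange(0.0, 20e-12, dt)          # 20 ps scan window (1000 samples)

# Synthetic detected trace: a ~1 THz oscillation under a 1 ps Gaussian gate.
tau0, width = 10e-12, 1e-12
trace = np.exp(-((delays - tau0) / width) ** 2) \
    * np.sin(2 * np.pi * 1e12 * (delays - tau0))

# FFT of the sampled waveform gives the amplitude spectrum.
spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(delays.size, dt)
peak_thz = freqs[np.argmax(spectrum)] / 1e12  # dominant frequency [THz]
```

Note the trade-offs inherent in the delay scan: the scan length sets the frequency resolution (here 1/20 ps = 50 GHz), while the delay step sets the maximum accessible frequency.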

Reflection measurements can also be used in practical applications involving bulky samples that are impossible to measure in transmission mode (see **Figure 12**) [30].

**Figure 12.** Typical scheme of a THz-TD system in reflection mode.

#### **4.1. Performance of THz systems**

In this section, the characteristics and limitations of a THz system, in particular those defining its performance, are discussed. The parameters commonly used are the dynamic range (DR) and the signal-to-noise ratio (SNR), which mainly affect the accuracy of THz measurements. They should always be evaluated during the measurements to avoid false interpretation of the results; therefore, some recommendations for best practice are presented [31].

#### *4.1.1. Dynamic range and signal-to-noise ratio*


36 New Trends and Developments in Metrology


The dynamic range (DR) is defined as the ratio between the highest and lowest measurable signal and therefore describes the maximum signal change that can be quantified. In practice, it is calculated as the ratio between the maximum signal amplitude and the root-mean-square of the noise floor. The signal-to-noise ratio (SNR) is defined as the ratio between the mean peak amplitude and the standard deviation (SD) of the signal amplitude. It indicates the minimum detectable signal change and is a complementary system parameter with respect to the dynamic range. Therefore, the DR determines the measurement bandwidth, whereas the SNR reflects the amplitude resolution, or sensitivity.
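As a concrete illustration, both figures of merit can be estimated from recorded waveforms with a few lines of NumPy. The pulse shape, noise level, and number of repeated acquisitions below are illustrative assumptions, not values from the text.

```python
import numpy as np

def dynamic_range(signal, noise_floor):
    """DR: maximum signal amplitude over the RMS of the noise floor
    (noise_floor: samples recorded with the THz beam blocked)."""
    return np.max(np.abs(signal)) / np.sqrt(np.mean(noise_floor ** 2))

def signal_to_noise(peak_amplitudes):
    """SNR: mean peak amplitude over its standard deviation,
    estimated from repeated acquisitions of the same waveform."""
    peaks = np.asarray(peak_amplitudes, dtype=float)
    return peaks.mean() / peaks.std(ddof=1)

# Illustrative numbers only:
rng = np.random.default_rng(0)
t = np.linspace(-5e-12, 5e-12, 1000)          # 10 ps scan
pulse = np.exp(-(t / 0.5e-12) ** 2)           # unit-amplitude THz pulse
noise = 1e-3 * rng.standard_normal(t.size)    # noise floor, RMS ~1e-3
print(dynamic_range(pulse + noise, noise))    # ~1e3, i.e. ~60 dB
print(signal_to_noise(1.0 + 1e-2 * rng.standard_normal(50)))
```

The same two estimators apply unchanged to FT amplitude spectra, evaluated per frequency bin.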

DR and SNR may be evaluated either with respect to the time-domain waveform or to the spectrum obtained through the Fourier transform. Two specific aspects are involved when these parameters are considered. First, data are acquired as time-domain waveforms, whereas the measured optical parameters are derived from the Fourier transform (FT) spectra. In particular, the DR and SNR of time-domain signals may differ from those of the corresponding spectra, and there is no straightforward analytical relationship between the parameters estimated according to the two methods. Moreover, both the DR and SNR of spectral data are strongly frequency dependent and typically decrease steeply with frequency.

The recommended procedure for estimating the DR and SNR of THz through time-domain data is based on the following steps:


The recommended procedure for estimating the DR and SNR of THz through the amplitude spectrum is based on the following steps:


**5.** It is desirable to test the performance of the system by varying the scan parameters in order to identify the ranges characterized by the best SNR and DR values.

If DR and SNR are evaluated through the FT, signal averaging is recommended in order to reduce noise effects. It is worth noting that if jitter affects the peak position, which is often due to errors in the initial position of the delay stage, the time-domain average will be distorted; the FT amplitude spectra and their average value, on the contrary, will remain correct. As expected, the approach for evaluating the DR and SNR of a THz-TDS system strictly depends on the specific domain adopted for the measurements. In other words, if the measurements are carried out in the time domain, then the DR and SNR must be calculated directly from the available time-domain data. On the contrary, DR and SNR must be evaluated from FT amplitude spectra when spectroscopic measurement data are considered.
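The effect of jitter on the two averaging strategies can be demonstrated numerically: averaging jittered waveforms in the time domain attenuates and broadens the peak, while averaging their FT amplitude spectra leaves the spectrum essentially unchanged. The pulse shape and jitter magnitude below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(-5, 5, 512)                        # time axis in ps (illustrative)
pulse = lambda t0: np.exp(-((t - t0) / 0.3) ** 2)  # unit pulse centered at t0

# 100 acquisitions whose peak position jitters by ~0.2 ps (Gaussian)
shots = np.stack([pulse(t0) for t0 in 0.2 * rng.standard_normal(100)])

time_avg_peak = np.abs(shots.mean(axis=0)).max()   # time-domain average: attenuated
spec_avg_peak = np.abs(np.fft.rfft(shots, axis=1)).mean(axis=0).max()
true_spec_peak = np.abs(np.fft.rfft(pulse(0.0))).max()

print(time_avg_peak)                    # well below 1: peak washed out by jitter
print(spec_avg_peak / true_spec_peak)   # close to 1: |FT| is shift-invariant
```

The second ratio stays near unity because a pure time shift changes only the spectral phase, not the spectral amplitude.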

Another fundamental parameter describing the characteristics of a THz system is the spectral resolution. It is determined by the span of the time-delay sweep and is given by the ratio between the speed of light c and the effective delay-line length (multiplied by 2).
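A quick numerical check of this relation, with a hypothetical 150 mm effective delay line:

```python
# Spectral resolution of a THz-TDS scan from the delay-line length.
c = 299_792_458.0  # speed of light, m/s

def spectral_resolution(delay_length_m):
    """Frequency resolution = c / (2 * effective delay-line length);
    the factor of 2 accounts for the round trip on the moving mirror."""
    return c / (2.0 * delay_length_m)

# A 150 mm effective delay (hypothetical value) gives ~1 GHz resolution:
print(spectral_resolution(0.150) / 1e9)   # ~0.999 GHz
```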

In principle, the pump-laser pulse repetition rate is the only limitation on resolution if the ideal conditions of a noise-free system and an unlimited delay line are met. In practice, however, the achievable frequency resolution is substantially degraded in the presence of noise and is mainly limited by the time-domain SNR of the system: as the length of the delay line increases, the signal amplitude is reduced because of the increasing delay from the main pulse, and the SNR approaches unity.

As expected, the THz pulses in the train must be identical to one another for optical sampling to work successfully. If this condition is not satisfied (i.e., the THz pulse shape evolves on time scales comparable to, or shorter than, the measurement time), no reliable samples of the waveform can be acquired. Besides this main drawback, another minor disadvantage affects the performance of optical sampling. As with other sampling techniques, optical sampling takes a long time to acquire the desired data. The lower bound on the acquisition time is given by *N* \* *∂t* (*N* and *∂t* being, respectively, the number of measured electric fields and the pulse-to-pulse spacing of the train). Since it is possible to take advantage of signal averaging, the acquisition time is usually much longer than this minimum value. Another problem inherent to sampling measurements is that they require a method for varying the delay of the sampling gate relative to the THz pulse. This requirement is often accomplished by means of a mechanical delay line consisting of a mirror that is moved to vary the optical path length [1].
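The *N*·*∂t* lower bound is easy to evaluate; the point counts, repetition rate, and averaging factor below are hypothetical, and in practice the mechanical motion of the delay stage usually dominates the total scan time.

```python
def min_acquisition_time(n_points, rep_rate_hz, n_averages=1):
    """Lower bound N * dt on the acquisition time, with dt the
    pulse-to-pulse spacing (the inverse of the laser repetition rate)."""
    return n_averages * n_points / rep_rate_hz

# Hypothetical figures: 2000 delay points sampled with a 100 MHz oscillator
print(min_acquisition_time(2000, 100e6))        # 2e-05 s per scan
print(min_acquisition_time(2000, 100e6, 1000))  # 0.02 s for 1000 averages
```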

#### *4.1.2. TDS calibration*

A typical device used for calibrating the linearity of amplitude/power measurements of a THz system has to exhibit constant loss within the THz bandwidth. The most convenient and preferred solution is to employ optically flat silicon plates as loss elements: Fresnel reflections are solely responsible for the transmission losses of such a plate. Using a stack of plates parallel to each other, orthogonal to the incident THz beam, and separated by air gaps, one can set the loss level, since it depends on the number of plates in the stack.
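The per-plate loss follows directly from the Fresnel reflectance; a small sketch, assuming n ≈ 3.418 for high-resistivity silicon at THz frequencies (an assumed value, not from the text) and neglecting etalon interference (plates angled or incoherently summed):

```python
import math

def plate_transmission(n=3.418):
    """Power transmission of one optically flat Si plate: two Fresnel
    surface losses, etalon interference assumed suppressed.
    n = 3.418 is an assumed THz refractive index of high-resistivity Si."""
    r = ((n - 1) / (n + 1)) ** 2          # single-surface power reflectance
    return (1 - r) ** 2                   # two surfaces per plate

def stack_loss_db(n_plates):
    """Insertion loss of a stack of n_plates identical plates, in dB."""
    return -10 * math.log10(plate_transmission() ** n_plates)

print(round(stack_loss_db(1), 2))   # ~3.09 dB per plate
print(round(stack_loss_db(4), 2))   # ~12.37 dB for four plates
```

Each added plate thus contributes a fixed, frequency-flat ~3 dB step, which is what makes the stack usable as a linearity reference.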

When the device is placed in the beam path, particular attention has to be paid to avoid distorting the THz beam or altering its focusing; it is therefore desirable to position the device in a collimated beam. The device can be used in both THz-CW and THz-TDS systems, where the plates must be angled with respect to the incident beam so as to suppress etalon interference. There are two methods to verify the linearity, and either time-domain or frequency-domain data can be processed:


It is worth noting that the linearity of a TDS should not be assumed, but experimentally verified.

#### **4.2. Errors and uncertainty in THz-TDS**


Many sources of error can affect a THz-TDS measurement and data extraction: laser intensity fluctuations, optical and electronic noise, delay-line stage jitter, and registration noise are common examples. Moreover, contributions to the error in the estimated optical constants come not only from the randomness in the signal, but also from imperfections in the physical setup and in the parameter extraction process, such as the sample thickness measurement and the sample alignment. Significant sources of error are shown in the scheme of **Figure 13**. The uncertainty sources (green lines) can influence both the THz-TDS measurements and the parameter extraction process. Each of them determines a variance that can propagate along the process and produce a variance on the optical constants [32, 33].

**Figure 13.** Scheme showing the propagation of uncertainties in THz-TDS measurements (taken from Ref. [33]).

Two major sources of thermal noise can be singled out in a THz-TD system:

**1.** Johnson–Nyquist noise is generated when charge carriers fluctuate in a substrate. It results in an artifact voltage measured with no incident T-ray electric field, whatever the presence of optical gating pulses.

**2.** Background noise gives rise to a random voltage across the receiving antenna.
Other relevant noise sources are quantum fluctuations, laser noise, and shot noise. One of the most efficient methods to remove noise from a T-ray signal is digital signal processing, such as wavelet de-noising, which can markedly improve the SNR, particularly when intensities are strongly reduced, as in biological samples. Ultimately, the noise in a THz-TDS system limits the spectral resolution: the best achievable resolution in a given frequency interval depends on the maximum usable time duration, which is, in turn, directly related to the actual SNR within that frequency range. Improving the system dynamic range (by either increasing the power of the transmitted THz waveform or decreasing the system noise floor) is therefore mandatory to assure a suitably high frequency resolution.
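To give a flavor of wavelet de-noising, here is a minimal single-level Haar sketch in pure NumPy: the detail coefficients, which carry mostly noise, are soft-thresholded before reconstruction. Real T-ray pipelines use multilevel decompositions with optimized wavelets; the pulse and noise figures below are assumptions for illustration.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet de-noising: split the signal into
    approximation/detail coefficients, soft-threshold the details,
    and reconstruct. x must have even length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # detail coefficients (mostly noise)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(2)
t = np.linspace(-5, 5, 512)
clean = np.exp(-(t / 0.5) ** 2)                   # idealized T-ray pulse
noisy = clean + 0.05 * rng.standard_normal(t.size)
denoised = haar_denoise(noisy, threshold=0.1)
print(np.std(noisy - clean), np.std(denoised - clean))   # error is reduced
```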

Another uncertainty source is the positioning of the stage of the optical delay line (ODL); by means of a pair of moving mirrors, the ODL mechanically delays either the probe or, equivalently, the pump pulse. As a consequence, the sampling time of the optically gated detector is affected by uncertainty; combined with the other sources (electronic and optical noise), this produces a final uncertainty on the amplitude measurements of the sampled T-ray pulse. The uncertainty associated with the amplitude of the acquired T-ray pulse also affects the spectral components obtained through the Fourier transform and the deconvolution process. Another uncertainty source that cannot be neglected is the phase-unwrapping procedure. Moreover, the thickness and the alignment of the samples have to be known in order to extract the model parameters; consequently, the uncertainty associated with these inputs affects the whole parameter extraction process. Finally, the overall uncertainty is affected by the uncertainty in the estimate of the air refractive index. Each uncertainty source, however, can be taken into account by means of a proper model describing the uncertainty propagation in the measurement process.

#### **4.3. THz spectroscopy**

The term spectroscopy refers to a series of experiments aiming to investigate the excited states of a specimen, exploiting the interaction of a proper electromagnetic perturbation with a sample. Reflected and/or transmitted waves release specific information on the electromagnetic properties of the sample as a function of frequency. Therefore, according to the spectral content of the electromagnetic probe signal, different excitations can be investigated, ranging from the quantum properties of molecules (energy levels of atomic bonds, roto-vibrational states, etc.) to the impedance of macroscopic samples or transmission lines. The THz band is ideal to study the electrodynamic properties of materials from metals to insulators, since its frequency is lower than the typical plasma frequency of metals (about $10^{15}$ Hz), which defines the frequency above which a metal becomes transparent to radiation. Coherent THz radiation can provide valuable information on the complete set of complex electrodynamic parameters [34]: the refractive index ($\tilde{n}$), the permittivity ($\tilde{\varepsilon}$), and/or the conductivity ($\tilde{\sigma}$) characterizing a material, whether it is insulating or metallic. The complex refractive index $\tilde{n} = n + ik$ furnishes information on both the delay ($n$) and the absorption ($k$). Once $n$ and $k$ are obtained (see below), the permittivity $\tilde{\varepsilon} = \varepsilon_1 + i\varepsilon_2$ can be reached by exploiting the relationship $\tilde{n} = \sqrt{\tilde{\varepsilon}\tilde{\mu}/(\varepsilon_0\mu_0)}$, where $\tilde{\mu}$ is the complex magnetic permeability, $\varepsilon_0$ is the vacuum permittivity, and $\mu_0$ is the vacuum permeability.
Since most materials have relative permeability $\tilde{\mu} = 1$, a direct relation between the refractive index and the relative permittivity can be extracted, $\tilde{n} \cong \sqrt{\tilde{\varepsilon}}$, which yields

$$n = \sqrt{\frac{\sqrt{\varepsilon_1^2 + \varepsilon_2^2} + \varepsilon_1}{2}}, \qquad k = \sqrt{\frac{\sqrt{\varepsilon_1^2 + \varepsilon_2^2} - \varepsilon_1}{2}}$$

Conductivity ($\tilde{\sigma} = \sigma_1 + i\sigma_2$) and permittivity are also reciprocally related through the formulas $\sigma_1 = \varepsilon_0\omega\varepsilon_2$ and $\sigma_2 = \varepsilon_0\omega(\varepsilon_\infty - \varepsilon_1)$, where $\omega = 2\pi\nu$ and $\varepsilon_\infty = \varepsilon(\omega \to \infty)$ can be obtained through a fitting procedure. From the practical point of view, the most important parameter to obtain for a sample characterization is $\tilde{n}$, since the other parameters are just combinations of $n$ and $k$.
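The conversions above are purely algebraic; a minimal sketch, following the relations just stated for a non-magnetic material:

```python
import numpy as np

def nk_from_epsilon(eps1, eps2):
    """n and k from the complex relative permittivity eps1 + i*eps2,
    assuming a non-magnetic material (relative permeability = 1)."""
    mod = np.sqrt(eps1 ** 2 + eps2 ** 2)
    n = np.sqrt((mod + eps1) / 2)
    k = np.sqrt((mod - eps1) / 2)
    return n, k

def conductivity(eps1, eps2, omega, eps_inf=1.0, eps0=8.8541878128e-12):
    """(sigma1, sigma2) from permittivity: sigma1 = eps0*omega*eps2,
    sigma2 = eps0*omega*(eps_inf - eps1)."""
    return eps0 * omega * eps2, eps0 * omega * (eps_inf - eps1)

# Sanity check: a lossless dielectric with eps1 = 4 gives n = 2, k = 0
print(nk_from_epsilon(4.0, 0.0))
```

One can verify the inversion by expanding $(n + ik)^2 = (n^2 - k^2) + 2ink$, which reproduces $\varepsilon_1$ and $\varepsilon_2$.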


The peculiar characteristic of a TDS is that it directly measures the time-dependent electric field E(x, t) transmitted through the sample. The ratio between the Fourier transforms of the transmitted signal and the reference signal is a direct function of the refractive index. A sketch of the measurement on a generic sample of thickness *L* is reported in **Figure 14**. E(x, t), propagating from the transmitter (Tx) to the receiver (Rx), is linearly polarized along y. Since the signal is generated and detected in air, the proposed scheme allows the measurements to be generalized to multilayer samples.

**Figure 14.** Scheme of the measurements through the THz-TDS system. Tx and Rx stand for transmitter and receiver, respectively. L represents the mean size of the sample, while ni, with i = 1, 2, 3, are the refractive indices of the different media traversed by the THz pulse.

The signal transmitted through the sample, *S*(*ω*), can be expressed as a function of the Fresnel coefficients *Ta,b*(*ω*) = 2*ña*/(*ña* + *ñb*) and *Ra,b*(*ω*) = (*ña* − *ñb*)/(*ña* + *ñb*) and of the propagation factor *Pa*(*ω*, *d*) = exp{−i *ña ω d*/*c*}, where the labels refer to the materials [35]. The complete signal can be expressed as:

$$S(\omega) = \eta(\omega)\, T_{1,2}(\omega)\, P_2(\omega, d)\, T_{2,3}(\omega) \sum_{k=0}^{K} \left\{ R_{2,3}(\omega)\, P_2^{2}(\omega, L)\, R_{2,1}(\omega) \right\}^{k} E(\omega) \tag{5}$$

where *E*(*ω*) is the generated THz pulse and *η*(*ω*) accounts for all the reflected and transmitted signals that do not reach Rx. In Eq. (5), the factors *T*1,2(*ω*)*P*2(*ω*, *d*)*T*2,3(*ω*) account for the fraction of the signal reaching Rx in a single pass, whereas the term $\sum_{k=0}^{K} \{R_{2,3}(\omega) P_2^2(\omega, L) R_{2,1}(\omega)\}^k$ accounts for the K delayed pulses originated by the reflections of the primary pulse between the sample boundaries (usually *K* ≤ 3). This phenomenon, known as the Fabry–Perot (FP) effect, is depicted in **Figure 14** through the dashed arrows displaying the reflected signals. In the time domain, the FP effect shows up as copies of the primary transmitted signal. In **Figure 15**, a comparison is proposed between the reference signal (air) and the signal (Si) through a silicon slab 500 μm thick. Black arrows point out the copies of the primary signal. The delay between copies is due to the round trip in the sample and is about *Δt* ≅ 2*Ln*/*c* ∼ 11.4 ps for *n*(Si) = 3.46.

**Figure 15.** Time-dependent signal measured through the THz-TDS system. The black curve is the reference signal acquired in free space, whereas the red curve is the signal through a 500-μm-thick Si sample. The arrows indicate the primary-signal copies generated by the Fabry–Perot effect.

Eq. (5) also defines the transmittance

$$T(\omega) = \frac{E_{sample}(\omega)}{E_{ref}(\omega)} \tag{6}$$

which, according to Eq. (5), can be expressed in the case of a simple slab as

$$H(\omega) = \frac{2\tilde{n}_2(\tilde{n}_1 + \tilde{n}_3)}{(\tilde{n}_2 + \tilde{n}_1)(\tilde{n}_2 + \tilde{n}_3)} \exp\left\{-\frac{i(\tilde{n}_2 - \tilde{n}_{air})\omega L}{c}\right\} FP(\omega) \tag{7}$$

where


$$FP(\omega) = \frac{1}{1 - \left(\frac{\tilde{n}_2 - \tilde{n}_1}{\tilde{n}_2 + \tilde{n}_1}\right)\left(\frac{\tilde{n}_2 - \tilde{n}_3}{\tilde{n}_2 + \tilde{n}_3}\right)\exp\{-i\,\tilde{n}_2\,\omega\, 2L/c\}} \tag{8}$$

is the explicit form of the Fabry–Perot term when the echoes in media 1 and 3 are negligible [35].

The transfer function *H*(*ω*) in Eq. (7) is used as the theoretical reference for *T*(*ω*) to calculate the optical parameters of samples. Eqs. (6) and (7) describe the transmission through a homogeneous slab with refractive index *ñ*2 when the equivalence *ñ*1 = *ñ*3 = *ñair* holds. Setting only *ñ*1 = *ñair*, instead, Eqs. (6) and (7) describe a system composed of two layers, such as a thin metallic film on a dielectric substrate [36, 37].
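Eqs. (7) and (8) translate directly into code. A minimal sketch for the air/slab/air case is given below; the 500 μm lossless silicon slab at 1 THz is a hypothetical test case, not data from the text.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def transfer_function(omega, n2, L, n1=1.0, n3=1.0):
    """Theoretical slab transfer function of Eqs. (7)-(8): Fresnel
    transmission at both interfaces, propagation through the slab
    relative to the air path, and the Fabry-Perot term. Shown here
    for real (lossless) refractive indices; all arguments in SI units."""
    fresnel = 2 * n2 * (n1 + n3) / ((n2 + n1) * (n2 + n3))
    prop = np.exp(-1j * (n2 - 1.0) * omega * L / C)   # relative to air
    fp = 1.0 / (1.0 - ((n2 - n1) / (n2 + n1))
                     * ((n2 - n3) / (n2 + n3))
                     * np.exp(-1j * n2 * omega * 2 * L / C))
    return fresnel * prop * fp

# Hypothetical check: a 500 um Si slab (n ~ 3.46) at 1 THz
omega = 2 * np.pi * 1e12
H = transfer_function(omega, 3.46, 500e-6)
print(abs(H))   # |H| < 1: Fresnel losses only, since the slab is lossless
```

For a slab matched to air (*n*2 = 1), the function correctly returns *H* = 1, since all three factors reduce to unity.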

Several techniques [36–40] have been developed to extract *ñ* by minimizing the difference between the moduli and the phases of *H*(*ω*) and *T*(*ω*):

$$\delta\rho(\omega) = |H(\omega)| - |T(\omega)|, \qquad \delta\phi(\omega) = \arg[H(\omega)] - \arg[T(\omega)] \tag{9}$$

Eq. (9) allows an error function, the total variation (TV) [38], to be defined as the sum of the differences *δρ* and *δφ* at each frequency point:

$$ER = \sum_{\omega} \left\{ |\delta\rho(\omega)| + |\delta\phi(\omega)| \right\} \tag{10}$$
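In practice, Eqs. (9)–(10) are minimized numerically. The brute-force sketch below works at a single frequency against a synthetic "measurement"; the simplified slab model (Fabry–Perot term omitted), the grid ranges, and the sample values are illustrative assumptions, not the iterative solvers of Refs. [36–40].

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def slab_H(omega, n, k, L):
    """Simplified slab transfer function in the spirit of Eq. (7):
    air on both sides, Fabry-Perot term omitted for brevity. The sign
    convention is chosen so that k > 0 attenuates the pulse."""
    n2 = n - 1j * k
    return 4 * n2 / (n2 + 1) ** 2 * np.exp(-1j * (n2 - 1) * omega * L / C)

def extract_nk(T_meas, omega, L, n_grid, k_grid):
    """Brute-force minimization of |d_rho| + |d_phi| (Eq. (9)) at one
    frequency: evaluate the model on a (n, k) grid, keep the best pair."""
    best, best_err = (None, None), np.inf
    for n in n_grid:
        for k in k_grid:
            H = slab_H(omega, n, k, L)
            err = abs(abs(H) - abs(T_meas)) + abs(np.angle(H) - np.angle(T_meas))
            if err < best_err:
                best, best_err = (n, k), err
    return best

# Synthetic "measurement" from known optical constants (n = 2.0, k = 0.05):
omega, L = 2 * np.pi * 1e12, 100e-6
T = slab_H(omega, 2.0, 0.05, L)
n_hat, k_hat = extract_nk(T, omega, L,
                          np.arange(1.5, 2.5, 0.01), np.arange(0.0, 0.2, 0.01))
print(n_hat, k_hat)   # recovers the input pair to within the grid step
```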

The error function *ER*(*L*, *ω*) is a three-dimensional paraboloid as a function of the frequency and the sample thickness. The computational search for its minimum implies the simultaneous knowledge of the main quantities describing the system: the sample thickness *L*, the refractive index *n*, and the extinction coefficient *k* [39]. A fast solution of the TV approach is hindered by the noise in the measured spectrum of *T*(*ω*). The most relevant noise source in thin samples is the Fabry–Perot oscillations, whose frequency is inversely proportional to *L*. This problem can be overcome by imposing the quasi-space (QS) optimization [39], where the periodicity of the FP effect is employed to obtain the effective optical thickness of the sample. The quasi-space is defined by the Fourier transform of an electrodynamic parameter *y*(*ωn*), which can be the refractive index or the extinction coefficient. Therefore, a new set of variables can be defined as follows:

$$QS_k = \sum_{n=1}^{N-1} \left[ y(\omega_n) \exp\left(-i \frac{2\pi}{N} k\, n\right) \right], \quad k = 0, 1, 2, \dots, N-1 \tag{11}$$

This function can be displayed in terms of the variable *L*QS = *x*QS *c*0/2, where *c*0 is the speed of light in vacuum and *x*QS = 2π/*ω*, showing a pronounced peak at the effective optical length. Alternatively, the sample thickness *L* can be determined by locating the minimum of *QSk* (at a fixed frequency) for different *L* values [40]. The QS approach is limited only by the performance of the TDS system. In particular, the maximum and minimum detectable thicknesses can be expressed as *L*max = *c*0/4*n df* and *L*min = *c*0/2*n* Δ*f*, where *df* is the minimum detectable frequency of the system and Δ*f* is its bandwidth. The former is based on the application of the Nyquist theorem, whereas the latter is based on the resolution of neighboring *QS* peaks [40]. For instance, assuming *df* ≅ 3 GHz and an effective refractive index *n* ≅ 2, the maximum and minimum lengths are *L*max ≅ 12.5 mm and *L*min ≅ 12.5 μm, respectively. Even when the best optical length is found, the quality of the retrieved electrodynamic parameters also depends on the choice of a good sample thickness. Thinner samples become transparent to THz radiation, whereas thick samples do not release much information because the transmitted signal is low. In Ref. [41], the authors aim to find the best thickness in order to minimize the standard deviations (*sn*², *sk*²) of the electrodynamic parameters *n*(*ω*) and *k*(*ω*) inherited from the standard deviations of the signals *Esample*(*t*) and *Eref*(*t*) acquired in the time domain. The functions *sn*², *sk*² can be minimized with respect to *L*, leading to the optimal thickness as a function of the absorption coefficient:

$$L_{opt} = \frac{c_0}{\omega\, k(\omega)} = \frac{2}{\alpha(\omega)} \tag{12}$$

Eq. (12) shows that *n*(*ω*) and *k*(*ω*) are affected by some indeterminacy as a consequence of the finite extent of the sample. On the other hand, Eq. (12) makes it possible to obtain reliable results also on very thin samples, provided that the absorption coefficient *α*(*ω*) is high enough. Indeed, fully two-dimensional systems such as single graphene layers have been extensively investigated through THz-TDS systems thanks to the robust absorption coefficient *α*graphene ∼ 5 μm−1 [42–44].
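The thickness bounds and Eq. (12) can be checked numerically. The bandwidth Δ*f* ≈ 6 THz below is an assumed value chosen to reproduce the quoted *L*min; *df* ≈ 3 GHz and *n* ≈ 2 follow the example in the text.

```python
C = 299_792_458.0   # speed of light, m/s

def thickness_limits(n, df, bw):
    """QS-method thickness window: Lmax = c0/(4*n*df), Lmin = c0/(2*n*bw),
    with df the minimum detectable frequency and bw the system bandwidth."""
    return C / (4 * n * df), C / (2 * n * bw)

def optimal_thickness(alpha):
    """Eq. (12): optimal sample thickness 2/alpha (alpha in 1/m)."""
    return 2.0 / alpha

# The text's example: df ~ 3 GHz, n ~ 2 (bandwidth ~ 6 THz assumed here)
lmax, lmin = thickness_limits(2.0, 3e9, 6e12)
print(lmax * 1e3, lmin * 1e6)          # ~12.5 mm and ~12.5 um

# Graphene, alpha ~ 5 um^-1, tolerates sub-micron samples:
print(optimal_thickness(5e6) * 1e9)    # 400 nm
```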

#### **5. Conclusions**


44 New Trends and Developments in Metrology


Without claiming to be exhaustive, we presented a short overview of THz measurement systems presently under development for scientific and industrial applications. We first described the most common sources and detectors that are routinely in use for the manipulation of T-waves. We then focused on the typical architectures presently employed in time-domain spectroscopy and imaging. The importance of metrological aspects in the performance and measurements of THz systems, together with most of the related issues and solutions, was discussed. Since each material has its own "identity card" in this band of the spectrum, a THz-TDS waveform transmitted through a sample is typically rich in information. We therefore presented the main parameters that can be measured from the material frequency response, namely optical or electrical complex quantities such as the refractive index, the conductivity, and the dielectric constant, as well as the data extraction methods and the related errors and uncertainty.

#### **Author details**

Leopoldo Angrisani1\*, Giovanni Cavallo2, Annalisa Liccardo2, Gian Paolo Papari3 and Antonello Andreone3

\*Address all correspondence to: angrisan@unina.it

1 DIETI – Department of Information Technology and Electrical Engineering, CeSMA – Center of Advanced Measurement Services, University of Napoli Federico II, Napoli, Italy

2 DIETI – Department of Information Technology and Electrical Engineering, University of Napoli Federico II, Napoli, Italy

3 DF – Department of Physics, University of Napoli Federico II, Napoli, Italy

#### **References**


[4] Wilmink G. J., Grundt J. E. Terahertz radiation sources, applications, and biological effects. In: Lin J. C., editor. Handbook of Electromagnetic Fields in Biological Systems. CRC Press Taylor & Francis Group; 2012. p. 369–423.

[5] Aurele J. L. A. Review of near field terahertz measurement methods and their applications. Infrared Milli Terahz Waves. 2011; 32: 976–1019.

[6] Popovic Z., Grossman E. N. THz metrology and instrumentation. IEEE Trans. Terahertz Sci. Technol. 2011; 1(1): 133–144.

[7] Kozlov G., Volkov A. Coherent Source Submillimeter Wave Spectroscopy in Millimeter and Submillimeter Wave Spectroscopy of Solids. Springer Verlag; Berlin; 1998. p. 51–109.

[8] Faist J., Sirtori C. InP and GaAs-Based Quantum Cascade in Long-Wavelength Infrared Semiconductor Lasers. John Wiley and Sons; New York; 2004. p. 217–278.

[9] Tredicucci A., Kohler R. Terahertz Quantum Cascade Lasers in Intersubband Transitions in Quantum Structures. McGraw-Hill; New York; 2006. p. 45–105.

[10] Williams B. Terahertz quantum cascade lasers. Nat. Photonics. 2007; 1: 517.

[11] Kleiner R., Steinmeyer F., Kunkel G., Müller P. Intrinsic Josephson effects in Bi2Sr2CaCu2O8 single crystals. Phys. Rev. Lett. 1992; 68: 2394.

[12] Ozyuzer L., Koshelev A. E., Kurter C., Gopalsami N., Tachiki Q., Li M., Kadowaki K. Emission of coherent THz radiation from superconductors. Science. 2007; 318: 1291.

[13] Tani M., Matsuura S., Sakai K., Nakashima S. Emission characteristics of photoconductive antennas based on low-temperature grown GaAs and semi-insulating GaAs. Appl. Opt. 1997; 36(30): 7853–7859.

[14] Shen Y. C., Upadhya P. C., Beere H. E., Linfield E. H., Davies A. G., Gregory I. S. Generation and detection of ultra-broadband terahertz radiation using photoconductive emitters and receivers. Appl. Phys. Lett. 2004; 85(2): 164–166.

[15] Shen Y. C., Upadhya P. C., Linfield E. H., Beere H. E., Davies A. G. Ultra-broadband terahertz radiation from low-temperature-grown GaAs photoconductive emitters. Appl. Phys. Lett. 2004; 85(24): 5827–5829.

[16] Leitenstorfer A., Hunsche S., Shah J., Nuss M. C., Knox W. H. Detectors and sources for ultra broadband electro-optic sampling: experiment and theory. Appl. Phys. Lett. 1999; 74(11): 1516–1518.

[17] Sinyukov A. M., Leahy M. R., Hayden M., Haller M., Luo J., Jen A. K-Y. Resonance enhanced THz generation in electro-optic polymers near the absorption maximum. Appl. Phys. Lett. 2003; 83(15): 3117–3119.

[18] Sinyukov A. M., Hayden L. M. Generation and detection of terahertz radiation with multilayered electro-optic polymer films. Opt. Lett. 2002; 27(1): 55–57.



[35] Duvillaret L., Garet F., Coutaz J. L. A reliable method for extraction of material parameters in terahertz time-domain spectroscopy. IEEE J. Selected Top. Quantum Electron. 1996; 2(3): 739–746.

[36] Yasuda H., Hosako I. Measurement of terahertz refractive index of metal with terahertz time-domain spectroscopy. Jpn J. Appl. Phys. 2008; 47: 1632–1634.

[37] Zhou D. X., Parrott E. P. J., Douglas J. P., Zeitler J. A. Determination of complex refractive index of thin metal films from terahertz time-domain spectroscopy. J. Appl. Phys. 2008; 104: 053110.

[38] Dorney T. D., Baraniuk R. G., Mittleman D. M. Material parameter estimation with terahertz time-domain spectroscopy. J. Opt. Soc. Am. A. 2001; 18(7): 1562–1571.

[39] Pupeza I., Wilk R., Koch M. Highly accurate optical material parameter determination with THz time-domain spectroscopy. Optics Express 2007; 15: 4335.

[40] Scheller M., Jansen C., Koch M. Analyzing sub-100-μm samples with transmission terahertz time domain spectroscopy. Opt. Commun. 2009; 282: 1304–1306.

[41] Withayachumnankul W., Fischer B. M., Abbott D. Material thickness optimization for transmission-mode terahertz time-domain spectroscopy. Opt. Express 2008; 16(10): 7382.

[42] Kuzel P., Nemec H. Terahertz conductivity in nanoscaled systems: effective medium theory aspects. J. Phys. D: Appl. Phys. 2014; 47: 374005.

[43] Ivanov I., Bonn M., Mics Z., Turchinovich D. Perspective on terahertz spectroscopy of graphene. Eur. Phys. Lett. 2015; 111: 67001.

[44] Denisultanov A. K., Azbite S. E., Balbekin N. S., Gusev S. I., Khodzitsky M. K. Optical properties of graphene on quartz and polyethylene substrates in terahertz frequency range. PIERS Proceedings, Guangzhou, China, August 25–28, 2014.

### **Uncertainty of Measurement in Medical Laboratories**

#### Paulo Pereira

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/62437

#### **Abstract**

The "Guide to the Expression of Uncertainty in Measurement" (GUM) is not systematically used in medical laboratories, which is why the laboratorian should understand the Uncertainty Approach and its importance in recognizing the level of realism of results. This chapter presents, discusses, and recommends models fulfilling GUM principles. An example is given for a single test for an easier understanding of the determination of measurement uncertainty. All the practical computations use freeware. Results with larger measurement uncertainty intervals have a significant probability of being unrealistic, giving rise to a high risk of incorrect clinical decisions. A flow chart for the selection of models for the determination of measurement uncertainty in a medical laboratory is recommended.

**Keywords:** bias, bottom-up, GUM, measurement uncertainty, medical metrology, pre‐ cision, top-down, trueness

### **1. Introduction to the "Guide to the Expression of Uncertainty in Measurement"**

The lack of consensus in the international scientific community regarding the expression of measurement uncertainty became evident in 1977. Two years later, the Bureau International des Poids et Mesures (BIPM) and 21 laboratories agreed that it was important to develop an international procedure for expressing measurement uncertainty and for combining individual uncertainty components into a single total uncertainty measurement in chemistry and physics. However, there was no consensus regarding the calculus for the expression of measurement uncertainty. In 1980, the BIPM Working Group on the Statement of Uncertainties developed Recommendation INC-1 "Expression of Experimental Uncertainties" [1], approved by the Comité International des Poids et Mesures (CIPM) in 1981 [2] and reapproved 5 years later [3].

The CIPM suggested to the International Organization for Standardization (ISO) the development of a measurement uncertainty master document based on the Working Group recommendation. The ISO Technical Advisory Group on Metrology (TAG 4) assumed the responsibility to write the guideline. Seven organizations participated in the work of TAG 4: BIPM, International Electrotechnical Commission (IEC), ISO, International Organization of Legal Metrology (OIML), International Union of Pure and Applied Chemistry (IUPAC), International Union of Pure and Applied Physics (IUPAP), and International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), representing the medical laboratory. TAG 4 defined Working Group 3 (ISO/TAG 4/WG 3) with a committee of experts from BIPM, OIML, IEC, and ISO. The guideline "Guide to the Expression of Uncertainty in Measurement" (GUM) was first published in 1993 and was corrected and reprinted 2 years later. Measurement uncertainty is defined as the "nonnegative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used" (entry 2.26 of [4]). It characterizes "the quality of a result of a measurement" expressed as uncertainty (a quantitative indication). It was not intended to be applied to quantities other than numerical ones, and therefore it cannot serve as a standard for the determination of measurement uncertainty in all estimations. GUM is an open-access document, republished with minor corrections by BIPM in 2008 [5]. The terminology of the Uncertainty Approach is part of the International Vocabulary of Metrology (VIM), which is also freely available from the Bureau [4].

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Measurement uncertainty provides information on the level of confidence in the measurement result. One example where the uncertainty is needed is when comparing a result to a clinical decision value. Measurement uncertainty is the quantifiable expression of the doubt associated with the outcome. The expanded uncertainty *U* provides an interval within which the value of the measurand is assumed to lie with a defined level of confidence. It is the result of multiplying the combined standard uncertainty *u*c by a coverage factor *k*. The choice of the factor *k* is based on the level of confidence desired. For an approximate level of confidence of 95%, *k* is usually set to 2, and for a confidence higher than 99%, *k* is typically set to 3, when the degrees of freedom for the combined uncertainty are more than 20.

Commonly, the measurement uncertainty result is expressed as value ± expanded uncertainty. For example, in a screening immunoassay, a ratio of 1.0±0.3 (expanded uncertainty, *k*=2) corresponds to the interval 0.7 to 1.3, considering that the clinical decision ("cutoff") ratio is equal to 1. This is interpreted as follows: the measured value ± expanded uncertainty covers the "cutoff" value, so neither a positive nor a negative result can be declared (i.e., the result is considered indeterminate).
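The cutoff comparison just described can be sketched in a few lines of code (the helper name is hypothetical, not from the chapter; the numbers reproduce the immunoassay example above):

```python
def classify(value, u_combined, cutoff=1.0, k=2):
    """Classify a screening ratio against a clinical cutoff using
    the expanded uncertainty U = k * u_c."""
    U = k * u_combined
    lo, hi = value - U, value + U
    if lo > cutoff:
        return "positive"
    if hi < cutoff:
        return "negative"
    return "indeterminate"  # the interval value +/- U covers the cutoff

# Ratio 1.0 with u_c = 0.15 (U = 0.3, k = 2) -> interval 0.7 to 1.3
print(classify(1.0, 0.15))   # indeterminate: interval covers the cutoff 1.0
print(classify(1.5, 0.15))   # positive: whole interval above the cutoff
```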

Pereira et al. [6] claimed that a valid uncertainty evaluation should consider:

**•** "A clear definition of the measurand (i.e., the quantity to be measured),

**•** A comprehensive specification of the measurement procedure and the measurement objects, and

**•** A comprehensive analysis of the effects impacting the measurement results. From a laboratory view, the main effects are intermediate precision (within-laboratory precision) and any residual measurement bias."

According to the European Federation of National Associations of Measurement, Testing and Analytical Laboratories (Eurolab) Technical Report 1/2007 [7], there are four main approaches to estimating measurement uncertainty fulfilling Uncertainty Approach principles (see 2.1):

**•** Modeling,


Dybkaer [8], past IFCC President and a recognized supporter of the GUM approach, argued in a paper that "this International Standard will outline the principles of estimating measurement uncertainty according to GUM (1993, 1995), ways of simplification in routine measurement, and possibilities of reporting. This International Standard applies to:


Eurachem/CITAC published in 2000 the "Quantifying Uncertainty in Analytical Measurement" (QUAM; revised for the third time in 2012), intended to be applied uniquely to measurement uncertainty in chemistry [9]. This document takes into account the practical experience of estimating measurement uncertainty in chemistry laboratories. It emphasizes that procedures introduced by chemistry laboratories to estimate their measurement uncertainty should be integrated with existing quality assurance measurements, as these measures frequently provide much of the information required to evaluate the measurement uncertainty. It provides unequivocally for the use of validation and related data in the estimation of measurement uncertainty, in full compliance with GUM principles. Unlike GUM, it is intended not only for modeling approaches ("bottom-up") but also for empirical approaches ("top-down"). The "top-down" approaches satisfy the need of chemistry laboratories looking for an alternative to modular models, given that these models are often inapplicable to methods such as those of medical laboratories. Its approach is consistent with the ISO/IEC 17025 and even the ISO 15189 requirements. Because medical laboratories' methods are mainly chemical, QUAM is an adequate reference to support the estimation of measurement uncertainty in this field, and it is referred to in this chapter to describe the basic concepts of models for the estimation of measurement uncertainty (see 2.2).

The Nordtest TR 537 "Handbook for Calculation of Measurement Uncertainty in Environmental Laboratories" [10], along with Eurolab TR 1/2007 "Measurement Uncertainty Revisited: Alternative Approaches to Uncertainty Evaluation" [7], proposes "top-down" approaches easily practicable in laboratories, including medical laboratories. Like the Eurachem/CITAC documents, these are open-access publications. The Finnish Environment Institute (SYKE) released a freeware tool, MUKit, featuring the TR 537 mathematical models [11].

The National Pathology Accreditation Advisory Council (NPAAC) published in 2007 a public document entitled "Requirements for the Estimation of Measurement Uncertainty" to support the Australian accreditation of medical laboratories [12]. This guideline recommends a set of "top-down" models to use in this field.

The Clinical and Laboratory Standards Institute (CLSI)/IFCC C51-A "Expression of Measurement Uncertainty in Laboratory Medicine" was published in January 2012 (the code changed to EP29-A without text revision) [13], and it is the only global guideline for the determination of measurement uncertainty. The CLSI EP29 Working Group is chaired by Anders Kallner, another recognized GUM expert in medical laboratories. Its application to medical laboratories is not consensual [14, 15]. According to this guideline, it "describes the principles of estimating measurement uncertainty and guides clinical laboratories and *in vitro* diagnostic device manufacturers on the specific issues to be considered for implementation of the concept in the medical laboratory. This document illustrates the assessment of measurement uncertainty with both 'bottom-up' and 'top-down' approaches. The 'bottom-up' approach suggests that all possible sources of uncertainty are identified and quantified in an uncertainty budget. A combined uncertainty is calculated using statistical propagation rules. The 'top-down' approach directly estimates the measurement uncertainty results produced by a measuring system. The methods to estimate the precision and bias are presented theoretically and in worked examples."

The ISO/PDTS 25680 "Medical Laboratories—Calculation and Expression of Measurement Uncertainty" [16] was written initially as an International Standard and later as a Technical Specification, and was canceled in June 2011. The contents of the last draft were analogous to the CLSI C51-P.

Currently, the determination of measurement uncertainty is required in ISO/IEC 17025 and ISO 15189 [17]. It is mandatory in Australia, Latvia, and France after November 1, 2016.

The laboratorian should understand that several uncertainty components are immeasurable in the Uncertainty Approach, for example, diagnostic uncertainty in tests with binary results (positive/negative). For further details about other sources of uncertainty, please refer to [18], and for diagnostic uncertainty, please refer to [19].

This chapter presents a discussion of the models for estimating measurement uncertainty and of its determination in a single medical laboratory test. Preferably, the reader should have basic statistical skills to understand the concepts and mathematical models. Several references are cited that should be reviewed for a deeper understanding of the Uncertainty Approach and of the examples of estimation in other medical laboratory tests.

### **2. Principles of GUM**


#### **2.1. Principles of uncertainty approach**

GUM is recognized as the master document on measurement uncertainty throughout the field of metrology, which is why it is also called the "uncertainty bible." Measurement uncertainty evaluation is recognized as applicable to all types of quantitative assay results in physics and chemistry. The theoretical modeling follows the Uncertainty Approach published in 1980. The principles are as follows:

**a.** "The uncertainty in the result of a measurement generally consists of several components, which may be grouped into two categories according to the way in which their numerical value is estimated:

**•** Those which are evaluated by statistical methods (category A) and

**•** Those which are evaluated by other means (category B).

There is not always a simple correspondence between the classification into categories A or B and the previously used classifications into "random" and "systematic" uncertainties. The term "systematic uncertainty" can be misleading and should be avoided.

Any detailed report of the uncertainty should consist of a complete list of the components, specifying for each the method used to obtain its numerical value.


These principles provide a methodology where:


**•** Uncertainty components are expressed as combined standard uncertainty or by the result of combined standard uncertainty multiplied by a defined coverage factor, designated expanded uncertainty [7].

The approach is based on a model designed to consider the interrelation of all sources of uncertainty that significantly affect the measurand. All known systematic errors must be corrected if significant, and they are then not included in the calculations. All the principal sources are combined according to the laws of propagation of uncertainty. The calculated result is the quantitative measured value with its associated combined uncertainty. The uncertainty budget expresses all measurement sources as well as the combined calculus. A Pareto chart can be used to visually display and compare the different sources and their weight in the combined uncertainty. This step requires the medical laboratory to define a mathematical model, which could require the competencies of a mathematician or a statistician with special training in measurement uncertainty. The combined uncertainty is multiplied by a Student's *t* value, and its result, referred to as the "expanded uncertainty," is expressed by a confidence interval (e.g., *k*=2 for a confidence level of approximately 95%) [20, 21]. For further details about the principles of the Uncertainty Approach, please refer to [5].
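The budget-and-expansion step described above can be sketched numerically. A minimal sketch, assuming a simple additive model with independent sources (the component names and values are hypothetical, not from the chapter):

```python
import math

def combined_uncertainty(components):
    """Combine independent standard uncertainties in quadrature,
    per the law of propagation of uncertainty for an additive model."""
    return math.sqrt(sum(u * u for u in components.values()))

def expanded_uncertainty(u_c, k=2):
    """Expanded uncertainty U = k * u_c (k = 2 for ~95% confidence)."""
    return k * u_c

# Hypothetical uncertainty budget for a quantitative assay
budget = {"intermediate precision": 0.12,
          "calibrator value": 0.05,
          "residual bias": 0.08}

u_c = combined_uncertainty(budget)
# Relative weight of each source, as a Pareto chart would display it
for name, u in sorted(budget.items(), key=lambda kv: -kv[1]):
    print(f"{name:25s} {100 * u**2 / u_c**2:5.1f} % of u_c^2")
print(f"u_c = {u_c:.3f}, U (k=2) = {expanded_uncertainty(u_c):.3f}")
```

Note that the quadrature rule applies only when the sources are uncorrelated; correlated sources require the full propagation law with covariance terms.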

#### **2.2. Fundamental concepts of models to the estimation of measurement uncertainty**

#### *2.2.1. Stochastic mathematical equation of the measurement method*

The equation should ensure the stochasticity of the reaction. Nonstochastic equations cause an incorrect estimate of measurement uncertainty. This is a complex task when designing a modeling approach, principally requiring chemical expertise for medical laboratories' tests. It applies uniquely to modeling approaches.

#### *2.2.2. Distribution type in the estimation of standard deviations*

For each uncertain variable, the probable results with a probability distribution should be defined. The conditions surrounding the variable determine the category of the distribution. The categories consist of the following:

**a.** Discrete variables:

	- **–** Bernoulli (i.e., success/failure in a single experiment);
	- **–** Binomial (i.e., the number of successes/failures in some experiments);
	- **–** Multinomial (i.e., the frequency with which a certain result is registered in some experiments); and
	- **–** Poisson (i.e., the number of events of a certain occurrence from a period zero to a period *t*).

**b.** Continuous variables: exponential and normal.

Two types of estimation can be made: Type A or Type B. These different estimates are merged in the combined standard uncertainty. Before the combination, all uncertainty sources must be expressed as standard uncertainties (i.e., as relative standard deviations). The standard deviation shall be measured according to the data distribution, which may not be a normal distribution owing to the number of different uncertainty sources, which can include not only chemical but also physical ones. The following *rules* for distribution can be applied to most of the uncertainty sources in a medical laboratory:

*Rule* 1 for distribution: Normal distribution with known *n*


**b.** Continuous variables: exponential and normal.

**a.** Discreet

*t*).

*2.2.2. Distribution type in the estimation of standard deviations*

**–** Bernoulli (i.e., success/failure in a single experiment);

**–** Binomial (i.e., the number of success/failure in some experiments);

expanded uncertainty [7].

56 New Trends and Developments in Metrology

tainty Approach, please refer to [5].

"Where the uncertainty component was evaluated experimentally from the dispersion of repeated measurements, it can readily be expressed as a standard deviation. For the contribution to uncertainty in single measurements, the standard uncertainty is simply the observed standard deviation; for results subjected to averaging, the standard deviation of the mean is used."

The standard deviation of the mean of *n* values taken from a population is given by:

$$u\left(x_i\right) = s_{\bar{x}} = s / \sqrt{n} \tag{1}$$

#### *Rule* 2 for distribution: Normal distribution with known confidence interval

"Where an uncertainty estimate is derived from previous results and data, it may already be expressed as a standard deviation. However, where a confidence interval is given with a level of confidence *p*% (in the form ±*a* at *p*%), then divide the value *a* by the appropriate percentage point of the normal distribution for the level of confidence given to calculate the standard deviation."

$$u\left(x_i\right) = a / (t\text{-value}) \tag{2}$$

#### *Rule* 3 for distribution: Rectangular distribution

"If limits of ±*a* are given without a confidence level and there is reason to expect that extreme values are likely, it is normally appropriate to assume a rectangular distribution, with a standard deviation of"

$$u\left(x_i\right) = a / \sqrt{3} \tag{3}$$

#### *Rule* 4 for distribution: Triangular distribution

"If limits of ±*a* are given without a confidence level, but there is reason to expect that extreme values are unlikely, it is normally appropriate to assume a triangular distribution, with a standard deviation of"

$$u\left(x_i\right) = a / \sqrt{6} \tag{4}$$

#### *Rule* 5 for distribution: Estimation made by judgment

"Where an estimate is to be made on the basis of judgment, it may be possible to estimate the component directly as a standard deviation. If this is not possible, then an estimate should be made of the maximum deviation that could reasonably occur in practice (excluding simple mistakes). If a smaller value is considered substantially more likely, this estimate should be treated as descriptive of a triangular distribution. If there are no grounds for believing that a small error is more likely than a large error, the estimate should be treated as characterizing a rectangular distribution" (entry 8.1 of [9]).
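As a worked illustration, Rules 1 to 4 above map directly onto one-line conversions from the quoted quantities (*s*, *n*, *a*, the coverage value) to a standard uncertainty. A minimal sketch in Python (the function names are illustrative, not from the text):

```python
import math

def u_from_repeats(s, n):
    """Rule 1: standard uncertainty of a mean of n repeated measurements."""
    return s / math.sqrt(n)

def u_from_confidence(a, z):
    """Rule 2: half-width a quoted at a stated confidence; z is the normal
    (or t) percentage point for that level, e.g. 1.96 for 95%."""
    return a / z

def u_rectangular(a):
    """Rule 3: limits +/-a, extreme values likely."""
    return a / math.sqrt(3)

def u_triangular(a):
    """Rule 4: limits +/-a, extreme values unlikely."""
    return a / math.sqrt(6)
```

For example, a repeatability standard deviation of 0.5 over 25 replicates gives a standard uncertainty of the mean of 0.1.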

#### *2.2.3. Combined standard uncertainty according to the law of the propagation of uncertainty*

VIM defines combined standard uncertainty as the "standard measurement uncertainty that is obtained using the individual standard measurement uncertainties associated with the input quantities in a measurement model. Note: In case of correlations of input quantities in a measurement model, covariances must also be taken into account when calculating the combined standard measurement uncertainty (…)" (entry 2.31 of [4]).

Combined uncertainty is the result of calculus according to the law of the propagation of uncertainty, which combines the sources of uncertainty in a single value of uncertainty. The law of the propagation of uncertainty is derived based on the Taylor series expansion of a functional relationship regularly used in differential calculus. A measurement process can be modeled mathematically using the function:

$$y = f\left(x_1, x_2, \dots, x_n\right) \tag{5}$$

When sources are not correlated (independent variables), the calculus is done according to the variance rules; when sources are correlated (dependent variables, sharing variance), the calculus is done according to covariance (entry Chapter 8 of [20], [22]).

#### *2.2.4. Independent variables*

"The general relationship between the combined standard uncertainty *u*c(*y*) of a value *y* and the uncertainty of the independent parameters *x*1, *x*2, …, *x*n on which it depends is

$$u_c(y) = u\left(y\left(x_1, x_2, \ldots\right)\right) = \sqrt{\sum_{i=1,n} c_i^2\, u\left(x_i\right)^2} = \sqrt{\sum_{i=1,n} u\left(y, x_i\right)^2} \tag{6}$$

where *y*(*x*1, *x*2, …) is a function of several parameters, *x*1, *x*2, …, *ci* is a sensitivity coefficient evaluated as *ci* =∂*y*/∂*xi* , the partial differential of *y* on *xi* , and *u*(*y*,*xi* ) denotes the uncertainty in *y* arising from the uncertainty in *xi* . Each variable's contribution *u*(*y*, *xi* ) is just the square of the associated uncertainty expressed as a standard deviation multiplied by the square of the relevant sensitivity coefficient. These sensitivity coefficients describe how the value of *y* varies with changes in the parameters *x*1, *x*2, etc." (entry 8.2.2 of [9]).
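Equation (6) can also be evaluated numerically when the partial derivatives are awkward to write by hand: the sensitivity coefficients *c*i can be estimated by finite differences. A minimal sketch, assuming a model function `f` that takes a list of input values (names and the example model are illustrative):

```python
import math

def combined_u(f, x, u, h=1e-6):
    """Eq. (6): u_c(y) = sqrt(sum_i (c_i * u(x_i))^2), with the sensitivity
    coefficients c_i = dy/dx_i estimated by central finite differences."""
    uc2 = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        step = h * max(abs(xi), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        ci = (f(xp) - f(xm)) / (2 * step)  # sensitivity coefficient c_i
        uc2 += (ci * ui) ** 2
    return math.sqrt(uc2)

# Hypothetical model y = x1 * x2 with u(x1) = 0.1 and u(x2) = 0.2.
uc = combined_u(lambda v: v[0] * v[1], [10.0, 5.0], [0.1, 0.2])
```

For this product model the exact result is √((5·0.1)² + (10·0.2)²) = √4.25 ≈ 2.06, which the finite-difference estimate reproduces closely.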

#### *2.2.5. Dependent variables*

"Where variables are not independent, the relationship is more complex:

$$u\left(y\left(x_{i,j,\ldots}\right)\right) = \sqrt{\sum_{i=1,n} c_i^2\, u\left(x_i\right)^2 + \sum_{i,k=1,n;\, i \neq k} c_i c_k\, u(x_i, x_k)} \tag{7}$$

where *u*(*xi* , *xk*) is the covariance between *xi* and *xk*, and *ci* and *ck* are the sensitivity coefficients (…). The covariance is related to the correlation coefficient *rik* by

$$u(x_i, x_k) = u\left(x_i\right) u\left(x_k\right) r_{ik} \tag{8}$$

where −1 ≤ *r*ik ≤ 1" (entry 8.2.3 of [9]).

QUAM recommends the Kragten spreadsheet to the determination of measurement uncer‐ tainty [23].
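The Kragten spreadsheet approximates each contribution *u*(*y*, *x*i) by re-evaluating the model with one input shifted by its standard uncertainty, avoiding explicit derivatives. A sketch of the same idea in Python rather than a spreadsheet (the mass/volume model is a hypothetical example):

```python
import math

def kragten_uc(f, x, u):
    """Kragten's numerical scheme: shift each input by its standard
    uncertainty, take the resulting change in y as that input's
    contribution, and combine the contributions as a root sum of squares."""
    y0 = f(x)
    uc2 = 0.0
    for i, ui in enumerate(u):
        shifted = list(x)
        shifted[i] += ui
        uc2 += (f(shifted) - y0) ** 2
    return math.sqrt(uc2)

# Hypothetical model: concentration = mass / volume.
uc = kragten_uc(lambda v: v[0] / v[1], [100.0, 10.0], [0.5, 0.05])
```

For nearly linear models this agrees closely with the derivative-based eq. (6); its appeal is that only repeated evaluations of the model are needed.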

#### *2.2.6. Simpler expressions for independent variables*

It is possible to avoid complex calculus by using simpler expressions derived from the general relationship of variables. Two rules are proposed [root-sum-of-squares (RSS)]:

*Rule* 1 for combination of uncertainty: Sum or difference

"For models involving only a sum or difference of quantities [e.g., *y*=(*p* + *q* + *r* +…)], the combined standard uncertainty *uc*(*y*) is given by

$$u_c\left(y(p, q, \ldots)\right) = \sqrt{u(p)^2 + u(q)^2 + \cdots} \tag{9}$$

*Rule* 2 for combination of uncertainty: Product or quotient

"For models involving only a product or quotient [e.g., *y*=(*p* ⋅ *q* ⋅ *r* ⋅…)] or

$$y = p / \left(q \cdot r \cdot \ldots\right) \tag{10}$$

The combined standard uncertainty *uc*(*y*) is given by

$$u_c(y) = y \sqrt{\left(u(p)/p\right)^2 + \left(u(q)/q\right)^2 + \cdots} \tag{11}$$

where *u*(*p*)/*p*, etc., are the uncertainties in the parameters, expressed as relative standard deviations. Note: Subtraction is treated in the same manner as addition, and division in the same way as multiplication" (entry 8.2.6 of [9]).

In theory, because RSS does not take the partial derivatives into account, it yields a less accurate uncertainty result. In practice, however, this inaccuracy is usually considered nonsignificant to the estimated result.
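The two RSS rules reduce to one-liners; a minimal sketch of eqs. (9) and (11), with names chosen for illustration:

```python
import math

def u_sum(*u_terms):
    """Eq. (9): y = p + q + ...  ->  u_c(y) = sqrt(u(p)^2 + u(q)^2 + ...)."""
    return math.sqrt(sum(t * t for t in u_terms))

def u_product(y, *pairs):
    """Eq. (11): y = p * q / r ...  ->  u_c(y) = |y| * sqrt(sum (u(x)/x)^2).
    Each pair is (u(x), x), i.e., a relative standard deviation."""
    return abs(y) * math.sqrt(sum((u / x) ** 2 for u, x in pairs))
```

For example, for *y* = *p*·*q* with *p* = 10 ± 0.1 and *q* = 5 ± 0.05, *u*c(*y*) = 50·√(0.01² + 0.01²) ≈ 0.71.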

#### *2.2.7. Expanded uncertainty*

VIM defines expanded uncertainty as the "product of a combined standard measurement uncertainty and a factor larger than the number 1. Notes: (1) The factor depends on the type of probability distribution of the output quantity in a measurement model and on the selected coverage probability. (2) The term "factor" in this definition refers to a coverage factor. (3) Expanded measurement uncertainty is termed "overall uncertainty" in paragraph 5 of Recommendation INC-1 (1980) (see the GUM) and simply "uncertainty" in IEC documents" (entry 2.35 of [4]).

The final step in the evaluation of measurement uncertainty is the calculus of expanded uncertainty. Its purpose is the designation of an interval that may be expected to include a large fraction of the distribution of values, which could reasonably be attributed to the measurand. Its calculus is according to the formula:

$$U = k \cdot u \tag{12}$$

where *k* is a coverage factor according to the type of probability distribution and *u* is the combined uncertainty. The choice of *k* value should be done according to factors such as the level of confidence required, any knowledge of the underlying distributions, or any knowledge of the number of values used to estimate random effects.

The value for *k* in a medical laboratory is usually taken from a one- or two-tailed normal distribution for Student's *t*. When the effective degrees of freedom *veff* are higher than about 6, usually *k* is equal to 2, which corresponds to 95% confidence; when *veff* is less than about 6, *k* shall be determined explicitly. The European Co-Operation for Accreditation (EA) recommends, in EA-4/02 "Expression of the Uncertainty of Measurement in Calibration," a formula for the *veff* calculus [24]:

"Estimate the effective degrees of freedom *veff* of the standard uncertainty *u*(*y*) associated with the output estimate *y* from the Welch-Satterthwaite formula:

Uncertainty of Measurement in Medical Laboratories http://dx.doi.org/10.5772/62437 61

$$v_{\mathrm{eff}} = u^4(y) \bigg/ \sum_{i=1}^{N} \frac{u_i^4(y)}{v_i} \tag{13}$$

where *ui*(*y*) (*i*=1, 2, …, *N*), defined in the equation, are the contributions to the standard uncertainty associated with the output estimate *y* resulting from the standard uncertainty associated with the input estimate *xi*, which are assumed to be mutually statistically independent, and *vi* is the effective degrees of freedom of the standard uncertainty contribution *ui*(*y*)."

When a standard uncertainty is obtained from a Type A evaluation, the degrees of freedom *vi* are determined according to a simpler formula:

$$v_i = n - 1 \tag{14}$$

The table with *k* values taken from a *t* distribution evaluated for a coverage probability of 95%, to be used for Type A evaluation, is featured in (entry Annex G of [5]). If *veff* is not an integer, which is usually the case, truncate *veff* to the next lower integer. When a standard uncertainty is obtained from a Type B evaluation, the calculus is more complex. The common practice is to carry out such assessments in a mode that guarantees that any underestimation is avoided. Provided this practice is followed, the degrees of freedom of the standard uncertainty *u*(*xi*) acquired from a Type B evaluation may be taken to be infinite (*vi* → ∞). The estimation of *k* can easily be done in Microsoft® Excel® using the function =TINV(probability; deg\_freedom) (note: for a 95% confidence, the probability is equal to the difference between 1 and 0.95, i.e., 0.05). For further details about *veff* and levels of confidence in measurement uncertainty, please refer to (entry Annex G of [5]).
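Equations (12) to (14) can be chained as follows. The sketch below hardcodes standard two-tailed Student's *t* values at 95% confidence (the same values Excel's =TINV(0.05; df) returns) and falls back to *k* = 2 above 10 degrees of freedom, which is a simplification of the practice described in the text:

```python
import math

# Two-tailed Student's t values for a 95% level of confidence (standard table).
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
       6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def v_eff(u_contribs, dofs):
    """Eq. (13), Welch-Satterthwaite: effective degrees of freedom of u(y)."""
    uc2 = sum(u * u for u in u_contribs)
    return uc2 ** 2 / sum(u ** 4 / v for u, v in zip(u_contribs, dofs))

def expanded_u(u_contribs, dofs):
    """Eq. (12): U = k * u_c, with k read from the t table at floor(v_eff)."""
    uc = math.sqrt(sum(u * u for u in u_contribs))
    v = math.floor(v_eff(u_contribs, dofs))  # truncate to the next lower integer
    k = T95.get(v, 2.0)  # above 10 degrees of freedom, k ~ 2 (about 95%)
    return k * uc
```

For a single Type A contribution with *u* = 1 and *v* = 4, *v*eff = 4 and *U* = 2.776.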

#### *2.2.8. Reporting the measurement uncertainty*

**a.** The report

The measurement uncertainty report should include:


Note: These elements could also be referred to documented sources.

The data and analysis should be presented in such a way that its important steps can be readily followed and the calculation of the result repeated if necessary (…)."

(…) "When reporting the results of routine analysis, it may be sufficient to state only the value of the expanded uncertainty and the value of *k*" (entry 9.2 of [9]).

The combined standard uncertainty or expanded uncertainty is reported. The majority of assay and calibration certificates describe expanded uncertainty. QUAM proposes the following format for reporting:

**Combined standard uncertainty:** "(Result): *x* (units) [with a] standard uncertainty of *uc* (units) [where standard uncertainty is as defined in the ISO/IEC "Guide to the Expression of Uncertainty" and corresponds to 1 standard deviation]." Note: The use of the symbol ± is not recommended when using standard uncertainty as the symbol is commonly associated with intervals corresponding to high levels of confidence. Terms in parentheses [] may be omitted or abbreviated as appropriate" (entry 9.3 of [9]).

**Expanded uncertainty:** "Unless otherwise required, the result *x* should be stated together with the expanded uncertainty *U* calculated using a coverage factor *k*=2 (…). The following form is recommended: "(Result): (*x* ± *U*) (units) [where] the reported uncertainty is [an expanded uncertainty as defined in the International Vocabulary of Basic and General terms in Metrology, 2nd ed., ISO 1993 (note: this chapter suggests the use of the present VIM edition referred to in [4]), calculated using a coverage factor of 2 [which gives a level of confidence of approximately 95%]." Terms in parentheses [] may be omitted or abbreviated as appropriate. The coverage factor should, of course, be adjusted to show the value actually used" (entry 9.4 of [9]).
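The recommended reporting form can be produced mechanically once *x*, *U*, and *k* are known; a minimal formatting sketch (the wording follows the QUAM template quoted above, abbreviated):

```python
def report_expanded(x, U, units, k=2, confidence=95):
    """Format a result with its expanded uncertainty for routine reporting."""
    return (f"(Result): ({x} ± {U}) {units} [expanded uncertainty "
            f"calculated using a coverage factor of {k}, which gives a level "
            f"of confidence of approximately {confidence}%]")
```

For routine analysis, stating only the expanded uncertainty and the value of *k* in this way may be sufficient.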

The customers and consumers of this information must have the skills to understand the purpose of measurement uncertainty and to take appropriate actions based on this evaluation. Such skills are often lacking among the customers and consumers of laboratory tests (physicians, patients, and agencies).

**b.** Reporting in medical laboratory scope

The information required to report the result of a measurement depends on its intended use. In the medical laboratory, the final consumer is the patient, blood donor, or other person who is not responsible for the diagnosis and follow-up. The primary customer is the physician or someone else with responsibility for the technical action (screening, diagnosis, follow-up, or other). The personnel who make a decision on the result must understand the purpose of measurement uncertainty and its value for that judgment. If this does not happen, reports of measurement uncertainty could generate doubts that compromise the clinical decision. These skills are rare among physicians and other healthcare professionals; measurement uncertainty is typically not requested by the physician, who probably does not understand the concept. Therefore, the majority of hospital laboratories do not report measurement uncertainty because it does not add value to clinical decisions and may only cause indecision. The report must be fit for its purpose [(entry 9.4 of [9]), 25]. **Figure 1** summarizes the stages of the measurement uncertainty methodology.

**Figure 1.** Methodology to determine and report measurement uncertainty.

#### *2.2.9. Metrological traceability*

Metrological traceability is defined as the "property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty" (entry 2.41 of [4]). Therefore, a measurement uncertainty result should be metrologically traceable, assuring the comparability of outcomes in a metrological traceability chain, which is defined as the "sequence of measurement standards and calibrations that are used to relate a measurement result to a reference" (entry 2.42 of [4]). **Figure 2** shows an example of a metrological chain for a medical laboratory test. The measurement uncertainties and bias are determined according to the metrological traceability chain. The accuracy, the level of laboratory accreditation, the instability, and the cost per material increase significantly from the medical laboratory to the top of the hierarchy. Conversely, the measurement uncertainty, bias, and availability of materials decrease from bottom to top.

**Figure 2.** A metrological traceability chain of measurement results in a medical laboratory test.

Although properly implemented in general metrology, metrological traceability is not widely employed in most medical laboratory tests due to the unavailability of reference materials and reference methods. Also, "medical traceability" is hard to achieve due to the "physicochemical complexity" of human samples, caused principally by the within-individual and interindividual biological variation [26]. For further details about metrological traceability, please refer to [27].

#### *2.2.10. Effect of bias in measurement uncertainty*

Significant systematic effects should be corrected (entry Annex D of [5]). Where bias is statistically significant and is uncorrected, it should be reported with the measurement uncertainty (entry 7.16 of [9]).

Dybkaer [8] claimed that minimizing diagnostic misclassification requires "trueness obtained through metrological traceability based on a calibration hierarchy." In the article, he argued that the reduction of bias should happen according to seven approaches:


**–** "The type of quantity that is to be measured must be defined sufficiently well. This is particularly demanding when analyte isomorphs or speciation are involved.

**–** The principle and method of measurement must be carefully selected for analytical specificity.

**–** A practicable measurement procedure including sampling must be exhaustively described.

**–** A calibration hierarchy must be defined to allow metrological traceability, preferably to a unit of the International System of Units (SI). Traceability involves plugging into a reference measurement system of reference procedures and commutable calibration materials.

**–** An internal QC (IQC) system must be devised to reveal increases in bias."

Magnusson and Ellison [29] published a paper on the treatment of uncorrected measurement bias when determining measurement uncertainty, suggesting a correction process and determination of uncertainty for corrected results.

In some medical laboratory fields, such as virology, some biases cannot be corrected, such as the seronegative "window period," verification bias, spectrum bias, and length bias. Therefore, estimates of measurement uncertainty may not adequately describe the variability that is observed (entry 1.2.5 of [30]). For further details about dealing with bias in measurement uncertainty, please refer to [29].

#### **2.3. Modeling approach**

#### *2.3.1. Fundamental concepts*

This model is usually expressed in an equation accounting for the interrelation of the input quantities affecting the measurand, pooled essentially with empirical data. Commonly, a combination of different approaches is used to determine measurement uncertainty. The model incorporates a correction to minimize the effect of identified systematic error components. The law of propagation of uncertainty, or the propagation of distributions by the Monte Carlo simulation method, allows computing the combined standard uncertainty *u*c of the result. The combined standard uncertainty is determined, and the factor *k* is selected according to the chosen confidence level to determine the expanded uncertainty. The partial derivative method requires a more complex model equation including partial derivatives of standard uncertainties. For further details about the modeling approach for the evaluation of uncertainty, please refer to (entry Chapter 8 of [5]).
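The propagation of distributions by Monte Carlo simulation mentioned above can be sketched in a few lines: draw each input from its assumed distribution, evaluate the model, and take the standard deviation of the simulated results. The model and input distributions below are hypothetical:

```python
import random
import statistics

def monte_carlo_uc(f, samplers, n=50_000, seed=1):
    """Propagation of distributions: sample the inputs, evaluate the model,
    and summarize the simulated output distribution."""
    rng = random.Random(seed)
    ys = [f([draw(rng) for draw in samplers]) for _ in range(n)]
    return statistics.fmean(ys), statistics.stdev(ys)

# Hypothetical model y = x1 / x2 with normally distributed inputs.
mean, uc = monte_carlo_uc(
    lambda v: v[0] / v[1],
    [lambda r: r.gauss(100.0, 0.5), lambda r: r.gauss(10.0, 0.05)],
)
```

For this ratio model the first-order law of propagation predicts *u*c ≈ 10·√((0.5/100)² + (0.05/10)²) ≈ 0.071, which the simulation reproduces; the Monte Carlo route needs no derivatives and handles nonnormal inputs directly.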

#### *2.3.2. Identification of sources of uncertainty*

#### **a**. Cause-and-effect diagram

The cause-and-effect diagram is valuable when the mathematical model is being formulated. It describes different groups or sources that may cause statistically significant contributions to measurement uncertainty (important sources of uncertainty), for example, sampling, storage conditions, instrument effects, reagent purity, assumed reaction stoichiometry, measurement conditions, sample effects, computational effects, blank correction, operator effects, and random effects (entry 6.7 of [9], entry Stage IV of [31]).

The groups of causes diverge for each assay, and the subcauses could differ even for the same commercial assay in different laboratories (i.e., some subcauses could cause significant uncertainty in one medical laboratory and should be expressed in the mathematical model but, in a different medical laboratory, may not cause significant uncertainty and should not be expressed). This diagram should be carefully formulated for each measurement uncertainty evaluation, which can be a complex task. The diagram should not include the bias measurement, because bias is not allowed in the calculation of measurement uncertainty; however, when bias is statistically significant, the contribution of bias to uncertainty (bias uncertainty) should be included. It should be clear that the mathematical model must be carefully formulated to include the important causes but also to avoid duplication of sources, which would inflate estimates of uncertainty. The skills needed to formulate a correct mathematical model are not common among medical laboratory scientists or researchers. **Figure 3** shows an example of a cause-and-effect diagram that identifies the statistically significant known sources of uncertainty for a certain test. This diagram describes the groups of sources (or standard uncertainties intended for the measurement of combined standard uncertainty) as well as the causes of those sources. The result has an associated measurement uncertainty. The groups of causes (and subcauses) that contribute to this measurement uncertainty were defined after in-depth research that included the equipment process (workflow), service manual, reagents literature, assay development guidelines, the manufacturer's perspective of significant uncertainty causes, and scientific journals (entry Stage IV of [31]).

**Figure 3.** A cause-and-effect diagram for the determination of measurement uncertainty in a hypothetical medical lab‐ oratory test.

#### **b**. Pareto diagram

The purpose of a Pareto diagram (bar diagram) is to highlight the critical sources of uncertainty, recognized as those with a significant contribution to measurement uncertainty. Usually, it is linked to the cause-and-effect diagram, allowing easy detection of the primary sources of uncertainty. Typically, it is considered that approximately 80% of the effects come from 20% of the causes (the Pareto principle, also known as the "80-20 rule"), illustrating that a small fraction of the causes makes the dominant contribution to the effect. **Figure 4** shows an example of a Pareto diagram identifying "operator," "material," and "reagents" as the components with a significant contribution to the measurement uncertainty of a certain test, together accounting for 92% of the uncertainty. When a modular approach is used, the uncertainty components to be combined according to a model equation should be selected according to this principle; the components left unmeasured are then considered statistically nonsignificant. Thompson and Ellison [32] reported that the measurement uncertainty determined in this way was often statistically significantly lower than the standard deviation under reproducibility conditions, indicating underestimation of the uncertainty. The unrecognized uncertainty was referred to as "dark uncertainty" [32]. The mathematical models of the empirical approaches already account for this principle, so the Pareto diagram is of little use to the laboratorian applying those models.

**Figure 4.** A Pareto diagram for the determination of measurement uncertainty in a hypothetical medical laboratory test.
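The 80-20 screening described above can be sketched in a few lines of Python. The component names and percentage contributions below are invented for illustration (they mirror the 92% example of Figure 4); they are not data from this chapter.

```python
# Hypothetical relative uncertainty contributions per component (illustrative
# values only; "operator", "material", and "reagents" dominate, as in Figure 4).
contributions = {
    "operator": 48.0,
    "material": 26.0,
    "reagents": 18.0,
    "instrument": 5.0,
    "environment": 3.0,
}

def pareto(contrib, threshold=80.0):
    """Return the components that together account for at least
    `threshold` percent of the total effect (Pareto / 80-20 screening)."""
    total = sum(contrib.values())
    selected, cumulative = [], 0.0
    # Walk the components from largest to smallest contribution.
    for name, value in sorted(contrib.items(), key=lambda kv: -kv[1]):
        selected.append(name)
        cumulative += 100.0 * value / total
        if cumulative >= threshold:
            break
    return selected, cumulative

components, pct = pareto(contributions)
print(components, round(pct, 1))  # ['operator', 'material', 'reagents'] 92.0
```

With these illustrative numbers, three of the five components cross the 80% threshold at a cumulative 92%, matching the proportion quoted in the text.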

#### *2.3.3. Partial derivative method*


The partial derivative method was the method in use in the years around the GUM publication, which is why it is recognized as the "GUM method." The laboratorian must identify the critical sources of uncertainty and combine them in a stochastic model equation. This requires in-depth skills in chemistry, mathematics, and statistics, which are rarely available in medical laboratories and represent a high cost. Usually, the output of the modeling approach is an "uncertainty budget" summarizing the determination of the combined standard uncertainty and the uncertainty components. Pareto diagrams are also used to compare the input values with the combined standard uncertainty. Usually, model equations are developed not in medical laboratories but in a reagent manufacturer's laboratory during the test's research and development (R&D). Even when medical laboratories use "in-house" tests with published model equations, the approach is generally not practical for the determination of measurement uncertainty because, at a minimum, it requires staff with advanced statistical skills in the Uncertainty Approach, which is also uncommon in medical laboratories. It is easy to recognize the role of this approach at the manufacturer, where it is possible to identify and control the major sources of uncertainty, allowing the manufacturer to reduce certain uncertainty sources. This allows a new test to have a smaller measurement uncertainty (i.e., a high probability that the results are not significantly different from *in vivo* results). However, this role is not intended for a medical laboratory, where the major uncertainty sources are associated principally with good laboratory practices (e.g., training of staff and storage conditions). The manufacturer considers this a useful method for assessing the influence of reference value uncertainties on the pooled uncertainty of the final measurement result.

Some other well-known disadvantages should be considered when deciding to use this method and when interpreting the measurement uncertainty result. Preanalytical sources, including biological sources, are not usually considered, which causes a misestimation of measurement uncertainty: in medical laboratory tests, the measurement uncertainty is then composed only of analytical components, a critical weakness in the accuracy of the determination. The application of an inadequate model equation could lead to under- or overestimation; for example, unsuspected covariance could give rise to an overestimation of measurement uncertainty. Also, the assumed statistical distribution might not be the same as the one associated with the method, which makes the evaluation inaccurate. The complex calculation of partial derivatives is time-consuming and expensive, requiring skilled statisticians. For further details about the partial derivative method, please refer to [5].
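As a sketch of what the "GUM method" computes, the law of propagation of uncertainty for uncorrelated inputs can be applied numerically, with the partial derivatives approximated by central differences instead of being derived by hand. The model *y* = *m*/*V* and all values below are hypothetical, not the chapter's assay model.

```python
import math

def combined_standard_uncertainty(f, x, u, h=1e-6):
    """GUM law of propagation of uncertainty for uncorrelated inputs:
    u_c(y) = sqrt( sum_i (df/dx_i)^2 * u(x_i)^2 ),
    with each partial derivative approximated by a central difference."""
    uc2 = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)   # numerical sensitivity coefficient
        uc2 += (dfdx * u[i]) ** 2
    return math.sqrt(uc2)

# Hypothetical model: concentration c = m / V (mass over volume).
f = lambda v: v[0] / v[1]
x = [10.0, 2.0]     # m = 10 mg, V = 2 L (invented values)
u = [0.1, 0.02]     # standard uncertainties of m and V (invented values)
uc = combined_standard_uncertainty(f, x, u)
print(round(uc, 4))  # ~0.0707 mg/L
```

The sensitivity coefficients here are 1/*V* = 0.5 and −*m*/*V*² = −2.5, so each input contributes (0.05)² to the squared combined uncertainty; the numerical result agrees with the analytical one.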

#### *2.3.4. Propagation of distributions by Monte Carlo simulation method*

The propagation of distributions method is an alternative to the partial derivative method, and it is expected to be recommended in the next revision of the GUM as the primary option when a modeling approach is considered. Currently, the GUM proposes the propagation of uncertainties through the mathematical measurement model. The propagation of distributions replaces the propagation of uncertainties method, correcting some of its obvious shortcomings, such as the assumptions of linearity of the model and of a normal distribution for the random variable representing the possible values of the measurand; this limitation affects the application of the partial derivative method in some tests. The propagation of distributions is commonly determined using the Monte Carlo simulation method (Monte Carlo). In 2008, BIPM published Supplement 1 to the GUM, "Propagation of Distributions Using a Monte Carlo Method" [33], covering the correct application of Monte Carlo in the determination and evaluation of uncertainties. Unlike the partial derivative method, this method does not require a complex calculation of partial derivatives. Monte Carlo simulation involves random sampling from a probability distribution.

Contrary to the partial derivative method, Monte Carlo can be used with linear and nonlinear equations. It is used to simulate measurement results based on the probability density functions (PDFs) of the input quantities. A PDF describes the relative likelihood of a continuous random variable taking on a given value. Monte Carlo propagates the PDFs of the inputs through the mathematical measurement model, providing the results as a PDF describing the values of the measurand consistent with the available data. Nonlinear model equations, asymmetric distributions, and other problems affecting the partial derivative method are not significant here. Another difference from the propagation of uncertainties is that the Welch-Satterthwaite formula is unnecessary for the estimation of the expanded uncertainty. The limits of a symmetrical coverage interval are estimated by the values of the 2.5th and 97.5th percentiles for *p*=95%. An extreme asymmetry could indicate the need to increase *M*; a symmetrical coverage interval is not adequate when the output distribution is not symmetrical. The confidence in the results depends on the demonstration of the model equation and on the number of simulated measurements.


The number of simulated measurements *M* has a major effect on the sampling error of the estimates. Supplement 1 to the GUM requests at least 10<sup>5</sup> repetitions/trials of the model equation to achieve a statistically acceptable result. At present, it is easy to obtain results with a higher *M*, such as 10<sup>6</sup>, given the available software and hardware. A simple test can be performed in Microsoft® Excel® 2007 or a later release using the =NORMINV(RAND();mean;standard\_dev) function and histograms. The reader will observe that the *M*=10 graph does not show a bell-shaped curve (see **Figure 5**, histogram A), that simulations with *M*≥10<sup>2</sup> show a bell-shaped curve (see **Figure 5**, histograms B–F), and that the *M*=10<sup>5</sup> (see **Figure 5**, histogram E) and *M*=10<sup>6</sup> (see **Figure 5**, histogram F) bell-shaped curves are closer to what would be observed with an infinite number of samples. This task was not possible for most users a decade ago. Model equations with unreliable quantities can produce a sampling error that cannot be reduced by increasing *M*. Monte Carlo estimates are considered reasonably accurate when repeated simulations deliver values of *u*c(*y*) that do not diverge from each other in the second significant digit. Other known limitations of Monte Carlo are that the calculated uncertainties vary from one run to the next because of the intentionally random nature of the simulation and that it is hard to identify the most significant contributions to the combined uncertainty without repeating the simulation.

**Figure 5.** Histograms from six simulations with normally distributed data: (A) 10, (B) 10<sup>2</sup>, (C) 10<sup>3</sup>, (D) 10<sup>4</sup>, (E) 10<sup>5</sup>, and (F) 10<sup>6</sup> trials.
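The Excel experiment above can be reproduced in Python with the standard library alone. The two-input product model and its means and standard deviations below are invented for illustration; the 95% coverage interval is taken from the 2.5th and 97.5th percentiles, as described in the text.

```python
import random
import statistics

random.seed(12345)   # fixed seed so the run is reproducible
M = 10 ** 5          # number of trials, as Supplement 1 to the GUM suggests

# Hypothetical nonlinear model y = a * b with normally distributed inputs
# (illustrative only; a real assay would use its own model equation).
a_mean, a_sd = 10.0, 0.2
b_mean, b_sd = 2.0, 0.05

ys = [random.gauss(a_mean, a_sd) * random.gauss(b_mean, b_sd) for _ in range(M)]
ys.sort()

y_est = statistics.fmean(ys)   # estimate of the measurand
u_c = statistics.stdev(ys)     # combined standard uncertainty
# 95 % coverage interval from the 2.5th and 97.5th percentiles.
low, high = ys[int(0.025 * M)], ys[int(0.975 * M)]

print(f"y = {y_est:.3f}, u_c = {u_c:.3f}, 95% interval = [{low:.3f}, {high:.3f}]")
```

Repeating the run with different seeds and checking that *u*c(*y*) agrees to two significant digits mirrors the convergence check described above.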

Because, for most medical laboratory tests, no model equation describing the measurement is known, the modeling approach cannot be applied in those cases using either partial derivatives or Monte Carlo, so their role lies principally at the reagent manufacturer level. For further details about the propagation of distributions by the Monte Carlo simulation method, please refer to [33].

#### **2.4. Empirical approaches**

#### *2.4.1. Single laboratory validation (including QC)*

In single laboratory validation including QC, the major components of variance can regularly be calculated from an "in-house" validation study pooled with internal QC based on repeated determinations of reliable control samples. The evaluation of bias uncertainty is commonly done using a certified reference material (CRM) or by comparison with a reference method. The laboratorian easily associates this approach with a method evaluation model. For further details about single laboratory validation for the evaluation of measurement uncertainty, please refer to Nordtest TR 537 [10] and ISO 11352 [34]. These technical guidelines use general approaches that can also be applied to the results of medical laboratories' tests.

The within-laboratory reproducibility uncertainty *s*Rw is calculated by pooling the repeatability standard deviation *s*r, arising from replicate measurements of human samples, and the intermediate standard deviation *s*I between runs, as in Equation (15) (entry 4 of [10]).

$$s_{\text{Rw}} = \sqrt{s_{\text{r}}^2 + s_{\text{I}}^2} \tag{15}$$

Equation (16) (entry 5 of [10]) is used to compute the bias uncertainty *u*b if a CRM is available. The bias *b* is the mean deviation of the replicate measurement results from the corresponding reference value, *s*b is the bias standard deviation, *u*Cref is the standard uncertainty of the reference value, and *m* is the number of replicate determinations:

$$
u_b = \sqrt{b^2 + \left( s_b / \sqrt{m} \right)^2 + u_{\text{Cref}}^2} \tag{16}
$$

To obtain the combined standard uncertainty, the uncertainty due to precision and that due to bias are combined as in Equation (17) (entry 1.2.2 of [7]):

$$
u_c(y) = \sqrt{s_{\text{Rw}}^2 + u_b^2} \tag{17}
$$
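Equations (15)–(17) can be combined in a short script. All numerical values below are illustrative, not taken from the chapter, and the *k* = 2 coverage factor for the expanded uncertainty is a common convention added here, not something the text prescribes.

```python
import math

# Illustrative values only (not the chapter's data): pooled within-run and
# between-run precision plus bias uncertainty from a CRM, per Nordtest TR 537.
s_r = 0.04       # repeatability standard deviation
s_I = 0.06       # intermediate (between-run) standard deviation
b = 0.03         # mean bias against the CRM reference value
s_b = 0.02       # standard deviation of the bias replicates
m = 10           # number of replicate determinations on the CRM
u_cref = 0.01    # standard uncertainty of the CRM reference value

s_Rw = math.sqrt(s_r ** 2 + s_I ** 2)                             # Eq. (15)
u_b = math.sqrt(b ** 2 + (s_b / math.sqrt(m)) ** 2 + u_cref ** 2)  # Eq. (16)
u_c = math.sqrt(s_Rw ** 2 + u_b ** 2)                             # Eq. (17)
U = 2 * u_c      # expanded uncertainty, conventional coverage factor k = 2

print(round(s_Rw, 4), round(u_b, 4), round(u_c, 4), round(U, 4))
```

With these inputs, the within-laboratory reproducibility dominates the combined uncertainty, which is typical when the bias has been largely corrected.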

This approach considers the within-laboratory reproducibility standard deviation according to two different methods:

(a) A method validation protocol intended for validating the precision of quantitative tests. For most quantitative tests, the approach described in the CLSI EP15-A3 protocol to evaluate precision (and bias) [34] is recommended. A series of five analytical runs with three replicates per run is suitable. For further details about precision evaluation, please refer to (entry Chapter 2 of [35]) and (entry 1.2.2 of [7]).

(b) Using data from between-run variation. The data arise from long-term determinations, usually using IQC data. For some tests, the expiration date of the IQC sample batches is only 1 month; for others, it can be 1 year. Data from batches covering a longer period are preferable, when available, since this period critically influences the statistical power of the determination. IQC data should also be chosen when the moving mean and variance are stable (test in control). For further details about precision evaluation, please refer to (entry 1.2.2 of [7]).

#### *2.4.2. Interlaboratory comparisons*

In the interlaboratory validation approach, the principal sources of variability can often be assessed by interlaboratory studies performed and evaluated according to ISO 5725 [36]. This approach to estimating uncertainty is fully described in ISO 21748 [37].

The approach requires the determination of the between-laboratories reproducibility standard deviation *s*R from the results of an interlaboratory trial according to ISO 5725. In a standardized method, these precision data are usually given in an Appendix. For further details about interlaboratory comparisons approach, please refer to (entry 1.2.3 of [7]).

#### *2.4.3. EQA (PT)*


EQA programs are designed to verify periodically the performance of a laboratory test based on data from a group of laboratories using proficiency tests [36]. A medical laboratory can also use the comparison data to determine measurement uncertainty. EuroLab Technical Report 1/2007 [7] presents an approach for laboratories to evaluate measurement uncertainty. Accordingly, the standard uncertainty measured from the results of the group's participants for a common test can be treated as an early evaluation of the combined standard uncertainty. The reproducibility standard deviation can be determined from the results of the group's laboratories as a combined standard uncertainty. For further details about the EQA (PT) approach, please refer to (entry 1.2.4 of [7]).

#### **2.5. Practical application**

#### *2.5.1. Test and measurand*

To exemplify the determination of measurement uncertainty, an *in vitro* chemiluminescent immunoassay used to determine the concentration of antibodies to the hepatitis C virus (HCV), Abbott Prism® HCV (Abbott Diagnostics, Abbott Park, IL, USA) [38], was chosen in the Transmissible Agents Laboratory, Portuguese Institute of Blood and Transplantation (IPST; blood establishment/blood bank). The measurand is the immunoglobulin concentration in serum or plasma samples; the immunoglobulin binds to solid-phase particles attached to recombinant antigens of the Core, NS3, NS4, and NS5 regions of the HCV genome.

The results of the samples were derived from the photons emitted by the chemiluminescent reaction. They are expressed as a number of photons over a certain period. The number of photons released is proportional to the number of antibody-antigen complexes. This count is corrected by subtracting the number of photons counted in the dark. The equation that determines the test's results is defined by the manufacturer, validated using true-negative and true-positive human samples, and permitted by the national agencies. The "cutoff" *n*c is estimated per analytical run using positive and negative calibrators, where *n*c is the number of photons of the "cutoff" value, the number of photons of the negative calibrator is expressed as the average result of the two lowest replicates out of three, the number of photons of the positive calibrator is expressed as the average result of the two highest replicates out of three, and 0.55 is a factor critical for the false-negative rate. The results are verified on an ordinal scale using the ratio of the sample's corrected number of photons divided by the "cutoff" corrected number of photons. On the ratio scale, the "cutoff" is a constant equal to 1. Ratios equal to or higher than the "cutoff" are classified as positive, and ratios lower than the "cutoff" are classified as negative. A multiparametric positive sample is used at the end of each analytical run to control the run. This sample is supplied solely by the manufacturer, and the acceptance criterion for anti-HCV is very wide, with ratio values from 1.02 to 6.00.
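The ordinal (ratio) classification described above can be sketched as follows. The photon counts and the helper function are hypothetical; the cutoff estimation formula itself, which is not given in full in the text, is deliberately not reproduced, so the cutoff's corrected photon count is passed in as a precomputed value.

```python
# Hypothetical helper illustrating the ratio scale described above; the photon
# counts, dark count, and precomputed cutoff value are invented examples.
def classify(sample_photons, dark_photons, cutoff_corrected_photons):
    """Ratio = sample corrected photon count / cutoff corrected photon count;
    ratios >= 1 (the cutoff constant on the ratio scale) are positive."""
    ratio = (sample_photons - dark_photons) / cutoff_corrected_photons
    return ratio, ("positive" if ratio >= 1.0 else "negative")

print(classify(5200, 200, 4000))  # (1.25, 'positive')
print(classify(2200, 200, 4000))  # (0.5, 'negative')
```

The dark-count subtraction mirrors the correction described in the text, and the constant cutoff of 1 on the ratio scale falls out of the division.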

According to the reagent manufacturer, "no qualitative performance differences were observed" for the immunoassay in controlled studies using anti-HCV nonreactive and reactive specimens when testing the following potentially interfering substances at the specified levels: bilirubin (≤20 mg/dL), hemoglobin (≤500 mg/dL), red blood cells (≤0.4% v/v), triglycerides (≤3,000 mg/dL), or protein (≤12 g/dL) [38].

Other possible sources of false results are not stated in the reagent insert and could be identified by reviewing the scientific literature, particularly Young's extensive summaries of the effects of preanalytic and analytic variables:

- Hepatitis C IgG antibodies (serum, no effect, analytical). Stability of specimen: "No effect observed when serum stored at ambient temperature for 14 days, refrigerated for 31 days, or frozen for 2 months."
- HCV antibodies (serum, no effect, analytical). Stability of specimen: "No effect observed when serum stored at ambient temperature for 14 days, refrigerated for 31 days, or frozen for 2 months."
- HCV core protein (serum, positive, physiological). Test association: "In patients with chronic HCV infection, no significant positive correlation of *r*=0.081 with serum tumor necrosis factor receptor p55; in patients with chronic HCV infection, no significant positive correlation of *r*=0.141 with serum tumor necrosis factor receptor p75" [39].
- Disease, drug, and herbs and natural medicine effects

HCV antibodies (Serum positive):


**a.** Preanalytical components

**b.** There is no related association to disease effects [40].

There is no related association with natural medicine effects [42].

When the medical laboratory receives samples from patients with potentially interfering substances, it is desirable to repeat the measurement using an assay that is not affected by the potential cause, or to request collection of a sample during a period when the potential cause is not present. Correlation with past results, as well as with other assays, can also be helpful. Results affected by preanalytic and analytic variables could be unrealistic, and the probability of an incorrect clinical decision could be critical. At IPST, during the interview of blood donor candidates, candidates with hepatocellular carcinoma, chronic hepatitis, or cirrhosis of the liver were rejected.

Because the model equation is unknown, only empirical methods are used in this example. Interlaboratory comparisons were not performed for the Abbott Prism® immunoassay results, because method validation for screening immunoassays is normally performed only intralaboratorially.

#### *2.5.2. Bias uncertainty*

Data for laboratory bias evaluation are lacking due to the unavailability of a CRM or a reference laboratory. The mean difference between two laboratories of the IPST was used as an approximation for the bias estimation. The bias due to the use of different reagent batches is already included in the within-laboratory reproducibility standard deviation. For further details about the single laboratory validation approach, please refer to (entry 1.2.2 of [7]).

#### *2.5.3. Results*

**Table 1** summarizes the measurement uncertainty determinations for the intralaboratory and EQA approaches. It is evident that the relative standard deviation decreases as the ratio increases, with the highest relative standard deviation observed for samples close to the "cutoff" value. The standard deviation results under repeatability conditions used replicate testing of samples with ratios from 0.5 to 1.5.

#### *2.5.4. Discussion*

The first intralaboratory approach (EP15) yields precision estimates similar to those claimed in the manufacturer's precision study [38], tested in five runs over 5 days, where *s*Rw ranges from 5.7% (average ratio equal to 3.17) to 8.6% (average ratio equal to 0.17) under intralaboratory conditions (within-laboratory reproducibility). In contrast, in the second intralaboratory approach, using replicated results and between-run "cutoff" data, the estimated *s*Rw is 16.5%. The precision of the replicate results was the major uncertainty component, since only results with a ratio close to the "cutoff" value were selected. The relative standard deviation is lower at higher ratios, as the EP15 results with an average ratio equal to 3.70 demonstrate.


| Approach | Method | Ratio | *s*Rw (Equation 3) | *u*Rw | Bias method | *u*b | *u*c | *U* |
|---|---|---|---|---|---|---|---|---|
| Intralab | EP15 validation dataa | 2.2–4.2 | 8.5% | 8.5% | Control sample measured at two laboratoriesa | 6.0% | 10.4% | 21% |
| Intralab | Replicates from ratio 0.5 to 1.5 and between-run "cutoff"b (*s*I = 14.4%) | 0.5–2.8 | 16.5% | 16.5% | Control sample measured at two laboratoriesa | 6.0% | 17.8% | 36% |
| EQAc | EQA scheme | 6.2–10.1 | NA | 13.9% | NA | NA | 13.9% | 28% |

NA, not applicable.

a Sample Accurun-1 Series 2400 batch no. 10017751 (Seracare Life Sciences, Inc., Milford, MA, USA); average ratio equal to 3.70 (15 replicate measurements per run).

b Average ratio equal to 1 (299 results per run).

c For this approach, the obtained reproducibility standard deviation is set as the combined uncertainty (sample UK NEQAS no. 9316); average ratio equal to 2.67 (results from nine laboratories).

**Table 1.** Estimated measurement uncertainty values for the Abbott Prism® HCV method using data from intralaboratory and EQA approaches.

The between-run data are more reliable due to the much higher degrees of freedom of this estimation. This becomes evident when comparing the EP15 precision data with the between-run data: **Table 1** shows that the between-run data are derived from approximately 300 results determined over a long period, whereas the EP15 data were calculated from only 15 results obtained over 5 days, from December 16 to 20, 2014.

The EQA results from nine laboratories using the Abbott Prism® gave an estimated relative expanded uncertainty of 28% [35], compared to 36% for the second intralaboratory approach. The higher measurement uncertainty was caused principally by the heterogeneity of the group's laboratories. A sample with an average ratio equal to 8.2 was used; had a sample with a ratio close to the "cutoff" value been used, the measurement uncertainty would probably have been higher, closer to the percentage of the second intralaboratory approach.
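The combination behind the *u*c and *U* figures quoted above can be sketched in a few lines. The quadrature combination and the coverage factor *k* = 2 are assumed from the 21% ≈ 2 × 10.4% pattern in the results; they are the standard GUM-based empirical forms, not stated explicitly in this passage:

```python
import math

def expanded_uncertainty(u_rw_pct: float, u_b_pct: float, k: float = 2.0):
    """Combine the within-laboratory reproducibility and bias components
    in quadrature, then expand with coverage factor k."""
    u_c = math.hypot(u_rw_pct, u_b_pct)  # combined standard uncertainty, %
    return u_c, k * u_c                  # (u_c, expanded uncertainty U)

# EP15 intralaboratory approach: u_Rw = 8.5%, u_b = 6.0%
u_c, U = expanded_uncertainty(8.5, 6.0)  # u_c ~ 10.4%, U ~ 21%
```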


Long-term IQC data are an alternative to the between-run precision computed from "cutoff" raw data, since they also cover the whole analytical process. Nevertheless, a QC sample with an average ratio not close to 1 makes this only a second choice. For example, IQC data from March 23, 2010 to June 11, 2011, determined from sample Seracare Accurun-1 Series 2400 batch no. 116406 (Seracare Life Sciences, Inc., Milford, MA, USA), were not considered a primary alternative to the between-run "cutoff" data because the average ratio was statistically significantly higher: the average ratio of 293 samples was 2.42, and the intermediate precision standard deviation was 11.1%.

The example should not be understood as applicable to all tests in the medical laboratory. Although the mathematical models could be applied to any quantitative test, some adaptations must be made for different tests. For example, in clinical chemistry, one of the major concerns is the lack of biological components when determining measurement uncertainty. This problem already arose with the total analytical error (TAE) of Westgard et al. [43], which is the sum of the absolute value of the bias and the standard deviation multiplied by a factor *k*. Fraser [26] expanded the TAE concept to the total biological error (TEba), combining not only analytical error components but also biological precision and accuracy. These models are based on the Error Approach (also known as the Traditional Approach or True Value Approach). The Australian NPAAC guideline [12] considers not only analytical sources of precision but also the individual biological variation (CVI), when it is known, using GUM empirical estimations and Fraser's proposal. Westgard QC provides quality-requirement tables with CVI for most chemistry tests [44]. For further details about the empirical determination of measurement uncertainty combining CVI, please refer to [12].

Although the determination of measurement uncertainty is practical in a medical laboratory, as has already been demonstrated [5], there is a serious lack of consensus on the use of the Uncertainty Approach, and medical laboratory staff rarely understand it. Although TAE does not fulfill part of the Uncertainty Approach principles (i.e., it cannot be a model for the determination of measurement uncertainty), it is accepted in medical laboratory practice and recognized by some as playing a role analogous to that of measurement uncertainty [15, 45].

Medical laboratory results affect 60% to 70% of clinical decisions, and 100% of the validation of blood donations, cells, and tissues. Consequently, results with higher measurement uncertainty have a significant probability of being unrealistic, which creates a high risk of incorrect clinical decisions. Measurement uncertainty has been demonstrated to be a tool for verifying the level of confidence in medical laboratory results. Results with large measurement uncertainty intervals have a significant chance of being distant from the true value (i.e., the *in vivo* value). Therefore, measurement uncertainty could also be used as a method validation estimate. The laboratorian could consider TAE analogous to measurement uncertainty but should not confuse the two concepts, which differ principally in the treatment of bias.

#### **3. Recommended models in medical laboratories**

As explained in Section 2.3, the modeling approach is intended principally for the R&D of medical laboratory tests. Its use in medical laboratories is complex and expensive, and the measurement uncertainty results could be misestimated because not all uncertainty components are determined, unlike what happens when empirical data are used. The reagent manufacturer should choose Monte Carlo simulation instead of the partial derivative method, because the propagation-of-distributions result is considered more reliable.
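Both propagation methods can be illustrated on a hypothetical ratio-type measurand q = x/y (the model, numbers, and function names below are this sketch's own assumptions, not the manufacturer's model):

```python
import math
import random

def gum_first_order(x, u_x, y, u_y):
    """First-order (partial derivative) propagation for q = x / y:
    for a quotient, the relative variances add."""
    q = x / y
    u_q = q * math.sqrt((u_x / x) ** 2 + (u_y / y) ** 2)
    return q, u_q

def monte_carlo(x, u_x, y, u_y, n=200_000, seed=1):
    """Propagation of distributions for q = x / y by Monte Carlo
    simulation, assuming normal input distributions."""
    rng = random.Random(seed)
    qs = [rng.gauss(x, u_x) / rng.gauss(y, u_y) for _ in range(n)]
    mean = sum(qs) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in qs) / (n - 1))
    return mean, sd

# Hypothetical signal-to-cutoff ratio: x = 3.2 (u = 0.2), y = 1.0 (u = 0.05)
q_gum, u_gum = gum_first_order(3.2, 0.2, 1.0, 0.05)
q_mc, u_mc = monte_carlo(3.2, 0.2, 1.0, 0.05)
```

For this nearly linear model the two methods agree closely; the Monte Carlo result also captures the slight skew of the quotient distribution, which is why the propagation of distributions is considered more reliable for less linear models.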

The application of empirical models is recommended for medical laboratory tests, since most of their theory and practice are already known and the results are more realistic than those of the modeling approach. However, the laboratorian should select the model most likely to produce realistic results. **Figure 6** shows a flow chart of the steps for selecting a measurement uncertainty model in a medical laboratory. The first model to select is the intralaboratory approach using repeatability and between-run precision from long-term data; this is usually applied to tests that have been in long-term use. The second choice is the intralaboratory approach using EP15 data, expected to be used for an initial uncertainty estimate of a brand-new test. Interlaboratory comparisons are the third choice, used principally when the comparator test is external. The fourth and last option is the EQA (PT); due to the heterogeneity of the group's data, this estimate should be evaluated carefully, as there is a high risk of an unrealistic, overestimated measurement uncertainty that in some cases could be useless.

**Figure 6.** Flow chart to select a model for the determination of measurement uncertainty in a medical laboratory.
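The selection order can be sketched as a small decision function; the boolean flags are this sketch's simplification of the flow chart's questions, not labels taken from Figure 6:

```python
def select_uncertainty_model(long_term_data: bool,
                             brand_new_test: bool,
                             external_comparator: bool) -> str:
    """Sketch of the model-selection order for measurement uncertainty
    in a medical laboratory, in decreasing order of preference."""
    if long_term_data:
        # First choice: repeatability + between-run precision, long-term data
        return "intralaboratory: long-term repeatability and between-run data"
    if brand_new_test:
        # Second choice: EP15 data for an initial estimate
        return "intralaboratory: EP15 data"
    if external_comparator:
        # Third choice: interlaboratory comparison
        return "interlaboratory comparison"
    # Last resort: EQA (PT), with a risk of overestimation
    return "EQA (PT), evaluate carefully"
```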

#### **Author details**

Paulo Pereira


Address all correspondence to: paulo.pereira@ipst.min-saude.pt

Quality Management Department, Portuguese Institute of BloodTransplantation, Lisbon, Portugal

#### **References**


Alternative Approaches to Uncertainty Evaluation. EuroLab; 2007. Retrieved from: http://www.eurolab.org/documents/1-2007.pdf. Accessed: January 4, 2016.

[8] Dybkaer R. From total allowable error via metrological traceability to uncertainty of measurement of the unbiased result. Accred Qual Assur 1999; 4(9): 401–405. DOI: 10.1007/s007690050395

[9] Ellison SLR, Williams A. Quantifying Uncertainty in Analytical Measurement. 3rd ed. Eurachem/CITAC; 2012. Retrieved from: https://www.eurachem.org/images/stories/Guides/pdf/QUAM2012_P1.pdf. Accessed: January 4, 2016.

[10] Magnusson B, Näykki T, Hovind H, Krysell M. NordTest NT TR 537 Handbook for Calculation of Measurement Uncertainty in Environmental Laboratories. 3rd ed. Oslo: Nordic Innovation; 2011. Retrieved from: http://www.nordtest.info/index.php/technical-reports/item/handbook-for-calculation-of-measurement-uncertainty-in-environmental-laboratories-nt-tr-537-edition-3.html. Accessed: January 4, 2016.

[11] SYKE. MUkit—Measurement Uncertainty Kit. Finnish Environment Institute; 2014. Retrieved from: http://www.syke.fi/en-us/Services/Calibration_services_and_contract_laboratory/MUkit__Measurement_Uncertainty_Kit. Accessed: January 4, 2016.

[12] National Pathology Accreditation Advisory Council. Requirements for the Estimation of Measurement Uncertainty. Barton, ACT: NPAAC; 2007. Retrieved from: http://www.health.gov.au/internet/main/publishing.nsf/Content/B1074B732F32282DCA257BF0001FA218/$File/dhaeou.pdf. Accessed: January 4, 2016.

[13] Clinical and Laboratory Standards Institute. EP29-A Expression of Measurement Uncertainty in Laboratory Medicine. Wayne, PA: CLSI; 2012.

[14] Westgard J. Update on Measurement Uncertainty: New CLSI C51A Guidance. Retrieved from: http://www.westgard.com/CLSI-c51.htm. Accessed: January 4, 2016.

[15] Westgard J. Managing quality vs. measuring uncertainty in the medical laboratory. Clin Chem Lab Med 2010; 48(1): 31–40. DOI: 10.1515/CCLM.2010.024

[16] International Organization for Standardization. ISO/PDTS 25680 Medical Laboratories—Calculation and Expression of Measurement Uncertainty. Geneva: ISO; 2008.

[17] Pereira P, Westgard J, Encarnação P, Seghatchian J, Sousa G. Quality management in the European screening laboratories in blood establishments: a view on current approaches and trends. Transfus Apher Sci 2015; 52(2): 245–251. DOI: 10.1016/j.transci.2015.02.014

[18] Pereira P, Westgard J, Encarnação P, Seghatchian J, de Sousa G. The role of uncertainty in results of screening immunoassays in blood establishments. Transfus Apher Sci 2015; 52(2): 252–255. DOI: 10.1016/j.transci.2015.02.015

[19] Pereira P, Westgard J, Encarnação P, Seghatchian J. Evaluation of the measurement uncertainty in screening immunoassays in blood establishments: computation of diagnostic accuracy models. Transfus Apher Sci 2015; 52(1): 35–41. DOI: 10.1016/j.transci.2014.12.017

[20] Kimothi S. The Uncertainty of Measurements. Milwaukee, WI: ASQ; 2002.


Retrieved from: http://www.BIPM.org/utils/common/documents/jcgm/JCGM_101_2008_E.pdf. Accessed: January 4, 2016.

[34] International Organization for Standardization. ISO 11352 Water Quality—Estimation of Measurement Uncertainty Based on Validation and Quality Control Data. Geneva: ISO; 2012.

[35] Clinical and Laboratory Standards Institute. EP15-A3 User Verification of Precision and Estimation of Bias. 3rd ed. Wayne, PA: CLSI; 2014.

[36] International Organization for Standardization. ISO 5725-2 Accuracy (Trueness and Precision) of Measurement Methods and Results—Part 2: Basic Method for the Determination of Repeatability and Reproducibility of a Standard Measurement Method. Geneva: ISO; 1994.

[37] International Organization for Standardization. ISO 21748 Guidance for the Use of Repeatability, Reproducibility and Trueness Estimates in Measurement Uncertainty Estimation. Geneva: ISO; 2010.

[38] Abbott Diagnostics Division. Abbott Prism HCV. Code: 84-5342/R9. Wiesbaden: Abbott Laboratories; 2008.

[39] Young D. Effects of Preanalytical Variables on Clinical Laboratory Tests. 3rd ed. Washington, DC: AACC Press; 2007.

[40] Young D. Effects of Disease on Clinical Laboratory Tests. Volumes 1 & 2. 4th ed. Washington, DC: AACC Press; 2001.

[41] Young D. Effects of Drugs on Clinical Laboratory Tests. Volumes 1 & 2. 5th ed. Washington, DC: AACC Press; 2000.

[42] Narayanan S, Young D. Effects of Herbs and Natural Products on Clinical Laboratory Tests. Washington, DC: AACC Press; 2007.

[43] Westgard J, Neil Carey R, Wold S. Criteria for judging precision and accuracy in method development and evaluation. Clin Chem 1974; 20(7): 825–833.

[44] Westgard J. Quality Requirements. Retrieved from: http://www.westgard.com/quality-requirements.htm. Accessed: January 4, 2016.

[45] Pereira P, Westgard J, Encarnação P, Seghatchian J. Analytical model for calculating indeterminate results interval of screening tests, the effect on seroconversion window period: a brief evaluation of the impact of uncertain results on the blood establishment budget. Transfus Apher Sci 2014; 51(2): 126–131. DOI: 10.1016/j.transci.2014.10.004

### **Silent Speech Recognition by Surface Electromyography**

Andrzej B. Dobrucki, Piotr Pruchnicki, Przemysław Plaskota, Piotr Staroniewicz, Stefan Brachmański and Maciej Walczyński

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/60467

#### **Abstract**

For some time, new speech recognition methods based on the analysis of signals other than acoustic ones have been in use. The purpose of nonacoustic signals is to enable silent communication. One such method is based on the electromyographic signal generated by the human speech articulation system. This article presents a device for electromyographic (EMG) signal acquisition and the first measurements obtained with it.

**Keywords:** Electromyography, Speech recognition

#### **1. Introduction**

Speech recognition is a very important part of computer–human communication systems. The most frequently used methods for speech recognition are based on the analysis of an acoustic signal, either processed in real time or taken from previously recorded data containing a human speech signal.

The use of acoustic signal analysis is not always possible. In a noisy environment, the speech signal cannot be separated from background noise; here the limitation is technical. Conversely, in a very quiet environment, voice communication cannot be used because silence must be maintained. In both cases, other signals generated by the speech articulation system can be useful.

It is possible to use electromyographic (EMG) signals produced by muscles of the human speech system [1, 2]. These signals are characterized by low voltage levels and therefore are

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

difficult to record. Another problem is their low resistance to external interference due to the need for large gains [3].

For practical use, EMG signals must be recorded and amplified. In the next part of this article, the assumptions and implementation of the recording device are presented, and the results of the first attempts to use the device are described.

#### **2. System concept**

#### **2.1. EMG signal sensor**

The designed hardware part of the SVR (subvocal recognition) system should comply with some specific assumptions because of the specific characteristics of the signals:

**•** high quality of the signal path, required for work with low-level signals – characterized by low noise, high dynamics, and immunity to external interference,

**•** flexibility in the configuration of the signal path – variable usage of electrodes, different types of electrodes, and the possibility of simultaneous use of several signal paths,

**•** mobility – portable character of the system, small size, and weight.

Based on the assumptions outlined above, a block diagram of the system has been developed (**Figure 1**), and the necessary requirements for its parts have been defined.

**Figure 1.** Block diagram of the EMG sensor.

The properties of the electrodes used in the system were selected primarily on the basis of the SENIAM (Surface ElectroMyoGraphy for the Non-Invasive Assessment of Muscles) recommendations [4], which call for bipolar electrodes. Due to the small size of the analyzed muscles, the electrodes must be small and placed close to those muscles.

Electrical impulses collected by the electrodes need to be amplified around 500–1000 times. For this purpose, specially designed low-noise amplifiers are necessary. Due to the use of bipolar electrodes, the amplifier operates in a differential regime, which significantly reduces the impact of external noise on the recorded signal.

For further analysis, the electrical EMG signals must be converted into digital form, which allows digital processing of the EMG signals on computer devices. This conversion of analog signals requires a data acquisition (DAQ) card. Due to the requirement of system mobility, the optimal solution is an external adapter with a USB interface. It enables the use of an easy-to-carry laptop, and no additional power supply is needed since enough energy is supplied by the USB port. An important element of the system is a PC, on which the programs for SVR based on electromyographic signals are installed.

#### **2.2. Electrodes**
The elements used to collect the electromyographic signals are the electrodes. The device uses various types of electrodes in order to allow finding the best solution. The self-adhesive electrodes (**Figure 2**) provide the most flexibility for an individual subject, as they can be placed almost anywhere. Initial experiments have shown, however, that such electrodes adhere only to smooth and dry skin; hair, sweat, and wrinkles often prevent their attachment.

**Figure 2.** Self-adhesive electrodes.

An alternative solution is to use cup-shaped metal electrodes (**Figure 3**). These are also individual electrodes, which allows them to be deployed anywhere. They require a special adhesive conductive gel that fixes the electrode in place and increases the conductivity of the connection. The electrodes may be made of different materials, such as silver, silver coated with silver chloride (Ag/AgCl), gold, iron, or tin. Due to their very good properties – a low and time-stable offset voltage – silver electrodes coated with silver chloride are the most commonly used.

**Figure 3.** Cup-shaped electrodes.

The main advantage of individual electrodes is that they can be placed anywhere, although in certain circumstances this may be a significant drawback – namely, when repeatable electrode spacing relative to each other is needed. A good solution in such a situation is the use of bar-type bipolar electrodes (**Figure 4**). These electrodes are compact and have shielded leads. The distance between the electrodes is fixed by their structure.

**Figure 4.** Bar-type electrodes.

During the measurement process, in addition to EMG signal electrodes, it is necessary to use a reference electrode. The purpose of the additional electrode is to determine the reference potential of the speaker. Without this additional electrode, the amount of noise (especially from the power network) in the acquisition increases greatly, which prevents the measurement completely. During the measurements, the reference electrode is connected to the patient in a place where there is no muscular activity.

#### **2.3. Amplifier**


To amplify the electrical EMG signals from the electrodes, specially designed amplifiers are used [5]. The development of the designed systems allows for a perfect fit of the amplifier for the SVR purpose.

Amplifier design guidelines were defined on the basis of solutions used worldwide, as follows:

**•** The amplifier is powered from the board of the DAQ card, so no additional power unit is necessary.

The scheme of the EMG amplifier is presented in **Figure 5**.

**Figure 5.** EMG amplifier.

#### **2.4. Driver of the reference electrode**

The task of the reference electrode is to fix an adequate potential of the patient's body and to provide a return path for the bias currents of the differential amplifier's inputs. Using the driver improves the common-mode rejection ratio (CMRR) of the amplifier by 10 to 50 dB [6, 7]. The operation of this system consists of a continuous comparison of the average voltage from the measuring electrodes with the virtual ground's voltage. If a difference occurs, it is amplified with the opposite sign and applied to the reference electrode. In this way, the system maintains the proper value of the DC component at the measuring electrodes. The reference electrode driver (RLD) is used to raise the CMRR of the instrumentation amplifier. With this higher signal-to-noise ratio (SNR), the differential signal obtained is ensured to carry only relevant information with a minimum of interference currents or irrelevant data. The idea behind the RLD is to maintain in the human subject a known voltage potential that is directly related to the system board ground. This reduces the common-mode DC offset previously found in the system and thereby attempts to cancel any differing DC offsets that individual channels or probes may experience. A feedback network is created that depends on the averaged inputs from the combined instrumentation amplifier floating grounds and a GROUND signal originating from the human. This signal is then sent through an inverting gain stage that completes the feedback loop, effectively counteracting any potential changes in the subject.
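Under the usual negative-feedback analysis (an assumption of this sketch, not a derivation given in the chapter), the driver's effect can be summarized in two small formulas:

```python
import math

def rld_cmrr_improvement_db(loop_gain: float) -> float:
    """Common-mode improvement from the reference-electrode driver.

    The driver inverts and amplifies the averaged electrode potential and
    feeds it back to the reference electrode; the residual common-mode
    voltage is v_cm / (1 + loop_gain), i.e. an improvement of
    20*log10(1 + loop_gain) dB. Loop-gain values are illustrative."""
    return 20.0 * math.log10(1.0 + loop_gain)

def residual_common_mode(v_cm: float, loop_gain: float) -> float:
    # Closed-loop suppression of the subject's common-mode voltage.
    return v_cm / (1.0 + loop_gain)

# Loop gains between roughly 2.2 and 315 correspond to the 10-50 dB
# improvement range reported in [6, 7].
```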

#### **2.5. Virtual ground source**

Because all electronic circuits of the amplifier are supplied with a single voltage, it was necessary to produce an artificial "ground" potential. The DC voltage of such a virtual ground should be equal to half the supply voltage. The easiest way to achieve this is a simple resistive divider (this version was used in the construction of the first amplifier). Unfortunately, changes in the current drawn from the divider cause changes in the voltage. Therefore, in the second version a buffer amplifier was added, which holds the potential of the virtual ground unchanged.
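Why the plain divider fails can be seen with a little arithmetic; the component values here are illustrative, not taken from the design:

```python
def divider_virtual_ground(vcc: float, r1: float, r2: float, r_load: float) -> float:
    """Virtual-ground voltage of a plain resistive divider when a load
    draws current from the tap.

    The load appears in parallel with the lower resistor, pulling the
    tap away from vcc/2 - exactly the problem the buffer amplifier
    removes by supplying the load current itself."""
    r2_eff = (r2 * r_load) / (r2 + r_load)
    return vcc * r2_eff / (r1 + r2_eff)

unloaded = divider_virtual_ground(5.0, 10e3, 10e3, 1e12)  # ~2.5 V
loaded = divider_virtual_ground(5.0, 10e3, 10e3, 10e3)    # sags to ~1.67 V
```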

#### **2.6. ADC card (DAQ)**

Because of the narrow bandwidth of the collected signals and the large signal level after amplification, the requirements for the DAQ card parameters are not critical.

Mobility of the whole system calls for an external card with a USB interface. Precise sampling with high dynamics requires a resolution of not less than 16 bits. For research purposes, the originally formed multichannel system must provide sampling for at least eight independent channels. The sampling rate is not critical: with the top of the analyzed signal band at 500 Hz and margin provided for the antialiasing filter slope, a sampling frequency of 4 kHz is quite adequate. Even when working with eight channels, the resultant rate for the whole card is only 32 kHz.
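The sampling budget above can be checked with a few lines; the byte-rate figure is this sketch's own addition:

```python
signal_bandwidth_hz = 500   # top of the analyzed EMG band
sampling_rate_hz = 4_000    # leaves margin for the antialiasing filter slope
channels = 8
resolution_bits = 16

# Aggregate conversion rate for the whole card: 4 kHz x 8 = 32 kHz.
aggregate_rate_hz = sampling_rate_hz * channels
# Corresponding raw data rate at 16 bits per sample.
data_rate_bytes_s = aggregate_rate_hz * resolution_bits // 8
# Margin over the Nyquist rate of the 500 Hz band.
oversampling = sampling_rate_hz / (2 * signal_bandwidth_hz)
```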

A desirable feature of the card is switchable input ranges, which allow flexible adjustment to the amplitude of the sampled signal and better use of the ADC's dynamic range.

The software part of this project is written in C++ and runs on the Linux operating system. Due to the availability of drivers, the application was tested on Debian Linux, and due to the licensing requirements, it was decided that the program would be a console application. The purpose of the software is data acquisition, signal parameterization, and classification.
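The "signal parameterization" step is not detailed in the chapter; one plausible form is frame-based RMS features, sketched here (in Python for brevity) with frame sizes of our own choosing:

```python
import math

def frame_rms(samples, frame_len=256, hop=128):
    """Frame-based RMS parameterization of an EMG record.

    Slides a window of frame_len samples with a hop of hop samples and
    returns one RMS value per frame - a simple feature vector that a
    classifier can consume. Frame sizes are this sketch's assumption."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        feats.append(math.sqrt(sum(s * s for s in frame) / frame_len))
    return feats

# e.g. a 1-second record at 4 kHz yields (4000 - 256) // 128 + 1 = 30 features
features = frame_rms([0.0] * 4000)
```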

#### **3. EMG measurement by using cup-shaped and double (glued) electrodes**

The measurements were made using adhesive (**Figure 2**), cup-shaped (**Figure 3**), and double (bar-type) electrodes (**Figure 4**). One of the glued electrodes (**Figure 2**) was used as the reference electrode. An example of the reference disc electrode placement is shown in **Figure 6**.

**Figure 6.** Example of the reference disc electrode placement.

#### **3.1. Measurement system**


The measurement system consists of electrodes, a preamplifier of an EMG signal, an AD converter, and a multichannel recorder. As an AD converter, a multichannel recorder has been used, or a sound card with a software for digital signal processing.

A few extra features have been checked before finalizing the configuration of the measurement system. To record the results of the measurements, five different measurement configurations were used. Three of them have been reviewed negatively. The main problems concern the interference from the power supply, noise from FireWire (not always), and demodulation of the EMG signal cables from a nearby radio station transmitter. The use of battery power, the reduction of EMG wires length, and the use of the USB bus made it possible to eliminate the problems listed above.
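The feedback behavior of the reference-electrode driver described in Section 2.4 can be illustrated with a toy discrete-time sketch: a 50-Hz common-mode "hum" enters the loop, and the driver averages the electrode voltages, inverts the result, and drives the body back toward ground. This is a simplified numerical model, not the actual circuit; the simulation rate `FS` and loop gain `MU` below are hypothetical illustration values.

```python
import math

# Toy discrete-time model of the reference-electrode driver (RLD) loop.
# FS and MU are illustrative assumptions, not values from the chapter.
FS = 10_000   # simulation rate, Hz
MU = 0.5      # feedback gain per step (0 < MU < 2 keeps this loop stable)

def simulate(seconds=0.2):
    """Return the residual common-mode voltage seen at the electrodes."""
    drive = 0.0                 # voltage applied to the reference electrode
    residual = []
    for n in range(int(FS * seconds)):
        hum = math.sin(2 * math.pi * 50 * n / FS)  # 1-V, 50-Hz pickup
        body = hum + drive      # body potential = interference + RLD drive
        drive -= MU * body      # averaged signal, inverted, fed back
        residual.append(body)
    return residual

res = simulate()
# Look at the steady state (second half), after the loop has settled.
peak = max(abs(v) for v in res[len(res) // 2 :])
```

With these values the residual hum settles at roughly 0.06 V against the 1-V input, i.e. on the order of 24 dB of suppression, the same order as the 10–50 dB CMRR improvement quoted above for a real driver.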

The EMG signals were stored as wave format files with a 48-kHz sampling frequency and 16-bit resolution. The EMG signal dynamics reach about 20 dB, so these parameters are sufficient. Using the wave format makes analysis of the recorded signals easy. During the measurements, the recorded signal can be observed and displayed in real time. It is also possible to insert markers to easily highlight subsequent repetitions of words. **Figure 7** shows some examples of the measurement results.

**Figure 7.** Sample measurement results for a person with a normal EMG signal (vowel "a," 1 s duration): a) a valid signal and b) a signal with interference (marked). From the top: an EMG signal from the bar-type electrode, an EMG signal from the cup-shaped electrode, and an acoustic signal recorded using a microphone. A – amplitude (arbitrary units), t – time.
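As a quick check of the storage headroom argument (a 16-bit wave file offers roughly 96 dB of range, far above the ~20 dB of EMG signal dynamics), the following self-contained sketch writes and analyzes a 16-bit, 48-kHz wave file with Python's standard `wave` module; the tone parameters are illustrative only, not the actual recordings.

```python
import math
import struct
import wave

FS = 48_000          # sampling rate used for the recordings
BITS = 16            # sample resolution

def write_tone(path, freq=100.0, amp=0.1, seconds=1.0):
    """Write a mono 16-bit wave file containing a sine tone (stand-in signal)."""
    n = int(FS * seconds)
    frames = b"".join(
        struct.pack("<h", int(amp * 32767 * math.sin(2 * math.pi * freq * t / FS)))
        for t in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(BITS // 8)
        w.setframerate(FS)
        w.writeframes(frames)

def rms_dbfs(path):
    """RMS level of a 16-bit mono wave file, in dB relative to full scale."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples)) / 32767.0
    return 20 * math.log10(rms)

write_tone("emg_demo.wav", amp=0.1)
level = rms_dbfs("emg_demo.wav")   # about -23 dBFS for a 0.1 full-scale sine
```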

#### **3.2. Measurements**

The measurements were provided for a group of 14 people (11 men and 3 women), aged 20– 25 years. A typical location for electrodes is shown in **Figure 8**. The electrodes' location differed slightly between individuals; the differences resulted primarily from the differences in the anatomy of the human body. In order to improve the conductivity between the skin and the electrodes, an electroconductive paste was used. The EMG signals were recorded with and without the conductive paste. The use of the paste in some cases slightly improved the conductivity (the recorded EMG signal was greater); the other did not affect the value of the EMG signal. There were no cases of deterioration of the EMG signals after applying the paste.

**Figure 8.** Example of cup-shaped electrodes' location.


To achieve the best repeatability of the measured signals, the research process followed this procedure: 1) application of the sensor to the neck; 2) signal check: if bad, the sensor position is changed; if good, the signal is recorded. The electrodes were attached using a hook-and-loop tape, ensuring pressure stability over the entire measurement process.

During the measurements, it was found that the EMG signals fundamentally differ within the group. For eight subjects, the correct values of the EMG signals were obtained immediately, while for six people, the correct values could not be obtained immediately. Three of these people learned to speak words in a short time, so that the EMG signal had the correct value. The other three were not able to obtain the EMG signal even after some time.

During the measurements, 10 repetitions of the following elements of the dictionary were recorded: the vowels a, e, i, o, u, y; the words: "stop", "start", "dalej" (further), "v levo" (left), "v pravo" (right), "pauza" (pause), "enter", "tak" (yes), "nie" (no).

To get the correct number of repetitions and the corresponding interval between different words, the graphical presentation on the screen was used. With this solution, the signals did not overlap, and it was easy to get the required number of repetitions.

#### **3.3. Measurements summary**

To obtain higher EMG signal values during speaking, it is necessary to use more sensitive sensors. The use of the conductive paste did not increase the value of the signal. The value of the EMG signal depends strongly on the subject and on the electrode position, and finding the best electrode location requires several attempts. For some people, a correct EMG signal could not be obtained at first, but some of them could learn to speak the words so that the EMG signals became correct. Rigid fixing of the sensor helps to improve EMG signal quality.

The first experiments related to the registration of the EMG signals were conducted with the first, introductory version of the electronic circuit. A characteristic of this device was the 1.5-m wires connecting the amplifier with the surface electrodes, which were placed on the skin of the investigated person.

In the process of measuring, some problems were observed, as follows:

**•** a high-level 50-Hz hum – the level of this noise depended strongly on the wires' position,

**•** changes of the noise level during the measurement due to changes of the wires' position, and

**•** a signal from an FM transmitter (an analog radio program), depending on the position in the building.

All the presented problems are connected with the large length of the wires connecting the electrodes with the amplifier. The EMG signal amplifier has a very large amplification and input impedance and is therefore very sensitive to interference from the electromagnetic fields around the device.

The best method to minimize this type of interference is to reduce the wire length from the electrode to the amplifier. To realize this aim, miniaturization of the device is necessary. The first step is to fasten the electrodes directly to the printed circuit board, and the second step is to place the whole device on the neck of the measured person.

#### **4. State of the art of EMG speech recognition**

The early research on electromyography in speech technology was carried out basically only for speech prosthetics (comparison of parameters between healthy people and people with prostheses).

The first significant trials to use EMG for automatic speech recognition were done in 1985–1986 in the USA and Japan. Morse et al. [8] applied four-channel surface electrodes with a sampling frequency of 5120 samples/channel/s. After limiting the amplitude and the frequency band to the range of 100–1000 Hz, 20 repetitions of isolated words (digits in English) for two voices were examined. With the use of ML (maximum likelihood) classification, scores at the level of 60% were obtained. In later research, the same team obtained similar results with the use of neural networks [9].

In a parallel study, the Japanese scientists Sugie et al. [10], using three-channel surface electrodes with a sampling frequency of 1250 samples/channel/s, succeeded in real-time phone recognition experiments. For three voices and 50 Japanese syllables, they reached a proper classification at the level of 64%.

Among the research centers where the most advanced work on subvocal speech (based on EMG) is done, the NASA Ames Research Center should be mentioned [11, 12]. Jorgensen et al. use the wavelet transform or linear predictive coding as the parameterization method, and artificial neural networks, SVM (support vector machines), and HMM (hidden Markov models) as classifiers. For a limited set of five voice commands, they obtained scores of the order of 90%. For the set of English phones and female voices, the scores were lower – at the level of 50%. Among the applications designed by NASA, besides voice command control (e.g., for remote steering of an application or robot), speech recognition in difficult conditions should also be mentioned (e.g., acoustic conditions with a high noise level, or speech pronounced with a gas mask on).

The research carried out by Schultz et al. [13–15] at Karlsruhe University can be considered the most crucial of the last decade, because they tried to overcome the speaker dependence on one hand and the limitation of the dataset on the other. The collected signal database (EMG-PIT) was recorded for 78 speakers with the use of six electrodes placed on the speaker's face and neck. For the signal parameterization, time-domain parameters were applied (mean value in the frame, energy, zero-crossing density). For a sample of 14 speakers (10 sentences each), they obtained a WER (word error rate) of 47.15%.

Manabe et al. [16, 17], carrying out research at the Japanese NTT DoCoMo, used a novel technique of placing the EMG electrodes. In the proposed technique, three electrodes are set on the speaker's fingers, which he keeps on his face. In their preliminary tests with a neural network and signal energy, they reached around a 90% recognition score for five vowels. In their later works, where more sophisticated signal parameterization techniques (MFCC, LPC, etc.) and HMM as the classifier were used, a score at the level of 60% for several commands in Japanese was obtained.

Research on EMG speech signal recognition was also carried out at the Instituto de Investigacion en Ingenieria de Aragon and the Dpt. de Informatica e Ingenieria de Sistemas [18]. The prepared system uses eight electrodes detecting the signals of face muscles. For three voices and 30 Spanish syllables, they reached a proper classification at the level of 69%.

In the research of Bu et al. [19], instead of standard bipolar electrodes placed on a single muscle, a differential signal between two unipolar electrodes placed on different muscles was used, which let them use fewer electrodes to obtain similar recognition results. In classification tests of several Japanese phones, they reached a score of 90.6%.

The parameterization and classification techniques used for the EMG speech signal recognition are presented in **Table 1**. Despite the fact that there are crucial differences between acoustic and EMG signals, most of them are used also in automatic acoustic speech and speaker recognition systems.


| Parameterization | References | Classification | References |
|---|---|---|---|
| WT (wavelet transform) | [11, 12] | HMM (hidden Markov models) | [11, 13, 14, 16] |
| STFT (short-time Fourier transform) | [12, 13, 15, 18] | ANN (artificial neural network) | [12, 19] |
| LPC (linear predictive coding) | [12, 13] | SVM (support vector machines) | [12] |
| Time-domain and zero-crossing parameters | [14, 15, 18] | DTC (decision trees classifier) | [14, 18] |
| Cepstral parameters | [13, 16–18] | LDA (linear discriminant analysis) | [15] |
| Phonetic parameters | [14] | | |

**Table 1.** Parameterization and classification techniques used for EMG speech signal recognition described in the literature.

#### **5. Automatic recognition tests**

The block diagram of the general speech recognition procedure is presented in **Figure 9**. The digital EMG signal is fed to the front end of the automatic speech recognition system. The system also uses the signals stored beforehand in the EMG signal database. The task of the front-end procedure is to obtain a parametric picture of the registered digital EMG signal that can later be recognized in the classification process. The classification consists of two stages: training and recognition.

The preliminary tests were carried out on EMG recordings of six Polish vowels [a, e, i, o, u, y] and eight isolated words [stop, start, dalej, v levo, v pravo, pauza, enter, tak] (vowels and words are given in the SAMPA (Speech Assessment Methods Phonetic Alphabet) notation). The tested material was recorded with two types of electrodes (cup-shaped and self-adhesive) during one recording session with one speaker. Since the problem of speech recognition from the EMG signal is still very new, there is no commonly accepted set of best features for this kind of task in the literature. Most features applied and described in the literature are copied directly from the now well-developed techniques of speech recognition based on the acoustic signal. The EMG signal, which has completely different physical origins than the acoustic one, also has a completely different nature, both in the frequency domain (limited to a substantially lower frequency range) and in the time domain (longer time intervals than for the acoustic signal, caused by the preparation and relaxation of the muscles for each utterance). Since the selection of the most efficient feature set is a big task, it demands longer studies and tests. In the preliminary tests, the feature vector was a combination of spectral MFCC (mel frequency cepstral coefficients) and LPC (linear predictive coding) features. In the following test results, 76 parameters were applied (average and deviation for each): 13 MFCC, 10 LPC, 5 spectral moments, and 10 area-method-of-moments parameters of the MFCC. MFCC and LPC are the most common features applied in speech technology; additionally, the spectral variability was accounted for by using the FFT (fast Fourier transform) moments and their modification for MFCC.

**Figure 9.** Block diagram of speech recognition procedures.
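As one concrete piece of the front-end feature extraction described above, a minimal numpy sketch of the five spectral moments (average and deviation over frames, as in the text) might look as follows. The 2-kHz sampling rate and the 64-sample window (32 ms, with 0.5 overlap) are illustrative assumptions, and the MFCC and LPC parts of the full 76-parameter vector are omitted.

```python
import numpy as np

def frame_signal(x, win, hop):
    """Split a signal into overlapping frames (0.5 overlap -> hop = win // 2)."""
    n = 1 + max(0, (len(x) - win) // hop)
    return np.stack([x[i * hop : i * hop + win] for i in range(n)])

def spectral_moments(frame, fs, n_moments=5):
    """First n spectral moments of one frame's magnitude spectrum."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    p = mag / mag.sum()                      # normalized spectrum as a distribution
    centroid = (freqs * p).sum()             # 1st moment: spectral centroid
    moments = [centroid]
    for k in range(2, n_moments + 1):        # 2nd..5th central moments
        moments.append((((freqs - centroid) ** k) * p).sum())
    return np.array(moments)

def features(x, fs=2000, win=64, hop=32):
    """Average and deviation of the per-frame moments, as in the text."""
    m = np.stack([spectral_moments(f, fs) for f in frame_signal(x, win, hop)])
    return np.concatenate([m.mean(axis=0), m.std(axis=0)])  # 2 * 5 = 10 values

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)        # stand-in for a 1-s EMG recording at 2 kHz
fv = features(x)                      # 10-dimensional feature (sub)vector
```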

The window size for the feature vector was determined experimentally. The tests were carried out for the recognition of six Polish vowels with the Bayes network classifier. Due to the small number of instances, instead of dividing them into training and testing sets, cross-validation was applied. **Table 2** presents the rates of correctly classified instances as a function of window size (expressed in samples and the corresponding time intervals). Kappa is a chance-corrected measure of agreement between the classifications and the true classes: it is calculated by subtracting the agreement expected by chance from the observed agreement and dividing by the maximum possible agreement. The window overlap was set to 0.5.

**Table 2.** Correctly classified instances as a function of window size.

Since the best results were obtained for a window size of 32 ms, the following tests were carried out with this value.
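The kappa statistic described above can be computed directly from a confusion matrix; a small sketch follows, using a made-up two-class matrix for illustration.

```python
def kappa(confusion):
    """Chance-corrected agreement (Cohen's kappa) from a confusion matrix.

    kappa = (p_observed - p_chance) / (1 - p_chance)
    """
    total = sum(sum(row) for row in confusion)
    # observed agreement: fraction of instances on the diagonal
    p_obs = sum(confusion[i][i] for i in range(len(confusion))) / total
    # chance agreement: product of row and column marginals, summed over classes
    p_chance = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (p_obs - p_chance) / (1 - p_chance)

# toy 2-class example: 45 + 40 correct out of 100 instances
cm = [[45, 5],
      [10, 40]]
print(round(kappa(cm), 3))  # 0.7
```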

**Table 3** presents the overall results of correctly classified utterances for the vowels, the words, and three tested classifiers (Bayes network, Naive Bayes, and multilayer perceptron). Some conclusions can be drawn from the obtained results.

| Classifier | Vowels, cup-shaped electrodes | Vowels, self-adhesive electrodes | Words, cup-shaped electrodes | Words, self-adhesive electrodes |
|---|---|---|---|---|
| Bayes network | 97.67% | 90.70% | 97.40% | 96.10% |
| Naive Bayes | 95.35% | 83.72% | 93.51% | 92.21% |
| Multilayer perceptron | 93.02% | 79.07% | 80.52% | 84.42% |

**Table 3.** Overall results (correctly classified instances) for vowel and word recognition.

#### **6. Conclusions**

The level of the EMG signals obtained from the applied electrodes is sufficient for further analysis in most cases.

Subjects who did not produce a correct EMG signal during the first trial may learn to speak the words in a way that allows the signal to be recorded. After a few minutes of training, it was possible to speak the words correctly, so the EMG signals could be recorded. It should be noted that this way of speaking the words was associated with unnatural articulation and very expressive facial movements.

Subjects involved in the research can be divided into three groups:

**•** people who were able to speak in such a way that the EMG signals could be registered at once – the expressive facial movements allowed signals with a higher amplitude to be recorded,

**•** people who were able to produce EMG signals sufficient for recording after training – the expressive facial movements allowed data with normal values to be captured, and

**•** people for whom EMG signals could not be registered even after training.

The great difficulty in the measurement process was obtaining undisturbed EMG signals. Each movement of the speaker was reflected in the EMG signal parameters. Particularly strong interference was generated by the reflex of swallowing saliva – this signal is two to three times greater than the speech signal. It is important to ensure adequate measurement conditions – separation from the power supply and from other signals that can be picked up by the electrodes.

The best scores during the automatic recognition tests were obtained for the Bayes network, but the differences between the chosen classifiers are considerably insignificant. Very similar results were obtained for words and vowels (in acoustic speech recognition, vowel recognition would give better results than word recognition).

The difference between the electrodes indicates that the electrode type or position can have a significant impact on the final scores (the recordings for both types of electrodes were carried out during the same recording session). The difference was quite large in the case of vowel recognition (around 10%) and considerably insignificant in the case of word recognition. The considerably high rates of correctly classified instances are partly caused by the session-dependent and speaker-dependent setting.

#### **Author details**

Andrzej B. Dobrucki\* , Piotr Pruchnicki, Przemysław Plaskota, Piotr Staroniewicz, Stefan Brachmański and Maciej Walczyński

\*Address all correspondence to: andrzej.dobrucki@pwr.edu.pl

Department of Acoustics and Multimedia, Faculty of Electronics, Wrocław University of Technology, Wrocław, Poland

#### **References**


[8] Morse MS, O'Brien EM. Research summary of a scheme to ascertain the availability of speech information in the myoelectric signals of neck and head muscles using surface electrodes. Computers in Biology and Medicine. 1986, 16, no. 6: pp. 399–410.

[9] Jorgensen C, Dusan S. Speech interfaces based upon surface electromyography. Speech Communication. 2010, 52: pp. 354–366.

[10] Sugie N, Tsunoda K. A speech prosthesis employing a speech synthesizer – vowel discrimination from perioral muscle activities and vowel production. IEEE Transactions on Biomedical Engineering. 1985, 32, no. 7: pp. 485–490.

[11] Betts BJ, Binsted K, Jorgensen C. Small-vocabulary speech recognition using surface electromyography. Interacting with Computers. 2006, 18: pp. 1242–1259.

[12] Jorgensen C, Binsted K. Web browser control using EMG based sub vocal speech recognition. In: Proc. of the 38th Hawaii International Conference on System Sciences; 2005.

[13] Maier-Hein L, Metze F, Schultz T, Waibel A. Session independent non-audible speech recognition using surface electromyography. In: IEEE Workshop on Automatic Speech Recognition and Understanding, San Juan, 27–27 November, pp. 331–336; 2005.

[14] Schultz T, Wand M. Modeling coarticulation in EMG-based continuous speech recognition. Speech Communication. 2010, 52: pp. 341–353.

[15] Szu-Chen J, Schultz T, Walliczek M, Kraft F, Waibel A. Towards continuous speech recognition using surface electromyography. In: Proc. Interspeech 2006, Pittsburgh, Pennsylvania, September 17–21, pp. 573–576; 2006.

[16] Manabe H, Zhang Z. Multi-stream HMM for EMG-based speech recognition. In: Proc. of the 26th Annual International Conference of the IEEE EMBS, San Francisco, CA, USA, September 1–5; 2004.

[17] Zhang Z, Manabe H, Horikoshi T, Ohya T. Robust methods for EMG signal processing for audio-EMG-based multi-modal speech recognition. In: COST278 and ISCA Tutorial and Research Workshop on Robustness Issues in Conversational Interaction, University of East Anglia, Norwich, UK, August 30–31; 2004.

[18] Lopez-Larraz E, Mozos OM, Antelis JM, Minguez J. Syllable-based speech recognition using EMG. Conference Proceedings of the IEEE Engineering in Medicine and Biology Society. 2010, 1: pp. 4699–4702.

[19] Bu N, Tsuji T, Arita J, Ohga M. Phoneme classification for a speech synthesizer using differential EMG signals between muscles. In: Proc. of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1–4; 2005.

[20] Staroniewicz P, Brachmański S, Dobrucki AB. Subvocal speech recognition based on electromyographic signal [in Polish: Rozpoznawanie mowy subwokalnej w oparciu o sygnał elektromiograficzny]. Przegląd Telekomunikacyjny, Wiadomości Telekomunikacyjne No. 6; 2013, pp. 571–574.

[21] Mendes J, Robson RR, Labidi S, Barros AK. Subvocal speech recognition based on EMG signal using independent component analysis and neural network. In: Congress on Image and Signal Processing; 2008.

### **Electrical Conductivity Measurements in Agriculture: The Assessment of Soil Salinity**

Fernando Visconti and José Miguel de Paz

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/62741

#### **Abstract**

Soil salinity is an important issue constraining the productivity of irrigated agriculture around the world. The standard method for soil salinity assessment is based on a laboratory procedure that is cumbersome and gives rise to limitations for data-intensive work. The use of sensors for the assessment of the apparent electrical conductivity (EC) of soils offers a way to overcome these constraints. These sensors are based on three electromagnetic phenomena, namely, electrical resistivity, electromagnetic induction, and reflectometry. Each class of sensors presents its own advantages and drawbacks. In the following chapter, these are presented along with the most popular commercial EC sensors used in present-day agriculture, equations for the assessment of soil salinity on the basis of sensor measurements, some examples of application, and present and future development trends.

**Keywords:** electrical conductivity, soil salinity, agriculture, irrigation, electrical resistivity, electromagnetic induction, time domain reflectometry, amplitude domain reflectometry, frequency domain reflectometry

#### **1. Introduction**

Soil salinity is the concentration in the soil pore water of major dissolved ions. These are mainly Na+ , Mg2+, Ca2+, Cl<sup>−</sup> , SO4 2−, HCO3 − , and in some instances also CO3 2−. In agricultural lands, K+ and NO3 − also become major ions and thus, significantly contribute to salinity. All these ions build up in soils as a consequence of both evaporation and plant transpiration, which extracts almost pure water from soils while leaving its salts behind, and also as a consequence of fertilization practices. As soil salinity increases, the potential of the soil pore water decreases, thus oblig‐ ing plants to overcome an increasingly high energy gap for soil water uptake. Additionally,

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

various ions, e.g., Na<sup>+</sup> and Cl<sup>−</sup>, may also cause specific toxicity effects on plants, impair their nutritional balance, and/or decrease the permeability of soils, with further indirect deleterious effects on crops. The development of these stressful conditions poses a remarkable threat to the sustainability of agriculture, mainly under irrigation. Of the world's cultivated land, which amounts to roughly 1500 Mha, about 340 Mha, i.e., 23%, is salt affected [1]. Soil salinization has been estimated to cause income losses of about 12 billion US\$ per year globally [2]. In order to cope with soil salinity and soil salinization, reliable and fast techniques for water and soil salinity evaluation are needed. The rigorous assessment of soil salinity requires, first, the extraction of representative samples of the soil pore water and, second, the subsequent determination of the concentrations of the aforementioned ions at a standard soil water content for their eventual summation in a single parameter known as total soluble salts (TSSs). Although rigorous, this method is expensive, labor-intensive, soil-destructive, and results-deferred, therefore posing severe constraints for data-demanding work both in space and time.

Electrical conductivity (EC) measurements offer a way to overcome some of the limitations of the sampling and laboratory method. Specifically, the second part, consisting of the analytical determination of the major ion contents in the soil pore water, can be replaced by a single measurement of EC, thus remarkably decreasing work and expenses in the laboratory and shortening data turnaround. This methodological change has been so successfully adopted globally that the standard for soil salinity assessment is nowadays the EC at 25°C of soil extracts at water saturation, abbreviated in this chapter as ECe,25. Besides, since not only the soil pore water but also the bulk soil conducts electricity, the soil's apparent electrical conductivity (ECa) can be used as a proxy for the assessment of soil salinity. However, some caution is in order when using ECa in this regard, because ECa depends on other soil properties, importantly water content, texture, structure, and mineralogy, in addition to salinity itself, therefore complicating the interpretation of measurements [3]. Furthermore, the soil ECa can be measured by means of various techniques, which are based on i) potential drop or electrical resistivity (ER), ii) electromagnetic induction (EMI), and iii) reflectometry, either time (TDR), amplitude (ADR), or frequency (FDR) domain. These techniques and the corresponding sensors feature different abilities to sample soils for ECa measurements and therefore give rise to ECa values that differ.

In this chapter, the foundations of the EC techniques nowadays available for the assessment of soil salinity in agriculture will be presented, along with the specific models developed to make estimations of soil salinity. The main commercial sensors used to make measurements of EC in agricultural soils will also be presented, commenting on their strengths and weaknesses. Examples of the practical application of sensors for soil salinity assessment in agriculture will be given. Finally, we will try to envisage the future trends in the development and applications of this technology.

#### **2. Electrical conductivity**

The cause of electrical conductance is the existence of particles with electric charge which, from a microscopic point of view, are loosely bound to specific positions within materials and thus are capable of conveying electric charge. The materials featuring such particles are able to conduct electricity and are thus known as conductors. Liquid water is a conductor because, under natural conditions, it contains dissolved ions, which are movable charged particles. Soil, the other material that concerns us here the most, is a composite conductor in which water, solid particles, and air are present in variable quantities and arrangements, and in which the electric charge carriers are ions dissolved in the water or loosely adsorbed on the solid particles. These ions are, specifically, the major inorganic ions in the aqueous systems of the earth's crust, i.e., the cations Na<sup>+</sup>, Mg<sup>2+</sup>, Ca<sup>2+</sup>, and K<sup>+</sup>, and the anions Cl<sup>−</sup>, SO<sub>4</sub><sup>2−</sup>, HCO<sub>3</sub><sup>−</sup>, CO<sub>3</sub><sup>2−</sup>, and NO<sub>3</sub><sup>−</sup>.

When waters or soils are exposed to no electric field, ions move randomly within them, and macroscopically no net electric current is observed. On the contrary, when an electric field is applied, cations move toward lower potentials whereas anions move toward higher ones, and therefore the water or soil system conducts electricity. The EC (σ) is the proportionality factor between the current density (**J**, A/m<sup>2</sup>) and the electric field (**E**, V/m): **J** = σ**E**. It thus measures the increment of the electric current through each unit area of a surface perpendicular to its flowing direction per unit increase of an externally applied electric field. Therefore, the EC of a material is a physical quantity that indicates its ability to conduct electric current. It is the reciprocal of the material's resistivity (ρ), σ = 1/ρ, and its SI unit is the S m<sup>−1</sup>.

#### **3. Standard EC for soil salinity assessment**

The EC of soil materials measured as such is known as the bulk or apparent EC (ECa, σa) because the measured EC corresponds to that of an EC-equivalent homogeneous single-phase material [3]. Even more important to note is that ECa is different from the EC of the soil pore water (ECp or σp), i.e., the EC of the soil solution separated from the soil solids. The convenient way of estimating soil salinity using EC instead of chemical analysis would involve the measurement of ECp. However, such a direct measurement of ECp is never made because of, on the one hand, a practical issue and, on the other hand, the need for standardization.

From a practical point of view, representative samples of the soil solution at usual field soil water contents are difficult or impossible to obtain [4]. Besides, soil salinity varies continuously as the soil water content does, thus demanding a specific soil water content for standardization. By international agreement, the standard soil water content for salinity assessment is soil water saturation. On the one hand, water saturation is the lowest soil water content at which a sample of the soil solution can be easily obtained and, on the other hand, it is the highest soil water content attainable under field conditions and thus representative of the soil salinity to which plants are exposed. Therefore, the universal standard for soil salinity appraisal is the EC of the soil saturation extract at 25°C [5], which is abbreviated as ECe,25 in this chapter. The salt tolerance of all crops is expressed in terms of ECe,25 [6, 7], and therefore, in agriculture, all measurements obtained with any other method have to be converted to ECe,25 in order to be useful for soil salinity assessment. All ECe,25 data are obtained through the preparation of saturated pastes by equilibration of a disturbed soil sample with deionized water, sampling of the aqueous phase by vacuum extraction, and eventual EC measurement [8]. To correctly interpret ECe,25 values in soil studies and agriculture, various environmental factors must be taken into account and, specifically for agriculture, the salt tolerance of available crops must be known. Since the interpretation of ECe,25 is not within the scope of this chapter, the interested reader is referred to [5] and [9].

#### **4. Temperature effects on EC**

As indicated, in soil studies and agriculture, EC data have to be expressed at a standard temperature of 25°C. This is because the EC of all materials depends on temperature. Specifically, the EC of waters, and also of soils, increases as temperature increases. Thus, unless the temperature of all waters and soils is adjusted to 25°C by equilibration in thermostatic baths, all measurements must be corrected to 25°C. This is done by measuring the temperature (T) of the material under test and subsequently applying an adequate equation to convert from the EC at T (ECT or σT) to the EC at 25°C (EC25 or σ25). The function that relates EC with T depends on the specific salt composition, i.e., on the concentration of every major ion in solution. Since the sum of all ion concentrations is exactly what we estimate through the EC measurement, an empirical function is needed to assess EC25 from ECT. There are two functions in major use for this assessment, provided T is between 3 and 50°C. The ratio model (Eq. 1) is based on the EC evolution of 0.01 M KCl solutions with temperature, while the exponential model (Eq. 2) was developed by [5] by taking EC measurements on different soil saturation extracts and various salt solutions at different temperatures [10, 11]:

$$
\sigma_{25} = \frac{\sigma_T}{1 + 0.0191(T - 25)} \tag{1}
$$

$$
\sigma_{25} = \sigma_T \left( 0.4470 + 1.4034e^{-T/26.815} \right) \tag{2}
$$

The use of empirical equations has a practical consequence in that the difference between true and corrected EC25 depends on T, and this difference decreases as T is closer to 25°C, being null at T = 25°C. Therefore, the EC measurements must be taken as close to 25°C as possible to avoid this empirical bias; usually, a range of 25 ± 5°C is enough to have differences under 1% in most instances. This warning extends to all EC measurements in soil studies and agriculture either in waters or soils.
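The two correction models can be sketched numerically. The following is a minimal illustration of Eqs. 1 and 2 (function names are ours, not from any library); note that both models return values within a fraction of a percent of each other near 25°C:

```python
import math

# Temperature correction of EC readings to 25 degC (illustrative sketch).
# ec25_ratio implements the ratio model (Eq. 1); ec25_exponential implements
# the exponential model (Eq. 2).

def ec25_ratio(ec_t: float, t: float) -> float:
    """Ratio model (Eq. 1): EC25 = ECT / (1 + 0.0191*(T - 25))."""
    return ec_t / (1.0 + 0.0191 * (t - 25.0))

def ec25_exponential(ec_t: float, t: float) -> float:
    """Exponential model (Eq. 2): EC25 = ECT * (0.4470 + 1.4034*exp(-T/26.815))."""
    return ec_t * (0.4470 + 1.4034 * math.exp(-t / 26.815))

# Example: a saturation extract reading 1.20 S/m at 20 degC is corrected
# upward by both models, since EC increases with temperature.
print(ec25_ratio(1.20, 20.0))
print(ec25_exponential(1.20, 20.0))
```

At T = 25°C both functions leave the reading essentially unchanged, which is a quick sanity check of any implementation.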

#### **5. EC measurement methods**

EC measurements in soil studies and agriculture can be readily made with various types of sensors based on different electromagnetic phenomena which are classified into i) ER, ii) EMI, and iii) reflectometry, either TDR, ADR, or FDR.

#### **5.1. Electrical resistivity (ER)**


When an electric field is applied to a piece of a conductor and a current develops, the system is characterized by a resistance R = l/(σA), where l and A are, respectively, the length and the cross-sectional area of the piece of material under test. On the basis of the inverse proportionality between σ and R, the EC can thus be measured by measuring the resistance of a piece of soil or water of known dimensions. Modern ER measurements are taken using alternating currents (ACs) of extremely low to super low frequencies, usually below 30 Hz. At low frequencies, i.e., below 1 MHz, capacitance and electrolytic effects on the measurement electrodes, and besides, amplifier distortions, are avoided, while resistive effects overwhelmingly contribute to the signal, as in a purely direct current (DC) measurement [12, 13].

Nowadays, the simplest device to measure EC is a digital ohmmeter, which is composed of a probe made of a pair of metal electrodes, a power supply able to provide a standard constant current (Istd) to the electrodes and the soil in between, and a voltmeter able to measure the potential drop across them (ΔV). Thus, the soil resistance is simply calculated applying Ohm's law as R = ΔV/Istd, and hence the EC as

$$
\sigma = kI_{std} / \Delta V \tag{3}
$$

where k = l/A is known as the cell constant of the probe. The cell constant depends on the probe design, has units of reciprocal length, and can be analytically assessed in some simple cases.

In soil studies and agriculture, all EC measurements in waters, including irrigation waters and soil extracts such as the saturation extract, are taken by means of ER using laboratory bench or handheld conductimeters. These instruments are calibrated with EC standards to determine their cell constant. In agriculture, most EC values lie in the range between 0.01 and 1.6 S m<sup>−1</sup>, which, in order to take unbiased measurements, determines both the EC of the standards needed for calibration and the appropriate cell constant of the instrument. For agricultural applications, the EC standards cover two orders of magnitude, usually with specific EC values of 0.0147, 0.1413, and 1.288 S m<sup>−1</sup> at 25°C, which correspond to aqueous solutions 0.001, 0.01, and 0.1 M in KCl, whereas the appropriate cell constants are around 1 cm<sup>−1</sup>.
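The calibration-then-measurement sequence can be sketched as follows. This is an illustrative two-point sketch around Eq. 3; the source current and voltage readings are invented numbers, not from a real instrument:

```python
# Two-point conductimeter sketch (Eq. 3): sigma = k * Istd / dV.
# The cell constant k is first calibrated against a KCl standard of known EC.

def cell_constant(ec_standard: float, i_std: float, dv_standard: float) -> float:
    """Calibrate k (1/m) from a standard: k = EC_std * dV / Istd."""
    return ec_standard * dv_standard / i_std

def ec_two_point(k: float, i_std: float, dv: float) -> float:
    """Eq. 3: EC = k * Istd / dV, in S/m."""
    return k * i_std / dv

# Calibration with the 0.01 M KCl standard (0.1413 S/m at 25 degC),
# assuming a 1 mA source and a measured drop of 0.70 V:
k = cell_constant(0.1413, 1e-3, 0.70)   # ~98.9 1/m, i.e. ~0.99 1/cm
# Measurement of a soil extract with the same source and a drop of 0.45 V:
print(ec_two_point(k, 1e-3, 0.45))
```

The resulting cell constant of about 1 cm<sup>−1</sup> is consistent with the values quoted above for agricultural work.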

The two-point ER method just presented has some constraints that become non-negligible when the method is used for soils. This is because the two-point ER method measures not only the conductor's resistance but also the resistance of the probe electrodes and wiring and, besides, is altered by the Galvani potential difference that develops across the contacts of the electrodes with the conductor. To overcome these limitations, the four-point method developed by [14] for geophysical sounding is used instead of the simple two-point method.

Contrary to the two-point method, in which the potential drop is measured with the same electrodes used to inject the current into the soil, in the four-point method, each function is carried out with its own pair of electrodes. By using separate electrodes, neither the electrodes resistances, nor the contact resistances between the metal electrodes and the soil show up in the measured EC, thus making interpretation of data more straightforward. The electrodes can be arranged according to various configurations, which differ in geometry and electrode spacing [15].
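The conversion from measured quantities to ECa depends on the chosen configuration. As an illustrative sketch, for the classical Wenner array (four equally spaced in-line surface electrodes, current through the outer pair, potential drop across the inner pair), the standard geometric relation is ρ = 2πa·ΔV/I, hence σ = I/(2πa·ΔV); the numbers below are invented:

```python
import math

# Four-point ECa sketch for the Wenner array with electrode spacing a (m):
# apparent resistivity rho = 2*pi*a*dV/I, so sigma_a = I/(2*pi*a*dV).

def ec_wenner(i: float, dv: float, a: float) -> float:
    """Apparent EC (S/m) from a Wenner four-point sounding."""
    return i / (2.0 * math.pi * a * dv)

# 20 mA injected, 0.12 V across the inner electrodes, 0.5 m spacing:
print(ec_wenner(0.020, 0.12, 0.5))
```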

#### **5.2. Electromagnetic induction (EMI)**

The ECa of soil materials can be estimated with no contact by means of EMI. In EMI instruments, there is one transmitting (Tx) and one or more receiving (Rx1, Rx2, etc.) coils (**Figure 1**). The Tx is connected to an oscillator operating at very low frequencies (VLF), specifically in the range between 1 and 100 kHz, in which the soil conductivity response has been found to be almost independent of frequency [16]. This way, the Tx generates a primary time-varying magnetic field (Hp). By the EMI phenomenon, this time-varying magnetic field induces a varying electric field in the soil and, in response, many alternating eddy currents are generated within it. The amplitude of the total alternating electric current is proportional to i) the EC of the soil, ii) the rate of change of the primary magnetic field (Hp), and iii) the orientation and proximity of the instrument to the soil. The ACs generated in the soil lead in turn, by the same EMI phenomenon, to the creation of a secondary magnetic field (Hi). The resulting total magnetic field (Hp + Hi) induces again, by the same EMI phenomenon, a current in the receiving coil(s) of the instrument. The amplitude of the quadrature-phase (out-of-phase) component of this total field is the one related to a depth-weighted soil EC (ECa\*) [17]. Importantly, note that ECa\* is different from ECa. This signal is amplified and formed into an output voltage, which is shown as the ECa\* value to the user [18].

**Figure 1.** Schematic overview of the functioning of an EMI sensor, specifically the DUALEM-1 in which one receiving coil is coplanar (Rx1) and the other is perpendicular (Rx2) to the transmitting coil (Tx).
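Ground-conductivity meters operating at low induction numbers commonly convert the quadrature ratio of the secondary to the primary magnetic field (Hi/Hp in this chapter's notation) into ECa via McNeill's linear approximation, σa ≈ 4/(ωμ0s²) · (Hi/Hp)Q, where s is the coil spacing. This is a hedged sketch of that approximation; the instrument parameters below are illustrative assumptions:

```python
import math

# McNeill low-induction-number (LIN) approximation for EMI sensors:
# sigma_a ~= 4/(omega * mu0 * s^2) * (Hi/Hp)_quadrature.

MU0 = 4.0 * math.pi * 1e-7  # magnetic permeability of free space, H/m

def ec_emi(hi_hp_quadrature: float, f: float, s: float) -> float:
    """Apparent EC (S/m) from the quadrature Hi/Hp ratio at frequency f (Hz), coil spacing s (m)."""
    omega = 2.0 * math.pi * f
    return 4.0 * hi_hp_quadrature / (omega * MU0 * s ** 2)

# A 14.6 kHz instrument with 1 m coil spacing reading (Hi/Hp)_Q = 1.5e-3:
print(ec_emi(1.5e-3, 14.6e3, 1.0))
```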

#### **5.3. Reflectometry**

Reflectometry is based on the effects soil has on primary alternating electric currents transmitted into the soil via embedded electrodes. In reflectometry, the characteristics of the ACs change in response to the dielectric properties of the soil medium, and therefore other alternating signals are generated. These secondary signals are recorded, and their speed, amplitude, or frequency is analyzed, thus giving rise to three different techniques. These are the two genuine reflectometry methods, TDR and ADR, in addition to FDR, which is based on the electrical resonance of RLC circuits.

Reflectometry ECa measurements are based on the fact that conduction is one of the main mechanisms through which electromagnetic signals transmitted into the soil lose energy. This energy loss is represented by the imaginary part of the soil dielectric permittivity, which is conveniently represented by a complex frequency-dependent variable: ε\*(f) = ε'(f) − jε"(f), where j = √−1 is the imaginary unit, and ε'(f) and ε"(f) are, respectively, the real and imaginary parts of ε\*. The real part of the permittivity represents energy storage and is mainly, but not only, related to the soil water content (θ, m<sup>3</sup>/m<sup>3</sup>) because of the remarkably higher relative dielectric permittivity of pure water (80) compared with soil solids (3–5) and soil air (1). The imaginary part of the permittivity depends on various energy loss mechanisms, such as dipole relaxation (εrel"(f)) and, importantly, ECa, through:

$$
\varepsilon''(f) = \varepsilon''\_{rel}(f) + \frac{\sigma\_a}{2\pi f \varepsilon\_0} \tag{4}
$$

where ε0 = 8.85418 × 10<sup>−12</sup> F/m is the vacuum permittivity. In reflectometry, ECa is assessed from energy losses usually assuming that the other loss effects encompassed in εrel"(f) are negligible compared with the conductivity loss, i.e., σa/2πfε0. Since frequency similarly affects both εrel"(f) and σa/2πfε0, ECa measurements are barely affected by frequency changes. On the contrary, the assessment of θ is made on the basis of the apparent permittivity (εa), which depends on both energy storage and loss:

$$\varepsilon\_a = \frac{\varepsilon'(f)}{2} \left[ 1 + \sqrt{1 + \left( \frac{\varepsilon''(f)}{\varepsilon'(f)} \right)^2} \right] \tag{5}$$

As a result, since energy losses decrease as frequency increases, εa approaches the real permittivity (ε') as frequency increases, and therefore θ estimations become more accurate at higher frequencies.
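Equations 4 and 5 can be worked through numerically to see this frequency dependence. The sketch below computes the conductivity contribution to ε" (with relaxation losses neglected, as discussed above) and the resulting εa, where the loss tangent is tan δ = ε"/ε'; the soil values are illustrative:

```python
import math

# Eq. 4: conductivity contribution to the imaginary permittivity,
# eps'' = sigma_a / (2*pi*f*eps0), relaxation losses neglected.
# Eq. 5: apparent permittivity from eps' and eps'' via tan(delta) = eps''/eps'.

EPS0 = 8.85418e-12  # vacuum permittivity, F/m

def conductivity_loss(sigma_a: float, f: float) -> float:
    """Relative imaginary permittivity due to ECa at frequency f (Hz)."""
    return sigma_a / (2.0 * math.pi * f * EPS0)

def apparent_permittivity(eps_real: float, eps_imag: float) -> float:
    """Eq. 5 with loss tangent tan(delta) = eps''/eps'."""
    tan_delta = eps_imag / eps_real
    return 0.5 * eps_real * (1.0 + math.sqrt(1.0 + tan_delta ** 2))

# A moderately saline soil (sigma_a = 0.1 S/m, eps' = 25):
for f in (100e6, 1e9):  # FDR-like vs TDR-like frequency
    loss = conductivity_loss(0.1, f)
    print(f, loss, apparent_permittivity(25.0, loss))
```

At 1 GHz the loss term is an order of magnitude smaller than at 100 MHz, and εa sits much closer to ε', illustrating why higher-frequency instruments estimate θ more reliably.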

#### *5.3.1. Time domain reflectometry (TDR)*


TDR is a broadband high-frequency technique originally applied to soil studies as a means for the fast in situ estimation of θ [19]. Reflectometry in the time domain is based on the reflection that precisely timed primary electrical pulses undergo when sent along a transmission line (TL) ending in various electrodes inserted into the soil. The three essential parts of a TDR instrument are i) a TL, formed in turn by a coaxial cable ending in a probe of two to four metal rods, ii) a fast-rise signal generator operating at high frequencies between 0.02 and 3 GHz, and iii) a fast oscilloscope (**Figure 2** left). The oscilloscope is fast enough to sample the voltage level of the TL at intervals down to around 100 ps and hence obtain the TDR waveform [21] (**Figure 2** right). This way, the fast-rise electromagnetic pulse, composed of a wide range of frequencies, transmitted to the TL is partially reflected back and forth at the end and at the beginning of the TL, giving rise to an electromagnetic oscillation whose voltage amplitude is sampled. The main characteristics of the primary and reflected pulses that are useful in this regard are i) the back-and-forth traveling velocity and ii) the attenuation. The first characteristic is related to εa, and thus mainly to θ, whereas the second is mainly related to just the imaginary part of the permittivity and thus to ECa [22, 23]. The voltage at late time (**Figure 2** right) is usually used to derive ECa by means of the Giese and Tiemann equation [24], as indicated in [25]. However, in practice, several different voltage values in the TDR attenuation curve can be reliably used and, therefore, various equations have been proposed to calculate ECa from TDR waveforms [26, 27].

**Figure 2.** Schematic diagram of a TDR instrument (left) and graph showing the attenuation effect of soil salinity on TDR waveforms after [20] (right).
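The late-time analysis can be sketched with the widely used form of the Giese and Tiemann relation, σa = (Kp/Zc)·(1 − ρ∞)/(1 + ρ∞), where Kp is the probe cell constant, Zc the cable impedance, and ρ∞ the long-time reflection coefficient read off the waveform. This is a hedged sketch; the parameter values are illustrative assumptions, not from any instrument:

```python
# ECa sketch from the late-time TDR reflection (Giese-Tiemann form):
# sigma_a = (Kp/Zc) * (1 - rho_inf) / (1 + rho_inf),
# with Kp the probe cell constant (1/m), Zc the cable impedance (ohm),
# and rho_inf the long-time reflection coefficient.

def ec_tdr(kp: float, zc: float, rho_inf: float) -> float:
    """Apparent EC (S/m) from the long-time TDR reflection coefficient."""
    return (kp / zc) * (1.0 - rho_inf) / (1.0 + rho_inf)

# A probe with Kp = 3 1/m on a 50-ohm cable, late-time reflection of 0.2:
print(ec_tdr(3.0, 50.0, 0.2))
```

A more attenuated (saltier) waveform gives a smaller ρ∞ and hence a larger ECa, matching the attenuation effect shown in **Figure 2** (right).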

#### *5.3.2. Amplitude domain reflectometry (ADR)*

Reflectometry in the amplitude domain is another genuine reflectometry technique like TDR. Elements similar to those previously described for TDR equipment are used for ADR. In ADR, however, the measurement is based on the amplitude features of the standing electromagnetic oscillation in different parts of the TL [28]. Besides, since in ADR the signal generator operates in a frequency range between 10 and 100 MHz, i.e., significantly lower than in TDR, instrumentation prices decrease.

#### *5.3.3. Frequency domain reflectometry (FDR)*

Contrary to TDR and ADR, FDR is not based on the analysis of reflected electromagnetic pulses but on the resonance features of RLC circuits in which a capacitor is formed by two electrodes and the in-between and surrounding soil. The RLC circuit in FDR instruments is formed by a signal generator plus resistive, inductive, and capacitive elements, including the lossy capacitor involving the soil (**Figure 3**). This lossy capacitor is characterized by a resistance (R1) and a capacitance (C1), which depends, in addition to the soil, on its specific design and on εa according to C1 = gεa, where g is the capacitor design factor in length units. The soil-sensing capacitor is connected in parallel with the parasitic capacitance due to the circuit board and connections (C2), and in series with the circuit board, which is characterized by a well-known capacitance (C3) [29, 30]. Therefore, g and C2 are instrument-specific, and even sensor-specific, and their values must be obtained by calibration using liquids of known dielectric properties.

In FDR, the signal generator operates at frequencies below those of TDR, in the range from 10 to 200 MHz. At these relatively low frequencies, εa measurements are more dependent on other soil properties in addition to θ and, therefore, FDR is less reliable than TDR for θ estimations [31]. In FDR instruments, the frequency of the AC is usually varied within a narrow range until resonance is achieved in the circuit, i.e., until the amplitude is maximum. The resonant frequency depends mostly on εa, while the amplitude depends mostly on ECa. Therefore, in FDR, ECa is assessed from the amplitude at resonance [12].

**Figure 3.** Circuit diagram for a single-probe FDR sensor.
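The circuit description above can be turned into a forward sketch of the resonance. We assume the soil capacitance is C1 = gε0εa with g in metres (the explicit ε0 factor is our assumption, to make C1 = gεa dimensionally a capacitance for a relative εa), C1 in parallel with C2 and that pair in series with C3, and resonance at f0 = 1/(2π√(LC)); all component values are illustrative, not from any sensor:

```python
import math

# Forward sketch of the FDR sensing circuit described in the text:
# C1 = g*eps0*eps_a in parallel with stray C2, in series with board C3;
# the RLC circuit resonates at f0 = 1/(2*pi*sqrt(L*C_total)).

EPS0 = 8.85418e-12  # vacuum permittivity, F/m

def resonant_frequency(eps_a_rel: float, L: float, g: float, c2: float, c3: float) -> float:
    """Resonant frequency (Hz) of the sensing circuit for a relative eps_a."""
    c1 = g * EPS0 * eps_a_rel                       # soil sensing capacitance
    c_parallel = c1 + c2                            # add stray capacitance
    c_total = c_parallel * c3 / (c_parallel + c3)   # series board capacitance
    return 1.0 / (2.0 * math.pi * math.sqrt(L * c_total))

# Dry soil (eps_a ~ 4) vs wet soil (eps_a ~ 25), with L = 500 nH, g = 5 cm,
# C2 = 2 pF, C3 = 100 pF: the resonance shifts downward as the soil wets.
print(resonant_frequency(4.0, 500e-9, 0.05, 2e-12, 100e-12))
print(resonant_frequency(25.0, 500e-9, 0.05, 2e-12, 100e-12))
```

The downward shift of f0 with increasing εa is the effect the instrument's frequency sweep tracks; the amplitude at that resonance then carries the ECa information.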

GHz, and iii) a fast oscilloscope (**Figure 2** left). The oscilloscope is fast enough to sample the voltage level of the TL at intervals down to around 100 ps and, hence, obtain the TDR waveform [21] (**Figure 2** right). This way, the fast-rise electromagnetic pulse composed of a wide range of frequencies transmitted to the TL is partially reflected back and forth at the end and at the beginning of the TL giving rise to an electromagnetic oscillation whose voltage amplitude is sampled. The main characteristics of the primary and reflected pulses that are useful in this regard are i) traveling back and forth velocity and ii) attenuation. The first characteristic is related to εa, and thus mainly to θ, whereas the second one is mainly related to just the imaginary part of permittivity and thus to ECa [22, 23]. The voltage at late time (**Figure 2** right) is usually used to derive the ECa by means of the Giese and Tiemann equation [24] as indicated in [25]. However, in practice, several different voltage values in the TDR attenuation curve can be reliably used and therefore, various different equations have been proposed to calculate

**Figure 2.** Schematic diagram of a TDR instrument (left) and graph showing the attenuation effect of soil salinity on

Reflectometry in the amplitude domain is another genuine reflectometry technique like TDR. Similar elements to those previously described for TDR equipment are used for ADR. In ADR, however, the measurement is based on the amplitude features of the standing electromagnetic oscillation in different parts of the TL [28]. Besides, since in ADR the signal generator operates in a frequency range between 10 and 100 MHz, i.e. significantly lower than in TDR, instru‐

Contrary to TDR and ADR, FDR is not based on the analysis of reflected electromagnetic pulses but on the resonance features of RLC circuits in which a capacitor is formed by two electrodes and the in-between and surrounding soil. The RLC circuit in FDR instruments is formed by a signal generator, plus resistor, inductive and capacitive elements, including the lossy capacitor involving the soil (**Figure 3**). This lossy capacitor is characterized by a resistance (R1) and a

ECa from TDR waveforms [26, 27].

106 New Trends and Developments in Metrology

TDR waveforms after [20] (right).

mentation prices decrease.

*5.3.2. Amplitude domain reflectometry (ADR)*

*5.3.3. Frequency domain reflectometry (FDR)*
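The resonance mechanism described above can be sketched numerically. The snippet below is a minimal illustration with hypothetical component values (an inductance L and a geometric cell constant C0 such that the soil capacitance is C = εa·C0); it shows why the resonant frequency of the probe circuit falls as the soil gets wetter, i.e., as εa rises:

```python
import math

def resonant_frequency_hz(inductance_h, eps_a, c0_f=5e-12):
    """f0 = 1 / (2*pi*sqrt(L*C)) for the probe's LC loop.

    The soil-filled capacitor is modeled as C = eps_a * C0, where C0 is a
    hypothetical geometric cell constant (the capacitance with a vacuum
    dielectric); a real probe would need C0 from calibration.
    """
    capacitance_f = eps_a * c0_f
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Wetter soil -> higher eps_a -> larger C -> lower resonant frequency.
f_dry = resonant_frequency_hz(1e-6, eps_a=5.0)   # dry soil, eps_a around 5
f_wet = resonant_frequency_hz(1e-6, eps_a=30.0)  # wet soil, eps_a around 30
```

With these illustrative values both frequencies land inside the 10–200 MHz band used by FDR instruments; ECa, in turn, would be read from how strongly the soil damps the oscillation amplitude at resonance.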

#### **6. Assessment of soil salinity from EC measurements**

As indicated in Section 3, ECe,25 is the standard for soil salinity assessment; however, the method of the saturation extract is labor-intensive and soil-destructive. To overcome one or both of these drawbacks, several alternative methodologies have been proposed. These methodologies can be grouped into two classes: i) those based on more readily prepared soil water extracts and ii) those based on ECa measurements. The first class of methods constitutes alternative sampling and laboratory methods and will not be dealt with in this chapter. The interested reader is referred to the literature on the subject, e.g., [32] and references therein.

In the second class of methods, the interpretation of ECa measurements is made by means of models relating ECa with ECe,25. Unfortunately, universally valid equations do not exist for this transformation. Such a relationship must be assessed in almost every instance, using either of two different methods. The first one consists of two steps: i) the assessment of ECp from ECa and then ii) the assessment of ECe from ECp. The second one is based on the assessment of ECe directly from ECa. Each of these methods presents its own advantages and drawbacks.

#### **6.1. Two-step estimation of ECe from ECa**

The first part of this method, i.e. the calculation of ECp from ECa, is based on what is known about how soils behave as conductors. In the second part, ECe is related to ECp either through modeling of soil solution dilution and concentration processes, or by means of empirical equations.

#### *6.1.1. First step: estimation of ECp from ECa*

Since soils are composite materials made of solids, water, and air, electric charge is conveyed through them by means of three different paths acting in parallel [33]. These are i) a continuous liquid pathway in which dissolved ions are the charge carriers, ii) an alternate solid-liquid pathway in which both exchangeable and dissolved ions are the charge carriers, and iii) a pathway formed by solid particles in direct and continuous contact with one another in which exchangeable ions are the charge carriers. Since just the continuous liquid (i) and continuous solid (iii) pathways can be straightforwardly parameterized as separate units, the following equation, with just two summands representing, respectively, the continuous liquid and continuous solid pathways, can be used to model ECa as a function of soil properties [34]:

$$
\sigma\_a = \theta \left( a\theta + b \right) \sigma\_p + \left( a\theta + b \right) B \rho\_b CEC \tag{6}
$$

In Eq. 6, the alternate liquid-solid pathway is somewhat included in both summands; (aθ + b) is a factor known as the tortuosity or transmission coefficient, where a and b are fitting parameters which depend on soil texture and structure; B is the equivalent conductance of the counterions on the soil exchange complex (S m<sup>2</sup>/mmol<sub>c</sub>); ρb is the soil bulk density (kg/m<sup>3</sup>); and finally, CEC is the soil cation exchange capacity (mmol<sub>c</sub>/kg), which depends on soil texture, clay mineralogy, and organic matter content. In Eq. 6, there are three soil-specific parameters (a, b, and B) that must be assessed by calibration, taking several ECa measurements and analyzing the soil for θ, ECp, ρb, and CEC. Once the parameters (a, b, and B) of Eq. 6 have been calibrated, σp can be isolated to estimate the EC of the soil solution by means of Eq. 7.

$$
\sigma\_p = \frac{\sigma\_a - (a\theta + b) B \rho\_b CEC}{\theta (a\theta + b)} \tag{7}
$$
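
As a worked sketch, Eq. 7 can be applied directly once the calibration parameters are known. All numeric values used below (a, b, B, ρb, CEC) are hypothetical stand-ins for real calibration results:

```python
def sigma_p_from_sigma_a(sigma_a, theta, a, b, big_b, rho_b, cec):
    """Soil solution EC from bulk EC via Eq. 7:
    sigma_p = (sigma_a - (a*theta + b)*B*rho_b*CEC) / (theta*(a*theta + b))
    """
    tortuosity = a * theta + b                    # transmission coefficient
    surface = tortuosity * big_b * rho_b * cec    # continuous solid pathway (S/m)
    return (sigma_a - surface) / (theta * tortuosity)

# Hypothetical calibration: a = 1.5, b = -0.1, B = 3e-7 S m^2/mmolc,
# rho_b = 1400 kg/m^3, CEC = 150 mmolc/kg, at theta = 0.30 m^3/m^3.
sp = sigma_p_from_sigma_a(0.20, 0.30, 1.5, -0.1, 3e-7, 1400.0, 150.0)
# sigma_p comes out well above sigma_a, as expected for an unsaturated soil.
```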

In addition to the model parameters, an adequate estimation of ECp from ECa by means of Eq. 7 requires knowing the values of ρb, CEC, and θ in the same soil volume. The former two (ρb, CEC) can be assumed to vary little in space and, above all, in time, and can thus be treated as steady soil properties for many applications. On the contrary, the soil water content (θ) is usually highly variable both in space and time and can virtually never be considered steady for survey and monitoring applications. Therefore, to correctly estimate ECp from ECa measurements, θ must be determined along with ECa in the soil volume under test. This determination could rigorously be done by sampling the measured soil volume and subsequent laboratory analysis. However, this direct method would override most of the advantages gained by working with ECa sensors. To overcome this limitation, the soil water content can instead be estimated by means of the various indirect sensing techniques available nowadays, such as i) neutron thermalization, ii) gamma ray attenuation, iii) those related to soil thermal properties, and iv) electromagnetic methods [35]. Nevertheless, the first three methods provide θ estimations not for the same soil volume surveyed by the ECa sensor but for an adjacent soil volume, thus giving rise to errors. With electromagnetic methods, and specifically with reflectometry, the requirement of θ estimations in exactly the same soil volume under test has been addressed, because most of these instruments can measure ECa in addition to estimating θ. For use with such instruments, various simple empirical equations have been proposed to assess ECp. Eq. 8 uses the empirical linear relationship that has been revealed to exist between σa and εa as the water content changes while ECp is kept constant, to assess ECp on the basis of just ECa and εa sensor measurements [36]:


$$
\sigma\_p = \frac{\sigma\_a - \sigma\_s}{m \left( \varepsilon\_a - \varepsilon\_s \right)} \tag{8}
$$

where the parameters σs and εs can be interpreted, respectively, as the particle surface EC and the dielectric permittivity of the soil solids. Both, along with m, have to be assessed by calibration and depend on soil properties such as texture, mineralogy, and organic matter. Note that in Eq. 8, and in other models, ECp estimations are expected to improve provided εa is replaced by ε′ [37], and that temperature corrections are still pending. The Hilhorst [38] model (Eq. 9) constitutes a simplified version of Eq. 8, where just one parameter, i.e., the soil dielectric permittivity at zero ECa (εσa=0), is needed, and εw(T) is the dielectric permittivity of water, whose value depends just on temperature.

$$
\sigma\_p = \frac{\sigma\_a \varepsilon\_w(T)}{\varepsilon\_a - \varepsilon\_{\sigma\_a = 0}} \tag{9}
$$

Given the relatively low variability of ρb and CEC compared with that of θ, a simpler version of the model represented by Eq. 7 has been developed for assessing ECp with instruments capable of both ECa measurements and θ estimations [39]:

$$
\sigma\_p = \frac{\sigma\_a - \sigma\_s}{(a\theta + b)\theta} \tag{10}
$$

where σs is a lumped coefficient, i.e., equal to the second summand ((aθ + b) B ρb CEC) in Eq. 6, representing the particle surface EC, and θ is usually empirically assessed with a third-order polynomial of the form θ = a3εa<sup>3</sup> + a2εa<sup>2</sup> + a1εa + a0 [19]. An alternative to this polynomial is the simplified dielectric mixing (SDM) model, which is more theoretically based [40]. Thus, the following equation (Eq. 11), which combines Eq. 10 with the SDM model, has been proposed to estimate ECp at 25°C (σp,25) from sensor measurements of ECa, εa, and T [41]:

$$
\sigma\_{p,25} = \frac{\sigma\_a h \left( T \right) - \sigma\_s}{\left[ a \frac{-b\_0 + \sqrt{\varepsilon\_a}}{b\_1} + b \right] \left[ \frac{-b\_0 + \sqrt{\varepsilon\_a}}{b\_1} \right]} \tag{11}
$$

where h(T) = σ25/σT is a function of temperature as given by the ratio (Eq. 1) or the exponential model (Eq. 2).
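Eq. 11 can be sketched in code form as follows. The SDM parameters b0 and b1, the tortuosity parameters a and b, and the temperature coefficient are all hypothetical, and the ratio form used for h(T) is only an assumed stand-in for Eq. 1:

```python
import math

def sigma_p25(sigma_a, eps_a, temp_c, sigma_s,
              a=1.5, b=-0.1, b0=1.6, b1=8.4, f=0.02):
    """Eq. 11: sigma_p at 25 degC from sensor readings of ECa, eps_a, and T.

    theta is recovered from eps_a with the SDM model,
    theta = (-b0 + sqrt(eps_a)) / b1, and h(T) = sigma_25 / sigma_T rescales
    sigma_a to 25 degC (ratio form assumed here). All default parameter
    values are hypothetical.
    """
    h_t = 1.0 / (1.0 + f * (temp_c - 25.0))
    theta = (-b0 + math.sqrt(eps_a)) / b1
    return (sigma_a * h_t - sigma_s) / ((a * theta + b) * theta)

# Same raw readings taken on a cool and on a warm day:
sp25_cool = sigma_p25(0.15, 20.0, temp_c=15.0, sigma_s=0.01)
sp25_warm = sigma_p25(0.15, 20.0, temp_c=35.0, sigma_s=0.01)
```

Note how an identical raw σa reading implies a higher σp,25 when taken at a lower temperature, which is exactly what the h(T) rescaling accounts for.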

#### *6.1.2. Second step: estimation of ECe from ECp*

Once ECp has been estimated by means of the previous equations, it can be related to ECe by means of either process-based or functional models, or even a mixture of both. The process-based models simulate the dilution of the soil solution from the field water content at which measurements have been taken (θf) to the soil saturation water content (θs). Such a model can be as simple as a dilution ratio giving rise to the following estimation: ECe,25 = (θf ρbs ECp,25)/(θs ρbf), where ρbs and ρbf are the bulk densities of, respectively, the saturated paste and the field soil [42, 43], or as complex as SALSOLCHEMEC [44], which requires, in addition to θ and ρb data, the likely major ion contents and the CEC of the soil. These complex models, though more accurate, can be regarded as less appealing because they require more data and further elaboration of results. The functional models, on the contrary, are based on the statistical relationship between ECe,25 and ECp,25, which must be obtained by calibration beforehand. A mixture of both approaches can be applied, considering the dilution ratio to calculate a proxy of ECe,25, i.e., ECe,25′, whose statistical relationship with the true ECe,25 must also be obtained by calibration beforehand. Nevertheless, an advantage of this latter approach over the purely statistical one is that estimation errors will diminish.
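The simple dilution-ratio model above can be written down directly; the sample values below are hypothetical:

```python
def ece25_from_ecp25(ecp25, theta_f, theta_s, rho_bs, rho_bf):
    """Dilution-ratio estimate: ECe,25 = (theta_f * rho_bs * ECp,25) / (theta_s * rho_bf).

    Diluting the soil solution from the field water content theta_f up to
    the saturation water content theta_s lowers its EC proportionally.
    """
    return (theta_f * rho_bs * ecp25) / (theta_s * rho_bf)

# Hypothetical field case: theta_f = 0.25, theta_s = 0.50,
# saturated-paste bulk density 1200 kg/m^3, field bulk density 1400 kg/m^3.
ece25 = ece25_from_ecp25(4.0, 0.25, 0.50, 1200.0, 1400.0)
# Dilution to saturation yields an ECe,25 below the field-solution ECp,25.
```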

#### **6.2. One-step estimation of ECe from ECa**

ECe can be directly estimated from ECa measurements using purely empirical calibrations. Most of these calibrations have been developed to interpret the ECa measurements taken with ER and EMI techniques. Calibrations are usually obtained by multiple linear regression (MLR), principal components regression (PCR), partial least squares regression (PLSR), or kriging (KR) or co-kriging (CKR) regressions. Regardless of the specific regression technique, in these models a profile ECe average can be assessed as a mathematical function of several ECa sensor measurements, e.g., vertical and horizontal configuration EMI measurements (ECa-v\*, ECa-h\*), in addition to an n number of soil properties (P1, P2, …, Pn) including other sensor measurements, coordinates, micro-topography features, etc.:

$$EC\_e = f\left(\sigma\_{a\text{-}v} \text{\*}, \sigma\_{a\text{-}h} \text{\*}, \dots, P\_1, P\_2, \dots, P\_n\right) \tag{12}$$

Profile ECe averages are not the only data we can obtain with these techniques. With many ER and certainly most EMI instruments, we can delineate the one-dimensional (1D) soil ECa distribution and, therefore, elaborate calibrations for ECe variations along the soil vertical coordinate.
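A minimal sketch of the one-step calibration of Eq. 12 follows, here reduced to an MLR with just the two EMI readings as predictors (a real calibration would add the properties P1…Pn and use a statistics package); the survey data are synthetic:

```python
def fit_mlr(xs, y):
    """Least-squares fit of y ~ c0 + c1*x1 + c2*x2 via the normal equations
    (X^T X) c = X^T y, solved by Gaussian elimination with partial pivoting."""
    design = [[1.0, x1, x2] for x1, x2 in xs]      # intercept column first
    n = 3
    xtx = [[sum(row[i] * row[j] for row in design) for j in range(n)]
           for i in range(n)]
    xty = [sum(row[i] * yi for row, yi in zip(design, y)) for i in range(n)]
    m = [xtx[i] + [xty[i]] for i in range(n)]      # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [m[r][k] - factor * m[col][k] for k in range(n + 1)]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        c[i] = (m[i][n] - sum(m[i][j] * c[j] for j in range(i + 1, n))) / m[i][i]
    return c                                       # [c0, c1, c2]

# Synthetic survey in which ECe = 0.5 + 2.0*ECa-v* + 1.0*ECa-h* exactly.
readings = [(0.1, 0.2), (0.3, 0.1), (0.5, 0.4), (0.2, 0.5), (0.4, 0.3)]
ece = [0.5 + 2.0 * v + 1.0 * h for v, h in readings]
c0, c1, c2 = fit_mlr(readings, ece)  # fit recovers the generating coefficients
```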

#### **7. Commercial sensors for measuring EC**

#### **7.1. Electrical resistivity sensors**


ER gives rise to the simplest of techniques for the assessment of ECa in agriculture, with essentially two classes of ER systems: static and mobile instruments. Two-point ER measurements can be done with commercial handheld digital multimeters (DMM, DVOM). Provided the length (L), radius (r), and spacing (d) between the pair of test leads driven into the soil are known, ECa can be measured for less than 50 €. However, two-point ER measurements are usually made with sensors specifically developed for soil applications. The capacitance-conductance (CC) combined 5TE and GS3 sensors by Decagon (Decagon Devices, Inc., Pullman, Washington, USA) provide ECa measurements using ER. Besides, both sensors also measure εa through capacitance, which is the simplest sensor technique for θ estimation, and T through a thermistor, in roughly the same soil volume, and cost between 200 and 300 €. Subsequent elaboration of ECa, εa, and T data with the equations in Section 6, along with the equations in Section 4, allows for the assessment of ECp,25.
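Converting a DMM resistance reading into ECa hinges on the probe cell constant K, with ECa = K/R. As an illustrative assumption only, the sketch below uses a textbook full-space approximation for two parallel cylindrical electrodes of length L, radius r, and spacing d (d ≫ r), K = ln(d/r)/(πL); rigorous cell-constant formulas for real probe geometries should be taken from ref. [48]:

```python
import math

def cell_constant(length_m, radius_m, spacing_m):
    """Approximate cell constant K (1/m) for two parallel rod electrodes,
    assuming an idealized uniform full-space medium and spacing >> radius."""
    return math.log(spacing_m / radius_m) / (math.pi * length_m)

def eca_from_resistance(resistance_ohm, length_m, radius_m, spacing_m):
    """ECa (S/m) from a two-point resistance reading: ECa = K / R."""
    return cell_constant(length_m, radius_m, spacing_m) / resistance_ohm

# Test leads 10 cm long, 2 mm radius, 5 cm apart; the DMM reads 200 ohm:
eca = eca_from_resistance(200.0, 0.10, 0.002, 0.05)  # about 0.051 S/m
```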

In the 1970s, a four-point ER probe based on a Wenner array was developed by Rhoades and van Schilfgaarde [45]. On the basis of this design, various commercial devices were subsequently developed to make discrete and continuous ECa measurements in agriculture [33]. The Martek (Martek Instruments, Raleigh, North Carolina, USA) soil salinity sensors adequately served for both profile soundings and continuous burial measurements. Models such as the SCT-10 included a temperature sensor, and thus provided both raw and temperature-corrected readings for improved data interpretation. The commercialization of this sensor was discontinued by Martek Instruments; however, a very similar device including temperature measurement can still be bought from Eijkelkamp (Giesbeek, The Netherlands) for roughly 4500 €.

Since the works of Wenner, static four-point ER sensors soon evolved into mobile ECa instruments [46]. Nowadays, there are instruments sold by Veris (Veris Technologies, Inc., Salina, Kansas, USA), in which the sensors take the shape of coulters mounted on a trailer that is towed by a vehicle through the field under test. These instruments cost between 12,000 and 24,000 €, and integrate global positioning system (GPS) and data-logging utilities. Besides, the newest Veris models (V2000XA, V3100, V3150) present two pairs of potential measurement electrodes instead of just one, which jointly enable simple soil profiling of ECa at two depths. The coulter electrodes are also sold individually, and therefore users can build their own custom ER sensor systems [47].

The most critical issues in making reliable ECa measurements with ER sensors are i) accurately knowing the probe cell constant value and ii) assuring good contact between all the electrodes and the soil. The cell constant can be analytically assessed in most instances, as shown in ref. [48]. This is the only option when performing four-point measurements with mobile systems. However, for most applications with commercial two-point and four-point static sensors, the probe cell constant is empirically assessed by means of calibration using EC standards (Section 5.1).

#### **7.2. Electromagnetic induction sensors**

The EMI instruments most commonly used in agriculture nowadays include the DUALEM-1, DUALEM-2, and DUALEM-21 (Dualem, Inc., Milton, Ontario, Canada); the EM38, EM38-DD, and EM38-MK2 sensors (Geonics Ltd., Mississauga, Ontario, Canada); and the Profiler EMP-400 (Geophysical Survey Systems, Inc., Salem, New Hampshire, USA). The simplest EMI instruments are the EM38 and the DUALEM-1, in which there are just one transmitting and one or two receiving coils 1 m apart, and which cost between 11,000 and 14,000 €. As a consequence, these instruments are sensitive to the soil ECa down to a depth between 0.5 and 1.5 m, which is where most plant roots develop and thus the most interesting for soil studies and agriculture.

The soil depth response of EMI instruments depends on the separation and orientation of the transmitting and receiving coils, and on their height over the ground, in the following ways: i) as the separation between both coils increases, the soil depth contributing to the sensor signal increases; ii) when one coil, either the transmitter or the receiver, is turned from vertical to horizontal, the soil depth contributing to the sensor signal decreases; and iii) as the height over the ground increases, the soil depth contributing to the sensor signal decreases.

The EM38 presents two parallel coils, and measurements can be performed with both coils either vertical (V-V) or horizontal (H-H) with respect to the soil. When the sensor is laid onto the soil, 70% of the cumulative sensor signal is provided by the upper 1.55 m in the V-V orientation, and by the upper 0.75 m in the H-H orientation. The DUALEM-1 presents three coils, one transmitter and two receivers (**Figure 1**). While the transmitter coil is vertically oriented, one receiver is 1 m apart and parallel to the transmitter (vertical orientation), and another is 1.1 m apart and perpendicular to the transmitter (horizontal orientation). When the sensor is laid onto the soil, 70% of the cumulative sensor signal is provided by the upper 1.5 m in the V-V orientation, and by the upper 0.5 m in the V-H orientation. As described, two ECa\* measurements can be made with both the EM38 and DUALEM-1 sensors. Both measurements can then be elaborated to obtain qualitative information about how ECa changes with soil depth. For example, if the V-V ECa\* (ECa-v\*) is higher than the H-H or V-H ECa\* (ECa-h\*), it is because ECa increases with soil depth. On the contrary, if ECa-v\* is lower than ECa-h\*, it is because ECa decreases with soil depth. Even though the EM38 and DUALEM-1 sensors work at slightly different frequencies, 14.7 and 9 kHz, respectively, both provide the same data for ECa-v\* and ECa-h\* and can thus be used interchangeably [49].
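The ~70% figures quoted above can be reproduced with the cumulative depth-response functions of McNeill's low-induction-number model, a standard approximation for this class of instrument, sketched here with depth z normalized by the coil spacing:

```python
import math

def cumulative_below(z, orientation):
    """McNeill's cumulative response R(z): fraction of the EMI signal coming
    from all soil BELOW normalized depth z (depth / coil spacing).
    'v' = vertical coplanar dipoles, 'h' = horizontal dipoles."""
    if orientation == "v":
        return 1.0 / math.sqrt(4.0 * z * z + 1.0)
    return math.sqrt(4.0 * z * z + 1.0) - 2.0 * z

def fraction_above(depth_m, spacing_m, orientation):
    """Fraction of the signal contributed by soil shallower than depth_m."""
    return 1.0 - cumulative_below(depth_m / spacing_m, orientation)

# EM38 with 1 m coil spacing, instrument laid on the ground:
fv = fraction_above(1.55, 1.0, "v")  # close to 0.70 in the V-V orientation
fh = fraction_above(0.75, 1.0, "h")  # close to 0.70 in the H-H orientation
```

Evaluating both calls confirms the chapter's depth figures: roughly 70% of the signal comes from above 1.55 m (vertical) and above 0.75 m (horizontal), and the horizontal orientation is indeed the shallower-sensing one.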

The EM38-DD is an evolution of the EM38 in which two EM38 sensors are bolted together and electronically coupled. Besides, one is vertically and the other horizontally oriented, so as to provide simultaneous ECa-v\* and ECa-h\* measurements. The EM38-MK2 is a further evolution of the EM38 in which there are two receiving coils instead of just one. The receivers are parallel to the transmitter and, respectively, 0.5 and 1.0 m apart from it; as a consequence, the depths required for 70% of the cumulative sensor signal are between 0.75 and 1.55 m in the V-V-V orientation, and between 0.4 and 0.75 m in the H-H-H orientation [50].

In the DUALEM-2, the separation between the transmitter and the receivers has been doubled to 2 m and, as a consequence, the depth for 70% cumulative signal is 3 m in the V-V orientation and 1 m in the V-H orientation. The DUALEM-21 is a combination of the previous DUALEM-1 and DUALEM-2, in which one pair of receivers is 1 m away from the transmitter, and another pair of receivers is 2 m away. This configuration allows DUALEM-21 users to sound ECa simultaneously at four depths.

An instrument similar to the EM38 and the DUALEM-1 is the EMP-400, in which there are just two parallel coils, one transmitter and one receiver, separated by 1.22 m, taking measurements in the V-V or H-H orientations. The EMP-400 is a multi-frequency instrument able to take measurements at any three frequencies between 1 and 16 kHz; however, to obtain ECa\* measurements similar to those provided by comparable EMI instruments, users have to work at 15 kHz. The EMP-400 costs roughly 16,000 €.

All the aforementioned EMI instruments can be used along with data-logging and positioning (GPS) systems to obtain georeferenced and continuous ECa\* measurements. In fact, the DUALEM instruments and the Profiler EMP-400 feature internal GPS receivers [50]. Dualem also sells the sensors of their instruments individually, under the denominations 1S, 2S, etc., to allow practitioners to build their own customized equipment.

#### **7.3. Reflectometry sensors**


ER instruments provide ECa measurements, while EMI instruments provide ECa estimations through ECa\*. However, since ECa strongly depends on θ in addition to soil salinity, and secondarily on other soil properties, reliable estimations of ECp require, at least, a reliable estimation of θ in the soil volume under test. With the maturing of reflectometry techniques, reliable estimations of both θ and ECa can be made in exactly the same soil volume, giving rise to effective ECp estimations [22, 51]. Another advantage of reflectometry is that contact between soil and electrodes is important but not as critical as with ER.

#### *7.3.1. Time domain reflectometry sensors*

Nowadays, there are three types of TDR measurement systems. The first type consists of four parts: i) a compact reflectometer that includes a signal generator, a fast oscilloscope, and a microcontroller; ii) one datalogger; iii) one or more multiplexers; and iv) several TDR probes that can be monitored at once. A remarkable example of an affordable compact reflectometer with wide multiplexing capabilities is the TDR100 (Campbell Scientific Inc., Logan, Utah, USA), which costs 4000 €, plus 100 € more per probe. Another modern example of this kind of instrument is the Trase System (SoilMoisture Equipment Corp., Santa Barbara, California, USA). With these TDR measurement systems, the ECa and θ calculations can be programmed to be made by the reflectometer, or the TDR traces can be saved and the calculations deferred. All these instruments are essentially research oriented.

There are even more compact instruments, which constitute a second type of TDR system. These instruments integrate the reflectometer, the probe, and, besides, a temperature sensor. Examples are the Trime-PICO 64/32 (Van Walt Ltd., Haslemere, Surrey, UK), which works at 1 GHz, and the CS615 and CS616 (Campbell Scientific Inc., Logan, Utah, USA), which operate at frequencies of 45 and 70 MHz, respectively [52], and are thus less expensive (less than 300 €) but also less accurate for θ estimation than the previous, more complex TDR systems. These compact TDR probes make ECa and θ calculations automatically and are adequate for practical applications in agriculture and soil studies.

A third type of TDR system is the profile probe. These devices are formed by several paired electrodes, usually equally spaced on opposite sides of a non-conductive tube, which is vertically inserted into the soil. Examples are the Trime PICO-Profile (IMKO Micromodultechnik GmbH, Ettlingen, Baden-Württemberg, Germany), and the Vector Probe (Aquaspy, San Diego, California, USA) with 12 sensors down to 1.2 m depth, which costs less than 1000 €.

#### *7.3.2. Amplitude domain reflectometry sensors*

There are several commercial ADR sensors able to estimate θ, such as the popular Theta Probe (Delta-T Devices Ltd., Cambridge, UK). However, to our knowledge, only the Hydra Probe (Stevens Water Monitoring Systems, Inc., Portland, Oregon, USA) is able to take both εa and ECa measurements. This sensor forms its probe with three aligned stainless steel prongs wired to an oscillator working at 50 MHz. It also measures soil temperature and costs roughly 400 €.

#### *7.3.3. Frequency domain reflectometry sensors*

Commercial FDR sensors work at frequencies between 10 and 200 MHz, i.e. lower than TDR. There are two basic types of FDR sensors: single and profile probes. The degree to which the soil contributes to the dielectric medium of the soil-sensing capacitors is markedly different in each kind of probe. For single-probe devices, the soil forms a good deal of the dielectric medium (**Figure 3**), while in profile probes, the soil is only a marginal part of it. In profile FDR probes, the soil-sensing capacitors are attached to a non-conductive rod or plate, which is introduced into an insulating access tube, and this in turn vertically into the soil. Therefore, the soil-sensing capacitors are not in contact with the soil, and in fact only a small part of the electromagnetic field created by each one permeates the surrounding soil. This fringe field extinguishes rapidly away from the capacitor. Accordingly, most of the sensitivity of FDR profile probes lies in the soil zone immediately adjacent to the access tube, which is the one most affected by soil drilling, thus raising concerns about representativeness [53].

There are several single-probe FDR instruments for both θ and ECa estimation. One of the most used for agricultural applications is the WET sensor (Delta-T Devices Ltd., Cambridge, UK), which costs roughly 1200 €. The WET sensor forms the soil-sensing capacitor by means of three aligned 6-cm-long metal prongs, with the central rod acting as the plus plate and the side prongs acting as the ground plates of the capacitor. Additionally, the WET sensor has a thermistor at the central rod tip that enables soil temperature measurements. Measurements of ECa with the WET sensor are very similar to those carried out with ER techniques. However, the relatively low oscillation frequency of the WET sensor (20 MHz) makes εa measurements much too dependent on soil salinity and therefore impairs the estimation of θ, and thus of ECp [41].
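
Where a sensor provides both εa and ECa, ECp can be estimated, for example, with the linear model of Hilhorst [38]; the sketch below assumes the commonly quoted permittivity offset of 4.1 for mineral soils, which is in fact soil dependent:

```python
# Sketch: pore-water EC (ECp) from bulk readings with the linear model of
# Hilhorst [38]: ECp = eps_p * ECa / (eps_a - eps_offset), where eps_p is the
# (temperature-dependent) permittivity of the pore water and eps_offset is a
# soil-specific offset, often taken as 4.1 for mineral soils.

def pore_water_permittivity(temp_c: float) -> float:
    """Permittivity of free water as a function of temperature (deg C)."""
    return 80.3 - 0.37 * (temp_c - 20.0)

def hilhorst_ecp(eca_ds_m: float, eps_a: float,
                 temp_c: float = 25.0, eps_offset: float = 4.1) -> float:
    """Pore-water EC (dS/m) from bulk EC (dS/m) and apparent permittivity."""
    if eps_a <= eps_offset:
        raise ValueError("soil too dry: eps_a must exceed eps_offset")
    return pore_water_permittivity(temp_c) * eca_ds_m / (eps_a - eps_offset)
```

For example, ECa = 0.5 dS/m at εa = 20 and 25 °C gives ECp ≈ 2.5 dS/m; the guard clause reflects the known breakdown of the model in dry soils.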

Although there are several commercial profile FDR instruments for θ estimation, to our knowledge, only the TriSCAN (Sentek Pty Ltd., Stepney, South Australia, Australia) estimates soil salinity in addition to θ. This probe can bear up to 16 pairs of electrodes no less than 10 cm apart, and it provides an estimation of soil salinity expressed as volumetric ion content (VIC). It works at two frequencies: over 100 MHz for θ estimation and below 27 MHz for VIC assessment. The VIC is derived with a proprietary method and is related to ECa, though it is not directly interchangeable with it [54]. Each TriSCAN probe costs between 1000 and 1600 € depending on length (0.6–1.2 m).

#### **8. Applications of EC measurements in agriculture**

The capability of EC sensors to easily and quickly take large numbers of measurements at broad spatial scales, from profile horizons to watersheds, and at equally broad time scales, from seconds to years, permits the development of many applications for both mapping and monitoring of soil salinity, as well as soil salt dynamics. Interestingly, EC measurement systems have been used even for crop yield estimation, due to the significant correlations between ECa and yield found for different crops such as tomato [55], corn and soybean [56–58], sorghum [57, 58], and cotton [59, 60]. The inherent integration within ECa of various soil properties on which plant development depends, such as θ, ρb, clay content, and ECp, explains this ability. The use of EC measurement systems is nowadays of paramount importance for irrigation, crop, and fertilizer management in a framework of precision agriculture (PA), in which management is adapted to the specific soil and crop characteristics as they change through space within a field and throughout time within growing seasons.

PA is a farming management concept based on observing, measuring, and responding to variability in crop fields, both spatial and temporal [61]. The ultimate goal of PA is agricultural sustainability and efficiency. By matching agricultural inputs to needs, PA aims at simultaneously maximizing crop production and product quality while minimizing environmental damage. In a PA framework, the acquisition of big data about soil properties within fields feeds the decision-making process. Given the capabilities of the ECa sensors described in previous sections, they are crucial for acquiring this information in many applications, with the eventual aim of controlling soil salinity and, additionally, improving plant nutrition while avoiding harmful side effects on the environment.

#### **8.1. Soil salinity mapping**

ECa measurements have been widely used to characterize soil salinity at field scale. Modern mobile ER measurement systems based on a four-point array, along with the use of GPS, make it possible to elaborate 3D maps of ECa in agricultural fields, giving rise to ER imaging (ERI) in soil studies and agriculture [47, 62]. Good contact between the electrodes and the soil is perhaps the most important requirement for reliable ECa measurements using ER. Depending on soil texture and on water and coarse fragment contents, good contact cannot always be assured, and therefore reliable ER measurements cannot be guaranteed, especially with mobile instruments.
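
The conversion from a four-point reading to ECa can be sketched as follows for the classical equally spaced Wenner array [14]; the electrode spacing, voltage, and current values passed to the function are illustrative:

```python
import math

# Sketch: apparent EC from a Wenner four-electrode measurement [14]. With
# equally spaced electrodes (spacing a), current I injected through the outer
# pair and voltage V read across the inner pair, the apparent resistivity is
# rho_a = 2 * pi * a * V / I, and ECa is its reciprocal.

def wenner_eca(spacing_m: float, volts: float, amps: float) -> float:
    """Apparent electrical conductivity (dS/m) from a Wenner array reading."""
    rho_a = 2.0 * math.pi * spacing_m * volts / amps  # apparent resistivity, ohm*m
    return 10.0 / rho_a                               # 1 S/m = 10 dS/m
```

For example, a = 0.5 m, V = 0.2 V, and I = 50 mA give ρa ≈ 12.6 Ω·m, i.e. ECa ≈ 0.8 dS/m.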

**Figure 4.** A mobile georeferenced electromagnetic sensor (MGES) developed for rapidly carrying out ECa\* surveys in agricultural plots [72].

Surveys for ECa\* taken with EMI sensors present several advantages over surveys for ECa with ER techniques because EMI sensors do not require any contact with the soil. Therefore, ECa\* data can be more readily and reliably obtained on soils with stones and/or low water contents. Besides, for the specific aim of ECa mapping, EMI techniques are overwhelmingly used instead of ERI because EMI presents several further advantages, such as i) the ability to survey fields supporting growing crops, ii) the ability to survey fields with beds and furrows, iii) the avoidance of soil alteration issues thanks to the low weight of EMI instruments, iv) faster surveying because of the higher operating speeds of EMI instruments, and v) lower prices [63–65]. EMI methods also present some disadvantages, the most important of which is the more complex interpretation of ECa\* readings in terms of ECa.

The high volumes of ECa\* data obtained with EMI sensors are generally processed with the aid of geostatistics [66, 67], multivariate statistics [68, 69], and GIS tools [70, 71]. EMI has been widely used for soil salinity mapping of agricultural plots, e.g. [72–75], by means of custom-built Mobile Georeferenced Electromagnetic Sensors (MGESs; **Figure 4**). Besides, EMI can be used along with remote sensing [76–78] to extend the capabilities of both techniques for soil salinity mapping at watershed scales.
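
As a minimal illustration of the gridding step behind such maps, the sketch below uses inverse-distance weighting; operational mapping typically relies on the kriging and multivariate methods cited above, so this is a simple stand-in, not the method of any cited study:

```python
# Sketch: interpolating georeferenced ECa* point readings onto map locations
# with inverse-distance weighting (IDW). Each prediction is a weighted mean of
# the samples, with weights decaying as distance**(-power).

def idw(points, query, power=2.0):
    """points: iterable of (x, y, value); query: (x, y). Returns the
    IDW-interpolated value at the query location."""
    qx, qy = query
    num = den = 0.0
    for x, y, v in points:
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 == 0.0:
            return v  # query coincides with a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den
```

For instance, a point midway between two samples of 1.0 and 3.0 dS/m receives equal weights and interpolates to 2.0 dS/m.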

In any case, since all ECa measurements are affected by several other soil properties in addition to salinity, mainly water content and texture, ECa data, and especially ECa\* data, cannot be used alone for mapping soil salinity. All ECa surveys must be accompanied by traditional soil sampling and/or other sensor measurements, as well as by other field observations.

#### **8.2. Soil salinity monitoring**

Since soil salinity is a dynamic property, many instruments able to automatically take ECa measurements, log the information over time, and withstand variable and tough outdoor conditions for long time spans have been developed during the last decades. These instruments have been used mainly for agricultural water management and, with increasing frequency, in a PA framework. The ECa technique originally used for monitoring was ER through four-point probes. Since the advent of reflectometry, however, TDR, ADR, and FDR have captured almost all monitoring applications. The only exception where ER still holds its own is perhaps the combined CC sensors, which feature an attractive price-quality ratio.

#### **9. Present and future trends**

Current trends in ECa sensor development are focused on improving accuracy, robustness, ease of field installation, and data communication, while decreasing acquisition and maintenance expenses. All these improvements increase the applicability of ECa sensors in agriculture, especially for irrigation and nutrient management in a PA framework. Nevertheless, while a large number of agricultural operations use sensors for θ estimation to allow the subsequent adjustment of irrigation rates, the use of ECa sensors is far less widespread. This is due to still unresolved issues in the correct interpretation of ECa data under the ever-changing and diverse soil conditions of agricultural fields. Although many investigations have been carried out in order to interpret ECa and, furthermore, to assess ECp, far more research should be performed in this regard.

More accurate ECa interpretations and ECp assessments, and even ECe,25 estimations, from sensor measurements can proceed along three ways. First, through the development of techniques able to separate water and salinity effects on sensor responses. Second, through the development of calibrations based on more reliable models for ECa. And third, through sensor fusion, in which data from different sensors are used together in order to ascertain soil properties and/or crop status [79]. For the first requirement, reflectometry seems to be leading. Among the three techniques, TDR has established itself as predominant. Nevertheless, the other two reflectometry techniques (ADR and FDR) are without doubt promising. On the one hand, in ADR, water and salinity effects have been claimed to be precisely separated [80] and, on the other hand, in FDR, progress can be made through the improved interpretation of soil dielectric spectra, i.e. soil permittivity against electromagnetic frequency up to 500 MHz [37]. Along with the improved separation of water and salinity effects, sensors less affected by temperature should also be developed so as to increase the robustness of measurement systems for ECa interpretation.
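
On the temperature issue, field ECa readings are customarily referenced to 25 °C; the simple ratio model below, with conductivity rising roughly 2% per °C, is one common first approximation among the corrections compared in [11]:

```python
# Sketch: referencing a field EC reading to 25 deg C with a linear ratio
# model, EC25 = EC_t / (1 + alpha * (t - 25)), alpha ~ 0.02 per deg C. This is
# only one of several correction models discussed in the literature [11].

def ec25(ec_t: float, temp_c: float, alpha: float = 0.02) -> float:
    """EC at 25 deg C from EC measured at temp_c (same units as ec_t)."""
    return ec_t / (1.0 + alpha * (temp_c - 25.0))
```

For example, a reading of 1.1 dS/m at 30 °C corrects to 1.0 dS/m at the 25 °C reference.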

Promising applications of reflectometry sensors for PA lie in the development of smart irrigation management systems (SIMS) (**Figure 5**). A SIMS has essentially two parts: i) the core, which is a decision support system (DSS) hosted on an Internet server, and ii) a field-deployed wireless sensor network (WSN), which feeds the DSS with, e.g., ECa and θ data. The WSN is made up of several probes distributed within the cropped field and wirelessly connected to one or more dataloggers, which act as gateways to the WSN. The dataloggers communicate data to the Internet server by means of GSM/GPRS, 3G, or 4G for cloud computing by the DSS. The DSS is essentially a simulation model that runs on the data from the WSN and, additionally, on other data (meteorological, soil, crop, water quality, fertilization, etc.) and, as a consequence, produces management recommendations that are sent to the farmer in real time. A fully automatic system would include a third part: an actuator network, the simplest example being irrigation hydrant control. Such fully automated SIMS, including ECa measurements, do not seem to have been well developed yet. However, remote irrigation management is possible by means of systems that use field θ sensor estimations [81, 82]. Further advances in cellular networks and cheaper and faster Internet communications will allow the spread of SIMS with fully informed farmer control through mobile telephony. This technology will benefit from advances in ECa data interpretation. One important way this will occur is through sensor fusion, which consists in the joint use of information originating from diverse sensors.

**Figure 5.** Scheme of a smart irrigation management system (SIMS).
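
A DSS ultimately reduces WSN data to recommendations; the toy rule below illustrates the idea, with thresholds and recommendation strings that are purely hypothetical and not taken from any deployed system:

```python
# Sketch: the kind of rule a SIMS decision support system might apply to
# incoming WSN data. The threshold values and recommendation strings are
# illustrative placeholders only.

def irrigation_advice(theta: float, ecp_ds_m: float,
                      theta_min: float = 0.20, ecp_max: float = 3.0) -> str:
    """Return a management recommendation from soil water content theta
    (m3/m3) and pore-water EC (dS/m)."""
    if theta < theta_min and ecp_ds_m > ecp_max:
        return "irrigate with leaching fraction"  # refill soil and flush salts
    if theta < theta_min:
        return "irrigate"
    if ecp_ds_m > ecp_max:
        return "schedule leaching irrigation"
    return "no action"
```

In a real DSS this step would be replaced by a simulation model fed with meteorological, soil, crop, and water-quality data, as described above.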

Similarly to ECa, which is an unselective factor on its own, i.e. it depends on various soil properties such as θ, ρb, clay content, and ECp, other proximal sensors used in the field also provide unselective responses, e.g. hyperspectral, radiometric, mechanical, acoustic, pneumatic, and electrochemical [83]. Interestingly, under the concept of *sensor fusion*, all these unselective measurements, including ECa, could be jointly used to generate selective information through the use of multivariate statistical methods such as multiple linear regression (MLR), principal component regression (PCR), and partial least squares regression (PLSR) [69, 84].
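
A minimal sketch of MLR-based fusion follows: several unselective readings (here ECa plus two other, hypothetical channels) are regressed on a soil property measured in calibration samples. This is a generic illustration of the technique, not the procedure of [69] or [84]:

```python
import numpy as np

# Sketch: sensor fusion by multiple linear regression (MLR). Rows of X hold
# the unselective sensor readings for each calibration sample (e.g. ECa plus
# other channels); y holds the reference soil property. The fitted
# coefficients then predict that property from new multi-sensor readings.

def fit_mlr(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares coefficients (intercept first) for y ~ X."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict(coef: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Apply fitted coefficients to new sensor readings."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return X1 @ coef
```

PCR and PLSR follow the same fit/predict pattern but first project X onto a few latent components, which is what makes them preferable when the sensor channels are strongly collinear.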

Last, the use of ECa measurements will remarkably benefit fertilizer management for crop production in the coming years. Conventional fertilizer management is generally based on monitoring the soil contents of nitrogen, phosphorus, and potassium by sampling and laboratory methods. However, since plant-available N and K are in ionic form (NO3−, NH4+, and K+), they contribute to ECp and eventually to ECa. Therefore, the development of new sensors with a higher capacity to separate water and salinity effects, in addition to sensor fusion, will contribute to the improvement of crop nutrition management, e.g. [85, 86]. Colburn's pioneering work [87] on the secondary correlation of ECa with soil nutrients pointed to the usage of ECa measurements for fertilization management. However, more investigations are still needed on the application of ECa sensors for fertilizer management [88].

#### **Author details**

Fernando Visconti\* and José Miguel de Paz

\*Address all correspondence to: fernando.visconti@uv.es

Valencian Institute for Agricultural Research (IVIA), Centre for Sustainable Agriculture (CDAS), Moncada, València, Spain

#### **References**


[5] USSL Staff. (1954). Diagnosis and improvement of saline and alkali soils (Agriculture Handbook No. 60). Washington, DC: United States Department of Agriculture.

[6] Maas, E. V. and Hoffman, G. J. (1977). Crop salt tolerance – current assessment. ASCE J Irrig Drain Div, 103(2), 115–134.

[7] Shannon, M. C. and Grieve, C. M. (1998). Tolerance of vegetable crops to salinity. Scientia Horticulturae, 78(1–4), 5–38.

[8] Rhoades, J. D. (1996). Salinity: Electrical conductivity and total dissolved solids. In D. L. Sparks, et al. (Eds.), Methods of soil analysis part 3—Chemical methods (pp. 417–435). Madison, WI: SSSA, ASA.

[9] Ayers, R. S. and Westcot, D. W. (1985). Water quality for agriculture. Irrig drain paper 29, rev. 1. Rome: Food and Agriculture Organization of the United Nations.

[10] Corwin, D. L. and Lesch, S. M. (2005). Apparent soil electrical conductivity measurements in agriculture. Computers and Electronics in Agriculture, 46(1–3 Spec. Iss.), 11–43.

[11] Ma, R., McBratney, A., Whelan, B., Minasny, B. and Short, M. (2011). Comparing temperature correction models for soil electrical conductivity measurement. Precision Agriculture, 12(1), 55–66.

[12] Johnson, P. (2007). The design of an integrated soil moisture sensor for agriculture. Brisbane, Queensland: Griffith University.

[13] Groom, D. (2008). Common misconceptions about capacitively-coupled resistivity (CCR) what it is and how it works. Environmental and Engineering Geophysical Society – 21st Symposium on the Application of Geophysics to Engineering and Environmental Problems, pp. 36–41.

[14] Wenner, F. (1915). A method of measuring earth resistivity. Bulletin of the Bureau of Standards, 12, 469–478.

[15] Loke, M. H., Chambers, J. E., Rucker, D. F., Kuras, O. and Wilkinson, P. B. (2013). Recent developments in the direct-current geoelectrical imaging method. Journal of Applied Geophysics, 95, 135–156.

[16] McNeill, J. D. (1980). Electrical conductivity of soils and rocks. Technical note TN-5. Mississauga, ON: Geonics Limited.

[17] Saey, T., Van Meirvenne, M., De Smedt, P., Neubauer, W., Trinks, I., Verhoeven, G., et al. (2013). Integrating multi-receiver electromagnetic induction measurements into the interpretation of the soil landscape around the school of gladiators at Carnuntum. European Journal of Soil Science, 64(5), 716–727.

[18] Corwin, D. L., Lesch, S. M. and Lobell, D. B. (2012). Laboratory and field measurements. In W. W. Wallender and K. K. Tanji (Eds.), Agricultural salinity assessment and management (2nd Ed., pp. 265–341). Reston, VA: American Society of Civil Engineers, Environmental and Water Resources Institute.

[19] Topp, G. C., Davis, J. L. and Annan, A. P. (1980). Electromagnetic determination of soil water content: Measurements in coaxial transmission lines. Water Resources Research, 16(3), 574–582.


oscillation frequency response model. Hydrology and Earth System Sciences, 2(1), 111–120.

[31] Seyfried, M. S. and Murdock, M. D. (2004). Measurement of soil water content with a 50-MHz soil dielectric sensor. Soil Science Society of America Journal, 68(2), 394–403.

[32] He, Y., DeSutter, T., Prunty, L., Hopkins, D., Jia, X. and Wysocki, D. A. (2012). Evaluation of 1:5 soil to water extract electrical conductivity methods. Geoderma, 185–186, 12–17.

[33] Rhoades, J. D., Chanduvi, F. and Lesch, S. (1999). Soil salinity assessment. Methods and interpretation of electrical conductivity measurements (FAO Irrigation and Drainage Paper 57). Rome, Italy: Food and Agricultural Organization of the United Nations.

[34] Kelleners, T. J. and Verma, A. K. (2010). Measured and modeled dielectric properties of soils at 50 megahertz. Soil Science Society of America Journal, 74(3), 744–752.

[35] Evett, S. R. (2007). Soil water and monitoring technology. In R. J. Lascano and R. E. Sojka (Eds.), Irrigation of agricultural crops (2nd Ed., pp. 25–84). Madison, WI: American Society of Agronomy, Crop Science Society of America, Soil Science Society of America.

[36] Malicki, M. A. and Walczak, R. T. (1999). Evaluating soil salinity status from bulk electrical conductivity and permittivity. European Journal of Soil Science, 50(3), 505–514.

[37] Wilczek, A., Szyplowska, A., Skierucha, W., Ciesla, J., Pichler, V. and Janik, G. (2012). Determination of soil pore water salinity using an FDR sensor working at various frequencies up to 500 MHz. Sensors (Switzerland), 12(8), 10890–10905.

[38] Hilhorst, M. A. (2000). A pore water conductivity sensor. Soil Science Society of America Journal, 64(6), 1922–1925.

[39] Kizito, F., Campbell, C. S., Campbell, G. S., Cobos, D. R., Teare, B. L., Carter, B., et al. (2008). Frequency, electrical conductivity and temperature analysis of a low-cost capacitance soil moisture sensor. Journal of Hydrology, 352(3–4), 367–378.

[40] Whalley, W. R. (1993). Considerations on the use of time-domain reflectometry (TDR) for measuring soil water content. Journal of Soil Science, 44(1), 1–9.

[41] Visconti, F., Martínez, D., Molina, M. J., Ingelmo, F. and De Paz, J. M. (2014). A combined equation to estimate the soil pore-water electrical conductivity: Calibration with the WET and 5TE sensors. Soil Research, 52(5), 419–430.

[42] Rhoades, J. D. (1981). Predicting bulk soil electrical conductivity versus saturation paste extract electrical conductivity calibrations from soil properties. Soil Science Society of America Journal, 45, 42–44.

[43] Rhoades, J. D., Shouse, P. J., Alves, W. J., Manteghi, N. A. and Lesch, S. M. (1990). Determining soil salinity from soil electrical conductivity using different models and estimates. Soil Science Society of America Journal, 54, 46–54.


[56] Zhu, Q., Lin, H. S. and Doolittle, J. A. (2013). Functional soil mapping for site-specific soil moisture and crop yield management. Geoderma, 200–201, 45–54.

[57] Kitchen, N. R., Sudduth, K. A. and Drummond, S. T. (1999). Soil electrical conductivity as a crop productivity measure for claypan soils. Journal of Production Agriculture, 12(4), 607–617.

[58] Kitchen, N. R., Drummond, S. T., Lund, E. D., Sudduth, K. A. and Buchleiter, G. W. (2003). Soil electrical conductivity and topography related to yield for three contrasting soil-crop systems. Agronomy Journal, 95(3), 483–495.

[59] Corwin, D. L., Lesch, S. M., Shouse, P. J., Soppe, R. and Ayars, J. E. (2003). Identifying soil properties that influence cotton yield using soil sampling directed by apparent soil electrical conductivity. Agronomy Journal, 95(2), 352–364.

[60] Guo, W., Maas, S. J. and Bronson, K. F. (2012). Relationship between cotton yield and soil electrical conductivity, topography, and landsat imagery. Precision Agriculture, 13(6), 678–692.

[61] Vanden-Heuvel, R. M. (1996). The promise of precision agriculture. Journal of Soil and Water Conservation, 51(1), 38–40.

[62] Lueck, E. and Ruehlmann, J. (2013). Resistivity mapping with GEOPHILUS ELECTRICUS—information about lateral and vertical soil heterogeneity. Geoderma, 199, 2–11.

[63] Corwin, D. L. and Lesch, S. M. (2005). Characterizing soil spatial variability with apparent soil electrical conductivity: Part II. Case study. Computers and Electronics in Agriculture, 46(1–3 Spec. Iss.), 135–152.

[64] Abdu, H., Robinson, D. A. and Jones, S. B. (2007). Comparing bulk soil electrical conductivity determination using the DUALEM-1S and EM38-DD electromagnetic induction instruments. Soil Science Society of America Journal, 71(1), 189–196.

[65] Serrano, J., Shahidian, S. and da Silva, J. M. (2014). Spatial and temporal patterns of apparent electrical conductivity: DUALEM vs. Veris sensors for monitoring soil properties. Sensors (Switzerland), 14(6), 10024–10041.

[66] De Paz, J. M., Visconti, F. and Rubio, J. L. (2011). Spatial evaluation of soil salinity using the WET sensor in the irrigated area of the Segura river lowland. Journal of Plant Nutrition and Soil Science, 174(1), 103–112.

[67] Ding, J. and Yu, D. (2014). Monitoring and evaluating spatial variability of soil salinity in dry and wet seasons in the Werigan-Kuqa oasis, China, using remote sensing and electromagnetic induction instruments. Geoderma, 235–236, 316–322.

[68] Rodrigues, F. A., Bramley, R. G. V. and Gobbett, D. L. (2015). Proximal soil sensing for precision agriculture: Simultaneous use of electromagnetic induction and gamma radiometrics in contrasting soils. Geoderma, 243–244, 183–195.

[69] Mouazen, A. M., Alhwaimel, S. A., Kuang, B. and Waine, T. (2014). Multiple on-line soil sensors and data fusion approach for delineation of water holding capacity zones for site specific irrigation. Soil and Tillage Research, 143, 95–105.


[82] Navarro-Hellín, H., Torres-Sánchez, R., Soto-Valles, F., Albaladejo-Pérez, C., López-Riquelme, J. A. and Domingo-Miguel, R. (2015). A wireless sensors architecture for efficient irrigation water management. Agricultural Water Management, 151, 64–74.

[83] Adamchuk, V. I., Hummel, J. W., Morgan, M. T. and Upadhyaya, S. K. (2004). On-the-go soil sensors for precision agriculture. Computers and Electronics in Agriculture, 44(1), 71–91.

[84] De Benedetto, D., Castrignanò, A., Rinaldi, M., Ruggieri, S., Santoro, F., Figorito, B., et al. (2013). An approach for delineating homogeneous zones by using multi-sensor data. Geoderma, 199, 117–127.

[85] Fuentes, S., Rogers, G., Jobling, J., Conroy, J., Camus, C., Dalton, M., et al. (2008). A soil-plant-atmosphere approach to evaluate the effect of irrigation/fertigation strategy on grapevine water and nutrient uptake, grape quality and yield. Acta Horticulturae, (792), 297–303.

[86] Rogers, G., Shuttleworth, L., Fox, M., Fuentes, S., Dalton, M., and Conroy, J. (2008). Evaluation of a combined soil EC and moisture sensor and its use to co-manage soil moisture and vine nitrogen in grapevines (cv. shiraz) under deficit irrigation. Acta Horticulturae, 792, 543–549.

[87] Colburn, J. W. (1999). Soil doctor multi-parameter, real-time soil sensor and concurrent input control system. In P. C. Robert, R. H. Rust & W. E. Larson (Eds.), Precision agriculture (pp. 1011–1021). Madison, WI: American Society of Agronomy, Crop Science Society of America, Soil Science Society of America.

[88] Paul, W. (2002). Prospects for controlled application of water and fertiliser, based on sensing permittivity of soil. Computers and Electronics in Agriculture, 36(2–3), 151–163.

**Force, Pressure, Dimensions and Signals**

### **Force Measuring System for High-Precision Surface Characterization under Extreme Conditions**

Roman Nevshupa and Marcello Conte

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/62386

#### **Abstract**

Force measuring is used in various surface characterization techniques such as indentation, scratch tests, tribological analysis, determination of gas content, etc. The main problems related to force measurement under extreme conditions are analysed. A strategy for solving these problems is discussed, and several examples of successful solutions recently developed by the authors are presented. The need to carry out characterization under extreme conditions poses serious problems for the designers of measuring systems, which may include incompatibility of the sensors with the test conditions, undesirable interactions with other components, stability, precision and uncertainty issues, the measurement range, etc. Resolving these problems must be based on a global approach in which the characterization system is considered as a whole, while the designer must analyse and resolve possible conflicts between the subsystems. The way an appropriate force measuring system can be selected is described. The proposed method is illustrated by an example in which indirect force measurement using an optical fibre displacement sensor was used. Another example describes a measuring system developed for vacuum high-temperature nanoindentation. At high temperature, proper heat management based on non-contact heating and a laminar-flow cooling system is mandatory to avoid experimental data being affected by external noise and thermal drift.

**Keywords:** Force sensor, Uncertainty, Modelling, Extreme conditions, Vacuum, High-Temperature Nanoindentation

#### **1. Introduction**

Like the aerospace industry itself, vacuum tribology developed rapidly in the middle of the twentieth century, seeking to cover this growing sector's needs for reliable, durable materials and

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

lubricants suitable for operating under vacuum and rarefied gas atmospheres [1–3]. By the 1970s, when some of the problems had been resolved and the limitations of others had been defined, further development of vacuum tribological techniques slowed down [4]. The situation changed in the 1990s, driven by advancements in various sectors, including semiconductors, energy, and transport, that issued new challenges for the development of materials with controlled micro- and nanostructure for extreme operating conditions. For example, advanced tribological materials for space applications must comply with very strict requirements on strength, toughness, and wear resistance and must also have low outgassing, low emission of volatiles, high chemical stability, low secondary electron emission, etc. [3,5,6]. Moreover, in the semiconductor industry, gaseous and particulate contamination associated with tribochemical degradation and wearing of materials in the tribo-contacts of mechanical components is one of the main problems hindering the development of fully automatic robotized technological lines with enhanced production yield.

Surface mechanical characterization in vacuum, such as precise tribometry, indentation, scratching, etc., is now enjoying a renaissance, with various new techniques developed in the last three decades [4,7–9]. These new techniques have evolved towards both enhancement of their features and integration with other techniques. The combination of mechanical, physical, and chemical characterization [4] has offered the opportunity to study complex processes of gas-phase lubrication [10], mechanically stimulated gas emission [4,11–13], triboplasma [14–16], triboelectrification [17], triboluminescence [15,16,18], emission of charged particles [19,20], tribochemical reactions [21–25], and so on. On the other hand, nanoindentation has become widespread as a method to study the mechanical response of materials [26], particularly the measurement of hardness [27,28], elastic modulus [27,28], hardening exponents [29], creep parameters [30], and residual stresses [31]. For all of these purposes, however, nanoindentation testing has most commonly been conducted at room temperature. This is in spite of the fact that micro-materials and devices are often employed at elevated temperatures, and deformation physics is usually thermally activated. The use of nanoindentation to study materials and thin films at high temperature is of high interest but suffers from several limitations due to material oxidation, thermal drift, and machine stability [32].

Therefore, the development of precise and reliable measuring systems for force and displacement that are compatible with extreme operating conditions and with other instruments for physico-chemical characterization is crucial for the overall performance of these techniques. In this work we summarize some recent advances in the development of force measuring systems for high-temperature nanoindentation and ultrahigh vacuum tribometry.

#### **2. Problems and challenges of measuring systems under extreme conditions: resolving the conflicts**

#### **2.1. Ultrahigh vacuum tribology**

Force transducers suitable for ultrahigh vacuum applications must comply not only with the requirements on rated capacity, non-linearity, combined error, repeatability, reproducibility, creep, long-term stability, frequency response, fatigue life, etc. but also with a series of specific provisions [33]. Among others, these provisions include the possibility of outgassing bakeout, a reduced emission rate of volatiles and gas desorption, appropriate thermal management and heat sinks in the absence of convection and heat conduction through air, low emission of electromagnetic interference that could affect neighbouring analytic tools, and so on. For example, strain gauge, capacitive, and inductive transducers can cause crosstalk in charge detectors, electron multipliers, antennas, and other sensitive electromagnetic devices. Force transducers that utilize visible light may alter measurements of triboluminescence by introducing undesirable background radiation.


In general, selection and development of the transducers for a measuring system to be used under vacuum must follow a holistic approach in which a satisfactory compromise is found between the requirements set by the various subsystems of the entire test rig on the environmental and operational conditions, the relative position of the components, and their compatibility. A design of the test rig in which such a satisfactory compromise is achieved is considered optimal. Recently it was shown that an experimental system for combined tribological, physical, and chemical characterization normally comprises seven subsystems, one of which is the measurement system [4]. Therefore, an optimal design must involve matching the measuring system with the other six subsystems: vacuum, mechanical, sample handling, environmental control, loading and kinematics, and physico-chemical characterization (**Figure 1**).

**Figure 1.** Subsystems of a vacuum test rig for complex tribological and tribo-physico-chemical (TPC) characterization of materials. Some of the critical relationships and constraints between the subsystems are shown by arrows: i (blue) – limitations between the Vacuum, Mechanical, and Loading and Kinematics subsystems; ii (red) – limitations between the Vacuum and Sample handling subsystems; iii (green) – limitations between the Sensors, Environmental control, and Vacuum subsystems; iv (orange) – limitations between the Sample handling and TPC subsystems. (Reprinted from [4] Copyright 2015, with permission from Elsevier.)

#### **2.2. Nanoindentation at high and low temperatures**

In normal ambient conditions most metallic materials, semiconductors, metal hydrides, etc., when heated, undergo oxidation. In many cases the oxide layer at elevated temperature can be as thick as the penetration depth of the tip. This poses a serious problem for material characterization through nanoindentation, since the measured hardness value may correspond to the oxide layer rather than to the substrate. To avoid oxidation, the measurement should be done in an inert environment or in high vacuum. The latter is preferable because of the higher surface cleanliness. Also, in the absence of convection in vacuum, more precise and efficient heating control can be achieved. Vacuum is also desirable when indentation must be carried out at cryogenic temperatures. In this case, vacuum is needed to avoid condensation and to cut down thermal flow towards cold surfaces through conduction and convection. Nevertheless, increasing the complexity of the experimental system by adding subsystems, e.g. pumping means, vacuum measurement, etc., can introduce significant external noise or limit the system functionality. For example, the materials being tested must be vacuum compatible, i.e. have a sufficiently low saturated vapour pressure at a given temperature in order to reach the required degree of vacuum. Another important problem is related to deformations of the system components due to differences in the thermal expansion of construction materials. The components of the measurement system may be subjected to significant temperature gradients when the indenter tip and the material at the indentation zone are heated up or cooled down, while some sensitive elements, including electronic circuits situated in proximity to the indentation zone, have to be maintained at room temperature. When the temperature distribution in the components of the measuring system is non-stationary, it can lead to thermal drift.
Therefore, the collected data are, in general, a complex function of the mechanical behaviour of the material under study, fluctuations and noise induced by the operation of subsystems, and thermal drift. Measurement uncertainties and errors associated with fluctuations, noise, and thermal drift can be significantly reduced by correct design of the measurement system. Some of the problems that must be solved to achieve the optimal design, together with possible solutions, are discussed in Section 3.

#### **3. Nanoindentation at high temperature**

The development of turbomolecular pumping systems with a magnetically suspended rotor (maglev) and an oil-less primary pump has offered a cost-effective and clean solution for those applications where vibration is a serious problem. By combining appropriate materials with a maglev pumping system it is possible to significantly damp vibrations in nanoindentation systems. Nevertheless, this is not enough when displacements due to thermal expansion occur. An interesting solution was presented in [34,35]. It is based on an active top-referencing configuration which eliminates the problem of noise almost entirely and, furthermore, reduces the instrument frame compliance to negligible values. This substantial improvement has been achieved by the use of a unique principle of load and displacement measurement. So far, all existing nanoindentation systems have been based on only one actuator and one sensor. The ultra-nanoindentation method uses two separate actuators and three separate sensors, which provide true measurements of depth and load, as well as a feedback loop that allows continuous and accurate control of the applied load (**Figure 2**). In particular, each axis has its own actuator and its own displacement and load sensors. For both axes, the displacement is applied via piezo actuators A1 and A2. The load on the indenter and on the reference is obtained from the displacement of the springs K1 and K2, measured with capacitive sensors C1 and C2. The displacement of the indenter is measured relative to the reference through the differential capacitive sensor C3.


**Figure 2.** Schematic design of Ultra Nanoindentation head. FN – normal load; A1 – Indenter's piezo actuator; A2 – Ref‐ erence's piezo actuator; K1 – Indenter's load cell spring; K2 – Reference's load cell spring; C1 – Indenter's load cell capacitive sensor; C2 – Reference's load cell capacitive sensor; C3 – Penetration depth differential capacitive sensor.

Continuous control of the normal force on both the indenter and the reference is ensured by precise feedback loops. The crucial components of the measurement head are made of Zerodur®, a material with an extremely low coefficient of thermal expansion (0.01 × 10⁻⁶ K⁻¹ over the range 0–100 °C).
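The practical impact of such a low expansion coefficient can be checked with a back-of-the-envelope calculation of linear thermal expansion, ΔL = α·L·ΔT. The sketch below compares Zerodur® with a typical steel; the 10 mm component length and the steel coefficient are illustrative assumptions, not values from this chapter.

```python
# Linear thermal expansion dL = alpha * L * dT.
# The 10 mm length and the steel comparison are illustrative assumptions.

def thermal_expansion(alpha_per_K, length_m, delta_T_K):
    """Linear expansion dL = alpha * L * dT (metres)."""
    return alpha_per_K * length_m * delta_T_K

L = 10e-3          # 10 mm component (assumed)
dT = 100.0         # heated by 100 degrees

dL_zerodur = thermal_expansion(0.01e-6, L, dT)   # Zerodur, 0.01e-6 1/K
dL_steel = thermal_expansion(12e-6, L, dT)       # typical steel (assumed)

print(f"Zerodur: {dL_zerodur * 1e9:.1f} nm")     # about 10 nm
print(f"Steel:   {dL_steel * 1e6:.1f} um")       # about 12 um
```

Over a 100 °C excursion, an assumed 10 mm Zerodur® component grows by only ~10 nm, roughly three orders of magnitude less than steel, which is why it is used where nanometre-level stability is required.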

Taking this as a starting point, addressing the thermal drift problem is the next step before performing high-precision nanoindentation tests. The needed reduction of thermal flow at the contact point can be achieved in two ways, called passive heating and active heating, respectively. The former is based on holding the indenter in contact with the sample surface for a time long enough to equilibrate the temperature of the involved bodies. The latter consists of heating up the involved bodies independently. This was demonstrated to be the most appropriate way, especially for temperatures higher than 400 °C, that is, when incandescence becomes important [8]. In the same work, the effect of thermal drift on elastic modulus and hardness measurements on standard reference materials was analysed.

Heating the sample to be characterized is not a difficult task, as any contact or non-contact heating system can be used. However, heating the indentation tip is not easy, as any contact acts as a spring whose compliance is read out by the force and displacement sensors. A non-contact heating system is therefore preferred.

In [9] a novel, patent-pending non-contact heating system based on IR radiation was presented (**Figure 3**). The main advantage of IR heaters is their almost zero thermal inertia. This heating system, located in a high-vacuum environment, uses the active top-referencing system previously illustrated for high-temperature nanoindentation. The reference and indenter shafts are irradiated independently by infrared emitters, and their temperatures are read out by means of thermocouples embedded in the tips. When the sample approaches the measurement device, an IR bath is established, allowing fast and precise regulation of the sample surface temperature. Appropriate reflective coatings and water-cooling management in the measurement area allow reaching a temperature accuracy of about 0.1 °C, that is, the accuracy of the thermocouples, at temperatures as high as 800 °C.

**Figure 3.** Schematic drawing of the sample and tip heating systems.

The thermal drift rate was measured on oxygen-free high-conductivity copper (OFHC Cu) by loading up to 100 mN in at least 30 s, holding the maximum load for a few seconds, and unloading to 5 mN, that is, less than 10% of the maximum load, to avoid material creep (**Figure 4**). The indenter was held in contact for 1 minute and the deviation of the penetration depth was measured over this period. A thermal drift of less than 1 nm/min was recorded for all the measurements up to 600 °C (**Figure 5**).

**Figure 4.** Thermal drift of the measuring system registered under various loads and temperatures.

**Figure 5.** Indentation tests on oxygen free high conductivity copper at various temperatures.

The extraction of elastic modulus and hardness from the uncorrected data is then straightforward.
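The drift-rate figure quoted above can be obtained from the low-load hold segment by fitting the recorded depth against time with a straight line and reporting the slope in nm/min. A minimal sketch, using synthetic data (an assumed 0.8 nm/min drift plus noise, not measured values from the chapter):

```python
# Estimate thermal drift rate from the depth-vs-time record of the
# low-load hold: least-squares line fit, slope converted to nm/min.
import numpy as np

def drift_rate_nm_per_min(t_s, depth_nm):
    """Least-squares slope of depth (nm) vs time (s), in nm/min."""
    slope_nm_per_s = np.polyfit(t_s, depth_nm, 1)[0]
    return slope_nm_per_s * 60.0

# Synthetic 1-minute hold at 5 mN: 0.8 nm/min drift + sensor noise (assumed).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 120)
depth = 500.0 + (0.8 / 60.0) * t + rng.normal(0.0, 0.05, t.size)

print(f"drift ~ {drift_rate_nm_per_min(t, depth):.2f} nm/min")
```

The linear fit averages out sensor noise over the hold, which is why a one-minute hold suffices to resolve sub-nm/min drift.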

#### **4. Force transducers for tribological and physicochemical characterization in ultrahigh vacuum**

#### **4.1. Force measurement in ultrahigh vacuum**


A force transducer is a device that is subjected to the force to be measured and converts it into another measurable physical quantity through a known relationship. There are several groups of force transducers that use different physical relationships. These transducers are based on the piezoelectric phenomenon, light pressure [37], magneto-elasticity [38], vibration, surface waves, the gyroscopic effect, the electromagnetic watt balance [39], etc. Many force transducers employ some form of elastic load-bearing element with known load-deformation behaviour. Elastic devices represent the most common type of force transducer, and the most frequent method is to measure the longitudinal and lateral strain. When a force is applied to the elastic element it deflects, and the deformation is measured by a displacement transducer. Displacement means moving from one position of the object to another over a specific distance or angle, referenced to its own prior position rather than to an external reference [36].
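The indirect measurement principle just described, recovering the force from the deflection of a calibrated elastic element, reduces to Hooke's law, F = k·δ. A minimal sketch; the stiffness and deflection values are illustrative assumptions:

```python
# Indirect force measurement via an elastic element: the displacement
# transducer reads the deflection, and the calibrated stiffness converts
# it to force (Hooke's law). Numbers below are illustrative.

def force_from_deflection(k_N_per_m, deflection_m):
    """Hooke's law: F = k * delta (newtons)."""
    return k_N_per_m * deflection_m

k = 2.0e3       # calibrated spring stiffness, N/m (assumed)
delta = 50e-9   # 50 nm deflection read by the displacement sensor (assumed)

print(f"F = {force_from_deflection(k, delta) * 1e6:.1f} uN")  # 100.0 uN
```

The choice of stiffness sets the trade-off between sensitivity (softer springs give larger deflection per unit force) and measurement range.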

Elastic devices are made of materials which exhibit a linear stress–strain relationship with low hysteresis and low creep in the working range, such as metal alloys (beryllium copper, phosphor bronze, Monel, Inconel, high-carbon and alloyed steels, amorphous metals, etc.), silicon, borosilicate glass, etc., and are perfectly compatible with ultrahigh vacuum conditions. The shape of the elastic element depends on a number of factors, including the range of force to be measured, the required performance, sensitivity to misalignment and buckling of the displacement transducer coupled to the elastic element, etc.

Among the different geometries of the elastic elements of force transducers, the most commonly used are the bending beam, cantilever, or leaf spring arranged perpendicularly to the direction of the force to be measured (**Figure 7**) [34,40]. The cantilever employed in atomic force microscopy is an example of a single beam used for measuring forces in two directions. **Figure 6** shows a set-up developed for microtribological measurements and consisting of two parallel beams [41–43].

**Figure 6.** Schematic drawing of a double-leaf spring made of glass and used for simultaneous measurement of normal and tangential forces in a microtribometer. (Reprinted from [42] Copyright 2003, with permission from Elsevier.)

The important disadvantage of a single beam configuration is that under applied load the free end of the beam undergoes both displacement and tilting (**Figure 7**). In some cases tilting is undesirable since it can lead to systematic errors in some displacement transducers.

**Figure 7.** Schematic drawing of a force sensor with a single leaf spring (a) and double-leaf spring (b). (Reprinted from [4] Copyright 2015, with permission from Elsevier.)

Double-leaf springs with two or more parallel beams whose free ends are rigidly connected to a block are nearly free of this disadvantage and follow approximately straight displacements [44]. Further improvements in spring linearity and reduction of tilting were achieved using a symmetrical double-leaf configuration with two pairs of springs facing each other on both sides of the point where the force is applied. **Figure 8** shows a complex elastic element designed for measuring two perpendicular forces [45] using four compliant units (A–D) and their respective mirrored compliant units (A´–D´). The operation of the unit has been described as follows: *When the normal force is applied the Flexure B, B´ and D, D´ would bend to give the desired displacement and Flexure A, A´ and C, C´ are in tensile/compressive load. Similarly, when the lateral force is applied flexure A, A´ and C, C´ deflects to give the desired motion and Flexure B, B´ and D, D´ are in tensile/compressive load. Any parasitic errors due to bending of compound flexures are compensated by the secondary motion stage. Furthermore, this force measuring mechanism is relatively insensitive to thermal disturbances and manufacturing errors due to its symmetry*.
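The stiffness of a parallel double-leaf spring can be estimated from standard fixed-guided beam theory: each leaf contributes k_leaf = 12·E·I/L³ with I = w·t³/12, so a pair acting in parallel gives k = 2·E·w·t³/L³. This is a generic textbook estimate under small-deflection assumptions, not the design equation of any of the cited instruments, and the dimensions below are illustrative:

```python
# Stiffness of a parallel double-leaf spring: two identical fixed-guided
# leaves guiding a rigid block. k_leaf = 12*E*I/L^3, I = w*t^3/12,
# hence k = 2*E*w*t^3/L^3. All dimensions are illustrative assumptions.

def double_leaf_stiffness(E_Pa, width_m, thickness_m, length_m):
    """Stiffness (N/m) of two fixed-guided leaves acting in parallel."""
    k_leaf = E_Pa * width_m * thickness_m**3 / length_m**3
    return 2.0 * k_leaf

# Borosilicate glass leaves, E ~ 64 GPa (assumed),
# 10 mm wide, 0.2 mm thick, 30 mm long:
k = double_leaf_stiffness(64e9, 10e-3, 0.2e-3, 30e-3)
print(f"k ~ {k:.0f} N/m")
```

Because stiffness scales with t³/L³, small changes in leaf thickness or length shift the measurement range dramatically, which is why flexure dimensions are the primary design knobs.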


**Figure 8.** Elastic element for measuring two perpendicular forces using four compliant symmetric double-leaf units. (Reprinted from [45]).

A number of force transducers employing elastic elements in the form of parallel double-leaf springs have been developed and patented for atmospheric and vacuum tribometry [41,43,46–50]. A different approach was proposed for measuring 3D forces with a single elastic element having the shape of a hollow elastic dome [51]. Deformation of the dome is sensed by a photodiode matrix situated at the circular baseplate of the dome.

Sensing the displacement of the elastic element in vacuum can be done with almost any kind of displacement transducer. Strain gauge technology has been the most prominent on the market since its inception in 1938. Other types of displacement transducers include capacitive sensors [34,52,53], laser interferometry [54], Doppler sensors [55], linear variable differential transformers, optical fibre strain gauges, interference-optical load cells, and so on. An optical displacement transducer is preferable when sensitive devices are used for physico-chemical characterization. A fibre-optic displacement sensor based on the principle of reflective light intensity modulation is a simple, contactless, and compact transducer suitable for operating under vacuum and in extreme environments [41,42,56–60]. The basic principles and uncertainty analysis of this type of sensor are discussed in the following section.

#### **4.2. Intensity modulated bundle fibre-optic displacement sensor**


An intensity-modulation fibre-optic displacement sensor (IMFODS) consists of one or several transmitting and one or several receiving step-index optic fibres packed in a bundle. At the proximal end the transmitting fibres are illuminated by a light source. Light passed through the transmitting fibres exits at their distal end and is reflected from the surface, the distance to which is being measured (**Figure 9**). It is assumed that the reflection is purely specular; thus, the irradiance distribution function at the image plane is the same as the radiant emittance at the exit of the transmitting fibre. The reflected light is then coupled into the receiving fibres and directed to a photodiode detector to measure its power. Displacement of the reflecting surface towards or away from the distal end of the bundle modulates the radiant power of the light entering the receiving fibres, thus producing a variable electrical signal at the photodiode [61,62]. A thorough explanation of the operating principles of intensity-modulation optic fibre displacement sensors is given in the literature [56,62–64].

**Figure 9.** Geometrical considerations of a two-fibre sensor.

The irradiance distribution function at the image plane and the configuration of the receiving and transmitting fibres within a bundle provide the necessary information to determine the sensor modulation characteristic, i.e., the electrical signal at the photodiode detector vs. displacement.

Normally, IMFODSs have two measurement regions: one at the near side (front slope) and another at the far side (back slope) of the modulation characteristic (**Figure 10**).

**Figure 10.** Typical characteristic of IMFODS.

It is generally assumed that both regions are linear. So, when the slope *K* of a linear region is known, the displacement of the target surface, ∆*l*, can easily be determined from the difference between the output voltages at the final, *Uf* , and initial, *Ui* , positions of the target surface [65]:

$$
\Delta l = K \left( U\_f - U\_i \right) \tag{1}
$$

Assuming that the measurements of output voltage at the initial and final positions are independent and have the same uncertainty, *uU*, the uncertainty in the displacement measurement is found from the following expression [65]:

$$
u\_{\Delta l} = \Delta l \sqrt{\frac{u\_K^2}{K^2} + 2 \frac{u\_U^2}{\left(U\_f - U\_i\right)^2}} \tag{2}
$$

where *uK* is the uncertainty of *K*.
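As a minimal numerical sketch of eqs. (1) and (2), the function names and example values below are illustrative assumptions, not from the source:

```python
import math

def displacement_linear(u_f, u_i, slope_k):
    """Displacement from eq. (1): dl = K * (Uf - Ui)."""
    return slope_k * (u_f - u_i)

def displacement_uncertainty(u_f, u_i, slope_k, u_slope, u_voltage):
    """Combined standard uncertainty from eq. (2); u_voltage is the common
    uncertainty of the two independent voltage readings, counted twice."""
    dl = displacement_linear(u_f, u_i, slope_k)
    return abs(dl) * math.sqrt((u_slope / slope_k) ** 2
                               + 2.0 * u_voltage ** 2 / (u_f - u_i) ** 2)
```

For example, with *K* = 0.5 mm/V, *Uf* = 2.0 V, *Ui* = 1.2 V, *uK* = 0.01 mm/V and *uU* = 5 mV, the sketch gives Δ*l* = 0.4 mm with an uncertainty of roughly 9 μm.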


Although the linear assumption is useful for practical applications, the measurement uncertainty is quite large since the characteristics of an IMFODS are not perfectly linear in either range [65]. In addition, the linear regions are quite narrow compared with the total available measurement range. All this limits the applicability of this approach.

Another approach is based on the displacement determination directly from the modulation characteristic without any assumption on its form [65]:

$$
\Delta l = l\left(U\_f\right) - l\left(U\_i\right) \tag{3}
$$

where *l*(*Uz*) is the distance between the sensor and the target surface corresponding to the output voltage *Uz*.

In this approach an inverse modulation characteristic *l* = *g*(*U*) is used. *g*(*U*) is a monotonic function with two existence domains corresponding to the front and back slopes of the normal modulation characteristic.

The corresponding uncertainty is [65]:

$$
u\_{\Delta l} = \sqrt{u\_{l(U\_f)}^2 + u\_{l(U\_i)}^2} \tag{4}
$$

where *ul*(*U*) is the uncertainty in the determination of distance *l*(*U*).

Measurement uncertainty can be found experimentally at each point of the modulation characteristic or from the uncertainty of the normal modulation characteristic, *uf*(*l*), using the following transformation [65]:

$$u\_{l(U)} = \frac{\sqrt{u\_{f(l)}^2 - u\_U^2}}{df\left(l\right)/dl}\tag{5}$$

In most cases the uncertainty in the voltage reading is much smaller than the uncertainty of the modulation characteristic, so (5) can be simplified:

$$
u\_{l(U)} \approx \frac{u\_{f(l)}}{\left| df\left(l\right)/dl \right|} \tag{6}
$$

These formulae indicate that knowledge of the modulation characteristic is required for displacement measurement and determination of its uncertainty.
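The inverse-characteristic approach of eqs. (3)–(6) can be sketched as follows, assuming a tabulated calibration curve on one monotonic slope; the helper names and the synthetic characteristic are illustrative assumptions:

```python
import bisect

def distance_from_voltage(l_grid, u_grid, voltage):
    """Eq. (3) building block: invert the tabulated modulation characteristic
    l = g(U) by linear interpolation on one monotonic slope."""
    pairs = sorted(zip(u_grid, l_grid))           # sort by voltage for bisect
    us = [u for u, _ in pairs]
    ls = [l for _, l in pairs]
    j = min(max(bisect.bisect_left(us, voltage), 1), len(us) - 1)
    t = (voltage - us[j - 1]) / (us[j] - us[j - 1])
    return ls[j - 1] + t * (ls[j] - ls[j - 1])

def distance_uncertainty(l_grid, u_grid, u_char, i):
    """Eq. (6): u_l(U) ~ u_f(l) / |df(l)/dl|, slope by central difference
    around grid index i."""
    dfdl = (u_grid[i + 1] - u_grid[i - 1]) / (l_grid[i + 1] - l_grid[i - 1])
    return u_char / abs(dfdl)
```

With a synthetic back-slope characteristic *U* = 10 − 0.1 *l*, a reading of 9.0 V maps back to *l* = 10, and a characteristic uncertainty of 0.05 V translates into 0.5 length units through the local slope.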

So far, several models were developed for the modulation characteristic of a single pair of fibres and different irradiance distribution functions: uniform [56,62,66,67], Gaussian [61,64,68], modified Gaussian [69,70], uniangular [63], and theoretical [56]. Though these models showed good correlation with the experimental results, most of them have not been studied for multiple bundled fibres. Cao and co-workers [64] developed a model for the optical fibre bundle displacement sensor for several bundle configurations assuming a quasi-Gaussian irradiance distribution function. The quasi-Gaussian, i.e., truncated Gaussian, distribution was adopted by these authors on the assumption that it best matches reality, though no arguments for this assumption were presented. He and Cuomo [56] developed a more realistic theoretical irradiance distribution function for the lossless step-index multimode fibre illuminated with a Lambertian light source. This model is based on the assumption that each light ray propagating through the optical fibre core at a different angle with respect to the fibre axis carries the same power. The model was validated for a bundle of seven fibres (six receiving fibres surrounding a transmitting one) with a core radius of 50 μm and a displacement range between 0 and 900 μm.

Before getting deeper into modelling, the coordinates and geometrical considerations must be defined. It is assumed that light exits the transmitting fibre as a cone with a critical angle *θ<sup>c</sup>* (**Figures 9** and **11**). The radius of the light spot at the image plane is a function of the distance *D* between the distal end of the bundle and the reflecting surface:

$$x\_c = x\_0 + 2D\tan\theta\_c \tag{7}$$

and corresponding dimensionless critical radius:

$$k\_c = \frac{x\_c}{x\_0} = 1 + \frac{2D}{x\_0} \tan \theta\_c \tag{8}$$

Force Measuring System for High-Precision Surface Characterization under Extreme Conditions http://dx.doi.org/10.5772/62386 141

**Figure 11.** Geometry for determination of the subtended area for a pair of fibres placed at a distance *an* between their centres.

The distance between the centres of adjacent fibres depends on the core diameter and the cladding thickness: *aa* = 2(*cl* + *x*0), whereas for any other pair of fibres within a bundle the distance between the fibre centres, *an*, is a function of the bundle configuration. The corresponding dimensionless distance is *m* = *an*/*x*0. For simulation purposes, *m* was varied proportionally to the dimensionless distance between the centres of adjacent fibres, which in our case was 20/9.

The reflected light collected by a receiving fibre depends on the subtended area, *Si* , and the irradiance distribution function, *Ii* , thereon. The subtended area, shown by the shadowed region in **Figure 11**, is determined as a function of geometrical parameters only from the following expressions:

$$S\_i\left(D,\theta\_c\right) = x\_0^2 \left(k\_c^2 \varphi + \sin^{-1}\left(k\_c \sin \varphi\right) - mk\_c \sin \varphi\right),\quad k\_c \leq \sqrt{m^2 + 1} \tag{9a}$$

$$S\_i\left(D,\theta\_c\right) = x\_0^2 \left(k\_c^2 \varphi + \pi - \sin^{-1}\left(k\_c \sin \varphi\right) - mk\_c \sin \varphi\right),\quad \sqrt{m^2 + 1} < k\_c < m + 1 \tag{9b}$$

where

$$\varphi = \cos^{-1}\left(\frac{k\_c^2 + m^2 - 1}{2k\_c m}\right) \tag{10}$$

The above model is a further development of simpler models that account for only a certain fixed distance between the fibres [63,67,69,70]. In addition, this model corrects some errors present in [63].

The received radiant power is calculated by integrating the irradiance distribution function over the subtended area (**Figure 11**):

$$P\_{ig} = \int\_{S\_i} I\_i \, dS = 2x\_0^2 \int\_{m-1}^{k\_{max}} I\_i \, k \cos^{-1} \frac{k^2 + m^2 - 1}{2km} \, dk \tag{11}$$

where *kmax* = *m* + 1 or *kmax* = *kc*, whichever is smaller.

In the simplest case the irradiance distribution function is assumed uniform over the light spot. Then, assuming reflection losses are negligible, the irradiance at the image plane is determined as follows:

$$I\_{iu} = I\_0 \frac{x\_0^2}{x\_c^2} \tag{12}$$

where *I0* is the mean radiant emittance at the exit from the transmitting fibre.

The radiant flux directed to the photodiode detector is given by the following expressions in dimensionless form:

$$\frac{P\_i}{I\_0 x\_0^2} = k\_c^{-2} \left( k\_c^2 \varphi + \sin^{-1} \left( k\_c \sin \varphi \right) - m k\_c \sin \varphi \right),\quad k\_c \le \sqrt{m^2 + 1} \tag{13a}$$

$$\frac{P\_i}{I\_0 x\_0^2} = k\_c^{-2} \left( k\_c^2 \varphi + \pi - \sin^{-1} \left( k\_c \sin \varphi \right) - m k\_c \sin \varphi \right),\quad \sqrt{m^2 + 1} < k\_c < m + 1 \tag{13b}$$

$$\frac{P\_i}{I\_0 x\_0^2} = \pi k\_c^{-2},\quad k\_c > m + 1\tag{13c}$$
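A minimal sketch of the uniform-irradiance model, combining eqs. (8), (10) and (13a)–(13c); the function name and guard conditions are assumptions of this illustration:

```python
import math

def received_power_uniform(D_over_x0, m, theta_c):
    """Dimensionless received power P_i/(I0*x0^2) of a single fibre pair
    for a uniform irradiance distribution, eqs. (8), (10), (13a)-(13c)."""
    kc = 1.0 + 2.0 * D_over_x0 * math.tan(theta_c)          # eq. (8)
    if kc <= m - 1.0:
        return 0.0                                          # spot misses the receiving fibre
    if kc > m + 1.0:
        return math.pi / kc ** 2                            # eq. (13c): fibre fully inside spot
    phi = math.acos((kc ** 2 + m ** 2 - 1.0) / (2.0 * kc * m))  # eq. (10)
    s = kc * math.sin(phi)
    if kc <= math.sqrt(m ** 2 + 1.0):                       # eq. (13a)
        return (kc ** 2 * phi + math.asin(s) - m * s) / kc ** 2
    return (kc ** 2 * phi + math.pi - math.asin(s) - m * s) / kc ** 2  # eq. (13b)
```

With *m* = 20/9 the characteristic is zero up to the offset of eq. (17) and decays towards the (*D*/*x0*)<sup>−2</sup> tail far beyond the peak.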

In a more complex model the irradiance distribution is simulated by a Gaussian function [61]:

$$
I\_{ig} \left( k \right) = \frac{I\_0}{k\_c^2} \exp \left( -\frac{k^2}{k\_c^2} \right) \tag{14}
$$

After substituting (14) into (11), the received radiant power can be obtained by integration in polar coordinates:


$$\frac{P\_{ig}}{I\_0 x\_0^{2}} = 2k\_c^{-2} \int\_{m-1}^{k\_{max}} \exp\left(-\frac{k^{2}}{k\_c^{2}}\right) k \cos^{-1}\frac{k^{2}+m^{2}-1}{2km} \, dk \tag{15}$$
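Eq. (15) has no closed form, but it can be evaluated with simple trapezoidal quadrature; in this sketch the function name and step count are illustrative:

```python
import math

def received_power_gaussian(D_over_x0, m, theta_c, n=2000):
    """Dimensionless received power P_ig/(I0*x0^2) for the Gaussian irradiance
    model: trapezoidal integration of eq. (15) from k = m-1 to k_max."""
    kc = 1.0 + 2.0 * D_over_x0 * math.tan(theta_c)   # eq. (8)
    k_lo = m - 1.0
    k_hi = min(m + 1.0, kc)                          # k_max, whichever is smaller
    if k_hi <= k_lo:
        return 0.0                                   # light spot misses the fibre
    h = (k_hi - k_lo) / n
    total = 0.0
    for i in range(n + 1):
        k = k_lo + i * h
        c = (k * k + m * m - 1.0) / (2.0 * k * m)
        c = min(1.0, max(-1.0, c))                   # clamp rounding noise for acos
        f = math.exp(-(k / kc) ** 2) * k * math.acos(c)
        total += f if 0 < i < n else 0.5 * f         # trapezoid end-point weights
    return 2.0 * h * total / kc ** 2
```

The integrand vanishes at both limits (the arc angle goes to zero there), so the trapezoidal rule converges quickly.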

Finally, the most realistic theoretical irradiance function was developed by He and Cuomo [56] as a piecewise-defined smooth function composed of eight formulas, combined according to the values of the parameters *k* and *kc*. Five of these formulas correspond to the interval *k* ≥ 1:

$$I\_{ih}\left(k\right) = \frac{I\_0 d}{2\theta\_c\left(d^2-1\right)} \left(\tan^{-1}\left(k\_c-1\right) - \tan^{-1}\left(k-1\right) + d^{-1}\left(\tan^{-1}\frac{k-1}{d} - \tan^{-1}\frac{k\_c-1}{d}\right)\right),\ 1 \le k\_c \le 2 \text{ and } 1 \le k \le k\_c; \tag{16a}$$

$$I\_{ih}\left(k\right) = \frac{I\_0 d}{2\theta\_c\left(d^2-1\right)} \left(\frac{\pi}{4} - d^{-1}\tan^{-1}d^{-1} - \tan^{-1}\left(k-1\right) + d^{-1}\tan^{-1}\frac{k-1}{d}\right) + \frac{I\_0}{8\theta\_c d}\ln\frac{\left(k\_c-1\right)^2\left(1+d^{-2}\right)}{1+d^{-2}\left(k\_c-1\right)^2},\ k\_c > 2,\ 1 \le k \le 2 \text{ and } k \ge k\_c - 2; \tag{16b}$$

$$I\_{ih}\left(k\right) = \frac{I\_0 d}{2\theta\_c\left(d^2-1\right)} \left(\frac{\pi}{4} - d^{-1}\tan^{-1}d^{-1} - \tan^{-1}\left(k-1\right) + d^{-1}\tan^{-1}\frac{k-1}{d}\right) + \frac{I\_0}{8\theta\_c d}\ln\frac{\left(k+1\right)^2\left(1+d^{-2}\right)}{1+d^{-2}\left(k+1\right)^2},\ k\_c > 2,\ 1 \le k \le 2 \text{ and } k < k\_c - 2; \tag{16c}$$

$$I\_{ih}\left(k\right) = \frac{I\_0}{8\theta\_c d} \ln \frac{\left(k\_c - 1\right)^2 \left(1 + d^{-2} \left(k - 1\right)^2\right)}{\left(k - 1\right)^2 \left(1 + d^{-2} \left(k\_c - 1\right)^2\right)},\ k\_c > 2,\ k > 2 \text{ and } k \ge k\_c - 2; \tag{16d}$$

$$I\_{ih}\left(k\right) = \frac{I\_0}{8\theta\_c d} \ln \frac{\left(k+1\right)^2 \left(1 + d^{-2} \left(k-1\right)^2\right)}{\left(k-1\right)^2 \left(1 + d^{-2} \left(k+1\right)^2\right)},\ k\_c > 2,\ k > 2 \text{ and } k < k\_c - 2; \tag{16e}$$

where *d* = 2*D*/*x*0.

The radiant power received by the receiving fibre was calculated numerically in dimensionless form for three irradiance distribution functions. The results are given in **Figure 12**.

**Figure 12.** Elemental two-fibre modulation characteristics for various light intensity distribution functions: a) uniform, b) Gaussian, c) He-Cuomo theoretical distribution. Insets show the offset (blind) regions. (Reprinted from [65], Copyright 2013, IOP Publishing.)

The graphs of dimensionless received radiant power vs. *D*/*x0* for the uniform irradiance function show a horizontal offset at the origin, *Off*, that increases with increasing *m* (**Figure 12a**). The offset in dimensionless form can be determined from the following expression:

$$O\_{ff} = \frac{m-2}{2\tan\theta\_c} \tag{17}$$

For adjacent fibres $O\_{ffa} = cl / \left(x\_0 \tan \theta\_c\right)$, i.e., the offset is proportional to the cladding thickness. When *m* increases, the value at the maximum decreases and the peak becomes less sharp. The falling edges of all peaks converge to the function $(D/x\_0)^{-2}$.

For the Gaussian irradiance distribution function, the falling edge of all graphs is much more gradual than for the uniform irradiance model, and all the graphs converge at larger displacements than for the uniform irradiance model (**Figure 12b**).

The graphs for the He-Cuomo irradiance model are much less sharp than for the other models considered above. After reaching the maximum, the graphs decrease but do not converge in the studied range of *D*. For all irradiance distribution functions the offset of the graphs is the same, since it depends only on the cladding thickness; however, for the He-Cuomo model the graph rises very smoothly just after the offset, creating the illusion of a bigger offset.

To compare the resulting graphs, the plots for *m* = 20/9 were normalized by dividing by the corresponding maximum values (**Figure 13**). The position of the maximum shifts to larger *D*/*x0* in the following order: Uniform, *U*, Gaussian, *G*, He-Cuomo, *H-C*. Also, the slopes of the front and falling edges of the peaks decrease in the same order. It should be mentioned that for the He-Cuomo model the falling edge of the peak is much higher at larger distances than for the other two models.

**Figure 13.** Normalized received radiant power for Uniform, Gaussian and He-Cuomo irradiance functions.

In many cases the transducers have more than one transmitting and more than one receiving fibre assembled in a bundle. Independently of the order of the fibres in the bundle, the resulting signal of the photodiode detector can be found as a superposition of the signals from the individual receiving fibres. In turn, the signal on a receiving fibre is proportional to the sum of the received radiant power from each transmitting fibre. Therefore, the total voltage at the photodiode detector can be determined by summation [65]:

$$U\left(D\right) = \sum\_{l=1}^{N\_r} U\_l\left(D\right) \propto \sum\_{l=1}^{N\_r} \sum\_{j=1}^{N\_t} P\_i\left(l, j, D\right) \tag{18}$$

where *Pi*(*l*,*j*,*D*) is the radiant power from the *j*-th transmitting fibre subtended by the *l*-th receiving fibre at a distance *D* from the reflecting surface, *Nt* is the number of transmitting fibres and *Nr* is the number of receiving fibres.

The output voltage of the sensor depends not only on the distance to the reflecting surface and the irradiance function but also on *an*(*l*,*j*), i.e., on the configuration of the fibres within a bundle.

A summation model for equally spaced fibres with 1×2 periodicity was developed in [64]. It is worth mentioning that the adjacent fibres contribute most significantly to the rising front of the resulting modulation characteristic of the sensor.
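The superposition of eq. (18) can be sketched by summing an elemental two-fibre characteristic over all transmitter/receiver pairs; here the uniform model of eqs. (13a)–(13c) serves as the elemental characteristic, purely for illustration:

```python
import math

def elemental_uniform(D_over_x0, m, theta_c):
    """Two-fibre dimensionless power, uniform irradiance, eqs. (8), (10), (13)."""
    kc = 1.0 + 2.0 * D_over_x0 * math.tan(theta_c)
    if kc <= m - 1.0:
        return 0.0
    if kc > m + 1.0:
        return math.pi / kc ** 2
    phi = math.acos((kc ** 2 + m ** 2 - 1.0) / (2.0 * kc * m))
    s = kc * math.sin(phi)
    if kc <= math.sqrt(m ** 2 + 1.0):
        return (kc ** 2 * phi + math.asin(s) - m * s) / kc ** 2
    return (kc ** 2 * phi + math.pi - math.asin(s) - m * s) / kc ** 2

def bundle_signal(D_over_x0, transmitters, receivers, theta_c):
    """Eq. (18): detector signal as a superposition of elemental two-fibre
    powers over every transmitting/receiving pair. Fibre-centre coordinates
    are given in units of the core radius x0."""
    total = 0.0
    for xt, yt in transmitters:
        for xr, yr in receivers:
            m = math.hypot(xr - xt, yr - yt)   # dimensionless centre distance
            total += elemental_uniform(D_over_x0, m, theta_c)
    return total
```

With *x0* = 22.5 μm and *cl* = 2.5 μm, adjacent centres sit 2(*cl* + *x0*) = 50 μm apart, i.e., *m* = 20/9, matching the value used in the text; for a single pair the sum reduces to the elemental characteristic.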

The effect of bundle configuration was modelled for a bundle containing 85 fibres with a fibre core radius *x0* = 22.5 μm and cladding thickness *cl* = 2.5 μm. Geometry and enumeration of individual fibres in the bundle are shown in **Figure 14**.

**Figure 14.** Three bundle configurations were modelled. The numbers denote the centres of individual fibres.

The following three configurations of a bundle were modelled: alternating linear configuration of fibres (a line of transmitting fibres followed by a line of receiving fibres and so on), semicircle, and random configuration. The results for alternating linear configuration and for the semicircular one are shown in **Figure 15**.

**Figure 15.** Modulation characteristics of an IMFODS with a) alternating linear configuration of transmitting and receiving fibres; b) semicircle configuration of emitting and receiving fibres. U – uniform irradiance function; G – Gaussian irradiance model; H-C – He-Cuomo theoretical irradiance function.

For the alternating configuration of fibres, a large number of transmitting fibres are situated at a short distance from the receiving ones. Therefore, for all three irradiance distribution functions the plots have sharp peaks at the falling edge resulting from the contribution of the nearby transmitting fibres. However, for the He-Cuomo function these peaks are less prominent than for the other functions. A similar effect can also be observed for the semicircle configuration (**Figure 15b**). As the sensor is retracted from the reflecting surface, new rows of receiving and transmitting fibres enter synchronously into the light spot at the image plane, producing sharp steps on the modulation characteristic. Again, for the He-Cuomo model the graphs are smoother than for the other irradiance distribution functions.

Earlier, Cao et al. [64] simulated modulation characteristics of an IMFODS using various bundle configurations including a "random" one. In fact, the configuration they referred to as "random" was an ordered one with a definite pattern (see **Figure 2** in [64]). In this work, a truly random distribution of the fibres was obtained using a random number generator. Five different random configurations were created. **Figure 16a** shows the mean modulation characteristic determined as the mean of the five particular modulation characteristics corresponding to the five random bundle configurations. In addition, the standard deviation and the error of the mean were determined for all models. Despite the use of random configurations followed by averaging of the particular modulation characteristics, the graphs in **Figure 16** for the uniform and Gaussian models show significant corrugation in the region near the maximum. However, the He-Cuomo model gives a very smooth modulation curve without any significant peaks or corrugation.

**Figure 16.** a) Mean modulation characteristic of a bundle optical fibre displacement sensor with random configuration of transmitting and receiving fibres. The portion of the graphs fitted by a linear function is shown by vertical dashed lines. C – experimental data, U – uniform model, G – Gaussian model, H-C – He-Cuomo theoretical model. b) Relative standard error for random configuration of a bundle and different irradiance distribution functions. Inset: Absolute dimensionless value of the standard error for random configuration and the He-Cuomo theoretical irradiance distribution. Graph C is the difference between the mean modulation characteristic and the experimental data.

Graph C in **Figure 16a** represents the experimental calibration graph of a bundle optical fibre sensor. This graph is well fitted by the He-Cuomo modulation characteristic within the interval of dimensionless displacements from 0 to 20. This finding is consistent with the results published earlier by He and Cuomo [56] in the same range of dimensionless distances. The absolute difference between the two graphs (C) and the standard error for the He-Cuomo model (H-C) are plotted in the inset in **Figure 16b**. The standard error reflects the dispersion of the mean modulation characteristic related to variation of the bundle configuration. Within the interval of displacements from 3 to 20, the absolute difference between the theoretical and experimental graphs does not exceed the standard error, indicating that this difference is due to mismatch between the theoretical and real bundle configurations. A sharp peak at the beginning can be attributed to experimental error related to the limited measurement capacity of the experimental test rig. At displacements larger than 18, graph C sharply increases, indicating significant divergence between the two graphs. The slope of the H-C graph decreases, whereas the slope of the experimental graph remains almost constant (**Figure 16a**). Surprisingly, the experimental calibration curve cannot be fitted over the entire displacement range using only the He-Cuomo theoretical model.

In the range of *D*/*x0* between 18 and 35, all the plots, both calculated and experimental, are almost linear. All graphs were fitted in this range by a linear function using a least-squares method, and the mean slope and the standard error (*se*) were determined for these linear functions (see **Table 1**). The adjusted coefficient of determination for all fittings was larger than 0.99. The mean slope of the experimental calibration graph is very close to that of the Gaussian model, although the difference between the mean slopes is statistically significant.


| Symbol | Uniform | Gaussian | He-Cuomo | Calibration |
|---|---|---|---|---|
| Mean slope | −2.22 × 10⁻² | −2.08 × 10⁻² | −1.63 × 10⁻² | −1.91 × 10⁻² |
| *se* | 3.64 × 10⁻⁵ | 2.08 × 10⁻⁵ | 1.76 × 10⁻⁵ | 7.36 × 10⁻⁵ |
| Sample volume | 426 | 426 | 426 | 32 |

**Table 1.** Mean slope of the fitted linear functions, standard error of the slope (*se*), and the sample volume for the calculated modulation characteristics and the experimental calibration graph.

Seemingly, the reason for the discrepancy between the calculated and the experimental modulation characteristics should be sought in a difference between the real irradiance distribution function and the function used in the He-Cuomo model. To fit the experimental data, the real irradiance distribution function should conceivably combine the characteristics of both the Gaussian and He-Cuomo functions: it should have a sharp peak at the origin, like the He-Cuomo function, but this central part should decrease with increasing *D*/*x0* as rapidly as the Gaussian function.

It was reported that the experimental graph of measurement uncertainty vs. distance is almost identical to the simulated one obtained by varying the bundle configuration [65]. Bearing in mind that the bundle configuration was constant during the test, it was suggested that small imperfections on the mirror surface could be responsible. In fact, the presence of spots that reduce reflectivity can alter the particular contributions from certain pairs of receiving and transmitting fibres, which is equivalent to changing the bundle configuration. The imperfections should have a size beyond a certain critical value, comparable with the fibre core diameter.

Another important conclusion drawn from this study is that, in contrast to implicit expectations [71], the derivatives of the modulation characteristics have no flat features at the near side, which suggests the absence of a nominally linear region there. Thus, using the linear approach at the near side will lead to systematic errors related to the deviation of the modulation characteristic from linearity. The values of these errors depend on the displacement range as well as on the initial and final positions. However, at the far side the derivatives of both the theoretical and experimental modulation characteristics have a definite flat region, especially at *D*/*x*0 between 30 and 50. Therefore, use of the linear approach is justified on this side.

#### **5. Concluding remarks**

Technological advances in the development of precision surface characterization techniques have been notable in the last 20 years, offering a wide range of solutions to the research community. Several improvements were achieved thanks to a better understanding of the limitations of the devices and to the development of reliable measurement techniques, pushing the envelope towards extreme working conditions. This led to resolving conflicts between different subsystems by means of a holistic approach involving different physico-chemical phenomena. For complex tribological characterization and indentation, where an indirect measurement of the involved forces through displacement measurements is needed, fibre optic and capacitive sensors were found to be the most suitable. The limitations of the former are mainly related to reflectivity loss, the critical dependence of the measurement uncertainty on minute defects of the reflecting surfaces, the limited temperature range and the non-linearity of its working characteristic. For the latter, a differential capacitive sensor is preferable, which leads to the need for an active reference. In addition, the use of a capacitance sensor can be limited when it is employed together with other sensitive techniques because of electromagnetic interference. However, when these kinds of sensors are implemented in a device, their accuracy is so high that the noise floor is recorded and overlaps the measurements. Proper damping is then needed, as well as models that take into account the estimated cumulative uncertainty of the instrument. The need for subsystems that influence these sensors as little as possible, and for materials with stable mechanical characteristics over a wide range of temperatures, should therefore be highlighted.
It is then easy to understand why only non-contact heating devices, such as laser-based, inductive or radiation-based heat sources, can be used in such accurate machines, and how important it is to keep the measuring device free from external noise and temperature changes.

The results presented in this chapter show that such recommendations are not utopian, although deep experience is needed in interpreting the machine response, and proper calibrations should be carried out. As an example, **Figure 17** shows the force measuring head designed by the authors for an ultrahigh vacuum system for tribological characterization of materials and lubricants [4]. The patented head allows measuring two forces in perpendicular directions with very low crosstalk using two pairs of leaf springs. The displacement of the leaf springs is measured by IMFODS mounted on corresponding motorized micropositioning stages. These stages permit not only fine adjustment of the initial position of the IMFODS, which is necessary to reduce the measurement uncertainty, but also switching from the back to the front slope during the experiment. Since the sensitivity of the IMFODS at the front slope is almost tenfold greater than at the back slope, this flexibility in selecting the range in vacuum offers the opportunity to significantly increase the measurement range.

**Figure 17.** UHV compatible 2-axis force sensor. 1 – the base plate; 2 – the pin; 3 – one of the two leaf springs used for measuring the y component of the force; 4a and 4b – the leaf springs for measuring the x component of the force; 5, 6 – IMFODS; 7 – mirror of the x-stage; and 8 – microactuator for positioning of the optical sensor of the y-stage. (Reprinted from [4], Copyright 2015, with permission from Elsevier.)
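To illustrate how the two IMFODS displacement readings of such a head translate into force components, the sketch below applies a calibrated stiffness matrix whose off-diagonal terms represent the (very low) crosstalk between axes. The numerical values and the function name are hypothetical: the real matrix must come from calibration of the actual head, not from this example.

```python
import numpy as np

# Hypothetical 2x2 stiffness matrix (N/m) relating leaf-spring displacements
# to force components; the small off-diagonal terms model axis crosstalk.
# Illustrative values only -- the real matrix comes from head calibration.
K = np.array([[1200.0,    5.0],
              [   8.0,  950.0]])

def forces_from_displacements(dx, dy):
    """Convert the two IMFODS displacement readings (m) into forces (N)."""
    return K @ np.array([dx, dy])

fx, fy = forces_from_displacements(1e-6, 2e-6)  # 1 um and 2 um deflections
```

With a well-designed pair of leaf springs the off-diagonal terms are small, so each force component is dominated by its own displacement reading.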

#### **Author details**

Roman Nevshupa1\* and Marcello Conte2\*

\*Address all correspondence to: r.nevshupa@csic.es and marcello.conte@anton-paar.com

1 Spanish National Research Council, Institute of Constructional Sciences "Eduardo Torroja" (IETCC-CSIC), Madrid, Spain

2 Anton Paar Tritec SA, Peseux, Switzerland

#### **References**


[1] Jones WR, Jansen MJ. Space tribology. Hanover, MD: NASA Center for Aerospace Information; 2000. p. 33.

[2] Miyoshi K. Solid lubricants and coatings for extreme environments: State-of-the-art survey. Technical memorandum. Cleveland, OH: National Aeronautics and Space Administration, Glenn Research Center; 2007. p. 16.


[17] Williams MW. Triboelectric charging of insulating polymers-some new perspectives. AIP Advances. 2012;2:010701. DOI: 10.1063/1.3687233

[18] Walton AJ. Triboluminescence. Advances in Physics. 1977;26:887-948. DOI: 10.1080/00018737700101483

[19] Nakayama K, Ikeda H. Triboemission characteristics of electrons during wear of amorphous carbon and hydrogenated amorphous carbon films in a dry air atmosphere. Wear. 1996;198:71-76. DOI: 10.1016/0043-1648(96)06934-7

[20] Evdokimov VD. Specific features of exoelectron emission during friction of metals. Soviet Physics Journal. 1968;11:11-13. DOI: 10.1007/BF01890910

[21] Mahrova M, Conte M, Roman E, Nevshupa R. Critical insight into mechanochemical and thermal degradation of imidazolium-based ionic liquids with alkyl and mPEG side chains. Journal of Physical Chemistry C. 2014;118:22544–22552. DOI: 10.1021/jp504946h

[22] Nevshupa R, Ares JR, Fernández JF, del Campo A, Roman E. Tribochemical decomposition of light ionic hydrides at room temperature. The Journal of Physical Chemistry Letters. 2015;6:2780-2785. DOI: 10.1021/acs.jpclett.5b00998

[23] Rusanov A, Nevshupa R, Fontaine J, Martin J-M, Le Mogne T, Elinson V, et al. Probing the tribochemical degradation of hydrogenated amorphous carbon using mechanically stimulated gas emission spectroscopy. Carbon. 2015;81:788-799. DOI: 10.1016/j.carbon.2014.10.026

[24] Le Mogne T, Martin J-M, Grossiord C. Imaging the chemistry of transfer film in AES/XPS analytical UHV tribotester. In: Dowson D, editor. Lubrication at the frontier: The role of the interface and surface layers in the thin film and boundary regime. Amsterdam: Elsevier; 1999. p. 413-422.

[25] Kajdas C. General approach to mechanochemistry and its relation to tribochemistry. In: Tribology in engineering. Zagreb: InTech; 2013. p. 209-240.

[26] Schuh CA. Nanoindentation studies of materials. Materials Today. 2006;9:32-40. DOI: 10.1016/S1369-7021(06)71495-X

[27] Bhushan B. Handbook of micro/nano tribology. 2nd ed. Taylor & Francis; 1998. 880 p.

[28] Oliver WC, Pharr GM. Measurement of hardness and elastic modulus by instrumented indentation: Advances in understanding and refinements to methodology. Journal of Materials Research. 2004;19:3-20. DOI: 10.1557/jmr.2004.19.1.3

[29] Dao M, Chollacoop N, Van Vliet KJ, Venkatesh TA, Suresh S. Computational modeling of the forward and reverse problems in instrumented sharp indentation. Acta Materialia. 2001;49:3899-3918. DOI: 10.1016/S1359-6454(01)00295-6

[30] Storåkers B, Larsson P-L. On Brinell and Boussinesq indentation of creeping solids. Journal of the Mechanics and Physics of Solids. 1994;42:307-332. DOI: 10.1016/0022-5096(94)90012-4

[31] Suresh S, Giannakopoulos AE. A new method for estimating residual stresses by instrumented sharp indentation. Acta Materialia. 1998;46:5755-5767. DOI: 10.1016/S1359-6454(98)00226-2


[45] Yaqoob MA, de Rooij MB, Schipper DJ. Design of a vacuum based test rig for measuring micro adhesion and friction force. In: De Wilde WP, Brebbia CA, Hernández S, editors. High performance structures and materials VI. Southampton: WIT Press; 2012. p. 261-274.

[46] Block B. Solid state force transducer and method of making same. Patent number CA1114644 A1. Priority date: 09.02.1976

[47] Uchikawa H, Munekata M, Motohashi H. Kinetofrictional force testing apparatus. Patent number US 5212657. Priority date: 16.01.1990

[48] Hegde SG, Praino AP, Root SJ, Sri-Jayantha M. Tunable feedback transducer for transient friction measurement. Patent number US 5115664. Priority date: 25.06.1990

[49] Börner H, Frindt F, Mollenhauer O, Spaltmann D. Ultrahochvakuum-Tribometer. Patent number DE10390125B4. International patent number WO2003060487A1. Priority date: 18.01.2002

[50] Dellah A, Wild PM, Moore TN, Shalaby M, Jeswiet J. An embedded friction sensor based on a strain-gauged diaphragm. Journal of Manufacturing Science and Engineering. 2002;124:523-527. DOI: 10.1115/1.1461839

[51] Tar ÁS, Cserey GG, Veres J. Sensor device. International patent number WO2013072712 A1. Priority date: 17.11.2011

[52] Bonin WA. Multi-dimensional capacitive transducer. Patent number US 5661235. Priority date: 20.12.1995

[53] Bellaton B, Consiglio R, Woirgard J. Measuring head for nanoindentation instrument and measuring method. Patent application WO2014202551 A1. Priority date: 17.06.2013

[54] Kosinskiy M, Ahmed SI-U, Liu Y, Schaefer JA. A compact reciprocating vacuum microtribometer. Tribology International. 2012;56:81-88. DOI: 10.1016/j.triboint.2012.06.019

[55] Holton C, Ahmadian M. Doppler sensor for the derivation of torsional slip, friction and related parameters. Patent number US7705972 B2. Priority date: 20.06.2006

[56] He G, Cuomo FW. A light intensity function suitable for multimode fiber-optic sensors. Journal of Lightwave Technology. 1991;9:545-551. DOI: 10.1109/50.76670

[57] Mollenhauer O, Scherge M, Karguth A. Device for examining friction conditions. Patent number US 6666066. Priority date: 03.04.1998

[58] Spaltmann D, Boerner H, Frindt F, Mollenhauer O. Tribometer. International patent number WO 03060487. Priority date: 20.01.2003

[59] Roman E, Nevshupa R, de Segovia JL, Konovalov PI, Menshikov IP. Method and apparatus for analysis of gas content in solids and surface coatings. Patent number WO2007ES70216 20071220. Priority date: 23.02.2007


### **Optical Pressure Measurement Principle System**

Limin Gao, Ruiyu Li and Bo Liu

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/64008

#### **Abstract**

Surface pressure measurements are critical to aerodynamic testing in wind tunnels. A new pressure measurement technique, an optical method based on pressure sensitive paint (PSP), is being developed to augment current capabilities. Compared with traditional surface pressure measurement, an important feature of optical pressure measurement is that it yields much more complete surface information with relatively simple procedures and instrumentation, so the optical technique will provide an alternative to conventional pressure measurement methods. After studying this chapter, "Optical Pressure Measurement System", readers are expected to grasp the measurement principle and know how to establish the corresponding measurement system. The measurement principle, the measurement system, the component characteristics and the applications of the optical pressure measurement technique are introduced here based on the authors' research.

**Keywords:** surface pressure, pressure sensitive paint, optical pressure measurement

#### **1. Introduction**

Surface pressure measurements are critical to aerodynamic testing in wind tunnels. In experimental research, pressure has often been measured for its ability to reveal complex flow phenomena such as shock waves and boundary layer separation.

The conventional pressure measurement method is based on pressure taps and electronically scanned pressure transducers. Pipes connect the taps to pressure transducers, which transform the mechanical force of the pressure into a digital or analogue reading. Since the effective area of a single tap is rather limited, large arrays of hundreds or even thousands of pressure taps may be employed in industrial wind tunnel testing. Although these devices provide accurate pressure information, the process of creating the models is time-consuming and expensive, depending on their size and complexity, and the preparation for wind tunnel testing is tedious and laborious. The first drawback of a pressure tap system is thus its cost in time, labour and money. Other drawbacks are that the number of taps is limited by the minimum spacing between them, leading to poor spatial resolution; that the taps introduce aerodynamic interference in wind tunnel testing, which may affect the pressure profiles along the model surfaces; and that tap drilling and pipe arrangement are difficult on surfaces with thin structures.

A new pressure measurement technique has been developed to augment the conventional surface pressure measuring method in aerodynamic testing: an optical technique based on pressure sensitive paint (PSP). Compared with the conventional method, the PSP technique provides a relatively simple and inexpensive way to acquire a full-field pressure image on the aerodynamic model surface, with high spatial resolution and low aerodynamic interference from the PSP coating. Since it combines flow visualization with flow field measurement, the PSP technique can reveal details of the flow structure on model surfaces, such as vortex traces, flow separation and reattachment, and shock waves. Model-making and test preparation are easier, faster and cheaper. A PSP model needs fewer taps, used only for in-situ calibration, and even an aerodynamic force model can be employed to acquire the surface pressure distribution by the PSP technique. If the two-dimensional pressure data are mapped onto the three-dimensional model surface, the aerodynamic force can be obtained by integration over the model surface.

The PSP technique introduces an innovative concept of instrumentation and provides an entire surface pressure map with high spatial resolution. In recent years, surface pressure distribution measurement using PSP has been gaining popularity in the aerospace field. The Central Aero-Hydrodynamic Institute (TsAGI) [1,2] in Moscow, NASA [3–6] and McDonnell Douglas Aerospace (MDA) [7] are the PSP pioneers that published early PSP application results from wind tunnels; the areas studied include the pressure distributions on delta wings, iced wings and cooled films, as well as on well-known combat fighters and civil transports such as the F-16, the F-18 and several Boeing and Airbus transports. A convenient commercial PSP measurement system has since been developed by ISSI (Innovative Scientific Solutions Inc.). The optical pressure measurement technique will thus act as an alternative to conventional pressure measurement methods.

#### **2. Measurement principle**

This section briefly reviews luminescence and quenching as they pertain to pressure sensitive paint. The image-based PSP technique is an optical method that enables measurement of the surface pressure distribution over a model. More detailed discussions of these phenomena can be found in textbooks, for example those by Willard et al. and Tianshu Liu.

Pressure sensitive paint techniques are based on photoluminescence (which includes both fluorescence and phosphorescence) of certain polymer probe molecules, as shown in **Figure 1**. When photons of a particular wavelength impinge on the model surface with its PSP coating, the probe molecules are promoted to an excited state by absorbing photons of the appropriate energy. The excited probe molecules then return to the ground state by emitting photons of a longer wavelength, losing the excitation energy. This process is called photoluminescence. Meanwhile, there exist other ways for excited molecules to lose energy, one of which is the transfer of excitation energy by collision to oxygen molecules that have penetrated the PSP coating, which decreases the emitted intensity; this is known as oxygen quenching. The working process of PSP involves both photoluminescence and quenching, described by the Stern-Volmer process, in which the intensity of the PSP is inversely related to the concentration of oxygen molecules.

Since the concentration of oxygen in air is essentially constant at 21%, and the local oxygen concentration inside the PSP coating is proportional to the local pressure outside it, locally low-pressure regions on the surface of a model emit higher intensity (less quenching) than locally high-pressure regions. Thus, pressure can be measured through the oxygen quenching of luminescence.

Based on the above-mentioned photochemical characteristics of PSP, the working process can be modelled by a simplified form of the Stern-Volmer relation:

$$\frac{I_{ref}}{I} = A(T) + B(T)\,\frac{P}{P_{ref}}$$

**Figure 1.** Principle of optical PSP techniques.


where *I* is the PSP luminescence emitted at the pressure *P* and *Iref* is the reference luminescence at a reference pressure *Pref*. Usually the wind-off condition is selected as the reference condition. The coefficients *A* and *B* capture the dependence of the PSP on the temperature *T*; they are determined from pre-calibration tests carried out in advance and depend on the paint formulation. If the paint is insensitive to thermal changes, the coefficients *A* and *B* are both constant.
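Inverting the Stern-Volmer relation pixel by pixel turns an intensity-ratio image directly into a pressure map. The sketch below assumes a temperature-insensitive paint with constant coefficients; the values of `A` and `B` and the function name are hypothetical, standing in for the paint-specific calibration described in the text.

```python
import numpy as np

# Hypothetical Stern-Volmer coefficients for a temperature-insensitive
# paint (dimensionless); real values come from pre-calibration tests.
A, B = 0.15, 0.85

def pressure_from_intensity(i_on, i_ref, p_ref):
    """Invert I_ref/I = A + B*(P/P_ref) pixel-wise to recover pressure.

    i_on and i_ref may be scalars or whole wind-on / wind-off images.
    """
    ratio = np.asarray(i_ref, dtype=float) / np.asarray(i_on, dtype=float)
    return p_ref * (ratio - A) / B

# Sanity check: at the wind-off condition (I == I_ref) the recovered
# pressure equals the reference pressure whenever A + B == 1.
p = pressure_from_intensity(i_on=1.0, i_ref=1.0, p_ref=101.3)
```

The same function applies unchanged to full 2-D camera images, since the arithmetic broadcasts over arrays.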

The preconditions for application of the PSP technique are strict and idealized, and can never be fully achieved in practice. First, uniform illumination of the measured model surfaces, consistent thickness of the paint coating and an unchanging temperature distribution on the measured surfaces must be maintained. Furthermore, relative motion and structural distortion of the model during operation should be prevented, and ideal emission without spectral variability, filter leakage and the like should be ensured.

The most popular type of PSP technique is the intensity-based method, which obtains the surface pressure distribution from the ratio of the intensity image at the non-operating (wind-off) condition to that at the operating (wind-on) condition, interpolated with the pre-established Stern-Volmer equation. In most cases, readings from a few pressure taps are necessary to calibrate and validate the PSP data, a process denoted as in-situ calibration. The intensity images are captured by scientific grade CCD or CMOS cameras, either colour or greyscale. Although a colour camera has only one-fourth of the effective resolution of a greyscale one with the same nominal resolution, it can provide at least two intensity images at the same time. In either case, the camera output is in grey levels.
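The in-situ calibration step described above can be sketched as a small least-squares fit: the intensity ratios at the tap locations, paired with the tap pressures, determine the Stern-Volmer coefficients for that run. The function name and the synthetic tap values below are illustrative assumptions, not the chapter's data.

```python
import numpy as np

def in_situ_calibrate(i_ratio, p_tap, p_ref):
    """Fit Stern-Volmer coefficients A, B from pressure-tap readings.

    i_ratio -- I_ref/I sampled at the tap locations
    p_tap   -- pressures measured by the taps (same units as p_ref)
    Fits i_ratio = A + B * (p_tap / p_ref) by linear least squares.
    """
    x = np.asarray(p_tap, dtype=float) / p_ref
    y = np.asarray(i_ratio, dtype=float)
    B, A = np.polyfit(x, y, 1)   # polyfit returns [slope, intercept]
    return A, B

# Synthetic taps generated with A = 0.2, B = 0.8 (hypothetical paint)
p_ref = 100.0
p_tap = np.array([60.0, 80.0, 100.0, 120.0])
i_ratio = 0.2 + 0.8 * p_tap / p_ref
A, B = in_situ_calibrate(i_ratio, p_tap, p_ref)
```

Once `A` and `B` are recovered, the same relation is inverted over the whole ratio image to obtain the pressure distribution.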

#### **3. Measurement systems**

The PSP measurement system should first be compatible with a particular test facility, so portable instruments and devices are necessary. Compatibility of a PSP measurement system with several test facilities is preferred; if that is impossible, a flexible PSP measurement system should be established for cost efficiency. The instruments and devices involved should be of high quality and performance, so the expense of establishing a PSP measurement system is considerable. An accurate PSP measurement system includes the paint formulation, painting device, excitation light source, scientific grade camera, calibration devices, image-processing device and software, and other auxiliaries such as filters. A mature PSP measurement system is normally developed from an initially rudimentary one, and experience and expertise play a critical role in establishing it. Primary instruments and devices such as the excitation light source, calibration devices, image-processing software and filters should be customized; in particular, the selection of the excitation light source and filters depends on the spectral performance of the paint formulation. Control devices play an important role in systems with multiple excitation sources and cameras. In addition, real-time image processing is an ideal goal for a PSP measurement system. A sketch of the PSP system configuration used in this investigation is shown in **Figure 2**.

**Figure 2.** Components of optical PSP measurement system.

#### **3.1. PSP paint layer application**


A detailed discussion of PSP paint formulations is outside the focus of this chapter; they are only briefly described here.

A typical pressure-sensitive paint (PSP) is composed of two main components: oxygen-sensitive luminescent probe molecules and an oxygen-permeable binder material. According to the literature, PSPs composed of pyrene or PtTFPP in an oxygen-permeable silicone binder are popular in wind tunnel testing. The former has an absorption peak near 320 to 340 nm within an excitation range from 320 to 390 nm and an emission peak near 440 to 520 nm; the latter is excited near 400 nm and emits a main peak near 650 nm. The pyrene-based paint, developed by ICAS (Institute of Chemistry, Chinese Academy of Sciences), is insensitive to thermal changes when the ambient temperature is below 40°C. The spectral characteristics of the PSP paint used should therefore be acquired by calibration to ensure measurement accuracy.

**Figure 3.** Cleaning surface with acetone.

**Figure 4.** Spraying the paint on the surface.

Fine paint application is the basis of PSP measurement and requires specific expertise and experience. There are several steps. The first is to remove oil, dust and other dirt from the model surface to be painted using alcohol or acetone (as shown in **Figure 3**). Any unwanted surface imperfections should be corrected prior to a final cleaning. Once the model is completely clean, parts that are not to be painted are covered with tape and/or paper; this step ensures that the surface is absolutely clean. The second step is to spray a base coat or primer onto the cleaned model surface with an air brush and to cure the primer, either at elevated ambient temperature for several hours or at room temperature for more than 12 h, and then to finish the cured primer with fine sandpaper (800–1500 grit). Finishing promotes uniform coating thickness and enhances the reflection of the excitation and emission light; the finishing dust attached to the coating is then removed by spraying a mixture of air and acetone. The third step is to spray the upper coating containing the probe molecules, as seen in **Figure 4**: the primer is over-sprayed with a saturated solution of the probe molecule in toluene or acetone, and several over-spray coats are applied until an even colour (deep pink from ISSI or sky blue from ICAS, seen in **Figure 5**, depending on the PSP formulation) is obtained; it is then cured as in the second step. The fourth step is to arrange markers on the cured coating for image alignment, which will be described in the following section.

**Figure 5.** Surface coated with PSP. (a) Paint from ISSI; (b) paint from ICAS.


162 New Trends and Developments in Metrology


The paints are applied using a commercial automotive-type spray air brush (**Figure 6**) and clean, dry, regulated service air (**Figure 7**). A high volume low pressure (HVLP) touch-up sprayer has produced excellent results. Here, nitrogen is suggested as the propellant instead of air to avoid interaction with oxygen. HVLP sprayers have become standard owing to their compliance with environmental regulations. The jet pressure and spray area are regulated according to the required performance of the paint layer. Paint should be sprayed in an area with adequate ventilation, and personal protective equipment must be used depending on the materials being sprayed and the location where the paint is applied.

**Figure 6.** Spraying air brush (W-71G).

**Figure 7.** Oil and water filtering regulator for spraying air brush.

#### **3.2. Excitation light source**

The working process of pressure-sensitive paint is described by the Stern-Volmer relation, as mentioned previously. The purpose of the excitation light source is to excite the photoluminescent probe molecules, so it must provide an adequate number of excitation photons within the appropriate wavelengths. The source should not emit photons whose wavelengths overlap the paint's emission band; otherwise, the signal-to-noise ratio (SNR) of the scientific-grade camera will drop to a very low level. A range of illumination sources has been considered, including lasers, filtered arc lamps and LEDs. Here, a UV lamp and LED arrays are tested and used.

#### *1. UV lamp*


Based on the characteristics of the ICAS paint layer, a Porta-Ray 400 portable ultraviolet (UV) lamp made by Uvitron International (USA) was chosen for the current work. It is a 400 W/200 W metal halide lamp with a 1000 h lifetime; it emits over a broad visible spectrum, and its maximal output intensity reaches 500 mW/cm² within the UVA range. Its output instability is less than 5% in practice, and less than 1% after running for 30 min.

Additionally, a filter box (**Figure 8**) was designed and manufactured to remove visible light and pass the UVA light used to excite the probe molecules; it consists of a metal box, a scattering quartz glass and two UVA-transparent filter glasses. With a pair of latches, the filter box is connected to the UV lamp to form the integrated excitation light source.

**Figure 8.** UV lamp and filter box.

#### *2. LED arrays*

The array consists of 37 individual 3 W LED (light-emitting diode) elements arranged on a 9.0 cm diameter aluminium substrate and is equipped with water cooling (**Figure 9**) to ensure stable illumination. The LED array produces light centred at 365 nm (∼20 nm full width at half maximum), which is the optimal excitation wavelength. The uniformity of the illuminated spot exceeds 92% at an irradiation height of 120 mm. The array is operated by a 300 W switching power supply and can be controlled remotely using a standard TTL pulse or run in continuous irradiation mode.

**Figure 9.** Photo of LED arrays and controller.

#### **3.3. Luminescence detector**

Considering their higher readout noise and nonlinear response at low intensities, industrial CCD or CMOS cameras are generally not fit for accurate PSP measurement, as they produce images with a relatively low signal-to-noise ratio (SNR). In general, a scientific-grade CCD or CMOS camera is preferred because its readout noise is well controlled and its response at low intensity has been carefully linearized, satisfying the requirements of PSP measurement.

In the initial study, a TSI charge-coupled device (CCD) camera (1600×1200 pixels, 10 bit) from a stereo PIV system was chosen as the luminescence photodetector; image sampling is controlled automatically by the PIV operating software, INSIGHT 6.0.

In the current study, PSP luminescent emission data are acquired by a specialized air-cooled scientific CCD camera, the ORCA-R2. A 24–85 mm Nikon zoom lens combined with a 480 nm band-pass filter is mounted in front of the CCD to block the excitation light. This low-noise device allows the rapid collection of image pairs with a minimal time delay between images. The camera employs a CCD chip with an active area of 1344×1024 pixels and an exposure time down to 0.008 ms; its quantum efficiency is greater than 70% over 450–600 nm, which covers the (480 ± 20) nm emission from the PSP, and greater than 50% near 650 nm. The camera provides 16-bit digital resolution and on-board memory for rapid image storage; a high-dynamic-range mode with 8×8 binning readout is also available if necessary.

#### **3.4. Calibration device**


To maintain the accuracy of PSP measurement, a calibration device was designed and manufactured, integrating pressure and temperature regulators of adequate precision as well as a medium-pressure air pump and an integral cooler and heater. A customized calibration vessel, a steel cylinder 100 mm in diameter, 100 mm high and 8 mm thick, with a 20 mm wide shrouding ring atop the plate (**Figure 10**), is covered with a piece of quartz glass or Plexiglas and fitted with a pair of holes serving as inlet and outlet. With the help of a pressure or vacuum pump, the gas pressure in the vessel can be adjusted over 0–200 kPa and monitored with a pressure manometer.

**Figure 10.** Photo of pressure vessel.

#### **3.5. Image processing**

Image processing is a critical step in the whole optical PSP measurement process. The current image processing is divided into three parts, as shown in **Figure 11**: basic data processing, image registration and 3D reconstruction [8].

**Figure 11.** Schematic representation of major steps in PSP data processing.

#### *1. Basic data processing*


Transformation from the ratio image to a pressure map is carried out with the Stern-Volmer Eq. (1), whose constants have been determined through a priori calibration. The luminescence intensities *I* and *Iref* are images captured at wind-on and wind-off conditions respectively. According to the Stern-Volmer equation, and by introducing the ambient pressure *Pref*, an initial pressure map of the model surface can be produced.

Generally, wind-off and wind-on denote the non-operating and operating conditions of the wind tunnel, respectively; wind-off is thus the reference condition in the Stern-Volmer Eq. (1).

It should be mentioned that the number of images captured at wind-off and wind-on depends on the performance of the PSP measurement system and on the environment and operating condition of the wind tunnel. In addition, a number of dark images and background images should be captured before and after operation respectively. Random noise, such as photon shot noise, should be suppressed by averaging the respective images. The algorithms used to suppress noise arising in the image calculations should be chosen carefully, because most of them reduce the effective resolution of the images.
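
To make this step concrete, the sketch below ensemble-averages the dark, wind-off and wind-on image stacks and converts the intensity ratio into a pressure map, assuming the common Stern-Volmer form *Iref*/*I* = *A* + *B*·(*P*/*Pref*); the function name and the constants `a` and `b` are placeholders for values obtained from the a priori calibration.

```python
import numpy as np

def pressure_map(wind_on, wind_off, dark, p_ref, a, b):
    """Convert intensity image stacks to a pressure map (illustrative sketch).

    wind_on, wind_off, dark: image stacks of shape (n_frames, H, W).
    a, b: Stern-Volmer constants from a priori calibration (placeholders).
    """
    # Ensemble-average each stack to suppress random noise (photon shot noise),
    # then subtract the averaged dark frame.
    i_on = wind_on.mean(axis=0) - dark.mean(axis=0)
    i_off = wind_off.mean(axis=0) - dark.mean(axis=0)
    ratio = i_off / i_on  # I_ref / I, pixel by pixel
    # Stern-Volmer: I_ref / I = A + B * (P / P_ref)  ->  solve for P.
    return p_ref * (ratio - a) / b
```

Dark-frame subtraction and ensemble averaging mirror the noise-suppression steps described above; a median filter could follow, at some cost in effective resolution.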

#### *2. Image registration*

In practice, the model surface may move or deform during wind tunnel operation due to aerodynamic loads and vibrations, producing spatial displacement, bending and torsion of the structure. These effects complicate the image ratio calculation, so image registration must be employed: the wind-on image may not align with the wind-off image because of model deformation under aerodynamic load. **Figure 12(a)** displays an unregistered intensity ratio image of a compressor cascade suction surface, which shows a shadow near each pressure port as well as two strip-like shadows along the leading and trailing edges. These shadows can be ascribed to misalignment of the wind-on and wind-off images. Taking the ratio of such non-aligned images leads to considerable error when pressure is computed from the calibration relation; the errors are particularly significant in regions of rapid pressure change, such as near shock waves, boundary layer transition and flow separation.

To match the deformed pressure image coordinates (*x, y*) with the reference images coordinates (*xref*, *yref*), the generalized equation for image registration of image matrices with an *n*×*n* array could be used:

$$\begin{aligned} x\_{ref} &= \sum\_{i,j=1}^{n} a\_{ij} f\_i(x) f\_j(y) \\ y\_{ref} &= \sum\_{i,j=1}^{n} b\_{ij} f\_i(x) f\_j(y) \end{aligned} \tag{2}$$

**Figure 12.** Image of unregistering (a) and registered (b).

where the basis functions *fi*(*x*) and *fj*(*y*) are either orthogonal functions or non-orthogonal power functions *fi*(*x*) = *x^c* or *fj*(*y*) = *y^c*, with c a constant. With the help of marker recognition, and given the image coordinates of the alignment markers on the measured surface, the unknown coefficients *aij* and *bij* can be determined by a least-squares fit that matches the markers in the reference images to those in the pressure images, so that the wind-on image is brought into the same position as the wind-off image. A registered ratio image is then created from the averaged wind-off image and the averaged, registered wind-on image, and median filtering is applied to reduce random fluctuations (noise). **Figure 12(b)** shows an aligned intensity ratio image without any shadows. The pressure map on the test model is finally obtained as a two-dimensional image according to the in-situ calibration curve.
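
As a minimal sketch of this least-squares marker fit, assuming the non-orthogonal power basis *fi*(*x*) = *x^i* with the indices run from 0 to *n* (so constant and linear terms are included), the coefficients of Eq. (2) can be obtained with an ordinary least-squares solve; the function names are illustrative.

```python
import numpy as np

def fit_registration(markers_on, markers_off, n=2):
    """Least-squares fit of Eq. (2) with power basis f_i(x) = x**i.

    markers_on: (M, 2) marker coordinates in the wind-on image.
    markers_off: (M, 2) matching coordinates in the wind-off image.
    Returns coefficient matrices (a_ij, b_ij), each of shape (n+1, n+1).
    """
    x, y = markers_on[:, 0], markers_on[:, 1]
    # Design matrix: one column per basis product x**i * y**j.
    basis = np.stack([x**i * y**j for i in range(n + 1)
                                  for j in range(n + 1)], axis=1)
    a, *_ = np.linalg.lstsq(basis, markers_off[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(basis, markers_off[:, 1], rcond=None)
    return a.reshape(n + 1, n + 1), b.reshape(n + 1, n + 1)

def apply_registration(a, b, x, y):
    """Map wind-on coordinates (x, y) to reference coordinates (x_ref, y_ref)."""
    n = a.shape[0] - 1
    basis = np.stack([x**i * y**j for i in range(n + 1)
                                  for j in range(n + 1)], axis=-1)
    return basis @ a.ravel(), basis @ b.ravel()
```

`apply_registration` can then be used to resample the wind-on image onto the wind-off coordinates with any interpolation routine.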

#### *3. 3D reconstruction*

In fact, it is very common that the camera lens axis is not perpendicular to the surface of interest. A 2D pressure map then fails to provide detailed, useful information, because the normal direction of the captured images is not parallel to that of the surface. In special cases, such as testing in a cascade wind tunnel, the image is strongly distorted and cannot resolve details on the surface of interest. To reproduce the pressure map on the 3D test model, a 3D reconstruction method was developed based on the projective theory of photogrammetry [9–11].

The perspective relationship between the coordinates (*x, y*) on the 2D image and (*X, Y, Z*) on the 3D test model is shown in **Figure 13**. Neglecting lens distortion, the direct linear transform (DLT) between the 3D test model and the 2D image can be expressed as:

$$\begin{aligned} x &= \frac{L\_1 X + L\_2 Y + L\_3 Z + L\_4}{L\_9 X + L\_{10} Y + L\_{11} Z + 1} \\ y &= \frac{L\_5 X + L\_6 Y + L\_7 Z + L\_8}{L\_9 X + L\_{10} Y + L\_{11} Z + 1} \end{aligned} \tag{3}$$

**Figure 13.** Perspective between 2D image and 3D model.


With six or more markers, the transformation coefficients *L1*–*L11* can be solved. The pressure information on the 2D image is then transferred directly onto the 3D model without loss. The pressure map on the 3D test model is displayed in a stereo view, providing more flow details for the engineer. The whole flowchart of the 3D reconstruction is given in **Figure 14**.

**Figure 14.** Flow chart of 3D reconstruction.
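
Eq. (3) becomes linear in *L1*–*L11* after multiplying through by the common denominator, so with six or more non-coplanar markers the coefficients follow from a least-squares solve. A sketch, with illustrative names:

```python
import numpy as np

def solve_dlt(pts3d, pts2d):
    """Solve the 11 DLT coefficients of Eq. (3) from >= 6 marker pairs.

    pts3d: (M, 3) model coordinates (X, Y, Z); pts2d: (M, 2) image (x, y).
    Each marker yields two linear equations after clearing the denominator.
    """
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        rhs.append(x)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        rhs.append(y)
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float),
                            rcond=None)
    return L  # L[0:4] -> L1..L4, L[4:8] -> L5..L8, L[8:11] -> L9..L11

def project(L, pts3d):
    """Map 3D model points to image coordinates with solved coefficients."""
    X, Y, Z = pts3d[:, 0], pts3d[:, 1], pts3d[:, 2]
    w = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    x = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / w
    y = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / w
    return np.stack([x, y], axis=1)
```

With the coefficients solved, `project` maps any point of the 3D surface grid to the image plane, so each surface node can pick up its pressure value from the registered 2D pressure map.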

#### **4. Study of PSP measurement system characteristics**

Once a PSP measurement system has been established, it should be corrected and validated via several PSP calibrations using a mature PSP formulation. This is the basis for practical PSP applications.

Having established the PSP measurement system, validation of its performance and investigation of the relevant system parameters were conducted in the Laboratory of Aerofoil & Cascade Aerodynamics, Northwestern Polytechnical University (NPU); more detailed work has been published in references [12,13]. During the calibration process the surroundings are kept dark to avoid light contamination.

A sample coated with the paint layer, about 50 mm×50 mm, is placed at the bottom of the pressure calibration vessel. In accordance with a transonic cascade wind tunnel, the pressure calibration range is set from 27.4 kPa to 217.4 kPa with an interval of 10 kPa. To optimize the measurement system, the calibration experiment is performed with five CCD apertures (2.8, 4, 5.6, 8 and 11) and two UV lamp powers (200 W and 400 W). To reduce camera noise, a total of 20 sequential images are sampled and ensemble-averaged for each measuring condition.

#### **4.1. Excitation characteristics of light source power**


**Figure 15** shows the original PSP images under typical pressures for the two UV lamp power settings, with the camera aperture at 2.8. Owing to the oxygen-quenching characteristics of the PSP, there is an inverse relationship between the emitted light intensity and the local air pressure; thus, as the pressure increases, the image becomes darker, as shown in **Figure 15**, in agreement with the principle of the PSP technique. Furthermore, it is clearly visible to the naked eye that the image in **Figure 15(b)**, taken with the 400 W lamp, is slightly brighter than that in **Figure 15(a)**, taken with 200 W.

To show the difference between **Figure 15(a)** and **(b)** more clearly, the luminous intensity of the image is quantified in **Figure 16** using the grey value at the centre point of the model surface. The luminous intensity with the 400 W UV lamp, **Figure 16(b)**, is clearly greater than with 200 W in terms of grey scale: when the air pressure in the vessel is 217.4 kPa, the luminous intensity is *I*=0.175 with the 400 W lamp but only *I*=0.0717 with 200 W. Additionally, **Figure 16(b)** shows better linearity with increasing pressure, whereas the sampled points in **Figure 16(a)** are scattered due to low SNR. The main reason is that the excitation energy of the 200 W lamp is relatively weak and cannot provide enough excitation photons to drive the fluorescent molecules into the energy transition.

**Figure 15.** Images using two powers of UV lamp. (a) 200W power; (b) 400W power.

**Figure 17** shows the PSP calibration curves using both the 200 W and 400 W UV lamps at typical apertures of 2.8, 5.6 and 11, respectively; all calibration curves are second-order fits to the sampled data. In **Figure 17**, as the relative pressure *P*/*Pref* increases, the relative luminous intensity *I*/*Iref* decreases under all settings of the PSP measurement system, consistent with the PSP measurement principle. Compared with the calibration results at 200 W, the sampled points at 400 W are well ordered and agree closely with the fitted second-order calibration curve at all CCD apertures. The calibration curves obtained with the 400 W lamp are also steeper and more strictly monotonic.

**Figure 16.** Luminous intensity frame using two powers of UV lamp. (a) 200W power; (b) 400W power.

**Figure 17.** Calibration curve using two powers of UV lamp with typical apertures at 2.8 (a), 5.6 (b) and 11 (c).
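
The second-order calibration fit, and its inversion from intensity ratio back to pressure, can be sketched as follows, assuming the fitted quadratic is monotonic with a single physical root over the calibrated range; function names and sample values are illustrative.

```python
import numpy as np

def fit_calibration(p_ratio, i_ratio):
    """Second-order polynomial fit of the calibration curve I/Iref = f(P/Pref)."""
    return np.poly1d(np.polyfit(p_ratio, i_ratio, deg=2))

def pressure_from_ratio(curve, i_ratio, p_ref):
    """Invert the fitted quadratic to recover pressure from a measured I/Iref."""
    c2, c1, c0 = curve.coefficients
    # Solve c2*r**2 + c1*r + c0 = i_ratio for r = P/Pref.
    roots = np.roots([c2, c1, c0 - i_ratio])
    # Keep the physically meaningful (real, positive) root.
    r = roots[np.isreal(roots) & (roots.real > 0)].real
    return p_ref * r[0]
```

In a wind tunnel run, the same inversion is applied pixel by pixel to the registered ratio image to produce the pressure map.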

#### **4.2. Characteristics of camera aperture**

It can also be seen from **Figure 17** that the CCD aperture setting influences the measured result. In the present study, optimization of the CCD aperture (2.8, 4.0, 5.6, 8 and 11) was performed with the 400 W UV lamp. Original PSP images are shown in **Figure 18** for two typical camera apertures: **Figure 18(a)** at 5.6 and **Figure 18(b)** at 11. Owing to oxygen quenching, **Figure 18** again shows the image growing darker as the pressure increases. Comparing **Figure 15(b)**, **Figure 18(a)** and **Figure 18(b)**, the original PSP image becomes darker as the CCD aperture number increases under the same pressure and excitation light.

**Figure 18.** Images with typical CCD apertures at 5.6 (a) and 11 (b).


**Figure 19.** Luminous intensity frame with typical CCD aperture at 4 (a) and 8 (b).

To show the influence of the CCD aperture more clearly, the luminous intensity of the image is quantified in **Figure 19** using the grey value at the centre point of the model surface. The sampled data at the different CCD apertures show high linearity with the 400 W UV lamp. However, the luminous intensity in **Figure 19(b)** is clearly greater, and changes more sharply, than at apertures 2.8 (see **Figure 16(b)**) and 4 (see **Figure 19(a)**), which corresponds to **Figure 18**. The main reason is that the CCD lens captures more luminescence at that aperture setting, which raises the grey scale of the PSP image.

All calibration curves at the five CCD apertures are shown in **Figure 20**. The calibration curves are very close when the CCD aperture is set to 4 or 2.8; the curve becomes steeper as the aperture setting increases and is nearly linear at aperture 11. Clearly, owing to the improved SNR (signal-to-noise ratio) of the CCD, the pressure calibration curve is most sensitive to pressure change, and remains monotonic, when the CCD aperture is 11, especially at higher pressures.

**Figure 20.** Calibration curve in different apertures.

### **5. Examples of optical PSP**

#### **5.1. Global pressure distribution on suction surface of a compressor cascade**

#### *5.1.1. Test rig*


As reported in references [13–15], two sets of PSP experiments were performed in a transonic cascade wind tunnel: one based on the self-established PSP system and the other on the commercial PSP system from ISSI. The global pressure distributions on two sets of compressor blades were measured under several inflow conditions, and the PSP results were finally compared with those from the traditional measurement technique.

**Figure 21.** Schematic of the cascade.

PSP experiments were carried out in the Science and Technology Key Lab's transonic cascade wind tunnel at Northwestern Polytechnical University (NPU). The experimental cascade is composed of several blades and two end-walls (see **Figure 21**). In the present work, two PSP measurement systems are used: one is our own established PSP system, and the other is the commercial PSP system produced by ISSI. Correspondingly, two cascade blades are provided as the measured models: blade 1 is coated with PSP from ICAS (Institute of Chemistry, Chinese Academy of Sciences) in **Figure 22**, and blade 2 is coated with PSP from ISSI in **Figure 23**.

**Figure 22.** Blade 1 coated with ICAS PSP.

**Figure 23.** Blade 2 coated with ISSI PSP.

#### *5.1.2. Result of blade 1*


Based on the established measurement system, the pressure distribution on blade 1 (chord 56.7 mm, bend angle 52.6°, height 100 mm) was measured under two conditions, Ma=0.4 and Ma=0.5, with the same incidence angle (i=−10°).

**Figure 24.** Static pressure distribution on suction surface at Mach 0.4 (a) and Mach 0.5 (b).

**Figure 24** shows the pressure distribution on the suction surface at Mach 0.4 and Mach 0.5, respectively. Both pressure maps show a high-pressure area near the leading edge: under the action of the large negative attack angle and the large pitch between neighbouring blades, the inflow impinges on the blade leading edge and decelerates quickly, so the pressure increases. The airflow then accelerates, and the pressure decreases because of the large curvature of the blade surface. At 40% chord the airflow reaches its maximum speed and a pressure valley appears in the pressure map. Owing to the divergence of the cascade passage, the pressure then rises again. In addition, due to the mutual interference of the boundary layers of the end-wall and the blade surface, the pressure at the corner between the end-wall and the blade surface is lower than elsewhere.

Although **Figure 24** shows results at two different Mach numbers, the pressure distribution trend on the suction surface is the same, and the surface pressure increases with rising inlet Mach number.

To test the measurement precision of the PSP technique, the pressure averaged around each pressure hole is compared with the pressure measured by a conventional pressure tap; the results are shown in **Figure 25**. In the plot, the X-coordinate shows the relative chord and the Y-coordinate the dimensionless pressure.

From **Figure 25**, the pressures measured by the PSP technique and by the pressure taps follow the same trend in the two conditions, and the positions of the lowest pressure on the blade suction surface nearly coincide. The pressure error at the same position is less than 4.5%, which basically meets engineering application requirements.

**Figure 25.** Pressure on the 50% span at Mach 0.4 (a) and Mach 0.5 (b) using PSP and pressure scanner.
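
The comparison against the taps can be sketched by interpolating the PSP pressures at the tap chord positions and reporting the worst relative deviation; the helper below is hypothetical and assumes PSP pressures have already been extracted along the 50% span line.

```python
import numpy as np

def max_tap_deviation(chord_psp, p_psp, chord_tap, p_tap):
    """Maximum relative deviation (%) of PSP pressures from tap pressures.

    chord_psp must be increasing; PSP data are linearly interpolated at the
    tap chord positions before comparison (hypothetical helper).
    """
    p_at_taps = np.interp(chord_tap, chord_psp, p_psp)
    return 100.0 * np.max(np.abs(p_at_taps - p_tap) / np.abs(p_tap))
```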

#### *5.1.3. Result of blade 2*

Based on the ISSI PSP system, the pressure distribution on blade 2 (chord 69.946 mm, pitch 60.7 mm, height 100 mm) was measured under three conditions, Ma=0.4, 0.5 and 0.6, with the same incidence angle (i=0°).

**Figure 26** shows the pressure distribution on the suction surface at Mach 0.4, 0.5 and 0.6, respectively. The pressure distribution has almost the same trend in the different conditions, while the pressure at a given position decreases as the inlet Mach number increases. Similar to the blade 1 results, the pressure at the corner of blade 2 between the end-wall and the blade surface is lower than elsewhere owing to boundary layer interference. In addition, the pressure increases along the blade chord due to the divergence of the cascade passage.

**Figure 26.** Static pressure distribution on suction surface at Mach 0.4 (a), 0.5 (b) and 0.6 (c).

To test the measurement precision of the PSP technique, the pressure averaged around each pressure hole is again compared with the pressure measured by a conventional pressure tap; the results are shown in **Figure 27**. In the plot, the X-coordinate shows the relative chord and the Y-coordinate the dimensionless pressure.

From **Figure 27**, the PSP measurement results follow the same pressure distribution trend as the pressure scanner results under all conditions, and the positions of the lowest pressure on the blade suction surface nearly coincide. The pressure error at the same position is less than 4.5%, which basically meets engineering application requirements.

**Figure 27.** Pressure on the 50% span at Mach 0.4 (a), 0.5 (b) and 0.6 (c) using PSP and pressure scanner.

#### **6. Conclusion**


The pressure-sensitive paint measurement technique is attracting extensive attention for its unique advantages in surface pressure measurement: it does not intrude on the measured surface, it applies image processing to acquire the global pressure distribution, and it yields results that traditional methods could never obtain. In this chapter, both the theory and the application of the PSP technique were introduced, including the measurement principle, the measurement system, component characteristics and applications to internal flow fields based on the authors' research. The results are proposed for engineering application.

#### **Acknowledgements**

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 51476132. Dr. Zhou Qiang is also acknowledged for his valuable suggestions.

#### **Author details**

Limin Gao\* , Ruiyu Li and Bo Liu

\*Address all correspondence to: gaolm@nwpu.edu.cn

Northwestern Polytechnical University, Xi'an, PR China

#### **References**


[1] Ardasheva M. M., Nevskii L. B., Pervushin G. E. Measurement of pressure distribution by means of indicator coating. Journal of Applied Mechanics and Technical Physics. 1985;26(4):469–474. DOI: 10.1007/BF01101626.

[2] Radchenko V. N. Application of the luminescence in aerodynamic researches [thesis]. Zhukovsky: Moscow Physical-Technical Institute; 1985.

[3] Lepicovsky J., Bencic T. Use of pressure-sensitive paint for diagnostics in turbomachinery flows with shocks. Experiments in Fluids. 2002;33(4):531–538. DOI: 10.1007/s00348-002-0476-x.

[4] Watkins A. N., Goad W. K., Obara C. J., Danny R. S., Richard L. C., Melissa B. C. Flow visualization at cryogenic conditions using a modified pressure-sensitive paint approach. In: 43rd AIAA Aerospace Sciences Meeting and Exhibit; 10–13 January; Reno, Nevada. AIAA; 2005. 456.1–456.13.

[5] Erickson G. E., Gonzalez H. A. Pressure-sensitive paint investigation of double delta wing vortex flow manipulation. In: 43rd AIAA Aerospace Sciences Meeting and Exhibit; 10–13 January; Reno, Nevada. AIAA; 2005. 1059.1–1059.59.

[6] Gouterman M. Oxygen quenching of luminescence of pressure-sensitive paint for wind tunnel research. Journal of Chemical Education. 1997;74(6):697–704. DOI: 10.1021/ed074p697.


### **Self-Calibration of Two-Dimensional Precision Metrology Systems**

Chuxiong Hu and Yu Zhu

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/62761

#### **Abstract**

In modern industry, there are many high-precision stages, such as reticle stages and wafer stages in VLSI lithography tools, metrology stages in coordinate measurement machines, and motion stages in CNC machine tools. These stages need high-precision measurement/metrology systems for monitoring their XY movement. As the metrology systems are quite accurate, we often cannot find a standard tool with better accuracy to implement a traditional calibration process for systematic measurement error (i.e., stage error) determination and measurement accuracy compensation. Self-calibration technology has been developed to meet this challenge and to solve the calibration problem. In this chapter, we study the self-calibration of two-dimensional precision metrology systems and present a holistic self-calibration strategy. This strategy utilizes three measurement views of an artifact plate, whose mark positions are not precisely known, on the un-calibrated two-dimensional metrology stage, and constructs relevant symmetry, transitivity, and redundancy of the stage error of the metrology stage. The misalignment errors of all measurement views, especially including those of the translation view, are totally determined by detailed mathematical manipulations. Then, a redundant equation group is synthesized, and a least-square-based robust estimation law is employed to calculate the stage error even under the existence of random measurement noise. Furthermore, as the determination of the misalignment error components of the measurement views is complicated but important in both previous methods and the proposed one, this chapter also analyzes the necessity of this costly computation. The proposed approach is investigated by simulation, and the results prove that the proposed determination scheme can calculate the stage error exactly in the absence of random measurement noise. When various random measurement noises exist, the calibration accuracy of the proposed strategy is also investigated, and the results illustrate that the standard deviations of the calibration error are consistently at the same level as those of the random measurement noises. All these results verify that the proposed scheme can determine the stage error accurately even under the existence of random measurement noise. The practical procedure for performing a standard self-calibration is also introduced to facilitate actual implementation by engineers.

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**Keywords:** Self-calibration, Metrology system, Stage error, Measurement noise, VLSI lithography

#### **1. Introduction**

Modern precision applications such as VLSI lithography tools, CNC machine tools, and coordinate measurement machines usually need multi-dimensional stages that are capable of manufacture accuracy at micro/nanometer levels [1, 2]. As automatic servo systems, these stages have metrology/measurement systems for position information and motion feedback [3–5]. Because of the unavoidable surface non-flatness, non-orthogonality, scale difference, etc., in the metrology systems, there exists a systematic measurement error (i.e., stage error), which is the difference between the actual metrology system and the ideal metrology system [6]. The common way to determine the stage error map in Cartesian space needs traditional calibration technology [7, 8] with a rather standard measurement plate or scale for a direct "high-precision calibrates low-precision" procedure. If the stage error can be determined, the measurement accuracy of the stage's metrology system can be compensated, which will improve the positioning accuracy and repeatability of practical motion systems [9, 10]. However, in practical ultra-precision applications such as nano-lithography, engineers usually cannot find a standard measurement tool with better accuracy than the stage's metrology system to perform a traditional calibration process. Therefore, the idea of self-calibration was developed. The basic principle is to utilize an artifact plate with mark positions not precisely known and to place the plate on the metrology stage in different locations or orientations to construct different measurement views [11]. **Figure 1** shows one example of a measurement view, in which the pedestal is the un-calibrated metrology system and the grid plate is the artifact plate.

The key point of self-calibration is that the positions of the marks on the plate are fixed whatever the measurement view is, so researchers can utilize the measurement information of the marks in different views to construct equations about the stage error for its final determination. Following this idea, self-calibration has attracted the attention of researchers and engineers and has been applied to certain special but important applications [12, 13], such as nano-positioners [14], profiling stages [15], scanning probe microscopes [16], and coordinate measuring machines [17]. For example, Takac [18] addressed the self-calibration problem of one-dimensional stages with the utilization of an un-calibrated artifact plate to obtain congruency via transitivity and provided a calibration that makes a set of tool graduation markers with identical spacing. Raugh proposed a rigorous mathematical method for two-dimensional self-calibration under the assumption that the stage error map in Cartesian space can be expressed as a finite polynomial [19]. However, the scheme is complicated, with extensive computation, and may even be unstable under the existence of random measurement noise. To reduce the computation of Raugh's algorithm for two-dimensional self-calibration, Takac et al. [9] developed a transitive algorithm based on direct point-to-point comparison, which is intuitive and simple but quite sensitive to random measurement noise, because the handling of the rotation and translation of each measurement view is oversimplified.

**Figure 1.** An artifact plate on a two-dimensional metrology stage.


In [20], Ye proposed a discrete Fourier transform-based algorithm for two-dimensional self-calibration, which is numerically robust. He also validated that the algorithm provides an exact self-calibration on the discrete sample sites when there is no random measurement noise, and introduces a calibration error of only about the same size as the random measurement noise when such noise exists. The development of this algorithm was inspired by the achievements of Takac and Raugh, and it is considered a standardized process by related researchers and engineers [16]. However, the computation for the misalignment error components of the translation view is complex, because the information of the rotation view is not sufficiently utilized. Moreover, the discrete Fourier transform is widely used in the algorithm, which further increases the difficulty of understanding and implementation for researchers and engineers [21].

In this chapter, considering the complexity of existing algorithms, the self-calibration of X–Y precision metrology stages is studied with an orientation toward simplicity and effectiveness. The method sufficiently utilizes the three properties of stage error detailed in [20] but abandons the Fourier transform approach in order to provide an easily understood self-calibration algorithm. Specifically, with three measurement views of the plate on the un-calibrated stage, the measurement information of the plate's mark positions in all views is utilized to construct symmetry, transitivity, and redundancy of the stage error. Consequently, a least-square-based self-calibration algorithm is proposed to determine the stage error accurately even under the existence of random measurement noise. The computation process for the components of the misalignment error of each measurement view is also provided in detail, which can be utilized as a basis for the synthesis of other self-calibration strategies. Furthermore, as the determination of the misalignment error components of the measurement views is complicated but important in both previous methods and the proposed one, this chapter also analyzes the necessity of this costly computation. The proposed self-calibration algorithm is investigated by computer simulation, and the results validate that the stage error can be determined quite exactly when there is no random measurement noise. When random measurement noise exists, the calibration accuracy can still be guaranteed, and the calibration error is at the same level as the random measurement noise, which illustrates that the proposed scheme possesses a certain performance robustness. The proposed scheme thus develops a well-understood solution to the two-dimensional self-calibration problem, which can facilitate practical implementation.

#### **2. Problem formulation of two-dimensional self-calibration**

#### **2.1. Stage error**

In this section, the expression of the stage error is presented. In a two-dimensional metrology/measurement system of the stage, define (x, y) as the true location of a sample site in the Cartesian grid, and **G**(x, y) as the stage error at (x, y), which is the difference between the actual metrology/measurement system and the ideal one. The field to be calibrated is assumed to be L × L, and the origin of the X–Y axes is at the center of this area. Define

$$\mathbf{G}(\mathbf{x}, \mathbf{y}) = G\_x(\mathbf{x}, \mathbf{y})\mathbf{e}\_x + G\_y(\mathbf{x}, \mathbf{y})\mathbf{e}\_y \tag{1}$$

where **ex** and **ey** are the unit vectors of the two-dimensional stage axes; *G***x**(x, y) and *G***y**(x, y) are functions on the continuous X–Y field L × L. The sample sites are set as an N × N square array covering the field L × L, with N odd. Then, the sample site locations in the Cartesian grid can be expressed as follows:

$$\mathbf{x}\_m = m\Delta \text{ , } \mathbf{y}\_n = n\Delta \tag{2}$$

where $m = -\frac{N-1}{2}, -\frac{N-3}{2}, \cdots, \frac{N-1}{2}$, $n = -\frac{N-1}{2}, -\frac{N-3}{2}, \cdots, \frac{N-1}{2}$, and *Δ* = *L*/(*N* − 1), which is called the sample site interval. For notation simplicity, we denote as follows:

$$\begin{aligned} \mathbf{G}\_{m,n} &\equiv G\_{x,m,n}\mathbf{e}\_x + G\_{y,m,n}\mathbf{e}\_y \\ G\_{x,m,n} &\equiv G\_x(\mathbf{x}\_m, \mathbf{y}\_n) \\ G\_{y,m,n} &\equiv G\_y(\mathbf{x}\_m, \mathbf{y}\_n) \end{aligned} \tag{3}$$

**Figure 2(1)** shows an example of the stage error **G***m*,*n*, which leads to the distortion of the actual measurement system. The objective of the self-calibration in this chapter is to determine **G***m*,*n*, which can then be directly utilized to compensate the measurement error. According to the presentation of [20], **G***m*,*n* has three properties that essentially define the ideal calibration coordinates.

**• G***m,n* has no translation, i.e.,


$$\sum\_{m,n} G\_{x,m,n} = \sum\_{m,n} G\_{y,m,n} = 0 \tag{4}$$

**• G***m,n* has no rotation, i.e.,

$$\sum\_{m,n} (G\_{y,m,n}\mathbf{x}\_m - G\_{x,m,n}\mathbf{y}\_n) = 0 \tag{5}$$

**• G***m,n* has no magnification, i.e.,

$$\sum\_{m,n} (G\_{x,m,n}\mathbf{x}\_m + G\_{y,m,n}\mathbf{y}\_n) = \mathbf{0} \tag{6}$$
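Properties (4)–(6) can be enforced on an arbitrary error field by subtracting its mean, its best-fit rotation, and its best-fit magnification in turn; the three basis fields are mutually orthogonal under the grid sum, so the subtractions do not interfere. The following is a minimal numerical sketch with synthetic data (not from the chapter; all values are illustrative):

```python
import numpy as np

# Hypothetical sketch: enforce properties (4)-(6) on an arbitrary error field.
N, L = 9, 8.0                       # N odd; field is L x L
delta = L / (N - 1)                 # sample site interval
m = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
x, y = np.meshgrid(m * delta, m * delta, indexing="ij")  # x[m,n], y[m,n]

rng = np.random.default_rng(0)
Gx, Gy = rng.normal(size=(N, N)), rng.normal(size=(N, N))

# Remove translation (4): subtract the mean of each component.
Gx -= Gx.mean(); Gy -= Gy.mean()
# Remove rotation (5): subtract theta*(-y, x), theta from a least-squares fit.
theta = (Gy * x - Gx * y).sum() / (x**2 + y**2).sum()
Gx += theta * y; Gy -= theta * x
# Remove magnification (6): subtract s*(x, y) analogously.
s = (Gx * x + Gy * y).sum() / (x**2 + y**2).sum()
Gx -= s * x; Gy -= s * y

# All three property sums are now zero to machine precision.
print(abs(Gx.sum()), abs((Gy * x - Gx * y).sum()), abs((Gx * x + Gy * y).sum()))
```

These subtractions mirror why (4)–(6) are conventions rather than restrictions: any rigid translation, rotation, or uniform magnification can be absorbed into the choice of calibration coordinates.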

**Figure 2.** (1) An example of **G***<sup>m</sup>*,*<sup>n</sup>* ; (2) an example of **A***<sup>m</sup>*,*<sup>n</sup>* .

Meanwhile, two dimensionless parameters *O* and *R* are defined as the X–Y non-orthogonality and the X–Y scale difference of **G***m*,*n*, respectively. Consequently, **G***m*,*n* can be described as follows [20]:

$$\begin{aligned} G\_{x,m,n} &= O\mathbf{y}\_n + R\mathbf{x}\_m + F\_{x,m,n} \\ G\_{y,m,n} &= O\mathbf{x}\_m - R\mathbf{y}\_n + F\_{y,m,n} \end{aligned} \tag{7}$$

Therefore, the determination of **G***m*,*n* can be completed by calculating the first-order components *O* and *R* and the residual error **F***m*,*n*. It must be noted that the origin of the X–Y axes is the center of the sample array, so the following properties of **F***m*,*n* can be obtained [20]:

$$\begin{aligned} \sum\_{m,n} F\_{x,m,n} &= \sum\_{m,n} F\_{x,m,n} \mathbf{x}\_m = \sum\_{m,n} F\_{x,m,n} \mathbf{y}\_n = \mathbf{0} \\ \sum\_{m,n} F\_{y,m,n} &= \sum\_{m,n} F\_{y,m,n} \mathbf{x}\_m = \sum\_{m,n} F\_{y,m,n} \mathbf{y}\_n = \mathbf{0} \end{aligned} \tag{8}$$
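Because the residual **F***m*,*n* has zero mean and zero first moments by (8), *O* and *R* can be recovered from **G***m*,*n* by moment sums: summing *G*x·y + *G*y·x isolates *O*, and *G*x·x − *G*y·y isolates *R*. A hypothetical sketch with synthetic data (variable names are illustrative, not from the chapter):

```python
import numpy as np

# Hypothetical sketch of decomposition (7): G = (O*y + R*x + Fx, O*x - R*y + Fy),
# with the residual F obeying the zero-moment properties (8).
N, L = 9, 8.0
delta = L / (N - 1)
m = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
x, y = np.meshgrid(m * delta, m * delta, indexing="ij")

# Build a synthetic stage error with known O and R plus a residual field.
O_true, R_true = 3e-4, -2e-4
rng = np.random.default_rng(1)
Fx = rng.normal(scale=1e-3, size=(N, N))
Fy = rng.normal(scale=1e-3, size=(N, N))
for F in (Fx, Fy):                  # enforce (8): zero mean and first moments
    F -= F.mean()
    F -= (F * x).sum() / (x**2).sum() * x
    F -= (F * y).sum() / (y**2).sum() * y
Gx = O_true * y + R_true * x + Fx
Gy = O_true * x - R_true * y + Fy

# Recover O and R from moment sums; (8) makes F drop out of these sums.
S = (x**2 + y**2).sum()
O_est = ((Gx * y).sum() + (Gy * x).sum()) / S
R_est = ((Gx * x).sum() - (Gy * y).sum()) / S
print(O_est, R_est)
```

The recovery is exact here because Σxy = 0 on the centered square grid, so the *O* and *R* basis fields are orthogonal to each other and to **F**.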

#### **2.2. Artifact error**

As stated previously, an artifact plate is needed as a critical device for self-calibration. The artifact plate has a square N × N mark array of the same size as the stage sample site array, and the origin of the plate X–Y axes is located at the center of the mark array. The plate axes are fixed on the plate and move with the plate during its motion. The nominal mark locations in the plate coordinate system are the same as the sample site locations in the stage coordinate system. The artifact plate cannot be perfect, and each actual mark at (*m,n*) deviates from its nominal location by **A***m*,*n*. Herein, **A***m*,*n* is named the artifact error, with one example illustrated in **Figure 2(2)**, and

$$\begin{aligned} \mathbf{A}\_{m,n} &\equiv A\_{x,m,n}\mathbf{e}\_{px} + A\_{y,m,n}\mathbf{e}\_{py} \\ A\_{x,m,n} &\equiv A\_x(\mathbf{x}\_m, \mathbf{y}\_n) \\ A\_{y,m,n} &\equiv A\_y(\mathbf{x}\_m, \mathbf{y}\_n) \end{aligned} \tag{9}$$

where $m = -\frac{N-1}{2}, -\frac{N-3}{2}, \cdots, \frac{N-1}{2}$, $n = -\frac{N-1}{2}, -\frac{N-3}{2}, \cdots, \frac{N-1}{2}$, and **e***px* and **e***py* are the unit vectors of the plate axes. It should be noted that each mark on the artifact plate has an identification number (*m,n*). The mark's identification number is fixed to the mark and does not change when the plate moves on the stage. The mark number is utilized to identify each physical mark of the plate when different measurement views are conducted and compared. Similar to the stage error **G***m*,*n*, the artifact error **A***m*,*n* also has two properties, as follows [20]:

**• A**m,n has no translation, i.e.,

$$\sum\_{m,n} A\_{x,m,n} = \sum\_{m,n} A\_{y,m,n} = 0 \tag{10}$$

**• A**m,n has no rotation, i.e.,

$$\sum\_{m,n} (A\_{y,m,n}\mathbf{x}\_m - A\_{x,m,n}\mathbf{y}\_n) = 0 \tag{11}$$

#### **3. Fundamental of X–Y self-calibration**

The self-calibration requires performing independent measurement views with different placements of the artifact plate on the un-calibrated metrology stage, as illustrated in **Figure 3**, where the grid is the artifact plate and the colored plane outside the grid is the metrology stage. The first view is the original orientation of the artifact plate, referred to as View 0, in which the X-axis and Y-axis of the plate are aligned with those of the stage as closely as possible. In View 1, the artifact plate is rotated 90° around the origin from View 0 on the metrology stage. In View 2, the artifact plate is translated from the original View 0 by one sample site interval *Δ* along the X-axis. In the following, we measure the marks of the rigid artifact plate when the plate is placed on the metrology stage in these three measurement views. From these measurement data, the stage error map can be determined by comparing the different measurement views, and the misalignment errors can be directly figured out with detailed mathematical manipulations.

**Figure 3.** Three measurement views for X–Y self-calibration.
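On the mark grid, the three view placements amount to simple index maps: mark (*m*, *n*) stays at site (*m*, *n*) in View 0, moves to (−*n*, *m*) after the 90° rotation of View 1, and to (*m* + 1, *n*) after the one-interval translation of View 2. A small illustrative sketch (function names are the author's own, not from the chapter):

```python
# Hypothetical sketch: where mark (m, n) lands on the stage in each view.
def view0(m, n):
    return (m, n)        # plate axes aligned with stage axes

def view1(m, n):
    return (-n, m)       # plate rotated 90 deg CCW about the origin

def view2(m, n):
    return (m + 1, n)    # plate translated one sample interval along X

# e.g. mark (2, 1): View 1 puts it at stage site (-1, 2), View 2 at (3, 1).
print(view1(2, 1), view2(2, 1))
```

The View 1 map is the source of the index pattern G*x*,−*n*,*m* that appears in Eq. (21) below.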

#### **3.1. View 0**


In the first view, the artifact plate is placed on the un-calibrated metrology stage with the corresponding axes aligned roughly, as shown in View 0 of **Figure 3**. The deviation of the measured position of mark (*m,n*) from its nominal position in the stage coordinate system is denoted as **v**0,*m*,*n*, where the subscript 0 represents View 0. Define

$$\mathbf{v}\_{0,m,n} = v\_{0,x,m,n}\mathbf{e}\_x + v\_{0,y,m,n}\mathbf{e}\_y \tag{12}$$

It must be noted that the alignment between the stage axes and plate axes cannot be perfect, i.e., there inevitably exists a small offset between their origins and a small rotation between their orientations. Therefore,

$$\begin{aligned} v\_{0,x,m,n} &= G\_{x,m,n} + A\_{x,m,n} + E\_{0,x,m,n} + r\_{0,x,m,n} \\ v\_{0,y,m,n} &= G\_{y,m,n} + A\_{y,m,n} + E\_{0,y,m,n} + r\_{0,y,m,n} \end{aligned} \tag{13}$$

where the misalignment error **E**0,m,n is denoted as follows:

$$\begin{aligned} E\_{0,x,m,n} &= -\theta\_0 \mathbf{y}\_n + t\_{0x} \\ E\_{0,y,m,n} &= \theta\_0 \mathbf{x}\_m + t\_{0y} \end{aligned} \tag{14}$$

and θ0 and **t**0 are the rotation and offset of the misalignment; r0,*x*,*m*,*n* and r0,*y*,*m*,*n* are the random measurement noises. To attenuate the effects of r0,*x*,*m*,*n* and r0,*y*,*m*,*n*, repeated measurements are conducted with exactly the same mark positions of the artifact plate. As a result, V0,*x*,*m*,*n* and V0,*y*,*m*,*n* are the mean values of a number of repeated measurements of v0,*x*,*m*,*n* and v0,*y*,*m*,*n*, and the random measurement noise is consequently assumed to be completely attenuated, i.e.,

$$\begin{aligned} V\_{0, \mathbf{x}, m, n} &= G\_{\mathbf{x}, m, n} + A\_{\mathbf{x}, m, n} + E\_{0, \mathbf{x}, m, n} \\ V\_{0, \mathbf{y}, m, n} &= G\_{\mathbf{y}, m, n} + A\_{\mathbf{y}, m, n} + E\_{0, \mathbf{y}, m, n} \end{aligned} \tag{15}$$

Substituting Eqs. (7) and (14) into Eq. (15) leads to the following:

$$\begin{aligned} V\_{0,x,m,n} &= F\_{x,m,n} + O\mathbf{y}\_n + R\mathbf{x}\_m + A\_{x,m,n} - \theta\_0 \mathbf{y}\_n + t\_{0x} \\ V\_{0,y,m,n} &= F\_{y,m,n} + O\mathbf{x}\_m - R\mathbf{y}\_n + A\_{y,m,n} + \theta\_0 \mathbf{x}\_m + t\_{0y} \end{aligned} \tag{16}$$

Noting Eqs. (8), (10), (11), and (16), *t*0*<sup>x</sup>*, *t*0*<sup>y</sup>*, and *θ*0 of the misalignment error **E**0,m,n are determined [20], i.e.,

$$t\_{0x} = \frac{\sum\_{m,n} V\_{0,x,m,n}}{N^2}, \; t\_{0y} = \frac{\sum\_{m,n} V\_{0,y,m,n}}{N^2}, \; \theta\_0 = \frac{\sum\_{m,n} (V\_{0,y,m,n} \mathbf{x}\_m - V\_{0,x,m,n} \mathbf{y}\_n)}{\sum\_{m,n} (\mathbf{x}\_m^2 + \mathbf{y}\_n^2)}\tag{17}$$

After computing **t**0 and θ0, the misalignment error **E**0,m,n is determined. Then, define

$$\begin{aligned} U\_{0,x,m,n} &= V\_{0,x,m,n} - t\_{0x} + \theta\_0 \mathbf{y}\_n \\ U\_{0,y,m,n} &= V\_{0,y,m,n} - t\_{0y} - \theta\_0 \mathbf{x}\_m \end{aligned} \tag{18}$$

It must be pointed out that the transformation from **V**0,*m*,*n* to **U**0,*m*,*n* is commonly referred to as multiple-point alignment. Combining Eqs. (16) and (18) leads to the following:

$$\begin{aligned} F\_{\mathbf{x},m,n} + A\_{\mathbf{x},m,n} &= U\_{\mathbf{0},\mathbf{x},m,n} - O\mathbf{y}\_n - R\mathbf{x}\_m \\ F\_{\mathbf{y},m,n} + A\_{\mathbf{y},m,n} &= U\_{\mathbf{0},\mathbf{y},m,n} - O\mathbf{x}\_m + R\mathbf{y}\_n \end{aligned} \tag{19}$$
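The View 0 chain (16)–(19) can be checked numerically end to end. Below is a hypothetical sketch (synthetic fields; the helper `centered` and all numeric values are the author's illustration, not the chapter's data) that simulates the mean measurements of Eq. (16), recovers the misalignment with Eq. (17), and applies the multiple-point alignment of Eq. (18):

```python
import numpy as np

# Hypothetical end-to-end sketch of View 0: simulate measurements (16),
# estimate the misalignment with (17), align with (18), and check (19).
N, L = 9, 8.0
delta = L / (N - 1)
m = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
x, y = np.meshgrid(m * delta, m * delta, indexing="ij")

rng = np.random.default_rng(2)
def centered(shape):
    # Synthetic field with zero mean and zero first moments, like (8)/(10)/(11).
    F = rng.normal(scale=1e-3, size=shape)
    F -= F.mean()
    F -= (F * x).sum() / (x**2).sum() * x
    F -= (F * y).sum() / (y**2).sum() * y
    return F

Fx, Fy, Ax, Ay = (centered((N, N)) for _ in range(4))
O, R = 3e-4, -2e-4
t0x, t0y, th0 = 0.05, -0.02, 1e-3     # unknown misalignment, Eq. (14)

# Mean measurements, Eq. (16) (noise assumed averaged away).
V0x = Fx + O * y + R * x + Ax - th0 * y + t0x
V0y = Fy + O * x - R * y + Ay + th0 * x + t0y

# Eq. (17): the zero-moment properties fix the misalignment uniquely.
t0x_e = V0x.sum() / N**2
t0y_e = V0y.sum() / N**2
th0_e = (V0y * x - V0x * y).sum() / (x**2 + y**2).sum()

# Eq. (18): multiple-point alignment; Eq. (19): U minus first-order terms
# recovers F + A exactly.
U0x = V0x - t0x_e + th0_e * y
U0y = V0y - t0y_e - th0_e * x
print(np.allclose(U0x - O * y - R * x, Fx + Ax))
```

Note that the *O* and *R* contributions cancel in the θ0 numerator of Eq. (17) only because the grid is symmetric (Σx² = Σy², Σxy = 0), which is why the origin must sit at the center of the sample array.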

#### **3.2. View 1**

In the second measurement view, the artifact plate is rotated 90° around the origin from View 0 as shown in View 1 of **Figure 3**. In this view, the plate X-axis is aligned along the stage Y-direction in the same direction, and the plate Y-axis is aligned along the stage X-direction but in the opposite direction. The deviation of mark (*m,n*) from the measurement value to its nominal position in the stage coordinate system is denoted as **V**1,m,n, where subscript 1 represents View 1. It must be noted that although located at a different position relative to the stage, the mark (*m,n*) in View 1 is the same physical mark (*m,n*) as in View 0. Then,

$$\mathbf{v}\_{1,m,n} = \nu\_{1,x,m,n}\mathbf{e}\_x + \nu\_{1,y,m,n}\mathbf{e}\_y \tag{20}$$

and

$$\begin{aligned} V\_{1,x,m,n} &= G\_{x,-n,m} - A\_{y,m,n} + E\_{1,x,m,n} \\ V\_{1,y,m,n} &= G\_{y,-n,m} + A\_{x,m,n} + E\_{1,y,m,n} \end{aligned} \tag{21}$$

where **V**1,m,n is the mean value of a number of measurements of **V**1,m,n, and **E**1,m,n is the misalignment error. Following a procedure similar to that of the above subsection, we define

$$\begin{aligned} U\_{1,x,m,n} &= V\_{1,x,m,n} - \frac{\sum\_{m,n} V\_{1,x,m,n}}{N^2} + \frac{\sum\_{m,n} \left(-V\_{1,y,m,n} \mathbf{y}\_n - V\_{1,x,m,n} \mathbf{x}\_m\right)}{\sum\_{m,n} \left(\mathbf{x}\_m^2 + \mathbf{y}\_n^2\right)} \mathbf{x}\_m \\ U\_{1,y,m,n} &= V\_{1,y,m,n} - \frac{\sum\_{m,n} V\_{1,y,m,n}}{N^2} + \frac{\sum\_{m,n} \left(-V\_{1,y,m,n} \mathbf{y}\_n - V\_{1,x,m,n} \mathbf{x}\_m\right)}{\sum\_{m,n} \left(\mathbf{x}\_m^2 + \mathbf{y}\_n^2\right)} \mathbf{y}\_n \end{aligned} \tag{22}$$

Consequently, Eqs. (21) and (22) yield

$$\begin{aligned} F\_{\mathbf{x}, -n, m} - A\_{\mathbf{y}, m, n} &= U\_{\mathbf{1}, \mathbf{x}, m, n} - O\mathbf{x}\_m + R\mathbf{y}\_n \\ F\_{\mathbf{y}, -n, m} + A\_{\mathbf{x}, m, n} &= U\_{\mathbf{1}, \mathbf{y}, m, n} + O\mathbf{y}\_n + R\mathbf{x}\_m \end{aligned} \tag{23}$$

Comparing Eq. (23) of View 1 with Eq. (19) of View 0 and following the same procedure as in [20], the stage error components *O* and *R* can be calculated as follows:

$$\begin{aligned} O &= \frac{1}{2} \left[ \frac{\sum\_{m,n} \left( U\_{0,x,m,n} \mathbf{y}\_n + U\_{0,y,m,n} \mathbf{x}\_m \right)}{\sum\_{m,n} \left( \mathbf{x}\_m^2 + \mathbf{y}\_n^2 \right)} + \frac{\sum\_{m,n} \left( U\_{1,x,m,n} \mathbf{x}\_m - U\_{1,y,m,n} \mathbf{y}\_n \right)}{\sum\_{m,n} \left( \mathbf{x}\_m^2 + \mathbf{y}\_n^2 \right)} \right] \\ R &= \frac{1}{2} \left[ \frac{\sum\_{m,n} \left( U\_{0,x,m,n} \mathbf{x}\_m - U\_{0,y,m,n} \mathbf{y}\_n \right)}{\sum\_{m,n} \left( \mathbf{x}\_m^2 + \mathbf{y}\_n^2 \right)} + \frac{\sum\_{m,n} \left( -U\_{1,x,m,n} \mathbf{y}\_n - U\_{1,y,m,n} \mathbf{x}\_m \right)}{\sum\_{m,n} \left( \mathbf{x}\_m^2 + \mathbf{y}\_n^2 \right)} \right] \end{aligned} \tag{24}$$

After calculating *O* and *R*, Eqs. (19) and (23) lead to the following

$$\begin{aligned} F\_{x,m,n} - F\_{y,-n,m} &= P\_{m,n} \\ F\_{y,m,n} + F\_{x,-n,m} &= Q\_{m,n} \end{aligned} \tag{25}$$

where

$$\begin{aligned} P\_{m,n} &= U\_{0,x,m,n} - U\_{1,y,m,n} - 2O\mathbf{y}\_n - 2R\mathbf{x}\_m \\ Q\_{m,n} &= U\_{0,y,m,n} + U\_{1,x,m,n} - 2O\mathbf{x}\_m + 2R\mathbf{y}\_n \end{aligned} \tag{26}$$
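A compact NumPy sketch of Eqs. (24) and (26) follows; array axis 0 is taken as *m* and axis 1 as *n*, and the coordinate grids are assumed built as in the earlier alignment step. All names are illustrative, not the chapter's code.

```python
import numpy as np

# A sketch of Eqs. (24) and (26): estimate the first-order stage error
# components O and R from the aligned View 0 / View 1 deviations, then
# form the combined quantities P and Q. Names are illustrative.
def first_order_terms(U0x, U0y, U1x, U1y, x, y):
    denom = (x**2 + y**2).sum()
    O = 0.5 * ((U0x * y + U0y * x).sum() / denom
               + (U1x * x - U1y * y).sum() / denom)        # Eq. (24)
    R = 0.5 * ((U0x * x - U0y * y).sum() / denom
               + (-U1x * y - U1y * x).sum() / denom)
    P = U0x - U1y - 2 * O * y - 2 * R * x                  # Eq. (26)
    Q = U0y + U1x - 2 * O * x + 2 * R * y
    return O, R, P, Q
```

Note how the unknown artifact error cancels between the two bracketed terms in each estimator, which is why *O* and *R* come out exactly in the noise-free case.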

#### **3.3. View 2**

As the first-order components *O* and *R* have been determined by View 0 and View 1, we try to determine **F***m*,*n* in the third view. In this view, the artifact plate is translated by roughly one sample site interval +Δ from View 0 along the X-axis relative to the stage, while the stage axes and plate axes remain roughly aligned, as shown in View 2 of **Figure 3**. The deviation of mark (*m,n*) from the measurement value to its nominal position in the stage coordinate system is denoted as **v**2,m,n, where subscript 2 represents View 2. Different from View 0 and View 1, the subscript *m* here runs from −(*N* − 1)/2 to (*N* − 3)/2, because the far right column, i.e., *m* = (*N* − 1)/2, is outside the initial N × N stage sample sites. It also should be noted that although located at a different position relative to the stage (by about one sample site interval translation), the mark (*m,n*) here is the same physical mark (*m,n*) as in View 0. As only the (N − 1) × N sites of View 2 within the initial N × N sites can be used, the properties of no translation and no rotation cannot be used.

Similar to Views 0 and 1, we can obtain

$$\mathbf{v}\_{2,m,n} = v\_{2,x,m,n}\mathbf{e}\_x + v\_{2,y,m,n}\mathbf{e}\_y \tag{27}$$

and

$$\begin{aligned} V\_{2,x,m,n} &= G\_{x,m+1,n} + A\_{x,m,n} + E\_{2,x,m,n} \\ V\_{2,y,m,n} &= G\_{y,m+1,n} + A\_{y,m,n} + E\_{2,y,m,n} \\ E\_{2,x,m,n} &= -\theta\_2 \mathbf{y}\_n + t\_{2x} \\ E\_{2,y,m,n} &= \theta\_2 \mathbf{x}\_m + t\_{2y} \end{aligned} \tag{28}$$

where *m* = −(*N* − 1)/2, −(*N* − 3)/2, …, (*N* − 3)/2, **V**2,m,n is the mean value of a number of repeated measurements of **V**2,m,n, and θ2 and **t**2 are the rotation and offset of the misalignment. To be consistent with the previous two views, **U**2,m,n is used instead of **V**2,m,n, i.e.,

$$U\_{2,x,m,n} = V\_{2,x,m,n},\ U\_{2,y,m,n} = V\_{2,y,m,n} \tag{29}$$

Combining Eqs. (28) and (29) and noting that *x*<sub>*m*+1</sub> = *x*<sub>*m*</sub> + *Δ* leads to the following

$$\begin{aligned} F\_{x,m+1,n} + A\_{x,m,n} &= U\_{2,x,m,n} - O\mathbf{y}\_n - R\mathbf{x}\_m + \xi\_x - \xi\_\theta \mathbf{y}\_n \\ F\_{y,m+1,n} + A\_{y,m,n} &= U\_{2,y,m,n} - O\mathbf{x}\_m + R\mathbf{y}\_n + \xi\_y + \xi\_\theta \mathbf{x}\_m \end{aligned} \tag{30}$$

where *ξ*x = −(*t*2x + *R*Δ), *ξ*y = −(*t*2y + *O*Δ), and *ξ*θ = −*θ*2. Subtracting Eq. (19) from Eq. (30) leads to the following

$$\begin{aligned} F\_{x,m+1,n} - F\_{x,m,n} &= U\_{2,x,m,n} - U\_{0,x,m,n} + \xi\_x - \xi\_\theta \mathbf{y}\_n \\ F\_{y,m+1,n} - F\_{y,m,n} &= U\_{2,y,m,n} - U\_{0,y,m,n} + \xi\_y + \xi\_\theta \mathbf{x}\_m \end{aligned} \tag{31}$$

Unlike the cases in View 0 and View 1, the misalignment error of View 2 is not identified simply by applying the properties of **G***m*,*n*, **F***m*,*n*, and **A***m*,*n*, as View 2 provides no measurement data for the first column of the sampling sites. Certain algebraic manipulation is needed and is explained in Appendices A, B, and C, where the full procedures to determine the misalignment error components of View 2, i.e., *ξθ*, *ξx*, and *ξy*, are separately presented in detail. It should be pointed out that the determination process of *ξθ*, *ξx*, and *ξy* can also be used as a foundation for research on other self-calibration schemes.

#### **4. Self-calibration algorithm for X–Y metrology systems**

In this section, we synthesize a self-calibration algorithm for X–Y precision metrology stages with an orientation toward accuracy and simplicity. The proposed method should be easy to understand and robust against random measurement noise, both of which are important for engineers implementing it in practical applications. In the following, two schemes are provided.

#### **4.1. Scheme I**

Combining Eqs. (25), (31), and (8), the following equation group for *Fx*,*m*,*<sup>n</sup>* and *Fy*,*m*,*<sup>n</sup>* can be obtained:

$$\begin{aligned} F\_{x,m,n} - F\_{y,-n,m} &= U\_{0,x,m,n} - U\_{1,y,m,n} - 2O\mathbf{y}\_n - 2R\mathbf{x}\_m \\ F\_{y,m,n} + F\_{x,-n,m} &= U\_{0,y,m,n} + U\_{1,x,m,n} - 2O\mathbf{x}\_m + 2R\mathbf{y}\_n \\ F\_{x,m+1,n} - F\_{x,m,n} &= U\_{2,x,m,n} - U\_{0,x,m,n} + \xi\_x - \xi\_\theta \mathbf{y}\_n \\ F\_{y,m+1,n} - F\_{y,m,n} &= U\_{2,y,m,n} - U\_{0,y,m,n} + \xi\_y + \xi\_\theta \mathbf{x}\_m \\ \sum\_{m,n} F\_{x,m,n} &= \sum\_{m,n} F\_{x,m,n}\mathbf{x}\_m = \sum\_{m,n} F\_{x,m,n}\mathbf{y}\_n = 0 \\ \sum\_{m,n} F\_{y,m,n} &= \sum\_{m,n} F\_{y,m,n}\mathbf{x}\_m = \sum\_{m,n} F\_{y,m,n}\mathbf{y}\_n = 0 \end{aligned} \tag{32}$$

Eq. (32) actually supplies linear equations for calculation of *Fx*,*m*,*<sup>n</sup>* and *Fy*,*m*,*<sup>n</sup>* with certain redundancy. Therefore, a least-square solution for **F***<sup>m</sup>*,*<sup>n</sup>* can be synthesized with robustness to meet the challenge of random measurement noise. To provide a more explicit illustration, we take *N* = 11 as an example, and Eq. (32) can then be assembled into matrix form, i.e., *MEF* = *S*, where *M* is a relational matrix of dimension 468 × 242, *EF* is a vector of dimension 242 × 1, and *S* is a vector of dimension 468 × 1. This assembly of equations under matrix form is as follows:

$$ME\_F = S \tag{33}$$

where

$$\begin{aligned} E\_F = [&F\_{x,-5,-5}, F\_{x,-4,-5}, F\_{x,-3,-5}, \dots, F\_{x,5,-5}, \\ &F\_{x,-5,-4}, F\_{x,-4,-4}, F\_{x,-3,-4}, \dots, F\_{x,5,-4}, \\ &\vdots \\ &F\_{x,-5,5}, F\_{x,-4,5}, F\_{x,-3,5}, \dots, F\_{x,5,5}, \\ &F\_{y,-5,-5}, F\_{y,-4,-5}, F\_{y,-3,-5}, \dots, F\_{y,5,-5}, \\ &F\_{y,-5,-4}, F\_{y,-4,-4}, F\_{y,-3,-4}, \dots, F\_{y,5,-4}, \\ &\vdots \\ &F\_{y,-5,5}, F\_{y,-4,5}, F\_{y,-3,5}, \dots, F\_{y,5,5}]^T\_{242 \times 1} \end{aligned} \tag{34}$$

We have stated previously that three measurement views are required to generate equations with data redundancy. Consequently, a least-square method can be synthesized to provide a solution as follows

$$E\_F = (M^T M)^{-1} M^T S \tag{35}$$

which possesses certain robustness to random measurement noise. Therefore, **F***m*,*n* is obtained, which leads to the final determination of **G***m*,*n*.
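As a sketch (not the authors' code) of how the system of Eq. (32) can be assembled and solved for a general odd *N*, assuming the right-hand sides of Eqs. (25) and (31) are already available:

```python
import numpy as np

# A sketch of the Scheme I assembly: stack Eqs. (25), (31) and the Eq. (8)
# constraints into M E_F = S and solve by least squares as in Eq. (35).
# p, q are the right-hand sides of Eq. (25) (from Eq. (26)); dx, dy are the
# right-hand sides of Eq. (31). The m-major column ordering (which need not
# match Eq. (34)'s listing) and all names are illustrative choices.
def solve_scheme1(p, q, dx, dy, N, Delta=1.0):
    c, nf = (N - 1) // 2, N * N
    coord = (np.arange(N) - c) * Delta
    ix = lambda m, n: (m + c) * N + (n + c)          # column of F_x[m, n]
    iy = lambda m, n: nf + (m + c) * N + (n + c)     # column of F_y[m, n]
    rows, S = [], []
    def add(cols, vals, rhs):
        r = np.zeros(2 * nf)
        r[list(cols)] = vals
        rows.append(r)
        S.append(rhs)
    for m in range(-c, c + 1):
        for n in range(-c, c + 1):
            add([ix(m, n), iy(-n, m)], [1.0, -1.0], p[m + c, n + c])  # Eq. (25)
            add([iy(m, n), ix(-n, m)], [1.0, 1.0], q[m + c, n + c])
            if m < c:                                                 # Eq. (31)
                add([ix(m + 1, n), ix(m, n)], [1.0, -1.0], dx[m + c, n + c])
                add([iy(m + 1, n), iy(m, n)], [1.0, -1.0], dy[m + c, n + c])
    xf, yf = np.repeat(coord, N), np.tile(coord, N)   # x_m, y_n per column
    for w in (np.ones(nf), xf, yf):                   # Eq. (8) constraints
        for block in (slice(0, nf), slice(nf, 2 * nf)):
            r = np.zeros(2 * nf)
            r[block] = w
            rows.append(r)
            S.append(0.0)
    EF, *_ = np.linalg.lstsq(np.array(rows), np.array(S), rcond=None)
    return EF[:nf].reshape(N, N), EF[nf:].reshape(N, N)
```

For *N* = 11 this reproduces the 468 × 242 system quoted in the text: 2 × 121 mark equations, 2 × 110 difference equations, and 6 constraint rows.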

#### **4.2. Scheme II**

As presented above, with the determination of the misalignment errors detailed in the Appendix, the stage error can be calculated. However, it can also be observed from the Appendix that the computation process is complicated to some extent. Herein, this chapter also analyses the necessity of this costly computation and provides an alternative as follows [22].

As *ξθ* can be calculated out in the Appendix, define:

$$\begin{aligned} L\_{x,m,n} &= U\_{2,x,m,n} - U\_{0,x,m,n} - \xi\_\theta \mathbf{y}\_n \\ L\_{y,m,n} &= U\_{2,y,m,n} - U\_{0,y,m,n} + \xi\_\theta \mathbf{x}\_m \end{aligned} \tag{36}$$

Combining (31) and (36) leads to the following

$$\begin{aligned} F\_{x,m+2,n} - 2F\_{x,m+1,n} + F\_{x,m,n} &= L\_{x,m+1,n} - L\_{x,m,n} \\ F\_{x,m+1,n+1} - F\_{x,m+1,n} - F\_{x,m,n+1} + F\_{x,m,n} &= L\_{x,m,n+1} - L\_{x,m,n} \\ F\_{y,m+2,n} - 2F\_{y,m+1,n} + F\_{y,m,n} &= L\_{y,m+1,n} - L\_{y,m,n} \\ F\_{y,m+1,n+1} - F\_{y,m+1,n} - F\_{y,m,n+1} + F\_{y,m,n} &= L\_{y,m,n+1} - L\_{y,m,n} \end{aligned} \tag{37}$$
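The cancellation that motivates Eq. (37) is easy to verify numerically: by Eqs. (31) and (36), the first differences of *Fx* along *m* equal *Lx* plus a constant offset *ξx*, and that constant drops out of the second differences. A small self-contained check, with arbitrary illustrative data:

```python
import numpy as np

# Check of the cancellation behind Eq. (37): F_{x,m+1,n} - F_{x,m,n} equals
# L_{x,m,n} + xi_x, so the unknown constant xi_x drops out of the second
# difference in m. Fx and xi_x here are arbitrary test values.
rng = np.random.default_rng(7)
N, xi_x = 11, 0.123
Fx = rng.normal(size=(N, N))                    # arbitrary residual error map
Lx = (Fx[1:, :] - Fx[:-1, :]) - xi_x            # L_x, offset by -xi_x
second_diff = Fx[2:, :] - 2 * Fx[1:-1, :] + Fx[:-2, :]     # left side of (37)
first_diff_L = Lx[1:, :] - Lx[:-1, :]                      # right side of (37)
```

The two arrays agree exactly, independent of the value of *ξx*, which is why Scheme II never needs to compute *ξx* (or *ξy*) at all.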

Consequently, the equations for **F***<sup>m</sup>*,*<sup>n</sup>* from (8), (25), and (37) can be grouped as follows

$$\begin{aligned} \sum\_{m,n} F\_{x,m,n} &= \sum\_{m,n} F\_{x,m,n}\mathbf{x}\_m = \sum\_{m,n} F\_{x,m,n}\mathbf{y}\_n = 0 \\ \sum\_{m,n} F\_{y,m,n} &= \sum\_{m,n} F\_{y,m,n}\mathbf{x}\_m = \sum\_{m,n} F\_{y,m,n}\mathbf{y}\_n = 0 \\ F\_{x,m,n} - F\_{y,-n,m} &= P\_{m,n} \\ F\_{y,m,n} + F\_{x,-n,m} &= Q\_{m,n} \\ F\_{x,m+2,n} - 2F\_{x,m+1,n} + F\_{x,m,n} &= L\_{x,m+1,n} - L\_{x,m,n} \\ F\_{x,m+1,n+1} - F\_{x,m+1,n} - F\_{x,m,n+1} + F\_{x,m,n} &= L\_{x,m,n+1} - L\_{x,m,n} \\ F\_{y,m+2,n} - 2F\_{y,m+1,n} + F\_{y,m,n} &= L\_{y,m+1,n} - L\_{y,m,n} \\ F\_{y,m+1,n+1} - F\_{y,m+1,n} - F\_{y,m,n+1} + F\_{y,m,n} &= L\_{y,m,n+1} - L\_{y,m,n} \end{aligned} \tag{38}$$

Eq. (38) also provides linear equations for determination of *Fx*,*m*,*<sup>n</sup>* and *Fy*,*m*,*<sup>n</sup>* with certain redundancy. Then, a least-square solution for **F***<sup>m</sup>*,*<sup>n</sup>* can be synthesized. In detail, Eq. (38) can be assembled into matrix form as follows

$$
\Gamma E\_F = S \tag{39}
$$

where **Γ** is a relational matrix of dimension *D*1 × *D*2, *EF* is a vector of dimension *D*2 × 1, and *S* is a vector of dimension *D*1 × 1. Noting the subscripts of (38), we can calculate *D*1 and *D*2 as: *D*1 = 6 + *N*<sup>2</sup> + *N*<sup>2</sup> + (*N* − 2) × *N* + (*N* − 1)<sup>2</sup> + (*N* − 2) × *N* + (*N* − 1)<sup>2</sup> = 6*N*<sup>2</sup> − 8*N* + 8, and *D*2 = *N*<sup>2</sup> + *N*<sup>2</sup> = 2*N*<sup>2</sup>. For example, assuming *N* = 11, **Γ** is of dimension 646 × 242, which can be determined by (38), while

$$\begin{aligned} E\_F = [&F\_{x,-5,-5}, F\_{x,-4,-5}, F\_{x,-3,-5}, \dots, F\_{x,5,-5}, \\ &\vdots \\ &F\_{y,-5,5}, F\_{y,-4,5}, F\_{y,-3,5}, \dots, F\_{y,5,5}]^T\_{242 \times 1}, \\ S = [&0, 0, 0, 0, 0, 0, P\_{-5,-5}, P\_{-4,-5}, \dots, P\_{5,5}, \\ &Q\_{-5,-5}, Q\_{-4,-5}, \dots, Q\_{5,5}, \\ &L\_{x,-4,-5} - L\_{x,-5,-5}, \dots, \\ &\vdots \\ &L\_{y,-5,-4} - L\_{y,-5,-5}, \dots]^T\_{646 \times 1} \end{aligned} \tag{40}$$

As the system has requisite data redundancy, it can be solved using a least-square method to calculate *Fx*,*m*,*<sup>n</sup>* and *Fy*,*m*,*<sup>n</sup>* , producing a solution like

$$E\_F = (\Gamma^T \Gamma)^{-1} \Gamma^T S \tag{41}$$

Therefore, **F***<sup>m</sup>*,*<sup>n</sup>* is obtained with certain robustness which finally leads to the determination of **G***<sup>m</sup>*,*<sup>n</sup>* by (7) as O and R are all known.
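The dimension counts *D*1 and *D*2 quoted above can be sanity-checked with a few lines; the function name is an illustrative choice of this sketch.

```python
# Row and column counts of the Scheme II system of Eq. (38), term by term,
# together with the closed form D1 = 6*N**2 - 8*N + 8 given in the text.
def scheme2_dims(N):
    D1 = 6 + N**2 + N**2 + (N - 2) * N + (N - 1)**2 + (N - 2) * N + (N - 1)**2
    D2 = N**2 + N**2
    return D1, D2
```

For *N* = 11 this yields (646, 242), matching the dimensions of **Γ** stated above.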

It should be pointed out that the proposed scheme does not calculate the misalignment error components *ξx* and *ξy*, although these calculations are important in previously published strategies [15, 23, 24]. The proposed scheme does not need this costly computation, which simplifies the calculation process to some extent. In the remainder of this chapter, as Scheme I is a holistic method and Scheme II is a simplified alternative, the simulation test and self-calibration procedure are presented only for Scheme I. Readers can follow the same presentation to carry out the corresponding steps for Scheme II.

#### **5. Computer simulation**

In the following, the proposed self-calibration scheme is tested by simulation. The stage error is assumed to be on an 11 × 11 sample site array with the sample site interval *Δ* = 10 mm. The stage error is generated by the command "normrnd" of MATLAB with mean 0 and standard deviation 0.3, plus certain sine/cosine functions which may be partial characteristics of the stage error data, i.e.,

$$\begin{aligned} G\_{x,u,n} &= normrnd(0, 0.3, 11, 11) + 0.5 \sin(3m \times \frac{2\pi}{11})\\ G\_{y,u,n} &= normrnd(0, 0.3, 11, 11) + 0.5 \cos(3n \times \frac{2\pi}{11}) \end{aligned} \tag{42}$$

The above data are utilized as the nominal Gm,n shown in **Figure 4**, where the red vector lines are the stage error magnified 5000 times. It must be noted that Gx,m,n and Gy,m,n are each generated by an independent call of *normrnd* (0, 0.3, 11, 11). To satisfy relevant constraints such as Eq. (4), slight data modification has been made. In addition, the artifact error, including **A**x,m,n and **A**y,m,n, is generated with mean 0 and standard deviation 0.5 *μm*.
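For readers without MATLAB, a rough NumPy counterpart of Eq. (42) might look as follows. The seed, and the omission of the small data modification enforcing Eq. (4), are assumptions of this sketch.

```python
import numpy as np

# NumPy equivalent of Eq. (42)'s MATLAB normrnd call: simulated stage error
# on the 11 x 11 grid (values in micrometres). Seeding is an assumption
# added for repeatability; the Eq. (4) projection is not reproduced here.
rng = np.random.default_rng(42)
N = 11
m = np.arange(N) - 5                      # m = -5 ... 5, along rows
n = np.arange(N) - 5                      # n = -5 ... 5, along columns
Gx = rng.normal(0.0, 0.3, (N, N)) + 0.5 * np.sin(3 * m[:, None] * 2 * np.pi / N)
Gy = rng.normal(0.0, 0.3, (N, N)) + 0.5 * np.cos(3 * n[None, :] * 2 * np.pi / N)
```

The resulting fields have an overall standard deviation around 0.46 μm, consistent with the std(**G***m*,*n*) = 0.4611 μm quoted for the nominal data in **Figure 4**.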

**Figure 4.** Nominal **G***m*,*n* × 5000 with max(**G***m*,*n*) = 1.0832 *μ*m, min(**G***m*,*n*) = −1.1448 *μ*m, and std(**G***m*,*n*) = 0.4611 *μ*m.

#### **5.1. Case I: simulation without random measurement noise**

In this subsection, the random measurement noise is assumed to be perfectly compensated by the mean value of a number of repeated measurements. The recalculated stage error Ĝ*m*,*n*, i.e., Ĝ*x*,*m*,*n* and Ĝ*y*,*m*,*n*, can be calculated through the proposed self-calibration strategy. **Figure 5** illustrates the calibration error *EG*, including *EGx* and *EGy*, i.e., the differences between the recalculated and the actual stage error components. The calibration error is shown as the red vector lines, which have been magnified 5 × 10<sup>17</sup> times. The maximum value *max*(⋅), the minimum value *min*(⋅), and the standard deviation *std*(⋅) of Ĝ*m*,*n* and *EG* are all listed in **Table 1**. It can be observed that the reconstructed stage error is always quite close to the actual stage error, and the calibration errors are all below 1.6 × 10<sup>−14</sup> μm, which validates that the proposed self-calibration algorithm can accurately determine the stage error in the absence of random measurement noise.

**Figure 5.** Calibration error E*G* (*μ*m) × 5 × 10<sup>17</sup> with max(E*G*) = 1.5127 × 10<sup>−14</sup> μm, min(E*G*) = −1.1546 × 10<sup>−14</sup> μm, and std(E*G*) = 4.6547 × 10<sup>−15</sup> μm (without random measurement noise).


**Table 1.** Calculation performance indexes (without random measurement noise).


#### **5.2. Case II: simulation with random measurement noise of standard deviation 0.05** *μ***m**

The nominal stage error and artifact error in this subsection are kept the same as those of Section 5.1, except that independent random Gaussian measurement noises are added to all site measurements. The random measurement noise is generated with mean 0 and standard deviation 0.05 *μm*. Following the proposed self-calibration scheme, the reconstructed stage error Ĝ*m*,*n*, i.e., Ĝ*x*,*m*,*n* and Ĝ*y*,*m*,*n*, can also be computed. The calibration error *EG*, which is the difference between the recalculated and the actual stage error, is shown in **Figure 6**, where the red vector lines of *EG* have been magnified 5 × 10<sup>4</sup> times. The maximum value *max*(⋅), the minimum value *min*(⋅), and the standard deviation *std*(⋅) of *Gx*,*m*,*n*, *Gy*,*m*,*n*, *EGx*, and *EGy* are all listed in **Table 2**. It can be seen that the proposed self-calibration method

**Figure 6.** Calibration error E*G* × 5 × 10<sup>4</sup> with max(E*G*) = 0.0899 μm, min(E*G*) = −0.0793 μm, and std(E*G*) = 0.0334 μm (with random measurement noise std = 0.05 μm).

can determine the stage error rather exactly even when there exists certain random measurement noise: when the measurement noise has a standard deviation of 0.05 μm, the corresponding calibration error has a standard deviation of 0.0334 μm, which is at the same level as the random measurement noise.

We further test the calculation robustness of the proposed scheme against random measurement noise by repeating the simulation with 15 arbitrary, independently generated noise realizations. The standard deviations of the resultant calibration errors are shown in **Figure 7**, where the 15 standard deviations are all below 0.05 μm. These results consistently validate that the proposed self-calibration algorithm can meet the challenge of random measurement noise effectively.


| | **max (⋅)** | **min (⋅)** | **std (⋅)** |
|---|---|---|---|
| Ĝ*m*,*n* (*μ*m) | 1.0919 | −1.1598 | 0.4645 |
| Ĝ*x*,*m*,*n* (*μ*m) | 1.0919 | −1.1598 | 0.4693 |
| Ĝ*y*,*m*,*n* (*μ*m) | 0.9744 | −1.0835 | 0.4617 |
| E*G* (*μ*m) | 0.0899 | −0.0793 | 0.0334 |
| E*Gx* (*μ*m) | 0.0552 | −0.0793 | 0.0334 |
| E*Gy* (*μ*m) | 0.0899 | −0.0759 | 0.0336 |

**Table 2.** Calculation performance indexes (with random measurement noise std = 0.05 μm).

**Figure 7.** Standard deviation of the calibration error for 15 arbitrary noise realizations (with random measurement noise std = 0.05 μm).

#### **5.3. Case III: simulation with random measurement noise of standard deviation 5 × 10<sup>−4</sup> μm**

In this subsection, we implement another simulation to test the consistency of the proposed scheme under random measurement noise. The random measurement noise is generated with mean 0 and a rather small standard deviation of 0.0005 μm. **Figure 8** shows the resultant calibration error *EG* under this random measurement noise, where the red vector lines of *EG* have been magnified 5 × 10<sup>6</sup> times. The maximum value *max*(⋅), the minimum value *min*(⋅), and the standard deviation *std*(⋅) of *Gx*,*m*,*n*, *Gy*,*m*,*n*, *EGx*, and *EGy* are all listed in **Table 3**. As seen from this table, the proposed method can determine the stage error when there exists random measurement noise with a standard deviation of 0.0005 μm: the standard deviation of the calibration error is about 0.000327 μm, again at the same level as the random measurement noise. The algorithm's robustness is further tested over 15 arbitrary noise realizations with standard deviation 0.0005 μm, and **Figure 9** shows the standard deviations of the calibration errors. It can be seen from the figure that the standard deviations are all <0.0005 *μm*. All these results consistently validate that the proposed self-calibration algorithm can compute the stage error rather accurately regardless of the random measurement noise level.

**Figure 8.** Calibration error E*G* × 5 × 10<sup>6</sup> with max(E*G*) = 8.8410 × 10<sup>−4</sup> μm, min(E*G*) = −7.4194 × 10<sup>−4</sup> μm, and std(E*G*) = 3.2684 × 10<sup>−4</sup> μm (with random measurement noise std = 5 × 10<sup>−4</sup> μm).


**Table 3.** Calculation performance indexes (with random measurement noise std = 5 × 10<sup>−4</sup> μm).

**Figure 9.** Standard deviation of calibration error for 15 arbitrary runs (with random measurement noise std = 5 × 10<sup>−4</sup> μm).

#### **6. Standard self-calibration procedure**

In this section, the procedure for performing a standard self-calibration following the proposed scheme is provided to facilitate practical implementation. To illustrate the procedure more clearly, a self-calibration system including an artifact plate, a two-dimensional stage, and a mark alignment system is developed and shown in **Figure 10**. Herein, the mark alignment system is developed to precisely obtain the measurement information of each mark position. The actual steps are listed as follows.


**Figure 10.** An example setup for 2-D self-calibration.
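As an illustration of the repeated-measurement averaging used in Steps 0–2, the sketch below assumes that the *U* quantities are plain *K*-run means of the raw readings *v* (a simplification for illustration; Eqs. (18) and (22) in the chapter define them precisely).

```python
def average_marks(readings):
    """Average K repeated position readings per mark.

    `readings` is a list of K measurement passes; each pass maps a mark index
    (m, n) to its measured coordinate. Assumes the U quantities reduce to a
    plain K-run mean (hypothetical simplification of Eqs. (18)/(22)).
    """
    K = len(readings)
    marks = readings[0].keys()
    return {mark: sum(r[mark] for r in readings) / K for mark in marks}

# Example: K = 3 passes over a single mark (0, 0)
passes = [{(0, 0): 10.001}, {(0, 0): 9.999}, {(0, 0): 10.000}]
U = average_marks(passes)
assert abs(U[(0, 0)] - 10.0) < 1e-9
```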

**•** Step 0: Place the artifact plate on the 2-D metrology stage as shown in View 0 of **Figure 1**. Utilize the metrology stage to measure the marks of the artifact plate through the mark alignment system. Obtain *v*0,*x*,*m*,*n* and *v*0,*y*,*m*,*n* for *K* (e.g., *K* = 10) times, and get *U*0,*x*,*m*,*n* and *U*0,*y*,*m*,*n* by (18), respectively.

**•** Step 1: Place the artifact plate on the 2-D metrology stage as shown in View 1 of **Figure 1**. Use the metrology stage to measure the marks of the artifact plate, and get *v*1,*x*,*m*,*n* and *v*1,*y*,*m*,*n* for *K* (e.g., *K* = 10) times. Then, determine *U*1,*x*,*m*,*n* and *U*1,*y*,*m*,*n* by (22), and obtain *O* and *R* by (24).

**•** Step 2: Place the artifact plate on the 2-D metrology stage as shown in View 2 of **Figure 1**. Use the metrology stage to measure the marks of the artifact plate, and get *v*2,*x*,*m*,*n* and *v*2,*y*,*m*,*n* for *K* (e.g., *K* = 10) times. Compute the misalignment error components *ξθ*, *ξx*, *ξy*, and obtain Eq. (31).

**•** Step 3: Through the above steps, Eq. (32) can consequently be obtained. Then, by (35), *Fx*,*m*,*n* and *Fy*,*m*,*n* can be computed. Finally, *Gx*,*m*,*n* and *Gy*,*m*,*n* can be computed by (7), as *O* and *R* were previously determined in Step 1.

#### **7. Conclusions**

In this chapter, a least-square-based self-calibration strategy oriented toward simplicity and accuracy has been proposed for two-dimensional precision metrology stages to address the measurement accuracy calibration problem. Three measurement views of an artifact plate on the uncalibrated metrology stage are utilized to construct symmetry, transitivity, and redundancy of the stage error, which form the basis for the synthesis of a least-square-based self-calibration algorithm. The misalignment error of each measurement view has been calculated with explicit mathematical manipulations, and the necessity of these costly computations has been discussed. The proposed algorithm has been investigated by computer simulation, and the results show that the self-calibration strategy can reconstruct the stage error map rather precisely even when various random measurement noises exist. Finally, the procedure for performing the proposed self-calibration has been presented to facilitate practical implementation by engineers.

#### **Acknowledgements**

This work is supported by the National Natural Science Foundation of China (Grant 51475262) and the Autonomous Scientific Research Project of Tsinghua University (Grant 20151080363).

#### **Appendix**

#### **Appendix A**

Calculation of misalignment error component *ξθ*: Through Eq. (31), one can get

$$\begin{aligned} &(F_{x,m+1,n} + F_{x,-(m+1),-n}) - (F_{x,m,n} + F_{x,-m,-n}) \\ &= (F_{x,m+1,n} - F_{x,m,n}) - (F_{x,-m,-n} - F_{x,-(m+1),-n}) \\ &= U_{2,x,m,n} - U_{0,x,m,n} + \xi_x - \xi_\theta y_n - (U_{2,x,-(m+1),-n} - U_{0,x,-(m+1),-n} + \xi_x - \xi_\theta y_{-n}) \\ &= (U_{2,x,m,n} - U_{0,x,m,n} - U_{2,x,-(m+1),-n} + U_{0,x,-(m+1),-n}) - \xi_\theta (y_n - y_{-n}) \end{aligned} \tag{43}$$

Changing the index of Eq. (25), we can obtain

$$\begin{aligned} F_{x,-m,-n} + F_{y,-n,m} &= Q_{-n,m} \\ F_{x,-n,m} - F_{y,-m,-n} &= P_{-n,m} \end{aligned} \tag{44}$$

Combining Eqs. (25) and (44) leads to the following

$$\begin{aligned} F_{x,m,n} + F_{x,-m,-n} &= C_{m,n} \equiv P_{m,n} + Q_{-n,m} \\ F_{y,m,n} + F_{y,-m,-n} &= D_{m,n} \equiv Q_{m,n} - P_{-n,m} \end{aligned} \tag{45}$$

Then, noting Eqs. (43) and (45), we can obtain

$$\left( U_{2,x,m,n} - U_{0,x,m,n} - U_{2,x,-(m+1),-n} + U_{0,x,-(m+1),-n} \right) - \xi_\theta (y_n - y_{-n}) = C_{m+1,n} - C_{m,n} \tag{46}$$

Eq. (46) can be directly utilized to determine the value of *ξθ*.
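Eq. (46) is linear in the single unknown *ξθ*, so stacking the relation over all available (*m*, *n*) samples gives a one-parameter least-squares estimate. A hypothetical numerical sketch (variable names are illustrative, not from the chapter):

```python
def estimate_xi_theta(delta_u, delta_c, delta_y):
    """One-parameter least-squares estimate of the misalignment angle xi_theta.

    Eq. (46) gives, for each sample: delta_u - xi_theta * delta_y = delta_c,
    where delta_u is the bracketed U-difference, delta_y = y_n - y_{-n}, and
    delta_c = C_{m+1,n} - C_{m,n}. Minimising the stacked squared residual
    yields the closed form below.
    """
    num = sum(dy * (du - dc) for du, dc, dy in zip(delta_u, delta_c, delta_y))
    den = sum(dy * dy for dy in delta_y)
    return num / den

# Synthetic check: build samples from a known xi_theta and recover it.
xi_true = 1.5e-6  # radians (hypothetical magnitude)
dys = [2.0 * n for n in range(1, 6)]   # y_n - y_{-n} spacings
dcs = [0.3, -0.1, 0.2, 0.0, -0.4]      # arbitrary C differences
dus = [dc + xi_true * dy for dc, dy in zip(dcs, dys)]
assert abs(estimate_xi_theta(dus, dcs, dys) - xi_true) < 1e-12
```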

#### **Appendix B**

Calculation of misalignment error component *ξx* : Summing Eq. (31) over n leads to the following

$$\begin{aligned} H_{x,m+1} - H_{x,m} &= Z_{x,m} + N\xi_x \\ H_{y,m+1} - H_{y,m} &= Z_{y,m} + N\xi_y + N\xi_\theta x_m \end{aligned} \tag{47}$$

where *m* = −(*N* − 1)/2, ⋯, (*N* − 3)/2, and

#### Self-Calibration of Two-Dimensional Precision Metrology Systems http://dx.doi.org/10.5772/62761 207

$$\begin{aligned} H_{x,m} &= \sum_{n} F_{x,m,n}, \quad Z_{x,m} = \sum_{n} (U_{2,x,m,n} - U_{0,x,m,n}) \\ H_{y,m} &= \sum_{n} F_{y,m,n}, \quad Z_{y,m} = \sum_{n} (U_{2,y,m,n} - U_{0,y,m,n}) \end{aligned} \tag{48}$$

Considering *Hx*,*m*, with *j* = (*N* − 1)/2, we can obtain


$$\begin{aligned} &\sum_{m=-j}^{j} H_{x,m} x_m \\ &= H_{x,-j} x_{-j} + \{[H_{x,-j+1} - H_{x,-j}] + H_{x,-j}\} x_{-j+1} + \cdots \\ &\quad + \{[H_{x,j} - H_{x,j-1}] + [H_{x,j-1} - H_{x,j-2}] + [H_{x,j-2} - H_{x,j-3}] + \cdots + H_{x,-j}\} x_j \\ &= H_{x,-j} \sum_{m=-j}^{j} x_m + [H_{x,-j+1} - H_{x,-j}] \sum_{m=-j+1}^{j} x_m + \cdots + [H_{x,j} - H_{x,j-1}] \sum_{m=j}^{j} x_m \end{aligned} \tag{49}$$

Substituting Eq. (47) into (49), and noting $\sum_{m=-j}^{j} x_m = 0$, one can get

$$\begin{aligned} &\sum_{m=-j}^{j} H_{x,m} x_m \\ &= N\xi_x \Big[ \sum_{m=-j+1}^{j} x_m + \sum_{m=-j+2}^{j} x_m + \cdots + \sum_{m=j}^{j} x_m \Big] \\ &\quad + \Big[ Z_{x,-j} \sum_{m=-j+1}^{j} x_m + Z_{x,-j+1} \sum_{m=-j+2}^{j} x_m + \cdots + Z_{x,j-1} \sum_{m=j}^{j} x_m \Big] \end{aligned} \tag{50}$$

From Eq. (8), it can be seen that

$$\sum_{m=-j}^{j} H_{x,m} x_m = \sum_{m,n} F_{x,m,n} x_m = 0 \tag{51}$$

Combining Eqs. (50) and (51), *ξx* can then be determined.
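The summation-by-parts identity of Eq. (49) can be verified numerically for arbitrary data; a small self-contained check:

```python
import random

# Numerical check of the summation-by-parts identity in Eq. (49):
# sum_m H_m x_m = H_{-j} * S_{-j} + sum_{k > -j} (H_k - H_{k-1}) * S_k,
# where S_k = sum_{m=k}^{j} x_m is a tail sum of the positions.
random.seed(1)
j = 5
idx = list(range(-j, j + 1))
H = {m: random.uniform(-1.0, 1.0) for m in idx}
x = {m: random.uniform(-1.0, 1.0) for m in idx}

lhs = sum(H[m] * x[m] for m in idx)

def tail(k):
    """Tail sum S_k = sum of x_m for m >= k."""
    return sum(x[m] for m in idx if m >= k)

rhs = H[-j] * tail(-j) + sum((H[k] - H[k - 1]) * tail(k) for k in idx[1:])
assert abs(lhs - rhs) < 1e-12
```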

#### **Appendix C**

Calculation of misalignment error component *ξy*: Similar to *Hx*,*m*, the following equation for *Hy*,*m* with *j* = (*N* − 1)/2 can be constructed, i.e.,

$$\begin{aligned} &\sum_{m=-j}^{j} H_{y,m} x_m \\ &= H_{y,-j} x_{-j} + \{[H_{y,-j+1} - H_{y,-j}] + H_{y,-j}\} x_{-j+1} + \cdots \\ &\quad + \{[H_{y,j} - H_{y,j-1}] + [H_{y,j-1} - H_{y,j-2}] + \cdots + H_{y,-j}\} x_j \\ &= H_{y,-j} \sum_{m=-j}^{j} x_m + [H_{y,-j+1} - H_{y,-j}] \sum_{m=-j+1}^{j} x_m + \cdots + [H_{y,j} - H_{y,j-1}] \sum_{m=j}^{j} x_m \end{aligned} \tag{52}$$

Substituting Eq. (47) into (52), and noting $\sum_{m=-j}^{j} x_m = 0$, one can obtain

$$\begin{aligned} &\sum_{m=-j}^{j} H_{y,m} x_m \\ &= N\xi_y \Big[ \sum_{m=-j+1}^{j} x_m + \sum_{m=-j+2}^{j} x_m + \cdots + \sum_{m=j}^{j} x_m \Big] \\ &\quad + N\xi_\theta \Big[ x_{-j} \sum_{m=-j+1}^{j} x_m + x_{-j+1} \sum_{m=-j+2}^{j} x_m + \cdots + x_{j-1} \sum_{m=j}^{j} x_m \Big] \\ &\quad + \Big[ Z_{y,-j} \sum_{m=-j+1}^{j} x_m + Z_{y,-j+1} \sum_{m=-j+2}^{j} x_m + \cdots + Z_{y,j-1} \sum_{m=j}^{j} x_m \Big] \end{aligned} \tag{53}$$

From Eq. (8), it also can be seen that

$$\sum_{m=-j}^{j} H_{y,m} x_m = \sum_{m,n} F_{y,m,n} x_m = 0 \tag{54}$$

As *ξθ* has been calculated in Appendix A, *ξy* can then be determined from Eqs. (53) and (54).

#### **Author details**

Chuxiong Hu<sup>1,2</sup>\* and Yu Zhu<sup>1,2</sup>

\*Address all correspondence to: cxhu@tsinghua.edu.cn

1 State Key Lab of Tribology, Department of Mechanical Engineering, Tsinghua University, Beijing, China

2 Beijing Key Lab of Precision/Ultra-precision Manufacturing Equipment and Control, Tsinghua University, Beijing, China

#### **References**

[14] Y. H. Jeong, J. Dong, and P. M. Ferreira. Self-calibration of dual-actuated single-axis nanopositioners. Measurement Science and Technology. 2008, 19 (4): 1–13.

[15] S. Yoo and S. W. Kim. Self-calibration algorithm for testing out-of-plane errors of two-dimensional profiling stages. International Journal of Machine Tools & Manufacture. 2004, 44: 767–774.

[16] M. Xu, T. Dziomba, G. Dai, and L. Koenders. Self-calibration of scanning probe microscope: mapping the errors of the instrument. Measurement Science and Technology. 2008, 19 (2): 1–6.

[17] Q. C. Dang, S. Yoo, and S.-W. Kim. Complete 3-D self-calibration of coordinate measuring machines. Annals of the CIRP. 2006, 55 (1): 1–4.

[18] M. T. Takac. Self-calibration in one dimension. In Proceedings of SPIE 13th Annual BACUS Symposium on Photomask Technology and Management, 1993, pp. 80–86.

[19] M. R. Raugh. Absolute two-dimensional sub-micron metrology for electron beam lithography: a theory of calibration with applications. Precision Engineering. 1985, 7: 3–13.

[20] J. Ye, M. T. Takac, C. N. Berglund, et al. An exact algorithm for self-calibration of two-dimensional precision metrology stages. Precision Engineering. 1997, 20 (1): 16–32.

[21] X. M. Lu. Real-time self-calibration and geometry error measurement in nm level multiaxis precision machines based on multi X–Y encoder integration. Ph.D. Thesis. University of New Mexico, Albuquerque, 2004.

[22] Y. Zhu, C. Hu, J. Hu, and K. Yang. Accuracy and simplicity oriented self-calibration approach for two-dimensional precision stages. IEEE Transactions on Industrial Electronics. 2013, 60 (6): 2264–2272.

[23] M. Xu, T. Dziomba, G. Dai, and L. Koenders. Self-calibration of scanning probe microscope: mapping the errors of the instrument. Measurement Science and Technology. 2008, 19 (2): 1–6.

[24] C. Hu, Y. Zhu, J. Hu, M. Zhang, and D. Xu. A holistic self-calibration algorithm for X–Y precision metrology systems. IEEE Transactions on Instrumentation and Measurement. 2012, 61 (9): 2492–2500.


### **Innovative Theoretical Approaches Used for RF Power Amplifiers in Modern HDTV Systems**

Daniel Discini Silveira, Marcos Paulo de Souza Silva, Marcel Veloso Campos and Maurício Silveira

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/63547

#### **Abstract**


The essential purpose of this chapter is to introduce theoretical and numerical approaches that can be used for modeling the nonlinear effects that appear intrinsically in the design of power amplifiers, which are widely used in many modern high-density television (HDTV) architectures. Important effects such as pre-distortion using adaptive techniques, with distinct characteristics of amplitude, phase, and frequency, as well as their specific nature (*AM/AM*, *AM/PM*, *PM/AM*, and *PM/PM*), constitute one of the main directions of this research. All theoretical and technological approaches are supported by a consistent set of numerical data produced with one of the most important simulation platforms used in the broad area of radio frequency (RF) and microwave structures. As a direct application, we introduce efficient processes that can be used for the characterization of RF systems, with a set of consistent laboratory measurements that allow us to assess the effective cost and a complete architecture for the characterization of high-power amplifiers. With the continuous and innovative technological demand imposed by the international market, it is of great importance to find versatile systems capable of measuring several amplifier characteristics, such as gain, output power, inter-modulation distortion of different signals, efficiency, current, and temperature; these constitute another direction of research strongly demanded by new advanced technologies widely used in modern HDTV systems.

**Keywords:** innovative theoretical modeling, digital pre-distortion, nonlinear power amplifiers, characterization of RF systems, numerical approach, measurement systems, HDTV architectures

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **1. Introduction**

Nowadays, the research and development of modern *RF* power amplifiers has demanded continuous and accurate studies concentrated on the linearity of the equipment. Besides the distortions caused by nonlinear devices, there has been constant improvement in the implementation of new modulation techniques that exploit spectrum properties [1]. The nonlinear behavior of the devices is a fundamental effect, which has attracted the attention of researchers in many famous research centers around the world [2, 3]. In Section 2, we focus our research on the development of an adaptive prototype for the linearization of the power amplifier using digital pre-distortion, in such a way that it is possible to feed the distortion back through the power amplifiers [4, 5]. This provides a much more linear final response, the main application being modern high-density television (*HDTV*) transmitter equipment. In the current literature, this type of pre-distortion is classified as adaptive, since we must compare the input signal with the transmitted one, which includes the distortion generated by the power amplifier [6]. The theoretical as well as experimental approach starts by setting up an open-loop prototype and proceeds to the closed loop by adopting an adaptive system for analog signals, such that the digital treatment uses adaptive techniques over the entire transmission system [7–14].
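The adaptive feedback idea (compare the transmitted signal with the input and trim the pre-distorter accordingly) can be illustrated with a deliberately minimal scalar loop. The amplifier model and all coefficients below are hypothetical; real systems adapt full I/Q look-up tables rather than a single gain.

```python
def amplifier(v):
    """Toy compressive PA: third-order AM/AM model (hypothetical coefficients)."""
    return 2.0 * v - 0.15 * v ** 3

# Closed-loop adaptive pre-distortion (scalar sketch): the feedback path
# compares the transmitted signal with the input and trims a single
# pre-distortion gain w so that output ~= G_target * input.
G_target = 2.0
w = 1.0      # pre-distortion gain, adapted on-line
mu = 0.05    # adaptation step
x = 0.8      # constant test amplitude
for _ in range(200):
    y = amplifier(w * x)
    error = G_target * x - y       # feedback comparison
    w += mu * error                # LMS-style update
assert abs(amplifier(w * x) - G_target * x) < 1e-6
```

The loop converges because the amplifier is monotone over the operating range, so the error update is a contraction for a small enough step `mu`.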

Currently, in many modern implementations of broadcast TV transmitters, power amplifiers (PAs) are associated in order to obtain the desired output power, usually higher than 1 kW. The filtered output signal is expected to present low levels of distortion, so that it does not interfere with adjacent broadcast channels. A signal applied to a power amplifier (PA) will suffer distortions, independently of its input level and of the technology (BJT or LDMOS) [1, 2, 15]. Basically, three types of distortion are caused in the input signal: amplitude, frequency, and phase. Amplitude distortions are caused by the amplifier gain compression, P1dB, a device-dependent characteristic. Clipping generates signal harmonics, also classified as frequency distortion; the magnitude of these harmonics increases with the amount of signal clipping, consequently increasing the overall distortion. Phase distortions are mainly caused by memory effects present in amplifiers due to different causes, such as biasing, transistor transit time, thermal effects, and others. Operating the amplifier in significant *back-off* can avoid some of these undesirable distortions, but the efficiency of the amplifier will then be very poor. A powerful way to decrease distortion in the output signal is to use linearization techniques, thus operating the PA in a more efficient way [16–18]. Some initial results for the *AM/AM* analysis can be obtained by imposing that the transfer function of the output voltage of the power amplifier has a third-order nonlinear dependence on the input signal. Although the numerical approach adopted on that occasion was efficient for this particular case, the nonlinear effect for the *AM/PM* analysis was not compensated [19–22]. Examples of models that can compensate these and other distortions can be found in a study of several practical models. An accurate model is the first step for the implementation of an efficient pre-distortion system; using a parallel model, good results have been obtained, compensating for *AM/AM* and *AM/PM* distortions and also for memory effects, the main causes of distortion in the PA output signal. These theoretical and experimental approaches constitute the most important contributions presented in Section 3 [23–26].
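For the third-order AM/AM model mentioned above, the gain compression and the 1 dB compression point (P1dB) can be computed directly; a sketch with hypothetical coefficients:

```python
import math

def gain_db(vin, g1=10.0, g3=2.0):
    """Gain (dB) of a third-order AM/AM model y = g1*v - g3*v**3
    (hypothetical coefficients)."""
    return 20.0 * math.log10((g1 * vin - g3 * vin ** 3) / vin)

# The 1 dB compression point (P1dB) is the input level at which the gain has
# dropped 1 dB below its small-signal value; locate it by bisection.
small_signal = 20.0 * math.log10(10.0)  # small-signal gain of the model, in dB
lo, hi = 1e-6, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if small_signal - gain_db(mid) < 1.0:
        lo = mid
    else:
        hi = mid
p1db_in = 0.5 * (lo + hi)
assert abs(small_signal - gain_db(p1db_in) - 1.0) < 1e-9
```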


In recent years, a wide class of theoretical approaches has been presented in the current literature involving the design of power amplifiers modeled with field-effect transistor (*FET*) structures, where the main focus is concentrated on the linear region of the device [2–5, 27–29]. Recently, important studies have been carried out on nonlinear microwave circuits as well as on new designs for wireless digital structures, together with innovative studies using linearization techniques for high-gain power amplifiers; of great importance is the presence of an active pre-distortion circuit for monolithic microwave integrated circuit (*MMIC*) components implemented with field-programmable gate array (*FPGA*) architectures [30–32]. More recently, a set of innovative studies has appeared in the current literature, introducing a new theoretical modeling for the nonlinear analysis of power amplifiers, where the modeling of the electronic structure was strongly motivated by the involvement of the authors in the design of modern HDTV equipment. With respect to industrial implementations, this research makes an expressive contribution related to the polarization of the digital structures using the common-source configuration in the design of all hybrid RF circuits [33]. Therefore, the desired high stability can be reached independently of the particular pre-distortion that must be corrected in real time, such as *AM/AM*, *AM/PM*, and all the other cases. The theoretical approach presented in Section 4 can be used for a wide class of modern power amplifiers, independently of the particular modulation system adopted for the design of the HDTV system. Some experimental measurements are performed and compared with numerical simulations using a very powerful numerical platform, the Advanced Design System (*ADS*), which confirms all our theoretical approaches [34, 35].

The characterization of RF power systems or products over the entire ultra high frequency (*UHF*) band is a very difficult and sometimes expensive task that demands time and accuracy. This means expensive and specialized setups capable of performing the necessary measurements [36–45]. In Section 5, we present a high-performance and low-cost setup designed for the test and characterization of RF high-power amplifiers, capable of generating reports that help the operator decide about the conformity of the equipment with other tested units, as we can see in the characterization of a power drawer [46–52]. The drawer is composed of four power amplifiers, capable of delivering a maximum output power of 860 watts peak sync (Wps), 430 W for the USA digital TV broadcast system (*ATSC*), or 220 W for the standard adopted in the Japanese/Brazilian systems (*ISDB-T*). The architecture presented in this section can naturally also be used to characterize other RF power amplifiers, the power drawer being only a particular case.

#### **2. Efficient experimental techniques to control the adaptive linearization: digital signals** *I* **and** *Q*

In this section, we focus on the implementation of an efficient FPGA electronic architecture that can serve systems that are very sensitive to nonlinear distortions [1]. In principle, this research is directed toward the analysis of both *I* and *Q* complex signals in order to minimize both *phase* (*AM-PM*) and *amplitude* (*AM-AM*) distortions that are intrinsically present in the operation of the architecture. One of the most important goals of this design is to combine the power amplifier with a digital pre-distortion system, in order to obtain an acceptable inter-modulation level at the output.

With respect to the numerical implementations, all programs are written using classic computational methods, such as *linear correlation* and *linear regression*, which are very convenient for this kind of distortion and permit the designer to work close to the linear operation region of the amplifier [2–5].

#### **2.1. Fundamental relationships adopted for the linear numerical approach**

Recently, the implementation of innovative electronic architectures involving numerical computation has raised the important requirement of investigating the relationship that exists between two or more variables present in the system. The verification of the degree of relationship between the variables is the object of correlation studies, whose essential purpose is to measure and evaluate the relationship between two random variables [6–8].

Typically, linear correlation seeks to measure the relationship between the variables through the disposition of the points around a straight line and is used exhaustively for the determination of the correlation coefficient. For a more general computational modeling, we must detect the correlation that exists between two arbitrary variables. When all points can be allocated along a straight line, we have the known classic case of linear correlation, while points in the two-dimensional real domain that lie close to some curve establish a nonlinear correlation.

If both variables *x* and *y* behave such that an increase of the first variable imposes the same behavior on the second one, we have a positive linear correlation, the alternative case corresponding to a negative linear correlation. Perfect linear correlation happens when the points (*x*, *y*) are perfectly aligned. Null correlation occurs when there is no relationship among the variables, which implies that all variations can happen independently [9–12].

The linear coefficient *αr*, known in the international literature as the Pearson correlation coefficient, can be described analytically by Eq. (1):

$$\alpha_r = \frac{\sum_{i=1}^{n} (x_i - \overline{x})(y_i - \overline{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \overline{x})^2 \sum_{i=1}^{n} (y_i - \overline{y})^2}} \tag{1}$$

In the previous equation, *n* is the number of pairs (*x*, *y*), and *x*¯ and *y*¯ are the averages of the *x* and *y* variables, respectively. The coefficient *αr* takes values in the interval I = [−1, 1], which corresponds to the correlation level, as illustrated in **Table 1**.


**Table 1.** Amplitude of the parameter *αr*.
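A direct Python implementation of the Pearson coefficient *αr* of Eq. (1):

```python
import math

def pearson(xs, ys):
    """Pearson linear correlation coefficient alpha_r of Eq. (1)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

# Perfectly aligned points give alpha_r = +1 (or -1 for a negative slope).
assert abs(pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]) - 1.0) < 1e-12
assert abs(pearson([1.0, 2.0, 3.0], [6.0, 4.0, 2.0]) + 1.0) < 1e-12
```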


and to evaluate the relationship existent between two random variables [6–8].

close of some curve and establish a nonlinear correlation.

variations can happen independently [9–12].

lation can be described by Eq. (1) in the analytic form:


=

a<sup>=</sup>

output.

region of the amplifier [2–5].

214 New Trends and Developments in Metrology

The proximity of the parameter *αr* to zero indicates that there is no significant linear correlation between the variables *x* and *y*, while if this coefficient is close to ±1, we can conclude that there is a strong correlation. In general, 0 ≤ |*αr*| ≤ 1.

The least-squares straight line that best fits the set of points {(*x1*, *y1*), (*x2*, *y2*), …, (*xn*, *yn*)} is given by the linear equation:

$$y = ax + b \tag{2}$$

For this numerical approach, the constants *a* and *b* can be determined by solving the normal equations, giving:

$$a = \frac{\sum\_{i=1}^{n} (x\_i - \overline{x})(y\_i - \overline{y})}{\sum\_{i=1}^{n} (x\_i - \overline{x})^2} \quad b = \overline{y} - a\overline{x} \tag{3}$$
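Equations (1)–(3) are easy to check numerically. The following Python sketch (function names are ours, chosen only for illustration) computes the Pearson coefficient and the least-squares coefficients for a small data set:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient alpha_r, Eq. (1)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - xbar) ** 2 for x in xs)
                    * sum((y - ybar) ** 2 for y in ys))
    return num / den

def least_squares(xs, ys):
    """Coefficients a and b of y = a*x + b, Eq. (3)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    b = ybar - a * xbar
    return a, b

# Perfectly aligned points give a perfect positive linear correlation.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x - 1.0 for x in xs]
print(pearson(xs, ys))        # 1.0
print(least_squares(xs, ys))  # (2.0, -1.0)
```

For points that lie exactly on a line, the coefficient reaches ±1 and the fit recovers the line exactly, as the text above states.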

#### **2.2. An initial hardware implementation**

In order to reach the demanded linearization, we must take into account some very important parameters, such as the operating frequency, the temperature, and others. It is important to emphasize that those parameters are not constant as a function of time, and thus we must place their control under an adaptive digital system.

Since the project demands fast control and the system must operate in real time, we replaced the memory with a model implemented experimentally in a *Programmable Logic Device* (*PLD*), which presents numerous advantages. In our case, we opted for a *Look-Up Table* (*LUT*) digital architecture.

The analog video signal is converted into a digital one that proceeds to the *LUT* and to the delay circuit, in order to establish the best relationship among the samples. In other words, we must achieve a better synchronism; at that point, the adjustment algorithm compares the samples from the correlator, and the update algorithm performs the updating of the *LUT*.

An initial architecture for the pre-distortion system can be visualized through the block diagram shown in **Figure 1**. Although there are many parameters to adjust, it is of great importance to make sure that we are comparing the samples at the same instant of time. A program can be developed in the C++ language to control the whole system, and this initial idea proved highly effective in the linearization as well as in the correction of the amplitude.

**Figure 1.** Block diagram: digital pre-distortion system.

**Figure 2.** An initial hardware architecture: pre-distortion system.

At the same time, our hardware implementation must be designed to operate with distinct input digital signals, *I* (*Phase*) and *Q* (*Quadrature*), such that it is possible to correct both the amplitude and the phase of each one. An initial whole architecture for the Module of Programmable Logic (*MLP*), the *PLD* board, and the converters can be visualized in **Figure 2**. At first, the control is done through the C++ software; as the project progresses, it is possible to implement the control through a digital signal processing (*DSP*) device, or through a microcontroller placed in the same architecture alongside the *A/D* and *D/A* converters.
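The adaptive LUT loop described in this subsection can be modeled in software. The Python fragment below is only an illustrative toy, not the chapter's implementation (the real system runs in a PLD under C++ control; the PA model, LUT depth, and step size here are hypothetical): a LUT initialized to the identity is repeatedly corrected from feedback samples until the cascade LUT → PA is close to linear.

```python
# Toy software model of the adaptive LUT pre-distortion loop
# (all values hypothetical, for illustration only).

def pa(v):
    """Toy power-amplifier model with gain compression."""
    return v - 0.2 * v ** 3

N = 64                                 # LUT depth
lut = [i / N for i in range(N)]        # identity initialization
MU = 0.5                               # update-algorithm step size

for _ in range(200):                   # continuous update routine
    for i in range(N):
        target = 0.8 * i / N           # desired linear output, kept
                                       # inside the PA's output range
        out = pa(lut[i])               # feedback sample from the PA
        lut[i] += MU * (target - out)  # correct the stored value

# After convergence, the cascade LUT -> PA is close to linear.
err = max(abs(pa(lut[i]) - 0.8 * i / N) for i in range(N))
print(err)  # small residual error
```

The same idea carries over to the hardware loop: the update algorithm compares feedback samples against the desired linear response and refreshes the stored LUT entries continuously.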

#### **2.3. An improvement of the hardware architecture**


The linearization of the signals is of great importance because, independently of the distinct digital *TV* standards that have been adopted in many countries, the systems are very sensitive to nonlinear distortions. These effects have always demanded the implementation of circuits that can provide an efficient linearization [9–15].

With respect to the analysis of both *I* and *Q* complex signals, the new linearizer circuit must allow a better performance for the whole architecture. In order to minimize both *Phase* (*AM-PM*) and *Amplitude* (*AM-AM*) distortions, the ideal solution would be to combine the power amplifier with a digital pre-distortion system, in order to obtain an acceptable inter-modulation level at the output. In this project, we concentrate our choice on *FPGA* electronic architectures.

#### **2.4. Experimental measurements in accordance with a set of compatible simulations**

At first, it is important to emphasize that the *FPGA* architecture works like a *LUT* that stores the input data and permits a comparison of all of them with the incoming set of feedback data. As this comparison is performed, the numerical codes inserted in the digital architecture supply the *LUT* with a new set of values to be stored, making this process a constant and continuous routine.

For the flow of data through the entire architecture, we can use the *Matlab* platform to generate the input signal with *two tones* [10–12]. This function can be accomplished by the first *FPGA* structure; in sequence, we pass the signal through the pre-distortion processor. The output signal enters the second *FPGA* architecture, whose essential function is to simulate the power amplifier. After the amplification is performed, we extract the feedback signal, which is compared with the signal to be transmitted; as a direct consequence, it is possible to apply all corrections demanded by the project to both signals.

An update of the first architecture exhibited previously can be visualized in **Figure 3**.

**Figure 3.** An improvement of the first hardware architecture.

In **Figure 4(a–c)**, we can visualize the input signal, the signal distorted by the amplifier, and the error vector, respectively; the error vector corresponds to the subtraction of the two previous signals at the same time *t*.

**Figure 4.** Some of the most important signals detected in some points of the architecture, plotted with the *MatLab* as a function of the time *t* (*s*). (a) *Input* Signal. (b) *Distorted* Signal. (c) *Error* Vector.

The *linear regression method* combined with *linear correlation* using the *FPGA* technology permits adjusting the linear function *y* = *f*(*x*) = *ax* + *b*. The reverse effect for the distortion is shown in **Table 2**, using as a numerical approach the inverse linear function *x* = *g*(*y*) = (1/*a*)·(*y* − *b*), where the parameters *a* and *b* are defined as in Section 2.1; the numerical data are presented in distinct rows.

For this case, if we consider the entire set of ideal values, the coefficients defined in Section 2.1 acquire the values *x̄* = 48.5, *ȳ* = 48.39, *a* = 1.1837, and *b* = −9.02. In addition, **Figure 5** shows the transition of the most important curves with respect to the ideal linear performance of the amplifier, adopting this numerical approach for the adaptive pre-distortion linearization.


**Table 2.** *Up-Date* of the input signals.
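With the coefficients reported above (*a* = 1.1837, *b* = −9.02), the inverse mapping of Section 2.1 can be verified in a few lines of Python (variable names are ours, for illustration):

```python
A, B = 1.1837, -9.02   # coefficients reported for the ideal data set

def f(x):              # fitted linear response  y = a*x + b
    return A * x + B

def g(y):              # inverse mapping         x = (1/a)*(y - b)
    return (y - B) / A

# g undoes f, so applying g as a pre-distortion before a linear
# block restores the original input.
for y in (10.0, 48.39, 90.0):
    assert abs(f(g(y)) - y) < 1e-9

print(g(48.39))  # recovers a value close to the mean input, 48.5
```

This is exactly the role of the inverse linear function in Table 2: each distorted value is mapped back toward the ideal input before amplification.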


**Figure 5.** An improvement of the hardware architecture.

**Figure 6** presents one of the most important *FPGA* architectures used for all experimental measurements.

**Figure 6.** One FPGA programmable logic module architecture.

**Figure 7.** The entire pre-distortion digital architecture.

Finally, **Figure 7** shows the entire architecture implemented for the experimental tests of modern HDTV transmission equipment. In this Section, we presented a programmable logic module architecture whose main function is to control the linearization process, which allows us to acquire more knowledge about how to set up the programming codes more easily [13–15].

It is easy to confirm that the nonlinear behavior of the power amplifiers provokes harmonic distortion as well as inter-modulation products as a function of how many carriers are used to set up the system. This experience gives the designer more background in linearization approaches and in the use of appropriate software for the adaptive digital pre-distortion techniques.

### **3. Distortions for *ISDB-T* OFDM signals for RF power amplifier architectures**

The essential purpose of this Section is to present some recent developments for many types of digital modulation capable of transmitting large amounts of data, as occurs for the *ISDB-T* standard. In this case, the carrier combination can generate a considerable number of voltage spikes in the time domain, depending on the carrier phases.

Experimentally, some important parameters, like the figure of merit and the peak-to-average ratio (PAR), assume an important role in improving our understanding of how to determine the 1 (dB) compression point for a wide class of AB power amplifiers. Depending on how efficiently the architecture is implemented, it is possible to confirm that a large back-off of all amplifiers used in the *ISDB-T* transmitter chain needs to be applied, in comparison with the case of an analog broadcast transmission.

Normally, the gain compression is caused by the inter-modulation characteristics of the transistors, with a major influence of the third harmonic component. The inter-modulation distortion (*IMD*) can be easily measured when more than one carrier is used as the PA input signal, such as two tones with equal amplitude. When a two-tone test signal is applied to an amplifier, it is possible to observe several inter-modulation components in the output spectrum, generated by the transistor's nonlinear behavior.

For an *ISDB-T* standard Orthogonal Frequency Division Multiplexing (OFDM) signal with thousands of carriers, the distortion at the fundamental zone appears as a set of multiple distortions, and the bandwidth of the distortion is related to the signal bandwidth. Therefore, it is possible to establish relationships between the signal bandwidth and the second- and higher-order distortions for a specified fundamental frequency of the system. In addition, it is possible to identify the zones corresponding to odd or even distortions, some of which can be removed using filters [17, 19, 23].

Another set of important parameters and effects must be controlled in a system with memory: low- and high-frequency effects, thermal effects, trapping effects, biasing circuits, Automatic Gain Control (*AGC*) loops, transistor effects (transit time and parasitic reactances), and matching networks (group delay). The amount of memory is defined as the time between the origin of the kernel and the point where it passes through zero, and it is directly dependent on the transmitted signal bandwidth.


Some particular cases can be analyzed for RF power amplifiers in order to obtain more information about the general classification of memory effects into linear and nonlinear ones, as well as about *memoryless* systems. Thus, it is possible to confirm the origin of linear memory and that nonlinear memories are generated by the nonlinear dynamic interactions of the input signal. Another important experimental procedure confirms that a practical PA output signal has both linear and nonlinear memory, and an *ISDB-T* OFDM signal that occupies a fixed bandwidth with up to thousands of carriers will cause more memory effects in a PA than an analog signal, which has a small number of carriers [20–22, 24].

#### **3.1. Fundamental principles over the clipping and inter-modulation distortion effects**

Generally, there are many types of digital modulation capable of transmitting large amounts of data. If the OFDM technique is used, the signal can contain thousands of carriers. Depending on the phases of these carriers, their combination can result in a signal with a high *Crest Factor*, demanding very high output power from the system PA. The system then becomes inefficient.

It is possible to quantify the magnitude of this *Crest Factor* through the peak-to-average ratio (PAR), defined by:

$$PAR = 20 \log\left( \left| X\_{PEAK} \right| / X\_{Av} \right) \tag{4}$$

where *XPEAK* and *XAv* are the maximum and average values (V) of the signal, respectively. The *XPEAK* value can be measured with an oscilloscope with enough bandwidth to capture the signal in the time domain. The *XAv* value can be measured using a wattmeter and later converting this value to voltage, assuming a 50 (Ohms) load. The PAR of some commonly used signals is: 3 (dB) for a continuous wave (CW), 3.5–4 (dB) for *QPSK*, and approximately 12 (dB) for the *ISDB-T* standard OFDM.
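Equation (4) can be checked for the CW case quoted above: for a sinusoid, the peak-to-RMS ratio is √2, which gives roughly 3 (dB). A minimal Python sketch, assuming the average value is taken as the RMS value:

```python
import math

def par_db(x_peak, x_avg):
    """Peak-to-average ratio, Eq. (4): PAR = 20*log10(|Xpeak| / Xavg)."""
    return 20.0 * math.log10(abs(x_peak) / x_avg)

# Continuous wave: peak / RMS = sqrt(2), so PAR is about 3 dB.
print(round(par_db(math.sqrt(2.0), 1.0), 2))  # 3.01
```

The same function applied to a measured peak voltage and a wattmeter-derived average voltage reproduces the PAR values listed in the text.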

**Figure 8.** Several components of inter-modulation generated by a *two-tone* test signal applied to a PA.

Inter-modulation can be easily measured when more than one carrier is used as the PA input signal, such as two tones with equal amplitude. When a *two-tone* test signal is applied to an amplifier, it is possible to observe several *IMD* components in the output spectrum, generated by the transistor's nonlinear behavior, as seen in **Figure 8**.
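The appearance of IMD products in a two-tone test can be reproduced with a toy memoryless model. The Python sketch below (an illustration, not the chapter's measurement setup) passes two tones through a weak cubic nonlinearity and evaluates the DFT at the third-order product frequencies 2*f1* − *f2* and 2*f2* − *f1*:

```python
import cmath, math

# Two equal-amplitude tones through a weak memoryless cubic
# nonlinearity; frequencies are expressed in bins of an N-point DFT.
N, f1, f2 = 256, 20, 23
x = [math.cos(2 * math.pi * f1 * n / N) + math.cos(2 * math.pi * f2 * n / N)
     for n in range(N)]
y = [v + 0.1 * v ** 3 for v in x]   # toy PA: linear term + cubic distortion

def dft_mag(s, k):
    """Magnitude of bin k of the N-point DFT of s, normalized by N."""
    return abs(sum(s[n] * cmath.exp(-2j * math.pi * k * n / N)
                   for n in range(N))) / N

print(dft_mag(y, 2 * f1 - f2))   # third-order IMD product (nonzero)
print(dft_mag(y, 2 * f2 - f1))   # third-order IMD product (nonzero)
print(dft_mag(x, 2 * f1 - f2))   # absent at the input (essentially zero)
```

The products at 2*f1* − *f2* and 2*f2* − *f1* fall close to the original tones, which is why they are the troublesome components in Figure 8.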

**Figure 9.** Typical gain curve of a class AB power amplifier.

**Figure 10.** Distortion generated by a PA when excited by an *ISDB-T* standard signal.


By adopting this procedure, if a class AB PA has a 1 (dB) compression point at 10 (*dBm*), as shown in **Figure 9**, an OFDM signal in the *ISDB-T* standard can be amplified up to a maximum input power of −2 (*dBm*) using this PA, without compression of the signal peaks. Therefore, a large *back-off* of all amplifiers used in the *ISDB-T* transmitter chain needs to be applied, in comparison with the case of an analog broadcast transmission. The gain compression is caused by the inter-modulation (IMD) characteristics of the transistors, with a major influence of the third harmonic component.

In an *ISDB-T* standard OFDM signal with thousands of carriers, the distortion at the fundamental zone appears as a set of multiple distortions, as in **Figure 10**. The bandwidth of the distortion is related to the signal bandwidth. Thus, if a signal has a bandwidth of 5.6 (MHz) at the fundamental frequency, the bandwidth of the second-order distortion will be (2)\*(5.6) = 11.2 (MHz), the third-order distortion will have a bandwidth of (3)\*(5.6) (MHz), and so on. The odd-order distortions lie inside the fundamental zone, requiring a different treatment than the even-order ones, which lie outside the fundamental and at the DC zone and can be removed using filters.
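The bandwidth rule above is simple arithmetic; a short Python sketch tabulates the distortion-zone bandwidths and their odd/even classification:

```python
BW = 5.6  # MHz: ISDB-T signal bandwidth at the fundamental frequency

# The n-th order distortion occupies n times the signal bandwidth;
# odd orders fall inside the fundamental zone, even orders outside.
for order in range(2, 6):
    zone = ("inside the fundamental zone" if order % 2
            else "outside the fundamental (removable by filters)")
    print(f"order {order}: {order * BW:.1f} MHz, {zone}")
```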

A possibility to avoid IMD occurrence is to perform *back-off*, as stated before. This procedure obviously increases costs in the transmission system, due to an increased number of PAs, but it guarantees a very high quality of the transmitted *ISDB-T* signal, measured by the MER in (dB), for which excellent levels are above 40 (dB). On the other hand, if high values of *back-off* are used, the PA efficiency is affected, as the amplifier will operate with less power than it was designed for. Therefore, a careful trade-off between *back-off*, or high values of MER, and PA efficiency has to be established.

#### **3.2. About the intrinsic dependence of memory effects**

A system with memory is one in which the output signal depends on past states of the input signal. The memory of a system can be estimated through the observation of the first-order *Volterra* kernel. The amount of memory is defined as the time between the origin of the kernel and the point where it passes through zero, and it is directly dependent on the transmitted signal bandwidth.
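The definition above can be illustrated with a toy first-order kernel. In the Python sketch below, the kernel is a hypothetical damped sine (the sample rate and time constants are ours, not measured data); the "amount of memory" is read off as the time from the origin to the first zero crossing:

```python
import math

# Hypothetical first-order Volterra kernel: a damped sine impulse response.
FS = 1.0e6                       # sample rate (Hz), assumed
TAU, F0 = 2.0e-6, 100.0e3        # decay time constant and oscillation frequency
h = [math.exp(-n / (FS * TAU)) * math.sin(2 * math.pi * F0 * n / FS)
     for n in range(64)]

def memory_time(kernel, fs, eps=1e-9):
    """Time from the kernel origin to its first zero crossing (seconds)."""
    for n in range(1, len(kernel)):
        if kernel[n - 1] > eps and kernel[n] <= eps:
            return n / fs
    return len(kernel) / fs

print(memory_time(h, FS))   # 5e-06 s: half a period of F0
```

A wider transmitted signal bandwidth corresponds to a faster kernel and hence a shorter memory time, in line with the dependence stated above.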

Let us consider the particular case of an RF power amplifier, which has a complex structure and presents many kinds of memory. For our experimental procedure, we adopt a classification in accordance with the band of frequencies, in the form:


**A.** *Low frequency* (kHz to MHz): Thermal effects, trapping effects, biasing circuits, AGC loops.

**B.** *High frequency* (GHz): Transistor (transit time and parasitic reactances), matching networks (group delay).

These memories are mixed together in a nonlinear coupled power amplifier, and the problem of estimating behavioral models becomes very difficult. Memory effects can be broadly classified into linear and nonlinear ones, while a *memoryless* system is characterized by an output signal envelope that follows the variations of the input signal envelope.

**Figure 11.** Plot of an *AM/AM* characteristic curve. The center line illustrates an estimate of the *memoryless* amplifier behavior.

**Figure 12.** Plot of an *AM/PM* characteristic curve. The values close to the origin of the abscissa axis may include some synchronization errors as well as measurement noise.

Matching networks are the origin of linear memory, while nonlinear memories are generated by the nonlinear dynamic interactions of the input signal. A practical RF PA output signal has both linear and nonlinear memory, and an *ISDB-T* OFDM signal that occupies a bandwidth of 5.7 (MHz) with up to 8,000 carriers will cause more memory effects in a PA than an analog signal, which has only three carriers.

The effects caused by a system with memory on a signal can be easily viewed through the *Amplitude/Amplitude* (*AM/AM*) plot in **Figure 11** and the *Amplitude/Phase* (*AM/PM*) plot in **Figure 12**, where the central line shows a *memoryless* amplifier behavior. The spreading of points for a particular instantaneous power indicates more memory, although points close to the origin of the abscissa axis may include synchronization errors as well as measurement noise. These plots were obtained from time-domain measurements performed by a Vector Signal Analyzer (*VSA*). Another memory definition found in the literature includes a "*time*" interval between the *AM/AM* and *AM/PM* curves.

The IMD effects mixed with the memory effects of a PA can generate distortions that seriously degrade the output signal, which can make the task of achieving the required ISDB-T norm mask more difficult. As every amplifier presents distortion, the only way to produce cost-effective and efficient amplifiers is to implement methods to compensate or correct these distortions, as presented in the next section.

#### **3.3. Correction of distortions and the technology of the compensation methods**

Distortions caused in the amplitude of a signal can be corrected basically with *back-off*, the simplest and least efficient method. If linearization is used, many methods can be applied. The most frequently used ones are *feed forward* and *pre-distortion*, with pros and cons listed in our initial references [16, 17, 19, 20, 23].

Choosing the right model is the key for the compensation techniques to experimentally control a wide class of distortions generated intrinsically by the RF power amplifier. An efficient architecture for experimental measurements can be built using a basic platform that allows the simulation of several models together with *ISDB-T* OFDM signal generation. The board architecture includes essential devices such as *A/D* and *D/A* converters, an FPGA architecture that implements the *Mux* and *Demux* for *Trellis-Encoder* and *Reed-Solomon* digital communications, memories, and other circuits, as presented in **Figure 13**.

A fundamental step in the hardware architecture setup was the construction of a filter with excellent sideband attenuation; Surface Acoustic Wave (*SAW*) filters normally present problems regarding pass-band ripple, high insertion loss, and high cost. Using FPGA and DSP techniques and digital filters, this problem was solved. In addition, it was also possible to compare two distinct theoretical and experimental approaches, the *Weaver* and *Frequency Complex* methods combined with DSP techniques, which permit generating *AM-VSB* analog TV and *8-VSB* digital TV signals through a simplified architecture [6–8, 23, 25, 26].

This *FPGA* structure was used to implement the nonlinear experimental approach discussed in this chapter and to achieve the best results for the pre-distortion. The architecture was designed, implemented, tested, and put into industrial production as part of the first HDTV transmission equipment in Brazil for the *ISDB-T* standard, and the electronic structure can also serve other modulation standards.

**Figure 13.** The implemented architecture used to check both theoretical and experimental approaches.

The approach discussed in this Section reveals the different distortions that occur in an RF PA input signal, specifically the *ISDB-T* OFDM signal, and their main sources. Further on, common compensation methods were discussed: *back-off*, linearization using *feed forward*, and pre-distortion, including the costs involved in each method. For the design of a good-quality, efficient, and cost-effective transmitter, a careful study of the distortion generated by the RF amplifier has to be performed. The architecture was designed such that the pre-distortion methods were implemented in practice, and the correct compensation method can be chosen in order to obtain the highest efficiency allied to high quality [7–9, 14, 17, 23, 25, 26].

**Figure 14.** Examples of models including linear and nonlinear blocks. The *Wiener* model contains a filter before the nonlinearity block, and the *Hammerstein* model has the nonlinear block first.

Considering that only models that include memory in their architectures can be effective against the distortions generated by modern RF PAs, **Figure 14** presents an example of the block diagrams proposed by *Wiener* and *Hammerstein*. Normally, this approach is adopted in accordance with the compensated distortion, which varies as a function of the imposed bandwidth, and also with the complexity of the adopted pre-distortion structure.
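The two block structures in Figure 14 can be contrasted in a few lines. The Python sketch below uses a toy filter and a toy nonlinearity, chosen only for illustration, to show that the order of the linear and nonlinear blocks changes the output:

```python
# Minimal discrete-time sketch of the two block models of Figure 14:
# Wiener = linear filter -> static nonlinearity;
# Hammerstein = static nonlinearity -> linear filter.
H = [0.9, 0.1]                      # toy FIR filter (one tap of memory)

def fir(x):
    return [H[0] * x[n] + H[1] * (x[n - 1] if n else 0.0)
            for n in range(len(x))]

def nl(v):                          # toy static (memoryless) nonlinearity
    return v - 0.1 * v ** 3

def wiener(x):
    return [nl(v) for v in fir(x)]

def hammerstein(x):
    return fir([nl(v) for v in x])

x = [0.0, 1.0, 0.5, -0.5]
print(wiener(x))
print(hammerstein(x))   # differs from the Wiener output: order matters
```

Because the blocks do not commute, the choice between the two structures depends on where the memory of the real PA sits relative to its nonlinearity.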


### **4. An innovative nonlinear technological modeling of RF power amplifiers for HDTV architectures**

The essential purpose of this section is to present an efficient theoretical approach for modeling, with high precision, power amplifiers that have a quadratic characteristic between input voltage and output current. Although our approach refers to the *FET* device, the same ideas can be adopted for other structures such as *MOS*, *LDMOS*, and similar ones. Throughout the section, we assume that the signal works in the active region of the transistor, which implies that the input voltage *Vgs* is bigger than the threshold *Vt*. Some particular cases can be analyzed if we use the Fourier approach for modeling the drain current of the device [2–5, 15, 21, 28]. In order to compare our experimental results with our technological approach, all computational simulations are performed using a numerical platform well suited to this kind of nonlinear analysis, the software *Advanced Design System* (*ADS*), which displays the harmonic spectrum of any output signal with high resolution.

To get a more complete idea of this theoretical approach, we must consider the presence of signals of large amplitude, for which saturation of the device occurs and, in some cases, occurs abruptly. For that case, the most convenient modeling is a piecewise quadratic characteristic with multiple segments. Therefore, we must extend our initial analysis to the Fourier series that represents a *pulse train* with distinct sinusoidal peaks for the current amplitudes.

This theoretical modeling permits us to calculate the coefficients of the Fourier cosine series for each train of pulses, which results in a new series built from the coefficients found in the previous analysis. The study is strongly motivated by new power architectures, since such equipment imposes stringent requirements on the design of all hybrid circuits in order to obtain more stability in both *AM/AM* and *AM/PM* digital pre-distortion corrections in real time. The *ADS* platform permits us to visualize the harmonic components of higher order, so that all numerical results can be compared with those obtained with our hardware implementation.

Our technological approach is tested against experimental data obtained from an electronic architecture built at the HDTV systems laboratory of a company with a wide class of power amplifiers in industrial production. This device was designed to validate our theoretical approach through a complete set of experimental measurements, and the RF amplifier was implemented using an *LDMOS MRF 9060* component.

#### **4.1. Quadratic behavior of the** *FET* **transistor**

We start by assuming a quadratic characteristic between input voltage and output current, which approximates a *FET* device with good precision, such that an input voltage *VGS* controls a current source *Id* [15, 28–30]. In this case, we have a quadratic behavior of the type:

$$I\_d(t) = \frac{I\_{DSS}}{V\_T^2}\left(V\_{GS} - V\_T\right)^2, \quad V\_{GS} = V\_b + V\_1\cos\omega t \tag{5}$$

where *IDSS* is an intrinsic parameter of the transistor, *VT* the threshold voltage, *VGS* the gate-to-source voltage, *Vb* the bias voltage, and *V*<sup>1</sup> the peak voltage. We always assume the signal works in the active region of the transistor, which implies that the input voltage *VGS* is bigger than the threshold *VT*; we also define *Vx* = *VT* − *Vb*. Combining the previous definitions, we can derive the equation:

$$I\_d(t) = \left(\frac{I\_{DSS}}{V\_T^2}\right)\left(\frac{V\_1^2}{2} + \frac{V\_1^2\cos 2\omega t}{2} - 2 V\_1 V\_x\cos\omega t + V\_x^2\right) \tag{6}$$

It is easy to verify that an expansion in Fourier series for *Id(t)* has only three terms, and they are:

$$I\_d(t) = I\_0 + I\_1\cos\omega t + I\_2\cos 2\omega t \tag{7}$$

If we compare the expression expanded in (6), with the *Fourier* series shown in (7), it is easy to check that:

$$I\_0 = \frac{I\_{DSS}}{V\_T^2}\left(V\_x^2 + \frac{V\_1^2}{2}\right), \quad I\_1 = -2\,\frac{I\_{DSS}}{V\_T^2}\,V\_x V\_1, \quad I\_2 = \frac{I\_{DSS}}{V\_T^2}\,\frac{V\_1^2}{2} \tag{8}$$

To check the efficiency of this approach, we consider an analytical model with the following set of parameters: *IDSS* = 16 (*mA*), *VT* = −4 (*V*), *Vb* = −2 (*V*), and *V1*.cos(*ω.t*) = 2.cos(4*π*.10<sup>7</sup>.*t*) (*V*). Thus, making the calculations in accordance with Eq. (6), the current *Id*(*t*) in (*mA*) is given by:

$$I\_d(t) = 6 + 8\cos(4\pi\cdot 10^7\, t) + 2\cos(8\pi\cdot 10^7\, t) \tag{9}$$
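The coefficients of Eq. (9) can be cross-checked numerically (a sketch, not part of the original experiments): build the quadratic drain current of Eq. (5) with the parameters above and extract its Fourier coefficients over one period.

```python
import numpy as np

# Parameters of the worked example (Section 4.1)
IDSS = 16e-3                       # A
VT, Vb, V1 = -4.0, -2.0, 2.0       # V
w = 4e7 * np.pi                    # rad/s

t = np.linspace(0.0, 2 * np.pi / w, 4096, endpoint=False)
Vgs = Vb + V1 * np.cos(w * t)
Id = IDSS / VT**2 * (Vgs - VT) ** 2        # quadratic law, Eq. (5)

I0 = Id.mean()                             # DC term
I1 = 2 * (Id * np.cos(w * t)).mean()       # fundamental
I2 = 2 * (Id * np.cos(2 * w * t)).mean()   # 2nd harmonic

print([round(x * 1e3, 3) for x in (I0, I1, I2)])  # [6.0, 8.0, 2.0] (mA), as in Eq. (9)
```

The averages over a uniform full-period grid reproduce the analytical coefficients exactly, confirming that the quadratic device generates only a DC term, the fundamental, and the second harmonic.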

**Figure 15.** *Amplitude* versus *Frequency* of the output current signal.


**Figure 15** shows the harmonic spectrum of the output signal obtained with the *ADS* software. It is easy to see that there is very good agreement between the simulation results and those calculated with the analytical model proposed in this chapter.

**Figure 16(a)** shows all the equipment used in the hardware implementation, whose essential purpose was to validate the quadratic characteristic of the amplifier presented in this subsection. The graph in **Figure 16(b)** depicts the output current signal, showing in detail the spectral components of first and second order.

**Figure 16.** The architecture designed for experimental measurements—power RF amplifier. (a) Equipment used for experimental measurements. (b) Output current signal of the amplifier.

#### **4.2. An innovative analytical approach in the presence of the saturation effect**

Typically, in the presence of large signals, it is possible to detect the occurrence of saturation, which in some cases occurs abruptly. In this case, the most convenient modeling is a piecewise quadratic characteristic with multiple segments, and we can adopt the same approach as in Section 4.1, with *Vy* and *Vx* the maximum and minimum limits for the fluctuation of the signal. Starting from the Fourier cosine series, this model can be found by direct subtraction of the series that represent pulse trains with sinusoidal peaks of amplitude *Ip1* and *Ip2*.

In addition, if the conduction angles are *2φ1* and *2φ2*, we obtain a new series whose coefficients correspond to the subtraction of the original Fourier coefficients of each pulse train. Now, assuming the quadratic behavior of the device and imposing limits of fluctuation on the signal, we can express the peak currents and drain currents of the device through the analytical expressions:

$$I\_{p1} = \frac{I\_{DSS}}{V\_T^2}\left(V\_1 - V\_x\right)^2, \quad I\_{d1} = \frac{I\_{DSS}}{V\_T^2}\left(V\_1\cos\omega t - V\_x\right)^2, \quad \varphi\_1 = \cos^{-1}\frac{V\_x}{V\_1} \tag{10}$$

$$I\_{p2} = \frac{I\_{DSS}}{V\_T^2}\left(V\_1 - V\_y\right)^2, \quad I\_{d2} = \frac{I\_{DSS}}{V\_T^2}\left(V\_1\cos\omega t - V\_y\right)^2, \quad \varphi\_2 = \cos^{-1}\frac{V\_y}{V\_1} \tag{11}$$

In this case, the Fourier cosine series as a function of *θ* has the following representation:

$$f(t) = a\_0 + \sum\_{n=1}^{\infty} a\_n\cos n\omega t, \quad f(t) = I\_d(t), \quad a\_0 = \frac{1}{\pi}\int\_0^{\pi} f\!\left(\frac{\theta}{\omega}\right)d\theta, \quad a\_n = \frac{2}{\pi}\int\_0^{\pi} f\!\left(\frac{\theta}{\omega}\right)\cos(n\theta)\,d\theta \tag{12}$$

We develop the Fourier series for both the *Id1* and *Id2* currents, where the index *J* is associated with the current *IdJ* to be determined. Therefore, for *J* ∈ {1, 2}, the iterative process has the following steps:

$$I\_{(J)0} = \frac{I\_{pJ}}{\pi}\int\_0^{\varphi\_J}\left(\frac{\cos\theta - \cos\varphi\_J}{1 - \cos\varphi\_J}\right)^2 d\theta \;\Rightarrow\; I\_{(J)0} = \frac{I\_{pJ}}{\pi}\cdot\frac{\varphi\_J - \frac{3}{4}\sin 2\varphi\_J + \frac{\varphi\_J}{2}\cos 2\varphi\_J}{\left(1 - \cos\varphi\_J\right)^2} \tag{13}$$

$$I\_{(J)1} = \frac{2 I\_{pJ}}{\pi}\int\_0^{\varphi\_J}\left(\frac{\cos\theta - \cos\varphi\_J}{1 - \cos\varphi\_J}\right)^2\cos\theta\, d\theta \;\Rightarrow\; I\_{(J)1} = \frac{2 I\_{pJ}}{\pi}\cdot\frac{\frac{3}{4}\sin\varphi\_J - \varphi\_J\cos\varphi\_J + \frac{1}{12}\sin 3\varphi\_J}{\left(1 - \cos\varphi\_J\right)^2} \tag{14}$$

$$I\_{(J)n} = \frac{2 I\_{pJ}}{\pi}\cdot\frac{(4 - n^2)\sin(n\varphi\_J) + 3n\sin[(n-2)\varphi\_J] + (n-1)(n-2)\sin(n\varphi\_J)\cos 2\varphi\_J}{n\,(n^2 - 1)(n^2 - 4)\left(1 - \cos\varphi\_J\right)^2}, \quad n > 2 \tag{15}$$

Finally, we can obtain the iterative current expressed analytically through the equation:

Innovative Theoretical Approaches Used for RF Power Amplifiers in Modern HDTV Systems http://dx.doi.org/10.5772/63547 231

$$I\_{dJ}(t) = I\_{(J)0} + \sum\_{n=1}^{\infty} I\_{(J)n}\cos(n\omega t) \tag{16}$$

In addition, we can calculate the Fourier coefficients for both pulse trains, which permits us to obtain a new series from the subtraction of the coefficients found in Eq. (16), represented by:

$$I\_d(t) = \left[I\_{(1)0} - I\_{(2)0}\right] + \sum\_{n=1}^{\infty}\left[I\_{(1)n} - I\_{(2)n}\right]\cos(n\omega t) \tag{17}$$

To further validate the analytical approach proposed here, we make a comparison similar to the one performed for the previous model. In this second model, we adopt the following characteristics for the device: *IDSS* = 16 (*mA*), *Vx* = −4 (*V*), *Vy* = 4 (*V*), *Vb* = *0* (*V*), and *V1*.cos(*ω.t*) = 5.cos(4*π*.10<sup>7</sup>.*t*) (*V*). The output current for this case has the spectrum depicted in **Figure 17(a)**, and the simulations performed with the *ADS* software permit us to visualize the higher-order harmonic components. **Figure 17(b)** presents a table in which very good agreement can be seen between the simulation results and those established with the theoretical model, indicating that this approach is very efficient and can serve as a tool in the design of RF amplifiers. Our professional experience confirms that the technological modeling proposed in this subsection can serve a wide class of projects involving modern transmission equipment, subject to the continuous and rigorous demands imposed by the international market.
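As an independent sanity check (a sketch, not part of the original experiments; *Ip* = 1 and the conduction half-angle below are arbitrary test values), the closed forms of Eqs. (13)–(15) can be compared with direct numerical integration of the pulse shape implied by Eqs. (10) and (12):

```python
import numpy as np

def pulse_harmonic(phi, n, Ip=1.0, N=200_000):
    # Midpoint-rule Fourier coefficient (Eq. 12) of one pulse of the form
    # f(theta) = Ip*((cos(theta) - cos(phi))/(1 - cos(phi)))**2 on [0, phi],
    # zero elsewhere on [0, pi].
    th = (np.arange(N) + 0.5) * (phi / N)
    f = Ip * ((np.cos(th) - np.cos(phi)) / (1 - np.cos(phi))) ** 2
    weight = (1.0 if n == 0 else 2.0) / np.pi
    return weight * np.sum(f * np.cos(n * th)) * (phi / N)

def closed_form(phi, n, Ip=1.0):
    # Closed forms of Eqs. (13)-(15); n = 2 is excluded, since the general
    # denominator contains the factor n**2 - 4.
    d = (1 - np.cos(phi)) ** 2
    if n == 0:
        return Ip / np.pi * (phi - 0.75 * np.sin(2 * phi)
                             + 0.5 * phi * np.cos(2 * phi)) / d
    if n == 1:
        return 2 * Ip / np.pi * (0.75 * np.sin(phi) - phi * np.cos(phi)
                                 + np.sin(3 * phi) / 12) / d
    num = ((4 - n ** 2) * np.sin(n * phi) + 3 * n * np.sin((n - 2) * phi)
           + (n - 1) * (n - 2) * np.sin(n * phi) * np.cos(2 * phi))
    return 2 * Ip / np.pi * num / (n * (n ** 2 - 1) * (n ** 2 - 4) * d)

phi = np.arccos(-0.8)   # a representative conduction half-angle
print(all(abs(pulse_harmonic(phi, n) - closed_form(phi, n)) < 1e-6
          for n in (0, 1, 3, 4, 5)))  # True
```

The agreement between the integral and the closed forms confirms the coefficient expressions before they are combined by subtraction in Eq. (17).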


**Figure 17.** Analytical and numerical analysis of the saturation effect for the power RF amplifier. (a) *Amplitude* versus *Frequency* of the output current. (b) Analytical model and the results of simulation.

In conclusion, **Figure 18** presents some photographs of the device implemented in the Company's laboratory [7–14, 25, 26, 34, 35, 40]. Item (a) shows a zoomed view of the *RF* amplifier built using the *LDMOS* MRF 9060 transistor, while item (b) presents a complete view of the electronic architecture used for all the experimental measurements detailed in **Figure 16(b)**.

**Figure 18.** Photographs of the hardware architecture for the power RF amplifier. (a) The *LDMOS 9060* electronic structure. (b) Power RF amplifier (*10 W*) at the 3rd band.

#### **4.3. An appropriate theoretical approach for the amplifier with a source resistance**

This approach starts by concentrating our analysis on the source resistance effect for the circuit depicted in **Figure 19**. In this case, we must obtain all the analytical equations that describe the functioning of the model; as a consequence, it is possible to verify that the voltage *Vin* is not applied directly to the gate-source voltage *VGS*. Only part of the voltage appears at the gate-source input of the FET, and the complementary voltage is dropped across the resistance *R*. Eq. (18) presents the analytical dependence between these parameters:

$$V\_{in} = V\_r + V\_{GS} = I\_d\,R + V\_{GS} \tag{18}$$

In accordance with the previous studies done by Cantrell and Davis, we can define the input incremental resistance expressed mathematically by:

$$r\_{in(n)} = \left(\frac{\partial V\_{in}}{\partial I\_d}\right)\_{I\_d = I\_{DC}} \;\Rightarrow\; r\_{in(n)} = \frac{\partial}{\partial I\_d}\left[I\_d\,R\right] + \frac{\partial V\_{GS}}{\partial I\_d} \tag{19}$$

where *Id* and *IDC* are, respectively, the drain and DC currents defined by the circuit polarization. Starting from the equation that defines the classic modeling of the FET structure, and after some algebraic calculations, it is possible to express the voltage *VGS* in the form:

$$V\_{GS} = \left(\frac{I\_d}{k}\right)^{1/2} + V\_T, \quad k = \left(\frac{1}{2}\right) k\_n\left(\frac{W}{L}\right) \tag{20}$$

Under these hypotheses, if we combine conveniently the last equations, both the incremental resistance and the new transconductance can be expressed by the equations:

$$r\_{in(n)} = \left(\frac{\partial V\_{GS}}{\partial I\_d}\right)\_{I\_d = I\_{DC}} = \left(\frac{1}{2k}\right)\left(\frac{I\_{DC}}{k}\right)^{-1/2}, \quad g\_{m(n)} = \left(\frac{1}{r\_{in(n)}}\right)\left[\frac{1}{1 + g\_m R}\right] = \frac{g\_m}{1 + g\_m R} \tag{21}$$

For a fixed value of the parameter *gm*, Eq. (21) shows that the new transconductance decreases as a function of the resistance *R*. In particular, if the factor (*gm*).*R* >> 1, in the limit this expression depends only on *R*; the transconductance then becomes independent of any change in the polarization current, that is, *gm*(*n*) = 1/*R*.
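This limit can be checked with a few lines of code (a sketch; the values below match the example worked out later in Section 4.6, *gm* = 86 mS and *R* = 500 Ω, for which (*gm*).*R* = 43):

```python
# Source degeneration, Eq. (21): effective transconductance vs. resistance R.
def gm_new(gm, R):
    return gm / (1.0 + gm * R)

gm = 86e-3  # S
for R in (0.0, 10.0, 500.0):
    print(f"R = {R:5.0f} ohm -> gm_new = {gm_new(gm, R) * 1e3:.3f} mS")
# With gm*R = 43 >> 1, gm_new approaches 1/R = 2 mS, independent of the bias.
```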

#### **4.4. Additional performance with respect to the effect of the feed-forward linearity**


To gain more insight into the effect of the feed-forward linearity when the electronic structure includes the passive component *R*, it becomes important to define a new transconduction curve, *Vin* versus *Id* [3–5, 10–15, 18, 28–30]. This demands a new modeling of the initial Eq. (18), using the voltage *VGS* as in Eq. (20), which yields:

$$V\_{in} - V\_T = I\_d\,R + \left(\frac{I\_d}{k}\right)^{1/2} \tag{22}$$

where *VT* is the threshold voltage. This demands a new definition of the voltage *Vc*, which indicates the dependence of *Vin* with respect to *VT* when the drain current vanishes, that is, *Id* = 0. Therefore, we get:

$$V\_c = I\_{DC}\left(R + r\_{in}\right) \;\Rightarrow\; V\_c = \left(\frac{I\_{DC}}{g\_m}\right)\left[1 + g\_m R\right] \tag{23}$$

In order to simplify the 2D plot analysis of the new transconductance, we normalize Eq. (22) by the voltage *Vc*, as a function of the parameter (*gm*).*R*, which implies that:

$$\left(\frac{V\_{in} - V\_T}{V\_c}\right) = \left(\frac{I\_d}{I\_{DC}}\right)\left(\frac{g\_m R}{1 + g\_m R}\right) + \left(\frac{I\_d}{I\_{DC}}\right)^{1/2}\left(\frac{2}{1 + g\_m R}\right) \tag{24}$$

#### **4.5. Analysis in the limit condition at the presence of the feed-forward effect**

At first, **Figure 20** portrays the graph obtained for three different values of the parameter (*gm*).*R*. It is possible to observe the change from the quadratic curve toward the corresponding linear behavior of the amplifier, as well as the decrease of its gain. Some particular cases related to the real transition of the parameter (*gm*).*R* can be worked out, and each one has a very simple mathematical derivation if we look at the previous equation in the limit condition.

**Figure 19.** The FET device with the presence of the source resistance.



**Figure 20.** The 2D plot of the transconductance as a function of (*gm*).*R*.

**•** At first, it is easy to check that when (*gm*).*R << 1*, the behavior of the structure has a quadratic performance.

**•** For the case that (*gm*).*R >> 1*, we will have a linear dependence that involves all variables.

**•** It is important to emphasize that the limit condition (*gm*).*R* = *0* also characterizes the quadratic behavior, which corresponds exactly to the one presented by the FET device.

**•** On the other hand, the 2nd limit condition, (*gm*).*R* → ∞, can be reached with the linear behavior that will be analyzed in the next subsection, more specifically in the set of Eqs. (28) and (29), and is associated with the *threshold* condition of the amplifier.

**•** Generally, we must concentrate our approach on making an appropriate choice among the cases discussed in the previous items. The almost linear dependence that corresponds to the limit condition (*gm*).*R* = 1 is shown in **Figure 20**.

**•** Finally, it is important to put in evidence that the case (*gm*).*R* = 0 can be solved with the analytical expressions that permit us to detect the quadratic behavior of the FET transistor.

**Figure 21.** The *Id* harmonic components plotted with the *ADS* platform, for Vin < *V*c.

#### **4.6. An important analysis corresponding to the presence of the threshold effect**

The threshold condition, (*gm.R*) → ∞, requires us to focus our analysis on the equation *Id* = (*Vin* − *VT*)/*R*, for *Vin* > *0*, and we assume that the electronic structure has a voltage source *vin* = *Vin*.cos(*ω.t*). In this case, two different and important conditions must be analyzed:

**I.** *Vin* < *Vc*—It is easy to check that, combining this inequality with the additional condition *Vc* = (*IDC*).*R*, we can assume the linear performance of the FET, which makes it possible to derive the analytical expression: *Id* = *IDC* + [*Vin*/*R*].cos(*ω.t*).

To verify the strong dependence on the parameter *R*, we imposed in our simulations the following set of parameters for the FET structure: *gm* = 86 (*mS*), *IDC* = 5.8 (*mA*), *R* = 500 (*Ω*), and *vin* = 1.cos[(*2π.10<sup>6</sup>*).*t*] (*V*). For this fixed value of *R*, we can obtain the transconductance *gm* of the amplifier as well as its corresponding polarization, and with this particular choice of parameters the relevant operating condition is: (*gm*).*R* = 43.

In addition, the output current *Id* characterizes the linear behavior of the amplifier, that is, (*gm*).*R* >> *1*, and the drain current has the form *Id* = (5.8) + 4.cos(*ω.t*), with *ω* = 2*π*.10<sup>6</sup> (*rad/s*). **Figure 21** depicts the harmonic composition of the output current *Id*, simulated with the "*Premium Version*" of the simulation platform *Advanced Design System*—*ADS* (*Agilent Tech. Co.*). It is possible to confirm that the level of the fundamental component is reduced at the same rate that the transconductance increases. The same comparison can be performed for the quadratic behavior of the FET, observing that the 2nd harmonic component is practically nonexistent, which characterizes the *feed-forward effect*; this is one of the main reasons we detect a small reduction in the fundamental component.

**II.** *Vin* > *Vc*—In this case, the current *Id* takes the form of a periodic train of pulses with sinusoidal peaks, which can be visualized in **Figure 22**. We must then model the device using the classical Fourier cosine series.

With respect to the drain current, we can write this signal in accordance with its series representation in the form:

$$I\_d(t) = I\_0 + I\_1\cos\omega t + I\_2\cos 2\omega t + \cdots \tag{25}$$

which has the alternative analytical form:

$$I\_d(t) = I\_0\left[1 + (I\_1/I\_0)\cos\omega t + (I\_2/I\_0)\cos 2\omega t + \cdots\right] \tag{26}$$

If we use this Fourier cosine series approximation and carefully analyze the graph presented in **Figure 22**, it is possible to derive a functional dependence of the form *Ip* = *G*.[*Vin* − [*Vin* − *VT*]]. Now, assuming that (*gm*).*R >>* 1 and following the same line of ideas as before, we deduce that *G* = g*<sup>m</sup>*(*n*) = (1/*R*), which implies *Ip* = (1/*R*).*VT*.

Subsequently, after some extensive calculations, we can derive the harmonic components of the output signal using the Fourier series approach; the resulting set of equations can be synthesized in the form:


$$\frac{I\_n}{I\_0} = \frac{2\left[\cos\varphi\sin(n\varphi) - n\sin\varphi\cos(n\varphi)\right]}{n\,(n^2 - 1)\left(\sin\varphi - \varphi\cos\varphi\right)}, \quad \frac{I\_1}{I\_0} = \frac{\varphi - \sin\varphi\cos\varphi}{\sin\varphi - \varphi\cos\varphi} \tag{27}$$


$$\left(\frac{V\_{in}}{V\_c}\right) = \frac{\pi}{\sin\varphi - \varphi\cos\varphi}, \quad \varphi = \arccos\left(\frac{V\_{in} - V\_T}{V\_{in}}\right) \tag{28}$$

For an application, we adopt the same set of parameters used at the end of the last subsection. For this case, we use the input voltage *Vin* = 4 (*V*), since the condition *Vin* > *Vc* must be verified. In addition, through Eqs. (27) and (28), it is possible to determine all the harmonic components; for this example, we preserve the relation *Vin*/*Vc* = 1.372.

It is important to emphasize that we obtain a transcendental equation, which intrinsically demands a numerical approach; the solution takes the form *Id* = [5.8 + 7.13·cos(2π.10<sup>6</sup>.*t*) + 0.696·cos(4π.10<sup>6</sup>.*t*) + 0.458·cos(6π.10<sup>6</sup>.*t*) + 0.234·cos(8π.10<sup>6</sup>.*t*)]. For all numerical simulations, we can explore the resources of the ADS *software* to verify the validity of the analytical model proposed in this section, and the results for the harmonic components of the drain current *Id* are depicted in **Figure 23** [32–35].
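As a cross-check on harmonic tables such as the one produced by ADS, the harmonic amplitudes of a sampled waveform can be recovered by projecting it onto each cosine. A small sketch (pure Python; the waveform is synthesized here from the component values quoted above, so this only demonstrates the extraction mechanics):

```python
import math

# Harmonic amplitudes of Id(t) quoted in the text (fundamental at 1 MHz)
AMPS = {0: 5.8, 1: 7.13, 2: 0.696, 3: 0.458, 4: 0.234}
N = 4096  # samples over one fundamental period

def waveform(n):
    """Id at sample n, rebuilt from its cosine components (zero phase)."""
    return sum(a * math.cos(2 * math.pi * k * n / N) for k, a in AMPS.items())

samples = [waveform(n) for n in range(N)]

def harmonic(sig, k):
    """Amplitude of the k-th harmonic via discrete cosine projection."""
    n = len(sig)
    scale = 1.0 / n if k == 0 else 2.0 / n  # DC term has no factor of 2
    return scale * sum(s * math.cos(2 * math.pi * k * i / n)
                       for i, s in enumerate(sig))

recovered = {k: harmonic(samples, k) for k in AMPS}
```

The discrete orthogonality of the cosines over an integer number of periods makes the recovery exact up to floating-point error; in practice one would apply the same projection (or an FFT) to simulated or measured samples.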

A comparison between the theoretical and the simulated results is presented in **Table 3**, and a small difference can be seen between the results determined for the current *I0*. To justify these differences, we observe that there is a limitation with respect to the number of harmonic components involved in the simulation, which not only can provoke some alterations but also has consequences due to these values exceeding the voltage *VDC* = *15 V*.


**Table 3.** Comparison between theoretical and numerical results: harmonics components.

As a direct consequence, the output signal reaches the cutoff region. Furthermore, if the output signal operates outside the cutoff region, it can intrinsically be clipped, due to the limits previously established for the excursion of the signal. **Figure 24** presents the output waveform, which reaches at most the *VDC* level.

**Figure 22.** Graphic of the transconductance at the condition: *Vin* > *Vc*.

**Figure 23.** The *Id* harmonic components plotted with the *ADS* platform, for *Vin* > *V*c.

**Figure 24.** Temporal response the output voltage, for *Vin* > *V*c.


#### **5. An efficient architecture involving broadcast of RF power amplifiers**

The main focus of this section is to present an efficient in-house NTSC architecture that works as an exciter, with a central PC that controls various interfaces. With this electronic structure, it is possible to communicate with different measurement equipment, and the levels generated by the exciter are sufficient to drive the power drawer to its nominal power of 860 (Wps). All numerical codes of the central software that interfaced with and controlled the equipment were written on the *Matlab* platform. For this design, the most important interfaces are Serial *RS232*, *USB*, *GPIB*, and *LAN*, and the final software presents the easy-to-use interface that is strongly demanded nowadays in the production lines of all modern RF architectures [7–10, 12–17, 23].

#### **5.1. A wide class of important reports: principal measurements over the power amplifier**

**Figure 25** shows the amplifier on which we intend to perform experimental tests. This is a picture of part of the power drawer, showing four RF power transistors capable of delivering 860 (Wps) altogether.

Typically, the operation of this device preserves the balanced configuration and has some similarity with other works described previously in the literature. When the setup operates correctly, a wide class of different reports can be generated [20–22, 24, 36–42].

Hybrid structures were used to combine push–pull amplifiers, so we should expect a difference in the currents of the transistors, especially in the middle of the UHF band, where the coupled and the direct port present the largest dispersion from the ideal 3 *dB* value. The designer can use all reports to gain enough background on the performance of the amplifier under test, as well as to fine-tune an amplifier that is being optimized with the same setup. First, **Figure 26** shows the current of each transistor, recorded for each channel over the entire UHF band.
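Every report here sweeps the UHF television channels 14–69, so the control software needs the channel-to-frequency mapping. In the US/NTSC plan, UHF channels are 6 MHz wide starting at 470 MHz, with the visual carrier 1.25 MHz above the lower edge. A helper along these lines (names are ours) could be:

```python
def uhf_channel_edges(channel):
    """Lower/upper band edges (MHz) of a US UHF TV channel (14-69)."""
    if not 14 <= channel <= 69:
        raise ValueError("UHF TV channels run from 14 to 69")
    lower = 470 + 6 * (channel - 14)  # channel 14 starts at 470 MHz, 6 MHz wide
    return lower, lower + 6

def visual_carrier(channel):
    """NTSC visual carrier sits 1.25 MHz above the lower channel edge."""
    return uhf_channel_edges(channel)[0] + 1.25
```

Such a mapping is what lets the sweep loop step the exciter and log every instrument reading against the channel number rather than a raw frequency.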

**Figure 25.** The RF architecture implemented for experimental tests with four power amplifiers with 860 (Wps).

**Figure 26.** Currents of each transistor measured at nominal power for the UHF band (channels 14–69).

A 2D diagram of the temperature of each transistor over the entire UHF band is presented in **Figure 27**. Evidently, the transistors that were operating with less current, due to the imperfect signal division of the hybrids, presented lower temperatures; the maximal ratings, however, are important measures for verifying the air-cooling system of the drawer.


**Figure 27.** Temperatures of each transistor measured at nominal power for the UHF band (channels 14–69).

**Figure 28.** Plot of the gain (*dB*) for the UHF band (channels 14–69).

With this architecture, it is possible to produce a complete set of reports, as can be visualized in the graphic of **Figure 28**, which shows the gain (*dB*) versus the UHF television channels (14–69), while **Figure 29** depicts the inter-modulation of the 0% average power level (APL) black signal located at −4.5 (MHz).

In order to consider the stress conditions generated in the amplifier by the signal that represents the highest output power generated by the device in an NTSC transmission, we must use the APL 0% black signal. This experiment allows us to observe the *IMD* generated by the amplifier at the maximum output power over the entire band imposed for the operation of the device. Therefore, it is possible to plot both the lower side of the *IMD* (−4.5 MHz) and the upper side of the *IMD* (9 MHz), and we can detect a high asymmetry between the lower and the higher *IMD* frequencies.
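The IMD sidebands originate in the odd-order terms of the amplifier nonlinearity. A toy two-tone sketch (pure Python; the memoryless cubic model and its coefficient are assumptions, not the amplifier model of this chapter) shows the third-order products appearing at 2*f1* − *f2* and 2*f2* − *f1*:

```python
import math

N = 1024                 # samples; tone frequencies chosen as integer bins
F1, F2, A3 = 10, 13, 0.1  # tone bins and an assumed cubic coefficient

def amplitude(sig, k):
    """Amplitude of frequency bin k by discrete cosine projection."""
    return (2.0 / len(sig)) * sum(s * math.cos(2 * math.pi * k * i / len(sig))
                                  for i, s in enumerate(sig))

x = [math.cos(2 * math.pi * F1 * n / N) + math.cos(2 * math.pi * F2 * n / N)
     for n in range(N)]
y = [v + A3 * v ** 3 for v in x]      # memoryless cubic nonlinearity

imd_low = amplitude(y, 2 * F1 - F2)   # 2f1 - f2 -> bin 7
imd_high = amplitude(y, 2 * F2 - F1)  # 2f2 - f1 -> bin 16
```

Expanding (cos a + cos b)³ predicts an amplitude of (3/4)·A3 = 0.075 at each third-order product and a gain-expanded fundamental of 1 + (9/4)·A3 = 1.225, which the projection recovers. In a real amplifier the two products are generally unequal, which is exactly the IMD asymmetry noted above.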

**Figure 29.** Plot of the inter-modulation (0%) APL black signal located at −4.5 (MHz), for the UHF band (channels 14–69).

**Figure 30.** Plot of the (75%) red *Ramp* signal for the UHF band (channels 14–69).

Taking into account that the human eye is sensitive to static noise intermixed in a red field, such that the *IMD* levels should be as low as possible in a good-quality amplifier, it is important to obtain a 2D plot corresponding to the (75%) red *Ramp* test-signal results for the same UHF television channels, which is represented in **Figure 30**.


**Figure 31.** Plot of both the local oscillator and the mirror signals for the UHF band (channels 14–69).

**Figure 32.** A statistical comparison of the combined PA's efficiency for validation of all measurement tests.

It is also possible to generate a graphic corresponding to the efficiency of the power drawer over the entire UHF band, since this test indicates the RF power delivered over the total power consumed by the amplifier. Another important report corresponds to the levels of both the local oscillator (LO) and the mirror signals as a function of the same UHF television channels, which is plotted in **Figure 31**. This test ensures that the LO of the modulator and the mirror signals are low enough, which permits the complete validation of this experimental test.

As a conclusion of all experimental measurements, it is really important to use all available data, which permits us not only to make statistical comparisons but also to run pass-fail tests where possible. This procedure produces a mean value of several results, so that it is possible to compare the actual measurements with those made previously. The more tests are realized, the more precise they become, and the mean curve can be adjusted more conveniently, as presented in **Figure 32** [22–26, 42–47].

Using this mean curve with upper and lower bounds, a defective piece can be easily identified and sent for repair if necessary. Good agreement can be observed for the amplifier tested in this graphic, so the behavior of the amplifier was within expectations.
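The mean-curve pass-fail criterion can be sketched as a per-channel tolerance band built from previously measured units; all gain readings below are hypothetical, and a ±3σ band is an assumed choice:

```python
from statistics import mean, stdev

# Hypothetical gain readings (dB) per channel for previously measured units
history = {
    14: [19.8, 20.1, 20.0, 19.9],
    40: [19.5, 19.7, 19.6, 19.8],
    69: [18.9, 19.1, 19.0, 19.2],
}

def limits(readings, k=3.0):
    """Mean curve with a +/- k-sigma tolerance band for one channel."""
    m, s = mean(readings), stdev(readings)
    return m - k * s, m + k * s

def check_unit(unit):
    """Return the channels where a new unit falls outside the band."""
    return [ch for ch, g in unit.items()
            if not limits(history[ch])[0] <= g <= limits(history[ch])[1]]

good_unit = {14: 20.0, 40: 19.6, 69: 19.0}
bad_unit = {14: 20.0, 40: 17.0, 69: 19.0}  # 17 dB at channel 40: defective
```

Each new measurement can also be appended to `history`, which is how the mean curve becomes more precise as more tests are realized.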

#### **5.2. Block diagram and the setup of the entire experimental hardware architecture**

A block diagram corresponding to the signal flow of the implemented architecture is depicted in **Figure 33**. For both input and output signals, it is necessary to check the system gain and to ensure that the output power is at the correct level; a set of instruments, such as the USB2001 and E4416B power meters, the 34134A current probe, and the 34401A multimeter, completes the system used to measure the drawer's overall efficiency. The transistor currents were measured from the power drawer supply, and the temperatures were acquired from a sensor placed at the side of the transistor flange.
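The drawer's overall efficiency follows directly from those readings: the RF output power from the power meter divided by the DC power obtained from the supply voltage and the summed transistor currents. A minimal sketch (all numerical values below are hypothetical, including the 32 V rail):

```python
def drawer_efficiency(p_rf_w, v_supply_v, transistor_currents_a):
    """Overall efficiency: RF power delivered over total DC power consumed."""
    p_dc = v_supply_v * sum(transistor_currents_a)
    return p_rf_w / p_dc

# Hypothetical readings: 860 W RF out, 32 V rail, four transistor currents (A)
eta = drawer_efficiency(860.0, 32.0, [13.1, 13.4, 12.9, 13.6])
```

Sweeping this calculation across channels 14–69 yields the efficiency report mentioned earlier.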

With the purpose of keeping the exciter totally "*free of distortion*", we sometimes need to check the performance of both the local oscillator (LO) and the image suppression for each analyzed channel. An ESA spectrum analyzer can serve this experimental procedure, which also permits detecting the inter-modulation levels of three different NTSC signals, namely the 0% average power level (APL) black, the 75% modulated red, and the modulated ramp.

In comparison with other commercial systems (e.g., the Anritsu ME 7840A), this architecture is more flexible regarding the output power as well as the individual components. Besides the RF structure being composed of several parts, these can be tuned for a single amplifier module of 220 (Wps) or a power drawer of 860 (Wps), which also permits some parts to be substituted by low-cost equivalents, for example power meters, current meters, attenuators, and couplers.

**Figure 33.** Design of the whole architecture for the measurement setup.


If the architecture shown in **Figure 33** is to be used for standard *ISDB-T* OFDM signals, some minor software changes to measure inter-modulation distortion must be made when the signal is applied to a power amplifier. The references presented in the Introduction include a detailed study of the *IMD* caused by a power amplifier in an *ISDB-T* OFDM signal, as well as its peak-to-average ratio (*PAR*) characteristics.
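The PAR of an OFDM-like signal is simply its peak instantaneous power over its mean power, estimated directly from samples. A sketch with equal-amplitude, randomly phased subcarriers (all parameters here are assumed, not taken from the ISDB-T standard):

```python
import math
import random

random.seed(1)   # deterministic phases for this sketch
SUBCARRIERS = 64
N = 2048         # time samples over one symbol

phases = [random.uniform(0, 2 * math.pi) for _ in range(SUBCARRIERS)]

def sample(n):
    """Real OFDM-like signal: equal-amplitude subcarriers, random phases."""
    return sum(math.cos(2 * math.pi * (k + 1) * n / N + phases[k])
               for k in range(SUBCARRIERS))

x = [sample(n) for n in range(N)]
peak = max(v * v for v in x)
avg = sum(v * v for v in x) / N
par_db = 10 * math.log10(peak / avg)  # peak-to-average ratio in dB
```

With many independent subcarriers the envelope is nearly Gaussian, which is why OFDM PAR values of roughly 10 dB are typical and why the amplifier's IMD behavior near the peaks matters so much.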

The whole architecture serves to submit a wide class of power amplifiers to test. All reports generated on the performance of each amplifier make it possible to fine-tune the device. Furthermore, it is possible to combine *push–pull* amplifiers when hybrid structures are used, and we should expect a difference in the transistor currents, especially in the middle of the UHF band, where the coupled and the direct port present the largest dispersion from the ideal 3 *dB* value.

The reports on the temperature fluctuation of each transistor over the entire UHF band are of great importance. Initially, we set all values of the power (W) and the gain (*dB*) versus the UHF television channels (14–69), in accordance with the inter-modulation of the 0% APL black signal, for some specified frequencies. Typically, the APL 0% black signal is used because of the stress conditions it generates in the amplifier, considering that this signal represents the highest output power generated by the device for an NTSC transmission. This test allows the observation of the IMD effect generated at the maximum output power over the entire band imposed for the operation of the amplifier [48–52].

Finally, concerning the generation of inter-modulation reports that can be used in the analysis of the PAR of digital signals, we should expect a considerable broadening of the domain of applications of the measurement architecture. An alternative option is to use the architecture for the validation of noise analysis, as well as to validate studies of uplinks for cognitive radio structures designed for a wide class of OFDM systems. In conclusion, a load capable of absorbing the amount of power generated by the amplifier under test is used as termination, preceded by two couplers. The complete architecture, implemented on a rack for all measurements, can be visualized in **Figure 34**.

**Figure 34.** The whole architecture, containing all equipment used to perform all required measurements, conveniently mounted on a rack.

The whole architecture presented in this section can be widely used for the characterization of high-power amplifiers, and the structure was designed to be used in the production lines of modern RF companies. Important parameters, such as gain, output power, inter-modulation distortion of different signals, efficiency, current, and temperature, can be measured in accordance with the intrinsic characteristics of a wide class of amplifiers. The RF electronic structure can generate very important reports that can be used to evaluate the performance of the amplifier under test, comparing it to other amplifiers already measured.

#### **6. Conclusions**

The theoretical research with experimental validation presented in this chapter aimed primarily to introduce new technologies that can be used in the modeling of the nonlinear phenomena intrinsic to projects involving the power amplifiers widely used in the new industrial production lines of high-resolution digital TV systems.

The competitiveness continuously imposed on the international scene demands that professionals involved with projects of new technologies keep an updated vision of what can come in terms of new emerging equipment. Communication links should be designed using *Digital Signal Processing* (DSP) techniques with high transmission rates, so that information, regardless of its analog or digital form, can travel the *Tx*-*Rx* link efficiently, with optimized time and protection against the noise intrinsic to transmission, on a modern computing platform that allows not only the implementation of codes written for *Field-Programmable Gate Array* (FPGA) structures but also all the simulations that can validate the formulated theoretical approaches, in addition to safe protection of the information traffic.

This demands of designers vigilance in the construction of versatile, extremely precise systems, so that the demands imposed by business sectors may always be supplied in real time, which keeps the market heated and thus strengthens the companies on the front line, opening new fronts in their diverse lines of research and production. Constant challenges are always targets for the engineers involved with projects of new technologies. Space in the international market is fundamental for the companies that engage seriously in new production lines of equipment used in the transmission and reception of signals in the wide area of Radio Frequency (RF).

We hope that this set of theoretical and experimental approaches presented in this chapter can be our modest contribution to the development of new technologies, as well as serve as a guide for the many engineers with strong interests in this field of research.

#### **Acknowledgements**


We would like to thank the following institutions: the Foundation of Studies and Projects, FINEP, RJ, for all financial support received for this research; Linear Equipamentos Eletrônicos S/A, one of the best companies in Brazil producing high-performance RF and microwave equipment, where all projects related to a wide class of HDTV transmitter architectures were developed; the São Paulo Research Foundation, FAPESP, SP, processes ID 2006/01655-7 and 2013/23431-7; the Minas Gerais Research Foundation, FAPEMIG, MG, for all financial support received by the authors; and the Brazilian Educational Ministry, CAPES, MCT, for the PhD scholarship of one of the authors in Austria.

#### **Author details**

Daniel Discini Silveira<sup>1</sup>, Marcos Paulo de Souza Silva<sup>2</sup>, Marcel Veloso Campos<sup>3</sup> and Maurício Silveira<sup>4</sup>\*


#### **References**

[1] Vuolevi, J. and Rahkonen, T., Distortion in RF Power Amplifiers, Norwood, MA: Artech House, 2003.

[2] Maas, S.A., Nonlinear Microwave Circuits, Piscataway, NJ: IEEE Press, 1988.

[3] Cripps, S.C., Advanced Techniques in RF Power Amplifier Design, Norwood, MA: Artech House, 2002.

[4] Kenington, P.B., High-Linearity RF Amplifier Design, Norwood, MA: Artech House, 2000.

[5] Pothecary, N., Feedforward Linear Power Amplifiers, Norwood, MA: Artech House, 1999.

[6] Lima, J.S. and Silveira, M., Performance measurements in 8-VSB and COFDM systems (in Portuguese). In: Proceedings of the International Week of Telecommunication, 2002, National Institute of Telecommunication, Santa Rita do Sapucaí, MG, Brazil.

[7] Lima, J.S., Linearization of TV transmitters with IF pre-distortion, Telecommunication Journal, National Institute of Telecommunication, 1998, Vol. 1, Brazil.

[8] Mello, A.A., Rodrigues, H.D., Lima, J.S., Silveira, M., Pereira, W.N., Ribeiro, J.A.J., Linearization of the transmitter using digital pre-distortion. In: Proceedings of the IEEE APS-URSI (APS-URSI'02), 2002, San Antonio, TX, USA.

[9] Mello, A.A., Rodrigues, H.D., Lima, J.S., Silva, M.P.S., Silveira, M., Pereira, W.N., The use of the digital pre-distortion technique in the linearization of RF power amplifiers (in Portuguese). In: Proceedings of the International Week of Telecommunications, 2002, National Institute of Telecommunication, Santa Rita do Sapucaí, MG, Brazil.

[10] Mello, A.A., Rodrigues, H.D., Lima, J.S., Silva, M.P.S., Silveira, M., Adaptive linearization using pre-distortion experimental results, Telecommunication Journal, National Institute of Telecommunication, 2004, Vol. 3, Brazil.

[11] Mello, A.A., Rodrigues, H.D., Lima, J.S., Silva, M.P.S., Silveira, M., An efficient numerical approach for the linearization of power amplifiers, IEEE Journal Latin America, Vol. 2, No. 2, 2004.

[12] Silva, M.P.S., Silveira, M., et al., An efficient analysis of the performance of nonlinear devices using as a tool the software ADS. In: Proceedings of the IEEE World Congress on Engineering and Technology Education (WCETE'04), 2004, Guarujá, Brazil.

[13] Mello, A.A., Rodrigues, H.D., Lima, J.S., Silva, M.P.S., Silveira, M., A new numerical approach in the linear analysis of RF amplifiers. In: Proceedings of the IEEE 33rd European Microwave Conference (EuMC'03), 2003, Munich, Germany.

[14] Mello, A.A., Rodrigues, H.D., Silva, M.P.S., Silveira, M., Adaptive digital pre-distortion to reduce the power amplifier non-linearity. In: Proceedings of the IEEE APS-URSI (APS-URSI'03), 2003, Columbus, OH, USA.

[29] Vuolevi, J., Analysis, measurement and cancellation of the bandwidth and amplitude dependence of intermodulation distortion in RF power amplifiers, Lecture Notes, Department of Electrical Engineering, University of Oulu, Oulu, Finland, 2001.

[30] Wang, H., Bao, J., Wu, Z., Comparison of the behavioral modeling for RF power amplifier with memory effects, IEEE Microwave and Wireless Components Letters, 2009, Vol. 19, No. 3, pp. 179–181.

[31] Cantrell, W.H., Davis, W.A., Amplitude modulator utilizing a high-Q class-E DC-DC converter. In: Proceedings of the IEEE MTT-S Microwave Symposium Digest (MTT-S'03), June 2003, Vol. 3, pp. 1721–1724.

[32] Elnady, A., An efficient current regulator for multilevel voltage source converter based on a simple analog control circuit, Journal of Circuits, Systems and Computers, 2013, Vol. 22, No. 4, pp. 1–21, doi:10.1142/S021826613500230.

[33] Hung, T.P., Metzger, A.G., Zampardi, P.J., Iwamoto, M., Asbeck, P.M., Design of high-efficiency current-mode class-D amplifiers for wireless handsets, IEEE Transactions on Microwave Theory and Techniques, 2005, Vol. 53, pp. 144–151.

[34] Campos, M.V., Silveira, M., An efficient analytical approach for the non-linear analysis of RF amplifiers implemented with the FET structures. In: Proceedings of the IEEE Asia Pacific Microwave Conference (APMC'05), December 2005, doi:10.1109/APMC.2005.1606219.

[35] Campos, M.V., Silveira, M., An original approach of the effect source resistance for FET devices. In: Proceedings of the IEEE Asia Pacific Microwave Conference (APMC'08), December 2008, doi:10.1109/APMC.2008.4958523.

[36] Sevic, J.F., Statistical characterization of RF power amplifier efficiency for CDMA wireless communication systems. In: Proceedings of the IEEE Wireless Communications Conference, 1997, pp. 110–113, doi:10.1109/WCC.1997.622258.

[37] GPIB tutorial, Available from: http://cp.literature.agilent.com/litweb/pdf/04396-90063.pdf.

[38] Duffy, D.G., Advanced Engineering Mathematics with Matlab, 3rd ed., New York, NY: CRC Press, 2010.

[39] Tektronix, Television measurements manual for NTSC system, Available from: http://www.tek.com/document/primer/television-measurements-ntsc-systems.

[40] Silveira, D.D., Lima, J.S., Silveira, M., Dissect PA distortion from OFDM signals, Microwaves & RF Magazine, 2010, Vol. 01, pp. 88–94.

[41] Bermudez, A., et al., Mathematical Models and Numerical Simulation in Electromagnetism, 1st ed., New York, NY: Springer, 2014.

[42] Varahram, P., Ali, B.M., Peak-to-average power ratio reduction and digital predistortion effects in power amplifiers in OFDM systems, International Journal of Communication Systems, 2012, Vol. 25, pp. 543–552, doi:10.1002/dac.1282.


**Section 4**

### **Soft Metrology**

### **Objectifying the Subjective: Fundaments and Applications of Soft Metrology**

Laura Rossi

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/64123

#### **Abstract**

The aim of interdisciplinary research is to facilitate the understanding of a specific topic by passing through different disciplinary perspectives. Soft metrology is a perfect example of a scientific field that needs that sort of approach. Seeking to provide a reproducible basis for qualifying and quantifying what are essentially 'soft' measurements (subject to human perception and interpretation) is a particularly challenging scientific endeavour. This chapter presents a theoretical overview of the main concepts around soft metrology and, in the second instance, proposes a mathematical model for the measurement of a soft measurand through a dedicated index (IPER—influence on performance index).

**Keywords:** soft metrology, human perception, human–machine interaction, human performance

#### **1. Introduction**

*'Metrology must be ideal, but practical. Rigorous, but accommodating. Demanding, but forgiving. Quantitative, but qualitative. Forward‐looking, but faithful to the past'.*

(John Michael Linacre, University of Sydney, 2005).

This work represents a compendium of ideas around soft metrology, intended as the set of techniques and models for measuring quantities related to perception. The knowledge of the physical world through the senses is a key topic in the history of philosophy. Since Plato and Boethius, passing through all the history of ancient Greek and Indian philosophy, moving to the modern era with Descartes and Hume, and reaching contemporary philosophy of mind, human perception has always been a central topic of thought, as a key epistemological problem. Hereby, some specific theoretical interpretations of the relation between humans and the physical world will be presented, according to how useful they are to the foundations of a specific line of reasoning pertinent to the metrological scope.

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**Figure 1.** A metaphorical picture for soft metrology. The traditional tools/methods of metrology are not suitable for measuring quantities related to human perception.

The expression 'soft metrology' is relatively recent and first appeared in the literature in 2003, in a report of the National Physical Laboratory [1], where it was defined as 'the set of techniques and models that allow objective quantification of the properties determined by perception in the domain of all five senses'. It can be understood as the investigation of the correlation between the human subjective response and the objective measurement of physical properties of the empirical world.

The main objective of this work is to contribute to the definition of the theoretical basis, the language and the investigation methodology related to soft metrology. Some methodological proposals are presented, as well as some experimental findings. Subjective and physiological responses of subjects to specific visual and auditory stimuli are considered, in conditions of low or high cognitive load and with an environmental setting influencing the perceptual process (presence of glare or noise). Here also lies a good question as to what extent one mode of perception is affected or influenced by another, since cross-modal interactions normally occur in perceiving the flux of information coming from the empirical world (**Figure 1**).

As a matter of fact, the recent interest of industry in the metrology of perception, for the design and development of efficient human–machine interfaces, makes the study of these issues promise interesting developments. The human–machine interface (HMI) is in fact out of the niche of basic scientific research and hi-tech applications (e.g. military, aviation and aerospace), thus becoming a key feature of consumer goods. This is attested by the recent identification of commercial brands with the HMI, for example in the automotive industry (in 2004, the Citroen ASL warning of lane crossing through seat vibration and the Lexus APGS 'Advanced Parking Guidance System'; in 2006, the Fiat 'Blue & Me'; in 2009, the Toyota Head-Up Display). Soft metrology, together with cognitive science and ergonomics, can define the criteria necessary for effective planning and efficient industrial design as well.

Human perception has always been a central topic of thought, as a key epistemological problem. Hereby, some specific theoretical interpretations of the relation between humans and the physical world will be presented, depending on how useful they are to the foundations of a specific line of reasoning pertinent to the metrological scope.

**Figure 1.** A metaphorical picture for soft metrology. The traditional tools/methods of metrology are not suitable for measuring quantities related to human perception.

As the aim of interdisciplinary research is to facilitate the understanding of a specific topic by passing through different disciplinary perspectives, soft metrology is a perfect example of a scientific field that needs this sort of approach.

Seeking to provide a reproducible basis for qualifying and quantifying what are essentially 'soft' measurements (subject to human perception and interpretation) is a particularly challenging scientific endeavour. The attempt to do so has been carried out by MINET [2,3], as described below, in a genuinely fruitful interdisciplinary perspective. Research in this area involves, for example, investigations of human mental and brain functions (studied primarily in psychology and neuroscience), research into how these underpin human attention, perception and cognition (psychophysics and behavioural studies), and the development of measurement instrumentation and perceptual models (metrology, mathematics, modelling, computing, psychology, physics and psychophysics).

Human perception and interpretation encompass phenomena of differing complexity, ranging from sensory perceptions (e.g. colour, taste, odour, loudness), to environmental perceptions (e.g. soundscape, air quality, landscape), to self-perceptions (e.g. chronic pain, wellbeing, mood), to perceptual attributions (e.g. aesthetics, satisfaction, expectancies), as well as all kinds of complex interpretations and evaluations (e.g. utility, risk, maladjustment) that are based on learning and experience.

'Measuring the Impossible' is the impressive title of a recently concluded European call for new and innovative research projects, concerning the measurement of quantities and qualities related to human perception and interpretation.

As a matter of fact, many phenomena of significant interest to contemporary science are intrinsically multidimensional and multidisciplinary, with strong crossover among the physical, biological and social sciences. In economic terms, products and services appeal to consumers according to parameters of quality, comfort and beauty that are mediated by perception and culture.

A few examples of funded projects can help to give a feel for recently closed or still ongoing research activities in this area.

The MONAT project [4], Measurement of Naturalness, coordinated by the National Physical Laboratory (NPL, United Kingdom), aimed to measure the naturalness of materials, such as fabrics, on the basis of their visual appearance (influenced by factors such as colour, texture, gloss…) and tactile feeling (thermal conductivity, hardness, friction coefficient…). Investigations into this multisensory perception and interpretation require the measurement of the related perceptual, physical and psychophysical qualities and their combinations.

CLOSED [5], closing the loop of sound evaluation and design, coordinated by the Institut de Recherche et Coordination Acoustique/Musique (IRCAM, France), sought to improve products by introducing, at the early design stage, information concerning the perceived sound quality of products in use.

Inspired by the MINET activities, a book on this topic was published in 2011; it was the first comprehensive publication to scrutinise this measurement topic from a multi- and interdisciplinary perspective [6].

#### **2. Fundaments of Soft Metrology**

#### **2.1. Definition and theoretical context**

As far as we know, the expression 'soft metrology' and its related definition first appeared in 2003 in the NPL Report CMSC 20/03 [1], where soft metrology is defined as follows:

*The set of measurement techniques and models which enable the objective quantification of properties which are determined by human perception. (The human response may be in any of the five senses: sight, smell, sound, taste and touch.) Thus, soft metrology includes aspects of appearance (colour and gloss), noise quality, texture of food (such as creaminess) and, more broadly, topics such as biometrics and usability of systems.*

Here, the author points out that soft metrology, in its extended sense, is not yet an established branch of metrology, and today, it does not find a unique place within the structure of the National Measurement System of each country.

That does not mean that measurement scales do not exist or that research in the domain of soft metrology is not conducted, but rather that there is a lack of metrology research considering properties of human perception and cognition as measurands of physical properties of the environment.

As a matter of fact, soft metrology requires the measurement of proper physical parameters and the development of models to correlate them to perceptual quantities. Traceable soft metrology is achievable through both the traceable measurement of the physical parameters and the development of accurate correlation models.

**Figure 2.** A set of boxes that are perceived to increase in SIZE: the corresponding physical measurement is of LENGTH (for example, in metres).

As an example, we mention the following one, described in the NPL Report, which is quite explicative of the soft metrological approach when a relation between physical stimuli and perception is investigated.

The example considers the perception and measurement of length. A series of boxes are perceived to increase in size (**Figure 2**). A human observer could be asked to indicate a number correlated to their impression of the size. Equally, a device could be constructed (a ruler) which could be used to 'measure' the size of each square, that is, its area.

In this case, it is likely that the human responses, the soft measure, will simply correlate with the measurements made using the ruler, the physical measure. Hence, soft metrology can be considered the formulation of a correlation between the human response and the physical measure.
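
The correlation just described can be sketched numerically. The following snippet uses illustrative, hypothetical data (not taken from the NPL report) and fits an ordinary least-squares line relating the physical measure to averaged subjective ratings, yielding a scale from which the subjective response can be predicted:

```python
# Illustrative soft-metrology scale (hypothetical data): physical lengths
# measured with a ruler and the averaged subjective size ratings given by
# observers for the same boxes.
lengths = [2.0, 4.0, 6.0, 8.0, 10.0]   # physical measure (cm)
ratings = [1.1, 2.0, 2.9, 4.1, 5.0]    # mean subjective judgement

# Ordinary least-squares fit of ratings against lengths.
n = len(lengths)
mean_x = sum(lengths) / n
mean_y = sum(ratings) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(lengths, ratings)) \
    / sum((x - mean_x) ** 2 for x in lengths)
intercept = mean_y - slope * mean_x

def predict_rating(length_cm):
    """Predict the subjective response from the objective measure."""
    return intercept + slope * length_cm
```

The slope and intercept constitute the measurement scale; in soft metrology proper, such a fit would of course be validated over many observers and accompanied by an uncertainty analysis.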

As a second example, consider the series of boxes shown in **Figure 3**.

**Figure 3.** A set of boxes that differ in a systematic manner.


To the human observer, something is changing as the boxes are examined from left to right but what is it and how can it be described? Some may call it lightness, some brightness, some density.

It has been shown [7] that human observers, after a little training, can give a consistent response to the changes they see. Equally, some physical measures, for instance the reflectance of the paper surface of each square, its density or even its luminance, can be made of each box using a suitable meter, and a scale can be constructed that relates the two measures.
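
Such a scale need not be linear. Psychophysics often models magnitude judgements with Stevens' power law, B = k·L^a; as a sketch (the luminance and brightness numbers below are hypothetical, chosen to follow a rough power law), the exponent can be recovered by a linear fit in log-log coordinates:

```python
import math

# Hypothetical luminance values (cd/m^2) and mean brightness judgements.
luminance = [10.0, 20.0, 40.0, 80.0, 160.0]
brightness = [3.2, 4.0, 5.1, 6.4, 8.1]

# Stevens' power law B = k * L**a becomes linear in log-log coordinates:
# log B = log k + a * log L, so a least-squares line recovers a and k.
log_l = [math.log(v) for v in luminance]
log_b = [math.log(v) for v in brightness]
n = len(log_l)
mx = sum(log_l) / n
my = sum(log_b) / n
a = sum((x - mx) * (y - my) for x, y in zip(log_l, log_b)) \
    / sum((x - mx) ** 2 for x in log_l)
k = math.exp(my - a * mx)

def predicted_brightness(lum):
    """Brightness predicted by the fitted power law."""
    return k * lum ** a
```

With these illustrative numbers the fitted exponent comes out near 0.33; a real study would estimate it from many observers, with confidence intervals.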

**Figure 4.** The squares marked A and B are the same shade of grey.

Also, in real life, the human response is not absolute, as a physical measure usually is, but is subjectively correlated to a real or imaginary reference. This fact gives rise to a set of interesting optical illusions, such as the famous checker shadow illusion [8], a visual illusion published in 1995 by Edward H. Adelson, Professor of Vision Science at MIT (**Figure 4**).

Unlike the human eye, a physical system finds no luminance difference between squares A and B in **Figure 4** and cannot recognise the presence of the chessboard without complex processing of the image. The visual system is not very good at being a physical light meter, but that is not its purpose. Its important task is to break the image information down into meaningful components, thereby perceiving the nature of the objects in view.

We now briefly consider what it implies to regard the human senses as a measuring instrument, and what measuring the act of perception means. Following what was said above, soft metrology can be considered the investigation of the correlation between human, subjective responses and physical, objective measures. What is generated is a measurement scale, a number series, which allows the subjective response to be predicted from the objective measure.

There is a parallelism between the process of perceiving and that of measuring. This comes from the idea that the human senses can be considered a sort of transducer. A transducer is, in fact, a tool that turns a physical quantity, such as a sound pressure wave, into another physical quantity, such as a voltage variation. A transducer is assumed to have a well-defined function between input and output quantities (e.g. linear) and to be calibratable, but in the case of the human being, of course, this attribute cannot be assumed.
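
As a toy illustration of what calibrating such a transducer involves (a hypothetical device with an assumed linear response; all numbers are made up), one can fit the input-output function from known reference stimuli and then invert it to recover the physical quantity from a reading:

```python
# Hypothetical linear transducer: reference pressures (Pa) applied during
# calibration and the voltages (V) the device produced in response.
reference_pressure = [0.1, 0.2, 0.4, 0.8]
measured_voltage = [0.051, 0.102, 0.199, 0.401]

# Least-squares estimate of the assumed linear response v = gain*p + offset.
n = len(reference_pressure)
mp = sum(reference_pressure) / n
mv = sum(measured_voltage) / n
gain = sum((p - mp) * (v - mv)
           for p, v in zip(reference_pressure, measured_voltage)) \
    / sum((p - mp) ** 2 for p in reference_pressure)
offset = mv - gain * mp

def pressure_from_voltage(v):
    """Invert the calibrated response: recover the physical quantity."""
    return (v - offset) / gain
```

It is precisely this step, a known, stable and invertible response function, that cannot be assumed for the human 'transducers'.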

As a matter of fact, the human being is already equipped with transducers (sight, hearing, smell, taste and touch) and should be considered as a filter or, rather, using the language of signal processing, as a black box through which a stream of input events, perceived by the senses, flows, leading to opinions, actions and output reactions.

**Figure 5.** The input–output model by behaviourists. The internal functioning of an organism's mind (O) generates a response (R) to a stimulus (S).

This idea is borrowed from the thought of a certain strand of early twentieth-century psychology called behaviourism. When it came to the study of the human being and, more broadly, its measurement, behaviourists concentrated exclusively on the study of behaviour, neglecting the inner workings of the mind, considered impossible to measure in a scientifically objective way.

In contrast, behaviours, or the actions produced by organisms, as external events, were regarded as objects clearly quantifiable and measurable. **Figure 5** shows the stimulus-response (SR) model that represents the behaviourist position. In the model, a stimulus (S) in the environment affects an organism (O), and the stimulus then causes the body to produce a response (R).

In this model, the human mind is not shown. The perceiving subject is treated simply as a black box, possessing unfathomable properties and internal relations impossible to explain. In this perspective, the subject becomes merely an entity that converts stimuli into responses. A mathematical model between S and R can be developed using experimental results, but such a model would hold only in that specific condition and might not be linked to the real process taking place in O.

However, the father of behaviourism, John B. Watson (1878–1958) [9], himself admitted that the decision not to investigate the sphere of mental activity was mainly caused by the limited instruments available in his time. Behaviourism was, in fact, the dominant paradigm in psychological research for nearly 50 years, until the advent of cognitive science around 1960.

Therefore, while behaviourism regarded the mind as a passive entity operating according to simple rules of conditioning, cognitive psychology saw in it a strong, active capability, such as the selection of information from the environment and, drawing on prior knowledge, the ability to act on the results of such selection. Fundamentally, there are three main reasons that led to the rapid growth of this new perspective:

**•** the failure of behaviourism in explaining language acquisition;



The emergence of new investigation devices, including positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI), together with the increased use of computers, were the innovations that contributed most significantly to the decline of behaviourism.

Psychologists realised that the mind, like a computer, could be treated as a device that represents and transforms information. This initial metaphor then evolved into the actual conception of the human mind as an information processor, and of the act of reasoning as assimilated to computing, in which the behaviourist stimulus-response model turned into an input–process–output one:


According to Ulric Neisser (1928–2012), an American cognitive psychologist, cognition:

*refers to any process in which sensory input is transformed, reduced, elaborated, stored, recovered and used* [10]

and all these modes convey the manifold potential of the human mind in the management of information from the sensory world.

Summarising, if behaviourism's fundamental claim was to limit psychological investigation to objectively observable reactions, its strength lay in the methodological requirement it asserted: it is not possible to talk scientifically about what escapes any possibility of objective observation and control.

At that time it was not possible to have this kind of control over mental processes; this is no longer the case, thanks to modern techniques of physio-neurological investigation and to the contemporary cognitive-science assumption that human rational processes can be treated as an algorithm.

After this short glimpse at some relevant ideas from theories of human perception, we now move on to more specific issues related to soft metrology as a part of metrological science, and therefore bound to its specific approach.

#### **2.2. Research issues and challenges**

The main research issues that are currently central to the soft metrology area [11] can be grouped into three main categories:

**•** foundation and theory;

**•** instrumentation and methods;

**•** implementation areas and applications.

Several foundational and theoretical issues are of primary interest in the soft metrological field. These include the following:

**•** language and terminology;

**•** concepts of measuring system (or instrument);

**•** the issue of measurability and uncertainty.

Language has been a major issue in twentieth-century scientific and philosophical debate. In the metrology community, an extensive revision of linguistic terms has been undertaken, starting with the publication in 1984 of the International Vocabulary of Basic and General Terms in Metrology, now at its 3rd edition [12].

As suggested in [13], any revision of terms must also entail a revision of concepts and theories, and this would indeed be beneficial for the entire world of measurement. From a theoretical point of view, soft metrology can be assumed to found its basis on the Representational Theory of Measurement [14], where measurement is defined as

*a process of empirical, objective assignment of symbols to attributes of objects and events of the real world, in such a way as to represent them or to describe them* and also, in a shorter form, as *the correlation of numbers with entities that are not numbers* [15].

In this theory, values are assigned based on correspondences or similarities between the structure of number systems and the structure of qualitative systems. A property is defined as 'quantitative' if such similarities can be established.

The concept of measurement is often misunderstood as merely the assignment of value, but it is also possible to assign a value in a way that is not a measurement. For example, a value can be assigned to a person's height, but unless it can be established that there is a correlation between measurements of height and empirical relations, it is not a measurement. To link this theory with the soft metrological research field [16], it is interesting to cite the following passage from an article by Luce and Suppes [17].

*In the 75 or so years beginning in 1870, some psychologists (often physicists or physicians turned psychologists) attempted to import measurement ideas from physics, but gradually it became clear that doing this successfully was a good deal trickier than was initially thought.*

*Indeed, by the 1940s a number of physicists and philosophers of physics concluded that psychologists really did not and could not have an adequate basis for measurement. They concluded, correctly, that the classical measurement models were for the most part unsuited to psychological phenomena. But they also concluded, incorrectly, that no scientifically sound psychological measurement is possible at all.*

*In part, the theory of representational measurement was the response of some psychologists and other social scientists who were fairly well trained in the necessary physics and mathematics to understand how to modify in substantial ways the classical models of physical measurement to be better suited to psychological issues.*

**Figure 6.** The two definitions of *measure* and *perceive* and a streamlined representation of the passage from the real world to numbers and percepts through a measuring instrument (e.g. a colorimeter) or a sense (e.g. sight).

As a matter of fact, an actual correlation between the definitions of measure, as coming from representational theory, and perceive [18] does exist (**Figure 6**). Both measuring and perceiving are acts of correlation between two systems.

The measuring process consists of the creation of an image (in logic: homomorphism) of the real world into a numerical system or model that maintains its proportions and inner relations, through the use of an instrument or model.

The act of perceiving can be assumed to function in the same way: from empirical stimuli, the human mind creates an interpretational model that represents proportions and relations in the real world, through the use of senses.
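
This parallel can be made concrete with a toy check: under the representational view, a numeric assignment counts as an (ordinal) measurement only if it preserves the empirical relations. The sketch below, with hypothetical objects and comparisons, tests whether a candidate assignment is such an order homomorphism:

```python
# Toy empirical system: observed pairwise comparisons between objects
# ("a" judged heavier than "b", and so on). Hypothetical data.
heavier_than = [("a", "b"), ("b", "c"), ("a", "c")]

def is_order_homomorphism(assignment, relation):
    """True if the numeric assignment preserves every empirical comparison:
    x heavier than y must imply assignment[x] > assignment[y]."""
    return all(assignment[x] > assignment[y] for x, y in relation)

good = {"a": 3.0, "b": 2.0, "c": 1.0}  # preserves the empirical order
bad = {"a": 1.0, "b": 2.0, "c": 3.0}   # reverses it: a mere value assignment
```

Here `is_order_homomorphism(good, heavier_than)` succeeds while the `bad` assignment fails, which is exactly the distinction the representational theory draws between measurement and arbitrary value assignment.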

Concerning measurement uncertainty, a basic issue in soft metrology is to come to terms with how to handle inter‐individual differences in measurement. These may be considered true differences, as postulated and measured in psychometrics, or they may be regarded as common and specific errors in measurement, as treated in psychophysics.
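
One common way to frame this alternative, sketched below with hypothetical repeated ratings from three subjects, is to decompose the total variability into a between-subject component (the 'true differences' of psychometrics) and a within-subject component (the 'measurement error' of psychophysics):

```python
from statistics import mean

# Hypothetical repeated ratings of one stimulus by three subjects.
ratings = {
    "s1": [6.0, 6.5, 6.2],
    "s2": [4.0, 4.4, 4.2],
    "s3": [5.0, 5.1, 4.9],
}

grand_mean = mean(v for vs in ratings.values() for v in vs)
subject_means = {s: mean(vs) for s, vs in ratings.items()}

# Between-subject variance: spread of the subject means around the grand
# mean (read as true inter-individual differences in psychometrics).
between = mean((m - grand_mean) ** 2 for m in subject_means.values())

# Within-subject variance: spread of each subject's ratings around that
# subject's own mean (read as measurement error in psychophysics).
within = mean(
    (v - subject_means[s]) ** 2 for s, vs in ratings.items() for v in vs
)
```

In this illustrative data set the between-subject component dominates, so treating the differences as pure measurement error would discard most of the structure in the responses.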

Instrumentation-oriented research involves the measurement of external physical events (stimuli) that more or less simultaneously give rise to perceptual and physiological (or behavioural) responses. It includes the study of perception and interpretation processes and the development of sensors that, to a certain extent, mimic human perception and interpretation.

Consider, for example, the measurement of sound. Highly accurate measurement microphones and binaural recording devices, which allow acoustic stimuli to be measured as they appear at the input of the auditory system, have been developed [5]. Sophisticated binaural reproduction and surround-sound systems, with dedicated processors and algorithms, provide the required signal processing.

As regards vision, not only luminous intensity and colour wavelengths can be measured, but also parameters of the interaction between light and matter and surface properties, such as texture and topography, involving sophisticated signal processing [4]. Similar considerations also apply to the other senses.

As previously mentioned, measurements related to human perception and interpretation have a wide range of actual and potential applications [19]. Here, the areas of perceived quality (of products and services), environment, ergonomics, safety, security and clinical practice are briefly mentioned.

In the first part of the last century, the impact of mass production was so high that the qualitative aspects of goods were somewhat neglected. Today, the shortage of energy sources and the concern for pollution may increase the demand for durable, high-quality goods. Thus, perceived quality, which results from the perception and interpretation of sensory input, may play a key role in industrial competition. Examples of products include food, materials and simple and complex devices. A good cup of coffee, for example, is appreciated on the basis of a combination of taste, smell, sight, touch and, for the Italian recipe, the colour of the surface cream [20].

As said, much of human behaviour is controlled by responses to the five senses. To the consumer, the whole appearance (that can be the smell, the sound, the taste…) of specific products, whether natural or man‐made, is used to assess quality, both consciously and subconsciously, and hence mediate product choice.

Providing a means for reproducible measurement of parameters (such as pleasure and pain) has important implications in evaluating all kinds of products and services.

In the commercial world, a host of factors, from naturalness, aesthetics and comfort to security, service and price, influences choice and gives a brand its peculiarities. Is the smell of a new car appealing to the customer? Does a fabric have a luxurious feel?

Being able to predict user perceptions and reactions through robust modelling can save time and costs by meeting customers' expectations without the need for extensive and expensive testing.

The ISO 9000 framework, however, requires the parameters that characterise products to be measured in a precise way, together with associated tolerances, in order to establish a formal quality-control system. Normally, this is more easily done using instruments because of their inherent controllability, stability and repeatability. The challenge of soft metrology is to attain a comparable level of strength.

#### **3. The three‐output model**


The fundamental question to which soft metrology attempts to provide answers may be described as follows: given as input a metrologically quantifiable sensory stimulus (a specific sound, a flash of light, the roughness of an object, etc.), what is the outcome of the interaction between this stimulus and humans? That is, what is the measurable output, if it exists, of this interaction?

**Figure 7.** A schematic representation of the three‐output model. Measuring the interaction between man and the objects or events of the real world means taking into account three kinds of outcomes: from physiology, from evaluation of performance and from psychology.

To answer this question, the output of the perceptual interaction between subject and world objects or events must first be defined. According to the tools and analytical models available, three main outputs (**Figure 7**) have been identified [21]:

**•** physiological output,
**•** performance,
**•** psychological output.
The first output coming from human–world interaction is called 'physiological', and it can be quantified through various clinical techniques that monitor some physiological parameters considered as important indicators of changes in the level of perception, emotion or cognition (e.g. the measurement of heart rate, blood pressure, pupil dilation, sweating, brain electrical activity).

The second output refers to human performance (we will refer to it also as 'operational') and is quantified by measuring the changes in the attitudes of a subject doing a particular task. These can be, for example, attention, fatigue or accuracy, and can be assessed via various parameters, such as reaction times, the number of correct answers, test duration, etc.

The third output is determined and quantified by evaluation scales, questionnaires and interviews (we will refer to it as the 'psychological' output), and this is one of the most popular methods for subjective data acquisition. The response of subjects to a specific question about their opinion on a particular aspect of a stimulus is defined 'psychological' as it is mediated by the psychology (culture, mood, prejudices…) of the person to whom the question is asked.

The method of investigation proper to soft metrology must consist in the comparative analysis of these three main outputs. Through an organic synthesis of these data, it is possible to get reliable objective information about the subjective response to a particular sensory stimulus [22]. An example of analysis of this sort will be given through the description of the influence on performance (IPER) index, complemented by a numerical example explicating its applications.

Before moving further, it is important to introduce a review of key terms of the three fields of investigation. When necessary, they are redefined in order to incorporate the three‐output model approach into the soft metrological terminology.

#### **3.1. Redefinition hypothesis and terminology**

As previously mentioned, for soft metrology it is necessary to establish a common language and a definition of key terms to simplify the organisation of the concepts of this new branch of metrology. Hereby, a list of important terms is given, with the related definitions congruent with the model given by the last edition of the International Vocabulary of Metrology [23].

A first and remarkable attempt is the review by Goodman about the measurement of physical parameters in sensory science [24]. Goodman summarises the definitions and descriptions of some relevant physical parameters useful for soft metrological applications. As we interact with our environment, and with objects within that environment, through our five senses, the physical measurements that are most relevant for sensory science are those relating to the parameters that are sensed by our sensory transducers. As reported in Goodman's article, these are listed in **Table 1**.


The object of the subjective measurement is the interaction between a subject and the physical world, and the schematic representation of **Figure 8** shows the interaction between subject and physical world, displaying the key terms and expressions that are important to define.



| Sensory modality | Physical parameter and SI units | Sensory response |
|---|---|---|
| Taste and smell | *Chemical composition* **(mole per metre cubed, mol·m<sup>-3</sup>)** | Flowery, fruity, salty, sweet, bitter, sour… |
| Sound | *Acoustic pressure* **(pascal, Pa, or newton per metre squared, N·m<sup>-2</sup>):** sound wave amplitude | Loudness |
| Sound | *Acoustic intensity* **(watt per metre squared, W·m<sup>-2</sup>):** sound power per unit area | Loudness |
| Sound | *Acoustic impedance* **(decibel, dB—note this is accepted for use with SI, but is not an SI unit):** attenuation of sound waves through a medium | Muffled |
| Sound | *Acoustic frequency* **(hertz, Hz):** sound wave frequency | Pitch, sharpness, tone quality, timbre |
| Touch | *Thermal effusivity* **(joule per metre squared per kelvin per square‐root second, J·m<sup>-2</sup>·K<sup>-1</sup>·s<sup>-½</sup>, or watt square‐root second per metre squared per kelvin, W·s<sup>½</sup>·m<sup>-2</sup>·K<sup>-1</sup>):** ability of a material to exchange thermal energy with its surroundings | |

**Table 1.** Key physical parameters for sensory transduction as reviewed by T. Goodman (2012), pp. 59–60.

**Figure 8.** Schematic representation of relationships between key terms in the description and measurement of the interaction between the subject (S) and the physical world (PW).

**Subject**: human being directly involved in the subjective measurement process (an interesting parallelism is the definition given in VIM: 2008—3.8 of sensor: element of a measuring system that is directly affected by a phenomenon, body, or substance carrying a quantity to be measured).

**Soft metrology**: set of models and techniques that allow subjective measurements.

**Subjective measurement**: process of experimentally obtaining one or more quantity values that can be attributed to a human physiological, operational or psychological quantity—as defined by soft metrology—directly involving one human being.

**Physiological quantity in soft metrology**: property of a phenomenon belonging to the human body that has a magnitude expressible as a number and a reference.

**Operational quantity in soft metrology**: property of a human‐centred task that has a magnitude expressible as a number with a proper unit of measurement, in terms of distance (difference) from an optimal or ideal execution of the same task.

**Psychological quantity in soft metrology**: property of the attribution of a judgment by a human to an object, event or phenomenon—concerning his/her opinions, emotions and sensations—that has a magnitude expressible as a number and a reference in a given scale.

**Soft measurand**: a characteristic of a specific object to be measured, in a specific situation. This object results from the interaction between a human and the physical world—an event not necessarily repeatable or directly quantifiable—and is related to perception, cognition and volition.

**Soft uncertainty**: non‐negative parameter characterising the dispersion of the quantity values being attributed to a soft measurand.

**Intrasubjective repeatability**: comparability level of subjective measures under a set of repeatable measurement conditions, related to the same subject.

#### **4. Influence on performance (IPER) index**

#### **4.1. IPER index definition**

We now give a formulation proposal within the soft metrological field of research, as we developed a set of experiments aimed at investigating how noise can affect the performance of humans involved in a task. However, the primary object of these kinds of studies is more general and lies in the possibility of testing a theoretical method of analysis peculiar to soft metrology. The idea is to acquire a robust tool that permits a reliable interpretation of subjective responses to sensory stimuli and a quantification of non‐metrologically definable objects, such as the 'influence of external stimuli on human performance'. To get this quantification, three kinds of outputs—coming from subjective measurement—have been identified as significant. They are characterised as described hereinafter (**Figure 9**).

**Figure 9.** Geometrical description of IPER.

The idea of synthesising these three outputs into a single, significantly and properly weighted number comes from the need to metrologically describe a complex and variegated object of study or, in general, what is called a soft measurand. Due to its specific synthesising nature, such a single number allows prompt and easy comparisons between different complex and multifaceted conditions, events or setups where several subjective parameters are involved. This single number is the here‐proposed IPER index.

The main characteristic of the IPER index is that all its components are dynamically weighted in order to picture—as plausibly as possible—the actual behaviour of the interaction between a subject and a stimulus from a physiological, operational and psychological point of view together. The 'influence on performance' is the soft measurand considered, and this influence can be due to different phenomena or events from time to time.

The IPER index combines the three outputs R described before:

RPH = Physiological output;

ROP = Operational output;

RPS= Psychological output.

The physiological output (from now RPH) is the first output coming from human–world interaction, and it can be quantified through various clinical techniques that monitor some physiological parameters considered as important indicators of changes in the level of perception, emotion or cognition (e.g. the measurement of heart rate, blood pressure, pupil dilation, sweating, brain electrical activity). It depends on:

	- **1.** the subject;
	- **2.** the test stimulus;
	- **3.** other stimuli imposed as constants;
	- **4.** stimuli present during the test that are erroneously considered as non‐influencing and that are not controlled;
	- **5.** the experimental methodology (e.g. the order in which the subject carries out some tasks as part of the experiment).

Its measurement uncertainty can be:

	- **1.** referred to the subject;
	- **2.** referred to the stimuli;
	- **3.** referred to the measurement instruments used for the measure of the physiological parameter.

The operational output (from now ROP) refers to human performance and is quantified by measuring the changes in the attitudes of a subject doing a particular task. These can be, for example, attention, fatigue or accuracy, and can be assessed via various parameters, such as reaction times, number of correct answers, test duration, etc. It depends on the difficulty of the task, which can be:

	- **1.** objective (e.g. memorising a list of five digits versus nine digits) and
	- **2.** subjective (e.g. memorising a list of numbers in a language that is or is not native for the subject);

and on its measurement uncertainty, which can be:

	- **1.** instrumental;
	- **2.** subjective (learning and fatigue).

The psychological output (from now RPS) is the subject's response to a specific question about his opinion on a particular aspect of a stimulus, determined and quantified by evaluation scales, questionnaires and interviews. This specific output is evaluated through the subject's reply to a question about his opinion on the perceived quality of the performance in doing the given task. RPS depends on:

	- **1.** the structure of the questionnaire;
	- **2.** the subject's interpretation of each question (a good questionnaire should avoid as much as possible the influence of the personal interpretation of questions and terms on the subject's answers; anyway, this is a kind of bias that, even if not considered, must be mentioned as reputedly existent);
	- **3.** the reliability of the subject's answers (e.g. the same answer to the same question posed at a different time of the test);
	- **4.** the subject's capability of evaluating proportions;
	- **5.** the measurement uncertainty associated with the evaluation method chosen.

As the aim of IPER is to combine these three R, a normalisation procedure is needed in order to make them comparable.

The normalisation criteria chosen consist in determining the measurability range (RMIN and RMAX) for each R and then rescaling each R to a range between 0 and 1. The measurability range can be the one given by the literature (e.g. the pupil can dilate from a minimum of 2.2 mm to a maximum of 7.9 mm), or derive from the test conditions (e.g. regarding the percentage of errors, we have 0% corresponding to no errors and 100% corresponding to no correct answers). On the other hand, different kinds of ranges, and therefore normalisation criteria, can be chosen. For example, it is possible to choose as measurability range the minimum and maximum values reached by the specific set of subjects involved in the experiment (e.g. regarding the percentage of errors, we impose a minimum of 20% deriving from the performance of the subject that did the best in the test group, and likewise a maximum of 85% deriving from the worst performance). Another possibility could be to set measurability ranges on the parameters of each subject, but of course in this way IPER will not be directly comparable between different subjects.

In order to go from each R to the corresponding Rn (where Rn stands for 'normalised output'), the following linear procedure is adopted, but more complex normalisation rules could be adopted too:

$$
RPH_n = \frac{RPH - RPH_{MIN}}{RPH_{MAX} - RPH_{MIN}}
$$

$$
ROP_n = \frac{ROP - ROP_{MIN}}{ROP_{MAX} - ROP_{MIN}}
$$

$$
RPS_n = \frac{RPS - RPS_{MIN}}{RPS_{MAX} - RPS_{MIN}}
$$

Each one of these outputs (Rn) corresponds to a parameter monitored in a reference condition (r) and in one affected by the presence of an influencing phenomenon (t). So, after the normalisation procedure, we obtain the following six dimensionless parameters: *RPH*nr, *RPH*nt, *ROP*nr, *ROP*nt, *RPS*nr, *RPS*nt.

Where:

**the subscript 'nr'** means normalised response in *'reference'* condition, that is the condition where the task done by the subject is supposed not to be influenced by something external.

**the subscript 'nt'** means normalised response in *'test'* condition, or the condition where the task done by the subject should be affected by the presence of something external (i.e. disturbing, distracting, annoying…).

Objectifying the Subjective: Fundaments and Applications of Soft Metrology http://dx.doi.org/10.5772/64123 273
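As a minimal sketch, the linear rule above can be written as a small function. The 4.0 mm pupil reading below is an invented example value; the 2.2–7.9 mm range is the one cited in the text.

```python
def normalise(r, r_min, r_max):
    # Linear min-max rule: maps a raw output R onto the interval [0, 1]
    return (r - r_min) / (r_max - r_min)

# Hypothetical pupil-diameter reading of 4.0 mm against the 2.2-7.9 mm
# physiological range mentioned in the chapter
rph_n = normalise(4.0, 2.2, 7.9)
print(round(rph_n, 3))  # → 0.316
```

The same function serves for all three outputs; only the (RMIN, RMAX) pair changes.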

Then, in order to compare the two conditions, the difference between the two kinds of outputs (reference and test) is considered:

$$
\Delta RPH = RPH_{nt} - RPH_{nr}
$$

$$
\Delta ROP = ROP_{nt} - ROP_{nr}
$$

$$
\Delta RPS = RPS_{nt} - RPS_{nr}
$$

where

**Δ***RPH* = Finite increment between physiological response in test condition and the physiological response in reference condition.

**Δ***ROP* = Finite increment between operational response in test condition and the operational response in reference condition.

**Δ***RPS* = Finite increment between psychological response in test condition and the psychological response in reference condition.

After the normalisation and the calculation of the difference between reference condition and test condition, IPER is:

$$\text{IPER} = \sqrt{\Delta RPH^2 + \Delta ROP^2 + \Delta RPS^2}$$

IPER goes from a minimum value of 0 to a maximum value of √3. In order to obtain a more legible and directly understandable value, IPER is then transposed to a scale between 0 and 100 using the following formula, becoming *IPER*0,100.

$$IPER\_{0,100} = 100 \cdot \frac{IPER}{\sqrt{3}}$$

Mathematically, the IPER index is the Euclidean distance between two points in a 3D Cartesian space whose axes are the normalised physiological output (*RPH*n), operational output (*ROP*n) and psychological output (*RPS*n).

The first point represents the test results in reference conditions (r), the second point in test conditions (t). If the subject's responses show no differences between the two testing conditions, the IPER0,100 index is 0 (i.e. no measurable influence on performance). If the differences are the maximum permissible for each response (depending on the range chosen for each parameter), the IPER0,100 index will be 100.
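The two formulas above can be sketched in a few lines; the three normalised differences used here are invented values, purely to show the computation.

```python
import math

def iper(d_rph, d_rop, d_rps):
    # Euclidean distance between the reference point and the test point
    # in the normalised (RPHn, ROPn, RPSn) space
    return math.sqrt(d_rph ** 2 + d_rop ** 2 + d_rps ** 2)

def iper_0_100(d_rph, d_rop, d_rps):
    # Rescale from [0, sqrt(3)] to the more legible 0-100 scale
    return 100 * iper(d_rph, d_rop, d_rps) / math.sqrt(3)

# Invented normalised differences, for illustration only
print(round(iper_0_100(0.2, 0.3, 0.1), 1))  # → 21.6
```

Identical responses in the two conditions give 0; maximal differences on all three axes give 100, as stated above.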

Using a scalar index makes the evaluation of differences easier, but information on the contribution of the three outputs is lost. This aspect is addressed later, where a vectorial version of IPER is given.

At the same time, the IPER index is very effective for comparing the influence of different external stimuli. A similar approach is adopted in colorimetry to quantify colour differences using the CIELAB colour system. In the CIELAB system, the difference between two colours is given as the distance between their representations in a 3D space whose axes are correlated with lightness, chroma and hue. Here, *lightness* is the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting; *chroma* is the colourfulness of an area judged as a proportion of the brightness of a similarly illuminated area that appears white or highly transmitting; *hue* is the attribute of a visual sensation according to which an area appears to be similar to one of the perceived colours, red, yellow, green and blue, or to a combination of two of them [25].
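The colorimetric analogue can be sketched as well: the CIE76 colour difference is the Euclidean distance between two colours in Lab coordinates, just as IPER is a distance in the three-output space. The two coordinate triplets below are invented example colours.

```python
import math

def delta_e_76(lab1, lab2):
    # CIE76 colour difference: Euclidean distance in Lab space,
    # analogous to IPER's distance in (RPHn, ROPn, RPSn) space
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two invented colours, given as (L, a, b) triplets
print(round(delta_e_76((52.0, 10.0, -8.0), (50.0, 12.0, -6.0)), 2))  # → 3.46
```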

The last issue regarding the IPER index has to do with the fact that the final value loses the information regarding the contribution of each single component to the final result. For example, two subjects in a specific experiment may reach an IPER0,100 value of 48, the first deriving especially from the fact that he made lots of mistakes in the given task and the second especially from a great physiological reaction. In order to highlight these differences and at the same time understand the dynamics of the experiment, the computation of the percentage of importance of each *R* on the final IPER0,100 value is required.

So, given the previous example, the first subject can have an IPER = 48 composed of 10% physiology, 60% performance and 30% psychology, while for the second subject the same value is composed of 70% physiology, 5% performance and 25% psychology.

These values are computed as follows:

$$IPER\_{\text{ph}} = 100 \cdot \frac{\Delta RPH^2}{IPER^2}$$

$$IPER\_{\text{op}} = 100 \cdot \frac{\Delta ROP^2}{IPER^2}$$

$$IPER\_{\text{ps}} = 100 \cdot \frac{\Delta RPS^2}{IPER^2}$$

Where:


*IPER***ph** = percentage of importance on the *IPER*0,100 value of the physiological parameter;

*IPER***op** = percentage of importance on the *IPER*0,100 value of the operational parameter;

*IPER***ps** = percentage of importance on the *IPER*0,100 value of the psychological parameter.
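A quick sketch of this decomposition (with invented differences) also shows that the three percentages sum to 100 by construction, since IPER² is exactly ΔRPH² + ΔROP² + ΔRPS².

```python
def iper_components(d_rph, d_rop, d_rps):
    # Percentage of importance of each output on the final IPER value;
    # the three shares sum to 100 because IPER^2 is the sum of the squares
    iper_sq = d_rph ** 2 + d_rop ** 2 + d_rps ** 2
    return (100 * d_rph ** 2 / iper_sq,
            100 * d_rop ** 2 / iper_sq,
            100 * d_rps ** 2 / iper_sq)

# Invented normalised differences, for illustration only
ph, op, ps = iper_components(0.2, 0.3, 0.1)
print(round(ph, 1), round(op, 1), round(ps, 1))  # → 28.6 64.3 7.1
```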

#### **4.2. Numerical example and uncertainty evaluation**

Suppose the following experiment: subject A is involved in a mnemonic task (e.g. memorise and repeat the correct order of a set of numbers) while an environmental disturbing factor is present (e.g. acoustic noise at two different levels). In the reference condition, the acoustic noise is not present. The soft measurand is the influence of noise on mnemonic performance. The three outputs are:

**•** *RPH* corresponds to the measurement of pupil diameter;
**•** *ROP* corresponds to the percentage of correct answers in the mnemonic task;
**•** *RPS* corresponds to the answer to the question 'How well did you do your task?' from 0 to 10 (where 0 means 'very badly' and 10 means 'very good').

The measuring range for these three *R* is:

**•** for *RPH*: 2.2 ÷ 7.9 mm, corresponding to the physiological dilation range of a healthy pupil [26];
**•** for *ROP*: 0 ÷ 100%, corresponding to the percentage range of correct answers; and
**•** for *RPS*: 0 ÷ 10, corresponding to the answer scale range to the question 'How well did you do your task?'.

These conditions are considered:

**• Reference condition:** Mnemonic task in silent condition;
**• Test condition 1:** Mnemonic task in the presence of background noise of type 1;
**• Test condition 2:** Mnemonic task in the presence of background noise of type 2.

As shown in **Table 2**, for subject A in test condition 1, IPER0,100 is 26.2 ± 3.4 (a detailed description of how uncertainty is calculated is given in the following paragraph § 5.3).

**Table 2.** Spread sheet of IPER0,100 for subject A, in test condition 1.

**Table 3.** Spread sheet of IPER0,100 for subject A, in test condition 2.

In **Table 3**, the value of IPER0,100 in test condition 2 is calculated. Subject A shows a physiological reaction similar to test condition 1 but a different behaviour as regards the operational and psychological parameters, giving a greater final value of 38.9 ± 3.6.

From this, we deduce that noise of type 2 has a greater influence on subject A's performance than noise of type 1, both referred to the silent condition.

At this point, many other considerations and analyses can be made. For example, with a panel of 50 tested subjects, we can obtain a mean IPER0,100 value for noise of type 1 and for noise of type 2, since IPER0,100 is directly comparable between subjects, and find, for example, that the second noise has an influence on performance 35% greater than the first.
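Because the index is comparable between subjects, this panel-level comparison reduces to simple arithmetic on the per-subject values. A minimal sketch, with entirely hypothetical panel values:

```python
from statistics import mean

# Hypothetical IPER0,100 values for 5 subjects (of a larger panel)
# under the two noise types.
iper_noise1 = [26.2, 24.8, 30.1, 22.5, 27.9]
iper_noise2 = [38.9, 33.0, 40.2, 31.7, 36.4]

m1, m2 = mean(iper_noise1), mean(iper_noise2)
# Relative increase of the mean index of noise 2 over noise 1.
increase = 100.0 * (m2 - m1) / m1
print(f"noise 1: {m1:.1f}, noise 2: {m2:.1f}, increase: {increase:.0f}%")
```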

It is also possible to create homogeneous groups of subjects, for example considering one parameter at a time, and then to test for a correlation with variables associated with the subjects (e.g. age, iris colour, sex, nationality…).

Each output involved in the IPER index evaluation comes from measurements or from procedures that can be assimilated to measurements. For example, the answer to a questionnaire is a measure of a subjective sensation or opinion on a psychological scale, and an operational output is a measure of the capability to perform a given task.

The three constituents of the *IPER* index are known with a given uncertainty, evaluated following the methodologies described above.

It is important to evaluate the uncertainty of *IPER* in order to clearly identify weak points in the experimental setup (e.g. when the uncertainty of one component is too high with respect to the others) and to give the correct interpretation to the numerical values.

As clearly explained in the International Vocabulary of Metrology (VIM), every number in the uncertainty interval has the same probability of representing the true value, which, if considered unique, is in practice unknowable. Consequently, any comparison of results, or any deduction based on values inside the uncertainty interval, has poor physical meaning.

According to the European Accreditation (EA) guide [27], all the expanded uncertainties specified above assume a rectangular distribution and should be reduced to a normal distribution before starting the uncertainty evaluation.
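In practice (a minimal sketch of the standard conversions, not the EA guide's exact wording), reducing a quantity stated with a rectangular distribution to a standard uncertainty means dividing the half-width of the interval by √3, while an expanded uncertainty stated for a normal distribution is divided by its coverage factor (typically k = 2 for ≈95% coverage):

```python
import math

def std_from_rectangular(half_width: float) -> float:
    """Standard uncertainty of a rectangular distribution of given half-width."""
    return half_width / math.sqrt(3)

def std_from_expanded(expanded: float, k: float = 2.0) -> float:
    """Standard uncertainty from an expanded uncertainty with coverage factor k."""
    return expanded / k

# e.g. a reading quoted as x +/- 0.5 with a rectangular distribution:
print(round(std_from_rectangular(0.5), 3))  # half-width 0.5 -> u ~ 0.289
```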

The sensitivity coefficient *ci*, where *i* is the generic parameter in *IPER*, is:

$$c\_{RPH\_r} = \frac{RPH\_r - RPH\_t}{\left(RPH\_{\max} - RPH\_{\min}\right)^2 \sqrt{\frac{\left(RPH\_r - RPH\_t\right)^2}{\left(RPH\_{\max} - RPH\_{\min}\right)^2} + \frac{\left(ROP\_r - ROP\_t\right)^2}{\left(ROP\_{\max} - ROP\_{\min}\right)^2} + \frac{\left(RPS\_r - RPS\_t\right)^2}{\left(RPS\_{\max} - RPS\_{\min}\right)^2}}}$$

$$c\_{RPH\_t} = -c\_{RPH\_r}$$

$$c\_{ROP\_r} = \frac{ROP\_r - ROP\_t}{\left(ROP\_{\max} - ROP\_{\min}\right)^2 \sqrt{\frac{\left(RPH\_r - RPH\_t\right)^2}{\left(RPH\_{\max} - RPH\_{\min}\right)^2} + \frac{\left(ROP\_r - ROP\_t\right)^2}{\left(ROP\_{\max} - ROP\_{\min}\right)^2} + \frac{\left(RPS\_r - RPS\_t\right)^2}{\left(RPS\_{\max} - RPS\_{\min}\right)^2}}}$$

$$c\_{ROP\_t} = -c\_{ROP\_r}$$

$$c\_{RPS\_r} = \frac{RPS\_r - RPS\_t}{\left(RPS\_{\max} - RPS\_{\min}\right)^2 \sqrt{\frac{\left(RPH\_r - RPH\_t\right)^2}{\left(RPH\_{\max} - RPH\_{\min}\right)^2} + \frac{\left(ROP\_r - ROP\_t\right)^2}{\left(ROP\_{\max} - ROP\_{\min}\right)^2} + \frac{\left(RPS\_r - RPS\_t\right)^2}{\left(RPS\_{\max} - RPS\_{\min}\right)^2}}}$$

$$c\_{RPS\_t} = -c\_{RPS\_r}$$

Similar formulas are used for each component of the IPER index.
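With the sensitivity coefficients in hand, the combined standard uncertainty of IPER follows the usual law of propagation of uncertainty from the GUM, u_c = √(Σ (c_i·u_i)²). A minimal sketch, where the (c_i, u_i) pairs below are hypothetical placeholders, not values from the chapter's tables:

```python
import math

def combined_standard_uncertainty(terms):
    """Law of propagation of uncertainty for uncorrelated inputs:
    u_c = sqrt(sum((c_i * u_i)**2)) over (sensitivity, std uncertainty) pairs."""
    return math.sqrt(sum((c * u) ** 2 for c, u in terms))

# Hypothetical (c_i, u_i) pairs for the six IPER inputs
# (RPH_r, RPH_t, ROP_r, ROP_t, RPS_r, RPS_t):
terms = [
    (0.08, 0.1), (-0.08, 0.1),    # pupil diameter, reference/test [mm]
    (0.004, 2.0), (-0.004, 2.0),  # correct answers, reference/test [%]
    (0.05, 0.5), (-0.05, 0.5),    # self-rating, reference/test [0-10]
]
print(round(combined_standard_uncertainty(terms), 4))
```

Multiplying u_c by a coverage factor (k = 2 for ≈95% coverage) then gives an expanded uncertainty of the kind quoted with the IPER values above.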

#### **5. Conclusions**

The field of 'classical' metrology (the one related exclusively to physical quantities) still formally lacks the concepts surrounding soft metrology. As a matter of fact, soft metrology is still considered a newborn branch of research that needs a robust methodological and theoretical architecture in order to be recognised.

However, for the reasons described in this chapter, interest in this topic is growing significantly. This is demonstrated, for example, by the fact that many international congresses on metrology are adding specific sessions dedicated to human perception, appearance and, in general, the quantification of subjective aspects of human–world interaction. Moreover, the Committee in charge of the new revision of the GUM has recently shown willingness to include definitions and models concerning soft metrology.

#### **Acknowledgements**


The author developed the ideas presented in this chapter during three years of research for a Ph.D. in Metrology. This time has been incredibly challenging and represents the best period of all the time spent at the University as a student and as a researcher. The author wishes to thank the entire team of professors and researchers with whom the greatest work and exchange of ideas took place.

#### **Author details**

Laura Rossi

Address all correspondence to: laura.rossi@altran.com

Altran S.p.A., Bologna, Italy

#### **References**


[6] B. Berglund, G.B. Rossi, J.T. Townsend, L.R. Pendrill (Eds), Measurement with Persons – Theory, Methods and Implementation Areas, Taylor & Francis, New York, 2011.

[7] P. Iacomussi, G. Rossi, L. Rossi, A comparison between different light sources induced glare on perceived contrast, *Light & Engineering*, 2012, 20(1).

[8] E.H. Adelson, Checkershadow illusion, 2005, Retrieved 2007‐04‐21. http://web.mit.edu/persci/people/adelson/checkershadow\_illusion.html.

[9] J.B. Watson, Psychology as the behaviourist views it, *Psychological Review*, 1913, 20, 158–177.

[10] U. Neisser, *Cognitive Psychology*, Prentice‐Hall, Englewood Cliffs, NJ, USA, 1967.

[11] G.B. Rossi, Measurement of quantities related to human perception, in: F. Pavese et al. (Eds), Advanced Mathematical and Computational Tools in Metrology and Testing (AMCTM VIII), World Scientific, Singapore, 2009.

[12] ISO IEC Guide 99, *International Vocabulary of Metrology – Basic and General Concepts and Associated Terms*, 3rd edition, ISO, Geneva, 2007.

[13] G.B. Rossi, Cross‐disciplinary concepts and terms in measurement, *Measurement*, 2009, 42, 1288–1296.

[14] L. Finkelstein, Theory and philosophy of measurement, in: P.H. Sydenham (Ed.), Handbook of Measurement Science, vol. 1: Theoretical Fundamentals, Wiley, Chichester, 1982.

[15] E. Nagel, Measurement, *Erkenntnis*, December 1931, 2(1), 313–335, Springer, the Netherlands.

[16] G. Iverson, R.D. Luce, The representational measurement approach to psychophysical and judgmental problems, in: M.H. Birnbaum (Ed.), Measurement, Judgment, and Decision Making, Academic Press, New York, 1998.

[17] R.D. Luce, P. Suppes, Representational measurement theory, in: J. Wixted, H. Pashler (Eds), Stevens' Handbook of Experimental Psychology, 3rd edition, vol. 4: Methodology in Experimental Psychology, Wiley, New York, 2001, pp. 1–41.

[18] S. Mannarini, "*Psicometria*", Il Mulino, Manuali, 2003.

[19] F. Pavese et al. (Eds), *Advanced Mathematical and Computational Tools in Metrology and Testing (AMCTM VIII)*, World Scientific, Singapore, 2009.

[20] C. Spence, Auditory contributions to flavour perception and feeding behaviour, *Physiology and Behaviour*, November 5, 2012, 107(4), 505–515.

[21] L. Rossi, A. Schiavi, Oltre la psicoacustica. Gli effetti del rumore sull'uomo nella prospettiva della soft metrology [Beyond psychoacoustics: the effects of noise on humans from the perspective of soft metrology], Rivista Italiana di Acustica, 2012, 36(1), January–March.

[22] L. Rossi, A. Schiavi, A. Astolfi, Measurement of subjective reaction to noise through performance evaluation, in: Proc. 10th Int. Congr. Noise as a Public Health Problem (ICBEN 2011), July 24–28, London, 2011.

