
Fig. 9. The antennas set-up onboard the Cessna Citation II.

The aircraft was also equipped with an Inertial Navigation System (INS), whose output is used to assess the accuracy of the GNSS-based attitude estimates. Figure 11 reports the number of tracked satellites for the duration of the two tests; the PDOP (Position Dilution of Precision) is also shown.

The matrices of local body-frame baseline coordinates for the two tests are

$$F\_{T-I} = \begin{bmatrix} 5.45 & -0.34 \\ 0 & 7.60 \\ 0 & 0 \end{bmatrix} \ \text{[m]}, \qquad F\_{T-II} = \begin{bmatrix} 4.90 & -0.39 \\ 0 & 7.60 \\ 0 & 0 \end{bmatrix} \ \text{[m]} \tag{56}$$

The receiver collected GPS-L1 data for about 6000 epochs (zero cut-off angle, 1 Hz sampling) between 11:42 and 13:20 UTC on 2 June 2005 in the first test, and for about 15000 epochs (zero cut-off angle, 1 Hz sampling) between 11:00 and 14:23 UTC on 1 November 2007 in the second test.
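For reference, the PDOP reported in Figure 11 quantifies how the receiver–satellite geometry amplifies ranging errors into position errors; it is obtained from the cofactor matrix of the single-point positioning geometry matrix. A minimal sketch follows; the line-of-sight vectors below are hypothetical, not taken from the flight data:

```python
import numpy as np

def pdop(los_vectors):
    """Position Dilution of Precision from receiver-to-satellite
    line-of-sight vectors (one row per tracked satellite)."""
    u = np.asarray(los_vectors, dtype=float)
    u = u / np.linalg.norm(u, axis=1, keepdims=True)   # unit line-of-sight vectors
    G = np.hstack([u, np.ones((u.shape[0], 1))])       # geometry matrix (last column: clock)
    Q = np.linalg.inv(G.T @ G)                         # cofactor matrix
    return float(np.sqrt(np.trace(Q[:3, :3])))         # position block only

# Hypothetical geometry: one satellite near zenith, three spread at low elevation.
los = [[0.00,  0.00, 1.00],
       [0.90,  0.00, 0.44],
       [-0.45, 0.78, 0.44],
       [-0.45, -0.78, 0.44]]
print(pdop(los))
```

More satellites and a wider spread across the sky lower the PDOP, which is why the second test, with more tracked satellites, shows a more favorable geometry.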

Fig. 10. Ground traces of the two test flights: (a) *T* − *I*; (b) *T* − *II* (longitude and latitude in degrees).


Fig. 11. Number of satellites tracked and corresponding PDOP values.

#### **5.1 Instantaneous ambiguity resolution**

The success rates of the LAMBDA and MC-LAMBDA methods applied to both flight tests are reported in Table 1 (Giorgi et al., 2011). The single-epoch performance of the unconstrained method is rather unsatisfactory: the correct set of integer ambiguities is resolved for only 5.8% of the epochs in test *T* − *I* and 24.7% in test *T* − *II*. The difference is due to the higher number of satellites tracked in the second test.

Table 1. *T* − *I* and *T* − *II* tests: unaided single-epoch, single-frequency success rate (%) for the LAMBDA and MC-LAMBDA methods, two-baseline processing.

Instead, application of the MC-LAMBDA method yields a strong performance improvement. The constrained method is capable of providing the correct integer solution for more than 80% of the epochs in test *T* − *I* and more than 88% in test *T* − *I I*.

Both airborne tests confirm the very large improvement obtained by strengthening the underlying model with the inclusion of geometrical constraints. It is stressed that all the ambiguity resolution results reported here are obtained by processing the GNSS signals without any a priori information or assumptions about the attitude or the aircraft motion. Moreover, no mask angles, elevation-dependent weighting models, dynamic models, or filtering of any kind are applied.

#### **5.2 Attitude determination**


Table 2. *T* − *II* test: standard deviations of the differences between the GPS- and INS-derived attitude angles.

High single-epoch success rates yield precise epoch-by-epoch attitude solutions for the larger part of the flights' duration. The attitude angles based on the correctly fixed integer ambiguities in test *T* − *I* are shown in Figure 12. The high dynamics of the flight is evident from the steep variations of the attitude. In particular, Figure 13 shows a zero-gravity maneuver: the aircraft promptly pitched up, gained some altitude, and performed an ample arc to create a virtual absence of gravity on board.

Fig. 12. *T* − *I* test: time series of the three attitude angles (heading *ψ*(*t*), elevation *θ*(*t*), and bank *φ*(*t*)) as estimated via GNSS. On the right, a closer look at the estimates.

Fig. 13. *T* − *I* test: zero-gravity maneuver. (a) Elevation *θ*(*t*); (b) altitude profile during the maneuver.

Figure 14 shows the GNSS-based attitude angles for the test *T* − *II*. The INS solutions are also reported in the figures, in order to provide a comparison between the two systems. Table 2 reports the standard deviations of the differences between the INS- and GNSS-based attitude estimates. Taking the precise INS output as benchmark solution, it can be inferred that the accuracy obtained is within the expected range, given the baseline lengths employed. The heading angle is estimated with the highest precision, whereas the elevation estimate is characterized by the highest noise levels. This is due to the relative geometry of the antennas and to the fact that the vertical components of the GNSS-based baseline estimates are inherently less accurate than the horizontal components. The bank angle is estimated with higher precision than the elevation angle, being driven by the longer *Body* − *Wing* baseline.


Fig. 14. *T* − *I I* test: time series of the three attitude angles as estimated via GNSS and provided by the INS. On the right, a closer look at the estimates.
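To illustrate the attitude extraction step only (not the constrained MC-LAMBDA machinery of this chapter), once the baselines are estimated in the local frame the attitude follows from fitting a rotation between the body-frame baseline matrix of Eq. (56) and its local-frame counterpart. The sketch below solves this least-squares rotation problem via SVD (orthogonal Procrustes, cf. Schonemann, 1966) and uses a synthetic rotation in place of real flight data:

```python
import numpy as np

def attitude_from_baselines(B_local, F_body):
    """Least-squares rotation R (local frame <- body frame) minimizing
    ||B_local - R @ F_body|| over rotations, solved via SVD (orthogonal
    Procrustes); heading/elevation/bank from a z-y-x Euler sequence."""
    U, _, Vt = np.linalg.svd(B_local @ F_body.T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det(R) = +1
    R = U @ D @ Vt
    heading = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    elevation = np.degrees(-np.arcsin(R[2, 0]))      # sign convention may differ
    bank = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return heading, elevation, bank

# Body-frame baseline matrix of test T-I, Eq. (56).
F_body = np.array([[5.45, -0.34],
                   [0.00,  7.60],
                   [0.00,  0.00]])

# Synthetic check: rotate the baselines by a known attitude and recover it.
psi, theta, phi = np.radians([30.0, 5.0, -10.0])
cz, sz = np.cos(psi), np.sin(psi)
cy, sy = np.cos(theta), np.sin(theta)
cx, sx = np.cos(phi), np.sin(phi)
R_true = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
          @ np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
          @ np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
print(attitude_from_baselines(R_true @ F_body, F_body))  # recovers ~(30, 5, -10) deg
```

Note that with two non-collinear baselines the rotation is fully determined; the vertical components of real GNSS baseline estimates are the noisiest, which is why the elevation angle shows the largest errors in Table 2.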

#### **6. Summary and conclusions**

Ambiguity resolution can be effectively enhanced by means of a rigorous formulation of the combined ambiguity-attitude estimation problem. In order to infer the aircraft's orientation from the GNSS antenna positions, each antenna location on the aircraft body has to be precisely known. This geometrical information can be embedded in the ambiguity resolution step, thus strengthening the underlying functional model and enhancing the whole estimation process. The higher ambiguity resolution performance comes at the cost of an increased computational complexity. To overcome this issue, a number of solutions are presented which allow for fast and reliable solutions without requiring extensive computational loads. A fast implementation of the geometrically constrained problem is obtained by modifying a well-known method for ambiguity resolution: the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method. This method is nowadays the standard for carrier phase-based applications, and it is implemented in a number of receivers employed for high-precision navigation applications. The complexity of the constrained estimation method requires the development of novel strategies to extract the solution in a timely manner. This is achieved by properly modifying the LAMBDA method to address the specific ambiguity-attitude estimation problem, yielding the Multivariate Constrained (MC-)LAMBDA method. Through the use of two novel search schemes, the sought-for set of carrier phase ambiguities can be efficiently estimated.

The method is tested on actual data collected during two different flight tests. Each test indicates the feasibility of employing GNSS as an attitude sensor, an application that might be increasingly adopted in the aviation industry, either stand-alone for non-critical applications or in combination with other sensors for safety-critical applications.

#### **7. Acknowledgment**

The second author is the recipient of an Australian Research Council Federation Fellowship (project number FF0883188). This support is gratefully acknowledged.

#### **8. References**

Boon, F. & Ambrosius, B. A. C. (1997). Results of Real-Time Applications of the LAMBDA Method in GPS Based Aircraft Landings, *Proceedings KIS97*, pp. 339–345.

Cheng, Y. & Shuster, M. D. (2007). Robustness and Accuracy of the QUEST Algorithm, *Advances in the Astronautical Sciences* 127: 41–61.

Cox, D. B. & Brading, J. D. (2000). Integration of LAMBDA Ambiguity Resolution with Kalman Filter for Relative Navigation of Spacecraft, *NAVIGATION* 47(3): 205–210.

Davenport, P. B. (1968). A Vector Approach to the Algebra of Rotations with Applications, *NASA Technical Note D-4696, Goddard Space Flight Center*.

de Jonge, P. & Tiberius, C. (1996). The LAMBDA Method for Integer Ambiguity Estimation: Implementation Aspects, *LGR Series 12, Publications of the Delft Geodetic Computing Centre, Delft, The Netherlands*.

Giorgi, G. (2011). GNSS Carrier Phase-based Attitude Determination: Estimation and Applications, *PhD dissertation, Delft University of Technology, Delft, The Netherlands*.

Giorgi, G., Teunissen, P. J. G. & Buist, P. J. (2008). A Search and Shrink Approach for the Baseline Constrained LAMBDA: Experimental Results, *Proceedings of the International Symposium on GPS/GNSS 2008, A. Yasuda (Ed.), Tokyo University of Marine Science and Technology*, pp. 797–806.

Giorgi, G., Teunissen, P. J. G., Verhagen, S. & Buist, P. J. (2011). Instantaneous Ambiguity Resolution in GNSS-based Attitude Determination Applications: the MC-LAMBDA Method, *Journal of Guidance, Control, and Dynamics, to be published*.

Giorgi, G., Teunissen, P. J. G., Verhagen, S. & Buist, P. J. (2012). Integer Ambiguity Resolution with Nonlinear Geometrical Constraints, *N. Sneeuw et al. (eds.), VII Hotine-Marussi Symposium on Mathematical Geodesy, International Association of Geodesy Symposia 137, Springer-Verlag*.

Huang, S. Q., Wang, J. X., Wang, X. Y. & Chen, J. P. (2009). The Application of the LAMBDA Method in the Estimation of the GPS Slant Wet Vapour, *Acta Aeronautica et Astronautica Sinica* 50(1): 60–68.

Ji, S., Chen, W., Zhao, C., Ding, X. & Chen, Y. (2007). Single Epoch Ambiguity Resolution for Galileo with the CAR and LAMBDA Methods, *GPS Solutions* 11(4): 259–268.

Kroes, R., Montenbruck, O., Bertiger, W. & Visser, P. (2005). Precise GRACE Baseline Determination Using GPS, *GPS Solutions* 9(1): 21–31.

Markley, F. L. (1993). Attitude Determination Using Vector Observations: a Fast Optimal Matrix Algorithm, *The Journal of the Astronautical Sciences* 41(2): 261–280.

Markley, F. L. & Mortari, D. (1999). How to Estimate Attitude from Vector Observations, *Presented at AAS/AIAA Astrodynamics Specialist Conference, Paper 99-427*.

Markley, F. L. & Mortari, D. (2000). Quaternion Attitude Estimation Using Vector Observations, *The Journal of the Astronautical Sciences* 48(2-3): 359–380.

Misra, P. & Enge, P. (2001). *Global Positioning System: Signals, Measurements, and Performance*, 2nd edn, Ganga-Jamuna Press, Lincoln MA.

Mortari, D. (1997). ESOQ: A Closed-form Solution to the Wahba Problem, *The Journal of the Astronautical Sciences* 45(2): 195–204.

Mortari, D. (2000). Second Estimator of the Optimal Quaternion, *Journal of Guidance, Control, and Dynamics* 23(5): 885–888.

Nadarajah, N., Teunissen, P. J. G. & Giorgi, G. (2011). Instantaneous GNSS Attitude Determination for Remote Sensing Platforms, *Presented at the XXV International Union of Geodesy and Geophysics General Assembly (IUGG), Melbourne, Australia*.

Schonemann, P. H. (1966). A Generalized Solution of the Orthogonal Procrustes Problem, *Psychometrika* 31(1): 1–10.

Shuster, M. D. (1978). Approximate Algorithms for Fast Optimal Attitude Computation, *Proceedings of the AIAA Guidance and Control Conference, Palo Alto, CA, US*, pp. 88–95.

Shuster, M. D. (1993). A Survey of Attitude Representations, *The Journal of the Astronautical Sciences* 41(4): 439–517.

Shuster, M. D. & Oh, S. D. (1981). Three-Axis Attitude Determination from Vector Observations, *Journal of Guidance and Control* 4(1): 70–77.

Teunissen, P. J. G. (1995). The Least-Squares Ambiguity Decorrelation Adjustment: a Method for Fast GPS Integer Ambiguity Estimation, *Journal of Geodesy* 70(1-2): 65–82.

Teunissen, P. J. G. (2000). The Success Rate and Precision of GPS Ambiguities, *Journal of Geodesy* 74(3): 321–326.

Teunissen, P. J. G. (2007a). A General Multivariate Formulation of the Multi-Antenna GNSS Attitude Determination Problem, *Artificial Satellites* 42(2): 97–111.

Teunissen, P. J. G. (2007b). Influence of Ambiguity Precision on the Success Rate of GNSS Integer Ambiguity Bootstrapping, *Journal of Geodesy* 81(5): 351–358.

Teunissen, P. J. G. (2007c). The LAMBDA Method for the GNSS Compass, *Artificial Satellites* 41(3): 89–103.

Teunissen, P. J. G. (2011). A-PPP: Array-aided Precise Point Positioning with Global Navigation Satellite Systems, *IEEE Transactions on Signal Processing (submitted for publication)*, pp. 1–12.

Teunissen, P. J. G. & Kleusberg, A. (1998). *GPS for Geodesy*, Springer, Berlin Heidelberg New York.

Van Loan, C. F. (2000). The Ubiquitous Kronecker Product, *Journal of Computational and Applied Mathematics* 123: 85–100.

Wahba, G. (1965). Problem 65-1: A Least Squares Estimate of Spacecraft Attitude, *SIAM Review* 7(3): 384–386.




## **A Variational Approach to the Fuel Optimal Control Problem for UAV Formations**

Andrea L'Afflitto and Wassim M. Haddad *Georgia Institute of Technology USA*

#### **1. Introduction**


The pivotal role of unmanned aerial vehicles (UAVs) in modern aircraft technology is evidenced by the large number of civil and military applications they are employed in. For example, UAVs successfully serve as platforms carrying payloads aimed at land monitoring (Ramage et al., 2009), wildfire detection and management (Ambrosia & Hinkley, 2008), law enforcement (Haddal & Gertler, 2010), pollution monitoring (Oyekan & Huosheng, 2009), and communication broadcast relay (Majewski, 1999), to name just a few.

A formation of UAVs, defined by a set of vehicles whose states are coupled through a common control law (Scharf et al., 2003b), is often more valuable than a single aircraft because it can accomplish several tasks concurrently. In particular, UAV formations can guarantee higher flexibility and redundancy, as well as increased capability of distributed payloads (Scharf et al., 2003a). For example, an aircraft formation can successfully intercept a vehicle which is faster than its chasers (Jang & Tomlin, 2005). Alternatively, a UAV formation equipped with interferometric synthetic aperture radar (In-SAR) antennas can pursue both along-track and cross-track interferometry, which allow harvesting information that a single radar cannot detect otherwise (Lillesand et al., 2007).

Path planning is one of the main problems when designing missions involving multiple vehicles; a UAV formation typically needs to accomplish diverse tasks while meeting some assigned constraints. For example, a UAV formation may need to intercept given targets while its members maintain an assigned relative attitude. Trajectories should also be optimized with respect to some performance measure capturing minimum time or minimum fuel expenditure. In particular, trajectory optimization is critical for mini and micro UAVs (*μ*UAVs) because they often operate independently from remote human controllers for extended periods of time (Shanmugavel et al., 2010) and because of the limited amount of available energy sources (Pines & Bohorquez, 2006).

The scope of the present paper is to provide a rigorous and sufficiently broad formulation of the optimal path planning problem for UAV formations, modeled as a system of *n* 6-degree-of-freedom (DoF) rigid bodies subject to a constant gravitational acceleration and to aerodynamic forces and moments. Specifically, system trajectories are optimized in terms of control effort; that is, we design a control law that minimizes the forces and moments needed to operate a UAV formation while meeting all the mission objectives. Minimizing the control effort is equivalent to minimizing the formation's fuel consumption in the case of vehicles equipped

The inverse of a square matrix **A** is denoted by **A**−1, the transpose of **A**−<sup>1</sup> is denoted by **A**−T, the determinant of **A** is denoted by det(**A**), the diagonal of **A** is denoted by diag(**A**), and the

A Variational Approach to the Fuel Optimal Control Problem for UAV Formations 223

Functions are always introduced by specifying their domain and codomain, e.g., **h** : *A*<sup>1</sup> × *A*<sup>2</sup> → *B*. The arguments of a function will not be indicated in the text unless necessary, e.g., **h**(**x**, **y**) is simply denoted by **h**. If a function is dependent on some unspecified variables, then its arguments will be replaced by dots, e.g., **h**(·, ·). The same convention is used for

The first derivative with respect to time of a differentiable function **<sup>q</sup>** : [*t*1, *<sup>t</sup>*2] <sup>→</sup> **<sup>R</sup>**<sup>n</sup> is denoted by the a dot on top of the function, e.g., **<sup>q</sup>**˙ (*t*). Given **<sup>g</sup>** : *<sup>A</sup>* <sup>→</sup> **<sup>R</sup>**m, where *<sup>A</sup>* <sup>⊂</sup> **<sup>R</sup>**<sup>n</sup> is an open set, we say that **<sup>g</sup>**(·) *is of class* <sup>C</sup>k, that is, **<sup>g</sup>**(·) ∈ Ck(*A*), if **<sup>g</sup>**(·) is continuous on *<sup>A</sup>* with k-continuous

Throughout the paper we use two types of mathematical statements, namely, existential and universal statements. An existential statement has the form: "there exist **x** ∈ *A* such that condition Φ is satisfied." A universal statement has the form: "condition Φ is satisfied for all **x** ∈ *A*." For universal statements we often omit the words "for all" and write: "condition Φ

Time is the only independent variable used in this paper and is denoted by *t*. In this paper, *t* ∈ [*t*1, *t*2], where [*t*1, *t*2] ⊂ **R** is a fixed time interval and is a priori assigned. A generic member of a formation of n ∈ **N** UAVs is identified by the subscript i and, hence, i = 1, ..., n. We define **<sup>r</sup>**<sup>i</sup> : [*t*1, *<sup>t</sup>*2] <sup>→</sup> **<sup>R</sup>**<sup>3</sup> as the *position vector* of the center of mass of the i-th vehicle in a given inertial reference frame, <sup>σ</sup><sup>i</sup> : [*t*1, *<sup>t</sup>*2] <sup>→</sup> **<sup>R</sup>**<sup>3</sup> as the *attitude vector* of the i-th vehicle in

The vector **<sup>v</sup>**<sup>i</sup> : [*t*1, *<sup>t</sup>*2] <sup>→</sup> **<sup>R</sup>**<sup>3</sup> denotes the *velocity* of the center of mass of the i-th vehicle, <sup>ω</sup><sup>i</sup> : [*t*1, *<sup>t</sup>*2] <sup>→</sup> **<sup>R</sup>**<sup>3</sup> denotes the *angular velocity* of the i-th vehicle in a principal body reference

<sup>i</sup> , <sup>σ</sup><sup>T</sup> i ]

<sup>1</sup> (*t*), ..., **<sup>x</sup>**<sup>T</sup>

**x**T

<sup>∈</sup> *<sup>D</sup>*abs <sup>⊆</sup> **<sup>R</sup>**12n, *<sup>t</sup>* <sup>∈</sup> [*t*1, *<sup>t</sup>*2].

∪ {**0**3} ,

∪ {**0**3} .

<sup>T</sup> is the *augmented state vector* of the i-th vehicle. For all

<sup>i</sup> (*t*) (Neimark & Fufaev, 1972; Shuster, 1993). We assume

*<sup>t</sup>*<sup>1</sup> **<sup>v</sup>**i(*τ*) <sup>d</sup>*<sup>τ</sup>* and ˙σi(*t*) = **<sup>R</sup>**rod(σi(*t*))ωi(*t*), where **<sup>R</sup>**rod(σi(*t*)) <sup>1</sup>

<sup>n</sup>(*t*) T <sup>T</sup> as the *state vector*

<sup>4</sup> (1 −

<sup>n</sup>(*t*) T .

functionals; however, their arguments are embraced by square brackets, i.e., J [**x**, **y**].

derivatives. If **<sup>g</sup>**(·) ∈ C1(*A*), then **<sup>g</sup>**(·) is *continuously differentiable*.

modified rodrigues parameters (MRPs) (Shuster, 1993), and **x**<sup>i</sup> [**r**<sup>T</sup>

of the i-th vehicle. The *system's configuration* at time *t* is defined by

2σi(*t*)σ<sup>T</sup>

**x**T

<sup>1</sup> (*t*), ..., **<sup>x</sup>**<sup>T</sup>

We define **u**i,tran : [*t*1, *t*2] → Γi,tran (respectively, **u**i,rot : [*t*1, *t*2] → Γi,rot) as the *translational acceleration* (respectively, the *rotational acceleration*) provided by the control system of the i-th vehicle in the formation, e.g., **u**i,tran is the acceleration provided by the propulsion system and **u**i,rot is the acceleration provided by the ailerons. The vector **u**i,tran (respectively, **u**i,rot) is also referred to as the *i-th translational control vector* (respectively, the *i-th rotational control vector*). For a given set of real constants *ρ*i,1, *ρ*i,2, *ρ*i,3, and *ρ*i,4 such that 0 ≤ *ρ*i,1 < *ρ*i,2 and

**<sup>a</sup>** <sup>∈</sup> **<sup>R</sup>**<sup>3</sup> : *<sup>ρ</sup>*i,1 ≤ ||**a**||<sup>2</sup> <sup>≤</sup> *<sup>ρ</sup>*i,2

**<sup>a</sup>** <sup>∈</sup> **<sup>R</sup>**<sup>3</sup> : *<sup>ρ</sup>*i,3 ≤ ||**a**||<sup>2</sup> <sup>≤</sup> *<sup>ρ</sup>*i,4

<sup>∈</sup> *<sup>D</sup>*rel <sup>⊆</sup> **<sup>R</sup>**6n and

Γi,tran

Γi,rot

nullspace of a matrix **A** is denoted by N (**A**).

holds, **x** ∈ *A*."

frame, and **<sup>x</sup>**<sup>i</sup>

<sup>i</sup> (*t*)σi(*t*))**I**<sup>3</sup> <sup>+</sup> <sup>1</sup>

<sup>1</sup> (*t*), ..., **<sup>x</sup>**<sup>T</sup>

σT

 **x**T

*<sup>t</sup>* <sup>∈</sup> [*t*1, *<sup>t</sup>*2], **<sup>r</sup>**i(*t*) = *<sup>t</sup>*

<sup>n</sup>(*t*) T **r**T <sup>i</sup> , **<sup>v</sup>**<sup>T</sup> <sup>i</sup> , <sup>σ</sup><sup>T</sup> <sup>i</sup> , <sup>ω</sup><sup>T</sup> i

2σ<sup>×</sup>

<sup>i</sup> (*t*) + <sup>1</sup>

0 ≤ *ρ*i,3 < *ρ*i,4, Γi,tran and Γi,rot are defined as

with conventional fuel-based propulsion systems (Schouwenaars et al., 2006) and is a suitable indicator of the energy consumption for vehicles powered by batteries or other power sources.

In this paper, we derive an optimal control law which is independent of the size of the formation, the system constraints, and the environmental model adopted, and hence, our framework applies to aircraft, spacecraft, autonomous marine vehicles, and robot formations. The direction and magnitude of the optimal control forces and moments is a function of the dynamics of two vectors, namely the translational and rotational primer vectors. In general, finding the dynamics of these two vectors over a given time interval is a demanding task that does not allow for an analytical closed-form solution, and hence, a numerical approach is required. Our main result involves necessary conditions for optimality of the formations' trajectories.

The contents of this paper are as follows. In Section 2, we present notation and definitions of the physical variables needed to formulate the fuel optimization problem. Section 3 gives a problem statement of the UAV path planning optimization problem, whereas Section 4 provides the necessary mathematical background for this problem. Next, in Section 5, we survey the relevant literature and highlight the advantages related to the proposed approach. Section 6 discusses results achieved by applying the theoretical framework developed in Section 4. In Section 7, we present an illustrative numerical example that highlights the efficacy of the proposed approach. Finally, in Section 8, we draw conclusions and highlight future research directions.

#### **2. Notation and definitions**

The notation used in this paper is fairly standard. When a word is defined in the text, the concept defined is *italicized* and it should be understood as an "if and only if" statement. Mathematical definitions are introduced by the symbol "." The symbol **N** denotes the set of positive integers, **R** denotes the set of real numbers, **R**+ denotes the set of nonnegative real numbers, **<sup>R</sup>**<sup>n</sup> denotes the set of *<sup>n</sup>* <sup>×</sup> 1 column vectors on the field of real numbers, and **<sup>R</sup>**n×<sup>m</sup> denotes the set of real n × m matrices. Both natural and real numbers are denoted by lower case letters, e.g., j <sup>∈</sup> **<sup>N</sup>** and a <sup>∈</sup> **<sup>R</sup>**, vectors are denoted by bold lower case letters, e.g., **<sup>x</sup>** <sup>∈</sup> **<sup>R</sup>**n, and matrices are denoted by bold upper case letters, e.g., **<sup>A</sup>** <sup>∈</sup> **<sup>R</sup>**n×m. Subsets of **<sup>R</sup>**<sup>n</sup> and **<sup>R</sup>**n×<sup>m</sup> are denoted by italicized upper case letters, e.g., *<sup>A</sup>* <sup>⊆</sup> **<sup>R</sup>**<sup>n</sup> and *<sup>B</sup>* <sup>⊆</sup> **<sup>R</sup>**n×m. The interior of the set *A* is denoted by int(*A*). The zero vector in **R**<sup>n</sup> is denoted by **0**n, the zero matrix in **R**n×<sup>m</sup> is denoted by **<sup>0</sup>**n×m, and the identity matrix in **<sup>R</sup>**n×<sup>n</sup> is denoted by **<sup>I</sup>**n.

For $\mathbf{x} \in \mathbb{R}^n$ we write $\mathbf{x} \ge\ge \mathbf{0}_n$ (respectively, $\mathbf{x} >> \mathbf{0}_n$) to indicate that every component of $\mathbf{x}$ is nonnegative (respectively, positive). We write $\|\cdot\|_p$ for the $p$-norm of a vector and its corresponding equi-induced matrix norm, e.g., $\|\mathbf{x}\|_p$ and $\|\mathbf{A}\|_p$. The transpose of a vector or of a matrix is denoted by the superscript $(\cdot)^{\rm T}$, e.g., $\mathbf{x}^{\rm T}$ and $\mathbf{A}^{\rm T}$. The cross product between two vectors $\mathbf{a}$ and $\mathbf{b}$ is denoted by $\mathbf{a} \wedge \mathbf{b}$. Given $\mathbf{x} \in \mathbb{R}^3$ such that $\mathbf{x} \triangleq [x_1, x_2, x_3]^{\rm T}$, we define

$$\mathbf{x}^{\times} \triangleq \begin{bmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{bmatrix}.$$
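As a quick numerical sanity check (a sketch assuming NumPy; `skew` is a hypothetical helper name), the matrix $\mathbf{x}^{\times}$ reproduces the cross product $\mathbf{x} \wedge \mathbf{y}$:

```python
import numpy as np

def skew(x):
    """Cross-product (skew-symmetric) matrix x^x of a 3-vector x."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

x = np.array([1.0, 2.0, 3.0])
y = np.array([-0.5, 4.0, 1.5])

# x^x y coincides with the cross product x ∧ y
assert np.allclose(skew(x) @ y, np.cross(x, y))
# x^x is skew-symmetric: (x^x)^T = -x^x
assert np.allclose(skew(x).T, -skew(x))
```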

with conventional fuel-based propulsion systems (Schouwenaars et al., 2006) and is a suitable indicator of the energy consumption for vehicles powered by batteries or other power sources. In this paper, we derive an optimal control law which is independent of the size of the formation, the system constraints, and the environmental model adopted, and hence, our framework applies to aircraft, spacecraft, autonomous marine vehicles, and robot formations. The direction and magnitude of the optimal control forces and moments is a function of the dynamics of two vectors, namely the translational and rotational primer vectors. In general, finding the dynamics of these two vectors over a given time interval is a demanding task that does not allow for an analytical closed-form solution, and hence, a numerical approach is required. Our main result involves necessary conditions for optimality of the formations' trajectories.


The inverse of a square matrix $\mathbf{A}$ is denoted by $\mathbf{A}^{-1}$, the transpose of $\mathbf{A}^{-1}$ is denoted by $\mathbf{A}^{-\rm T}$, the determinant of $\mathbf{A}$ is denoted by $\det(\mathbf{A})$, the diagonal of $\mathbf{A}$ is denoted by $\mathrm{diag}(\mathbf{A})$, and the nullspace of a matrix $\mathbf{A}$ is denoted by $\mathcal{N}(\mathbf{A})$.

Functions are always introduced by specifying their domain and codomain, e.g., $\mathbf{h} : A_1 \times A_2 \to B$. The arguments of a function will not be indicated in the text unless necessary, e.g., $\mathbf{h}(\mathbf{x}, \mathbf{y})$ is simply denoted by $\mathbf{h}$. If a function is dependent on some unspecified variables, then its arguments will be replaced by dots, e.g., $\mathbf{h}(\cdot, \cdot)$. The same convention is used for functionals; however, their arguments are enclosed in square brackets, i.e., $J[\mathbf{x}, \mathbf{y}]$.

The first derivative with respect to time of a differentiable function $\mathbf{q} : [t_1, t_2] \to \mathbb{R}^n$ is denoted by a dot on top of the function, e.g., $\dot{\mathbf{q}}(t)$. Given $\mathbf{g} : A \to \mathbb{R}^m$, where $A \subset \mathbb{R}^n$ is an open set, we say that $\mathbf{g}(\cdot)$ *is of class* $C^k$, that is, $\mathbf{g}(\cdot) \in C^k(A)$, if $\mathbf{g}(\cdot)$ is continuous on $A$ with $k$ continuous derivatives. If $\mathbf{g}(\cdot) \in C^1(A)$, then $\mathbf{g}(\cdot)$ is *continuously differentiable*.

Throughout the paper we use two types of mathematical statements, namely, existential and universal statements. An existential statement has the form: "there exists $\mathbf{x} \in A$ such that condition $\Phi$ is satisfied." A universal statement has the form: "condition $\Phi$ is satisfied for all $\mathbf{x} \in A$." For universal statements we often omit the words "for all" and write: "condition $\Phi$ holds, $\mathbf{x} \in A$."

Time is the only independent variable used in this paper and is denoted by $t$. In this paper, $t \in [t_1, t_2]$, where $[t_1, t_2] \subset \mathbb{R}$ is a fixed time interval assigned a priori. A generic member of a formation of $n \in \mathbb{N}$ UAVs is identified by the subscript $i$ and, hence, $i = 1, \ldots, n$. We define $\mathbf{r}_i : [t_1, t_2] \to \mathbb{R}^3$ as the *position vector* of the center of mass of the $i$-th vehicle in a given inertial reference frame, $\sigma_i : [t_1, t_2] \to \mathbb{R}^3$ as the *attitude vector* of the $i$-th vehicle in modified Rodrigues parameters (MRPs) (Shuster, 1993), and $\mathbf{x}_i \triangleq [\mathbf{r}_i^{\rm T}, \sigma_i^{\rm T}]^{\rm T}$ as the *state vector* of the $i$-th vehicle. The *system's configuration* at time $t$ is defined by $[\mathbf{x}_1^{\rm T}(t), \ldots, \mathbf{x}_n^{\rm T}(t)]^{\rm T}$.

The vector $\mathbf{v}_i : [t_1, t_2] \to \mathbb{R}^3$ denotes the *velocity* of the center of mass of the $i$-th vehicle, $\omega_i : [t_1, t_2] \to \mathbb{R}^3$ denotes the *angular velocity* of the $i$-th vehicle in a principal body reference frame, and $\tilde{\mathbf{x}}_i \triangleq [\mathbf{r}_i^{\rm T}, \mathbf{v}_i^{\rm T}, \sigma_i^{\rm T}, \omega_i^{\rm T}]^{\rm T}$ is the *augmented state vector* of the $i$-th vehicle. For all $t \in [t_1, t_2]$, $\mathbf{r}_i(t) = \int_{t_1}^{t} \mathbf{v}_i(\tau)\,\mathrm{d}\tau$ and $\dot{\sigma}_i(t) = \mathbf{R}_{\rm rod}(\sigma_i(t))\,\omega_i(t)$, where $\mathbf{R}_{\rm rod}(\sigma_i(t)) \triangleq \frac{1}{4}\big(1 - \sigma_i^{\rm T}(t)\sigma_i(t)\big)\mathbf{I}_3 + \frac{1}{2}\sigma_i^{\times}(t) + \frac{1}{2}\sigma_i(t)\sigma_i^{\rm T}(t)$ (Neimark & Fufaev, 1972; Shuster, 1993). We assume $[\mathbf{x}_1^{\rm T}(t), \ldots, \mathbf{x}_n^{\rm T}(t)]^{\rm T} \in D_{\rm rel} \subseteq \mathbb{R}^{6n}$ and $[\tilde{\mathbf{x}}_1^{\rm T}(t), \ldots, \tilde{\mathbf{x}}_n^{\rm T}(t)]^{\rm T} \in D_{\rm abs} \subseteq \mathbb{R}^{12n}$, $t \in [t_1, t_2]$.
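The MRP kinematic matrix above can be sketched directly from its definition; `R_rod` and `skew` are hypothetical helper names (assuming NumPy), and the check at $\sigma = \mathbf{0}_3$ uses the fact that the formula then reduces to $\frac{1}{4}\mathbf{I}_3$:

```python
import numpy as np

def skew(x):
    """Cross-product matrix of a 3-vector."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def R_rod(sigma):
    """MRP kinematics matrix: sigma_dot = R_rod(sigma) @ omega."""
    s = np.asarray(sigma, dtype=float)
    return (0.25 * (1.0 - s @ s) * np.eye(3)
            + 0.5 * skew(s)
            + 0.5 * np.outer(s, s))

# At zero attitude the MRP rate is omega / 4
assert np.allclose(R_rod(np.zeros(3)), 0.25 * np.eye(3))
```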

We define **u**i,tran : [*t*1, *t*2] → Γi,tran (respectively, **u**i,rot : [*t*1, *t*2] → Γi,rot) as the *translational acceleration* (respectively, the *rotational acceleration*) provided by the control system of the i-th vehicle in the formation, e.g., **u**i,tran is the acceleration provided by the propulsion system and **u**i,rot is the acceleration provided by the ailerons. The vector **u**i,tran (respectively, **u**i,rot) is also referred to as the *i-th translational control vector* (respectively, the *i-th rotational control vector*). For a given set of real constants *ρ*i,1, *ρ*i,2, *ρ*i,3, and *ρ*i,4 such that 0 ≤ *ρ*i,1 < *ρ*i,2 and 0 ≤ *ρ*i,3 < *ρ*i,4, Γi,tran and Γi,rot are defined as

$$\begin{aligned} \Gamma_{i,\rm tran} &\triangleq \left\{ \mathbf{a} \in \mathbb{R}^{3} : \rho_{i,1} \le \|\mathbf{a}\|_{2} \le \rho_{i,2} \right\} \cup \left\{ \mathbf{0}_{3} \right\}, \\ \Gamma_{i,\rm rot} &\triangleq \left\{ \mathbf{a} \in \mathbb{R}^{3} : \rho_{i,3} \le \|\mathbf{a}\|_{2} \le \rho_{i,4} \right\} \cup \left\{ \mathbf{0}_{3} \right\}. \end{aligned}$$
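Each admissible set is an annulus in norm together with the origin (zero thrust). A minimal membership check is sketched below (assuming NumPy; the $\rho$ values are illustrative, not from the text):

```python
import numpy as np

def in_gamma(a, rho_lo, rho_hi):
    """Admissible acceleration: zero, or magnitude in [rho_lo, rho_hi]."""
    n = np.linalg.norm(a)
    return n == 0.0 or rho_lo <= n <= rho_hi

assert in_gamma(np.zeros(3), 0.1, 1.0)                  # coasting is admissible
assert in_gamma(np.array([0.5, 0.0, 0.0]), 0.1, 1.0)    # inside the annulus
assert not in_gamma(np.array([0.01, 0.0, 0.0]), 0.1, 1.0)  # below minimum thrust
```

The lower bound $\rho_{i,1} > 0$ models actuators that cannot throttle arbitrarily low, which is why the isolated point $\mathbf{0}_3$ must be added back explicitly.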


A Variational Approach to the Fuel Optimal Control Problem for UAV Formations 225


Finally, for a given set <sup>Γ</sup> <sup>⊂</sup> **<sup>R</sup>**p, **<sup>u</sup>** : [*t*1, *<sup>t</sup>*2] <sup>→</sup> <sup>Γ</sup> is an *admissible control in* <sup>Γ</sup> if *<sup>i</sup>*) **<sup>u</sup>**(·) is continuous at the endpoints of [*t*1, *t*2], *ii*) **u**(·) is continuous for all *t* ∈ (*t*1, *t*2) with the exception of a finite number of times *t* at which **u**(·) may have discontinuities of the first kind, and *iii*) **<sup>u</sup>**(*τ*) = lim*t*→*τ*<sup>−</sup> **<sup>u</sup>**(*t*), where *<sup>τ</sup>* ∈ [*t*1, *<sup>t</sup>*2] is a point of discontinuity of first kind for **<sup>u</sup>**(*t*) (Pontryagin et al., 1962). We assume that **u**i,tran (respectively, **u**i,rot) is an admissible control in Γi,tran (respectively, Γi,rot) for each i ∈ {1, . . . , n}.

#### **3. Problem statement**

#### **3.1 Fuel consumption performance functional**

A measure of the effort needed to control the i-th formation vehicle is given by the *performance functional*

$$\mathbf{J}\left[\mathbf{u}\_{\mathbf{i}}(\cdot)\right] \stackrel{\Delta}{=} \int\_{t\_1}^{t\_2} ||\mathbf{u}\_{\mathbf{i}}(t)||\_2 \,\mathrm{d}t,\tag{1}$$

where $\mathbf{u}_i(t) \triangleq [\mathbf{u}_{i,\rm tran}^{\rm T}(t), c\,\mathbf{u}_{i,\rm rot}^{\rm T}(t)]^{\rm T}$ and $c$ is a real constant with units of distance. Without loss of generality we assume that $|c| = 1$. The performance functional $\int_{t_1}^{t_2} \|\mathbf{u}_{i,\rm tran}(t)\|_2\,\mathrm{d}t$ represents a measure of the fuel consumed over the time interval $[t_1, t_2]$ (Schouwenaars et al., 2006). Path planning for UAV formations is sometimes addressed by minimizing the more conservative performance functional $\int_{t_1}^{t_2} \|\mathbf{u}_{i,\rm tran}(t)\|_1\,\mathrm{d}t$ (Blackmore, 2008). It is important to note that $\|\mathbf{u}_{i,\rm rot}(t)\|_2$ is much smaller than $\|\mathbf{u}_{i,\rm tran}(t)\|_2$ for conventional aircraft and, hence, its contribution to the performance functional (1) is negligible. However, this assumption does not hold for the case of *μ*UAVs (Bataillé et al., 2009).

The control effort for the entire formation can be captured by the performance measure

$$\mathbf{J}\_{\text{formation}}\left[\tilde{\mathbf{u}}(\cdot)\right] \stackrel{\Delta}{=} \sum\_{\mathbf{i}=1}^{n} \mu\_{\mathbf{i}} \mathbf{J}\left[\mathbf{u}\_{\mathbf{i}}(\cdot)\right],\tag{2}$$

where $\tilde{\mathbf{u}}(t) \triangleq [\mathbf{u}_1^{\rm T}(t), \ldots, \mathbf{u}_n^{\rm T}(t)]^{\rm T}$ and $\mu_i \in [0, 1]$, with $\sum_{i=1}^{n} \mu_i = 1$, represents the relative importance of minimizing the control effort of the $i$-th vehicle with respect to the others.
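Given sampled control histories, the costs (1) and (2) can be approximated by trapezoidal quadrature. A minimal sketch with toy control profiles (all names and values illustrative, not from the text):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)              # [t1, t2] = [0, 10] s
u1 = np.stack([np.sin(t), np.cos(t), 0 * t])  # toy u_1(t) in R^3, ||u1||_2 = 1
u2 = 0.5 * u1                                 # toy u_2(t), ||u2||_2 = 0.5

def J(u, t):
    """Fuel-like cost (1): trapezoidal approximation of ∫ ||u(t)||_2 dt."""
    speed = np.linalg.norm(u, axis=0)
    return float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t)))

mu = [0.5, 0.5]                               # equal weighting, sums to 1
J_formation = mu[0] * J(u1, t) + mu[1] * J(u2, t)
# Here ||u1(t)||_2 = 1 for all t, so J(u1) ≈ 10 and J_formation ≈ 7.5
```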

#### **3.2 Aircraft dynamic equations**

Aircraft are subject to external forces and moments from the environment. Specifically, an aerial vehicle is subject to gravitational forces, aerodynamic forces, and aerodynamic moments. Accelerations induced by external forces and external moments acting on a formation vehicle are denoted by $\mathbf{a} : \mathbb{R}^{12} \to \mathbb{R}^3$ and $\mathbf{m} : \mathbb{R}^{12} \to \mathbb{R}^3$, respectively, where $\mathbf{a}(\tilde{\mathbf{x}}_i), \mathbf{m}(\tilde{\mathbf{x}}_i) \in C^1(\mathbb{R}^{12})$.

The unconstrained dynamic equations for the i-th vehicle are given by (Greenwood, 2003)

$$\frac{\mathrm{d}}{\mathrm{d}t}\tilde{\mathbf{x}}_{i}(t) = \begin{bmatrix} \mathbf{v}_{i}(t) \\ \mathbf{a}\left(\tilde{\mathbf{x}}_{i}(t)\right) \\ \mathbf{R}_{\rm rod}\left(\sigma_{i}(t)\right)\omega_{i}(t) \\ -\mathbf{I}_{\rm in,i}^{-1}\,\omega_{i}^{\times}(t)\,\mathbf{I}_{\rm in,i}\,\omega_{i}(t) + \tilde{\omega}_{i}\left(\tilde{\mathbf{x}}_{i}(t)\right) \end{bmatrix} + \begin{bmatrix} \mathbf{0}_{3} \\ \mathbf{u}_{i,\rm tran}(t) \\ \mathbf{0}_{3} \\ \mathbf{u}_{i,\rm rot}(t) \end{bmatrix}, \tag{3}$$

where $\mathbf{I}_{\rm in,i}$ is the inertia matrix of the $i$-th vehicle in a principal body reference frame and $\tilde{\omega}_i(\tilde{\mathbf{x}}_i(t)) \triangleq \mathbf{I}_{\rm in,i}^{-1}\mathbf{m}(\tilde{\mathbf{x}}_i(t))$, $t \in [t_1, t_2]$. The boundary conditions for (3) are given by the endpoint constraints discussed in Section 3.3.
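The rotational rows of (3) can be exercised numerically. The sketch below integrates the torque-free case (the environmental term $\tilde{\omega}_i$ and the control are set to zero; the inertia values are illustrative) and checks that the angular-momentum magnitude is conserved, as it must be for a torque-free rigid body:

```python
import numpy as np

# Principal-axes inertia matrix (illustrative values)
I_in = np.diag([2.0, 3.0, 4.0])
I_inv = np.linalg.inv(I_in)

def omega_dot(omega):
    """Rotational rows of (3) with u_rot = 0 and the environmental term omitted."""
    return -I_inv @ np.cross(omega, I_in @ omega)

omega = np.array([0.3, -0.2, 0.5])      # initial angular velocity, rad/s
h0 = np.linalg.norm(I_in @ omega)       # angular-momentum magnitude at t1

dt = 1e-3
for _ in range(5000):                   # integrate 5 s with classical RK4
    k1 = omega_dot(omega)
    k2 = omega_dot(omega + 0.5 * dt * k1)
    k3 = omega_dot(omega + 0.5 * dt * k2)
    k4 = omega_dot(omega + dt * k3)
    omega = omega + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# ||I_in @ omega|| is an invariant of the torque-free dynamics
assert abs(np.linalg.norm(I_in @ omega) - h0) < 1e-6
```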

#### **3.3 Formation constraints**


Given $D_1 \subset \mathbb{R}^p$ and $D_2 \subset \mathbb{R}^m$, the function $\mathbf{S} : D_1 \to D_2$ is a *continuously differentiable manifold* if $\mathbf{S}(\mathbf{y}) = \mathbf{0}$, $m < p$, $\mathbf{S}(\mathbf{y}) \in C^1(D_1)$, and $\mathrm{rank}\,\frac{\partial \mathbf{S}(\mathbf{y})}{\partial \mathbf{y}} = m$ (Pontryagin et al., 1962). Let $\mathbf{S}_1 : D_{\rm abs} \to \mathbb{R}^{n_1}$ and $\mathbf{S}_2 : D_{\rm abs} \to \mathbb{R}^{n_2}$ be two continuously differentiable manifolds, and define the *endpoint constraints*

$$\begin{aligned} \mathbf{S}_1\left(\left[\tilde{\mathbf{x}}_1^{\rm T}(t_1), \ldots, \tilde{\mathbf{x}}_n^{\rm T}(t_1)\right]^{\rm T}\right) &= \mathbf{0}_{r_1}, \\ \mathbf{S}_2\left(\left[\tilde{\mathbf{x}}_1^{\rm T}(t_2), \ldots, \tilde{\mathbf{x}}_n^{\rm T}(t_2)\right]^{\rm T}\right) &= \mathbf{0}_{r_2}. \end{aligned} \tag{4}$$

Endpoint constraints partly impose the formation's configuration at times *t*<sup>1</sup> and *t*2, and hence, can model point-to-point or rendezvous maneuvers.

*State inequality constraints* are given by

$$\mathbf{f}_{\rm ineq}(\mathbf{x}_1(t), \ldots, \mathbf{x}_n(t)) \le\le \mathbf{0}_{r_3}, \tag{5}$$

where $\mathbf{f}_{\rm ineq} : D_{\rm rel} \to \mathbb{R}^{n_3}$ and $\mathbf{f}_{\rm ineq}(\mathbf{x}_1, \ldots, \mathbf{x}_n) \in C^3(\mathrm{int}(D_{\rm rel}))$. *State equality constraints* are given by

$$\mathbf{f}_{\rm eq}(t, \mathbf{x}_1(t), \ldots, \mathbf{x}_n(t)) = \mathbf{0}_{r_4}, \tag{6}$$

where $\mathbf{f}_{\rm eq} : [t_1, t_2] \times D_{\rm rel} \to \mathbb{R}^{n_4}$ and $\mathbf{f}_{\rm eq}(t, \mathbf{x}_1, \ldots, \mathbf{x}_n) \in C^2((t_1, t_2) \times \mathrm{int}(D_{\rm rel}))$. Here we assume that the constraints are *compatible*, that is, for all $t \in [t_1, t_2]$ there exists at least one set of $2n$ admissible controls $\{\mathbf{u}_{1,\rm tran}(t), \ldots, \mathbf{u}_{n,\rm tran}(t); \mathbf{u}_{1,\rm rot}(t), \ldots, \mathbf{u}_{n,\rm rot}(t)\}$ that satisfies (3) – (6).

State constraints given in terms of $\mathbf{x}_1(t), \ldots, \mathbf{x}_n(t)$ that can be reduced to the form given by (5) and (6) are called *holonomic* constraints. In particular, for $n = 2$ and $t \in [t_1, t_2]$, the constraint $\mathbf{v}_1(t) = \mathbf{v}_2(t)$ is holonomic since it can be rewritten as $\mathbf{r}_1(t) - \mathbf{r}_1(t_1) = \mathbf{r}_2(t) - \mathbf{r}_2(t_1)$, $t \in [t_1, t_2]$. It is important to note that the constraint $\omega_1(t) \le\le \omega_2(t)$, $t \in [t_1, t_2]$, is nonholonomic since $\sigma_i(t) \neq \int_{t_1}^{t} \omega_i(\tau)\,\mathrm{d}\tau + \sigma_i(t_1)$, $t \in [t_1, t_2]$ and $i = 1, 2$ (Greenwood, 2003).

State constraints can model collision avoidance, keeping the formation far from no-fly zones, or the requirement of pointing payloads toward the same target. It is obvious that (6) is a special case of (5); however, as noted in Section 4.2, this distinction is useful in reducing computational complexity.

#### **3.4 Path planning optimization problem**

For all $i = 1, \ldots, n$ and $t \in [t_1, t_2]$ find the control vectors $\mathbf{u}_{i,\rm tran}(t)$ and $\mathbf{u}_{i,\rm rot}(t)$ among all admissible controls in $\Gamma_{i,\rm tran}$ and $\Gamma_{i,\rm rot}$, respectively, such that the performance measure (2) is minimized and $\tilde{\mathbf{x}}_i(t)$ satisfies (3) – (6).

In the following we assume that the path planning optimization problem can be solved over the time interval $[t_1^*, t_2^*] \supset [t_1, t_2]$ and that the given set of Lagrange coordinates can be defined on the open connected set $\mathcal{I}$, where $[t_1, t_2] \subset \mathcal{I} \subset [t_1^*, t_2^*]$. Thus, (4) can be rewritten as

$$\mathbf{S}_1\left(\left[\tilde{\mathbf{x}}_1^{\rm T}(\mathbf{q}(t_1), \mathbf{q}_{\rm dot}(\mathbf{q}(t_1))), \ldots, \tilde{\mathbf{x}}_n^{\rm T}(\mathbf{q}(t_1), \mathbf{q}_{\rm dot}(\mathbf{q}(t_1)))\right]^{\rm T}\right) = \mathbf{0}_{r_1}, \tag{10}$$

$$\mathbf{S}_2\left(\left[\tilde{\mathbf{x}}_1^{\rm T}(\mathbf{q}(t_2), \mathbf{q}_{\rm dot}(\mathbf{q}(t_2))), \ldots, \tilde{\mathbf{x}}_n^{\rm T}(\mathbf{q}(t_2), \mathbf{q}_{\rm dot}(\mathbf{q}(t_2)))\right]^{\rm T}\right) = \mathbf{0}_{r_2}. \tag{11}$$

**Example 4.1.** Consider a UAV formation with two vehicles so that $n = 2$. Assume that

$$\mathbf{f}_{\rm ineq}(\mathbf{x}_1(t), \mathbf{x}_2(t)) = \begin{bmatrix} r_{\rm min} - \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2 \\ \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2 - r_{\rm max} \end{bmatrix} \le\le \mathbf{0}_2, \tag{12}$$

$$\mathbf{f}_{\rm eq}(t, \mathbf{x}_1(t), \mathbf{x}_2(t)) = \sigma_1(t) - \sigma_2(t) = \mathbf{0}_3, \tag{13}$$

where $r_{\rm min}$ and $r_{\rm max}$ are real constants such that $0 < r_{\rm min} < r_{\rm max}$. Equation (12) ensures that $r_{\rm min} \le \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2 \le r_{\rm max}$ and (13) ensures that both vehicles always have the same attitude: $D_{\rm rel} = \left\{\left[\mathbf{x}_1^{\rm T}(t), \mathbf{x}_2^{\rm T}(t)\right]^{\rm T} : r_{\rm min} \le \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2 \le r_{\rm max},\ \sigma_1(t) = \sigma_2(t),\ t \in [t_1, t_2]\right\}$. Moreover, the endpoint constraints (4) take the form

$$\mathbf{S}_1\left(\left[\tilde{\mathbf{x}}_1^{\rm T}(t_1), \tilde{\mathbf{x}}_2^{\rm T}(t_1)\right]^{\rm T}\right) = \begin{bmatrix} \|\mathbf{r}_1(t_1) - \mathbf{r}_2(t_1)\|_2 - \frac{r_{\rm max} + r_{\rm min}}{2} \\ \sigma_1(t_1) - \sigma_2(t_1) \end{bmatrix} = \mathbf{0}_4, \tag{14}$$

$$\mathbf{S}_2\left(\left[\tilde{\mathbf{x}}_1^{\rm T}(t_2), \tilde{\mathbf{x}}_2^{\rm T}(t_2)\right]^{\rm T}\right) = \begin{bmatrix} \|\mathbf{r}_1(t_2) - \mathbf{r}_2(t_2)\|_2 - \frac{2(r_{\rm max} - r_{\rm min})}{3} \\ \sigma_1(t_2) - \sigma_2(t_2) \end{bmatrix} = \mathbf{0}_4. \tag{15}$$

Introducing the slack variables $s_1 : [t_1, t_2] \to \mathbb{R}$ and $s_2 : [t_1, t_2] \to \mathbb{R}$, (12) becomes

$$\overline{\mathbf{f}}_{\rm ineq}(\mathbf{s}(t), \mathbf{x}_1(t), \mathbf{x}_2(t)) = \begin{bmatrix} r_{\rm min} - \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2 + \frac{1}{2}s_1^2(t) \\ \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2 - r_{\rm max} + \frac{1}{2}s_2^2(t) \end{bmatrix} = \mathbf{0}_2. \tag{16}$$

As noted in Section 3.3, the equality constraint (13) can be embedded into (12) to give

$$\overline{\mathbf{f}}_{\rm ineq}(\mathbf{s}(t), \mathbf{x}_1(t), \mathbf{x}_2(t)) = \begin{bmatrix} r_{\rm min} - \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2 + \frac{1}{2}s_1^2(t) \\ \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2 - r_{\rm max} + \frac{1}{2}s_2^2(t) \\ \sigma_1(t) - \sigma_2(t) + \frac{1}{2}\mathrm{diag}(\mathbf{s}_3\mathbf{s}_3^{\rm T}) \\ \sigma_2(t) - \sigma_1(t) + \frac{1}{2}\mathrm{diag}(\mathbf{s}_4\mathbf{s}_4^{\rm T}) \end{bmatrix} = \mathbf{0}_8,$$

where $\mathbf{s}_j : [t_1, t_2] \to \mathbb{R}^3$, $j = 3, 4$. Note that in this case, the dimension of $\overline{\mathbf{f}}_{\rm ineq}$ is increased since six additional slack variables have been introduced, which increases computational complexity.

Next, define $r_{i,j} : [t_1, t_2] \to \mathbb{R}$ (respectively, $\sigma_{i,j} : [t_1, t_2] \to \mathbb{R}$) as the $j$-th component of $\mathbf{r}_i(t)$ (respectively, $\sigma_i(t)$). If $\mathbf{q}(t) = \left[s_1(t), s_2(t), \mathbf{r}_1^{\rm T}(t), \sigma_1^{\rm T}(t), r_{2,1}(t)\right]^{\rm T}$, then (8) gives

$$\det\left(\frac{\partial\left[\overline{\mathbf{f}}_{\rm ineq}^{\rm T}(\mathbf{s}, \mathbf{x}_1, \mathbf{x}_2),\ \mathbf{f}_{\rm eq}^{\rm T}(t, \mathbf{s}, \mathbf{x}_1, \mathbf{x}_2),\ \mathbf{q}^{\rm T}(t, \mathbf{s}, \mathbf{x}_1, \mathbf{x}_2)\right]^{\rm T}}{\partial\left[\mathbf{s}^{\rm T}, \mathbf{x}_1^{\rm T}, \mathbf{x}_2^{\rm T}\right]^{\rm T}}\right) \neq 0.$$

#### **4. Mathematical background**

#### **4.1 Slack variables**

Inequality constraints (5) can be reduced to equality constraints by introducing $\mathbf{s} : [t_1, t_2] \to \mathbb{R}^{n_3}$ such that $\mathbf{s}(t) \in C^2(t_1, t_2)$ and $\mathbf{f}_{\rm ineq}(\mathbf{x}_1(t), \ldots, \mathbf{x}_n(t)) + \frac{1}{2}\mathrm{diag}(\mathbf{s}\mathbf{s}^{\rm T}) = \mathbf{0}_{r_3}$. The components of $\mathbf{s}$ are called *slack variables*. Thus, (5) can be rewritten as (Valentine, 1937)

$$\overline{\mathbf{f}}_{\rm ineq}(\mathbf{s}(t), \mathbf{x}_1(t), \ldots, \mathbf{x}_n(t)) = \mathbf{0}_{r_3}, \tag{7}$$

where $\overline{\mathbf{f}}_{\rm ineq}(\mathbf{s}(t), \mathbf{x}_1(t), \ldots, \mathbf{x}_n(t)) \triangleq \mathbf{f}_{\rm ineq}(\mathbf{x}_1(t), \ldots, \mathbf{x}_n(t)) + \frac{1}{2}\mathrm{diag}(\mathbf{s}\mathbf{s}^{\rm T})$.
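Valentine's slack-variable device in (7) is easy to verify numerically on a scalar toy constraint (illustrative, not the paper's constraints): wherever $g(\mathbf{x}) \le 0$ holds, the slack $s = \sqrt{-2g(\mathbf{x})}$ makes the equality form hold exactly.

```python
import numpy as np

def g(x):
    """Toy scalar inequality constraint g(x) <= 0 (stay inside the unit disk)."""
    return float(x @ x) - 1.0

x = np.array([0.6, 0.3])
assert g(x) <= 0.0                      # the inequality holds at x

# Valentine's device: pick s with g(x) + s^2/2 = 0, turning (5) into (7)
s = np.sqrt(-2.0 * g(x))
assert abs(g(x) + 0.5 * s**2) < 1e-12   # equivalent equality constraint
```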

#### **4.2 Lagrange coordinates**

The following theorem is needed for the main results of this paper.

**Theorem 4.1.** *(Pars, 1965) Let $D_q \subseteq \mathbb{R}^{6n-n_4}$ be an open connected set and let $\mathbf{q} : [t_1, t_2] \times \mathbb{R}^{n_3} \times D_{\rm rel} \to D_q$ be such that $\mathbf{q}(t, \mathbf{s}(t), \mathbf{x}_1(t), \ldots, \mathbf{x}_n(t)) \in C^2((t_1, t_2) \times \mathbb{R}^{n_3} \times \mathrm{int}(D_{\rm rel}))$. Assume that*

$$\det\left(\frac{\partial\left[\widetilde{\mathbf{f}}\_{\text{ineq}}^{\rm T}(\mathbf{s},\mathbf{x}\_{1},\ldots,\mathbf{x}\_{\text{n}})\,\mathbf{f}\_{\text{eq}}^{\rm T}(t,\mathbf{s},\mathbf{x}\_{1},\ldots,\mathbf{x}\_{\text{n}})\,\mathbf{q}^{\rm T}(t,\mathbf{s},\mathbf{x}\_{1},\ldots,\mathbf{x}\_{\text{n}})\right]^{\rm T}}{\partial\left[\mathbf{s}^{\rm T},\mathbf{x}\_{1}^{\rm T},\ldots,\mathbf{x}\_{\text{n}}^{\rm T}\right]^{\rm T}}\right)\neq 0\tag{8}$$

*for all $(t, \mathbf{s}, \mathbf{x}_1, \ldots, \mathbf{x}_n) \in \mathcal{I} \times \Delta$, where $\mathcal{I} \subset (t_1, t_2)$ and $\Delta \subset \mathbb{R}^{n_3} \times D_{\rm rel}$ are open connected sets. Then $\mathbf{q}$ can be rewritten as a function of $t$, that is, $\mathbf{q} : \mathcal{I} \to D_q$, and $\mathbf{s}, \mathbf{x}_1, \ldots, \mathbf{x}_n, \tilde{\mathbf{x}}_1, \ldots, \tilde{\mathbf{x}}_n$ can be rewritten as unique functions of $t$ and $\mathbf{q}$, that is, $\mathbf{s} : \mathcal{I} \times D_q \to \mathbb{R}^{n_3}$, $\mathbf{x}_i : \mathcal{I} \times D_q \to \mathbb{R}^{6}$, and $\tilde{\mathbf{x}}_i : \mathcal{I} \times D_q \to \mathbb{R}^{12}$ for all $i = 1, \ldots, n$ and $(t, \mathbf{s}, \mathbf{x}_1, \ldots, \mathbf{x}_n) \in \mathcal{I} \times \Delta$. Furthermore, the components of $\mathbf{q}$ are independent and uniquely characterize the system's configuration.*

Under the hypothesis of Theorem 4.1, the components of **q**(*t*) are called *Lagrange coordinates*. As will be shown in Section 4.3, the key advantage of using Lagrange coordinates is that the constraints (5) – (7) are automatically accounted for when rewriting the formation's dynamic equations in terms of *t* and **q**(*t*) (Pars, 1965). In this paper, we assume that $\mathbf{s}, \mathbf{x}_1, \ldots, \mathbf{x}_n, \widetilde{\mathbf{x}}_1, \ldots, \widetilde{\mathbf{x}}_n$ are explicit functions of **q** only and not *t*, which occurs in most practical applications (Pars, 1965). In practice, given constraints in the form of (6) and (7), **q** is chosen such that Theorem 4.1 holds. As will be further discussed in Section 4.3, we select **q** (*t*, **s**(*t*), **x**1(*t*), ..., **x**n(*t*)) as an explicit function of (**s**(*t*), **x**1(*t*), ..., **x**n(*t*)).

Given **q** (*t*, **s**(*t*), **x**1(*t*), ..., **x**n(*t*)), $\dot{\mathbf{q}}$ is a function of **s**(*t*), **r**i(*t*), σi(*t*), *i* = 1, ..., *n*, and their first time derivatives. In practice, however, we measure ωi(*t*) rather than σi(*t*), and hence, if the assumptions of Theorem 4.1 hold, we define the *kinematic equation*

$$\mathbf{q}\_{\rm dot}(t) \triangleq \mathbf{\varPsi}\left(\mathbf{q}(t)\right)\dot{\mathbf{q}}(t) + \boldsymbol{\psi}\left(\mathbf{q}(t)\right),\tag{9}$$

where $\omega_i(t)$, $i = 1, 2, \ldots, n$, explicitly appears in $\mathbf{q}_{\mathrm{dot}}(t)$, $\mathbf{\Psi} : D_q \to \mathbf{R}^{(6n-n_4)\times(6n-n_4)}$ is an invertible continuously differentiable matrix function, and $\psi : D_q \to \mathbf{R}^{6n-n_4}$ is continuously differentiable. Consequently, $\mathbf{s}, \mathbf{x}_1, \ldots, \mathbf{x}_n, \widetilde{\mathbf{x}}_1, \ldots, \widetilde{\mathbf{x}}_n$ can be rewritten as unique functions of **q** and **q**dot, that is, $\mathbf{s} : D_q \times \mathbf{R}^{6n-n_4} \to \mathbf{R}^{n_3}$, $\mathbf{x}_i : D_q \times \mathbf{R}^{6n-n_4} \to \mathbf{R}^{6}$, and $\widetilde{\mathbf{x}}_i : D_q \times \mathbf{R}^{6n-n_4} \to \mathbf{R}^{12}$, $(t, \mathbf{s}, \mathbf{x}_1, \ldots, \mathbf{x}_n) \in \mathcal{I} \times \Delta$ (Greenwood, 2003). Here, we assume that **q**dot satisfies (23) below.
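Since **Ψ** is invertible, the generalized velocities $\dot{\mathbf{q}}$ can always be recovered from the measured quantities collected in **q**dot. The following minimal numerical sketch of the map (9) and its inversion uses an arbitrary invertible matrix and offset as stand-ins for **Ψ**(**q**) and ψ(**q**); the matrix, offset, and dimension are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for Psi(q) and psi(q): any invertible matrix and offset vector
# define an instance of the kinematic map (9). Dimension 4 is arbitrary.
Psi = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
psi = rng.standard_normal(4)

q_dot_gen = rng.standard_normal(4)        # generalized velocities, i.e. q-dot
q_dot = Psi @ q_dot_gen + psi             # measured quantities (e.g. v_i, omega_i)

# Invertibility of Psi lets us recover the generalized velocities exactly:
recovered = np.linalg.solve(Psi, q_dot - psi)
print(np.allclose(recovered, q_dot_gen))  # True
```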



In the following we assume that the path planning optimization problem can be solved over the time interval $(t_1^*, t_2^*) \supset [t_1, t_2]$ and that the given set of Lagrange coordinates can be defined on the open connected set $\mathcal{I}$, where $[t_1, t_2] \subset \mathcal{I} \subset (t_1^*, t_2^*)$. Thus, (4) can be rewritten as

$$\mathbf{S}_{1}\left(\left[\widetilde{\mathbf{x}}_{1}^{\mathrm{T}}\left(\mathbf{q}(t_{1}),\mathbf{q}_{\mathrm{dot}}(\mathbf{q}(t_{1}))\right),\ldots,\widetilde{\mathbf{x}}_{n}^{\mathrm{T}}\left(\mathbf{q}(t_{1}),\mathbf{q}_{\mathrm{dot}}(\mathbf{q}(t_{1}))\right)\right]^{\mathrm{T}}\right) = \mathbf{0}_{r_{1}} \tag{10}$$

$$\mathbf{S}\_2\left(\left[\tilde{\mathbf{x}}\_1^\mathrm{T}(\mathbf{q}(t\_2), \mathbf{q}\_{\mathrm{dot}}(\mathbf{q}(t\_2))), \dots, \tilde{\mathbf{x}}\_\mathrm{n}^\mathrm{T}(\mathbf{q}(t\_2), \mathbf{q}\_{\mathrm{dot}}(\mathbf{q}(t\_2)))\right]^\mathrm{T}\right) = \mathbf{0}\_{\mathbf{r}\_2}.\tag{11}$$

**Example 4.1.** Consider a UAV formation with two vehicles so that n = 2. Assume that

$$\mathbf{f}_{\text{ineq}} \left( \mathbf{x}_1(t), \mathbf{x}_2(t) \right) = \begin{bmatrix} ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 - r_{\text{max}} \\ r_{\text{min}} - ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 \end{bmatrix} \le \mathbf{0}_2 \tag{12}$$

$$\mathbf{f}_{\text{eq}}\left(t, \mathbf{x}_1(t), \mathbf{x}_2(t)\right) = \sigma_1(t) - \sigma_2(t) = \mathbf{0}_3, \tag{13}$$

$$\mathbf{S}_{1}\left(\left[\widetilde{\mathbf{x}}_{1}^{\mathrm{T}}(t_{1})\,\widetilde{\mathbf{x}}_{2}^{\mathrm{T}}(t_{1})\right]^{\mathrm{T}}\right) = \begin{bmatrix} ||\mathbf{r}_{1}(t_{1}) - \mathbf{r}_{2}(t_{1})||_{2}^{2} - \left(\frac{r_{\text{max}} + r_{\text{min}}}{2}\right) \\ \sigma_{1}(t_{1}) - \sigma_{2}(t_{1}) \end{bmatrix} = \mathbf{0}_{4}, \tag{14}$$

$$\mathbf{S}_{2}\left(\left[\widetilde{\mathbf{x}}_{1}^{\mathrm{T}}(t_{2})\,\widetilde{\mathbf{x}}_{2}^{\mathrm{T}}(t_{2})\right]^{\mathrm{T}}\right) = \begin{bmatrix} ||\mathbf{r}_{1}(t_{2}) - \mathbf{r}_{2}(t_{2})||_{2}^{2} - \frac{2\left(r_{\text{max}} - r_{\text{min}}\right)}{3} \\ \sigma_{1}(t_{2}) - \sigma_{2}(t_{2}) \end{bmatrix} = \mathbf{0}_{4}, \tag{15}$$

where $r_{\min}$ and $r_{\max}$ are real constants such that $0 < r_{\min} < r_{\max}$. Equation (12) ensures that $r_{\min} \le ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 \le r_{\max}$ and (13) ensures that both vehicles always have the same attitude: $D_{\mathrm{rel}} = \left\{ \left[\mathbf{x}_1^{\mathrm{T}}(t)\ \mathbf{x}_2^{\mathrm{T}}(t)\right]^{\mathrm{T}} : r_{\min} \le ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 \le r_{\max},\ \sigma_1(t) = \sigma_2(t),\ t \in [t_1, t_2] \right\}$.

Introducing the slack variables s1 : [*t*1, *t*2] → **R** and s2 : [*t*1, *t*2] → **R**, (12) becomes

$$\tilde{\mathbf{f}}\_{\text{ineq}}(\mathbf{s}(t), \mathbf{x}\_1(t), \mathbf{x}\_2(t)) = \begin{bmatrix} ||\mathbf{r}\_1(t) - \mathbf{r}\_2(t)||\_2^2 - \mathbf{r}\_{\text{max}} + \frac{1}{2}\mathbf{s}\_1^2(t) \\ \mathbf{r}\_{\text{min}} - ||\mathbf{r}\_1(t) - \mathbf{r}\_2(t)||\_2^2 + \frac{1}{2}\mathbf{s}\_2^2(t) \end{bmatrix} = \mathbf{0}\_2. \tag{16}$$
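The conversion can be checked numerically: for any configuration satisfying the strict inequalities in (12), real slack values $s_1$ and $s_2$ exist that zero out (16). A small sketch, in which the numerical values of $r_{\min}$, $r_{\max}$, and the positions are illustrative assumptions:

```python
import numpy as np

r_min, r_max = 1.0, 25.0              # example bounds on the squared separation
r1 = np.array([0.0, 0.0, 0.0])
r2 = np.array([3.0, 0.0, 0.0])
d2 = float(np.dot(r1 - r2, r1 - r2))  # ||r1 - r2||_2^2 = 9, inside (r_min, r_max)

# Slack values that turn the inequalities of (12) into the equalities of (16):
s1 = np.sqrt(2.0 * (r_max - d2))      # from ||.||^2 - r_max + s1^2/2 = 0
s2 = np.sqrt(2.0 * (d2 - r_min))      # from r_min - ||.||^2 + s2^2/2 = 0

f_tilde = np.array([d2 - r_max + 0.5 * s1**2,
                    r_min - d2 + 0.5 * s2**2])
print(f_tilde)                        # both entries vanish up to rounding
```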

As noted in Section 3.3, the equality constraint (13) can be embedded into (12) to give

$$
\widetilde{\mathbf{f}}_{\text{ineq}}(\mathbf{s}(t), \mathbf{x}_1(t), \mathbf{x}_2(t)) = \begin{bmatrix}
||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 - r_{\text{max}} + \frac{1}{2}s_1^2(t) \\
r_{\text{min}} - ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 + \frac{1}{2}s_2^2(t) \\
\sigma_1(t) - \sigma_2(t) + \frac{1}{2}\operatorname{diag}(\mathbf{s}_3\mathbf{s}_3^{\mathrm{T}}) \\
\sigma_2(t) - \sigma_1(t) + \frac{1}{2}\operatorname{diag}(\mathbf{s}_4\mathbf{s}_4^{\mathrm{T}})
\end{bmatrix} = \mathbf{0}_8,
$$

where $\mathbf{s}_j : [t_1, t_2] \to \mathbf{R}^3$, $j = 3, 4$. Note that in this case, the dimension of $\widetilde{\mathbf{f}}_{\text{ineq}}$ is increased since six additional slack variables have been introduced, which increases computational complexity.

Next, define $r_{i,j} : [t_1, t_2] \to \mathbf{R}$ (respectively, $\sigma_{i,j} : [t_1, t_2] \to \mathbf{R}$) as the $j$-th component of $\mathbf{r}_i(t)$ (respectively, $\sigma_i(t)$). If $\mathbf{q}(t) = \left[s_1(t), s_2(t), \mathbf{r}_1^{\mathrm{T}}(t), \sigma_1^{\mathrm{T}}(t), r_{2,1}(t)\right]^{\mathrm{T}}$, then (8) gives

$$\det\left(\frac{\partial\left[\widetilde{\mathbf{f}}_{\text{ineq}}^{\rm T}\left(\mathbf{s},\mathbf{x}_{1},\mathbf{x}_{2}\right)\ \mathbf{f}_{\text{eq}}^{\rm T}\left(t,\mathbf{s},\mathbf{x}_{1},\mathbf{x}_{2}\right)\ \mathbf{q}^{\rm T}\left(t,\mathbf{s},\mathbf{x}_{1},\mathbf{x}_{2}\right)\right]^{\rm T}}{\partial\left[\mathbf{s}^{\rm T},\mathbf{x}_{1}^{\rm T},\mathbf{x}_{2}^{\rm T}\right]^{\rm T}}\right) = 0.$$


Thus, by Theorem 4.1, the components of **q** are not Lagrange coordinates.

Alternatively, if $\mathbf{q}(t) = \left[s_1(t), r_{1,1}(t), r_{1,2}(t), \sigma_1^{\mathrm{T}}(t), \mathbf{r}_2^{\mathrm{T}}(t)\right]^{\mathrm{T}}$, then

$$\det\left(\frac{\partial \left[\widetilde{\mathbf{f}}_{\mathrm{ineq}}^{\mathrm{T}}(\mathbf{s}, \mathbf{x}_1, \mathbf{x}_2)\ \mathbf{f}_{\mathrm{eq}}^{\mathrm{T}}(t, \mathbf{s}, \mathbf{x}_1, \mathbf{x}_2)\ \mathbf{q}^{\mathrm{T}}(t, \mathbf{s}, \mathbf{x}_1, \mathbf{x}_2) \right]^{\mathrm{T}}}{\partial \left[\mathbf{s}^{\mathrm{T}}, \mathbf{x}_1^{\mathrm{T}}, \mathbf{x}_2^{\mathrm{T}}\right]^{\mathrm{T}}}\right) = -2s_2(t) \left(r_{1,3}(t) - r_{2,3}(t)\right),$$

for all $(t, \mathbf{s}, \mathbf{x}_1, \mathbf{x}_2) \in (t_1, t_2) \times \mathbf{R}^2 \times \mathrm{int}(D_{\mathrm{rel}})$ such that $r_{1,3}(t) \neq r_{2,3}(t)$, and hence, the components of **q** are suitable Lagrange coordinates if $r_{\min} < ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2$ and $r_{1,3}(t) \neq r_{2,3}(t)$. In this case, (9) gives

$$\mathbf{q}_{\rm dot}(t) = \begin{bmatrix} \dot{s}_1(t) \\ v_{1,1}(t) \\ v_{1,2}(t) \\ \omega_1(t) \\ \mathbf{v}_2(t) \end{bmatrix} = \begin{bmatrix} \mathbf{I}_3 & \mathbf{0}_{3 \times 3} & \mathbf{0}_{3 \times 3} \\ \mathbf{0}_{3 \times 3} & \mathbf{R}_{\rm rod}^{-1}(\sigma_1(t)) & \mathbf{0}_{3 \times 3} \\ \mathbf{0}_{3 \times 3} & \mathbf{0}_{3 \times 3} & \mathbf{I}_3 \end{bmatrix} \begin{bmatrix} \dot{s}_1(t) \\ v_{1,1}(t) \\ v_{1,2}(t) \\ \dot{\sigma}_1(t) \\ \mathbf{v}_2(t) \end{bmatrix}, \tag{17}$$

where v1,j : [*t*1, *t*2] → **R** is the j-th component of **v**1(*t*).

A more suitable choice of Lagrange coordinates is given by $\mathbf{q}(t) = \left[\mathbf{x}_1^{\mathrm{T}}(t), \mathbf{r}_2^{\mathrm{T}}(t)\right]^{\mathrm{T}}$ since

$$\det\left(\frac{\partial\left[\widetilde{\mathbf{f}}_{\text{ineq}}^{\text{T}}\left(\mathbf{s},\mathbf{x}_{1},\mathbf{x}_{2}\right)\ \mathbf{f}_{\text{eq}}^{\text{T}}\left(t,\mathbf{s},\mathbf{x}_{1},\mathbf{x}_{2}\right)\ \mathbf{q}^{\text{T}}\left(t,\mathbf{s},\mathbf{x}_{1},\mathbf{x}_{2}\right)\right]^{\text{T}}}{\partial\left[\mathbf{s}^{\text{T}},\mathbf{x}_{1}^{\text{T}},\mathbf{x}_{2}^{\text{T}}\right]^{\text{T}}}\right) = s_{1}(t)s_{2}(t)$$

for all $(t, \mathbf{s}, \mathbf{x}_1, \mathbf{x}_2) \in (t_1, t_2) \times \mathbf{R}^2 \times \mathrm{int}(D_{\mathrm{rel}})$, and hence, the components of **q** are suitable Lagrange coordinates if $r_{\min} < ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 < r_{\max}$. In this case, (9) gives

$$\mathbf{q}\_{\rm dot}(t) = \begin{bmatrix} \mathbf{v}\_1(t) \\ \omega\_1(t) \\ \mathbf{v}\_2(t) \end{bmatrix} = \begin{bmatrix} \mathbf{I}\_3 & \mathbf{0}\_{3 \times 3} & \mathbf{0}\_{3 \times 3} \\ \mathbf{0}\_{3 \times 3} & \mathbf{R}\_{\rm rod}^{-1}(\sigma\_1(t)) & \mathbf{0}\_{3 \times 3} \\ \mathbf{0}\_{3 \times 3} & \mathbf{0}\_{3 \times 3} & \mathbf{I}\_3 \end{bmatrix} \begin{bmatrix} \mathbf{v}\_1(t) \\ \dot{\sigma}\_1(t) \\ \mathbf{v}\_2(t) \end{bmatrix}. \tag{18}$$

Since we use this example throughout the paper, we define $\mathbf{q}_{\mathrm{dot},1} \triangleq [\mathbf{v}_1^{\mathrm{T}}, \omega_1^{\mathrm{T}}]^{\mathrm{T}}$, $\mathbf{q}_{\mathrm{dot},2} \triangleq \mathbf{v}_2$, and

$$\mathbf{\Psi}_1(\mathbf{x}_1(t)) \triangleq \begin{bmatrix} \mathbf{I}_3 & \mathbf{0}_{3 \times 3} \\ \mathbf{0}_{3 \times 3} & \mathbf{R}_{\rm rod}^{-1}(\sigma_1(t)) \end{bmatrix}.$$
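The determinant claim for $\mathbf{q} = [\mathbf{x}_1^{\mathrm{T}}, \mathbf{r}_2^{\mathrm{T}}]^{\mathrm{T}}$ can be verified symbolically. The sketch below builds the stacked map $[\widetilde{\mathbf{f}}_{\mathrm{ineq}}^{\mathrm{T}}\ \mathbf{f}_{\mathrm{eq}}^{\mathrm{T}}\ \mathbf{q}^{\mathrm{T}}]^{\mathrm{T}}$ of Example 4.1 and checks that the Jacobian determinant equals $s_1(t)s_2(t)$ up to a sign fixed by the row ordering; all variable names are ours:

```python
import sympy as sp

s1, s2, r_max, r_min = sp.symbols('s1 s2 r_max r_min')
r1 = sp.Matrix(sp.symbols('r1x r1y r1z'))      # position of vehicle 1
sg1 = sp.Matrix(sp.symbols('g1x g1y g1z'))     # attitude sigma_1
r2 = sp.Matrix(sp.symbols('r2x r2y r2z'))      # position of vehicle 2
sg2 = sp.Matrix(sp.symbols('g2x g2y g2z'))     # attitude sigma_2

d = r1 - r2
F = sp.Matrix.vstack(
    sp.Matrix([d.dot(d) - r_max + s1**2 / 2,   # f~_ineq, first row of (16)
               r_min - d.dot(d) + s2**2 / 2]), # f~_ineq, second row of (16)
    sg1 - sg2,                                 # f_eq of (13)
    r1, sg1, r2)                               # q = [x_1^T, r_2^T]^T
x = sp.Matrix([s1, s2, *r1, *sg1, *r2, *sg2])  # [s^T, x_1^T, x_2^T]^T

det = sp.factor(F.jacobian(x).det())
print(det)                                     # prints s1*s2 up to sign
```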

Finally, note that if $r_{\min} < ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 < r_{\max}$ for $t \in (t_1^*, t_2^*) \supset [t_1, t_2]$, then (14) and (15) reduce to

$$||\mathbf{r}\_1(t\_1) - \mathbf{r}\_2(t\_1)||\_2^2 - \left(\frac{\mathbf{r}\_{\max} + \mathbf{r}\_{\min}}{2}\right) = 0,\tag{19}$$

$$||\mathbf{r}\_1(t\_2) - \mathbf{r}\_2(t\_2)||\_2^2 - \frac{2\left(\mathbf{r}\_{\text{max}} - \mathbf{r}\_{\text{min}}\right)}{3} = 0. \tag{20}$$

#### **4.3 Constrained formation dynamic equations**


The formation's kinetic energy is given by *König's theorem* (Pars, 1965) and for our problem takes the form

$$\begin{split} \mathbf{k}\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right) &= \frac{1}{2} \sum\_{i=1}^{n} \mathbf{m}\_{\mathrm{i}} \mathbf{v}\_{\mathrm{i}}^{\mathrm{T}}\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right) \mathbf{v}\_{\mathrm{i}}\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right) \\ &+ \frac{1}{2} \sum\_{i=1}^{n} \omega\_{\mathrm{i}}^{\mathrm{T}}\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right) \mathbf{I}\_{\mathrm{in},i} \omega\_{\mathrm{i}}\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right), \end{split} \tag{21}$$

where mi is the mass of the i-th vehicle, which is assumed to be constant. The dynamic equations of the constrained formation can be written in terms of Lagrange coordinates by applying the *Boltzmann-Hammel equation* (Greenwood, 2003) to give

$$\begin{split} \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial k\left(\mathbf{q},\mathbf{q}_{\mathrm{dot}}\right)}{\partial \mathbf{q}_{\mathrm{dot}}}\right) &= \sum_{i=1}^{n} m_i \mathbf{v}_i^{\mathrm{T}}\left(\mathbf{q}(t),\mathbf{q}_{\mathrm{dot}}(t)\right)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial \mathbf{v}_i\left(\mathbf{q},\mathbf{q}_{\mathrm{dot}}\right)}{\partial \mathbf{q}_{\mathrm{dot}}} \\ &\quad + \sum_{i=1}^{n} \omega_i^{\mathrm{T}}\left(\mathbf{q}(t),\mathbf{q}_{\mathrm{dot}}(t)\right)\mathbf{I}_{\mathrm{in},i}\,\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial \omega_i\left(\mathbf{q},\mathbf{q}_{\mathrm{dot}}\right)}{\partial \mathbf{q}_{\mathrm{dot}}} \\ &\quad + \sum_{i=1}^{n} \left(\mathbf{a}\left(\widetilde{\mathbf{x}}_i\left(\mathbf{q}(t),\mathbf{q}_{\mathrm{dot}}(t)\right)\right) + \mathbf{u}_{i,\mathrm{tran}}(t)\right)^{\mathrm{T}}\frac{\partial \mathbf{v}_i\left(\mathbf{q},\mathbf{q}_{\mathrm{dot}}\right)}{\partial \mathbf{q}_{\mathrm{dot}}} \\ &\quad + \sum_{i=1}^{n} \left(\mathbf{m}\left(\widetilde{\mathbf{x}}_i\left(\mathbf{q}(t),\mathbf{q}_{\mathrm{dot}}(t)\right)\right) + \mathbf{u}_{i,\mathrm{rot}}(t)\right)^{\mathrm{T}}\frac{\partial \omega_i\left(\mathbf{q},\mathbf{q}_{\mathrm{dot}}\right)}{\partial \mathbf{q}_{\mathrm{dot}}}. \end{split}\tag{22}$$

Equations (10) and (11) are the boundary conditions for (22). It is important to note that the dynamic equation (22) is written in terms of Lagrange coordinates, and hence, accounts for (5) and (6).

Analytical optimization techniques such as Pontryagin's minimum principle, Bellman's theorem, and calculus of variations require the dynamic equations to be written as a first-order ordinary differential equation in explicit form. Therefore, using the hypothesis on **q**dot, the second-order ordinary differential equation (22) needs to be written in a first-order form

$$\dot{\mathbf{q}}_{\rm dot}(t) = \mathbf{f}_{\rm dyn}(\mathbf{q}(t), \mathbf{q}_{\rm dot}(t), \widetilde{\mathbf{u}}(t)), \tag{23}$$

where $\widetilde{\mathbf{u}}(t) \triangleq [\mathbf{u}_1^{\mathrm{T}}(t), \ldots, \mathbf{u}_n^{\mathrm{T}}(t)]^{\mathrm{T}}$ and $\mathbf{f}_{\mathrm{dyn}} : D_q \times \mathbf{R}^{6n-n_4} \times \mathbf{R}^{12n} \to \mathbf{R}^{6n-n_4}$. In order to isolate the contribution of $\widetilde{\mathbf{u}}$ in (24), we define $\hat{\mathbf{f}}_{\mathrm{dyn}}(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t)) \triangleq \mathbf{f}_{\mathrm{dyn}}(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t), \widetilde{\mathbf{u}}(t)) - \sum_{i=1}^{n}\left(\mathbf{u}_{i,\mathrm{tran}}^{\mathrm{T}}(t)\,\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}} + \mathbf{u}_{i,\mathrm{rot}}^{\mathrm{T}}(t)\,\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\right)$.

Equation (22) or, equivalently, (23) gives a set of 6n − n4 equations in 2(6n − n4) unknowns, which are **q** and **q**dot. Thus, (22) needs to be solved together with (9) (Greenwood, 2003) to give

$$
\begin{bmatrix}
\mathbf{q}_{\text{dot}}(t) \\
\dot{\mathbf{q}}_{\text{dot}}(t)
\end{bmatrix} = \begin{bmatrix}
\mathbf{\Psi}\left(\mathbf{q}(t)\right)\dot{\mathbf{q}}(t) + \boldsymbol{\psi}\left(\mathbf{q}(t)\right) \\
\mathbf{f}_{\text{dyn}}(\mathbf{q}(t), \mathbf{q}_{\text{dot}}(t), \widetilde{\mathbf{u}}(t))
\end{bmatrix},
\tag{24}
$$
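To illustrate the first-order form, the sketch below integrates a deliberately simplified instance of (24): a single scalar coordinate with **Ψ** = 1, ψ = 0, and $\mathbf{f}_{\mathrm{dyn}} = u/m$, i.e. a double integrator. The mass, control value, and step size are illustrative assumptions, not values from the paper:

```python
import numpy as np

m, u = 2.0, 4.0           # illustrative mass and constant control input

def rhs(y):
    # Toy instance of (24): state y = [q, q_dot], Psi = 1, psi = 0,
    # and f_dyn(q, q_dot, u) = u/m (translational dynamics only).
    q, q_dot = y
    return np.array([q_dot, u / m])

def rk4_step(f, y, h):
    # Classical fourth-order Runge-Kutta step.
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([0.0, 0.0])  # q(t1) = 0, q_dot(t1) = 0
h, steps = 0.01, 100      # integrate over [0, 1]
for _ in range(steps):
    y = rk4_step(rhs, y, h)

print(y)                  # analytic solution: q(1) = u/(2m) = 1.0, q_dot(1) = u/m = 2.0
```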




From (21) it follows that the formation's kinetic energy $k$ is not an explicit function of **s**, and hence, if **q** is chosen as an explicit function of $p$ components of $\mathbf{s}(t) \in \mathbf{R}^{n_3}$, then $p$ of the $6n - n_4$ equations in (22) cannot be straightforwardly recast in the explicit form given by (23). In this case, assume, without loss of generality, that **q** explicitly depends on the first $p$ components of **s** and substitute the corresponding $p$ equations in (24) with

$$s_j(t)\ddot{s}_j(t) = -\dot{s}_j^2(t) - \frac{\mathrm{d}^2}{\mathrm{d}t^2}f_{\mathrm{ineq},j}(\mathbf{x}_1(\mathbf{q}(t),\mathbf{q}_{\mathrm{dot}}(t)),\,\ldots,\mathbf{x}_n(\mathbf{q}(t),\mathbf{q}_{\mathrm{dot}}(t))),\tag{25}$$

which is obtained by differentiating (7). In this case, the boundary conditions are given by

$$\mathbf{f}_{\mathrm{ineq}}(\mathbf{s}\left(\mathbf{q}(t_1), \mathbf{q}_{\mathrm{dot}}(\mathbf{q}(t_1))\right), \mathbf{x}_1\left(\mathbf{q}(t_1), \mathbf{q}_{\mathrm{dot}}(\mathbf{q}(t_1))\right), \ldots, \mathbf{x}_n\left(\mathbf{q}(t_1), \mathbf{q}_{\mathrm{dot}}(\mathbf{q}(t_1))\right)) = \mathbf{0}_{r_3}, \tag{26}$$

$$\mathbf{f}_{\mathrm{ineq}}(\mathbf{s}\left(\mathbf{q}(t_2), \mathbf{q}_{\mathrm{dot}}(\mathbf{q}(t_2))\right), \mathbf{x}_1\left(\mathbf{q}(t_2), \mathbf{q}_{\mathrm{dot}}(\mathbf{q}(t_2))\right), \ldots, \mathbf{x}_n\left(\mathbf{q}(t_2), \mathbf{q}_{\mathrm{dot}}(\mathbf{q}(t_2))\right)) = \mathbf{0}_{r_3}, \tag{27}$$

where $f_{\mathrm{ineq},j} : \mathbb{R}^{n_3} \times \mathcal{D}_{\mathrm{rel}} \to \mathbb{R}$ is the $j$-th component of $\mathbf{f}_{\mathrm{ineq}}(\mathbf{s}(t), \mathbf{x}_1(t), \ldots, \mathbf{x}_n(t))$ (Jacobson & Lele, 1969), for $j = 1, \ldots, p$. If $s_j(t^*) = 0$ for some $t^* \in [t_1, t_2]$, then (25) can be replaced by

$$3\dot{s}_j(t)\ddot{s}_j(t) + s_j(t)\frac{\mathrm{d}^3 s_j(t)}{\mathrm{d}t^3} = -\frac{\mathrm{d}^3}{\mathrm{d}t^3} f_{\mathrm{ineq},j}(\mathbf{x}_1(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t)), \ldots, \mathbf{x}_n(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))), \tag{28}$$

where $\mathbf{s} \in \mathcal{C}^3(t_1, t_2)$. In general, (7) must be differentiated so that $\ddot{s}_j(t)$, or one of its higher-order derivatives, explicitly appears and is multiplied by a term that is non-zero for all $t \in [t_1, t_2]$. In this case, the differentiability assumptions on **s** and $\mathbf{f}_{\mathrm{ineq}}$ must be modified accordingly.

**Example 4.2.** Consider Example 4.1 with $\mathbf{q}(t) = [s_1(t), r_{1,1}(t), r_{1,2}(t), \sigma_1^{\mathrm{T}}(t), \mathbf{r}_2^{\mathrm{T}}(t)]^{\mathrm{T}}$. In this case, the formation's kinetic energy is given by

$$\begin{aligned} \mathbf{k}\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right) &= \frac{1}{2} \mathbf{m}\_1 \mathbf{v}\_1^\mathrm{T}\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right) \mathbf{v}\_1\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right) \\ &+ \frac{1}{2} \mathbf{m}\_2 \mathbf{v}\_2^\mathrm{T}(t) \mathbf{v}\_2(t) + \frac{1}{2} \sum\_{i=1}^2 \omega\_1^\mathrm{T}(t) \mathbf{I}\_{\mathrm{in},i} \omega\_1(t) . \end{aligned}$$

The dynamic equations can now be found by applying (22) and accounting for (17) giving

$$\mathbf{v}\_{1\dot{\jmath}}(t) = \frac{\mathbf{d}\mathbf{r}\_{1\dot{\jmath}}(t)}{\mathbf{d}t}, \quad \omega\_1(t) = \mathbf{R}\_{\text{rod}}^{-1}(\sigma\_1(t))\dot{\sigma}\_1(t), \quad \mathbf{v}\_2(t) = \frac{\mathbf{d}\mathbf{r}\_2(t)}{\mathbf{d}t},\tag{29}$$

$$\mathbf{m}\_{1} \frac{\mathbf{dv}\_{1,1}(t)}{\mathbf{d}t} = \mathbf{m}\_{1} \mathbf{a}\_{1} \left( \tilde{\mathbf{x}}\_{1} \left( \mathbf{q}(t), \mathbf{q}\_{\text{dot}}(t) \right) \right) + \mathbf{m}\_{1} \mathbf{u}\_{1,\text{tran},1}(t), \tag{30}$$

$$\mathbf{m}\_{1}\frac{\mathbf{dv}\_{1,2}(t)}{\mathbf{d}t} = \mathbf{m}\_{1}\mathbf{a}\_{2}\left(\tilde{\mathbf{x}}\_{1}(\mathbf{q}(t),\mathbf{q}\_{\text{dot}}(t))\right) + \mathbf{m}\_{1}\mathbf{u}\_{1,\text{tran},2}(t),\tag{31}$$

$$\mathbf{I}\_{\rm in,1} \frac{\mathbf{d}\omega\_1(t)}{\mathbf{d}t} = -\omega\_1^\times \left(\omega\_1(t)\right) \mathbf{I}\_{\rm in,1} \omega\_1(t) + \mathbf{m}\left(\tilde{\mathbf{x}}\_1\left(\mathbf{q}(t), \mathbf{q}\_{\rm dot}(t, \mathbf{q}(t))\right)\right) + \mathbf{I}\_{\rm in,1} \mathbf{u}\_{1,\rm rot}(t), \tag{32}$$

$$\mathbf{m}\_2 \frac{d\mathbf{v}\_2(t)}{dt} = \mathbf{m}\_2 \mathbf{a}\left(\tilde{\mathbf{x}}\_2\left(\mathbf{q}(t), \mathbf{q}\_{\text{dot}}(t)\right)\right) + \mathbf{m}\_2 \mathbf{u}\_{2,\text{tran}}(t),\tag{33}$$

where, for $j = 1, 2$, $u_{1,\mathrm{tran},j} : [t_1, t_2] \to \mathbb{R}$ (respectively, $a_j : \mathbb{R}^{12} \to \mathbb{R}$) is the $j$-th component of $\mathbf{u}_{1,\mathrm{tran}}(t)$ (respectively, $\mathbf{a}(\mathbf{x}_1(t, \mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t, \mathbf{q}(t))))$). Instead of deducing the dynamics of $s_1(t)$ from

$$\mathbf{m}\_1 \frac{\mathbf{d} \mathbf{v}\_{1,3}(t)}{\mathbf{d}t} = \mathbf{m}\_1 \mathbf{a}\_3 \left( \tilde{\mathbf{x}}\_1 \left( \mathbf{q}(t), \mathbf{q}\_{\text{dot}}(t) \right) \right) + \mathbf{m}\_1 \mathbf{u}\_{1,\text{tran},3}(t),$$

we use (16) and (25) to obtain


$$\begin{aligned} \dot{\mathbf{s}}\_1^2(t) + \mathbf{s}\_1(t)\ddot{\mathbf{s}}\_1(t) &= -2||\mathbf{v}\_1(t) - \mathbf{v}\_2(t)||\_2^2 \\ &+ (\mathbf{r}\_1(t) - \mathbf{r}\_2(t))^\mathsf{T} (\mathbf{a}\left(\widetilde{\mathbf{x}}\_1\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right)\right) + \mathbf{u}\_{1,\mathrm{tran}}(t)) \\ &- (\mathbf{r}\_1(t) - \mathbf{r}\_2(t))^\mathsf{T} (\mathbf{a}\left(\widetilde{\mathbf{x}}\_2\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t)\right)\right) + \mathbf{u}\_{2,\mathrm{tran}}(t)), \end{aligned} \tag{34}$$

which can be solved for $s_1(t)$ if $||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 < r_{\max}$, $t \in [t_1, t_2]$. In this case, the boundary conditions for (34) are given by

$$\begin{bmatrix} ||\mathbf{r}_1(t_1) - \mathbf{r}_2(t_1)||_2^2 - r_{\max} + \frac{1}{2}s_1^2(t_1) \\ r_{\min} - ||\mathbf{r}_1(t_1) - \mathbf{r}_2(t_1)||_2^2 + \frac{1}{2}s_2^2(t_1) \end{bmatrix} = \mathbf{0}_2,$$

$$\begin{bmatrix} ||\mathbf{r}_1(t_2) - \mathbf{r}_2(t_2)||_2^2 - r_{\max} + \frac{1}{2}s_1^2(t_2) \\ r_{\min} - ||\mathbf{r}_1(t_2) - \mathbf{r}_2(t_2)||_2^2 + \frac{1}{2}s_2^2(t_2) \end{bmatrix} = \mathbf{0}_2.$$
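These boundary conditions can be checked numerically. The sketch below (the function name and all input values are hypothetical, and $r_{\min}$, $r_{\max}$ are the squared-distance bounds) recovers the slack values $s_1$ and $s_2$ implied by them, assuming $r_{\min} < ||\mathbf{r}_1 - \mathbf{r}_2||_2^2 < r_{\max}$:

```python
import numpy as np

def slack_values(r1, r2, r_min, r_max):
    """Solve the boundary conditions
       ||r1 - r2||^2 - r_max + s1^2/2 = 0,
       r_min - ||r1 - r2||^2 + s2^2/2 = 0
    for the (nonnegative) slack variables s1, s2."""
    d2 = float(np.dot(r1 - r2, r1 - r2))   # squared separation
    assert r_min < d2 < r_max, "Lagrange coordinates not valid here"
    s1 = np.sqrt(2.0 * (r_max - d2))
    s2 = np.sqrt(2.0 * (d2 - r_min))
    return s1, s2
```

For instance, with $||\mathbf{r}_1 - \mathbf{r}_2||_2^2 = 4$, $r_{\min} = 1$, and $r_{\max} = 9$, one obtains $s_1 = \sqrt{10}$ and $s_2 = \sqrt{6}$.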

If, alternatively, $\mathbf{q}(t) = [\mathbf{x}_1^{\mathrm{T}}(t), \mathbf{r}_2^{\mathrm{T}}(t)]^{\mathrm{T}}$, then the formation's kinetic energy is given by

$$\mathbf{k}\left(\mathbf{q}(t), \mathbf{q}\_{\text{dot}}(t)\right) = \frac{1}{2} \sum\_{\mathbf{i}=1}^{2} \mathbf{m}\_{\text{i}} \mathbf{v}\_{\text{i}}^{\text{T}}(t) \mathbf{v}\_{\text{i}}(t) + \frac{1}{2} \sum\_{\mathbf{i}=1}^{2} \omega\_{1}^{\text{T}}(t) \mathbf{I}\_{\text{in},\text{i}} \omega\_{1}(t) \tag{35}$$

and the dynamic equations, obtained by applying (22) and (18), are given by

$$\mathbf{v}\_{1}(t) = \frac{\mathbf{d}\mathbf{r}\_{1}(t)}{\mathbf{d}t}, \quad \omega\_{1}(t) = \mathbf{R}\_{\text{rod}}^{-1}(\sigma\_{1}(t))\dot{\sigma}\_{1}(t), \quad \mathbf{v}\_{2}(t) = \frac{\mathbf{d}\mathbf{r}\_{2}(t)}{\mathbf{d}t}, \tag{36}$$

$$\mathbf{m}\_1 \frac{\mathbf{d}}{\mathbf{d}t} \mathbf{v}\_1(t) = \mathbf{m}\_1 \mathbf{a}\left(\widetilde{\mathbf{x}}\_1(t)\right) + m\_1 \mathbf{u}\_{1, \text{tran}}(t),\tag{37}$$

$$\mathbf{I}\_{\rm in,1} \frac{\mathbf{d}}{\mathbf{d}t} \omega\_1(t) = -\omega\_1^\times \left(\omega\_1(t)\right) \mathbf{I}\_{\rm in,1} \omega\_1(t) + \mathbf{m} \left(\widetilde{\mathbf{x}}\_1(t)\right) + \mathbf{I}\_{\rm in,1} \mathbf{u}\_{1,\rm rot}(t),\tag{38}$$

$$\mathbf{m}\_{2}\frac{\mathbf{d}}{\mathbf{d}t}\mathbf{v}\_{2}(t) = \mathbf{m}\_{2}\mathbf{a}\left(\left[\mathbf{r}\_{2}^{\mathrm{T}}(t), \mathbf{v}\_{2}^{\mathrm{T}}(t), \sigma\_{1}^{\mathrm{T}}(t), \omega\_{1}^{\mathrm{T}}(t)\right]^{\mathrm{T}}\right) + \mathbf{m}\_{2}\mathbf{u}\_{2,\mathrm{tran}}(t). \tag{39}$$

The Lagrange coordinates chosen imply that the first vehicle can be considered as unconstrained, that is, subject to (3), (14), and (15) only, and therefore, the dynamic equations (36) – (38) can be directly deduced from (3). Similarly, the translational dynamics of the second vehicle can be considered as unconstrained. Thus, (39) can be directly obtained from (3). Recall from Example 4.1 that the components of **q** are suitable Lagrange coordinates if $r_{\min} < ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 < r_{\max}$, whereas (29) – (33) hold if $r_{\min} < ||\mathbf{r}_1(t) - \mathbf{r}_2(t)||_2^2 < r_{\max}$ and $r_{1,3}(t) \neq r_{2,3}(t)$. Thus, $\mathbf{q}(t) = [\mathbf{x}_1^{\mathrm{T}}(t), \mathbf{r}_2^{\mathrm{T}}(t)]^{\mathrm{T}}$ is a more convenient choice of Lagrange coordinates than $\mathbf{q}(t) = [s_1(t), r_{1,1}(t), r_{1,2}(t), \sigma_1^{\mathrm{T}}(t), \mathbf{r}_2^{\mathrm{T}}(t)]^{\mathrm{T}}$.


This example will be further elaborated on in Section 6 for $\mathbf{q}(t) = [\mathbf{x}_1^{\mathrm{T}}(t), \mathbf{r}_2^{\mathrm{T}}(t)]^{\mathrm{T}}$, and hence, for notational convenience define $\mathbf{f}_{\mathrm{dyn},2}(\mathbf{x}_2(t), \mathbf{u}_{2,\mathrm{tran}}(t)) \triangleq \mathbf{a}(\mathbf{x}_2(t)) + \mathbf{u}_{2,\mathrm{tran}}(t)$ and

$$\mathbf{f}_{\mathrm{dyn},1}(\mathbf{x}_{1},\mathbf{q}_{\mathrm{dot},1}(\mathbf{x}_{1}),\mathbf{u}_{1}) \triangleq \begin{bmatrix} \mathbf{a}\left(\widetilde{\mathbf{x}}_{1}(t)\right) + \mathbf{u}_{1,\mathrm{tran}}(t) \\ -\mathbf{I}_{\mathrm{in},1}^{-1}\boldsymbol{\omega}_{1}^{\times}\left(\boldsymbol{\omega}_{1}(t)\right)\mathbf{I}_{\mathrm{in},1}\boldsymbol{\omega}_{1}(t) + \mathbf{I}_{\mathrm{in},1}^{-1}\mathbf{m}\left(\widetilde{\mathbf{x}}_{1}(t)\right) + \mathbf{u}_{1,\mathrm{rot}}(t) \end{bmatrix}.$$

#### **4.4 Path planning optimization problem revisited**

The trajectory optimization problem defined in Section 3.4 can be reformulated as follows. For all i = 1, ..., n and *t* ∈ [*t*1, *t*2], find **u**i,tran(*t*) (respectively, **u**i,rot(*t*)) among all admissible controls in Γi,tran (respectively, Γi,rot) such that the performance measure (2) is minimized and **q**(*t*) satisfies (24), (10), and (11).

By comparing this problem statement to the problem statement given in Section 3.4, it is clear that (5) and (6) are not explicitly accounted for in the above reformulation of the optimization problem. Hence, the constrained optimization problem has been reduced to an unconstrained optimization problem by the introduction of slack variables and Lagrange coordinates.
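The reduction above can be illustrated with a toy analogy (not the chapter's model, and far simpler than the formation dynamics): a particle confined to a circle has one equality constraint in Cartesian coordinates, but a single unconstrained Lagrange coordinate describes it with the constraint satisfied identically.

```python
import numpy as np

def constrained_state(theta, radius=1.0):
    """Map the unconstrained Lagrange coordinate theta to the Cartesian
    state (x, y); the constraint x**2 + y**2 = radius**2 holds
    identically, so no constraint equation is left in the optimization."""
    return np.array([radius * np.cos(theta), radius * np.sin(theta)])
```

Optimizing over `theta` alone is the one-dimensional analogue of optimizing over **q** instead of over the full, constrained state.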

#### **4.5 Transversality condition**

Let $\mathbf{S} : \mathcal{D}_1 \to \mathcal{D}_2$, where $\mathcal{D}_1 \subset \mathbb{R}^p$ and $\mathcal{D}_2 \subset \mathbb{R}^m$, be a continuously differentiable manifold and let the *manifold tangent* to **S** at $\mathbf{y}_0$ be given by

$$
\left.\frac{\partial \mathbf{S}(\mathbf{y})}{\partial \mathbf{y}}\right|\_{\mathbf{y}=\mathbf{y}\_0} (\mathbf{y} - \mathbf{y}\_0) = \mathbf{0}\_{\mathbf{m}}.\tag{40}
$$

Every vector $\mathbf{v} \in \mathbb{R}^p$ that is normal to the manifold tangent to **S** at $\mathbf{y}_0$, that is, $\mathbf{v}^{\mathrm{T}}\mathbf{y} = 0$ for all $\mathbf{y} \in \mathbb{R}^p$ such that (40) holds, is said to verify the *transversality condition* for **S** at $\mathbf{y}_0$.
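In finite dimensions this definition can be checked with linear algebra. The sketch below (the function name and tolerances are illustrative, not from the chapter) tests whether $\mathbf{v}$ is orthogonal to every tangent direction solving (40), i.e. to the null space of the Jacobian $\partial\mathbf{S}/\partial\mathbf{y}$ at $\mathbf{y}_0$:

```python
import numpy as np

def verifies_transversality(jac, v, tol=1e-9):
    """True iff v is orthogonal to the null space of jac = dS/dy at y0,
    i.e. v is normal to the manifold tangent to S at y0."""
    _, sing, vt = np.linalg.svd(jac)
    rank = int(np.sum(sing > tol * sing.max()))
    null_basis = vt[rank:].T              # columns span N(jac)
    return bool(np.allclose(null_basis.T @ v, 0.0, atol=1e-8))
```

For $\mathbf{S}(\mathbf{y}) = ||\mathbf{y}||_2^2 - 1$ at $\mathbf{y}_0 = (1, 0, 0)$, the Jacobian is $(2, 0, 0)$; any multiple of $(1, 0, 0)$ verifies the condition, while $(0, 1, 0)$ does not.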

#### **4.6 Pontryagin's minimum principle**

Assume that a set of Lagrange coordinates has been found and that the formation's dynamic equations can be written in the form given by (24). Define the *costate vectors* $\lambda_{\mathrm{dot}} : [t_1, t_2] \to \mathbb{R}^{6n-n_4}$ and $\lambda_{\mathrm{dyn}} : [t_1, t_2] \to \mathbb{R}^{6n-n_4}$ so that the *costate equation*

$$\frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \lambda_{\mathrm{dot}}(t) \\ \lambda_{\mathrm{dyn}}(t) \end{bmatrix} = - \left( \frac{\partial}{\partial [\mathbf{q}^{\mathrm{T}}, \mathbf{q}_{\mathrm{dot}}^{\mathrm{T}}]^{\mathrm{T}}} \begin{bmatrix} \boldsymbol{\Psi}(\mathbf{q}(t))\dot{\mathbf{q}}(t) + \boldsymbol{\psi}(\mathbf{q}(t)) \\ \mathbf{f}_{\mathrm{dyn}}(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t), \tilde{\mathbf{u}}(t)) \end{bmatrix} \right)^{\mathrm{T}} \begin{bmatrix} \lambda_{\mathrm{dot}}(t) \\ \lambda_{\mathrm{dyn}}(t) \end{bmatrix} \tag{41}$$

holds. The boundary conditions for (41) are given in Theorem 4.2 below. Given *λ*<sup>0</sup> ∈ **R**, define the *Hamiltonian function*

$$\begin{split} \mathfrak{h}\left( \mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t), \tilde{\mathbf{u}}(t), \lambda_{\mathrm{dyn}}(t), \lambda_{\mathrm{dot}}(t) \right) & \triangleq \lambda_0 \sum_{i=1}^{n} \mu_i ||\mathbf{u}_i(t)||_2 + \lambda_{\mathrm{dot}}^{\mathrm{T}}(t) \mathbf{q}_{\mathrm{dot}}(t) \\ & + \lambda_{\mathrm{dyn}}^{\mathrm{T}}(t) \mathbf{f}_{\mathrm{dyn}}(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t), \tilde{\mathbf{u}}(t)). \end{split} \tag{42}$$
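For concreteness, (42) can be evaluated numerically. The sketch below (all argument values hypothetical) assembles the fuel term, the kinematic coupling term, and the dynamics term:

```python
import numpy as np

def hamiltonian(lam0, mu, u_list, lam_dot, q_dot, lam_dyn, f_dyn_val):
    """h = lam0 * sum_i mu_i * ||u_i||_2 + lam_dot^T q_dot + lam_dyn^T f_dyn."""
    fuel = lam0 * sum(m * np.linalg.norm(u) for m, u in zip(mu, u_list))
    return fuel + float(lam_dot @ q_dot) + float(lam_dyn @ f_dyn_val)
```

Note that the control enters both through the fuel term and through $\mathbf{f}_{\mathrm{dyn}}$, which is what makes the pointwise minimization over the admissible control set nontrivial.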

Finally, define


$$\mathfrak{m}\left(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t), \lambda_{\mathrm{dyn}}(t), \lambda_{\mathrm{dot}}(t)\right) \triangleq \min_{\tilde{\mathbf{u}} \in \prod_{i=1}^{n} (\Gamma_{i,\mathrm{tran}} \times \Gamma_{i,\mathrm{rot}})} \mathfrak{h}\left(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t), \tilde{\mathbf{u}}(t), \lambda_{\mathrm{dyn}}(t), \lambda_{\mathrm{dot}}(t)\right) \tag{43}$$

The following theorem is known as the *Pontryagin minimum principle*. For details on this theorem and its numerous applications to optimal control, see Pontryagin et al. (1962).

**Theorem 4.2.** *(Pontryagin et al., 1962) For all* $i = 1, \ldots, n$, *let* $\mathbf{u}_{i,\mathrm{tran}}^*(t)$ *and* $\mathbf{u}_{i,\mathrm{rot}}^*(t)$, $t \in [t_1, t_2]$, *be admissible controls in* $\Gamma_{i,\mathrm{tran}}$ *and* $\Gamma_{i,\mathrm{rot}}$*, respectively, such that* $\mathbf{q}^*(t)$ *satisfies* (24), (10), *and* (11). *If* $\mathbf{u}_{i,\mathrm{tran}}^*(t)$ *and* $\mathbf{u}_{i,\mathrm{rot}}^*(t)$ *solve the trajectory optimization problem stated in Section 4.4, then there exist* $\lambda_0^* \in \mathbb{R}_+$, $\lambda_{\mathrm{dyn}}^*(t)$, *and* $\lambda_{\mathrm{dot}}^*(t)$ *such that i)* $|\lambda_0^*| + ||\lambda_{\mathrm{dyn}}^*(t)||_2 + ||\lambda_{\mathrm{dot}}^*(t)||_2 \neq 0$, $t \in [t_1, t_2]$, *ii)* (41) *holds, iii)* $\mathfrak{h}(\mathbf{q}^*(t), \tilde{\mathbf{u}}^*(t), \lambda_{\mathrm{dyn}}^*(t), \lambda_{\mathrm{dot}}^*(t))$ *attains its minimum almost everywhere on* $[t_1, t_2]$*, except on a finite number of points, and iv)* $\lambda_{\mathrm{dyn}}^*(t_1)$ *and* $\lambda_{\mathrm{dot}}^*(t_1)$ *(respectively,* $\lambda_{\mathrm{dyn}}^*(t_2)$ *and* $\lambda_{\mathrm{dot}}^*(t_2)$*) satisfy the transversality condition for* $\mathbf{S}_1$ *(respectively,* $\mathbf{S}_2$*) at* $\mathbf{q}^*(t_1)$ *(respectively,* $\mathbf{q}^*(t_2)$*).*

The Pontryagin minimum principle is a necessary condition for optimality, and hence, it provides *candidate* optimal control vectors. Sufficient conditions for optimality that are currently available in the literature do not apply to the optimization problem discussed herein.
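Assertion iii) is what makes the principle usable in practice: at each instant the Hamiltonian is minimized over the admissible control set. For a scalar, bounded control with fuel term $\mu|u|$ and a costate weight $\lambda$ multiplying the control in the dynamics, this pointwise minimization can be sketched by brute force over a control grid (all values hypothetical):

```python
import numpy as np

def pointwise_min_control(lam, mu=1.0, u_max=1.0, n_grid=201):
    """Minimize  mu*|u| + lam*u  over u in [-u_max, u_max]:
    the control-dependent part of a scalar Hamiltonian.  The minimizer
    is 'bang-off': u = 0 when |lam| < mu, saturated otherwise."""
    us = np.linspace(-u_max, u_max, n_grid)
    return us[np.argmin(mu * np.abs(us) + lam * us)]
```

The recovered switching structure, zero thrust while $|\lambda| < \mu$ and saturation otherwise, is a discrete analogue of Lawden's primer vector condition.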

It is worth noting that, instead of introducing the Lagrange coordinates, the equality constraints (7) and (5) can be accounted for by introducing Lagrange multipliers. This approach requires modifying the assigned performance measure and introducing additional costate vectors (Giaquinta & Hildebrandt, 1996; Lee & Markus, 1968). The dynamics of the costate vectors are characterized by ordinary differential equations known as costate equations, which need to be integrated numerically together with the dynamic equations of the state vector. Therefore, the computational complexity of finding optimal trajectories for large formations increases drastically when Lagrange multipliers are employed (L'Afflitto & Sultan, 2010). Alternatively, finding a suitable set of Lagrange coordinates can be a demanding task and in some cases the Lagrange coordinates may not have physical meaning (Pars, 1965); however, this reduces the dimension of the costate equation and consequently reduces the computational complexity.

Finally, we say the optimization problem is *normal* if $\lambda_0 \neq 0$; otherwise, the optimization problem is *abnormal*. Normality can be shown by using the *Euler necessary condition*

$$\left.\frac{\partial \mathfrak{h}\left(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t), \tilde{\mathbf{u}}(t), \lambda_{\mathrm{dyn}}(t), \lambda_{\mathrm{dot}}(t)\right)}{\partial \tilde{\mathbf{u}}}\right|_{\tilde{\mathbf{u}} = \tilde{\mathbf{u}}^*} = \mathbf{0}_{6n}^{\mathrm{T}}, \tag{44}$$

where $\tilde{\mathbf{u}}^*(t) \triangleq \left[[\mathbf{u}_{1,\mathrm{tran}}^{*\mathrm{T}}(t), \mathbf{u}_{1,\mathrm{rot}}^{*\mathrm{T}}(t)]^{\mathrm{T}}, \ldots, [\mathbf{u}_{n,\mathrm{tran}}^{*\mathrm{T}}(t), \mathbf{u}_{n,\mathrm{rot}}^{*\mathrm{T}}(t)]^{\mathrm{T}}\right]^{\mathrm{T}} \in \mathrm{int}\left(\prod_{i=1}^n(\Gamma_{i,\mathrm{tran}} \times \Gamma_{i,\mathrm{rot}})\right)$. In particular, assume, *ad absurdum*, that $\lambda_0 = 0$. Now, if (41) and (44) imply that $\lambda_{\mathrm{dot}}(t) = \mathbf{0}_{6n-n_4}$ and $\lambda_{\mathrm{dyn}}(t) = \mathbf{0}_{6n-n_4}$ for some $t \in [t_1, t_2]$, then assertion *i)* of Theorem 4.2 is contradicted. Therefore, $\lambda_0 \neq 0$, and hence, the optimization problem is normal. In this case, we assume without loss of generality that $\lambda_0 = 1$.
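Numerically, the costate equation (41) is a linear, time-varying system $\dot{\lambda} = -(\partial\mathbf{f}/\partial\mathbf{z})^{\mathrm{T}}\lambda$ integrated against the flow of the state. A minimal sketch for a hypothetical constant Jacobian, stepping backward from a terminal condition:

```python
import numpy as np

def integrate_costate_backward(jac, lam_t2, t1, t2, n_steps=1000):
    """Euler integration of  dlam/dt = -jac^T lam  backward from t2 to t1,
    i.e. stepping lam(t - dt) = lam(t) + dt * jac^T lam(t)."""
    dt = (t2 - t1) / n_steps
    lam = np.array(lam_t2, dtype=float)
    for _ in range(n_steps):
        lam = lam + dt * (jac.T @ lam)
    return lam
```

For a double-integrator Jacobian the position costate stays constant while the velocity costate grows linearly backward in time, mirroring the linear primer vector of the classical fuel-optimal problem.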

{**u**n(·)}<sup>∞</sup>

minimizers (Wall, 2008).

(Hassan et al., 2005).

the formation system dynamics problem.

the UAV formation problem.

+*∂*ψ(**q**) *∂***q q**=**q**∗

<sup>n</sup>=0, that is, if limn→+<sup>∞</sup> **u**n(*t*) = **u**(*t*), then J[**u**(*t*)] ≤ lim infn→+<sup>∞</sup> J[**u**n(*t*)], **u**<sup>n</sup> ∈ Γ. Finally, it is also worth noting that approximate analytical methods can be used to solve the optimal path planning problem such as shape-based approximation methods (Petropoulos & Longuski, 2004), which are generally less effective due to the arbitrary parameterization of the

Most of the results on the fuel consumption optimization employ numerical methods (Betts, 1998), which can be categorized as indirect or direct. Indirect numerical methods, which mimic the variational approach, suffer from high computational complexity since adjoint variables must be introduced. Alternatively, direct numerical methods are computationally more efficient, however, they require casting the given problem into a parameter optimization problem (Herman & Conway, 1987). Among the numerical methods commonly in use, it is worth mentioning genetic algorithms (Seereram et al., 2000) and particle swarm optimizers

A Variational Approach to the Fuel Optimal Control Problem for UAV Formations 235

One of the contributions of the present paper is that it extends Lawden's results on primer vector theory to formations of vehicles modeled as 6 DoF rigid bodies subject to generic environmental forces and moments by applying Pontryagin's minimum principle. As in all classical variational methods, Pontryagin's minimum principle is not suitable for numerically computing the optimal trajectory of a formation. However, Pontryagin's minimum principle allows us to draw analytical conclusions since it provides a generalization of the necessary conditions used by Lawden (1963), allows us to formally implement bounded integrable functions as admissible controls, and allows us to account for control constraints. Prussing (2010) and Marec (1979) have used Pontryagin's minimum principle to address primer vector theory using the same assumptions as Lawden (1963). In contrast, the present work provides additional analytical results for generic mission scenarios and complex environmental conditions for which numerical results can be verified. Furthermore, this paper exploits some properties of the costate space and consequently provides further insight into

**6. Necessary conditions for optimality of UAV formation trajectories**

*at least one* **<sup>u</sup>**˜ <sup>∗</sup> *such that* Jformation [**u**˜ <sup>∗</sup>(·)] <sup>≤</sup> Jformation [**u**˜(·)] *for all* **<sup>u</sup>**˜ <sup>∈</sup> <sup>∏</sup><sup>n</sup>

minimizer on <sup>Γ</sup>i,tran <sup>×</sup> <sup>Γ</sup>i,rot. Now, since *<sup>μ</sup>*<sup>i</sup> <sup>∈</sup> [0, 1] with <sup>∑</sup><sup>n</sup>

**Proposition 6.2.** *Assume that the hypothesis of Theorem 4.1 hold. If* λ∗

*, then the path planning problem is normal.*

The following propositions are needed to develop the necessary conditions for optimality of

**Proposition 6.1.** *Consider the performance measure* Jformation [**u**˜(·)] *given by* (2)*. Then, there exists*

*Proof.* Since the integrand of the performance measure (1) is a continuous function defined on the compact set Γi,tran × Γi,rot, it follows from Weierstrass' theorem that (1) has a global

<sup>i</sup>=1(Γi,tran × Γi,rot)*.*

*∂***Ψ**(**q**) *∂***q q**=**q**∗

**˙q**∗(*t*)

<sup>i</sup>=<sup>1</sup> *μ*<sup>i</sup> = 1, the result is immediate.

dot(*t*) ∈ N

#### **5. Analytical and numerical approaches to the optimal path planning problem**

Finding minimizers to (2) subject to the constraints (3) – (6) can be formulated as a Lagrange optimization problem (Ewing, 1969), which has been extensively studied both analytically and numerically in the literature. Analytical methods rely on either Lagrange's variational approach using calculus of variations or on the direct approach. In the classical variational approach, candidate minimizers for a given performance functional can be found by applying the Euler necessary condition. In order to find the minimizers, candidate optimal solutions need to be further tested by applying the Clebsh necessary condition, Jacobi necessary condition, Weierstrass necessary condition, as well as the associated sufficient conditions (Ewing, 1969; Giaquinta & Hildebrandt, 1996).

This classical analytical approach is not practical since applying the Euler necessary condition involves solving a differential-algebraic boundary value problem, whose analytical solutions are impossible to find for many practical problems of interest. Moreover, numerical solutions to this boundary value problem are affected by a strong sensitivity to the boundary conditions (Bryson, 1975). Furthermore, verifying the Jacobi necessary condition or the Weierstrass necessary condition can be a daunting task (L'Afflitto & Sultan, 2010).

A variational approach to the optimal path planning problem for a single vehicle, known as *primer vector theory*, was developed by Lawden (1963). Lawden's problem was formulated under the assumptions that the acceleration vector **a** induced by external forces due to the environment is a function of only the position vector, the vehicle is a 3 DoF point mass, and the state and control are only subject to equality constraints (Lawden, 1963). Primer vector theory is successfully employed in spacecraft trajectory optimization (Jamison & Coverstone, 2010), orbit transfers (Petropoulos & Russell, 2008), and optimal rendezvous problems (Zaitri et al., 2010); however, vehicles are often assumed to be point masses subject to only gravitational acceleration. Among the few studies on primer vector theory applied to vehicle formations, it is worth noting the work of Mailhe & Guzman (2004), where the formation initialization problem is addressed. Applications of primer vector theory to 6 DoF single vehicles have been employed to optimize the descent on Mars (Topcu et al., 2007). These studies, however, assume that the spacecraft is subject to a constant gravity acceleration, the control variables are the translational acceleration and the angular rates, and the translational acceleration can be pointed in any direction by rotating the vehicle.

Pontryagin's minimum principle is a variational method that is equivalent to the Weierstrass necessary condition with the advantage of addressing constraints on the control more effectively than applying the classical variational approach. State constraints need to be addressed by applying an optimal switching condition on the costate equation (Pontryagin et al., 1962), which generally increases the complexity of the problem. In the present formulation, the constraints on the formation are addressed by employing Lagrange coordinates, which does not introduce further conditions on the costate vector dynamics.

The direct approach in the calculus of variations, which is more recent than the variational approach, is based on defining a minimizing sequence of control functions $\mathbf{u}_n(t)$ in some set $\Gamma$ such that $\lim_{n\to+\infty} \mathbf{u}_n(t) = \mathbf{u}(t)$ is a minimizer of the performance measure $J[\mathbf{u}(\cdot)]$. To this end, the following conditions should be met: *i*) compactness of $\Gamma$, so that a minimizing sequence contains a convergent subsequence, *ii*) closedness of $\Gamma$, so that the limit of such a subsequence is contained in $\Gamma$, and *iii*) lower semicontinuity of the sequence $\{\mathbf{u}_n(\cdot)\}_{n=0}^{\infty}$, that is, if $\lim_{n\to+\infty} \mathbf{u}_n(t) = \mathbf{u}(t)$, then $J[\mathbf{u}(t)] \leq \liminf_{n\to+\infty} J[\mathbf{u}_n(t)]$, $\mathbf{u}_n \in \Gamma$. Finally, it is also worth noting that approximate analytical methods can be used to solve the optimal path planning problem, such as shape-based approximation methods (Petropoulos & Longuski, 2004), which are generally less effective due to the arbitrary parameterization of the minimizers (Wall, 2008).

Most results on fuel-consumption optimization employ numerical methods (Betts, 1998), which can be categorized as indirect or direct. Indirect numerical methods, which mimic the variational approach, suffer from high computational complexity since adjoint variables must be introduced. Direct numerical methods are computationally more efficient; however, they require casting the given problem into a parameter optimization problem (Herman & Conway, 1987). Among the numerical methods commonly in use, it is worth mentioning genetic algorithms (Seereram et al., 2000) and particle swarm optimizers (Hassan et al., 2005).
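As an illustration of the direct approach, the sketch below transcribes a 1-D double-integrator "fuel" problem into a parameter optimization problem. The dynamics, bounds, horizon, and smoothing constant are illustrative stand-ins chosen for this sketch, not the chapter's formation model.

```python
# Hypothetical sketch of a direct method: discretize the control on N
# intervals and hand the resulting parameter optimization problem to a
# nonlinear programming solver.  A smoothed |u| keeps the L1 "fuel"
# objective differentiable for the gradient-based solver.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0
dt = T / N

def propagate(u):
    """Explicit-Euler integration of x' = v, v' = u from rest at x = 0."""
    x, v = 0.0, 0.0
    for uk in u:
        x += v * dt
        v += uk * dt
    return x, v

def fuel(u):
    return np.sum(np.sqrt(np.asarray(u)**2 + 1e-8)) * dt  # smoothed integral of |u|

def boundary_error(u):
    x, v = propagate(u)
    return [x - 1.0, v]          # reach x = 1 at rest at the final time

res = minimize(fuel, x0=np.ones(N), method="SLSQP",
               bounds=[(-10.0, 10.0)] * N,
               constraints={"type": "eq", "fun": boundary_error})
print([round(e, 4) for e in boundary_error(res.x)])
```

For this toy problem the solver typically concentrates thrust near the endpoints, mirroring the maximum/null thrust-arc structure that the analysis below characterizes.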

One of the contributions of the present paper is that it extends Lawden's results on primer vector theory to formations of vehicles modeled as 6 DoF rigid bodies subject to generic environmental forces and moments by applying Pontryagin's minimum principle. As in all classical variational methods, Pontryagin's minimum principle is not suitable for numerically computing the optimal trajectory of a formation. However, Pontryagin's minimum principle allows us to draw analytical conclusions since it provides a generalization of the necessary conditions used by Lawden (1963), allows us to formally implement bounded integrable functions as admissible controls, and allows us to account for control constraints. Prussing (2010) and Marec (1979) have used Pontryagin's minimum principle to address primer vector theory using the same assumptions as Lawden (1963). In contrast, the present work provides additional analytical results for generic mission scenarios and complex environmental conditions for which numerical results can be verified. Furthermore, this paper exploits some properties of the costate space and consequently provides further insight into the formation system dynamics problem.

#### **6. Necessary conditions for optimality of UAV formation trajectories**

The following propositions are needed to develop the necessary conditions for optimality of the UAV formation problem.

**Proposition 6.1.** *Consider the performance measure* $J_{\mathrm{formation}}[\tilde{\mathbf{u}}(\cdot)]$ *given by* (2)*. Then, there exists at least one* $\tilde{\mathbf{u}}^*$ *such that* $J_{\mathrm{formation}}[\tilde{\mathbf{u}}^*(\cdot)] \leq J_{\mathrm{formation}}[\tilde{\mathbf{u}}(\cdot)]$ *for all* $\tilde{\mathbf{u}} \in \prod_{i=1}^n (\Gamma_{i,\mathrm{tran}} \times \Gamma_{i,\mathrm{rot}})$*.*

*Proof.* Since the integrand of the performance measure (1) is a continuous function defined on the compact set $\Gamma_{i,\mathrm{tran}} \times \Gamma_{i,\mathrm{rot}}$, it follows from Weierstrass' theorem that (1) has a global minimizer on $\Gamma_{i,\mathrm{tran}} \times \Gamma_{i,\mathrm{rot}}$. Now, since $\mu_i \in [0, 1]$ with $\sum_{i=1}^n \mu_i = 1$, the result is immediate.

**Proposition 6.2.** *Assume that the hypotheses of Theorem 4.1 hold. If* $\lambda^*_{\mathrm{dot}}(t) \in \mathcal{N}\left(\frac{\partial \mathbf{\Psi}(\mathbf{q})}{\partial \mathbf{q}}\Big|_{\mathbf{q}=\mathbf{q}^*}\dot{\mathbf{q}}^*(t) + \frac{\partial \psi(\mathbf{q})}{\partial \mathbf{q}}\Big|_{\mathbf{q}=\mathbf{q}^*}\right)$*, then the path planning problem is normal.*


*Proof.* First, note that the Hamiltonian function (42) can be rewritten as

$$\begin{aligned} \mathfrak{h}\left(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t), \tilde{\mathbf{u}}(t), \lambda_{\mathrm{dyn}}(t), \lambda_{\mathrm{dot}}(t)\right) &= \lambda_0 \sum_{i=1}^n \mu_i ||\mathbf{u}_i(t)||_2 \\ &\quad + \sum_{i=1}^n \mathbf{u}^{\mathrm{T}}_{i,\mathrm{tran}}(t)\, \frac{\partial \mathbf{v}_i\left(\mathbf{q}, \mathbf{q}_{\mathrm{dot}}\right)}{\partial \mathbf{q}_{\mathrm{dot}}}\, \lambda_{\mathrm{dyn}}(t) \\ &\quad + \sum_{i=1}^n \mathbf{u}^{\mathrm{T}}_{i,\mathrm{rot}}(t)\, \frac{\partial \omega_i\left(\mathbf{q}, \mathbf{q}_{\mathrm{dot}}\right)}{\partial \mathbf{q}_{\mathrm{dot}}}\, \lambda_{\mathrm{dyn}}(t) \\ &\quad + \lambda^{\mathrm{T}}_{\mathrm{dyn}}(t)\, \hat{\mathbf{f}}_{\mathrm{dyn}}(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t)) + \lambda^{\mathrm{T}}_{\mathrm{dot}}(t)\, \mathbf{q}_{\mathrm{dot}}(t). \end{aligned} \tag{45}$$

Furthermore, note that (44) implies that

$$
\lambda_0^* \sum_{i=1}^n \mu_i \frac{\mathbf{u}_i^{*\mathrm{T}}(t)}{||\mathbf{u}_i^*(t)||_2} = -\sum_{i=1}^n \left(\begin{bmatrix} \frac{\partial \mathbf{v}_i(\mathbf{q}, \mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}} \\[4pt] \frac{\partial \omega_i(\mathbf{q}, \mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}} \end{bmatrix}\Bigg|_{\left(\mathbf{q}^*, \mathbf{q}^*_{\mathrm{dot}}\right)} \lambda^*_{\mathrm{dyn}}(t)\right)^{\mathrm{T}},
$$

where $\tilde{\mathbf{u}}^* \in \mathrm{int}\left(\prod_{i=1}^n(\Gamma_{i,\mathrm{tran}} \times \Gamma_{i,\mathrm{rot}})\right)$ and where we use the subscript $(\mathbf{q}^*, \mathbf{q}^*_{\mathrm{dot}})$ for $(\mathbf{q}, \mathbf{q}_{\mathrm{dot}}) = (\mathbf{q}^*, \mathbf{q}^*_{\mathrm{dot}})$. Now, assume, *ad absurdum*, that $\lambda^*_0 = 0$ and note that $\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}} = \frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}}\frac{\partial \mathbf{q}}{\partial \mathbf{q}_{\mathrm{dot}}}$ and $\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}} = \frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}}\frac{\partial \mathbf{q}}{\partial \mathbf{q}_{\mathrm{dot}}}$. Since $\mathbf{\Psi}(\mathbf{q})$ is diffeomorphic and Theorem 4.1 holds, it follows that $\lambda^*_{\mathrm{dyn}}(t) = \mathbf{0}_{6n-n_4}$. In this case, (41) can be explicitly written as

$$
\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix} \lambda^*_{\mathrm{dot}}(t) \\ \lambda^*_{\mathrm{dyn}}(t) \end{bmatrix} = -\begin{bmatrix} \frac{\partial \mathbf{\Psi}(\mathbf{q})}{\partial \mathbf{q}}\dot{\mathbf{q}}(t) + \frac{\partial \psi(\mathbf{q})}{\partial \mathbf{q}} & \mathbf{0}_{(6n-n_4)\times(6n-n_4)} \\ \frac{\partial \mathbf{f}_{\mathrm{dyn}}(\mathbf{q},\mathbf{q}_{\mathrm{dot}},\tilde{\mathbf{u}})}{\partial \mathbf{q}} & \frac{\partial \mathbf{f}_{\mathrm{dyn}}(\mathbf{q},\mathbf{q}_{\mathrm{dot}},\tilde{\mathbf{u}})}{\partial \mathbf{q}_{\mathrm{dot}}} \end{bmatrix}^{\mathrm{T}}_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)} \begin{bmatrix} \lambda^*_{\mathrm{dot}}(t) \\ \lambda^*_{\mathrm{dyn}}(t) \end{bmatrix}, \tag{46}
$$

and hence, $\lambda^*_{\mathrm{dot}}(t) = \mathbf{0}_{6n-n_4}$, which contradicts *i*) of Theorem 4.2.

It follows from Proposition 6.2 that the path planning optimization problem for a constrained formation is abnormal. Example 6.1 below, however, shows that this problem is normal for unconstrained 3 DoF vehicles, which is a well-known result in the literature (Lawden, 1963).

**Theorem 6.1.** *Consider the path planning optimization problem. If* $\sum_{i=1}^n \mathbf{u}^{*\mathrm{T}}_{i,\mathrm{tran}}(t)\,\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}})} + \sum_{i=1}^n \mathbf{u}^{*\mathrm{T}}_{i,\mathrm{rot}}(t)\,\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}})}$ *and* $-\lambda^*_{\mathrm{dyn}}(t)$ *are parallel, then the performance measure* (2) *is minimized. Moreover, for all* $i = 1, \ldots, n$*, the following conditions hold.*

$$\begin{array}{l} \text{i)}\ \textit{If}\ \lambda_0^*\mu_i > \left|\left|\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2,\ \textit{then}\ \mathbf{u}^*_{i,\mathrm{tran}}(t) = \mathbf{0}_3. \\[10pt] \text{ii)}\ \textit{If}\ \lambda_0^*\mu_i > \left|\left|\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2,\ \textit{then}\ \mathbf{u}^*_{i,\mathrm{rot}}(t) = \mathbf{0}_3. \end{array}$$

$$\text{iii)}\ \textit{If}\ \lambda_0^*\mu_i < \left|\left|\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2,\ \textit{then}\ ||\mathbf{u}^*_{i,\mathrm{tran}}(t)||_2 = \rho_{i,2}.$$


$$\text{iv)}\ \textit{If}\ \lambda_0^*\mu_i < \left|\left|\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2,\ \textit{then}\ ||\mathbf{u}^*_{i,\mathrm{rot}}(t)||_2 = \rho_{i,4}.$$

$$\text{v)}\ \textit{If}\ \lambda_0^*\mu_i = \left|\left|\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2,\ \textit{then}\ \mathbf{u}^*_{i,\mathrm{tran}}(t)\ \textit{is unspecified}.$$

$$\text{vi)}\ \textit{If}\ \lambda_0^*\mu_i = \left|\left|\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2,\ \textit{then}\ \mathbf{u}^*_{i,\mathrm{rot}}(t)\ \textit{is unspecified}.$$

*Proof.* It follows from (45) that $\mathfrak{h}\left(\mathbf{q}(t), \tilde{\mathbf{u}}(t), \lambda_{\mathrm{dyn}}(t), \lambda_{\mathrm{dot}}(t)\right)$ is minimized if, for all $i = 1, \ldots, n$, $-\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\big|_{(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}})}\lambda^*_{\mathrm{dyn}}(t)$ is parallel to $\mathbf{u}^*_{i,\mathrm{tran}}(t)$ and if $-\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\big|_{(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}})}\lambda^*_{\mathrm{dyn}}(t)$ is parallel to $\mathbf{u}^*_{i,\mathrm{rot}}(t)$. Thus, using the triangle inequality, it follows that

$$\begin{aligned} &\mathfrak{h}\left(\mathbf{q}^*(t), \tilde{\mathbf{u}}^*(t), \lambda^*_{\mathrm{dyn}}(t), \lambda^*_{\mathrm{dot}}(t)\right) - \lambda^{*\mathrm{T}}_{\mathrm{dot}}(t)\,\mathbf{q}_{\mathrm{dot}}(\mathbf{q}^*(t)) - \lambda^{*\mathrm{T}}_{\mathrm{dyn}}(t)\,\hat{\mathbf{f}}_{\mathrm{dyn}}(\mathbf{q}^*(t), \mathbf{q}_{\mathrm{dot}}(\mathbf{q}^*(t))) \\ &\quad \leq \sum_{i=1}^n \left(\lambda^*_0\mu_i - \left|\left|\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2\right)||\mathbf{u}^*_{i,\mathrm{tran}}(t)||_2 \\ &\qquad + \sum_{i=1}^n \left(\lambda^*_0\mu_i - \left|\left|\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\Big|_{\left(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}}\right)}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2\right)||\mathbf{u}^*_{i,\mathrm{rot}}(t)||_2, \end{aligned} \tag{47}$$

which proves i) – iv). Next, if $\lambda^*_0\mu_i = \left|\left|\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\big|_{(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}})}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2$ (respectively, $\lambda^*_0\mu_i = \left|\left|\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\big|_{(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}})}\lambda^*_{\mathrm{dyn}}(t)\right|\right|_2$), then Pontryagin's minimum principle does not provide any information about the optimal control, and hence, v) and vi) hold.

Analogous to Lawden's primer vector theory (Lawden, 1963), $\frac{\partial \mathbf{v}_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\big|_{(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}})}\lambda^*_{\mathrm{dyn}}(t)$ and $\frac{\partial \omega_i(\mathbf{q},\mathbf{q}_{\mathrm{dot}})}{\partial \mathbf{q}_{\mathrm{dot}}}\big|_{(\mathbf{q}^*,\mathbf{q}^*_{\mathrm{dot}})}\lambda^*_{\mathrm{dyn}}(t)$ determine the magnitude and the direction of the control forces,

and hence, we denote them as the *translational primer vector* and the *rotational primer vector*, respectively. Moreover, the trajectories given by each of the cases in Theorem 6.1 are called *arcs*. For each $i = 1, \ldots, n$, the arcs corresponding to i) (respectively, ii)) are called *null translational* (respectively, *rotational*) *thrust arcs*. Similarly, arcs corresponding to iii) (respectively, iv)) are called *maximum translational* (respectively, *rotational*) *thrust arcs*. Finally, arcs corresponding to v) (respectively, vi)) are called *singular translational* (respectively, *rotational*) *thrust arcs*. The optimal translational and rotational control vectors for v) and vi) in Theorem 6.1 need to be deduced by applying the generalized Legendre-Clebsch condition (Giaquinta & Hildebrandt, 1996).
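For each vehicle, the arc taxonomy of Theorem 6.1 reduces to a pointwise comparison between the primer vector magnitude and the switching level $\lambda^*_0\mu_i$. A minimal sketch, with a hypothetical function name and tolerance:

```python
# Classify the optimal thrust arc of Theorem 6.1 from the primer vector
# norm p = ||(dv_i/dq_dot) lambda*_dyn||_2 and the level s = lambda0* mu_i.
# (Illustrative helper; the tolerance choice is an assumption.)
def thrust_arc(primer_norm: float, switch_level: float, tol: float = 1e-9) -> str:
    if abs(primer_norm - switch_level) <= tol:
        return "singular"   # cases v)/vi): the minimum principle is silent
    if primer_norm < switch_level:
        return "null"       # cases i)/ii): optimal control is zero
    return "maximum"        # cases iii)/iv): thrust at the bound

print(thrust_arc(0.2, 1.0))   # null
print(thrust_arc(3.0, 1.0))   # maximum
print(thrust_arc(1.0, 1.0))   # singular
```

The same comparison applies separately to the translational and rotational channels of each vehicle.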

**Theorem 6.2.** *Consider the path planning optimization problem. Then, there exists* $c^* \in \mathbb{R}$ *such that*

$$\mathfrak{m}\left(\mathbf{q}^\*(t), \mathbf{q}\_{\mathrm{dot}}^\*(t), \lambda\_{\mathrm{dyn}}^\*(t), \lambda\_{\mathrm{dot}}^\*(t)\right) = c^\*.\tag{48}$$

*Proof.* It follows from the Weierstrass-Erdmann condition (Giaquinta & Hildebrandt, 1996) that on an optimal trajectory,

$$\frac{\mathrm{d}}{\mathrm{d}t}\mathfrak{h}\left(\mathbf{q}^*(t), \mathbf{q}^*_{\mathrm{dot}}(t), \tilde{\mathbf{u}}^*(t), \lambda^*_{\mathrm{dyn}}(t), \lambda^*_{\mathrm{dot}}(t)\right) = \frac{\partial}{\partial t}\mathfrak{h}\left(\mathbf{q}^*(t), \mathbf{q}^*_{\mathrm{dot}}(t), \tilde{\mathbf{u}}^*(t), \lambda^*_{\mathrm{dyn}}(t), \lambda^*_{\mathrm{dot}}(t)\right)$$

holds for all $t \in (t_1, t_2)$. Now, since $\mathfrak{h}$ does not explicitly depend on $t$, it follows that there exists $c^* \in \mathbb{R}$ such that $\mathfrak{h}\left(\mathbf{q}^*(t), \mathbf{q}^*_{\mathrm{dot}}(t), \mathbf{0}_{6n}, \lambda^*_{\mathrm{dyn}}(t), \lambda^*_{\mathrm{dot}}(t)\right) = c^*$, which proves (48).

**Proposition 6.3.** *Consider the costate dynamics given by* (46)*. Then, the dynamics of* $\lambda^*_{\mathrm{dyn}}(t)$ *are decoupled from the dynamics of* $\lambda^*_{\mathrm{dot}}(t)$*.*

*Proof.* The result is immediate from the form of (46).

It follows from Proposition 6.3 that the translational primer vector and the rotational primer vector dynamics are independent of the choice of $\mathbf{q}_{\mathrm{dot}}$. Moreover, in solving for $\lambda^*_{\mathrm{dyn}}(t)$ we need not integrate a system of $2(6n - n_4)$ ordinary differential equations as in (41), but rather a system of $(6n - n_4)$ ordinary differential equations, which is very advantageous for large formations.
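As a numerical illustration of this saving, the sketch below integrates only the decoupled costate subsystem; the constant matrix is an arbitrary stand-in for the evaluated block of (46), not data from the chapter.

```python
# Integrate the decoupled costate subsystem lambda_dyn' = -D^T lambda_dyn
# (cf. (46)); D is an arbitrary 3x3 stand-in for the evaluated Jacobian
# block, so only 3 ODEs are integrated instead of 6.
import numpy as np
from scipy.integrate import solve_ivp

D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])

def costate_rhs(t, lam):
    return -D.T @ lam

sol = solve_ivp(costate_rhs, (0.0, 1.0), [1.0, 0.0, 0.0], rtol=1e-9, atol=1e-12)
print(sol.success, sol.y.shape[0])   # True 3
```

If the full coupled system were integrated instead, the state dimension of the costate ODE would double, which is the cost Proposition 6.3 avoids.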

**Proposition 6.4.** *The translational primer vector and the rotational primer vector are continuously differentiable functions.*

*Proof.* First, note that $\lambda^*_{\mathrm{dyn}}(\cdot)$ and $\lambda^*_{\mathrm{dot}}(\cdot)$ are continuous with continuous derivatives almost everywhere on $(t_1, t_2)$, except for a finite number of points (Pontryagin et al., 1962). Next, the differentiability assumption on the environmental model for $\mathbf{a}(\cdot)$ and $\mathbf{m}(\cdot)$ implies that the matrix on the right-hand side of (41) is of class $C^1(\mathbb{R}^{6n-n_4} \times \mathbb{R}^{6n-n_4} \times \mathbb{R}^{12n})$. Hence, $\frac{\mathrm{d}}{\mathrm{d}t}\lambda^*_{\mathrm{dyn}}(\cdot)$ and $\frac{\mathrm{d}}{\mathrm{d}t}\lambda^*_{\mathrm{dot}}(\cdot)$ are continuous on $(t_1, t_2)$.

In order to elucidate the translational primer vector and rotational primer vector dynamics for a vehicle formation problem, we focus on specific formation configurations and on a specific environmental model. Hence, in the remainder of the paper we concentrate on the case where $n_v$ components of $\mathbf{v}_i$ and $n_\omega$ components of $\omega_i$ are also components of $\mathbf{q}_{\mathrm{dot}}$. A justification for this model is as follows. Assume that the $i$-th formation vehicle behaves as unconstrained, e.g., the first vehicle in Examples 4.1 and 4.2, or the dynamics of the $i$-th vehicle can be addressed as partly unconstrained, e.g., the second formation vehicle in the aforementioned examples. In either of these cases, it is natural to choose the unconstrained components of $\mathbf{v}_i$ and $\omega_i$ as some of the components of $\mathbf{q}_{\mathrm{dot}}$. This model includes the classical formation configuration known as the *leader-follower* model, whose trajectories are computed as a function of the leader's path (Wang, 1991).

To simplify the environmental model, assume that

$$\mathbf{a}\left(\tilde{\mathbf{x}}_i(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))\right) = \mathbf{a}\left(\left[\mathbf{0}_3^{\mathrm{T}}, \mathbf{v}_i^{\mathrm{T}}(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t)), \mathbf{0}_3^{\mathrm{T}}, \mathbf{0}_3^{\mathrm{T}}\right]^{\mathrm{T}}\right), \tag{49}$$

$$\tilde{\omega}_i\left(\tilde{\mathbf{x}}_i(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))\right) = \tilde{\omega}_i\left(\left[\mathbf{0}_3^{\mathrm{T}}, \mathbf{v}_i^{\mathrm{T}}(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t)), \mathbf{0}_3^{\mathrm{T}}, \omega_i^{\mathrm{T}}(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))\right]^{\mathrm{T}}\right). \tag{50}$$

For notational convenience, we will refer to (49) and (50) as $\mathbf{a}(\mathbf{v}_i(t))$ and $\tilde{\omega}_i(\mathbf{v}_i(t), \omega_i(t))$, respectively. This assumption on the accelerations induced by external forces and external moments is justified by a common environmental model given by (Anderson, 2001)

$$\mathbf{a}\left(\tilde{\mathbf{x}}_i(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))\right) = \mathbf{g} + ||\mathbf{v}_i(t)||_2^2\left[-k_{i,\mathrm{D}}\tilde{\mathbf{v}}_i(t) + k_{i,\mathrm{L}}\tilde{\mathbf{v}}_i^{\mathrm{L}}(t) - k_{i,\mathrm{S}}\tilde{\mathbf{v}}_i^{\mathrm{S}}(t)\right], \tag{51}$$

$$\mathbf{m}\left(\tilde{\mathbf{x}}_i(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))\right) = ||\mathbf{v}_i(t)||_2^2\left[k_{i,\mathrm{R}}\tilde{\omega}_i^{\mathrm{R}}(t) + k_{i,\mathrm{P}}\tilde{\omega}_i^{\mathrm{P}}(t) + k_{i,\mathrm{Y}}\tilde{\omega}_i^{\mathrm{Y}}(t)\right], \tag{52}$$

where $\mathbf{g}$ is the constant gravitational acceleration, $\tilde{\mathbf{v}}_i \triangleq \mathbf{v}_i/||\mathbf{v}_i||_2$, $\tilde{\mathbf{v}}_i^{\mathrm{L}} : [t_1, t_2] \to \mathbb{R}^3$ (respectively, $\tilde{\mathbf{v}}_i^{\mathrm{S}} : [t_1, t_2] \to \mathbb{R}^3$) is the unit vector in the direction of the aerodynamic lift (respectively, in the direction opposite to the aerodynamic side force), $\tilde{\omega}_i^{\mathrm{R}} : [t_1, t_2] \to \mathbb{R}^3$ (respectively, $\tilde{\omega}_i^{\mathrm{P}} : [t_1, t_2] \to \mathbb{R}^3$ and $\tilde{\omega}_i^{\mathrm{Y}} : [t_1, t_2] \to \mathbb{R}^3$) is the unit vector in the direction of roll (respectively, pitch and yaw), and $k_{i,\mathrm{D}}$, $k_{i,\mathrm{L}}$, $k_{i,\mathrm{S}}$, $k_{i,\mathrm{R}}$, $k_{i,\mathrm{P}}$, and $k_{i,\mathrm{Y}}$ are the drag, lift, side force, roll, pitch, and yaw coefficients, respectively.

Using the above assumptions, it follows from (22) that

$$\dot{\hat{\mathbf{v}}}_i(t) = \hat{\mathbf{a}}(\mathbf{v}_i(t)) + \hat{\mathbf{u}}_{i,\mathrm{tran}}(t), \tag{53}$$

$$\dot{\hat{\omega}}_i(t) = \hat{\tilde{\omega}}_i(\mathbf{v}_i(t), \omega_i(t)) + \hat{\mathbf{u}}_{i,\mathrm{rot}}(t), \tag{54}$$

where $\hat{\mathbf{v}}_i : [t_1, t_2] \to \mathbb{R}^{n_v}$ (respectively, $\hat{\omega}_i : [t_1, t_2] \to \mathbb{R}^{n_\omega}$) represents the components of $\mathbf{v}_i(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))$ (respectively, $\omega_i(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))$) that are also components of $\mathbf{q}_{\mathrm{dot}}(t)$, and $\hat{\mathbf{a}} : \mathbb{R}^3 \to \mathbb{R}^{n_v}$ and $\hat{\mathbf{u}}_{i,\mathrm{tran}} : [t_1, t_2] \to \mathbb{R}^{n_v}$ (respectively, $\hat{\tilde{\omega}}_i : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^{n_\omega}$ and $\hat{\mathbf{u}}_{i,\mathrm{rot}} : [t_1, t_2] \to \mathbb{R}^{n_\omega}$) are the corresponding components of $\mathbf{a}(\mathbf{v}_i(t))$ and $\mathbf{u}_{i,\mathrm{tran}}(t)$ (respectively, $\tilde{\omega}_i(\mathbf{v}_i(t), \omega_i(t))$ and $\mathbf{u}_{i,\mathrm{rot}}(t)$).

Next, it follows from (46), (53), and (54) that

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix} \lambda^*_{\mathrm{dyn},i,\hat{v}}(t) \\ \lambda^*_{\mathrm{dyn},i,\hat{\omega}}(t) \end{bmatrix} = -\begin{bmatrix} \left(\frac{\partial \hat{\mathbf{a}}(\mathbf{v}_i)}{\partial \hat{\mathbf{v}}_i}\right)^{\mathrm{T}} & \left(\frac{\partial \hat{\tilde{\omega}}_i(\mathbf{v}_i, \omega_i)}{\partial \hat{\mathbf{v}}_i}\right)^{\mathrm{T}} \\ \mathbf{0}_{n_\omega \times n_v} & \left(\frac{\partial \hat{\tilde{\omega}}_i(\mathbf{v}_i, \omega_i)}{\partial \hat{\omega}_i}\right)^{\mathrm{T}} \end{bmatrix}_{\left(\hat{\mathbf{v}}^*_i, \hat{\omega}^*_i\right)} \begin{bmatrix} \lambda^*_{\mathrm{dyn},i,\hat{v}}(t) \\ \lambda^*_{\mathrm{dyn},i,\hat{\omega}}(t) \end{bmatrix}, \tag{55}$$

where $\lambda_{\mathrm{dyn},i,\hat{v}} : [t_1, t_2] \to \mathbb{R}^{n_v}$ and $\lambda_{\mathrm{dyn},i,\hat{\omega}} : [t_1, t_2] \to \mathbb{R}^{n_\omega}$ are the $n_v$ and $n_\omega$ components of $\lambda^*_{\mathrm{dyn},i}(t)$ corresponding to the $n_v$ and $n_\omega$ components of $\dot{\mathbf{v}}_i(t)$ and $\dot{\omega}_i(t)$, respectively.

as the *leader-follower* model, whose trajectories are computed as a function of the leader's path (Wang, 1991).

To simplify the environmental model assume that

18 Will-be-set-by-IN-TECH

6.1 need to be deduced by applying the generalized Legendre-Clebsch condition (Giaquinta

**Theorem 6.2.** *Consider the path planning optimization problem. Then, there exists c*<sup>∗</sup> ∈ **R** *such that*

*Proof.* It follows from the Weierstrass - Erdmann condition (Giaquinta & Hildebrandt, 1996)

holds for all *t* ∈ (*t*1, *t*2). Now, since h does not explicitly depend on *t*, it follows that there

It follows from Proposition 6.3 that the translational primer vector and the rotational primer

need not integrate a system of 2(6n − n4) ordinary differential equations as in (41), but rather a system of (6n − n4) ordinary differential equations, which is very advantageous for large

**Proposition 6.4.** *The translational primer vector and the rotational primer vector are continuously*

everywhere on *t* ∈ (*t*1, *t*2) except for a finite number of points (Pontryagin et al., 1962). Next, the differentiability assumption on the environmental model for **a**(·) and **m**(·) implies that the

In order to elucidate the translational primer vector and rotational primer vector dynamics for a vehicle formation problem, we focus on specific formation configurations and on a specific environmental model. Hence, in the reminder of the paper we concentrate on the case where nv components of **v**<sup>i</sup> and n*<sup>ω</sup>* components of ω<sup>i</sup> are also components of **q**dot. A justification for this model is as follows. Assume that the i-th formation vehicle behaves as unconstrained, e.g., the first vehicle in Examples 4.1 and 4.2, or the dynamics of the i-th vehicle can be addressed as partly unconstrained, e.g., the second formation vehicle in the aforementioned examples. In either of these cases, it is natural to choose the unconstrained components of **v**<sup>i</sup> and ω<sup>i</sup> as some of the components of **q**dot. This model includes the classical formation configuration known

matrix on the right-hand side of (41) is of class *<sup>C</sup>*1(**R**6n−n4 <sup>×</sup> **<sup>R</sup>**6n−n4 <sup>×</sup> **<sup>R</sup>**12n). Hence, <sup>d</sup>

dyn(*t*),λ<sup>∗</sup>

dyn(*t*),λ<sup>∗</sup>

dot(*t*) 

dot(*t*)  = *c*∗. (48)

dyn(*t*),λ<sup>∗</sup>

dot(*t*) 

dyn(*t*) *are*

dyn(*t*) we

d*t*λ<sup>∗</sup> dyn(·)

dot(*t*), **u**˜ <sup>∗</sup>(*t*),λ<sup>∗</sup>

dot(·) are continuous with continuous derivatives almost

= c∗, which proves (48).

dot(*t*),λ<sup>∗</sup>

dot(*t*) <sup>=</sup> *<sup>∂</sup> ∂t* h **q**(*t*), **q**∗

dot(*t*), **0**6n,λ<sup>∗</sup>

**Proposition 6.3.** *Consider the costate dynamics given by* (46)*. Then, the dynamics of* λ∗

vector dynamics are independent of the choice of **q**dot. Moreover, in solving for λ<sup>∗</sup>

& Hildebrandt, 1996).

that on an optimal trajectory,

**q**∗(*t*), **q**∗

exists c<sup>∗</sup> ∈ **R** such that h

*decoupled from the dynamics of* λ∗

d d*t* h 

formations.

and <sup>d</sup> d*t*λ<sup>∗</sup>

*differentiable functions.*

*Proof.* First, note that λ∗

m 

dot(*t*), **u**˜ <sup>∗</sup>(*t*),λ<sup>∗</sup>

*Proof.* The result is immediate from the form of (46).

**q**∗(*t*), **q**∗

dyn(·) and λ<sup>∗</sup>

dot(·) are continuous on (*t*1, *t*2).

**q**∗(*t*), **q**∗

dyn(*t*),λ<sup>∗</sup>

dot(*t*)*.*

$$\mathbf{a}\left(\widetilde{\mathbf{x}}\_{\mathrm{i}}\left(\mathbf{q}(t),\mathbf{q}\_{\mathrm{dot}}(t)\right)\right) = \mathbf{a}\left(\left[\mathbf{0}\_{3}^{\mathrm{T}},\,\mathbf{v}\_{\mathrm{i}}^{\mathrm{T}}(\mathbf{q}(t),\mathbf{q}\_{\mathrm{dot}}(t)),\,\mathbf{0}\_{3}^{\mathrm{T}},\,\mathbf{0}\_{3}^{\mathrm{T}}\right]^{\mathrm{T}}\right),\tag{49}$$

$$\widetilde{\omega}\_{\mathrm{i}}\left(\widetilde{\mathbf{x}}\_{\mathrm{i}}\left(\mathbf{q}(t),\mathbf{q}\_{\mathrm{dot}}(t)\right)\right) = \widetilde{\omega}\_{\mathrm{i}}\left(\left[\mathbf{0}\_{3}^{\mathrm{T}},\,\mathbf{v}\_{\mathrm{i}}^{\mathrm{T}}(\mathbf{q}(t),\mathbf{q}\_{\mathrm{dot}}(t)),\,\mathbf{0}\_{3}^{\mathrm{T}},\,\omega\_{\mathrm{i}}^{\mathrm{T}}(\mathbf{q}(t),\mathbf{q}\_{\mathrm{dot}}(t))\right]^{\mathrm{T}}\right).\tag{50}$$

For notational convenience, we will refer to (49) and (50) as $\mathbf{a}(\mathbf{v}_i(t))$ and $\widetilde{\omega}_i(\mathbf{v}_i(t), \omega_i(t))$, respectively. This assumption on the accelerations induced by external forces and external moments is justified by a common environmental model given by (Anderson, 2001)

$$\mathbf{a}\left(\tilde{\mathbf{x}}\_{\mathrm{i}}\left(\mathbf{q}(t),\mathbf{q}\_{\mathrm{dot}}(t)\right)\right) = \mathbf{g} + ||\mathbf{v}\_{\mathrm{i}}(t)||\_{2}^{2} \left(-k\_{\mathrm{i},\mathrm{D}}\hat{\mathbf{v}}\_{\mathrm{i}}(t) + k\_{\mathrm{i},\mathrm{L}}\hat{\mathbf{v}}\_{\mathrm{i}}^{\mathrm{L}}(t) - k\_{\mathrm{i},\mathrm{S}}\hat{\mathbf{v}}\_{\mathrm{i}}^{\mathrm{S}}(t)\right),\tag{51}$$

$$\mathbf{m}\left(\widetilde{\mathbf{x}}\_{\mathrm{i}}\left(\mathbf{q}(t),\mathbf{q}\_{\mathrm{dot}}(t)\right)\right) = ||\mathbf{v}\_{\mathrm{i}}(t)||\_{2}^{2}\left(k\_{\mathrm{i},\mathrm{R}}\widehat{\omega}\_{\mathrm{i}}^{\mathrm{R}}(t) + k\_{\mathrm{i},\mathrm{P}}\widehat{\omega}\_{\mathrm{i}}^{\mathrm{P}}(t) + k\_{\mathrm{i},\mathrm{Y}}\widehat{\omega}\_{\mathrm{i}}^{\mathrm{Y}}(t)\right),\tag{52}$$

where $\mathbf{g}$ is the constant gravitational acceleration, $\widehat{\mathbf{v}}_i \triangleq \mathbf{v}_i/||\mathbf{v}_i||_2$, $\widehat{\mathbf{v}}^{\mathrm{L}}_i : [t_1, t_2] \to \mathbf{R}^3$ (respectively, $\widehat{\mathbf{v}}^{\mathrm{S}}_i : [t_1, t_2] \to \mathbf{R}^3$) is the unit vector in the direction of the aerodynamic lift (respectively, in the direction opposite to the aerodynamic side force), $\widehat{\omega}^{\mathrm{R}}_i : [t_1, t_2] \to \mathbf{R}^3$ (respectively, $\widehat{\omega}^{\mathrm{P}}_i : [t_1, t_2] \to \mathbf{R}^3$ and $\widehat{\omega}^{\mathrm{Y}}_i : [t_1, t_2] \to \mathbf{R}^3$) is the unit vector in the direction of roll (respectively, pitch and yaw), and $k_{i,\mathrm{D}}$, $k_{i,\mathrm{L}}$, $k_{i,\mathrm{S}}$, $k_{i,\mathrm{R}}$, $k_{i,\mathrm{P}}$, and $k_{i,\mathrm{Y}}$ are the drag, lift, side force, roll, pitch, and yaw coefficients, respectively.
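As a concrete illustration, the translational model (51) can be evaluated numerically. The sketch below uses the drag, lift, and side-force coefficients from the numerical example later in the chapter; the lift and side-force unit vectors passed in are illustrative assumptions, since in the chapter they are determined by the vehicle's instantaneous attitude.

```python
import numpy as np

# Sketch of the environmental model (51): gravity plus aerodynamic
# accelerations that scale with the squared airspeed. The unit vectors
# v_hat_L and v_hat_S below are assumed inputs, not derived here.
g = np.array([0.0, 0.0, -9.81])      # gravitational acceleration [m/s^2]
k_D, k_L, k_S = 0.20, 1.20, 0.50     # drag, lift, side-force coefficients

def external_acceleration(v, v_hat_L, v_hat_S):
    """Right-hand side of (51) for velocity v and unit vectors v_hat_L, v_hat_S."""
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return g.copy()
    v_hat = v / speed                # unit vector along the velocity
    return g + speed**2 * (-k_D * v_hat + k_L * v_hat_L - k_S * v_hat_S)

# Level flight along +x at 10 m/s: lift up, side force along +y (assumed).
a = external_acceleration(np.array([10.0, 0.0, 0.0]),
                          np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 1.0, 0.0]))
```

Note that drag opposes the velocity while lift and side force act through the supplied unit vectors, exactly mirroring the sign pattern of (51).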

Using the above assumptions, it follows from (22) that

$$
\dot{\hat{\mathbf{v}}}\_{\mathrm{i}}(t) = \hat{\mathbf{a}}(\mathbf{v}\_{\mathrm{i}}(t)) + \hat{\mathbf{u}}\_{\mathrm{i},\mathrm{tran}}(t), \tag{53}
$$

$$
\dot{\hat{\omega}}\_{\mathrm{i}}(t) = \hat{\widetilde{\omega}}\_{\mathrm{i}}(\mathbf{v}\_{\mathrm{i}}(t), \omega\_{\mathrm{i}}(t)) + \hat{\mathbf{u}}\_{\mathrm{i},\mathrm{rot}}(t), \tag{54}
$$

where $\hat{\mathbf{v}}_i : [t_1, t_2] \to \mathbf{R}^{n_v}$ (respectively, $\hat{\omega}_i : [t_1, t_2] \to \mathbf{R}^{n_\omega}$) represents the components of $\mathbf{v}_i(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))$ (respectively, $\omega_i(\mathbf{q}(t), \mathbf{q}_{\mathrm{dot}}(t))$) that are also components of $\mathbf{q}_{\mathrm{dot}}(t)$, and $\hat{\mathbf{a}} : \mathbf{R}^3 \to \mathbf{R}^{n_v}$ and $\hat{\mathbf{u}}_{i,\mathrm{tran}} : [t_1, t_2] \to \mathbf{R}^{n_v}$ (respectively, $\hat{\widetilde{\omega}}_i : \mathbf{R}^3 \times \mathbf{R}^3 \to \mathbf{R}^{n_\omega}$ and $\hat{\mathbf{u}}_{i,\mathrm{rot}} : [t_1, t_2] \to \mathbf{R}^{n_\omega}$) are the corresponding components of $\mathbf{a}(\mathbf{v}_i(t))$ and $\mathbf{u}_{i,\mathrm{tran}}(t)$ (respectively, $\widetilde{\omega}_i(\mathbf{v}_i(t), \omega_i(t))$ and $\mathbf{u}_{i,\mathrm{rot}}(t)$).

Next, it follows from (46), (53), and (54) that

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix} \lambda\_{\mathrm{dyn},i,\hat{v}}^{\*}(t) \\ \lambda\_{\mathrm{dyn},i,\hat{\omega}}^{\*}(t) \end{bmatrix} = -\begin{bmatrix} \left[\frac{\partial\hat{\mathbf{a}}(\mathbf{v}\_{i})}{\partial\hat{\mathbf{v}}\_{i}}\right]^{\mathrm{T}} & \left[\frac{\partial\hat{\widetilde{\omega}}\_{i}(\mathbf{v}\_{i},\omega\_{i})}{\partial\hat{\mathbf{v}}\_{i}}\right]^{\mathrm{T}} \\ \mathbf{0}\_{n\_{\omega}\times n\_{v}} & \left[\frac{\partial\hat{\widetilde{\omega}}\_{i}(\mathbf{v}\_{i},\omega\_{i})}{\partial\hat{\omega}\_{i}}\right]^{\mathrm{T}} \end{bmatrix}\_{(\hat{\mathbf{v}}\_{i}^{\*},\hat{\omega}\_{i}^{\*})}\begin{bmatrix} \lambda\_{\mathrm{dyn},i,\hat{v}}^{\*}(t) \\ \lambda\_{\mathrm{dyn},i,\hat{\omega}}^{\*}(t) \end{bmatrix},\tag{55}$$

where $\lambda_{\mathrm{dyn},i,\hat{v}} : [t_1, t_2] \to \mathbf{R}^{n_v}$ and $\lambda_{\mathrm{dyn},i,\hat{\omega}} : [t_1, t_2] \to \mathbf{R}^{n_\omega}$ are the $n_v$ and $n_\omega$ components of $\lambda^*_{\mathrm{dyn},i}(t)$ corresponding to the $n_v$ and $n_\omega$ components of $\dot{\mathbf{v}}_i(t)$ and $\dot{\omega}_i(t)$, respectively.
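Equation (55) is a linear (generally time-varying) ordinary differential equation in the costate components, with a block upper-triangular coefficient matrix. A minimal numerical sketch with frozen toy Jacobians (pure assumptions, not derived from any particular vehicle) shows the structure; because of the zero block, the $\lambda_{\mathrm{dyn},i,\hat{\omega}}$ components evolve independently of the $\lambda_{\mathrm{dyn},i,\hat{v}}$ components.

```python
import numpy as np

# Numerical sketch of the costate dynamics (55):
#   d/dt [lam_v; lam_w] = -M [lam_v; lam_w],
# where M has the block upper-triangular structure of (55).
# The Jacobian values below are illustrative assumptions.
n_v, n_w = 3, 3
da_dv = 0.1 * np.eye(n_v)            # (d a_hat / d v_hat), toy value
dw_dv = 0.05 * np.ones((n_w, n_v))   # (d w_tilde_hat / d v_hat), toy value
dw_dw = 0.2 * np.eye(n_w)            # (d w_tilde_hat / d w_hat), toy value

M = np.block([[da_dv.T, dw_dv.T],
              [np.zeros((n_w, n_v)), dw_dw.T]])

def step(lam, dt):
    """One forward-Euler step of d/dt lam = -M lam."""
    return lam - dt * (M @ lam)

lam = np.ones(n_v + n_w)
for _ in range(1000):                # integrate over t in [0, 1]
    lam = step(lam, 1e-3)
```

With these toy values the last three components decay as $e^{-0.2 t}$ and never feel the translational block, mirroring the decoupling noted in Proposition 6.3.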


A Variational Approach to the Fuel Optimal Control Problem for UAV Formations 241


**Theorem 6.3.** *Assume that* $\|\hat{\mathbf{u}}^*_{i,\mathrm{tran}}(t)\|_2 = \hat{\rho}_{i,\mathrm{tran}}$, $\|\hat{\mathbf{u}}^*_{i,\mathrm{rot}}(t)\|_2 = \hat{\rho}_{i,\mathrm{rot}}$, $\|\mathbf{u}^*_{i,\mathrm{tran}}(t)\|_2 = \rho_{i,\mathrm{tran}}$, *and* $\|\mathbf{u}^*_{i,\mathrm{rot}}(t)\|_2 = \rho_{i,\mathrm{rot}}$, *where* $\hat{\rho}_{i,\mathrm{tran}}$ *and* $\rho_{i,\mathrm{tran}} \in (\rho_{i,1}, \rho_{i,2})$, $\hat{\rho}_{i,\mathrm{rot}}$ *and* $\rho_{i,\mathrm{rot}} \in (\rho_{i,3}, \rho_{i,4})$, *and* $\left[\frac{\partial \hat{\widetilde{\omega}}_i(\mathbf{v}_i,\omega_i)}{\partial \hat{\omega}_i}\right]_{(\mathbf{v}^*_i,\omega^*_i)}$ *is invertible. Then,*

$$\left\| \left[\frac{\partial \hat{\widetilde{\omega}}\_{i}(\mathbf{v}\_{i},\omega\_{i})}{\partial \hat{\omega}\_{i}}\right]\_{(\mathbf{v}\_{i}^{\*},\omega\_{i}^{\*})}^{-\mathrm{T}} \left(\mathbf{I}\_{3} + \left[\frac{\partial \hat{\mathbf{a}}\_{i}(\mathbf{v}\_{i})}{\partial \hat{\mathbf{v}}\_{i}}\right]\_{\mathbf{v}\_{i}^{\*}}^{\mathrm{T}}\right) \mathbf{u}\_{i,\mathrm{tran}}^{\*}(t) \right\|\_{2} \leq \sqrt{\rho\_{i,\mathrm{tran}}^{2} + \rho\_{i,\mathrm{rot}}^{2}}, \tag{56}$$

$$\left\| \left[\frac{\partial \hat{\widetilde{\omega}}\_{i}(\mathbf{v}\_{i},\omega\_{i})}{\partial \hat{\omega}\_{i}}\right]\_{(\mathbf{v}\_{i}^{\*},\omega\_{i}^{\*})}^{-\mathrm{T}} \dot{\mathbf{u}}\_{i,\mathrm{rot}}^{\*}(t) \right\|\_{2} \leq \sqrt{\rho\_{i,\mathrm{tran}}^{2} + \rho\_{i,\mathrm{rot}}^{2}}. \tag{57}$$

*Proof.* It follows from (44), (42), (53), and (54) that

$$\lambda\_{0}^{\*}\mu\_{i}\,\frac{\mathbf{u}\_{i,\mathrm{tran}}^{\*}(t)}{\|\tilde{\mathbf{u}}^{\*}(t)\|\_{2}} = -\lambda\_{\mathrm{dyn},i,\hat{v}}^{\*}(t), \qquad \lambda\_{0}^{\*}\mu\_{i}\,\frac{\mathbf{u}\_{i,\mathrm{rot}}^{\*}(t)}{\|\tilde{\mathbf{u}}^{\*}(t)\|\_{2}} = -\lambda\_{\mathrm{dyn},i,\hat{\omega}}^{\*}(t). \tag{58}$$

Recalling that $\|\hat{\mathbf{u}}^*_{i,\mathrm{rot}}(t)\|_2 / \|\tilde{\mathbf{u}}^*(t)\|_2 \leq 1$ and using (55) and (58), we obtain

$$\lambda\_{0}^{\*}\mu\_{i}\left\|\left[\frac{\partial\hat{\widetilde{\omega}}\_{i}(\mathbf{v}\_{i},\omega\_{i})}{\partial\hat{\omega}\_{i}}\right]\_{(\mathbf{v}\_{i}^{\*},\omega\_{i}^{\*})}^{-\mathrm{T}}\frac{\|\tilde{\mathbf{u}}\_{i}^{\*}(t)\|\_{2}\,\dot{\hat{\mathbf{u}}}\_{i,\mathrm{tran}}^{\*}(t)+\dot{\tilde{\mathbf{u}}}\_{i}^{\*\mathrm{T}}(t)\tilde{\mathbf{u}}\_{i}^{\*}(t)\,\hat{\mathbf{u}}\_{i,\mathrm{tran}}^{\*}(t)}{\|\tilde{\mathbf{u}}\_{i}^{\*}(t)\|\_{2}^{2}}+\lambda\_{0}^{\*}\mu\_{i}\left[\frac{\partial\hat{\widetilde{\omega}}\_{i}(\mathbf{v}\_{i},\omega\_{i})}{\partial\hat{\omega}\_{i}}\right]\_{(\mathbf{v}\_{i}^{\*},\omega\_{i}^{\*})}^{-\mathrm{T}}\left[\frac{\partial\hat{\mathbf{a}}\_{i}(\mathbf{v}\_{i})}{\partial\hat{\mathbf{v}}\_{i}}\right]\_{\mathbf{v}\_{i}^{\*}}^{\mathrm{T}}\frac{\hat{\mathbf{u}}\_{i,\mathrm{tran}}^{\*}(t)}{\|\tilde{\mathbf{u}}\_{i}^{\*}(t)\|\_{2}^{2}}\right\|\_{2}\leq\lambda\_{0}^{\*}\mu\_{i},$$

$$\lambda\_{0}^{\*}\mu\_{i}\left\|\left[\frac{\partial\hat{\widetilde{\omega}}\_{i}(\mathbf{v}\_{i},\omega\_{i})}{\partial\hat{\omega}\_{i}}\right]\_{(\mathbf{v}\_{i}^{\*},\omega\_{i}^{\*})}^{-\mathrm{T}}\frac{\|\tilde{\mathbf{u}}\_{i}^{\*}(t)\|\_{2}\,\dot{\hat{\mathbf{u}}}\_{i,\mathrm{rot}}^{\*}(t)+\dot{\tilde{\mathbf{u}}}\_{i}^{\*\mathrm{T}}(t)\tilde{\mathbf{u}}\_{i}^{\*}(t)\,\hat{\mathbf{u}}\_{i,\mathrm{rot}}^{\*}(t)}{\|\tilde{\mathbf{u}}\_{i}^{\*}(t)\|\_{2}^{2}}\right\|\_{2}\leq\lambda\_{0}^{\*}\mu\_{i}.$$

Now, noting that $\dot{\tilde{\mathbf{u}}}^{*\mathrm{T}}_i(t)\,\tilde{\mathbf{u}}^*_i(t) = 0$, the result follows.
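The closing step of the proof rests on the elementary fact that a constant-norm control has a derivative orthogonal to itself, since $\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{u}\|_2^2 = 2\,\dot{\mathbf{u}}^{\mathrm{T}}\mathbf{u} = 0$. A minimal numerical check on a constant-norm curve (the radius and rotation rate below are arbitrary choices for illustration):

```python
import numpy as np

# If ||u(t)||_2 is constant, then u_dot(t)^T u(t) = 0.
# We verify this with a central finite difference on a rotating
# control vector of fixed magnitude rho.
rho, w, t, dt = 3.0, 0.7, 1.3, 1e-6

def u(t):
    return rho * np.array([np.cos(w * t), np.sin(w * t), 0.0])

u_dot = (u(t + dt) - u(t - dt)) / (2 * dt)   # central difference
inner = u_dot @ u(t)                          # should vanish
```

The inner product is zero up to finite-difference and floating-point error.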

Since Theorem 6.3 is proven using the Euler necessary condition, it follows that $(\mathbf{u}^*_{i,\mathrm{tran}}, \mathbf{u}^*_{i,\mathrm{rot}}) \in \mathrm{int}(\Gamma_{i,\mathrm{tran}}) \times \mathrm{int}(\Gamma_{i,\mathrm{rot}})$. However, the parameter bounds $\rho_{i,j}$, $j = 1, 2, 3, 4$, are imposed by physical and not mathematical considerations, and hence, for practical applications we can assume that there exists $\varepsilon > 0$ such that Theorem 6.3 holds for $\rho_{i,\mathrm{tran}} \in (\rho_{i,1} - \varepsilon, \rho_{i,2} + \varepsilon)$ and $\rho_{i,\mathrm{rot}} \in (\rho_{i,3} - \varepsilon, \rho_{i,4} + \varepsilon)$. Consequently, for engineering applications we can assume that Theorem 6.3 also holds on arcs of maximum translational and rotational thrust.

**Corollary 6.1.** *Assume that the hypotheses of Theorem 6.3 hold. If* $n_\omega = 0$*, then*

$$\left\| \left[\frac{\partial \hat{\mathbf{a}}\_{i}(\mathbf{v}\_{i})}{\partial \hat{\mathbf{v}}\_{i}}\right]\_{\mathbf{v}\_{i}^{\*}}^{-\mathrm{T}} \hat{\mathbf{u}}\_{i,\mathrm{tran}}^{\*}(t) \right\|\_{2} \leq \sqrt{\rho\_{i,\mathrm{tran}}^{2} + \rho\_{i,\mathrm{rot}}^{2}}. \tag{59}$$

*Alternatively, if* nv = 0*, then*


$$\left\| \left[\frac{\partial \hat{\widetilde{\omega}}\_{i}(\mathbf{v}\_{i},\omega\_{i})}{\partial \hat{\omega}\_{i}}\right]\_{(\mathbf{v}\_{i}^{\*},\omega\_{i}^{\*})}^{-\mathrm{T}} \hat{\mathbf{u}}\_{i,\mathrm{rot}}^{\*}(t) \right\|\_{2} \leq \sqrt{\rho\_{i,\mathrm{tran}}^{2} + \rho\_{i,\mathrm{rot}}^{2}}. \tag{60}$$

*Proof.* The proof is a direct consequence of Theorem 6.3.

**Example 6.1.** Consider the formation of the two vehicles addressed in Examples 4.1 and 4.2, and assume that $\mathbf{q}(t) = \left[\mathbf{x}^{\mathrm{T}}_1(t), \mathbf{r}^{\mathrm{T}}_2(t)\right]^{\mathrm{T}}$. As shown in Example 4.2, if $r_{\min} < \|\mathbf{r}_1(t) - \mathbf{r}_2(t)\|_2^2 < r_{\max}$, then the first vehicle and the translational dynamics of the second vehicle can be considered unconstrained. Thus, the costate equation (41) can be rewritten as two decoupled ordinary differential equations given by

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\lambda\_{\mathrm{dot},1}(t)\\ \lambda\_{\mathrm{dyn},1}(t)\end{bmatrix}=-\begin{bmatrix}\begin{bmatrix}\mathbf{0}\_{3\times3}&\mathbf{0}\_{3\times3}\\ \mathbf{0}\_{3\times3}&\frac{\partial\mathbf{R}\_{\mathrm{rod}}^{-1}(\sigma\_1)}{\partial\sigma\_1}\dot{\sigma}\_1\end{bmatrix}^{\mathrm{T}}&\mathbf{0}\_{6\times6}\\ \begin{bmatrix}\frac{\partial\mathbf{f}\_{\mathrm{dyn},1}(\mathbf{x}\_1,\mathbf{q}\_{\mathrm{dot},1}(\mathbf{x}\_1),\mathbf{u}\_1)}{\partial\mathbf{x}\_1}\end{bmatrix}^{\mathrm{T}}&\mathbf{0}\_{6\times6}\end{bmatrix}\begin{bmatrix}\lambda\_{\mathrm{dot},1}(t)\\ \lambda\_{\mathrm{dyn},1}(t)\end{bmatrix},\tag{61}$$

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\lambda\_{\mathrm{dot},2}(t)\\ \lambda\_{\mathrm{dyn},2}(t)\end{bmatrix}=-\begin{bmatrix}\mathbf{0}\_{3\times3}&\begin{bmatrix}\frac{\partial\mathbf{f}\_{\mathrm{dyn},2}(\widetilde{\mathbf{x}}\_2(t),\mathbf{u}\_{2,\mathrm{tran}}(t))}{\partial\mathbf{r}\_2}\end{bmatrix}^{\mathrm{T}}\\ \mathbf{0}\_{3\times3}&\begin{bmatrix}\frac{\partial\mathbf{f}\_{\mathrm{dyn},2}(\widetilde{\mathbf{x}}\_2(t),\mathbf{u}\_{2,\mathrm{tran}}(t))}{\partial\mathbf{v}\_2}\end{bmatrix}^{\mathrm{T}}\end{bmatrix}\begin{bmatrix}\lambda\_{\mathrm{dot},2}(t)\\ \lambda\_{\mathrm{dyn},2}(t)\end{bmatrix},\tag{62}$$

where $\lambda_{\mathrm{dyn}}(t) \triangleq [\lambda^{\mathrm{T}}_{\mathrm{dyn},1}(t), \lambda^{\mathrm{T}}_{\mathrm{dyn},2}(t)]^{\mathrm{T}}$, $\lambda_{\mathrm{dyn},1} : [t_1, t_2] \to \mathbf{R}^6$, $\lambda_{\mathrm{dyn},2} : [t_1, t_2] \to \mathbf{R}^3$, $\lambda_{\mathrm{dot}}(t) \triangleq [\lambda^{\mathrm{T}}_{\mathrm{dot},1}(t), \lambda^{\mathrm{T}}_{\mathrm{dot},2}(t)]^{\mathrm{T}}$, $\lambda_{\mathrm{dot},1} : [t_1, t_2] \to \mathbf{R}^6$, and $\lambda_{\mathrm{dot},2} : [t_1, t_2] \to \mathbf{R}^3$.

From (61) and (62) it follows that the path planning optimization problem for the first vehicle is possibly abnormal since we cannot verify a priori whether or not

$$\lambda\_{\mathrm{dot},1}^{\*}(t) \in \mathcal{N}\left(\begin{bmatrix} \mathbf{0}\_{3\times 3} & \mathbf{0}\_{3\times 3} \\ \mathbf{0}\_{3\times 3} & \frac{\partial \mathbf{R}\_{\mathrm{rod}}^{-1}(\sigma\_{1})}{\partial \sigma\_{1}}\dot{\sigma}\_{1}(\mathbf{q}(t)) \end{bmatrix}\_{\mathbf{q}=\mathbf{q}^{\*}}\right),$$

whereas the path planning optimization problem for the second vehicle is normal since its rotational dynamics are not expressed by (62). Normality for the second formation vehicle can also be proven by rewriting the unconstrained dynamic equations (3) for a 3 DoF vehicle. For details, see L'Afflitto & Sultan (2008).

Using (18) it follows that (45) can be written as

$$\begin{split} \mathfrak{h}\left(\mathbf{q}(t), \mathbf{q}\_{\mathrm{dot}}(t), \tilde{\mathbf{u}}(t), \lambda\_{\mathrm{dyn}}(t), \lambda\_{\mathrm{dot}}(t)\right) &= \mathfrak{h}\_{1}\left(\mathbf{x}\_{1}(t), \mathbf{u}\_{1}(t), \lambda\_{\mathrm{dyn},1}(t), \lambda\_{\mathrm{dot},1}(t)\right) \\ &\quad + \mathfrak{h}\_{2}\left(\mathbf{x}\_{2}(t), \mathbf{u}\_{2,\mathrm{tran}}(t), \lambda\_{\mathrm{dyn},2}(t)\right), \end{split} \tag{63}$$

4.1, 4.2, and 6.1 with masses 0.1kg and inertia matrices 0.40**I**<sup>3</sup> kgm<sup>4</sup> flying in an environment

A Variational Approach to the Fuel Optimal Control Problem for UAV Formations 243

*k*i,R = 0.30, *k*i,P = 0.30, and *k*i,Y = 0.30, for i = 1, 2. Furthermore, we assume that

T, and <sup>σ</sup>1(*t*2)=[0.00, 0.00, 120.00 *<sup>π</sup>* 180.00 ]

boundary conditions for the second vehicle are deduced from (14) and (15) by assuming that

the optimal trajectory shown in Figure 1. Figures 2 and 3 show the optimal control as a function of the norm of the translational primer vector and the rotational primer vector, as

translational primer vector and the rotational primer vector of the first vehicle as a function of

<sup>0</sup> 0.2 0.4 0.6 0.8 <sup>1</sup> −1 −0.5 <sup>0</sup> 0.5

x y

Fig. 1. Optimal trajectories for vehicles 1 and 2. The cube represents the first vehicle and the

s2 , *<sup>ρ</sup>*i,3 <sup>=</sup> 10.00 <sup>1</sup>

= 22.30 <sup>m</sup>

T m

s2 , *k*i,D = 0.20, *k*i,L = 1.20, *k*i,S = 0.50,

<sup>2</sup> and applying Theorem 6.1, we obtain

<sup>s</sup> and J [**u**2(·)] <sup>=</sup> 11.60 <sup>m</sup>

s2 , Theorem 6.2 holds. Finally, Figure 4 shows the

T. For our simulation we

s2 , for i = 1, 2. The

<sup>T</sup> m,

<sup>s</sup> . Since

<sup>T</sup> m, **<sup>r</sup>**1(*t*2)=[0.90, <sup>−</sup>10.00, <sup>−</sup>1.80]

s2 , and *<sup>ρ</sup>*i,1 <sup>=</sup> 20.00 <sup>1</sup>

<sup>50</sup>m. It can be easily verified that the constraints given by (12) and

modeled by (51) and (52), where **g** = [0, 0, −9.81]

*t*<sup>1</sup> = 0.00 s, *t*<sup>2</sup> = 60.00 s, **r**1(*t*1)=[0.00, 0.00, 0.00]

s2 , *<sup>ρ</sup>*i,2 <sup>=</sup> 45.00 <sup>m</sup>

(13) hold for all *<sup>t</sup>* <sup>∈</sup> [*t*1, *<sup>t</sup>*2]. Letting *<sup>μ</sup>*<sup>1</sup> <sup>=</sup> *<sup>μ</sup>*<sup>2</sup> <sup>=</sup> <sup>1</sup>

dyn(*t*),λ<sup>∗</sup>

well as time, respectively. For this example J [**u**1(·)] <sup>=</sup> 10.00 <sup>m</sup>

dot(*t*) 

σ1(*t*1)=[0.00, 0.00, 0.00]

<sup>25</sup>m and rmin <sup>=</sup> <sup>33</sup>

dot(*t*),λ<sup>∗</sup>

take *ρ*i,1 = 10.00 <sup>m</sup>

rmax = <sup>21</sup>

**q**∗(*t*), **q**∗

−1

prism represents the second vehicle.

−0.8

−0.6

−0.4

−0.2

z

0

0.2

0.4

0.6

m 

time.

where

$$\begin{aligned} \mathfrak{h}\_{1}\left(\mathbf{x}\_{1}(t),\mathbf{u}\_{1}(t),\lambda\_{\mathrm{dyn},1}(t),\lambda\_{\mathrm{dot},1}(t)\right) &= \lambda\_{0}\mu\_{1}||\mathbf{u}\_{1}(t)||\_{2} + \lambda\_{\mathrm{dyn},1,1}^{\mathrm{T}}(t)\mathbf{u}\_{1,\mathrm{ran}}(t) \\ &+ \lambda\_{\mathrm{dyn},1,2}^{\mathrm{T}}(t)\mathbf{u}\_{1,\mathrm{rot}}(t) + \lambda\_{\mathrm{dyn},1,1}^{\mathrm{T}}(t)\mathbf{a}\left(\tilde{\mathbf{x}}\_{1}(t)\right) \\ &+ \lambda\_{\mathrm{dyn},1,2}^{\mathrm{T}}(t)\left(\tilde{\omega}\_{1}\left(\tilde{\mathbf{x}}\_{1}(t)\right) - \mathbf{I}\_{\mathrm{in},1}^{-1}\boldsymbol{\omega}\_{1}^{\times}\left(\omega\_{1}(t)\right)\mathbf{I}\_{\mathrm{in},1}\boldsymbol{\omega}\_{1}(t)\right) \\ &+ \lambda\_{\mathrm{dot},1,1}^{\mathrm{T}}\mathbf{v}\_{1}(t) - \lambda\_{\mathrm{dot},1,2}^{\mathrm{T}}\mathbf{R}\_{\mathrm{rod}}^{-1}(\sigma\_{1}\mathbf{f}(t))\boldsymbol{\sigma}\_{1}(t), \end{aligned} \tag{64}$$

$$\begin{split} \mathfrak{h}\_{2}\left(\mathbf{x}\_{2}(t), \mathbf{u}\_{2,\text{ran}}(t), \lambda\_{\text{dyn},2}(t)\right) &= \mu\_{2}||\mathbf{u}\_{2,\text{ran}}(t)||\_{2} + \lambda\_{\text{dyn},2,1}^{\text{T}}(t)\mathbf{u}\_{2,\text{ran}}(t) \\ &+ \lambda\_{\text{dyn},2,1}^{\text{T}}(t)\mathbf{a}\left(\tilde{\mathbf{x}}\_{2}(t)\right) + \lambda\_{\text{dot},2,1}^{\text{T}}\mathbf{v}\_{2}(t), \end{split} \tag{65}$$

where λdyn,1(*t*) [λ<sup>T</sup> dyn,1,1(*t*), <sup>λ</sup><sup>T</sup> dyn,1,2(*t*)]T, <sup>λ</sup>dyn,2(*t*) [λ<sup>T</sup> dyn,2,1(*t*), <sup>λ</sup><sup>T</sup> dyn,2,2(*t*)]T, and <sup>λ</sup>dyn,j,k : [*t*1, *<sup>t</sup>*2] <sup>→</sup> **<sup>R</sup>**3, j, k <sup>=</sup> 1, 2. Now, using Theorem 6.3 we can construct a candidate optimal control law. Remarkably, the same candidate optimal control law can be obtained by applying Theorem 6.3 to (64) and (65) independently. The fact that the candidate optimal control law for the the first vehicle can be found independently from the second vehicle is another advantage in employing Lagrange coordinates. The minimization of h<sup>2</sup> leads to the same candidate optimal control law as given by primer vector theory with the only difference being that the arcs of maximum, null, and singular thrust are not characterized by the sign of ||λ<sup>∗</sup> dyn,2,1(*t*)||<sup>2</sup> − 1 as in Lawden's work (Lawden, 1963) but rather by the sign of ||λ<sup>∗</sup> dyn,2,1(*t*)||<sup>2</sup> − *μ*2.

Singular translational thrust arcs for the first vehicle occur when

$$\left(\lambda\_0 \mu\_1\right)^2 = \lambda\_{\text{dyn},1,1}^T(t)\lambda\_{\text{dyn},1,1}(t) \tag{66}$$

and, as shown in Theorem 6.3, **u**∗ 2,tran cannot be found on singular arcs by applying Pontryagin's minimum principle. However, from (44) and (64), we note that *λ*0*μ*<sup>1</sup> **u**∗ 1,tran(*t*) ||**u**<sup>∗</sup> <sup>1</sup> (*t*)||<sup>2</sup> <sup>=</sup> −λ<sup>∗</sup> dyn,1,1(*t*), and hence, (66) yields

$$||\mathbf{u}\_1^\*(t)||\_2^2 = \mathbf{u}\_{1,\text{ran}}^{\*\text{T}}(t)\mathbf{u}\_{1,\text{ran}}^\*(t). \tag{67}$$

Thus, on singular translational thrust arcs for the first vehicle **u**∗ 1,rot(*t*) = **0**3. Similarly, it can be shown that **u**∗ 1,tran(*t*) = **0**<sup>3</sup> on singular rotational thrust arcs for the first vehicle. Finally, singular arcs for the second vehicle occur when

$$\mu\_2^2 = \lambda\_{\text{dyn},2,1}^{\*\text{T}}(t)\lambda\_{\text{dyn},2,1}^\*(t). \tag{68}$$

From (44) and (65), it follows that $\mu\_2\dfrac{\mathbf{u}\_{2,\text{tran}}^\*(t)}{||\mathbf{u}\_{2,\text{tran}}^\*(t)||\_2} = -\lambda\_{\text{dyn},2,1}^\*(t)$, which satisfies (68). Hence, any admissible $\mathbf{u}\_{2,\text{tran}}$ can be applied on singular arcs. This was first noted by Lawden (1963).

#### **7. Illustrative numerical example**

In this section, we present a numerical example to highlight the efficacy of the framework presented in the paper. In particular, we consider the two vehicles presented in Examples 4.1, 4.2, and 6.1, with masses $0.1\,\mathrm{kg}$ and inertia matrices $0.40\,\mathbf{I}\_3\ \mathrm{kg\,m^2}$, flying in an environment modeled by (51) and (52), where $\mathbf{g} = [0, 0, -9.81]^{\text{T}}\ \mathrm{m/s^2}$, $k\_{i,\text{D}} = 0.20$, $k\_{i,\text{L}} = 1.20$, $k\_{i,\text{S}} = 0.50$, $k\_{i,\text{R}} = 0.30$, $k\_{i,\text{P}} = 0.30$, and $k\_{i,\text{Y}} = 0.30$, for $i = 1, 2$. Furthermore, we assume that $t\_1 = 0.00\,\mathrm{s}$, $t\_2 = 60.00\,\mathrm{s}$, $\mathbf{r}\_1(t\_1) = [0.00, 0.00, 0.00]^{\text{T}}\,\mathrm{m}$, $\mathbf{r}\_1(t\_2) = [0.90, -10.00, -1.80]^{\text{T}}\,\mathrm{m}$, $\sigma\_1(t\_1) = [0.00, 0.00, 0.00]^{\text{T}}$, and $\sigma\_1(t\_2) = [0.00, 0.00, 120.00\,\pi/180.00]^{\text{T}}$. For our simulation we take $\rho\_{i,1} = 10.00\ \mathrm{m/s^2}$, $\rho\_{i,2} = 45.00\ \mathrm{m/s^2}$, $\rho\_{i,3} = 10.00\ \mathrm{1/s^2}$, and $\rho\_{i,4} = 20.00\ \mathrm{1/s^2}$, for $i = 1, 2$. The boundary conditions for the second vehicle are deduced from (14) and (15) by assuming that $r\_{\max} = \frac{21}{25}\,\mathrm{m}$ and $r\_{\min} = \frac{33}{50}\,\mathrm{m}$. It can be easily verified that the constraints given by (12) and (13) hold for all $t \in [t\_1, t\_2]$. Letting $\mu\_1 = \mu\_2 = \frac{1}{2}$ and applying Theorem 6.1, we obtain the optimal trajectory shown in Figure 1. Figures 2 and 3 show the optimal control as a function of the norm of the translational primer vector and the rotational primer vector, and as a function of time, respectively. For this example, $\mathcal{J}[\mathbf{u}\_1(\cdot)] = 10.00\ \mathrm{m/s}$ and $\mathcal{J}[\mathbf{u}\_2(\cdot)] = 11.60\ \mathrm{m/s}$. Since $m\left(\mathbf{q}^\*(t), \mathbf{q}\_{\text{dot}}^\*(t), \lambda\_{\text{dyn}}^\*(t), \lambda\_{\text{dot}}^\*(t)\right) = 22.30\ \mathrm{m/s^2}$, Theorem 6.2 holds. Finally, Figure 4 shows the translational primer vector and the rotational primer vector of the first vehicle as functions of time.
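To make the reported costs concrete, the following sketch numerically evaluates a fuel functional of the form $\mathcal{J}[\mathbf{u}(\cdot)] = \mu\int\_{t\_1}^{t\_2}||\mathbf{u}(t)||\_2\,\mathrm{d}t$ on a sampled control history. The constant control profile below is a hypothetical placeholder, not the optimal control computed in this example:

```python
import numpy as np

def fuel_cost(t, u, mu=0.5):
    """Approximate mu * integral of ||u(t)||_2 dt by the trapezoidal rule.

    t: (N,) sample times [s]; u: (N, 3) control samples [m/s^2].
    """
    norms = np.linalg.norm(u, axis=1)                       # ||u(t_k)||_2 at each sample
    return mu * np.sum(0.5 * (norms[1:] + norms[:-1]) * np.diff(t))

t = np.linspace(0.0, 60.0, 601)                             # t1 = 0.00 s to t2 = 60.00 s
u = np.zeros((t.size, 3))
u[:, 1] = -1.0                                              # constant 1 m/s^2 thrust along -y
print(fuel_cost(t, u))                                      # 0.5 * 1.0 * 60.0 = 30.0
```

Because the integrand is the norm of the control, a bang-off profile (maximum thrust on some arcs, zero on others) changes the cost only through the duration and magnitude of the thrust arcs, which is what the primer-vector switching structure exploits.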

Fig. 1. Optimal trajectories for vehicles 1 and 2. The cube represents the first vehicle and the prism represents the second vehicle.

A Variational Approach to the Fuel Optimal Control Problem for UAV Formations 245

Fig. 2. Optimal control for the first vehicle as a function of the norm of the translational primer vector and the rotational primer vector.

Fig. 3. Optimal control for the first vehicle as a function of time.
Fig. 4. Translational and rotational primer vector norms as functions of time for the first vehicle.

#### **8. Conclusion and recommendations for future research**

In this paper, we addressed the problem of minimizing the control effort needed to operate a formation of n UAVs. Specifically, a candidate optimal control law as well as necessary conditions for optimality that characterize the resulting optimal trajectories are derived and discussed assuming that the formation vehicles are 6 DoF rigid bodies flying in generic environmental conditions and subject to equality and inequality constraints. The results presented extend Lawden's seminal work (Lawden, 1963) and several papers predicated on his work.

An illustrative numerical example involving a formation of two vehicles is provided to illustrate the mathematical path planning optimization framework presented in the paper. Furthermore, we show that our framework is not restricted to UAV formations and can be applied to formations of robots, spacecraft, and underwater vehicles.

The results of the present paper can be further extended in several directions. Specifically, an analytical study of the translational primer vector and the rotational primer vector can be useful in identifying numerous properties of the formation's optimal path. In particular, the translational primer vector and the rotational primer vector can be used to measure the sensitivity of the candidate optimal control law to uncertainties in the dynamical model. In this paper, we provide a generic formulation of the optimal path planning problem in order to address a large number of formation problems. However, specializing our results to a particular formation and a particular environmental model can lead to analytical tools that are amenable to efficient numerical methods. Additionally, nonholonomic constraints have not been accounted for in our framework and can be addressed by modifying Theorem 4.1. Finally, in this paper, we penalize vehicle control effort by tuning the constants *μ*1,...,*μ*<sup>n</sup> in (2). In many practical applications, however, it is preferable to trade off the control effort in a formation of vehicles by optimizing over the free parameters *μ*1,...,*μ*n.

#### **9. Acknowledgments**

The first-named author would like to thank Drs. C. Sultan and E. Cliff at Virginia Polytechnic Institute and State University for several helpful discussions. This research was supported in part by the Domenica Rea d'Onofrio Fellowship Foundation and the Air Force Office of Scientific Research under Grant FA9550-09-1-0429.

#### **10. References**

Ambrosia, V. & Hinkley, E. (2008). NASA science serving society: Improving capabilities for fire characterization to effect reduction in disaster losses, *IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2008)*, Vol. 4, pp. IV-628–IV-631.

Anderson, J. D. (2001). *Fundamentals of Aerodynamics*, McGraw Hill, New York, NY.

Bataillé, B., Moschetta, J. M., Poinsot, D., Bérard, C. & Piquereau, A. (2009). Development of a VTOL mini UAV for multi-tasking missions, *The Aeronautical Journal* 13: 87–98.

Betts, J. T. (1998). Survey of numerical methods for trajectory optimization, *AIAA Journal of Guidance, Control, and Dynamics* 21: 193–207.

Blackmore, L. (2008). Robust path planning and feedback design under stochastic uncertainty, *Proceedings of the AIAA Guidance, Navigation, and Control Conference*, AIAA, Honolulu, HI.

Bryson, A. E. (1975). *Applied Optimal Control*, Hemisphere, New York, NY.

Ewing, E. G. (1969). *Calculus of Variations with Applications*, Dover Edition, New York, NY.

Giaquinta, M. & Hildebrandt, S. (1996). *Calculus of Variations I*, Springer-Verlag, Berlin, Germany.

Greenwood, T. D. (2003). *Advanced Dynamics*, Cambridge University Press, New York, NY.

Haddal, C. C. & Gertler, J. (2010). Homeland security: Unmanned aerial vehicles and border surveillance, *Technical Report RS21698*, Congressional Research Service, Washington, D.C.

Hassan, R., Cohanim, B. & de Weck, O. (2005). A comparison of particle swarm optimization and the genetic algorithm, *Proceedings of the 46th AIAA Structures, Structural Dynamics and Materials Conference*, AIAA, Breckenridge, CO.

Herman, A. L. & Conway, B. A. (1987). Direct optimization using nonlinear programming and collocation, *AIAA Journal of Guidance, Control, and Dynamics* 10: 338–342.

Jacobson, D. & Lele, M. (1969). A transformation technique for optimal control problems with a state variable inequality constraint, *IEEE Transactions on Automatic Control* 14(5): 457–464.

Jamison, B. R. & Coverstone, V. (2010). Analytical study of the primer vector and orbit transfer switching function, *AIAA Journal of Guidance, Control, and Dynamics* 33: 235–245.

Jang, J. S. & Tomlin, C. J. (2005). Control strategies in multi-player pursuit and evasion game, *Proceedings of the AIAA Guidance, Navigation, and Control Conference*, AIAA, San Francisco, CA.

L'Afflitto, A. & Sultan, C. (2008). Applications of calculus of variations to aircraft and spacecraft path planning, *Proceedings of the AIAA Guidance, Navigation, and Control Conference*, AIAA, Chicago, IL.

L'Afflitto, A. & Sultan, C. (2010). On calculus of variations in aircraft and spacecraft formation flying path planning, *Proceedings of the AIAA Guidance, Navigation, and Control Conference*, AIAA, Toronto, Canada.

Lawden, D. F. (1963). *Optimal Trajectories for Space Navigation*, Butterworths, London, UK.

Lee, E. B. & Markus, L. (1968). *Foundations of Optimal Control Theory*, Wiley, New York, NY.

Lillesand, T., Kiefer, R. W. & Chipman, J. (2007). *Remote Sensing and Image Interpretation*, Wiley, New York, NY.

Mailhe, L. & Guzman, J. (2004). Initialization and resizing of formation flying using global and local optimization methods, *Proceedings of the IEEE Aerospace Conference*, Vol. 1, pp. 547–556.

Majewski, S. E. (1999). Naval command and control for future UAVs. MS Thesis, Naval Postgraduate School, Monterey, CA.

Marec, J. P. (1979). *Optimal Space Trajectories*, Elsevier, New York, NY.

Neimark, J. I. & Fufaev, N. A. (1972). *Dynamics of Nonholonomic Systems*, American Mathematical Society, New York, NY.

Oyekan, J. & Huosheng, H. (2009). Toward bacterial swarm for environmental monitoring, *IEEE International Conference on Automation and Logistics*, pp. 399–404.

Pars, L. A. (1965). *A Treatise on Analytical Dynamics*, Wiley, New York, NY.

Petropoulos, A. E. & Longuski, J. M. (2004). Shape-based algorithm for automated design of low-thrust, gravity-assist trajectories, *AIAA Journal of Guidance, Control, and Dynamics* 32: 95–101.

Petropoulos, A. E. & Russell, R. P. (2008). Low-thrust transfers using primer vector theory and a second-order penalty method, *Proceedings of the AIAA Astrodynamics Specialist Conference*, AIAA, Honolulu, HI.

Pines, D. & Bohorquez, F. (2006). Challenges facing future micro-air-vehicle development, *AIAA Journal of Aircraft* 43: 290–305.

Pontryagin, L. S., Boltyanskii, V. G., Gamkrelidze, R. V. & Mishchenko, E. F. (1962). *The Mathematical Theory of Optimal Processes*, Interscience Publishers, New York, NY.

Prussing, J. E. (2010). Primer vector theory and applications, *in* B. A. Conway (ed.), *Spacecraft Trajectory Optimization*, Cambridge University Press, Chicago, IL, pp. 155–188.

Ramage, J., Avalle, M., Berglund, E., Crovella, L., Frampton, R., Krogmann, U., Ravat, C., Robinson, M., Shulte, A. & Wood, S. (2009). Automation technologies and application considerations for highly integrated mission systems, *Technical Report TR-SCI-118*, North Atlantic Treaty Organisation.

Scharf, D., Hadaegh, F. & Ploen, S. (2003a). A survey of spacecraft formation flying guidance and control (part 1): Guidance, *Proceedings of the American Control Conference*, pp. 1733–1739.

Scharf, D., Hadaegh, F. & Ploen, S. (2003b). A survey of spacecraft formation flying guidance and control (part 2): Control, *Proceedings of the American Control Conference*, pp. 1740–1748.

Schouwenaars, T., Feron, E. & How, J. (2006). Multi-vehicle path planning for non-line of sight communication, *Proceedings of the American Control Conference*, pp. 5758–5762.

Seereram, S., Li, E., Ravichandran, B., Mehra, R. K., Smith, R. & Beard, R. (2000). Multispacecraft formation initialization using genetic algorithm techniques, *Proceedings of the 23rd Annual AAS Guidance and Control Conference*, AAS, Breckenridge, CO.

Shanmugavel, M., Tsourdos, A. & White, B. (2010). Collision avoidance and path planning of multiple UAVs using flyable paths in 3D, *15th International Conference on Methods and Models in Automation and Robotics*, pp. 218–222.

Shuster, M. D. (1993). Survey of attitude representations, *Journal of the Astronautical Sciences* 41: 439–517.

Topcu, U., Casoliva, J. & Mease, K. D. (2007). Minimum-fuel powered descent for Mars pinpoint landing, *AIAA Journal of Spacecraft and Rockets* 44(2): 324–331.

Valentine, F. A. (1937). The problem of Lagrange with differential inequalities as added side conditions, *in* G. A. Bliss (ed.), *Contributions to the Calculus of Variations*, Chicago University Press, Chicago, IL, pp. 407–448.

Wall, B. J. (2008). Shape-based approximation method for low-thrust trajectory optimization, *Proceedings of the AIAA Astrodynamics Specialist Conference*, AIAA, Honolulu, HI.

Wang, P. K. C. (1991). Navigation strategies for multiple autonomous mobile robots moving in formation, *Journal of Robotic Systems* 8: 177–195.

Zaitri, M. K., Arzelier, D. & Louembert, C. (2010). Mixed iterative algorithm for solving optimal impulsive time-fixed rendezvous problem, *Proceedings of the AIAA Guidance, Navigation, and Control Conference*, AIAA, Toronto, Canada.




## **Measuring and Managing Uncertainty Through Data Fusion for Application to Aircraft Identification System**

Peter Pong<sup>1</sup> and Subhash Challa2

<sup>1</sup>*Jacobs Australia / University of Melbourne* <sup>2</sup>*NICTA Victoria Research Laboratory / University of Melbourne Australia*

#### **1. Introduction**


248 Recent Advances in Aircraft Technology


Despite the use of modern Identification Friend or Foe (IFF) technology, aircraft recognition remains problematic even though a great deal of research effort has already been invested in this area. In the military context, IFF identification is initiated when the interrogator transmits a signal to the aircraft, and friendly aircraft are 'supposed' to reply by transmitting an identification code to the interrogator. Hostile aircraft often remain unresponsive to the interrogator, either because they do not have the appropriate transponder or because they are trying to avoid being identified as unfriendly. In the civilian air transport system, the Secondary Surveillance Radar (SSR) allows the location of a civilian aircraft to be transmitted (through its transponder) to the Air Traffic Controller (ATC). However, in extreme incidents, such as the attacks on the World Trade Center on 11th September 2001, the SSR transponders were manually disabled, which prevented the ATC from detecting the flight path alteration. To avoid the drawbacks of transponder-based aircraft identification, the technique of Non-Cooperative Target Recognition (NCTR) has become a useful technology, because it does not require the participation of friendly aircraft. The NCTR technique relies primarily on ground-based target classification technology. In a typical classification problem, the goal is to develop a classifier that is capable of discriminating targets. This technology shares a great deal of similarity with the modern Electronic Support Measures (ESM) system that is often employed as a Radar Warning Receiver (RWR) for the self-protection of modern military aircraft. Acknowledging the number of successful classifier technologies reported in this area, the goal of this work is not to propose any new algorithm to enhance classification technology.

Instead, a novel method, based on uncertainty measures, is introduced to improve the classification function by employing a data fusion technique. Data fusion applying the evidential reasoning framework is a well-established technique for fusing diverse sources of information. A number of fusion methods within this formalism have been introduced, including Dempster-Shafer Theory (DST) fusion, Dezert-Smarandache Theory (DSmT) fusion, and Smets' Transferable Belief Model (TBM) based fusion. However, the impact of fusion on the level of uncertainty within these techniques has not been studied in detail. While the use of Shannon entropy with Bayesian fusion is well understood, the measures of uncertainty within the Dempster-Shafer formalism are not widely studied. In this paper, an uncertainty based technique is proposed to quantify the evolution of DST fusion. This technique is then utilised to determine the optimal combination of sensor information to achieve the least uncertainty in the context of the aircraft identification problem using sensors operating the NCTR technique.
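Since the chapter builds on DST fusion, a minimal sketch of Dempster's rule of combination (Shafer, 1976) may help fix ideas. The toy frame {F, H} (friend/hostile), the source names, and the mass values below are illustrative placeholders, not taken from the chapter:

```python
# Dempster's rule: combine two basic belief assignments (mass functions),
# redistributing the mass of conflicting (empty-intersection) pairs.
def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc                 # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

F, H = frozenset({"F"}), frozenset({"H"})
FH = F | H                                          # the whole frame (ignorance)
m_radar = {F: 0.6, H: 0.1, FH: 0.3}                 # e.g. an NCTR classifier output
m_esm   = {F: 0.5, H: 0.2, FH: 0.3}                 # e.g. an ESM report
print(dempster_combine(m_radar, m_esm))
```

Here the conflict mass is K = 0.17, and the combined masses are renormalised by 1 − K; how this renormalisation hides (rather than resolves) conflicting information is precisely the kind of behaviour the uncertainty measures proposed in this chapter are meant to expose.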

Fig. 2. Example of a radar range profile of a fighter aircraft

Through Data Fusion for Application to Aircraft Identification System

the best target identification.

**3. Sensor selection and decision making**

system is employed to demonstrate the characteristics of uncertainty variation. In terms of target tracking, significant advancements have been made in the past two decades to improve tracking technology by employing sophisticated data fusion techniques. Some of the earlier works went even further by incorporating Target Identification information, such as IFF data, to improve the overall track quality (Leung & Wu, 2000), (Carson & Peters, 26-30 Oct 1997), (Bastiere, 1997), and (Perlovsky & Schoendorf, 1995). When legitimate statistical information is presented, the techniques employed by tracking and identification using IFF information are relatively mature. However, when conflicting information is presented to the NCTR system, most techniques employed today may find it difficult to discriminate the contradicting information. In this work, we propose a technique based on uncertainty measures to resolve this problem. The employment of uncertainty in recent aviation research was reported in areas, such as air traffic control (Porretta & Ochieng, 2010), navigation (Deng & Liu, 2011) and airport surface movement management (Schuster & Ochieng, 2011), however, all these works essentially model uncertainty based on the target statistical characteristics, such as model based classified illustrated in Figure 2. Instead of treating uncertainty implicitly using their statistical values, the concept proposed in this work treats uncertainty measures directly as input parameters. In this way, we could explicitly quantify the fusion performance to make

<sup>251</sup> Measuring and Managing Uncertainty

Information fusion is often perceived to produce improved decision. This assumption is generally true when sensor availability is limited, however, one has to question whether fusing all available data guarantee synergy. The focus of this work is on the reduction of uncertainties by expressing the relevant uncertainties in the reasoning system and utilise these measures to achieve the best information fusion strategy. In order to develop an uncertainty based information fusion in the aircraft identification context, the authors argue that the best fusion decision can only be observed when (i) the information fusion could provide the least ambiguous choice, (ii) the result produced by the fusion system induces the least vague answer under the reasoning framework, and (iii) the final recommendation provided by the fusion system has the fewest uncertainties. These three axioms underlying this paper are used

utilised to determine the optimal combination of sensor information to achieve the least uncertainty in the context of the aircraft identification problem using sensors operating the NCTR technique.

#### **2. Background**

Information fusion is often used as a data-processing technique to integrate uncertain information from multiple sensors. Information often contains uncertainties, which are usually related to physical constrains, detection algorithms and the transmitting channel of the sensors. Whilst the intuitive approaches, such as Dempster-Shafer Fusion (Shafer, 1976), Dezert Samarandche Fusion (DSmT)(Dezert & Smarandache, 2006) and Smets' Transferable Belief Model (TBM) (B.Ristic & P.Smets, 2005) aggregate all available information, these approaches do not always guarantee optimum results. Acknowledging that these techniques have associated measurement costs, the essence is to derive a fusion technique to minimise global uncertainties.

Fig. 1. JDL Model and Uncertainty

In the aerospace community, there is an increasing trend to automate decision processes based on information fusion techniques. As an example, fighter pilots may rely on various forms of data fusion models to assist in assessing the current situations, when uncertain information co-exists at all levels of fusion. Considering the many data fusion models, the Joint Defence Laboratory (JDL) model (Hall & Llinas, 2001) is one the most commonly referred frameworks, which consists of Level 1 Object Assessment, Level 2 Situation Assessment, Level 3 Impact Assessment and Level 4 Process Refinement. The decision maker is supposed to treat the JDL model at 4 independent levels of functions, however, each level of fusion often includes unavoidable uncertainties. That means any aircraft identification system employing real-time situation analysis technology is required to manage uncertainty in the most effective manner. The techniques based on statistical models employed in aircraft tracking were widely acknowledged, but the methods based on uncertainty measures for target identification are not well understood in the aviation community. In recognition of this deficiency, this paper explores a novel aircraft identification technique by leveraging a new uncertainty based fusion concept.

The new concept introduced in this work explores a number of uncertainty measures under the reasoning framework and attempts to introduce a methodology to manage uncertainty variation under the DST based fusion. An example derived from an Aircraft Identification (AI) 2 Will-be-set-by-IN-TECH

utilised to determine the optimal combination of sensor information to achieve the least uncertainty in the context of the aircraft identification problem using sensors operating the

Information fusion is often used as a data-processing technique to integrate uncertain information from multiple sensors. Information often contains uncertainties, which are usually related to physical constrains, detection algorithms and the transmitting channel of the sensors. Whilst the intuitive approaches, such as Dempster-Shafer Fusion (Shafer, 1976), Dezert Samarandche Fusion (DSmT)(Dezert & Smarandache, 2006) and Smets' Transferable Belief Model (TBM) (B.Ristic & P.Smets, 2005) aggregate all available information, these approaches do not always guarantee optimum results. Acknowledging that these techniques have associated measurement costs, the essence is to derive a fusion technique to minimise

Fig. 1. JDL Model and Uncertainty



#### **2. Background**


Fig. 2. Example of a radar range profile of a fighter aircraft

In terms of target tracking, significant advancements have been made in the past two decades to improve tracking technology by employing sophisticated data fusion techniques. Some of the earlier works went even further by incorporating Target Identification information, such as IFF data, to improve the overall track quality (Leung & Wu, 2000), (Carson & Peters, 26-30 Oct 1997), (Bastiere, 1997), and (Perlovsky & Schoendorf, 1995). When legitimate statistical information is presented, the techniques employed for tracking and identification using IFF information are relatively mature. However, when conflicting information is presented to the NCTR system, most techniques employed today may find it difficult to discriminate the contradicting information. In this work, we propose a technique based on uncertainty measures to resolve this problem. The employment of uncertainty in recent aviation research has been reported in areas such as air traffic control (Porretta & Ochieng, 2010), navigation (Deng & Liu, 2011) and airport surface movement management (Schuster & Ochieng, 2011); however, all these works essentially model uncertainty based on the target's statistical characteristics, such as the model-based classifier illustrated in Figure 2. Instead of treating uncertainty implicitly through its statistical values, the concept proposed in this work treats uncertainty measures directly as input parameters. In this way, we can explicitly quantify the fusion performance to make the best target identification.

#### **3. Sensor selection and decision making**

Information fusion is often perceived to produce improved decisions. This assumption is generally true when sensor availability is limited; however, one has to question whether fusing all available data guarantees synergy. The focus of this work is on the reduction of uncertainties by expressing the relevant uncertainties in the reasoning system and utilising these measures to achieve the best information fusion strategy. In order to develop uncertainty-based information fusion in the aircraft identification context, the authors argue that the best fusion decision can only be observed when (i) the information fusion provides the least ambiguous choice, (ii) the result produced by the fusion system induces the least vague answer under the reasoning framework, and (iii) the final recommendation provided by the fusion system has the fewest uncertainties. These three axioms underlying this paper are used to define the best fusion configuration. It is apparent that the goal of uncertainty-based fusion is to choose the result with the least uncertainty. A fusion process based on uncertainties has the potential to lead to a biased result. However, it is difficult to neglect a decision based on information fusion when it is the least uncertain, least ambiguous and the most defined answer when compared with other potential solutions.

|          | Sensors 1 & 2 | Sensors 1 & 3 | Sensors 2 & 3 |
|----------|---------------|---------------|---------------|
| D, Q, H  | 0.026         | 0.022         | 0.0356        |
| D, Q     | 0.0779        | 0.037         | 0.0595        |
| H        | 0.3506        | 0.7704        | 0.3810        |
| Q        | 0.1558        | 0.1185        | 0.0833        |
| D        | 0.3896        | 0.0519        | 0.4405        |

Table 1. Sensor fusion example with contradicted information

Figure 3 depicts an illustrative example in which an aircraft identification scenario is considered. Assume a model-based classifier is employed to identify three aircraft types: dual-engine aircraft (D), quadruple-engine aircraft (Q) and helicopter (H). Assume also that the sensors can produce an "unknown" state in the form of {D, Q, H}, where the aircraft type cannot be classified. Three sensors are utilised in this example to simplify the demonstration, and a classification value based on Basic Probability Assignment (BPA) is given to each of the classification reports, with details summarised in Figure 3.

Fig. 3. Multi-sensor aircraft classification

If the identification process performed by each sensor is independent, the information provided by Sensor 2 clearly contradicts Sensor 1 and Sensor 3. Such errors can be induced by an incorrect scatter angle, or simply by an inaccurate model estimate. Based on the axioms discussed, it is observed that fusing Sensor 1 and Sensor 2, or Sensor 2 and Sensor 3, under DST (to be discussed in the next section) will not produce a pronounced result to identify the aircraft type. The result of the fusion is illustrated in Table 1, where only the combination of Sensor 1 and Sensor 3 provides an unambiguous fusion result. This example highlights the criticality of uncertainty measures in relation to the standard DST fusion process. Section 5 and Section 6 of this paper provide an empirical uncertainty measures analysis in the reasoning framework, and provide an insight into how this method can be applied in an aircraft identification capability.

#### **4. Evidential reasoning framework**

The notion of Basic Probability Assignment (BPA) (Shafer, 1976) is defined with respect to a finite universe of propositions, or frame of discernment, Ω. The sum of the probabilities assigned to all subsets of Ω, over all propositions which support Ω, must be unity; as such, a *BPA* is a function from the power set $2^\Omega$ of Ω to the unit interval [0, 1]. In accordance with the convention proposed by Shafer (Shafer, 1976):

$$m(\emptyset) = 0 \tag{1}$$

and



$$\sum\_{A \subseteq \Omega} m(A) = 1 \tag{2}$$

The subset *A* of Ω such that *m*(*A*) > 0 is called a *focal element* of *m*, and ∅ is the empty set. While the BPA must sum to unity, it is not mandatory for the BPA of a proposition *A* and that of its negation $\bar{A}$ to sum to unity.

#### **4.1 Belief and plausibility measures**

The idea of linking belief with evidential measures was first discussed by Shafer, and the belief function is defined in reference to the BPA as follows.

**Definition 1.** *Bel:* $2^\Omega \to [0, 1]$ *is a belief function over* Ω *if it satisfies:*

- $Bel(\emptyset) = 0$
- $Bel(\Omega) = 1$
- *for every integer* $n > 0$ *and collection of subsets* $A_1, \dots, A_n$ *of* Ω*,*

$$Bel(A_1 \cup \dots \cup A_n) \ge \sum_i Bel(A_i) - \sum_{i<j} Bel(A_i \cap A_j) + \dots + (-1)^{n+1} Bel(A_1 \cap \dots \cap A_n)$$

BPA gives a measure of support that is assigned exactly to the focal elements of a given frame of discernment. In order to aggregate the total belief in a subset *A*, the extent to which all the available evidence supports *A*, one needs to sum together the BPAs of all the subsets of A for a belief measurement.

$$Bel(A) = \sum\_{B \subseteq A} m(B) \quad \forall A \subseteq \Omega \tag{3}$$


The remaining evidence may not necessarily support the negation *A*. In fact some of them may be assigned to propositions which are not disjointed from *A*, and hence, could be plausibly transferred directly to *A* for further information. Shafer called this the plausibility of A:

$$\operatorname{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B) \quad \forall A \subseteq \Omega \tag{4}$$
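Equations (3) and (4) translate directly into code. The sketch below uses an illustrative BPA over the frame {D, Q, H} (placeholder values, not data from the chapter) to compute belief and plausibility:

```python
# Belief (Eq. 3) and plausibility (Eq. 4) computed from a BPA.
# The BPA below is an illustrative placeholder, not data from the chapter.
m = {
    frozenset({"D"}): 0.5,
    frozenset({"Q"}): 0.2,
    frozenset({"D", "Q", "H"}): 0.3,  # the "unknown" state
}

def bel(A, m):
    """Bel(A): total mass committed to subsets of A (Eq. 3)."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A, m):
    """Pl(A): total mass not contradicting A, i.e. on sets intersecting A (Eq. 4)."""
    return sum(v for B, v in m.items() if B & A)

print(bel(frozenset({"D"}), m))  # 0.5
print(pl(frozenset({"D"}), m))   # 0.8 -- 0.5 plus the 0.3 "unknown" mass
```

Note that Bel(*A*) ≤ Pl(*A*) always holds; the gap Pl − Bel quantifies the mass that could plausibly, but not certainly, be transferred to *A*.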

#### **4.2 Dempster-Shafer fusion under an iterative process**

Dempster's rule of combination forms a new body of evidence whose focal elements are all non-empty intersections *X* ∩ *Y*. Given any *S* ⊆ *U* there may be many pairs *X*, *Y* ⊆ *U* such that *X* ∩ *Y* = *S*, and so the total weight of agreement assignable to the focal subset *X* ∩ *Y* is $\sum_{X \cap Y = S} m(X)\,m'(Y)$. After normalising the agreement by the "non-conflicting value" (1 − *K*), Dempster's rule of combination for imprecise evidence becomes,

$$(m \ast m')(S) = \frac{1}{1-K} \sum_{X \cap Y = S} m(X)\,m'(Y) \tag{5}$$

for all ∅ ≠ *S* ⊆ *U*. The *conflict* between two bodies of evidence *m*, *m*' is the total weight of contradiction between the events of *m* and the events of *m*':

$$K(m, m') = \sum_{X \cap Y = \emptyset} m(X)\,m'(Y) \tag{6}$$

The quantity 1 − *K* is the cumulative degree to which the two bodies of evidence do not contradict each other and is called the *agreement* between *m* and *m*'. In general evidential theory, the Dempster-Shafer rules, belief functions, plausibility functions and BPAs form a suite of significant tools for constructing probabilities through carefully modelled evidence. Through this combination process, two new measurement values, *non-specificity* and *conflict*, are also generated as by-products. An empirical analysis is presented in Section 5 in conjunction with the theory of Aggregated Uncertainty (AU) and the recently proposed generalised Total Uncertainty (TU) measures.
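Equations (5) and (6) can be sketched as follows. The two input BPAs are hypothetical values chosen only to exercise the rule, not the sensor reports of Figure 3:

```python
def dempster_combine(m1, m2):
    """Dempster's rule (Eq. 5): intersect focal elements, then renormalise
    by the agreement 1 - K, where K is the conflict of Eq. (6)."""
    fused, K = {}, 0.0
    for X, a in m1.items():
        for Y, b in m2.items():
            S = X & Y
            if S:
                fused[S] = fused.get(S, 0.0) + a * b
            else:
                K += a * b  # weight assigned to the empty intersection
    if K >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {S: v / (1.0 - K) for S, v in fused.items()}, K

# Hypothetical bodies of evidence over the frame {D, Q, H}.
m1 = {frozenset("D"): 0.6, frozenset("DQH"): 0.4}
m2 = {frozenset("D"): 0.5, frozenset("Q"): 0.3, frozenset("DQH"): 0.2}
fused, K = dempster_combine(m1, m2)
print(K)      # 0.18 -- only the D x Q pairing is contradictory
print(fused)  # mass concentrates on {D}; the fused BPA sums to 1
```

The normalisation by 1 − *K* is exactly what makes the rule fragile under high conflict, which motivates the Table 1 example above.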

#### **5. Uncertainty measures within the evidential reasoning framework**

While classical uncertainties are often measured by the Hartley and Shannon functions, the two functions are tailored for different purposes. In order to cater for both types of uncertainty, evidential-based uncertainty measures are adopted. Two types of classical evidential-based uncertainty, non-specificity and conflict, are often measured as part of the DST fusion (Harmanec, 1996). This section introduces an overview of the concepts of Hartley uncertainty measures, Aggregated Uncertainty (AU) measures and the Total Uncertainty (TU) measures proposed by Klir (Klir, 2006). The analysis covers the context of the DST fusion system and its subsequent implications. A practical example of aircraft identification applying uncertainty measures as sensor discrimination metrics is discussed in Section 7 to verify our observations.

#### **5.1 Hartley uncertainty**

The technique of uncertainty measures was first addressed by Shannon. Under his proposal, the uncertainty expressed by a probability distribution function *p* on a singleton set is quantified in the form of,

$$-c\sum p(\mathbf{x})\log\_b p(\mathbf{x})\tag{7}$$

where *b* and *c* are positive constants, and *b* ≠ 1. While this technique is useful in sensor management systems operating under the probabilistic framework, it cannot be used under a finite set condition. An alternative is to employ the legacy Hartley measure (Hartley, n.d.), which appears to be the only meaningful way to measure such uncertainty, in the form of,

$$c \log_b \sum_{x \in \Omega} r_A(x) \tag{8}$$

or alternatively



$$c \log\_b |A| \tag{9}$$

where *A* is a finite set and |*A*| is its cardinality; *b* and *c* are positive constants, and *b* ≠ 1. When uncertainty is measured in *bits*, *c* log*<sup>b</sup>* 2 = 1. The Hartley uncertainty measure, *H*, defined for any basic possibility function *rA*, is

$$H(r\_A) = \log\_2 |A| \tag{10}$$

On closer examination of (10), *H*(*rA*) is a measure directly related to the specificity of a finite set. In other words, the larger the size of a set, the less specific the measurement becomes. This type of measure was defined as *non-specificity* by Klir (Klir, 2006). In the reasoning framework, Hartley measures are usually treated as a weighted average over all the focal subsets in the form of the BPA function (Klir, 2006). The concept of generalised Hartley measures in the context of the DST framework is thus defined by the function,

$$GH(m) = \sum\_{A \in \Omega} m(A) \log\_2 |A| \tag{11}$$

where Ω is the superset of the focal elements.
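As a sketch, the Hartley measure of (10) and the generalised Hartley measure of (11) reduce to a one-line sum over the focal elements; the BPA values below are illustrative, not the chapter's data:

```python
import math

def hartley(A):
    """Classical Hartley measure (Eq. 10): depends only on cardinality."""
    return math.log2(len(A))

def generalised_hartley(m):
    """Generalised Hartley measure (Eq. 11): BPA-weighted non-specificity."""
    return sum(v * hartley(A) for A, v in m.items())

# Illustrative BPA: mass on singletons contributes nothing, while mass on
# larger focal elements raises the non-specificity.
m = {frozenset("D"): 0.6, frozenset("DQ"): 0.3, frozenset("DQH"): 0.1}
gh = generalised_hartley(m)
print(gh)  # 0.6*0 + 0.3*1 + 0.1*log2(3), roughly 0.458
```

A BPA concentrated entirely on singletons has *GH* = 0, i.e. no non-specificity at all.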

#### **5.2 Aggregated uncertainty measures**

Supposing the goal of information fusion is to reduce global uncertainties, Harmanec (Harmanec, 1996) was the first to explore the concept of uncertainty measures in the DST framework. The *AU* uncertainty measure was proposed as the optimum uncertainty measurement technique under the DST domain, because it is the only way to incorporate the values of non-specificity and conflict simultaneously, which often coexist in the DST framework.

**Definition 2.** *The measure of the Aggregated Uncertainty contained in Bel, denoted as AU*(*Bel*)*, is defined by*

$$AU(Bel) = \max\left\{-\sum_{x \in \Omega} p_x \log_2 p_x\right\} \tag{12}$$

*where the maximum is taken over all* $\{p_x\}_{x \in \Omega}$ *such that* $p_x \in [0, 1]$ *for all* $x \in \Omega$, $\sum_{x \in \Omega} p_x = 1$ *and, for all* $A \subseteq \Omega$, $Bel(A) \le \sum_{x \in A} p_x$.

Although the *AU* technique is not an efficient algorithm, it does satisfy all the properties required of an uncertainty measure (Harmanec, 1996), specifically the subadditivity/additivity characteristics.

| Classification | 1 Sensor | 3 Sensors | 7 Sensors |
|----------------|----------|-----------|-----------|
| A              | 0.22     | 0.3485    | 0.4125    |
| B              | 0.25     | 0.3309    | 0.3525    |
| C              | 0.26     | 0.2845    | 0.2343    |
| D              | 0.00     | 0.0015    | 0.0001    |
| A,B            | 0.07     | 0.0163    | 0.0004    |
| A,C            | 0.03     | 0.005     | 0.0001    |
| A,D            | 0.03     | 0.005     | 0.0001    |
| B,C            | 0.015    | 0.0022    | 0.0000    |
| B,D            | 0.005    | 0.0007    | 0.0000    |
| C,D            | 0.01     | 0.0014    | 0.0000    |
| A,B,C,D        | 0.1      | 0.0042    | 0.0000    |

Table 2. Classification Results with DST Fusion




Fig. 4. Additivity and Subadditivity

**Subadditivity**. If *Bel* is an arbitrary joint belief function on *X* × *Y* and the associated marginal belief functions are *BelX* and *BelY*, then

$$AU(Bel) \le AU(Bel_X) + AU(Bel_Y) \tag{13}$$

**Additivity**. If *Bel* is a joint belief function on *X* × *Y*, and the marginal belief functions *BelX* and *BelY* are noninteractive, then

$$AU(Bel) = AU(Bel_X) + AU(Bel_Y) \tag{14}$$

The additivity/subadditivity property of AU gives rise to the assumption that uncertainties could be reduced if sensors share common interaction prior to the information fusion process. Assuming sensor dependency exists among *BelA*, *BelB* and *BelC*, the characteristics of the resultant uncertainty under an evidential fusion system are illustrated pictorially in Figure 4. The algorithm to compute AU uncertainty originated with Harmanec (Harmanec, 1996). Under the proposed algorithm, the input is a frame of discernment *X* with a belief function *Bel* on *X*. The computation completes in a finite number of steps and the output is the correct value of *AU*(*Bel*), since {*px*}*x*∈*<sup>X</sup>* maximises the Shannon entropy within the constraints induced by *Bel*.
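The algorithm described above can be sketched as follows. This is an assumed reconstruction of the Harmanec-Klir procedure (not the authors' code): repeatedly pick the subset of the remaining frame with the largest conditional Bel(*A*)/|*A*| (largest *A* on ties), spread that belief uniformly over *A*, and recurse; the Shannon entropy of the resulting distribution is *AU*(*Bel*). Brute-force subset enumeration keeps the sketch simple and limits it to small frames.

```python
import math
from itertools import combinations

def bel(m, A):
    """Bel(A) from a BPA m (Eq. 3)."""
    return sum(v for B, v in m.items() if B <= A)

def aggregated_uncertainty(m, frame):
    """AU(Bel) of Eq. (12): the maximum Shannon entropy consistent with Bel."""
    remaining, done, p = frozenset(frame), frozenset(), {}
    while remaining and bel(m, done | remaining) - bel(m, done) > 0:
        best, best_ratio = None, -1.0
        for r in range(1, len(remaining) + 1):
            for combo in combinations(sorted(remaining), r):
                A = frozenset(combo)
                # conditional belief of A given the elements already assigned
                ratio = (bel(m, done | A) - bel(m, done)) / len(A)
                if ratio > best_ratio or (ratio == best_ratio and len(A) > len(best)):
                    best, best_ratio = A, ratio
        for x in best:            # spread the belief uniformly over best
            p[x] = best_ratio
        done, remaining = done | best, remaining - best
    for x in remaining:           # leftover elements carry no belief
        p[x] = 0.0
    return -sum(px * math.log2(px) for px in p.values() if px > 0)

# Vacuous BPA: total ignorance over two outcomes yields the full 1 bit.
print(aggregated_uncertainty({frozenset("ab"): 1.0}, "ab"))  # 1.0
```

For a BPA fully committed to a single element the algorithm returns 0 bits, since the entropy-maximising distribution is forced to be degenerate.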

#### **5.3 Total uncertainty measures**

The concept of generalised Total Uncertainty (TU) was proposed by Klir (Klir & Smith, 2001) not long after the introduction of AU uncertainty. This measure is defined as a combination of *AU* uncertainty and Generalised Hartley Measures,

$$TU = \langle GH, GS \rangle \tag{15}$$

where *GH* represents the Generalised Hartley measure discussed in (11). The factor *GS* is called the Generalised Shannon measurement (Klir, 2006), which measures conflict while taking evidential specificity into account; in other words, *GS* = *AU* − *GH*, the Aggregated Uncertainty with the specificity contribution removed. One advantage of the disaggregated TU, in comparison with AU, is that it expresses the amounts of both types of uncertainty (non-specificity and conflict) explicitly, and consequently it is highly sensitive to changes in evidence. These features allow one to work with any set of recognised and well-developed theories of uncertainty as a whole, as commonly seen in evidential-based fusion problems.




#### **6. Analysis of uncertainty measures under the Dempster-Shafer fusion framework**

To appreciate the impact of uncertainty variation, an example with a set of arbitrary data is illustrated in Table 2. The data set repeats exactly the same measurement values, so that an iterative DST fusion can be performed. The results in Table 2 confirm that sensor information can be refined, and ambiguity appears to be reduced, under an iterative DST fusion process. However, the merit of these results cannot be examined further unless an acceptable metric is used to quantify the fusion. To address this point, the results illustrated in Figure 5a demonstrate how AU uncertainty reduction can quantify the DST fusion process. Whilst the AU uncertainty measure is a useful index for quantifying the DST fusion process, it is suggested to be insensitive to small changes in evidence (Klir, 2006). Acknowledging this inherent limitation of the AU measure, this work also examines the concept of employing a Total Uncertainty Map (TUM) to evaluate a standard DST fusion process. Since TU is an amalgamation of GH and GS, the uncertainty variation becomes significant when it is illustrated in two-dimensional space. Figure 5b illustrates how a TUM can be used to visualise the recursive DST fusion. To assist interpretation, the results of *GS*/*GH* are also provided in Figure 5a. In this case, *GS* and *GH* are treated as a unified parameter whose variation under the DST fusion process is observed. Because the sensor inputs to the DST fusion are identical, the weighted average of

Fig. 5. Uncertainty Variation: (a) AU uncertainty variation under the DS fusion; (b) TU map variation under the DS fusion.

*Measuring and Managing Uncertainty Through Data Fusion for Application to Aircraft Identification System*

| | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4 | Sensor 5 | Sensor 6 |
|---|---|---|---|---|---|---|
| *m*{*e*1} | 0.5188 | 0.3617 | 0.5126 | 0.4565 | 0.4414 | 0.3480 |
| *m*{*e*4} | | 0.1540 | | 0.1551 | | 0.1533 |
| *m*{*e*6} | 0.4124 | 0.3971 | 0.4387 | 0.3546 | 0.5342 | 0.3254 |
| *m*{*e*9} | | 0.0733 | | | | 0.1602 |
| *m*{Θ} | 0.0687 | 0.0138 | 0.0487 | 0.0337 | 0.0244 | 0.0357 |

Table 4. Normalised Aircraft Detection

Fig. 7. Model based classifier for aircraft type detection (normalised NN scores for Sensors 1-7 over emitter types E0-E11; the actual target is E1).



each focal subset is virtually unchanged, which is why the GH values displayed in Figure 5b remain constant throughout the iterative DST fusion process. Further observation shows, however, that the other component of uncertainty, in the form of conflict, is gradually reduced as part of the DST fusion process. To further explore the characteristics of uncertainty variation, four arbitrary sensor data sets are outlined in Table 3. The TU uncertainty is displayed in Figure 6b. These results are broken down into four levels, each level representing the number of sensors fused by the DST fusion. Based on the sample results, it is difficult to provide a consolidated picture of uncertainty variation within the DST fusion framework. However, a potential optimisation solution exists when the fusion goal is to present the most specific and least conflicting information to the decision maker. This concept is covered in Section 7 by leveraging an NCTR-based AI example.
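The iterative fusion underlying Figures 5 and 6 is Dempster's rule of combination. The following is a minimal Python sketch (not the authors' code); mass functions are dictionaries from frozensets to masses, and the example inputs are illustrative values, not those of Table 3.

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule of combination: multiply the masses of every pair of
    focal elements, keep non-empty intersections, and renormalise by the
    total conflict K assigned to empty intersections."""
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two illustrative BPAs on the frame Θ = {e1, e6}.
theta = frozenset({'e1', 'e6'})
m1 = {frozenset({'e1'}): 0.6, theta: 0.4}
m2 = {frozenset({'e1'}): 0.5, frozenset({'e6'}): 0.3, theta: 0.2}
fused = dempster(m1, m2)
# Conflict K = 0.18; fused masses: m{e1} ≈ 0.7561, m{e6} ≈ 0.1463, m{Θ} ≈ 0.0976
```

Iterating the rule over a sequence of sensors reproduces the recursive fusion whose AU and GS/GH trajectories are plotted in Figures 5 and 6: when the inputs agree, mass migrates towards the singletons, so conflict-type uncertainty falls while non-specificity changes little.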


#### **7. NCTR based Aircraft Identification (AI)**

This case study utilises an example commonly encountered in a model-based classification system. Assuming each NCTR sensor has the potential to produce a feature detection of,

$$B = \{E0, E1, E2, ..., E36\}$$

where *B* is the frame of discernment of the aircraft's type attributes, and this example allows seven model-based classifiers to report aircraft type identification. To reduce the

| Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4 |
|---|---|---|---|
| {A} = 0.26 | {B} = 0.2 | {B} = 0.1 | {A} = 0.05 |
| {B} = 0.26 | {A,B} = 0.1 | {C} = 0.1 | {B} = 0.05 |
| {C} = 0.26 | {A,C} = 0.1 | {A,B} = 0.16 | {D} = 0.2 |
| {A,B} = 0.07 | {A,B,C} = 0.1 | {B,C} = 0.14 | {A,B} = 0.11 |
| {A,C} = 0.01 | {A,C,D} = 0.1 | {B,D} = 0.05 | {A,C} = 0.03 |
| {A,D} = 0.01 | {B,C,D} = 0.3 | {A,C} = 0.1 | {A,D} = 0.03 |
| {B,C} = 0.01 | {A,B,C,D} = 0.1 | {A,B,C} = 0.2 | {C,D} = 0.03 |
| {B,D} = 0.01 | | {B,C,D} = 0.15 | {B,C,D} = 0.3 |
| {C,D} = 0.01 | | | {A,B,C,D} = 0.2 |
| {A,B,C,D} = 0.1 | | | |

Table 3. Random Sensor Input


Fig. 6. Extended Uncertainty Variation Modelling: (a) GS/GH variation under the DS fusion; (b) TU map under the DS fusion with random sensor input data.


computational workload, this example employs only 12 of the target type signatures instead of the potential 37 target types; the results are depicted in Figure 7. The 12 aircraft type signatures selected for this simulation share similar characteristics and often cause confusion for this particular NCTR platform. The remaining 25 emitter detections are not discarded, but are consolidated as the detection CLUTTER. This method is similar to the strategy reported in (Yu & Sycara, 2006), except that this case study treats all aircraft signatures as the total frame of discernment Θ*E* = { *e*0, *e*1, *e*2, *e*3, *e*4, *e*5, *e*6, *e*7, *e*8, *e*9, *e*10, *e*11 }. In the simulation, each emitter signature is considered as *ei* ∈ *E*, where *m*(*ei*) is the normalised confidence level assigned by the post-threshold detection process. For instance, the normalised post-detection confidence levels for Sensor 2 are *m*{*e*1} = 0.3617, *m*{*e*4} = 0.1540, *m*{*e*6} = 0.3971 and *m*{*e*9} = 0.0733. To include the non-mutually-exclusive aircraft types as CLUTTER, *m*{Θ*E*} = *c*(*CLUTTER*), i.e. the confidence of CLUTTER is assigned to the set of all possible aircraft types. In this case, the normalised *m*{Θ*E*} based on the pre-detection process is 0.0138.

Upon completion of the BPA preparation, we performed a DST-based fusion over the permutation space of the 2<sup>7</sup> possible sensor combinations. Figure 8 shows that the uncertainty, in the form of AU, is gradually reduced as successive DST fusions are applied. However, the results become less effective when more sensors are fused together. In accordance with the discussion in Section 6, the authors believe that an optimum approach to uncertainty-based DST fusion cannot rely on one single parameter alone. Depending on the computational workload and the tolerance of conflicts, the uncertainty-based fusion process ought to be determined by a TU map, where *GS* and *GH* are treated separately. The preliminary results based on this concept are illustrated in Figure 9.



Fig. 8. AU Uncertainty Variation Under the model based classifier DST Fusion

Fig. 9. Uncertainty Variation: (a) TU map variation under the model based classifier DST fusion; (b) GS/GH variation under the model based classifier DST fusion.

Notwithstanding the treatment of uncertainty in the DST context, Figure 9a outlines a method that adopts the theory of AU uncertainty to search for the least uncertain post-fusion result. For comparison purposes, the results of the *GS*/*GH* measures are also displayed in Figure 9b. Under such a process, the final result is determined by the fusion that produces the minimum AU uncertainty. In this particular example, Sensors 1, 3, 4 and 7 are selected to participate in the fusion process. Based on the least AU uncertainty, the final BPA for the detected emitters is given below:

$$m\{\Theta\_E\} = 0$$

$$m\{e\_1\} = 0.5604$$

$$m\{e\_6\} = 0.4396$$

Fig. 10. Uncertainty Variation: (a) AU uncertainty measurements in conjunction with sensor fusion; (b) GS/GH uncertainty measurements in conjunction with sensor fusion.

With a similar approach, adopting the *GS*/*GH* characteristics, Sensors 1, 2 and 3 are selected to join the fusion process. Based on the least *GS*/*GH* uncertainty, the final BPA for the detected emitters is given below; it is equivalent to the one obtained for the sensor combination with the least AU uncertainty:

$$m\{\Theta\_E\} = 0$$

$$m\{e\_1\} = 0.5604$$

$$m\{e\_6\} = 0.4396$$

Although the final result obtained from the uncertainty-based DST fusion does not yield a distinct decision, it justifies the conclusion that either aircraft type *e*<sup>1</sup> or aircraft type *e*<sup>6</sup> has been detected.
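The minimum-uncertainty selection loop described above can be sketched as follows. This is a hypothetical Python reconstruction, not the authors' code: it enumerates sensor combinations, fuses each with Dempster's rule, and keeps the combination with the lowest uncertainty score. The per-sensor BPAs are illustrative values (not Table 4's), and pignistic entropy is used as a cheap stand-in for the AU measure.

```python
from itertools import combinations, product
from math import log2

THETA = frozenset({'e1', 'e4', 'e6'})   # reduced frame, for illustration

# Hypothetical per-sensor BPAs: illustrative values, not the chapter's data.
sensors = {
    1: {frozenset({'e1'}): 0.5, frozenset({'e6'}): 0.4, THETA: 0.1},
    2: {frozenset({'e1'}): 0.4, frozenset({'e4'}): 0.3, THETA: 0.3},
    3: {frozenset({'e6'}): 0.6, frozenset({'e1'}): 0.3, THETA: 0.1},
}

def dempster(m1, m2):
    """Dempster's rule: products of masses on non-empty intersections,
    renormalised by the conflict assigned to empty intersections."""
    out, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    return {a: v / (1.0 - conflict) for a, v in out.items()}

def pignistic_entropy(m):
    """Shannon entropy of the pignistic transform BetP: a simple
    stand-in for AU, not Harmanec's algorithm."""
    p = {x: 0.0 for x in THETA}
    for a, v in m.items():
        for x in a:
            p[x] += v / len(a)
    return -sum(v * log2(v) for v in p.values() if v > 0)

# Sweep every non-empty sensor combination, keeping the least uncertain fusion.
best = None
for r in range(1, len(sensors) + 1):
    for combo in combinations(sensors, r):
        fused = sensors[combo[0]]
        for s in combo[1:]:
            fused = dempster(fused, sensors[s])
        score = pignistic_entropy(fused)
        if best is None or score < best[0]:
            best = (score, combo, fused)

# On these inputs the search selects the pair of Sensors 1 and 3.
```

Replacing `pignistic_entropy` with the AU or GS/GH computation recovers the chapter's two selection criteria; with seven sensors available, the outer loop corresponds to the 2<sup>7</sup>-combination sweep plotted in Figures 8-10.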

#### **8. Conclusion**


This paper has reviewed the role of uncertainty measures in the data fusion framework within the context of evidential reasoning. An empirical analysis of the AU and TU uncertainty variations was conducted under the DST fusion framework, and a preliminary method for choosing sensors based on their uncertainty level was proposed. The technique is illustrated with an aircraft identification problem, where a radar range profile classifier is employed to support an identification system such as NCTR. Since the amount of reflected radar energy differs for different parts of the aircraft, inconsistency often occurs even when the same target is observed by a number of sensors using the same classifier model. It is this inconsistency that makes the uncertainty-based fusion technique useful in resolving aircraft identification problems. While the proposed technique can be computationally intensive, it underwrites a conservative result with the least measurable uncertainty, and it has the potential to evaluate all kinds of reasoning-based fusion systems. We have certainly not reached the end of our research effort yet, as the proposed concept primarily considers the reduction of AU uncertainty. The authors recognise the benefit of further investigating the TUM in conjunction with the theory of optimisation, whereby a trade-off can be computed between the classification's precision and accuracy. At the moment, our proposed concept does not take into account sensor information based on human-originated data. It is certainly an exciting future research topic to extend the proposed concept to cover identification systems where human-originated information is employed.


#### **9. References**

Bastiere, A. (1997). Fusion methods for multisensor classification of airborne targets, *Aerospace Science and Technology* 1: 83–94.

Carson, R., Meyer, M. & Peters, D. (1997). Fusion of IFF and radar data, *16th AIAA/IEEE Digital Avionics Systems Conference (DASC)* 1: 5.3–9–15.

Deng, H., Chao, P. & Liu, J. (2011). Entropy flow-aided navigation, *The Journal of Navigation*, Vol. 64, The Royal Institute of Navigation, pp. 109–125.

Dezert, J. & Smarandache, F. (2006). DSmT: A new paradigm shift for information fusion, *Cogis '06 Conference, Paris*.

Hall, H. & Llinas, J. (2001). *Handbook of Multisensor Data Fusion*, CRC.

Harmanec, D. (1996). *Uncertainty in Dempster-Shafer Theory*, PhD Dissertation, State University of New York.

Hartley, R. (n.d.). Transmission of information, *The Bell System Technical Journal* 7(3): 535–563.

Klir, G. (2006). *Uncertainty and Information: Foundations of Generalized Information Theory*, Wiley-Interscience.

Klir, G. & Smith, R. (2001). On measuring uncertainty and uncertainty-based information: Recent developments, *Annals of Mathematics and Artificial Intelligence* 32: 5–33.

Leung, H. & Wu, J. (2000). Bayesian and Dempster-Shafer target identification for radar surveillance, *IEEE Transactions on Aerospace and Electronic Systems* 36(2): 432–447.

Perlovsky, L., Chernick, J. & Schoendorf, W. (1995). Multi-sensor ATR and identification of friend or foe using MLANS, *Neural Networks* 8(7/8).

Porretta, M., Schuster, W. & Ochieng, W. (2010). Strategic conflict detection and resolution using aircraft intent information, *The Journal of Navigation*, Vol. 63, The Royal Institute of Navigation, pp. 61–88.

Ristic, B. & Smets, P. (2005). Target classification approach based on the belief function theory, *IEEE Transactions on Aerospace and Electronic Systems* 41(2).

Schuster, W. & Ochieng, W. (2011). Airport surface movement: critical analysis of navigation system performance requirements, *The Journal of Navigation*, Vol. 64, The Royal Institute of Navigation, pp. 281–294.

Shafer, G. (1976). *A Mathematical Theory of Evidence*, Princeton University Press.

Yu, B. & Sycara, K. (2006). Learning the quality of sensor data in distributed decision fusion, *Proceedings of the 9th International Conference on Information Fusion*.

*Recent Advances in Aircraft Technology*

## **Subjective Factors in Flight Safety**

## Jozsef Rohacs

*Budapest University of Technology and Economics, Hungary*

## **1. Introduction**

The central deterministic element of conventional aircraft control systems is the pilot-operator. Such systems are called active endogenous subjective systems because (i) the actively used control inputs (ii) originate from inside elements (pilots) of the system as (iii) results of the operators' subjective decisions. The decisions depend on the situation awareness, knowledge, practice and skills of the pilot-operators, who may have to make decisions in situations characterised by a lack of information, robust human behaviour and their individual capabilities. These attributes, as subjective factors, have a direct influence on the system characteristics, system quality and safety.

Aircraft control with a human operator in the loop can be characterised by subjective analysis and vehicle motion models. The general model of solving the control problems includes the passive resources (information, energy: the vehicle control system in its physical form) and the active resources (the physical, intellectual, psychophysiological, etc. behaviour of the subjects, i.e. the operators). Decision-making is the appropriate selection of the required results leading to the best (most effective, safest, etc.) solutions.

This chapter defines flight safety and investigates stochastic aircraft motion. It shows the disadvantages of the stochastic approximation and discusses how the methods of subjective analysis can be applied to the evaluation of flight safety.

The applicability of the developed method of investigation will be demonstrated by an analysis of controlled aircraft landing. The applied equations of motion describe the motion of the aircraft in the vertical plane only. The boundary constraints are defined for velocity, trajectory angle and altitude. The subjective factor is the ratio of the required and the available time for the decision on a go-around. The decision depends on the available information and the psychophysiological condition of the pilot-operators, and can be determined by the theory of statistical hypotheses. The endogenous dynamics of the given active system is modelled by a modified Lorenz attractor.

## **2. Flight safety**

#### **2.1 Definitions**

Safety is the condition of being safe: freedom from danger, risk or injury. From the technical point of view, safety is a set of methods, rules and technologies applied to avoid emergency situations caused by unwanted system uncertainties, errors or failures appearing randomly.


In practice, the analysis of accident statistics could characterize the flight risks. Such statistics give the evidences for the well-known facts (Rohacs, 1995, 2000; Statistical 2008): (i) the longest part of the flight (with about 50 - 80 % of flight time) is the cruise phase, which only accounts for 5 - 8 % of the total accidents and 6 - 10 % of the total fatal accidents, (ii) the most dangerous phases of flight are the take-off and landing, because during this about 2 % of flight time the 25 - 28 % of fatal accidents are occurring, and (iii) generally nearly 80 % of the accidents are caused by human factors and about 50 % of them are initiated by the pilots. A good example of using accident statistics is shown in Figure 1. Beside showing the effects of technological development on the reduction of flight risks, it also shows that since 2003, the European fatal accident rate - as fatalities per 10 million flights - has increased, without

knowing – so far – the reason causing it.

Fig. 1. Characterization of the European accident statistics (EASA, 2008).

appearing the fatal accident following the accidents are the same.

**2.3 Human factors** 

role of human factors is increasing.

The accident statistics could be also used for flight safety analysis in original, or unusual method. While accident statistics demonstrate a considerable higher risk, accident rates for small aircraft, according to the Figure 2., the ratio of all and fatal accidents are nearly the same for airlines and general aviation. This means that the small and larger civilian aircraft are developed, designed, and produced with the same philosophy, at least the same safety approach and 'structural damping of damage processes'. The flight performances, flight dynamics, load conditions, structural solutions are different for small and larger aircraft, and therefore the accidents rates are also different. However, the risk of hard aftermath,

In 1908, 80 % of licensed pilots were killed in flight accident (Flight, 2000). Since that, the World and the aviation have changed a lot. After 1945, the role of technical factors in causing the accidents (and generally in safe piloting) is continuously decreasing while the

As it was outlined already, nearly 80 % of accidents are caused by human factors. (Rohacs, 1995, 2000; Statistical, 2008). While, only 4 -7 % of accidents are defined by the "independent investigators" as accident caused by unknown factors. According to Ponomerenko (2000) this figure might be changed when one tries to establish the truth in fatal accidents,

Safety and security are the twin brothers. The difference between them could be defined such as follows:


Safety related investigations start as early as the development of the given system. At the definition and preliminary phase of a new system, one should also concentrate some efforts on the (i) potential safety problems, (ii) critical situations, (iii) critical system failures, (iv) and their possible classification, identification. After the risk assessment, the next step is the development of a set of policies and strategies to mitigate those risks. Generally, the safety policies and strategies are based on the synergy of the


The safety of any system can be evaluated by using risk analysis methods. Risk is the probability that an emergency situation occurs in the future, one which could still be avoided or mitigated, rather than a present problem that must be addressed immediately.

Safety and security are twin brothers. The difference between them could be defined as follows: safety deals with failures appearing randomly, while security deals with intentional actions – threats.

Safety related investigations start as early as the development of the given system. At the definition and preliminary phase of a new system, one should also concentrate some effort on (i) potential safety problems, (ii) critical situations, (iii) critical system failures, and (iv) their possible classification and identification. After the risk assessment, the next step is the development of a set of policies and strategies to mitigate those risks. Generally, the safety policies and strategies are based on the synergy of the

- physical safety (characteristics of the applied materials, structural solutions, system architecture that help to overcome safety critical – emergency situations),
- technical safety (dedicated active or passive safety systems, including e.g. sensors to enhance situation awareness),
- non-technical safety (such as policy manuals, traffic rules, awareness and mitigation programs).

## **2.2 Flight safety metrics**

The evaluation of flight safety is not a simple task. There are no uniformly applicable metrics for the evaluation. Some governments have already published (CASA, 2005; FAA, 2006; Transport, 2007) their opinions and possible methodologies for flight safety measures that are applied by evaluators (Ropp & Dillmann, 2008). The problem is associated with the very complex character of flight safety, depending on the developed and applied

- safety plan with management commitment,
- documentation management,
- risk monitoring, education and training,
- safety assurance (quality management on safety),
- emergency response plan.

Risk analysis methods, defining the probability of emergency situations or risks, are very widely used for flight safety evaluation. The metric of risk is the probability of the given risk as an unwanted, dangerous event. This probability has at least four slightly different interpretations:

- classic - the unwanted event,
- logic - the necessary evil,
- objective - relative frequency,
- subjective - individual explanation of the events.

In practice, the analysis of accident statistics could characterize the flight risks. Such statistics give evidence for the well-known facts (Rohacs, 1995, 2000; Statistical 2008): (i) the longest part of the flight (about 50 - 80 % of the flight time) is the cruise phase, which accounts for only 5 - 8 % of the total accidents and 6 - 10 % of the total fatal accidents, (ii) the most dangerous phases of flight are take-off and landing, because during these roughly 2 % of the flight time, 25 - 28 % of the fatal accidents occur, and (iii) generally nearly 80 % of the accidents are caused by human factors, and about 50 % of them are initiated by the pilots.

A good example of using accident statistics is shown in Figure 1. Besides showing the effects of technological development on the reduction of flight risks, it also shows that since 2003 the European fatal accident rate - as fatalities per 10 million flights - has increased, without the cause being known so far.

Fig. 1. Characterization of the European accident statistics (EASA, 2008).

Accident statistics can also be used for flight safety analysis in an original, or unusual, way. While accident statistics demonstrate considerably higher accident rates for small aircraft, according to Figure 2 the ratio of fatal accidents to all accidents is nearly the same for airlines and general aviation. This means that the small and larger civilian aircraft are developed, designed, and produced with the same philosophy, or at least with the same safety approach and 'structural damping of damage processes'. The flight performance, flight dynamics, load conditions, and structural solutions are different for small and larger aircraft, and therefore the accident rates are also different. However, the risk of a hard aftermath, namely a fatal accident following an accident, is the same.

#### **2.3 Human factors**

In 1908, 80 % of licensed pilots were killed in flight accidents (Flight, 2000). Since then, the world and aviation have changed a lot. After 1945, the role of technical factors in causing accidents (and generally in safe piloting) has been continuously decreasing, while the role of human factors has been increasing.

As already outlined, nearly 80 % of accidents are caused by human factors (Rohacs, 1995, 2000; Statistical, 2008), while only 4 - 7 % of accidents are attributed by the "independent investigators" to unknown factors. According to Ponomarenko (2000), this figure might change when one tries to establish the truth in fatal accidents, especially by taking into account the socio-psychological aspects and the use of " 'guilt' and 'guilty' as the 'master key' to unlock the true cause of the accident. Hence, the bias of the investigators often does not represent the interest of the victims, but that of the administrative superstructure. It side-steps the legal and socio-psychological estimation of aircrew behavior, and replaces it by formal logic analysis of known rules: permitted/forbidden, man or machine, chance/relationship, violated/not violated, etc."

Subjective Factors in Flight Safety 267

Fig. 2. An original way to compare airliner and GA accident statistics.

Accident investigations show that human factors could be divided into three groups depending on their origins:

- Technical factors: disharmony in the human - machine interface. The best-known cases from this group are the PIOs (pilot-induced oscillations). Some of these factors, like the limitations of the control stick forces, are included even in the airworthiness requirements.
- Ergonomic factors: a lack of ergonomic information display, guidance control, out-of-cockpit visibility, design of the instrument panel, as well as of adequate training (Ponomarenko 2000).
- Subjective factors: unpredictable and non-uniform human behavior, i.e. making wrong decisions because of the lack of knowledge and practice of the operators.

The different groups have nearly the same role in accident causality, equal to 25, 35, and 40 %, respectively. Others (Lee, 2003) call the same types of factors system data problems, human limitations, and time related problems.

The first group of human factors, the harmonization of the man-machine interface from the technical point of view, is well investigated, and such human factors are taken into account in aircraft development and design processes. Generally, the handling qualities or (nowadays) the carefree handling characteristics are the merits used as the main philosophical approaches to solve these types of problems.

The ergonomic factors have been investigated a lot over the last 40 - 50 years. The third generation of fighters was developed with the use of ergonomics, especially in the development of the cockpit, which was radically redesigned in that period. However, the ergonomic investigations followed the governing idea of how to make things better for the operator. A new approach has developed over the last 20 years that investigates 'ergatic' systems (see for example Pavlov & Chepijenko, 2009), in which the operator (pilot) is one of the important (perhaps the most important) elements of the system, and the psycho-physiological behavior of the operator may play a determining role in the operation of the system.

The third group of human factors has not been investigated at the required level yet. Generally, the key element of the human reaction to a situation, especially to an emergency situation, is time. *However, the speed and time of reaction is "... not determined by the amount of processed information, but by the choice of the signal's importance, which is always subjective and affected by individual personality traits" (Ponomarenko 2000)*. In an emergency situation, flight safety does not depend as much on the detailed information on the emergency situation and the amount of pilot-supporting information, as on the whole picture including space and time, the knowledge and practice of the pilots, and the actual determination of the ethical limits of man's struggle with the arisen situation.

Flight safety could also be analyzed through the prediction of future air transport characteristics. For example, the NASA-initiated zero accident project (Commercial, 2000; Shin, 2000; White, 2009) leads to the following general conclusion: before the introduction of the wide-body aircraft, the risk of flight had been decreased by a factor of 10, but it cannot be further reduced with the present technical and technological methods (Rohacs, 1998; Shin, 2000). Even so, the number of aircraft and the number of yearly and daily flights are continuously increasing (Fig. 3). Seeing this, the absolute number of accidents is expected to increase in the future, which might even lead to the vision made by Boeing, in which by 2016/17 one large-body aircraft is envisioned to have an accident each week. "Given the very visible, damaging, and tragic effects of even a single major accident, this number of accidents would clearly have an unacceptable impact upon the public's confidence in the aviation system and impede the anticipated growth of the commercial air-travel market" (Shin, 2000). Therefore, new methods like emergency management might need to be developed and applied to keep the absolute number of accidents at the present level.

Seeing the envisioned rapid development of future aviation, especially of the small aircraft transportation system, the conclusions derived from the zero accident program and the use of subjective analysis in flight safety investigations might be relevant to keep in mind.

Fig. 3. The NASA zero accident program (Commercial, 2000).


#### **3. Flight safety evaluation**

#### **3.1 Technical approach to flight safety evaluation**

Technically, flight risks are always initiated by deviations in the system parameters. Therefore, the investigation of the system parameter uncertainties and anomalies might be applied as a basis to evaluate flight safety. Flight risk is the probability that an emergency situation occurs, i.e. that the system parameters (at least one of them) leave their tolerance zones. In view of this, flight safety might be characterized by the probability of the deviations (in the structural and operational characteristics) being larger than those predetermined by the airworthiness (safety) requirements (Bezapasnostj 1988, Rohacs & Németh, 1997).

Mathematically, the flight operation quality, $\mathbf{Q}_r(t)$, could be given in the following simple form:

$$\mathbf{Q} \equiv \{a_i\}\ ,\quad i = 1,\ldots,n \tag{1}$$

where $a_i$ are the parameters defining the attributes of the given aircraft or system. In a more general form, it could be given as:

$$a\_i = f\left(a\_1, a\_2, \ldots a\_{i-1}, a\_{i+1}, \ldots a\_n\right). \tag{2}$$

In real flight situations, the real quality of operation $\mathbf{Q}_r(t)$ deviates from the design (nominal) quality $\mathbf{Q}_r^n(t)$:

$$\delta \, \mathbf{Q}\_{\mathbf{r}}(t) = \mathbf{Q}\_{\mathbf{r}}(t) - \mathbf{Q}\_{\mathbf{r}}^{n}(t) \quad . \tag{3}$$

For each case, the acceptable level of deviation is bounded by the flight safety threshold ($\delta_{\mathrm{fs}}$),

$$\left|\delta\,\mathbf{Q}_r(t)\right| \geq \delta_{\mathrm{fs}}\ , \tag{4}$$

where $P\left(\left|\delta\,\mathbf{Q}_r(t)\right| \geq \delta_{\mathrm{fs}}\right)$ describes the probability of a flight event (flight outside the prescribed operational modes).

By summing all the potential flight events, flight safety (*Pfs*) could be given with the following probability:

$$P\_{\rm fs} = 1 - \sum\_{i=1}^{n} R\_i\left(t\right) P\_i\left(t\right) \tag{5}$$

where $R_i(t)$ is the risk of a flight accident.

For the time period $[0, T]$, the following integral risk can be applied:

$$\tilde{P}_{\mathrm{fs}} = 1 - \frac{1}{T}\int_{0}^{T}\left(1 - P_{\mathrm{fs}}(t)\right)dt\ . \tag{6}$$
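As a quick numerical illustration, Eqs. (5) and (6) can be evaluated for a handful of flight-event types. All numbers below (the $R_i$ and $P_i$ values and the flight duration $T$) are invented for the sketch and are not data from the chapter.

```python
# Sketch of Eqs. (5)-(6) with purely hypothetical numbers: three types of
# flight events with risks R_i(t) and event probabilities P_i(t).
import numpy as np

def p_fs(t, risks, probs):
    """Eq. (5): instantaneous flight safety level 1 - sum(R_i * P_i)."""
    return 1.0 - sum(R(t) * P(t) for R, P in zip(risks, probs))

# Assumed constant-in-time risk and event-probability functions.
risks = [lambda t: 1e-3, lambda t: 5e-4, lambda t: 1e-4]
probs = [lambda t: 0.2,  lambda t: 0.1,  lambda t: 0.05]

# Eq. (6): time-averaged (integral) safety over [0, T]; on a uniform grid
# the mean of (1 - P_fs) approximates (1/T) * integral of (1 - P_fs) dt.
T = 2.0                                    # flight duration, e.g. hours
ts = np.linspace(0.0, T, 201)
unsafety = np.array([1.0 - p_fs(t, risks, probs) for t in ts])
p_fs_int = 1.0 - unsafety.mean()

print(p_fs_int)
```

With time-varying $R_i(t)$, $P_i(t)$ the same grid average stands in for the integral of Eq. (6).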


Because $\delta\,\mathbf{Q}_r(t)$ is a random value with probability density $\rho(\delta\,\mathbf{Q}_r)$, the flight safety level can be given as:

$$P_{\mathrm{fs}} \equiv P\left(\left|\delta\,\mathbf{Q}_r(t)\right| \leq \delta_{\mathrm{fs}}\right) = \int_{-\delta_{\mathrm{fs}}}^{\delta_{\mathrm{fs}}} \rho\left(\delta\,\mathbf{Q}_r\right)\, d\,\delta\,\mathbf{Q}_r\ . \tag{7}$$

According to the Tchebyshev inequality

$$P\left(\left|\delta\,\mathbf{Q}_r(t)\right| > \delta_{\mathrm{fs}}\right) \leq D\left(\delta\,\mathbf{Q}_r\right)/\,\delta_{\mathrm{fs}}^2\ , \tag{8}$$

the flight safety level takes the form:

$$P_{\mathrm{fs}} \equiv P\left(\left|\delta\,\mathbf{Q}_r(t)\right| \leq \delta_{\mathrm{fs}}\right) \geq 1 - D\left(\delta\,\mathbf{Q}_r\right)/\,\delta_{\mathrm{fs}}^2\ , \tag{9}$$

where $D\left(\delta\,\mathbf{Q}_r\right)$ is the dispersion (variance) of $\delta\,\mathbf{Q}_r$.
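The relation between Eqs. (7)-(9) can be checked numerically. The sketch below assumes, purely for illustration, that the deviation $\delta\,\mathbf{Q}_r$ is a zero-mean Gaussian scalar; the exact safety level of Eq. (7) then always lies above the Chebyshev lower bound of Eq. (9).

```python
# Numerical check of Eqs. (7)-(9): for an assumed zero-mean Gaussian
# deviation delta_Q with dispersion D, the exact safety level of Eq. (7)
# must dominate the Chebyshev bound of Eq. (9).
import math

D = 0.04                       # dispersion D(delta_Q), assumed value
delta_fs = 0.5                 # flight safety threshold, assumed value

sigma = math.sqrt(D)
# Eq. (7) for a Gaussian density: P(|delta_Q| <= delta_fs) = erf(delta_fs / (sigma*sqrt(2)))
p_fs_exact = math.erf(delta_fs / (sigma * math.sqrt(2.0)))
# Eq. (9): the distribution-free Chebyshev lower bound
p_fs_bound = 1.0 - D / delta_fs**2

print(p_fs_exact, p_fs_bound)
```

The bound is distribution-free, which is why it is much looser than the Gaussian value.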

This type of system approach has been developed, applied, and improved. Generally, once the aircraft is investigated as a dynamic system, the effects of the system anomalies can be given by the following types of probabilities (Rohacs 1986; Rohacs & Nemeth, 1997):

$$P_1\left(\mathbf{y}(t)\,\Big|_{\,t_0 \leq t \leq t+\tau,\ \mathbf{x}\in\Omega_{\mathbf{x}},\ \mathbf{u}\in\Omega_{\mathbf{u}},\ \mathbf{z}\in\Omega_{\mathbf{z}},\ \mathbf{p}\in\Omega_{\mathbf{p}}}\right)\ , \tag{10.a}$$

$$P\_2\left(\mathbf{u}\left(t\right)\Big|\_{t\_0 \le t \le t+\tau,\ \mathbf{x}\in\Omega\_{\mathbf{x}},\ \mathbf{y}\in\Omega\_{\mathbf{y}},\ \mathbf{z}\in\Omega\_{\mathbf{z}},\ \mathbf{p}\in\Omega\_{\mathbf{p}}}\right)\tag{10.b}$$

where $\mathbf{y}\in R^r$ defines the output (measurable) signal vector (measured vector of operational characteristics), $\mathbf{x}\in R^n$ is the state vector, $\mathbf{u}\in R^m$ gives the input (control) vector, $\mathbf{z}\in R^i$ stands for the vector of environmental characteristics (vector of service conditions), $\mathbf{p}\in R^k$ is the parameter vector characterizing the state of the aircraft, $t$ defines the time, $\tau$ provides the elementary time, and $\Omega_{\mathbf{x}}, \Omega_{\mathbf{y}}, \Omega_{\mathbf{z}}, \Omega_{\mathbf{u}}, \Omega_{\mathbf{p}}$ are the allowed ranges for the given characteristics.

If the joint density function,

$$f_{\Sigma} = f\left[\mathbf{x}(t), \mathbf{u}(t), \mathbf{z}(t), \mathbf{p}(t), \mathbf{y}(t)\right] \tag{11}$$

is known, then the recommended characteristics can be calculated as:

$$P_1\left(\mathbf{y}(t)\in\Omega_{\mathbf{y}}\,\middle|\,\ldots\right) = \frac{\displaystyle\int_{\Omega_i} f_{\Sigma}\, d\mathbf{x}\, d\mathbf{u}\, d\mathbf{z}\, d\mathbf{p}\, d\mathbf{y}}{\displaystyle\int_{\omega_j} d\mathbf{y} \int_{\omega} f_{\Sigma}\, d\mathbf{x}\, d\mathbf{u}\, d\mathbf{z}\, d\mathbf{p}}\qquad \left(i \in \mathbf{x},\mathbf{u},\mathbf{z},\mathbf{p}\right)\ , \tag{12.a}$$

$$P_2\left(\mathbf{u}(t)\in\Omega_{\mathbf{u}}\,\middle|\,\ldots\right) = \frac{\displaystyle\int_{\Omega_i} f_{\Sigma}\, d\mathbf{x}\, d\mathbf{z}\, d\mathbf{p}\, d\mathbf{y}}{\displaystyle\int_{\omega_i} d\mathbf{u} \int_{\omega} f_{\Sigma}\, d\mathbf{x}\, d\mathbf{z}\, d\mathbf{p}\, d\mathbf{y}}\qquad \left(i \in \mathbf{x},\mathbf{u},\mathbf{z},\mathbf{p},\mathbf{y}\right)\ . \tag{12.b}$$
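The ratio-of-integrals structure of Eqs. (12.a)-(12.b) lends itself to Monte-Carlo evaluation. The sketch below shrinks the problem to two scalar variables $x$ and $y$, with an assumed correlated Gaussian joint density standing in for $f_{\Sigma}$; the full vector case is handled identically, the integrals simply becoming sample counts.

```python
# Monte-Carlo sketch of the ratio-of-integrals in Eqs. (12.a)-(12.b),
# reduced to two scalar variables x and y. The joint density f_Sigma is
# taken as a correlated bivariate Gaussian - an illustrative assumption.
import random
random.seed(1)

def sample_joint():
    """Draw (x, y) from the assumed joint density."""
    x = random.gauss(0.0, 1.0)
    y = 0.8 * x + random.gauss(0.0, 0.6)   # y is correlated with x
    return x, y

omega_x = (-1.0, 1.0)          # allowed range Omega_x
omega_y = (-1.5, 1.5)          # allowed range Omega_y

n = 200_000
in_x = in_both = 0
for _ in range(n):
    x, y = sample_joint()
    if omega_x[0] <= x <= omega_x[1]:
        in_x += 1
        if omega_y[0] <= y <= omega_y[1]:
            in_both += 1

# Numerator and denominator integrals of Eq. (12.a) become sample counts:
# P1( y(t) in Omega_y | x in Omega_x ).
p1 = in_both / in_x
print(round(p1, 3))
```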



Unfortunately, this method of determining the effects of the system anomalies on flight safety is often considered to be too complex, even though it is reasonable, since the formulas given above can be supported with statistical data collected during aircraft operation. The method of determining the flight risk by the probability approach (as given in (Gudkov & Lesakov, 1968; Howard, 1980)) is seen as too complicated once it is also desirable to consider the so-called common failures (failures appearing at the same time due to different reasons) and dependent failures or errors. A nice example of using the described method, shown in Figures 4 and 5, is the investigation of the changes in the geometrical and operational characteristics of aircraft studied by (Rohacs 1986) and published in several articles, like (Rohacs, 1990).

Fig. 4. The level book and examples of the measuring data for Mig-21.

Fig. 5. Probability of the lack of generated lift on Mig-21 fighters due to the changes in wing geometry during operation (solid line - single-seat, dotted line - two-seat aircraft).

#### **3.2 Stochastic model of flight risk**

The aircraft's motion is the result of the deterministic control and the stochastic disturbance processes. Such motion might be mathematically given by the following stochastic (random) differential equation, called a diffusion process (Gardiner, 2004):

$$\mathbf{\dot{x}} = f(\mathbf{x}, t) + \sigma(\mathbf{x}, t)\eta(t) \quad \text{,} \tag{13}$$
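A minimal simulation of the diffusion process (13) can be sketched with the Euler-Maruyama scheme. The linear drift $f(x,t) = -ax$ and the constant noise intensity $\sigma$ are illustrative assumptions, chosen so that the stationary variance $\sigma^2/(2a)$ is known in closed form and can serve as a check.

```python
# Euler-Maruyama simulation of the diffusion process (13),
# dx = f(x,t) dt + sigma(x,t) dW, with an assumed linear drift
# f(x,t) = -a*x and constant sigma; stationary variance is sigma^2/(2a).
import random
random.seed(7)

a, sigma = 1.0, 0.3
dt, n_steps, n_paths = 0.01, 1000, 2000    # simulate each path to t = 10

final = []
for _ in range(n_paths):
    x = 0.5                                # initial deviation of the process
    for _ in range(n_steps):
        dw = random.gauss(0.0, dt ** 0.5)  # Wiener increment
        x += -a * x * dt + sigma * dw
    final.append(x)

mean = sum(final) / n_paths
var = sum((v - mean) ** 2 for v in final) / n_paths
print(round(mean, 3), round(var, 3))       # mean -> 0, var -> sigma**2/(2*a) = 0.045
```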


Naturally, this equation might also be given in vector form. The first part of the right side of the equation describes the drift (direction of the changes) of the stochastic process passing through $x(t) = X$ at the moment $t$, while the second part shows the scattering (variance) of the random process. Here $\eta(t)$ is the random disturbance (e.g. air turbulence, or the cumulative effects of random load processes, including even extreme loads such as a hard touchdown, etc.).

Seeing that the future states depend only on the present state, equation (13) in fact describes a Markov process (Ibe, 2008; Rohacs & Simon, 1989; Tihonov, 1977). Such a process can be fully described by its transition probability density function:

$$p(\mathbf{x}\_2, t\_2 | X\_1, t\_1) \tag{14}$$

which characterizes the probability distribution of the continuous random process $x(t)$ at the moment $t_2$, once it passes through $x(t_1) = X_1$ at time $t_1$.

The transition probability density function can be described by the following Fokker - Planck - Kolmogorov equations (Gardiner, 2004):

$$\frac{\partial p(\mathbf{x}_2,t_2|\mathbf{X}_1,t_1)}{\partial t_2} = -\frac{\partial}{\partial \mathbf{x}_2}\Big[f(\mathbf{x}_2,t_2)\,p(\mathbf{x}_2,t_2|\mathbf{X}_1,t_1)\Big] + \frac{1}{2}\frac{\partial^2}{\partial \mathbf{x}_2^2}\Big[\sigma^2(\mathbf{x}_2,t_2)\,p(\mathbf{x}_2,t_2|\mathbf{X}_1,t_1)\Big]\ , \tag{15.a}$$

or

$$\frac{\partial p(\mathbf{x},t)}{\partial t} = -\frac{\partial}{\partial \mathbf{x}}\big[f(\mathbf{x},t)\,p(\mathbf{x},t)\big] + \frac{1}{2}\frac{\partial^2}{\partial \mathbf{x}^2}\big[\sigma^2(\mathbf{x},t)\,p(\mathbf{x},t)\big]\ . \tag{15.b}$$

Statistical flight mechanics has already worked out several methods for the application of such models. For example, statistical linearization, through the application of the sensitivity function matrix to the flight mechanics models and the derivation of the set of equations for the moments of the investigated stochastic process, could be used to study the scattering of the process.
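Eq. (15.b) can also be integrated directly. The sketch below uses an explicit finite-difference scheme in one dimension with an assumed linear drift $f(x,t) = -ax$ and constant $\sigma$ (not a model from the chapter); the density relaxes to the stationary solution, whose variance $\sigma^2/(2a)$ serves as a check.

```python
# Explicit finite-difference integration of the FPK equation (15.b) in one
# dimension, for an assumed linear drift f(x,t) = -a*x and constant noise
# intensity sigma; the density relaxes toward the stationary Gaussian.
import numpy as np

a, sigma = 1.0, 0.3
x = np.linspace(-2.0, 2.0, 161)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / sigma**2                # well inside the explicit stability limit

p = np.exp(-(x - 0.5) ** 2 / (2 * 0.01))   # initial density centred at x = 0.5
p /= p.sum() * dx                          # normalize: integral of p dx = 1

f = -a * x
for _ in range(int(5.0 / dt)):             # integrate to t = 5 (near stationary)
    drift = -(np.roll(f * p, -1) - np.roll(f * p, 1)) / (2 * dx)   # -d/dx (f p)
    diff = 0.5 * sigma**2 * (np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dx**2
    p = p + dt * (drift + diff)
    p[0] = p[-1] = 0.0                     # density is negligible at the far boundaries

mean = (x * p).sum() * dx
var = (x**2 * p).sum() * dx - mean**2
print(round(var, 4))                       # close to sigma**2 / (2*a) = 0.045
```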

Using the equations (15.a), (15.b), which define the Markov process, the following definition could be made:

$$p\left(\mathbf{X}_2,\,t_2\,\middle|\,\mathbf{X}_1,\,t_1\right) = \sum_{\mathbf{x}(t)} p\left(\mathbf{X}_2,\,t_2\,\middle|\,\mathbf{x},\,t\right) p\left(\mathbf{x},\,t\,\middle|\,\mathbf{X}_1,\,t_1\right),\qquad \left(t_2 \geq t \geq t_1\right)\ . \tag{16}$$

This is called the Chapman - Kolmogorov - Smoluchovski equation. It gives the possibility to approximate the investigated non-linear stochastic process, with continuous time and state space, by a Markov chain with continuous time and discrete state space. This leads us back to the situation chain process.
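For a discrete state space, Eq. (16) reduces to a matrix identity: the two-step transition matrix equals the product of one-step matrices, with the sum running over the intermediate state. The three states (N, F1, F2) and their transition probabilities below are hypothetical illustration values.

```python
# Discrete-state illustration of the Chapman-Kolmogorov-Smoluchovski relation,
# Eq. (16): two-step transition probabilities are obtained by summing over the
# intermediate state, i.e. by a matrix product of one-step matrices.
import numpy as np

P = np.array([            # one-step transition probabilities, row-stochastic
    [0.95, 0.04, 0.01],   # N  -> N, F1, F2
    [0.30, 0.60, 0.10],   # F1 -> N, F1, F2
    [0.00, 0.20, 0.80],   # F2 -> N, F1, F2
])

# Left side of Eq. (16): direct two-step matrix.
P2_direct = np.linalg.matrix_power(P, 2)
# Right side: explicit sum over the intermediate state k, as in Eq. (16).
P2_sum = np.array([[sum(P[i, k] * P[k, j] for k in range(3))
                    for j in range(3)] for i in range(3)])

assert np.allclose(P2_direct, P2_sum)
print(P2_direct[0])       # two-step probabilities starting from state N
```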

The space of the motion variables can be divided into several subspaces, called situations. The motion of the aircraft is in fact a series of situations in time. This is the situation dynamics.


Our theoretical and practical investigations on flight safety showed that the aircraft's operational process is a complicated process. For example, if a pilot reports an inoperative engine, then ATCOs often make 40 - 100 times more mistakes relative to normal circumstances. The simplified graph model of flight situations - taking such effects into account - is given in Figure 7. The advantages of this representation method over the others can be summarized as follows. Firstly, this model includes a new state, called the state of anomalies (*An*), in which the aircraft does not have any failures or errors, but its characteristics still deviate essentially from their nominal values. Secondly, the total set of states is decomposed, or grouped, into four subparts (structure, pilot, air traffic control, surroundings).

Fig. 7. The suggested general graph model of aircraft.

To simplify the representation of this method, Figure 7 shows only the nominal state decomposition (Rohacs & Nemeth, 1997; Rohacs, 2000). Even so, the different numbers of failures are further decomposed. State *N* is the prescribed nominal state. States *An* and *F1* might only be initiated by anomalies or failures in one of the aircraft's flight operation subsystems (e.g. aircraft structure, pilot, ATC, surroundings). On the other hand, the states *F2* and *F3* might be initiated by two or three failures appearing in any combination of the subsystems. For example, *F2* may contain a mistake of the pilot and of the ATCO, or two aircraft structural (system) failures.

According to these specific features of the model, the general Markov model should have 43 states. For example, in our model the state number 21 is the state with two failures generated in the structure and one initiated by the mistake of the pilot. As a consequence, the transfer matrix is composed of 43 x 43 elements, while the elements of the matrix are linear functions of $\mathbf{P}(t)$:

$$\lambda_{i,j} = \lambda_{i,j,o} + \mathbf{K}_{i,j}\,\mathbf{P}(t)\ ; \tag{20}$$

where $\lambda_{i,j,o}$ is the initial transfer matrix element and $\mathbf{K}_{i,j}$ is the vector of coefficients. The vector $\mathbf{K}_{i,j}$ may contain zero elements, too, if the given state has no influence on the transfer process. The determination of the vector elements of $\mathbf{K}_{i,j}$ is based on the theory of anomalies, dealing with the calculation of the real deviations, characteristics, and distributions. For example,

Accidents are the results of the situation process, which is assumed to be similar to the one given in Figure 6. Here, *N* marks the normal, conventional flight; *S1*, *S2*, *S3* are the states in which the aircraft has one (*F1*), two (*F2*), or three (*F3*) serious system failures; and *A* shows the accident situation (Rohacs & Nemeth, 1997; Rohacs, 2000).

Fig. 6. Simple graph model of aircraft pre-accident process

The Markov chain can be described by the transition probabilities, $\beta\_{i,j}$. These variables give the probability of moving the aircraft from a state (situation) *Si* to a state *Sj*. As is known, this type of process can be approximated by a Markov process under the following conditions:

- the probability of a transfer from one state into another through one or more other states is limited,
- the transition from one state into another occurs in a significantly short time, and
- the time spent in the states can be approximated by an exponential distribution.
Under the conditions mentioned above, the process could be described with the following model:

$$
\dot{\mathbf{P}}(t) = \boldsymbol{\Pi}(t)\,\mathbf{P}(t) \tag{17}
$$

where $\mathbf{P}(t)=\left[P\_i(t)\right]$ is the vector of probabilities defining the states *Si* (*i=N*, *F1*, *F2*, *F3*, *A*).

At this stage, one should give the applicable graph model and estimate the transition probability matrix.

In this simple case, the aircraft's operational process – a stochastic process with continuous time and the discrete states shown in Fig. 6 – could be approximated by the following Markov model:

$$
\dot{\mathbf{P}}(t) = \mathbf{B}(t)\,\mathbf{P}(t) \tag{18}
$$

where $\mathbf{P}(t)=\left[P\_i(t)\right]$ is the vector of probabilities that the aircraft is in the states *Si* (*i=N*, *F1*, *F2*, *F3*, *A*), and

$$\mathbf{B}(t) = \left[\beta\_{i,j}\right] \tag{19}$$

is a time-dependent transition matrix:

$$
\mathbf{B}(t) = \begin{bmatrix}
-\beta\_{N,F1}-\beta\_{N,F2}-\beta\_{N,F3}-\beta\_{N,A} & \beta\_{F1,N} & 0 & 0 & 0 \\
\beta\_{N,F1} & -\beta\_{F1,N}-\beta\_{F1,F2}-\beta\_{F1,F3}-\beta\_{F1,A} & \beta\_{F2,F1} & 0 & 0 \\
\beta\_{N,F2} & \beta\_{F1,F2} & -\beta\_{F2,F1}-\beta\_{F2,F3}-\beta\_{F2,A} & \beta\_{F3,F2} & 0 \\
\beta\_{N,F3} & \beta\_{F1,F3} & \beta\_{F2,F3} & -\beta\_{F3,F2}-\beta\_{F3,A} & 0 \\
\beta\_{N,A} & \beta\_{F1,A} & \beta\_{F2,A} & \beta\_{F3,A} & 0
\end{bmatrix}
$$
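A minimal sketch of how the model (18) with matrix (19) can be propagated in time. Every transition rate below (per flight hour) is a hypothetical placeholder, not a value from this chapter, and the matrix is held constant over the integration.

```python
# Forward-Euler propagation of dP/dt = B P for the states N, F1, F2, F3, A.
STATES = ["N", "F1", "F2", "F3", "A"]
beta = {  # beta[(i, j)]: rate of transfer from state i to state j (assumed)
    ("N", "F1"): 1e-3, ("N", "F2"): 1e-5, ("N", "F3"): 1e-7, ("N", "A"): 1e-8,
    ("F1", "N"): 5e-2, ("F1", "F2"): 1e-3, ("F1", "F3"): 1e-5, ("F1", "A"): 1e-6,
    ("F2", "F1"): 1e-2, ("F2", "F3"): 1e-3, ("F2", "A"): 1e-4,
    ("F3", "F2"): 1e-3, ("F3", "A"): 1e-2,
}

n = len(STATES)
B = [[0.0] * n for _ in range(n)]
for (i, j), rate in beta.items():
    si, sj = STATES.index(i), STATES.index(j)
    B[sj][si] += rate          # inflow into j from i
    B[si][si] -= rate          # outflow from i (diagonal of B)

def integrate(P0, hours, dt=0.01):
    """Forward-Euler integration of dP/dt = B P."""
    P = list(P0)
    for _ in range(int(hours / dt)):
        dP = [sum(B[r][c] * P[c] for c in range(n)) for r in range(n)]
        P = [p + dt * d for p, d in zip(P, dP)]
    return P

P = integrate([1.0, 0.0, 0.0, 0.0, 0.0], hours=100.0)
print(dict(zip(STATES, [round(p, 8) for p in P])))
# each column of B sums to zero, so sum(P) stays 1: probability only
# redistributes, slowly leaking into the absorbing accident state A
```

With state-dependent rates of the form (20), `B` would simply be rebuilt from the current `P` at every step instead of being held constant.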


Our theoretical and practical investigations on flight safety showed that the aircraft's operational process is a complicated process. For example, if a pilot reports an inoperative engine, then ATCOs often make 40 - 100 times more mistakes than under normal circumstances. The simplified graph model of flight situations, taking such effects into account, is given in Figure 7. The advantages of this representation method over the others can be summarized as follows. Firstly, this model includes a new state, called the state of anomalies (*An*), in which the aircraft does not have any failures or errors, but its characteristics still deviate essentially from their nominal values. Secondly, the total set of states is decomposed, or grouped, into four subparts (structure, pilot, air traffic control, surroundings).

Fig. 7. The suggested general graph model of aircraft.

To simplify the representation of this method, Figure 7 shows only the nominal state decomposition (Rohacs & Nemeth, 1997; Rohacs, 2000). Even so, the different numbers of failures are further decomposed. State *N* is the prescribed nominal state. States *An* and *F1* might only be initiated by anomalies or failures in one of the aircraft's flight operation subsystems (e.g. aircraft structure, pilot, ATC, surroundings). On the other hand, the states *F2*, *F3* might be initiated by two or three failures appearing in any combination of the subsystems. For example, *F2* may contain a mistake of the pilot together with one of the ATCO, or two aircraft structural (system) failures.

According to these specific features of the model, the general Markov model should have 43 states. For example, in our model state number 21 is the state with two failures generated in the structure and one initiated by a mistake of the pilot. As a consequence, the transfer matrix is composed of 43 × 43 elements, while the elements of the matrix are linear functions of $\mathbf{P}(t)$:

$$\beta\_{i,j} = \beta\_{i,j,o} + \mathbf{K}\_{i,j}\,\mathbf{P}(t)\;; \tag{20}$$

where $\beta\_{i,j,o}$ is the initial transfer matrix element and $\mathbf{K}\_{i,j}$ is the vector of coefficients. The vector $\mathbf{K}\_{i,j}$ may contain zero elements, too, if the given state has no influence on the transfer process.

The determination of the vector elements $\mathbf{K}\_{i,j}$ is based on the theory of anomalies, dealing with the calculation of the real deviations, characteristics, and distributions. For example,


human error depends on weather, traffic situations, or possible system failures. Naturally, if the aircraft is piloted by a pilot with limited skills, then the coefficients would be higher than for conventional small-aircraft operations. After the evaluation of different models based on the above-discussed Markov and semi-Markov processes, we found that the inadequate initial data and the relatively large number of states make the semi-Markov process unsuitable for our purposes.

Due to the large number of states, the developed model might seem too complex. On the other hand, the analysis of potential methods to simplify the model showed that the suggested approach can be transferred to the model shown in Figure 7. This is reasonable, since from a flight safety point of view the most important aspect is the transfer from one state into another, and not the detail of how that transfer is made. Therefore, the transition matrix element $\beta\_{F1,F2}$, describing the transfer from the one-failure state (*F1*) into the state with two failures (*F2*), can be given in the following form:

$$\beta\_{F1,F2} = \frac{\sum\_{i,j} \beta\_{F1\_i,F2\_j}\, P\_{F1\_i}}{\sum\_{k,i} \beta\_{An\_k,F1\_i}\, P\_{An\_k}}\,, \tag{21}$$

where *An* indicates the state with anomalies, and *k, i, j* are indexes defining the states.
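A small sketch of the aggregation rule (21): the reduced-model rate is assembled from the rates between the decomposed sub-states. All sub-state rates and probabilities below are hypothetical placeholders, not values from this chapter.

```python
# Aggregated transition rate beta_{F1,F2} per eq. (21), with two sub-states
# in each of An, F1 and F2 for illustration.
beta_F1F2 = [[2e-4, 1e-4],   # beta_F1F2[i][j]: i-th F1 sub-state -> j-th F2 sub-state
             [5e-5, 3e-4]]
P_F1 = [1e-3, 4e-4]          # probabilities of being in the F1 sub-states

beta_AnF1 = [[1e-3, 2e-3],   # beta_AnF1[k][i]: k-th An sub-state -> i-th F1 sub-state
             [4e-3, 1e-3]]
P_An = [5e-2, 2e-2]          # probabilities of the anomaly sub-states

num = sum(beta_F1F2[i][j] * P_F1[i]
          for i in range(len(P_F1)) for j in range(len(beta_F1F2[0])))
den = sum(beta_AnF1[k][i] * P_An[k]
          for k in range(len(P_An)) for i in range(len(beta_AnF1[0])))
beta_F1_F2 = num / den       # eq. (21)
print(beta_F1_F2)
```

The point of the construction is that the detailed 43-state bookkeeping collapses into a single effective rate between the reduced states.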

Fig. 8. Flight risk when considering (state *An* included, solid blue line) or neglecting (green dashed line) the effects of anomalies.

As a result, the general model – describing the real interactions between different types of failures, distinguishing common and depending failures – could be reduced to a simple model.

The developed model was used for the analysis of the aircraft control. Some results are shown in Figures 8. and 9.


Fig. 9. Probability of appearance of first failures (solid blue line: pilot error (failure); green dashed line: pilot error in case of system anomalies; red dashed line: structure failure; blue dash-dot line: structure failure calculated considering the influence of the anomalies).

## **4. Subjective analysis and flight safety**

#### **4.1 Theoretical background**

The major determinative element of the aircraft's conventional control systems is the pilot. Such systems are called ergatic active endogenous systems (Kasyanov, 2007), since they are actively controlled by solutions initiated by ergates (Greek ἐργάτης ergatēs, worker), i.e. by the human organism (e.g. nervous cells). The control solution thus comes from inside the system, from the operator. Such effects are often called endogenous feedback or endogenous dynamics (Banos, Lamnabhi-Lagarrigue & Montoya, 2001; Fliess et al., 1999; Nieuwstadt, 1997). Because pilots make their decisions upon their situation awareness, knowledge, practice and skills, i.e. in a subjective way, the system is also subjective. Besides their robust behaviors and individual possibilities, pilots should, in certain circumstances, also make decisions even if the information for an appropriate reaction is limited.

Safety of active systems is determined by the risks initiated by the subjects at the centre of the given system. For example, flight safety is the probability that a flight happens without an accident. Aircraft move in three-dimensional space as a function of their aerodynamic characteristics, flight dynamics, environmental stochastic disturbances (e.g. wind, air turbulence) and the applied control. Pilots make decisions upon their situation awareness. They must define the problem and choose the solution from their resources, which makes human-controlled active systems endogenous. Resources are methods or technologies that can be applied to solve the problems (Kasyanov, 2007). These can be classified into the so-called (i) passive resources (finance, materials, information, energy, like the aircraft control system in its physical form) and (ii) active resources (physical, intellectual, psycho-physiological behaviors and possibilities of the subjects). The passive resources are therefore the resources of the system (e.g. air transportation system, ATM, services provided), while the active resources are related to the pilot. Based on these, decision making is in fact the process of choosing the right resources that leads to an optimal solution.


Subjects (like pilots) can develop their active resources (or competences) with theoretical studies and practical lessons. However, the ability to choose and use the right resources depends highly on (i) the information support, (ii) the available time, (iii) the real knowledge, (iv) the way of thinking, and (v) the skills of the subject. Such decisions are the results of the subjective analysis.

There is insufficient information on the physical, systematic, intellectual and physiological characteristics of the subjective analysis, as well as on the way of thinking and decision making of subject-operators like pilots. Only limited information is available on the time effects, the possible damping of the non-linear oscillations, and the long-term memory, which makes the decision system chaotic.

Flight safety can be evaluated by the combination of subjective analysis and aircraft motion models.

At first, the pilot as subject (Σ) must identify and understand the problem or the situation (*S*i), then from the set of accessible or possible devices, methods and factors (*S*p) must choose the disposable resources ($R^{disp}$) available to solve the identified problems, to finally decide and apply the required resources ($R^{req}$) (Kasyanov, 2007) (Fig. 10). For this task, the pilot applies active and passive resources. The active resources define how the passive resources are used:

$$R\_{\rm a}^{\rm req} = f\left(R\_{\rm p}^{\rm req}\right) \tag{22}$$

Fig. 10. Pilot decision – action process (endogenous dynamics) in aircraft operation (control) system.

Instead of the function between the resources (22), the literature often uses the velocity of transferring the passive resources into the active ones:

$$
\upsilon\_{\rm a}^{\rm req} = f\_v \left( \upsilon\_{\rm p}^{\rm req} \right) \upsilon\_{\rm p}^{\rm req} \tag{23}
$$

where

$$
\upsilon\_{\rm a}^{\rm req} = \frac{d R\_{\rm a}^{\rm req}}{dt}, \qquad \qquad \upsilon\_{\rm p}^{\rm req} = \frac{d R\_{\rm p}^{\rm req}}{dt}, \tag{24}
$$

and in simple cases

$$f\_v = \frac{\partial R\_\text{a}^{\text{req}}}{\partial R\_\text{p}^{\text{req}}} \,. \tag{25}$$
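Relations (23)–(25) are just the chain rule applied to the resource-transfer law, which can be checked numerically. The transfer function $R\_a=\sqrt{R\_p}$ and the spending profile below are hypothetical, chosen only to make the check concrete.

```python
import math

def R_p(t):                  # passive resources spent up to time t (assumed)
    return 2.0 * t

def R_a(t):                  # active resources produced (assumed transfer law)
    return math.sqrt(R_p(t))

t, h = 4.0, 1e-6
v_p = (R_p(t + h) - R_p(t - h)) / (2 * h)   # dR_p/dt, eq. (24)
v_a = (R_a(t + h) - R_a(t - h)) / (2 * h)   # dR_a/dt, eq. (24)
f_v = 0.5 / math.sqrt(R_p(t))               # analytic dR_a/dR_p, eq. (25)
print(v_a, f_v * v_p)                       # eq. (23): the two values coincide
```

The printed values coincide because the velocity of building up active resources is the sensitivity $f\_v$ times the velocity of spending passive ones.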


It is clear that the operational processes can be given by a series of situations: the pilot identifies the situation (*S*i), makes a decision and controls ($R\_a^{req}$), which transits the aircraft into the next situation (*S*j). (The situation *S*j is one of the set of possible situations.) This is a repeating process (Fig. 11), in which the transition from one situation into another depends on (i) the evaluation (identification) of the given situation, (ii) the available resources, (iii) the appropriate decision of the pilot, (iv) the correct application of the active resources, (v) the limitation of the resources and (vi) the affecting disturbances.

Fig. 11. Situation chain process of aircraft operational process as a result of an active subjective endogenous control.

The situation chain process can be given by the following mathematical formula:

$$\mathcal{L}\left(t\right) \colon \quad \left\{\mathbf{x}\_0,\, t\_0,\, \sigma\left(t\_f \in \left[t\_0, t\_0 + \tau\right]\right);\; R^{\mathrm{disp}}\left(t\_0\right),\, R^{\mathrm{req}}\left(t\_0\right), \ldots\right\}, \tag{26}$$

or in a more general approach:

$$\mathcal{L}\left(t\right) \colon \quad \left\{P \colon\ \sigma\_0\left(t\_0\right) \to \sigma\_j\left(t\_f \in \left[t\_0, t\_0 + \tau\right]\right) \in S\_f \subset S\_a;\; R^{\mathrm{disp}}\left(t\_0\right),\, R^{\mathrm{req}}\left(t\_0\right), \ldots\right\}; \tag{27}$$

where $\mathbf{x}\_0$ is the vector of parameters at the initial (actual starting) state at time $t\_0$; $\sigma$ gives the state of the system at the given time; $\tau$ defines the available time for the transition of the state vector into the set $S\_f$, not later than $t\_0 + \tau$; and $P$ are the problems of how to transit the system from the initial state into one of the possible states $S\_f \subset S\_a$ not later than $t\_0 + \tau$.

During a flight, one flight situation is followed by another. Therefore, the aircraft flight operational process, with continuous state space and time, can be approximated by a stochastic process with continuous time and a discrete state space of flight situations. This means that a flight is a typical situation chain process. (This is the basis for using the stochastic model of flight risk – see point 2.2.)

#### **4.2 Using the developed model to investigate the aircraft landing**

Final approach and landing are the most dangerous phases of flight. This is an even more significant problem for personal flights controlled by less-skilled pilots.

The developed method, applying subjective analysis to flight safety evaluation, was used to investigate the landing procedure of a small aircraft.

In this investigation, no side wind and no lateral motion were considered. By using the trajectory reference system – in which the *x* axis shows the direction of the wind, the *z* axis is


perpendicular to *x* in the local vertical plane, and the centre of the coordinate system is located in the aircraft's centre of gravity – the motion of the aircraft can be given by the motion and the rotation of its centre of gravity (Kasyanov, 2004):

$$m\frac{dV}{dt} = T\left(V, z, t\right) - W\sin\theta - D\left(V, z, t\right) \,, \tag{28.a}$$

$$
m V \frac{d\theta}{dt} = L\left(V, z, t\right) - W \cos \theta \,, \tag{28.b}
$$

$$I\_y \frac{dq}{dt} = M\left(\alpha, q, V, z, t\right) \, . \tag{28.c}$$

Due to the applied control, the thrust (*T*), the lift (*L*), the drag (*D*) and the aerodynamic moment (*M*) all clearly depend on time. The altitude (*z*) also has an influence on the variables above, through the ground effect. The mass (*m*), and therefore the weight (*W*), of the aircraft is assumed to be constant. The aircraft's velocity (*V*) and pitch rate (*q*) describe the motion, while the flight path angle (or descent angle, *θ*) gives the position of the aircraft. The angle of attack (*α*) is the difference between the pitch attitude and flight path angles:

$$
\alpha = \upsilon - \theta \ . \tag{29}
$$

The pitch rate and the change of the altitude can easily be given by:

$$
q = \frac{d\upsilon}{dt} \tag{30}
$$

$$\frac{dH}{dt} = V\sin\theta \,\,. \tag{31}$$

According to the flight operational manuals and airworthiness requirements, limitations (*mi* – minimum and *ma* – maximum) should be applied to the velocity, the descent angle and the decision altitude:

$$V \in \left[ V\_{mi}^{\*},\, V\_{ma}^{\*} \right], \tag{32.a}$$

$$\theta \in \left[ \theta\_{mi}^{\*},\, \theta\_{ma}^{\*} \right], \tag{32.b}$$

$$H \ge H\_{Dmi}^{\*}. \tag{32.c}$$
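The point-mass part of this model can be integrated numerically while the limits (32) are monitored. The aircraft data, the thrust/lift/drag models and every limit value below are hypothetical placeholders, not values from this chapter; the rotational equation (28.c) is omitted for brevity, so the lift coefficient is held fixed instead of being driven by the pitch dynamics.

```python
import math

g, m = 9.81, 1100.0                      # small-aircraft mass [kg] (assumed)
W = m * g
rho, S = 1.225, 16.2                     # air density [kg/m^3], wing area [m^2]
CL, CD0, k = 1.06, 0.035, 0.055          # fixed lift coefficient, drag polar

V, theta, H = 32.0, math.radians(-3.0), 120.0    # approach state: m/s, rad, m
T = 0.04 * W                             # constant partial thrust (assumed)

V_mi, V_ma = 26.0, 40.0                          # limits (32.a), assumed
th_mi, th_ma = math.radians(-6.0), 0.0           # limits (32.b), assumed
H_Dmi = 60.0                                     # decision altitude (32.c)

dt, ok = 0.05, True
for _ in range(1000):                    # up to 50 s of final approach
    q = 0.5 * rho * V * V * S            # dynamic pressure
    L, D = q * CL, q * (CD0 + k * CL * CL)
    V += dt * (T - W * math.sin(theta) - D) / m        # (28.a)
    theta += dt * (L - W * math.cos(theta)) / (m * V)  # (28.b)
    H += dt * V * math.sin(theta)                      # (31)
    if not (V_mi <= V <= V_ma and th_mi <= theta <= th_ma):
        ok = False                       # a limit of (32.a)-(32.b) violated
    if H <= H_Dmi:
        break                            # decision altitude reached
print(round(V, 1), round(math.degrees(theta), 2), round(H, 1), ok)
```

A land/go-around decision rule could then be layered on top of this loop, triggered by the limit flag when the aircraft reaches the decision altitude.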

A simple assumption can be applied: during an approach, pilots should decide whether to land or to make a go-around. For this decision they need time, which is the sum of (i) the time to understand and evaluate the given situation, $\sigma\_k$, (ii) the time for decision making and (iii) the time to react (covering also the reaction time of the aircraft to the applied decision) (Kasyanov, 2007):


$$t^{req} = t^{req}_{ue}\left(\sigma_k\right) + t^{req}_{dec}\left(S_a\right) + t^{req}_{react}\left(\sigma_k, S_a\right)\,. \tag{33}$$

Here $\sigma_k$ defines all possible situations (e.g. $\sigma_1$ might be the situation of landing at first approach without any problems, $\sigma_2$ could be related to the situation when the undercarriage system could not be opened, $\sigma_3$ might stand for a landing on the fuselage, $\sigma_4$ for go-around, or $\sigma_5$ for a successful landing after a second approach).

$S_a$ is the chosen solution from the set of possible solutions. It is clear that all solutions have some drawback, such as extra cost or extra fuel.

The subjective factor of pilots might be introduced with the use of the ratio of the required and disposable resources (Kasyanov 2007):

$$\overline{\tau}_k = \frac{R^{req}\left(\sigma_k\right)}{R^{disp}\left(\sigma_k\right)} = \overline{t}_k = \frac{t^{req}\left(\sigma_k\right)}{t^{disp}\left(\sigma_k\right)}\,.\tag{34}$$

In this case, an endogenous index can be defined as

$$\varepsilon_k\left(\sigma_k\right) = \frac{\overline{t}_k}{1-\overline{t}_k} = \frac{t^{req}\left(\sigma_k\right)}{t^{disp}\left(\sigma_k\right)-t^{req}\left(\sigma_k\right)} \quad \text{or} \quad \varepsilon_k\left(\sigma_k\right) = \frac{t^{req}\left(\sigma_k\right)+t^{dec}\left(S_a\right)}{t^{disp}\left(\sigma_k\right)+t^{dec}\left(S_a\right)-t^{req}\left(\sigma_k\right)} \tag{35}$$

where $t^{dec}\left(S_a\right)$ is the time required to recognize the set of alternative strategies.
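The ratio (34) and the endogenous index (35) reduce to one-line helpers; the time values used below are made-up illustrations in seconds, not measured data.

```python
# One-line helpers for Eqs. (34)-(35); all time values here are
# illustrative assumptions in seconds, not measured data.
def time_ratio(t_req, t_disp):
    """t_bar = t_req / t_disp, Eq. (34)."""
    return t_req / t_disp

def endogenous_index(t_req, t_disp, t_dec=0.0):
    """epsilon = (t_req + t_dec) / (t_disp + t_dec - t_req), Eq. (35);
    with t_dec = 0 this equals t_bar / (1 - t_bar)."""
    return (t_req + t_dec) / (t_disp + t_dec - t_req)
```

As $t^{req}$ approaches $t^{disp}$ the index grows without bound, which captures the vanishing time margin near the decision altitude.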

Naturally, we can assume that pilots are able to evaluate the consequences of their decisions, and therefore they can evaluate the risk of the applied solutions. Such an evaluation can be defined as the subjective probability of situations, $P\left(\sigma_k\right)$, the canonic distribution of which, as the distribution of the canonic assembly of the preferences, is assumed to hold the following form:

$$p\left(\sigma_k\right) = \frac{P^{-\alpha}\left(\sigma_k\right)e^{-\beta\varepsilon_k\left(\sigma_k\right)}}{\sum_{q=1}^{N}P^{-\alpha}\left(\sigma_q\right)e^{-\beta\varepsilon_q\left(\sigma_q\right)}}\,,\tag{36}$$

where $p\left(\sigma_k\right)$ describes the distribution of the best alternatives from a negative point of view.

The time-dependent coefficients $\alpha$ and $\beta$ should be chosen in a way to model the endogenous dynamics, i.e. to model the subjective psycho-physiological personalities of pilots. The qualities of the pilots depend on different factors, including a "periodical" incapacity to make decisions that increases while getting closer to the decision time (altitude) of go-around.
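Equation (36) is a softmax-like weighting of the subjective probabilities by the endogenous indices. A minimal sketch, in which the values of $P$, $\varepsilon_k$, $\alpha$ and $\beta$ are assumed purely for illustration:

```python
import math

# Sketch of the canonic preference distribution of Eq. (36). The subjective
# probabilities P, endogenous indices eps, and the coefficients alpha, beta
# used below are illustrative assumptions.
def preferences(P, eps, alpha, beta):
    """p(sigma_k) ~ P(sigma_k)**(-alpha) * exp(-beta * eps_k), normalized."""
    w = [Pk ** (-alpha) * math.exp(-beta * ek) for Pk, ek in zip(P, eps)]
    s = sum(w)
    return [wk / s for wk in w]

# Two alternatives (e.g. landing vs go-around) with assumed values:
p = preferences([0.53, 0.6], [0.5, 1.0], 1.0, 1.0)
```

With $\alpha = \beta = 0$ the distribution degenerates to uniform, which mirrors the limiting behaviour discussed for (36).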

Equation (36) has special features: in the case of $\overline{t}_k = t^{req}_k / t^{disp}_k \to 0$, the preferences are determined by the subjective probability, $P\left(\sigma_k\right)$, only, and in the case of $\overline{t}_k \to 1$, the preferences turn into zero. Equation (36) comes from the solution of the following function:

Subjective Factors in Flight Safety 281


$$\Phi_p = -\sum_{k=1}^N p(\sigma_k) \ln p(\sigma_k) - \beta \sum_{k=1}^N p(\sigma_k) \varepsilon_k(\sigma_k) - \alpha \sum_{k=1}^N p(\sigma_k) \ln P(\sigma_k) + \gamma \sum_{k=1}^N p(\sigma_k) \,. \tag{37}$$

A special feature of this function is that the structure of the efficiency function includes the logarithm of the subjective probability:

$$\eta\_p = -\sum\_{k=1}^{N} \left( \alpha \ln P(\sigma\_k) + \beta \varepsilon(\sigma\_k) \right) p(\sigma\_k) \,. \tag{38}$$

The complexity of decision making could be characterized by the uncertainties or the pilots' incapacity to make decisions, which increases while getting closer to the minimum decision altitude, $H^*_{Dmi}$. To make decisions, the pilots must overcome their "entropic barrier", $H_p$. The rate of incapacity could be defined with the norm of entropy:

$$
\overline{H}\_p = \frac{H\_p}{\ln N} \,\,\,\,\,\,\tag{39}
$$
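The normalized entropy (39) can be checked numerically; the distributions below are illustrative:

```python
import math

# The normalized "entropic barrier" of Eq. (39): Shannon entropy of the
# preference distribution p divided by ln N, giving a value in [0, 1].
def entropy_norm(p):
    H = -sum(pk * math.log(pk) for pk in p if pk > 0.0)
    return H / math.log(len(p))
```

A uniform distribution gives $\overline{H}_p = 1$ (maximal incapacity to decide between the alternatives), while a degenerate distribution gives $\overline{H}_p = 0$ (the decision is effectively made).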

Figure 12 shows a simplified decision-making situation about the go-around at an approach [Kasyanov 2004, 2007]. At $\left(t_0, x_0\right)$, $S_a : \left(\sigma_1, \sigma_2\right)$ indicates the set of alternative situations with the distribution of preferences $p\left(\sigma_1\right)$ and $p\left(\sigma_2\right)$ (where $\sigma_1$ indicates the landing and $\sigma_2$ defines the go-around).

Fig. 12. Final phase of aircraft approach.

The preferences are oscillating because of the exogenous fluctuation (while the decision altitude is getting closer) and the endogenous processes (depending on the uncertainties in the situation awareness and the operators' (pilots') incapacity to make decisions). If pilots are able to overcome their entropy barrier up to the command for go-around (reaching the minimum decision altitude), $\left(x^*, t^*\right)$, then they could make a decision. Due to this decision, the set of situations, $S_a$, can be given as follows:

$$\begin{aligned}
t < t^* :&\quad S_a : \left(\sigma_1, \sigma_2\right),\quad p\left(\sigma_1\right) + p\left(\sigma_2\right) = 1 \\
t \ge t^* :&\quad S_{a1} : \left(\sigma_2\right),\quad p\left(\sigma_2\right) = 1,\ p\left(\sigma_1\right) = 0 \\
&\quad S_{a2} : \left(\sigma_1\right),\quad p\left(\sigma_1\right) = 1,\ p\left(\sigma_2\right) = 0
\end{aligned}\tag{40}$$

If pilots are not able to overcome their entropy barrier before reaching $\left(t^*, x^*\right)$, the flight situation would become more complex, and therefore the possibility to perform a go-around (case $\sigma_2$) might even fall out of the possible set of situations.


#### **4.3 Modeling the human way of thinking and decision making**

A human as "biomotoric system" uses the information provided by sense organs (sight, hearing, balance, etc.) to determine the motoric actions (Zamora, 2004). From a piloting point of view, balance is the most important of the human sense organs. (As known, pilots are flying upon their "botty" for sensing the aircraft's real spatial position, orientation and motion dynamics (Rohacs, 2006).) The sense of balance (Zamora, 2004) is maintained by a complex interaction of visual inputs (the proprioceptive sensors being affected by gravity and stretch sensors found in muscles, skin, and joints), the inner ear vestibular system, and the central nervous system. Disturbances occurring in any part of the balance system, or even within the brain's integration of inputs, could cause dizziness or unsteadiness.

In addition to this, humans have another sense, kinesthesia (Zamora, 2004), which is the precise awareness of muscle and joint movement that allows us to coordinate our muscles when we walk, talk, and use our hands. It is the sense of kinesthesia that enables us to touch the tip of our nose with our eyes closed or to know which part of the body we should scratch when we itch. This type of sensing is very important in controlling an aircraft and moving in 3D space. (Some scientists believe that future aircraft control systems must be operated by thumbs, as the new generation is trained on video-games such as "Game Boy" (Rohacs, 2006).)

The main element of the "human biomotoric system" is the human brain, which is the anteriormost part of the central nervous system in humans as well as the primary control center for the peripheral nervous system.

The human brain (Russel, 1979; Davidmann, 1998) is a very complex system based on a net of brain cells called neurons that specialize in communication. The brain contains circuits of interconnected neurons that pass information between themselves.

The neurons contain the dendrites, cell body and axon. In neurons, information passes from dendrites through the cell body and down the axon (Russel, 1979; Davidmann, 1998).

Principally, transmission of information through the neuron is an electrical process. The passage of a nerve impulse starts at a dendrite, it then travels through the cell body, down the axon to an axon terminal. Axon terminals lie close to the dendrites of neighboring neurons.

From a control theory point of view, the most important behavior of the human brain is the memory, namely learning, memorizing and remembering (receiving, storing and recalling). Generally, human beings are learning all the time, storing information and then recalling it when it is required (Davidmann, 1998). After the investigation of human thinking, including recognition, information analysis, reasoning and decision support (Rohacs, 2006; 2007), the human way of thinking is found to have the following behaviors:

- working on the basis of a large net of small and simplified articles (neurons),
- syntactic and semantic processing of the sensed information,
- model-formation and using the models (including verbal models applied in learning processes and complex mathematical representation),
- using the complex system oriented approach,
- making parallel thinking and activity,
- learning (synthesis of the new knowledge),
- long-term memory,
- tacit knowledge (taken up in practice),
- intentional thinking (goal and wish),
- intuition (subconscious thinking),
- creativity (finding the contexts),
- innovativity (making originally new minds, things),
- unexpected values can appear, jumping from quantity to quality.



Seeing all the features listed above, it is clear that human thinking and decision making is a very complex process, containing some chaotic effects.

There is not enough information on the physical, systematic, intellectual, psycho-physiological, etc. characteristics of the subjective analysis, or about the way of thinking and decision making of subject-operators like pilots. Only limited information is available on the time effects, the possible damping of the non-linear oscillations, long-term memory, etc., which make the decision system chaotic.

Professor Kasyanov introduced a special chaotic model (Kasyanov, 2007) based on the modified Lorenz attractor (Strogatz, 1994) for modeling the endogenous dynamics of the described process.

$$\begin{aligned} \frac{dX}{dt} &= aY - bZ - hX^2 + f\left(t\right);\\ \frac{dY}{dt} &= -Y - XZ + cX - mY^2;\\ \frac{dZ}{dt} &= XY - dZ - nZ^2. \end{aligned} \tag{41}$$

where *a, b, c, d, h, m, n* are the constants while *f* takes into account the disturbance. (In case of *h=m=n=*0 and *f(t)*=0 the model turns into the classic form of Lorenz attractor.)

Principally, there are no strong arguments explaining the use of Lorenz attractor to model the human way of decision making (human thinking) (Dartnell, 2010; Krakovska, 2009), but the results of application are close to real situations.
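System (41) can be stepped numerically; a minimal explicit-Euler sketch, in which the initial state, step size, and the zero disturbance $f(t)$ are assumptions, with no claim of reproducing the chapter's figures:

```python
# Explicit-Euler sketch of the modified Lorenz system of Eq. (41). The
# initial state, step size, and zero disturbance f(t) are assumptions;
# as noted in the text, with h = m = n = 0 and f = 0 the model turns
# into the classic form of the Lorenz attractor.
def modified_lorenz(state, t, a, b, c, d, h, m, n, f=lambda t: 0.0):
    X, Y, Z = state
    dX = a * Y - b * Z - h * X**2 + f(t)
    dY = -Y - X * Z + c * X - m * Y**2
    dZ = X * Y - d * Z - n * Z**2
    return (dX, dY, dZ)

def integrate(state, a, b, c, d, h, m, n, dt=1e-3, steps=100):
    for i in range(steps):
        dX, dY, dZ = modified_lorenz(state, i * dt, a, b, c, d, h, m, n)
        state = (state[0] + dt * dX, state[1] + dt * dY, state[2] + dt * dZ)
    return state
```

For serious use, a higher-order integrator (e.g. Runge-Kutta) and a much longer horizon would be needed to see the chaotic behaviour the chapter discusses.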

#### **4.4 Results of investigations**

Professor Kasyanov investigated various model types, and evaluated the model parameters (Kasyanov, 2007). For a medium sized aircraft (weight of aircraft, W = $10^6$ N; wing area, S = 100 m$^2$; wing aspect ratio, A = 7; thrust, T = $9.4 \times 10^4$ N; and velocity, V = 70 m/sec) with commercial pilots, he recommended using the following values: a=8; b=8; c=20; d=0.43; f=0.8; h=0.065; m=0.065; n=0.065.

Using these parameters, the subjective probabilities might be chosen as $P\left(\sigma_1\right) = 0.53$, $P\left(\sigma_2\right) = 0.6$, and $t_1 = 5.5 - 0.01\,t$, $t_2 = 5.4 - 0.04\,t$ take into account the decreasing difference in the required and the available time for the decision. The typical results of using the described model are shown in Figure 13, demonstrating the chaotic character of decision making.

In this example, the figures demonstrate that pilots are unfixed for a period of about 10 sec, during which their preferences (A, B) are changing by sudden oscillations and the $\overline{H}_p$ entropy at the beginning is rather high. If the limit for the entropy were 0.7 (which is still quite high), then decisions could be made in about 10 sec. This means that pilots would not be able to do that according to Figure 12.

If the parameters are set to a=10; b=10; c=35; d=1; f=0; h=0.065; m=0.065; n=0.065 and $P\left(\sigma_1\right) = 0.53$, $P\left(\sigma_2\right) = 0.6$, then (see Figure 14) the entropy would quickly decrease and the decision could be made in about 3 sec. According to the ICAO requirements, the time $t_{ga}$ (see Figure 12) should not be less than 3.16 sec. Therefore, if the situation presented in Figure 12 appears before $\left(x_0, t_0\right)$, then the right decision could be made.

Fig. 13. Results of using the developed model for the landing of a medium sized aircraft.

Fig. 14. Results, when the parameters are chosen for well-skilled pilots.

From the results of using the developed model for the landing phase of a small aircraft (such as analyzed in the Hungarian national project SafeFly: development of the innovative safety technologies for a 4 seats composite aircraft, and the EU FP7 project PPlane: Personal Plane: Assessment and Validation of Pioneering Concepts for Personal Air Transport Systems, Grant agreement no. 233805), several important conclusions have been made (Rohacs et al., 2011; Rohacs & Kasyanov, 2011; Rohacs, 2010).

During the final approach, common airliner pilots require about three times more time for making a decision on go-around than their well-practiced colleagues.

Using the developed model and the condition defined by Figure 12, the descent velocity of a small aircraft could be determined as about 100 km/h for common airliner pilots, and 75 km/h for the less-skilled.

In this case, the airport can be designed with a landing distance of less than 600 m (runway about 250 - 300 m) and a protected zone under the approach (to overfly the altitude of 100 m) of about 1500 m. These characteristics enable placing small airports close, or closer, to the city center.

## **5. Conclusions**

This chapter introduced the subjective analysis methodology into the investigation of the real flight situation and flight safety. The subject, the pilot-operator, generates his decision on the basis of his subjective situation analysis, depending on the available information and his psycho-physiological condition. The principal subjective factor is the time available for the decision in the given tasks.

After the general discussion on flight safety, its metrics and accident statistics, an original approach was introduced to study the role of human factors in flight safety. The deterministic or stochastic models of flight safety do not clearly include the subjective behaviors of human operators. However, the subjective analysis may open a new vision on flight safety and may result in improved aircraft development methods and tools.

The subjective decision making of pilots was modeled by the modified Lorenz attractor, which needs further investigation and explanation. The developed methodology was applied to study the small aircraft final approach and landing. It demonstrates that the model is suitable to investigate the difference between well-trained and less-skilled pilots. The model helped in the definition of the aircraft and airport characteristics for the personal air transportation system.

This work is connected to the scientific program of the "Development of the innovative safety technologies for a 4 seats composite aircraft - SafeFly" project (NKTH-MAG ZRt. OM-000167/2008) supported by the Hungarian National Development Office, and the Personal Plane - PPlane project supported by EU FP7 (Contract No. 233805); the research is also supported by the Hungarian National New Széchenyi Plan (TÁMOP-4.2.2/B-10/1-2010-0009).

## **6. References**

Afrazeh, A. & Bartsch, H. (2007) Human reliability and flight safety. International Journal of Reliability, Quality and Safety Engineering 14(5): 501-516.

Banos, A., Lamnabhi-Lagarrigue, F., Montoya, F. J. (2001) Advances in the Control of Nonlinear Systems, Lecture Notes in Control and Information Sciences 264, Springer-Verlag, London Berlin Heidelberg, 2001.

Bezopastnostj poletov letateljnüh apparatov (pod red. A. I. Starikova) (in Russian), Transport, Moscow, 1988.

CASA (2005) AC 139-16(0): Developing a Safety Management System at Your Aerodrome, Australian Government - Civil Aviation Safety Authority (CASA) Advisory Circular, 2005.

Commercial Aviation Safety Team (CAST) (2000) Process Overview, http://www.icao.int/fsix/cast/CAST%20Process%20Overview%209-29-03.ppt

Dartnell, L. (2010) Chaos in the brain, http://plus.maths.org/content/chaos-brain

Davidmann, M. (1998) How the Human Brain Developed and How the Human Mind Works, http://www.solbaram.org/articles/humind.html

EASA (2008) Annual Safety Review, EASA, 2008.

FAA (2006) Introduction to Safety Management Systems for Air Operators, Federal Aviation Administration Advisory Circular 120-92: Appendix 1, Jun. 22, 2006.

Fliess, M., Lévine, J., Martin, P., Rouchon, P. (1999) A Lie-Bäcklund Approach to Equivalence and Flatness of Nonlinear Systems, IEEE Transactions on Automatic Control, Vol. 44, No. 5, May 1999, pp. 922-937.

Flight Control Design - Best Practice (2000), NATO, RTO-TR-029, AC/323(SCI)TP/23, Neuilly-sur-Seine Cedex, France, 2000.

Gardiner, C. W. (2004) Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, Springer Series in Synergetics, Springer-Verlag, Berlin Heidelberg New York, 2004.

Gudkov, A. I., Lesakov, P. S. (1968) Vneisnie nagruzki i prochnostj letateljnih apparatov (in Russian), Masinostroyeniye, Moscow, 1968.

Howard, R. W. (1980) Progress in the Use of Automatic Flight Controls in Safety Critical Applications, The Aeronautical Journal, 1980, v. 84, No. 837, pp. 316-326.

Ibe, O. C. (2008) Markov Process for Stochastic Modeling, Academic Press, 2008.

Kasyanov, V. A. (2004) Flight modelling (in Russian), National Aviation University, Kiev, 2004, 400 p.

Kasyanov, V. A. (2007) Subjective analysis (in Russian), National Aviation University, Kiev, 2007, 512 p.

Krakovska, A. (2009) Two Decades of Search for Chaos in Brain, MEASUREMENT 2009, Proceedings of the 7th International Conference, Smolenice, Slovakia, pp. 90-94.

Lee, C. A. (2003) Human error in aviation, http://www.carrielee.net/pdfs/HumanError.pdf

Pavlov, V. V., Chepijenko, V. I. (2009) Ergaticheskie sistemii upravleniya (Ergatic control systems), Gasudarstvennij Nauchno-Isledovatjelskij Institute Avitacii, Kiev, http://194.44.242.245:8080/dspace/bitstream/handle/123456789/7645/01-Pavlov.pdf?sequence=1

Ponomarenko, V. (2000) Kingdom in the Sky - Earthly Fetters and Heavenly Freedoms. The Pilot's Approach to the Military Flight Environment, NATO RTO-AG-338, AC/323(HFM)TP/5, July, 2000.

Rohacs, J. (1986) Deviation of Aerodynamic Characteristics and Performance Data of Aircraft in the Operational Process (Ph.D. thesis), KIIGA, Kiev, 1986.

Rohacs, J. (1990) Analysis of Methods for Modeling Real Flight Situations, 17th Congress of the International Council of the Aeronautical Sciences, Stockholm, Sweden, Sept. 9-14, 1990, ICAS Proceedings 1990, pp. 2046-2054.



Rohács, J. (1995) Repülések biztonsága (Safety of flights), Bólyai János Műszaki Katonai Főiskola (Military Technology High School Named János Bólyai), Budapest, 1995.

Rohács, J. (1998) Revolution in Safety Sciences - Application of the Micro Devices, in "Progress in Safety Sciences and Technology" (edited by Zeng Quingxuan, Wang Liqiong, Xie Xianping, Qian Xinming), Science Press, Beijing / New York, 1998, pp. 969–973.

Rohacs, J. (2000) Risk Analysis of Systems with System Anomalies and Common Failures, in "Progress in Safety Sciences and Technology" Vol. II, Part A (edited by Li Shengcai, Jing Guoxun, Qian Xinming), Chemical Industry Press, Beijing, 2000, pp. 203–211.

Rohacs, J. (2006) Development of the control based on the biological principles, ICAS Congress, Hamburg, Sept. 2006, CD-ROM, ICAS, 2006.

Rohacs, J. (2007) Some thoughts about the biological principle based control, Sixth International Conference on Mathematical Problems in Engineering and Aerospace Sciences (edited by Sivasundaram, S.), Cambridge Scientific Publishers, 2007, pp. 627–638, ISBN 978-1-904868-56-9.

Rohacs, J. (2010) Subjective Aspects of the less-skilled Pilots, in Performance, Safety and Well-being in Aviation, Proceedings of the 29th Conference of the European Association for Aviation Psychology, 20-24 September 2010, Budapest, Hungary (edited by A. Droog, M. Heese), ISBN: 978-90-815253-2-9, pp. 153–159.

Rohacs, J. & Kasyanov, V. A. (2011) Pilot subjective decisions in aircraft active control system, J. Theor. Appl. Mech., 49(1), pp. 175–186, 2011.

Rohács, J. & Németh, M. (1997) Effects of Aircraft Anomalies on Flight Safety, in "Aviation Safety" (edited by Hans M. Soekkha), VSP, Utrecht, The Netherlands / Tokyo, Japan, 1997, pp. 550–560.

Rohacs, J., Rohacs, D., Jankovics, I., Rozental, S., Hlinka, J., Katrnak, T., Helena, T. (2011) Personal aircraft system improvements, internal report, PPLANE (EU FP7 project), Budapest, 2011.

Rohács, J. & Simon, I. (1989) Repülőgépek és helikopterek üzemeltetési zsebkönyve (The handbook of airplane and helicopter operation), Műszaki Könyvkiadó, Budapest, 1989.

Ropp, T. D. & Dillman, B. G. (2008) Standardized Measures of Safety: Finding Global Common Ground for Safety Metrics, IAJC-IJME International Conference on Engineering and Technology, Nashville, TN, US, 2008, ENT 203: Topics in Aviation Safety, Paper No. 29.

Russel, P. (1979) The Brain Book, Penguin Group, New York, 1979.

Shin, J. (2000) The NASA Aviation Safety Program: Overview, NASA/TM—2000-209810, 2000, http://gltrs.grc.nasa.gov/reports/2000/TM-2000-209810.pdf

Statistical summary of commercial jet airplane accidents, worldwide operations 1959-2008 (2008), Boeing, http://www.boeing.com/news/techissues/pdf/statsum.pdf

Strogatz, S. (1994) Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering, Perseus Books, Massachusetts, US, 1994.

Tihonov, V. I. & Mironov, M. A. (1977) Markovskie processi (Markov processes), Sovetskoe Radio, Moscow, 1977.

Transport Canada (2007) TP 14343, Implementation Procedures Guide for Air Operators and Approved Maintenance Organizations, April 2007.

White, J. (2009) Aviation safety program, NASA, 2009, http://www.docstoc.com/docs/798142/NASA-s-Aviation-Safety-Program

Zamora, A. (2004) "Human Sense Organs - The Five Senses," Anatomy and Structure of Human Sense Organs, Scientific Psychic, 2004, http://www.scientificpsychic.com/workbook/chapter2.htm

286 Recent Advances in Aircraft Technology

**Part 3** 

**Aircraft Electrical Systems** 


## **13. Power Generation and Distribution System for a More Electric Aircraft - A Review**

Ahmed Abdel-Hafez *Shaqra University Kingdom of Saudi Arabia* 

## **1. Introduction**

More-Electric Aircraft (MEA) is the future trend of adopting a single power type, namely electrical power, for driving the non-propulsive aircraft systems. The MEA is anticipated to achieve numerous advantages, such as optimising the aircraft performance and decreasing the operation and maintenance costs. Moreover, MEA reduces the emission of air pollutant gases from aircraft, which can contribute significantly to solving some of the problems of climate change. However, the MEA places some challenges on the aircraft electrical system, both in the amount of the required power and in the processing and management of this power. This chapter introduces the outline of MEA. It investigates possible topologies for the power system of the aircraft. The different electric power generation options are highlighted, while at the same time the generator topologies are assessed. It also includes a general review of the power electronic interfacing circuits, and the key design requirements for an interfacing circuit are addressed. Finally, a glance at the protection facilities for the aircraft power system is given.

### **2. More electric aircraft**

Recently, the aircraft industry has achieved tremendous progress in both the civil and military sectors (AbdElhafez & Forsyth, 2008, 2009; Cronin, 1990; Moir & Seabridge, 2001). For example, some current commercial aircraft operate at weights of over 300 000 kg and have the ability to fly up to 16 000 km in a non-stop journey at a speed of 1000 km/h (AbdElhafez & Forsyth, 2009).

The non-propulsive aircraft systems are typically driven by a combination of different secondary power drives/subsystems, such as hydraulic, pneumatic, electrical and mechanical (AbdElhafez & Forsyth, 2008, 2009; Jones, 1999; Moir, 1999; Moir & Seabridge, 2001; Quigley, 1993). These power subsystems are all sourced from the aircraft main engine by different methods. For example, mechanical power is extracted from the engine by a driven shaft and distributed to a gearbox to drive lubrication pumps, fuel pumps, hydraulic pumps and electrical generators (AbdElhafez & Forsyth, 2009; Jones, 1999; Moir, 1999; Quigley, 1993). Pneumatic power is obtained by bleeding the compressor to drive turbine motors for the engine's starter subsystem, wing anti-icing and the Environmental Control Systems (ECS), while the electrical and hydraulic power subsystems are distributed throughout the aircraft to drive actuation systems such as flight control actuators,


landing gear brakes, utility actuators, avionics, lighting, galleys, commercial loads and weapon systems (AbdElhafez & Forsyth, 2009; Howse, 2003; Jones, 1999; Moir, 1998, 1999; Quigley, 1993).

This combination has always been debated, because these systems have become rather complicated and their interactions reduce the efficiency of the whole system. For example, a simple leak in the pneumatic or hydraulic system jeopardises the journey by grounding the aircraft, eventually causing inconvenient flight delays. The leak is usually difficult to locate, and once located it cannot easily be handled (AbdElhafez & Forsyth, 2009; Cutts, 2002; Hoffman, 1985; Moir, 1998; Pearson, 1998; Rosero et al., 2007; Weimer, 1993). Furthermore, from a manufacturing point of view, reducing the cost of ownership, increasing the profit and some anticipated future legislation regarding climate change demand radical changes to the entire aircraft, as it is no longer sufficient to optimise the current aircraft sub-systems and components individually to achieve these goals (AbdElhafez & Forsyth, 2009; Andrade, 1992; Cutts, 2002; Clyod, 1997; Emadi & Ehsani, 2000; Hoffman, 1985; Moir, 1998; Pearson, 1998; Ponton, 1998; Rosero et al., 2007; Weimer, 1993).

The trend is towards using electrical power for sourcing and distributing the non-propulsive aircraft engine powers; this trend is known as MEA. The MEA concept is not a new one: it has been investigated for several decades, since World War II (Andrade, 1992; Cutts, 2002; Pearson, 1998; Ponton, 1998; Weimer, 1993). However, owing to the lack of electric power generation capability and the prohibitive volume of power conditioning equipment, the focus drifted to the conventional power types. The recent technology breakthroughs in the fields of power electronic systems, fault-tolerant electric machines, electro-hydrostatic actuators, electromechanical actuators, and fault-tolerant electrical power systems have renewed the interest in MEA (AbdElhafez & Forsyth, 2009; Andrade, 1992; Cutts, 2002; Clyod, 1997; Emadi & Ehsani, 2000; Hoffman, 1985; Moir, 1998; Pearson, 1998; Ponton, 1998; Rosero et al., 2007; Weimer, 1993). A comparison between conventional aircraft subsystems and MEA subsystems is shown in Fig. 1 (AbdElhafez & Forsyth, 2009).

Fig. 1. Comparison between conventional aircraft systems and MEA systems (AbdElhafez & Forsyth, 2009).

The adoption of MEA in future aircraft, in both the civil and military sectors, will result in tremendous benefits, such as:

1. Removal of hydraulic systems, which are costly, labour-intensive, and susceptible to leakage and contamination problems, improves the aircraft reliability and vulnerability, and reduces complexity, redundancy, weight, and installation and running cost (Cutts, 2002; Pearson, 1998; Ponton, 1998; Quigley, 1993; Weimer, 1993).
2. Deployment of electrical starting for the aero-engine through the engine starter/generator scheme eliminates the engine tower shaft and gears, power take-off shaft and accessory gearboxes, and reduces the engine starting power, especially in cold conditions, and the aircraft front area (Clyod, 1997; Emadi & Ehsani, 2000; Jones, 1999; Moir & Seabridge, 2001).
3. Utilization of the Advanced Magnetic Bearing (AMB) system, which could be integrated into the internal starter/generator for both the main engine and auxiliary power units, allows for an oil-free, gear-free engine area (AbdElhafez & Forsyth, 2009; Andrade & Tenning, 1992a, 1992b; Hoffman et al., 1985; Jones, 1999; Moir & Seabridge, 2001).
4. In MEA, using a fan shaft generator that allows emergency power extraction under windmill conditions removes the conventional, inefficient single-shot ram air turbine, which increases the aircraft's reliability and survivability under engine-failure conditions (AbdElhafez & Forsyth, 2009; Andrade & Tenning, 1992a, 1992b; Quigley, 1993).
5. Replacement of the engine-bleed system by electric motor-driven pumps reduces the complexity and the installation cost, and improves the efficiency (Jones, 1999).

In general, adopting MEA will revolutionise the aerospace industry completely, and significant improvements in terms of aircraft empty weight, reconfigurability, fuel consumption, overall cost, maintainability, supportability, and system reliability will be achieved (AbdElhafez & Forsyth, 2009; Clyod, 1997; Cronin, 1990; Emadi & Ehsani, 2000; Hoffman et al., 1985; Moir, 1998, 1999; Weimer, 1993).

On the other hand, the MEA places more demand on the aircraft electric power system in the areas of power generation and handling, reliability, and fault tolerance. This entails innovations in the power generation, processing, distribution and management systems (AbdElhafez & Forsyth, 2009; Clyod, 1997; Cronin, 1990; Emadi & Ehsani, 2000; Hoffman et al., 1985; Moir, 1998, 1999).

The following sections briefly give a general overview of the electrical power distribution and management, generation and processing systems in MEA.

## **3. Distribution systems**

The power distribution system of most in-service civil aircraft is composed of a combination of AC and DC topologies. For example, an AC supply of 115 V/400 Hz is used to power large loads such as galleys, while a 28 V DC supply is used for the avionics, flight control and battery-driven vital services.
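Since P = V·I, raising the bus voltage cuts the feeder current (and the I²R conduction loss) for the same delivered power, which is central to the case for higher-voltage distribution made below. A toy calculation, with an invented 10 kW load (not a figure from this chapter), illustrates the scale of the effect:

```python
# Toy comparison of feeder current at different aircraft bus voltages.
# The load power is illustrative only; for the AC bus the RMS voltage
# is used and a unity power factor is assumed.

def feeder_current(power_w: float, voltage_v: float) -> float:
    """Current drawn by a load of power_w watts at bus voltage voltage_v."""
    return power_w / voltage_v

load = 10_000.0  # a hypothetical 10 kW galley-class load

for label, v in [("28 V DC", 28.0), ("115 V AC", 115.0), ("270 V DC", 270.0)]:
    print(f"{label:>9}: {feeder_current(load, v):6.1f} A")
```

Because the ohmic loss in a feeder of resistance R scales with the square of the current, moving the same load from 115 V to 270 V roughly cuts conduction loss by a factor of five and allows thinner, lighter conductors.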

Recently there has been a trend towards using only a high-voltage DC system for power distribution and management in MEA. A number of factors encouraged this trend (AbdElhafez & Forsyth, 2009; Cross et al., 2002; Hoffman, 1985; Jones, 1999; Glennon, 1998; Maldonado et al., 1996, 1997, 1999; Mallov et al., 2000; Quigley, 1993; Worth, 1990):

1. The adoption of the new generation options, such as variable frequency,
2. The recent advancements in the areas of interfacing circuits, control techniques and protection systems,
3. The advantages of the high-voltage DC distribution system in reducing the weight, the size and the losses, while increasing the levels of the transmitted power.

Several values of the system voltage are presently under research: 270, 350 and 540 V. The exact value, however, is determined by a number of factors, such as the capabilities of DC switchgear, the availability of components and the risk of corona discharge at high altitude and reduced pressure (Brockschmidt, 1999).

Different topologies have been suggested for implementing the distribution system in MEA (Cross et al., 2002; Hoffman, 1985; Glennon, 1998; Maldonado et al., 1996, 1997, 1999; Mallov et al., 2000; Worth, 1990). In the following, four main candidate topologies are briefly reviewed:

1. Centralized Electrical Power Distribution System (CEPDS),
2. Semi-Distributed Electrical Power Distribution System (SDEPDS),
3. Advanced Electrical Power Distribution System (AEPDS),
4. Fault-Tolerant Electrical Power Distribution System (FTEPDS).

#### **3.1 Centralized Electrical Power Distribution System (CEPDS)**

CEPDS is a point-to-point radial power distribution system, as shown in Figure 2. It has only one distribution centre, which the generators supply; there the electrical power is processed and fed to the different electrical loads. The distribution centre is normally positioned in the avionics bay, Figure 2, where the voltage regulation is also located. In this system, each load is supplied individually from the power distribution centre (Cross et al., 2002; Worth et al., 1990). CEPDS has a number of advantages, such as:

1. Ease of maintenance, since all equipment is located in one place, i.e. the avionics bay.
2. Decoupling between loads; thus a disturbance in one load is not transferred to the others.
3. Fault tolerance, as the main buses are highly protected.

Although CEPDS has these significant advantages, it also has a number of disadvantages, such as:

1. CEPDS suffers from the difficulty of upgrading.
2. Faults in the distribution system probably affect all loads and disable the entire system.
3. CEPDS is cumbersome, expensive and unreliable, as each load has to be wired from the avionics bay.
4. A costly and bulky protection system has to be deployed to protect the distribution system.

Fig. 2. Centralised Electrical Power Distribution System CEPDS for the MEA (AbdElhafez & Forsyth, 2009).
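The "each load has to be wired from the avionics bay" disadvantage can be made concrete with a toy model: in a point-to-point radial system every load needs its own feeder back to the single centre, so cable runs add up load by load. The coordinates below are invented for illustration, not taken from the chapter:

```python
import math

# Hypothetical load positions in metres along/across a fuselage,
# with the single CEPDS distribution centre in the avionics bay.
centre = (2.0, 0.0)
loads = [(5.0, 1.0), (12.0, -1.5), (20.0, 0.5), (28.0, 2.0), (33.0, -0.5)]

def dist(a, b):
    """Straight-line distance between two points (a crude cable-run proxy)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# One dedicated feeder per load, all terminating at the same centre.
total_run = sum(dist(centre, p) for p in loads)
print(f"total CEPDS feeder length: {total_run:.1f} m")
```

Even this five-load sketch shows the pattern: feeder length (and hence copper mass and the size of the protection needed at the centre) grows with every load added at the far end of the aircraft.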

### **3.2 Semi-Distributed Electrical Power Distribution System (SDEPDS)**

SDEPDS was proposed to overcome the problems of CEPDS (AbdElhafez & Forsyth, 2009; Cross et al., 2002; Hoffman, 1985; Glennon, 1998; Maldonado et al., 1996, 1997, 1999; Mallov et al., 2000; Worth, 1990). The SDEPDS, as shown in Figure 3, has a large number of Power Distribution Centres (PDCs). These centres are scaled versions of the PDC in CEPDS. The PDCs are distributed around the aircraft in such a way as to optimise the system volume, weight and reliability. They are located, Figure 3, close to the load centres.

Fig. 3. Semi-Distributed Electrical Power Distribution System SDEPDS for the MEA (AbdElhafez & Forsyth, 2009).
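A minimal sketch of the SDEPDS idea, using invented geometry of the same kind as above: with PDCs placed near the load clusters and each load fed from its nearest PDC, the secondary cable runs shrink. (The heavier primary feeders from the generators to the PDCs, and the extra monitoring equipment the text mentions, are deliberately not modelled here.)

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Same hypothetical load positions as in the CEPDS sketch (metres).
loads = [(5.0, 1.0), (12.0, -1.5), (20.0, 0.5), (28.0, 2.0), (33.0, -0.5)]

# Hypothetical PDC locations chosen near the two load clusters,
# reflecting the SDEPDS strategy of siting PDCs close to load centres.
pdcs = [(8.0, 0.0), (27.0, 0.0)]

# Each load is assigned to (fed from) its nearest PDC.
secondary_run = sum(min(dist(p, load) for p in pdcs) for load in loads)
print(f"total SDEPDS secondary feeder length: {secondary_run:.1f} m")
```

With the same five loads, the nearest-PDC assignment needs roughly a quarter of the cable run of the single-centre layout, which is the weight/volume optimisation the text refers to.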

SDEPDS has a number of advantages:

1. Elevated power quality and improved electromagnetic compatibility, due to the position of the distribution centres near to the loads,
2. High efficiency and cost effectiveness, attributed to the deployment of electrical components with small weight/volume in the PDCs,
3. Efficient and stable system operation, due to the reduced losses/voltage drops across the distribution network,
4. A high level of redundancy in the primary power distribution path, due to the strategy of increasing and distributing the PDCs,
5. Simplicity and flexibility of upgrading.

On the other hand, the close coupling between the loads in SDEPDS may reduce the reliability, as faults/disturbances in a load can propagate to nearby loads. Moreover, extra equipment is required to perform the monitoring and control of the distributed PDCs.

#### **3.3 Advanced Electrical Power Distribution System (AEPDS)**

AEPDS is a flexible, fault-tolerant system controlled by a redundant microprocessor system. This system was developed to replace the conventional centralized and semi-distributed systems.

The AEPDS, as shown in Figure 4, is highly protected. The electrical power from the generators, Auxiliary Power Unit (APU), battery and ground sources is supplied to the primary power distribution centre, where the Contactor Control Units (CCU) and high-power contactors are located. The primary power distribution centre performs a number of tasks: voltage/frequency regulation, damping of oscillations and transients, and controlling the flow of the reactive power.

The aircraft loads are supplied via the Relay Switching Units (RSU). Each RSU is controlled and monitored by a Remote Terminal (RT) unit. The AEPDS is controlled by either one of two redundant Electrical Load Management Units (ELMU). The ELMUs interact and exchange data/control strategies with the RTs through a quad-redundant data bus (Mollov et al., 2002; Worth, 1990).

The AEPDS has improved performance compared with CEPDS and SDEPDS. This is attributed to the following (Worth, 1990):

1. AEPDS reduces the aircraft life cycle cost, as the system reconfiguration in case of aircraft modification/upgrade can easily be accommodated.
2. AEPDS can detect deviant conditions of current/voltage and provide instantaneous load shut-off.
3. A major reduction in the weight and wiring in the AEPDS is achieved due to the elimination of circuit breaker panels from the flight deck stands.
4. AEPDS is a fault-tolerant distribution system.

The AEPDS has the disadvantage of concentrating the distribution and the management of the power supplied by the generating units/sources into a single unit; therefore a fault in this unit may interrupt the whole system operation.

Fig. 4. Advanced Electrical Power Distribution System AEPDS for MEA (AbdElhafez & Forsyth, 2009).

#### **3.4 Fault-Tolerant Electrical Power Distribution System (FTEPDS)**

FTEPDS is adequately protected. A typical FTEPDS for a two-engine aircraft is shown in Figure 5. The system is composed of two switch matrices, six multi-purpose converters, six generators and different loads. The source and load switch matrices could be implemented using mechanical or solid-state switches; however, the latter have the advantages of controllability, fast response and high efficiency (Cross et al., 2002; Hoffman et al., 1985; Glennon, 1998; Maldonado et al., 1996, 1997) over the former.

FTEPDS is a mixed distribution system: the AC power from the generators and the airport grid is connected to the source switch matrix, while the 270 V DC system is interfaced with the converters. The bi-directional power flow in the generators indicates that the system allows integral starter/generator operation, where the generator initially acts as a motor to start the jet engine and then operates as a generator to supply the aircraft electrical system. The 270 V DC system also has a bi-directional power flow; this is to charge the batteries and other energy storage units during normal flight conditions, whereas during faults and disturbances the DC system injects power to stabilize the aircraft distribution system.

FTEPDS enjoys the following advantages:

1. The ability to start the aircraft engine by the generator/starter scheme,
2. High redundancy,
3. Fault tolerance, i.e. the ability of the system to continue functioning even under an engine failure.
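The fault-tolerance role of the source switch matrix can be sketched as follows. The generator names, load channels and the "first healthy generator" policy below are invented for illustration only; a real FTEPDS controller must also respect generator ratings, bus segregation and load priorities.

```python
# Toy source switch matrix: each load channel is connected to one
# generator; on a generator failure the matrix closes an alternative
# path so every channel stays powered.

generators = {"G1": True, "G2": True, "G3": True}  # True = healthy
channels = {"flight_controls": "G1", "galleys": "G2", "avionics": "G3"}

def reroute(channels, generators):
    """Reassign any channel whose generator has failed to a healthy one."""
    healthy = [g for g, ok in generators.items() if ok]
    if not healthy:
        raise RuntimeError("total generation loss")
    return {ch: (g if generators[g] else healthy[0])
            for ch, g in channels.items()}

generators["G1"] = False            # simulate an engine/generator failure
channels = reroute(channels, generators)
print(channels)                     # every channel still has a healthy source
```

The same mechanism, run in reverse through the bi-directional converters, is what lets the 270 V DC side inject stored energy during the disturbance while the matrix reconfigures.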


Power Generation and Distribution System for a More Electric Aircraft - A Review 297

Fig. 6. Growth of generated electrical power in aircraft since the first flight (AbdElhafez &

Fig. 7. Aircraft Electrical Power Generation Options (AbdElhafez & Forsyth, 2009).

Forsyth, 2009).

However FEEPDS has a serious drawback; a fault in source/load switch matrices may interrupt the operation of the entire system.

Fig. 5. Fault-tolerant Electrical Power Distribution System FTEPDS for MEA (AbdElhafez & Forsyth, 2009).

## **4. Electric power generation in MEA**

Since its advent, generated electrical power utilization has been rising rapidly. The growth of electrical power generation and application in aircraft is shown in Figure 6. The quadratic growth is attributed to increased aircraft system loading, such as galley and In-Flight Entertainment (IFE) systems.

MEA has recently been one of the major driving forces behind electric power generation in aircraft (AbdElhafez & Forsyth, 2009; Andrade, 1992; Bansal et al., 2003, 2005; Howse, 2003; Jones, 1999; Quigely, 1993; Mellor et al., 2005; Moir & Seabridge, 2001; Moir, 1999; Raimondi et al., 2002). Not only are aircraft electrical system power levels growing, but the diversity of power generation types is increasing as well.

#### **4.1 Schemes of power generation**

The various in-service and prospective schemes of electrical power generation are shown in Figure 7 (AbdElhafez & Forsyth, 2009; Cossar, 2004).

Examples of civil/military aircraft and the corresponding generation scheme are given in Table 1.




| Generation scheme | Civil aircraft | Rating (kVA) | Military aircraft | Rating (kVA) |
|---|---|---|---|---|
| CF (IDG) | B777; A340; B737NG; MD-12; B747-X; B717; B767-400; Do728 | 2x120; 4x90; 2x90; 4x120; 4x120; 2x40; 2x40; 2x40 | C145 | 2x120 |
| VSCF (DC-link) | B777 (backup); MD-90 | 2x20; 2x75 | | |
| VSCF (Cycloconverters) | | | F-18E/F | 2x60/65 |
| VF | Global Ex; Horizon; A3xx | 4x50; 2x20/25; 4x150 | | |
| 270V DC | | | F-22 Raptor; X-35A/B/C; Boeing JSF | 2x70; 2x50; 2x50 |

Table 1. Civil/Military aircraft and electrical power generation techniques (AbdElhafez & Forsyth, 2009).

A brief review of the different generation techniques is given below where the focus is on the merits/demerits of each.

#### **4.1.1 Constant frequency**

The Constant Frequency (CF), three-phase 115V/400Hz scheme is the most common electric power generation option. This scheme is in service in most civil aircraft, as shown in Table 1. The CF is alternatively termed Integrated Drive Generator (IDG).

In the CF system, the generator is attached to the engine through an unreliable and cumbersome mechanical gearbox. This gearbox is essential to ensure that the generator speed is constant irrespective of the engine speed and aircraft status. The frequency f of the generated power is related to the generator speed N by,

$$f = \frac{PN}{120} \tag{1}$$

where f is the output frequency in cycles/sec (Hz); N is the generator speed in revolutions per minute (rpm); and P is the number of magnetic poles. Maintaining the generator speed N constant ensures that the output frequency remains fixed; however, the CF scheme has a number of disadvantages (AbdElhafez & Forsyth, 2009; Cossar, 2004; Howse, 2003; Jones, 1999; Quigely, 1993; Moir, 1999; Raimondi et al., 2002):

1. The interfacing mechanical gearbox is unreliable, inefficient and costly, which reduces the overall system efficiency.
2. The system has to be examined for every flight, increasing the operational costs.
3. CF does not allow internal starting of the aero-engine by an integral starter/generator scheme.
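As a quick numerical check of (1), the short script below (an illustrative sketch, not from the chapter) evaluates the output frequency for a couple of pole counts and speeds:

```python
def generator_frequency(poles: int, rpm: float) -> float:
    """Output frequency f = P*N/120 in Hz, per equation (1)."""
    return poles * rpm / 120.0

# A 4-pole generator must spin at 12,000 rpm to produce 400 Hz:
print(generator_frequency(4, 12000))   # 400.0
# The same 400 Hz with 8 poles needs only 6,000 rpm:
print(generator_frequency(8, 6000))    # 400.0
```

This trade-off between pole count and shaft speed is exactly what the constant-speed gearbox (or, in the schemes below, power electronics) has to manage.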


### **4.1.2 DC-link system**

298 Recent Advances in Aircraft Technology


The Variable Speed Constant Frequency (VSCF) DC-link system is now the preferred option for most new military aircraft and some commercial aircraft, Table 1. The generator in this scheme, Figure 7, is attached directly to the engine; thus, according to (1), the output frequency varies with engine speed. The engine speed is subject to wide variation during the normal course of flight, and so is the frequency; therefore, interfacing circuits are required to convert the generator output power into a usable form.

The output of the generator is supplied to diode rectifiers, which convert the variable-frequency AC power into DC form. Three-phase inverters then convert the DC power into three-phase 115V/400Hz AC. This is the typical form of the VSCF DC-link system. However, several new topologies have recently been reported; these produce improved performance regarding harmonics, reactive power flow and system stability. Moreover, the range of the VSCF DC-link system has been widened by recent advances in high-power electronic switches. The VSCF DC-link option is generally characterised by simplicity and reliability (AbdElhafez & Forsyth, 2009; Hoffman et al., 1985; Ferriera, 1995; Moir, 1999; Quigley, 1993; Olaiya & Buchan, 1999; Ying shing & Lin, 1995).
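The conversion chain can be sketched as a toy model (illustrative only; the 4-pole count and engine speeds are assumed numbers): the generator frequency tracks engine speed via (1), while the rectifier/inverter stage decouples it and pins the output at 115 V/400 Hz.

```python
def vscf_dc_link(engine_rpm: float, poles: int = 4) -> dict:
    """Illustrative VSCF DC-link chain: the generator frequency follows
    engine speed (eq. 1); the diode rectifier / inverter stage decouples
    it, so the output is always 115 V / 400 Hz regardless of speed."""
    gen_freq_hz = poles * engine_rpm / 120.0        # varies with the engine
    return {"generator_hz": gen_freq_hz,
            "output_v": 115.0, "output_hz": 400.0}  # fixed by the inverter

# Over a wide engine-speed excursion the delivered supply stays constant:
for rpm in (6000, 12000, 24000):
    stage = vscf_dc_link(rpm)
    print(stage["generator_hz"], "->", stage["output_hz"])
```

The point of the model is that only the intermediate `generator_hz` moves with the engine; the loads never see it.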

### **4.1.3 Cycloconverters**

Variable Speed Constant Frequency (VSCF) cycloconverters, as shown in Figure 2, directly convert the variable-frequency AC input power into AC power of fixed frequency and amplitude, three-phase 115V/400Hz (AbdElhafez & Forsyth, 2009; Cloyd, 1997; Cronin, 1990; Emad & Ehasni, 2000; Howse, 2003; Jones, 1999; Moir & Seabridge, 2001). The output frequency is lower than the input frequency, which makes it possible for the generator to be attached to the engine with a fixed-ratio gearbox. In the typical form of cycloconverter, three bidirectional switches interface each generator phase with the corresponding supply phase.

The VSCF cycloconverters are more efficient than the CF and VSCF DC-link options; however, they require sophisticated control. The power generation efficiency of the cycloconverter increases as the power factor decreases, which would be beneficial if this technique were applied to motor loads with significant lagging power factors (AbdElhafez & Forsyth, 2009).
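The switching action can be caricatured in a few lines (a crude illustrative model; real cycloconverters use phase-controlled thyristor banks rather than this nearest-voltage selection, and the frequencies here are assumed numbers): at each instant the bidirectional switches connect the output to whichever input phase voltage best approximates the low-frequency reference, which is why the output frequency must stay below the input frequency.

```python
import math

def cycloconverter_sample(t: float, f_in: float, f_out: float,
                          v_peak: float = 1.0) -> float:
    """Crude cycloconverter model: pick whichever of the three input
    phase voltages is closest to the desired low-frequency reference."""
    ref = v_peak * math.sin(2 * math.pi * f_out * t)
    phases = [v_peak * math.sin(2 * math.pi * f_in * t - k * 2 * math.pi / 3)
              for k in range(3)]
    return min(phases, key=lambda v: abs(v - ref))

# Synthesising a 400 Hz output from a 1.2 kHz variable-frequency input:
samples = [cycloconverter_sample(n / 48000, 1200.0, 400.0) for n in range(120)]
print(min(samples), max(samples))
```

The stitched-together output is only a staircase approximation of the reference sine, which is one source of the harmonic content and the sophisticated control burden noted above.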

#### **4.1.4 Wild frequency**

Variable Frequency (VF), commonly known as wild frequency, is the most recent electric power generation contender. In the VF approach, the generator is attached directly to the engine shaft. This method is commonly termed embedded generation (Raimondi et al., 2002). Mounting the generator directly on the engine shaft de-rates the power take-off shaft and the associated gearbox, which reduces their size and weight and increases reliability. However, a number of implications arise when one or more electrical machines are embedded within the core of the engine:

1. Accommodation of the embedded generators requires revision of the design of the engine components from their current state, which may change the component structure and probably the profile of the airflow through the engine.
2. The heat loss within the generator places a significant burden on the engine oil cooling system, requiring additional or alternative heat exchange.
3. If the generator rotor is only supported through the main engine bearings, the small air-gap requirement of the generator may lead to obligatory stiffening of the engine structure, necessary to ensure that rotor and stator do not come into contact under high acceleration.
4. Transmitting high levels of electrical power to and from the core of the engine would require significant alterations in the supporting engine core structure relative to the engine pylon (Raimondi et al., 2002).



In VF, variations in engine speed manifest directly in the output frequency, as shown by (1) and Figure 2. The promising features of VF are its small size, weight, volume and cost compared with the other aircraft electrical power generation options. VF also offers a very cost-effective source of power for the galley loads, which consume a great deal of on-board power. However, VF may pose significant risk at higher power levels, particularly with high-power motor loads. Furthermore, the cost of the motor controllers required due to the variation in supply frequency needs to be taken into consideration when assessing VF (AbdElhafez & Forsyth, 2009; Cronin, 2005; Elbuluk & Kankam, 1997; Hoffman, 1999; Moir, 1998; Pearson, 1998; Weimer, 1993).

## **4.2 Generator topologies**

The anticipated increase in electrical power generation requirements on MEA suggests that high-power generators should be attached directly to the engine, mounted on the engine shaft and used for the engine start in an Integral Starter/Generator (IS/G) scheme. The harsh operating conditions and the high ambient temperatures push most materials close to or even beyond their limits, requiring more innovation in materials, processes and thermal management systems design.

Consequently, Induction, Switched Reluctance, Synchronous and Permanent Magnet machine types (Hoffman et al., 1985; Mollov et al., 2000; Cross, 2002 ) have been considered for application in MEA due to their robust features.

## **4.2.1 Induction generator**

Induction Generators (IGs) are characterized by their robustness, reduced cost and ability to withstand a harsh environment. However, the IG requires complex power electronics and is considered unlikely to match the power density of the other machines (Khatounian et al., 2003; Ying & Lin, 1995; Bansal et al., 2003, 2005).

### **4.2.2 Synchronous generator**

The current generator technology employed on most commercial and military aircraft is the three-stage wound-field synchronous generator (Hoffman, 1985). This machine is reliable and inherently safe, as the field excitation can be removed, de-energising the machine. Therefore, the rating of the three-stage synchronous generator has increased over the years, reaching 150 kVA (Hoffman, 1985) on the Airbus A380. The synchronous machine has the ability to absorb/generate reactive power, which enhances the stability of the aircraft power system. However, this machine requires external DC excitation, which unfortunately decreases the reliability and the efficiency.

## **4.2.3 Switched reluctance generator**


The Switched Reluctance (SR) machine has a very simple, robust structure and can operate over a wide speed range. The three-phase type has a salient rotor similar to that of a salient-pole synchronous machine. The stator consists of three phases; each phase is interfaced with the DC supply through two pairs of anti-parallel switch-diode combinations. Thus, the SR machine is inherently fault-tolerant. However, the machine has the severe disadvantage of producing high acoustic noise and torque ripple (Mitcham & Cullenm, 2002, 2005; Pollock & Chi-Yao, 1997; Trainer & Cullen, 2005; Skvarenina et al., 1996, 1997).

## **4.2.4 Permanent magnet generator**

The Permanent Magnet (PM) generator has a number of favourable characteristics (AbdElhafez & Forsyth, 2009; Argile, 2008; Bianchi, 2003; Jack et al., 1996; Pollock & Chi-Yao, 1997; Mecrow et al., 1996; Mitcham & Cullenm, 2002, 2005):

1. Ease of cooling, as the PM generator theoretically has almost zero rotor losses.
2. High efficiency compared to other machine types.
3. High volumetric and gravimetric power density.
4. High pole number with reduced length of stator end windings.
5. Self-excitation at all times.


However, conventional PM machines are claimed to have inferior fault tolerance compared with SR machines (Argile, 2008; Mecrow et al., 1996; White, 1996). Conventional PM generators are intolerant of elevated temperatures. Furthermore, PM generators require power converters with a high VA rating to cater for a wide speed range of operation (AbdElhafez & Forsyth, 2009; Bianchi, 2003; Jack et al., 1996; Mecrow et al., 1996; Mitcham & Cullenm, 2002, 2005). Therefore, a different implementation of PM machine technology is mandatory if these machines are to be used in aero-engines.

The fault-tolerant PM machines are one solution and offer high levels of redundancy and fault tolerance (Argile, 2008; Ho et al., 1988; Mitcham & Grum, 1998; Mellor et al., 2005). These machines are designed with a high number of phases, such that the machine can continue to deliver a satisfactory level of torque/power after a fault in one or more phases. Furthermore, each phase has minimal electrical, magnetic, and thermal impact upon the others (Argile, 2008; Jack et al., 1996; Jones & Drager, 1997; Mecrow et al., 1996; Mitcham & Cullenm, 2002, 2005; White, 1996). This is realised by:

1. The number of magnetic poles in the machine being similar to the stator slot number; each phase winding can be placed in a single slot, which is thermally isolated from the other phases (AbdElhafez, 2008; Adefajo, 2008; Jones & Drager, 1997; Mecrow et al., 1996; Mitcham & Cullenm, 2002).
2. The stator coils being wound around alternate teeth, which provides physical and magnetic isolation between the phases (AbdElhafez, 2008; Jones & Drager, 1997).
3. Each phase being attached to a separate single-phase power converter, which achieves the electrical isolation (AbdElhafez, 2008; Adefajo, 2008; Jack et al., 1996; Jones & Drager, 1997; Mecrow et al., 1996; Mitcham & Cullenm, 2002, 2005).
4. The machine synchronous reactance per phase being typically 1.0 p.u., limiting the short-circuit fault current to no greater than the rated phase current (AbdElhafez, 2008; Jack et al., 1996; Jones & Drager, 1997; Mecrow et al., 1996; Mitcham & Cullenm, 2002, 2005).




## **4.3 Integrated generation**

MEA, as mentioned, suggests innovative strategies for optimizing aircraft performance and reducing installation and operational costs, such as the IS/G and emergency power generation schemes.

## **4.3.1 Integral starter/generator**

Commonly, jet engines are externally started by pneumatic power from a ground cart. This reduces the system reliability and increases maintenance and running cost. A move toward internal starting for the engine is adopted in MEA.

The jet engine has two shafts: the High Pressure (HP) and Low Pressure (LP) shafts. The main generator is usually attached to the HP shaft. The trend is to use that generator as the prime mover to start the engine. Once the engine is started, the generator returns to its default operation as a generator. The prime mover (starter) is powered from the aircraft system, which during this stage is supplied from energy storage devices. The IS/G scheme has a number of advantages (AbdElhafez & Forsyth, 2009; Ganev, 2006; Elbuluk & Kankam, 1997; Ferreira, 1995; Skvarenina, 1996, 1997):

1. Improves the aircraft reconfigurability by eliminating the arrangement used previously for ground starting.
2. Allows the adoption of the All Electric Aircraft (AEA).
3. Uses an AMB system that results in a reliable, robust and compact engine.
4. Reduces the operational and maintenance cost, which boosts the air traffic industry.


Different machine topologies are suggested for the IS/G scheme; however, the SR and fault-tolerant PM machines are the most reliable. These machines do not require external excitation or sophisticated control techniques. Also, they are either inherently or artificially fault-tolerant.

#### **4.3.2 Emergency power generation**

The level of the emergency power is expected to grow significantly for future aircraft, due to the rising demands of critical aircraft loads and services. Currently, the emergency power is sourced from generators coupled to a Ram Air Turbine (RAT). This scheme is deployed only under emergency conditions, and suffers from serious drawbacks such as (AbdElhafez et al., 2006a, 2006b, 2008; Adefajo, 2008; Bianchi, 2003):

1. It is expensive to develop, install and maintain.
2. It is unpopular with the airlines.
3. The integrity of such a 'one-shot' system is always subject to some doubt.


The proposal is to utilize the windmill effect of the aero-engine fan, which is driven from the LP shaft, for emergency power generation. While the fan is rotating normally, the health of the emergency generation system is continuously monitored, and backup power will be immediately available following a main generator failure. The stored inertial energy of the engine is also significant and could be recovered as another source of emergency power (AbdElhafez & Forsyth, 2008, 2009; Ganev, 2006).

Different machine topologies are competing for the LP emergency generator. Trade-off studies were conducted to identify the most suitable machine technology. Due to the difficulty of the location, reliability is paramount, and it is clear that a brushless machine format is required. The harsh operating environment, particularly the extremely high ambient temperatures, pushes many common materials, e.g. permanent magnet and insulation materials, close to or beyond their operating limits. Consequently, cooling or alternative materials and processes would be required (AbdElhafez & Forsyth, 2008, 2009; Mitcham & Grum, 1998).

Machine efficiency is another crucial issue, since dissipated heat needs to be absorbed by the engine cooling system. Currently, the generator loss is absorbed by the engine oil system and this is in turn mainly cooled by the fuel entering the engine. This restricts the amount of heat that can be dissipated without introducing an alternative cooling method.

Some key requirements assisting in the choice of LP generator type are listed below (AbdElhafez & Forsyth, 2008, 2009):

1. The machine operates only as a generator; drive torque is not allowed.
2. The machine is subject to harsh operating environmental conditions (specifically high temperature), with limited access for maintenance.
3. Power must be generated over a very wide speed range (approximately 12:1) with an output voltage compatible with the aircraft DC-distribution system voltage of 350 V DC.
4. The machine is fault-tolerant, such that it continues to run even if there is a fault on one or two phases without significantly degrading the output power.


Also the operating speed range, weight and volume constraints are important parameters that affect the choice of machine type.

Several brushless machine types seem to have the required ruggedness and hence the capability of operating in such an environment. These include the IG, SR and PM machines (AbdElhafez & Forsyth, 2008, 2009; Mitcham & Grum, 1998).
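To see why the LP shaft's wide speed range (roughly 12:1) dominates the converter sizing, note that a PM machine's open-circuit EMF grows linearly with shaft speed. The back-of-envelope sketch below uses assumed numbers, not values from the chapter:

```python
def pm_emf(speed_rpm: float, k_e: float) -> float:
    """Open-circuit EMF of a PM machine, proportional to speed: E = k_e * n."""
    return k_e * speed_rpm

n_min, n_max = 1000.0, 12000.0   # assumed 12:1 LP-shaft speed range
k_e = 350.0 / n_min              # size the machine so E(n_min) matches 350 V

print(pm_emf(n_min, k_e))        # 350.0 V at minimum speed
print(pm_emf(n_max, k_e))        # 4200.0 V at maximum speed
```

The converter must regulate this 12x EMF spread down to the fixed 350 V DC bus at all speeds, which is the high-VA-rating penalty attributed to PM generators in Section 4.2.4.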

## **5. Interfacing circuits**


There are many occasions within the aircraft industry where it is required to convert electrical power from one level/form to another, resulting in a wide range of Power Electronics Circuits (PECs) such as AC/DC, DC/DC, DC/AC and matrix converters (AbdElhafez & Forsyth, 2009; Chivite-Zabalza, 2004; Cutts, 2002; Lawless & Clark, 1997; Matheson & Karimi, 2002; Moir & Seabridge, 2001; Singh et al., 2008). There are general requirements that a PEC should satisfy:


3. The PEC should be efficient and able to operate in harsh conditions such as high temperature and with low maintenance.

4. The PEC should emit minimum levels of harmonics and Electromagnetic Interference (EMI).

5. The PEC should be easily upgraded and computerized.

Innovation in the area of power electronics components is required to enable the realisation of the MEA. Wide-Band Gap (WBG) High-Temperature Electronics (HTE) is an example of these developments. Devices manufactured from WBG HTE are capable of operating at both higher temperatures (up to 600 °C) (Reinhardt & Marciniak, 1996) and higher efficiencies compared to Si-based devices (-55 °C to 125 °C). A number of advantages are expected to be realized from employing WBG HTE devices (AbdElhafez et al., 2006, 2008; Howse, 2003; Gong et al., 2003; Lawless & Clark, 1997; Matheson & Karimi, 2002; Moir & Seabridge, 1998, 2001; Trainer & Cullen, 2005):

1. Eliminating/reducing the ECS required for cooling flight control electronics and other critical PECs

2. Reducing the engine control system weight and volumetric dimensions

3. Improving system reliability by using a distributed processing architecture

4. Optimizing the aircraft system and reducing installation and running costs

5. Improving system fault-tolerance and redundancy

Another main challenge for PECs in the aircraft is passive electrical component size, as the current components are heavy and bulky, especially at the high power levels expected in the MEA. However, on-going research into the design and fabrication of passive components for the MEA gives some optimistic results. For example, some advanced polymer insulation materials such as Eymyd L-30N and Upilex S (AbdElhafez & Forsyth, 2009; Cutts, 2002; Lawless & Clark, 1997; Moir & Seabridge, 2001) have the ability to operate over a wide temperature range (-269 °C to 300 °C). These materials can also withstand environmental conditions such as humidity, ultraviolet radiation, basic solutions and solvents at high altitudes (AbdElhafez & Forsyth, 2009; Lawless & Clark, 1997). The ceramic capacitor is a good example, offering remarkable advantages in volumetric density compared to other capacitor technologies (Lawless & Clark, 1997).

304 Recent Advances in Aircraft Technology

Power Generation and Distribution System for a More Electric Aircraft - A Review 305

## **6. Protection system**

The distribution system of the aircraft is adequately protected; different types of Circuit Breakers (CBs) are utilized. These include conventional and power-electronics-based breakers. The conventional CBs include air, SF6 and oil breakers, while Solid-State Circuit Breakers (SSCBs) represent the power-electronics-based breakers (AbdElhafez & Forsyth, 2009; Jones, 1999; Moir & Seabridge, 2001). A comparison between an SSCB and a generic conventional CB is given in Table 2 below.

| | SSCB | Conventional |
|---|---|---|
| Mechanism | The breaker consists of bidirectional switches that allow current flow in both directions; the gating signals of the switches are blocked to inhibit the fault current | Commonly an isolating air gap is developed in the path of the fault current; upon disconnection an arc is created, and the breaker is termed according to the arc-extinguishing methodology |
| Response time | Very small | Long |
| Power rating | Small | Medium to high |
| Volumetric size/weight | Compact/small | Bulky/heavy |
| Cost | Expensive | Cheap |
| Functionality | Multi-task: they perform current monitoring and status reporting | They must be instructed to open |

Table 2. Comparison between SSCB and conventional breakers.

## **7. References**

AbdElhafez, A. (2008). *Active Rectifier Control for Multi-Phase Fault-Tolerant Generators*. PhD dissertation, University of Manchester, UK.

AbdEl-Hafez, A.; Cross, A.; Forsyth, A.; Mitcham, A.; Trainer, D. & Cullen, J. (2006). Fault Tolerant Starter-Generator Converter Optimisation, Patent Application, Rolls-Royce, UK, 2006.

AbdEl-hafez, A.; Cross, A.; Forsyth, A.; Trainer, D. & Cullen, J. (2006). Single-Phase Active Rectifier Selection for Fault Tolerant Machine, in *3rd IET International Conference on Power Electronics, Machines and Drives, PEMD 2006*, pp. 435-439, April 2006.

AbdElHafez, A. & Forsyth, A. J. (2009). A Review of More-Electric Aircraft, in *Proceedings of the 13th International Conference on Aerospace Sciences and Aviation Technology, ASAT-13*, Cairo, Egypt, May 26-28, 2009.

AbdEl-Hafez, A.; Todd, R.; Forsyth, A. & Long, S. (2008). Single-Phase Controller Design for a Fault Tolerant Permanent Magnet Generator, in *IEEE Vehicle Power and Propulsion Conference, VPPC 2008*, pp. 250-257, September 2008.

Adefajo, O.; Barnes, M.; Smith, A.; Long, S.; Trainer, D.; AbdEl-hafez, A. & Forsyth, A. (2008). Voltage Control on an Uninhabited Autonomous Vehicle Electrical Distribution System, in *The 4th IET International Conference on Power Electronics, Machines and Drives, PEMD 2008*, pp. 676-680, April 2-4, 2008.

Andrade, L. & Tenning, C. (1992). Design of the Boeing 777 Electric System, *IEEE National Aerospace and Electronics Conference*, pp. 1281-1290, May 18-22, 1992.

Andrade, L. & Tenning, C. (1992). Design of Boeing 777 electric system, *IEEE Aerospace and Electronic Systems Magazine*, Vol. 7, (1992), pp. 4-11.

Argile, R.; Mecrow, B.; Atkinson, D.; Jack, A. & Sangha, P. (2008). Reliability analysis of fault tolerant drive topologies, in *The 4th IET International Conference on Power Electronics, Machines and Drives, PEMD 2008*, pp. 11-15, April 2-4, 2008.

Bansal, R.; Bhatti, T. & Kothari, D. (2003). Bibliography on the application of induction generators in nonconventional energy systems, *IEEE Transactions on Energy Conversion*, Vol. 18, (September 2003), pp. 433-439.

Bansal, R. (2005). Three-phase self-excited induction generators: an overview, *IEEE Transactions on Energy Conversion*, Vol. 20, (2005), pp. 292-299.

Bianchi, N.; Bolognani, S.; Zigliotto, M. & Zordan, M. (2003). Innovative remedial strategies for inverter faults in IPM synchronous motor drives, *IEEE Transactions on Energy Conversion*, Vol. 18, (June 2003), pp. 306-314.

Brock, A. & Schmidt, T. (1999). Electrical environments in aerospace applications, in *Proceedings of International Conference on Electric Machines and Drives, IEMD '99*, pp. 719-721, May 9-12, 1999.

Chivite-Zabalza, F.; Forsyth, A. & Trainer, D. (2004). Analysis and practical evaluation of an 18-pulse rectifier for aerospace applications, in *Second International Conference on Power Electronics, Machines and Drives, PEMD 2004*, Vol. 1, pp. 338-343, March 31-April 2, 2004.


Lawless, W. & Clark, C. (1997). Energy storage at 77 K in multilayer ceramic capacitors, *IEEE Aerospace and Electronic Systems Magazine*, Vol. 12, (August 1997), pp. 32-35.

Maldonado, M. & Korba, G. (1999). Power management and distribution system for a more-electric aircraft (MADMEL), *IEEE Aerospace and Electronic Systems Magazine*, Vol. 14, (1999), pp. 3-8.

Maldonado, M.; Shah, N.; Cleek, K.; Walia, P. & Korba, G. (1996). Power management and distribution system for a more-electric aircraft (MADMEL) - program status, in *Proceedings of the 31st Intersociety Energy Conversion Engineering Conference, IECEC-96*, Vol. 1, pp. 148-153, 1996.

Maldonado, M.; Shah, N.; Cleek, K.; Walia, P. & Korba, G. (1997). Power Management and Distribution System for a More-Electric Aircraft (MADMEL) - program status, in *Proceedings of the 32nd Intersociety Energy Conversion Engineering Conference, IECEC-97*, Vol. 1, pp. 274-279, 1997.

Matheson, E. & Karimi, K. (2002). Power Quality Specification Development for More Electric Airplane Architectures, in *SAE International Conference*, Vol. 2, pp. 343-347, 2002.

Mecrow, B.; Jack, A.; Haylock, J. & Coles, J. (1996). Fault-tolerant permanent magnet machine drives, *IEE Proceedings - Electric Power Applications*, Vol. 143, (November 1996), pp. 437-442.

Mellor, P.; Burrow, S.; Sawata, T. & Holme, M. (2005). A wide-speed-range hybrid variable-reluctance/permanent-magnet generator for future embedded aircraft generation systems, *IEEE Transactions on Industry Applications*, Vol. 41, (March-April 2005), pp. 551-556.

Mitcham, A.; Antonopoulos, G. & Cullen, J. (2004). Favourable slot and pole number combinations for fault-tolerant PM machines, *IEE Proceedings - Electric Power Applications*, Vol. 151, (September 2004), pp. 520-525.

Mitcham, A. & Grum, N. (1998). An integrated LP shaft generator for the more electric aircraft, in *IEE Colloquium on All Electric Aircraft*, (June 1998), pp. 1-7.

Mitcham, A. & Cullen, J. (2005). Permanent Magnet Modular Machines: New design philosophy, in *Electrical Drive Systems for the More Electric Aircraft One-Day Seminar*, pp. 1-8, 2005.

Mitcham, A. & Cullen, J. (2002). Permanent magnet generator options for the More Electric Aircraft, in *International Conference on Power Electronics, Machines and Drives, PEMD 2002*, pp. 241-245, April 16-18, 2002.

Moir, I. (1999). More-electric aircraft - system considerations, in *IEE Colloquium on Electrical Machines and Systems for the More Electric Aircraft*, (1999), pp. 1-9.

Moir, I. & Seabridge, A. (2001). *Aircraft Systems: Mechanical, Electrical, and Avionics Subsystems Integration*, London press, London, UK.

Moir, I. (1998). The all-electric aircraft - major challenges, in *IEE Colloquium on All Electric Aircraft*, (June 1998), pp. 1-6.

Mollov, S.; Forsyth, A. & Bailey, M. (2000). System modelling of advanced electric power distribution architecture for large aircraft, *SAE Transactions*, (2000), pp. 904-913.

Olaiya, M. & Buchan, N. (1999). High power variable frequency generator for large civil aircraft, in *IEE Colloquium on Electrical Machines and Systems for the More Electric Aircraft*, (November 1999), pp. 1-4.

Pearson, W. (1998). The more electric/all electric aircraft - a military fast jet perspective, in *IEE Colloquium on All Electric Aircraft*, (June 1998), pp. 1-9.

Ponton, A. et al. (1998). Rolls-Royce Market Outlook 1998-2017, *Rolls-Royce Publication No. TS22388*, (1998).


Cloyd, J. (1997). A status of the United States Air Force's More Electric Aircraft initiative, in *Proceedings of the 32nd Intersociety Energy Conversion Engineering Conference, IECEC-97*, Vol. 1, pp. 681-686, July 27-August 1, 1997.

Cossar, C. & Sawata, T. (2004). Microprocessor controlled DC power supply for the generator control unit of a future aircraft generator with a wide operating speed range, in *Second International Conference on Power Electronics, Machines and Drives, PEMD 2004*, Vol. 2, pp. 458-463, March 31-April 2, 2004.

Cronin, M. (1990). The all-electric aircraft, *IEE Review*, Vol. 36, (September 1990), pp. 309-311.

Cross, M.; Forsyth, A. & Mason, G. (2002). Modelling and simulation strategies for the electric system of large passenger aircraft, in *SAE 2002 Conference*, pp. 450-459, 2002.

Cutts, S. (2002). A collaborative approach to the More Electric Aircraft, in *International Conference on Power Electronics, Machines and Drives, PEMD 2002*, pp. 223-228, April 16-18, 2002.

Elbuluk, M. & Kankam, P. (1997). Potential starter/generator technologies for future aerospace applications, *IEEE Aerospace and Electronic Systems Magazine*, Vol. 12, (May 1997), pp. 24-31.

Emadi, K. & Ehsani, M. (2000). Aircraft power systems: technology, state of the art, and future trends, *IEEE Aerospace and Electronic Systems Magazine*, Vol. 15, (January 2000), pp. 28-32.

Ferreira, C.; Jones, S.; Heglund, W. & Jones, W. (1995). Detailed design of a 30-kW switched reluctance starter/generator system for a gas turbine engine application, *IEEE Transactions on Industry Applications*, Vol. 31, (May/June 1995), pp. 553-561.

Ganev, E. (2006). High-Reactance Permanent Magnet Machine for High-Performance Power Generation Systems, *SAE Power Systems Conference*, pp. 247-253, November 2006.

Glennon, T. (1998). Fault tolerant generating and distribution system architecture, in *IEE Colloquium on All Electric Aircraft*, (June 1998), pp. 1-4.

Gong, G.; Drofenik, U. & Kolar, J. (2003). 12-pulse rectifier for more electric aircraft applications, in *IEEE International Conference on Industrial Technology*, Vol. 2, pp. 1096-1101, December 10-12, 2003.

Ho, T.; Bayles, R. & Sieger, E. (1988). Aircraft VSCF generator expert system, *IEEE Aerospace and Electronic Systems Magazine*, Vol. 3, (April 1988), pp. 6-13.

Hoffman, A.; Hansen, A.; Beach, R.; Plencner, R.; Dengler, R.; Jefferies, K. & Frye, R. (1985). Advanced secondary power system for transport aircraft, *NASA Technical Paper 2463*, (May 1985). http://ntrs.nasa.gov/archive/nasa/19850020632\_1985020632.pdf

Hoffman, A.; Hansen, I.; Beach, R.; Plencner, R.; Dengler, R.; Jefferies, K. & Frye, J. (1985). Advanced secondary power system for transport aircraft, in *IEE Colloquium on All Electric Aircraft*, (June 1995), pp. 1-4.

Howse, M. (2003). All electric aircraft, *Power Engineer*, Vol. 17, (2003), pp. 35-37.

Jack, A.; Mecrow, B. & Haylock, J. (1996). A comparative study of permanent magnet and switched reluctance motors for high-performance fault-tolerant applications, *IEEE Transactions on Industry Applications*, Vol. 32, (July/August 1996), pp. 889-895.

Jones, R. (1999). The More Electric Aircraft: the past and the future?, in *IEE Colloquium on Electrical Machines and Systems for the More Electric Aircraft*, (November 1999), pp. 1-4.

Jones, S. & Drager, B. (1997). Sensorless switched reluctance starter/generator performance, *IEEE Industry Applications Magazine*, Vol. 3, (1997), pp. 33-38.

Khatounian, F.; Monmasson, E.; Berthereau, F.; Delaleau, E. & Louis, J. (2003). Control of a doubly fed induction generator for aircraft application, in *Proceedings of 29th Annual Conference of the IEEE Industrial Electronics Society, IECON '03*, Vol. 3, (November 2003), pp. 2711-2716.


Pollock, C. & Chi-Yao, W. (1997). Acoustic noise cancellation techniques for switched reluctance drives, *IEEE Transactions on Industry Applications*, Vol. 33, (March/April 1997), pp. 477-484.

Provost, M. (2002). The More Electric Aero-engine: a general overview from an engine manufacturer, in *International Conference on Power Electronics, Machines and Drives, PEMD 2002*, pp. 246-251, April 16-18, 2002.

Quigley, R. (1993). More Electric Aircraft, in *Proceedings of Eighth Annual Applied Power Electronics Conference and Exposition, APEC '93*, pp. 906-911, March 1993.

Raimondi, C.; Sawata, T.; Holme, M.; Barton, A.; White, G.; Coles, J.; Mellor, P. & Sidell, N. (2002). Aircraft embedded generation systems, in *International Conference on Power Electronics, Machines and Drives, PEMD 2002*, pp. 217-222, April 16-18, 2002.

Reinhardt, K. & Marciniak, M. (1996). Wide-band gap power electronics for the More Electric Aircraft, in *Proceedings of the 31st Intersociety Energy Conversion Engineering Conference, IECEC-96*, pp. 127-132, August 11-16, 1996.

Richter, E. & Ferreira, C. (1995). Performance evaluation of a 250 kW switched reluctance starter generator, in *Thirtieth IAS Annual Meeting, IEEE Industry Applications Conference, IAS '95*, Vol. 1, pp. 434-440, October 8-12, 1995.

Rosero, J.; Ortega, J.; Aldabas, E. & Romeral, L. (2007). Moving towards a more electric aircraft, *IEEE Aerospace and Electronic Systems Magazine*, Vol. 22, (2007), pp. 3-9.

Shing, Y. & Lin, C. (1995). A prototype induction generator VSCF system for aircraft, in *International IEEE/IAS Conference on Industrial Automation and Control: Emerging Technologies*, pp. 148-155, May 22-27, 1995.

Singh, B.; Gairola, S.; Singh, N.; Chandra, A. & Al-Haddad, K. (2008). Multipulse AC-DC Converters for Improving Power Quality: A Review, *IEEE Transactions on Power Electronics*, Vol. 23, (January 2008), pp. 260-281.

Skvarenina, T.; Pekarek, S.; Wasynczuk, O.; Krause, P.; Thibodeaux, R. & Weimer, J. (1997). Simulation of a switched reluctance, More Electric Aircraft power system using a graphical user interface, in *Proceedings of the 32nd Intersociety Energy Conversion Engineering Conference, IECEC-97*, Vol. 1, pp. 580-584, July 27-August 1, 1997.

Skvarenina, T.; Wasynczuk, O.; Krause, P.; Zon, W.; Thibodeaux, R. & Weimer, J. (1996). Simulation and analysis of a switched reluctance generator/More Electric Aircraft power system, in *Proceedings of the 31st Intersociety Energy Conversion Engineering Conference, IECEC-96*, Vol. 1, pp. 143-147, August 11-16, 1996.

Trainer, D. & Cullen, J. (2005). Active Rectifier for Fault Tolerant Machine Application, Derby, internal memorandum, February 24, 2005.

Weimer, J. (1993). Electrical power technology for the more electric aircraft, in *Proceedings of 12th AIAA/IEEE Digital Avionics Systems Conference, DASC 1993*, pp. 445-450, October 25-28, 1993.

Welchko, B.; Lipo, T.; Jahns, T. & Schulz, S. (2004). Fault tolerant three-phase AC motor drive topologies: a comparison of features, cost, and limitations, *IEEE Transactions on Power Electronics*, Vol. 19, (July 2004), pp. 1108-1116.

White, R. & Miles, M. (1996). Principles of fault tolerance, in *Eleventh Annual Applied Power Electronics Conference and Exposition, APEC '96*, Vol. 1, pp. 18-25, 1996.

Worth, F.; Forker, V. & Cronin, M. (1990). Advanced Electrical System (AES), in *Aerospace and Electronics Conference*, pp. 400-403, 1990.

**14** 

## **Power Electronics Application for More Electric Aircraft**

Mohamad Hussien Taha

*Hariri Canadian University, Lebanon* 

## **1. Introduction**

In the competitive world of airline economics, where low-cost carriers are driving down profit margins on airline seat miles, techniques for reducing the direct operating costs of aircraft are in great demand. In an effort to meet this demand, the aircraft manufacturing industry is placing greater emphasis on the use of technology which can influence maintenance costs and fuel usage (Faleiro, 2005).

There is a general move in the aerospace industry to increase the amount of electrically powered equipment on future aircraft. This trend is referred to as the "More Electric Aircraft". It assumes using electrical energy instead of hydraulic, pneumatic and mechanical means to power virtually all aircraft subsystems, including flight control actuation, the environmental control system and utility functions. The concept offers the advantages of reduced overall aircraft weight, reduced need for ground support equipment and maintenance, and increased reliability (Taha, 2007; Wiemer, 1999).

Many aircraft power systems now operate with a variable frequency over a typical range of 360 Hz to 800 Hz.

Distribution voltages for an aircraft system can be classified as:

a. Nominal 115/200 V rms and 230/400 V rms AC, both one-phase and three-phase, over the variable frequency range.

b. Nominal 14, 28 and 42 V DC.

c. High DC voltage, which could be suitable for use with an electric actuator (or other aircraft loads).

This chapter presents studies, analysis and simulation results for boost and buck converters at variable input frequency using a vector control scheme. The design poses significant challenges due to the supply frequency variation and requires many features such as:

1. The supply current to the converter must have a low harmonic content to minimize its impact on the aircraft variable frequency electrical system.

2. A high input power factor must be achieved to minimize reactive power requirements.

3. Power density must be maximized for minimum size and weight.


#### **2. Boost converter for aircraft application**

A three-phase boost converter, shown in Fig. 1, with six-step PWM provides a DC output and sinusoidal input currents with no low-frequency harmonics. However, the switching-frequency harmonics contained in the input currents must be suppressed by the input filter. Referring to Fig. 1, after the output capacitor has charged up via the diodes to a voltage equal to 1.73Vpk, the diodes are all reverse biased. Turning on one of the MOSFETs in each of the three phases will cause the inductor current to increase. Assume the input voltage Va is positive: if S2 is turned on, the inductor current increases through diode D4 or D6 and the magnetic energy is stored in the inductor. Since diodes D1, D3 and D5 are reverse biased, the output capacitor Cdc provides the power to the load. When S2 is turned off, the stored energy in the inductor and the AC source are transferred to Cdc and the load via the diodes. When the AC voltage is negative, S1 is turned on and the inductor current increases through diode D3 or D5. The same operating modes apply to phase B and phase C (Taha, 2008; Habetler, 1993). Fig. 2 and Fig. 3 show the different operating modes.

Fig. 1. Boost converter.

Fig. 2. Boost converter when Va is positive.


Fig. 3. Boost converter when Va is negative.
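As a quick numerical check of the precharge condition described above: before any switching, the diode bridge charges the output capacitor to the peak of the line-to-line voltage, which is the factor of 1.73 (√3) applied to the peak phase voltage Vpk. This is a minimal sketch, assuming the nominal 115 V rms aircraft phase voltage mentioned in the introduction:

```python
import math

def precharge_dc_voltage(v_phase_rms: float) -> float:
    """DC-link voltage after the diode bridge precharges the output
    capacitor: the line-to-line peak, i.e. sqrt(3) * Vpk ~= 1.73 * Vpk."""
    v_pk = v_phase_rms * math.sqrt(2.0)   # peak phase voltage Vpk
    return math.sqrt(3.0) * v_pk          # peak line-to-line voltage

# Nominal 115 V rms aircraft phase voltage
print(round(precharge_dc_voltage(115.0), 1))  # ~281.7 V
```

Only above this precharge level can the boost action raise the DC link further; until then the diodes conduct as a plain six-pulse rectifier.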

#### **3. Buck converter for aircraft application**

The buck 3-phase/DC converter is a controlled-current circuit which relies on pulse-width modulation of a constant current to achieve low distortion. As shown in Fig. 4, the circuit consists of three power MOSFETs and 12 diodes, an AC-side filter and a DC-side filter.

The AC-side input and DC-side output filters are standard second-order low-pass L-C filters. For the input filter, the carrier frequency has to be considerably higher than the filter resonance frequency in order to avoid resonance effects and ensure carrier attenuation. The AC-side filter is arranged to bypass the commutating energy when the MOSFETs are turning off and to absorb the harmonics from the high-frequency switching. At the DC side, the inductor is used to maintain a constant current; this inductor can be relatively small since the ripple frequency is related to the switching frequency. The magnitude and the phase of the input current can be controlled, and hence the power transfer that occurs between the AC and DC sides can also be controlled (Green et al., 1997).

Fig. 4. Buck converter.
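The constraint that the carrier must sit well above the input filter's resonance (and, on a 360-800 Hz supply, the resonance well above the supply band) can be checked numerically. This is a sketch with hypothetical component values; L_in, C_in and the 20 kHz carrier are illustrative assumptions, not values from the text:

```python
import math

def lc_resonance_hz(L: float, C: float) -> float:
    """Resonant frequency of a second-order L-C low-pass filter."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L_in = 200e-6   # assumed series inductance, 200 uH
C_in = 10e-6    # assumed shunt capacitance, 10 uF
f_res = lc_resonance_hz(L_in, C_in)   # ~3.6 kHz for these values

f_supply_max = 800.0   # top of the 360-800 Hz aircraft supply range
f_carrier = 20e3       # assumed switching (carrier) frequency

# The resonance must clear the supply band and stay well below the carrier
assert f_supply_max < f_res < f_carrier
print(round(f_res))
```

With a variable-frequency supply the lower bound matters as much as the upper one: the resonance must not drift into the 360-800 Hz band at any operating point.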



The input phase voltages Va, Vb, Vc and the input currents Ia, Ib, Ic are assumed to be sinusoidal, of equal magnitude and symmetrical:

$$V_a = V_{pk}\sin(\omega t) \qquad I_a = I_{pk}\sin(\omega t + \varphi) \tag{1}$$

$$V_b = V_{pk}\sin(\omega t - 2\pi/3) \qquad I_b = I_{pk}\sin(\omega t - 2\pi/3 + \varphi) \tag{2}$$

$$V_c = V_{pk}\sin(\omega t + 2\pi/3) \qquad I_c = I_{pk}\sin(\omega t + 2\pi/3 + \varphi) \tag{3}$$

Figure 5 shows 60 degrees of the two sine waveforms. The modulating on-times are

$$T_a = TM\sin(\omega t + \varphi) \tag{4}$$

$$T_b = TM\sin(\omega t + 2\pi/3 + \varphi) \tag{5}$$

The freewheeling time Tf is equal to:

$$T_f = T - T_a - T_b \tag{6}$$

where φ is the displacement angle, Vpk is the peak phase voltage, M is the modulation index and T is the PWM switching period.

The general operation of the system is as follows. The switching of the devices is divided into six equal intervals of the 360-degree mains cycle, and the waveforms repeat a similar pattern in each interval. At any time during a switching interval, only two converter legs are modulated independently and the third leg is always on. There are some time intervals in which only one device is on, providing a freewheeling path for the DC current; during this time the energy stored in the DC inductor feeds the load.

Fig. 5. Two 60° sine waves.

#### **Mode 1**

312 Recent Advances in Aircraft Technology


With S2 on and S1 modulated by reference Ta, current flows in phase (a) and phase (b), with Ia > 0 and Ib < 0. The bridge output voltage VL is connected to the main line supply Vab which, opposed by VDC, is applied across the inductor. The current IL in the inductor increases.

$$I_L = I_a = -I_b \tag{7}$$

$$V_L = V_{ab} \tag{8}$$

Fig. 6. Mode 1 equivalent circuit.

#### **Mode 2**

With S2 on and S3 modulated by Tb, current flows in phase (c) and phase (b), with Ic > 0 and Ib < 0. The line voltage Vcb, opposed by VDC, is applied across the inductor. Again the inductor current increases.

$$I_L = I_c = -I_b \tag{9}$$

$$V_L = V_{cb} \tag{10}$$

Fig. 7. Mode 2 equivalent circuit.

Power Electronics Application for More Electric Aircraft 315


#### **Mode 3**

In this mode only one MOSFET is on (S2); the inductor current freewheels, the converter is disconnected from the mains and the DC voltage is zero.

$$V_L = 0 \tag{11}$$

Therefore the average voltage VL over one switching period T is:

$$V_L = \left[(V_{ab} \times T_a) + (V_{cb} \times T_b)\right] / T \tag{12}$$

Where:

$$V_{ab} = V_a - V_b = 1.5\,V_{pk}\sin(\omega t) + 0.866\,V_{pk}\cos(\omega t) \tag{13}$$

$$V_{cb} = V_c - V_b = 1.73\,V_{pk}\cos(\omega t) \tag{14}$$

Substituting equations (4), (5), (13) and (14) into equation (12) yields:

$$V_L = V_{DC} = \left[(V_{ab} \times T_a) + (V_{cb} \times T_b)\right] / T = 1.5\,M\,V_{pk}\cos\phi \tag{15}$$

By assuming an ideal power converter in which the power losses are negligible, the input power equals the output power, and by assuming cos φ = 1 the DC output voltage can be defined as:

$$V_{DC} = 1.5\,M\,V_{pk} \tag{16}$$

Fig. 8. Mode 3 equivalent circuit.
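Equation (15) can be checked numerically by sweeping ωt across the 60° interval and substituting equations (4), (5), (13) and (14) into (12): the averaged voltage comes out constant at 1.5 M Vpk cos φ, independent of ωt. The sketch below uses illustrative assumed values and the exact factors √3/2 ≈ 0.866 and √3 ≈ 1.73:

```python
import math

Vpk = 115 * math.sqrt(2)  # peak phase voltage [V] (assumed value)
M = 0.9                   # modulation index (assumed value)
phi = 0.2                 # displacement angle [rad] (assumed value)
T = 1.0                   # switching period (normalised)

for deg in range(0, 61, 5):
    wt = math.radians(deg)
    Ta = T * M * math.sin(wt + phi)                    # eq. (4)
    Tb = T * M * math.sin(wt + 2 * math.pi / 3 + phi)  # eq. (5)
    Vab = 1.5 * Vpk * math.sin(wt) + (math.sqrt(3) / 2) * Vpk * math.cos(wt)  # eq. (13)
    Vcb = math.sqrt(3) * Vpk * math.cos(wt)            # eq. (14)
    VL = (Vab * Ta + Vcb * Tb) / T                     # eq. (12)
    # the averaged output is ripple-free at the fundamental: eq. (15)
    assert abs(VL - 1.5 * M * Vpk * math.cos(phi)) < 1e-9
```

The cross terms in 2ωt cancel between the two products, which is why the result is the same at every sampled angle.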


Fig. 9. PWM for the buck converter.

#### **4. Adaptive reactive power control using boost converter**

In comparison to the operating frequencies of land-based power systems, which are normally 50-60 Hz, the operation of aircraft power systems at relatively high frequencies can present some technical difficulties. One area of importance is associated with the impedance of the potentially long cables (which may run along part of the wing and a large proportion of the fuselage). These cables connect electrical loads, such as electric actuators for aircraft flight surfaces, to the AC supply, or "point of regulation" (POR). In large modern aircraft, the cables can be in excess of 200 ft and contribute an impedance which depends on the cable's inductance and resistance. The inductive reactance XL is proportional to the operating frequency of the power system and is given by XL = 2πfL, where f is the operating frequency and L is the inductance of the cable; the reactance therefore changes with operating frequency (Taha M, Trainer R D 2004). As the connected load draws a current, the cable develops a voltage drop due to its impedance which is out of phase with respect to the voltage at the POR and has two detrimental effects:

- The voltage at the load is reduced below the regulated voltage at the point of regulation, which is usually at the generator output.
- The power factor of the load seen at the point of regulation reduces (even for a purely resistive load).

The voltage drop across the cable is clearly disadvantageous. The voltage drop may be tolerated, with the connected loads correspondingly derated for the lower received voltage. Alternatively, if the voltage drop across the length of the cable is not allowed to exceed a threshold (typically 4 V), it is necessary to provide cables that are both large and heavy so that their resistance remains low. Clearly space and weight are at a premium in aerospace applications. There can be significant weight savings if smaller, higher-resistance cables are used, particularly where low-duty-cycle, pulsed loads like electric


actuators are supplied. The detrimental effects of such cables may be offset if the system designer uses the high inductive reactance present at the higher operating frequencies to effect voltage boost and power factor correction.

The simplest type of compensation for this type of problem is to connect a set of 3-phase capacitors (star or delta) at the point of connection of the load, in a similar way to AC motor-start capacitors. The capacitors can be used as a generator of reactive power, but the beneficial effects are limited since the capacitive compensation is mainly controlled by the voltage magnitude and system frequency rather than by the requirements of the load. Having noted the limitations of connecting shunt capacitors, there may be some applications where this type of compensation is applicable.

There is growing interest in the use of advanced power electronic circuits for aerospace loads, particularly in the motor drives associated with electric actuators. The two main classes of converters currently being considered are active rectifiers and direct AC-AC frequency changer circuits (e.g. matrix converters). Both types of converter can be made to operate with leading, lagging or unity power factor by suitable control of the semiconductor switching elements.

The current view in the aerospace industry appears to be that the operation of these converters should be limited to unity power factor, and little (or no) work has been carried out to explore the true system-level benefits of variable power factor operation.

Fig. 10 shows a basic circuit diagram for an electric actuator load incorporating an advanced power electronic converter with power factor control. It is clear that by controlling the power factor of the converter (shown leading), the effects of cable inductance can be eliminated so that the load as seen from the POR becomes unity power factor. Other operating power factors may be desirable in order to optimize the operation of the overall power system, including the generator loading.

Because the effects are proportional to the load current flowing through the cable and to the system frequency, the reactive power compensation provided by the converter also needs to be variable.

The voltage magnitude at the load can be made the same as that at the POR. It could be beneficial in some applications to boost the input voltage by increasing the capacitive compensation provided by the power electronic converter.

The main benefit of using the advanced power electronic converter as a source of reactive power is to reduce (or eliminate) the voltage drop down the connecting cable. This offers the possibility of using high-impedance cables, with the benefits of reduced conductor diameter and significantly lower weight.

In order to understand the benefits of reactive power control, it is convenient to consider the flow of real and reactive current separately, as shown in Figure 10. Superposition can then be used to assess the net effect of both forms of current flow. Therefore:

$$i = i_p + j i_q = I(\cos\theta_1 + j\sin\theta_1) \tag{17}$$

Where


$$\theta_1 = \tan^{-1}(i_q / i_p) \tag{18}$$

$$E = V + (i_p + j i_q)R + (i_p + j i_q)jX_L \tag{19}$$

$$E = V + i_p R - i_q X_L + j(i_q R + i_p X_L) \tag{20}$$

$$E = |E|(\cos\theta_2 + j\sin\theta_2) \tag{21}$$

Where

$$\theta_2 = \tan^{-1}\left((i_q R + i_p X_L) / (V + i_p R - i_q X_L)\right) \tag{22}$$

For unity power factor θ1 should equal θ2. Therefore:

$$(i_q / i_p) = (i_q R + i_p X_L) / (V + i_p R - i_q X_L) \tag{23}$$

$$i_q = \left((V / X_L) \pm \left((V / X_L)^2 - 4\,i_p^2\right)^{1/2}\right) / 2 \tag{24}$$

In a practical system, ip could take the form of a current demand, and iq would be a separate reactive current demand made to vary as a function of ip and XL (frequency dependent). R and XL are cable-dependent parameters.
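A short numerical sketch of this calculation (all values below are illustrative assumptions): solving equation (24) for iq and then checking in equations (18) and (22) that θ1 = θ2, i.e. unity power factor at the point of regulation. Note that, as the derivation from (23) shows, the required iq does not depend on the cable resistance R:

```python
import math

V = 115.0   # POR phase voltage [V] (assumed value)
XL = 5.0    # cable inductive reactance [ohm] (assumed value)
R = 0.8     # cable resistance [ohm] (assumed value)
ip = 4.0    # real (active) current demand [A] (assumed value)

# eq. (24): take the '-' root, which gives the smaller reactive current demand
disc = (V / XL) ** 2 - 4 * ip ** 2
iq = ((V / XL) - math.sqrt(disc)) / 2

# eq. (18) and eq. (22): with this iq the load angle theta1 equals the
# source angle theta2, so the POR sees unity power factor
theta1 = math.atan2(iq, ip)
theta2 = math.atan2(iq * R + ip * XL, V + ip * R - iq * XL)
assert abs(theta1 - theta2) < 1e-9
```

The discriminant also makes the compensation limit visible: a real solution exists only while 2 ip ≤ V / XL.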

Referring to Fig. 10, the inputs here are the system frequency and the load current; the output is the Q demand, which is an input to the power electronic converter. The parameters of the cable are stored and used within the electronic circuitry to calculate the required compensation for the system under consideration.

Fig. 10. System performance for reactive power compensation.


#### **5. DQ vector control for the converters**

In the DQ vector control strategy the instantaneous 3-phase voltages and currents are transformed to a 2-axis reference frame which rotates at the angular frequency of the supply. This has the effect of transforming the three-phase AC quantities (representing rotating voltage and current phasors in the stationary co-ordinate frame) into DC quantities in the synchronously rotating frame (Taha et al., 2002; Taha, 2008). If the D axis is chosen to be aligned with the voltage phasor, the D and Q axis current components represent the active and reactive components respectively. Fig. 11 shows the schematic of the DQ control scheme implemented in the input converter.

The proposed control scheme consists of two parts:

1. An outer voltage controller.
2. An inner current controller.

The outer voltage controller regulates the DC link voltage. The error signal is used as input to the PI voltage controller, which provides a reference for the D current of the inner current controller. The Q current reference is set to zero to give unity power factor. A PI inner current controller is used to determine the demanded stationary DQ voltage values (Taha M & Trainer R D 2004; Kazmierkoski et al., 1991).

Fig. 11. DQ control block diagram.

Each gain in the controller affects the system characteristics differently. Settling time, steady-state error and system stability are affected by the amount of proportional gain. Selecting a large gain attains a faster system response, but at the cost of large overshoot and longer settling time. Application of integral feedback drives the steady-state error to zero: the integral term grows as the steady-state error accumulates, eventually forcing the error to zero. However, it can cause overshoot and ringing.

Selection of the two gain constants is critical in providing fast system response with good system characteristics.


The general formulas for the DQ transformations are given as follows. We assume that the three-phase source voltages va, vb and vc are balanced and sinusoidal with angular frequency ω.

The components of the input voltage phasor along the axes of a stationary orthogonal reference frame (α, β) are given by:

$$v_\alpha = v_a \tag{25}$$

$$v_\beta = \frac{1}{\sqrt{3}}\left(2\,v_b + v_a\right) \tag{26}$$

The input voltage can then be transformed to a rotating reference frame DQ chosen with the D axis aligned with the voltage phasor. The voltage components are given by:

$$v_d = v_\alpha \cos\omega t - v_\beta \sin\omega t \tag{27}$$

$$v_q = v_\alpha \sin\omega t + v_\beta \cos\omega t \tag{28}$$

The same transformations are applied to the phase currents:

$$i_d = i_\alpha \cos\omega t - i_\beta \sin\omega t \tag{29}$$

$$i_q = i_\alpha \sin\omega t + i_\beta \cos\omega t \tag{30}$$
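The αβ and DQ mappings can be sketched numerically as below. Note that the rotation here uses the common textbook Park form, vd = vα cos ωt + vβ sin ωt and vq = −vα sin ωt + vβ cos ωt, with cosine-referenced phases; sign conventions differ between texts depending on the chosen angle reference. The phase amplitude is an assumed value:

```python
import math

Vpk = 100.0  # peak phase voltage [V] (assumed value)

def abc_to_dq(va, vb, wt):
    """Stationary-frame projection (eqs. 25-26) followed by a conventional
    Park rotation. This variant keeps vd constant for cosine-referenced
    balanced phases; other angle references swap/negate the sine terms."""
    v_alpha = va                           # eq. (25)
    v_beta = (2 * vb + va) / math.sqrt(3)  # eq. (26)
    vd = v_alpha * math.cos(wt) + v_beta * math.sin(wt)
    vq = -v_alpha * math.sin(wt) + v_beta * math.cos(wt)
    return vd, vq

for deg in range(0, 360, 30):
    wt = math.radians(deg)
    va = Vpk * math.cos(wt)
    vb = Vpk * math.cos(wt - 2 * math.pi / 3)
    vd, vq = abc_to_dq(va, vb, wt)
    # balanced AC quantities become DC quantities in the rotating frame
    assert abs(vd - Vpk) < 1e-9 and abs(vq) < 1e-9
```

With the D axis locked to the voltage phasor, vd settles at the phase amplitude and vq at zero, which is what makes PI regulation in this frame straightforward.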

Let va1, vb1 and vc1 be the fundamental voltages per phase at the input of the converter. Then:

$$v_a = R i_a + L\,di_a/dt + v_{a1} \tag{31}$$

$$v_b = R i_b + L\,di_b/dt + v_{b1} \tag{32}$$

$$v_c = R i_c + L\,di_c/dt + v_{c1} \tag{33}$$

where L is the input line inductance and R is the resistance of the inductor.

Taking the steady-state DQ transformation for the inductor, the input voltage to the converter in the DQ reference frame is given by:

$$v_d = R i_d + L\,di_d/dt - \omega L i_q + v_{d1} \tag{34}$$

$$v_q = R i_q + L\,di_q/dt + \omega L i_d + v_{q1} \tag{35}$$

The active and reactive powers are given by:

$$P = v_d i_d + v_q i_q \tag{36}$$

$$Q = v_d i_q - v_q i_d \tag{37}$$
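In the rotating frame these power expressions are DC quantities. The minimal sketch below (illustrative assumed values) shows that forcing the Q-axis current reference to zero, as the control scheme above does, makes the reactive power of equation (37) zero, i.e. unity power factor:

```python
# D axis aligned with the voltage phasor, so vq = 0 (assumed values)
vd, vq = 100.0, 0.0
# Q current reference set to zero for unity power factor
id_, iq = 8.0, 0.0

P = vd * id_ + vq * iq   # eq. (36)
Q = vd * iq - vq * id_   # eq. (37)
assert P == 800.0 and Q == 0.0

# a non-zero iq would appear directly as reactive power: eq. (37)
assert vd * (-3.0) - vq * id_ == -300.0
```

Note these are per-phase-amplitude quantities in this (non-power-invariant) transform; the scaling to total three-phase power depends on the transform convention used.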

Inverse DQ transformations then need to be applied to provide the three-phase modulating waves (varef, vbref and vcref) for the PWM generation.

The main advantages of the DQ control are:

1. Direct control of the active and reactive power.
2. Fast dynamics of the current control loops.


The PWM generator is based on a regular asymmetric PWM strategy.

#### **Voltage Control**

The DC side may be modelled by a capacitor C, representing the smoothing capacitors, and a resistor R, representing the load. This is shown in Figure 12.

Fig. 12. Schematic of dc voltage link.

The linearised model for the DC side is given by the open-loop transfer function relating the DC link voltage to the supply current:

$$G(s) = \frac{v_{DC}(s)}{i(s)} = \frac{R}{1 + RCs} \tag{38}$$

Applying the PI controller illustrated, i(s) is given by:

$$i(s) = \left(K_p + \frac{K_i}{s}\right)\left(v_{REF} - v_{DC}\right) \tag{39}$$

Thus, the closed-loop transfer function is given by:

$$\frac{v_{DC}(s)}{v_{REF}(s)} = \frac{\left(K_p s + K_i\right)/C}{s^2 + \dfrac{s\left(1 + R K_p\right)}{RC} + \dfrac{K_i}{C}} \tag{40}$$

To give a damped response, the poles of the system should be placed along the negative real axis in the s-domain, i.e. at s = −ω1 and s = −ω2, giving the transfer function:

$$\frac{v_{DC}(s)}{v_{REF}(s)} = \frac{\left(K_p s + K_i\right)/C}{s^2 + s\left(\omega_1 + \omega_2\right) + \omega_1\omega_2} \tag{41}$$

By equating the coefficients of the denominators of the above equations, the proportional and integral gains are:

$$K_p = C\left(\omega_1 + \omega_2\right) - 1/R \tag{42}$$

$$K_i = C\,\omega_1\omega_2 \tag{43}$$

The zero of the transfer function is where:

$$\left(K_p s + K_i\right)/C = \left(\left(C\left(\omega_1 + \omega_2\right) - 1/R\right)s + C\,\omega_1\omega_2\right)/C = 0 \tag{44}$$

Therefore:




$$s = \frac{-C\,\omega_1\omega_2}{C\left(\omega_1 + \omega_2\right) - 1/R} \tag{45}$$

An approach to the controller design is to locate this zero to coincide with one of the poles, say at s = −ω1, so as to cancel its effect. This gives:

$$\omega_1 = 1/RC \tag{46}$$

The second pole can then be placed at any desired location, to give the desired bandwidth. This gives the proportional and integral gains:

$$K_p = C\,\omega_2 \tag{47}$$

$$K_i = \omega_2 / R \tag{48}$$
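The pole-placement recipe of equations (38)-(48) can be sketched numerically. R, C and the bandwidth pole ω2 below are assumed values: placing ω1 = 1/RC cancels the controller zero, and the resulting gains reproduce the desired closed-loop denominator (s + ω1)(s + ω2):

```python
import math

R = 50.0                 # load resistance [ohm] (assumed value)
C = 470e-6               # DC-link capacitance [F] (assumed value)
w2 = 2 * math.pi * 100   # desired bandwidth pole [rad/s] (assumed value)

w1 = 1 / (R * C)  # eq. (46): pole chosen to cancel the controller zero
Kp = C * w2       # eq. (47)
Ki = w2 / R       # eq. (48)

# zero of (Kp*s + Ki)/C sits at s = -Ki/Kp, which should equal -w1 (eqs. 45-46)
assert abs(Ki / Kp - w1) < 1e-9

# denominator coefficients of eq. (40) must match (s + w1)(s + w2) of eq. (41)
assert abs((1 + R * Kp) / (R * C) - (w1 + w2)) < 1e-6
assert abs(Ki / C - w1 * w2) < 1e-6
```

After the cancellation the closed loop behaves as a first-order system with bandwidth ω2, which is why ω2 alone sets the response speed.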

#### **Current Control for boost converter**

In this case the system is the line from the generator to the input converter, which may be modelled by an inductor in series with a resistor. The generator e.m.f. is assumed to have no dynamic effect, and so is represented as a short circuit. The system schematic is shown in Fig. 13.

Fig. 13. Schematic of current control for the boost converter.

The phase current is given by:

$$i_a = -\frac{v_a}{R} - \frac{L}{R}\frac{di_a}{dt} \tag{49}$$

The open loop transfer function relating the phase current to the phase voltage is, therefore:

Power Electronics Application for More Electric Aircraft 323


$$\frac{i\_a}{v\_a} = -\frac{1}{R + Ls}\tag{50}$$

From this simple transfer function, it would appear that the proposed PI controller would suffice, driving the steady-state current error to zero and allowing the behaviour and bandwidth (i.e. the positions of the poles) of the closed-loop system to be fully determined by choosing the proportional and integral gains. The D-axis and Q-axis currents are compared to their respective demanded values and the errors are applied to individual PI controllers to give voltage demands referred to the D-axis and Q-axis. With the feedforward and decoupling terms, the transfer functions of the systems being controlled are:

$$\frac{i\_d(s)}{v\_d'(s)} = \frac{1}{Ls + R} \tag{51}$$

$$\frac{i\_q(s)}{v\_q'(s)} = \frac{1}{Ls + R} \tag{52}$$

Again, these are first-order equations and similar to the voltage control loop, the PI controllers will drive the steady-state error to zero and enable the behaviour and bandwidth of the closed-loop system to be determined by placing the poles appropriately.
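The same procedure can be sketched for the first-order plants of Eqs. (51) and (52); the line parameters below are assumed for illustration only:

```python
# Sketch of the D/Q current-loop PI design for the first-order plant
# 1/(Ls + R) of Eqs. (51)-(52), using the same pole-zero cancellation as the
# voltage loop.  L and R are assumed line parameters, not values from the text.
import math

L = 100e-6                      # line inductance [H] (assumed)
R = 0.1                         # line resistance [ohm] (assumed)

alpha1 = R / L                  # plant pole, cancelled by the PI zero
alpha2 = 2 * math.pi * 1000.0   # pole setting the desired bandwidth (1 kHz)

Kp = L * alpha2                 # analogous to Eq. (47), with C replaced by L
Ki = R * alpha2                 # gives Ki/Kp = R/L = alpha1

# Characteristic polynomial L*s^2 + (R + Kp)*s + Ki has roots -alpha1, -alpha2:
assert abs((R + Kp) / L - (alpha1 + alpha2)) < 1e-6
print(f"Kp = {Kp:.4f}, Ki = {Ki:.1f}")
```

After cancellation the open loop reduces to an integrator of gain $K\_p/L$, so the closed-loop response is first order with bandwidth $\alpha\_2$.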

#### **Current Control for buck converter**

The idea of controlling the current of the AC-side LC filter has been proposed as a way of suppressing the excitation of the resonance of this filter. In steady state and in the absence of distortion there are no current components to excite the resonance, because the resonant frequency will have been chosen to fall between the fundamental and the switching frequency. During transients, the resonance of the filter can be damped by choosing the characteristic impedance to match the resistance and the inductance.

In this case the system is the line from the generator to the input converter, which may be modelled by an inductor in series with a resistor and a capacitor. The system schematic is shown in Fig. 14.

Fig. 14. Schematic of current control for the buck converter.

The open loop transfer function relating the phase current to the phase voltage is, therefore:

$$G(s) = \frac{1}{s^2 L\_{ac}C\_{ac} + sR\_{ac}C\_{ac} + 1} \tag{53}$$

From this transfer function it would appear that the proposed PI controller would suffice, driving the steady-state error to zero and allowing the behaviour and bandwidth (position of the poles) of the closed-loop system to be fully determined by choosing the proportional and integral gains. The procedure for this is the same as that described above for voltage control.
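The resonance-placement requirement stated above can be verified against the buck-converter filter values given in Table 1:

```python
# Check of the LC-filter resonance of Eq. (53), using the buck-converter
# filter values from Table 1 (Lac = 150 uH, Cac = 1 uF).
import math

L_ac = 150e-6   # [H], Table 1
C_ac = 1e-6     # [F], Table 1

f_res = 1.0 / (2 * math.pi * math.sqrt(L_ac * C_ac))  # undamped resonance
Z0 = math.sqrt(L_ac / C_ac)                           # characteristic impedance

# f_res is about 13 kHz: above the 360-800 Hz fundamental and below the
# 23.76-33.6 kHz switching frequencies of Table 1, as required.
assert 800 < f_res < 23760
print(f"f_res = {f_res:.0f} Hz, Z0 = {Z0:.1f} ohm")
```

The characteristic impedance (about 12 Ω here) is the quantity matched against the line resistance and inductance to damp the resonance during transients.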

## **6. Hardware design**

322 Recent Advances in Aircraft Technology


All of the converter components had to be selected so that normal service maintenance would ensure the retention of their specified characteristics through the full range of operational and environmental conditions likely to be encountered through the life of the aircraft, or support facility, in which they are installed (Taha M 1999).

## **6.1 Capacitors**

The choice of capacitors is very important for the aerospace industry. Wet aluminium electrolytic capacitors are not suitable due to their limited operating temperature range and hence limited life. Equivalent series resistance is also a problem for these and other types of electrolytic capacitor, and therefore alternative technologies, such as ceramic or plastic film, are recommended.

Ceramic capacitors have a good lifetime, low series resistance and they work in high-temperature conditions. On the other hand, for a rating of a few hundred volts this type of capacitor has a very small value per unit volume, and units are only available up to about 20 µF. Size and weight are very important for this converter, so care was taken to choose the optimal value of the DC capacitor.
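In practice this means the DC filter is built as a parallel bank of ceramic units. A small sketch, where the 20 µF ceiling comes from the text but the per-unit ESR figure is an illustrative assumption:

```python
# Sketch of building the DC filter capacitance from parallel ceramic units.
# The 20 uF per-unit ceiling is from the text; the ESR figure is assumed.
import math

C_target = 50e-6    # DC output filter capacitance [F] (Table 1, boost)
C_unit = 20e-6      # largest available ceramic unit [F] (from the text)
esr_unit = 5e-3     # ESR of one unit [ohm] (illustrative assumption)

n = math.ceil(C_target / C_unit)   # units needed in parallel
C_bank = n * C_unit                # capacitance of the bank
esr_bank = esr_unit / n            # paralleling also divides the ESR

print(f"{n} units in parallel: {C_bank * 1e6:.0f} uF, ESR {esr_bank * 1e3:.2f} mOhm")
```

A side benefit of the bank is that the effective series resistance falls with the number of units, which helps with the ripple-current heating that rules out electrolytics.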

#### **6.2 Magnetic components**

Another important factor is the design of the magnetic components. In order to achieve a small air gap, minimum winding turns, minimum eddy-current losses and small inductor size, the inductor should be designed to operate at the maximum possible flux density. Care should also be taken to ensure that the filter inductors do not saturate during the overload condition: as the cores saturate, the inductance falls and the THD rises.
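The sizing rule behind this paragraph can be sketched as follows. For a gapped core, the flux-linkage balance L·I<sub>pk</sub> = N·B·A<sub>e</sub> ties the turns count to the operating flux density; the core and current figures below are illustrative assumptions, not values from the chapter:

```python
# Sketch of the flux-density sizing rule: L * I_pk = N * B * A_e for a gapped
# core, so allowing B to approach B_max minimises the turns count N.
# Core and current figures are illustrative assumptions only.
import math

L = 100e-6      # target inductance [H] (boost AC filter value, Table 1)
I_pk = 32.0     # peak inductor current [A] (assumed)
B_max = 0.3     # allowed peak flux density [T] (assumed ferrite limit)
A_e = 4e-4      # core cross-sectional area [m^2] (assumed)
mu0 = 4e-7 * math.pi

N = math.ceil(L * I_pk / (B_max * A_e))   # minimum whole number of turns
gap = mu0 * N**2 * A_e / L                # air gap, core reluctance neglected

print(f"N = {N} turns, air gap = {gap * 1e3:.2f} mm")
```

Running at a higher B<sub>max</sub> reduces N, and with it the winding size and eddy-current losses; the penalty is less margin before the overload-saturation problem described above.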

## **7. Simulation results for boost and buck converter**

The power conversion in the boost or buck converter is performed exclusively in switched mode, which ensures that the efficiency of the power conversion is high. The switching losses of the devices increase with the switching frequency, yet the switching frequency should preferably be high in order to obtain small THD. Choosing the switching frequency therefore poses significant challenges due to:

1. Supply frequency variation (360 to 800 Hz).

For the boost converter the simulation was carried out with a fixed switching frequency. For the buck converter, one method that can be used is a variable switching frequency which depends on the input frequency. The trade-off between the filter values and the switching frequency has been studied in order to maintain the THD within the required value at the different input frequencies. Another method is to use the same switching frequency for all input frequencies; in this case the highest input frequency should be considered.

The parameter values used for the simulation are shown in Table 1. Fig. 15 to Fig. 18 show the input AC voltage and current and the DC output voltage.


| Parameter | Boost converter | Buck converter |
|---|---|---|
| RMS phase voltage | 115 V | 115 V |
| DC voltage setting | 400 V | 42 V |
| Load | 10 Ω | 0.5 Ω |
| AC input filter | Lac = 100 µH | Lac = 150 µH; Cac = 1 µF |
| DC output filter | Cdc = 50 µF | Ldc = 1 mH; Cdc = 50 µF |
| Switching frequency for 800 Hz input | 20 000 Hz | 33 600 Hz |
| Switching frequency for 360 Hz input | 20 000 Hz | 23 760 Hz |

Table 1. Simulation parameters.
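One way to read the buck-converter switching frequencies in Table 1 is that they are chosen as integer multiples of the supply frequency; this is an observation on the table values, not a statement made in the chapter:

```python
# Observation on Table 1: the buck converter's switching frequencies are
# integer multiples of the supply frequency, so the switching pattern stays
# synchronised to the supply over the 360-800 Hz range.
pairs = {360: 23760, 800: 33600}   # input frequency -> switching frequency [Hz]
for f_in, f_sw in pairs.items():
    ratio = f_sw / f_in
    assert ratio.is_integer()
    print(f"{f_in} Hz input: {f_sw} Hz switching, {int(ratio)} pulses per cycle")
```

This is consistent with the variable-switching-frequency method described above, in contrast to the boost converter's fixed 20 kHz.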

Fig. 15. Boost converter simulation results at 360 Hz input frequency.

Fig. 16. Boost converter simulation results at 800 Hz input frequency.

Fig. 17. Buck converter simulation results at 360 Hz input frequency.

Fig. 18. Buck converter simulation results at 800 Hz input frequency.

#### **8. Simulation results for the adaptive power control**

The simulation has been carried out at approximately 16 kW. Fig. 19 shows the "per phase" parameter values used. Fig. 20 and Fig. 21 show results for 360 Hz: the voltage drops to 107.5 V at the input filter of the converter. To compensate for the voltage drop across the cable, q (the reactive demand) has been set; this gave a leading power factor. Fig. 21 shows that the voltage Va3 increases to 111.3 V at 0.9 PF.

Fig. 22 and Fig. 23 show results for 800 Hz: the voltage drops to 106 V at the input filter of the converter. Fig. 23 shows that the voltage Va3 increases to 111.3 V at 0.9 PF.


Va1 is the voltage at the point of regulation.

Va3 is the voltage at the point of connection of the load.

L1 = 25 µH, R1 = 0.015 Ω: the generator inductance and resistance.

L2 = 10 µH, R2 = 0.01 Ω: the inductance and resistance of the cable from the generator to the contactor.

L3 = 20 µH, R3 = 0.1 Ω: the inductance and resistance of the cable from the contactor to the load.

L4 = 100 µH, R4 = 0.1 Ω: the inductance and resistance of the load converter input filter.

Fig. 19. Single phase parameters for the adaptive power control.

Fig. 20. Results at 360 Hz input frequency for unity power factor.
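A first-pass phasor estimate illustrates why a leading reactive demand raises the load-point voltage. The line impedances are taken from Fig. 19; lumping all four sections together and evaluating the current at the nominal voltage are simplifications, so the numbers differ from the simulated 107.5 V and 111.3 V:

```python
# Crude phasor estimate (not the chapter's simulation) of the voltage at the
# converter input filter for unity and 0.9 leading power factor.
# Line impedances from Fig. 19; other figures are assumptions.
import cmath
import math

f = 360.0                              # input frequency [Hz]
R = 0.015 + 0.01 + 0.1 + 0.1           # R1..R4 in series [ohm] (Fig. 19)
L = (25 + 10 + 20 + 100) * 1e-6        # L1..L4 in series [H] (Fig. 19)
Z = complex(R, 2 * math.pi * f * L)    # per-phase line impedance

V_src = 115.0                          # per-phase source voltage [V]
P_phase = 16000.0 / 3                  # ~16 kW three-phase load

def load_voltage(pf, leading):
    """Estimate |V| after the line drop; current taken at nominal voltage."""
    I_mag = P_phase / (V_src * pf)
    phi = math.acos(pf) * (1 if leading else -1)  # leading current leads V
    I = cmath.rect(I_mag, phi)
    return abs(V_src - I * Z)

v_unity = load_voltage(1.0, leading=False)
v_lead = load_voltage(0.9, leading=True)
print(f"unity PF: {v_unity:.1f} V, 0.9 leading PF: {v_lead:.1f} V")
```

Even this rough model shows the mechanism: the leading current's quadrature component cancels most of the reactive drop across the line inductance, lifting the load-end voltage by several volts.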


Fig. 21. Results at 360 Hz input frequency for 0.9 power factor.


Fig. 22. Results at 800 Hz input frequency for unity power factor.

Fig. 23. Results at 800 Hz input frequency for 0.9 power factor.

#### **9. Conclusion**

On the basis of the space vector concept a PWM controller was developed. It has been shown that sinusoidal modulation generated in a space vector representation with PI controllers gives adequate performance in steady state and a fast response under transient conditions. It has also been shown that, with the future use of advanced power electronic converters within aircraft equipment, there is the possibility to operate these at variable input frequency while keeping the input current harmonics low.


AC/DC buck and boost converters which operate over a range of input frequencies and offer low THD (less than 7%) have been described and simulated.

The operation and performance of the proposed topology was verified by simulating a 16 kW boost converter with a pure resistive load of 10 Ω and a 400 V DC output, and a 3.5 kW buck converter with a pure resistive load of 0.5 Ω and a 42 V DC output. The input current is sinusoidal and the power factor is unity. The DC voltage is well smoothed.

With the future use of advanced power electronic converters such as active rectifiers and matrix converters within aircraft equipment, there is the possibility to operate these at variable power factor in order to provide system level benefits. These include control of the voltage at the load and improvements in power factor seen at the POR.

#### **10. Acknowledgment**

I would like to express my sincere appreciation and respect to the late Prime Minister Rafik El hariri who is entirely responsible for funding my studies in England.

#### **11. References**


Faleiro, L. (2005). Beyond the More Electric Aircraft, *Aerospace America*, September 2005, pp 35-40

Green, A.; Boys, J. & Gates, G. (1988). 3-phase voltage sources reversible rectifier, *IEE Proceedings*, 1988, 135, pp 362-370

Green, T.; Taha, M.; Rahim, N. & Williams, B.W. (1997). Three Phase Step-Down Reversible AC-DC Power Converter, *IEEE Trans. Power Electron.*, 1997, 12, pp 319-324

Habetler, T. (1993). A Space Vector-Based Rectifier Regulator for AC/DC/AC Converters, *IEEE Trans. Power Electron.*, 1993, Vol 8, pp 30-36

Kazmierkowski, M.; Dzieniakowski, M. & Sulkowski, W. (1991). Novel space vector based current control for PWM inverters, *IEEE Trans. Power Electron.*, 1991, 6, pp 158-166

Taha, M. (1999). Power electronics for aircraft application, *Power electronics for demanding applications colloquium*, IEE, April 1999, 069, pp 5-8

Taha, M.; Skinner, D.; Gami, S.; Holme, M. & Raimondi, G. (2002). *Variable Frequency to Constant Frequency Converter (VFCF) for aircraft application*, PEMD 2002

Taha, M. & Trainer, R.D. (2004). Adaptive reactive power control for aircraft application, *Power Electronics, Machines and Drives, 2004 (PEMD 2004), Second International Conference on* (Conf. Publ. No. 498), Vol. 2, pp 469-474

Taha, M. (2007). Active rectifier using DQ vector control for aircraft power system, *IEMDC 2007*, pp 1306-1310

Taha, M. (2008). Mitigation of Supply Current Distortion in 3-Phase/DC Boost Converters for Aircraft Applications, *PEMD 2008*

Weimer, J. (1995). Power Management and Distribution for More Electric Aircraft, *Proceedings of the 30th Intersociety Energy Conversion Engineering Conference*, pp 273-277

## **Key Factors in Designing In-Flight Entertainment Systems**

Ahmed Akl1,2,3, Thierry Gayraud1,2 and Pascal Berthou1,2 *1CNRS-LAAS, Université de Toulouse; 2UPS, INSA, INP, ISAE; LAAS, F-31077 Toulouse, France; 3College of Engineering, Arab Academy for Science, Technology, and Maritime Transport, Cairo, Egypt*

### **1. Introduction**

Most research concerning *In-Flight Entertainment (IFE)* systems is done on a case basis, without a global view that encompasses all IFE components. Thus, we try to highlight the key factors in designing an IFE system, and to show how its various components can integrate together to provide the required services for all parties involved with the system.

#### **1.1 Background and historical issues**

Flight entertainment started before the First World War by the Graf Zeppelin (see Figure 1). This aircraft had a long, thin body with a teardrop shape; it was about 776 feet long and 100 feet in diameter, filled with hydrogen, and the cabin was located under the hull; five engines were fixed to the hull to power the aircraft.

Fig. 1. The Graf Zeppelin aircraft

From the passengers' comfort perspective, this model was equipped with a kitchen having electric ovens and a refrigeration unit, a small dining room, washrooms for men and women, and passenger cabins with a capacity of two passengers each. Unfortunately, the craft was not heated, so passengers wore heavy coats and covered themselves with blankets during winter flights. As developments went on, the "*Hindenburg*" aircraft came with a heated passenger area, a larger dining room, a passengers' lounge with a piano as the first audio entertainment, a decorated writing room, more enhanced passenger cabins, and promenades with seating and windows that could be opened during the flight (Airships.net, Last visit 2011).

In 1949, the "*De Havilland DH 106 Comet*" was the first commercial jet airliner to go into service. It had four jet engines located into the wings. It provided passengers with low-noise

to use, and varieties of choice; otherwise, the passenger may get bored and is not able to get

Key Factors in Designing In-Flight Entertainment Systems 333

From the airlines companies' perspective, productivity and profitability are one of the main targets. Achieving these targets is always hindered by the strong competition between companies. Thus, airlines are trying to maximize their attractiveness to get more clients because every empty seat means a revenue loss. IFE systems can play a remarkable role in customer satisfaction and attraction, and it can be used as an efficient portal for in-flight shopping. Moreover, one of the main tasks of aircraft attendants is to keep the passengers calm, unstressed, and to quickly respond to their requests. IFE systems can be a factor of stress elimination, decreasing passenger's movements during the flight, and providing

Achieving such level of services requires various technologies and design concepts to be integrated together for implementing such systems. A single networking technology is not capable of providing all types of services. Thus, a good heterogeneous communication network is required to connect different devices and provide multiple services on both system and passenger's levels. For example, a GSM network can provide telephony services; WiFi, Bluetooth, and Infrared to keep passenger's devices connected to the system; LAN and/or

Section 2 presents the different types of services provided by IFE systems, and shows the various components which are directly used by passengers as well as the components working at the background, which passengers are not aware of their existence. Section 3 introduces our proposed SysML model that integrates parts of the IFE system to help designers to have a global view of the whole system. Section 4 presents our conclusion. Finally, section 5 discusses

IFE systems can provide various services for different parties such as airline companies, crew members, and basically passengers. These services are provided through software and hardware components; some components are used directly by passengers, while the others

IFE services can give solutions for different domains. They can provide health care and monitoring for passengers of health problems, business solutions to advertise products and support business decision making through surveys, and the expected service of

Although it seems that IFE systems are providing services to passengers only, but it can be extended to provide the cabin attendants with services to facilitate their job. Attendants have to keep a big smile and descent attitude during their work regardless of the current situation,

*Power Line Communication (PLC)* to form the communication network backbone.

the expected satisfaction level.

**1.2 Chapter structure**

future issues of IFE systems.

are used indirectly.

**2.1 IFE services**

entertainment.

**2.1.1 Crew services**

**2. IFE services and components**

request information quickly to the attendants.

pressurized cabin (when compared to propeller-driven airliners), and large windows; hot and cold drinks, and food are serviced through the galley; separate women and men washrooms were available (Davies & Birtles, 1999).

Starting from 1960, *In-Flight Entertainment (IFE)* systems started to attract attention; they were basically a pre-selected audio track that may be accompanied with a film projector. They had shown improvements in both vertical and horizontal dimensions. They expanded horizontally by improving the existing services; audio entertainment moved from using simple audio devices to surround sound and live radio; video display progressed from using a film projector, to CRT displays hanged in the ceiling, to LCD displays dedicated to each passenger. The vertical improvement was noticed through introducing new technologies; cabin telephones allowed passengers to make phone calls during the flight; the system become interactive and allowed passengers to select their own services, while in the past they were forced to follow fixed services; web-based internet services allowed passengers to use some services such as emails and SMS messaging.

The basic idea behind IFE systems was to provide passengers with comfortableness during their long range flights; especially with long transatlantic flights where passengers see nothing but a large blue surface, so that services were initially based on delivering food and drinks to passengers. As passengers demand for more services grows, accompanied with an increase in airlines competition and technology advancement, more services were introduced and modern electronic devices played a remarkable role. This caused a change in the basic concept behind IFE systems; it becomes more than just giving physical comfortableness and providing food. It is extended to provide interactive services that allow passengers to participate as a part of the entertainment process as well as providing business oriented services through connectivity tools. Moreover, it can provide means of health monitoring and physiological comfort.

In recent years, market surveys have revealed a surprising and growing trend in the importance of *IFE* systems with regard to choice of airline. With modern long range aircraft the need for "stop-over" has been reduced, so the duration of flights has also been increased. Air flights, especially long distance, may expose passengers to discomfort and even stress. (Liu, 2007) mentioned that the enclosed environment of the aircraft can cause discomfort or even problems to passengers. This may include psychological and physical discomfort due to cabin pressure, humidity, and continuous engine noise. IFE systems can provide stress reduction entertainment services to the passenger which provides mental distraction to decrease the psychological stress. This can be done by using e-books, video/audio broadcasting, games, internet, and On Demand services. On the other hand, physical problems can range from stiffness and fatigue to the threat of *Deep Vein Thrombosis (DVT)* (Westelaken et al., 2010). IFE systems can provide different solutions such as video guided exercises to decrease fatigue, and seat sensors to monitor the passenger's health status

In fact, passengers from highly heterogeneous pools (i.e., age, gender, ethnicity, etc...) cause an impact on the adaptive interface systems. In non-interactive IFE systems, services (i.e., video and audio contents) are usually implemented based on previous concepts of what passengers may like or require. Using an interactive system based on context-aware services can make passengers more comfortable since they are able to get their own personalized entertainment services. However, such system must be user friendly in terms of easiness to use, and varieties of choice; otherwise, the passenger may get bored and is not able to get the expected satisfaction level.

From the airlines companies' perspective, productivity and profitability are one of the main targets. Achieving these targets is always hindered by the strong competition between companies. Thus, airlines are trying to maximize their attractiveness to get more clients because every empty seat means a revenue loss. IFE systems can play a remarkable role in customer satisfaction and attraction, and it can be used as an efficient portal for in-flight shopping. Moreover, one of the main tasks of aircraft attendants is to keep the passengers calm, unstressed, and to quickly respond to their requests. IFE systems can be a factor of stress elimination, decreasing passenger's movements during the flight, and providing request information quickly to the attendants.

Achieving such level of services requires various technologies and design concepts to be integrated together for implementing such systems. A single networking technology is not capable of providing all types of services. Thus, a good heterogeneous communication network is required to connect different devices and provide multiple services on both system and passenger's levels. For example, a GSM network can provide telephony services; WiFi, Bluetooth, and Infrared to keep passenger's devices connected to the system; LAN and/or *Power Line Communication (PLC)* to form the communication network backbone.

pressurized cabin (when compared to propeller-driven airliners), and large windows; hot and cold drinks and food were served through the galley; and separate women's and men's washrooms were available (Davies & Birtles, 1999).

Starting from 1960, *In-Flight Entertainment (IFE)* systems started to attract attention; they basically consisted of a pre-selected audio track, possibly accompanied by a film projector. They have shown improvements in both the vertical and horizontal dimensions. They expanded horizontally by improving existing services: audio entertainment moved from simple audio devices to surround sound and live radio, and video display progressed from a film projector, to CRT displays hung from the ceiling, to LCD displays dedicated to each passenger. The vertical improvement came through the introduction of new technologies: cabin telephones allowed passengers to make phone calls during the flight; the system became interactive and allowed passengers to select their own services, whereas in the past they were forced to follow a fixed program; and web-based internet access allowed passengers to use services such as e-mail and SMS messaging.

The basic idea behind IFE systems was to provide passengers with comfort during long range flights, especially long transatlantic flights where passengers see nothing but a large blue surface, so services were initially based on delivering food and drinks to passengers. As passengers' demand for more services grew, accompanied by increasing airline competition and technological advancement, more services were introduced and modern electronic devices played a remarkable role. This changed the basic concept behind IFE systems: they became more than a means of physical comfort and food provision. They extended to interactive services that let passengers participate in the entertainment process, as well as business oriented services through connectivity tools. Moreover, they can provide means of health monitoring and physiological comfort.

In recent years, market surveys have revealed a surprising and growing trend in the importance of *IFE* systems with regard to choice of airline. With modern long range aircraft the need for "stop-overs" has been reduced, so the duration of flights has increased. Air flights, especially long distance ones, may expose passengers to discomfort and even stress. (Liu, 2007) mentioned that the enclosed environment of the aircraft can cause discomfort or even problems to passengers, including psychological and physical discomfort due to cabin pressure, humidity, and continuous engine noise. IFE systems can provide stress-reducing entertainment services that offer mental distraction to decrease psychological stress; this can be done through e-books, video/audio broadcasting, games, internet, and On Demand services. On the other hand, physical problems can range from stiffness and fatigue to the threat of *Deep Vein Thrombosis (DVT)* (Westelaken et al., 2010). IFE systems can provide solutions such as video-guided exercises to decrease fatigue, and seat sensors to monitor the passenger's health status.

In fact, passengers come from highly heterogeneous pools (i.e., age, gender, ethnicity, etc.), which has an impact on adaptive interface systems. In non-interactive IFE systems, services (i.e., video and audio contents) are usually implemented based on prior assumptions about what passengers may like or require. Using an interactive system based on context-aware services can make passengers more comfortable, since they are able to get their own personalized entertainment services. However, such a system must be user friendly in terms of ease of use.

#### **1.2 Chapter structure**

Section 2 presents the different types of services provided by IFE systems, covering both the components used directly by passengers and the components working in the background, whose existence passengers are not aware of. Section 3 introduces our proposed SysML model, which integrates parts of the IFE system to help designers get a global view of the whole system. Section 4 presents our conclusion. Finally, section 5 discusses future issues of IFE systems.

#### **2. IFE services and components**

IFE systems can provide various services for different parties, such as airline companies, crew members, and, above all, passengers. These services are provided through software and hardware components; some components are used directly by passengers, while others are used indirectly.

#### **2.1 IFE services**

IFE services can give solutions for different domains. They can provide health care and monitoring for passengers with health problems, business solutions that advertise products and support business decision making through surveys, and, of course, the expected entertainment services.

#### **2.1.1 Crew services**

Although it may seem that IFE systems provide services to passengers only, they can be extended to provide the cabin attendants with services that facilitate their job. Attendants have to keep a big smile and a decent attitude during their work regardless of the current situation, and they are burdened with various responsibilities and tasks. We believe that IFE systems can create a dynamic link between passengers and attendants. When an attendant responds to a passenger call, he does not know the reason for the call, so he has to make two trips: one to learn the request and a second to fulfill it. An IFE system can allow the passenger to inform the attendant of the request (e.g., drinking water), so that the attendant can complete the service in one trip instead of two. Moreover, the IFE system can ask the passenger whether he has requested a special meal, so that the attendant can bring the exact meal to the right seat without moving around with all the meals in hand while asking passengers.

The cabin intercommunication service allows the pilot and cabin crew to make announcements to passengers, such as boarding, door closure, take-off, turbulence, and landing announcements. These announcements are very important and need to be delivered to all passengers without any interruption; they are usually made via loudspeakers installed in the cabin. If a passenger is wearing a headset, or is not able to understand the announcement language, then only a few passengers will comprehend the message. An IFE system can elevate this service through its audio system: when an announcement is made while the passenger is running an entertainment service, the entertainment pauses and he hears the announcement through the IFE audio system. Moreover, if it is a standard message such as "*Fasten your seat belt*", it can be translated directly into the language currently used by the passenger.

Safety demonstrations are used to increase passenger safety awareness. The demonstrations are usually performed by crew members, which means that an attendant must stop any current activity and dedicate himself to the demonstration. As an alternative, the IFE system can be used to provide *aviation safety education for passengers* via multimedia services, ensuring accurate instructions, situational awareness, emergency responses, and relevant cabin-safety regulations (Chang & Liao, 2009), so that the attendants are freed to perform other tasks. Moreover, IFE systems can be used in pre-flight briefings for crew members to improve the quality and availability of information provided to the flight crew (Bani-Salameh et al., 2010).

#### **2.1.2 Entertainment services**

These are the basic services introduced by IFE systems. They aim at providing multimedia contents for passenger entertainment: audio tracks for different types of music channels, special programs recorded for the airline, games, and printed media.

• *Video on Demand:* As mentioned by (Alamdari, 1999), IFE systems usually include screen-based, audio, and communication systems. The screen-based products include video systems enabling passengers to watch movies, news, and sports. These systems have progressed into *Video on Demand (VoD)*, allowing passengers to control when they watch movies. The general VoD problem is to provide a library of movies where multiple clients can view movies according to their own needs in terms of when to start and stop a movie. This can be solved by using an *In-flight Management System* that stores the pre-recorded contents on a central server and streams a specific content to each passenger privately.

The service can be enhanced by using subtitles as a textual version of the running dialogue, usually displayed at the bottom of the screen with or without added information, to help viewers who are deaf or have hearing difficulties, or who have accent recognition problems, to follow the dialogue. In addition, subtitles can be written in a different language to help people who cannot understand the spoken dialogue.

Key Factors in Designing In-Flight Entertainment Systems 335

• *Single and multiplayer games:* Video games are another emerging facet of in-flight entertainment. Gaming systems can be networked to allow interactive playing by multiple passengers. Providing high quality gaming in an aircraft cabin environment presents significant engineering challenges. User expectations of video quality and game performance should be considered, because many users have experienced sophisticated computer games with multiplayer capabilities and high quality three-dimensional video rendering. Network traffic characteristics associated with computer games should be studied to help in system design; (Kim et al., 2005) measured the traffic of a *Massively Multi-player On-line Role Playing Game (MMORPG)*, showing the differences in traffic between the server and client sides. In a *Massively Multiuser Virtual Environment (MMVE)*, where a large number of users can interact in real time, consistency management is required to realize a consistent world view for all users. (Itzel et al., 2010) present an approach that identifies users who actually interact with each other in the virtual world, groups them into consistency sessions, and synchronizes them at runtime. On the other hand, there is a trend towards using wireless networks in IFE systems; the feasibility of wireless gaming has been studied in several works (Khan, 2010; Khan et al., 2010; Qi et al., 2009).

• *E-documents:* An in-flight magazine is a free magazine usually placed in the seat back by the airline company. Most airlines distribute a paper version, and some of them now also distribute their magazines digitally via tablet computer applications. Furthermore, ebooks are widely available electronically with value-added features and search options not available in their print counterparts. Electronic versions are not limited to just text; they may present information in multiple media formats: for example, the text about a type of bird may be accompanied by video depicting the bird in flight and audio featuring its song. Using an electronic version of printed media can change its importance by adding interactive features such as e-commerce services, where a passenger can choose products and buy them instantaneously.
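The VoD scheme described above, where a central library serves many passengers who each control their own playback, can be sketched minimally as follows. The `VodServer` class and its methods are illustrative assumptions, not the API of any real in-flight management system.

```python
# Minimal sketch of the VoD idea: one central library of pre-recorded
# titles, with each seat holding an independent playback position so
# passengers start and stop movies on their own schedule.
class VodServer:
    def __init__(self, library):
        self.library = library      # title -> duration in minutes
        self.sessions = {}          # seat -> (title, position in minutes)

    def start(self, seat, title):
        """Begin a private session for one seat at position 0."""
        if title not in self.library:
            raise KeyError("unknown title: " + title)
        self.sessions[seat] = (title, 0)

    def advance(self, seat, minutes):
        """Move a seat's playback forward, clamped to the title length."""
        title, pos = self.sessions[seat]
        self.sessions[seat] = (title, min(pos + minutes, self.library[title]))

    def position(self, seat):
        return self.sessions[seat]

server = VodServer({"news": 30, "feature": 120})
server.start("12A", "feature")
server.start("12B", "feature")   # same title, fully independent session
server.advance("12A", 45)
```

The key design point is that state lives per seat, not per title, which is what allows many clients to watch the same stored content out of step with each other.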



#### **2.1.3 Information services**

The air map display provides passengers with up-to-date information about their journey: they are aware of the plane's location and which part of the earth it is passing over. Information on the outside temperature, speed, altitude, elapsed time, and remaining time gives passengers a sense of movement, because at high altitudes, where nothing can be seen except blue sky and the sun or moon, it is difficult to evaluate and sense the aircraft's motion. Missing this feeling can be boring for many passengers.

Exterior-view cameras also enable passengers to have the pilot's forward view on take-off and landing on their personal TV screens. The cameras can have different locations: a tail-mounted camera, located in a housing atop the vertical stabilizer of the aircraft, provides a wide-angle view looking forward and typically shows most of the aircraft from above; a belly-mounted camera provides a view looking vertically down, or down at an angle that includes the horizon; and a quad-cam belly installation offers a choice of four views covering 360 degrees.

Passengers can also pass their time navigating through the available entertainment contents to obtain information about their destination, including city maps, sightseeing, languages, and cultural information. Such information allows passengers to spend their time fruitfully and minimizes the feeling of being a stranger in a foreign country.
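The "remaining time" figure on an air map display reduces to a simple computation over two quantities the avionics already provide. The function below is a sketch with made-up figures, not a real avionics interface.

```python
# Illustrative computation behind an air-map display: given the remaining
# distance along the route and the current ground speed, derive the
# remaining flight time shown to the passenger.
def remaining_time_h(distance_km, ground_speed_kmh):
    """Remaining time in hours; ground speed must be positive."""
    if ground_speed_kmh <= 0:
        raise ValueError("ground speed must be positive")
    return distance_km / ground_speed_kmh

# e.g. 3400 km to go at a ground speed of 850 km/h -> 4.0 hours remaining
eta_hours = remaining_time_h(3400, 850)
```

A real display would smooth the ground speed over time, since instantaneous wind changes would otherwise make the estimate jump around.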


#### **2.1.4 E-business services**

*Airborne internet communication* allows passengers and crew members to use their own WiFi enabled devices, such as laptops, smart phones, and PDAs, to surf the Web, send and receive in-flight e-mail with attachments, exchange instant messages, and access their corporate VPN. Many companies offer solutions to provide passengers with Internet connectivity. FlyNet (FlyNet, Last visit 2011) is an example of an onboard communication service provided by Lufthansa that allows passengers to connect to the Internet during their flight. ROW44 (ROW44, 2011) provides a satellite-based connectivity system that allows airlines to offer uninterrupted broadband service.

*Mobile phones* are among the devices most demanded by passengers. Many passengers, especially businessmen, are willing to make calls through their personal mobile phones during the flight. However, there are doubts that cell phone signals may endanger aircraft safety by interfering with navigational systems. To overcome this situation, different techniques (e.g., (AeroMobile, Last visit 2011)) were introduced to the market, in which an on-board picocell connects the mobile phones to the ground stations through the satellite link and manages the signal strength to ensure that there is no interference with the navigational systems.

*On-board conferencing* can turn wasted flight time into productive time for traveling sales teams. It also reduces the effort passengers spend trading seats after boarding to bring their group together. With the addition of a headset with *Active Noise Cancellation*, the experience can be extended to conversing with someone in the next seat, thanks to the reduction of ambient noise.

*Personal Electronic Devices (PEDs)* such as laptop computers (including WiFi and Bluetooth enabled devices), PDAs (without mobile phone functions), personal music players (e.g., iPods), iPads, ebook readers, and electronic game devices can be used once the aircraft seat belt sign is extinguished after take-off, and must be turned off during landing. On the other hand, PEDs using radio transmission, such as walkie-talkies, two-way pagers, or global positioning systems, are prohibited at all stages of flight, as they may interfere with the aircraft communication and navigation systems.
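The PED rules above are essentially a decision table keyed by device category and flight phase. The sketch below encodes that table as stated in the text; the category names and phase labels are assumptions for illustration, not regulatory definitions.

```python
# Decision table for the PED policy described in the text: non-transmitting
# devices may be used in cruise (seat-belt sign off), while active radio
# transmitters stay prohibited in every phase of flight.
ALLOWED_PHASES = {
    "laptop": {"cruise"},
    "music_player": {"cruise"},
    "ebook_reader": {"cruise"},
    "game_device": {"cruise"},
    "walkie_talkie": set(),     # radio transmitter: never allowed
    "two_way_pager": set(),     # radio transmitter: never allowed
}

def ped_allowed(device, phase):
    """True if the device category may be used in the given flight phase."""
    return phase in ALLOWED_PHASES.get(device, set())
```

Unknown devices default to "not allowed", which is the safe choice for a cabin policy check.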

*Power outlets* are hard for passengers to reach while traveling to their destination. Spending too much time without a power source can cause PEDs to run out of power, leaving passengers frustrated. As a solution, some airlines (AmericanAirlines, 2011; Qantas, 2011) add power outlets to passenger seats, usually in first and business class. For safety reasons, some outlets are designed to provide 110 V (60 Hz) at 75 W; however, this may be unsuitable for PCs that consume more power. Other companies provide a 15 V cigarette lighter outlet, which needs an adapter to connect devices.
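The 75 W limit mentioned above implies a simple power-budget check before a device can be supported; the wattage figures in the example are placeholders, not measured values.

```python
# Budget check against the 75 W seat-outlet limit cited in the text.
OUTLET_LIMIT_W = 75

def outlet_can_power(device_w):
    """True if the device's power draw fits within the outlet limit."""
    return device_w <= OUTLET_LIMIT_W

# A 60 W ultrabook fits the budget; a 90 W workstation laptop does not.
```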

#### **2.1.5 E-commerce services**

In-flight shopping is drawing more attention from airlines, as it is considered a source of revenue and a way for passengers to utilize their flight time. (Liou, 2011) presented passenger attitudes towards in-flight shopping, mentioning that customer convenience increases when the shopping process takes less time, less effort in planning ahead, and less physical effort to obtain the product or service. Moreover, many factors can affect the decision making process (i.e., buying a product), including pre-purchase information searching and the evaluation of alternatives.

An IFE system can be a remarkable factor in in-flight shopping. It can increase passenger convenience and facilitate decision making. An electronic catalogue viewed through the IFE display unit can provide search options that allow passengers to find alternatives, make their own comparisons, and obtain exhaustive information about a product. In turn, this allows passengers to plan ahead without much physical effort, and in a relatively shorter time than going through a paper document or a discussion with a crew member. Furthermore, the IFE system can play an extraordinary role in e-commerce, not only for in-flight shopping but also for shopping outside the flight: the IFE system can be connected to ground commercial services, so that the passenger can buy products or services (e.g., transport tickets and duty free products) and receive them directly upon reaching his destination. In addition, multimedia advertising can attract companies to use it as a way to reach passengers.

Surveying is an important part of market evaluation. (Balcombe et al., 2009) conducted a survey to determine passengers' *Willingness To Pay (WTP)* for in-flight service and comfort levels. The survey focused on seat comfort, meal provision, bar service, ticket price, and entertainment (i.e., overhead screens for pre-set programs). They reported that older passengers are willing to pay more for seat comfort, while younger passengers are willing to pay more for bar and screen services.

However, performing such surveys is very tedious and difficult. (Aksoy et al., 2003) conducted a survey to evaluate airline service marketing by domestic and foreign firms from the customers' viewpoint; 1014 of the 1350 responses were usable, a 75.1% usable-response rate. An IFE system can be an effective tool to increase the response rate: an electronic version of the survey can encourage more passengers to participate, reduce erroneous answers, and make analyzing the results faster and more accurate.

#### **2.1.6 Health services**

An elevated type of service which IFE systems can provide is health services. Flight conditions can make the cabin environment tough, especially for persons prone to illness. Flight duration, dehydration, pressure, engine noise, and other factors can cause physical and/or psychological problems. A sensory system integrated into the IFE system can provide a way to sense the deteriorating condition of passengers with health problems, and either inform the crew members or perform an action to reduce the effect.

(Schumm et al., 2010) and (Westelaken et al., 2010) suggest solutions based on sensory systems embedded in the passenger's seat to sense his current status. (Schumm et al., 2010) introduce the design of a smart seat containing sensors to measure the *Electrocardiogram (ECG)*, *Electrodermal Activity (EDA)*, respiration, and skin temperature. These measured values can give a good indication of the physical and psychological state. The ECG is measured in two ways: without skin contact through sensors embedded in the backrest, and with a sensor fixed on the index finger; the second type is more obtrusive but more reliable. The same finger fixation also carries the EDA and temperature sensors. Passenger movements can affect the reading quality, so a 3-axis accelerometer is added to compensate for the errors. The respiration level is detected through sensors fixed in the seatbelt. The combined readings of these sensors can give a good indication of the passenger's health status.
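One way such sensor readings could be combined into a coarse status flag is sketched below. This is a hedged illustration of the idea, not the method of (Schumm et al., 2010): the thresholds, field names, and the rule of discarding samples under heavy movement (following the accelerometer's role described above) are all invented for the example.

```python
# Illustrative fusion of seat-sensor readings into one coarse status flag.
# Samples taken during heavy passenger movement are marked unreliable,
# mirroring the accelerometer-based quality compensation described above.
def health_flag(sample):
    if sample["motion_g"] > 0.5:            # too much movement: signal degraded
        return "unreliable"
    alerts = 0
    if not 50 <= sample["heart_rate_bpm"] <= 110:
        alerts += 1
    if not 8 <= sample["respiration_rpm"] <= 25:
        alerts += 1
    if sample["skin_temp_c"] > 38.0:        # fever-level skin temperature
        alerts += 1
    # Require two abnormal channels before alerting, to limit false alarms.
    return "alert" if alerts >= 2 else "ok"

status = health_flag({"motion_g": 0.1, "heart_rate_bpm": 130,
                      "respiration_rpm": 30, "skin_temp_c": 36.5})
```

Requiring agreement between channels is a common guard against single-sensor noise; a production system would instead use validated clinical thresholds.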

During long periods of sitting, the normal body mechanism for returning fluid to the heart can be inhibited, and gravity can cause the fluid to collect in the feet, resulting in swollen feet after a long flight.

From another side, modern technologies can be used to enhance the seat's entertainment and comfort roles. Thus, we propose two terms: *Passive Seat (PS)* and *Active Seat (AS)*. The *PS* provides its service through its own structural design, without any interaction with the passenger. The *AS* provides its service in response to an intentional or unintentional input captured from the passenger.

Fig. 2. Some aircraft seat design parameters from (Nadadur & Parkinson, 2009)

• *Passive Seat***:** (Nadadur & Parkinson, 2009) discussed different seat design problems. Airlines aim at increasing the seat density inside the cabin to increase their revenue. However, this approach diminishes the comfort factors in seat design: increasing the seat density reduces the seat pitch, causing a decrease in the passenger's leg room (see Figure 2), which is an important factor, especially for tall passengers. They also mentioned that passengers should minimize the pressure between their lower thighs and the surface of the seat to prevent the occurrence of *Deep Vein Thrombosis (DVT)*; this can be achieved by keeping the knees higher than the seat. A design contradiction arises here, because a lower seat height requires more leg room, causing the seat pitch to increase and consequently the seat density to decrease; on the other hand, increasing the seat height increases discomfort and the probability of DVT problems. To find a compromise between these contradictions, they proposed a mathematical solution that embeds passenger comfort as a design parameter and links it to the passenger's willingness to pay higher prices. (Vink, 2011) introduced other factors to be considered during seat design, such as wider seats, adjustable headrests, space under the armrest, backrest angle, and an ideal distribution of pressure over the body parts. A better pressure distribution can be achieved by using support under the front part of the legs to spread the load, and by ergonomic design of the seat back and seat pan. Also, a well designed headrest and neck rest can increase the feeling of comfort.

Sleeping and sitting posture is an important factor for passenger comfort, especially on long haul flights, and it affects the pressure distribution over the body. (Tan, Iaeng, Chen, Kimman & Rauterberg, 2009) analyzed passengers' postures in the economy class to help in seat design, identifying seven different sleeping positions. Considering the anthropometric differences between humans of different origins, it is difficult for a passive seat to achieve the comfortable positions of all postures for all passengers, so an active seat with adjustable moving parts is usually required.

Physical exercises can reduce physical stress and fatigue. However, the challenge is how to stimulate passengers to do them. (Westelaken et al., 2010) introduce a solution to reduce physical and psychological stress by detecting body movements and gestures and using them as input for interactive applications in the IFE system. The basic idea behind these applications is to let the passenger participate in a gaming activity, with his movements captured as inputs for the chosen game. Three techniques were introduced to capture movements: sensors integrated in the floor, sensors integrated in the seat, and video-based gesture recognition. However, each of these techniques has its own pros and cons, which need more investigation.

For passengers with special health needs, the IFE system can be an effective tool to relieve their discomfort. Passengers with *Spinal Cord Injury (SCI)* are not able to sense pressure acting on parts of their body that are cut off from the nervous system. This increases the risk of decubitus ulcers, especially on long flights, where passengers may sit for several hours. (Tan, Chen, Verbunt, Bartneck & Rauterberg, 2009) proposed an *Adaptive Posture Advisory System (APAS)* for people with SCI. The passenger's seat plays a central role by hosting various sensors and actuators. The sensors act as the input source for a central processor connected to a database that records the passenger's sitting behavior and conditions. A suitable decision is taken and sent to the actuators to change the seat's shape and softness. This system helps SCI passengers reposition their sitting posture to shift the points under pressure, so that the risk of decubitus ulcers is minimized.
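The sense-decide-actuate loop of such a system can be sketched as follows. This is a minimal illustration with invented threshold values and names; it is not the actual APAS implementation.

```python
# Minimal sketch of an APAS-style loop: pressure sensors feed a decision
# step that tells the seat actuators which regions to offload. The
# threshold and timing values below are invented for illustration.
PRESSURE_LIMIT = 35.0   # hypothetical local-pressure limit [kPa]
MAX_DURATION = 1800     # hypothetical time [s] before a reposition is advised

class PostureAdvisor:
    def __init__(self):
        self.over_limit_since = {}  # sensor id -> time the limit was first exceeded

    def update(self, now, readings):
        """readings maps sensor id -> pressure; returns regions to offload."""
        to_offload = []
        for sid, pressure in readings.items():
            if pressure > PRESSURE_LIMIT:
                start = self.over_limit_since.setdefault(sid, now)
                if now - start >= MAX_DURATION:
                    to_offload.append(sid)            # actuator reshapes/softens here
                    self.over_limit_since[sid] = now  # restart the timer
            else:
                self.over_limit_since.pop(sid, None)  # pressure relieved
        return to_offload
```

A real system would additionally log each decision to the database mentioned above and drive the seat actuators instead of returning a list.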

#### **2.2 IFE components**

The IFE components can be categorized into passenger and system components. In (Akl et al., 2011), we identified passenger components as the devices that the passenger uses directly to achieve a service, and system components as the components which are provided by the system and used indirectly by the passenger.

#### **2.2.1 Passenger components**

Passenger components are usually designed to be very simple and familiar in appearance and functionality in order to allow passengers of different backgrounds to use them, such as display units, remote controls, seat control buttons, headphones, etc...

#### 2.2.1.1 Passenger seat

From first sight, the passenger's seat may seem to be out of the scope of IFE systems, which are basically designed for entertainment. However, a deeper look shows the contrary, since the passenger's seat is one of the main comfort components, especially when we consider that it is the place where the passenger spends most of his travel time. On one side, a poorly designed seat can cause discomfort, which can extend into musculoskeletal disorders regardless of the presence of any entertainment or stress reduction techniques; imagine sitting in such a seat for two or three hours, and you will think of nothing except when the flight will end. Furthermore, when the passenger sits upright and inactive for a long period of time, he may be exposed to several health hazards. The central blood vessels in his legs can be compressed, making it harder for the blood to get back to his heart. Muscles can become tense, resulting in backaches and a feeling of excessive fatigue during, and even after, the flight. The normal body mechanism for returning fluid to the heart can be inhibited, and gravity can cause the fluid to collect in the feet, resulting in swollen feet after a long flight.

From another side, modern technologies can be used to elevate the seat's entertainment and comfort role. Thus, we propose two terms: *Passive Seat (PS)* and *Active Seat (AS)*. The *PS* provides its service through its own structural design, without any interaction with the passenger. The *AS* provides its service in response to an intentional or unintentional input captured from the passenger.

• *Passive Seat***:** (Nadadur & Parkinson, 2009) discussed different seat design problems. Airlines aim at increasing seat density inside the cabin to increase their revenue. However, such an approach diminishes the comfort factors in seat design. Increasing seat density reduces the seat pitch, causing a decrease in the passenger's leg room (see Figure 2), which is considered an important factor, especially for tall passengers. They also mentioned that passengers should minimize the pressure between their lower thighs and the surface of the seat to prevent the occurrence of *Deep Vein Thrombosis (DVT)*. This can be achieved by keeping the knees higher than the seat. A design contradiction arises here because a lower seat height requires more leg room, causing the seat pitch to increase and, consequently, seat density to decrease. On the other hand, increasing the seat height increases discomfort and the probability of DVT problems. To find a compromise between these contradictions, they proposed a mathematical solution that embeds passenger comfort as a design parameter and links it with the passenger's willingness to pay higher prices. (Vink, 2011) introduced other factors to be considered during seat design, such as wider seats, adjustable headrests, space under the armrest, backrest angle, and an ideal distribution of pressure over body parts. A better pressure distribution can be achieved by using support under the front part of the legs to spread the load, and by an ergonomic design of the seat back and seat pan. A well designed headrest and neck rest can also increase the feeling of comfort.

Fig. 2. Some aircraft seat design parameters from (Nadadur & Parkinson, 2009)
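The pitch-versus-density contradiction can be made concrete with a toy revenue model. The numbers and the square-root fare curve below are invented for illustration; this is not the actual formulation of (Nadadur & Parkinson, 2009).

```python
import math

def revenue_per_metre(pitch_m, base_fare=100.0, min_pitch_m=0.71):
    """Toy model: more pitch means fewer seats per metre of cabin, but
    passengers pay more for legroom (with diminishing returns)."""
    extra_legroom_cm = (pitch_m - min_pitch_m) * 100.0
    fare = base_fare + 20.0 * math.sqrt(extra_legroom_cm)  # concave willingness to pay
    return fare / pitch_m  # seats per metre of cabin = 1 / pitch

# Search the feasible pitch range; with these numbers the optimum is
# interior, not at either bound, which is the compromise the text describes.
pitches = [p / 100.0 for p in range(71, 101)]
best = max(pitches, key=revenue_per_metre)
```

With a linear fare curve the optimum collapses to one of the bounds; the concave curve is what produces a genuine trade-off.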

Sleeping and sitting posture is an important factor for passenger comfort, especially on long-haul flights, and it can affect the pressure distribution over the body. (Tan, Iaeng, Chen, Kimman & Rauterberg, 2009) carried out an analysis of passengers' postures in the economy class to help in seat design. In their study, they identified seven different sleeping positions for passengers. Considering the anthropometric differences between humans of different origins, it is difficult for a passive seat to achieve comfortable positions across the different postures of all passengers, so an active seat with adjustable moving parts is usually required.


• *Active Seat***:** A *Passive Seat* provides services with static features. On the contrary, an *Active Seat* is able to take an input from the passenger and change the service it provides. The input can be an activity that changes the angle or position of adjustable parts of the seat; for example, the passenger can freely set the backrest angle or adjust the height of the headrest to match his posture. In business class, the seat can accommodate a variety of postures for different activities such as watching TV, reading, sleeping, etc... Figure 3 shows a simple mechanical button (in economy class) for changing the backrest angle vs. electronic buttons (in business class) that can easily change the orientation of different parts of the seat using embedded motors.

Fig. 3. Electronic Vs Mechanical seat adjustment

In economy class, the degrees of freedom of an *Active Seat* are very limited: only a few parts are allowed to change their orientation, due to the limited space. For example, the armrest can be moved from the horizontal to the vertical position to give more space, and the backrest angle can be changed to increase the body inclination and reduce the pressure exerted on the back. However, the inclination angle is usually very small in order not to reduce the leg space of the seat behind. On the contrary, the business class seat features large spaces; thus, different parts can be reoriented easily. A premium seat may sit in a pod and be capable of opening out into a flat sleeping configuration or folding up into a seat for take-off and landing. Moreover, it includes more amenities, such as power and task lighting, and also shows a design trend towards a higher level of privacy.

#### 2.2.1.2 Visual display units

A *Visual Display Unit (VDU)* is the principal component in the entertainment process. It is the main interface between passengers and the IFE system, as well as the key to its ability to provide interactive services. There are different types of VDUs. At the very beginning, *Cathode Ray Tube (CRT)* displays were used. Although they were able to provide the required service at that time, they suffered from many drawbacks: they were relatively large in size and heavy in weight, so they were used as shared displays between sets of seats, and the ambient lighting could affect the clarity of their images. As technology advanced, *Liquid Crystal Display (LCD)* units were introduced. They are small in size and light in weight. These characteristics helped greatly in introducing the *Video on Demand (VoD)* service, where each passenger has his own display unit to watch his selected items. At the same time, LCDs can still be used as shared displays. Nowadays, displays are equipped with an extra feature that allows them to be used as input devices: touch screens allow users to make their own selections by touching the screen in the appropriate location.

Although a normal VDU is usually sufficient to display the required contents, certain services may have special needs. Table 1 shows the characteristics required to display different media services. With respect to display quality, video games do not need high resolution for their images, since small moving objects are the main constituents of video games. On the contrary, movies and virtual reality applications need high resolution to present their high quality images. The interactive nature of video games and virtual reality applications requires special input devices, since touch screens are usually suitable for simple selections and not for quick repetitive pressing.


| Service | Realistic | Interactive | Immersive | Detailed Character |
|---|---|---|---|---|
| Video games | No | Yes | No | Yes |
| Movies | Yes | No | No | Yes |
| Virtual reality | Yes | Yes | Yes | Yes |

Table 1. Various display requirements

The VDU location depends on the philosophy of the installed IFE system. If the system presents pre-selected media without any intervention from the user, then a global VDU is installed in the cabin ceiling (see Figure 4(d)). If VoD services are presented, with user interaction to select his own media contents, then each passenger seat is provided with a private VDU fixed in the back of the front seat (see Figure 4(a)). Furthermore, seats in special locations, such as seats in the first row or in business class, may have special VDU placements (see Figures 4(b) & 4(c)).

The VDU viewing angle is an important satisfaction factor. The viewing angle of a VDU fixed at the back of the front seat may change when the front passenger changes the position of his seat back, so VDUs are usually fixed on a pivot to allow the user to change their inclination; otherwise, the user has to move his head to a fixed position to be able to view the VDU. Another solution is to fix the VDU on a movable axis to give it different degrees of freedom (see Figure 4(b)).

(a) Private VDU (b) Movable VDU (c) First seat VDU (d) Ceiling VDU

Fig. 4. Different VDU placements

#### 2.2.1.3 Remote control

As IFE systems become more and more interactive, a *Remote Control Device (RCD)* is needed to control the surrounding devices. It should be compact and easy to hold. Moreover, the pocket holding the RCD has to be placed in a way that makes it easily reached without affecting passenger comfort. At the beginning, the RCD used to be fixed beside the VDU at the back of the front seat. This arrangement introduced a problem when the passenger sitting beside the window wanted to move to the corridor: all his neighbors had to move their RCDs out of the way to allow him to pass. To overcome this problem, RCDs are now connected to their VDUs through wires passing via their own seat. Using wireless technology can minimize such physical complexity (Akl et al., 2011).


Furthermore, passengers with no knowledge of modern technology must be able to use RCDs easily. The usual control buttons (i.e., Volume, Rewind, Forward, etc...) are known to almost everyone; special-purpose controls such as *Settings* and *Mode* should be handled carefully and, if used, be accompanied by explanatory information when possible.

#### 2.2.1.4 Noise canceling headphones

Headphones are used to privatize audio contents, so that each passenger can listen to his own selection without annoying his neighbors or being affected by the surrounding noise. Ordinary headphones are usually enough to do the job. However, modern technology can elevate the service level by introducing active headphones capable of reducing the effect of surrounding noise (see Figure 5).

Generally, headphone ear cups have a passive absorption capability which allows them to block some high frequency noise. However, they are not efficient at attenuating low frequency noise. *Noise Canceling Headphones (NCH)* can reduce this noise through active noise cancellation techniques (Chang & Li, 2011).

(a) Passive headphones (b) Active headphones

Fig. 5. Headphones
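The adaptive part of such active cancellation can be illustrated with a basic least-mean-squares (LMS) filter: a reference microphone picks up the cabin noise, the filter learns the path to the ear cup, and the predicted noise is subtracted. This is a generic textbook sketch on synthetic data, not the specific algorithm of (Chang & Li, 2011).

```python
import math

def lms_cancel(reference, primary, taps=8, mu=0.01):
    """LMS adaptive filter: estimate the noise heard at the ear from the
    reference signal and subtract it; the residual is what remains audible."""
    w = [0.0] * taps
    residual = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        estimate = sum(wk * xk for wk, xk in zip(w, x))
        e = primary[n] - estimate                        # error = residual noise
        w = [wk + 2.0 * mu * e * xk for wk, xk in zip(w, x)]
        residual.append(e)
    return residual

# Synthetic low-frequency cabin tone; the "ear" hears it one sample later.
noise = [math.sin(2 * math.pi * 0.01 * n) for n in range(4000)]
at_ear = [0.0] + noise[:-1]
residual = lms_cancel(noise, at_ear)
```

After the filter converges, the residual carries far less energy than the uncancelled tone; low frequencies, which passive ear cups handle poorly, are exactly where this scheme helps.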

2.2.1.5 Personal Electronic Device (PED)

Nowadays, people are becoming more and more attached to their *Personal Electronic Devices (PEDs)*, such as laptops, mobile phones, and PDAs, so most passengers travel with their PEDs. Connecting PEDs does not require special interfaces, since modern IFE systems are moving towards wireless communication such as WiFi, Bluetooth, and IrDA, which are already used in most PEDs.

Using PEDs can have several advantages for both airlines and passengers. Passengers are able to use their own devices to interact with the IFE system, so they do not need to learn unfamiliar devices. Also, they can utilize their own data if the system permits. Furthermore, if the IFE contents can be copied, the passenger can continue them at his hotel.

From the airlines' perspective, PEDs can replace some dedicated IFE devices. It is cheaper for airlines to remove expensive seatback monitors and let passengers use their own devices; this is a good option for airlines offering cheap flights. Many companies (Lufthansa, 2011; Thales, 2011) are now offering broadband communication for PEDs.

#### **2.2.2 System components**


System components are usually complex, in order to handle the services while keeping the simplicity of the passenger components. Furthermore, the cabin environment is strict in terms of safety and imposes constraints. These characteristics encouraged the solution of using multiple technologies to form a heterogeneous system, where each technology provides a solution for a part of the problem.

A context-aware IFE system can increase the passenger satisfaction level. If there are many choices and the interaction design is poor, the passenger tends to get disoriented and is not able to find the most appealing contents. This is because most IFE systems are user-adaptive systems, where the user initiates system adaptation to get his personalized contents. (Liu & Rauterberg, 2007) showed the main architectural components needed to make a context-aware IFE system, which can provide the passenger with entertainment contents based on his personal demographic information, activity, and physical and psychological states, for example if the passenger is in stress. Furthermore, the passenger is able to decline the proposed contents and create his personalized contents.
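As an illustration of context-initiated (rather than user-initiated) adaptation, the sketch below selects contents from passenger context with simple rules and honors a declined-items list. The field names and rules are assumptions made for this example; the architecture in (Liu & Rauterberg, 2007) is considerably richer.

```python
# Hypothetical rule-based selector: context (state, demographics) drives the
# first proposal, and the passenger can decline and fall back to browsing.
CATALOG = {
    "calming_music": "relaxing",
    "documentary": "informative",
    "action_movie": "exciting",
}

def recommend(profile):
    if profile.get("stressed"):        # psychological state takes priority
        wanted = "relaxing"
    elif profile.get("age", 0) >= 60:  # crude demographic rule, for illustration
        wanted = "informative"
    else:
        wanted = "exciting"
    for name, tag in CATALOG.items():
        if tag == wanted and name not in profile.get("declined", set()):
            return name
    return None                        # nothing left: let the user browse manually
```

Returning `None` when all matching items were declined reflects the paper's point that the passenger must always be able to override the system's proposals.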

For IFE networking, wireless technology can provide different solutions to many existing problems, as well as providing new services. Nowadays, wired networks are the principal technology for implementing IFE systems. Ethernet is currently the standard for wired communication in different fields. (Thompson, 2004) showed that it is characterized by interesting features such as good communication performance, scalability, high availability, and resistance to external noise. Using off-the-shelf technologies such as routers can reduce the costs of networking inside the cabin. In spite of all these advantages, IFE system designers are willing to exchange it -or part of it- for wireless technology to achieve further goals. Ethernet cabling is considered a burden for aircraft design, because lighter aircraft consume less fuel, and it imposes difficulties on the ease of reconfiguration and maintenance of the cabin (Akl et al., 2011). Accordingly, using different technologies within the same communication network can provide a solution to the limitations of using each of them individually.

#### 2.2.2.1 WiFi and Bluetooth

WiFi is a well-known technology used in commercial, industrial, and home devices. It can easily coexist with other technologies to form a heterogeneous network (Niebla, 2003). Moreover, (Lansford et al., 2001) stated that WiFi and Bluetooth are complementary rather than competing technologies; they can cooperate to provide users with different connectivity services.

However, using a large number of wireless devices in a very narrow metallic tunnel like the cabin has a dramatic effect on network performance. Furthermore, a major concern for using wireless devices in an aircraft cabin is their interference with the aircraft communication and navigation systems, especially unintended interference from passengers' *Personal Electronic Devices (PED)*. (Holzbock et al., 2004) noted that the navigation and communication systems installed on the aircraft are designed to be sensitive to electromagnetic signals, so they can be protected against passengers' emitters by means of frequency separation. In addition, (Jahn & Holzbock, 2003) mentioned that there are two types of PED interference, intentional and spurious: the former consists of emissions used to transmit data over the PED's allocated frequency band, while the latter consists of emissions due to the RF noise level. However, indoor channel models mainly target office or home environments, so they may not be appropriate for modeling an aircraft cabin channel. Wall attenuation and multipath effects in a normal indoor environment are not expected to be comparable to the effect of the higher obstacle density in a metallic tunnel. The elongated structure of a cabin causes smaller losses than those expected in other room shapes. However, the power addition of local signal paths can lead to fading of the signal at particular points, and small movements of the receiver can have a substantial effect on reception. The same opinion was emphasized by (Diaz & Esquitino, 2004).

Different efforts have been made to address this problem. (Youssef et al., 2004) used the commercial software package *Wireless Insite* to model the electromagnetic propagation of different wireless access points inside different types of aircraft. (Moraitis et al., 2009) held a measurement campaign inside a Boeing 737-400 aircraft to obtain propagation models for three frequencies, 1.8, 2.1, and 2.45 GHz, which represent the GSM, UMTS, and WLAN/Bluetooth technologies, respectively. Nowadays, many airline companies allow WiFi devices on their aircraft, such as Lufthansa (FlyNet, Last visit 2011) and Delta Airlines (DeltaAirline, Last visit 2011).
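To make the propagation discussion concrete, the following is a minimal log-distance path-loss sketch. It is only an illustration: the path-loss exponent `n` and reference distance are assumed values chosen to reflect the waveguide-like, elongated cabin described above, not the fitted parameters reported by (Moraitis et al., 2009).

```python
import math

def path_loss_db(d_m, f_mhz, n=1.9, d0=1.0):
    """Log-distance path loss in dB: free-space loss at the reference
    distance d0 (metres), plus a distance-dependent term with path-loss
    exponent n. An exponent below 2 reflects the guiding effect of an
    elongated metallic cabin (assumed value, for illustration only)."""
    # Friis free-space loss at d0, with d in km and f in MHz:
    pl_d0 = 20 * math.log10(d0 / 1000) + 20 * math.log10(f_mhz) + 32.44
    return pl_d0 + 10 * n * math.log10(d_m / d0)

# Compare the three measured bands at a 10 m aisle distance
for f in (1800, 2100, 2450):  # MHz: GSM, UMTS, WLAN/Bluetooth
    print(f"{f} MHz: {path_loss_db(10.0, f):.1f} dB")
```

Fitting `n` (and a lognormal shadowing term) to in-cabin measurements is precisely what a campaign such as the one above produces.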

#### 2.2.2.2 Wireless Universal Serial Bus (WUSB)

*Universal Serial Bus (USB)* technology allows different peripherals to be connected to the same PC more easily and efficiently than older technologies such as serial and parallel ports. However, cables are still needed to connect the devices. This raised the idea of *Wireless USB (WUSB)*, where devices obtain the same connectivity through a wireless technology. (Leavitt, 2007) stated that although it is difficult to match the performance of wired USB, the rapid improvements in radio communication can make WUSB a competent rival. It is based on *Ultra Wide Band (UWB)* technology; in Europe, it supports a frequency range from 3.1 to 4.8 GHz. Moreover, (Udar et al., 2007) mentioned that UWB communication is suitable for short-range communications, which can be extended by the use of mesh networks. Although WUSB was designed to satisfy client needs, it can also be used in a data centre environment, and they discussed how WUSB characteristics match such an environment. This application can be of great help for IFE systems, which demand massive data communication to support multimedia services while minimizing connection cables. Moreover, (Sohn et al., 2008) discussed the design issues related to WUSB: it can nominally support up to 480 Mbps, but in the real world it does not reach the promised values, and they showed the effect of design parameters on device performance.

#### 2.2.2.3 PowerLine Communication (PLC)

A PLC network can be used to convey data signals over cables dedicated to carrying electrical power, where PLC modems convert data between the digital signal level and the high power level. Using an existing wiring infrastructure can dramatically reduce the cost and effort of setting up a communication network. Moreover, it can decrease the time needed for reconfiguring the cabin layout, since fewer cables have to be relocated. However, the technology suffers from several problems. A power line cable works as an antenna that can produce *Electromagnetic Emissions (EME)*. Thus, a PLC device must be *Electromagnetically Compatible (EMC)* with the surrounding environment: it must not produce intolerable EME, and it must not be susceptible to them. To meet this requirement, the transmission power must be kept low so as not to disturb other communicating devices (Hrasnica et al., 2004). However, working with a limited-power signal makes the system sensitive to external noise. In spite of this, PLC devices can work without concerns about external interference for two reasons. Firstly, the PLC network is divided into segments, which minimizes signal attenuation. Secondly, all cabin devices are designed according to strict rules that prevent EME high enough to interfere with surrounding devices. (Akl et al., 2010) presented a PLC network dedicated to IFE systems to replace part of the wired communication network, using two PLC devices: the *Power Line Head Box (PLHB)* and the *Power Line Box (PLB)*. The PLHB connects the two terminals of the power line to connect data servers with seats. Each PLHB services a group of seats, which are equipped with one PLB per seat (see Figure 6).

Fig. 6. Heterogeneous network architecture

#### 2.2.2.4 GSM


For several years the aircraft industry has been looking for a technology to provide an onboard phone service at a reasonable cost (see Figure 7). Nevertheless, some technical hitches make successful calls via the terrestrial *Global System for Mobile Communications (GSM)* network impossible. Mobiles are unable to make reliable contact with ground-based base stations, so they would transmit at maximum RF power, and these RF fields could potentially interfere with the aircraft communications systems. Furthermore, the high speed of the aircraft causes frequent handover from cell to cell, and in extreme cases could even degrade terrestrial services because of the large amount of control signaling required to manage these handovers. In order to avoid these problems and allow airline passengers to use their own mobile terminals during certain stages of flight, a novel approach called *GSM On-Board (GSMOB)* is used. The GSMOB system consists of a low-power base station carried on board the aircraft itself, and an associated unit emitting radio noise in the GSM band that raises the noise floor above the signal level originating from ground base stations. Thus, mobiles activated at cruising altitude do not see any terrestrial network signal, only the aircraft-originated cell. This way, the required power level is low, which reduces the interference with aircraft systems.

AeroMobile (AeroMobile, Last visit 2011) is a GSM service provider for the aviation industry that allows passengers to use their mobile phones and devices safely during the flight. Passengers connect to an AeroMobile pico cell located inside the aircraft, which relays text messages and calls to a satellite link that forwards them to the ground network. The AeroMobile system manages all the cellular devices onboard, and has been adopted by Panasonic as part of its in-flight cellular phone component.

#### 2.2.2.5 Satellite communication

In-cabin communication can be extended by connecting to terrestrial networks through satellite links (see Figure 7). Satellite channels allow passengers to use their mobile phones, send emails, access the Internet, and enjoy online entertainment services. However, the satellite link is the connection bottleneck, so the traffic flow in and out of the cabin must be analyzed (Niebla, 2003). (Radzik et al., 2008) performed a satellite system performance assessment for IFE and *Air Traffic Control (ATC)*, where the satellite link is shared between IFE and ATC streams. (Holzbock et al., 2004) presented, in detail, two systems that connect in-cabin communication to such networks: the ABATE system (1996-1998) and the WirelessCabin system (2002-2004). Another recent project is the E-CAB project (ECAB, Last visit 2011), which was held by Airbus.

Fig. 7. Satellite link from (Niebla, 2003)

#### **3. Design and evaluation of modern IFE systems**

To design an IFE system, different types of requirements need to be defined and constraints must be considered. It is not just a matter of adding some entertainment devices: the system will be located in a very strict environment, and it will have an impact on passengers, airlines, and aircraft design. Therefore, formal modeling of IFE systems is a paramount need, which can be achieved through the *System Modeling Language (SysML)*. SysML is a modeling language for representing systems and product architectures, as well as their behavior and functionality. It is an important tool for understanding a system and preventing complex failure modes that lead to costly product recalls. Furthermore, it uses a generic language, not specific to any engineering discipline, able to present the incremental details of system modeling. Modeling starts by gathering the required functionality and proceeds until the complex system model is reached. This is achieved by presenting its sub-system structures and showing how they interact with each other as well as with external systems. However, we have to stress the fact that there is no optimum model for any system; we can only have a good model, one that fulfills all of the system's functional and non-functional requirements.

#### **3.1 Proposed IFE model**


In this section, we propose a SysML model that takes us through a step-by-step design process, as a systematic approach to help designers handle such a complex system. The model shows the system components, the involved actors, and their interactions with the system. We believe the model can give designers an idea of how to adapt their own IFE systems to achieve the expected services.

A real IFE system is large, and its model cannot be fully presented in a book chapter, so we will consider a small case study and stress the basic steps and techniques that should be considered during the design process.

Our case study is based on the work done by (Loureiro & Anzaloni, 2011) and on our previous work in (Akl et al., 2010), to model the part related to the VoD service and the PLC network. (Loureiro & Anzaloni, 2011) introduced a peer-to-peer networking approach for using VoD in IFE systems and proposed two solutions to the problem of content searching in such a network. We chose their work because it is recent research that presents two techniques to distribute video content over a peer-to-peer network rather than using a traditional client-server architecture. The peer-to-peer approach allows passenger IFE units to monitor, store, and serve media contents to each other. This is achieved by having a *Distribution Table (DT)* containing the video file information (i.e., file ID and IP of the storing peer); the work focuses on how to build and update the DT. In (Akl et al., 2010), we proposed using a PLC network to replace the traditional LAN (see Figure 6). The PLC system consists of a *Power Line Head Box (PLHB)* and a *Power Line Box (PLB)*, where the PLHB connects the two terminals of the power line. Each PLHB services a group of seats, which are equipped with one PLB per seat. The PLB is responsible for distributing the signal received by the PLHB to the devices attached to the seat. Each PLHB can support up to 20 PLBs at a rate of 3480 bit/sec each. We will use the model to verify whether the technique proposed by (Loureiro & Anzaloni, 2011) can be supported by the PLC network or not.
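The DT idea above can be sketched in a few lines: each entry maps a video file ID to the IP address of the peer storing it, so a node checks its local storage first and falls back to the table. This is a hypothetical illustration only; the IP addresses and file IDs are invented, and the DT build/update protocol of (Loureiro & Anzaloni, 2011) is not reproduced. The capacity figures at the end are the ones quoted in the text.

```python
# Hypothetical node-side view of the Distribution Table (DT):
# file ID -> IP of the storing peer.

LOCAL_IP = "10.0.0.7"             # this node's address (assumed)
local_storage = {"movie-42"}      # file IDs held locally

dt = {"movie-42": LOCAL_IP, "movie-17": "10.0.0.3"}

def locate(file_id):
    """Return where to fetch a file from: locally, a peer, or nowhere."""
    if file_id in local_storage:
        return ("local", LOCAL_IP)
    peer = dt.get(file_id)
    return ("peer", peer) if peer else ("missing", None)

# Aggregate capacity check against the figures quoted in the text:
PLBS_PER_PLHB = 20
RATE_PER_PLB_BPS = 3480
aggregate_bps = PLBS_PER_PLHB * RATE_PER_PLB_BPS  # 69,600 bit/s per PLHB

print(locate("movie-17"))
print(aggregate_bps)
```

Whether such a per-seat rate suffices for a given video codec is exactly the question the SysML model is later used to verify.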

#### **3.1.1 Use Case diagram**

A *Use Case* describes system functionality in terms of how its users (i.e., passengers, crew) use the system to achieve the needed targets. It represents a high level of abstraction for modeling IFE requirements and interaction with users. Consequently, it typically covers scenarios through which stakeholders (i.e., actors) can use the IFE system. Hull et al. (2011) stated that *"A stake holder is an individual, group of people, organization or other entity that has a direct or indirect interest (or stake) in a system"*.

In a Use Case, the system boundary is identified by a square box to decide what belongs to the system and what does not. For example, a GPS device that provides the IFE system with data used in a map display is considered part of an external system (i.e., the navigational system).

Figure 8 presents a *Use Case diagram* for our proposed IFE model. There are seven actors: passengers, crew members, a navigational system, a cabin environment, maintenance personnel, the airline company, and avionic regulations. The IFE system is enclosed inside the box representing the system boundary. The oval shapes show the interactions of each actor with the system. These interactions are related together through different relations (i.e., extend, constraint). An *Extend* relationship identifies an *extending use case*, a fragment of functionality that is not considered part of the normal base use case functionality. A *Constraint* relationship shows constraints imposed on the system.

Fig. 8. IFE system Use Case diagram

The base Use Case is *"Uses IFE system"*, which is directly utilized by passengers. It represents the utilization of the IFE components (see section 2.2). Its functionality can be extended when a crew member gives an announcement (see section 2.1.1), the navigational system provides data, or maintenance personnel perform a maintenance action. Constraints comprise the difficulties imposed by the cabin environment and the standards provided by avionic regulations (i.e., ARINC standard 808, RTCA DO-160E). The next step is to model the requirements needed by stakeholders.

#### **3.1.2 Requirements model**

We present a part of the basic requirements related to the entertainment service that can exist in any IFE systems. These requirements are categorized as functional and non-functional requirements. This step helps designers to highlight the basic features of their system.

Defining system requirements seems easy, but in fact it is not. The process is divided into several steps. First, the stakeholders are defined. Second, a requirement gathering process is started, in which requirements are collected from the stakeholders. Finally, the requirements are organized according to well-defined rules that guarantee certain requirement characteristics essential for requirement analysis. For more information about requirement engineering, we refer readers to (Hull et al., 2011; Young, 2004).


We will assume that the first and second steps are already done, so our IFE system requirements have been gathered from the stakeholders, and we will classify them as functional and non-functional requirements.

Fig. 9. Requirement diagram of entertainment specifications

Fig. 10. Block diagram of node structure and satisfied requirements

Figure 10 shows the main blocks of each node and their relation to the requirements depicted in figure 9. Each node consists of two main blocks: the *Multimedia Management System* and the *Display System*. The former manages the multimedia contents, while the latter is responsible for displaying multimedia contents and receiving passenger selections. The figure does not show the requirements satisfied by the *Display System* block because we are only interested in the requirements of the entertainment service shown in figure 9.

Figure 11 shows the node composition and its relation with other components (i.e., the *Networking System* block). Operations are listed in the *Operations* compartment of each block; for readability, we only show the operations of the *Multimedia Management System* block. The *Networking System* is responsible for handling communication between nodes. This is done through the PLHB component, which connects different groups of PLBs. The *Multimedia Management System* block consists of three managers: the *Content Search Manager*, the *Local Storage Manager*, and the *Content Selection Manager*. The *Local Storage Manager* handles the local multimedia contents and defines their location inside the storage device. The *Content Selection Manager* receives the selection request from the *Display System* block and sends back the media content after receiving it from the *Content Search Manager*. The *Content Search Manager* searches for the requested item in the way described by (Loureiro & Anzaloni, 2011) (the behavior of this technique is modeled in the next section). If the content is not stored locally, a search request is sent to neighboring nodes by communicating through the PLB component.

There are two types of ports: *Flow ports*, which specify what can flow in and out of blocks (i.e., data or physical items), and *Standard ports*, which specify the types of services that a block either requires or provides.

A *Parametric diagram* uses constraint blocks that allow various system constraints to be defined and used. These constraints represent rules that can constrain system properties, or define rules that the system must conform to. A constraint block consists of a constraint name and a constraint formula. All variables or constants defined in the formula are linked to the block through an *Attribute* box or through an input from another block.

#### 3.1.2.1 Functional requirements

Functional requirements describe what the system is supposed to do by defining its behavior (i.e., functions and services). For an IFE system, this includes the different services provided to passengers and airline companies (see section 2.1). For each service, there is a dedicated requirement diagram. A group of related requirements is called a specification.

Figure 9 presents the specifications of the entertainment service. Each block represents a requirement, showing its name, ID number, and text explaining the purpose of the requirement. The *Derive* relationship shows the sub-requirements needed to fulfill the parent requirement. For example, our entertainment service will include VoD, Gaming, and E-documents services. The VoD service will be fulfilled through a *Multimedia Library* to store the VoD contents, and an *Access on Demand* capability. A system component, named the *Multimedia Management System*, is responsible for satisfying these requirements (represented by the *Satisfy* relationship). If necessary, the last level of requirements can be decomposed into finer levels of derived requirements to show more details of the system. The *Distribution Table* technique (Loureiro & Anzaloni, 2011) will be used to satisfy part of the requirements of the VoD service.

#### 3.1.2.2 Non-Functional requirements

Non-functional requirements describe constraints and qualities. *Qualities* are properties or characteristics of the system that affect the user's degree of satisfaction. This includes maintainability, reliability, security, and safety issues. Designers usually focus on system functionality and may postpone the non-functional requirements until late in the design process. Failing to achieve non-functional requirements may lead to a functional system with an undesirable level of satisfaction.

Figure 9 shows QoS parameters as the non-functional requirements needed for the VoD service. (Loureiro & Anzaloni, 2011) identified two main parameters *ρ* and *θ* to define the required transmission. *ρ* and *θ* represent the amount of information (bytes) that needs to be transmitted across the application layer of the network during system startup, and system normal operation, respectively. They are presented in our model as constraints (explained further in section 3.1.3). *ρ* and *θ* are defined as:

$$
\rho = nF(c\_6 + c\_7L) \tag{1}
$$

$$
\theta = c\_5 n \tag{2}
$$

where *n* is the total number of peers, *F* is the number of messages sent between two nodes. *L* is the number of video files stored in the node's local storage, and *c*5, *c*6, and *c*<sup>7</sup> are constants. The next step is to model the system components that satisfy these requirements.
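Equations (1) and (2) can be sketched as simple functions. The chapter gives no numbers for *c*5–*c*7, *F*, or *L*, so the values below are placeholders chosen only to illustrate the shape of the formulas:

```python
# Sketch of the startup/normal-operation traffic estimates of Eqs. (1) and (2).
# Constants c5, c6, c7 and the values for F and L are hypothetical placeholders.

def startup_bytes(n, F, L, c6, c7):
    """rho = n * F * (c6 + c7 * L): bytes exchanged while constructing the DT."""
    return n * F * (c6 + c7 * L)

def update_bytes(n, c5):
    """theta = c5 * n: bytes sent when one peer advertises one local change."""
    return c5 * n

# Example: 200 peers, 3 messages per node pair, 40 local files, c5 = c6 = c7 = 1.
print(startup_bytes(n=200, F=3, L=40, c6=1, c7=1))  # 24600
print(update_bytes(n=200, c5=1))                    # 200
```

Note that *ρ* grows with both the number of peers and the size of each node's library, while *θ* grows only with the number of peers, which matches the startup-versus-steady-state roles the two parameters play.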

#### **3.1.3 Structural model**

*Block Definition Diagram* realizes the structural aspects of the model. It shows which components exist in the system, and the relation between them. It is formalized and reconciled with both behavior model and requirements. Blocks are used to present components; they are connected through relations, and ports to describe the points at which a block interacts with another block.


Fig. 10. Block diagram of node structure and satisfied requirements

There are two types of ports: *Flow ports*, which specify what can flow in and out of blocks (i.e., data or physical items), and *Standard ports*, which specify the types of services that a block either requires or provides.

Figure 10 shows the main blocks of each node and their relation to the requirements depicted in figure 9. Each node consists of two main blocks: the *Multimedia Management System* and the *Display System*. The former manages the multimedia contents, while the latter is responsible for displaying multimedia contents and receiving passenger selections. The figure does not show the requirements satisfied by the *Display System* block because we are only interested in the requirements of the entertainment service shown in figure 9.

Figure 11 shows the node composition and its relation with other components (i.e., the *Networking System* block). Operations are listed in the *Operations* compartment of each block; however, for readability reasons, we only show the operations of the *Multimedia Management System* block. The *Networking System* is responsible for handling communication between nodes. This is done through the PLHB component that connects different groups of PLBs. The *Multimedia Management System* block consists of three managers: the *Content Search Manager*, the *Local Storage Manager*, and the *Content Selection Manager*. The *Local Storage Manager* handles the local multimedia contents and defines their location inside the storage device. The *Content Selection Manager* receives the selection request from the *Display System* block and sends back the media content after it is received from the *Content Search Manager*. The *Content Search Manager* searches for the requested item in the way described by (Loureiro & Anzaloni, 2011) (the behavior of this technique is modeled in the next section). If the content is not stored locally, it will be retrieved from neighboring nodes by communicating through the PLB component.
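The lookup performed by the *Content Search Manager* can be illustrated as a minimal sketch. This is not the authors' implementation; the file identifiers and IP addresses are invented for the example, and the Distribution Table is modeled as a plain dictionary mapping a file identifier to the peer storing it:

```python
# Illustrative sketch of a Distribution-Table lookup (hypothetical ids/addresses).

local_storage = {"movie-042": b"...video bytes..."}   # files stored on this node
distribution_table = {"movie-007": "10.0.3.17",       # file id -> peer IP
                      "movie-042": "10.0.3.5"}

def locate(file_id, own_ip="10.0.3.5"):
    """Return where the requested content should be fetched from."""
    if file_id in local_storage:
        return ("local", own_ip)
    peer = distribution_table.get(file_id)
    if peer is not None:
        return ("peer", peer)   # fetched over the PLC network via the PLB
    return ("missing", None)

print(locate("movie-042"))  # ('local', '10.0.3.5')
print(locate("movie-007"))  # ('peer', '10.0.3.17')
```

The point of the DT is visible here: a request that misses the local storage is resolved with a single table lookup instead of a network-wide search.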

*Parametric diagrams* use constraint blocks that allow the designer to define and use various system constraints. These constraints represent rules that constrain system properties, or rules that the system must conform to. A constraint block consists of a constraint name and a constraint formula. All variables or constants defined in the formula are linked to the block through an *Attribute* box or through an input from another block.


Fig. 11. Block diagram of control signals and flow items

Figure 12 shows three main constraint blocks: *Peer-to-Peer Transmission Rate*, *PLC Parameters*, and *Acceptance Criteria*. The *Peer-to-Peer Transmission Rate* block defines three formulas (as mentioned in (Loureiro & Anzaloni, 2011)). The *PLC Parameters* block defines the PLHB maximum bandwidth, as mentioned in (Akl et al., 2010). The outputs of the two constraints are used to determine the validity of *Criteria 1*. *Criteria 1* is valid when the PLHB maximum bandwidth is greater than the rate of data transmission *B*. This means that the PLC network is able to handle the traffic generated to update the distribution table. *Criteria 2* defines the time taken to transfer data during startup (i.e., constructing the distribution table); this time should be less than a certain threshold defined as *Tacceptance*. Table 2 clarifies the meaning of the symbols used in figure 12. The next step is to model the behavior of system components to acquire the expected services.
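The two acceptance criteria can be read as simple predicates over the outputs of the other constraint blocks. The text does not spell out the internal formulas of those blocks, so this sketch takes their outputs (*a*, *B*, *T*) as inputs; the example values are the ones derived later in section 3.2:

```python
# Hedged sketch of the Acceptance Criteria block as executable predicates.
# Symbol names follow Table 2; a, B, and T are treated as already-computed inputs.

def criteria_1(a_mbps, b_mbps):
    """PLHB max bandwidth must exceed the DT-update traffic rate B."""
    return a_mbps > b_mbps

def criteria_2(t_sec, t_acceptance_sec):
    """DT construction must finish before T_acceptance."""
    return t_sec < t_acceptance_sec

print(criteria_1(a_mbps=0.0664, b_mbps=0.1))          # False
print(criteria_2(t_sec=39.68, t_acceptance_sec=5.0))  # False
```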

Fig. 12. Parametric diagram for system constraints

#### **3.1.4 Behavior model**

The behavior model aims at formalizing system behavior and reconciling it with the other requirements. In SysML, behaviors can be represented in different ways: by *Activity diagrams*, *Sequence diagrams*, and *State machine diagrams*. We will show how they can give different views for different parts of the system.

Figure 13(a) shows the state machine representing the states of the decentralized technique. When the system starts up, the *Construct DT* state is initiated, and each node starts to broadcast the information of its local video contents. Neighboring nodes receive this information and construct their *Distribution Table (DT)*. The DT contains tuples that consist of a unique video file identifier accompanied by the IP address of the node storing this file. When the construction process completes, the *Normal running* state begins, and nodes start to run normally and exchange video contents. The *Update DT* state is fired in two cases. First, when a node has a change in its local video contents, it updates its local DT and broadcasts the change to allow other nodes to update their local DTs. Second, when it receives a *broadcast change* from neighboring nodes. When the update process finishes, the *Normal running* state is fired by an *Update complete* signal. The system closes when a shutdown signal is detected. As in any SysML diagram, a state can be decomposed into more detailed levels; this is indicated by a small icon at the bottom right corner of the state. The sub-levels are shown in figures 13(b) and 13(c) to present a deeper view of the *Construct DT* and *Update DT* states, respectively.

Fig. 13. State Machine Diagram: (a) first-level state machine, (b) second level of the "Construct DT" state, (c) second level of the "Update DT" state
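The first-level transitions just described can be sketched as a transition table. This is an illustration of the behavior in the text, not code generated from the SysML model; the state and signal names are paraphrased from the figure:

```python
# Transition table for the first-level state machine of Fig. 13(a).
TRANSITIONS = {
    ("Construct DT", "construction complete"): "Normal running",
    ("Normal running", "local content change"): "Update DT",
    ("Normal running", "broadcast change"):     "Update DT",
    ("Update DT", "update complete"):           "Normal running",
    ("Normal running", "shutdown"):             "Closed",
}

def step(state, signal):
    # Unexpected signals leave the state unchanged.
    return TRANSITIONS.get((state, signal), state)

s = "Construct DT"
for sig in ["construction complete", "broadcast change",
            "update complete", "shutdown"]:
    s = step(s, sig)
print(s)  # -> Closed
```

Encoding the machine as a table makes the two entry paths into *Update DT* (local change versus received broadcast) explicit, mirroring the two cases described above.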

Each system state has its own *Activity diagram*. It includes the actions needed to fulfill the state, the signals required to initiate the state, and the signals that fire a transition to another state. Figure 14 shows the behavior of the *Update DT* state. It is initiated by receiving a signal indicating a change in a local file; an update of the local database is then performed, followed by broadcasting an update signal to neighboring nodes. If a *broadcast change* signal is received, the node checks whether it is a new message from a neighbor or its own broadcast message. If it is new, the node updates its local database, rebroadcasts the message, and sends an *Update complete* signal to fire a transition to the normal running state.

Fig. 14. Activity Diagram

| **Symbol** | **Meaning** |
|---|---|
| n | Total number of peers |
| F | Messages sent between two nodes |
| L | Number of video files stored in a local storage |
| *c*5..*c*7 | Constants |
| rho (*ρ*) | Total amount of information (bytes) transmitted during construction of DT |
| theta (*θ*) | Total amount of information (bytes) transmitted during normal operation when one peer advertises one local database change |
| gamma (*γ*) | Advertisements per second |
| B | The amount of data per second transmitted through the network |
| j | Number of PLBs |
| s | Maximum bandwidth of a single PLB |
| a | PLHB maximum bandwidth |
| *Tacceptance* | Maximum delay needed to complete the transmission of *ρ* or *θ* |

Table 2. Constraints symbols

#### **3.2 System evaluation**

We are interested in checking the proposed configuration (i.e., the distribution table and the PLC network) against the criteria shown in Figure 12 to see whether it is feasible. According to the values indicated in (Akl et al., 2010), we can calculate the maximum bandwidth supported by each PLHB

$$a = s \ast j = 3480 \ast 20 = 69600 \text{bit/sec} = 0.06638 \text{Mb/sec} = 8.496 \text{KB/sec} \tag{3}$$

As shown in (Loureiro & Anzaloni, 2011), when *γ* = 20 adv/sec and *n* = 200 peers,

$$B = 0.1 \, \text{Mb/sec} \tag{4}$$

This can be interpreted as having 200 passengers, where 20 of them are performing an update to their DT. Since we will compare the value of *B* with the maximum bandwidth of the PLHB, we are assuming the worst case where all advertisements are initiated at the same PLHB segment.

Furthermore, *ρ* = 3371.3 KB at *n* = 200, so we can deduce its value at *n* = 20, where

$$
\rho = (3371.3 \ast 20)/200 = 337.13 \, \text{KB} \tag{5}
$$

From (3) and (4), we find that *a* < *B*, so the configuration does not fulfill the first acceptance criterion in Figure 12.

From (3) and (5), the time *T* required by the PLHB to transfer the data needed to construct the DT is

$$T = \rho/a = 337.13/8.496 = 39.68 \, \text{sec} \tag{6}$$

This is not an accepted value because it must be less than *Tacceptance* (i.e., 5 seconds).
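The arithmetic of Eqs. (3)–(6) can be rechecked with a short script. The unit conventions (1 KB = 1024 B, 1 Mb = 2^20 bit) are assumed here because they reproduce the figures quoted in the text:

```python
# Re-running the numbers of Eqs. (3)-(6); input values as quoted from
# Akl et al. (2010) and Loureiro & Anzaloni (2011).

s_bps, j = 3480, 20            # s: single-PLB bandwidth [bit/s], j: PLBs per PLHB
a_bps = s_bps * j              # Eq. (3): a = 69600 bit/s
a_mbps = a_bps / 2**20         # ~0.0664 Mb/s
a_kBps = a_bps / 8 / 1024      # ~8.496 KB/s

B_mbps = 0.1                   # Eq. (4): update traffic for n = 200, gamma = 20
rho_kB = 3371.3 * 20 / 200     # Eq. (5): rho scaled linearly from n=200 to n=20
T = rho_kB / a_kBps            # Eq. (6): DT construction time [s]

print(a_mbps < B_mbps)  # True  -> Criteria 1 violated (a < B)
print(T)                # ~39.68 -> Criteria 2 violated (T > 5 s)
```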

Since neither criterion is fulfilled, we can say that under this configuration it is not feasible to use the decentralized technique with this PLC network. The available solutions for this problem are:

• To enhance the performance of the decentralized technique to obtain lower values for *B* and *ρ*
• To enhance the performance of the PLC network to handle more traffic
• To change the value of *Tacceptance* to allow the system to accept more delay
To achieve these changes, the designer has to change the behaviour diagrams. He may also need to change, add, or remove components in the block diagrams. Obviously, *Tacceptance* in the parametric diagram needs to be changed if the third solution is considered. If possible, some requirements may be altered to minimize the constraints imposed on the design.

#### **3.3 Discussion**

IFE is a large system with various components and parameters, especially in an aircraft environment with strict regulations. SysML provides a solution to model and verify such a system. The modeling process starts by defining all parties involved with the system and gathering their requirements; this step helps to produce a design that complies with their needs. These requirements are presented in a requirement diagram to show consistency and the relations between requirements and constraints. Moreover, it shows the system components that are responsible for satisfying the requirements. System components are modeled using block diagrams. The block diagram shows the relations and connections between different components, defines the items flowing between them, and identifies the services they provide or need. The behavior of these components is modeled through different diagrams, where each of them represents a different view of the desired behavior. The behavior diagrams show how components can satisfy the needed requirements. During the design of these models, parametric diagrams are considered to model system constraints.

The design process life cycle is not a sequential one; this means that at any step, changes can be done to a previous step. For example, during the behaviour diagram design, changes can be done to block or requirement diagrams. However, changes to requirements must be done after the approval of stakeholders. At the end of the design process, all components and behaviours must fulfill all requirements and constraints.

#### **4. Conclusion**


Since the very beginning, IFE systems have targeted passenger comfort. This target was the main motivation for developing services dedicated to passengers. As time went by, business requirements changed, so IFE systems started to reveal another dimension of services supporting crew members and airline companies in order to facilitate crew tasks and increase airline revenue. Recent technological advancements helped designers offer various designs and services. However, these variations increased system complexity, and former design techniques became less efficient.

SysML is offered as an indispensable tool for modeling complex systems. It can formalize all parts of the system, so that bug tracking and future enhancements become more manageable. In this work, we showed the design steps for a part of an IFE system and how it can be modeled. Through SysML capabilities, we were able to integrate two different techniques: the Distribution Table for a peer-to-peer network, and the PLC network. These proposals were made by two independent research teams. However, SysML modeling allowed us to verify whether these proposals can be used together in the same system, and if not, what the possible solutions are.

#### **5. Future focus areas for IFE systems**

IFE systems are still in their development phase, and different topics are still under research. In this section, we propose some ideas to be integrated in future designs. Although IFE system development has made a great leap in the past years, there are still various issues that need further research. These developments range from enhancing current systems to adding new components and services. As technology improves, more advanced devices can be used to enhance current components, such as increasing network bandwidth, using more accurate contactless sensors, wireless devices, and lighter components.

There is no limit to the new services that can be added to IFE systems. Nowadays, a passenger who takes several connections to his destination may not be able to continue his selected IFE content even if he is using the same airline. An attractive service would allow him to resume unfinished IFE content when changing to the next connection, so he can enjoy the selected service for the whole trip regardless of any flight change. Another service would be a personal profile through which he can customize his favorite contents before taking the flight, so he does not waste time selecting items during the flight, and his profile can be used for future travels. For health services, automatic pop-up reminders can be used to keep passengers from remaining glued to the entertainment content. Using 3D display devices can introduce a new sensation to IFE entertainment. Furthermore, hologram images can be used to present safety instructions instead of crew members.

#### **6. References**

AeroMobile (Last visit 2011). http://www.aeromobile.net/.

Airships.net (Last visit 2011). http://www.airships.net.

Akl, A., Gayraud, T. & Berthou, P. (2010). Investigating Several Wireless Technologies to Build a Heteregeneous Network for the In-Flight Entertainment System Inside an Aircraft Cabin, *The Sixth International Conference on Wireless and Mobile Communications (ICWMC)* pp. 532–537.

Khan, A. M., Arsov, I., Preda, M., Chabridon, S. & Beugnard, A. (2010). Adaptable

Key Factors in Designing In-Flight Entertainment Systems 359




## **Methods for Analyzing the Reliability of Electrical Systems Used Inside Aircrafts**

Nicolae Jula¹ and Cepisca Costin²

*¹Military Technical Academy of Bucharest, ²University Politehnica of Bucharest, Romania*

## **1. Introduction**


360 Recent Advances in Aircraft Technology


This chapter presents two solutions for the reliability analysis of electrical systems installed on aircraft. The first method, for determining the reliability of electrical networks, is based on an analogy between electrical impedance and reliability. The second method is based on the application of Boolean algebra to the study of reliability in electrical circuits. These methods provide information on the operational safety of the electrical systems on board an airplane, either for the entire system or for each of its components (Jula, 1986). The results allow further optimization of the construction of the electrical systems used on aircraft (Aron et al., 1980), (Jula et al., 2008).

### **2. Calculating electrical impedance and reliability – an analogy**

Establishing the reliability of the structures resulting from the analysis of electrical systems installed on board aircraft can be achieved by direct calculation, but this involves a long working time, since all possible situations that can occur during system operation must be taken into account (Reus, 1971), (Hoang Pham, 2003), (Levitin et al., 1997).

A more efficient calculation for complex structures can be achieved by applying equivalent transformation methods in terms of reliability, similar to the transformation theorems applied to electrical circuits in order to determine the equivalent impedance between two nodes (Moisil, 1979), (Drujinin, 1977), (Billinton, 1996).

#### **2.1 Short presentation of the analogy method**

To highlight the approximations introduced by this method of calculation, consider a group of elements connected in series, with downtime probabilities *q*1, *q*2, ..., *q*n. Using the transformation theorem for elements in series, these elements can be replaced by a single resultant element whose downtime probability *q* (Drujinin, 1977) is given by:


$$q = 1 - \prod\_{i=1}^{n} (1 - q\_i) \tag{1}$$
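As a quick numerical illustration, relation (1) can be evaluated directly; the sketch below uses hypothetical downtime probabilities, not values from the chapter:

```python
from math import prod

def series_downtime(qs):
    # Relation (1): a series group works only if every element works,
    # so its downtime probability is 1 minus the product of the uptimes.
    return 1 - prod(1 - q for q in qs)

# hypothetical downtime probabilities q1, q2, q3
print(series_downtime([0.01, 0.02, 0.03]))  # -> 0.058906 (up to rounding)
```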



$$q = \sum\_{i=1}^{n} q\_i \tag{2}$$


$$q = \sum\_{i=1}^{n} q\_i - \frac{1}{2} \sum\_{\substack{i,j=1 \\ i \neq j}}^{n} q\_i q\_j \tag{3}$$

For the first-order approximation, the error made is of the order of magnitude $q\_i^2$, while for the second-order approximation the error is of the order of $q\_i^3$, and so on.

Therefore, in the first-order approximation, the downtime probabilities *q1*, *q2*, ..., *qn* of elements connected in series are simply added, just as when determining the equivalent impedance of a circuit with electrical components connected in series.
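The error scaling can be checked numerically. The sketch below (with hypothetical q values) compares the exact relation (1) against the first- and second-order approximations (2) and (3):

```python
from math import prod

def q_exact(qs):
    return 1 - prod(1 - q for q in qs)          # relation (1)

def q_order1(qs):
    return sum(qs)                               # relation (2)

def q_order2(qs):
    # relation (3): subtract half the sum of products q_i * q_j over i != j
    cross = sum(qi * qj for i, qi in enumerate(qs)
                        for j, qj in enumerate(qs) if i != j)
    return sum(qs) - 0.5 * cross

qs = [0.01, 0.02, 0.03]
e1 = abs(q_order1(qs) - q_exact(qs))   # ~1.1e-3, of the order of q^2
e2 = abs(q_order2(qs) - q_exact(qs))   # ~6e-6,  of the order of q^3
print(e1, e2)
```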

A group of elements connected in parallel with the probability of downtimes *q1*, *q2*, ..., *qn* can be replaced by one single element that has a probability of downtime:

$$q = \prod\_{i=1}^{n} q\_i \tag{4}$$

In this case, the equivalent probability of downtime is achieved as a product of individual probabilities; therefore the result in this case is different from the equivalent impedance of an electrical circuit made of components in parallel.

A group of elements with delta connection, with the likelihood of downtime *q12, q23, q31* may be replaced by another group of elements connected in star with the probability of downtime *q1, q2, q3*. The relations for transformation are:

$$\begin{aligned} q\_1 &= q\_{12} q\_{31} \\ q\_2 &= q\_{23} q\_{12} \\ q\_3 &= q\_{31} q\_{23} \end{aligned} \tag{5}$$

with an approximation error proportional with *q12* · *q23* · *q31*.

Relation (5) was deduced under the assumption that the reliability of the circuit between two points, for example between point 1 and point 2 - Figure 1 - is the same for both connections in two borderline cases, namely:

- the third point is offline,
- the third point is connected to one of the first two.

Under these conditions the following relationships are obtained:

$$\begin{aligned} q\_1 + q\_2 - q\_1 q\_2 &= q\_{12} (q\_{23} + q\_{31} - q\_{23} q\_{31}) \\ q\_2 + q\_3 - q\_2 q\_3 &= q\_{23} (q\_{31} + q\_{12} - q\_{31} q\_{12}) \\ q\_3 + q\_1 - q\_3 q\_1 &= q\_{31} (q\_{12} + q\_{23} - q\_{12} q\_{23}) \end{aligned} \tag{6}$$

For the second borderline case, the corresponding relationships are:

$$\begin{aligned} q\_1 + q\_2 q\_3 - q\_1 q\_2 q\_3 &= q\_{12} q\_{31} \\ q\_2 + q\_3 q\_1 - q\_2 q\_3 q\_1 &= q\_{23} q\_{12} \\ q\_3 + q\_1 q\_2 - q\_3 q\_1 q\_2 &= q\_{31} q\_{23} \end{aligned} \tag{7}$$

Fig. 1. Star-Delta and Delta - Star transformation for reliability.

It can be seen that the two systems described in (6) and (7) are incompatible. However, if we take into account that the components used in electrical circuits on board an aircraft are characterized by $q \ll 1$, approximate solutions can be utilized (Aron & Paun, 1980).

Neglecting the smallest higher-order terms of the delta-star transformation, in this case the third-order components, the equations in (6) become:

$$\begin{aligned} q\_1 + q\_2 &= q\_{12}q\_{23} + q\_{12}q\_{31} \\ q\_2 + q\_3 &= q\_{23}q\_{31} + q\_{23}q\_{12} \\ q\_3 + q\_1 &= q\_{31}q\_{12} + q\_{31}q\_{23} \end{aligned} \tag{8}$$

If the second equation is multiplied by (−1), all three equations are added, and the result is halved, equation (9) is obtained:

$$q\_1 = q\_{12} q\_{31} \tag{9}$$

Applying the same methodology to the other two remaining equations in (8) yields the following equivalences for the delta-star transformation:

$$\begin{aligned} q\_1 &= q\_{12} q\_{31} \\ q\_2 &= q\_{23} q\_{12} \\ q\_3 &= q\_{31} q\_{23} \end{aligned} \tag{10}$$

From (7) and using the same methodology, relationships for star-delta transformation are obtained (Hohan, 1982):

$$q\_{12} = \sqrt{\frac{q\_1 q\_2}{q\_3}} \quad q\_{23} = \sqrt{\frac{q\_2 q\_3}{q\_1}} \quad q\_{31} = \sqrt{\frac{q\_3 q\_1}{q\_2}} \tag{11}$$
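Since relations (10) and (11) are exact algebraic inverses of each other, a round trip delta → star → delta must recover the original values. The sketch below checks this with hypothetical q values (the third radical is taken as $\sqrt{q\_3 q\_1 / q\_2}$):

```python
from math import sqrt, isclose

def delta_to_star(q12, q23, q31):
    # relation (10): delta-star transformation
    return q12 * q31, q23 * q12, q31 * q23

def star_to_delta(q1, q2, q3):
    # relation (11): star-delta transformation
    return sqrt(q1 * q2 / q3), sqrt(q2 * q3 / q1), sqrt(q3 * q1 / q2)

delta = (0.02, 0.05, 0.03)                    # hypothetical q12, q23, q31
recovered = star_to_delta(*delta_to_star(*delta))
print(all(isclose(a, b) for a, b in zip(delta, recovered)))  # -> True
```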


#### **2.2 The analogy method applied for electrical circuits used in aircrafts**

*Example 1.* The diagram presented in Figure 2.a corresponds to a three-phase electrical generator, part of the airplane power system, powered by a three-phase electric motor, both having their stators with delta connection. The transformed version of the diagram according to the analogy method is shown in Figure 2.b.

Fig. 2. Delta-star transformation – example 1.

The delta-star transformation applied to *q1, q2, q3* and to *q4, q7, q8* leads to a simple network configuration whose downtime probability can be established by applying the previously derived relations:

$$Q = q\_1 q\_2 + q\_7 q\_8 + (q\_1 q\_3 + q\_5 + q\_4 q\_7)(q\_2 q\_3 + q\_6 + q\_4 q\_8)$$

$$Q \cong q\_1 q\_2 + q\_5 q\_6 + q\_7 q\_8 + q\_1 q\_3 q\_6 + q\_2 q\_3 q\_5 + q\_4 q\_6 q\_7 + q\_4 q\_5 q\_8$$

If the components have the same probability *q*, then the probability of downtime *Q* is:

$$Q \cong 3q^2 + 4q^3$$
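For equal component probabilities, the product form and the truncated expansion can be compared numerically. The sketch below (hypothetical q) shows they differ only by the neglected higher-order terms:

```python
def Q_product(q):
    # Q = q1*q2 + q7*q8 + (q1*q3 + q5 + q4*q7)*(q2*q3 + q6 + q4*q8), all qi = q
    branch = q * q + q + q * q
    return q * q + q * q + branch * branch

def Q_truncated(q):
    # truncated expansion: Q ~ 3q^2 + 4q^3
    return 3 * q**2 + 4 * q**3

q = 1e-3
print(Q_product(q) - Q_truncated(q))  # ~4e-12, i.e. the neglected 4*q^4 term
```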

*Example 2.* Figure 3 shows the diagram of a measurement instrument based on logometric principle, used to measure engine temperature or quantity of existing fuel in the plane tanks (Jula, 1986).

Fig. 3. Transformations for the measurement instrument – example 2.

The relations obtained for the probability of downtime *Q* after two transformations are:


$$\begin{aligned} Q &= q\_1 q\_2 q\_3 + q\_7 (q\_6 + q\_1 q\_4) + q\_1 q\_2 q\_7 (q\_5 + q\_2 q\_4) + q\_3 (q\_6 + q\_1 q\_4)(q\_5 + q\_2 q\_4) \\\\ Q &\cong q\_6 q\_7 + q\_1 q\_2 q\_3 + q\_1 q\_4 q\_7 + q\_3 q\_5 q\_6 \end{aligned}$$

If the components have the same probability of downtime *q*, it results:

$$Q \cong q^2 + 3q^3$$

*Example 3.* The diagram in Figure 4 corresponds to an aircraft specific electromagnetic system powered by multiple nodes.

Fig. 4. Successive transformation of the electromagnetic system – example 3.

The downtime probability *Q*, resulting from the transformations illustrated above is:

$$Q = q\_1(q\_5 + q\_2 q\_6) + q\_4(q\_7 + q\_3 q\_6) + q\_1 q\_2 q\_3 (q\_7 + q\_3 q\_6) + q\_2 q\_3 q\_4 (q\_5 + q\_2 q\_6)$$

$$Q \cong q\_1 q\_5 + q\_4 q\_7 + q\_1 q\_2 q\_6 + q\_3 q\_4 q\_6$$

If the components have the same probability *q* of downtime, it results:

$$Q \cong 2q^2 + 2q^3$$

Alternatively, a more efficient transformation is presented in Figure 5.

Fig. 5. A version of the final state after the transformation.

A relation for this state is:


$$Q = q\_1 q\_5 + q\_4 q\_7 + (q\_6 + q\_2 q\_5 + q\_3 q\_7)(q\_1 q\_2 + q\_3 q\_4)$$

$$Q \cong q\_1 q\_5 + q\_4 q\_7 + q\_1 q\_2 q\_6 + q\_3 q\_4 q\_6$$

The result is identical to the one previously obtained, while the calculation time is significantly reduced.

#### **2.3 Conclusions regarding the analogy method**

The method draws on the similarity between the calculus for the electrical impedance and the reliability one, allowing the use of simple relationships and reducing the number of equations to be solved. In case of complex networks other methods would lead to difficulties in obtaining results in short time, while the analogy method, with its rather low number of calculations ensures a time efficient way of finding the downtime probability of any electrical circuit.

If one or more circuit elements are less reliable than the other parts of the circuit, and their downtime probabilities are therefore high, the transformation can give a more accurate approximation of the real state of the system than other methods, mainly due to the multiplier effect it contains.

#### **3. The method based on Boolean logical structures**

Large-scale systems reliability analysis is based on the quantification of the failure process at the structural level. Thus, any system downtime is a result of a quantified sequence of states in the failure process. The quantification level can be chosen in accordance with the desired goal and probability, down even to individual components of the system. The more detailed the quantification, the more accurate would be the resulting probability (Reus, 1971) (Muzi, 2008).

The conceptual representation of an emergent downtime is formed by a series of primary events, interconnected through different Boolean logical structures, which indicate the possible combinations of those elements having as result a system failure (Denis-Papin& Malgrange, 1970), (Chern & Jan, 1986). Thus determining the reliability of an aircraft electrical system using Boolean algebra actually means calculating the probability of a "failure" event.

#### **3.1 Principles of the Boolean method**

From the structural point of view, for the reliability analysis, we will use the terms:

- primary elements – components or blocks at the base level of the quantification;
- groups of elements which, simultaneously in failure mode, drive to a system failure;
- the positions of these elements in the system failure representation.
The method is based on binary logic. Thus, a system function is equivalent to a binary function whose variables are the events (the failures).

This binary function:


$$Y = f\left(X\_1, X\_2, \ldots, X\_n\right) \tag{12}$$

is synthesized with logical elements AND/OR, using the following symbols and states:


*Xi* is 1 if the primary element is good and 0 otherwise, and *Y* is 1 if the system is good and 0 otherwise.

The method representation is depicted in Figure 6. For the reliability function indicators calculus, in the hypothesis of the failure intensity having an exponential distribution, we use the relations:

$$R(t) = \exp\left(-\sum\_{i=1}^{n} \lambda\_i t\right) = \exp\left(-\Lambda t\right) \tag{13}$$

$$R(t) = 1 - \prod\_{i=1}^{n} \left[ 1 - \exp\left(-\lambda\_i t\right) \right] \tag{14}$$

where $\Lambda = \sum\_{i=1}^{n} \lambda\_i$.

Relation (13) is used for the serial connection and relation (14) is used for the parallel connection of the elements.
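Relations (13) and (14) can be sketched directly; the λ values and mission time below are illustrative assumptions, not data from the chapter:

```python
from math import exp

def R_serial(lams, t):
    # relation (13): product of element reliabilities exp(-lambda_i * t)
    return exp(-sum(lams) * t)

def R_parallel(lams, t):
    # relation (14): a redundant group fails only when every element has failed
    prob_all_failed = 1.0
    for lam in lams:
        prob_all_failed *= 1.0 - exp(-lam * t)
    return 1.0 - prob_all_failed

lams = [9.6e-5, 22.4e-5]     # illustrative failure intensities [1/h]
t = 1000.0                   # mission time [h]
print(R_serial(lams, t), R_parallel(lams, t))  # parallel exceeds serial
```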

Fig. 6. a) The general concept of the method based on Boolean algebra (1, 2,..., n are independent primary events); b) the schematics of the logic function AND; c) the schematics of the logic function OR.




#### **3.2 Method application for determining the reliability of the aircraft electric circuits**

In order to exemplify the method for determining the reliability indicators, we will focus on the DC electrical power supply system of an aircraft, depicted in Figure 7.

In principle, this electric power supply system is present (as the main electric power supply system) in a large number of military aircraft, ranging from the MiG family (21, 23, 27, 29, 31, 35) and Su (30, 33, 34, 35, 37) to Chengdu (J-10), Shenyang (J-11) and ORAO. The example refers only to a DC electric power supply system; nevertheless, the method can also be used in alternating-current and mixed system set-ups. In Figure 7:


The emerging failure state diagram using AND/OR elements is depicted in Figure 8. The failure event is the loss of voltage at the 28V bar.

For the failure intensity $\lambda\_i$ of the components we use the relation:

$$\lambda\_i = k \lambda\_0 \tag{15}$$

where *k* is the maintenance and way-of-use coefficient (for aircraft components the coefficient varies between 120 and 160), and $\lambda\_0$ is the failure intensity, a manufacturer-specific datum.

The data relative to the electric power supply system are presented in Table 1.


| No. | Symbol | Description | $\lambda\_0$ [h$^{-1}$] | *k* | $\lambda\_i = k\lambda\_0$ [h$^{-1}$] | $F\_i(t)$ |
|-----|--------|-------------|------------------------|-----|--------------------------------------|-----------|
| 1 | 4E | Switch | $0.12 \cdot 10^{-6}$ | 160 | $1.92 \cdot 10^{-5}$ | $1 - e^{-\lambda\_1 t}$ |
| 2 | 5E | Diode | $0.6 \cdot 10^{-6}$ | 160 | $9.6 \cdot 10^{-5}$ | $1 - e^{-\lambda\_2 t}$ |
| 3 | 13E | Accumulator | $1.4 \cdot 10^{-6}$ | 160 | $22.4 \cdot 10^{-5}$ | $1 - e^{-\lambda\_3 t}$ |
| 4 | 14E | Coupler | $0.4 \cdot 10^{-6}$ | 160 | $6.4 \cdot 10^{-5}$ | $1 - e^{-\lambda\_4 t}$ |
| 5 | 47E | Fuse | $2.75 \cdot 10^{-6}$ | 160 | $44 \cdot 10^{-5}$ | $1 - e^{-\lambda\_5 t}$ |

Table 1. Part I


| No. | Symbol | Description | $\lambda\_0$ [h$^{-1}$] | *k* | $\lambda\_i = k\lambda\_0$ [h$^{-1}$] | $F\_i(t)$ |
|-----|--------|-------------|------------------------|-----|--------------------------------------|-----------|
| 8 | 1E | Starter-generator | $6 \cdot 10^{-6}$ | 160 | $96 \cdot 10^{-5}$ | $1 - e^{-\lambda\_8 t}$ |
| 9 | 24E | Coupler/Decoupler | $0.25 \cdot 10^{-6}$ | 160 | $4 \cdot 10^{-5}$ | $1 - e^{-\lambda\_9 t}$ |
| 10 | 27E | Voltage regulator | $13 \cdot 10^{-6}$ | 160 | $208 \cdot 10^{-5}$ | $1 - e^{-\lambda\_{10} t}$ |

Table 1. Part II
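Relation (15) and the tabulated values can be reproduced directly; the sketch below uses the diode (5E) entry, $\lambda\_0 = 0.6 \cdot 10^{-6}$ h⁻¹ with *k* = 160, as read from the extracted table data:

```python
from math import exp

lambda_0 = 0.6e-6        # manufacturer failure intensity for the diode 5E [1/h]
k = 160                  # maintenance and way-of-use coefficient
lambda_i = k * lambda_0  # relation (15), ~9.6e-5 1/h, matching the table

def F(t):
    # downtime distribution function F_i(t) = 1 - exp(-lambda_i * t)
    return 1.0 - exp(-lambda_i * t)

print(lambda_i)
print(F(1000.0))   # probability of failure within 1000 h
```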

Fig. 7. The electric power supply diagram for an aircraft with a DC main electric supply system (fragment).

Methods for Analyzing the Reliability of Electric Systems Used Inside Aircrafts 371

Fig. 8. The logic structure that drives to the system failure status.

In these conditions, the Boolean function associated to the logic structure depicted in Figure 8 has the following form:

$$Y = X\_7 \cap X\_{12} = \left(X\_1 \cup X\_2 \cup X\_3 \cup X\_4 \cup X\_5 \cup X\_6\right) \cap \left(X\_8 \cup X\_9 \cup X\_{10} \cup X\_{11}\right) \tag{16}$$

To transform the logic equation into algebraic form we use the following relations

$$X\_1 \cap X\_2 = X\_1 \cdot X\_2 \; ; \quad X\_1 \cup X\_2 = X\_1 + X\_2 - X\_1 X\_2 \; ; \quad \bigcup\_{i=1}^n X\_i = 1 - \prod\_{i=1}^n (1 - X\_i) \tag{17}$$

Thus, we have

$$Y = \left[1 - (1 - X\_1)(1 - X\_2)(1 - X\_3)(1 - X\_4)(1 - X\_5)(1 - X\_6)\right] \cdot \left[1 - (1 - X\_8)(1 - X\_9)(1 - X\_{10})(1 - X\_{11})\right] \tag{18}$$

which is similar to

$$Y = X\_7 \cdot X\_{12} = \left[ 1 - \prod\_{i=1}^6 (1 - X\_i) \right] \cdot \left[ 1 - \prod\_{k=8}^{11} (1 - X\_k) \right] \tag{19}$$
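For independent component failure events, equations (17)-(19) can be evaluated directly; a small Python sketch (the probability values here are illustrative, not taken from Table 1):

```python
from math import prod

def group_failure(probs):
    """OR-group, cf. (17): P(at least one component fails) = 1 - prod(1 - X_i)."""
    return 1.0 - prod(1.0 - p for p in probs)

def system_failure(group_1_6, group_8_11):
    """Eq. (19): the 28V bar loses voltage only if both groups X7 and X12 fail."""
    return group_failure(group_1_6) * group_failure(group_8_11)

# Illustrative failure probabilities for X1..X6 and X8..X11:
xa = [0.01, 0.02, 0.01, 0.005, 0.03, 0.01]
xb = [0.05, 0.002, 0.10, 0.008]
print(f"{system_failure(xa, xb):.6f}")
```

For two components, `group_failure` reduces to the familiar inclusion-exclusion form $X\_1 + X\_2 - X\_1 X\_2$ of (17).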

Considering exponentially distributed failure times, the system failure probability is given by the following relations:

$$\begin{aligned} F(t) &= \left\{ 1 - \exp\left[ -\left(\lambda\_1 + \lambda\_2 + \lambda\_3 + \lambda\_4 + \lambda\_5 + \lambda\_6\right)t \right] \right\} \cdot \left\{1 - \exp\left[-\left(\lambda\_8 + \lambda\_9 + \lambda\_{10} + \lambda\_{11}\right)t\right] \right\} = \\ &= 1 - \exp\left[ -\sum\_{i=8}^{11} \lambda\_i t \right] - \exp\left[ -\sum\_{k=1}^6 \lambda\_k t \right] + \exp\left[ -\sum\_{\substack{p=1\\p\neq 7}}^{11} \lambda\_p t \right] \end{aligned} \tag{20}$$

$$R(t) = 1 - F(t) = \exp\left[-\sum\_{i=8}^{11} \lambda\_i t\right] + \exp\left[-\sum\_{k=1}^{6} \lambda\_k t\right] - \exp\left[-\sum\_{\substack{p=1\\p\neq 7}}^{11} \lambda\_p t\right] \tag{21}$$

$$\begin{aligned} MTBF &= \int\_0^{\infty} R(t) \, \mathrm{d}t = \frac{1}{\sum\_{i=8}^{11} \lambda\_i} + \frac{1}{\sum\_{k=1}^{6} \lambda\_k} - \frac{1}{\sum\_{\substack{p=1\\p\neq 7}}^{11} \lambda\_p} = \\ &= \frac{1}{\left(96 + 4 + 208 + 16\right) \cdot 10^{-5}} + \frac{1}{\left(1.92 + 9.6 + 22.4 + 6.4 + 44 + 16\right) \cdot 10^{-5}} - \\ &\quad - \frac{1}{\left(1.92 + 9.6 + 22.4 + 6.4 + 44 + 16 + 96 + 4 + 208 + 16\right) \cdot 10^{-5}} \end{aligned}$$

This results in *MTBF* = 1069.79 hours.
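The figure can be reproduced numerically from the group failure-intensity sums; a minimal check in Python (rates taken from Table 1, in h⁻¹):

```python
from math import exp

# Group failure-intensity sums from Table 1 [1/h], cf. eqs. (20)-(21):
lam_a = sum([1.92, 9.6, 22.4, 6.4, 44.0, 16.0]) * 1e-5  # components X1..X6
lam_b = sum([96.0, 4.0, 208.0, 16.0]) * 1e-5            # components X8..X11

def reliability(t):
    """Eq. (21): R(t) = e^(-lam_b t) + e^(-lam_a t) - e^(-(lam_a + lam_b) t)."""
    return exp(-lam_b * t) + exp(-lam_a * t) - exp(-(lam_a + lam_b) * t)

# Term-by-term integral of R(t) over [0, inf), as in the equation above:
mtbf = 1 / lam_b + 1 / lam_a - 1 / (lam_a + lam_b)
print(round(mtbf, 2))  # 1069.78 hours
```

The closed-form sum matches the stated *MTBF* of roughly 1070 hours.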


Thus, the mean time between failures in the non-improved system may be approximated as *MTBF* ≈ 1070 hours.

#### **3.3 Reliability optimization of electric power supply in the aircraft industry**

We can improve the electric power supply system reliability using a redundant (reserve) subsystem. The proposed improved electric power supply, including the back-up subsystem (dotted lines), is depicted in Figure 9.

Further on, we will analyze the improved electric power supply system reliability using the Boolean method presented in section 3.2. This analysis also allows determining a relation between the system reliability and the system weight; such a relation is useful when emphasizing the variation of the system reliability with the total weight of the system components.

Through a comparative analysis of different reliability-improving variants, imposing the component weight as a minimum condition, we can obtain an optimal solution. The logic structure that drives to the system failure status (for the improved system schematics) is depicted in Figure 10.

Table 2 presents the values of the failure intensity for the supplementary components from the back-up system, in the exponential distribution hypothesis.


Table 2 (fragment). Supplementary components of the back-up system: 60E – coupler (listed value 104.0, $n = 1$, $k = 160$) and 61E – switch (listed value 1012.0, $n = 1$, $k = 160$); the associated failure intensities $\lambda\_{13}$, $\lambda\_{14}$, $\lambda\_{15}$ read $104.6 \cdot 10^{-5}$, $1092.1 \cdot 10^{-5}$ and $104.6 \cdot 10^{-5}$ h$^{-1}$, with $F\_i(t) = 1 - e^{-\lambda\_i t}$.


The Boolean function in this case is:

$$\begin{array}{c} Y = \left(X\_{16} \cap X\_{7}\right) \cap X\_{12} = \left(X\_{13} \cup X\_{14} \cup X\_{15}\right) \cap\\ \qquad \cap \left(X\_{1} \cup X\_{2} \cup X\_{3} \cup X\_{4} \cup X\_{5} \cup X\_{6}\right) \cap\\ \qquad \cap \left(X\_{8} \cup X\_{9} \cup X\_{10} \cup X\_{11}\right) .\end{array} \tag{22}$$

Transforming in algebraic form, we have:

$$\begin{array}{c} Y = \left[ 1 - (1 - X\_{13})(1 - X\_{14})(1 - X\_{15}) \right] \cdot \left[ 1 - (1 - X\_{1})(1 - X\_{2})(1 - X\_{3})(1 - X\_{4})(1 - X\_{5})(1 - X\_{6}) \right] \cdot \\\ \cdot \left[ 1 - (1 - X\_{8})(1 - X\_{9})(1 - X\_{10})(1 - X\_{11}) \right] \end{array} \tag{23}$$

Fig. 9. Electric power supply system of an aircraft including the back-up subsystem (fragment).

 


Fig. 10. The logic structure of the electric system presented in fig. 9.

$$Y = \left[1 - \prod\_{i=13}^{15} (1 - X\_i)\right] \cdot \left[1 - \prod\_{k=1}^{6} (1 - X\_k)\right] \cdot \left[1 - \prod\_{p=8}^{11} (1 - X\_p)\right] \tag{24}$$

From (24) we can determine the system failure probability *F t*( ):

$$\begin{aligned} F(t) &= \left[1 - \exp\left(-\sum\_{i=13}^{15} \lambda\_i t\right)\right] \cdot \left[1 - \exp\left(-\sum\_{k=1}^{6} \lambda\_k t\right)\right] \cdot \left[1 - \exp\left(-\sum\_{p=8}^{11} \lambda\_p t\right)\right] = \\ &= 1 - \exp\left[-\sum\_{i=13}^{15} \lambda\_i t\right] - \exp\left[-\sum\_{k=1}^{6} \lambda\_k t\right] - \exp\left[-\sum\_{p=8}^{11} \lambda\_p t\right] + \exp\left[-\sum\_{\substack{i=1\\i\neq 7}}^{11} \lambda\_i t\right] + \\ &\quad + \exp\left[-\sum\_{\substack{i=8\\i\neq 12}}^{15} \lambda\_i t\right] + \exp\left[-\left(\sum\_{k=1}^{6} \lambda\_k + \sum\_{i=13}^{15} \lambda\_i\right) t\right] - \exp\left[-\sum\_{\substack{i=1\\i\neq 7,12}}^{15} \lambda\_i t\right] \end{aligned} \tag{25}$$

*F t*( ) and *R t*( ) are complementary functions, thus, for the electric power supply system reliability *R t*( ) we will have the following relation:


$$\begin{aligned} R(t) = 1 - F(t) &= \exp\left[-\sum\_{i=13}^{15} \lambda\_i t\right] + \exp\left[-\sum\_{k=1}^{6} \lambda\_k t\right] + \exp\left[-\sum\_{p=8}^{11} \lambda\_p t\right] - \exp\left[-\sum\_{\substack{i=1\\i\neq 7}}^{11} \lambda\_i t\right] - \\ &\quad - \exp\left[-\sum\_{\substack{i=8\\i\neq 12}}^{15} \lambda\_i t\right] - \exp\left[-\left(\sum\_{k=1}^{6} \lambda\_k + \sum\_{i=13}^{15} \lambda\_i\right) t\right] + \exp\left[-\sum\_{\substack{i=1\\i\neq 7,12}}^{15} \lambda\_i t\right] \end{aligned} \tag{26}$$

$$\begin{aligned} MTBF &= \int\_0^{\infty} R(t)\,\mathrm{d}t = \frac{1}{\sum\_{i=13}^{15} \lambda\_i} + \frac{1}{\sum\_{k=1}^{6} \lambda\_k} + \frac{1}{\sum\_{p=8}^{11} \lambda\_p} - \frac{1}{\sum\_{\substack{i=1\\i\neq 7}}^{11} \lambda\_i} - \frac{1}{\sum\_{\substack{i=8\\i\neq 12}}^{15} \lambda\_i} - \\ &\quad - \frac{1}{\sum\_{k=1}^{6} \lambda\_k + \sum\_{i=13}^{15} \lambda\_i} + \frac{1}{\sum\_{\substack{i=1\\i\neq 7,12}}^{15} \lambda\_i} \approx 6926 \text{ hours} \end{aligned} \tag{27}$$
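The same inclusion-exclusion pattern extends to any number of redundant OR-groups of exponentially distributed components; a generic sketch in Python, verified here on the initial two-group system (the rates below are the Table 1 values):

```python
from itertools import combinations

def mtbf_redundant_groups(groups):
    """MTBF for a system that fails only when every OR-group has failed
    (exponential components): inclusion-exclusion over the group
    failure-rate sums, generalizing eqs. (21), (26) and (27)."""
    rates = [sum(g) for g in groups]
    total = 0.0
    for r in range(1, len(rates) + 1):
        for combo in combinations(rates, r):
            total += (-1) ** (r + 1) / sum(combo)
    return total

# The initial system (groups X1..X6 and X8..X11) reproduces MTBF = 1069.79 h:
initial = [[1.92e-5, 9.6e-5, 22.4e-5, 6.4e-5, 44e-5, 16e-5],
           [96e-5, 4e-5, 208e-5, 16e-5]]
print(round(mtbf_redundant_groups(initial), 2))  # 1069.78
```

Adding the back-up group of Table 2 as a third list gives the improved-system *MTBF* of equation (27).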

#### **3.4 Influence of the maintenance and way-of-use coefficient** *k* **on** *MTBF*

Taking into account the characteristics of the system failure probability *F*(*t*) and reliability *R*(*t*) for the systems in Figures 7 and 9, a simulation was made using a Matlab program (Jula et al., 2008), which presents the time evolutions of these variables.

Coefficient *k* from equation (15) has the starting value *k* = 160; for this value, *MTBF* was calculated for both the initial and the improved system. The Matlab program allows a complex analysis of the influence of coefficient *k* on the system failure probability, the reliability and the *MTBF*.

Time characteristics *F*(*t*) and *R*(*t*) for different values of coefficient *k* are presented below (*k* = 120 (blue), *k* = 130 (red), *k* = 140 (black), *k* = 150 (magenta) and *k* = 160 (green)).

Figures 11 to 13 present the results for the initial system. As can be seen, *F*(*t*) increases with *k* whereas the reliability function *R*(*t*) decreases; the mean time between failures (*MTBF*) is larger for small values of the coefficient *k*.

The same analysis will be conducted for the improved system, in order to compare results. The graphic characteristics are presented in Figures 14 to 16, while the obtained values for both the initial and the improved system are presented in Table 3.
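Because every component rate scales linearly with *k* through relation (15), the initial system's *MTBF* scales exactly as 1/*k*; the initial-system row of Table 3 can be reproduced with a short sweep (a sketch, rates from Table 1 at *k* = 160):

```python
# MTBF of the initial system as a function of the maintenance and
# way-of-use coefficient k; the rates below are the Table 1 values at k = 160.
lam_a_160 = sum([1.92, 9.6, 22.4, 6.4, 44.0, 16.0]) * 1e-5  # group X1..X6
lam_b_160 = sum([96.0, 4.0, 208.0, 16.0]) * 1e-5            # group X8..X11

mtbf_by_k = {}
for k in (120, 130, 140, 150, 160):
    a = lam_a_160 * k / 160  # rescale every rate by k/160, per relation (15)
    b = lam_b_160 * k / 160
    mtbf_by_k[k] = 1 / a + 1 / b - 1 / (a + b)
    print(k, round(mtbf_by_k[k], 1))
```

The printed values (1426.4, 1316.7, 1222.6, 1141.1, 1069.8 hours) match the initial-system row of Table 3.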

 


Fig. 11. System failure probability *F*(*t*) for different values of *k* (initial system).

Fig. 12. System's reliability *R*(*t*) for different values of *k* (initial system).



Fig. 13. *MTBF* for different values of *k* (initial system).


Table 3.

| *k* | 120 | 130 | 140 | 150 | 160 |
|---|---|---|---|---|---|
| Initial system, $MTBF\_0$ [hours] | 1426.4 | 1316.7 | 1222.6 | 1141.1 | 1069.8 |
| Improved system, $MTBF\_r$ [$10^3$ hours] | 9.2354 | 8.5250 | 7.9160 | 7.3883 | 6.9265 |
| $MTBF\_r / MTBF\_0$ | 6.4746 | 6.4745 | 6.4747 | 6.4747 | 6.4746 |

Fig. 14. System failure probability for different values of *k* (improved system).


Fig. 15. System's reliability for different values of *k* (improved system).

Fig. 16. *MTBF* for different values of *k* (improved system).

A comparative presentation of the two systems' reliability for different values of *k* is depicted in Figure 17 (blue lines for the initial system, red for the improved system).

For the five analyzed values of coefficient *k,* the improved electric supply with a redundant (reserve) subsystem is characterized by superior values of *MTBF* compared to the initial system (fig.18).



Fig. 17. Comparative analysis of the two systems' reliability for different values of *k.*

In Figure 18 the evolution of *MTBF* for the initial system is represented by a dashed line, while the evolution of *MTBF* for the improved system is represented by a continuous line.

Fig. 18. Evolutions of *MTBF* for the two systems.

#### **3.5 Conclusions regarding the Boolean method**

From the analyzed examples and the results obtained for *MTBF*, we can conclude that the method can be successfully used in the aircraft industry for determining the reliability of electrical systems. The *MTBF*-influencing parameters in the main system nodes (power supply bars and distribution panels) can be calculated and compared.

Through the analysis of the failure-related logic function we can determine the circuits that can improve the system reliability. In the case presented, through the introduction of the components 60E, 61E and the corresponding contacts, a substantial increase of the reliability (approximately 6 times higher *MTBF*) was obtained for the 28V DC power supply bar.

We have conducted a complex analysis of the influence of the maintenance and way-of-use coefficient *k* on the system failure probability, the system reliability and the *MTBF*.

### **4. References**

Aron, I.; Păun, V. (1980). *Echipamentul electric al aeronavelor*, Editura Didactică şi Pedagogică, Bucureşti, Romania

Billinton, R.; Allan, R.N. (1996). *Reliability evaluation of power systems*, 2nd ed., New York, Plenum Press

Chern, C.S.; Jan, R.H. (1986). Reliability optimization problems with multiple constraints. *IEEE Trans. Reliab.*, R-35, 431-6

Denis-Papin, M.; Malgrange, Y. (1970). *Exerciţii de calcul boolean cu soluţiile lor*, Ed. Tehnică, Bucureşti, Romania

Drujinin, C.V. (1977). *Nadejnost avtomatizirovannih sistem*, Energhia, Moskva

Gnedenko, B. (1995). *Probabilistic reliability engineering*. New York, John Wiley & Sons

Hecht, H. (2004). *System Reliability and Failure Prevention*, Artech House, London

Hoang Pham (2003). *Handbook of Reliability Engineering*. Springer Verlag

Hohan, I. (1982). *Fiabilitatea sistemelor mari*, E.D.P., Bucharest, Romania

Jula, N. (1986). Contribuţii la optimizarea circuitelor electrice de la bordul avioanelor militare. PhD Thesis, Bucureşti, Romania

Jula, N.; Cepisca, C.; Lungu, M.; Racuciu, C.; Ursu, T.; Raducanu, D. (2008). Theoretical and practical aspects for study and optimization of the aircrafts' electro energetic systems. *WSEAS Transactions on Circuits and Systems*, Issue 12, Vol. 7, pp. 999-1008

Levitin, G.; Lisnianski, A.; Ben Haim, H.; Elmakis, D. (1998). Redundancy optimization for series-parallel multi-state systems. *IEEE Trans. Reliab.*, 47(2), 165-72

Levitin, G.; Lisnianski, A.; Elmakis, D. (1997). Structure optimization of power system with different redundant elements. *Electr. Power Syst. Res.*, 43, 19-27

Lyn, M.R. (1996). *Handbook of software reliability engineering*, New York, McGraw-Hill

Mathur, F.P.; De Sousa, P.T. (1975). Reliability modeling and analysis of general modular redundant systems. *IEEE Trans. Reliab.*, 24, 296-9

Moisil, G. (1979). *Teoria algebrică a mecanismelor automate*. Ed. Tehnică, Bucharest

Muzi, F. (2008). Real-time Voltage Control to Improve Automation and Quality in Power Distribution. *WSEAS Transactions on Circuits and Systems*, Issue 6, Vol. 7

Reus, I. (1971). *Tratarea simbolică a schemelor de comutaţie*. Ed. Academiei, Bucharest

## **Part 4**

## **Aircraft Inspection and Maintenance**


**17**

## **Automatic Inspection of Aircraft Components Using Thermographic and Ultrasonic Techniques**

Marco Leo

*Consiglio Nazionale delle Ricerche- Istituto di Studi sui Sistemi Intelligenti per l'Automazione Italy* 

## **1. Introduction**

Safety in aeronautics could be improved if continuous checks were guaranteed during the in-service inspection of aircraft. However, until now, the maintenance costs of doing so have proved prohibitive. In particular, the analysis of the internal defects (not detectable by a visual inspection) of the aircraft's composite materials is a challenging task: invasive techniques are counterproductive and, for this reason, there is a great interest in the development of non-destructive inspection techniques that can be applied during normal routine tests.

Non Destructive Testing & Evaluation (NDT & E) techniques consist of a data acquisition phase (based on any scanning method that does not permanently alter the article being inspected) followed by a data analysis phase carried out by qualified personnel. In particular, transient thermography and ultrasound analysis are two of the most promising techniques for the analysis of aircraft composite materials (Hellier, 2001).

Non-destructive evaluation requires an excessive amount of money and time and its reliability depends on a multitude of different factors. These range from physical aspects of the technology used (e.g., wavelength of ultrasound) to application issues (e.g. probe coupling or scanning coverage) and human factors (e.g. inspector training and stress or time pressure during inspection) (Kemppainen. & Virkkunen, 2011).

Most of the work in the literature concentrates on the study of data acquisition and manipulation processes in order to prove the relationship between data and structural defects or composition of the material (Chatterjee et al., 2011). Unfortunately only some of the work from the literature concentrates on the posterior analysis of the acquired data in order to (fully or partially) delegate, to some computational algorithm, the automatic recognition of material composition, operative conditions, presence of defects, and so on. This is undoubtedly a very attractive research field since it can reduce operational costs, save time and make the process independent from human factors. However, the development of proper algorithms and methodologies is in its infancy and their level of inspection reliability is still inadequate for those sectors (namely, transportation) where an error can have serious health and safety consequences.

Automatic Inspection of Aircraft Components Using Thermographic and Ultrasonic Techniques 385

modality, the receiver is placed on the opposite side of the material from the pulser, whereas, in the reflection modality, the pulser and the receiver are placed on the same side

Ultrasonic data can be collected and displayed in a number of different formats. The three most common formats are known in the NDT community as A-scan, B-scan, and C-scan presentations. Each presentation mode provides a different way of looking at and evaluating the region of material being inspected. On the one hand, thermographic analysis is carried out to automatically discover water insertions whereas ultrasonic inspection aims

For thermographic inspection we analyze mono-dimensional signals obtained by considering the time variation of each pixel in the sequence of thermographic images. For each point (i,j) of the material the mono-dimensional signal is generated from the gray levels of the same point in the sequence of images: this signal represents the temperature variation of the material during and after the heating process. This way it is possible to generate spatial-time variant images, the analysis of which allows for the evaluation of the thermal

In Figure 2, the one-dimensional signals extracted from the thermographic sequence of aircraft fuselage are shown: one point belongs to an area affected by the presence of water (red line) whereas the other signal corresponds to non-defective areas (gray lines). From the graph it is clearly evident that a functional description of the intensity variations cannot be easily generalized and the behaviours of points corresponding to defective and non-

For the analysis of ultrasonic data we analyze one-dimensional signals acquired from the reflection working modality and A-scan representation. This means that, for each point of the inspected material, we have a continuous signal that represents the amount of received

Fig. 1. Scheme of the proposed framework.

at revealing solid insertions of brass foil.

gradient during the heating process.

defective areas are very similar.

ultrasonic energy as a function of time.

of the material.

The pioneering work on the a posteriori analysis of data dates back to the early 1990s: it suggested that solutions to the problem of automatic ultrasonic NDT data interpretation could be found by expert systems which embody the knowledge of human interpreters (McNab & Dunlop, 1995) (Hopgood et al., 1993) (Avdelidis et al., 2003) (Meola et al., 2006) (Silva et al., 2003). More effective approaches, based on advanced signal processing and artificial intelligence paradigms, have been proposed in the last decade (Benitez et al., 2009) (Wang et al., 2008).

In this chapter, we address the problem of developing an automatic system for the analysis of sequences of thermographic images and ultrasonic signals to help safety inspectors in the diagnosis of problems in aircraft components in all those cases where the defects or the internal damage are not detectable with a visual inspection. In particular thermographic analysis is proposed to automatically discover water insertions whereas ultrasonic inspection aims at revealing solid insertions of brass foil.

The proposed approach considers two main steps for interpreting thermographic and ultrasonic data: in the first step a pre-processing technique is introduced to clean data from noise and to emphasise embedded patterns and the classification techniques used to compare ultrasonic signals and to detect classes of similar points. In the second step two neural networks are trained to extract the information that characterises a range of internal defects starting from ultrasonic and thermographic signals extracted in correspondence to the defective areas. After that the same neural networks are applied to automatically inspect real aircraft components.

Section 2 gives an overview of the proposed approach whereas section 3 and 4 concentrate on the data pre-processing and classification respectively. Finally, section 5 presents the experimental results on real aircraft material and conclusions are derived in section 6.

## **2. Overview of the system**

The proposed system for automatic inspection of aircraft components is schematized in figure 1. The system takes the data extracted by non destructive processes reported in the literature as transient thermography and ultrasound scanning as input.

Transient thermography is a non-contact technique, which uses the thermal gradient variation to inspect the internal properties of the investigated area. The materials are heated by an external source (lamps) and the resulting thermal transient is recorded using an infrared camera. Of course, this kind of analysis is only applicable to materials that have a good thermal conductivity such as metals and carbon composites. Different types of thermal excitation can be used according to the materials and the defects under investigation: for instance uniform heating, spot heating, and line heating.

Ultrasonic inspection instead uses sound signals at frequencies beyond human hearing (more than 20 kHz) to estimate some properties of the irradiated material by analyzing either the reflected (reflection working modality) or transmitted (transmission working modality) signals. A typical ultrasonic inspection system consists of several functional units: pulser, receiver, transducer, and display devices. A pulser is an electronic device that can produce a high-voltage electrical pulse. Driven by the pulser, the transducer generates a high-frequency ultrasonic wave which propagates through the material. In the transmission modality, the receiver is placed on the opposite side of the material from the pulser, whereas, in the reflection modality, the pulser and the receiver are placed on the same side of the material.

Fig. 1. Scheme of the proposed framework.

The pioneering work on the a posteriori analysis of data dates back to the early 1990s: it suggested that solutions to the problem of automatic ultrasonic NDT data interpretation could be found by expert systems which embody the knowledge of human interpreters (McNab & Dunlop, 1995; Hopgood et al., 1993; Avdelidis et al., 2003; Meola et al., 2006; Silva et al., 2003). More effective approaches, based on advanced signal processing and artificial intelligence paradigms, have been proposed in the last decade (Benitez et al., 2009; Wang et al., 2008).


Ultrasonic data can be collected and displayed in a number of different formats. The three most common formats are known in the NDT community as A-scan, B-scan, and C-scan presentations. Each presentation mode provides a different way of looking at and evaluating the region of material being inspected. On the one hand, thermographic analysis is carried out to automatically discover water insertions whereas ultrasonic inspection aims at revealing solid insertions of brass foil.

For thermographic inspection we analyze mono-dimensional signals obtained by considering the time variation of each pixel in the sequence of thermographic images. For each point (i,j) of the material the mono-dimensional signal is generated from the gray levels of the same point in the sequence of images: this signal represents the temperature variation of the material during and after the heating process. This way it is possible to generate spatial-time variant images, the analysis of which allows for the evaluation of the thermal gradient during the heating process.
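The per-pixel signal construction described above can be sketched as follows. This is an illustrative snippet, not the authors' code: frames are assumed to be 2-D arrays of gray levels, one per acquisition instant.

```python
# Sketch: build the per-pixel temporal signal from a sequence of
# thermographic frames. Each frame is a 2-D list of gray levels; the
# signal of pixel (i, j) is its gray level across all frames.

def pixel_signal(frames, i, j):
    """Return the 1-D temporal signal of pixel (i, j) across all frames."""
    return [frame[i][j] for frame in frames]

# Toy sequence: three 2x2 frames recorded during/after heating.
frames = [
    [[10, 12], [11, 13]],   # t = 0 (before heating)
    [[40, 41], [39, 42]],   # t = 1 (peak of heating)
    [[25, 26], [24, 27]],   # t = 2 (cooling)
]

signal = pixel_signal(frames, 0, 1)
print(signal)  # [12, 41, 26]
```

Applying `pixel_signal` to every (i, j) yields the spatial-time variant representation from which the thermal gradient can be evaluated.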

In Figure 2, the one-dimensional signals extracted from the thermographic sequence of an aircraft fuselage are shown: one point belongs to an area affected by the presence of water (red line) whereas the other signals correspond to non-defective areas (gray lines). From the graph it is evident that a functional description of the intensity variations cannot be easily generalized and the behaviours of points corresponding to defective and non-defective areas are very similar.

For the analysis of ultrasonic data we analyze one-dimensional signals acquired from the reflection working modality and A-scan representation. This means that, for each point of the inspected material, we have a continuous signal that represents the amount of received ultrasonic energy as a function of time.

Automatic Inspection of Aircraft Components Using Thermographic and Ultrasonic Techniques 387


In figure 3 two ultrasound signals are shown. The signal on top is relative to a non-defective point. Observe that there are large extrema at the beginning and at the end. These changes in ultrasound energy are caused by the transmitted signals being reflected by the boundaries of the material. These boundary extrema are referred to as tool side and bag side peaks, respectively. The ultrasonic signal for an area of material that contains defects is given on the bottom of figure 3. In addition to the boundary extrema, the signals contain extrema at other time locations caused by defective components. The time localization of the additional extrema depends on the defect location in the inspected material.
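The interpretation of the A-scan extrema described above can be sketched as a simple peak analysis. This is a hedged illustration with invented threshold and data, not the chapter's detection algorithm: any peak lying between the tool-side (first) and bag-side (last) boundary peaks is flagged as a possible defect echo.

```python
# Illustrative sketch: find the extrema of an A-scan and flag peaks lying
# between the tool-side and bag-side boundary peaks as defect indications.

def find_peaks(signal, threshold):
    """Indices of local maxima whose amplitude exceeds `threshold`."""
    return [k for k in range(1, len(signal) - 1)
            if signal[k] > threshold
            and signal[k] > signal[k - 1]
            and signal[k] >= signal[k + 1]]

def defect_echoes(signal, threshold):
    """Peaks strictly between the two boundary extrema."""
    peaks = find_peaks(signal, threshold)
    if len(peaks) < 2:
        return []
    return peaks[1:-1]   # drop tool-side and bag-side peaks

# Synthetic A-scan: boundary echoes at samples 2 and 12, defect echo at 7.
a_scan = [0, 1, 9, 1, 0, 0, 1, 6, 1, 0, 0, 1, 9, 1, 0]
print(defect_echoes(a_scan, threshold=3))  # [7]
```

The sample index of the surviving peak maps directly to the depth of the defect in the inspected material.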

The temporal evolution of the thermographic and ultrasound signals x(t) is the input to the core of the proposed approach that consists of two main steps: the pre-processing of the data, in order to emphasize the characteristics of the signals belonging to the same class, and the following neural classification.

The pre-processing step discards noise and enhances the most relevant information for flawed-area detection purposes. Two Multi Layer Perceptron (MLP) neural architectures, characterized by the presence of an input layer of source nodes, a hidden layer and an output layer, are then used to build an inspection framework that automatically labels each signal as belonging to a flawed area or not.

A final connectivity analysis of all the points labelled as belonging to flawed areas is done in order to both discard isolated false positives and to deduce size and shape of the flawed area as a whole.
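The connectivity analysis above can be sketched with a standard connected-component search. This is a minimal illustration, not the authors' implementation; the 4-connectivity and the minimum-size threshold are assumptions.

```python
# Sketch of the final connectivity analysis: group 4-connected pixels
# labelled as flawed and discard components too small to be a real defect.
from collections import deque

def flawed_regions(mask, min_size=2):
    """Connected components of a binary mask with at least `min_size` pixels."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:
                    regions.append(comp)
    return regions

# One 3-pixel defect and one isolated false positive.
mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 0]]
print(len(flawed_regions(mask)))  # 1: the lone pixel is discarded
```

The surviving components give both the location and, through their pixel sets, the size and shape of each flawed area.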

Fig. 2. The one-dimensional signals extracted from the thermographic sequence of aircraft fuselage. The black line corresponds to unflawed areas whereas the red line corresponds to a pixel belonging to water infiltration.

Fig. 3. Two ultrasound signals: the signal on top is relative to a non-defective point. The signal on the bottom is relative to a flawed area.

#### **3. Data pre-processing**

386 Recent Advances in Aircraft Technology


The automatic classification of acquired signals as flawed or unflawed is not trivial due to the large intra-class variance: on the one hand, ultrasonic and thermal signals relative to unflawed areas can show different temporal behaviours depending on manufacturing variations in the underlying composite layers or specimen thickness variations. This is evident in figure 4, where different thermographic signals relative to unflawed areas are reported. On the other hand, signals relative to flawed areas can differ since insertions and infiltrations can occur at different locations.

In order to make the classification easier, a pre-processing step is then required: on the one hand, it has to increase the signal-to-noise ratio and, on the other hand, to detect and enhance the information that could increase the probability of separating signals belonging to different classes.

Fig. 4. Thermographic signals relative to unflawed areas: their temporal behaviours can strongly differ depending on many factors.


There are many effective signal pre-processing techniques in the literature. Most of them work in a specific domain (time or frequency) whereas a few of them affect both domains simultaneously. In the latter category lies the so-called Wavelet Transform, an extension of Fourier Transform generalized to any wideband transient. For its capability to give a multidomain representation of the data, the wavelet transform has been used in this work to analyse collected thermographic and ultrasonic data.

In figure 5 the wavelet decompositions at level 3 (using Daubechies 3 kernels) of a thermographic (top) and an ultrasound (bottom) signal are reported.

The next subsection gives some additional theoretical information about the considered preprocessing technique based on Wavelet Transform.

Fig. 5. The wavelet decomposition of a thermographic (on top) and ultrasound (at the bottom) signal.

#### **3.1 Wavelet transform**


Let us think about our input as a time-varying signal. To analyze signal structures of very different sizes, it is necessary to use time-frequency atoms with different time supports. The wavelet transform decomposes signals over dilated and translated wavelets (Mallat, 1999). The signal may be sampled at discrete wavelength values, yielding a spectrum. In the continuous wavelet transform the input signal is correlated with an analyzing continuous wavelet, which is a function of two parameters: scale and position. The widely used Fourier transform (FT) maps the input data into a new space, the basis functions of which are sines and cosines. Such basis functions are defined in an infinite space and are periodic; this means that the FT is best suited to signals with these same features. The wavelet transform maps the input signal into a new space whose basis functions are usually of compact support. The term wavelet comes from well-localized wave-like functions.

In fact, they are well-localized in space and frequency, i.e. their rate of variation is restricted. The Fourier transform is local only in frequency, not in space. Furthermore, the Fourier basis is unique, whereas the wavelet basis is not, since there are many possible sets of wavelets from which one can choose.

Our trade-off between different wavelet sets is compactness versus smoothness. Working with fixed windows, as in the Short Term Fourier Transform (STFT), may bring about problems. If the signal details are much smaller than the width of the window they can be detected, but the transform will not localize them. If the signal details are larger than the window size, they will not be detected properly. The scale is defined by the width of a modulation function. To solve this problem we must define a transform that is independent of the scale. This means that the function should not have a fixed scale but should vary. To achieve this, we start from a function *ψ*(*t*) as a candidate modulation function and obtain a family from it by varying the scale *s* as follows:

$$\psi\_s(t) = \frac{1}{\sqrt{|s|}}\,\psi\!\left(\frac{t}{s}\right)$$

If *ψ*(*t*) has width *T*, then the width of *ψs*(*t*) is *sT*. In terms of frequencies, the smaller the *s*, the higher the frequencies *ψs* captures, and vice versa.
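The scaling relation can be verified numerically. The snippet below is a small illustration using the Haar mother wavelet as a convenient example (the chapter itself works with Daubechies kernels): dilating by *s* stretches the support from *T* to *sT* while the 1/√|*s*| factor preserves the norm.

```python
# Numerical check of the scaling relation psi_s(t) = psi(t/s) / sqrt(|s|):
# if psi has support of width T, psi_s has support of width s*T.
import math

def psi(t):
    """Haar mother wavelet, supported on [0, 1) (width T = 1)."""
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1:
        return -1.0
    return 0.0

def psi_s(t, s):
    """Scaled wavelet: dilate by s, renormalize by 1/sqrt(|s|)."""
    return psi(t / s) / math.sqrt(abs(s))

# With s = 4 the support stretches to [0, 4): width sT = 4.
print(psi_s(3.0, 4))  # -0.5 : inside the dilated support
print(psi_s(3.0, 1))  # 0.0  : outside the original support
```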

The continuous wavelet transform *X* is the result of the scalar product of the original signal *x*(*t*) with the shifted and scaled versions of a prototype analysing function *ψ*(*t*), called the mother wavelet, which has the characteristic of a band-pass filter impulse response.

The coefficients of the transformed signal represent how closely correlated the mother wavelet is with the section of the signal being analyzed. The higher the coefficient, the greater the similarity.

Calculating wavelet coefficients at every possible scale is a fair amount of work, and it generates a great amount of data. If we choose scales and positions based on powers of two (called dyadic scales and positions), then our analysis will be much more efficient. This analysis is called the *discrete wavelet transform*.

In the discrete case, the WT is sampled at discrete mesh points using smoother basis functions. This way a multiresolution representation of the signal *x*(*t*) can be achieved.



Notice that the wavelet transform can be written as a convolution product (it is a linear space-invariant filter):

$$X(s,t) = \int x(u)\,\psi\_{s,t}(u)\,du = \left\langle \psi\_{s,t}, x \right\rangle$$

This leads to a fast and efficient implementation of the wavelet transform for a discrete signal obtained using digital filtering techniques. The signal to be analyzed is passed through filters with different cut off frequencies at different scales. The wavelet transform for a discrete signal is computed by successive low-pass and high-pass filtering of the discrete time-domain signal. Many filter kernels can be used for this scope and the best choice depends on the features of the input signal that have to be exploited.

At each decomposition level, the half-band filters produce signals spanning only half the frequency band. This doubles the frequency resolution as the uncertainty in frequency is reduced by half. At the same time, the decimation by 2 doubles the scale. With this approach, the time resolution becomes arbitrarily good at high frequencies, whereas the frequency resolution becomes arbitrarily good at low frequencies.

## **4. Automatic learning and classification of defective and non-defective patterns**

After the pre-processing step, the new wavelet-based data representation is given as input to an automatic classifier that, after a proper learning phase, is able to label each input stream as belonging to a flawed or unflawed area on the basis of the learned input/output mapping model. One of the most powerful data modelling tools able to capture and represent complex input/output relationships is the neural network (NN).

#### **4.1 Neural network paradigm**

The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain. Neural networks resemble the human brain in the following two ways:

1. A neural network acquires knowledge through learning.
2. A neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.
The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships and in their ability to learn these relationships directly from the data being modelled. Traditional linear models are simply inadequate when it comes to modelling data that contains non-linear characteristics.

The most common neural network model is the multilayer perceptron (MLP), having an architecture as reported in figure 6. This type of neural network is known as a supervised network because it requires a desired output in order to learn. The goal of this type of network is to create a model that correctly maps the input to the output using historical data so that the model can then be used to produce the output when the desired output is unknown.

Fig. 6. A feed forward neural network scheme.


The MLP and many other neural networks learn using an algorithm called back propagation. With back propagation, the input data is repeatedly presented to the neural network. With each presentation the output of the neural network is compared to the desired output and an error is computed. This error is then fed back (back propagated) to the neural network and used to adjust the weights such that the error decreases with each iteration and the neural model gets closer and closer to producing the desired output. This process is known as "training".
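The training loop just described can be sketched in a few dozen lines. This is a minimal pure-Python illustration, not the chapter's implementation: the layer sizes, learning rate, and toy feature vectors are invented, and only the structure (forward pass, error computation, weight adjustment) reflects the text.

```python
# Minimal MLP with one hidden layer and two output nodes (flawed /
# unflawed), trained by back-propagation of the output error.
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

class MLP:
    def __init__(self, n_in=2, n_hid=4, n_out=2, seed=1):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w2 = [[rnd.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]

    def forward(self, x):
        xb = list(x) + [1.0]                       # input plus bias unit
        h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = h + [1.0]                             # hidden plus bias unit
        y = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in self.w2]
        return hb, y

    def train_step(self, x, target, lr=0.5):
        xb = list(x) + [1.0]
        hb, y = self.forward(x)
        # output error, fed back (back-propagated) through both layers
        dy = [(y[k] - target[k]) * y[k] * (1 - y[k]) for k in range(len(y))]
        dh = [hb[j] * (1 - hb[j]) *
              sum(dy[k] * self.w2[k][j] for k in range(len(dy)))
              for j in range(len(hb) - 1)]         # no delta for the bias unit
        for k in range(len(self.w2)):
            for j in range(len(hb)):
                self.w2[k][j] -= lr * dy[k] * hb[j]
        for j in range(len(dh)):
            for i in range(len(xb)):
                self.w1[j][i] -= lr * dh[j] * xb[i]

# Toy training set: feature vectors with one-hot labels (flawed, unflawed).
data = [([0.9, 0.8], [1, 0]), ([0.1, 0.2], [0, 1]),
        ([0.8, 0.9], [1, 0]), ([0.2, 0.1], [0, 1])]
net = MLP()
for _ in range(2000):                              # repeated presentations
    for x, t in data:
        net.train_step(x, t)
_, y = net.forward([0.85, 0.9])
print(y)  # the first ("flawed") output should dominate after training
```

Each presentation reduces the output error, so after enough iterations the network reproduces the desired one-hot labels on the training set.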

The hidden layers enable the network to extract higher-order statistics, especially when the size of the input layer is large. There is no theoretical limit to the number of hidden layers but, typically, architectures with just one hidden layer are adequate to face the complexity of most practical problems; indeed, most neural architectures in use have only one hidden layer. Supervised learning involves applying a set of training examples to modify the synaptic weights connecting the neurons of the network. Each example consists of a unique input signal and the corresponding desired response. The network is presented with many examples many times and the synaptic weights are tuned so as to minimize the difference between the desired response and the actual response of the network. The network training is repeated until a steady state is reached, where there are no further significant changes in the synaptic weights.

The input layer has a number of neurons equal to the number of image features. In this work, the features are those extracted after the pre-processing phase. The number of nodes in the output layer depends on the number of classes that the network has to recognize. In our context the network has to recognize the sound point and the defect points (2 output nodes). The number of nodes in the hidden layer is determined by experiment.

There is no quantifiable best answer to the layout of the network for any particular application. There are only general rules picked up over time and followed by most researchers and engineers applying this architecture to their problems.

Rule One: As the complexity in the relationship between the input data and the desired output increases, the number of the processing elements in the hidden layer should also increase.

Rule Two: If the process being modelled is separable into multiple stages, then additional hidden layer(s) may be required. If the process is not separable into stages, then additional layers may simply enable memorization of the training set, and not a true general solution effective with other data.

Automatic Inspection of Aircraft Components Using Thermographic and Ultrasonic Techniques 393

Fig. 8. One of the thermographic images where the liquid infiltrations are visible.

Rule Three: The amount of training data available sets an upper bound for the number of processing elements in the hidden layer(s). To calculate this upper bound, use the number of cases in the training data set and divide that number by the sum of the number of nodes in the input and output layers in the network. Then divide that result again by a scaling factor between five and ten. Larger scaling factors are used for relatively less noisy data. If you use too many artificial neurons the training set will be memorized. If that happens, generalization of the data will not occur.
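The arithmetic of Rule Three can be made concrete with a small helper; the case counts used below are hypothetical, chosen only to illustrate the calculation.

```python
def max_hidden_neurons(n_training_cases, n_inputs, n_outputs, scaling=5):
    """Rule-of-thumb upper bound on hidden-layer size:
    training cases divided by (input nodes + output nodes), divided again
    by a noise-dependent scaling factor between five and ten
    (larger factors for relatively less noisy data)."""
    return n_training_cases // ((n_inputs + n_outputs) * scaling)

# Hypothetical example: 500 training cases, 8 input features, 2 output
# classes, fairly noisy data (scaling factor 5).
bound = max_hidden_neurons(500, 8, 2, scaling=5)   # 500 / (10 * 5) = 10
```

Choosing more than this bound risks the memorization (rather than generalization) failure mode the rule warns about.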

## **5. Experimental setup and results**

The composite material used in the experimental tests has an alloy core with a periodic honeycomb internal structure of 128-ply thicknesses (each ply has a thickness of 0.19 mm). The experiments were carried out on two specimens: the first one presents two water infiltrations whereas the second one presents three solid insertions of brass foil (0.02±0.01 mm thickness). One solid insertion was placed two plies from the tool side surface (TOP INSERTION), one at mid part thickness (MIDDLE INSERTION) and the remaining one two plies from the bag side surface (BOTTOM INSERTION). Brass inserts were introduced to represent voids and delamination. In all the cases the defects or the internal damage were not detectable with a visual inspection.

Figure 7 shows the specimens of sandwich material used in the experiments with the graphical information superimposed indicating the exact location of water infiltrations (in blue) and brass foil insertions i.e. top insertion (T) on the left, middle insertion (M) in the centre and bottom insertion (B) on the right.

Fig. 7. The sandwich materials used in the experiments with the superimposed graphical information indicating the exact location of water infiltrations and brass foil insertions.

The thermographic image sequence was obtained by using a thermal camera sensitive to infrared emissions. A quasi-uniform heating was used to guarantee a temperature variation of the composite materials of around 20 °C/sec. One of the thermographic images is reported in figure 8. Only liquid infiltrations become visible, due to the larger thermal variation of the water with respect to solid insertions.


Ultrasonic data were obtained by an ultrasonic reflection technique that uses a single transducer serving as both transmitter and receiver (5 MHz).

In figure 9, the signal on the left is relative to a non-defective area, whereas the signal on the right is relative to a brass insertion placed in the middle of the material thickness; for this reason the corresponding extremum is far from the boundary ones. The signal in the centre of figure 9 is relative to a brass insertion placed very close to the inspected material surface, so the corresponding extremum is mixed with the tool-side one. This shows that defective and non-defective areas can have very similar temporal behaviours under ultrasound scanning, which causes traditional NDT techniques to fail.

Fig. 9. Three ultrasound signals relative to a non-defective area (on the left), a brass insertion placed in the middle of the material thickness (on the right), and a brass insertion placed very close to the inspected material surface (in the centre).

Acquired experimental data were then represented in the wavelet domain by using the Daubechies 3 family of filters, and the derived coefficients were given as input to two different neural networks in order to specialize each of them to recognize water infiltration and solid insertion, respectively. The defect segmentation step is performed by using neural networks with two output neurons. Each available signal is fed into the net, which classifies it as relative to either a defective area or an unflawed area.
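As a sketch of this feature-extraction step, the snippet below computes a periodized Daubechies-3 (db3) wavelet decomposition directly in NumPy. The decomposition depth, boundary handling, and feature layout are illustrative assumptions, since the chapter does not specify them.

```python
import numpy as np

# Daubechies-3 (db3) low-pass (scaling) filter coefficients.
DB3_LO = np.array([0.332670552951, 0.806891509313, 0.459877502119,
                   -0.135011020010, -0.085441273882, 0.035226291882])
# High-pass (wavelet) filter via the quadrature-mirror relation.
DB3_HI = DB3_LO[::-1] * np.array([1, -1, 1, -1, 1, -1])

def dwt_level(x):
    """One level of the periodized db3 DWT: (approximation, detail)."""
    n = len(x)
    # Each row gathers the 6 samples under one filter position (stride 2,
    # periodic wrap-around at the signal boundary).
    idx = (np.arange(0, n, 2)[:, None] + np.arange(len(DB3_LO))) % n
    windows = x[idx]
    return windows @ DB3_LO, windows @ DB3_HI

def wavelet_features(signal, levels=3):
    """Concatenate the detail coefficients of `levels` decompositions plus
    the final approximation: the feature vector fed to the classifier."""
    coeffs = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = dwt_level(a)
        coeffs.append(d)
    coeffs.append(a)
    return np.concatenate(coeffs)
```

Because the db3 filter bank is orthonormal, the transform preserves signal energy, so no information is lost before classification.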

Preliminary experiments aimed at defining the best data model through the selected neural paradigm; in particular, they allow the definition of the best number of neurons in the hidden layer and the most suitable number of training points. To accomplish this fundamental task, different sets of training examples were built.

| Actual class | Classified unflawed | Classified water insertion |
|---|---|---|
| Unflawed | **184/250 (73.6%)** | 66/250 (26.4%) |
| Water insertion | 28/250 (11.2%) | **222/250 (88.8%)** |

Table I. Scatter matrix derived in the experiments for water infiltration detection.

| Actual class | Classified unflawed | Classified solid insertion |
|---|---|---|
| Unflawed | **169/250 (67.6%)** | 81/250 (32.4%) |
| Brass Foil (top) | 36/250 (14.4%) | **214/250 (85.6%)** |
| Brass Foil (middle) | 15/250 (6.0%) | **235/250 (94.0%)** |
| Brass Foil (bottom) | 64/250 (25.6%) | **186/250 (74.4%)** |

Table II. Scatter matrix derived in the experiments for brass foil insertion detection.

For each neural network, 3 different training sets consisting of 40, 60 and 80 examples (50% corresponding to unflawed and 50% to flawed areas) were used. At the same time, different test sets of points were built for each specimen. In particular, for the specimen with water infiltration two data sets were built: the first set contained 250 signals relative to unflawed points, and the second set contained 250 signals relative to defective areas damaged by the water.

Similarly for the specimen with brass foil insertions 4 data sets were built: the first set contained 250 signals relative to unflawed points, the second set contained 250 signals relative to the defective area corresponding to the brass foil positioned two plies from the tool side surface (Top Insertion), the third set contained 250 signals relative to the defective area corresponding to the brass foil positioned at mid part thickness (Middle Insertion) and finally the fourth set contained 250 signals relative to the defective area corresponding to the brass foil positioned two plies from the bag side surface (Bottom Insertion).

In each experiment a training set was selected and the learned network was then used to classify the data in the corresponding test set. The set of training examples consisted of input–output couples (input signal, corresponding desired response). During the training phase the points of known examples were extracted from the considered materials and continuously fed into the net so that the synaptic weights were tuned to ensure the minimum distance between the actual and the desired output of the net.

Training continued until a steady state was reached, i.e., until no further significant change in the synaptic weights could be made to improve net performance. This was repeated using different configurations of the hidden layer; in particular, numbers of hidden neurons ranging from 20 to 100 were considered.

The results of this demanding experimental phase are summed up in figure 10 and figure 11.

Fig. 10. Experiment results for water infiltration detection using thermal signals when different numbers of training examples and hidden neurons were considered.


Fig. 11. Experiment results for solid insertion detection using ultrasound signals when different numbers of training examples and hidden neurons were considered.

Experiments demonstrated that a lower number of hidden-layer nodes (i.e. 20-30) is a good choice, since a larger number of nodes in the inner layer can drive the classification model to over-fit the training data and produce a very high failure score. At the same time, the experiments pointed out that a limited number of training points (i.e. 40) is the best choice in terms of correct classification rate. In other words, this is the minimum number of training examples that allows proper learning of the data distribution and, at the same time, the maximum number that avoids over-fitting, i.e. that preserves the fundamental capability to classify unknown data (generalization capacity).

For a better comprehension of the experimental results, Tables I and II report the scatter matrices relative to the experiments performed using the best network and training set configuration.
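The true-positive and false-positive rates plotted against the hidden-layer size in figures 10 and 11 follow directly from scatter-matrix counts such as those in Table II; for example:

```python
def rates(true_negatives, false_positives, false_negatives, true_positives):
    """True-positive and false-positive rates from raw scatter-matrix counts."""
    tpr = true_positives / (true_positives + false_negatives)
    fpr = false_positives / (false_positives + true_negatives)
    return tpr, fpr

# Counts from Table II: 169/250 unflawed signals correctly rejected and,
# for the middle brass insertion, 235/250 defective signals detected.
tpr, fpr = rates(true_negatives=169, false_positives=81,
                 false_negatives=15, true_positives=235)
# tpr = 235/250 = 0.94, fpr = 81/250 = 0.324
```

The same two quantities are the axes of the experimental plots, so each (hidden neurons, training set) configuration corresponds to one such pair.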




Tables I and II give a quantitative evaluation of the possibility to automatically detect both liquid and solid insertions in composite materials by using thermal and ultrasonic techniques in combination with neural approaches.

In particular, Table II illustrates that brass foil insertions at the mid-thickness level were always better classified than those located either at the top or at the bottom.


The defect location is one of the most important factors in ultrasound inspection. Defects placed either at the top or at the bottom of the inspected structure are in general the most difficult to detect, since their echo is mixed with the tool-face or bag-side echo. On the contrary, defective areas in the mid-part of the material thickness produce a distinct peak in the signal trend that is straightforward to identify.

In the second part of the experimental phase all the signals extracted by the thermographic and ultrasonic analysis were classified by using the neural networks previously learned.

According to the neural network outputs, a binary image is produced containing black points for defective areas and white points for sound areas.

Fig. 12. Graphical representation of the raw classification of all the signals extracted from the specimens with water infiltration (on the left) and brass foil insertions (on the right).

Figure 12 reports the graphical representation of the raw classification of all the signals extracted from the specimens with water infiltration (on the left) and brass foil insertions (on the right). Defective areas are correctly detected, but many points in the unflawed areas are erroneously classified as flawed. For this reason, an additional processing step was introduced to analyse the output images considering the vicinity of flawed pixels (region analysis). In other words, considering that these false detections were isolated and did not form connected regions of considerable area, their elimination becomes straightforward if some a priori knowledge about the minimum expected size of the defective areas is available.

Figure 13 reports the final outcome of a filtering process based on the connectivity analysis of the detected defective regions, with a selection criterion that removes the regions having an area of less than 20 pixels.
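The region analysis just described can be sketched as a connected-component filter; a pure-Python flood fill is shown for self-containment, although in practice a library routine such as `scipy.ndimage.label` would typically be used.

```python
import numpy as np

def remove_small_regions(binary, min_area=20):
    """Keep only 4-connected regions of defective (True) pixels whose
    area reaches `min_area`; isolated false detections are erased."""
    binary = np.asarray(binary, dtype=bool)
    out = np.zeros_like(binary)
    seen = np.zeros_like(binary)
    rows, cols = binary.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if binary[r0, c0] and not seen[r0, c0]:
                # Flood fill to collect one connected region.
                stack, region = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    region.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and binary[rr, cc] and not seen[rr, cc]):
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                # Selection criterion: drop regions below the area threshold.
                if len(region) >= min_area:
                    for r, c in region:
                        out[r, c] = True
    return out
```

For example, a 5x5 flawed block (25 pixels) survives the 20-pixel threshold, while a lone misclassified pixel is removed.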

Most of the false flawed points were removed, even if some areas in addition to the real defects were still considered flawed. These mainly occurred in correspondence with a variation of the inclination of the surface (see fig. 7): unfortunately, in these unflawed areas both the thermographic and the ultrasound signals change their slope more evidently with respect to the corresponding signals used to train the net. This problem could be faced by training the net also on the points belonging to these particular areas. However, this approach was not considered in this work, since it could be counterproductive: the net could miss some real defective areas (or parts of them) and, in our opinion, considering the applicative context, it is critically important to detect all defective points, even at the expense of generating extra false positives.

Fig. 13. The result of the cleaning based on the point connectivity analysis on the images reported in figure 12.

#### **6. Conclusion**


In this chapter, we address the problem of developing an automatic system for the analysis of sequences of thermographic images and ultrasonic signals to help safety inspectors in the diagnosis of problems in aircraft components.

In particular, the thermographic analysis was carried out to automatically discover water infiltrations, whereas the ultrasonic inspection aimed at revealing solid insertions of brass foil. Experiments were carried out on real aircraft specimens and demonstrated the capability of the proposed framework to discover flawed areas. A tolerable number of false positive occurrences was also found in correspondence with the parts of the specimens having a sloping surface, since their points were not included in the learning phase in order to get the best true positive detection rate considering the critical operative context.

Future work will focus on investigating the defect identification capability of the proposed approach. This will be achieved by extending the analysis to materials with different thicknesses and different defective insertions. In the future, we will also investigate the possibility of using an unsupervised-learning approach in order to reduce human intervention.

## **The Analysis of the Maintenance Process of the Military Aircraft**

Mariusz Wazny
*Military University of Technology, Poland*

## **1. Introduction**

This chapter presents the analysis of the maintenance process of a military aircraft with a detailed description of two areas, i.e. the process of maintaining and the process of operating. Each of these processes is briefly characterized. The chapter also involves methods enabling the determination of: the residual durability of specified devices/systems of a military aircraft on the basis of the diagnostic parameters of these devices/systems, and the effectiveness of a combat task execution on the basis of information registered in the process of aiming. Each presented method is illustrated by a computational example.

## **2. Tasks executed by the military aircraft**

A modern military aircraft (MMA) is a hybrid of the most up-to-date achievements in the field of materials engineering (the use of light metal alloys and composite structures), electronic engineering (fast microprocessor systems, modern systems in the field of power electronics), and specialized software supporting the maintenance process (automatic flight control system, integrated diagnostic systems). Due to such a combination, tasks executed by the MMA cover a wide range that can be divided into two groups: with the use of aerial combat means and without the use of aerial combat means.

Depending on the nature of a mission, tasks including the use of aerial combat means can be generally classified as:

1. The gaining and maintenance of domination of airspace. This type of task is executed by fast and manoeuvrable aircraft that are equipped with the most modern armament for aerial combat, i.e. air-to-air missiles and aircraft guns.
2. The support for the operations of ground forces and the navy. As regards this task, aircraft equipped with air-to-ground weaponry, including rockets, bombs, and aircraft guns, play an important role.
3. The combating of a selected target of an air attack using precision-guided munitions launched from manned and unmanned aircraft.

When analyzing the use of the MMA in respect of combat task realization without the use of aerial combat means, we can distinguish the following main tasks:

The Analysis of the Maintenance Process of the Military Aircraft 401

1. Air reconnaissance, performed using both aircraft equipped with specialized apparatus and unmanned flying objects configured for the performance of this type of mission.
2. Air transport, ensuring the fast and efficient transfer of both infrastructure elements and soldiers into the area of a new localization for troops.
3. The support for the operations of different types of forces by means of, among other things, managing a mission on the basis of spatial information obtained via reconnaissance systems installed, for example, on an AWACS-type platform, or enabling in-flight refuelling.

The analysis of the operations of the armed forces in recent armed conflicts indicates that MMAs are the basic element of the system of military operations. MMAs are used in the first instance to execute all of the above-mentioned tasks.

## **3. The organization of the maintenance process of the military aircraft**

The maintenance of technical objects is defined as the set of intentional organizational and economic operations performed by people on technical objects, together with the relationships between them, from the beginning of the object's lifecycle up to its end and the object's disposal. Recognizing these relationships and identifying the operations that appear between the subjects is based on the knowledge and experience of the designers, developers and engineers of the technical objects. The maintainability and utility of a product depend mainly on the professional competence of the engineering and design crew; however, the design assumptions can be altered many times during the object's lifecycle. These operations are performed to decrease the maintenance "waste effect" and maximize the "utility effect".

The modern military aircraft, which is the basic technical object in the Polish Air Force organization structure, is a complex product combining various constructional, technological, engineering and organizational concepts. The design of such a sophisticated product is based on tactical and technical military requirements formulated after an analysis of the modern battlefield.

The aircraft construction is based on a modular structure (Fig. 1), which allows the specified tasks to be divided between separate functional blocks. This solution improves the maintenance process and facilitates the service and operational use of the aircraft.

The conditions in which aircraft are operated are so specific that they impose demanding requirements on the reliability, durability, effectiveness and safety of airborne technology. The required levels of these parameters are provided by determining the functional structure of the devices and the level of redundancy.

Due to the specific character of aircraft operations, aircraft maintenance can be performed only within a specified system which provides the conditions indispensable for correct aircraft operation. This system is called the Air System (AR) and contains the aircraft frame, the people who participate in the maintenance process, and the devices which ensure the permanence of the process (in a functional way) - Fig. 1.

The primary target of the military aircraft maintenance process during peacetime is to keep both the technical equipment and the personnel at the specified reliability and training levels, so that a high level of efficacy and effectiveness can be provided during wartime.

Fig. 1. Structural diagram of the military aircraft and the air system: FCSA – Flight Control System Actuators (frame construction with plating); FCS – Flight Control System; ACRNEWS – Airborne Communication, Radio Navigation and Electronic Warfare Systems; MRNAP – Multifunctional Radar and Navigation and Aiming Pod; OAS – On-board Armament System; ACS – Armament Control System; WCS – Weapon Control System; NAS – Navigation and Aiming System.

Many external factors negatively influence the technical elements of the Air System, so it can be claimed that during the operating process the elements get "used up". Therefore, to keep the Air System in the appropriate reliability condition, technical service is required. This service comprises the adjustment, tuning and replacement of particular devices or whole aggregates in order to slow down the "using up" process.

In practice there are three aircraft maintenance strategies (Fig. 2):

1. the recurring maintenance strategy, with a schedule of preventive services;
2. the operational maintenance strategy;
3. the preventive/predictive maintenance strategy.




Fig. 2. Military aircraft maintenance strategies.

The organization and scheme of the recurring maintenance strategy for military aircraft is presented in Fig. 3. The basis of this maintenance strategy is the measurement of the amount of labor executed by the machine; for an aircraft, the amount of labor is defined as the number of flight hours.

Fig. 3. Recurring maintenance strategy scheme.

One of the maintenance states in the recurring maintenance process is the indirect airworthiness state. An aircraft in this state is mostly working correctly but has lost its flying ability owing to the circumstances indicated in Fig. 3. After the specified amount of labor (flight hours) has been executed, the aircraft lifecycle should either be terminated or the aircraft should be directed to professional service to determine a new amount of labor possible to execute.

As far as the operational maintenance strategy is concerned, the rule is that the aircraft remains in operation as long as the levels of the specified parameters do not exceed the specified limits of error. Knowledge about the maintenance state of a device is provided by the external and internal diagnostic equipment. The service operations in this maintenance strategy are executed according to the levels of the measured diagnostic parameters. The proper control of the operational maintenance strategy, even for a considerable fleet of aircraft, requires the control of every aircraft separately.

The preventive/predictive maintenance strategy defines reliability as a designed characteristic. The level (value) of reliability must be provided for in the device design and manufacturing process and is maintained during the device lifecycle. A maintenance schedule based on the preventive/predictive maintenance strategy provides the desirable or defined levels of both reliability and flight safety. All of the described maintenance strategies are followed in the real-conditions fleet maintenance process.
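The division of responsibilities between the three strategies can be sketched as a small decision helper. This is an illustrative sketch only; the function name, parameters and thresholds are assumptions for the example, not part of any maintenance documentation:

```python
def needs_service(strategy: str, flight_hours: float, labor_limit: float,
                  parameter: float, parameter_limit: float) -> bool:
    """Decide whether an aircraft should be withdrawn for service.

    recurring   - service after a fixed amount of labor (flight hours);
    operational - service when a monitored diagnostic parameter reaches
                  its limit of error;
    preventive  - reliability is designed in, so services follow a fixed
                  schedule handled elsewhere (no runtime check here).
    """
    if strategy == "recurring":
        return flight_hours >= labor_limit
    if strategy == "operational":
        return parameter >= parameter_limit
    return False
```

For example, `needs_service("recurring", 480, 500, 0.0, 1.0)` returns `False`: the aircraft has not yet executed its allowed amount of labor.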

Due to the development of diagnostic systems, military aircraft on-board systems include diagnostic procedures enabling the assessment of a current technical state of a given system. The procedure of assessing a given system is performed before an air operation. The procedure results provide information on a technical state of a military aircraft. Based on this information, a pilot decides either to perform a task or to withdraw from performing the task.

Apart from integrated diagnostic systems installed on board, there is a number of devices whose technical state is examined via monitoring and measuring equipment after its disassembly from the board of MMA. During maintenance works, diagnostic parameters of the examined devices are recorded and compared with the range of permissible changes. Any deviation beyond the assumed tolerance limits leads to the implementation of either appropriate maintenance procedures aiming at reducing the resultant deviation or appropriate corrections eliminating the deviation. The ability to predict the service life of MMA when diagnostic parameter tolerance might be exceeded would enable the appropriate management of the maintenance system of MMA. Thus, it is possible to optimize the time when MMA is under certain maintenance works and is not combat ready.

## **4. The process of maintaining the military aircraft**

#### **4.1 The influence of destructive factors on the technical state of devices used on the military aircraft**

During the operation process of a military aircraft we can observe changes of the technical parameters of selected devices along with their operation time. These changes cause the deterioration of the working conditions of a system and the loss of the rated values of technical parameters. Factors influencing the above-mentioned changes include:

− changes of temperature and air pressure,
− g-forces,
− vibrations,
− ageing processes, etc.


The construction of technical systems is based on the assumption that a device fulfils its role when its operational/diagnostic parameters are within acceptable error limits. This assumption depends on the accuracy of work of particular system elements. Thus, in order to assure a faultless functioning of a military aircraft, we cannot allow operational parameters to exceed the acceptable error limits, which can be done in two ways: by frequent checks of operational parameter values of a device/system and its switch off when


parameters are close to the fixed limit, or by determining the time after which the operational parameters exceed the acceptable error values.

The first way is onerous with regard to its organization, and it is also time- and money-consuming. Besides, the time spent on checking excludes a military aircraft from its use in a combat task, which consequently leads to a temporary decrease of the fighting efficiency of the air forces.

The second way is based on the use of a particular mathematical method enabling the description of the value changes of the operational parameters of a device/system and the evaluation of the time in which the device/system remains in the operational state.

As stated above, during exploitation military aircraft undergo changes of the operational parameter values of particular devices of the avionics system. These changes cause the operational parameter values to approach the fixed acceptable limit. When the parameter values reach or exceed the limit value, an adjustment must be made in order to restore the nominal conditions of the device/system operation, or the operation must be stopped. Figure 4 presents a theoretical course of the changes of diagnostic parameter values.

Fig. 4. Diagram of changes of diagnostic parameter values: z0 – nominal value of a parameter, z – current value of a parameter, zd – the limit of acceptable changes of parameter values.

#### **4.2 The model of diagnostic parameter changes in the aspect of the occurrence of destructive factors**

In the figure, the current value of a parameter is marked as "*z*". If *z* < *zd*, the element is fit for use, but if *z* ≥ *zd* the element loses its operational state. The changes of the diagnostic parameter values are of a random character because of the specific character of the military aircraft operation process and the influence of destructive processes. So, let us consider the "wear of a device" of the avionics system as a random process occurring during the operation of an aircraft.

Getting down to the analytical description of the diagram in Figure 4 and the determination of the density function of the changes of diagnostic parameter values, the following assumptions were accepted:

1. The technical condition of an element is described by one diagnostic parameter, which is marked as "*z*".
2. The change of the value of the parameter "*z*" happens only during the operation of a device, i.e. during the flight of an aircraft.
3. The parameter "*z*" is non-decreasing.
4. The change of the diagnostic parameter "*z*" is described by the following equation (1).


$$\frac{dz}{dN} = c \tag{1}$$

where:

404 Recent Advances in Aircraft Technology


*c* - a random variable which depends on the operational conditions of an element.

If *z* ∈ [0, *zd*], the element is fit for use; otherwise it is considered unfit for use.

The intensity of flights *λ* of an aircraft is described by the following dependence (2):


$$
\lambda = \frac{P}{\Delta t} \tag{2}
$$

where:

*Δt* - the length of the time interval in which a flight of the aircraft can be performed with the probability *P*; *P* - the probability of the flight performance within the time interval of length *Δt*.

The time interval of length *Δt* shall be selected in such a way as to fulfil the following inequality (3):

$$
\lambda \Delta t \le 1 \tag{3}
$$

The intensity of flights *λ* enables the determination of the number *N* of flights of an aircraft up to the moment *t* from the following formula:

$$N = \lambda t \tag{4}$$
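A quick numeric illustration of dependencies (2)-(4), with assumed values of *P* and *Δt*:

```python
P = 0.8          # probability of a flight within one interval (assumed)
dt = 1.0         # interval length Δt, in hours (assumed)

lam = P / dt     # flight intensity λ, dependence (2)
assert lam * dt <= 1.0   # inequality (3) is fulfilled

t = 100.0        # flying time considered, in hours
N = lam * t      # expected number of flights up to t, formula (4)
print(lam, N)    # → 0.8 80.0
```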

Using the formula (4), the equation (1) can be written in the following form:

$$\frac{dz}{dt} = \lambda c \tag{5}$$

The dynamics of the changes of a diagnostic parameter can be described by the following difference equation (6).



$$U_{z,t+\Delta t} = (1 - \lambda\Delta t)\,U_{z,t} + \lambda\Delta t\,U_{z-\Delta z,t} \tag{6}$$

where:

*Uz,t* - the probability that at the moment *t* the value of the diagnostic parameter is *z*;

*Δz* - the increment of the diagnostic parameter *z* during one flight of an aircraft.
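The recursion (6) can be iterated numerically on a grid of parameter values. In the sketch below the increment Δz is one grid cell and λΔt = 0.2 is an assumed per-step flight probability; starting from all probability mass at z = 0, the distribution after n steps is binomial, so its mean equals n·λΔt:

```python
def step(u, lam_dt):
    # One application of recursion (6):
    # U_{z, t+Δt} = (1 - λΔt) U_{z,t} + λΔt U_{z-Δz, t}
    return [(1.0 - lam_dt) * p + lam_dt * (u[i - 1] if i > 0 else 0.0)
            for i, p in enumerate(u)]

# All probability mass at z = 0, then 50 time steps.
u = [1.0] + [0.0] * 199
for _ in range(50):
    u = step(u, lam_dt=0.2)

mean = sum(i * p for i, p in enumerate(u))
print(round(sum(u), 6), round(mean, 6))   # → 1.0 10.0
```

Probability is conserved (the grid is long enough that no mass reaches its end), and the mean drifts by λΔt per step, matching the drift term of the continuous model derived below.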

The functional notation of the equation (6) has the following form:

$$
u(z, t + \Delta t) = (1 - \lambda \Delta t)\,u(z,t) + \lambda \Delta t\,u(z - \Delta z, t) \tag{7}
$$

where:

*u*(*z*,*t*) - the density function of the probability of the diagnostic parameter value *z* in the moment *t*;

(1 − *λΔt*) - the probability that in the time interval of length *Δt* a flight will not be performed;

*λΔt* - the probability of the flight performance in the time interval of length *Δt*.

The equation (7) was transformed, by expanding its terms in a series, into the following partial differential equation (8):

$$\frac{\partial u(z,t)}{\partial t} = -\lambda \Delta z\,\frac{\partial u(z,t)}{\partial z} + \frac{1}{2} \lambda (\Delta z)^2\,\frac{\partial^2 u(z,t)}{\partial z^2} \tag{8}$$

where: *Δz = c*.

Due to the fact that *c* is a random variable, the following mean value was introduced:

$$E[c] = \int_{c_d}^{c_g} c \, f(c) \, dc \tag{9}$$

where: *f*(*c*) - the density function of the random variable *c*;

*cd*, *cg* - the lower and upper limits of variation of *c*.

Taking into consideration the dependence (9), the differential equation (8) can be written in the following form:

$$\frac{\partial u(z,t)}{\partial t} = -\lambda \, E[c] \, \frac{\partial u(z,t)}{\partial z} + \frac{1}{2} \lambda (E[c])^2 \, \frac{\partial^2 u(z,t)}{\partial z^2} \tag{10}$$

where: *E*[*c*] - the mean increment of the parameter value per time unit;

(*E*[*c*])<sup>2</sup> - the mean square increment of the value of the diagnostic parameter per time unit.

The solution of the equation (10) is the unknown density function of the probability of the random variable *z* in the following form:

$$u(z,t) = \frac{1}{\sqrt{2\pi A(t)}} \, e^{-\frac{(z-B(t))^2}{2A(t)}} \tag{11}$$

where:


$$B(t) = \int_0^t \lambda E[c]\,dt = \lambda E[c]\,t \;, \qquad A(t) = \int_0^t \lambda (E[c])^2\,dt = \lambda (E[c])^2\,t \tag{12}$$

Assuming that:

$$b = \lambda E[c] \;, \qquad a = \lambda (E[c])^2 \tag{13}$$

the density function (11) has the following form:

$$u(z,t) = \frac{1}{\sqrt{2\pi a t}} \, e^{-\frac{(z-bt)^2}{2at}} \tag{14}$$

The dependence (14) is the probabilistic characterisation of the increase of the wear in the function of the flying time. However, it is important to know the distribution of the time (the flying time) of the exceedance of the acceptable error value of the parameter *z*.
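The Gaussian approximation (14) can be checked with a small Monte-Carlo sketch: flights arrive as a Poisson process with intensity λ, and each flight adds a random increment c (here exponentially distributed, an assumed choice, since the text does not fix the distribution of c). The sample mean of the accumulated wear should then approach b·t = λE[c]·t:

```python
import math
import random

def poisson_sample(mu, rng):
    # Knuth's method: multiply uniforms until the product drops below e^-mu.
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_wear(t, lam, mean_c, n_runs=20000, seed=1):
    # z(t) = sum of the random increments c over the flights performed up to t;
    # the number of flights is Poisson with mean lam * t, cf. formula (4).
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        flights = poisson_sample(lam * t, rng)
        totals.append(sum(rng.expovariate(1.0 / mean_c) for _ in range(flights)))
    return totals

samples = simulate_wear(t=10.0, lam=2.0, mean_c=0.5)
mean_wear = sum(samples) / len(samples)   # theory: b*t = λ E[c] t = 10.0
```

With these assumed values the sample mean comes out very close to 10, in agreement with the drift b·t of the density (14).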

The probability of the exceedance of the acceptable value by the current value of the diagnostic parameter "*z*" can be written in the following form:

$$Q(t; z_d) = \int_{z_d}^{\infty} \frac{1}{\sqrt{2\pi a t}}\, e^{-\frac{(z-bt)^2}{2at}}\, dz \tag{15}$$

The density function of the time distribution of the exceedance of the acceptable state *zd* has the following form:

$$f(t) = \frac{\partial}{\partial t} \ Q(t; z\_d) \tag{16}$$

Thus

$$f(t) = \frac{\partial}{\partial t} \int_{z_d}^{\infty} \frac{1}{\sqrt{2\pi a t}}\, e^{-\frac{(z-bt)^2}{2at}}\, dz \tag{17}$$

$$f(t) = \int_{z_d}^{\infty} \left\{ \frac{\partial}{\partial t} \left[ \frac{1}{\sqrt{2\pi a t}}\, e^{-\frac{(z-bt)^2}{2at}} \right] \right\} dz \tag{18}$$

The Analysis of the Maintenance Process of the Military Aircraft 409


After calculating the derivative, we obtain:

$$f(t)_{z_d} = \int_{z_d}^{\infty} \left[ u(z,t) \left( \frac{z^2 - b^2 t^2 - at}{2at^2} \right) \right] dz \tag{19}$$

The original function with regard to the integrand of the dependence (19) has the following form (20).

$$w(z,t) = u(z,t) \left( -\frac{z+bt}{2t} \right) \tag{20}$$

We calculate the integral (19).

$$f(t)_{z_d} = \left. u(z,t)\left(-\frac{z+bt}{2t}\right) \right|_{z_d}^{\infty} = \frac{z_d + bt}{2t}\, \frac{1}{\sqrt{2\pi a t}}\, e^{-\frac{(z_d-bt)^2}{2at}} \tag{21}$$

Thus, the dependence (21) determines the density function of the time of the first transition of the current value of the parameter "*z*" through the acceptable state.
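Because (21) was obtained as the time derivative of the exceedance probability, it can be compared with a central finite difference of the closed form $Q(t;z_d)=\Phi\!\left((bt-z_d)/\sqrt{at}\right)$; the coefficients *b*, *a*, *z_d* below are assumed for illustration.

```python
import math

def Phi(x):
    """Standard normal CDF expressed through the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Assumed illustrative coefficients:
b, a, z_d = 0.05, 0.002, 1.0

def Q(t):
    """Probability that the parameter exceeds z_d at time t (closed form of (15))."""
    return Phi((b * t - z_d) / math.sqrt(a * t))

def f(t):
    """First-transition density, dependence (21)."""
    return ((z_d + b * t) / (2.0 * t)
            * math.exp(-(z_d - b * t) ** 2 / (2.0 * a * t))
            / math.sqrt(2.0 * math.pi * a * t))

t0, h = 10.0, 1e-4
f_exact = f(t0)
f_fd = (Q(t0 + h) - Q(t0 - h)) / (2.0 * h)   # numerical d/dt of Q
print(f_exact, f_fd)   # the two values agree closely
```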

Having the above-mentioned data, we can determine the durability of a device with respect to the change of the value of the parameter *z*. For this purpose, we can write down that the formula for the reliability of a device has the following form:

$$R(t) = 1 - \int\_0^t f(t)\_{z\_d} dt \tag{22}$$

where the density function $f(t)_{z_d}$ is determined by the formula (21).

The unreliability of a device can be determined from the dependence (23).

$$Q(t) = \int_0^t \frac{z_d + bt}{2t} \cdot \frac{1}{\sqrt{2\pi a t}}\, e^{-\frac{(z_d-bt)^2}{2at}}\, dt \tag{23}$$

The integral (23) has to be simplified. It can be observed that the integrand can be written in the following form:

$$\frac{z_d + bt}{2t} \cdot \frac{1}{\sqrt{2\pi a t}}\; e^{-\frac{(z_d - bt)^2}{2at}} = \frac{z_d + bt}{2t} \cdot \frac{1}{\sqrt{2\pi a t}}\; e^{-\frac{(bt - z_d)^2}{2at}} \tag{24}$$

and now we have to solve the indefinite integral.

$$\int \frac{z_d + bt}{2t} \cdot \frac{1}{\sqrt{2\pi a t}}\, e^{-\frac{(bt - z_d)^2}{2at}}\, dt \tag{25}$$

We make the substitution in the above-mentioned integral.

$$\frac{(bt - z\_d)^2}{2at} = u \tag{26}$$

Thus

408 Recent Advances in Aircraft Technology


$$\frac{du}{dt} = \frac{bt + z\_d}{2at^2} (bt - z\_d) \tag{27}$$

$$dt = \frac{2at^2}{(bt + z_d)(bt - z_d)}\, du \tag{28}$$

After the substitution, the integral (25) has the following form (29).

$$\int \frac{z_d + bt}{2t} \cdot \frac{1}{\sqrt{2\pi a t}}\, e^{-u} \cdot \frac{2at^2}{(bt + z_d)(bt - z_d)}\, du = \frac{1}{2\sqrt{\pi}} \int \frac{1}{\sqrt{u}}\, e^{-u}\, du \tag{29}$$

Then, we make the second substitution.

$$
\sqrt{u} = w \,, \rightarrow \frac{dw}{du} = \frac{1}{2\sqrt{u}} \,, \rightarrow \frac{du}{dw} = 2w \,, \rightarrow du = 2w \, dw \tag{30}
$$

Taking into consideration the above-mentioned dependencies, the integral (29) can be written in the following form:

$$\frac{1}{2\sqrt{\pi}} \int \frac{1}{w}\, e^{-w^2}\, 2w\, dw = \frac{1}{\sqrt{\pi}} \int e^{-w^2}\, dw \tag{31}$$

We make one more substitution.

$$w^2 = \frac{y^2}{2}\,, \rightarrow\ 2w\, dw = y\, dy\,, \rightarrow\ dw = \frac{y}{2w}\, dy\,, \rightarrow\ dw = \frac{dy}{\sqrt{2}} \tag{32}$$

Thus, we obtain the integral in the following form:

$$\frac{1}{\sqrt{2\pi}} \int e^{-\frac{y^2}{2}}\, dy \tag{33}$$

where:

$$y = \frac{bt - z\_d}{\sqrt{at}}\tag{34}$$

Substituting the results into the formula (22) and remembering the appropriate notation of the integration limits, we obtain the formula for the reliability:


$$R(t) = 1 - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\frac{bt - z_d}{\sqrt{at}}} e^{-\frac{y^2}{2}}\, dy \tag{35}$$

The distribution function for the standard normal distribution has the following form (36).

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{y^2}{2}}\, dy \tag{36}$$

Finally, the formula for the reliability of a system has the form of the following dependence:

$$R^{*}(t) = 1 - \Phi\left( \frac{b^{*} t - z_d}{\sqrt{a^{*} t}} \right) \tag{37}$$

where *b*\* and *a*\* are coefficients estimated on the basis of data obtained from the exploitation of military aircraft.
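The consistency of the closed form (37) with integrating the first-transition density, as in (22), can be checked numerically. The sketch below uses assumed values of *b*\*, *a*\*, and *z_d* (all illustrative) and compares Simpson integration of the density (21) with the value of Φ from (37).

```python
import math

def Phi(x):
    """Standard normal CDF expressed through the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Assumed illustrative coefficients:
b, a, z_d, t_end = 0.0051, 0.0003, 0.4, 33.0

def f(t):
    """First-transition density (21); its limit at t -> 0+ is 0."""
    if t <= 0.0:
        return 0.0
    return ((z_d + b * t) / (2.0 * t)
            * math.exp(-(z_d - b * t) ** 2 / (2.0 * a * t))
            / math.sqrt(2.0 * math.pi * a * t))

# Composite Simpson rule for Q(t_end) = integral of f over [0, t_end]
n = 20000                       # even number of sub-intervals
h = t_end / n
acc = f(0.0) + f(t_end)
for i in range(1, n):
    acc += (4 if i % 2 else 2) * f(i * h)
Q_numeric = acc * h / 3.0

Q_closed = Phi((b * t_end - z_d) / math.sqrt(a * t_end))   # 1 - R*(t), cf. (37)
print(Q_numeric, Q_closed)   # both close to 0.01
```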

Thus, the risk of a device damage can be determined from the following dependence (38).

$$Q^{*}(t) = 1 - R^{*}(t) = \Phi(\gamma) \tag{38}$$

where:

$$\gamma = \frac{b^\* t - z\_d}{\sqrt{a^\* t}} \tag{39}$$

Assuming a specified level of damage risk, we can find γ (by reading values in the tables of the normal distribution). Knowing the value of γ, we can determine the durability (i.e. *t*) from the dependence (39). For this purpose, the dependence (39) was transformed into the following square equation (40).

$$b^{*2} t^2 - \left( \gamma^2 a^{*} + 2 b^{*} z_d \right) t + z_d^2 = 0 \tag{40}$$

Thus, the durability:

$$T = \frac{\left( \gamma^2 a^{*} + 2 b^{*} z_d \right) - \sqrt{\left( 2 b^{*} z_d + \gamma^2 a^{*} \right)^2 - 4 b^{*2} z_d^2}}{2 b^{*2}} \tag{41}$$

#### **4.3 A computational example**

The efficiency of the chosen system is determined with the help of diagnostic parameters describing the technical condition of particular devices of the system. An aiming head (a navigation and aiming device) is an important device of the avionics system. Its technical condition is described by two diagnostic parameters, ε and β, which describe the coordinates of the position of the sight marker.

On the basis of analyzing results of checks of a particular population of aiming heads, it was established that, as the time of operation goes by and as a result of the influence of destructive factors, the values of these parameters undergo changes. Table 1 presents an exemplary course of changes of the values of the diagnostic parameters ε and β during an operation process.


| *T* [months] | 0 | 27 | 40 | 57 | 83 | 94 | 102 | 110 | 116 |
|---|---|---|---|---|---|---|---|---|---|
| ε | 0 | 0.01 | 0.01 | 0.01 | 0.07 | 0.48 | 0.48 | 0.54 | 0.73 |
| β | 0 | 0.23 | 0.26 | 0.26 | 0.39 | 0.50 | 0.53 | 0.56 | 0.59 |

Table 1. Changes of diagnostic parameter values in an aiming head during an operation process

Having data describing the values of deviation of a diagnostic parameter in the following form $(z_0,t_0),\ (z_1,t_1),\ (z_2,t_2),\ \ldots,\ (z_n,t_n)$, and based on the following formulas,

$$b^\* = \frac{z\_n}{t\_n}, \quad a^\* = \frac{1}{n} \sum\_{k=0}^{n-1} \frac{\left[ (z\_{k+1} - z\_k) - b^\* \left( t\_{k+1} - t\_k \right) \right]^2}{\left( t\_{k+1} - t\_k \right)} \tag{42}$$

the values of the density function coefficients for both diagnostic parameters were determined:

$$a\_{\varepsilon}^{\star} = 0.002; \qquad b\_{\varepsilon}^{\star} = 0.0063; \qquad a\_{\beta}^{\star} = 0.0003; \quad b\_{\beta}^{\star} = 0.0051 \tag{43}$$
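The estimators (42) can be applied directly to a measured series $(z_k, t_k)$. The sketch below uses the series of the second diagnostic parameter from Table 1; the slope estimate rounds to the reported 0.0051, while the variance-rate estimate is sensitive to the rounding of the tabulated deviations and should be read only as an order-of-magnitude check against (43).

```python
# Series of the second diagnostic parameter from Table 1:
t = [0, 27, 40, 57, 83, 94, 102, 110, 116]              # [months]
z = [0, 0.23, 0.26, 0.26, 0.39, 0.50, 0.53, 0.56, 0.59]

n = len(t) - 1
b_star = z[-1] / t[-1]   # slope estimate from (42)
a_star = sum(
    ((z[k + 1] - z[k]) - b_star * (t[k + 1] - t[k])) ** 2 / (t[k + 1] - t[k])
    for k in range(n)
) / n                    # variance-rate estimate from (42)
print(round(b_star, 4))  # 0.0051
print(a_star)
```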

Assuming the following level of reliability, $R^{*}(t) = 0.99$, the value of the parameter $\gamma = 2.32$ was read from the tables of the normal distribution. The parameter $z_d$ was determined on the basis of the technical documentation which is used for service works and includes information on the acceptable values of deviations of the diagnostic parameters.

The values of the parameters *a*\*, *b*\*, γ, $z_d$ were substituted into the equation (41), and the time after which the values of the diagnostic parameter deviations exceed the limit state was calculated. In this case, the time comes to:

$$T_{\varepsilon} = 5\ \text{[months]}, \qquad T_{\beta} = 33\ \text{[months]} \tag{44}$$
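Formula (41) can be evaluated directly. The text does not state the value of $z_d$ used in the example, so the sketch below takes the β-channel coefficients from (43) together with an assumed acceptable deviation $z_d = 0.4$; under that assumption the minus root of (40) gives a durability of about 33 months, consistent with (44).

```python
import math

# Coefficients of the second diagnostic parameter from (43); z_d = 0.4 is an
# assumed acceptable deviation (the text does not state the value it used).
a, b, gamma, z_d = 0.0003, 0.0051, 2.32, 0.4

s = gamma ** 2 * a + 2.0 * b * z_d
T = (s - math.sqrt(s ** 2 - 4.0 * b ** 2 * z_d ** 2)) / (2.0 * b ** 2)  # eq. (41)
print(T)   # about 33 months, consistent with (44)

# The durability root satisfies the squared relation behind (40):
check = (b * T - z_d) ** 2 - gamma ** 2 * a * T
print(check)   # about 0
```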

since the last check of the diagnostic parameters. The values (44) can be used in technical service depending on the adopted service strategy.

Summing up, we can state that the above-presented method seems to be correct and enables the analysis of a device/system technical condition with respect to the character of changes of the values of the diagnostic parameters. The above-presented calculation example enabled the verification of the developed model and showed the application qualities of the method. This method can be useful in future work on the improvement of both the operation process and the way of use of aircraft with avionics systems, because it enables the determination of the time during which a device is fit for use.


Moreover, due to its universal character, the method can be used to determine the residual life of any technical object whose technical condition is determined by analyzing values of the diagnostic parameters.

## **5. The process of operating the military aircraft**

#### **5.1 The influence of destructive factors on the course of the process of operating the military aircraft**

The use of military aircraft concerns mainly the performance of a particular combat task, which often involves the use of aerial combat means. As far as an airborne function of a military aircraft is concerned, the main stages of its operation comprise the take-off, the staying in the air, and the landing. On the other hand, when analyzing the process of the operation of the on-board armament system, we can assume that the operational effect is the sum of the partial effects gained during the flight phase in relation to:

− target detection;
− the execution of the aiming process;
− the execution of the process of attacking.

The level of effect of munitions on a target is the most commonly assumed rate that characterizes the operational effect obtained during the execution of a combat task involving the use of aerial combat means. As regards the on-board armament system, the obtained effect comes down to the determination of the difference between the value of target coordinates and the coordinate values of a drop point of combat armament.

Based on the structural diagram (Fig. 1) and the functions of the on-board armament system, we can assume that the Armament Control System (ACS) is the basic element that affects the value of the operational effect. Both at the stage of maintenance and operation, ACS provides information that is essential for the accurate functioning of the on-board armament system (OAS). In turn, as regards the ACS, its most crucial element involves the navigation and aiming system (NAS). Its basic task comprises the realization of a set of algorithms. Their solution enables: in the maintenance system, the reconstruction of the nominal values of particular initial parameters; in the operation system, the proper usage of combat means (the intended use). The latter system is the subject of further discussion.

The analysis of the operational effect can be performed on the basis of the assessment of conditions in which NAS is used and the determination of causes that have a negative impact on the final value of the obtained effect. As regards NAS, during the execution of a combat task, the operational effect is the total angular correction represented as an aiming indicator in a pilot's field of view. The process of aiming and attacking is executed on the basis of the total angular correction. Thus, we can assume that the assessment of the operational effect involves the determination of accuracy in defining and reproducing the position of a moving aiming indicator.

The next aspect concerns the use of the aiming correction by a pilot. When the correction is defined and illustrated, the task comes down to the determination of the flight conditions in which an aiming indicator coincides with a target at the moment of using combat means. Based on the conducted analysis, we can assume that the execution of a combat task under real conditions is not an easy process. The causes of errors affecting the value of the operational effect connected with the aiming process execution can be represented as the equation for the pooled error of the aiming process execution $\Delta_\Sigma$:

$$
\Delta\_{\Sigma} = (\Delta\_{\text{M}} + \Delta\_{\text{K}} + \Delta\_{\text{I}} + \Delta\_{\text{A}}) + (\Delta\_{\text{C}} + \Delta\_{\text{W}} + \Delta\_{\text{R}} + \Delta\_{\text{O}}) + \Delta\_{\text{N}} \tag{45}
$$

The error of the method for solving the aiming-related equations $\Delta_M$ characterizes two groups of causes:

1. connected with the relative uncertainty resulting from the processing of initial data concerning the aiming process by NAS functional elements, and
2. concerning the error function of equations for aiming.

The system configuration error $\Delta_K$ connects with entering invalid control signals (that characterize the combat task being performed) into NAS.

The instrumental error $\Delta_I$ connects with the accuracy of determining the operational parameters of NAS by particular information transmitters. This error concerns mainly the measurement error.

The reconstruction error $\Delta_A$ characterizes the adequacy of a physical combat situation taking place during the execution of the aiming process to the assumed attack diagram which was used to determine the aiming equations.

The causes of variance between the aiming indicator position and the target $\Delta_C$ result from an incorrect approach of an aircraft to an attack path.

The causes of the failure to maintain the required conditions for aiming and attacking $\Delta_W$ connect with the failure to keep the required angle of diving, flight speed, bank angle, etc., i.e. the exceeding of the nominal values of particular parameters describing a combat task.

The effect of the weapon position $\Delta_R$ on the pooled error value $\Delta_\Sigma$ concerns mainly the process of aiming during the execution of the process of attacking with the use of aerial combat means (that are applied in a time series of particular length).

Environmental conditions determining the value of the error $\Delta_O$ significantly influence the execution of the aiming process. Due to the fact that an aircraft moves at high speed in a heterogeneous space, it may encounter various conditions prevailing in space layers or areas, which directly translates into the perturbation of flight-related parameter values.

The general error $\Delta_N$ concerns causes which are not included in the presented classification and are the resultant of the lack of possibility to learn or describe them in an analytical way at the present state of knowledge.

All the above-mentioned errors can be of two kinds: determined errors (systematic errors) and probabilistic errors (random errors), so their accumulated form will be burdened with both types of errors. The phenomenon of the random error occurrence is not precisely determined, which is why an attempt to evaluate its value is fully justified. The random character of the compound errors means that the operational effect of MMA application is burdened with the random error, too.


*P10* - the probability that the deviation value along the OZ axis will change by *-h* at the time

*P20* - the probability that the deviation value along the OZ axis will change by *h* at the time

*P01* - the probability that the deviation value along the OY axis will change by *-h* at the time

*P02* - the probability that the deviation value along the OY axis will change by *h* at the time

When we use expressions obtained from the expansion of the function *U(z,y,t)* in the Taylor series in the surrounding of the point *(z,y)* and the time *t* in accordance with the

> 

2 2 2

2 2 2

2 2 2

2 2 2

> 

*y*

 

*y*

 

*z*

 

*z*

2

 

 

20 2

*<sup>U</sup> <sup>P</sup> <sup>U</sup>*

*<sup>U</sup> <sup>h</sup> <sup>h</sup>*

*<sup>U</sup> <sup>h</sup> <sup>h</sup>*

*<sup>U</sup> <sup>h</sup> <sup>h</sup>*

*<sup>U</sup> <sup>h</sup> <sup>h</sup>*

2

2

2

 

*t t*

 

where *U=U(z,y,t)*, and the fact that 1 *P*<sup>00</sup> *P*<sup>10</sup> *P*<sup>20</sup> *P*<sup>01</sup> *P*<sup>02</sup> , the equation (47) takes

*<sup>U</sup> <sup>h</sup> <sup>h</sup>*

 

When adding and subtracting *U* in the equation (48) and multiplying appropriate expressions in the brackets and taking the parameter *U* outside the brackets, the following

 

 

2 2 2

2 2

*y*


#### **5.2 The model of the assessment of the execution of a combat mission by the military aircraft**

The execution of the aiming process generally comes down to the process of making an aiming indicator coincide with a target. Significant elements of this process include parameters that determine the aiming indicator position and a set of actions aiming at pointing the indicator at a target. Based on these elements, we can consider the process of aiming as the execution of the process of building the aiming triangle using: a pilot – the system operator, an aiming indicator – the quantity describing the appropriate spatial orientation of an aircraft, and a target – the basic point in the execution of the aiming process. The aim of the process is to align these three elements.

The aiming correction is obtained by recording particular parameters (necessary to solve aiming equations) and processing them in NAS. The aiming correction value is represented as the central point of a moving aiming indicator which is displayed on the reflector of the sight head. Due to the effect of various constraints, the aiming indicator can adopt different positions in the assumed flat coordinate system (Fig. 6) placed on the plane of the sight head reflector. The indicator can either move in one out of four directions or move back to the previously occupied position.

Fig. 6. A graphical representation of the occurrence of possible deviations of the central point of the moving indicator during the execution of the aiming process

*U(z,y,t)* denotes the probability that at the moment *t* the position deviations of the central point of the moving indicator are *z* and *y*, where *t* is the current time of the process of aiming. This probability is characterized by the density function denoted as *U(z,y,t)*. Therefore, using the density function *U(z,y,t)*, we can describe the dynamics of changes in the position deviations of the central point of the moving indicator by a difference equation.

Regarding the issue being discussed above, the difference equation is as follows:

$$\begin{split} U(z, y, t + \Delta t) &= P\_{00} U(z, y, t) + P\_{10} U(z - h, y, t) + P\_{20} U(z + h, y, t) + \\ &\quad + P\_{01} U(z, y - h, t) + P\_{02} U(z, y + h, t) \end{split} \tag{46}$$

where:

*U(z,y,t)* - the probability density function of deviation values at the moment *t*;

*t* - the time value between the specified deviations;

*h* - the deviation value along the specified axes;

414 Recent Advances in Aircraft Technology


*P00* - the probability that the deviation value will not change;

*P10* - the probability that the deviation value along the OZ axis will change by *-h* at the time *t*;

*P20* - the probability that the deviation value along the OZ axis will change by *h* at the time *t*;

*P01* - the probability that the deviation value along the OY axis will change by *-h* at the time *t*;

*P02* - the probability that the deviation value along the OY axis will change by *h* at the time *t*.
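To make the random-walk model concrete, the difference equation (46) can be simulated directly. The sketch below is an illustration only (the function name, probabilities, and step size are our own choices, not from the source); it follows the probability definitions above, moving the deviation by *-h*/*h* along OZ with probabilities *P10*/*P20* and along OY with *P01*/*P02*:

```python
import random

def simulate_indicator(p00, p10, p20, p01, p02, h, steps, seed=0):
    """Simulate one path of the indicator deviations (z, y).

    Follows the probability definitions of equation (46): at each step the
    deviation along OZ changes by -h (p10) or +h (p20), along OY by -h (p01)
    or +h (p02), or stays unchanged (p00).
    """
    assert abs(p00 + p10 + p20 + p01 + p02 - 1.0) < 1e-12
    rng = random.Random(seed)
    z = y = 0.0
    for _ in range(steps):
        u = rng.random()
        if u < p10:
            z -= h
        elif u < p10 + p20:
            z += h
        elif u < p10 + p20 + p01:
            y -= h
        elif u < p10 + p20 + p01 + p02:
            y += h
        # otherwise: no change, with probability p00
    return z, y

# Symmetric probabilities give a drift-free walk (b1 = b2 = 0 further below).
z_end, y_end = simulate_indicator(0.2, 0.2, 0.2, 0.2, 0.2, h=0.1, steps=1000)
```

Averaging many such paths reproduces the spreading described by the density functions derived below.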

When we use expressions obtained from the expansion of the function *U(z,y,t)* in the Taylor series in the neighbourhood of the point *(z,y)* and the time *t*, in accordance with the following set of equations:

$$\begin{aligned} U(z, y, t + \Delta t) &= U + \frac{\partial U}{\partial t} \Delta t \\ U(z - h, y, t) &= U - \frac{\partial U}{\partial z} h + \frac{1}{2} h^2 \frac{\partial^2 U}{\partial z^2} \\ U(z + h, y, t) &= U + \frac{\partial U}{\partial z} h + \frac{1}{2} h^2 \frac{\partial^2 U}{\partial z^2} \\ U(z, y - h, t) &= U - \frac{\partial U}{\partial y} h + \frac{1}{2} h^2 \frac{\partial^2 U}{\partial y^2} \\ U(z, y + h, t) &= U + \frac{\partial U}{\partial y} h + \frac{1}{2} h^2 \frac{\partial^2 U}{\partial y^2} \end{aligned} \tag{47}$$

where *U=U(z,y,t)*. Substituting the expansions (47) into the equation (46), and using the fact that *P00 + P10 + P20 + P01 + P02 = 1*, we obtain the following form:

$$\begin{split} U + \frac{\partial U}{\partial t} \Delta t &= P\_{00} U + P\_{10} \left( U - \frac{\partial U}{\partial z} h + \frac{1}{2} h^{2} \frac{\partial^{2} U}{\partial z^{2}} \right) + P\_{20} \left( U + \frac{\partial U}{\partial z} h + \frac{1}{2} h^{2} \frac{\partial^{2} U}{\partial z^{2}} \right) + \\ &\quad + P\_{01} \left( U - \frac{\partial U}{\partial y} h + \frac{1}{2} h^{2} \frac{\partial^{2} U}{\partial y^{2}} \right) + P\_{02} \left( U + \frac{\partial U}{\partial y} h + \frac{1}{2} h^{2} \frac{\partial^{2} U}{\partial y^{2}} \right) \end{split} \tag{48}$$

By adding and subtracting *U* in the equation (48), multiplying out the expressions in the brackets, and factoring *U* out of the appropriate terms, the following result was obtained:

$$\begin{split} \frac{\partial U}{\partial t} \Delta t &= -U + \left( P\_{00} + P\_{10} + P\_{20} + P\_{01} + P\_{02} \right) U + P\_{10} \left( -\frac{\partial U}{\partial z} h + \frac{1}{2} h^2 \frac{\partial^2 U}{\partial z^2} \right) + \\ &\quad + P\_{20} \left( \frac{\partial U}{\partial z} h + \frac{1}{2} h^2 \frac{\partial^2 U}{\partial z^2} \right) + P\_{01} \left( -\frac{\partial U}{\partial y} h + \frac{1}{2} h^2 \frac{\partial^2 U}{\partial y^2} \right) + \\ &\quad + P\_{02} \left( \frac{\partial U}{\partial y} h + \frac{1}{2} h^2 \frac{\partial^2 U}{\partial y^2} \right) \end{split} \tag{49}$$

The Analysis of the Maintenance Process of the Military Aircraft 417


Using the assumption that the sum of all probabilities describing the weapon angular position equals one, the equation (49) takes the following form:

$$\begin{split} \frac{\partial \mathcal{U}}{\partial t} \Delta t &= -P\_{10} \frac{\partial \mathcal{U}}{\partial z} h + P\_{10} \frac{1}{2} h^2 \frac{\partial^2 \mathcal{U}}{\partial z^2} + P\_{20} \frac{\partial \mathcal{U}}{\partial z} h + P\_{20} \frac{1}{2} h^2 \frac{\partial^2 \mathcal{U}}{\partial z^2} - P\_{01} \frac{\partial \mathcal{U}}{\partial y} h + \\ &+ P\_{01} \frac{1}{2} h^2 \frac{\partial^2 \mathcal{U}}{\partial y^2} + P\_{02} \frac{\partial \mathcal{U}}{\partial y} h + P\_{02} \frac{1}{2} h^2 \frac{\partial^2 \mathcal{U}}{\partial y^2} \end{split} \tag{50}$$

After grouping the quantities from the above equation, the following equation was obtained:

$$\begin{split} \frac{\partial \mathcal{U}}{\partial t} \Delta t &= -P\_{10} \frac{\partial \mathcal{U}}{\partial z} h + P\_{20} \frac{\partial \mathcal{U}}{\partial z} h + P\_{10} \frac{1}{2} h^{2} \frac{\partial^{2} \mathcal{U}}{\partial z^{2}} + P\_{20} \frac{1}{2} h^{2} \frac{\partial^{2} \mathcal{U}}{\partial z^{2}} - P\_{01} \frac{\partial \mathcal{U}}{\partial y} h + \\ &+ P\_{02} \frac{\partial \mathcal{U}}{\partial y} h + P\_{01} \frac{1}{2} h^{2} \frac{\partial^{2} \mathcal{U}}{\partial y^{2}} + P\_{02} \frac{1}{2} h^{2} \frac{\partial^{2} \mathcal{U}}{\partial y^{2}} \end{split} \tag{51}$$

After dividing both sides of the equation (51) by Δ*t*, the following result was obtained:

$$\begin{aligned} \frac{\partial U}{\partial t} &= -\frac{(P\_{10} - P\_{20})h}{\Delta t} \frac{\partial U}{\partial z} + \frac{(P\_{10} + P\_{20})\frac{1}{2}h^2}{\Delta t} \frac{\partial^2 U}{\partial z^2} + \\ &- \frac{(P\_{01} - P\_{02})h}{\Delta t} \frac{\partial U}{\partial y} + \frac{(P\_{01} + P\_{02})\frac{1}{2}h^2}{\Delta t} \frac{\partial^2 U}{\partial y^2} \end{aligned} \tag{52}$$

By introducing the following denotations:

$$b\_1 = \frac{(P\_{10} - P\_{20})h}{\Delta t}, \quad b\_2 = \frac{(P\_{01} - P\_{02})h}{\Delta t} \tag{53}$$

$$a\_1 = \frac{\left(P\_{10} + P\_{20}\right)h^2}{\Delta t}, \quad a\_2 = \frac{\left(P\_{01} + P\_{02}\right)h^2}{\Delta t} \tag{54}$$

and substituting them into the equation (52), the following differential equation was obtained:

$$\frac{\partial \mathcal{U}}{\partial t} = -b\_1 \frac{\partial \mathcal{U}}{\partial z} - b\_2 \frac{\partial \mathcal{U}}{\partial y} + \frac{1}{2} a\_1 \frac{\partial^2 \mathcal{U}}{\partial z^2} + \frac{1}{2} a\_2 \frac{\partial^2 \mathcal{U}}{\partial y^2} \tag{55}$$

The following function is the solution of the above equation:

$$U(z,y,t) = \frac{1}{\sqrt{2\pi a\_1 t}\sqrt{2\pi a\_2 t}} e^{-\frac{1}{2}\left(\frac{(z-b\_1t)^2}{a\_1 t} + \frac{(y-b\_2t)^2}{a\_2 t}\right)}\tag{56}$$

Assuming that the probabilities *P10* and *P20* are of the same order, i.e. *P10*=*P20*, we can write that the coefficient *b1* = 0. Similarly, we can assume that the probabilities *P01* and *P02* are also of the same order, so the coefficient *b2* = 0. Given these assumptions, the equation (55) takes the following form:

$$\frac{\partial \mathcal{U}}{\partial t} = \frac{1}{2} a\_1 \frac{\partial^2 \mathcal{U}}{\partial z^2} + \frac{1}{2} a\_2 \frac{\partial^2 \mathcal{U}}{\partial y^2} \tag{57}$$

The following form of the density function is the solution of the equation (57):

$$\mathcal{U}(z, y, t) = \frac{1}{\sqrt{2\pi a\_1 t}\sqrt{2\pi a\_2 t}} e^{-\frac{1}{2}\left(\frac{z^2}{a\_1 t} + \frac{y^2}{a\_2 t}\right)}\tag{58}$$
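As a quick numerical sanity check (ours, not part of the original derivation), the density (58) can be evaluated on a grid to confirm that it integrates to one and that the variance along the OZ axis equals *a1·t*; the parameter values below are arbitrary:

```python
import math

def density_58(z, y, t, a1, a2):
    """The probability density of equation (58)."""
    norm = math.sqrt(2 * math.pi * a1 * t) * math.sqrt(2 * math.pi * a2 * t)
    return math.exp(-0.5 * (z * z / (a1 * t) + y * y / (a2 * t))) / norm

# Arbitrary illustrative parameters; the grid spans several standard deviations.
a1, a2, t, step, span = 0.5, 0.8, 2.0, 0.05, 8.0
grid = [i * step - span for i in range(int(2 * span / step) + 1)]

# Riemann sums over [-span, span]^2: total mass and variance along OZ.
mass = sum(density_58(z, y, t, a1, a2) for z in grid for y in grid) * step ** 2
var_z = sum(z * z * density_58(z, y, t, a1, a2) for z in grid for y in grid) * step ** 2
```

Here `mass` comes out close to 1 and `var_z` close to `a1 * t`, as the form of (58) predicts.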

The explicit form of the density function (58) requires determining the coefficients of the equation (57), which involves:

− obtaining input data;

− determining the likelihood function *L* enabling the determination of the parameter estimates *a1* and *a2*;

− determining the density function (58).




$$L = \frac{1}{(2\pi)^{n} (a\_{1}a\_{2})^{\frac{n}{2}}} \prod\_{k=1}^{n-1} \frac{1}{(t\_{k+1} - t\_{k})} \exp\left\{-\frac{1}{2} \left[\frac{(z\_{k+1} - z\_{k})^{2}}{a\_{1}(t\_{k+1} - t\_{k})} + \frac{(y\_{k+1} - y\_{k})^{2}}{a\_{2}(t\_{k+1} - t\_{k})}\right]\right\} \tag{59}$$

To determine the parameters *a1* and *a2* we can use the method of the maximum likelihood. The method consists in finding the parameter values *a1* and *a2* that maximize the likelihood function. So, we seek the solution of the set of equations

$$\begin{cases} \frac{\partial \ln L}{\partial a\_1} = 0\\ \frac{\partial \ln L}{\partial a\_2} = 0 \end{cases} \tag{60}$$

Therefore, the logarithm of the likelihood function *L* takes the following form:

$$\begin{aligned} \ln L &= -n \ln 2\pi - \frac{n}{2} \ln a\_1 - \frac{n}{2} \ln a\_2 + \\ &+ \sum\_{k=1}^{n-1} \left[ -\ln(t\_{k+1} - t\_k) - \frac{1}{2} \left( \frac{(z\_{k+1} - z\_k)^2}{a\_1(t\_{k+1} - t\_k)} + \frac{(y\_{k+1} - y\_k)^2}{a\_2(t\_{k+1} - t\_k)} \right) \right] \end{aligned} \tag{61}$$

By determining the derivatives of the function *L* relative to specified parameters, the following set of equations was obtained:


$$\begin{cases} -\frac{n}{2a\_1} + \sum\_{k=1}^{n-1} \frac{(z\_{k+1} - z\_k)^2}{2a\_1^2 (t\_{k+1} - t\_k)} = 0\\ -\frac{n}{2a\_2} + \sum\_{k=1}^{n-1} \frac{(y\_{k+1} - y\_k)^2}{2a\_2^2 (t\_{k+1} - t\_k)} = 0 \end{cases} \tag{62}$$

which after transformation provides the following equations (63):

$$\begin{cases} a\_1 = \frac{1}{n} \sum\_{k=1}^{n-1} \frac{(z\_{k+1} - z\_k)^2}{(t\_{k+1} - t\_k)}\\ a\_2 = \frac{1}{n} \sum\_{k=1}^{n-1} \frac{(y\_{k+1} - y\_k)^2}{(t\_{k+1} - t\_k)} \end{cases} \tag{63}$$

Therefore, the parameters *a1* and *a2* can be defined on the basis of the above set of equations. When analyzing the function notation (58), it can be assumed that in order to determine the variance characterizing the distribution of the indicator central point, the parameters *a1* and *a2* must be multiplied by time, which leads to the following result:

$$\begin{cases} \sigma\_z^2(t\_n) = a\_1 t\_n = \frac{1}{n} \sum\_{k=1}^{n-1} \frac{(z\_{k+1} - z\_k)^2}{(t\_{k+1} - t\_k)} \sum\_{k=1}^{n-1} (t\_{k+1} - t\_k) \\ \sigma\_y^2(t\_n) = a\_2 t\_n = \frac{1}{n} \sum\_{k=1}^{n-1} \frac{(y\_{k+1} - y\_k)^2}{(t\_{k+1} - t\_k)} \sum\_{k=1}^{n-1} (t\_{k+1} - t\_k) \end{cases} \tag{64}$$
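The estimators (63) and the variances (64) are straightforward to compute from a recorded path. The sketch below is a minimal illustration; the function names and the toy data are our own assumptions, not from the source:

```python
def estimate_a(times, coords):
    """Diffusion-coefficient estimate of equation (63) for one coordinate.

    times  -- t_1, ..., t_n (strictly increasing)
    coords -- matching positions z_1, ..., z_n (or y_1, ..., y_n)
    """
    n = len(times)
    return sum((coords[k + 1] - coords[k]) ** 2 / (times[k + 1] - times[k])
               for k in range(n - 1)) / n

def variance_at_end(times, coords):
    """Position variance at t_n per equation (64): a times the summed time steps."""
    dt_total = sum(times[k + 1] - times[k] for k in range(len(times) - 1))
    return estimate_a(times, coords) * dt_total

# Usage on a toy path: increments of +/-1 over unit time steps give a = 4/5.
a_hat = estimate_a([0, 1, 2, 3, 4], [0, 1, 0, 1, 0])
```

The same functions applied to the *y*-coordinates give *a2* and the OY variance.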

The determination of the function parameters (58) will allow defining the probability density function of the correct position of the indicator central point.

As regards the case described, it is assumed that the probability of the occurrence of deviations in any direction of the assumed coordinate axes is the same. Such a situation takes place when the process of aiming is performed correctly, i.e. when at the beginning of the aiming process the aiming indicator coincides with the target, and any dislocation of the indicator is compensated by resetting it on the target. A real process of aiming, however, often involves the dislocation of the indicator relative to the target. Such dislocation makes the probability of the indicator moving in a specified direction higher than that of it moving in the opposite direction. Thus, the values of the parameters *b1* and *b2* are not 0. Therefore, the differential equation describing the aiming process takes the form of the equation (55), whose solution is the density function (56). The parameters *b1*, *b2*, *a1* and *a2* need to be determined for this function. Using the above-described technique, the likelihood function (65) was determined and used to estimate the sought parameters:

$$L = \frac{1}{(2\pi)^{n}(a\_1 a\_2)^{\frac{n}{2}}} \prod\_{k=1}^{n-1} \frac{1}{(t\_{k+1} - t\_k)} \exp\left\{-\frac{1}{2}\left[\frac{\left((z\_{k+1} - z\_k) - b\_1(t\_{k+1} - t\_k)\right)^2}{a\_1(t\_{k+1} - t\_k)} + \frac{\left((y\_{k+1} - y\_k) - b\_2(t\_{k+1} - t\_k)\right)^2}{a\_2(t\_{k+1} - t\_k)}\right]\right\} \tag{65}$$

 

 


The process of determining the parameters of the function (65) is analogous to the way the coefficients of the equation (59) were determined. By determining the derivatives of the logarithm of the function (65) relative to the specified coefficients and equating them to 0, the following relationships were obtained:

$$\begin{aligned} b\_1 &= \frac{z\_n}{t\_n}, & b\_2 &= \frac{y\_n}{t\_n} \\ a\_1 &= \frac{1}{n} \sum\_{k=1}^{n-1} \frac{[(z\_{k+1} - z\_k) - b\_1(t\_{k+1} - t\_k)]^2}{(t\_{k+1} - t\_k)} \\ a\_2 &= \frac{1}{n} \sum\_{k=1}^{n-1} \frac{[(y\_{k+1} - y\_k) - b\_2(t\_{k+1} - t\_k)]^2}{(t\_{k+1} - t\_k)} \end{aligned} \tag{66}$$
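A sketch of the drift-case estimators (66). It assumes, as the form *b1 = zn/tn* implies, that the path starts from position 0 at time 0; the function name and the toy data are illustrative, not from the source:

```python
def estimate_drift_diffusion(times, coords):
    """Drift b and diffusion a per equation (66) for one coordinate.

    Assumes the path starts from position 0 at time 0, so the drift
    estimate is simply the final position over the final time.
    """
    n = len(times)
    b = coords[-1] / times[-1]
    a = sum(((coords[k + 1] - coords[k]) - b * (times[k + 1] - times[k])) ** 2
            / (times[k + 1] - times[k]) for k in range(n - 1)) / n
    return b, a

# A purely linear path has drift 1 and zero residual diffusion.
b_hat, a_hat = estimate_drift_diffusion([1, 2, 3, 4], [1, 2, 3, 4])
```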

By determining the values of the above coefficients and substituting them into the equation (56), we can determine the density function of the indicator position during the aiming process involving the indicator dislocation relative to a target.

The indicator path relative to a target (described for subsequent moments *t0, t1, t2, ..., tn*) can be characterized by horizontal coordinates *z0, z1, z2, ..., zn* and vertical coordinates *y0, y1, y2, ..., yn* of the assumed coordinate system. When converting these quantities to current data, the time of recording the position of the aiming indicator can be replaced by the number of the registered positions (next coordinate values will constitute the sum of previous coordinates). Thus, the indicator position will be characterized by:

1. the number of registered positions: *0, 1, 2, ..., n*;
2. the deviation toward the 0Z axis: *0, z1, (z1+z2), (z1+z2+z3), ...*;
3. the deviation toward the 0Y axis: *0, y1, (y1+y2), (y1+y2+y3), ...*.


Based on the above, we can determine the following parameters:

$$b\_1^\* = \frac{\sum\_{i=1}^n z\_i}{n}, \quad \qquad \qquad b\_2^\* = \frac{\sum\_{i=1}^n y\_i}{n} \tag{67}$$

$$\begin{aligned} \sigma\_1^2 = a\_1^\* &= \frac{1}{n} \sum\_{k=1}^{n-1} \left[ \left( \hat{z}\_{k+1} - \hat{z}\_k \right) - \left( \frac{1}{n} \sum\_{i=1}^n z\_i \right) \right]^2 \\ \sigma\_2^2 = a\_2^\* &= \frac{1}{n} \sum\_{k=1}^{n-1} \left[ \left( \hat{y}\_{k+1} - \hat{y}\_k \right) - \left( \frac{1}{n} \sum\_{i=1}^n y\_i \right) \right]^2 \end{aligned} \tag{68}$$

where:


$$\begin{aligned} \hat{z}\_{k+1} &= \sum\_{i=1}^{k+1} z\_{i}, & \hat{z}\_{k} &= \sum\_{i=1}^{k} z\_{i} \\ \hat{y}\_{k+1} &= \sum\_{i=1}^{k+1} y\_{i}, & \hat{y}\_{k} &= \sum\_{i=1}^{k} y\_{i} \end{aligned} \tag{69}$$

Because

$$
\hat{\boldsymbol{z}}\_{k+1} - \hat{\boldsymbol{z}}\_{k} = \boldsymbol{z}\_{k+1} \qquad \text{and} \qquad \hat{\boldsymbol{y}}\_{k+1} - \hat{\boldsymbol{y}}\_{k} = \boldsymbol{y}\_{k+1} \tag{70}
$$

therefore:

$$\begin{aligned} \sigma\_1^2 &= \frac{1}{n} \sum\_{k=1}^n \left[ z\_k - \frac{1}{n} \sum\_{i=1}^n z\_i \right]^2 \\ \sigma\_2^2 &= \frac{1}{n} \sum\_{k=1}^n \left[ y\_k - \frac{1}{n} \sum\_{i=1}^n y\_i \right]^2 \end{aligned} \tag{71}$$

The above relationships can be used to describe the process of aiming under real-life conditions.
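The discrete-position estimates (67) and (71) reduce to the sample mean and sample variance of the recorded increments. A minimal sketch (the function name and data are ours):

```python
def discrete_estimates(zs, ys):
    """Estimates (67) and (71) from the recorded increments z_1..z_n, y_1..y_n.

    The position index replaces time, so the drift estimates b1*, b2* are
    the mean increments and the variances are their sample variances.
    """
    n = len(zs)
    b1 = sum(zs) / n
    b2 = sum(ys) / n
    var_z = sum((z - b1) ** 2 for z in zs) / n
    var_y = sum((y - b2) ** 2 for y in ys) / n
    return b1, b2, var_z, var_y

# Alternating z-increments average out; constant y-increments show pure drift.
b1, b2, var_z, var_y = discrete_estimates([1, -1, 1, -1], [2, 2, 2, 2])
```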

#### **5.3 A computational example**

The execution of a combat task with the use of aerial combat means is characterized by the fact that the possibility of their use is determined by conditions that constitute a set of various factors enabling the performance of a combat task at the required level and with the consideration of a current tactical, navigational, meteorological, and radio-technical situation. The basic determinants of these conditions involve combat capabilities of an aircraft and the level of competence among aircrew members. The essence of the aiming process comes down to controlling an aircraft in such a way that it reaches the point in space where the applied weapon will hit a target. This procedure is performed in the NAS environment on the basis of the following data:

− motion parameters of an aircraft executing an attack, a target, and parameters of the centre where an aircraft motion is executed;

− the required coordinates of a target;

− the actual coordinates of a target;

− the comparison between actual and required coordinates of a target.


A common method for analyzing the aiming process during an attack is the recorded material analysis (using either the film placed in a photo-control apparatus located in front of the sight head or a camera recording a tactical situation in front of the MMA). Based on the recorded material, it is possible to determine the mutual position of an aiming indicator and a target at the moment of a weapon use.

Having the material registered by photo-control devices (Fig. 6) and using the abovementioned method, it is possible to define coordinates of the mutual position of a target and indicator in successive moments of the attacking process.


Fig. 6. Photos taken with a photo-control apparatus during the realization of the attacking process with the use of non-guided missiles.

Based on the obtained data, it was possible to determine the path of the aiming indicator relative to the target. Figure 7 depicts this path. When analyzing the position of the central point of the aiming indicator, we can assume that, despite the chaotic motion of the indicator, the recorded positions properly reflect the nature of the real process.

Fig. 7. The course of changes in the position of the aiming indicator relative to a target during the realization of the aiming process with the use of non-guided missiles.
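The statistical processing behind equations (69)–(72) can be illustrated with a short sketch. This is not the author's software: the deviation samples below are hypothetical, and the sketch assumes the standard sample-mean and unbiased-variance estimators together with a Gaussian density of indicator deviations (cf. Fig. 8):

```python
import math

# Hypothetical indicator deviations (y_i, z_i) read off photo-control frames;
# illustrative values only, not the data of Fig. 7.
y = [4.1, -2.3, 7.8, -5.0, 3.2, -6.7, 1.9, -0.4]
z = [2.5, -3.1, 4.4, -1.8, 0.9, -2.2, 3.6, -4.0]

def mean(xs):
    # Sample mean, as in equation (69).
    return sum(xs) / len(xs)

def variance(xs):
    # Unbiased sample variance of the deviations, as in equation (71).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def normal_pdf(x, m, var):
    # One-dimensional Gaussian density used to characterize the
    # indicator-target concurrence (cf. Fig. 8).
    return math.exp(-(x - m) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

sigma_y2 = variance(y)
sigma_z2 = variance(z)
print(f"mean_y = {mean(y):.3f}, var_y = {sigma_y2:.3f}")
print(f"mean_z = {mean(z):.3f}, var_z = {sigma_z2:.3f}")
# Joint density at the target (0, 0), treating y and z as independent:
print(f"pdf(0,0) = {normal_pdf(0, mean(y), sigma_y2) * normal_pdf(0, mean(z), sigma_z2):.5f}")
```

For the data of Fig. 7 the chapter reports the variances given in equation (72); the sketch only shows the computation, not those values.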

The Analysis of the Maintenance Process of the Military Aircraft 423

The variance values were determined for the data presented in Fig. 7. The values are as follows:

$$\sigma\_z^2 = 14.24\ \left[\mathrm{T}^2\right], \qquad \sigma\_y^2 = 22.80\ \left[\mathrm{T}^2\right] \tag{72}$$

By substituting the above values into equation (58), and on the basis of the recorded data, it was possible to determine a graphical form of the probability density function (Fig. 8) that characterizes the concurrence of the aiming indicator with a target during the execution of the aiming process.

Fig. 8. A graph of the probability density function of indicator deviations during the execution of the aiming process with the use of non-guided missiles.

#### **6. Summary**

Works carried out during the maintenance process aim to ensure the required level of safety of aircraft engineering and to maintain it in good working condition. This is achieved by carrying out planned works and systematic checks of diagnostic parameter values. Apart from identification, diagnostic testing includes two more aspects: the genesis and the prediction of the technical state. That is why, for safety and reliability reasons, it is important to develop methods enabling prediction of the technical state of devices on the basis of information obtained during the maintenance process. The 4th chapter presents a probabilistic method for determining the residual durability of devices on the basis of the changes in their diagnostic parameters registered during the maintenance process. The application of the above-mentioned method may facilitate the military aircraft maintenance process by limiting the number of stoppages through the indication of the time of the next maintenance works for a specified device/system. It should be emphasized that the presented method is universal, as it can be applied to the modernization of the maintenance process not only in aircraft engineering but in any field where device/system diagnostic parameters are registered.

The process of operating is inevitably connected with "an operational effect", which results from the completion of a particular combat mission. Depending on the combat mission, this effect will concern, for example, hitting the target, intercepting an enemy, or identifying the target to attack. The operational effect is always obtained during flight. Due to the flying conditions of the military aircraft, we can list a number of destructive factors reducing the value of the obtained operational effect. Analyzing the process of operating, we can state that one of the most significant "cells" in this process is the flying military personnel – the pilot. His task involves the appropriate configuration of the military aircraft systems and the performance of the aiming process, which generally comes down to making an aiming indicator coincide with a target. The method presented in the 5th chapter enables the quantitative assessment of the aiming process quality. The results obtained in this way, supported by parameters describing the conditions in which a combat task was conducted, may constitute the basis for the evaluation of both the realization of a current combat task and the progress in training (considering a series of tasks of a given type in a specified time interval).

#### **7. References**

Fisz M. (1958). Probability Calculus and Mathematical Statistics. PWN, Warsaw, Poland

Kaczmarski W. (1990). Aircraft Weapons. Part II. Aircraft Sights Handbook. DWL, Poznan, Poland

Moir I.; Seabridge A. (2006). Military Avionics System. Chichester, England: Wiley

Olearczuk E.; Sikorski M.; Tomaszek H. (1978). Aircrafts maintenance. MON, Warsaw, Poland

Skomra A.; Tomaszek H.; Wroblewski M. (1999). Tactical and Technical Characteristics and the Effectiveness of Combat Air Munitions. Military Academy of Technology Textbook, Warsaw, Poland

Su-22M4 Handbook 7. Weapons. Part VII. Technology of Periodic Service Works. DWLiOP, 1986, Poznan, Poland

Tomaszek H.; Wazny M. (2008). The outline of the assessment of durability against surface wear of a construction element with the use of the distribution of time of the exceedence of limit state (admissible state). ZEM, Vol. 3(155) 2008, pp. 47-59, ISSN: 0137-5474, Radom, Poland

Tomaszek H.; Zurek J.; Loroch L. (2004). The outline of a method of estimation reliability and durability of aircraft's structure elements on the basis of destruction process description. ZEM, Vol. 3(139) 2004, pp. 73-85, ISSN: 0137-5474, Radom, Poland

Wazny M. (2003). The analysis of operating causes of the dispersion of selected munition and their influence on the air weapons effectiveness. Military Academy of Technology 2003, Warsaw, Poland

Wazny M. (2008). The method of determining the time concerning the operation of a chosen navigation and aiming device in the operation system. Maintenance and Reliability Nr 2/2008, 2(38), pp. 4-11, ISSN: 1507-2711, Lublin, Poland

422 Recent Advances in Aircraft Technology


Wazny M.; Wojtowicz K. (2008). The analysis of the military aircraft maintains system and the modernization proposal. Maintenance and Reliability Nr 3/2008, 3(39), pp. 4-11, ISSN: 1507-2711, Lublin, Poland

www.airliners.net

**Part 5**

**Miscellaneous Topics**

**19** 


## **Review of Technologies to Achieve Sustainable (Green) Aviation**

Ramesh K. Agarwal

*Department of Mechanical Engineering and Materials Science Washington University in St. Louis, St. Louis, MO, USA* 

#### **1. Introduction**

Among all major modes of transportation, travel by airplane and automobile continues to experience the fastest growth. As shown in Figure 1 [1], travel as measured in Passenger-Kilometers (PKM) is forecast to more than double from the current 2010 level of ~40 trillion PKM to approximately 103 trillion PKM by 2050. Of these two modes of transportation, air travel is experiencing the faster growth. The number of Passenger-Kilometers Travelled (PKT) per capita by various modes of transportation in different countries is shown in Figures 2(a)-2(d) [1]. Figures 2(a) and 2(c) also show that the use of personal vehicles compared to public transport (in PKT) is highest in the U.S., followed by the wealthier nations. Furthermore, as the per capita income of a nation increases, travel demand increases (Figure 3) [1], resulting in greater demand for personal vehicles as well as for air transportation, as shown in Figure 1. These projections are based on 3% growth in world Gross Domestic Product (GDP), 5.2% growth in passenger traffic and 6.2% growth in cargo movement. Only major policy changes and intervention by governments through the development of public-transportation infrastructure are likely to slow down the trends shown in Figure 1. Most of the energy for transportation is currently provided by fossil fuels (primarily petroleum). Figure 4 shows the oil consumption for transportation in the U.S. and its forecast for the future [2]. Figure 5 shows the relative percentage of fuel consumption by various categories of vehicles in the U.S. [2]. The consequences of burning fossil fuels are well established in their long-term impact on climate and global warming due to Greenhouse Gas (GHG) emissions, primarily CO2 and NOx. Table 1 gives the current level of CO2 emissions worldwide by ground and air transportation [3], and Figure 6 shows the forecast for the future if the current Business as Usual (BAU) scenario continues [3].
The reduction in GHG emissions due to the burning of fossil fuels is the major goal of "Green Transportation." The "Sustainability" goal is to explore both technological solutions to increase the efficiency of transportation and alternative carbon-neutral fuels (e.g. biofuels).
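As a rough consistency check on the mobility forecast quoted above (my own illustration, not from the source), the average annual growth rate implied by the 2010 and 2050 PKM figures can be backed out directly:

```python
# Back out the compound annual growth rate (CAGR) implied by the forecast:
# ~40 trillion PKM in 2010 -> ~103 trillion PKM in 2050 (Figure 1 [1]).
pkm_2010 = 40.0   # trillion PKM (approximate)
pkm_2050 = 103.0  # trillion PKM (approximate)
years = 2050 - 2010

cagr = (pkm_2050 / pkm_2010) ** (1 / years) - 1
print(f"Implied average growth of total PKM: {cagr * 100:.2f}% per year")
```

The result is on the order of 2.4% per year for total travel; passenger air traffic alone is projected to grow considerably faster (5.2% per year), consistent with air travel gaining share.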

#### **2. Sustainable (green) air transportation**

Most of the material presented in this section has been taken from the author's William Littlewood Award Lecture [4]. This section provides an overview of issues related to air transportation and its impact on the environment. The environmental issues such as noise, emissions and fuel burn (consumption), for both airplane and airport operations, are discussed in the context of energy and environmental sustainability. They are followed by topics dealing with noise and emissions mitigation by technological solutions including new aircraft and engine designs/technologies, alternative fuels, and materials, as well as an examination of aircraft operations logistics including Air-Traffic Management (ATM), Air-to-Air Refueling (AAR), Close Formation Flying (CFF), and tailored arrivals to minimize fuel burn. The ground infrastructure for sustainable aviation, including the concept of 'Sustainable Green Airport Design', is also covered.

Fig. 1. Global mobility trends from various modes of transportation [1].

Fig. 2. a: % share of public transport in various countries; b: % share of high speed transport in various countries; c: % share of light-duty vehicle transport in various countries [1].

Fig. 2(d). % share of various modes of transportation for inter-city travel in U.S. [1].

Fig. 3. Travel demand/capita with increase in GDP/capita of nations [1].

Table 1. Current level of CO2 emissions from air and ground transportation [3].

Fig. 4. Fuel consumption in U.S. by transport vehicles [2].

Fig. 5. Relative fuel consumption in U.S. by various categories of vehicles [2].

Fig. 6. CO2 emissions due to world passenger travel in the Business as Usual (BAU) scenario [3].

As mentioned in the 'Introduction', in the next few decades, air travel is forecast to experience the fastest relative growth among all modes of transportation, especially due to many fold increase in demand in major developing nations of Asia and Africa. Based on these demands for air travel, Boeing has determined the outlook for airplane demand by 2025 as shown in Figure 7 [5]. Figure 8 shows various categories of 27,200 airplanes that would be needed by 2025 [5]. The total value of new airplanes is estimated at \$2.6 trillion. As a result of three fold increase in air travel by 2025, it is estimated that the total CO2 emissions due to commercial aviation may reach between 1.2 billion tonnes to 1.5 billion tonnes annually by 2025 from its current level of 670 million tonnes. The amount of nitrogen oxides around airports, generated by aircraft engines, may rise from 2.5 million tonnes in 2000 to 6.1 million tonnes by 2025. The number of people who may be seriously affected by aircraft


noise may rise from 24 million in 2000 to 30.5 million by 2025. Therefore there is urgency to address the problems of emissions and noise abatement through technological innovations in the design and operations of commercial aircraft.
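The emissions figures quoted above can be tied together with a small worked check (illustrative only; the tonnage numbers restate the text):

```python
# If air travel triples while CO2 per PKM stays constant, emissions would
# reach 3 x 670 Mt ~ 2.0 Gt; the 1.2-1.5 Gt forecast therefore implies a
# substantial improvement in per-PKM efficiency by 2025.
current_co2_mt = 670.0      # Mt CO2/year from commercial aviation today
traffic_multiplier = 3.0    # threefold increase in air travel by 2025
forecast_low, forecast_high = 1200.0, 1500.0  # forecast Mt CO2/year by 2025

no_improvement = current_co2_mt * traffic_multiplier
implied_cut_low = 1 - forecast_high / no_improvement   # smallest implied cut
implied_cut_high = 1 - forecast_low / no_improvement   # largest implied cut
print(f"Constant-intensity emissions: {no_improvement:.0f} Mt/year")
print(f"Implied per-PKM efficiency gain: "
      f"{implied_cut_low * 100:.0f}%-{implied_cut_high * 100:.0f}%")
```

In other words, the forecast range already assumes roughly a 25-40% reduction in CO2 emitted per passenger-kilometer relative to today.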

Fig. 7. Boeing market forecast for new airplanes [5].

Fig. 8. Boeing demand forecast for various types of airplanes by 2025 [5].

#### **2.1 Environmental challenges**

To meet the environmental challenges of the 21st century resulting from growth in aviation, the Advisory Committee for Aeronautical Research in Europe (ACARE) has set the following three goals for reducing noise and emissions by 2020: (a) reduce the perceived noise to one half of current average levels, (b) reduce the CO2 emissions per passenger-kilometer (PKM) by 50%, and (c) reduce the NOx emissions by 80% relative to the 2000 reference [6]. NASA has similar objectives for 2020, as shown in Figure 9 for the N+2 generation aircraft [7]. It is expected that the technology readiness level (TRL) of the N+1, N+2 and N+3 generations will be between 4 and 6 in the 2015, 2020 and 2030 timeframes respectively. The NASA definitions of TRL are given in Reference [8]. TRL 4-6 implies that the readiness of the key technologies will be somewhere between component/subsystem validation in a laboratory environment and system/subsystem model or prototype demonstration in a relevant environment.
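One point worth making explicit about goal (b): a per-PKM reduction does not by itself guarantee an absolute reduction when traffic grows. A minimal sketch (my own illustration, not from the source):

```python
# Total CO2 = traffic (PKM) x intensity (CO2 per PKM).
# ACARE goal (b) halves the intensity; whether absolute emissions fall
# depends on how much traffic grows over the same period.
def total_emissions(baseline_total, traffic_growth_factor, intensity_factor):
    # baseline_total: emissions in the reference year (any unit)
    # traffic_growth_factor: PKM relative to reference (e.g. 3.0 = tripled)
    # intensity_factor: CO2/PKM relative to reference (0.5 = ACARE goal (b))
    return baseline_total * traffic_growth_factor * intensity_factor

baseline = 1.0  # normalized reference-year emissions
for growth in (1.0, 2.0, 3.0):
    # Tripled traffic with halved intensity still yields 1.5x the baseline.
    print(growth, total_emissions(baseline, growth, 0.5))
```

This is why the per-PKM goals must be combined with operational measures and, in the longer term, carbon-neutral fuels if absolute emissions are to fall.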


\*\*\* An additional reduction of 10% may be possible through improved operational capability; metroplex concepts will enable optimal use of runways at multiple airports within the metropolitan area

Fig. 9. NASA subsonic fixed wing system level metric for improving noise, emission and performance using technology & operational improvements [7].

The achievement of these goals will not be easy; it will require the cooperation and involvement of airplane manufacturers, the airline industry, regulatory agencies such as ICAO and FAA, and R&D organizations, as well as political will by many governments and the support of the public. However, these challenges can be met with concerted efforts, as stated beautifully by the Chairman, President and CEO of the Boeing Company, W. J. McNerney: "Just as employees mastered "impossible" challenges like supersonic flight, stealth, space exploration and super-efficient composite airplanes, now we must focus our spirit of innovation and our resources on reducing greenhouse-gas emissions in our products and operations."

#### **2.2 A List of new technologies and operational improvements for green aviation**

Recently, Aerospace International, published by the Royal Aeronautical Society of U.K., has identified 25 new technologies, initiatives and operational improvements that may make air travel one of the greenest industries by 2050 [9]. These 25 green technologies/concept areas are listed below from Reference [9].

1. "*Biofuels* – These are already showing promise; the third generation biofuels may exploit fast growing algae to provide a drop-in fuel substitute.

2. *Advanced composites* – The future composites will be lighter and stronger than the present composites, which the airplane manufacturers are just learning to work with and use.



20. *Morphing aircraft* - Already being researched for UAVs, morphing aircraft that adapt to every phase of flight could promise greater efficiency.

21. *Electric/hybrid ground vehicles* – Use of electric, hybrid or hydrogen powered ground support vehicles at airports will reduce the carbon footprint and improve local air quality.

22. *Multi-modal airports* - Future airports will connect passengers seamlessly and quickly with other destinations, by rail, Maglev or water, encouraging them to leave cars at home.

23. *Sustainable power for airports* - Green airports of 2050 could draw their energy needs from wave, tidal, thermal, wind or solar power sources.

24. *Greener helicopters* - Research into diesel powered helicopters could cut fuel consumption by 40%, while advances in blade design will cut the noise.

25. *The return of the airship* - Taking the slow route in a solar-powered airship could be an ultra 'green' way of travel and carve out a new travel niche in 'aerial cruises', without harming the planet."

Some of the ideas listed above require technological innovation in aircraft design and engines and the use of alternative fuels and materials, while others require operational improvement. Some concepts such as electric, solar and hydrogen powered aircraft are currently feasible but are unlikely to become viable for mass air transportation by 2050. In what follows, we describe the current levels of noise, CO2 and NOx emissions due to air transportation and possible strategies for their mitigation to achieve the ACARE and NASA goals.

#### **2.3 Noise & its abatement**

Historically, the reduction in airplane noise has been a major focus of airplane manufacturers because of its health effects and impact on the quality of life of communities, especially in the vicinity of major metropolitan airports. As a result, there has been significant progress in achieving major reductions in the noise levels of airplanes over the past five decades, as shown in Figure 10 [10]. These gains have been achieved by technological innovations by the manufacturers in reducing the noise from airframe, engines and undercarriage, as well as by making changes in operations. Worldwide, there has been a ten-fold increase since the 1970s in the number of airports that impose noise-related restrictions, as shown in Figure 11 [11]. The airports have imposed operating restrictions, and special attention has been paid to the planning, development and management of airports for sustainability. Since 1980, the FAA has invested over \$5 billion in airport noise reduction.

In recent years, the joint MIT/Cambridge University project on "Silent Aircraft" has produced an innovative aircraft/engine design, shown in Figure 12, that has imperceptible noise outside an urban airport [12]. In order to meet the ACARE and NASA goals of reducing the perceived noise by 50% of the current level by 2020, several new technology ideas are being investigated by the airplane and engine manufacturers to both reduce and shield the noise sources, as shown in Figure 13 in the chart by Reynolds [13]. The most promising for the near future are the chevron nozzles, shielded landing gears and the ultra high bypass engines with improved fan (geared fan and contra fan) and fan exhaust duct-liner technology. In addition, new flight path designs in ascent and descent flight can reduce the perceived noise levels in the vicinity of the airports.




3. *Fuel cells* - Hydrogen fuel cells will eventually take over from jet turbine Auxiliary Power Units (APU) and allow electrics such as in-flight entertainment (IFE) systems,

4. *Wireless cabins* – The use of Wi-Fi for IFE systems will save weight by cutting wiring -

5. *Recycling* - Initiatives are now underway to recycle up to 85% of an aircraft's components, including composites - rather than the current 60%. By 2050 this could be

6. *Geared Turbofans (GTF)* - Already under testing, GTF could prove to be even more efficient than predicted, with an advanced GTF providing 20% improvement in fuel

7. *Blended wing body aircraft* - These flying wing designs would produce aircraft with increased internal volume and superb flying efficiency, with a 20-30% improvement

8. *Microwave dissipation of contrails* – Using heating condensation behind the aircraft could

9. *Hydrogen-powered aircraft* - By 2050 early versions of hydrogen powered aircraft may be in service - and if the hydrogen is produced by clean power, it could be the ultimate

10. *Laminar flow wings* – It has been the goal of aerodynamicists for many decades to design laminar flow wings; new advances in materials or suction technology will allow new

11. *Advanced air navigation* - Future ATC/ATM systems based on Galileo or advanced GPS, along with international co-operation on airspace, will allow more aircraft to share the

12. *Metal composites* - New metal composites could result in lighter and stronger

13. *Close formation flying* - Using GPS systems to fly close together allows airliners to exploit the same technique as migrating bird flocks, using the slip-stream to save energy. 14. *Quiet aircraft* - Research by Cambridge University and MIT has shown that an airliner with imperceptible noise profile is possible - opening up airport development and

15. *Open-rotor engines* - The development of the open-rotor engines could promise 30%+ breakthrough in fuel efficiency compared to current designs. By 2050, coupled with

16. *Electric-powered aircraft* - Electric battery-powered aircraft such as UAVs are already in service. As battery power improves one can expect to see batteries powered light

17. *Outboard horizontal stabilizers (OHS) configurations* – OHS designs, by placing the horizontal stabilizers on rear-facing booms from the wingtips, increase lift and reduce

18. *Solar-powered aircraft* - After UAV applications and the Solar Impulse round the world attempt, solar-powered aircraft could be practical for light sport, motor gliders, or day-VFR aircraft. Additionally, solar panels built into the upper surfaces of a Blended-Wing-

19. *Air-to-air refueling of airliners* - Using short range airliners on long-haul routes, with

new airplane configurations, this could result in a total saving of 50%.

Body (BWB) could provide additional power for systems.

automated air-to-air refueling could save up to 45% in fuel efficiency.

prevent or reduce contrails formation which leads to cirrus clouds.

galleys etc. to run on green power.

leading to lighter aircraft.

efficiency over today's engines.

aircraft to exploit this highly efficient concept.

same sky, reducing delays and saving fuel.

aircraft and small helicopters as well.

components for key areas.

over current aircraft.

green fuel.

growth.

drag.

at 95%.

Historically, the reduction in airplane noise has been a major focus of airplane manufacturers because of its health effects and its impact on the quality of life of communities, especially in the vicinity of major metropolitan airports. As a result, there has been significant progress in achieving major reductions in the noise levels of airplanes over the past five decades, as shown in Figure 10 [10]. These gains have been achieved by technological innovations by the manufacturers in reducing the noise from the airframe, engines and undercarriage, as well as by making changes in operations. Worldwide, there has been a ten-fold increase since the 1970s in the number of airports that impose noise-related restrictions, as shown in Figure 11 [11]. Airports have imposed operating restrictions, and special attention has also been paid to the planning, development and management of airports for sustainability. Since 1980, the FAA has invested over \$5 billion in airport noise reduction.

In recent years, the joint MIT/Cambridge University project on "Silent Aircraft" has produced an innovative aircraft/engine design, shown in Figure 12, whose noise is imperceptible outside an urban airport [12]. In order to meet the ACARE and NASA goals of reducing the perceived noise by 50% relative to the current level by 2020, several new technology ideas are being investigated by the airplane and engine manufacturers to both reduce and shield the noise sources, as shown in the chart by Reynolds in Figure 13 [13]. The most promising for the near future are chevron nozzles, shielded landing gears and ultra-high-bypass engines with improved fan (geared fan and contra fan) and fan exhaust duct-liner technology. In addition, new flight path designs in ascent and descent can reduce the perceived noise levels in the vicinity of airports.
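The "reduce perceived noise by 50%" target can be translated into decibels. The sketch below assumes the common psychoacoustic rule of thumb (an assumption of this sketch, not a figure from the text) that perceived loudness roughly halves for every 10 dB reduction:

```python
import math

def db_change_for_loudness_ratio(ratio):
    """Decibel change needed for a target perceived-loudness ratio,
    assuming loudness roughly doubles/halves per 10 dB (loudness ~ 2**(dB/10))."""
    return 10.0 * math.log2(ratio)

# Halving perceived noise (the 50% goal) corresponds to roughly a 10 dB reduction.
delta_db = db_change_for_loudness_ratio(0.5)
print(f"50% perceived-noise goal is about {delta_db:.0f} dB")
```

On this rule of thumb the ACARE/NASA goal amounts to roughly a 10 dB cut in perceived noise levels around airports.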

Review of Technologies to Achieve Sustainable (Green) Aviation 437

Fig. 10. Reductions in noise levels of aircraft in the past thirty years [11].

Fig. 11. Number of airports with noise related restrictions in the past fifty years [10].

Fig. 12. Silent aircraft SAX-40 (joint MIT/Cambridge University design) [12].

Fig. 13. Evolution of noise reduction technologies [13].

#### **2.4 Emissions and fuel burn**

Aviation worldwide today consumes around 238 million tonnes of jet-kerosene per year. Jet-kerosene is only a very small part of the total world consumption of fossil fuel or crude oil: the world consumes 85 million barrels/day in total, aviation only 5 million. At present, aviation contributes only 2-3% of total CO2 emissions worldwide [14], as shown in Figure 14. However, it contributes 9% relative to the entire transportation sector. With air travel forecast to become 40% of total PKT by 2050 (Figure 1), it will become a major contributor to GHG emissions if immediate steps towards reducing the fuel burn by innovations in technology and operations, as well as alternatives to jet-kerosene, are not sought and put into effect.
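The quoted figures can be cross-checked with a rough calculation. The emission factor of about 3.16 kg CO2 per kg of jet fuel (the standard stoichiometric value) and the ~30 Gt/yr total anthropogenic CO2 are assumptions of this sketch, not figures from the text:

```python
# Rough cross-check of the consumption and emission figures quoted above.
JET_FUEL_MT_PER_YEAR = 238     # million tonnes of jet-kerosene (from the text)
CO2_PER_KG_FUEL = 3.16         # kg CO2 per kg fuel burned (assumed standard factor)
GLOBAL_CO2_GT = 30.0           # Gt CO2 per year, total anthropogenic (assumed ballpark)

# 238 Mt fuel * 3.16 -> Gt of CO2
aviation_co2_gt = JET_FUEL_MT_PER_YEAR * CO2_PER_KG_FUEL / 1000.0
share = 100.0 * aviation_co2_gt / GLOBAL_CO2_GT
oil_share = 100.0 * 5 / 85     # aviation's share of crude oil use (from the text)

print(f"aviation CO2  ~ {aviation_co2_gt:.2f} Gt/yr")
print(f"CO2 share     ~ {share:.1f}% (text says 2-3%)")
print(f"crude-oil use ~ {oil_share:.1f}% (5 of 85 Mbbl/day)")
```

The result, roughly 0.75 Gt/yr or about 2.5% of the assumed global total, is consistent with the 2-3% figure cited in the text.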



Fig. 14. CO2 emissions worldwide contributed by various economic sectors [14].

Of the exhausts emitted from the engine core, 92% are O2 and N2, 7.5% are composed of CO2 and H2O, with another 0.5% composed of NOx, HC, CO, SOx, other trace chemical species and carbon-based soot particulates. In addition to CO2 and NOx emissions, the formation of contrails and cirrus clouds (Figure 15) contributes significantly to radiative forcing (RF), which impacts climate change. This last effect is unique to aviation (in contrast to ground vehicles) because the majority of aircraft emissions are injected into the upper troposphere and lower stratosphere (typically 9-13 km in altitude). The impact of burning fossil fuels at 9-13 km altitude is approximately double that of burning the same fuels at ground level [15]. The present metric used to quantify the climate impact of aviation is radiative forcing. Radiative forcing is a measure of the change in the earth's radiative balance associated with atmospheric changes; positive forcing indicates a net warming tendency relative to pre-industrial times. Figures 16 and 17 show the IPCC (Intergovernmental Panel on Climate Change) estimated increase in total anthropogenic RF due to aviation-related emissions (excluding that due to contrails and cirrus clouds) from 1992 to 2050 [16]. It should be noted that in Figures 16 and 17 the RF scale is given in W/m2; RF is more commonly given in mW/m2, in which case the numbers in Figures 16 and 17 should be multiplied by 1000. The horizontal line in Figures 16 and 17 is indicative of the current level of scientific understanding of the impact of each exhaust species.

Fig. 15. Contrails & Cirrus Clouds.

Fig. 16. IPCC estimated Radiative Forcing (RF) due to emissions – 1992 [16].

Fig. 17. IPCC estimated Radiative Forcing (RF) due to emissions – 2050 [16].



It should be noted that the RF estimates for 2050 in Figure 17 are based on several assumptions about the growth in aviation, the state of technology etc., which are most likely to change. Based on the RF estimates shown in Figures 16 and 17, aviation is expected to account for 0.05 K of the 0.9 K global mean surface temperature rise expected to occur between 1990 and 2050 [15]. However, RF is not a good metric for weighing the relative importance of short-lived and long-lived emissions. Most importantly, the range of uncertainty about the climate impact of contrails and cirrus clouds remains substantial. According to a recent IPCC report, the best estimates for RF in 2005 were 10 (3-30) mW/m2 from linear contrails and 30 (10-80) mW/m2 from total aviation-induced cloudiness, where the numbers in brackets give the range of the 2/3 confidence limit [17]. As noted in Reference [17], "the tradeoff estimate of the CO2 RF in 2000 was 23.5 mW/m2. Despite the growth in CO2 RF between 2000 and 2005, aviation-induced cloudiness remains the greatest contributor to RF according to these estimates. Because of doubts about RF as a metric, as well as the data spread in cloudiness-related RF, the relative contribution of the two (CO2 and cloudiness) to climate change cannot be ascertained with confidence at the present time. However, the atmospheric conditions under which an aircraft will generate a persistent contrail (the Schmidt-Appleman criterion [18]) are well understood and can be predicted accurately for a particular aircraft.

Currently there is no technological fix to prevent contrail formation if the atmospheric conditions and engine exhaust characteristics satisfy the Schmidt-Appleman criterion. One assured way of reducing persistent contrail formation is to reduce aircraft traffic through regions of supersaturated air in which persistent contrails can form, by flying under, over or around these regions. However, this approach may not be acceptable commercially because of the increase in fuel burn, disruption in airline schedules, added ATM workload and additional operating costs, as well as the increase in CO2 and NOx emissions. Because contrail reduction involves an increase in CO2 and NOx emissions, the best environmental solution is not the complete avoidance of contrails, but a balanced result that minimizes climate impact. This requires a better understanding of the relationship between the properties of the atmosphere (temperature, humidity etc.), the size of the aircraft, the quantity of its emissions (water and particulates), and the extent of the persistent contrail and subsequent cirrus formation that results. The adoption of synthetic kerosene produced by the Fischer-Tropsch or some similar process offers the prospect of a substantial reduction in sulfate and black carbon particulate emissions. This is likely to reduce the extent of contrail and cirrus formation, but the extent of the reduction, as well as the extent to which it would reduce the fuel burn penalty of operational avoidance measures, requires further research. Based on the current status, it appears that fuel additives do not offer a significant reduction in contrail formation. Contrail avoidance measures, e.g. making modest changes in altitude, can reduce contrail formation appreciably with a small penalty in additional fuel burn." Increasing the cruise altitude and raising the engine pressure ratio can reduce CO, HC and CO2 emissions as well as decrease the fuel burn (improve the fuel efficiency) and facilitate noise reduction. Since a higher pressure ratio requires a higher flame temperature, however, the NOx formation rate increases. On the other hand, decreasing the cruise altitude and reducing the engine overall pressure ratio can reduce NOx but increase CO2 emissions. This should be an important consideration in the optimization of future aircraft and engine designs. Research is needed to understand the impact of cruise altitude on climate. *In addition, there is a need for new optimized aircraft and engine designs that provide a compromise*

*between minimizing the fuel burn and reducing the climate impact.* Lower NOx emissions can possibly be achieved by new combustor concepts, such as the flameless catalytic combustor, and technological improvements in fuel/air mixers using alternative fuels (biofuels), aided by active combustion control. These concepts/technologies should make it possible to meet the N+1 and N+2 generation goals (Figure 9) of achieving LTO NOx reductions of 60% and 75% respectively below the ICAO standard adopted at CAEP/6 (Committee on Aviation Environmental Protection). They should reduce the steepness of the trade-off between NOx and CO2 emissions and should therefore also help in making a significant contribution to the aircraft performance goal of reducing the fuel burn by 33% and 40% for the N+1 and N+2 generation aircraft respectively. Thus, there are three key drivers in emissions reductions, as shown in Figure 18 [19]: (a) innovative engine technologies and aircraft designs, (b) improvements in ATM and operations, and (c) alternative fuels, e.g. biofuels. This three-pronged approach can achieve the goals enunciated by ACARE and NASA by 2020 and beyond. These are discussed in the next few sections.

Fig. 18. Key drivers for emissions reductions [19].
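The Schmidt-Appleman criterion discussed in this section lends itself to a compact numerical sketch. The mixing-line slope G below uses standard kerosene values, and the threshold-temperature curve fit is the one commonly attributed to Schumann; all numerical coefficients and the example operating point are assumptions of this sketch, not figures from the text:

```python
import math

def mixing_line_slope(p_pa, eta, ei_h2o=1.23, q_fuel=43.2e6, cp=1004.0, eps=0.622):
    """Slope G [Pa/K] of the exhaust-plume mixing line in the Schmidt-Appleman
    contrail criterion. Assumed kerosene values: EI_H2O = 1.23 kg water per kg
    fuel, lower heating value 43.2 MJ/kg; eta is overall propulsion efficiency."""
    return ei_h2o * cp * p_pa / (eps * q_fuel * (1.0 - eta))

def threshold_temp_c(g):
    """Approximate liquid-saturation threshold temperature [deg C] below which
    a persistent contrail can form (curve fit quoted from the literature)."""
    x = math.log(g - 0.053)
    return -46.46 + 9.43 * x + 0.72 * x * x

# Example cruise condition: ~250 hPa, modern-turbofan efficiency ~0.33 (assumed).
g = mixing_line_slope(p_pa=25000.0, eta=0.33)
print(f"G ~ {g:.2f} Pa/K, threshold ~ {threshold_temp_c(g):.1f} C")
```

Note that a higher propulsion efficiency eta increases G and hence the threshold temperature, so more efficient engines form contrails over a wider range of cruise conditions, which is part of the fuel-burn/climate trade-off described above.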

#### **2.5 Innovative engine technologies**

In cruise conditions, the amount of fuel burned varies in inverse proportion to the propulsion efficiency and the lift-to-drag ratio. Aircraft and engine manufacturers in the U.S. and Europe, along with several research organizations, are developing new engine technologies aimed at improving the propulsion efficiency to reduce the fuel burn and simultaneously reduce NOx emissions and noise. The greatest gains in fuel burn reduction in the past sixty years (since the appearance of the jet engine) have come from better engines. The earliest engines were turbojets, in which all the air sucked in at the front is compressed, mixed with fuel and burned, providing thrust through a jet out the back (see Figure 13). Afterwards, more efficient turbofans were designed when it was realized that greater engine efficiency could be achieved by using some of the power of the jet to drive a fan that pushes some of the intake air through ducts around the core (see Figure 13). Other boosts in efficiency have come from better compressors and materials that let the core burn at higher pressure and temperature. As a result, according to the International Air Transport Association (IATA), new aircraft are 70% more fuel efficient than they were forty years ago. In 1998, passenger aircraft averaged 4.8 liters of fuel/100 km/passenger; the newest aircraft, the Airbus A380 and Boeing B787, use only three liters. Figure 19 shows the relative improvement in fuel efficiency of various aircraft engines since 1955 [20].

The current focus is on making turbofans even more efficient by leaving the fan in the open. Such a ductless "open rotor" design (essentially a high-tech propeller) would make larger fans possible; however, one would need to address the noise problem and how to fit such engines on the airframe. In the short-to-medium-haul market, where most fuel is burned, the open rotor offers an appreciable reduction in fuel burn relative to a turbofan engine of comparable technology, but at the expense of some reduction in cruise Mach number. It is worth noting here that in the mid-1980s GE invested significant effort in advanced turboprop (ATP) technology. The unducted fan (UDF), a GE36 ultra-high-bypass (UHB) engine on an MD-81 at the Farnborough air show in 1988 (Figure 20 [21]), created an enormous buzz in the air transportation industry. The author of this paper was at McDonnell Douglas during that period and played a small role in the airframe-engine integration study of the MD81 with the GE36 ATP. However, in spite of its potential for 30% savings in fuel consumption over existing turbofan engines, with comparable performance at speeds up to Mach 0.8 and altitudes up to 30,000 ft, for a variety of technical and business reasons the advanced turboprop concept never quite got off the ground [22].
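The inverse dependence of cruise fuel burn on propulsion efficiency and lift-to-drag ratio follows from the Breguet range equation; the sketch below makes it concrete. All cruise numbers (range, speed, L/D, TSFC) are illustrative assumptions, not data from the text:

```python
import math

def fuel_fraction(range_m, v_ms, l_over_d, tsfc, g=9.81):
    """Fraction of initial cruise mass burned as fuel over a cruise segment,
    from the Breguet range equation; tsfc is thrust-specific fuel consumption
    in kg/(N*s). Lower TSFC (better engines) or higher L/D cuts the fraction."""
    x = range_m * g * tsfc / (v_ms * l_over_d)
    return 1.0 - math.exp(-x)

# Illustrative long-haul cruise: 10,000 km at 250 m/s, TSFC ~ 1.6e-5 kg/(N*s).
base = fuel_fraction(1.0e7, 250.0, l_over_d=18.0, tsfc=1.6e-5)
better_ld = fuel_fraction(1.0e7, 250.0, l_over_d=18.0 * 1.15, tsfc=1.6e-5)
print(f"fuel fraction: {base:.3f} -> {better_ld:.3f} with +15% L/D")
```

With these assumed numbers, a 15% gain in L/D (or equivalently in propulsion efficiency via TSFC) cuts the trip fuel by roughly 11%, which is why both levers dominate the engine and airframe programs described here.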

Fig. 19. Relative improvement in fuel efficiency of various aircraft engines from 1955 to 2010 [20].


Fig. 20. GE36 Turbo-Prop demonstrator engine on MD-81 aircraft [21].

At present in Europe, under the auspices of NACRE (New Aircraft Concept Research Europe), Rolls-Royce and Airbus are making a joint study of the open rotor configurations (Figure 21), including wind-tunnel investigations of power plant installation effects. A key issue in future engine design is how to balance the conflicting aims of reducing fuel burn and NOx emissions (along with the other conflicting aims of reducing noise, weight, initial investment cost and maintenance cost). The results of these types of current and future projects should provide a sounder basis for making decisions between turbofan and open rotor engines for future aircraft. They should also take engine technology well towards its contribution to the goal of a 20% improvement in the installed engine fuel efficiency by 2020.

Fig. 21. Open-Rotor version of pro-active Green Aircraft in NACRE study [17].


Fig. 23. Boeing/NASA X-48B BWB technology demonstrator aircraft [23].

The other well known approach of reducing the profile drag is by the use of laminar flow control in one of its three forms - natural, hybrid or full. Natural laminar flow control was applied with great success in World War II on the P-51 Mustang fighter to give it an exceptional range. As a result there was significant effort devoted to the development of laminar flow airfoils after the end of World War II. In these airfoils, the reduction in friction drag was achieved by moving the transition farther back on the airfoil. In addition, the location of the maximum airfoil thickness was at about 60% of the chord which moved the shock system farther back and reduced the effects of boundary layer thickening and separation caused by it. However in spite of a large number of studies, the success in the laboratory in reducing the drag was never realized on medium size aircraft with swept wings. Therefore, its application has been restricted by a combination of size and wing sweep either to small aircraft with swept wings or medium-sized aircraft with zero or very little sweep. The Pro-Active Green Aircraft in the NACRE project (Figures 21 & 22) is designed to exploit natural laminar flow control and has slightly swept forward

Fig. 24. Honda Jet [24].

Fig. 22. Turbofan version of pro-active Green Aircraft in NACRE study [17].

#### **2.6 Innovative aircraft designs**

As noted in Reference [17], "the classic swept-winged aircraft with a light alloy structure has been evolving for some sixty years and the scope for increasing its lift-to-drag ratio (*L*/*D*), if its boundary layers remain fully turbulent, is by now exceedingly limited. Nevertheless, it is well established that increasing *L*/*D* is one of the most powerful means of reducing fuel burn. The three ways of increasing *L*/*D* are to (a) increase the wing span, (b) reduce the vortex drag factor κ and (c) reduce the profile drag area. The vortex drag factor is a measure of the degree to which the span-wise lift distribution over the wing departs from the theoretical ideal. Current swept-wing aircraft are highly developed and there is little scope for further improvement. A flying wing may enable some additional small reduction in κ, however realistically; there is no real prospect of a significant reduction in fuel burn by altering span-wise loading distributions. Furthermore, increasing the wing span increases wing weight. Current long-range aircraft are optimized to minimize the fuel burn at current cruise Mach numbers. In a successful design the balance between the wing span and wing weight is close to optimum. However, the change to advanced composite materials for the wing structure should result in an optimized wing of greater span; both the B787 and Airbus A350 reflect this. If cruise Mach number is reduced, reducing wing sweep also enables the wing to be optimized at a greater span. The turbofan version of Pro-Active Green Aircraft (Figure 22) included in the NACRE study features a slightly forward swept wing optimized at a significantly higher than usual span. This aircraft is aimed at an appreciable increase in *L*/*D* at the expense of some reduction in cruise Mach number. The third option for increasing *L*/*D* is to reduce the profile drag of the aircraft. This is seen as the option with the greatest mid-term and long-term potential. 
For large aircraft, the adoption of a blended wing-body (BWB) layout reduces profile drag by about 30%, providing an increase of around 15% in *L*/*D* (estimates of 15%-20% have been published)." Work on such configurations, both by Boeing (the X-48B, wind-tunnel and flight tested at model scale by NASA [Figure 23]) and by Airbus within the NACRE project, is proceeding. At present, the first applications of the Boeing BWB are envisaged to be in military roles or as a freighter, with 2030 suggested as the earliest entry-to-service date for a civil passenger aircraft.
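The leverage of *L*/*D* on fuel burn can be made explicit with the Breguet range equation, a standard textbook relation quoted here for context rather than taken from Reference [17]:

$$R = \frac{V}{c}\,\frac{L}{D}\,\ln\frac{W_i}{W_f}$$

where $R$ is the range, $V$ the cruise speed, $c$ the thrust-specific fuel consumption, and $W_i$, $W_f$ the initial and final aircraft weights. Solving for the fuel burned on a fixed mission gives

$$W_i - W_f = W_f\left[\exp\!\left(\frac{R\,c}{V\,(L/D)}\right) - 1\right]$$

so a fractional increase in *L*/*D* (or decrease in *c*) yields a comparable fractional decrease in fuel burn, which is why the three drag-reduction routes listed above are so heavily pursued.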

Fig. 22. Turbofan version of pro-active Green Aircraft in NACRE study [17].


Fig. 23. Boeing/NASA X-48B BWB technology demonstrator aircraft [23].

Fig. 24. Honda Jet [24].

Review of Technologies to Achieve Sustainable (Green) Aviation 447

The other well-known approach to reducing the profile drag is the use of laminar flow control in one of its three forms: natural, hybrid or full. Natural laminar flow control was applied with great success in World War II on the P-51 Mustang fighter to give it an exceptional range. As a result, significant effort was devoted to the development of laminar flow airfoils after the end of World War II. In these airfoils, the reduction in friction drag was achieved by moving the transition farther back on the airfoil. In addition, the location of the maximum airfoil thickness was at about 60% of the chord, which moved the shock system farther back and reduced the effects of boundary layer thickening and separation caused by it. However, in spite of a large number of studies, the success in the laboratory in reducing the drag was never realized on medium-sized aircraft with swept wings. Its application has therefore been restricted, by a combination of size and wing sweep, either to small aircraft with swept wings or to medium-sized aircraft with zero or very little sweep. The Pro-Active Green Aircraft in the NACRE project (Figures 21 & 22) is designed to exploit natural laminar flow control and has slightly swept-forward wings to avoid contamination of the flow over the wing by the turbulent boundary layer on the fuselage. "Hybrid laminar flow control employs suction over the forward upper surface of the wing to stabilize the boundary layer. This enables the drag-reducing principles that underlie natural laminar flow control to be applied to larger, swept-winged aircraft up to typically the size of the A310. The use of suction to maintain laminar flow over the first half of an airfoil surface has been successfully demonstrated in flight on a B757 wing and an A320 fin. The aerodynamic principles are well understood but the engineering of efficient, reliable, lightweight suction systems requires further work. Thereafter, demonstration of the practicality of the system and assessment of the maintenance and other operational problems that it may encounter will require an extended period of operational validation. The application of suction to maintain laminar flow over the entire surface of a flying wing airliner was proposed by Handley Page in the early 1960s. The proposal was based on the substantial body of research into full laminar flow control, including flight demonstrations, over the preceding decade. Full laminar flow control may have the potential to double *L*/*D* relative to current standards [17]." The recently unveiled "Honda Jet" (Figure 24) combines several innovative aircraft and engine design features, namely an over-the-wing (OTW) engine mount, a natural laminar flow (NLF) wing, an all-composite fuselage and the HF-120 turbofan engine, which give it 30-35% better fuel efficiency and a higher cruise speed than conventional light business jets. This is the range of efficiency that can be achieved for the N+1 generation conventional tube-and-wing aircraft by 2015. Saeed et al. [25] have recently conducted a conceptual design study of a Laminar Flying Wing (LFW) aircraft capable of carrying 120 passengers.
They have estimated that, subject to the constraint of a low cruise Mach number of 0.58, LFC has the potential to reduce aircraft fuel burn by just over 70%, to about 6 grams per passenger-km (PKM), with a trans-Atlantic range of 4,125 nautical miles. Studies of this nature show the promise of innovative aircraft designs for reducing fuel burn.

Figure 9 shows the NASA goals of achieving a 33% and a 40% reduction in fuel burn for N+1 and N+2 generation aircraft respectively by using advanced propulsion technologies, advanced materials and structures, and improvements in aerodynamics and subsystems. Collier [26] from NASA Langley has provided a detailed outline of how such savings in fuel burn can be achieved. He has estimated that for an N+1 generation conventional small twin aircraft (162 passengers and 2,940 nm range), a 21% reduction in fuel burn can be achieved by using advanced propulsion technologies, advanced materials and structures, and by improvements in aerodynamics and subsystems. For an advanced small twin, an additional 12.3% saving in fuel burn can be achieved by using hybrid laminar flow control, as shown in Figure 25.

For an N+2 generation aircraft (300 passengers and 7,500 nm range) flying at a cruise Mach number of 0.85, a 40% saving in fuel burn relative to the baseline B777-200ER/GE90 can be achieved by a combination of a hybrid wing-body configuration (with an all-composite fuselage), advanced engine and airframe technologies, embedded engines with BLI inlets and laminar flow, as shown in Figure 26 [26]. For the baseline aircraft, the fuel burn at Mach 0.85 with 300 passengers for a 7,500 nm mission is 237,000 lbs. The N+2 generation aircraft should require 141,100 lbs of fuel. As discussed in the next few sections, an additional saving of about 10% in fuel burn can be achieved by operational improvements.
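The percentage figures above can be checked with a few lines of arithmetic. The sketch below simply recomputes them from the numbers quoted in the text; no values beyond those are assumed.

```python
# Sanity check of the quoted N+1 and N+2 fuel-burn figures (Collier [26]).

# N+1 small twin: 21% from advanced propulsion/materials/aerodynamics plus
# 12.3% from hybrid laminar flow control, quoted as additive percentages.
n1_total = 21.0 + 12.3                       # percent reduction vs B737/CFM56
print(f"N+1 total reduction: {n1_total:.1f}%")   # 33.3%, close to the 33% goal

# N+2 hybrid wing-body: baseline B777-200ER/GE90 burns 237,000 lb on the
# 7,500 nm / 300-passenger mission; the N+2 design is quoted at 141,100 lb.
baseline_lb = 237_000
n2_lb = 141_100
n2_saving = 100.0 * (1.0 - n2_lb / baseline_lb)
print(f"N+2 reduction: {n2_saving:.1f}%")        # 40.5%, close to the 40% goal
```

The additive N+1 percentages reproduce the 33% NASA goal, and the quoted fuel weights reproduce the 40% N+2 goal.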


Fig. 25. Reduction in fuel burn for N+1 generation aircraft relative to baseline B737/CFM56 using advanced technologies [26].

Fig. 26. Reduction in fuel burn for N+2 generation aircraft relative to baseline B777-200ER/GE90 using advanced technologies [26].


#### **2.7 Operational improvements/changes**

#### **2.7.1 Improvement in air traffic management (ATM) infrastructure**

There are many improvements in operations that are being introduced, or will be introduced in the relatively near future, that can reduce CO2 emissions significantly. Foremost among these is the reduction of inefficiencies in ATM, which give rise to routes with dog-legs, stacking at busy airports, queuing for a departure slot with engines running, etc. The U.S. Next Generation Air Transportation System (NextGen) architecture and the European air traffic control infrastructure modernization program, SESAR (Single European Sky ATM Research Program), represent an ambitious and comprehensive attack on this problem. As described in the U.S. National Academy of Sciences (NAS) report [27], "NextGen is an example of active networking technology that updates itself with real-time shared information and tailors itself to the individual needs of all U.S. aircraft. NextGen's computerized air transportation network stresses adaptability by enabling aircraft to immediately adjust to ever-changing factors such as weather, traffic congestion, aircraft position via GPS, flight trajectory patterns and security issues. By 2025, all aircraft and airports in U.S. airspace will be connected to the NextGen network and will continually share information in real time to *improve efficiency, safety, and absorb the predicted increase in air transportation.*" It is worth noting here that operational measures, which can apply to almost the entire world fleet, can have a greater impact, sooner, than the introduction of new aircraft and engine technologies, which can take perhaps 30 years to fully penetrate the world fleet.

#### **2.7.2 Air-to-air refueling (AAR) with medium-range aircraft for long-haul travel**

One particular operational measure that has been advocated is the use of medium-range aircraft, with intermediate stops, for long-haul travel. It has been estimated, using a simple parametric analysis, that undertaking a journey of 15,000 km in three hops in an aircraft with a design range of 5,000 km would use 29% less fuel than doing the trip in a single flight in a 15,000 km design. Hahn [28] and Creemers & Slingerland [29] have performed analyses to address this issue using sophisticated aircraft design synthesis methods. Hahn [28], analyzing the assessment for a 15,000 km journey in one stage or three, predicted a fuel saving of 29%. Creemers & Slingerland [29], considering a B747-400 (range 13,334 km) as the baseline long-range aircraft, designed an aircraft with the same fuselage and passenger capacity (420) but for half the design range (6,672 km). This aircraft was predicted to do the long-haul journey in two hops with a 27% fuel saving and, at a fuel cost of \$70 per barrel, a DOC saving of 9%. Nangia [30] has shown that fuel burn savings of as much as 50% are achievable by using a 5,000 km design for a 15,000 km journey, since a medium-range aircraft can carry a much higher share of its maximum payload as passengers. This difference, which appears essentially to be the difference between medium-range single-aisle and long-range twin-aisle aircraft, was not a feature of either the study of Hahn [28] or that of Creemers & Slingerland [29], which used the same fuselage for both the long- and medium-range designs. This highlights the importance of cabin dimensions and layouts in considering future designs, in which, both environmentally and commercially, seat-kilometers per gallon becomes an increasingly important objective. The full system assessment of this proposition, using optimized medium-range aircraft, needs further investigation.
In order to avoid the intermediate refueling stops, air-to-air refueling (AAR) (Figure 27) has been suggested as a means of enabling medium-range designs to be used on long-haul operations. Nangia has now published a number of papers reporting his work on AAR, which indicate substantial fuel burn savings even after the fuel used by the tanker fleet is taken into account [30, 31].
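The staging argument can be sketched with the standard Breguet range equation. The range factor below is an illustrative assumption (not a figure from Hahn [28], Creemers & Slingerland [29] or Nangia [30]), and the model deliberately ignores the further gains from the lighter structure of a short-range design:

```python
import math

# Breguet range equation: R = K * ln(W_initial / W_final), with range
# factor K = (V / c) * (L / D).  K below is an illustrative assumption
# (roughly L/D ~ 18 with a modern turbofan), not a value from the studies
# cited in the text.
K_km = 30_000.0

def fuel_fraction(range_km: float) -> float:
    """Fuel burned per unit landing weight for one leg of the given range."""
    return math.exp(range_km / K_km) - 1.0

one_hop   = fuel_fraction(15_000)        # single 15,000 km flight
three_hop = 3 * fuel_fraction(5_000)     # three 5,000 km legs, refuelled between

saving = 1.0 - three_hop / one_hop
print(f"fuel saving from staging alone: {saving:.1%}")   # 16.1%
```

With these assumptions, staging alone yields roughly a 16% saving, purely from the exponential fuel-to-carry-fuel penalty; the balance of the 29-50% savings quoted above comes from the smaller, lighter airframe that a 5,000 km design permits.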

Fig. 27. Air-to-Air Refueling [30].

448 Recent Advances in Aircraft Technology


Fig. 28. Savings in fuel burn with Air-to-Air Refuelling (AAR) for long haul flights [31].

Nangia [31] has shown (Figure 28) that an aircraft with *L/D* = 20 would require 46,147 lbs, 161,269 lbs, and 263,073 lbs of fuel to cover ranges of 3,000, 6,000 and 9,000 nautical miles (nm) respectively. With AAR, it would require 92,294 lbs and 138,441 lbs of fuel for ranges of 6,000 and 9,000 nm respectively, indicating savings of 43% and 47% in fuel burn relative to that required without AAR. Accounting for the fuel required by the air tanker (9,000 lbs for one refueling over a range of 6,000 nm and 18,000 lbs for two refuelings over a range of 9,000 nm), the net savings in fuel burn with AAR are 37% and 41% for ranges of 6,000 nm and 9,000 nm respectively. However, it is paramount that with AAR the absolute safety of the aircraft is assured.
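Nangia's percentages can be reproduced directly. The short check below uses only the fuel weights quoted above; with AAR, the 6,000 nm and 9,000 nm missions are flown as two and three 3,000 nm stages respectively:

```python
# Fuel required without AAR for an aircraft with L/D = 20 (Nangia [31]).
no_aar = {3_000: 46_147, 6_000: 161_269, 9_000: 263_073}   # lb per mission

# With AAR the long missions are flown as chained 3,000 nm stages.
with_aar = {6_000: 2 * no_aar[3_000], 9_000: 3 * no_aar[3_000]}

# Tanker fuel: one refueling at 6,000 nm, two at 9,000 nm.
tanker_lb = {6_000: 9_000, 9_000: 18_000}

for rng in (6_000, 9_000):
    gross = 1 - with_aar[rng] / no_aar[rng]                  # saving before tanker fuel
    net = 1 - (with_aar[rng] + tanker_lb[rng]) / no_aar[rng] # saving after tanker fuel
    print(f"{rng} nm: gross {gross:.0%}, net {net:.0%}")
# 6000 nm: gross 43%, net 37%
# 9000 nm: gross 47%, net 41%
```

The chained-stage fuel weights (92,294 lbs and 138,441 lbs) and all four percentages match the figures quoted in the text.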


#### **2.7.3 Close Formation Flying (CFF)**

The possibility of using CFF to reduce fuel burn or to extend range is well known. As stated by Nangia [31], "aircraft formations (Figure 29) occur for several reasons e.g. during displays or in AAR but they are not maintained for any significant length of time from the fuel efficiency perspective." The reason is that flying in formation requires extreme safety measures, with sensors coupled automatically to the control systems of the individual aircraft. Furthermore, flying a close formation through clouds or in a gusty environment may not be practical. The obvious benefit of flying in formation is a more uniform downwash velocity field, which minimizes the energy shed into the wake and hence the propulsive energy consumed. Another benefit is the cancellation of the vortices shed from the wing-tips of the individual airplanes, except the two outermost ones. How effective this cancellation is depends upon the practicality of the achievable spacing among the aircraft. There would also be a substantial benefit in the elimination of vortex contrails and cirrus clouds. Recently, NASA conducted tests on two-F/A-18 formations [32]. It was shown that the benefits of CFF occur at certain geometric relationships in the formation; namely, the trailing aircraft should overlap the wake of the leading aircraft by 10-15% of the semi-span in this case. Jenkinson [33] suggested that CFF of several large aircraft is more efficient than flying a single very large aircraft. The aircraft could take off from different airports and then fly in formation over large distances before peeling off to land at their required destinations. Bower et al. [34] have recently investigated a two-aircraft echelon formation and a three-aircraft formation of three different aircraft types and analyzed the fuel burn. Their study determined the fuel savings and the differences in flight times that result from applying CFF to missions of different stage lengths and different spacings between the cities of origin.
For a two-aircraft formation, the maximum fuel savings were 4% with a tip-to-tip gap between the aircraft equal to 10% of the span, and 10% with a tip overlap equal to 10% of the span. For the three-aircraft inverted-V formation, the maximum fuel savings were about 7% with tip-to-tip gaps equal to 10% of the span and about 16% with tip overlaps equal to 10% of the span.
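For context, a rough upper bound on these savings follows from lifting-line theory: *n* aircraft flying as a single optimally loaded formation can ideally cut each aircraft's induced drag by about a factor of *n*. The sketch below is an illustrative bound, assuming induced drag is about 40% of cruise drag (an assumed typical value, not a figure from the cited studies); it is not Bower et al.'s analysis:

```python
def ideal_formation_saving(n: int, induced_share: float = 0.4) -> float:
    """Ideal fleet-average drag (hence roughly fuel) saving for n aircraft.

    Assumes induced drag is `induced_share` of cruise drag (an assumed
    typical value) and that the formation recovers the ideal 1/n
    induced-drag factor per aircraft from lifting-line theory.
    """
    return induced_share * (1.0 - 1.0 / n)

for n in (2, 3, 5):
    print(f"{n} aircraft: ideal saving {ideal_formation_saving(n):.0%}")
# 2 aircraft: ideal saving 20%
# 3 aircraft: ideal saving 27%
# 5 aircraft: ideal saving 32%
```

The practical savings quoted above (4-16%) are, as expected, only a fraction of this ideal bound, since safe spacings recover part of the theoretical induced-drag reduction.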

Fig. 29. Three different aircraft types in CFF [31].


Fig. 30. Five FedEx aircraft in Formation Flight enroute from Pacific Northwest to Memphis [34].

Bower et al. [34] conducted a case study to examine the effect of formation flight on five FedEx flights from the Pacific Northwest to Memphis, TN. The purpose of this study was to quantify the fuel burn reduction achievable in a commercial setting without changing the flight schedule. With tip-to-tip gaps of about 10% of the span, it was shown that fuel savings of approximately 4% could be achieved for the set of five flights. With a tip-to-tip overlap of about 10% of the span, the overall fuel savings were about 11.5% if the schedule was unchanged. This translated into a saving of approximately 700,000 gallons of fuel per year for this set of five flights. Figure 30 shows the three types of aircraft employed in the study: two Boeing B727-200s, two DC-10-30s and one Airbus A300-600F. It should be noted that in CFF each aircraft will experience off-design forces and moments. It is important that these are adequately modeled and efficiently controlled. Simply using aileron may trim out the induced roll, but at the expense of drag. As Bower et al. [34] have shown, however, it is possible to realize savings in fuel burn with existing aircraft by suitably tailoring the formation.

#### **2.7.4 Tailored arrivals**

Boeing [35] is working with several airports, airlines and other partners around the world to develop tools for "tailored arrivals", which can reduce fuel burn, lower the controller workload and allow for better scheduling and passenger connections (Figure 31). To optimize tailored arrivals, additional controller automation tools are needed. Boeing completed a trial of the Speed and Route Advisor (SARA) with the Dutch air traffic control agency (LVNL) and Eurocontrol in April/May 2009. SARA delivered traffic within 30 seconds of the planned time on 80% of approaches at Schiphol airport in the Netherlands, compared with a baseline of 67% within 2 minutes. At San Francisco airport, more than 1,700 complete and partial tailored arrivals were completed between December 2007 and June 2009 using B777 and B747 aircraft. It has been found that tailored arrivals save an average of 950 kg of fuel and approximately \$950 per approach. Complete tailored arrivals saved approximately 40% of the fuel used in arrivals. For a one-year period, four participating

Review of Technologies to Achieve Sustainable (Green) Aviation 453

designs and maximum take-off weight MTOW for design range from 3,000 to 12,000 nm. From Nangia's study [31], it is clear that 3,000nm aircraft can provide substantial savings in fuel burn by having less weight and can be used for long range flight by using AAR. In past twenty years, each new aircraft type has achieved 10-15% gain in fuel efficiency. Additional achievements in fuel efficiency by improvements in airframe and engine design will take some time, however, several studies have shown that it is possible to reduce fuel burn significantly by instituting operational measures such as more efficient Air-Traffic Management (ATM), Air-to-Air Refueling (AAR), Close Formation Flying (CFF), Tailored

Fig. 32. Aircraft designs, with fixed fuselage, 250 passengers and CL, for different ranges of

All forms of powered ground and air transportation are experiencing the pressure of the need to mitigate greenhouse gas (GHG) emissions to arrest their impact on climate change. In addition the high price of fuel (oil reaching \$149/barrel during summer of 2008) as well as the need for energy security are driving an urgent search for alternative fuels, in particular the biofuels. There is emphasis on both the improvements in energy efficiency and new alternative fuels. Aviation is particularly sensitive to these pressures since, for many years, no near term alternative to kerosene has been identified. Until recently, biofuels have not been considered cost competitive to kerosene. An important much desired characteristic of an alternative fuel is whether it can be used without any change to the aircraft or engines. The attractions of such a *drop-in fuel* are clear: it does not require the delivery of new aircraft but the environmental impact of all aircraft flying today can be significantly reduced. Non-drop-in fuels, such as hydrogen or methane hydrates, are unlikely to be used before 2050. The key criteria in identifying that a new alternative fuel would be beneficial in reducing CO2 emissions should be based on the life cycle analysis of CO2; the life-cycle CO2 generation must be less than that of kerosene. Many first generation biofuels have performed poorly against this criterion, though second generation biofuels appear to be far more promising. Furthermore, it is important that there are no adverse sideeffects arising from production of the feedstock for biofuel generation, such as adverse impact on farming land, fresh-water supply, virgin rain-forests and peat-lands, food prices, etc. Algae and halophytes (salt-tolerant plants irrigated with sea/saline water) are emerging as potential sustainable feedstock solutions. 
The alternative fuels need to meet specific aviation requirements and essentially should have the key chemical characteristics of kerosene, that is they won't freeze at flying altitude and they would have a high enough

Arrivals, and by reducing the ratio of empty weight to payload.

operation [30, 31].

**2.9 Alternative fuels** 

airlines saved more than 524,000 kg of fuel and reduced the carbon emissions by 1.6 million kg.

Fig. 31. Airports and Partners participating in the concept of Tailored Arrivals [35].
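The carbon figure can be checked against the fuel figure: burning a kilogram of kerosene yields roughly 3.16 kg of CO2, a standard emission factor that follows from the fuel's carbon content (an assumption here, not stated in the source). A minimal sketch:

```python
# Consistency check: CO2 avoided from fuel saved, assuming the widely used
# emission factor of ~3.16 kg CO2 per kg of jet fuel burned (assumed value,
# not given in the source).
CO2_PER_KG_FUEL = 3.16               # kg CO2 / kg fuel

fuel_saved_kg = 524_000              # one-year savings of the four airlines
co2_avoided_kg = fuel_saved_kg * CO2_PER_KG_FUEL

print(f"{co2_avoided_kg / 1e6:.2f} million kg CO2")  # → 1.66 million kg
```

The result, about 1.66 million kg, is consistent with the reported figure of 1.6 million kg.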

## **2.8 Savings in fuel burn by aircraft weight reduction**

It is well known that substantial savings in fuel burn can be achieved by reducing the ratio of the empty weight to payload of an aircraft. It can be accomplished by the development and use of lighter and stronger advanced composites, and by reducing the design range and cruise Mach number.

### **2.8.1 Aircraft weight reduction by use of advanced composites**

Reducing the weight of an aircraft is one of the most powerful means of reducing the fuel burn. Boeing and Airbus, as well as other Business and General Aviation aircraft manufacturers are investing in advanced composites which have the prospects of being lighter and stronger than the present carbon fiber composites (CFC). The replacement of structural aluminum alloy with carbon fiber composite is the most powerful weight reducing option currently available to the aircraft designer working towards a given payload-range requirement. The Boeing B787 and Airbus A350 have both taken this step, having wings and fuselage made with CFC. Most new designs are likely to take this path.

#### **2.8.2 Aircraft weight reduction by reducing the design range**

Although the historic trend has been in the opposite direction, another powerful means of reducing the weight of an aircraft is to reduce its design range. The study by Hahn [28] has shown that by reducing the design range from 15,000 km to 5,000 km, with the fuselage and passenger accommodation fixed, it is possible to reduce the operational empty weight (OEW) by 29%. The study by Creemers & Slingerland [29] noted a 17% reduction in OEW on halving the design range from 13,334 km to 6,672 km. Nangia [30, 31] has also shown that, with the fuselage and number of passengers fixed, wing area increases rapidly as the design range increases, to contain the fuel needed and to maintain CL. Figure 32 shows the aircraft designs and maximum take-off weight (MTOW) for design ranges from 3,000 to 12,000 nm. From Nangia's study [31], it is clear that a 3,000 nm aircraft can provide substantial savings in fuel burn by having less weight, and can still be used for long-range flights with AAR. In the past twenty years, each new aircraft type has achieved a 10-15% gain in fuel efficiency. Additional gains from improvements in airframe and engine design will take some time; however, several studies have shown that fuel burn can be reduced significantly by instituting operational measures such as more efficient Air-Traffic Management (ATM), Air-to-Air Refueling (AAR), Close Formation Flying (CFF) and Tailored Arrivals, and by reducing the ratio of empty weight to payload.

Fig. 32. Aircraft designs, with fixed fuselage, 250 passengers and CL, for different ranges of operation [30, 31].
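The rapid growth of fuel load, wing area and MTOW with design range follows from the Breguet range equation, R = (V/c)(L/D) ln(Wi/Wf): the cruise fuel fraction grows exponentially with range. The sketch below uses illustrative long-haul values for cruise speed, specific fuel consumption and L/D; these are assumptions for the sketch, not parameters from Nangia's study [30, 31].

```python
from math import exp

# Breguet range equation: R = (V / c) * (L/D) * ln(W_initial / W_final),
# so the cruise-fuel fraction is  W_fuel / W_initial = 1 - exp(-R / (V * (L/D) / c)).
# Illustrative long-haul values (assumed, not from [30, 31]):
V = 490.0        # cruise speed [kt]
c = 0.55         # thrust-specific fuel consumption [1/hr]
L_over_D = 17.0  # cruise lift-to-drag ratio

def fuel_fraction(range_nm: float) -> float:
    """Cruise fuel burned as a fraction of initial weight."""
    range_factor = (V / c) * L_over_D   # [nm]
    return 1.0 - exp(-range_nm / range_factor)

for R in (3_000, 6_000, 12_000):
    print(f"{R:>6} nm: fuel fraction = {fuel_fraction(R):.2f}")
# →  3000 nm: 0.18,  6000 nm: 0.33,  12000 nm: 0.55
```

With these values, a 12,000 nm design must devote roughly three times the weight fraction to fuel that a 3,000 nm design does, which is why the short-range aircraft refueled in flight by AAR comes out so much lighter.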

#### **2.9 Alternative fuels**

452 Recent Advances in Aircraft Technology


All forms of powered ground and air transportation are experiencing pressure to mitigate greenhouse gas (GHG) emissions and arrest their impact on climate change. In addition, the high price of fuel (oil reached \$149/barrel during the summer of 2008) as well as the need for energy security are driving an urgent search for alternative fuels, in particular biofuels. There is emphasis both on improvements in energy efficiency and on new alternative fuels. Aviation is particularly sensitive to these pressures since, for many years, no near-term alternative to kerosene had been identified, and until recently biofuels were not considered cost-competitive with kerosene. An important, much-desired characteristic of an alternative fuel is whether it can be used without any change to the aircraft or engines. The attractions of such a *drop-in fuel* are clear: it does not require the delivery of new aircraft, and the environmental impact of all aircraft flying today can be significantly reduced. Non-drop-in fuels, such as hydrogen or methane hydrates, are unlikely to be used before 2050. The key criterion for judging whether a new alternative fuel would be beneficial in reducing CO2 emissions should be based on a life-cycle analysis of CO2: the life-cycle CO2 generation must be less than that of kerosene. Many first-generation biofuels have performed poorly against this criterion, though second-generation biofuels appear far more promising. Furthermore, it is important that there are no adverse side-effects arising from production of the feedstock for biofuel generation, such as adverse impacts on farming land, fresh-water supply, virgin rain-forests and peat-lands, food prices, etc. Algae and halophytes (salt-tolerant plants irrigated with sea/saline water) are emerging as potential sustainable feedstock solutions.

Alternative fuels need to meet specific aviation requirements and essentially should have the key chemical characteristics of kerosene: they must not freeze at flying altitude and must have a high enough energy content to power an aircraft's jet engines. In addition, an alternative fuel should have good high-temperature thermal stability in the engine and good storage stability over time.
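The life-cycle criterion stated above reduces to a simple comparison of well-to-wake emissions per unit of fuel energy. The sketch below is illustrative only: the kerosene baseline of roughly 89 gCO2e/MJ is an approximate published figure, and the candidate-fuel numbers are invented to show how a first-generation fuel can fail the test while a second-generation one passes.

```python
# Selection criterion for an alternative fuel: its life-cycle (well-to-wake)
# CO2e per unit energy must be below that of conventional kerosene.
# All figures here are illustrative assumptions, not source data.
KEROSENE_LIFECYCLE = 89.0  # gCO2e/MJ, approximate conventional jet fuel baseline

def beats_kerosene(production_gco2e_per_mj: float,
                   combustion_gco2e_per_mj: float) -> bool:
    """True if the candidate fuel's well-to-wake CO2e beats the kerosene baseline."""
    return production_gco2e_per_mj + combustion_gco2e_per_mj < KEROSENE_LIFECYCLE

# Combustion CO2 of a biofuel is treated as biogenic (offset by feedstock growth).
# A first-generation fuel with heavy land-use-change emissions still fails:
print(beats_kerosene(production_gco2e_per_mj=120.0, combustion_gco2e_per_mj=0.0))  # → False
# A second-generation Bio-SPK with low production emissions passes:
print(beats_kerosene(production_gco2e_per_mj=25.0, combustion_gco2e_per_mj=0.0))   # → True
```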

Interest in biofuels for civil aircraft has increased dramatically in recent years, and the focus of the aviation industry on what is and what is not credible in this arena has sharpened. It is clear that a *'drop-in'* replacement for kerosene, i.e. synthetic kerosene, appears to be the only realistic possibility in the foreseeable future. The potential of such bio-derived synthetic paraffinic kerosene (Bio-SPK) to reduce the net CO2 emissions from aviation may well match or exceed that of advances in airframe and engine technologies, and perhaps may achieve reductions across the world fleet sooner than new technologies. In addition, since synthetic kerosene produces substantially less black carbon and sulphate aerosols than kerosene from oil wells, there is a possibility that its use will reduce contrail and cirrus formation as well.

Boeing, Airbus and the engine manufacturers believe that present engine technology can operate on biofuels (tests are very promising) and that within 5 to 15 years the aviation industry can convert to biofuels. On 19 June 2009, Billy Glover of Boeing made a presentation to the press at the Paris air show [35] describing Boeing's "Sustainable Biofuels Research and Technology Program." Tables I and II show comparisons of key fuel properties of currently used Jet A/Jet A-1 fuel with those of Bio-SPK fuel derived from three different feedstocks (Jatropha, Jatropha/Algae, and Jatropha/Algae/Camelina), for neat fuel and blends respectively. All Bio-SPK blends met or exceeded the aviation jet fuel requirements. In this presentation, Boeing declared that they are preparing a comprehensive report on Bio-SPK fuels for submittal to ASTM International and expect approval in 2010. Boeing is working across the industry on regional biofuel commercialization projects. There have already been a few experimental flights operated by several airlines using biofuel blends, and many more are planned in the near future.


Table I. Key Biofuel (Neat) and Jet/Jet A-1 Fuel properties comparison [35].



ANZ = Air New Zealand, CAL = Continental Airlines, JAL = Japan Airlines

Table II. Key Biofuel (Blend) and Jet/Jet A-1 fuel properties comparison [35].

On 24 February 2008, Virgin Atlantic operated a B747-400 on a 20% biofuel/80% kerosene blend on a short flight between London-Heathrow and Amsterdam. This was the first time a commercial aircraft had flown on biofuel, and it was the result of a joint initiative between Virgin Atlantic, Boeing and GE. On 30 December 2008, Air New Zealand (ANZ) conducted a two-hour test flight of a B747-400 from Auckland airport with one engine powered by a 50-50 blend (B50) of biofuel (from Jatropha) and conventional Jet-A1 fuel. The B50 fuel was found to be more efficient, and ANZ has announced plans to use B50 for 10% of its needs by 2013. The test flight was carried out in partnership with Boeing, Rolls-Royce and Honeywell's refining technology subsidiary UOP, with support from Terasol Energy. On 7 January 2009, Continental Airlines (CAL) completed a 90-minute test flight using biofuel derived from algae and Jatropha: a B737-800 flew from Houston with one engine operating on a 50-50 blend of biofuel and conventional fuel (B50) and the other using all conventional fuel for the purpose of comparison. The biofuel-mix engine used 3,600 lbs of fuel compared to 3,700 lbs used by the conventional engine. On 30 January 2009, Japan Airlines (JAL) became the fourth airline to use a B50 blend of Jatropha (16%), algae (<1%) and Camelina (84%), on the third engine of a 747-300 in a one-hour test flight. It was again reported that the biofuel was more fuel efficient than 100% Jet-A fuel. It should be noted that in all the above demonstrations, the biofuel came from sustainable feedstocks (see Tables I and II), sources that neither compete with staple food crops nor cause deforestation. It is also worth mentioning that on 1 February 2008, an Airbus A380 flew from Filton, U.K. to Toulouse, France with one of its Rolls-Royce engines powered by an alternative, synthetic gas-to-liquid (GTL) jet fuel.

Airbus and Qatar Airways are now partners in a GTL consortium, which also includes Shell International Petroleum, to investigate the use of GTL neat/blend vis-à-vis conventional jet fuel. From an environmental standpoint, it is encouraging that both major manufacturers – Boeing and Airbus – are positioning themselves at the forefront of alternative and bio-jet fuels. It is surmised that by 2050, with the use of synthetic kerosene


derived from biomass, the world fleet CO2 emissions per passenger-kilometer (PKM) could be lower by at least a factor of three, NOx emissions lower by a factor of 10, and contrail and contrail-induced cirrus formation lower by a factor of 5 to 15.

## **2.10 Electric, solar or hydrogen powered green aircraft**

For many years, there have been exploratory studies in academia and industry to build and fly aircraft using sources of energy other than jet kerosene or synthetic kerosene (biofuels), and there have been several success stories in recent years. In March 2008, Boeing successfully conducted a test flight of a manned aircraft powered by PEM hydrogen fuel cells [36], shown in Figure 33. Since fuel cells convert hydrogen directly into electricity and heat without the products of combustion such as CO2, they use a clean, or green, source of energy. A fuel-cell-propelled aircraft is also often called "an all-electric aircraft."

Fig. 33. Boeing PEM Fuel Cell Powered Electric Aircraft [36].

Fig. 34. Solar Power Aircraft HB-SIA from SOLAR IMPULSE [37].


Recently, in June 2009, the prototype of a new solar-powered manned aircraft was unveiled in Switzerland by the company SOLAR IMPULSE [37]. The airplane is designed to fly both day and night without the need for fuel. The aircraft has a wing span equal to that of a Boeing 747 but weighs only 1.7 tons. It is powered by 12,000 solar cells mounted on the wing to supply renewable solar energy to four 10 HP electric motors. During the day, the solar panels charge the plane's lithium polymer batteries, allowing it to fly at night. To be sure, fuel-cell-propelled electric aircraft and solar-powered aircraft are not likely to become feasible for mass air transportation. However, they can become viable for recreation and personal transportation, and possibly as business aircraft, in the not-too-distant future. The idea of using liquid hydrogen as a propellant has been around for many decades, but it is unlikely to become feasible for commercial aircraft, at least before 2050, because of the many challenges that would have to be overcome. Figure 35 shows an artist's rendering of a hydrogen-powered version of the Airbus A310 [38]. It is also called a "Cryoplane" because of the very visible cryogenic hydrogen tank located above the passengers. Cryogenic hydrogen is the only possibility for the airplane, since high-pressure tanks would be too heavy. The physical properties of liquid hydrogen determine the appearance of the Cryoplane. Liquid hydrogen occupies 4.2 times the volume of jet fuel for the same energy; therefore the tanks have to be huge. Jet fuel weighs 2.9 times more than liquid H2 for the same energy, and the reduced weight partly compensates for the increased aerodynamic drag of the tanks. The Cryoplane would have less range and speed than the A310, and a higher empty weight. Furthermore, whatever energy source is used, 30% of it will be lost in hydrogen liquefaction.

In addition, the cost, infrastructure and passenger acceptance issues would have to be addressed. The main advantage of a hydrogen-powered airplane is the reduced emissions, as shown in Figure 36 from Penner [39]. Since the use of H2 does not produce any CO2, it is dubbed a clean fuel.

Fig. 35. Artist's rendering of a Hydrogen powered version of A310 Airbus [38].
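The 4.2 volume factor and 2.9 mass factor quoted above can be reproduced from standard fuel properties; the heating values and densities below are textbook approximations, not values from [38].

```python
# Approximate lower heating values and densities (textbook values, assumed):
LHV_JET = 43.2   # MJ/kg, kerosene
RHO_JET = 0.80   # kg/L, kerosene
LHV_LH2 = 120.0  # MJ/kg, hydrogen
RHO_LH2 = 0.071  # kg/L, liquid hydrogen at ~20 K

# Volumetric energy densities
e_vol_jet = LHV_JET * RHO_JET   # ~34.6 MJ/L
e_vol_lh2 = LHV_LH2 * RHO_LH2   # ~8.5 MJ/L

print(f"volume factor (LH2/jet, same energy): {e_vol_jet / e_vol_lh2:.1f}")  # → 4.1
print(f"mass factor (jet/LH2, same energy):   {LHV_LH2 / LHV_JET:.1f}")      # → 2.8
```

The results, about 4.1 and 2.8, are close to the 4.2 and 2.9 quoted in the text; the small differences reflect the assumed property values.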


Fig. 36. Relative emissions from Jet-kerosene and Hydrogen at various altitudes [39].

## **2.11 Modeling environmental & economic impacts of aviation**

## **2.11.1 Cambridge university aviation integrated modeling project (AIM)**

The Institute for Aviation and the Environment at Cambridge University in the U.K. has developed one of the most comprehensive projects – the Aviation Integrated Modeling (AIM) project – to provide a policy assessment capability enabling comprehensive analyses of aviation, environment and economic interactions at local and global levels. It contains a set of inter-linked modules of the key elements, which include models of aircraft/engine technologies, air transport demand, airport activity and airspace operations, all coupled to global climate, local environment and economic impact blocks. A major benefit of the AIM architecture is the ability to model data flow and feedback between the modules, allowing a policy assessment to be conducted by imposing policy effects on upstream modules and determining the implications through downstream modules to the output metrics, which can then be compared to the baseline case [40].

These modules include: (a) an *Aircraft Technology and Cost Module* to simulate aircraft fuel use, emissions production and ownership/operating costs for various airframe/engine technology evolution scenarios which are likely to have an effect during the period of the forecast; (b) an *Air Transport Demand Module* to predict passenger and freight demand into the future between origin-destination pairs within the global air transportation network; (c) an *Airport Activity Module* to investigate the air traffic growth as a function of passenger and freight growth, to calculate delays and future airline response to them, and to model ground and low altitude operations and congestion to determine LTO emissions as a function of growth in air traffic operations within the vicinity of the airport; (d) an *Aircraft Movement Module* to simulate airborne trajectories between city-pairs, accounting for airspace inefficiencies and delays for given Air Traffic Control (ATC) scenarios and to identify the


locations of emissions release from aircraft in flight; (e) a *Global Climate Module* to investigate global environmental impact of aircraft movements in terms of multiple emissions species and contrails; (f) a *Local Air Quality and Noise Module* to investigate local environmental impacts from dispersion of critical air pollutants and noise from landing and take-off (LTO) operations; and (g) a *Regional Economics Module* to investigate positive and negative economic impacts of aviation in various parts of the world, including the increase in direct and indirect employment opportunities in the region. The schematic of the AIM general architecture is shown in Figure 37 [40].

Fig. 37. AIM Architecture [40].
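The policy-assessment data flow this architecture provides (a policy imposed on an upstream module propagating through downstream modules to the output metrics) can be caricatured in a few lines of code. Every function, growth rate and elasticity below is an invented placeholder for illustration only, not part of AIM:

```python
# Toy sketch of an AIM-style module chain (all models and numbers are
# invented placeholders): a policy acting on an upstream module propagates
# through downstream modules to the output metrics.

def demand_module(year, fare_multiplier):
    # Hypothetical demand: ~4.3%/yr growth, damped by fares (elasticity -0.8).
    return 1.043 ** (year - 2000) * fare_multiplier ** -0.8

def airport_module(rpkm):
    # Hypothetical airport activity: operations track demand; delay grows
    # nonlinearly with congestion.
    operations = 100.0 * rpkm
    delay_minutes = 5.0 * (operations / 100.0) ** 2
    return operations, delay_minutes

def emissions_module(operations, delay_minutes):
    # Hypothetical LTO emissions: per-operation amount plus extra delay burn.
    return operations * (1.0 + 0.02 * delay_minutes)

def run_scenario(year, delay_cost_passthrough):
    rpkm = demand_module(year, 1.0)            # first pass, no feedback
    operations, delay = airport_module(rpkm)
    fares = 1.0 + delay_cost_passthrough * 0.001 * delay
    rpkm = demand_module(year, fares)          # demand reacts to higher fares
    operations, delay = airport_module(rpkm)
    return rpkm, emissions_module(operations, delay)

# A 50% delay-cost passthrough (as in scenario 2 below) lowers 2030 demand
# relative to the unconstrained run.
unconstrained, _ = run_scenario(2030, 0.0)
with_feedback, _ = run_scenario(2030, 0.5)
```

With these placeholder numbers the feedback run comes out a few percent below the unconstrained one, mirroring the direction (not the magnitude) of the study's results.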

The details of the seven modules and the interactions among them are not given here but can be found in many papers listed on the website of the Institute for Aviation and the Environment of Cambridge University in the U.K. (http://www.iae.damtp.cam.ac.uk/innovation.html). Here we briefly describe the power of the AIM architecture by reproducing some results from Reynolds et al. [40]. Employing the AIM architecture, Reynolds et al. [40] have performed a case study of the U.S. transportation system, which provides a forecast of air transport passenger demand between 50 major airports in the U.S. from 2000 to 2030. The flights between these 50 airports represented over 40% of U.S. scheduled domestic departures in 2000 and nearly 20% of the world's scheduled flights. Reynolds et al. [40] conducted simulations under three scenarios: 1. Unconstrained/No Feedback (air transport passenger demand and the resulting operations were assumed to grow unconstrained); 2. Feedback of Delay Effects (a simplified airline response to delay is modeled by assuming that 50% of the costs incurred by the airlines due to delays are passed directly to passengers in the form of higher fares); and 3. Feedback of Delay Effects Plus Per-Km Tax Policy (the same as scenario 2, but with a per-km tax applied to tickets from 2020 onwards with the objective of reducing the Revenue Passenger Km (RPKM) demand in 2020 to 2000 levels, so that the resulting delays and emissions can be directly compared). *Reynolds et al.* [40] *state that these three scenarios, their associated forecasts and environmental impact results are for illustrative purposes only to show the capabilities of AIM; they do not represent realistic evolutions of the U.S. air transportation system.* The main focus of the scenarios is on interactions between the Air Transport Demand and the Airport Activity Modules. However, one can calculate the en route and local emissions

Review of Technologies to Achieve Sustainable (Green) Aviation 461

utilizing the capabilities of other modules in the AIM integrated structure, as given in [40]. Details of the data and assumptions used in the simulation are not presented here; the reader is referred to the paper by Reynolds et al. [40].

Forecasts from 2000 to 2030 for annual demand in terms of Revenue Passenger-Km (RPKM) from the Air Transport Demand Module, and for total system aircraft operations, system average arrival delay and local NOx emissions at Chicago O'Hare (ORD) from the Airport Activity Module, are presented for the above three scenarios in Figures 38-41 from Reynolds et al. [40]. The demand forecasts in Figure 38 include those from Airbus (for the U.S. market), and from Boeing, ICAO and AERO-MS for the North American (NA) market, for the purpose of comparison. Since they apply to different route groups and time periods, the start-year total RPKM value in each case has been normalized to the historical value for the 50 airports extracted from U.S. Department of Transportation T100 data. Figure 38 shows that for scenario 1, the demand growth measured by the increase in RPKM will be 3.5 times the 2000 level by 2030. This is higher than the published estimates, as expected given the unconstrained nature of scenario 1. In scenario 2, the relatively modest feedback of 50% of the increased operating cost to the passenger has a significant effect, particularly over longer time frames. The demand forecast shows a 20% reduction (Figure 38), annual system operations show a 15% reduction (Figure 39), and average arrival delays show a 50% reduction (Figure 40). Under scenario 3, Figures 38-40 show the effects of the distance-based tax; in order to reduce the RPKM demand to 2000 levels in 2020, a 7.7 cents/km charge is required, equating to an additional \$300 on a ticket from New York to Los Angeles. Figure 41 shows the annual local emissions at Chicago O'Hare (ORD); all scenarios show an initial gradual increase in emissions, which can be explained in conjunction with Figures 38-40 by the increase in RPKM, aircraft operations and arrival delays. The sharp decrease in emissions in scenario 3 in 2020 is due to the reduced operations caused by the introduction of the distance-tax policy.

The Local Air Quality and Noise Module of the AIM architecture can provide results for local air quality at ORD, e.g. the annual average NOx concentration at ORD, as well as en route CO2 emissions and global radiative forcing. These results demonstrate that significant insights about the environmental and economic impact of aviation can be gained with the AIM architecture. It should be noted that many improvements and enhancements to the AIM architecture are currently under development at Cambridge.

Fig. 38. Forecast of system Revenue Passenger-Km (RPKM) growth at O'Hare [40].

Fig. 39. Forecast of total system aircraft operations at O'Hare [40].

Fig. 40. System average arrival delays at O'Hare [40].

Fig. 41. LTO NOx emissions at O'Hare [40].
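The scenario-3 surcharge quoted above is easy to sanity-check with a one-line calculation; the New York to Los Angeles great-circle distance used below is our assumption, not a figure from the study:

```python
# Sanity check of the quoted scenario-3 tax: 7.7 cents/km on a New York to
# Los Angeles ticket. The ~3,980 km great-circle distance is an assumption.
tax_per_km = 0.077        # dollars per km (7.7 cents/km, from scenario 3)
nyc_lax_km = 3980.0       # assumed JFK-LAX great-circle distance

surcharge = tax_per_km * nyc_lax_km
print(round(surcharge))   # -> 306, consistent with the ~$300 quoted above
```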


#### **2.12 Sustainable airports**

The airports and associated ground infrastructure constitute an integral part of Green Aviation. To address the issues of energy and environmental sustainability, the Clean Airport Partnership (CAP) was established in the U.S. in 1998 [41] and is the only not-for-profit corporation in the U.S. devoted exclusively to improving environmental quality and energy efficiency at airports. CAP believes "that efficient airport operations and sound environmental management must go hand in hand. This approach can reduce costs and uncertainty of environmental compliance; facilitate growth, while setting a visible leadership example for communities and the nation." Airport expansion and the development of new airports should account for both environmental costs and life-cycle costs. Sustainable growth of airports requires that they be developed as inter-modal transport hubs as part of an integrated public transport network. The ground infrastructure development should include low-emission service vehicles, LEED-certified green buildings with low energy requirements, and recyclable water usage. There should be effective land-use planning of the area around the airports (including securing land for future development) with active investments into the surrounding communities. Airport expansion must also consider the issue of noise and its impact on the surrounding communities, and airports should be involved in its mitigation by engaging in flight path design. The air quality near the airports should be monitored and measures for its continuous improvement should be put in place. In addition, there should be regulatory requirements to set risk limits.

## **3. Opportunities and future prospects**

It is clear that the expected threefold increase in air travel in the next twenty years poses an enormous challenge to all the stakeholders – airplane manufacturers, airlines, airport ground infrastructure planners and developers, policy makers and consumers – to address the urgent issues of energy and environmental sustainability. The emission and noise mitigation goals enunciated by ACARE and NASA can be met by technological innovations in aircraft and engine designs, by the use of advanced composites and biofuels, and by improvements in aircraft operations. Some of the changes in operations can be easily and immediately put into effect, such as tailored arrivals and perhaps AAR. Some innovations in aircraft and engine design, the use of advanced composites, the use of biofuels, and the overhauling of the ATM system may take time but are achievable by a concerted and coordinated effort of government, industry and academia. They may require significant investment in R&D. It is now recognized by the industry (airlines and manufacturers) as well as the relevant government agencies and policy makers that there is an urgent need for action to meet the challenges of climate change; aviation is becoming an important part of it. It is worth noting that in July 2008 in Italy, the G8 countries (U.S., Canada, Russia, U.K., France, Italy, Germany and Japan) called for a global emission reduction target of "at least 50%" by 2050, which is in line with the goal established by IATA members at their June 2009 Annual General Meeting in Kuala Lumpur, Malaysia. IATA further committed to carbon-neutral traffic growth by 2020. These challenges provide opportunities for breakthrough innovations in all aspects of air transportation.

## **4. Acknowledgements**

The author wants to acknowledge several individuals and sources for their help and permission to use the material from their presentations and papers. The author is grateful to Dr. Tom Reynolds of the Institute of Aviation and Environment at Cambridge University for allowing the use of material in the section on "Modeling Environmental & Economic Impacts of Aviation." All the material in this section has been taken from Reference [40]. The author is also grateful to Dr. Raj Nangia for helpful discussions and for allowing the use of material from several of his papers [30, 31]. Reference [17] has also provided a significant amount of material for several sections. The author is thankful to Dr. Richard Wahls of NASA Langley for his permission to use the material from NASA Langley presentations on 'Environmentally Responsible Aviation.' The author would like to thank Professor Raimo J. Hakkinen for reading the manuscript and for making many helpful suggestions that have improved the paper. Finally, it should be noted that the material used in this review paper has been drawn from a variety of sources listed in the references; any omission is completely unintentional.

#### **5. References**

[1] Schafer, A., Heywood, J.B., Jacoby, H.D., and Waitz, I.A., *Transportation in a Climate-Constrained World*, MIT Press, Cambridge, MA, 2009.

[2] Salari, K., "DOE's Effort to Reduce Truck Aerodynamic Drag Through Joint Experiments and Computations," LLNL-PRES-401649, 28 February 2008.

[3] www.pewclimate.org

[4] Agarwal, R.K., "Sustainable (Green) Aviation: Challenges and Opportunities (2009 William Littlewood Lecture)," SAE Int. J. Aerospace, Vol. 2, pp. 1-20, 2009.

[5] http://www.boeing.com/randy/archives/2006/07/in\_the\_year\_202.html

[6] http://www.acare4europe.com/

[7] NRC Meeting of Experts on NASA's Plans for System-Level Research in Environmental Mitigation, National Harbor, MD, 14 May 2009; Presentation by A. Strazisar; http://www.aeronautics.nasa.gov/calendar/20090514.htm

[8] Mankins, J.C., "Technology Readiness Levels," http://www.hq.nasa.gov/office/codeq/trl

[9] Aerospace International, *The Green Issue*, Aerosociety, U.K., March 2009.

[10] Smith, M.J.T., *Aircraft Noise*, Cambridge University Press, Cambridge, U.K., 1989.

[11] Erickson, J.D., "Environmental Acceptability," Office of Environment and Energy, Presented to FAA, 2000.

[12] http://silentaircraft.org/

[13] Reynolds, T.G., "Environmental Challenges for Aviation – An Overview," Presented to Low Cost Air Transport Summit, London, 11-12 June 2008.

[14] www.iea.org

[15] Lee, J.J., Lukachko, S.P., Waitz, I.A., and Schafer, A., "Historical & Future Trends in Aircraft Performance, Cost, and Emissions," Annu. Rev. Energy Environ., Vol. 26, pp. 167-200, 2001.

[16] Penner, J.E., *Aviation and the Global Atmosphere*, Cambridge University Press, Cambridge, U.K., pp. 76-79, 1999.

[17] Royal Aeronautical Society Annual Report, "Air Travel - Greener by Design Annual Report 2007-2008," April 2008 (http://www.greenerbydesign.org.uk/).

[18] Schumann, U., "On Conditions for Contrail Formation from Aircraft Exhaust," Meteor. Zeitschr., Vol. 5, pp. 3-22, 1996.

[19] NRC Meeting of Experts on NASA's Plans for System-Level Research in Environmental Mitigation, National Harbor, MD, 14 May 2009; Presentation by R.A. Wahls; http://www.aeronautics.nasa.gov/calendar/20090514.htm

[20] www.boeing.com

[21] www.b-domke.de/AviationImages/Propfan/0815

[22] www.flightglobal.com/articles/2007/06/12/214520

[23] http://www.dfrc.nasa.gov/Gallery/Photo/X-48B/HTML/ED08-0092-13.html

[24] http://hondajet.honda.com/

[25] Saeed, T.I., Graham, W.R., Babinsky, H., Eastwood, J.P., Hall, C.A., Jarrett, J.P., Lone, M.M., and Seffen, K.A., "Conceptual Design of a Laminar Flying Wing Aircraft," AIAA 2009-3616, 27th AIAA Applied Aerodynamics Conference, San Antonio, TX, 22-25 June 2009.

[26] Collier, F.S., NASA Langley, "Progress in Environmental Aeronautics," Presentation at Aviation & Environment – A Primer for North American Stakeholders Meeting; http://www.airlines.org/NR/rdonlyres/A78FA93B-986C-4D95-BA87-B4DD961CC369/0/11collier.pdf

[27] National Academy of Science (NAS) Report, "Assessing the Research and Development Plan for the Next Generation Air Transportation System: Summary of a Workshop," (http://www.nap.edu/catalog/12447.html), 2008.

[28] Hahn, A.S., "Staging Airliner Service," AIAA 2007-7759, 7th AIAA ATIO Conference, Belfast, 18-20 Sept. 2007.

[29] Creemers, W.L.H. and Slingerland, R., "Impact of Intermediate Stops on Long-Range Jet-Transport Design," AIAA 2007-7849, 7th AIAA ATIO Conference, Belfast, 18-20 Sept. 2007.

[30] Nangia, R.K., "Air to Air Refueling in Civil Aviation," Paper #9, Royal Aeronautical Soc. "Greener by Design" Conference, London, 7 October 2008.

[31] Nangia, R.K., "Way Forward to a Step Jump for Highly Efficient & Greener Civil Aviation – An Opportunity for the Present and a Vision for the Future," Personal Publication RKN-SP-2008-120, September 2008.

[32] Wagner, E., Jacques, D., Blake, W., and Pachter, M., "Flight Test Results for Close Formation Flight for Fuel Savings," AIAA 2002-4490, AIAA Atmospheric Flight Mech. Conf., Monterey, CA, 5-8 August 2002.

[33] Jenkinson, L.R., Caves, R.E., and Rhodes, D.R., "A Preliminary Investigation into the Application of Formation Flying to Civil Operation," AIAA 1995-3898, 1995.

[34] Bower, G.C., Flanzer, T.C., and Kroo, I.M., "Formation Geometries and Route Optimization for Commercial Formation Flight," AIAA 2009-3615, 27th AIAA Applied Aerodynamics Conference, San Antonio, TX, 22-25 June 2009.

[35] Boeing Presentation at Paris Air Show by Billy Glover, June 2009 (http://www.boeing.com/paris2009/media/presentation/june17/glover\_enviro\_briefing/).

[36] www.boeing.com

[37] www.solarimpulse.com

[38] http://www.planetforlife.com/h2/h2vehicle.html

[39] Penner, J.E., *Aviation and the Global Atmosphere*, Cambridge University Press, Cambridge, U.K., p. 257, 1999.

[40] Reynolds, T.G., Barrett, S., Dray, L.M., Evans, A.D., Kohler, M.O., Morales, M.V., Schafer, A., Wadud, Z., Britter, R., Hallam, H., and Hunsley, R., "Modeling Environmental & Economic Impacts of Aviation: Introducing the Aviation Integrated Modeling Project," AIAA 2007-7751, 7th AIAA Aviation Technology, Integration and Operations Conference, Belfast, 18-20 Sept. 2007.

[41] http://www.cleanairports.com

## **Synthetic Aperture Radar Systems for Small Aircrafts: Data Processing Approaches**

Oleksandr O. Bezvesilniy and Dmytro M. Vavriv
*Institute of Radio Astronomy of the National Academy of Sciences of Ukraine, Ukraine*

## **1. Introduction**

The synthetic aperture radar (SAR) is now considered the most effective instrument for producing radar images of ground scenes with a high spatial resolution. The use of small aircraft as platforms for the deployment of SAR systems is attractive from the point of view of many practical applications. Firstly, it enables a substantial lowering of the exploitation costs of SAR sensors. Secondly, such a solution provides the possibility to perform rather quick surveillance and imaging of particular ground areas. Finally, progress in this direction will allow for a much wider application of SAR sensors.

However, the formation of high-quality SAR images with SAR systems deployed on small aircraft is still a challenging problem. The main difficulties come from significant variations of the aircraft trajectory and the antenna orientation during real flights. These motion errors lead to defocusing, geometric distortions, and radiometric errors in SAR images.
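To get a feel for the scale of the problem, recall the standard two-way phase relation: an uncompensated slant-range deviation dR contributes a phase error of 4·pi·dR/wavelength to the received signal. A minimal numeric sketch (the wavelength and deviation values are illustrative only):

```python
import math

# Two-way phase error caused by a trajectory deviation dR:
#   phase_error = 4 * pi * dR / wavelength   (standard SAR relation)
wavelength = 0.02     # Ku-band, ~2 cm (illustrative)
deviation = 0.005     # 5 mm uncompensated trajectory error (illustrative)

phase_error = 4.0 * math.pi * deviation / wavelength
print(round(phase_error / math.pi, 6))   # -> 1.0, i.e. a full pi radians
```

A deviation of only a few millimetres thus produces a phase error of the order of pi radians, which is more than enough to defocus the synthetic aperture if left uncorrected.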

In this chapter, we describe three effective approaches to SAR data processing that enable the solution of the above problems:

1. Time-domain SAR processing with clutter-lock and geometric correction by resampling,
2. Time-domain SAR processing with built-in geometric correction and multi-look radiometric correction,
3. Range-Doppler algorithm with the 1st- and 2nd-order motion compensation.

The proposed solutions have been successfully implemented in Ku- and X-band SAR systems developed and produced at the Institute of Radio Astronomy of the National Academy of Sciences of Ukraine. The efficiency of the proposed algorithms is illustrated by SAR images obtained with these SAR systems.

The chapter is organized as follows. In Section 2, the basic principles of SAR data processing are described. In Section 3, the problem of motion errors of airborne SAR systems is considered, and the appearance of geometric distortions and radiometric errors in SAR images is discussed. The three data processing approaches are considered in detail in Sections 4, 5, and 6. Section 7 describes the RIAN-SAR-Ku and RIAN-SAR-X systems used in our experiments. The conclusion is given in Section 8.

Synthetic Aperture Radar Systems for Small Aircrafts: Data Processing Approaches 467

22 2 tan sin cos ( tan ) *Ry H*

 *RH H*

In order to form the synthetic aperture and direct the synthetic beam to the point (,) *R R x y* ,

/2 <sup>1</sup> (, , ) ( ) () *S*

Here (, , ) *R R It x y* is the SAR image pixel, *t* is the time when the aircraft is at the centre of

22 2 2 <sup>2</sup> () ( ) 2 () *R x V y H R xV V RR R*

range resolution cell, then the target signal "migrates" through several range cells. This effect known as the range migration should be taken into account during the aperture

two-dimensional "azimuth – slant range" matrix of the range-compressed radar data by the

2 () ( ) *DC DR dR <sup>f</sup> F F dt* 

2 *<sup>R</sup>*

<sup>2</sup> <sup>2</sup> <sup>2</sup> <sup>1</sup> *<sup>R</sup>*

 *R R* 

It is useful to note that the Doppler centroid determines the synthetic beam direction,

From the point of view of signal processing, the formation of the synthetic aperture (3) is the matched filtering of linear frequency modulated signals (6). Such filtering can be performed

*DC <sup>x</sup> F V* 

*V x <sup>F</sup>* 

*S*

*T R R R R S T It x y s th d T*

 

backscattered from this point should be summed up coherently on the

<sup>2</sup> /2

, <sup>4</sup> () () *<sup>R</sup>*

 

 

is the time within the interval of synthesis, ( ) *Rh*

 

changes during the time of synthesis *TS* more than the size of the

, (6)

*<sup>R</sup>* , (7)

 taking into account the propagation phase

, (3)

 

. (4)

is the weighting window applied to

should be obtained from the

. (8)

is the radar wavelength, and

. (5)

. (2)

( ) 

is the

*hw i R R* ( ) ( )exp ( )

 

The instant Doppler frequency of the received signal is, approximately,

where the Doppler centroid *FDC* and the Doppler rate *FDR* are given by

*DR*

whereas the Doppler rate is responsible for the beam focusing.

(Cumming & Wong, 2005; Franceschetti & Lanari, 1999):

azimuth reference function in the time domain, ( ) *wR*

improve the side-lobe level of the synthetic aperture pattern,

synthesis. The one-dimensional backscattered signal ( ) *Rs t*

the signal ( ) *Rs t*

*R*( ) 

the synthetic aperture (0, 0, ) *H* ,

If the slant range *R*( )

is the slant range to the point:

interpolation along the migration curve (5).

interval of synthesis /2 /2 *T T S S*

#### **2. Principles of SAR data processing**

The synthetic aperture technique is used to obtain high-resolution images of ground surfaces by using a radar with a small antenna installed on an aircraft or a satellite. The radar pulses backscattered from a ground surface and received by the moving antenna can be considered as the pulses received by a set of antennas distributed along the flight trajectory. By coherent processing of these pulses it is possible to build a long virtual antenna – the synthetic aperture that provides a high cross-range resolution. A high range resolution is typically achieved by means of a pulse compression technique that involves transmitting long pulses with a linear frequency modulation or a phase codding.

#### **2.1 Concept of the synthetic aperture**

Practical SAR systems are produced to operate in one or several operating modes. Depending on the mode, they are referred to as the strip-map SAR, the spot-light SAR, the inverse SAR, the ScanSAR, and the interferometric SAR (Bamler & Hartl, 1998; Carrara et al., 1995; Cumming & Wong, 2005; Franceschetti & Lanari, 1999; Rosen et al., 2000; Wehner, 1995). We shall consider mainly the most popular and practically useful strip-map SAR operating mode. However, the results presented below are applicable to other modes to a large extent.

In the strip-map SAR mode, the radar performs imaging of a strip on the ground to the side of the flight trajectory. The geometry of the strip-map mode is shown in Fig. 1. The aircraft flies along a straight line above the $x$ axis with the velocity $V$ at the altitude $H$ above the ground plane $(x, y)$.

Fig. 1. Geometry of the strip-map SAR mode.

The orientation of the real antenna beam is described by the pitch angle $\alpha$ and the yaw angle $\beta$, which are measured with respect to the flight direction. The line $AB$ in Fig. 1 is the intersection of the elevation plane of the real antenna pattern and the ground plane. This line is called the Doppler centroid line. The coordinates of the point $(x_R, y_R)$ on this line at the slant range $R$ from the aircraft are given by

$$x_R = H \tan\alpha \cos\beta + \sin\beta \sqrt{R^2 - H^2 - \left(H \tan\alpha\right)^2},\tag{1}$$


$$y\_R = -H \tan \alpha \sin \beta + \cos \beta \sqrt{R^2 - H^2 - \left(H \tan \alpha\right)^2} \,. \tag{2}$$
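Equations (1)-(2) translate directly into a small numerical check. The sketch below is illustrative only (the function name and sample values are assumptions, angles in radians); for zero pitch and yaw the beam points broadside, so $x_R = 0$ and $y_R$ equals the ground range $\sqrt{R^2 - H^2}$.

```python
import math

def centroid_line_point(H, R, alpha, beta):
    """Point (x_R, y_R) on the Doppler centroid line at slant range R,
    per Eqs. (1)-(2).  alpha: pitch angle, beta: yaw angle [rad]."""
    s = math.sqrt(R**2 - H**2 - (H * math.tan(alpha))**2)
    x_R = H * math.tan(alpha) * math.cos(beta) + math.sin(beta) * s
    y_R = -H * math.tan(alpha) * math.sin(beta) + math.cos(beta) * s
    return x_R, y_R

# Broadside case: alpha = beta = 0 gives x_R = 0, y_R = sqrt(R^2 - H^2)
x, y = centroid_line_point(H=3000.0, R=5000.0, alpha=0.0, beta=0.0)
```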

In order to form the synthetic aperture and direct the synthetic beam to the point $(x_R, y_R)$, the signal $s_R(t)$ backscattered from this point should be summed up coherently over the interval of synthesis $-T_S/2 \le \tau \le T_S/2$, taking into account the propagation phase $\varphi(\tau)$ (Cumming & Wong, 2005; Franceschetti & Lanari, 1999):

$$I(t, x_R, y_R) = \left| \frac{1}{T_S} \int_{-T_S/2}^{T_S/2} s_R(\tau + t)\, h_R(\tau)\, d\tau \right|^2, \tag{3}$$

$$h_R(\tau) = w_R(\tau) \exp\left[-i\varphi(\tau)\right], \quad \varphi(\tau) = -\frac{4\pi}{\lambda} R(\tau). \tag{4}$$

Here $I(t, x_R, y_R)$ is the SAR image pixel, $t$ is the time when the aircraft is at the centre of the synthetic aperture $(0, 0, H)$, $\tau$ is the time within the interval of synthesis, $h_R(\tau)$ is the azimuth reference function in the time domain, $w_R(\tau)$ is the weighting window applied to improve the side-lobe level of the synthetic aperture pattern, $\lambda$ is the radar wavelength, and $R(\tau)$ is the slant range to the point:

$$R(\tau) = \sqrt{\left(x_R - V\tau\right)^2 + y_R^2 + H^2} = \sqrt{R^2 - 2 x_R V \tau + \left(V\tau\right)^2}. \tag{5}$$

If the slant range $R(\tau)$ changes during the time of synthesis $T_S$ by more than the size of the range resolution cell, the target signal "migrates" through several range cells. This effect, known as range migration, should be taken into account during the aperture synthesis. The one-dimensional backscattered signal $s_R(t)$ should be obtained from the two-dimensional "azimuth – slant range" matrix of the range-compressed radar data by interpolation along the migration curve (5).
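As a quick consistency check of Eq. (5), the sketch below (function names and sample values are illustrative assumptions) evaluates both the direct and the expanded form of the migration curve, which must agree; the spread of $R(\tau)$ over the synthesis interval indicates whether migration exceeds the range cell size.

```python
import math

def slant_range(tau, x_R, y_R, H, V):
    """Instantaneous slant range R(tau), direct form of Eq. (5)."""
    return math.sqrt((x_R - V * tau)**2 + y_R**2 + H**2)

def slant_range_expanded(tau, x_R, y_R, H, V):
    """Equivalent expanded form of Eq. (5), with R = R(0)."""
    R = math.sqrt(x_R**2 + y_R**2 + H**2)
    return math.sqrt(R**2 - 2 * x_R * V * tau + (V * tau)**2)

# The two forms are algebraically identical
r1 = slant_range(0.1, 100.0, 4000.0, 3000.0, 50.0)
r2 = slant_range_expanded(0.1, 100.0, 4000.0, 3000.0, 50.0)
```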

The instant Doppler frequency of the received signal is, approximately,

$$f(\tau) = -\frac{2}{\lambda} \frac{dR(\tau)}{d\tau} \approx F_{DC} + F_{DR}\,\tau, \tag{6}$$

where the Doppler centroid $F_{DC}$ and the Doppler rate $F_{DR}$ are given by

$$F_{DC} = \frac{2}{\lambda} V \frac{x_R}{R}, \tag{7}$$

$$F_{DR} = -\frac{2}{\lambda} \frac{V^2}{R} \left[1 - \left(\frac{x_R}{R}\right)^2\right]. \tag{8}$$

It is useful to note that the Doppler centroid determines the synthetic beam direction, whereas the Doppler rate is responsible for the beam focusing.
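Equations (7)-(8) can be sketched as follows (illustrative names and values); for the broadside case $x_R = 0$ the centroid vanishes and the Doppler rate reduces to $-2V^2/(\lambda R)$.

```python
def doppler_params(x_R, R, V, wavelength):
    """Doppler centroid (7) and Doppler rate (8) for the point at
    azimuth coordinate x_R and slant range R."""
    F_DC = (2.0 / wavelength) * V * x_R / R
    F_DR = -(2.0 / wavelength) * (V**2 / R) * (1.0 - (x_R / R)**2)
    return F_DC, F_DR

# Broadside geometry: zero centroid, F_DR = -2 V^2 / (lambda R)
F_DC, F_DR = doppler_params(x_R=0.0, R=5000.0, V=50.0, wavelength=0.03)
```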

From the point of view of signal processing, the formation of the synthetic aperture (3) is the matched filtering of linear frequency-modulated signals (6). Such filtering can be performed either in the time or in the frequency domain. Accordingly, there are time-domain and frequency-domain SAR processing algorithms.

It is easy to show that the azimuth resolution $\rho_X$ is given by (Cumming & Wong, 2005; Carrara et al., 1995)

$$
\rho\_X = K\_w \frac{V}{\Delta F\_D} \,. \tag{9}
$$

Here $\Delta F_D = |F_{DR}|\, T_S$ is the Doppler frequency bandwidth that corresponds to the interval of synthesis. The coefficient $K_w$ describes the broadening of the main lobe of the synthetic aperture pattern caused by windowing.
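Equation (9), with the bandwidth taken as $|F_{DR}| T_S$, can be sketched as below (the helper name and the default $K_w = 1$ for a rectangular window are illustrative assumptions).

```python
def azimuth_resolution(V, F_DR, T_S, K_w=1.0):
    """Azimuth resolution rho_X from Eq. (9), with the Doppler
    bandwidth Delta F_D = |F_DR| * T_S."""
    dF_D = abs(F_DR) * T_S
    return K_w * V / dF_D

# V = 50 m/s, |F_DR| = 50 Hz/s, T_S = 2 s  ->  bandwidth 100 Hz
rho = azimuth_resolution(V=50.0, F_DR=-50.0, T_S=2.0)
```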

In order to improve the quality of SAR images, a multi-look processing technique is used in most modern SAR systems (Moreira, 1991; Oliver & Quegan, 1998). According to this technique, a long synthetic aperture is divided into shorter intervals that are processed independently to build several SAR images of the same ground scene, called SAR looks. This can be considered as building the synthetic aperture with multiple synthetic beams. Non-coherent averaging of the SAR looks into one multi-look image is used to reduce speckle noise and to reveal fine details in SAR images. Multi-look processing can also be used for other applications, for example, for measuring the Doppler centroid with high accuracy and high spatial resolution and for retrieving the 3D topography of ground surfaces (Bezvesilniy et al., 2006; Bezvesilniy et al., 2007; Bezvesilniy et al., 2008; Vavriv & Bezvesilniy, 2011b).

In the next sections, we consider the peculiarities of implementing SAR processing algorithms in the time and frequency domains.

#### **2.2 SAR processing in time domain**

The SAR processing in the time domain is performed according to the relations (3)-(5). The block-scheme of the algorithm is shown in Fig. 2.

The received range-compressed radar data are stored in a memory buffer. The buffer size in the range corresponds to the swath width; the buffer size in the azimuth is determined by the time of synthesis. The basic step of the SAR processing procedure for a given range $R$ includes the following calculations:

1. Calculation of the Doppler centroid (7), the Doppler rate (8), and the required time of synthesis (9),
2. Interpolation along the migration curve (5),
3. Multiplication by the reference function with windowing (4), and
4. Coherent summation (3).
As a result, a single pixel of the SAR image is obtained, representing the ground point on the Doppler centroid line at the range $R$. This basic step is repeated for all ranges within the swath, producing a single line of the SAR image in the range direction. In order to form the next line of the SAR image, the data in the buffer are shifted in the azimuth and supplemented with new data, and the computations are repeated.
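The compression at the heart of the basic step can be sketched for a single point target. This toy example (all names and parameter values are illustrative assumptions, range migration ignored for simplicity) generates the phase history of a unit-amplitude target via Eq. (5) and compresses it with the matched reference (3)-(4); perfect matching yields a unit-intensity pixel.

```python
import cmath
import math

# Illustrative scene and radar parameters
wavelength, V, H = 0.03, 50.0, 3000.0
x_R, y_R = 0.0, 4000.0          # broadside point target
PRF, T_S = 500.0, 1.0           # sampling rate and synthesis time
N = int(T_S * PRF)
taus = [(n - N / 2) / PRF for n in range(N)]

def R_of(tau):
    """Migration curve (5) for the point target."""
    return math.sqrt((x_R - V * tau)**2 + y_R**2 + H**2)

# Received two-way phase history of the unit target
s = [cmath.exp(-1j * 4 * math.pi / wavelength * R_of(t)) for t in taus]
# Reference function (4) with a rectangular window w_R = 1
h = [cmath.exp(1j * 4 * math.pi / wavelength * R_of(t)) for t in taus]
# Coherent summation (3): the focused pixel intensity
I = abs(sum(si * hi for si, hi in zip(s, h)) / N)**2
```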

Fig. 2. SAR processing in time domain.


Fig. 3. Multi-look processing in the time domain: (a) the antenna footprint consideration, (b) the data buffer consideration.

The multi-look processing in the time domain is usually performed directly following the definition (Moreira, 1991; Oliver & Quegan, 1998). Namely, the reference functions and range migration curves are built for the long interval of synthesis $T_{S\max}$, which is the time required for the ground target to cross the antenna footprint from point 1 to point 2 in Fig. 3a. The multi-look processing is performed by splitting the long interval of synthesis $T_{S\max}$ into several sub-intervals $T_S$, forming in this manner multiple synthetic beams pointed at the same point on the ground at different moments of time, as shown in Fig. 3b. The number of looks for a scheme with half-overlapped sub-intervals is given by

$$N_L = \text{int}\left\{\frac{T_{S\max}}{T_S/2}\right\} - 1. \tag{10}$$
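Equation (10) can be sketched as follows (the function name is an illustrative assumption); for example, a long interval of 2 s split into half-overlapped 0.5 s sub-intervals gives seven looks.

```python
def num_looks(T_S_max, T_S):
    """Number of half-overlapped looks fitting into T_S_max, Eq. (10)."""
    return int(T_S_max / (T_S / 2.0)) - 1

N_L = num_looks(T_S_max=2.0, T_S=0.5)
```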

#### **2.3 SAR processing in frequency domain**

The SAR data processing can also be performed effectively in the frequency domain. It is known that the convolution of two signals in the time domain is equivalent to the multiplication of their Fourier pairs in the frequency domain. The corresponding computations are efficient due to the application of the fast Fourier transform (FFT). A number of FFT-based SAR processing algorithms have been developed so far (Cumming & Wong, 2005).

In particular, the range-Doppler algorithm (RDA) (Cumming & Wong, 2005) is a relatively simple and widely used FFT-based algorithm. The processing steps of this algorithm are shown in Fig. 4 and also illustrated in Fig. 5. The received radar data are stored in a large memory buffer. The buffer size in the range direction corresponds to the swath width, and the buffer size in the azimuth direction is equal to the length of the FFT, which covers many intervals of synthesis. First, the range-compressed data are transformed into the range-Doppler domain by applying the FFT in the azimuth. The frequency scale is limited by the pulse repetition frequency (PRF). Then, the range migration correction is performed in the frequency domain. By using the relation (6) between the instant frequency $f$ and the time $\tau$ within the interval of synthesis (preserving the square-root law for the slant range), one can derive the formula for the migration curve in the frequency domain from the migration curve (5) in the time domain:

$$R(f) = R\sqrt{1 - \frac{\lambda^2 F_{DC}^2}{4V^2}} \bigg/ \sqrt{1 - \frac{\lambda^2 f^2}{4V^2}}. \tag{11}$$

After that, the phase compensation and windowing are applied for the azimuth compression. By using the principle of stationary phase (Cumming & Wong, 2005) an expression for the reference function in the frequency domain is obtained:

$$h\_R(f) = w\_R(f) \exp[-i\theta(f)],\ \theta(f) = -\frac{4\pi}{\lambda} R \left[ \sqrt{1 - \left(\frac{\lambda f}{2V}\right)^2} \sqrt{1 - \left(\frac{\lambda F\_{DC}}{2V}\right)^2} + \left(\frac{\lambda f}{2V}\right) \left(\frac{\lambda F\_{DC}}{2V}\right) \right]. \tag{12}$$

Finally, the SAR image is formed by applying the inverse FFT in the azimuth. Thus, the basic processing step in the frequency domain performed for a given range gives the line of the SAR image in the azimuth. This basic step is repeated for all ranges within the swath producing the complete SAR image of the ground scene presented in the data frame.
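Equation (11) can be sketched as follows (illustrative names and values); a useful sanity check is that at $f = F_{DC}$ the curve passes through the slant range $R$ itself.

```python
import math

def migration_curve_freq(f, R, F_DC, V, wavelength):
    """Range migration curve in the range-Doppler domain, Eq. (11)."""
    num = math.sqrt(1.0 - (wavelength * F_DC)**2 / (4.0 * V**2))
    den = math.sqrt(1.0 - (wavelength * f)**2 / (4.0 * V**2))
    return R * num / den

# At the Doppler centroid frequency the curve equals R
r_at_centroid = migration_curve_freq(f=100.0, R=5000.0, F_DC=100.0,
                                     V=50.0, wavelength=0.03)
```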


Fig. 4. SAR processing in frequency domain.

Fig. 5. Multi-look processing in frequency domain.



In the case of a significantly squinted geometry (a large antenna yaw angle), a very high resolution, or a large number of looks, an additional processing step called "secondary range compression" is required (Cumming & Wong, 2005).

The multi-look processing is performed in the frequency domain by dividing the whole Doppler band $\Delta F_{D\max} = |F_{DR}|\, T_{S\max}$ of the backscattered radar signals into sub-bands $\Delta F_D$ for separate azimuth compression (Cumming & Wong, 2005; Carrara et al., 1995):

$$\Delta F_D = \frac{\Delta F_{D\max}}{\left(N_L + 1\right)/2}. \tag{13}$$

For the multi-look processing scheme with the half-overlapped sub-bands, the central frequencies of the SAR looks with respect to the Doppler centroid are given by

$$F_C(R, n_L) = F_{DC}(R) - n_L \frac{\Delta F_D}{2}, \tag{14}$$

where $n_L = -N_L/2, \ldots, N_L/2 - 1$ is the SAR look index. Since the Doppler rate (8) is always negative, the first sub-interval in the time domain corresponds to the last sub-band in the frequency domain. Therefore, we write the minus sign in (14).
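Equations (13)-(14) can be sketched as below (illustrative helper names and values): with $N_L = 4$ half-overlapped looks, a 200 Hz Doppler band splits into 80 Hz sub-bands, and each look index shifts the centre by half a sub-band.

```python
def look_bandwidth(F_Dmax, N_L):
    """Sub-band width for N_L half-overlapped looks, Eq. (13)."""
    return F_Dmax / ((N_L + 1) / 2.0)

def look_center(F_DC, n_L, dF_D):
    """Central frequency of look n_L relative to F_DC, Eq. (14)."""
    return F_DC - n_L * dF_D / 2.0

dF_D = look_bandwidth(F_Dmax=200.0, N_L=4)
center = look_center(F_DC=0.0, n_L=1, dF_D=dF_D)
```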

#### **3. Problem of aircraft motion errors**

Deviations of the aircraft flight trajectory and instabilities of the aircraft orientation significantly complicate the formation of SAR images. Such motion errors lead to defocusing, geometric distortions, and radiometric errors in SAR images (Blacknell et al., 1989; Buckreuss, 1991; Franceschetti & Lanari, 1999; Oliver & Quegan, 1998). In this section, we shall discuss these problems and their solutions in detail.

#### **3.1 Aircraft flight with motion errors**

The trajectory of an aircraft may deviate significantly from a straight line in real flights. The orientation of the aircraft can also be unstable. These motion errors should be measured and compensated in order to produce high-quality SAR images. We assume that the navigation system is capable of measuring the aircraft trajectory and the aircraft velocity vector. We suppose also that the orientation of the real antenna beam with respect to the velocity vector is known.

Usually, the final product of a strip-map SAR system is a sequence of SAR images of a particular dimension, built in a projection to the ground plane, with an indication of the north direction and the latitude-longitude position. Later, if necessary, several consecutive images can be stitched together to produce a larger map of a particular ground area of interest. Thus, the received radar data are processed by data frames. Each frame gives one SAR image from the image sequence. The data frames are usually overlapped to guarantee successful stitching of the produced SAR images without gaps.

In order to produce the SAR image from the data frame, it is needed to define a reference flight line for this frame, the averaged flight altitude *Href* and the averaged velocity *Vref* . Under unstable flight conditions, the reference flight line should be close to the actual


curvilinear flight trajectory of the aircraft. Also, the reference antenna pitch and yaw angles $\alpha_{ref}$ and $\beta_{ref}$, which describe the averaged orientation of the real antenna beam during the time of the data frame acquisition, should be introduced.

Fig. 6. The scene coordinate system, the reference local coordinate system, and the actual local coordinate system.

Let us define the scene coordinate system $(X, Y, Z)$ so that the reference flight line goes exactly above the $X$ axis. The final-product SAR image is to be sampled on the coordinate grid of the ground plane $(X, Y)$ of this coordinate system. The scene coordinate system is shown in Fig. 6 together with the actual local coordinate system $(x, y, z)$, which slides along the real aircraft flight trajectory, and the reference local coordinate system $(x_{ref}, y_{ref}, z_{ref})$, which slides along the $X$ axis (that is, along the reference flight line). The current flight direction is described by the angle $\phi_V$ between the horizontal component of the velocity vector $\mathbf{V}_{XY}$ and the $X$ axis.

The aircraft trajectory $(X_A(t), Y_A(t), Z_A(t))$ is described in the scene coordinate system. The actual local coordinates $(x, y)$ and the reference local coordinates $(x_{ref}, y_{ref})$ are related to each other as follows:

$$x = \left[x_{ref} - X_A(t) + V_{ref}t\right] \cos \phi_V(t) + \left[y_{ref} - Y_A(t)\right] \sin \phi_V(t), \tag{15}$$

$$y = -\left[x_{ref} - X_A(t) + V_{ref}t\right] \sin \phi_V(t) + \left[y_{ref} - Y_A(t)\right] \cos \phi_V(t). \tag{16}$$

The pitch $\alpha(t)$ and yaw $\beta(t)$ angles describe the antenna beam orientation with respect to the current aircraft velocity vector or, in other words, with respect to the actual local coordinate system. This means that when the synthetic beam is directed to the point $(x_R, y_R)$ on the Doppler centroid line by using the Doppler centroid (7), the Doppler rate (8), and the migration curve (5) under unstable flight conditions, the coordinates $(x_R, y_R)$ are given in the actual local coordinate system. In order to find the scene coordinates (or the reference local coordinates) of this point, the relations (15), (16) should be used.
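Relations (15)-(16) are simply a shift followed by a rotation by $\phi_V(t)$. A minimal sketch (illustrative names, angles in radians): when there are no trajectory deviations ($X_A = V_{ref}t$, $Y_A = 0$, $\phi_V = 0$), the two local coordinate systems coincide.

```python
import math

def actual_local_coords(x_ref, y_ref, t, X_A, Y_A, V_ref, phi_V):
    """Reference-local -> actual-local coordinates, Eqs. (15)-(16)."""
    dx = x_ref - X_A + V_ref * t
    dy = y_ref - Y_A
    x = dx * math.cos(phi_V) + dy * math.sin(phi_V)
    y = -dx * math.sin(phi_V) + dy * math.cos(phi_V)
    return x, y

# Ideal flight: X_A = V_ref * t, Y_A = 0, phi_V = 0  ->  (x, y) = (x_ref, y_ref)
x, y = actual_local_coords(x_ref=100.0, y_ref=50.0, t=2.0,
                           X_A=100.0, Y_A=0.0, V_ref=50.0, phi_V=0.0)
```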

An example of motion errors typical for a light-weight aircraft AN-2 is shown in Fig. 7. In the figure, one can see the coordinate grid of the radar coordinates "slant range – azimuth"

Synthetic Aperture Radar Systems for Small Aircrafts: Data Processing Approaches 475


projected onto the ground plane $(X, Y)$. The horizontal curves are the curves of constant slant range from the aircraft. They are curved because of deviations of the trajectory from the straight line. The vertical lines are the central lines of the antenna footprint (the Doppler centroid lines) for consecutive aircraft positions. As can be seen, the central lines are neither equidistant nor parallel because of variations of the antenna orientation.

Fig. 7. Trajectory deviations and orientation instabilities illustrated by the coordinate grid in the radar coordinates "slant range – azimuth" on the ground plane.

#### **3.2 Geometric distortions in SAR images**

The direction of the synthetic beam is determined by the Doppler centroid used with respect to the current velocity vector. In other words, the Doppler centroid controls the direction of the synthetic beam with respect to the actual local coordinate system. Therefore, if the deflections of the velocity vector from the reference flight direction (described by the deflection angle shown in Fig. 6) are not compensated properly, the synthetic beam moves forward or backward along the flight path with respect to the scene coordinate system. This means that the scene will be sampled non-uniformly in the azimuth direction, resulting in geometric distortions in SAR images. For example, if the synthetic beams are pointed to the centre of the real beam, i.e. to the Doppler centroid line, then the scene will be sampled on a non-uniform grid like that shown in Fig. 7.

If the aircraft trajectory and the orientation of the synthetic aperture beams are known, the geometric distortions can be corrected by resampling of the obtained SAR images to a rectangular grid on the ground plane. This resampling procedure is described in Section 4. However, this approach could be inefficient in the case of significant geometric distortions.

Alternatively, geometric errors can be avoided if the orientation of the synthetic beams is adjusted at the stage of synthesis by using the trajectory information. The purpose of this adjustment is to keep the beam orientation constant with respect to the reference flight direction. This is the idea of the built-in geometric correction discussed in Section 5.

The correction of the phase errors and range migration errors caused by trajectory deviations can be applied to the raw data before the aperture synthesis. After such compensation, the raw data look as if they were collected along the reference straight line, and the synthetic beams are then set with respect to the reference local coordinate system. Such an approach is widely used with SAR processing algorithms working in the frequency domain. This motion compensation technique is considered in Section 6 with application to the range-Doppler algorithm.

## **3.3 Radiometric errors in SAR images**

474 Recent Advances in Aircraft Technology


The problem of radiometric errors is illustrated in Fig. 8. If there are no orientation errors, the synthetic beam of the central look is directed to the centre of the real antenna beam, and all SAR look beams are within the main lobe of the real antenna pattern, as shown in Fig. 8a. The antenna orientation errors lead to a situation in which the SAR beams are directed outside the real antenna beam, towards non-illuminated ground areas, as shown in Fig. 8b, resulting in radiometric errors.

Fig. 8. Multi-look processing without antenna orientation errors (a) and with orientation errors: (b) without clutter-lock, (c) with clutter-lock, (d) with extended number of looks.

Instabilities of the aircraft orientation can be compensated by stabilizing the antenna, mounting it on a gimbal, which helps to keep the antenna beam orientation constant. However, this approach is rather complicated and expensive.

The application of a wide-beam antenna firmly mounted on the aircraft is a less expensive way to guarantee a uniform illumination of the ground scene despite instabilities of the platform orientation. Several shortcomings of this approach should be noted. The application of a wide antenna beam means some degradation of the radar sensitivity. It also calls for a higher PRF to sample the increased Doppler frequency band. Moreover, only the central part of the antenna footprint will be illuminated uniformly, limiting the number of looks that can be built without an additional radiometric compensation.

The clutter-lock technique (Li et al., 1985; Madsen, 1989) is usually used to avoid radiometric errors in SAR images. According to the clutter-lock technique, the azimuth reference functions are built adaptively so that the synthetic beams track the direction of the real antenna beam, staying within the main lobe of the real antenna pattern as shown in Fig. 8c. However, the variations of the synthetic beam orientation due to the clutter-lock naturally lead to geometric distortions in SAR images.
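The core quantity behind the clutter-lock is the Doppler centroid estimated from the received data itself. As a generic sketch (not necessarily the exact estimator of the cited works), the centroid of one range gate can be taken as the energy-weighted mean frequency of its azimuth power spectrum; all names and parameter values below are illustrative:

```python
import numpy as np

def doppler_centroid(az_samples, prf):
    """Estimate the Doppler centroid of one range gate as the
    energy-weighted mean frequency of its azimuth power spectrum."""
    spec = np.abs(np.fft.fft(az_samples)) ** 2
    freqs = np.fft.fftfreq(len(az_samples), d=1.0 / prf)  # Doppler axis [Hz]
    return np.sum(freqs * spec) / np.sum(spec)

# Synthetic check: a complex tone at 125 Hz sampled at PRF = 1 kHz
prf = 1000.0
t = np.arange(256) / prf
fdc = doppler_centroid(np.exp(2j * np.pi * 125.0 * t), prf)
```

Note that this simple estimator assumes the centroid lies within ±PRF/2; practical clutter-lock implementations resolve the Doppler ambiguity separately.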

Synthetic Aperture Radar Systems for Small Aircrafts: Data Processing Approaches 477


The clutter-lock technique is effective if the variations of the antenna beam orientation are slow in time and small compared to the real antenna beam width in azimuth. In this case, the geometric distortions can be corrected by resampling. If the orientation instabilities are fast and significant, the clutter-lock leads to strong geometric distortions in SAR images which cannot be easily corrected by resampling.

We have proposed an alternative radiometric correction approach, which is based on a multi-look SAR processing with an extended number of SAR looks (Bezvesilniy et al., 2010c; Bezvesilniy et al., 2010d; Bezvesilniy et al., 2011b; Bezvesilniy et al., 2011c). This technique can be used instead of the clutter-lock. The idea of the approach consists in the formation of an extended number of looks to cover directions beyond the main lobe of the real antenna pattern, as illustrated in Fig. 8d. In such an approach, some of the SAR look beams are always present within the real antenna beam despite the orientation errors. In Section 5, we describe how to combine these extended SAR looks to produce the multi-look SAR image without radiometric errors. This approach is appropriate for the cases when the clutter-lock cannot be applied because of fast orientation instabilities, or for SAR processing algorithms that cannot be used together with the clutter-lock. The proposed method also allows correcting the radiometric errors in SAR images if the antenna orientation is not known accurately.

### **3.4 Dilemma: geometric distortions vs. radiometric errors**

From the above considerations, one can conclude that an attempt to avoid geometric errors by the appropriate pointing of the synthetic beams leads to radiometric errors. And vice versa, the clutter-lock results in geometric errors. So the dilemma of "geometric distortions vs. radiometric errors" should be resolved when developing any SAR data processing approach for SAR systems with motion errors.

We describe three alternative approaches to this problem. In the first approach, described in Section 4, the priority is set to avoiding radiometric errors, and the clutter-lock is applied. Geometric errors are corrected by resampling of the obtained SAR images. In the second approach, considered in Section 5, the geometric accuracy of SAR images is the primary goal, and we implement a synthetic beam control algorithm called "built-in geometric correction" to point the beams to the nodes of a correct rectangular grid on the ground plane. Radiometric errors are corrected by multi-look processing with an extended number of looks. In the third approach, discussed in Section 6, a range-Doppler algorithm with the first- and second-order motion compensation is considered, which allows obtaining SAR images without significant geometric errors. The application of a wide-beam real antenna could be a solution to the problem of radiometric errors for this approach.

## **4. Time-domain SAR processing with clutter-lock and geometric correction by resampling**

In this section, we consider a time-domain SAR data processing algorithm assuming that the aircraft flight altitude and velocity, as well as the antenna beam orientation angles, change slowly in the sense that they can be considered constant during the time of the synthesis. The main steps of the algorithm are the same as in the case of the straight-line motion with a constant orientation. These steps are described in the block-scheme shown in Fig. 2.

At each step of the synthesis, the reference function and migration curves are adjusted according to the estimated orientation angles of the real antenna beam, providing the clutter-lock. Due to the clutter-lock, it is possible to avoid radiometric errors. Geometric errors are corrected by resampling of the obtained SAR images at the post-processing stage.

#### **4.1 Estimation of the antenna orientation angles from Doppler centroid measurements**

According to the clutter-lock technique, the synthetic beams are built adaptively to track the direction of the real antenna beam. The orientation angles of the aircraft can be measured by a navigation system. The commonly used navigation systems are based on an Inertial Measurement Unit (IMU) or on a combination of an IMU and attitude GPS. They are typically rather expensive and do not always provide the required accuracy and the needed measurement rate. We have proposed an effective method for estimating the antenna orientation angles – pitch and yaw – from the Doppler measurements. The application of this technique has allowed us to simplify the navigation system by reducing it to a simple GPS receiver that measures only the platform velocity and coordinates.

The mathematical background of this technique is as follows. The dependence of the Doppler centroid on the slant range is given by

$$F_{DC} = \frac{2}{\lambda}\frac{(\vec{\mathbf{R}} \cdot \vec{\mathbf{V}})}{R} = \frac{2}{\lambda}\frac{x_R V_x - H V_z}{R}\,. \tag{17}$$

The slant range vector $\vec{\mathbf{R}} = (x_R, y_R, -H)$ is directed from the antenna phase centre to the point $(x_R, y_R)$ on the Doppler centroid line, as shown in Fig. 1, and the aircraft velocity vector is $\vec{\mathbf{V}} = (V_x, 0, V_z)$. In contrast to (7), formula (17) accounts for a possible vertical component of the aircraft motion. Substituting into (17) the expression (1) for the coordinate of a point on the Doppler centroid line, we rewrite the above dependence as

$$F_{DC}(R,\alpha,\beta) = \frac{2}{\lambda}\frac{V_x}{R}\left[H\tan\alpha\cos\beta + \sin\beta\sqrt{R^2 - H^2 - (H\tan\alpha)^2} - H\frac{V_z}{V_x}\right]. \tag{18}$$

The dependence of the Doppler centroid on range is strongly affected by the particular values of the antenna pitch and yaw angles, as illustrated in Fig. 9. This means that, in principle, the antenna beam orientation angles can be estimated via an analysis of the dependence of the measured Doppler centroid on range. However, for a practical implementation of this idea, several questions had to be answered: Would such an estimate be reliable? Is it possible to achieve the required accuracy of the angle measurements? And is it possible to perform this estimation in real time? Fortunately, we have found solutions which provide positive answers to all of these questions.

We have found that the pitch and yaw angles can be estimated by fitting the theoretical dependence of the Doppler centroid on range (18) to a set of Doppler centroid values $F_{DC}^{[n]} = F_{DC}(R_n)$ roughly estimated from the received data at each range gate, from the Doppler spectra calculated by using the FFT. Here $n$ is the range gate index.
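As a sketch of this estimation with noise-free synthetic centroids and illustrative platform parameters (the fit uses the linearization introduced below as Eqs. (19)–(20); all numerical values are assumptions, not the parameters of the actual system):

```python
import numpy as np

WAVELEN = 0.02       # radar wavelength [m] (assumed, Ku-band order)
H = 3000.0           # flight altitude [m] (assumed)
VX, VZ = 50.0, 1.0   # horizontal / vertical velocity [m/s] (assumed)

def fdc_model(R, alpha, beta):
    """Doppler centroid vs slant range for pitch alpha, yaw beta, Eq. (18)."""
    ground = np.sqrt(R**2 - H**2 - (H * np.tan(alpha))**2)
    return (2.0 / WAVELEN) * (VX / R) * (
        H * np.tan(alpha) * np.cos(beta) + np.sin(beta) * ground - H * VZ / VX)

def estimate_angles(R, fdc, iterations=2):
    """Estimate pitch and yaw by the iterative linearization (19)-(20)."""
    alpha = 0.0                                        # first iteration: zero pitch
    Y = WAVELEN * fdc * R / (2.0 * VX) + H * VZ / VX   # Y_n of Eq. (19)
    for _ in range(iterations):
        X = np.sqrt(R**2 - H**2 - (H * np.tan(alpha))**2)  # X_n^i of Eq. (19)
        slope, intercept = np.polyfit(X, Y, 1)         # straight-line fit, Eq. (20)
        beta = np.arcsin(slope)                        # slope     = sin(beta)
        alpha = np.arctan(intercept / (H * np.cos(beta)))  # intercept = H tan(a) cos(b)
    return alpha, beta

# Recover 2 deg pitch and 3 deg yaw from noise-free synthetic centroids
R = np.linspace(4000.0, 10000.0, 200)
est_alpha, est_beta = estimate_angles(R, fdc_model(R, np.deg2rad(2.0), np.deg2rad(3.0)))
```

On this noise-free data, two iterations recover both angles well within the 0.1° accuracy quoted in the text.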

Fig. 9. Dependence of the Doppler centroid on slant range.

We have developed the following fast and effective fitting procedure. By introducing new variables $X_n^i$, $Y_n$ as

$$X_n^i = \sqrt{R_n^2 - H^2 - (H\tan\alpha_{i-1})^2}\,, \qquad Y_n = \frac{\lambda F_{DC}^{[n]}}{2V_x}R_n + H\frac{V_z}{V_x}\,, \tag{19}$$

the dependence (18) is transformed into the equation of a straight line:

$$Y_n = \left(H\tan\alpha_i\cos\beta_i\right) + X_n^i\left(\sin\beta_i\right). \tag{20}$$

Thus, the problem of fitting the non-linear dependence (18) is turned into the well-known task of fitting a line to a set of experimental points. The only difficulty is that the unknown pitch angle appears in the transformation of the coordinates (19). We have solved this difficulty by using an iterative procedure: the fitting is performed iteratively with respect to the pitch angle, which is considered as a small parameter. The index $i = 1, 2, 3, \ldots$ in (19), (20) is the iteration index. At the first iteration, the pitch angle is assumed to be zero: $\alpha_0 = 0$. It has been found that two iterations are typically enough to achieve the required accuracy of about 0.1° in real time. The method has been implemented in SAR systems developed and produced at the Institute of Radio Astronomy (Vavriv et al., 2006; Vavriv & Bezvesilniy, 2011a; Vavriv et al., 2011).

#### **4.2 Correction of geometric distortions in SAR images by resampling**

In the considered time-domain SAR processing algorithm with the clutter-lock, each line of a SAR image in the range direction represents the ground scene on the Doppler centroid line determined by the current antenna beam orientation angles $\alpha(t)$ and $\beta(t)$ in the actual local coordinate system (see Fig. 6). Thus, the application of the clutter-lock under unstable flight conditions leads to geometric distortions in SAR images, as illustrated in Fig. 7. Such geometric distortions can be corrected by resampling the images from the native radar coordinates "slant range – azimuth" to a correct rectangular grid on the ground plane $(X, Y)$, taking into account the measured aircraft trajectory and the orientation of the synthetic aperture beams.

The resampling procedure consists of the following steps.

1. Define the reference flight line and the reference parameters, as well as the corresponding scene coordinate system for a given SAR image frame, as described in Section 3.1.
2. Perform the resampling (interpolation) of the SAR image $SAR(X_A, R)$ from the slant range to the ground range in four steps:
	- 2.1. Calculate the coordinates of the image pixels $SAR(X_A, R)$ in the actual local coordinate system: $(x_{SAR}(X_A, R),\, y_{SAR}(X_A, R))$.
	- 2.2. Re-calculate the coordinates of the image pixels from the actual local coordinate system to the scene coordinate system according to (15), (16) and obtain the coordinates $(X_{SAR}(X_A, R),\, Y_{SAR}(X_A, R))$.
	- 2.3. Perform a one-dimensional interpolation of the SAR image line-by-line in the range direction, from the uniform grid in the slant range to the uniform grid in the ground range. As a result, we obtain the image $SAR(X_A, Y)$.
	- 2.4. Find the coordinates $X_{SAR}(X_A, Y)$ of the image samples $SAR(X_A, Y)$ in the scene coordinate system from the coordinates $X_{SAR}(X_A, R)$ by the same one-dimensional interpolation in the range direction.
3. Perform the interpolation of the SAR image in the azimuth direction in the following two steps:
	- 3.1. Perform a joint sorting of the pairs of the range-interpolated image samples $SAR(X_A, Y)$ and their azimuth coordinates $X_{SAR}(X_A, Y)$ in ascending order with respect to the $X_A$ coordinate. This step is required to correct significant forward-backward sweeps of the synthetic beam caused by motion errors.
	- 3.2. Perform a one-dimensional interpolation of the SAR image samples $SAR(X_A, Y)$ from the initial non-uniform grid of the along-track azimuth coordinate $X_A$ to the uniform grid $X$. The result is the desired image $SAR(X, Y)$ in the ground scene coordinates.

The resampling algorithm described above is typically performed as a post-processing procedure.
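The range and azimuth interpolation stages can be sketched in NumPy as follows. This is a simplified illustration: it assumes the per-pixel ground ranges (steps 2.1–2.2) have already been computed and, unlike steps 2.2 and 2.4, uses a single along-track coordinate per azimuth line; all names are illustrative:

```python
import numpy as np

def resample_sar_image(img, ground_of_slant, ground_grid, x_of_line, x_grid):
    """Resample SAR(X_A, R) to a uniform ground grid SAR(X, Y).
    img             -- image in radar coordinates, shape (n_az, n_rg)
    ground_of_slant -- ground range of every slant-range pixel, (n_az, n_rg),
                       increasing along each line (required by np.interp)
    ground_grid     -- uniform ground-range axis Y
    x_of_line       -- along-track coordinate of each azimuth line, (n_az,)
    x_grid          -- uniform azimuth axis X
    """
    # Step 2.3: line-by-line interpolation from slant range to ground range
    img_gr = np.empty((img.shape[0], ground_grid.size))
    for i in range(img.shape[0]):
        img_gr[i] = np.interp(ground_grid, ground_of_slant[i], img[i])
    # Step 3.1: sort lines by along-track coordinate to undo
    # forward-backward sweeps of the synthetic beam
    order = np.argsort(x_of_line)
    # Step 3.2: per-column interpolation to the uniform azimuth grid
    out = np.empty((x_grid.size, ground_grid.size))
    for j in range(ground_grid.size):
        out[:, j] = np.interp(x_grid, x_of_line[order], img_gr[order, j])
    return out

# Toy identity check: flat geometry (ground range = slant range here),
# azimuth lines recorded out of order
slant = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x_lines = np.array([2.0, 0.0, 3.0, 1.0])
img = x_lines[:, None] + slant[None, :]          # SAR(X_A, R) = X_A + R
out = resample_sar_image(img, np.tile(slant, (4, 1)), slant,
                         x_lines, np.arange(4.0))
```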

#### **4.3 Experimental results**

The SAR processing approach described in Sections 4.1 and 4.2 has been implemented in the airborne RIAN-SAR-Ku system (Vavriv et al., 2006; Vavriv & Bezvesilniy, 2011a; Vavriv et al., 2011). The lightweight aircraft Antonov AN-2 and Y-12 were used as the platforms.

An example of a single-look SAR image built by the described SAR processing algorithm with the clutter-lock is shown in Fig. 10a. This is the SAR image before the correction of the geometric distortions by resampling. The image resolution is 3 m. The "forward-backward-forward" motion of the antenna beam leads to evident distortions of the road lines and the contours of the forest areas in this image.

Fig. 10. Geometric distortions in a single-look SAR image built by using the clutter-lock (a), radiometric errors in the multi-look SAR image built without the clutter-lock (b), the multi-look SAR image without errors after the resampling procedure (c).

A 5-look SAR image formed without the clutter-lock is shown in Fig. 10b. The characteristic amplitude of the antenna beam orientation instabilities was larger than the 1-degree antenna beam width, which resulted in significant radiometric errors. It should be noted that the proposed clutter-lock method, based on the estimation of the antenna beam orientation from the Doppler centroid measurements, is efficient enough to avoid these radiometric errors in Fig. 10a.

The SAR images in Figs. 10a and 10b illustrate the dilemma of "geometric distortions vs. radiometric errors". Radiometric errors are removed due to the clutter-lock in Fig. 10a, at the expense of geometric errors. And, vice versa, geometric errors are eliminated in Fig. 10b, built without the clutter-lock, but at the cost of significant radiometric errors.

The application of the proposed resampling procedure resolves the dilemma, as illustrated in Fig. 10c. In this figure, both geometric and radiometric errors are corrected.

## **5. Time-domain SAR processing with built-in geometric correction and multi-look radiometric correction**

In this section, we describe a SAR processing approach in which the correction of geometric distortions in SAR images is considered as the primary goal. We proposed (Bezvesilniy et al., 2010a; Bezvesilniy et al., 2010b; Bezvesilniy et al., 2010d; Bezvesilniy et al., 2011a) an algorithm called "built-in geometric correction" to control the synthetic beam direction so that the beams are pointed to the nodes of a correct rectangular grid on the ground plane. As a result, the SAR images are geometrically correct after the synthesis. The synthetic beams are set to the nodes regardless of the real antenna beam orientation. The radiometric errors that arise in this case are corrected by a multi-look processing with an extended number of looks.

#### **5.1 Multi-look SAR processing on a single-look interval of synthesis**

The multi-look processing in the time domain is usually performed by coherent processing on sub-intervals of a long interval of the synthesis, as described in Section 2.2. In such an approach, it is assumed that there are no significant uncompensated phase errors during the long time of the synthesis $T_{S\max}$ determined by (10). However, in order to achieve the desired azimuth resolution it is sufficient to perform the coherent processing on the short time interval $T_S$ given by (9). This fact suggests an alternative realization of the multi-look processing in the time domain, which is preferable in the case of significant motion errors. The idea of the algorithm is to process the data collected during the short time of synthesis $T_S$ with a set of different reference functions and migration curves to form the SAR look beams. We have called this approach "the multi-look processing on a single-look interval of synthesis". The proposed approach is illustrated in Fig. 11.

Fig. 11. The multi-look processing on a single-look interval of synthesis: (a) the antenna footprint consideration, (b) the raw data buffer consideration.

The reference functions of the different SAR looks should be built with the central frequencies (14), similarly to the multi-look processing scheme in the frequency domain. The SAR look beam formed with the central frequency $F_C(R, n_L)$ is directed to some point $(x_R(R, n_L), y_R(R, n_L))$, which appears at the same slant range $R$ at the centre of the short interval of the synthesis, as illustrated in Fig. 11a. Let us derive formulas for these coordinates. The position of the point in the azimuth direction is related to its Doppler centroid (17), so we can write:

$$F_C(R, n_L) = \frac{2}{\lambda}\frac{x_R(R, n_L)\,V_x - H V_z}{R}\,. \tag{21}$$

Substituting the expressions (14) and (17) into (21), we obtain:

$$x_R(R, n_L) = \frac{\lambda R\, F_C(R, n_L)}{2 V_x} + H\frac{V_z}{V_x}\,. \tag{22}$$

(a) (b) (c)

look SAR image without errors after the resampling procedure (c).

the contours of the forest areas in this image.

**look radiometric correction** 

extended number of looks.

Fig. 10. Geometric distortions in a single-look SAR image built by using the clutter-lock (a), radiometric errors in the multi-look SAR image built without the clutter lock (b), the multi-

forward" motion of the antenna beam leads to the evident distortions of the road lines and

A 5-look SAR image formed without the clutter-lock is shown in Fig. 10b. The characteristic amplitude of the antenna beam orientation instabilities was larger than the 1-degree antenna beam width what resulted in significant radiometric errors. It should be noted that the proposed clutter-lock method based on the estimation of the antenna beam from the Doppler

The SAR images in Figs. 10a and 10b illustrates the dilemma of "geometric distortions vs. radiometric errors". Radiometric errors are removed due to the clutter-lock in Fig. 10a at the expense of geometric errors. And, vice versa, geometric errors are eliminated in Fig. 10b

The application of the proposed resampling procedure resolves the dilemma, as it is illustrated in Fig. 10c. In this figure, both geometrical and radiometric errors are corrected.

**5. Time-domain SAR processing with built-in geometric correction and multi-**

In this section, we describe a SAR processing approach, in which the correction of geometric distortions in SAR images is considered as the primary goal. We proposed (Bezvesilniy et al., 2010a; Bezvesilniy et al., 2010b; Bezvesilniy et al., 2010d; Bezvesilniy et al., 2011a) an algorithm called "built-in geometric correction" to control the synthetic beam direction so that the beams are pointed to the nodes of a correct rectangular grid on the ground plane. As the result, the SAR images are geometrically correct after the synthesis. The synthetic beams are obviously set to the nodes regardless of the real antenna beam orientation. The radiometric errors that arise in this case are corrected by a multi-look processing with

centroid measurements is efficient enough to avoid these radiometric errors in Fig. 10a.

built without the clutter-lock, but at the cost of significant radiometric errors.

#### **5.1 Multi-look SAR processing on a single-look interval of synthesis**

The multi-look processing in the time domain is usually performed by the coherent processing on sub-intervals of a long interval of the synthesis as described in Section 2.2. According to such approach, it is assumed that there are no significant uncompensated phase errors during the long time of the synthesis *TS*max determined by (10). However, as a matter of fact, in order to achieve the desired azimuth resolution it is sufficient to perform the coherent processing on the short time interval *TS* given by (9). This fact gives an alternative realization of the multi-look processing in the time domain, which is more preferable in the case of significant motion errors. The idea of the algorithm is to process the data collected during the short time of synthesis *TS* with a set of different reference functions and migration curves to form the SAR look beams. We have called this approach "the multi-look processing on a single-look interval of synthesis". The proposed approach is illustrated in Fig. 11.

Fig. 11. The multi-look processing on a single-look interval of synthesis: (a) the antenna footprint consideration, (b) the raw data buffer consideration.

The reference functions of the different SAR looks should be built with the central frequencies (14), similar to the multi-look processing scheme in the frequency domain. The SAR look beam formed with the central frequency $F_C(R, n_L)$ is directed to some point $(x_R(R, n_L), y_R(R, n_L))$, which appears at the same slant range $R$ at the centre of the short interval of the synthesis, as illustrated in Fig. 11a. Let us derive formulas for these coordinates. The position of the point in the azimuth direction is related to its Doppler centroid (17), so we can write:

$$
F_C(R, n_L) = \frac{2}{\lambda} \frac{\left( x_R + \xi(R, n_L) \right) V_x - H V_z}{R} \,. \tag{21}
$$

Substituting the expressions (14) and (17) into (21), we obtain:

$$
\xi(R, n_L) = -n_L \frac{\lambda R}{2 V_x} \frac{\Delta F_D}{2} \,. \tag{22}
$$


Since the points, to which the synthetic beams of the SAR looks are directed, appear at the same slant range at the centre of the short interval of the synthesis, we can write:

$$x_R^2 + y_R^2 = \left( x_R + \xi(R, n_L) \right)^2 + \left( y_R + \eta(R, n_L) \right)^2 , \tag{23}$$

and, finally,

$$\eta(R, n_L) = \sqrt{x_R^2 + y_R^2 - \left( x_R + \xi(R, n_L) \right)^2} - y_R \,. \tag{24}$$

Thus, in order to form the set of the synthetic beams of the different SAR looks on the short interval of synthesis for the slant range $R$, we should first calculate the points $(x_R(R, n_L), y_R(R, n_L))$ from (22) and (24), which correspond to the required central frequencies (14). Then, we should process the same raw data on the interval of the synthesis *TS* with the appropriate range migration curves (5), the Doppler centroids (7) and the Doppler rates (8), by substituting the calculated coordinates $(x_R(R, n_L), y_R(R, n_L))$ instead of the coordinates $(x_R, y_R)$ in these formulas.
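As a quick numerical check of (22) and (24), the sketch below computes the azimuth offset $\xi$ and the ground-range offset $\eta$ of a SAR-look pointing target. All parameter values (wavelength, platform velocity, look frequency separation $\Delta F_D$) are illustrative assumptions, not parameters of the described systems; by construction, the offset point keeps the same ground (and hence slant) range, in accordance with (23).

```python
import math

def look_offsets(R, n_L, wavelength, V_x, dF_D, x_R, y_R):
    """Offsets (xi, eta) of the n_L-th SAR-look pointing target relative
    to the central-look point (x_R, y_R), following Eqs. (22) and (24)."""
    # Eq. (22): azimuth offset set by the look's central Doppler frequency
    xi = -n_L * (wavelength * R / (2.0 * V_x)) * (dF_D / 2.0)
    # Eq. (24): ground-range offset keeping the same slant range R
    eta = math.sqrt(x_R**2 + y_R**2 - (x_R + xi)**2) - y_R
    return xi, eta

# Illustrative numbers: ~2 cm wavelength, 5 km slant range, 100 m/s
xi, eta = look_offsets(R=5000.0, n_L=2, wavelength=0.02, V_x=100.0,
                       dF_D=60.0, x_R=0.0, y_R=4000.0)
```

The second look is shifted by a few tens of metres in azimuth and by a small compensating amount in ground range, so that $(x_R+\xi)^2 + (y_R+\eta)^2 = x_R^2 + y_R^2$ holds exactly, as (23) requires.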

The described approach to the multi-look processing has the following benefits. First, it is much easier to keep a low level of the phase errors on the short interval of synthesis, as compared to the long coherent processing time for all looks. Second, the orientation of the real antenna beam does not change significantly during the short processing time. This fact simplifies considerably the calculation of the orientation of all SAR look beams with respect to the real antenna beam for the subsequent radiometric correction. Third, a more accurate motion error compensation can be introduced in this processing scheme as compared to FFT-based algorithms. The compensation is performed based on the measured aircraft trajectory individually for each pixel of the SAR image accounting for both the range and the azimuth dependence of the phase and migration errors without any approximations. In other words, the accuracy of the motion error compensation is limited only by the accuracy of the trajectory measurements.

In the described approach, all SAR look beams are aimed at different points on the ground. It means that the obtained SAR look images are sampled on different grids. Therefore, the SAR look images should be first resampled to the same ground grid and only then they can be averaged to produce the multi-look image. Deviations of the aircraft trajectory introduce further complexity into the re-sampling process. We have proposed (Bezvesilniy et al., 2010a; Bezvesilniy et al., 2010b; Bezvesilniy et al., 2010d; Bezvesilniy et al., 2011a) an algorithm named "the built-in correction of geometric distortions" to solve this problem. This algorithm is described in the next section.

#### **5.2 Built-in geometric correction**

In order to avoid the interpolation steps in the above-described multi-look processing approach, the reference functions and the migration curves should be specially designed to point the multi-look SAR beams exactly to the nodes of a rectangular grid on the ground plane. The grid nodes to which the multi-look SAR beams should be pointed can be found as follows. The radar data are processed frame-by-frame forming a sequence of overlapped SAR images. For each frame, we define the reference flight line and the reference parameters


before the synthesis of the aperture. The reference parameters of the data frame are used to calculate the Doppler centroid values $F_{DC}(R)$, the central Doppler frequencies $F_C(R, n_L)$ of the SAR looks, and the coordinates $(x_R^{ref}(R, n_L), y_R^{ref}(R, n_L))$ of the corresponding points on the ground in the reference local coordinate system. The found points are situated on the central frequency lines, which are similar to the Doppler centroid line *AB* in Fig. 1. The synthetic beams of the SAR looks should be pointed to the grid nodes that are closest to the corresponding frequency lines.

To point the SAR look beam to the found grid node, the coordinates of this node have to be recalculated from the reference local coordinate system to the actual local coordinate system by using (15), (16), taking into account the actual aircraft position and the orientation of the aircraft velocity vector. This recalculation is performed at each step of the synthesis. After that, the appropriate range migration curves (7), the Doppler centroids (8), and the Doppler rates (9) can be determined. Finally, the synthetic beam is formed to be directed to this node.
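The recalculation step can be sketched as follows. Since the transforms (15), (16) are not reproduced in this excerpt, the function below is a hypothetical stand-in that models them as a translation to the actual aircraft position followed by a rotation aligning the local x-axis with the velocity vector; it is a minimal illustration, not the authors' implementation.

```python
import math

def node_to_actual_frame(node_xy, aircraft_xy, velocity_xy):
    """Recalculate a ground grid node from the reference (earth-fixed)
    frame into a local frame whose x-axis follows the aircraft velocity.
    Hypothetical stand-in for the chapter's transforms (15), (16)."""
    dx = node_xy[0] - aircraft_xy[0]
    dy = node_xy[1] - aircraft_xy[1]
    heading = math.atan2(velocity_xy[1], velocity_xy[0])
    # Rotate the translated node into the velocity-aligned frame
    x_loc = math.cos(heading) * dx + math.sin(heading) * dy
    y_loc = -math.sin(heading) * dx + math.cos(heading) * dy
    return x_loc, y_loc

# A node 1 km along-track and 2 km cross-track, aircraft flying along +y
x_loc, y_loc = node_to_actual_frame((2000.0, 1000.0), (0.0, 0.0), (0.0, 100.0))
```

Because the transform is rigid, distances from the aircraft to the node are preserved, which is what lets the range migration curves be re-evaluated at each step of the synthesis.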

The proposed built-in geometric correction algorithm cannot be combined with the clutter-lock technique, since the SAR beams do not follow the orientation of the real antenna beam. Therefore, the algorithm works well without an additional radiometric correction only for a wide-beam antenna and only for the central SAR looks. In order to use all possible SAR looks to form a multi-look SAR image without radiometric errors, we have proposed an effective radiometric correction technique based on multi-look processing with an extended number of looks (Bezvesilniy et al., 2010c; Bezvesilniy et al., 2010d; Bezvesilniy et al., 2011b; Bezvesilniy et al., 2011c).

#### **5.3 Radiometric correction by multi-look processing with extended number of looks**

Let us denote an error-free SAR image to be obtained as $I(X, Y)$, where $(X, Y)$ are the ground coordinates of the image pixels. This image is corrupted neither by speckle noise nor by radiometric errors. In contrast, a real SAR look image $I(n_L, X, Y)$ (where $n_L$ is the index of the SAR look) is corrupted by speckle noise $S(n_L, X, Y)$ and distorted by radiometric errors $0 \le R(n_L, X, Y) \le 1$, so that

$$I(n_L, X, Y) = I(X, Y) \cdot S(n_L, X, Y) \cdot R(n_L, X, Y) \,. \tag{25}$$

The speckle noise in a single-look SAR image (Oliver & Quegan, 1998) is a multiplicative noise with an exponential probability density function whose mean and variance are, respectively,

$$\mu \{ S(n_L, X, Y) \} = 1 \,, \quad \sigma \{ S(n_L, X, Y) \} = 1 \,. \tag{26}$$

The speckle noise is different for all SAR looks, which is indicated here by the SAR look index $n_L$. The radiometric errors caused by instabilities of the antenna orientation can be considered as low-frequency multiplicative errors. The highest spatial frequencies of the radiometric error function $R(n_L, X, Y)$ are inversely proportional to the width of the real antenna footprint in the azimuth direction. Similar to the speckle noise, the radiometric errors are different for different SAR looks.
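The speckle model (26) can be checked numerically: unit-mean exponential samples have both mean and variance equal to 1. The sketch below is a pure-Python illustration (seeded for reproducibility); the sample size is arbitrary.

```python
import random

random.seed(7)

def speckle_field(n):
    """Single-look multiplicative speckle: unit-mean exponential samples,
    so mean = 1 and variance = 1, as stated in Eq. (26)."""
    return [random.expovariate(1.0) for _ in range(n)]

samples = speckle_field(200_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Averaging N independent looks would reduce the speckle variance by a factor of N, which is the usual motivation for multi-look processing.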


In order to compensate the radiometric errors, they should first be estimated. For this purpose, we use a low-pass filter **F** to measure the local brightness of the SAR images. This filter is designed to pass the radiometric errors and, at the same time, to suppress the speckle noise to some extent:

$$\mathbf{F} \{ R(n_L, X, Y) \} \approx R(n_L, X, Y) \,, \quad \mathbf{F} \{ S(n_L, X, Y) \} \approx 1 \,. \tag{27}$$

The application of this filter to the SAR look image (25) gives, approximately:

$$I_{LF}(n_L, X, Y) = \mathbf{F} \{ I(n_L, X, Y) \} \approx I_{LF}(X, Y) \cdot R(n_L, X, Y) \,. \tag{28}$$

Here $I_{LF}(X, Y)$ is the low-frequency component of the error-free SAR image to be reconstructed. The corresponding components $I_{LF}(n_L, X, Y)$ of the real SAR looks (28) contain information about the radiometric errors and are almost not corrupted by speckle noise. These images can be used to compare radiometric errors of different SAR looks and, via such comparison, to estimate the radiometric errors. The idea of this empirical approach to the radiometric correction is based on the fact that one of many looks is pointed very closely to the centre of the real antenna beam. This look demonstrates the maximum power (brightness) among all looks, and this power is not distorted by radiometric errors.

Let us denote the number of looks to be summed up into the multi-look image as $N_L^{pro}$. This number is slightly less than the number of looks within the real antenna beam $N_L$, since the orientation instabilities may corrupt the side looks considerably. By using the low-pass filter, it is possible to select the brightest (best-illuminated) parts of the scene among all extended SAR looks with the indexes $n_L = 1, \ldots, N_L^{ext}$ and compose only $N_L^{pro}$ SAR looks (called the composite looks) for further processing. It is convenient to build the following sequence of the pairs of the composite looks and their low-frequency components:

$$\{ I^{pro}(n_L^{pro}, X, Y), \; I_{LF}^{pro}(n_L^{pro}, X, Y) \} \,, \quad n_L^{pro} = 1, \ldots, N_L^{pro} \,. \tag{29}$$

This sequence is kept in the ascending order with respect to the brightness:

$$I_{LF}^{pro}(n_L^{pro}, X, Y) \le I_{LF}^{pro}(n_L^{pro} + 1, X, Y) \,. \tag{30}$$

After processing of all the extended SAR looks, the brightest composite look is the look with the index $n_L^{pro} = N_L^{pro}$. These brightest values are obtained with the synthetic beams that are directed very closely to the centre of the real beam. Therefore, these brightness values are not distorted by the radiometric errors and give the estimate of the low-frequency component of the error-free SAR image to be reconstructed:

$$I_{LF}^{pro}(N_L^{pro}, X, Y) \approx I_{LF}(X, Y) \,. \tag{31}$$

This image can be used as the reference to estimate the radiometric error functions for all SAR looks:

$$R(n_L, X, Y) \approx \frac{I_{LF}(n_L, X, Y)}{I_{LF}^{pro}(N_L^{pro}, X, Y)} \,. \tag{32}$$

Fig. 12. The main steps of the multi-look radiometric correction algorithm.

By using the estimated radiometric error functions, radiometric errors for all SAR looks can be corrected before combining them into the multi-look SAR image. The main steps of the described algorithm are shown in Fig. 12.
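The core of the empirical correction (28)-(32) can be sketched on a one-dimensional toy example. The boxcar smoother below is only a stand-in for the low-pass filter **F**, and the composite-look formation (29), (30) is reduced to a pointwise maximum over the available looks; this is a simplified illustration under those assumptions, not the authors' implementation.

```python
def lowpass(img, half=8):
    """Boxcar smoother standing in for the low-pass filter F of Eq. (27)."""
    n = len(img)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def correct_looks(looks):
    """Empirical radiometric correction: the brightest low-frequency value
    among the looks serves as the reference I_LF (Eq. (31)), each look's
    error is estimated as the ratio of its low-frequency profile to the
    reference (Eq. (32)), and the looks are divided by their estimated
    errors before being averaged into the multi-look image."""
    lf = [lowpass(look) for look in looks]
    ref = [max(col) for col in zip(*lf)]          # Eq. (31), pointwise
    corrected = []
    for look, look_lf in zip(looks, lf):
        # Divide out R(n_L, x) ~ I_LF(n_L, x) / I_LF_ref(x)  (Eq. (32))
        corrected.append([v * r / l for v, r, l in zip(look, ref, look_lf)])
    n = len(looks)
    return [sum(col) / n for col in zip(*corrected)]

# Two toy looks of a uniformly bright (=1) scene, each dimmed on one half
looks = [[0.5] * 32 + [1.0] * 32, [1.0] * 32 + [0.5] * 32]
multilook = correct_looks(looks)
```

Away from the transition region the corrected multi-look profile recovers the uniform brightness of the scene, while simple averaging of the raw looks would leave 0.75-valued strips.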

If the navigation system is capable of measuring accurately the fast variations of the real antenna beam orientation, and if the real antenna pattern is known, the radiometric error functions (32) can be calculated directly from the relative orientation of the synthetic beam and the real antenna beam. This approach is more rigorous and accurate than the above-described empirical approach based on the image brightness estimation. Nevertheless, with this approach, it is still necessary to build an extended number of SAR looks, select the best parts of the SAR images among all looks, and form the composite looks for multi-look processing.

## **5.4 Experimental results**


The proposed approach has been used for post-processing of the radar data obtained with the RIAN-SAR-Ku and RIAN-SAR-X systems described in Section 7.

The performance of the built-in geometric correction is illustrated in Fig. 13. The SAR image shown in Fig. 13a is built by using the clutter-lock technique. One can see geometric distortions caused by instabilities of the antenna orientation. The undistorted SAR image shown in Fig. 13b is formed by using the algorithm with the built-in geometric correction. Both images have 3-m resolution and are built of 3 looks. The accuracy of the geometric correction is illustrated in Fig. 13c, where the SAR image built of 45 looks and formed by using the built-in geometric correction is superimposed on the Google Maps image of the scene.

Synthetic Aperture Radar Systems for Small Aircrafts: Data Processing Approaches 487


Fig. 13. Illustration of the geometric correction: (a) the 3-look SAR image built by using the clutter-lock technique, (b) the 3-look SAR image formed by using the built-in geometric correction, and (c) the 45-look SAR image formed by using the built-in geometric correction is imposed on the Google Maps image of the scene.

Fig. 14. Radiometric errors in the SAR image built by simple averaging of all extended SAR looks (a); (b) the SAR image formed of 5 composite SAR looks by using the proposed radiometric correction with extended number of looks.


The performance of the proposed radiometric correction by the multi-look processing with an extended number of looks is illustrated in Fig. 14. The SAR image in Fig. 14a is built by simple averaging of all extended SAR looks. The image demonstrates good geometric accuracy; however, radiometric errors are present. One can see dark and light strips in the image caused by the non-uniform illumination of the scene. The dark areas were illuminated only for a short time, when the real antenna footprint quickly moved to the neighbouring areas of the scene; the light areas were correspondingly illuminated for a longer time. The SAR image shown in Fig. 14b is built by using the proposed method of the multi-look radiometric correction with an extended number of looks. The image is built of 5 composite SAR looks. One can see that the radiometric errors have been corrected successfully.

The obtained results prove that the described SAR processing approach can be effectively used for SAR systems installed on light-weight aircrafts with a non-stabilized antenna. An important advantage of the algorithm is that the produced SAR images are geometrically correct immediately after the synthesis, and there is no need for any additional interpolation. Another important advantage is the reduced requirements on the SAR navigation system. Although the aircraft velocity vector should be measured quite accurately to point the synthetic beams at the proper points on the ground, the aircraft trajectory has to be measured and compensated with a high accuracy of a fraction of the radar wavelength only during the short time of the synthesis of one look. There is no need to maintain such high accuracy of the trajectory measurement during the long time of the data acquisition for all looks.

#### **6. Range-Doppler algorithm with the 1-st and 2-nd order motion compensation**

The range-Doppler algorithm (RDA) is one of the most popular SAR processing algorithms. High computational efficiency and simplicity of implementation are its main advantages. This algorithm belongs to the frame-based SAR processing algorithms, which use the FFT and work in the frequency domain. Motion compensation within the data frame is required. The SAR images are geometrically correct, but they are originally produced in the radar coordinates "slant range – azimuth". Therefore, ground mapping by interpolation is required, followed by stitching of the obtained image frames into the SAR image of the ground strip. Possible radiometric errors should be additionally corrected.

#### **6.1 The 1-st and 2-nd order motion compensation**

The geometry of the motion compensation problem is illustrated in Fig. 15. The point $A(0, 0, H)$ indicates the expected position of the aircraft on the reference straight-line trajectory. The point $A_E(\Delta x_E, \Delta y_E, H + \Delta z_E)$ corresponds to the actual position on the real trajectory. The slant range error for the synthetic beam directed to the point $P(x_R, y_R, 0)$ on the Doppler centroid line (1), (2) at the slant range $R$ can be written as

$$
\Delta R_E(x_R, y_R) = R_E(x_R, y_R) - R \tag{33}
$$

$$R_E(x_R, y_R) = \sqrt{(\Delta x_E - x_R)^2 + (\Delta y_E - y_R)^2 + (H + \Delta z_E)^2}\,. \tag{34}$$


These relations describe both the range migration errors and the corresponding phase errors

$$
\Delta\varphi_E(x_R, y_R) = -\frac{4\pi}{\lambda}\Delta R_E(x_R, y_R) \tag{35}
$$

caused by the trajectory deviations $\mathbf{r}_E = (\Delta x_E, \Delta y_E, \Delta z_E)$.

Fig. 15. Geometry of trajectory deviations.
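As a minimal numerical sketch of Eqs. (33)–(35) (the wavelength value and the geometry numbers below are illustrative assumptions, not system parameters):

```python
import numpy as np

WAVELENGTH = 0.03  # radar wavelength [m]; an X-band value assumed for illustration

def slant_range_error(x_r, y_r, height, dx_e, dy_e, dz_e):
    """Slant-range error, Eqs. (33), (34), for a beam pointed at P(x_R, y_R, 0).

    The aircraft is expected at A(0, 0, H) but is actually at
    A_E(dx_e, dy_e, H + dz_e) due to trajectory deviations.
    """
    # Nominal slant range from the reference position A(0, 0, H)
    r_nominal = np.sqrt(x_r**2 + y_r**2 + height**2)
    # Actual slant range from the deviated position, Eq. (34)
    r_actual = np.sqrt((dx_e - x_r)**2 + (dy_e - y_r)**2 + (height + dz_e)**2)
    return r_actual - r_nominal  # Eq. (33)

def phase_error(delta_r, wavelength=WAVELENGTH):
    """Two-way phase error caused by the range error, Eq. (35)."""
    return -4.0 * np.pi / wavelength * delta_r
```

The sketch makes the scale of the problem visible: a range error of only half a wavelength already produces a full $2\pi$ of two-way phase error, which is why the trajectory must be known to a fraction of the wavelength.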

In order to compensate the motion errors, we should correct the range migration errors (33), (34) by introducing an additional interpolation in the range direction and also correct the phase errors (35) in the azimuth direction. The corresponding correction should be performed individually for each pulse on the interval of the synthesis, in accordance with the current aircraft position error $\mathbf{r}_E = (\Delta x_E, \Delta y_E, \Delta z_E)$. The problem is that the range error $\Delta R_E$ depends not only on the slant range but also on the direction to the point $P(x_R, y_R, 0)$. It means that the motion errors depend on both range and azimuth and are different for different points on the scene. In other words, the same radar pulses on two overlapped intervals of the synthesis should be compensated individually for neighbouring points in the azimuth direction. Such complete and accurate motion error compensation is possible only in those SAR processing algorithms which allow the application of an individual reference function and range migration curve for each point of the SAR image. It is possible, for example, in the time-domain SAR processing algorithms considered here. However, for most SAR processing algorithms, including the range-Doppler algorithm, the dependence of the error $\Delta R_E$ on the azimuth must be disregarded and only the range dependence is taken into account.

The motion error correction should not interfere with the range and azimuth compression. Any range-dependent motion compensation cannot be applied before the range compression of the received radar pulses. Otherwise, the range LFM waveform of the transmitted pulse will be distorted. Also, the range-dependent compensation cannot be applied before the range migration correction step of the SAR processing algorithm.


Otherwise, different corrections applied to neighbouring range bins would introduce phase errors in the azimuth direction.

Fig. 16. Range-Doppler algorithm with the 1-st and 2-nd order motion compensation.



To cope with the above problems, the motion compensation procedure for the range-Doppler algorithm (and similar FFT-based algorithms) is usually divided into two steps (Franceschetti & Lanari, 1999):

1. First-order range-independent motion compensation,
2. Second-order range-dependent motion compensation.
The first-order motion compensation includes the range delay correction (33), (34) of the received pulses (with interpolation) and the phase compensation (35), which are calculated for some reference range, for example, for the centre range of the swath $R_C$:

$$\varphi_E^{(I)}(R_C, t) = \exp\left[-i\frac{4\pi}{\lambda}\Delta R_E^{(I)}(R_C, t)\right].\tag{36}$$

Here *t* is the flight time. The first-order motion compensation can be incorporated into the range compression step but it should be performed before any processing step in the azimuth, in particular, before the range migration correction in the range-Doppler algorithm, as shown in Fig. 16.

The second-order range-dependent motion compensation is performed after the range compression and the range migration correction steps. It includes the phase compensation and may (or may not) include the following range interpolation step:

$$
\Delta R_E^{(II)}(R, t) = \Delta R_E(R, t) - \Delta R_E^{(I)}(R_C, t)\,. \tag{37}
$$

$$\varphi_E^{(II)}(R, t) = \exp\left[-i\frac{4\pi}{\lambda}\Delta R_E^{(II)}(R, t)\right].\tag{38}$$

Since the motion errors depend on time, it is necessary to return from the range-Doppler domain into the time domain by the inverse FFT, apply the corrections (37), (38), and go back into the range-Doppler domain by applying the direct FFT again, as shown in Fig. 16. After that, we can perform the azimuth compression.

After the compensation, the raw data appear as if they were collected from the reference straight-line trajectory, and the range-Doppler processing is performed by using the reference parameters of the data frame.
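The two-step procedure of Eqs. (36)–(38) can be sketched as follows for range-compressed data. This is an illustrative sketch only: the array names and the simple linear interpolation are assumptions, and the inverse/direct FFT bookkeeping of Fig. 16 around the second-order step is omitted.

```python
import numpy as np

def motion_compensation(data, delta_r, range_axis, r_c, wavelength):
    """Sketch of the 1-st and 2-nd order motion compensation.

    data       : complex array [n_pulses, n_bins] of range-compressed pulses
    delta_r    : slant-range error DeltaR_E(R, t) of Eqs. (33), (34),
                 evaluated per pulse and per range bin, [n_pulses, n_bins]
    range_axis : slant ranges of the bins [m]
    r_c        : reference (centre-swath) range R_C [m]
    """
    k = 4.0 * np.pi / wavelength
    # First-order, range-independent correction at R_C, Eq. (36):
    i_c = int(np.argmin(np.abs(range_axis - r_c)))
    dr1 = delta_r[:, i_c]                    # DeltaR_E^(I)(R_C, t), one value per pulse
    out = np.empty_like(data)
    for p in range(data.shape[0]):
        # range-delay correction by linear interpolation of the complex samples
        shifted = range_axis + dr1[p]
        out[p] = (np.interp(shifted, range_axis, data[p].real)
                  + 1j * np.interp(shifted, range_axis, data[p].imag))
        out[p] *= np.exp(1j * k * dr1[p])    # conjugate of the error phase, Eq. (36)
    # Second-order, range-dependent residual correction, Eqs. (37), (38):
    dr2 = delta_r - dr1[:, np.newaxis]       # DeltaR_E^(II)(R, t), Eq. (37)
    return out * np.exp(1j * k * dr2)        # conjugate of Eq. (38)
```

Note how the split works: the expensive per-pulse range shift uses a single value per pulse, while the residual correction is a cheap element-wise phase multiplication that carries the range dependence.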

#### **6.2 Problem of radiometric errors caused by motion compensation**

The RDA performs processing of data blocks in the azimuth frequency domain assuming that the aircraft goes along a straight trajectory with a constant orientation during the time of the data frame accumulation. Therefore, instabilities of the antenna beam orientation within the data frame lead to radiometric errors in SAR images formed by the RDA.

After applying the above-described 1-st and 2-nd order motion compensation procedures, the corrected raw data demonstrate the range migration and phase behaviour as if the data were collected from the reference straight trajectory. However, the illumination of the scene by the real antenna is not changed, and radiometric errors are still present.

Moreover, the application of the motion compensation can make the problem of radiometric errors even worse (Bezvesilniy et al., 2011c). After the motion compensation, the location of the antenna footprint on the ground should be described with respect to the position of the aircraft on the reference trajectory by a pair of orientation angles (the motion-compensated, or "MoCo", angles), which are different from the corresponding angles of the actual local coordinate system. The Doppler centroid values of the corrected radar data are therefore different from the Doppler centroid values before the motion compensation. For example, even if the antenna orientation is constant with respect to the actual local coordinate system, the orientation of the antenna beam can demonstrate variations with respect to the reference flight line. It means that raw data with a constant Doppler centroid could demonstrate Doppler centroid variations after applying the motion compensation. This effect becomes more significant in the case of notably curved trajectories.

The problem of the correction of radiometric errors in SAR images formed by using the range-Doppler algorithm can be solved by the multi-look processing with an extended number of looks, as described in Section 5. It should be pointed out that the clutter-lock, based on the estimation of the antenna beam orientation angles from the Doppler centroid measurements, can be used together with the range-Doppler algorithm only to estimate the reference orientation angles for each data frame.

## **6.3 Experimental results**



The range-Doppler algorithm with the 1-st and 2-nd order motion compensation procedures was implemented in the airborne RIAN-SAR-X system, which allows us to obtain multi-look SAR images in real time. The application of a wide-beam antenna enables avoiding radiometric errors in real time. An example of a 7-look SAR image with a 2-m resolution is given in Fig. 17. The multi-look radiometric correction with the extended number of looks can be applied as a post-processing task to the recorded raw data.

Fig. 17. An example of a 7-look SAR image obtained with the X-band SAR system.


| Parameter | RIAN-SAR-Ku | RIAN-SAR-X |
|---|---|---|
| **Transmitter** | | |
| Transmitter type | TWT PA\* | SSPA\*\* |
| Operating frequency | Ku-band | X-band |
| Transmitted peak power | 100 W | 120 W |
| Pulse repetition frequency | 5 – 20 kHz | 3 – 5 kHz |
| Pulse repetition rate | < 200 Hz / (m/s) | Not used |
| Pulse compression technique | Binary phase coding (M-sequences) | Linear frequency modulation |
| Pulse bandwidth | 50 MHz | 100 MHz |
| Pulse duration | 5.12 µs | 5 – 16 µs |
| **Receiver** | | |
| Receiver type | Analogue | Digital |
| Receiver bandwidth | 100 MHz | 100 MHz |
| Receiver noise figure | 2.5 dB | 2.0 dB |
| System losses | 4.0 dB | 1.5 dB |
| ADC sampling frequency | 100 MHz | 200 MHz |
| ADC capacity | 12 bit | 14 bit |
| **Antenna** | | |
| Antenna type | Slotted-waveguide / Horn | Slotted-waveguide |
| Antenna beam width in azimuth | 1° / 7° | 10° |
| Antenna beam width in elevation | 40° / 40° | 40° |
| Antenna gain | 30 dB / 21 dB | 20 dB |
| Polarization | HH or VV / VV | VV |
| **SAR Platform** | | |
| Aircraft flight velocity | 30 – 80 m/s | 30 – 80 m/s |
| Aircraft flight altitude | 1000 – 5000 m | 1000 – 5000 m |
| Aircrafts used | AN-2, Y-12 | AN-2 |

\* TWT PA is an acronym for a traveling-wave tube power amplifier.

\*\* SSPA is an acronym for a solid-state power amplifier.

Table 1. Characteristics of the SAR hardware systems.

## **7. Practical Ku- and X-band SAR systems**

In this section we describe the design and basic technical characteristics of the already mentioned Ku- and X-band SAR systems developed and produced at the Institute of Radio Astronomy of the National Academy of Sciences of Ukraine (Vavriv et al., 2006; Vavriv & Bezvesilniy, 2011a; Vavriv et al., 2011). The SAR systems were designed to be deployed on light-weight aircraft. The systems were successfully operated from AN-2 and Y-12 aircrafts.

## **7.1 Airborne system RIAN-SAR-Ku**

The Ku-band SAR system RIAN-SAR-Ku (Vavriv et al., 2006; Vavriv & Bezvesilniy, 2011a; Vavriv et al., 2011) operates in a strip-map mode producing single-look SAR images with a 3-meter resolution in real time. The radar can perform measurements at two linear polarizations. The system also has a Moving Target Indication (MTI) capability. Characteristics of the system hardware are listed in Table 1.

## **7.1.1 Hardware solutions**

The radar transmitter is based on a traveling-wave tube power amplifier (TWT PA). The radar transmits long pulses with a duration of 5 µs. The binary phase coding technique is used for the pulse compression to achieve a 3-meter range resolution. M-sequences of length 255 are used for the phase coding. The transmitted pulse bandwidth is 50 MHz.

A high pulse repetition frequency (PRF) of 20 kHz is required in the system for the detection of moving targets. The application of binary phase coding allows us to simplify dramatically the hardware realization of the range compression as compared to the well-known pulse compression technique based on pulses with linear frequency modulation (LFM). It is critical to manage the range compression in real time at the high PRF of 20 kHz.
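The principle can be illustrated with a short sketch: a 255-chip M-sequence is generated by a maximal-length linear feedback shift register (the tap set below is one standard choice for an 8-bit register, an illustrative assumption), and the range compression then reduces to a correlation of the received echo with the code.

```python
import numpy as np

def m_sequence(taps, length):
    """Maximal-length sequence from a Fibonacci LFSR (taps are 1-based)."""
    degree = max(taps)
    state = [1] * degree           # any non-zero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])      # output the oldest register bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]     # XOR of the tapped bits
        state = [fb] + state[:-1]  # shift the feedback bit in
    return np.array(out)

# 8-bit register, taps (8, 6, 5, 4): one standard maximal-length choice,
# giving the 255-chip code length mentioned in the text
code = 1.0 - 2.0 * m_sequence([8, 6, 5, 4], 255)   # chips mapped to +1/-1

# Range compression = correlation of the received echo with the code;
# a noise-free point target delayed by 40 samples compresses to one peak
echo = np.concatenate([np.zeros(40), code, np.zeros(40)])
compressed = np.correlate(echo, code, mode='valid')
peak = int(np.argmax(np.abs(compressed)))          # -> 40, peak value 255
```

In hardware the same operation needs only sign flips and additions (the code is ±1), which is what makes real-time compression at a 20-kHz PRF tractable compared with LFM matched filtering.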

The pulse repetition frequency is adjusted continuously to keep the ratio of the aircraft velocity to the PRF constant. It means that the aircraft always flies the same distance during the pulse repetition period. Such an approach is used to simplify the further SAR processing.
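For illustration (using the 200 Hz per m/s figure from Table 1 as an assumed proportionality constant), keeping the PRF proportional to velocity fixes the along-track distance flown between pulses:

```python
def adjusted_prf(velocity, hz_per_mps=200.0):
    """PRF tied to the ground velocity; hz_per_mps is the Table 1 ratio."""
    return hz_per_mps * velocity

# The pulse-to-pulse along-track spacing v / PRF is then constant:
spacing_slow = 30.0 / adjusted_prf(30.0)   # 0.005 m at 30 m/s
spacing_fast = 80.0 / adjusted_prf(80.0)   # 0.005 m at 80 m/s
```

A constant along-track sample spacing means the azimuth reference functions do not have to be rescaled as the aircraft speeds up or slows down.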

A sensitive receiver with a noise figure of 2.5 dB is used in the SAR. The system losses are 4.0 dB. The received data are sampled with two 12-bit ADCs at a sampling frequency of 100 MHz.

For the detection of moving targets, we used the following simple principle: all signals detected outside of the Doppler spectrum of the ground echo are assumed to be signals of moving targets. This approach calls for the use of a narrow-beam antenna, so that the Doppler spectrum from the ground is narrow. Therefore, a long slotted-waveguide antenna of length 1.8 m with a 1-degree beam has been used. The antenna is actually built of two separate antennas, so that the SAR system can operate at two orthogonal linear polarizations.
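A toy sketch of this detection principle (the function name, band limits, and threshold are assumptions for illustration): any azimuth-spectrum component of a range bin that exceeds a threshold outside the clutter band around the Doppler centroid is declared a mover.

```python
import numpy as np

def detect_movers(slow_time, prf, centroid, clutter_width, threshold):
    """Return Doppler frequencies of detections outside the clutter band."""
    spectrum = np.fft.fftshift(np.fft.fft(slow_time))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(slow_time), d=1.0 / prf))
    outside = np.abs(freqs - centroid) > clutter_width / 2.0
    hits = outside & (np.abs(spectrum) > threshold)
    return freqs[hits]

# One range bin: strong ground clutter at the centroid (0 Hz) plus a weak
# target echo Doppler-shifted to 300 Hz by the target's radial motion
n, prf = 1000, 1000.0
t = np.arange(n) / prf
bin_signal = 5.0 + np.exp(2j * np.pi * 300.0 * t)
movers = detect_movers(bin_signal, prf, 0.0, 100.0, threshold=n / 2.0)
```

The narrow 1-degree beam keeps `clutter_width` small, leaving most of the Doppler band available for unambiguous mover detection.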

The usage of such narrow-beam antennas is not common for airborne SAR systems. It imposes the following limitations on SAR imaging.

First, the azimuth resolution in the strip-map mode is limited to half of the antenna length, which is about 1 m for this system. If we degrade this resolution to 3 m, which is equal to the range resolution, it is possible to build only 5 half-overlapped SAR looks.


Second, the antenna beam orientation should be measured with a high accuracy of about 0.1° (that is, 1/10th of the antenna beam width) to avoid radiometric errors in SAR images. The application of an antenna with a wider beam would relax this requirement. An alternative horn antenna with a 7-degree beam was used to make the system capable of producing high-quality SAR images with many looks by processing the recorded data.

#### **7.1.2 Signal processing solutions**

Radar data processing is performed with a special PCI-board equipped with a DSP and an FPGA. Characteristics of the SAR data processing system are given in Table 2.


In order to measure accurately the antenna orientation, the algorithm described in Section 4 for the estimation of the antenna orientation angles directly from Doppler frequencies of backscattered radar signals was introduced. The accuracy of the estimation is about 0.1°. The angles are updated about 10 times per second what is sufficient to track fast variations of the

The estimated angles are used to realize the clutter-lock. The pre-filter and the SAR reference functions are updated rapidly to track variations of the antenna orientation, and


The pre-filtering procedure was implemented to reduce the high input data rate by coherent accumulation and down-sampling of the data in azimuth from 20 kHz to about 100 Hz, a rate determined by the antenna beam width.
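As a rough illustration, this presum operation can be sketched as follows (the actual filter shape and any Doppler-centroid steering used in the system are not specified in the text, so a plain boxcar presum is assumed):

```python
import numpy as np

def prefilter_azimuth(data, prf=20e3, target_rate=100.0):
    """Reduce the azimuth data rate by coherently summing consecutive
    range-compressed pulses and down-sampling (a boxcar presum filter)."""
    presum = int(prf / target_rate)            # 20 kHz -> 100 Hz: presum = 200
    n_out = data.shape[0] // presum
    trimmed = data[:n_out * presum]
    # Complex samples are summed, so low-Doppler clutter adds in phase
    # while signal outside the reduced azimuth band is attenuated.
    return trimmed.reshape(n_out, presum, -1).sum(axis=1) / presum

# 2000 pulses x 8 range gates at 20 kHz -> 10 output lines at ~100 Hz
filtered = prefilter_azimuth(np.ones((2000, 8), dtype=complex))
print(filtered.shape)  # (10, 8)
```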

The time-domain convolution-based SAR processing algorithm with range migration correction by interpolation is implemented, as described in Section 4. This algorithm forms each pixel of the SAR image with a separate reference function and migration curve; therefore, it works well under unstable flight conditions. The algorithm is fast enough for real-time operation if the length of the convolution is not too long; with the narrow-beam antenna and the pre-filtering procedure, this requirement is satisfied. The SAR processing system is able to build single-look SAR images with 3-meter resolution in real time. The number of range gates is 1024, resulting in a 1536-meter range swath.
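A minimal sketch of this per-pixel focusing idea follows (simplified: nearest-gate selection instead of interpolation, and an illustrative wavelength; the actual implementation described in Section 4 is more elaborate):

```python
import numpy as np

WAVELENGTH = 0.03  # illustrative radar wavelength [m]

def focus_pixel(data, t_axis, r_axis, x0, r0, v, t_synth):
    """Focus one image pixel by time-domain correlation with its own
    reference function, following the pixel's range-migration curve.

    data    : range-compressed matrix, shape (n_pulses, n_range_gates)
    t_axis  : slow time of each pulse [s]
    r_axis  : slant range of each gate [m]
    x0, r0  : along-track position and closest-approach range of the pixel
    v       : platform velocity [m/s]
    t_synth : synthetic-aperture duration [s]
    """
    t0 = x0 / v
    mask = np.abs(t_axis - t0) <= t_synth / 2.0
    acc = 0.0 + 0.0j
    for i in np.nonzero(mask)[0]:
        # Instantaneous range to the pixel (its range-migration curve)
        r = np.hypot(r0, v * t_axis[i] - x0)
        gate = int(round((r - r_axis[0]) / (r_axis[1] - r_axis[0])))
        if 0 <= gate < data.shape[1]:
            # Reference function: conjugate of the expected echo phase
            acc += data[i, gate] * np.exp(1j * 4.0 * np.pi * r / WAVELENGTH)
    return acc
```

Because every pixel carries its own reference function and migration curve, the synthesis stays valid even when the trajectory deviates from a straight line, at the cost of the per-pixel loop.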

To measure the antenna orientation accurately, the algorithm described in Section 4 for estimating the antenna orientation angles directly from the Doppler frequencies of the backscattered radar signals was introduced. The accuracy of the estimation is about 0.1°. The angles are updated about 10 times per second, which is sufficient to track fast variations of the antenna beam orientation.
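One common way to obtain such estimates, sketched here under assumed values of velocity and wavelength, is the correlation Doppler-centroid estimator (Madsen, 1989) followed by inverting the flat-geometry centroid relation f_dc = (2v/λ)·sin ψ for the yaw angle:

```python
import numpy as np

def doppler_centroid(data, prf):
    """Correlation Doppler-centroid estimator: the phase of the average
    pulse-to-pulse correlation gives the centroid frequency."""
    corr = np.sum(data[1:] * np.conj(data[:-1]))
    return prf * np.angle(corr) / (2.0 * np.pi)

def yaw_from_centroid(f_dc, v, wavelength):
    """Invert f_dc = (2 v / lambda) * sin(yaw) for the antenna yaw angle
    (flat-geometry approximation; pitch is handled per range gate)."""
    return np.degrees(np.arcsin(f_dc * wavelength / (2.0 * v)))

# Synthetic check: clutter with a known 120-Hz centroid at a 3-kHz PRF
prf, f_true = 3000.0, 120.0
n = np.arange(4096)
signal = np.exp(2j * np.pi * f_true * n / prf).reshape(-1, 1)
f_est = doppler_centroid(signal, prf)
print(round(f_est, 1))  # 120.0
print(round(yaw_from_centroid(f_est, v=60.0, wavelength=0.032), 2))  # 1.83
```

The velocity and wavelength above are illustrative; the chapter's own estimator and geometry are those of Section 4.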

The estimated angles are used to realize the clutter-lock. The pre-filter and the SAR reference functions are updated rapidly to track variations of the antenna orientation, and thus to avoid radiometric errors in SAR images.

The radar system is able to record the range-compressed data at the data rate of about 12 MB/s to hard disk drives for post-processing. A 7-times decimation of the input data stream is used to reduce the data rate for recording. The pre-filtered radar data, the navigation data, and SAR images are recorded as well.

## **7.2 Airborne system RIAN-SAR-X**

The X-band SAR system RIAN-SAR-X (Vavriv & Bezvesilniy, 2011a; Vavriv et al., 2011), developed in Ukraine, is capable of producing high-quality multi-look SAR images with a 2-meter resolution in real time. The system is designed to operate from light-weight aircraft platforms in side-looking or squinted strip-map modes. Characteristics of the radar hardware and the signal processing systems are listed in Tables 1 and 2.

### **7.2.1 Hardware solutions**

| Parameter | RIAN-SAR-Ku | RIAN-SAR-X |
|---|---|---|
| **Range processing** | | |
| Range resolution | 3 m | 2 m |
| Range sampling interval | 1.5 m | 1.5 m |
| Number of range gates | 1024 | 2048 (processed) / 4096 (raw) |
| Range swath width | 1536 m | 3072 m |
| **Azimuth processing** | | |
| SAR processing algorithm | Time-domain convolution (stream-based) | Range-Doppler algorithm (frame-based) |
| Real-time motion error compensation (trajectory) | No | Yes, 1st- and 2nd-order MOCO |
| Clutter-lock\* | Line-by-line | Frame-by-frame |
| Pre-filtering | Yes | Yes |
| Azimuth resolution | 3.0 m | 2.0 m |
| Number of looks (in real time) | 1 | 1 – 15 |
| Ground mapping of SAR images | Post-processing | In real time |
| **Data recording** | | |
| Raw data | Range-compressed, 7-times decimated | Uncompressed, no decimation |
| Recorded raw data rate | 12 MB/s | 80 MB/s |
| Pre-filtered data, navigation data, SAR images, etc. | Yes | Yes |
| **Other capabilities** | | |
| Detection and indication of moving targets | Yes | No |

\* Estimation of the antenna beam orientation angles from the backscattered radar data and updating the SAR reference functions.

Table 2. Characteristics of the SAR data processing systems.

The radar operates in the X-band. The transmitter is based on a modern solid-state power amplifier (SSPA). The peak transmitted power is 120 W. The radar transmits long pulses with linear frequency modulation; a direct digital synthesizer (DDS) provides the frequency sweeping. The pulse duration can be chosen from 5 to 16 µs. The transmitted pulse bandwidth is 100 MHz, which gives a range resolution of 2 m. The pulse repetition frequency is from 3 kHz to 5 kHz, which guarantees unambiguous data sampling in azimuth.
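These figures can be cross-checked: c/(2B) with B = 100 MHz gives the 1.5-m range sampling interval listed in Table 2, while the quoted 2-m resolution is consistent with sidelobe-suppression weighting; the broadening factor, velocity, and wavelength below are illustrative assumptions, not values from the text:

```python
import math

C = 3.0e8                   # speed of light [m/s]
bandwidth = 100e6           # transmitted pulse bandwidth [Hz]

range_sampling = C / (2 * bandwidth)
print(range_sampling)       # 1.5 m, matching the 1.5-m sampling interval

# The quoted 2-m resolution corresponds to the c/(2B) limit broadened by
# amplitude weighting applied to suppress range sidelobes (~1.3 assumed).
print(round(1.3 * range_sampling, 2))  # 1.95

# Azimuth ambiguity check: the PRF must exceed the Doppler bandwidth
# 2*v*theta/lambda of the 10-degree beam (illustrative v and lambda).
v, wavelength = 80.0, 0.031            # worst-case speed [m/s], X-band [m]
doppler_bw = 2 * v * math.radians(10.0) / wavelength
print(round(doppler_bw))    # ~901 Hz, well below the 3-5 kHz PRF
```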

A digital receiver technique has been implemented. The noise figure of the receiver is 2 dB, and the system losses are 1.5 dB. A single 200-MHz, 14-bit ADC is used.

The radar uses a compact slotted-waveguide antenna with a 10-degree beam. The wide beam is used, first, to avoid radiometric errors during the formation of SAR images in real time and, second, to enable building high-quality SAR images with a large number of looks at the post-processing stage. The antenna is firmly mounted on the aircraft; however, it can be installed either in a side-looking or in a 40-degree-squinted position.

The SAR system is designed to be operated from light-weight aircraft. During test flights, the SAR system was successfully deployed on an AN-2 aircraft. The flight altitude can be from 1000 m to 5000 m, and the flight velocity is expected to be from 30 m/s to 80 m/s. The implemented SAR processing algorithms can operate beyond these intervals of flight parameters with minor adjustments.

#### **7.2.2 Signal processing solutions**

A strip-map SAR processing is performed by using a frame-based range-Doppler algorithm with motion compensation, as described in Section 6. The SAR system is capable of producing SAR images with a 2-meter resolution formed of up to 15 looks in real time. A scheme with half-overlapped frames is implemented to provide continuous surveillance of the strip without gaps despite possible motion instabilities.
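The half-overlapped frame scheme amounts to starting each new frame at the midpoint of the previous one; a minimal sketch:

```python
def frame_starts(n_lines, frame_len):
    """Start indices of half-overlapped processing frames: each frame
    begins at the midpoint of the previous one, so every azimuth line is
    covered by two frames and no gap opens if one frame is degraded."""
    step = frame_len // 2
    return list(range(0, max(n_lines - frame_len, 0) + 1, step))

print(frame_starts(10, 4))  # [0, 2, 4, 6]
```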

The SAR navigation system is based on a simple GPS receiver capable of measuring the aircraft position and the aircraft velocity vector. The measured position is used to link the obtained SAR images to ground maps and to determine the flight altitude above the ground. The aircraft flight trajectory is integrated from the measured aircraft velocity with an accuracy sufficient to perform the motion compensation. The antenna beam orientation is estimated from the Doppler frequencies of the backscattered radar signals. The pitch and yaw antenna orientation angles are used both for motion compensation and for the aperture synthesis. Such angle estimation is a kind of clutter-lock processing that allows variations of the antenna beam orientation to be tracked by adjusting the SAR data processing algorithm from one radar data frame to another.
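The trajectory integration step can be sketched as a simple trapezoidal integration of the GPS velocity samples (the actual filtering applied in the system is not detailed in the text):

```python
import numpy as np

def integrate_trajectory(velocities, dt, p0=np.zeros(3)):
    """Integrate GPS velocity samples (trapezoidal rule) into a flight
    trajectory usable for motion compensation; velocities: (n, 3) [m/s]."""
    v = np.asarray(velocities, dtype=float)
    steps = 0.5 * (v[1:] + v[:-1]) * dt        # displacement per interval
    return np.vstack([p0, p0 + np.cumsum(steps, axis=0)])

# Constant 50 m/s along-track velocity, 4 samples at 1 Hz
traj = integrate_trajectory(np.tile([50.0, 0.0, 0.0], (4, 1)), dt=1.0)
print(traj[:, 0])  # along-track positions 0, 50, 100, 150 m
```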

The signal processing system is divided into two main parts. The first part performs: 1) range compression of LFM pulses combined with the 1st-order motion compensation, 2) calculation of Doppler centroid values for each range gate (by FFT in azimuth) and estimation of the antenna orientation angles, and 3) pre-filtering of the range-compressed data. This processing is performed on a special PCI board with a DSP and an FPGA.

The second part of the data processing system forms multi-look SAR images by using a range-Doppler algorithm with the 2nd-order motion compensation. This processing is performed on a PC with an Intel Quad Core CPU (the above-mentioned PCI board is installed in this PC). This gives flexibility in setting the azimuth processing parameters and allows using the developed SAR system as a convenient test-bed for new modifications of various frame-based SAR algorithms.

Stitching of the obtained SAR images into a continuous strip map can be performed on a client PC (or a notebook), while viewing the data in real time or offline.

The SAR system is capable of recording the original uncompressed radar data on solid-state drives organized in a RAID-0 array at the full pulse repetition rate of up to 5 kHz. These data are stored together with the navigation data (original GPS measurements, integrated trajectories, estimated orientation angles, motion compensation curves, etc.), as well as the pre-filtered range-compressed data and the SAR images formed in real time. The recorded data are used further in our research and development activity on SAR systems.
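The 80 MB/s raw-data rate quoted in Table 2 is consistent with one plausible decomposition (the exact sample format is an assumption, not stated in the text):

```python
# Assumed format: complex baseband samples, 16-bit I + 16-bit Q each
prf = 5000            # pulses per second (upper PRF limit)
range_gates = 4096    # raw range gates per pulse (from Table 2)
bytes_per_sample = 4  # 2 x 16-bit

rate = prf * range_gates * bytes_per_sample
print(rate / 1e6)     # 81.92 -> ~80 MB/s
```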

## **8. Conclusion**

The presented results indicate that some of the essential problems that have limited the development of SAR systems for small aircraft are solved. In particular, the problem of evaluating the antenna beam orientation has been solved by extracting this information from the Doppler shift of the radar echoes. This technique makes it possible to use only a simple GPS receiver to provide reliable SAR operation. Simultaneously, the problem of correcting the geometric distortions in SAR images has been solved via the introduction of a signal processing algorithm that points the multi-look SAR beams exactly to the nodes of a rectangular grid on the ground plane. The proposed multi-look processing algorithm with an extended number of looks has demonstrated high efficiency for the correction of radiometric errors. The suggested approaches have been successfully implemented in and tested with Ku- and X-band SAR systems deployed on small aircraft. It should be pointed out that these solutions are useful for SAR systems deployed on other platforms as well.

## **9. Acknowledgment**


The authors would like to thank all of their colleagues at the Department of Microwave Electronics, Institute of Radio Astronomy of the National Academy of Sciences of Ukraine for their help and fruitful discussions. In particular, we are indebted to Dr. V. V. Vynogradov, Dr. V. A. Volkov, Dr. S. V. Sosnytskiy, Mr. R. V. Kozhyn, Mr. S. S. Sekretarov, Mr. A. Kravtsov, Mr. A. Suvid, and Mr. I. Gorovyi for their essential contributions to the development of practical SAR systems.

## **10. References**


Bamler, R. & Hartl, P. (1998). Synthetic aperture radar interferometry. *Inverse Problems*, Vol. 14, pp. R1-R54.

Bezvesilniy, O. O., Dukhopelnykova, I. V., Vynogradov, V. V. & Vavriv, D. M. (2006). Retrieving 3D relief from radar returns with single-antenna, strip-map airborne SAR. *Proceedings of the 6th European Conference on Synthetic Aperture Radar (EUSAR2006)*, 16-18 May 2006, Dresden, Germany, pp. 1-4 (CD-ROM Proceedings).

Bezvesilniy, O. O., Dukhopelnykova, I. V., Vynogradov, V. V. & Vavriv, D. M. (2007). Retrieving 3-D topography by using a single-antenna squint-mode airborne SAR. *IEEE Transactions on Geoscience and Remote Sensing*, Vol. 45, No. 11, pp. 3574-3582.

Bezvesilniy, O. O., Vynogradov, V. V. & Vavriv, D. M. (2008). High-accuracy Doppler measurements for airborne SAR applications. *Proceedings of the 5th European Radar Conference (EuRAD2008)*, 30-31 Oct. 2008, Amsterdam, The Netherlands, pp. 29-32.

Bezvesilniy, O. O., Gorovyi, I. M., Sosnytskiy, S. V., Vynogradov, V. V. & Vavriv, D. M. (2010a). Multi-look stripmap SAR processing algorithm with built-in correction of geometric distortions. *Proceedings of the 8th European Conference on Synthetic Aperture Radar (EUSAR2010)*, 7-10 June 2010, Aachen, Germany, pp. 712-715.

Bezvesilniy, O. O., Gorovyi, I. M., Sosnytskiy, S. V., Vynogradov, V. V. & Vavriv, D. M. (2010b). Multi-look SAR processing with built-in geometric correction. *Proceedings of the 11th International Radar Symposium (IRS-2010)*, 16-18 June 2010, Vilnius, Lithuania, Vol. 1, pp. 30-33.

Bezvesilniy, O. O., Gorovyi, I. M., Vynogradov, V. V. & Vavriv, D. M. (2010c). Correction of radiometric errors by multi-look processing with extended number of looks. *Proceedings of the 11th International Radar Symposium (IRS-2010)*, 16-18 June 2010, Vilnius, Lithuania, Vol. 1, pp. 26-29.

Bezvesilniy, O. O., Gorovyi, I. M., Sosnytskiy, S. V., Vynogradov, V. V. & Vavriv, D. M. (2010d). Improving SAR images: Built-in geometric and multi-look radiometric corrections. *Proceedings of the 7th European Radar Conference (EuRAD2010)*, 30 September - 1 October 2010, Paris, France, pp. 256-259.

Bezvesilniy, O. O., Gorovyi, I. M., Sosnytskiy, S. V., Vynogradov, V. V. & Vavriv, D. M. (2011a). SAR processing algorithm with built-in geometric correction. *Radio Physics and Radio Astronomy*, Vol. 16, No. 1, pp. 98-108.

Bezvesilniy, O. O., Gorovyi, I. M., Vynogradov, V. V. & Vavriv, D. M. (2011b). Multi-look radiometric correction of SAR images. *Radio Physics and Radio Astronomy*, Vol. 16, No. 4, pp. ???-??? (Accepted for publication).

Bezvesilniy, O. O., Gorovyi, I. M., Vynogradov, V. V. & Vavriv, D. M. (2011c). Range-Doppler algorithm with extended number of looks. *Proceedings of the 2011 Microwaves, Radar and Remote Sensing Symposium (MRRS-2011)*, 25-27 August 2011, Kiev, Ukraine, pp. 203-206.

Blacknell, D., Freeman, A., Quegan, S., Ward, I. A., Finley, I. P., Oliver, C. J., White, R. G. & Wood, J. W. (1989). Geometric accuracy in airborne SAR images. *IEEE Transactions on Aerospace and Electronic Systems*, Vol. 25, No. 2, pp. 241-258.

Buckreuss, S. (1991). Motion errors in an airborne synthetic aperture radar system. *European Transactions on Telecommunications*, Vol. 2, No. 6, pp. 655-664.

Carrara, W. G., Goodman, R. S. & Majewski, R. M. (1995). *Spotlight Synthetic Aperture Radar: Signal Processing Algorithms*, Artech House, ISBN 0-89006-728-7.

Cumming, I. G. & Wong, F. H. (2005). *Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation*, Artech House, ISBN 1-58053-058-3.

Franceschetti, G. & Lanari, R. (1999). *Synthetic Aperture Radar Processing*, CRC Press, ISBN 0-8493-7899-0.

Li, F.-K., Held, D. N., Curlander, J. C. & Wu, C. (1985). Doppler parameter estimation for spaceborne synthetic-aperture radars. *IEEE Transactions on Geoscience and Remote Sensing*, Vol. 23, No. 1, pp. 47-56.

Madsen, S. N. (1989). Estimating the Doppler centroid of SAR data. *IEEE Transactions on Aerospace and Electronic Systems*, Vol. 25, No. 2, pp. 134-140.

Moreira, A. (1991). Improved multilook techniques applied to SAR and SCANSAR imagery. *IEEE Transactions on Geoscience and Remote Sensing*, Vol. 29, No. 4, pp. 529-534.

Oliver, C. J. & Quegan, S. (1998). *Understanding Synthetic Aperture Radar Images*, Artech House, ISBN 0-89006-850-X.

Rosen, P. A., Hensley, S., Joughin, I. R., Li, F.-K., Madsen, S. N., Rodriguez, E. & Goldstein, R. M. (2000). Synthetic aperture radar interferometry. *Proceedings of the IEEE*, Vol. 88, No. 3, pp. 333-382.

Vavriv, D. M., Vynogradov, V. V., Volkov, V. A., Kozhyn, R. V., Bezvesilniy, O. O., Alekseenkov, S. V., Shevchenko, A. V., Belikov, A., Vasilevsky, M. P. & Zaikin, D. I. (2006). Cost-effective airborne SAR. *Radio Physics and Radio Astronomy*, Vol. 11, No. 3, pp. 276-297.

Vavriv, D. M. & Bezvesilniy, O. O. (2011a). Developing SAR for small aircrafts in Ukraine. *Proceedings of the 2011 IEEE MTT-S International Microwave Symposium (IMS 2011)*, 5-10 June 2011, Baltimore, USA, pp. 1-4 (CD-ROM Proceedings).

Vavriv, D. M. & Bezvesilniy, O. O. (2011b). Potential of multi-look SAR processing. *Proceedings of the 5th International Conference on Recent Advances in Space Technologies (RAST 2011)*, 9-11 June 2011, Istanbul, Turkey, pp. 365-369.

Vavriv, D. M., Bezvesilniy, O. O., Kozhyn, R. V., Vynogradov, V. V., Volkov, V. A. & Sekretarov, S. S. (2011). SAR systems for light-weight aircrafts. *Proceedings of the 2011 Microwaves, Radar and Remote Sensing Symposium (MRRS-2011)*, 25-27 August 2011, Kiev, Ukraine, pp. 15-19.

Wehner, D. R. (1995). *High-Resolution Radar (2nd Ed.)*, Artech House, ISBN 0-89006-727-9.

## **Avionics Design for a Sub-Scale Fault-Tolerant Flight Control Test-Bed**

Yu Gu1, Jason Gross2, Francis Barchesky3, Haiyang Chao4 and Marcello Napolitano5
*West Virginia University, USA*

### **1. Introduction**

The increasingly widespread use of Unmanned Aerial Vehicles (UAVs) has provided researchers with platforms for several different applications:

1. For carrying remote sensing or other scientific payloads. Highly publicized examples of such applications include the forest fire detection effort jointly conducted by NASA Ames research centre and the US Forest Service (Ambrosia et al., 2004), and the mission into the eye of hurricane Ophelia by an Aerosonde® UAV (Cione et al., 2008);
2. For evaluating different sensing and decision-making strategies as an autonomous vehicle. For example, an obstacle and terrain avoidance experiment was performed at Brigham Young University to navigate a small UAV in the Goshen canyon (Griffiths et al., 2006); an autonomous formation flight experiment was performed at West Virginia University (WVU) with three turbine-powered UAVs (Gu et al., 2009);
3. As a sub-scale test bed to help solve known or potential issues facing full-scale manned aircraft. For example, a series of flight test experiments were performed at Rockwell Collins (Jourdan et al., 2010) with a sub-scale F-18 aircraft to control and recover the aircraft after wing damage. Another example is the X-48B blended wing body aircraft (Liebeck, 2004) jointly developed by Boeing and NASA to investigate new design concepts for future-generation transport aircraft.

Each of these applications poses different requirements on the design of the on-board avionics package. For example, remote sensing platforms are often tele-operated by a ground pilot or controlled with a Commercial-off-the-Shelf (COTS) or open-source autopilot (Chao et al., 2009). UAVs for sensing and decision-making research often require a higher level of customization of the avionic system. This can be achieved through either

<sup>1</sup> Research Assistant Professor, Mechanical and Aerospace Engineering (MAE) Department, West Virginia University (WVU), Morgantown, WV 26506, Email: Yu.Gu@mail.wvu.edu;

<sup>2</sup> Ph.D., MAE Dept., WVU, now at Jet Propulsion Laboratory, Pasadena, CA, Email: Jason.Gross@jpl.nasa.gov;

<sup>3</sup> M.S. Student, MAE Dept., WVU, Email: fjbarchesky@gmail.com;

<sup>4</sup> Post-Doctoral Research Fellow, MAE Dept, WVU, Email: Haiyang.Chao@mail.wvu.edu;

<sup>5</sup> Professor, MAE Dept., WVU, Email: Marcello.Napolitano@mail.wvu.edu.

Avionics Design for a Sub-Scale Fault-Tolerant Flight Control Test-Bed 501

to many factors, such as weather conditions (e.g. icing or turbulence), pilot errors (e.g. disorientation or mis-judgment), air and ground traffic management errors, and a variety of sub-system failures (e.g. sensor, actuator, or propulsion system failures). Furthermore, the introduction of new technologies in aviation systems poses new threats to the safe operation of an aircraft. For example, modern fly-by-wire flight control systems are known to introduce new failure modes due to their dependence on computers and avionics (Yeh, 1998). Increased automation and flight deck complexity could also potentially degrade situational awareness, and require increased and highly aircraft-specific pilot training. These factors could potentially create new failure scenarios that have not yet been recognized as

Due to the complexity of aviation accidents, a multi-functional avionics design is needed to support the fault-tolerant flight control research. The most important requirements for such a design include maintaining accurate and timely measurements of aircraft states, having the ability to emulate various aircraft upset or failure conditions, and providing a flexible interface between humans and automatic control systems. A breakdown of more specific avionics requirements for several aviation safety related research topics is summarized in

**Research Topic Specific Avionics Requirements** 

Table 1. Design requirements for typical fault-tolerant flight control research topics.

A fundamental difference between operating a sub-scale and a full-scale aircraft is the absence of humans on-board. The removal of the physical presence of human pilots allows the testing of high-risk flight conditions and reduces the cost of the experiment. However, pilots are integral components of modern aviation systems and contributed to 29% of "*fatal accidents involving commercial aircraft, world-wide, from 1950 thru 2009 for which a specific cause is known*" (Planecrashinfo.com, 2011). Pilots are also the ultimate decision-makers on-board; therefore, the evaluations of their response under adverse situations and the detailed

High quality sensor measurements; adequate update rate; monitoring of pilot activities; precise timealignment of all measured channels.

Ability to automatically apply pre-specified waveform inputs to control effectors.

Ability to inject and remove simulated aircraft subsystem failures, such as failures in a particular sensor, actuator, or propulsion unit, or in the control command transmission link.

Ability to command and reconfigure individual aircraft control effectors; having low system latency and abundant computational resources.

Ability to augment the pilot command with automatic control algorithms.

causes of accidents.

Table 1.

**2.1 Research requirements** 

Aircraft modelling with manually injected manoeuvres

Aircraft modelling with an On-Board Excitation System (OBES)

Failure emulation

Fault-tolerant flight control (automatic)

Fault-tolerant flight control (pilot-in-the-loop)

**2.2 Operational scenarios** 

augmenting a COTS autopilot with a dedicated payload computer (Miller et al., 2005), or by having an entirely specialized avionics design (Evans et al., 2001). An alternative approach for smaller UAVs is to instrument an indoor testing environment (How et al., 2008) for measuring aircraft states so that a less complex avionic system could be used on-board the aircraft.

The avionic systems for sub-scale aircraft aimed at improving the safety of full-scale manned aircraft have a different set of design requirements. In addition to providing the standard measurement and control functions, the avionic system also needs to enable the simulation of different aircraft upset or failure conditions. Two general approaches have been used by different research groups. The first approach is to develop a highly realistic experimental environment in simulating a full-scale aircraft operation. For example, the Airborne Subscale Transport Aircraft Research Test bed (AirSTAR) program at NASA Langley research centre uses dynamically scaled airframe equipped with customized avionics for aviation safety research (Jordan et al., 2006) (Murch, 2008). During the research portion of the flight, the aircraft is controlled by a ground research pilot augmented by control algorithms running at a mobile ground station. An alternative approach is to develop a low-cost and expansible aircraft/avionic system for evaluating high-risk flight conditions (Christophersen et al., 2004).

Sub-scale aircraft have played critical complimentary roles to full-scale flight testing programs due to lower risks, costs, and turn-around time. The objective of this chapter is to discuss the specific avionics design requirements for supporting these experiments, and to share the design experience and lessons learned at WVU over the last decade of flight testing research. Specifically, in this chapter, detailed information for a WVU Generation-V (Gen-V) avionic system design is presented, which is based on an innovative approach for integrating both human and autonomous decision-making capabilities. Due to the high risk and uncertain nature of experiments that explore adverse flight conditions, the avionics itself is designed to reduce the risk of a Single Point of Failure (SPOF). This makes it possible to achieve a reliable operation and seamless flight mode switching. The Gen-V avionics design builds upon several earlier generations of WVU avionics that supported a variety of research topics such as aircraft Parameter Identification (PID) (Phillips et al., 2010), formation flight control (Gu et al., 2009), fault-tolerant flight control (Perhinschi et al., 2005), and sensor fusion (Gross et al., 2011).

The rest of the chapter is organized as follows. Section 2 introduces the general design requirements for avionic systems used in fault-tolerant flight control research. Section 3 discusses the overall hardware design architecture and main sub-systems. Section 4 presents the control command signal distribution logic that enables the flexible and reliable transition among different flight modes. Section 5 presents the aircraft on-board software architecture and the real-time Global Positioning System/Inertial Navigation System (GPS/INS) sensor fusion algorithm. Ground and flight testing procedures and results for validating avionics functionalities are discussed in Section 6, and finally, Section 7 concludes the chapter.

## **2. Avionics design requirements for fault-tolerant flight control research**

Fault-tolerant flight control research poses special challenges for avionics design due to the complex nature of aviation accidents. The occurrence of aviation accidents can be attributed to many factors, such as weather conditions (e.g. icing or turbulence), pilot errors (e.g. disorientation or misjudgment), air and ground traffic management errors, and a variety of sub-system failures (e.g. sensor, actuator, or propulsion system failures). Furthermore, the introduction of new technologies in aviation systems poses new threats to the safe operation of an aircraft. For example, modern fly-by-wire flight control systems are known to introduce new failure modes due to their dependence on computers and avionics (Yeh, 1998). Increased automation and flight deck complexity could also potentially degrade situational awareness, and require increased and highly aircraft-specific pilot training. These factors could potentially create new failure scenarios that have not yet been recognized as causes of accidents.

## **2.1 Research requirements**

500 Recent Advances in Aircraft Technology

augmenting a COTS autopilot with a dedicated payload computer (Miller et al., 2005), or by having an entirely specialized avionics design (Evans et al., 2001). An alternative approach for smaller UAVs is to instrument an indoor testing environment (How et al., 2008) for measuring aircraft states so that a less complex avionic system can be used on-board the aircraft.

The avionic systems for sub-scale aircraft aimed at improving the safety of full-scale manned aircraft have a different set of design requirements. In addition to providing the standard measurement and control functions, the avionic system also needs to enable the simulation of different aircraft upset or failure conditions. Two general approaches have been used by different research groups. The first approach is to develop a highly realistic experimental environment for simulating full-scale aircraft operation. For example, the Airborne Subscale Transport Aircraft Research Testbed (AirSTAR) program at NASA Langley Research Center uses a dynamically scaled airframe equipped with customized avionics for aviation safety research (Jordan et al., 2006) (Murch, 2008). During the research portion of the flight, the aircraft is controlled by a ground research pilot augmented by control algorithms running at a mobile ground station. An alternative approach is to develop a low-cost and expansible aircraft/avionic system for evaluating high-risk flight conditions (Christophersen et al., 2004).

Due to the complexity of aviation accidents, a multi-functional avionics design is needed to support the fault-tolerant flight control research. The most important requirements for such a design include maintaining accurate and timely measurements of aircraft states, having the ability to emulate various aircraft upset or failure conditions, and providing a flexible interface between humans and automatic control systems. A breakdown of more specific avionics requirements for several aviation safety related research topics is summarized in Table 1.


Table 1. Design requirements for typical fault-tolerant flight control research topics.

#### **2.2 Operational scenarios**

A fundamental difference between operating a sub-scale and a full-scale aircraft is the absence of humans on-board. The removal of the physical presence of human pilots allows the testing of high-risk flight conditions and reduces the cost of the experiment. However, pilots are integral components of modern aviation systems and contributed to 29% of "*fatal accidents involving commercial aircraft, world-wide, from 1950 thru 2009 for which a specific cause is known*" (Planecrashinfo.com, 2011). Pilots are also the ultimate decision-makers on-board; therefore, the evaluation of their responses under adverse situations and the detailed understanding of their interaction with the rest of the flight control system play crucial roles in improving aviation safety (NRC, 1997). From this point of view, a realistic fault-tolerant flight testing program should not only take advantage of the low-cost and low-risk features of the sub-scale aircraft, but also provide a highly relevant operational environment for human pilots. Figure 1 illustrates two potential sub-scale flight testing scenarios for different research topics.

Avionics Design for a Sub-Scale Fault-Tolerant Flight Control Test-Bed 503


Fig. 1. Two sub-scale aircraft operational scenarios.

Scenario #1 can be used for modelling the aircraft dynamics under different flight conditions and to evaluate automatic Guidance, Navigation, and Control (GNC) algorithms. Within this scenario, a Remote Control (R/C) pilot either directly controls the test bed aircraft or serves as a safety monitor to the on-board flight control system during the test. This scenario provides a simple but reliable method for operating a research aircraft.

Scenario #2 expands upon the first scenario by adding a Ground Control Station (GCS), a research pilot, and a flight engineer. The GCS provides a simulated cockpit for the research pilot, who controls the aircraft based on the transmitted flight data and video. This configuration gives the research pilot a first-person perspective and enables a fully instrumented flight operation. The role of the flight engineer is to control the configuration of the aircraft by adjusting controller modes/parameters or injecting/removing different failure scenarios during the flight. The R/C safety pilot monitors the flight and takes over the aircraft control under emergency situations or during non-research portions of the experiment. Scenario #2 provides additional capabilities for studying the pilot's role in a flight.

## **2.3 Operational modes**


To support the previously described research topics and the two operational scenarios, the following operational modes are typically required:

1. *Manual Mode I – Direct Vision*. An R/C safety pilot has full authority on all control channels in the basic stick-to-surface format. The pilot should always have the option of switching to this mode instantaneously under any conditions as long as the R/C link is available. This mode can be used for aircraft manual take-off and landing, manual PID manoeuvre injection, as well as emergency recovery from other operational modes;
2. *Manual Mode II – Virtual Flight Display*. A research pilot inside the ground control station has full authority on all control channels;
3. *Fully Autonomous Mode.* The on-board flight control system has full control of the aircraft, while the R/C pilot only serves as an observer and safety backup;
4. *Partially Autonomous Mode.* A subset of the flight control channels is under autonomous control while the other channels are still operated by the ground pilot;
5. *Pilot-In-The-Loop Mode.* The pilot command is supplied as input to a Stability Augmentation System (SAS) or a Control Augmentation System (CAS). This mode allows for studying the interaction between a human pilot and the automatic control system;
6. *Failure Emulation Mode*. A simulated failure condition is induced by the on-board computer on one or multiple control channels, while the remaining channels can be under manual, autonomous, or pilot-in-the-loop control;
7. *Fail-Safe Modes*. In the event that the ground pilot cannot maintain manual control of the aircraft due to loss of the R/C link, the avionic system should exploit redundant communication links and on-board autonomy to help regain aircraft control or minimize the damage of a potential accident.
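The mode set above implies a simple arbitration rule: Manual Mode I must always be granted instantly while the R/C link is alive, and loss of the link forces a fail-safe mode. The sketch below is purely illustrative; the enum names and the `rc_link_ok` flag are assumptions made for the example, not the WVU flight software.

```python
from enum import Enum, auto

class FlightMode(Enum):
    MANUAL_DIRECT = auto()         # Mode 1: R/C safety pilot, stick-to-surface
    MANUAL_VIRTUAL = auto()        # Mode 2: research pilot via ground station
    FULLY_AUTONOMOUS = auto()      # Mode 3
    PARTIALLY_AUTONOMOUS = auto()  # Mode 4
    PILOT_IN_THE_LOOP = auto()     # Mode 5: SAS/CAS augmentation
    FAILURE_EMULATION = auto()     # Mode 6
    FAIL_SAFE = auto()             # Mode 7

def next_mode(requested: FlightMode, rc_link_ok: bool) -> FlightMode:
    """Mode arbitration sketch: a request for manual Mode I is always
    granted while the R/C link is alive; with the link lost the system
    falls back to fail-safe regardless of the request."""
    if not rc_link_ok:
        return FlightMode.FAIL_SAFE
    return requested
```

A real implementation would additionally validate which transitions are legal from the current mode; the point here is only that the manual and fail-safe paths take priority over any requested mode.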


#### **2.4 Hardware requirement**

Due to the high risk involved in testing various adverse flight conditions and the need for switching between multiple operational modes, the reliability requirements for the avionics hardware are significantly higher than those of a conventional autopilot system for a similar class of UAV. In other words, the avionic system needs to be fault-tolerant itself, and its design should minimize the risk of a SPOF condition. For example, redundant command and control links are needed in case the primary link is lost or subject to interference. Additionally, the safety pilot should be able to instantaneously switch back to the manual mode from any other operational mode, even in the event of a main computer shutdown or power loss.

Additional requirements for the avionics hardware design typically include low cost, low weight, low power consumption, low Electromagnetic Interference (EMI), configurability and expandability, and user-friendliness.

## **3. WVU avionics architecture and main sub-systems**

Based on the design requirements outlined in the previous section, a Gen-V avionic system is being developed for the WVU '*Phastball*' sub-scale research aircraft. The '*Phastball*' aircraft has a 2.2 meter wingspan and a 2.2 meter total length. The typical take-off weight is 10.5 kg with a 3.2 kg payload capacity. The aircraft is propelled by two brushless electric ducted fans; each can provide up to 30 N of static thrust. The use of electric propulsion systems simplifies the flight operations and reduces vibrations on the airframe. Additionally, the low time constant associated with an electric ducted fan allows it to be used directly as an actuator or for simulating the dynamics of a slower jet engine. The cruise speed of the '*Phastball*' aircraft is approximately 30 m/s. As a dedicated test-bed for fault-tolerant flight control research, the following nine channels can be independently controlled on the '*Phastball*' aircraft: left/right elevators, left/right ailerons, left/right engines, rudder, nose gear, and longitudinal thrust vectoring.


The avionic system features a flight computer, a nose sensor connection board, a control signal distribution board, a sensor suite, an R/C sub-system, a communication sub-system, a power sub-system, and a set of real-time software. It performs functions such as data acquisition, signal conditioning & distribution, GPS/INS sensor fusion, GNC, failure emulation, aircraft health monitoring, and failsafe functions. Figure 2 shows the '*Phastball*' aircraft along with the main avionics hardware components.

Fig. 2. '*Phastball*' aircraft and main avionics hardware components.

A detailed functional block diagram for the Gen-V avionics hardware design is provided in Figure 3. The functionality of each main sub-system is described in the following sections.

Fig. 3. Functional block diagram for the WVU Gen-V avionics hardware design.

#### **3.1 Flight computer**


The Gen-V flight computer integrates the functions of data acquisition, signal conditioning, GPS/INS sensor fusion, failure emulation, automatic control command generation, and control command distribution into a compact package. In terms of hardware, the following main components are included in the flight computer:

1. An Analog Devices® ADIS16405 Inertial Measurement Unit (IMU) that measures the aircraft 3-axis accelerations and 3-axis angular rates. Additionally, it provides readings of the magnetic field for potential use in the navigation filter for an improved aircraft attitude estimation;
2. A Novatel® OEMV-1 GPS receiver that provides aircraft position and velocity measurements. It also provides precision time information in the form of a Pulse Per Second (PPS) signal, which is used to synchronize measurements from different parts of the avionic system;
3. A Netburner® MOD5213 Embedded Micro-Processor (EMP) that provides lower-level interfaces for measuring human pilot commands in the Pulse-Position Modulation (PPM) format, generating on-board control commands in the Pulse-Width Modulation (PWM) format, collecting data from the IMU through a Serial Peripheral Interface (SPI), monitoring battery voltages, and communicating with a general-purpose computer. The EMP also monitors two important PWM signals from the R/C receiver: a *ctrl*-switch and a *kill*-switch. The state of the *ctrl*-switch determines whether the aircraft will be operating in the manual mode or one of the other modes. The *kill*-switch gives the pilot the option to power off the computer during flight if needed for achieving improved ground-control reliability during the safety-critical (such as landing) portion of the flight;
4. Two COTS PWM switches that provide independent monitoring of the critical *ctrl*-switch and *kill*-switch;
5. An 800 MHz PC-104+ form factor General-Purpose Computer (GPC) that hosts the aircraft on-board software. It also provides an additional 16 Analog to Digital Conversion (ADC) channels and 6 Serial Communication Interfaces (SCI) for communicating with the GPS receiver, the EMP, the nose board assembly, and the ground control station;
6. A logic network that distributes control commands from both human pilots and automatic control systems to individual actuators based on the selected operational mode of the avionic system;
7. A compact flash memory card storing the operating system, the on-board software, and the collected flight data;
8. A black-box data recorder that stores a real-time stream of sensory data, control commands, and the avionics health information during the flight.
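As an illustration of the EMP's PWM handling, a measured pulse width on an R/C channel is typically mapped to a normalized control command. The 1000–2000 µs calibration below is a common R/C convention assumed for the example, not a documented Gen-V parameter.

```python
def pwm_to_command(pulse_us: float,
                   min_us: float = 1000.0,
                   max_us: float = 2000.0) -> float:
    """Map a PWM pulse width to a normalized command in [-1, 1].
    On a typical R/C channel, 1000/1500/2000 us correspond to
    full-deflection one way / neutral / full-deflection the other way
    (assumed calibration)."""
    pulse_us = min(max(pulse_us, min_us), max_us)  # reject out-of-range pulses
    return 2.0 * (pulse_us - min_us) / (max_us - min_us) - 1.0
```

The clamp also acts as a crude glitch filter: a corrupted pulse outside the calibrated range saturates instead of producing an out-of-bounds command.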


A detailed description of the aircraft control command generation and distribution is provided in Section 4.

#### **3.2 Sensor suite**

In addition to the IMU and the GPS receiver embedded inside the Gen-V flight computer, the '*Phastball*' aircraft is also equipped with three P3America® MP1545A inductive potentiometers for measuring aircraft flow angles, two Sensor Technics® pressure sensors for measuring the dynamic and static pressures, a Measurement Specialties® HTM2500 temperature and relative humidity sensor, and an Opti-Logic® RS400 laser range finder. Additionally, the pilot input, engine operating parameters, and R/C receiver status are also recorded in flight. The aircraft attitude angles are provided by a real-time GPS/INS sensor fusion algorithm, which is described in Section 5.
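Given the dynamic pressure q reported by the pressure sensors, airspeed follows from q = ½ρV². A minimal sketch, assuming incompressible flow and sea-level standard air density (the chapter does not state how the Gen-V software performs this conversion):

```python
import math

def airspeed_from_q(q_pa: float, rho: float = 1.225) -> float:
    """Airspeed in m/s from dynamic pressure in Pa:
    q = 0.5 * rho * V**2  =>  V = sqrt(2 * q / rho).
    rho defaults to sea-level standard density in kg/m^3."""
    return math.sqrt(max(2.0 * q_pa / rho, 0.0))
```

At the '*Phastball*' cruise speed of about 30 m/s, the expected dynamic pressure is roughly 0.5 · 1.225 · 30² ≈ 551 Pa, which is a useful sanity check on the sensor calibration.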

#### **3.3 Power system**

To reduce the risk of a SPOF, the arrangement of battery power has been carefully determined. A total of five battery packs are used to power different components of the avionic system. Specifically, R/C battery-A powers R/C receiver-A and the logic network for control command distribution; R/C battery-B powers receiver-B; the computer battery powers the EMP, the GPC, and all sensing, communication, and data storage devices; engine/servo batteries L and R independently power the left- and right-side engines and R/C servos. With this configuration, the failure of any given battery would not cause a total loss of aircraft control during the flight. Specifically, if the EMP detects that the receiver-A battery is low, it activates a relay to tie R/C batteries A and B together so that there is enough power for a safe landing. If the computer battery loses its power, the logic network powered by the receiver-A battery automatically switches to the manual mode and gives the R/C pilot full control authority. If one of the engine/servo batteries fails, the pilot still has independent control of half of the aircraft actuators (propulsion and control surfaces) and would be able to perform a controlled landing.
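The battery-failover behaviour above reduces to a small decision function. The voltage threshold and action names below are illustrative assumptions for the sketch, not Gen-V values:

```python
def power_actions(rx_a_volts: float,
                  computer_volts: float,
                  v_low: float = 4.6) -> set:
    """Sketch of the battery-failover decisions: returns the set of
    protective actions to take. Threshold v_low is a placeholder."""
    actions = set()
    if rx_a_volts < v_low:
        # EMP closes a relay so R/C batteries A and B share the load
        actions.add("tie_rc_batteries")
    if computer_volts < v_low:
        # logic network reverts to manual mode under receiver-A power
        actions.add("force_manual_mode")
    return actions
```

The key design property mirrored here is that each monitored supply triggers its own independent mitigation, so no single battery failure removes all control paths.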

## **3.4 Ground control station**



The GCS computer collects the aircraft downlink telemetry data, the nose camera video, the weather information, the GPS time/position measurements, the voice communication, as well as inputs from the R/C pilot, the research pilot, and the flight engineer. The data stream is accessible by the research pilot, the flight engineer, and field researchers in near real-time through a local network. The research pilot station provides three displays: an X-Plane® based synthetic-vision primary flight display overlaid with a Heads-Up Display (HUD) that shows the flight parameters and mission constraints, a flight instrumentation display with a navigation window, and a screen showing the real-time flight video transmitted from the aircraft nose camera. The research pilot flies the aircraft through a joystick, rudder pedals, and throttle handles. The flight engineer has access to all available flight data and can change the aircraft operational mode or inject/remove failures with or without notifying the research pilot. Figure 4 shows the layout of the GCS vehicle.

Fig. 4. The exterior (left) and interior (right) of the ground control station vehicle.

Duplex communication between the ground control station and the test bed aircraft is provided by a pair of 900 MHz Freewave® Radio Frequency (RF) modems. The downlink communication packet contains information about aircraft states and avionics health conditions. The uplink packet integrates both the research pilot control commands and the flight engineer configuration commands. Both the uplink and downlink data are transmitted at a rate of 50 Hz.
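Such 50 Hz links typically carry fixed-format binary records. The actual Gen-V packet layout is not given in the chapter, so the fields below (a GPS time stamp, three attitude angles, and a health-flag word) are purely illustrative; the sketch only shows the pack/unpack pattern, using Python's `struct` module.

```python
import struct

# Hypothetical downlink record layout (little-endian, no padding):
# float64 GPS time [s], 3 x float32 roll/pitch/yaw [rad], uint16 health flags.
DOWNLINK_FMT = "<dfffH"  # 8 + 4 + 4 + 4 + 2 = 22 bytes per 50 Hz frame

def pack_downlink(t: float, roll: float, pitch: float, yaw: float,
                  flags: int) -> bytes:
    """Serialize one telemetry frame for the RF modem."""
    return struct.pack(DOWNLINK_FMT, t, roll, pitch, yaw, flags)

def unpack_downlink(buf: bytes):
    """Recover the frame fields on the ground-station side."""
    return struct.unpack(DOWNLINK_FMT, buf)
```

A fixed-size frame makes the 50 Hz bandwidth budget easy to verify: 22 bytes at 50 Hz is about 1.1 kB/s before framing overhead, well within a 900 MHz serial modem's capacity.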

## **4. Control command signal distribution**

One of the important features of the WVU Gen-V avionics design is its ability to provide a flexible and reliable interface between control commands generated by humans and automatic controllers. This capability is achieved through the interaction of different hardware components and software functions.


#### **4.2 Command signal distribution**

The command signal distribution system manages and distributes the R/C Pilot Control Command (PCC) provided by receiver-A and the on-board Software-generated Control Commands (SCC) to individual control actuators. Based on the operational mode, the SCC can be one of, or a combination of, the R/C pilot commands provided by receiver-B, the research pilot command, and commands from the FCS, FES, and OBES.

The *ctrl*-switch, which the R/C pilot can turn on/off at any given time during the operation, plays a central role in determining the operational mode of the system. Specifically, based on the measured receiver-B *ctrl*-switch signal, the EMP sends out a logic (high/low) signal indicating the status (on/off) of the *ctrl*-switch. This status indicator meets the output of PWM switch-2, which measures the receiver-A *ctrl*-switch signal, at an AND gate. The output of the AND gate, called the Confirmed Ctrl Switch Signal (CCSS), becomes logic high only if both input signals are high. This provides a cross-check that avoids accidental activation of the on-board control due to either an EMP or a PWM switch-2 failure. If both receiver-A and receiver-B are functioning in the normal mode, a low CCSS causes the logic network to feed the receiver-A pilot command directly to the control actuators, enabling pilot manual control. The CCSS can only be overridden in the situation where receiver-A is in the fail-safe mode. Under this condition, the avionic system is able to relay the receiver-B output to the actuators through the EMP and GPC even if the CCSS signal is low. To achieve this capability, a duplexer is used to switch between the CCSS and an EMP-provided receiver-A fail-safe indicator. The switching signal for the duplexer is generated by the GPC, which provides a second confirmation that receiver-A is in the fail-safe mode.
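The AND-gate cross-check and the fail-safe override described above can be condensed into two small truth functions. This is a software model of the logic only; the function names and boolean encoding are assumptions for illustration, not the Gen-V circuit.

```python
def ccss(emp_ctrl_on: bool, pwm2_ctrl_on: bool) -> bool:
    """Confirmed Ctrl Switch Signal: high only when both the EMP
    (reading receiver-B) and PWM switch-2 (reading receiver-A)
    report the ctrl-switch as on."""
    return emp_ctrl_on and pwm2_ctrl_on

def command_source(ccss_high: bool, rx_a_failsafe: bool) -> str:
    """Duplexer sketch: the pilot command (PCC) drives the actuators
    unless CCSS is high, or receiver-A has entered fail-safe, in which
    case the software command path (SCC) takes over."""
    if rx_a_failsafe:
        return "SCC"  # receiver-B command relayed through EMP/GPC
    return "SCC" if ccss_high else "PCC"
```

The point of the AND gate is visible in the truth table: a spurious high from either the EMP or PWM switch-2 alone cannot hand control to the on-board software.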

To further improve the flexibility of the avionic system and for enabling the partially autonomous mode, another level of logic is provided before the SCC reaches an actuator. Specifically, the GPC is sending out a set of 9-channel selection signals through digital output ports. These channel selection signals are then joined with CCSS at nine AND gates to independently control a 9-channel duplexer network with both SCC and PCC as inputs. Within this configuration, if CCSS is low, all channels will be under manual control. If CCSS is high, the on-board software controls any channel with a high channel selection signal

The configuration of channel selection signals is normally defined prior to flight based on mission requirements. They can also be modified by the GCS flight engineer during the operation through changing uplink communication packets. Additionally, if receiver-A goes into the fail-safe mode the GPC will activate all channel selection signals along with the failsafe indicator. This allows the pilot command registered from receiver-B to reach actuators,

The above-mentioned command signal distribution between PCC and SCC relies on a collaboration of both hardware and software functions. For generating SCC, the integration of commands from R/C pilot, research pilot, FCS, FES, and OBES are performed by the GPC software and are determined based on the specific flight mode. To help clarify the command signal distribution process, pseudo-codes for the EMP software and the command signal distribution portion of the GPC software are provided in Figures 5 and 6 respectively.

**4.2 Command signal distribution** 

pilot command, and commands from FCS, FES, and OBES.

provides a second confirmation that receiver-A is in the fail-safe mode.

with the rest channels being controlled by the R/C pilot.

maintaining the R/C pilot control.

#### **4.1 Control command generation**

Depending on mission requirements, the aircraft control command could come from several potential sources:

1. *The R/C pilot*. The R/C pilot commands are provided to the flight computer through two redundant R/C receivers (A & B). The antennas of the two receivers are installed at different locations on the aircraft to reduce the likelihood that both experience interference at the same time. Each receiver can operate in either a nominal mode, when maintaining good reception of the radio signal, or a fail-safe mode, when the communication with the radio transmitter is lost. In the fail-safe mode, each receiver channel output is either independently programmed to a pre-set value, or assigned to latch on to the last received value from the R/C transmitter.

For R/C receiver-A, 9-channel pilot control commands are sent directly to a duplexer network for later distribution to individual actuators. Two additional channels are used as the *ctrl*-switch and *kill*-switch. In order to provide information about both the operational mode and the R/C receiver status, the *ctrl*-switch is programmed to have three different output levels: a lower pulse width for '*ctrl*-switch off', a higher pulse width for '*ctrl*-switch on', and a median pulse width indicating that the receiver has entered the fail-safe mode. A pulse width indicating '*ctrl*-switch on' may trigger a fully autonomous, a partially autonomous, or a pilot-in-the-loop mode depending on additional hardware and software settings.

The output of receiver-B is first processed with a PPM encoder before being measured with an EMP General-Purpose Timer (GPT). This pilot input is then transmitted to the GPC to be used by the flight control system;


2. *The research pilot at GCS*. Commands from the research pilot are transmitted through a pair of RF modems to the flight computer. This signal can be used to control the aircraft directly or indirectly through a SAS or CAS controller;

3. *The Flight Control System (FCS)*. The FCS running inside the GPC generates the automatic control command based on sensor feedback as well as pilot commands provided through either receiver-B (for the R/C safety pilot) or the RF modems (for the research pilot);

4. *Failure Emulation Software (FES)*. A faulty actuator locked at a given deflection or a failed engine can both be simulated by sending a constant value to the selected control channel. A slower-responding engine can be simulated by inserting additional dynamics between the control command and the engine speed controller. A floating control surface can be simulated with the feedback from a local flow indicator. More complicated failure scenarios can also be introduced by exploring feedback from various sensors;

5. *On-Board Excitation System (OBES)*. The OBES provides specified waveforms to be applied to the aircraft control actuators. The OBES manoeuvre can be either stand-alone or superimposed onto the pilot or controller commands.

The R/C pilot command is in a PWM format recognizable by R/C servos and engine speed controllers. The commands from the research pilot, FCS, FES, and OBES are first integrated (selected or combined) within the on-board software before being converted into a set of PWM signals. Due to the existence of these two parallel streams of PWM commands, there are several layers of checking and signal distribution to ensure the reliability and the flexibility of the transition.

508 Recent Advances in Aircraft Technology

## **4.2 Command signal distribution**

The command signal distribution system manages and distributes the R/C Pilot Control Command (PCC) provided by receiver-A and the on-board Software-generated Control Commands (SCC) to individual control actuators. Based on the operational mode, the SCC can be one of, or a combination of, the R/C pilot commands provided by receiver-B, the research pilot command, and commands from the FCS, FES, and OBES.

The *ctrl*-switch, which the R/C pilot can turn on/off at any given time during the operation, plays a central role in determining the operational mode of the system. Specifically, based on the measured receiver-B *ctrl*-switch signal, the EMP sends out a logic (high/low) signal indicating the status (on/off) of the *ctrl*-switch. This status indicator meets the output of PWM switch-2, which measures the receiver-A *ctrl*-switch signal, at an AND gate. The output of the AND gate, called the Confirmed Ctrl Switch Signal (CCSS), becomes logic high only if both input signals are high. This provides a cross-check that prevents accidental activation of the on-board control due to either an EMP or a PWM switch-2 failure.
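Both the EMP and PWM switch-2 start from a measured *ctrl*-switch pulse width, which carries the three-level encoding described in Section 4.1. A threshold-based decoder can be sketched as below; note that the actual pulse-width levels used on the Gen-V system are not stated in the text, so the 1.0/1.5/2.0 ms values and the tolerance are placeholder assumptions.

```python
def decode_ctrl_switch(pulse_ms, tol=0.15):
    """Map a measured ctrl-switch pulse width (ms) to a receiver status.

    The 1.0/1.5/2.0 ms levels are hypothetical placeholders for the
    'off' / fail-safe / 'on' pulse widths described in the chapter.
    """
    if abs(pulse_ms - 1.0) <= tol:
        return "ctrl-switch off"   # lower pulse width
    if abs(pulse_ms - 1.5) <= tol:
        return "fail-safe"         # median pulse width: receiver-A fail-safe
    if abs(pulse_ms - 2.0) <= tol:
        return "ctrl-switch on"    # higher pulse width
    return "invalid"               # out-of-range measurement

print(decode_ctrl_switch(1.02))    # ctrl-switch off
print(decode_ctrl_switch(1.48))    # fail-safe
```

A reading near the median level flags that the receiver has entered its fail-safe mode, which the logic network described here can then use as a cross-check.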

If both receiver-A and B are functioning in the normal mode, a low CCSS initiates the logic network to feed the receiver-A pilot command directly to the control actuators, enabling the pilot's manual control. The CCSS can only be overridden when receiver-A is in the fail-safe mode. Under this condition, the avionic system is able to relay the receiver-B output to the actuators through the EMP and GPC even if the CCSS signal is low. To achieve this capability, a duplexer is used to switch between the CCSS and an EMP-provided receiver-A fail-safe indicator. The switching signal for the duplexer is generated by the GPC, which provides a second confirmation that receiver-A is in the fail-safe mode.

To further improve the flexibility of the avionic system and to enable the partially autonomous mode, another level of logic is provided before the SCC reaches an actuator. Specifically, the GPC sends out a set of 9 channel selection signals through digital output ports. These channel selection signals are then joined with the CCSS at nine AND gates to independently control a 9-channel duplexer network with both SCC and PCC as inputs. Within this configuration, if the CCSS is low, all channels will be under manual control. If the CCSS is high, the on-board software controls any channel with a high channel selection signal, with the remaining channels controlled by the R/C pilot.
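Per channel, the AND-gate/duplexer network described above reduces to a two-input multiplexer. A minimal software model follows; the list-based signals and function name are our illustration, not the actual hardware interface.

```python
def route_channels(pcc, scc, ccss, channel_select):
    """Model of the 9-channel AND-gate/duplexer network.

    A channel is driven by the software command (SCC) only when both the
    CCSS and that channel's selection signal are high; otherwise the R/C
    pilot command (PCC) passes through. Signals are modeled as lists.
    """
    return [scc[i] if (ccss and channel_select[i]) else pcc[i]
            for i in range(9)]

pcc = [1500] * 9                    # pilot commands (e.g. PWM counts)
scc = [1600] * 9                    # software-generated commands
sel = [1, 1, 1, 0, 0, 0, 0, 0, 0]   # GPC selects the first three channels
print(route_channels(pcc, scc, True, sel))
```

With the CCSS high, the first three channels follow the SCC and the rest remain under pilot control, which mirrors the partially autonomous mode.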

The configuration of the channel selection signals is normally defined prior to flight based on mission requirements. It can also be modified by the GCS flight engineer during the operation by changing uplink communication packets. Additionally, if receiver-A goes into the fail-safe mode, the GPC will activate all channel selection signals along with the fail-safe indicator. This allows the pilot command registered from receiver-B to reach the actuators, maintaining the R/C pilot's control.

The above-mentioned command signal distribution between PCC and SCC relies on a collaboration of both hardware and software functions. For generating the SCC, the integration of commands from the R/C pilot, research pilot, FCS, FES, and OBES is performed by the GPC software and is determined based on the specific flight mode. To help clarify the command signal distribution process, pseudo-codes for the EMP software and the command signal distribution portion of the GPC software are provided in Figures 5 and 6 respectively.
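The mode-selection core of the Figure 6 pseudo-code can also be written as a runnable sketch. The dictionary-based inputs, the return convention, and the omission of the 'OBES + Manual II' combination are our simplifications.

```python
def gpc_command_distribution(commands, flight_mode, ctrl_switch, fail_safe):
    """Simplified sketch of the Figure 6 mode-selection logic.

    `commands` is a hypothetical dict keyed by source: 'RCP' (R/C pilot via
    receiver-B), 'GCS' (research pilot), 'FCS', 'FES', 'OBES'. Returns
    (ctrl_command, channel_selection_data); CSD is None when the pre-flight
    channel-selection configuration is left unchanged.
    """
    csd = None
    if fail_safe:                    # receiver-A entered fail-safe mode
        flight_mode = 'Fail Safe'
        csd = 511                    # all 9 channel-selection bits high
    elif ctrl_switch == 'off':
        flight_mode = 'Manual I'

    source = {
        'Fail Safe': 'RCP',
        'Manual I': 'RCP',
        'Manual II': 'GCS',
        'Autonomous': 'FCS',
        'Pilot_in_the_loop': 'FCS',  # FCS takes the pilot input as reference
        'Failure Emulation': 'FES',
        'OBES': 'OBES',
    }[flight_mode]
    if flight_mode == 'Manual I':
        csd = 0                      # no channel under software control
    return commands[source], csd

cmds = {'RCP': 1500, 'GCS': 1520, 'FCS': 1540, 'FES': 1560, 'OBES': 1580}
print(gpc_command_distribution(cmds, 'Autonomous', 'on', False))  # (1540, None)
```

Note how the receiver-A fail-safe condition overrides any configured mode and forces all nine channels to the receiver-B pilot command, matching the hardware override described above.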

    function EMP_software
    while (1)                              // start an infinite loop
        read ADC;
        read IMU data;
        read receiver-B PPM signal;
        if (kill_switch = 'on')
            set kill_switch pin high;
        else
            set kill_switch pin low;
        read receiver-A ctrl-switch;
        if (ctrl_switch = 'fail-safe')
            set fail_safe pin high;
        else
            set fail_safe pin low;
        if (ctrl_switch = 'on')
            set ctrl_switch pin high;
        else
            set ctrl_switch pin low;
        if (Receiver_A_battery = 'low')
            tie receiver batteries A&B;
        send data packet to GPC;
        receive control command packet from GPC;
        generate PWM Signal;

Fig. 5. Pseudo-code for the embedded micro-processor software.

    function GPC_Command_Distribution (CC_RCP, CC_GCS, CC_FCS, CC_FES, CC_OBES, CSD,
                                       flight_mode, ctrl_switch, fail_safe)
        // CC - control command, RCP - R/C pilot, GCS - ground control station,
        // FCS - flight control system, FES - failure emulation software,
        // OBES - on-board excitation system, CSD - channel selection data
        if (fail_safe = 'on')              // indicating receiver-A fail safe
            set fail-safe pin high; flight_mode = 'Fail Safe';
            CSD = 511;                     // all 9 channels;
        else
            set fail-safe pin low;
            if (ctrl_switch = 'off')
                flight_mode = 'Manual I';
        switch (flight_mode)
            case 'Fail Safe'          ctrl_command = CC_RCP;
            case 'Manual I'           ctrl_command = CC_RCP; CSD = 0; // no channel;
            case 'Manual II'          ctrl_command = CC_GCS;
            case 'Autonomous'         ctrl_command = CC_FCS;
            case 'Pilot_in_the_loop'  ctrl_command = CC_FCS; // the FCS will have pilot input as input in this case.
            case 'Failure Emulation'  ctrl_command = CC_FES;
            case 'OBES'               ctrl_command = CC_OBES;
            case 'OBES + Manual II'   ctrl_command = CC_GCS + OBES;
            // additional operational modes are available through different
            // combinations of control commands.
        set channel selection digital I/O pins according to CSD;
        send control command packet to EMP;
        return flight_mode;

Fig. 6. Pseudo-code for the command signal distribution portion of the GPC software.

## **5. On-board software**

#### **5.1 Operating systems**

The use of a general-purpose computer within the avionics design facilitates the use of abundant COTS and open-source software products. The on-board Operating System (OS) for the GPC is the Linux kernel 2.6.9 patched with the Real-Time Application Interface (RTAI) 3.2. An RTAI target was implemented so that Simulink® schemes can be compiled into real-time executable files using the Matlab Real Time Workshop®. The auto-coding capability allows for rapid integration and testing of algorithms developed by independent researchers.

The NetBurner® MOD5213 EMP uses a μC/OS real-time operating system. The main functionality of the EMP software is outlined in Figure 5.

#### **5.2 GPC software**

The GPC software has a modular structure that is first implemented in Simulink® before being compiled into real-time executable files. Each module is either a combination of existing Simulink blocks or a custom S-function written in the C language. The modular structure allows for parallel development and debugging, quick and easy configuration for different mission requirements, and intuitive visual interpretation of the software. Additionally, without any modification, the same software module can first be simulated in the Simulink® environment before being tested in flight. The main modules of the GPC software and their connectivity are shown in Figure 7.

Fig. 7. GPC on-board software architecture. (Main modules, connected by the Main Data Bus, R/C Pilot Command, Command, and Data paths: Data Acquisition (A/D Conversion, Receive GPS Data, Receive EMP Data, Receive Nose Board Data, Receive GCS Command); Sensory Data Conditioning, Calibration, & Organization; GPS/INS Sensor Fusion; Flight Control (Outer-Loop Guidance Law, Inner Loop Control Law, On-Board Excitation, Failure Emulation, Command Signal Distribution, Control Channel Selection, Command Calibration); Send Command to EMP; Data Logging / Send to GCS.)

#### **5.3 GPS/INS sensor fusion**

A low-cost INS is regulated with measurements from a GPS receiver to provide navigation solutions to the avionic system. Including a real-time GPS/INS sensor fusion algorithm eliminates the need for heavier and more expensive navigation-grade inertial sensors on a small and low-cost research aircraft.

A 9-state Extended Kalman Filter (EKF) based GPS/INS sensor fusion algorithm is selected for the Gen-V avionics design after a comprehensive comparison study of different sensor fusion formulations and nonlinear filtering algorithms (Rhudy et al., 2011) (Gross et al., 2011). This solution provides a good balance between attitude estimation performance and computational requirements. Within this formulation, the state vector includes the aircraft 3-axis position (*x*, *y*, *z*) and velocity (*Vx*, *Vy*, *Vz*) defined in a Local Cartesian frame (*L*), and aircraft attitude represented by three Euler angles (*φ*, *θ*, *ψ*) defined in the aircraft Body-axis (*B*):

$$\mathbf{x} = \begin{bmatrix} x^L & y^L & z^L & V\_x^L & V\_y^L & V\_z^L & \phi^B & \theta^B & \psi^B \end{bmatrix}^T \tag{1}$$

During the state prediction stage, the inertial measurements, namely the three-axis accelerations $(\tilde{a}\_x^b, \tilde{a}\_y^b, \tilde{a}\_z^b)$ and the three-axis angular rates $(\tilde{p}^b, \tilde{q}^b, \tilde{r}^b)$, are integrated to provide an estimate of the state vector **x**. Each measurement (e.g. $\tilde{a}\_x^b = a\_x^b + v\_{ax}$) is a combination of the true parameter (e.g. $a\_x^b$) and a noise term (e.g. $v\_{ax}$). The noise is assumed to be zero-mean and normally distributed, with its variance approximated by statistical analyses of static ground tests.

The three position states are predicted through straightforward integration, represented in discrete time as:

$$
\begin{bmatrix} x\_{k|k-1}^{L} \\ y\_{k|k-1}^{L} \\ z\_{k|k-1}^{L} \end{bmatrix} = \begin{bmatrix} x\_{k-1|k-1}^{L} \\ y\_{k-1|k-1}^{L} \\ z\_{k-1|k-1}^{L} \end{bmatrix} + \begin{bmatrix} V\_{x\,k-1|k-1}^{L} \\ V\_{y\,k-1|k-1}^{L} \\ V\_{z\,k-1|k-1}^{L} \end{bmatrix} T\_{s} \tag{2}
$$

where *Ts = 0.02 s* is the length of the discrete time step. For velocity prediction, the 3D acceleration measurements are integrated and transformed from the aircraft body-axis (B) to the local Cartesian navigation frame:

$$
\begin{bmatrix} V\_{x\,k|k-1}^{L} \\ V\_{y\,k|k-1}^{L} \\ V\_{z\,k|k-1}^{L} \end{bmatrix} = \begin{bmatrix} V\_{x\,k-1|k-1}^{L} \\ V\_{y\,k-1|k-1}^{L} \\ V\_{z\,k-1|k-1}^{L} \end{bmatrix} + DCM(\phi\_{k-1|k-1}^{B}, \theta\_{k-1|k-1}^{B}, \psi\_{k-1|k-1}^{B}) \begin{bmatrix} \tilde{a}\_{x\,k}^{B} \\ \tilde{a}\_{y\,k}^{B} \\ \tilde{a}\_{z\,k}^{B} \end{bmatrix} T\_{s} + \begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix} T\_{s} \tag{3}
$$

where *g* is the earth's gravity and DCM stands for the Direction Cosine Matrix:

$$DCM(\phi, \theta, \psi) = \begin{bmatrix} \mathrm{c}\,\psi\,\mathrm{c}\,\theta & -\mathrm{s}\,\psi\,\mathrm{c}\,\phi + \mathrm{c}\,\psi\,\mathrm{s}\,\theta\,\mathrm{s}\,\phi & \mathrm{s}\,\psi\,\mathrm{s}\,\phi + \mathrm{c}\,\psi\,\mathrm{s}\,\theta\,\mathrm{c}\,\phi \\ \mathrm{s}\,\psi\,\mathrm{c}\,\theta & \mathrm{c}\,\psi\,\mathrm{c}\,\phi + \mathrm{s}\,\psi\,\mathrm{s}\,\theta\,\mathrm{s}\,\phi & -\mathrm{c}\,\psi\,\mathrm{s}\,\phi + \mathrm{s}\,\psi\,\mathrm{s}\,\theta\,\mathrm{c}\,\phi \\ -\mathrm{s}\,\theta & \mathrm{c}\,\theta\,\mathrm{s}\,\phi & \mathrm{c}\,\theta\,\mathrm{c}\,\phi \end{bmatrix} \tag{4}$$

where 's' and 'c' are abbreviated sine and cosine functions respectively.
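Equation (4) is easy to implement and sanity-check. The short pure-Python sketch below (the function name is ours) builds the matrix and verifies that it is orthonormal, as any rotation matrix must be.

```python
import math

def dcm(phi, theta, psi):
    """Body-to-local Direction Cosine Matrix of Equation (4), as a 3x3 list."""
    s, c = math.sin, math.cos
    return [
        [c(psi) * c(theta),
         -s(psi) * c(phi) + c(psi) * s(theta) * s(phi),
         s(psi) * s(phi) + c(psi) * s(theta) * c(phi)],
        [s(psi) * c(theta),
         c(psi) * c(phi) + s(psi) * s(theta) * s(phi),
         -c(psi) * s(phi) + s(psi) * s(theta) * c(phi)],
        [-s(theta), c(theta) * s(phi), c(theta) * c(phi)],
    ]

# Sanity check: a rotation matrix is orthonormal, i.e. D * D^T = I.
D = dcm(0.1, -0.2, 0.3)
for i in range(3):
    for j in range(3):
        dot = sum(D[i][k] * D[j][k] for k in range(3))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```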

The aircraft Euler angles are predicted with the 3-axis angular rate measurements:

$$
\begin{bmatrix} \phi\_{k|k-1}^{B} \\ \theta\_{k|k-1}^{B} \\ \psi\_{k|k-1}^{B} \end{bmatrix} = \begin{bmatrix} \phi\_{k-1|k-1}^{B} \\ \theta\_{k-1|k-1}^{B} \\ \psi\_{k-1|k-1}^{B} \end{bmatrix} + \begin{bmatrix} \tilde{p}\_{k}^{B} + \tilde{q}\_{k}^{B} \sin\phi\_{k-1|k-1}^{B} \tan\theta\_{k-1|k-1}^{B} + \tilde{r}\_{k}^{B} \cos\phi\_{k-1|k-1}^{B} \tan\theta\_{k-1|k-1}^{B} \\ \tilde{q}\_{k}^{B} \cos\phi\_{k-1|k-1}^{B} - \tilde{r}\_{k}^{B} \sin\phi\_{k-1|k-1}^{B} \\ (\tilde{q}\_{k}^{B} \sin\phi\_{k-1|k-1}^{B} + \tilde{r}\_{k}^{B} \cos\phi\_{k-1|k-1}^{B}) \sec\theta\_{k-1|k-1}^{B} \end{bmatrix} T\_{s} \tag{5}
$$
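A one-step implementation of Equation (5) is shown below; the function and argument names are hypothetical.

```python
import math

def predict_euler(phi, theta, psi, p, q, r, Ts=0.02):
    """One discrete prediction step of Equation (5) for the Euler angles.

    p, q, r are the measured body-axis angular rates (rad/s); Ts is the
    0.02 s time step used in the chapter.
    """
    sin, cos, tan = math.sin, math.cos, math.tan
    phi_dot = p + q * sin(phi) * tan(theta) + r * cos(phi) * tan(theta)
    theta_dot = q * cos(phi) - r * sin(phi)
    psi_dot = (q * sin(phi) + r * cos(phi)) / cos(theta)  # sec(theta)
    return phi + phi_dot * Ts, theta + theta_dot * Ts, psi + psi_dot * Ts

# A pure roll rate of 0.5 rad/s at wings level only changes phi:
print(predict_euler(0.0, 0.0, 0.0, 0.5, 0.0, 0.0))
```

Note the sec(θ) term: like any Euler-angle formulation, the kinematics are singular at θ = ±90°, which is acceptable for the flight envelope considered here.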

The nine predicted state variables are then regulated by the GPS position and velocity measurements during the measurement update process with a simple observation equation:

$$\mathbf{z}\_k = \begin{bmatrix} \tilde{x}\_k^L & \tilde{y}\_k^L & \tilde{z}\_k^L & \tilde{V}\_{x\,k}^L & \tilde{V}\_{y\,k}^L & \tilde{V}\_{z\,k}^L \end{bmatrix}^T = \begin{bmatrix} x\_k^L + v\_x & y\_k^L + v\_y & z\_k^L + v\_z & V\_{x\,k}^L + v\_{Vx} & V\_{y\,k}^L + v\_{Vy} & V\_{z\,k}^L + v\_{Vz} \end{bmatrix}^T \tag{6}$$

The solution of the GPS/INS sensor fusion problem follows the classic EKF approach as outlined in (Simon, 2006). The filter tuning is performed through the selection of the process noise covariance matrix *Q* and the measurement noise covariance matrix *R*. Specifically, the process noise is approximated by the sensor-level noise present on the IMU measurement:

$$Q = \mathrm{diag}([0, 0, 0, \sigma\_{v\_{ax}}^2, \sigma\_{v\_{ay}}^2, \sigma\_{v\_{az}}^2, \sigma\_{v\_{p}}^2, \sigma\_{v\_{q}}^2, \sigma\_{v\_{r}}^2])\, T\_s^2 \tag{7}$$

where the first three zeros indicate that no uncertainty is associated with Equation (2). Similarly, the variance of the GPS measurement noise, calculated from a static ground test, is used to construct the *R* matrix:

$$R = \mathrm{diag}([\sigma\_{v\_x}^2, \sigma\_{v\_y}^2, \sigma\_{v\_z}^2, \sigma\_{v\_{Vx}}^2, \sigma\_{v\_{Vy}}^2, \sigma\_{v\_{Vz}}^2]) \tag{8}$$
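Constructing *Q* as described for Equation (7) amounts to computing per-axis variances from static ground-test records and placing them on a diagonal. The sketch below uses illustrative sample data; `build_Q` and its inputs are our naming, not the chapter's code.

```python
import statistics

def diag(values):
    """Diagonal matrix (list of lists) from a list of variances."""
    n = len(values)
    return [[values[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

def build_Q(accel_samples, gyro_samples, Ts=0.02):
    """Process noise covariance of Equation (7).

    The first three (position) entries are zero; the remaining six are the
    accelerometer and rate-gyro noise variances estimated from static
    ground tests, scaled by Ts^2.
    """
    variances = [0.0, 0.0, 0.0] + \
                [statistics.pvariance(s) for s in accel_samples + gyro_samples]
    return diag([v * Ts ** 2 for v in variances])

# Illustrative static-test noise records for the accelerometer/gyro axes:
ax = [0.010, -0.020, 0.000, 0.020, -0.010]
gx = [0.001, -0.001, 0.000, 0.001, -0.001]
Q = build_Q([ax, ax, ax], [gx, gx, gx])
assert len(Q) == 9 and Q[0][0] == 0.0 and Q[3][3] > 0.0
```

The *R* matrix of Equation (8) would be assembled the same way from logged GPS position and velocity noise samples.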

Fig. 8. Validation of the GPS/INS sensor fusion algorithm performance: roll and pitch estimates compared with vertical gyro measurements.

The performance and robustness of the attitude estimation algorithm were evaluated against multiple sets of flight data. Within these flights, a Goodrich VG34® mechanical vertical gyroscope was carried on-board to provide independent pitch and roll angle measurements and was used as the reference for evaluating the GPS/INS sensor fusion performance. The VG34 has a self-erection system and a reported accuracy of within 0.25° of true vertical. Figure 8 shows a comparison between the GPS/INS estimates and the VG34 measurements on both roll and pitch channels for one of the May 27, 2011 flight tests. In this particular flight, the mean absolute error and the standard deviation of the error for roll estimation are 2.64° and 2.29° respectively; for pitch estimation they are 2.22° and 1.93° respectively.

## **6. Avionic system testing**

Extensive ground and flight testing experiments were performed to verify the functionality and performance of the Gen-V avionics design and to enable different aviation safety-related flight experiments.

## **6.1 Avionics integration**

The integration of avionics components into an airframe is constrained by many practical factors, such as aircraft balance, sensor alignment, signal interference, heat dissipation, vibration damping, and user accessibility. In particular, a key consideration for the avionics integration is to minimize EMI effects. Within a sub-scale aircraft, the EMI issue is recurrent due to the close proximity of electrical components within a confined space. The effects of EMI include reduced sensor measurement quality and disruptions of the command and control link, which could potentially lead to the loss of an aircraft. An integrated approach is used to mitigate the EMI problem. This includes careful circuit design to reduce cross-interference; redundancy on safety-critical components; proper shielding of the main electronic components and cables; separation of EMI sources from the R/C receivers; and reducing the number and length of cables. Once every avionics sub-system is installed, a comprehensive spectrum analysis and ground range tests are performed to identify residual EMI issues. Remaining problems can usually be alleviated through the application of additional shielding materials, the addition of ferrite chokes on selected cables, or alternative antenna placements for the RF modem and R/C receivers. Finally, a systematic ground range check procedure is performed before each flight to ensure a safe operation.

4. *Reliability testing*, which includes a number of duration tests under simulated dynamic operating environments;

5. *Calibration*, which includes the calibration of individual sensors, PWM reading and generating processes, individual control actuators, and pilot input devices such as the R/C transmitter and the research pilot control station;

6. *Modelling*, which includes the development of mathematical models for the test-bed aircraft, actuators, propulsion systems, and sensors, as well as the identification of model parameters;

7. *Simulation*, which includes model-based simulation for initial validation of mission-specific research algorithms, and hardware-in-the-loop simulation for evaluating the integration between hardware and software sub-systems.

A flight test is considered after all related ground tests are performed.

## **6.3 Flight testing**

Flight testing provides the final validation of the aircraft and its flight control system. However, it is also well known that an experimental flight testing program, either with a full-scale or a sub-scale aircraft, is associated with substantial risks. A general strategy for flight risk mitigation focuses on three steps:

1. *Prevent* the aircraft from entering an adverse flight condition;
2. Timely *identification* of the problem when an emergency situation develops;
3. *Recover* the aircraft or minimize its damage during the accident.

An adverse flight condition could be caused by improper/inadequate planning, pilot error, atmospheric conditions, and aircraft sub-system (e.g. mechanical, electrical, power, control, and communication) failures. Quite often, an aviation accident has multiple inter-connected contributing factors (Boeing, 2009).

It is worth noting that the general objective of a fault-tolerant flight control research program with a sub-scale aircraft is usually to facilitate the development of fault prevention, identification, and recovery methods for a full-scale manned aircraft. During flight experiments, the aircraft is often commanded to enter deliberately planned adverse conditions, while minimizing other flight-associated potential risks. This high level of uncertainty, with both expected and unexpected failure contributing factors, provides valuable experience and insights for understanding aviation accidents and the unique opportunity to practice and refine risk mitigation approaches.

## **6.3.1 Risk mitigation and flight testing protocol**

Two effective approaches for improving the operational safety of a sub-scale flight testing program are incremental testing and the standardization of flight protocols. The incremental flight testing method utilizes a 'divide and conquer' approach to build up individual sub-system capabilities and allow them to mature over a series of increasingly complex experiments. Each step should be a logical extension of previous steps, but should also be large enough to ensure a timely completion of the project. For example, an experiment to study the aircraft dynamics at high angle of attack flight conditions could be built upon the following key steps:

## **6.2 Ground testing**

The ground testing procedure for the WVU Gen-V avionics system involves the following main categories:


A flight test is considered after all related ground tests are performed.

## **6.3 Flight testing**

514 Recent Advances in Aircraft Technology

and is used as the reference for evaluating the GPS/INS sensor fusion performance. The VG34 has a self-erection system, and reported accuracy of within 0.25° of true vertical. Figure 8 shows a comparison between the GPS/INS estimates and VG34 measurements on both roll and pitch channels for one of the May 27, 2011 flight tests. The mean absolute error and standard deviation error for roll estimation are 2.64° and 2.29° respectively in this particular flight. The mean absolute error and standard deviation error for pitch estimation

Extensive ground and flight testing experiments were performed to verify the functionality and performance of the Gen-V avionics design and to enable different aviation safety related

The integration of avionics components into an airframe is constrained by many practical factors, such as aircraft balance, sensor alignment, signal interference, heat-dissipation, vibration damping, and user accessibility. Particularly, a key consideration for the avionic integration is to minimize the EMI effect. Within a sub-scale aircraft, the EMI issue is recurrent due to the close proximity of electrical components within a confined space. The effect of EMI includes reduced sensor measurement quality and disruptions of the command and control link, which could potentially lead to the loss of an aircraft. An integrated approach is used to mitigate the EMI problem. This include careful circuit design to reduce cross-interferences; providing redundancy on safety-critical components; proper shielding of main electronic components and cables; separation of EMI sources from R/C receivers; and reducing the number and length of cables. Once every avionics sub-system is installed, a comprehensive spectrum analysis and ground range tests are performed to identify residual EMI issues. Remaining problems can usually be alleviated through application of additional shielding materials, addition of ferrite chokes on selected cables, or through alternative antenna placements for RF modem and R/C receivers. Finally, a systematic ground range check procedure is performed before each flight to ensure a safe

The ground testing procedure for the WVU Gen-V avionics system involves the following

1. *Hardware testing*, which includes the basic conductivity tests, evaluation of system power consumption and heat dissipation, EMI tests, and range tests for the R/C and

2. *Software testing*, which includes the latency measurement of the real-time operating system and profiling the computational resource use by different software components; 3. *Hardware/software integration*, which includes the evaluation of sensor measurement quality, communication dropouts, PWM reading and generating accuracy, control

system delay, and the functionality of the flight mode transition logics;

are 2.22° and 1.93° respectively.

**6. Avionic system testing** 

flight experiments.

operation.

**6.2 Ground testing** 

main categories:

data links;

**6.1 Avionics integration** 

Flight testing provides the final validation of the aircraft and its flight control system. However, it is also well known that experimental flight testing program, either with a fullscale or a sub-scale aircraft, is associated with substantial risks. A general strategy for flight risk mitigation focus on three steps:
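As an illustration, the prevent/identify/recover strategy can be sketched as three cooperating checks in the flight software. This is a simplified sketch only: the condition names, thresholds, and procedures below are hypothetical examples, not the WVU implementation.

```python
# Illustrative sketch of the prevent / identify / recover strategy.
# All condition names, thresholds, and procedures are hypothetical examples.

def check_flight_envelope(airspeed_ms, bank_deg):
    """Prevent: flag states that would leave a (hypothetical) safe envelope."""
    return 15.0 <= airspeed_ms <= 60.0 and abs(bank_deg) <= 60.0

def identify_problem(telemetry):
    """Identify: map telemetry symptoms to a named emergency condition."""
    if telemetry.get("rc_link_lost"):
        return "lost_link"
    if telemetry.get("engine_rpm", float("inf")) < 1000:
        return "engine_failure"
    return None

def recovery_action(problem):
    """Recover: select a pre-briefed emergency handling procedure."""
    procedures = {
        "lost_link": "climb and orbit, wait for link re-acquisition",
        "engine_failure": "establish best glide, land straight ahead",
    }
    return procedures.get(problem, "abort flight and land")
```

In practice each of these steps maps to a different part of the protocol: envelope checks support prevention, telemetry monitoring supports identification, and the pre-briefed procedures support recovery.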


An adverse flight condition could be caused by improper or inadequate planning, pilot error, atmospheric conditions, or aircraft sub-system (e.g. mechanical, electrical, power, control, and communication) failures. Quite often, an aviation accident has multiple inter-connected contributing factors (Boeing, 2009).

It is worth noting that the general objective of a fault-tolerant flight control research program with a sub-scale aircraft is usually to facilitate the development of fault prevention, identification, and recovery methods for full-scale manned aircraft. During flight experiments, the aircraft is often commanded to enter deliberately-planned adverse conditions, while other flight-associated risks are minimized. This high level of uncertainty, with both expected and unexpected failure contributing factors, provides valuable experience and insights for understanding aviation accidents, and a unique opportunity to practice and refine risk mitigation approaches.

## **6.3.1 Risk mitigation and flight testing protocol**

Two effective approaches for improving the operational safety of a sub-scale flight testing program are incremental testing and the standardization of flight protocols. The incremental flight testing method utilizes a 'divide and conquer' approach to build up individual sub-system capabilities and allows them to mature over a series of increasingly complex experiments. Each step should be a logical extension of the previous steps, but should also be large enough to ensure timely completion of the project. For example, an experiment to study the aircraft dynamics at high angle of attack flight conditions could be built upon the following key steps:

1. R/C flights for evaluating aircraft handling quality, stall characteristics, and payload capacity;
2. Data acquisition flights to evaluate avionics measurement quality and GPS/INS sensor fusion algorithm performance;
3. Closed-loop flights with a set of inner-loop control laws stabilizing the aircraft at the trim flight condition;
4. Closed-loop flights around the trim condition with OBES injection;
5. Closed-loop flights at high angle of attack conditions with OBES injection.
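An incremental build-up of this kind can be encoded so that each step is attempted only after every earlier step has passed. The sketch below is illustrative: the step names paraphrase the campaign stages, and the gating logic is an assumption rather than part of the WVU protocol.

```python
# Hypothetical sketch: each flight-test step in an incremental campaign is
# unlocked only after all earlier steps have been completed successfully.
STEPS = [
    "rc_handling_and_stall",        # R/C handling quality, stall, payload
    "data_acquisition",             # avionics and GPS/INS data quality
    "closed_loop_trim",             # inner-loop control at trim
    "closed_loop_obes_trim",        # OBES injection around trim
    "closed_loop_obes_high_alpha",  # OBES injection at high angle of attack
]

def next_step(completed):
    """Return the first step whose predecessors are all complete."""
    for step in STEPS:
        if step not in completed:
            return step
    return None  # campaign finished
```

A campaign manager built on such a structure makes it impossible to schedule, say, a high-angle-of-attack OBES flight before the trim-condition control laws have been demonstrated.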

Avionics Design for a Sub-Scale Fault-Tolerant Flight Control Test-Bed 517

Fig. 9. WVU flight testing operation protocol. The flow chart progresses through four stages: **Lab** (flight planning meeting, research algorithm development, aircraft ground test, airframe/avionics lab inspection, preliminary flight readiness review), **Airfield** (airframe/avionics field inspection, aircraft ground test, flight preparation checklist, final flight readiness review, pre-flight pilot de-briefing), **On the Runway** (system start-up, ground communication range test, flight operation checklist, go/no-go criteria check, flight approval), and **In the Air** (aircraft takeoff, trim, command hands-off, research, and landing procedures, plus emergency handling procedures), followed by system power-off, data download and preliminary analysis, post-flight discussion and pilot feedback, data backup, post-flight data analysis, post-flight meeting, and data archiving.


The standardization of flight testing protocols reduces human error both before and during the flight. It allows systematic planning, resource allocation, testing, and inspection during the flight preparation. During the flight, having a standard procedure and flight pattern reduces pilot stress and improves the consistency among flights. Additionally, having an emergency handling procedure reduces the pilot reaction time and avoids arbitrary decisions under adverse flight conditions. The flight testing protocol builds upon years of flight testing experience and provides a medium for storing and applying lessons learned from past mistakes.

A flow chart for the flight testing operation procedure developed at WVU is shown in Figure 9. A flight test session starts with a flight planning meeting in the lab to discuss mission objectives, test methods, and personnel responsibilities. A preliminary flight readiness review is normally performed a day before the flight date, following successful efforts in research algorithm development, ground testing, and aircraft inspection.

At the airfield, another round of aircraft inspection and ground tests is performed to ensure that all aircraft sub-systems are operational after ground transportation. This is enforced with a flight preparation checklist, which covers the airframe, avionics, R/C system, power system, firmware, research software, communication system, and the ground station. Additionally, the aircraft weight and balance are checked before the first flight of each aircraft. A final flight readiness review is then performed after the checklist is completed. Finally, a pre-flight pilot de-briefing discusses the flight procedures, research manoeuvres, and potential risks of this particular flight.

Once the aircraft is positioned at its starting position on the runway, the propulsion, R/C, and avionics systems are powered following an aircraft start-up procedure. A series of range tests is then performed to evaluate the R/C and data link range. A flight operation checklist is filled out to verify the general functionality of the aircraft, such as control surface deflections, propulsion system condition, and R/C system fail-safe settings. A set of 'go/no-go' criteria, which include wind speed, wind direction, communication range, and ground crew readiness, is then evaluated before a final approval of the flight by the flight director.
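A go/no-go evaluation of this kind is naturally expressed as a conjunction of independent checks. The sketch below is illustrative: the criterion names follow the text, but the numerical limit is a hypothetical placeholder, not WVU's actual threshold.

```python
def go_no_go(wind_speed_ms, wind_dir_ok, comm_range_ok, crew_ready,
             max_wind_ms=8.0):
    """Evaluate a set of go/no-go criteria before flight approval.

    Returns (decision, failed_criteria). The wind-speed limit is a
    hypothetical example; real limits come from the flight director.
    """
    criteria = {
        "wind_speed": wind_speed_ms <= max_wind_ms,
        "wind_direction": wind_dir_ok,
        "communication_range": comm_range_ok,
        "ground_crew": crew_ready,
    }
    failed = [name for name, ok in criteria.items() if not ok]
    return (len(failed) == 0, failed)
```

Returning the list of failed criteria, rather than a bare boolean, mirrors the checklist practice of recording exactly which item blocked the flight.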

The flight operation itself follows a set of pre-defined take-off, trim, command hands-off, research, and landing procedures. In case of an emergency, such as a single-engine failure, a dual-engine failure, a controller failure, an actuator failure, an aircraft upset condition, or changing weather conditions, a set of specific emergency handling procedures is followed to abort the flight and recover the aircraft.

After landing and powering off the aircraft, flight data are downloaded and analysed in the field to provide an initial assessment of data quality and determine any potential issues. A post-flight discussion session reviews the flight performance, problems encountered, and pilot feedback. After returning to the lab, a detailed data analysis is performed, followed by a post-flight meeting to conclude the flight session.
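The field assessment of data quality can be as simple as screening the log for missing samples. The sketch below is hypothetical: the logger format and sampling rate are assumptions, not the Gen-V data format.

```python
def screen_log(timestamps, expected_rate_hz=50.0, tolerance=1.5):
    """Flag gaps in a flight log where consecutive samples are further
    apart than `tolerance` nominal periods.

    The 50 Hz rate and list-of-timestamps format are illustrative
    assumptions about the logger, not the actual Gen-V format.
    """
    period = 1.0 / expected_rate_hz
    gaps = [(a, b) for a, b in zip(timestamps, timestamps[1:])
            if (b - a) > tolerance * period]
    return {"samples": len(timestamps), "dropouts": len(gaps), "gaps": gaps}
```

A quick report of sample counts and dropout gaps is usually enough to decide, on the runway, whether a research flight needs to be repeated.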


## **6.3.2 Flight test examples**


Two flight test examples are presented in this section to show the effectiveness of the designed avionics system. The first example is to collect data for identifying mathematical models of the 'Phastball' aircraft under high angle of attack flight conditions. The second example is to evaluate the human pilot performance with delayed control signals.

The objective of the first experiment is to study the aircraft dynamics under high angle of attack conditions. This is particularly important for T-tail aircraft, where the turbulent airflow from the stalled wing can blanket the elevators during a deep stall. For this experiment, the OBES manoeuvre is designed with a multi-sine frequency-sweep approach (Klein & Morelli, 2006) to minimize disturbances to the flight condition. Specifically, it is composed of six discrete frequency components ranging between 0.2 and 2.2 Hz. During the flight, a set of aircraft inner-loop controllers is activated with the *ctrl*-switch. The inner-loop controllers track a zero-degree roll angle and a 12-degree pitch angle as reference inputs, while holding the throttle positions constant. Two seconds into the autonomous flight, an 8-second stream of OBES manoeuvres is superimposed onto the elevator command generated by the inner-loop controllers. Several flight tests were performed with this configuration. Figure 10 shows a section of data collected from an October 10, 2011 flight test.
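A multi-sine excitation of the kind described (six discrete frequencies between 0.2 and 2.2 Hz, superimposed on the controller's elevator command for 8 seconds) can be sketched as follows. The specific frequency values, amplitudes, and zero phases below are illustrative assumptions; a real multi-sine design would additionally optimize the component phases to reduce the peak factor (Klein & Morelli, 2006).

```python
import math

# Six discrete frequencies between 0.2 and 2.2 Hz, as in the experiment;
# the exact values, amplitudes, and phases here are illustrative only.
FREQS_HZ = [0.2, 0.6, 1.0, 1.4, 1.8, 2.2]

def obes_elevator(t, start=2.0, duration=8.0, amplitude_deg=1.0):
    """Multi-sine OBES perturbation (deg), active from t=start for duration s."""
    if not (start <= t < start + duration):
        return 0.0
    return sum(amplitude_deg / len(FREQS_HZ) *
               math.sin(2.0 * math.pi * f * (t - start))
               for f in FREQS_HZ)

def elevator_command(t, controller_cmd_deg):
    """Superimpose the OBES manoeuvre onto the inner-loop elevator command."""
    return controller_cmd_deg + obes_elevator(t)
```

Because each sinusoid has a small amplitude and the components span the frequency band of interest, the excitation perturbs the dynamics enough for parameter identification while keeping the aircraft close to the commanded flight condition.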


Fig. 10. Elevator control command (left) and aircraft response (right) with OBES manoeuvres at high angle of attack.

Within Figure 10, the red line indicates when the *ctrl*-switch is turned on and off. The angle of attack gradually increases to approximately 7 degrees as the aircraft decelerates. Additional flight experiments are planned to investigate higher angles of attack, as well as pre-stall and post-stall flight conditions.

Fig. 11. Pilot elevator command vs. the actual elevator input during a command transmission delay experiment.

The objective of the second experiment is to study how the transmission delay of a fly-by-wire flight control system affects the handling quality of an aircraft. The on-board software is designed to relay the recorded pilot input to the actuators with an added delay. This occurs whenever the *ctrl*-switch is turned on, and a 100 ms increment is added to the total transmission delay during each *ctrl*-switch activation. The pilot flies the aircraft directly in the '*Manual Mode I*' with the *ctrl*-switch off. During the flight test, the pilot turns on the *ctrl*-switch at the beginning of a straight path. The pilot first injects an elevator doublet, waits until it settles, and then performs a turn manoeuvre. This process repeats multiple times in flight, with the transmission delay increasing up to 300 ms.
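The delayed-relay logic described above can be sketched with a simple timestamped buffer. The 100 ms increment per *ctrl*-switch activation and the 300 ms cap follow the text; the class interface and data structures are illustrative assumptions, not the on-board implementation.

```python
from collections import deque

class DelayedRelay:
    """Replays recorded pilot commands with an added transmission delay.

    The delay grows by 100 ms on each ctrl-switch activation, capped at
    300 ms as in the experiment; the rest of the design is illustrative.
    """

    def __init__(self):
        self.delay_s = 0.0
        self.buffer = deque()  # (timestamp, pilot_command) pairs

    def on_ctrl_switch_on(self):
        self.delay_s = min(self.delay_s + 0.1, 0.3)

    def record(self, t, pilot_cmd):
        self.buffer.append((t, pilot_cmd))

    def actuator_command(self, t):
        """Return the newest recorded command at least delay_s old."""
        cmd = None
        while self.buffer and self.buffer[0][0] <= t - self.delay_s:
            cmd = self.buffer.popleft()[1]
        return cmd
```

Buffering by timestamp rather than by a fixed number of samples keeps the added delay independent of the software loop rate.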

The first experiment of this kind was performed on October 18, 2011. The pilot reported that "*…the whole flight was normal, and decay in elevator was negligible…*" during the post-flight discussion. However, the collected flight data clearly indicate increased pilot activity with increased command transmission delay, as shown in Figure 11. A later experiment with a different pilot showed similar results. Additional flight experiments are planned to investigate larger transmission delays, random data dropouts, and flight conditions that require precise control actions.

(Figure 10 panels: "OBES at High Alpha - Control Command" and "OBES at High Alpha - Aircraft Response", plotting elevator deflection and pitch angle/angle of attack against time, with the ctrl-switch state overlaid. Figure 11 panel: "Increased Command Transmission Delay on the Elevator Channel", comparing the pilot input with the actual elevator command for delays of +0, +100, +200, and +300 ms.)

## **7. Conclusions**
The use of sub-scale research aircraft provides unique opportunities for investigating adverse flight conditions that are too risky or costly to be tested on a full-scale aircraft. It can be considered an intermediate validation tool between a flight simulator and a full-scale aircraft. It allows the testing of different system designs, modelling, control, fault detection, and risk mitigation approaches within a realistic physical environment.

Sub-scale flight testing for fault-tolerant flight control research also poses many challenges to the avionics system design: 1) it requires new capabilities for simulating different aircraft upset and failure conditions; 2) it requires a flexible interface for integrating both human and machine decision-making capabilities; and 3) it needs to be reliable and fault-tolerant to both planned and unexpected failures. The Gen-V avionics system, designed and developed at WVU, meets these complex research requirements along with strict power, weight, size, and cost limitations. Preliminary flight testing results demonstrate the capability of the proposed avionics design and its flexibility in supporting a variety of research objectives.

## **8. Acknowledgment**

This study was conducted with partial support from NASA grant # NNX07AT53A and grant # NNX10AI14G.

## **9. Appendix A: List of acronyms**



ADC – Analog to Digital Conversion
CAS – Control Augmentation System
CCSS – Confirmed Ctrl Switch Signal
COTS – Commercial-off-the-Shelf
EKF – Extended Kalman Filter
EMI – Electromagnetic Interference
EMP – Embedded Micro-Processor
FCS – Flight Control System
FES – Failure Emulation Software
GCS – Ground Control Station
GNC – Guidance, Navigation, Control
GPC – General-Purpose Computer
GPS – Global Positioning System
GPT – General-Purpose Timer
HUD – Heads-Up Display
IMU – Inertial Measurement Unit
INS – Inertial Navigation System
OBES – On-Board Excitation System
OS – Operating System
PCC – Pilot Control Command
PID – Parameter Identification
PPM – Pulse-Position Modulation
PWM – Pulse-Width Modulation
R/C – Remote Control
RF – Radio Frequency
RTAI – Real-Time Application Interface
SAS – Stability Augmentation System
SCC – Software-generated Control Commands
SCI – Serial Communication Interfaces
SPI – Serial Peripheral Interface
SPOF – Single Point of Failure
UAV – Unmanned Aerial Vehicle
WVU – West Virginia University

## **10. References**

Ambrosia, V.G.; Brass, J.A.; Greenfield, P. & Wegener, S. (2004). *Collaborative Efforts in R&D and Applications of Imaging Wildfires*, US Forest Service. Available: http://geo.arc.nasa.gov/sge/WRAP/projects/docs/RS2004\_PAPER.PDF.

Boeing Commercial Airplanes (2009). Statistical Summary of Commercial Jet Airplane Accidents, World Wide Operations, 1959-2008, Seattle, WA. Available: http://www.boeing.com/news/techissues.

Chao, H.Y.; Jensen, A.M.; Han, Y.; Chen, Y.Q. & McKee, M. (2009). AggieAir: Towards Low-cost Cooperative Multispectral Remote Sensing Using Small Unmanned Aircraft Systems, Chapter, *Advances in Geoscience and Remote Sensing*, Gary Jedlovec, Ed., Vukovar, Croatia: IN-TECH, pp. 467–490.

Christophersen, H.B.; Pickell, W.J.; Koller, A.A.; Kannan, S.K. & Johnson, E.N. (2004). Small Adaptive Flight Control Systems for UAVs using FPGA/DSP Technology, *Proceedings of the AIAA "Unmanned Unlimited" Technical Conference, Workshop, and Exhibit*, Chicago, IL, September, 2004.

Cione, J.J.; Uhlhorn, E.W.; Cascella, G.; Majumdar, S.J.; Sisko, C.; Carrasco, N.; Powell, M.D.; Bale, P.; Holland, G.; Turlington, P.; Fowler, D.; Landsea, C.W. & Yuhas, C.L. (2008). The First Successful Unmanned Aerial System (UAS) Mission into a Tropical Cyclone (Ophelia 2005), *12th Conference on IOAS-AOLS*, New Orleans, LA, January 2008.

Evans, J.; Inalhan, G.; Jang, J.S.; Teo, R. & Tomlin, C.J. (2001). DragonFly: a Versatile UAV Platform for the Advancement of Aircraft Navigation and Control, *Digital Avionics Systems, DASC. The 20th Conference*, vol. 1, pp. 1C3/1-1C3/12, Daytona Beach, FL, October, 2001.

Griffiths, S.; Saunders, J.; Curtis, A.; Barber, B.; McLain, T. & Beard, R. (2006). Maximizing Miniature Aerial Vehicles, *IEEE Robotics & Automation Magazine*, vol. 13, no. 3, pp. 34-43, Sept. 2006.

Gross, J.; Gu, Y.; Rhudy, M.; Gururajan, S. & Napolitano, M.R. (2011). Flight Test Evaluation of Sensor Fusion Algorithms for Attitude Estimation, *IEEE Transactions on Aerospace and Electronic Systems*, in press, June, 2011.

Gu, Y.; Campa, G.; Seanor, B.; Gururajan, S. & Napolitano, M.R. (2009). Autonomous Formation Flight – Design and Experiments, Chapter 12, *Aerial Vehicles*, ISBN 978-953-7619-41-1, I-Tech Education and Publishing, Austria, EU, pp. 233-256.

How, J.P.; Bethke, B.; Frank, A.; Dale, D. & Vian, J. (2008). Real-Time Indoor Autonomous Vehicle Test Environment, *IEEE Control Systems Magazine*, vol. 28, no. 2, pp. 51-64, April, 2008.

Jordan, T.L.; Foster, J.V.; Bailey, R.M. & Belcastro, C.M. (2006). AirSTAR: A UAV Platform for Flight Dynamics and Control System Testing, *25th AIAA Aerodynamic Measurement Technology and Ground Testing Conference*, San Francisco, CA, June, 2006.

Jourdan, D.B.; Piedmonte, M.D.; Gavrilets, V. & Vos, D.W. (2010). Enhancing UAV Survivability Through Damage Tolerant Control, *AIAA Guidance, Navigation and Control Conference*, Toronto, Ontario, Canada, August, 2010.

Klein, V. & Morelli, E.A. (2006). *Aircraft System Identification – Theory and Practice*, AIAA Education Series, AIAA, Reston, VA.

Liebeck, R.H. (2004). Design of the Blended Wing Body Subsonic Transport, *Journal of Aircraft*, Vol. 41, No. 1, pp. 10-25, January–February, 2004.

Miller, J.A.; Minear, P.D.; Niessner, A.F.; DeLullo, A.M.; Geiger, B.R.; Long, L.L. & Horn, J.F. (2005). Intelligent Unmanned Air Vehicle Flight Systems, *Infotec@AIAA*, Arlington, Virginia, September, 2005.

Murch, A.M. (2008). A Flight Control System Architecture for the NASA AirSTAR Flight Test Infrastructure, *AIAA Guidance, Navigation, and Control Conference*, Honolulu, HI, August, 2008.

NRC (National Research Council) (1997). Aviation Safety and Pilot Control: Understanding and Preventing Unfavorable Pilot-Vehicle Interactions, *National Academy Press*.

Perhinschi, M.; Napolitano, M.R.; Campa, G.; Seanor, B.; Gururajan, S. & Gu, Y. (2005). Design and Flight Testing of Intelligent Flight Control Laws for the WVU YF-22 Model Aircraft, *AIAA Guidance, Navigation, and Control Conference*, San Francisco, California, August, 2005.

Phillips, K.; Gururajan, S.; Campa, G.; Seanor, B.; Gu, Y. & Napolitano, M.R. (2010). Nonlinear Aircraft Model Identification and Validation for a Fault-Tolerant Flight Control System, *AIAA Atmospheric Flight Mechanics Conference*, Toronto, Ontario, Canada, August, 2010.

Planecrashinfo.com (2011). Causes of Fatal Accidents by Decade. Available: http://planecrashinfo.com/cause.htm.

Rhudy, M.; Gu, Y.; Gross, J. & Napolitano, M.R. (2011). Sensitivity Analysis of EKF and UKF in GPS/INS Sensor Fusion, *AIAA Guidance, Navigation, and Control Conference*, Portland, OR, August, 2011.


**22** 

*1Russia* 

**Study of Effects** 

*1Istra, Moscow region, 2Cardiff University,* 

*2United Kingdom* 

**of Lightning Strikes to an Aircraft** 

N.I. Petrov1, A. Haddad2, G.N. Petrova1, H. Griffiths2 and R.T. Waters2

It is difficult to avoid thunderstorm regions by aircraft, so that on average every commercial airliner is struck by lightning once per year. Defining test and design criteria of aircraft is becoming important since aircraft safety is increasingly dependent on electronic equipment and the development of new materials (carbon composites, etc.) to replace the metallic airframes.

In-flight statistics show that most strikes occurred 3-5 km above sea level, where the temperature is ~ 0C (Uman & Rakov, 2003; Larsson, 2002). There are two different types of lightning strikes to aircraft. The first type is that the aircraft initiates the lightning discharge when it is found in the intense electric field region of a thundercloud, and the second is the interception by the aircraft of an approaching lightning leader. The mechanism for lightning initiation by aircraft is often explained using the "bidirectional leader" theory (Clifford & Casemir, 1982; Mazur, 1989; Mazur et al., 1990; Mazur & Moreau, 1992), which describes the aircraft-initiated lightning process as a positive leader starting from the aircraft in the direction of the ambient electric field; this is followed, a few milliseconds later, by a negative leader developing in the opposite direction. This order of events is a consequence of the lower electric strength of air in the vicinity of a divergent (anode) field. The ambient thundercloud electric field measured under such conditions is typically in the range 50 - 100

Radome "measles" (coloured spots on the inner radome surface) have been observed in many instances during service (Lalande et al., 1999; Ulmann et al., 1999). Each spot corresponds to a pin hole through the sandwich panel of the radome material. A possible explanation of the origin of these pin holes is that they were caused by breakdown due to double-layer charge accumulation on the radome. However, the physical mechanisms of the

The purpose of this chapter was to investigate the physical processes involved in lightning strikes to aircraft and to compare simulation results with other studies involving instrumented aircraft flying in thunderstorms. 3-*D* electric field calculations were performed to determine the field distributions at the nose of aircraft and inside the dielectric radome (nosecone). The influence of the thickness and dielectric constant of the radome wall on the electric field

**1. Introduction** 

*kV/m* (Marshall & Rust, 1991).

occurrence of "measles" are not fully established yet.


## **Study of Effects of Lightning Strikes to an Aircraft**

N.I. Petrov¹, A. Haddad², G.N. Petrova¹, H. Griffiths² and R.T. Waters²

*¹Istra, Moscow Region, Russia*
*²Cardiff University, United Kingdom*

## **1. Introduction**


It is difficult for aircraft to avoid thunderstorm regions, and on average every commercial airliner is struck by lightning once per year. Defining test and design criteria for aircraft is becoming increasingly important, since aircraft safety depends ever more on electronic equipment and on the new materials (carbon composites, etc.) that are replacing metallic airframes.

In-flight statistics show that most strikes occur 3–5 km above sea level, where the temperature is ~0 °C (Uman & Rakov, 2003; Larsson, 2002). There are two different types of lightning strikes to aircraft: in the first, the aircraft itself initiates the lightning discharge when it is within the intense electric field region of a thundercloud; in the second, the aircraft intercepts an approaching lightning leader. The mechanism for lightning initiation by aircraft is often explained using the "bidirectional leader" theory (Clifford & Casemir, 1982; Mazur, 1989; Mazur et al., 1990; Mazur & Moreau, 1992), which describes the aircraft-initiated lightning process as a positive leader starting from the aircraft in the direction of the ambient electric field, followed, a few milliseconds later, by a negative leader developing in the opposite direction. This order of events is a consequence of the lower electric strength of air in the vicinity of a divergent (anode) field. The ambient thundercloud electric field measured under such conditions is typically in the range 50–100 *kV/m* (Marshall & Rust, 1991).

Radome "measles" (coloured spots on the inner radome surface) have been observed in many instances during service (Lalande et al., 1999; Ulmann et al., 1999). Each spot corresponds to a pin hole through the sandwich panel of the radome material. A possible explanation of the origin of these pin holes is that they were caused by breakdown due to double-layer charge accumulation on the radome. However, the physical mechanisms of the occurrence of "measles" are not fully established yet.

The purpose of this chapter is to investigate the physical processes involved in lightning strikes to aircraft and to compare simulation results with studies involving instrumented aircraft flying in thunderstorms. 3-D electric field calculations were performed to determine the field distributions at the nose of the aircraft and inside the dielectric radome (nosecone). The influence of the thickness and dielectric constant of the radome wall on the electric field


penetration inside the radome was also investigated. The screening effect caused by ice and water layers on the radome wall is demonstrated. A new proposal for radome protection is made possible by the development of strips using materials such as non-linear *ZnO*, which behave as dielectrics under low-field conditions and acquire the properties of conductors if the external electric field exceeds a critical value. Experimental tests of the strips on a real aircraft radome were carried out, and the test results are reported in this chapter.

#### **2. Lightning attachment to aircraft**

It was recently reported that about 90% of lightning strikes to aircraft are initiated by the aircraft (Uman & Rakov, 2003). This indicates that the aircraft extremities provide the region of high electric field needed to initiate a lightning discharge by enhancing the ambient electric field. The aircraft geometry and ambient atmospheric conditions are the most important factors in determining the local electric field intensification. Since pressure, absolute humidity and temperature decrease with increasing altitude, the variation of streamer properties with altitude can be inferred from laboratory experiments and incorporated into lightning modelling.

It is inferred from (Petrov & Waters, 1994, 1995) that the electric field needed to initiate a lightning discharge at 4km altitude is only about half of the value at sea level. Calculations show that the required striking distance increases significantly with increasing altitude, causing a corresponding increase in the risk of lightning strikes for aircraft in flight. It is shown, in the following, that ambient electric fields of between 50-80 *kV/m* can initiate positive leaders at the nose of aircraft at such altitudes.

#### **2.1 Aircraft-initiated lightning**

Consider the aircraft body as an electrically floating conducting ellipsoid placed in a uniform ambient electric field *E*0 (Fig. 1). An analytical expression may be obtained for the enhanced electric field in the vicinity of the nose for the case where the major axis is parallel to *E*0 (Petrov & Waters, 1994):

$$E(\mathbf{x}, a, b) = E\_0 \left[ \frac{1 - \frac{\operatorname{ar}\tanh(aA^{1/2} \text{ /} \mathbf{x}) - aA^{1/2} \text{ /} \mathbf{x}}{\operatorname{ar}\tanh A^{1/2} - A^{1/2}} + \right] \tag{1}$$

$$\left[ \frac{A}{(\mathbf{x}^2 \, / \ a^2 + b^2 \, / \ a^2 - 1)} \frac{aA^{1/2} \, / \mathbf{x}}{(\operatorname{ar}\tanh A^{1/2} - A^{1/2})} \right] \tag{1}$$

where *A* = 1 − *b*²/*a*², *a* and *b* are the half-length and half-width of the ellipsoid, and (*x* − *a*) is the distance from the ellipsoid tip.

Fig. 1. Aircraft model representation and field intensification.
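Equation (1) can be checked numerically. The sketch below (function name is ours; the 25 m × 3 m ellipsoid follows the example used later for Fig. 2) evaluates the on-axis field and shows the strong intensification at the tip decaying to the ambient value far from the aircraft:

```python
import math

def field_enhancement(x, a, b, E0=1.0):
    """On-axis electric field of a floating conducting ellipsoid (half-length a,
    half-width b) in a uniform ambient field E0, per Eq. (1). x is the distance
    from the ellipsoid centre along the major axis (x >= a)."""
    A = 1.0 - b**2 / a**2
    sA = math.sqrt(A)
    denom = math.atanh(sA) - sA
    term1 = (math.atanh(a * sA / x) - a * sA / x) / denom
    term2 = A / (x**2 / a**2 + b**2 / a**2 - 1.0) * (a * sA / x) / denom
    return E0 * (1.0 - term1 + term2)

# Tip of a 50 m x 6 m aircraft ellipsoid: roughly 37-fold intensification.
print(field_enhancement(25.0, 25.0, 3.0))
# Far from the aircraft the field returns to the ambient value E0.
print(field_enhancement(1e6, 25.0, 3.0))
```

A tip intensification of a few tens is what makes leader inception possible in ambient fields of only tens of kV/m.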

penetration inside the radome was also investigated. The screening effect caused by ice and water layers on the radome wall is demonstrated. A new proposal for radome protection is made possible by the development of strips using materials such as non-linear *ZnO*, which behave as dielectrics under low-field conditions and acquire properties of conductors if the external electric field exceeds the critical value. Experimental tests of the strips on a real

It was recently reported that about 90% of lightning strikes to aircraft are initiated by the aircraft (Uman & Rakov, 2003). This indicates that the aircraft extremities provide the region of high electric field needed to initiate a lightning discharge by enhancing the ambient electric field. The aircraft geometry and ambient atmospheric conditions are the most important factors in determining the local electric field intensification. Since pressure, absolute humidity and temperature decrease with increasing altitude, the variation of streamer properties with altitude can be inferred from laboratory experiments and

It is inferred from (Petrov & Waters, 1994, 1995) that the electric field needed to initiate a lightning discharge at 4km altitude is only about half of the value at sea level. Calculations show that the required striking distance increases significantly with increasing altitude, causing a corresponding increase in the risk of lightning strikes for aircraft in flight. It is shown, in the following, that ambient electric fields of between 50-80 *kV/m* can initiate

Consider the aircraft body as an electrically floating conducting ellipsoid placed in a uniform ambient electric field *E*0 (Fig.1). An analytical expression may be obtained for the enhanced electric field in the vicinity of the nose for the case where the major axis is parallel

0 1 2

where *A =* 1 *- b*2*/a*2, *a* and *b* are the half-length and half-width of the ellipsoid and (*x – a)* is

tanh (,,) /

*ar A A Exab E*

Fig. 1. Aircraft model representation and field intensification.

tanh( / ) / <sup>1</sup>

*ar aA x aA x*

1 2 1 2 12 12

2222 12 12

*A aA x xaba ar A A* (1)

( / / 1) ( tanh )

aircraft radome were carried out, and the test results reported in this paper.

**2. Lightning attachment to aircraft** 

incorporated into lightning modelling.

**2.1 Aircraft-initiated lightning** 

to *E*0 (Petrov & Waters, 1994):

the distance from the ellipsoid tip.

positive leaders at the nose of aircraft at such altitudes.

For given ellipsoid parameters, it is possible to determine the critical value of the ambient electric field that predicts successful leader development from the aircraft. Using the criteria from (Petrov & Waters, 1994), we find ambient field magnitudes of *E*cr ≈ 50–80 kV/m (Fig. 2). This is insufficient at sea level to initiate leaders from the aircraft tip. However, at an altitude of 4000 m, where the relative air density is around 0.58, triggering of leaders originating from the nose could certainly occur. Ambient fields of 50 kV/m agree well with the fields measured inside storm clouds, consistent with the in-flight measurements of lightning strikes to aircraft (Lalande et al., 1999).

The dependence of the critical electric field on the half-length of the aircraft can be approximated with high accuracy by the empirical relationship

$$E_{cr} \cong 570\, a^{-0.68}, \tag{2}$$

where *a* is in m, and *E*cr in kV/m.

A similar relationship, with a slightly different coefficient, was obtained for earthed structures in (Petrov & D'Alessandro, 2002).

Fig. 2. Critical ambient electric field as a function of aircraft half-length (*E*cr = 65 kV/m at *a* = 25 m, *b* = 3 m).
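Equation (2) can be checked against the value quoted in the caption of Fig. 2 (a sketch; the function name is ours):

```python
def critical_field(a):
    """Critical ambient field (kV/m) for leader inception as a function of
    aircraft half-length a (m), per the empirical relationship of Eq. (2)."""
    return 570.0 * a ** -0.68

# For a half-length of 25 m this gives ~64 kV/m, matching the ~65 kV/m of Fig. 2.
print(critical_field(25.0))
```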

#### **2.2 Aircraft-intercepted lightning**

An aircraft can, in principle, intercept an approaching lightning leader, although no direct evidence is available. Nevertheless, in this case the striking distance concept usually applied to earthed structures may be used to estimate the risk factor. The striking distance and the probability of lightning strikes are functions of aircraft geometry and lightning current. The intensification of the electric field of a nearby lightning leader as a function of the distance from the aircraft tip is presented for different values of lightning peak current in Fig. 3. The aircraft is again modelled as an ellipsoid with a half-width of 3 m and a half-length of 25 m. The lightning leader channel is modelled by a charge per unit length, *q*, and a leader tip charge, *Q*, at a distance *S* from the aircraft. The values of *q* and *Q* correspond to a prospective lightning return stroke current *i*0, evaluated from (Petrov & Waters, 1995), i.e.


$$q \approx 0.43 \cdot 10^{-6}\, i_0^{2/3} \qquad \text{[C/m, A]} \tag{3}$$

Note that there are similar relationships between the leader channel charge and the return stroke current obtained from other models. A review of data concerning this relationship was made in (Cooray et al., 2004).
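As a numerical sketch of Eq. (3) (assuming the *i*0^(2/3) scaling reconstructed above; the function name is ours), the leader charge per unit length for the stroke currents used in Figs. 3 and 4 is:

```python
def leader_charge(i0):
    """Leader channel charge per unit length (C/m) as a function of the
    prospective return-stroke current i0 (A), per Eq. (3)."""
    return 0.43e-6 * i0 ** (2.0 / 3.0)

# 10 kA and 30 kA strokes: roughly 0.2 and 0.4 mC/m respectively.
print(leader_charge(10e3))
print(leader_charge(30e3))
```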

Fig. 3. Electric field intensification as a function of distance from the aircraft tip for different values of lightning peak current.

In Fig. 4, the striking distance of negative lightning to the aircraft as a function of lightning current is presented for different altitudes above sea level. Note that, for positive lightning, these distances are substantially less than those obtained for negative polarity lightning (Petrov & Waters, 1999).

A semi-quantitative estimate of the risk of lightning strike interception by an aircraft can be obtained from the concept of attractive area used in lightning protection standards for ground structures, which can also be derived from lightning models (Petrov & Waters, 1995). For a grounded structure of the size of a commercial aircraft, the attractive area for a powerful lightning stroke of 100 kA is of the order of 0.2 km². At 4000 m altitude, this would increase to 0.6 km². Then, if the flash activity (cloud-cloud and cloud-ground) is *N*


flashes/km²/s, the aircraft would be expected to intercept 0.6*N* flashes/s. Active storms can generate 2 flashes per minute over 10 km², which suggests an interception rate of about one per 500 s at the heart of a storm.
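The interception-rate estimate above can be reproduced directly (a sketch using only the numbers quoted in the text):

```python
# Attractive area of an aircraft-sized structure at 4000 m altitude (text value).
attractive_area_km2 = 0.6
# Active storm: 2 flashes per minute over 10 km^2.
flash_density = 2.0 / 60.0 / 10.0                 # flashes / km^2 / s
interception_rate = attractive_area_km2 * flash_density   # flashes / s
print(1.0 / interception_rate)                    # mean time between interceptions, s
```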

Fig. 4. Lightning interception distances for aircraft of different half-lengths as a function of altitude above sea level, for lightning peak current values of 10 kA and 30 kA.

## **3. Electric field around radomes**

Radar and communications antennae are usually located at the nose or tail of the aircraft, where lightning is most likely to attach. Lightning strikes damage non-metallic radomes, so diverter strips were developed to mitigate this problem. Diverter strips screen the lightning-induced electric fields on the antenna surface, i.e. they move the internal streamer initiation points forward, so that the strips cause the electric field inside the radome to collapse. Solid strips (permanent conductors) have been used for this purpose. However, they were found to interfere with antenna radiation patterns because they usually extend beyond the antenna. For this reason, segmented diverter strips were developed to reduce the interference with antenna radiation (Amason et al., 1975; Plumer & Hoots, 1978). Although they have better electromagnetic transparency for radar, segmented strips need a significant voltage gradient to light up, and their efficiency has still to be proven.

#### **3.1 Electric field distribution at radome without strips**

For a simplified analytical calculation of the 3-D electric field, consider a hemispherical radome of wall thickness *d* placed in a uniform field *E*0 (Fig. 5). This is equivalent to a floating dielectric hollow sphere (permittivity ε, internal and external radii *a* and *b*) placed in the electric field *E*0.


Fig. 5. Simplified model of radome exposed to electric field *E*0.

The analytical solution of Laplace's equation for the potentials outside the sphere (Region 1) and inside the sphere (Region 3) can be obtained as:

$$\begin{aligned} \varphi_1 &= -E_0\cos\theta\left(r - \frac{A}{r^2}\right), & [r > b];\\ \varphi_3 &= -BE_0\, r\cos\theta, & [r < a] \end{aligned} \tag{4}$$

and the potential inside the dielectric layer (Region 2)

$$\varphi_2 = -CE_0\cos\theta\left(r - \frac{D}{r^2}\right), \qquad [a < r < b] \tag{5}$$

where *A*, *B*, *C*, *D* are constants determined from the continuity of φ and ε∂φ/∂*r* on the boundaries of regions *1–2* and *2–3*. Calculation of these constants leads to the following expressions:

$$A = b^3\left\{1 - \frac{3\left[1 + 2\varepsilon + (\varepsilon - 1)\,a^3/b^3\right]}{(\varepsilon + 2)(2\varepsilon + 1) - 2(\varepsilon - 1)^2\, a^3/b^3}\right\},$$

$$B = \frac{9\varepsilon}{(\varepsilon + 2)(2\varepsilon + 1) - 2(\varepsilon - 1)^2\, a^3/b^3},$$

$$C = \frac{3(2\varepsilon + 1)}{(\varepsilon + 2)(2\varepsilon + 1) - 2(\varepsilon - 1)^2\, a^3/b^3}, \qquad D = -\frac{a^3(\varepsilon - 1)}{2\varepsilon + 1}. \tag{6}$$

For the radial and tangential components of the electric field outside the radome surface, we obtain

$$E_r = -\frac{\partial\varphi}{\partial r} = E_0\cos\theta\left(1 + \frac{2A}{r^3}\right), \qquad [r > b]$$

$$E_\theta = -\frac{1}{r}\frac{\partial\varphi}{\partial\theta} = -E_0\sin\theta\left(1 - \frac{A}{r^3}\right), \qquad [r > b] \tag{7}$$
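The interior field of the hollow sphere of Fig. 5 is uniform and equal to *BE*0, so the constant *B* directly measures the screening by the wall. The sketch below evaluates *B* from the continuity conditions (inner radius *a*, outer radius *b*; the wall dimensions are illustrative, and the function name is ours):

```python
def internal_field_ratio(a, b, eps):
    """E_inside / E0 for a hollow dielectric sphere of inner radius a, outer
    radius b and relative permittivity eps (constant B of the analytical
    solution for the sphere in a uniform field)."""
    s = (a / b) ** 3
    return 9.0 * eps / ((eps + 2.0) * (2.0 * eps + 1.0) - 2.0 * (eps - 1.0) ** 2 * s)

# A 30 mm wall on a 0.5 m sphere: the screening strengthens as eps grows,
# consistent with the trend shown in Fig. 6.
for eps in (1.0, 4.0, 7.0):
    print(eps, internal_field_ratio(0.47, 0.5, eps))
```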


In Figs. 6a and 6b, the radial and tangential electric field distributions are presented for radomes having different dielectric constants. It is seen that the screening of the electric field by the radome itself increases when the dielectric constant of the radome material increases.

Fig. 6. Electric field distribution in the vicinity of a radome: a) radial electric field distribution inside and outside the one-layer semi-spherical radome; b) tangential electric field distribution inside the one-layer semi-spherical radome.


#### i. 2-layer radome wall

By analogy, the potentials and electric fields may be obtained for the 2-layer radome wall placed in the field *E*0:

$$\begin{aligned} \varphi_1 &= -E_0\cos\theta\left(r - \frac{A}{r^2}\right), & [r > c]\\ \varphi_2 &= -CE_0\cos\theta\left(r - \frac{D}{r^2}\right), & [a < r < c]\\ \varphi_3 &= -FE_0\cos\theta\left(r - \frac{G}{r^2}\right), & [b < r < a]\\ \varphi_4 &= -BE_0\, r\cos\theta, & [r < b] \end{aligned} \tag{8}$$

$$A = c^3 - C(c^3 - D), \qquad B = \frac{F(b^3 - G)}{b^3},$$

$$C = \frac{3\varepsilon_1 c^3}{2\varepsilon_1(c^3 - D) + \varepsilon_2(c^3 + 2D)},$$

$$D = a^3\,\frac{1 - (\varepsilon_2/\varepsilon_3)(a^3 - G)/(a^3 + 2G)}{1 + 2(\varepsilon_2/\varepsilon_3)(a^3 - G)/(a^3 + 2G)},$$

$$F = \frac{C(a^3 - D)}{a^3 - G}, \qquad G = b^3\,\frac{1 - \varepsilon_3/\varepsilon_4}{1 + 2\varepsilon_3/\varepsilon_4},$$

where *b* < *a* < *c*, with *b* the internal radius of the inner layer, *a* and *c* the internal and external radii of the exterior layer, and ε1, ε2, ε3, ε4 the dielectric constants of the outside medium (air), the exterior layer, the interior layer and the inside medium (air), respectively.

Radial and tangential components of the electric field outside the radome surface are expressed by

$$E_r = -\frac{\partial\varphi}{\partial r} = E_0\cos\theta\left(1 + \frac{2A}{r^3}\right), \qquad [r > c]$$

$$E_\theta = -\frac{1}{r}\frac{\partial\varphi}{\partial\theta} = -E_0\sin\theta\left(1 - \frac{A}{r^3}\right), \qquad [r > c] \tag{9}$$

In Fig. 7, the electric field distributions inside and outside the two-layer semi-sphere radome are presented for different values of dielectric constants of layers. It can be seen that the field intensification at the tip of a radome increases with the dielectric constant of the radome layers.
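The constants of Eq. (8) can be evaluated in sequence (*G*, then *D*, *C*, *F* and finally *B*, the ratio of the interior field to *E*0). The sketch below uses illustrative radii and the layer notation of the text (the function name is ours):

```python
def two_layer_internal_ratio(b, a, c, e1, e2, e3, e4):
    """E_inside / E0 for the two-layer wall of Eq. (8):
    b = inner radius of the interior layer, a = boundary between the layers,
    c = outer radius; e1..e4 are the permittivities defined in the text."""
    G = b**3 * (1.0 - e3 / e4) / (1.0 + 2.0 * e3 / e4)
    k = (e2 / e3) * (a**3 - G) / (a**3 + 2.0 * G)
    D = a**3 * (1.0 - k) / (1.0 + 2.0 * k)
    C = 3.0 * e1 * c**3 / (2.0 * e1 * (c**3 - D) + e2 * (c**3 + 2.0 * D))
    F = C * (a**3 - D) / (a**3 - G)
    return F * (b**3 - G) / b**3

# Air inside and outside (e1 = e4 = 1), exterior layer e2 = 4, interior e3 = 6:
# the ratio is below 1, i.e. the two-layer wall partially screens the interior.
print(two_layer_internal_ratio(0.4, 0.45, 0.5, 1.0, 4.0, 6.0, 1.0))
```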


Fig. 7. Electric field distribution inside and outside the two-layer semi-spherical radome: a) εi = 3 for the internal layer; ε1 = 2, ε2 = 4, ε3 = 6 for the external layer; b) εi = 3; ε1 = 2, ε2 = 100, ε3 = 200; c) expanded scale of b).

Study of Effects of Lightning Strikes to an Aircraft 533

#### ii. Effect of ice and water layers

In in-flight environmental conditions, the radome may be covered by ice or water layers. Tests on radomes in rain and icing conditions were conducted recently (Hardwick et al., 1999, 2003), and it was shown that the ice layers increase the light-up voltages by a factor of 2 to 3.

Calculations of the electric field distributions in the case of ice and water on the radome surface show that the radome produces a significant shielding effect (Fig. 8). In this case, the lightning leader can be initiated from the radome tip, so the strips will not operate as usual. In Fig. 8, the electric field distributions are presented for different values of permittivity of the radome wall material. The radar is represented by a conducting hemisphere having a radius of 0.2 m. Note that, for a wide range of frequencies, the dielectric constants of water and ice are ε*H2O* = 87.9 and ε*ice* = 99, respectively (Handbook of Chemistry, 2001).

Fig. 8. Electric field distribution inside and outside the hemisphere radome with different dielectric constants of radome material.

#### **3.2 Electric field shielding effect of strips**

As was shown above, the electric field inside a dielectric radome is not disturbed significantly by the radome wall itself, so the radome does not produce screening effects. Low-level shielding permits the inception of a discharge from the internal electrode, so solid strips are usually used to produce the shielding effect. However, high-quality shielding has undesirable interference effects on antenna radiation. Therefore, the optimal length and number of strips should be determined. In the following, we consider a conical shaped radome with a base diameter of 0.7 m (Fig. 9). For this radome, electric field measurement results at its base were reported (Ulmann et al., 2001; Delannoy et al., 2001), which allows comparison of simulations with experimental data. Here, solid strips were considered as inclined isolated rods in a uniform external electric field, since analytical expressions for the electric field distribution exist in this case. In Fig. 10, the electric field at the radome base is shown as a function of strip length for different numbers of strips. It can be observed that the electric field at the radome base decreases by 50 % if 6 solid diverter strips of 0.4 m length are installed on the radome surface. This is in good agreement with the measurements (Ulmann et al., 2001).

Fig. 9. A conical radome with conducting solid strips: a) side view, b) view from top.

Fig. 10. Calculated electric field *E*z at the radome base as a function of strip length *L*s for different numbers of strips *n*.

#### **4. Laboratory lightning impulse tests**

Preliminary tests on a radome, used on a commercial aircraft, with a thickness of ~5 mm, a diameter of ~1.6 m and having six solid strips of 1 m length were performed in the high voltage laboratory at Cardiff University. Lightning impulses of 1.2/50 μs shape, positive and negative polarity, were applied to the output electrode (a sphere of 10 cm diameter or a rod with a spherical end of 1.2 cm diameter), which was placed at different distances (10-30 cm) from the surface of the radome. Breakdown channels were recorded using a video camera having a picture rate of 50 fps.
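The 1.2/50 μs test impulse is commonly modelled as a double exponential; the sketch below uses textbook time constants for that shape (68.2 μs tail, 0.405 μs front, 1.037 amplitude correction), which are assumptions for illustration, not parameters taken from the Cardiff experiments:

```python
import math

# Hedged sketch of a standard 1.2/50 us lightning impulse as a double
# exponential. The time constants and the 1.037 amplitude correction are
# textbook values for this shape, not measured test parameters.

TAU_TAIL_US = 68.2     # tail (half-value) time constant, us
TAU_FRONT_US = 0.405   # front time constant, us
CORR = 1.037           # amplitude correction so the peak equals v_peak_kv

def impulse(t_us: float, v_peak_kv: float = 250.0) -> float:
    """Voltage (kV) of a 1.2/50 double-exponential impulse at time t (us)."""
    return CORR * v_peak_kv * (math.exp(-t_us / TAU_TAIL_US)
                               - math.exp(-t_us / TAU_FRONT_US))

# The waveform peaks near 2 us and falls to ~50 % of the peak near t = 50 us.
peak = max(impulse(t / 100.0) for t in range(0, 2000))
print(f"peak ~ {peak:.1f} kV, v(50 us) ~ {impulse(50.0):.1f} kV")
```
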

## **4.1 Segmented diverter strips**

Tests were also conducted on two commercially available segmented diverter strips of one meter length each. The diverter strips were attached to the aircraft radome surface for testing. It was found that the diverter with smaller buttons (segment diameter 1.524 mm) has higher breakdown voltage.

The segmented diverters had breakdown voltages of 50-60 kV, while the time to breakdown *t*br varied between 3 and 7 μs. This corresponds to leader velocities *v*l in the range 15 to 30 cm/μs, which is ten times higher than the leader velocities usually registered in long air gaps. The dependence of the breakdown voltage on polarity is weak if the rod-type high voltage electrode is placed close (~10-15 cm) to the strip end.
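The quoted velocity range follows directly from the strip length and the measured times to breakdown; the assumed reading here is that the leader traverses the full 1 m strip within *t*br:

```python
# Consistency check (assumption: the leader runs the full 1 m strip in the
# measured time to breakdown). v = L / t_br roughly reproduces the quoted
# 15-30 cm/us range.

STRIP_LENGTH_CM = 100.0          # 1 m segmented diverter strip

def leader_velocity(t_br_us: float, length_cm: float = STRIP_LENGTH_CM) -> float:
    """Average leader velocity in cm/us for a given time to breakdown."""
    return length_cm / t_br_us

for t_br in (3.0, 7.0):          # measured range of t_br
    print(f"t_br = {t_br} us -> v_l ~ {leader_velocity(t_br):.1f} cm/us")
```
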

Although the segmented strips have good diversion properties, tests have shown problems with multi-impulse lightning strikes. After a number of strikes, damage was observed on the strip buttons (Fig. 11). However, the resistance of the strips after the tests was still more than 600 MΩ. This indicates that the discharge current flows mainly not through the strip buttons but in the air over the strip.

Fig. 11. Segmented diverter strips after the test.

## **4.2 Isolated multiple-electrode diverter**

Isolated rings or disks with diameters of 17 mm and 20 mm and a separation of 5-15 cm were mounted on the radome surface with the help of dielectric tape. These types of strips have several advantages: (a) they have negligible interference effects on antenna radiation due to the small total surface of the metal elements, and (b) they do not initiate a leader discharge before an approaching lightning leader streamer zone attaches to the radome surface. Both positive and negative polarity impulses of amplitude 200-250 kV were applied to gaps of 10-20 cm between the electrode and the radome tip (Fig. 12 and Fig. 13).

Fig. 12. Influence of isolated disks on the trajectory of breakdown under positive impulse voltage in an air gap.

Fig. 13. Influence of isolated disks on the trajectory of breakdown under negative impulse voltage in an air gap.

Breakdown occurs between the electrode and the closest point on the radome surface, propagating further along the radome surface until the end of the solid strip, even if the air gap distance between the electrode and the end of the solid strip is shorter. This indicates that the breakdown voltage along the surface of the radome is lower than the breakdown voltage in an air gap. The time to the surface breakdown was *t*br ~ 30-50 μs, depending on the distance between the electrode and the end of the solid strip on the radome wall. This corresponds to a leader velocity *v*l ~ 2 cm/μs, which is the value usually recorded in long air gaps. In the case of the isolated multiple-electrode diverters, the time to breakdown decreases by a factor of 3-4, i.e. the leader velocity becomes *v*l ~ 6-8 cm/μs. The decrease of the breakdown time indicates that the discharge develops simultaneously in the gaps between the different isolated electrodes.
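The velocity bookkeeping in this paragraph is simple: cutting the time to breakdown by a factor of 3-4 over the same path multiplies the average leader velocity by the same factor:

```python
# Sketch of the velocity bookkeeping above: dividing the time to breakdown by
# the observed factor of 3-4 multiplies the average leader velocity by the
# same factor (2 cm/us -> 6-8 cm/us over the same path length).

BASE_VELOCITY_CM_PER_US = 2.0    # surface leader without isolated electrodes

def diverted_velocity(speedup: float) -> float:
    """Leader velocity when the time to breakdown drops by `speedup`."""
    return BASE_VELOCITY_CM_PER_US * speedup

print([diverted_velocity(k) for k in (3.0, 4.0)])   # -> [6.0, 8.0]
```
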

The light-up electric field was about 3.3 kV/cm, which is close to the typical light-up fields with the *D* waveform for the segmented strips (Hardwick et al., 1999).

Tests have shown that the isolated electrode strips divert the discharge channel of both polarities. Leaders develop along the surface without any damage to it. For the same applied voltage, the breakdown gap with the isolated multiple-electrode diverter strip can be twice as long as the gap without strip. The diversion ability of isolated multiple-electrode diverter strip is higher for negative polarity discharge than for positive discharge (Figs. 12, 13). This is due to different mechanisms of breakdown for negative and positive polarity discharges (Petrov & Waters, 1999).


#### **4.3 Flashover across the radome wall**

The discharge develops along the surface of the radome even if the air gap distance between the output electrode and the termination of the strips is shorter. This indicates that the breakdown voltage along the dielectric surface is lower than the breakdown voltage in air.

A leader discharge can be initiated from the internal radar antenna. In the model, the antenna was represented by a grounded metal hemisphere at the radome base. The leader channel from the antenna was modeled as a metal rod of different lengths connected with the antenna.

The laboratory experiments have shown that both positive and negative polarity discharges can cause a puncture through the radome wall when the internal electrode (antenna) extends beyond the strips and, hence, when it is no longer screened.

In Fig. 14, the flashover path can be seen initially propagating along the surface and then passing through the radome wall to the internal grounded electrode. The distance from the surface puncture point to the grounded outer electrode was only 7.5*cm*. This indicates that a voltage drop of less than 20 *kV* is sufficient to cause a flashover across the radome wall.
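A back-of-envelope check of this estimate: a drop of less than 20 kV over the 7.5 cm path corresponds to a mean gradient far below the roughly 30 kV/cm breakdown strength of a uniform air gap, which illustrates how easily the wall is punctured once the discharge reaches it:

```python
# Back-of-envelope check of the puncture estimate above: mean gradient along
# the 7.5 cm path for a <20 kV drop. (The ~30 kV/cm uniform-air breakdown
# strength mentioned in the lead-in is a standard reference value, not a
# figure from this chapter.)

VOLTAGE_DROP_KV = 20.0
PATH_CM = 7.5

gradient = VOLTAGE_DROP_KV / PATH_CM
print(f"mean gradient ~ {gradient:.2f} kV/cm")
```
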

Fig. 14. Puncture through the radome wall with an earth electrode inside the radome.

#### **4.4 Diverter strips with ZnO material**

Segmented strips consisting of *ZnO* material between *Al* segments of 3×3 mm size were designed and tested (Fig. 15). Experiments have shown that the influence of *ZnO* material on the discharge properties of strips depends on the distances between the segments. Although no significant influence was observed for gaps *d* > 10 mm, at *d* ~ 1-3 mm, the influence of *ZnO* material becomes significant. The competitive breakdown tests showed that all discharges pass through the strip consisting of *ZnO* material, which indicates that the electric fields created between the segments are sufficient for the ZnO material to become conductive. The breakdown time for these strips is comparable to that of commercial segmented strips. The velocity of leader propagation increases 4-5 times in comparison to the velocity of the surface leader discharge without the strips.


Fig. 15. Designed diverter strips with ZnO material.

## **5. 3D numerical computation of electric field around radomes**

The electric field and potential distributions inside and outside the aircraft radome placed in an external electric field were analyzed using the COULOMB software, which is based on the boundary element method. The results of this analysis were used to determine the number and length of the strips needed to provide the radome with optimised lightning protection.

A simulation model of the aircraft radome having a hemispherical shape placed in a uniform ambient electric field was used in a plane-plane gap (Fig. 16). The gap length is 5.2 m and the applied voltage is 2 MV. The dielectric hemispherical radome is placed on top of a metal cylinder of 1.5 m length to simulate the end of the fuselage. The hemispherical radome has a radius of 0.5 m, a thickness of 4 mm and a dielectric constant ε*r* = 10. Solid strips of 1 cm width and 3 mm thickness were considered. The segmented strips have a diameter of 5 mm, a thickness of 3 mm and a gap distance of 1 mm. The distance between the radome tip and the upper electrode is 2 m. The distance between the bottom of the cylinder and the bottom electrode is 1.2 m.
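The model parameters above can be collected in one place; the `RadomeModel` container and its field names are an illustrative convenience, not part of the COULOMB input format. The nominal ambient field implied by the plane-plane gap is simply the applied voltage divided by the gap length:

```python
from dataclasses import dataclass

# Illustrative container for the simulation geometry described in the text.
# The class and field names are assumptions for this sketch, not COULOMB inputs.

@dataclass
class RadomeModel:
    gap_m: float = 5.2            # plane-plane gap length
    voltage_mv: float = 2.0       # applied voltage, MV
    radome_radius_m: float = 0.5
    wall_mm: float = 4.0
    eps_r: float = 10.0
    cylinder_m: float = 1.5       # fuselage stub under the radome

    def ambient_field_kv_per_m(self) -> float:
        """Nominal uniform field of the plane-plane gap, kV/m."""
        return self.voltage_mv * 1000.0 / self.gap_m

m = RadomeModel()
print(f"ambient field ~ {m.ambient_field_kv_per_m():.0f} kV/m")
```
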

Fig. 17 shows the solid and segmented strips attached to the radome surface. Fig. 18 shows examples of computed voltage contours.


| Number of strips | 4 | | 6 | | 8 | |
|---|---|---|---|---|---|---|
| Length, m | 0.25 | 0.5 | 0.25 | 0.5 | 0.25 | 0.5 |
| *E*(A), kV/m | 482 | 335 | 456 | 270 | 435 | 226 |
| *E*(B), kV/m | 493 | 534 | 515 | 588 | 524 | 570 |

Table 1. Electric field magnitudes at radome base (point A in Fig. 16) and radome tip (point B in Fig. 16) for different numbers of solid strips.

| Number of strips | 4 | | 6 | | 8 | |
|---|---|---|---|---|---|---|
| Length, m | 0.25 | 0.5 | 0.25 | 0.5 | 0.25 | 0.5 |
| *E*(A), kV/m | 517 | 524 | 526 | 508 | 551 | 515 |
| *E*(B), kV/m | 477 | 472 | 481 | 497 | 477 | 475 |

Table 2. Electric field magnitudes at radome base (point A in Fig. 16) and radome tip (point B in Fig. 16) for different numbers of segmented strips.

Fig. 16. Model representation: semi-spherical radome.

Fig. 17. Modeled solid (thickness: 3mm, width: 10 mm) and segmented (thickness: 3mm, radius: 2.5 mm, gap: 1 mm) strips.

Fig. 18. Voltage contour and section of a radome with a solid strip.

Tables 1 and 2 summarise the computed magnitudes of electric field at the radome base and tip. It can be observed that the shielding effect increases with the length of solid strips and the number of strips. The electric field at the base of the radome is only 50% of the external field if 6 solid strips of 0.5m length are used. Segmented strips do not produce any visible shielding effects.
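The roughly 50 % figure can be read off Tables 1 and 2. The assumption made here is that the segmented-strip base fields, which show no trend, can serve as an unshielded reference level:

```python
# Values in kV/m at the radome base (point A), as read from Tables 1 and 2.
# Assumption for this sketch: the mean of the segmented-strip base fields is a
# proxy for the unshielded level, since segmented strips produce no shielding.

solid_base = {   # (number of strips, length m) -> E(A)
    (4, 0.25): 482, (4, 0.5): 335,
    (6, 0.25): 456, (6, 0.5): 270,
    (8, 0.25): 435, (8, 0.5): 226,
}
segmented_base = [517, 524, 526, 508, 551, 515]

reference = sum(segmented_base) / len(segmented_base)
ratio = solid_base[(6, 0.5)] / reference
print(f"E(A) with 6 x 0.5 m solid strips: {ratio:.0%} of the unshielded level")
```
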


Detailed analysis has shown that an increase of the number of solid strips results in a decrease in the electric field at the base of the radome. On the other hand, the electric field was forced out to the frontal area of the radome, so that too strong shielding of the internal electrode can cause undesired field intensification at the radome front. This is a disadvantage with solid strips, in addition to their interference effect on the radiation field from the antenna. In the case of segmented strips, there is no shielding effect. This indicates that there will be no interference effect with the radiation field until the breakdown along the strip takes place, under which condition the strip behaves like a conductor.



### **6. Discussion**

The radome simulations described in this chapter show clearly that the critical electric field magnitude, which is necessary to originate leaders from the aircraft tip, decreases with the aircraft length. The magnitude of the critical electric field decreases from 100 kV/m to 40 kV/m as the aircraft length increases from 20 m to 100 m. These values are in good agreement with the in-flight measurements of the ambient fields inside storm clouds (Lalande et al., 1999).
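Only the two endpoint values are given in the text, so the linear interpolation between them in the sketch below is an assumption for illustration, not the model's actual dependence on aircraft length:

```python
# Hedged sketch: interpolate the leader-inception field between the two quoted
# endpoints (100 kV/m at 20 m, 40 kV/m at 100 m). The linear form is an
# assumption; the chapter gives only the endpoints.

def critical_field_kv_per_m(length_m: float) -> float:
    """Interpolated critical field for leader inception, clamped to the quoted range."""
    l0, e0, l1, e1 = 20.0, 100.0, 100.0, 40.0
    length_m = max(l0, min(l1, length_m))       # clamp to the quoted range
    return e0 + (e1 - e0) * (length_m - l0) / (l1 - l0)

print([round(critical_field_kv_per_m(l)) for l in (20, 60, 100)])  # -> [100, 70, 40]
```
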

Furthermore, the simulations demonstrated that the electric field inside the radome is not reduced significantly by the radome wall itself, which indicates that the radome does not produce screening effects. This shows that a leader can start from the internal electrode (radar antenna) causing flashover across the radome. Therefore, strips to produce the screening effect must be used to avoid the initiation of streamers from the antenna. The lightning strike to the radome does not damage the radome surface if discharges do not occur from the metal parts inside the radome. This points out that the main purpose of the protection system should be the screening (shielding) of the electric field inside the radome. Poor shielding permits the inception of a discharge from the internal electrode, so solid strips are usually used to produce the shielding effect. However, effective shielding has undesirable interference effects on antenna radiation. Therefore, the optimal length and number of the strips should be determined.

A significant shielding effect is created by water and ice layers on the radome surface. Under these conditions, the lightning leader can be initiated from the radome tip. Note that the dielectric constant values of ice depend on the frequency of the external field or the rate of voltage rise, and these values affect the electric field magnitude. For example, the values ε*ice* = 5 for 1000 kV/μs and ε*ice* = 70 for 10 kV/μs were used in (Hardwick et al., 2003). This work has shown that the ice layer does not screen the high frequency radiation associated with the radar.

In high ambient humidity conditions (>60 %), the radome becomes moderately conductive because of humidity absorption at its surface (Ulmann et al., 2001; Delannoy et al., 2001). Although this decreases the internal field due to the shielding effect, it also reduces the efficiency of the strips.

Numerical simulations have shown that the shielding effect is produced only by solid strips; there is no practical shielding by segmented strips in the absence of a discharge. It was demonstrated that the field intensification area is forced out from the metal electrode (antenna) surface to the front of the radome, thereby preventing discharge initiation from the antenna. However, too strong shielding of the antenna surface by increasing the number and the length of the strips can cause field intensification at the frontal area of the radome which can be sufficient to initiate the discharge. Hence, shielding the antenna surface as much as possible is not the best solution to the problem. It is necessary to optimize the electric field distribution with respect to the streamer and leader discharge initiation conditions.

Both the fast and slow waveforms (MIL STD 1757 Waveforms *A* and *D*, respectively) are used for testing radomes (Ulmann et al., 1999). Waveform *A* has a 1000 kV/μs rate of rise, and Waveform *D* has a 50-250 μs rise time. It was concluded (Ulmann et al., 1999) that Waveform *D* represents the in-flight environment more accurately than Waveform *A*. For aircraft intercepting approaching leaders, rates of rise of the electric field, *dU/dt*, of 10^8 to 10^10 V/m/s were estimated (Lalande et al., 1999) at the aircraft. If 1 MV/μs (Waveform *A*) is applied over a 1 m gap, this will give *dU/dt* ≈ 10^12 V/m/s. Hence, the slower voltage Waveform *D* tests might be more appropriate. In our tests, we have *dU/dt* ≈ *U*/τf/*L* ≈ 2.8×10^5 V / 2×10^-6 s / 0.7 m ≈ 2×10^11 V/m/s. However, the voltage rise time is important when the voltage is applied directly to the strip. If the high-voltage electrode is placed far from the strip, the breakdown process of the strip is determined by the field generated by the ionization front of the discharge, i.e. by the space charge of the streamers. The magnitude of this field is affected by the velocity of the streamer/leader ionization front, but not by the applied voltage waveform.
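The rate-of-rise estimates above reduce to one division; the sketch checks both the Waveform *A* figure over a 1 m gap and the laboratory-test estimate (U ~ 2.8×10^5 V, front time ~ 2 μs, gap 0.7 m):

```python
# Verification of the dU/dt estimates in the text: voltage divided by front
# time and by gap length gives the per-metre rate of rise.

def rate_of_rise(volts: float, front_s: float, gap_m: float) -> float:
    """Approximate dU/dt per unit gap length, in V/m/s."""
    return volts / front_s / gap_m

waveform_a = rate_of_rise(1e6, 1e-6, 1.0)     # 1 MV/us over a 1 m gap
lab_test = rate_of_rise(2.8e5, 2e-6, 0.7)     # the chapter's own tests
print(f"waveform A: {waveform_a:.1e} V/m/s, lab tests: {lab_test:.1e} V/m/s")
```
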

antenna) causing flashover across the radome. Therefore, strips producing a screening effect must be used to avoid the initiation of streamers from the antenna. A lightning strike to the radome does not damage the radome surface if discharges do not occur from the metal parts inside the radome. This points out that the main purpose of the protection system should be the screening (shielding) of the electric field inside the radome. Poor shielding permits the inception of a discharge from the internal electrode, so solid strips are usually used to produce the shielding effect. However, effective shielding has undesirable interference effects on antenna radiation. Therefore, the optimal length and number of the strips should be determined.

A significant shielding effect is created by water and ice layers on the radome surface. Under these conditions, the lightning leader can be initiated from the radome tip. Note that the dielectric constant of ice depends on the frequency of the external field or the rate of voltage rise, and these values affect the electric field magnitude. For example, the values ε*ice* = 70 for 10 *kV/μs* and ε*ice* = 5 for 1000 *kV/μs* were used in (Hardwick et al., 2003). This work has shown that the ice layer does not screen the high frequency radiation associated with the radar.

In high ambient humidity conditions (>60%), the radome becomes moderately conductive because of humidity absorption at its surface (Ulmann et al., 2001; Delannoy et al., 2001). Although this decreases the internal field due to the shielding effect, it also reduces the efficiency of the strips.

Numerical simulations have shown that the shielding effect is produced only by solid strips; there is no practical shielding by segmented strips in the absence of a discharge. It was demonstrated that the field intensification area is forced out from the metal electrode (antenna) surface to the front of the radome, thereby preventing discharge initiation from the antenna. However, too strong a shielding of the antenna surface, obtained by increasing the number and length of the strips, can cause field intensification at the frontal area of the radome sufficient to initiate a discharge. Hence, shielding the antenna surface as much as possible is not the best solution to the problem. It is necessary to optimize the electric field distribution with respect to the streamer and leader discharge initiation conditions.
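The screening by an ice or water layer can be illustrated with the standard electrostatic boundary condition (continuity of the normal component of **D** across the air/dielectric interface), which gives *E*inside = *E*outside/ε*r* for the normal field. This is a simplified sketch, not the chapter's numerical model; the ε values are those quoted above, and the 100 *kV/m* ambient field is an illustrative assumption:

```python
# Continuity of the normal component of D across an air/dielectric
# boundary: E_inside = E_outside / eps_r for the normal field component.

def normal_field_inside(E_outside: float, eps_r: float) -> float:
    """Quasi-static normal E-field just inside a dielectric layer (V/m)."""
    return E_outside / eps_r

E0 = 100e3  # assumed ambient field, V/m (~100 kV/m, thundercloud conditions)

# Slow rate of rise (eps_ice = 70) vs fast rate of rise (eps_ice = 5)
for eps_ice in (70.0, 5.0):
    print(f"eps = {eps_ice:4.0f}: E_inside = {normal_field_inside(E0, eps_ice):8.0f} V/m")
```

The slowly varying field (ε*ice* = 70) is reduced about 14-fold more than the fast one (ε*ice* = 5), consistent with the observation that the ice layer screens quasi-static fields but not the high frequency radar radiation.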

Besides direct strikes to the aircraft radome, the aircraft can also be subjected to indirect strikes. Lightning strike entrance and exit points are usually found at sharp structures of the aircraft, around which electric field enhancement takes place, but they can also occur at any part of the aircraft, including the fuselage, stabilisers, antennas, etc. Observations of such strikes were conducted in laboratory experiments with aircraft models (Chernov et al., 1992; Petrov et al., 1996). It is seen from Fig. 19 that the nose radome can also be an exit point of a lightning strike, depending on the aircraft position with respect to the approaching lightning threat.

It is worth highlighting here that the lightning diverter strip concept could be adapted for the protection of ground antennas for ultra-high-frequency communications, which are difficult to protect from direct lightning strikes because interference with the radiation field arises when standard air-terminal shielding is installed (Bruel et al., 2004).

Fig. 19. Laboratory testing of lightning strikes to an aircraft model.

## **7. Conclusion**

Theoretical analysis and numerical simulations together with experimental laboratory tests of lightning discharge interaction with aircraft radome demonstrated the applicability of existing lightning attachment models to create optimal protection systems against lightning strikes.

Study of Effects of Lightning Strikes to an Aircraft 543

The following points can be concluded from the analysis:

i. Electric field intensification by aircraft flying at high altitudes exceeds the threshold to initiate the lightning leader (50-100 *kV/m*); this explains why about 90% of lightning strikes to aircraft are initiated by the aircraft.

ii. The shielding effect of the dielectric radome material itself is less than 10%, so the lightning leader can be initiated from the radar antenna.

iii. The penetration of the electric field, created by the lightning channel or storm-cloud, into the radome is significantly decreased by ice and/or water layers on the radome surface; however, this may also cause the occurrence of punctures.

iv. A strong diversion effect for strips comprising isolated metal disks or rings is observed for positive as well as for negative polarity discharges; this type of diverter strip can be used together with solid strips in order to decrease the interference effect on antenna radiation.

v. Numerical simulations have shown strong radar shielding effects produced by solid strips and no practical shielding by segmented strips in the absence of a discharge.

## **8. Acknowledgment**

N.I.P. and G.N.P. thank colleagues of the High Voltage Group of the Cardiff School of Engineering for hospitality while they worked as guests in their laboratory.

## **9. References**

Amason, M.; et al. (1975). Aircraft application of segmented-strip lightning protection systems, *Proceedings of Conf. on Lightning and Static Electricity*, pp. 1-14, London, UK, 1975

Bruel, C.; Barilleau, D. & Rousseau, A. (2004). Application of aircraft lightning protection to radar stations, *Proceedings of 27th Int. Conf. on Lightning Protection*, pp. 975-977, Avignon, France, September 13-16, 2004

Chernov, E.; Lupeiko, A. & Petrov, N. (1992). Repulsion effect in orientation of lightning discharge. *J. de Phys. III*, Vol. 2, (July 1992), pp. 1359-1365

Clifford, D. & Casemir, H. (1982). Triggered lightning. *IEEE Trans. Electromagnetic Compatibility*, Vol. 21, (January 1982), pp. 112-122, ISSN 0018-9375

Cooray, V.; Rakov, V. & Theethayi, N. (2004). The relationship between the leader charge and the return stroke current – Berger's data revisited, *Proceedings of 27th Int. Conf. on Lightning Protection*, pp. 145-150, Avignon, France, September 13-16, 2004

Delannoy, A.; Bondiou-Clergerie, A.; Lalande, P.; et al. (2001). New investigations of the mechanisms of lightning strike to radomes Part II: Modeling of the protection efficiency, *Proceedings of Int. Conf. on Lightning and Static Electricity*, paper No 2001-01-2884, Seattle, USA, September 11-13, 2001

Handbook of Chemistry and Physics. (2001). CRC Press, ISBN 0849304822

Hardwick, C.; Hawkins, K. & Sanders, M. (2003). Effect of water and icing on segmented diverter strip performance, *Proceedings of ICOLSE'03*, pp. 80.1-80.8, ISBN 1857681525, 9781857681529, Blackpool, UK, September 16-18, 2003

Hardwick, J.; Plumer, A. & Ulmann, A. (1999). Review of the joint radome programme, *Proceedings of ICOLSE'99*, pp. 59-65, ISBN 0768003938, Toulouse, France, June 22-24, 1999

Lalande, P.; Bondiou-Clergerie, A. & Laroche, P. (1999). Analysis of available in-flight measurements of lightning strikes to aircraft, *Proceedings of ICOLSE'99*, pp. 401-408, Toulouse, France, June 22-24, 1999

Larsson, A. (2002). The interaction between a lightning flash and an aircraft in flight. *C.R. Physique*, Vol. 3, (December 2002), pp. 1423-1444

Marshall, T. & Rust, W. (1991). Electric field soundings through thunderstorms. *J. Geophys. Res.*, Vol. 96(22), (December 1991), pp. 297-306, ISSN 0148-0227

Mazur, V. (1989). Triggered lightning strikes to aircraft and natural intracloud discharges. *J. Geophys. Res.*, Vol. 94, (March 1989), pp. 3311-3325, ISSN 0148-0227

Mazur, V. (1989). A physical model of lightning initiation on aircraft in thunderstorms. *J. Geophys. Res.*, Vol. 94, (March 1989), pp. 3326-3340, ISSN 0148-0227

Mazur, V.; Fisher, B. & Brown, P. (1990). Multistroke cloud-to-ground strike to the NASA F-106B airplane, *J. Geophysical Research*, Vol. 95, No. D5, (May 1990), pp. 5471-5484, ISSN 0148-0227

Mazur, V. & Moreau, J. (1992). Aircraft-triggered lightning: processes following strike initiation that affect aircraft, *J. Aircr.*, Vol. 29, (August 1992), pp. 575-580, ISSN 0021-8669

Petrov, N. & Waters, R. (1994). Conductor height and altitude: effect on striking distance, *Proc. Int. Conf. Lightning and Mountains*, pp. 52-57, SEE, Chamonix-Mont-Blanc, June 6-9, 1994

Petrov, N. & Waters, R. (1995). Determination of the striking distance of lightning to earthed structures, *Proc. R. Soc. Lond. A*, Vol. 450, No. 1940, (September 1995), pp. 589-601, ISSN 1471-2946

Petrov, N.; Avansky, V.; Efimova, N. & Petrova, G. (1996). Experimental and theoretical investigations of the orientation of leader discharge to isolated and earthed objects, *Proceedings of 23rd Int. Conf. on Lightning Protection*, pp. 254-259, Vol. 1, Firenze, Italy, September 23-27, 1996

Petrov, N. & Waters, R. (1999). Striking distance of lightning to earthed structures: effect of stroke polarity, *Proc. 11th Int. Symp. on High Voltage Engineering*, pp. 220-223, Vol. 2, London, UK, August 23-27, 1999

Petrov, N. & D'Alessandro, F. (2002). Theoretical analysis of the processes involved in lightning attachment to earthed structures, *J. Phys. D: Appl. Phys.*, Vol. 35, No. 14, (July 2002), pp. 1788-1795, ISSN 0022-3727

Plumer, J. & Hoots, L. (1978). Lightning protection with segmented diverters, *Proceedings of IEEE Int. Symp. Electromagnetic Compatibility*, pp. 196-203, 1978

Ulmann, A.; Hardwick, J. & Plumer, A. (1999). Laboratory Reproduction of In-Flight Failures of Radomes, *Proceedings of ICOLSE'99*, pp. 493-496, ISBN 0768003938, Toulouse, France, June 22-24, 1999

Ulmann, A.; Brechet, P.; Bondiou-Clergerie, A.; et al. (2001). New investigations of the mechanisms of lightning strike to radomes Part I: Experimental study in high voltage laboratory, *Proceedings of Int. Conf. on Lightning and Static Electricity*, paper No 2001-01-2883, Seattle, USA, September 11-13, 2001

Uman, M. & Rakov, V. (2003). The interaction of lightning with airborne vehicles. *Progress in Aerospace Sciences*, Vol. 39, No. 1, (January 2003), pp. 61-81, ISSN 0376-0421

## *Edited by Ramesh K. Agarwal*

The book describes the state of the art and the latest advancements in technologies for various areas of aircraft systems. In particular, it covers a wide variety of topics in aircraft structures and advanced materials, control systems, electrical systems, inspection and maintenance, avionics and radar, and some miscellaneous topics such as green aviation. The authors are leading experts in their fields. Both researchers and students should find the material useful in their work.

Photo by XavierMarchant / iStock

Recent Advances in Aircraft Technology
