**4. Extending the proposition: testing data for predictive maintenance**

This section extends a method for managing the iteration of design and testing during the product development stage [3] to predictive maintenance during the product use phase. First, the previous work will be described briefly, then how the work can be extended for the purpose of the predictive maintenance will be explained.

In an iterative design and testing process, testing results usually drive the subsequent re(design) activities. A control system analogy can be used to describe an iterative design and testing process. A control system monitors, compares and adjusts at a sequence of time points. A monitoring device makes a measurement, and reports it to the comparator, which compares it with the pre-determined desired value. A decision rule uses the result from the comparator to adjust an effector. Similarly, in a test, actual measurements of a parameter are taken and compared with pre-determined values identified in design analysis to identify if the design is satisfactory.
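The monitor-compare-adjust loop described above can be sketched in a few lines of code. This is an illustrative sketch only: the readings, desired value, tolerance and adjustment rule are hypothetical assumptions, not taken from the chapter.

```python
# A minimal sketch of the control-system analogy: a monitoring device reports
# a measurement, a comparator computes the deviation from the desired value,
# and a decision rule tells the effector what to do.

def control_step(measurement: float, desired: float, tolerance: float) -> str:
    """Comparator plus decision rule: report the adjustment for the effector."""
    deviation = measurement - desired
    if abs(deviation) <= tolerance:
        return "hold"            # within tolerance: no adjustment needed
    return "adjust down" if deviation > 0 else "adjust up"

# One monitoring cycle per time point, as in an iterative design/test loop.
readings = [10.2, 10.8, 9.1]     # hypothetical monitored values
actions = [control_step(r, desired=10.0, tolerance=0.5) for r in readings]
print(actions)                   # ['hold', 'adjust down', 'adjust up']
```

In the testing analogy, `desired` corresponds to the pre-determined value identified in design analysis, and the returned action corresponds to the decision whether redesign is needed.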

Testing and PLM: Connecting Process and Product Models in Product Development. http://dx.doi.org/10.5772/intechopen.80364

During a lengthy durability test, for example a "Deterioration Factor" test, intermediate test measurements are taken at a sequence of time points between start t<sub>s</sub> and finish t<sub>f</sub> (t<sub>s</sub>, t<sub>1</sub>, t<sub>2</sub>, …, t<sub>n</sub>, …, t<sub>f</sub>), as in **Figure 7**. Engineers know that the performance of an engine will change over time, and they allow an acceptable margin for each time point. This is illustrated in **Figure 8** with a range of expected values specified by design and CAE prior to the test. Engineers will know how much they expect the product to deteriorate after, say, 200 or 500 hours of running the test. If the product deteriorates below an allowable limit, or margin, at that time, then it is deemed under-designed. If an engine performs above the margin, then it is assumed to be over-designed. Therefore, if the engine produces any value below or above the expected values (including margins), these deviations are not acceptable (see **Figure 8**) and indicate that redesign is required. 'Deviation' is the difference between the expected value of a parameter and an actual measurement of that parameter at the time of an assessment (e.g. a test).
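The margin check above can be sketched as a small classifier: at each scheduled time point the expected performance is a nominal value plus or minus a margin, and a measurement outside that band signals under- or over-design. All numbers here are hypothetical assumptions for illustration.

```python
# Hedged sketch of the margin check: classify a measurement against the
# expected value and allowable margin at a given test time point.

def classify(measured: float, expected: float, margin: float) -> str:
    if measured < expected - margin:
        return "under-designed"   # deteriorated below the allowable limit
    if measured > expected + margin:
        return "over-designed"    # performs above the expected band
    return "acceptable"

# Hypothetical expected engine performance after 0, 200 and 500 hours,
# each with its own margin, plus hypothetical test measurements.
schedule = [(0, 100.0, 2.0), (200, 96.0, 2.0), (500, 90.0, 3.0)]
measurements = {0: 99.5, 200: 93.0, 500: 91.5}

for hours, expected, margin in schedule:
    print(hours, classify(measurements[hours], expected, margin))
# 0 acceptable
# 200 under-designed
# 500 acceptable
```

Here the 200-hour measurement falls below the allowable band, which under the scheme above would indicate under-design and trigger redesign.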

**Figure 9** shows a schematic which presents a simplified case of **Figure 8**, in which the expected value is a single value rather than a range. In practice this might be the mean of the distribution of expected values; it is represented as the upper straight line (in red). The lower line (in green) represents the measured values. A physical test starts at t<sub>s</sub> and finishes at t<sub>f</sub>. Since the design meets the specification based on the best knowledge available at t<sub>s</sub> (or rather, there is no information to indicate that it does not), the red and green lines meet at t<sub>s</sub>. During the testing process, test measurements are taken and the actual value of a parameter at any point is identified.

**Figure 8.** A schematic of expected and measured value and associated deviations at different times during a test.

**Figure 9.** A simplified model of deviations between expected and measured values during a test.

Product Lifecycle Management - Terminology and Applications

Deviation, at a time point, is identified as the difference between test measurements and the expected value. The magnitude of the deviation is shown with a double-headed arrow in **Figure 9**, which depicts a case of under-design, with measured product performance gradually degrading and the deviation increasing monotonically. This considerable simplification is an assumption of the model developed here. The sloping line represents the evolution of test results over time, which tends to show increasing deviation of the design from expected performance. In practice, the deviation does not vary linearly.

The difference between test measurements at different times can reveal the 'degree of evolution' [12], i.e. how fast the deviation is changing as it approaches its final value at t<sub>f</sub>. Details can be found in [12].
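The 'amount of deviation' and 'degree of evolution' can be sketched as simple series computations: deviation is expected minus measured at each time point, and the degree of evolution is read here as the change in deviation between consecutive time points. The numbers are hypothetical, and [12] may define the terms more precisely.

```python
# Sketch of deviation and its evolution over a sequence of time points
# t_1 .. t_f, under the simplifying assumptions stated in the text.

expected = [100.0, 98.0, 96.0, 94.0]   # design/CAE expectation at each point
measured = [100.0, 96.5, 93.5, 90.0]   # test measurements at each point

# Amount of deviation at each time point (expected minus measured):
deviation = [e - m for e, m in zip(expected, measured)]
print(deviation)                       # [0.0, 1.5, 2.5, 4.0]

# Degree of evolution: change in deviation between consecutive points,
# i.e. how fast the deviation is approaching its final value at t_f.
evolution = [b - a for a, b in zip(deviation, deviation[1:])]
print(evolution)                       # [1.5, 1.0, 1.5]
```

The monotonically increasing deviation series mirrors the under-design case depicted in the schematic: the measured line falls progressively further below the expected line.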

A similar proposition can be used for predictive maintenance. The design stage identifies the expected product performance in use, i.e. a range of expected values of a parameter can be specified by design, CAE and tests during the development stages. The product's health measurements are then taken at a sequence of time points between start t<sub>s</sub> and finish t<sub>f</sub> (t<sub>s</sub>, t<sub>1</sub>, t<sub>2</sub>, …, t<sub>n</sub>, …, t<sub>f</sub>), as in **Figure 10**. Using the approach explained above, the 'amount of deviation' and the 'degree of evolution' can be identified.

**Figure 10.** Comparison of CAE and test data with field data to identify product's performance level.

Once these two factors are identified, i.e. once how fast and by how much a product degrades within a time interval can be determined, an effective maintenance plan can be made.
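One hedged sketch of how the two factors could feed a maintenance plan: given the current amount of deviation and its rate of change, linearly extrapolate when the deviation will cross the allowable limit and schedule maintenance before that time. The linear trend and all numbers are simplifying assumptions, not the chapter's prescribed method.

```python
# Estimate remaining service hours before the deviation of a monitored
# parameter reaches its allowable limit, assuming a linear degradation trend.

def hours_until_limit(deviation_now: float, rate_per_hour: float,
                      limit: float) -> float:
    """Hours remaining before deviation reaches the allowable limit."""
    if rate_per_hour <= 0:
        return float("inf")      # not degrading: no maintenance trigger
    return (limit - deviation_now) / rate_per_hour

# Hypothetical figures: current deviation 2.5 units, growing 0.01 per hour,
# allowable limit 5.0 units.
remaining = hours_until_limit(deviation_now=2.5, rate_per_hour=0.01, limit=5.0)
print(remaining)                 # 250.0 hours until the limit is reached
```

A maintenance plan would then book an intervention some safety margin before the estimated 250 hours elapse.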

Across these processes for maintenance, refit and retrofit, the aggregated benefits of combining physical test, simulation and use data can be considerable. This can reduce the time to introduce revised maintenance schedules and to design and fit new technologies, as well as reduce costs to manufacturers and users. Taken together, the benefits across the product lifecycle accumulate and make the argument for PLM systems to provide consistent and up-to-date information flows in supporting these processes.

In extending the model of overlapping test and design, using convergence between data sources, to these processes in the product lifecycle, several additional descriptions arise in the PLM product model. These are driven by the necessity to manage the revised product lifecycle processes which arise from the new data and new information flows, particularly in use and service data.

New process models and new product models develop hand in hand. This section has considered how product development and support through the lifecycle combine test, simulation and use data. General issues affecting PLM product models include how to compare field data with simulation and test data, the potential effects on information flows in the process models, and the application of field data from one phase of a product to the development process for next-generation products, where fundamental analysis of the configuration and architecture of a product is undertaken over and above retrofitting new components and new technologies to existing products.

Comparing field data with physical test data is not straightforward. The case study company usually uses accelerated testing methods, in which tests are conducted under peak harshness and tougher conditions for a reasonably short period of time. Most accelerated testing verifies that the product will perform reliably during its useful life, until it starts to wear out. Physical test results might therefore not be readily comparable with field data, as the use conditions, the load cycle and the sensor loading locations could differ. CAE analysis and virtual testing can play an important role in comparing these test and field data: CAE analysis can model and control these conditions and can focus on individual parameters. The information from CAE analysis can be disaggregated into cycles, for example, and parameters can be analysed individually if required to support decision making. Analysis of these three data sources, i.e. CAE analysis, physical test and field data, could provide useful information for predictive maintenance, such as analysis of why and how a product might fail. This may also help to record and capture field data in a form appropriate for use by the design engineers for the next generation of the product.

A potential implication for PLM systems of the integration of design, test and field data lies in making information available in preliminary form to be used by PLM for dependent activities. This effectively overlaps activities that were previously linearly sequenced and reduces times and costs for customers and suppliers. However, such integration comes with a significant overhead: an increased number of cycles of revisions to the PLM descriptions is entailed, as some preliminary information, although sufficient to start subsequent activities, may not be enough to finish them, especially when on-site assurance and regulatory confirmation are necessary before customer use.
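One practical obstacle to comparing field data with test data is that the two are rarely sampled at the same time points. A minimal sketch of the alignment step is linear interpolation of one series onto the other's time base; a real comparison would also have to account for the differing load cycles and sensor locations noted above. All data here are hypothetical.

```python
# Align field sampling times onto a physical-test measurement series by
# linear interpolation, so the two data sources can be compared point-wise.

def interpolate(times, values, t):
    """Linearly interpolate a measurement series at time t."""
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside measured range")

test_t, test_v = [0, 100, 200], [100.0, 97.0, 93.0]   # hypothetical test data
field_t = [50, 150]                                   # field sampling times

aligned = [interpolate(test_t, test_v, t) for t in field_t]
print(aligned)                    # [98.5, 95.0]
```

The interpolated test values can then be compared directly with the field measurements taken at those times, yielding a deviation series of the kind used earlier in this section.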
