**2.9. Design of DIVOM and OSE**

The increased complexity of Equipment Types 3 and 4 requires the Concurrent Engineering approach inherent in the System Engineering process [12] to minimize costly mistakes late in the EPP. In the same way that products must be *Designed for Manufacturing and Assembly* (DFMA) [13], I4-EPPs must be designed, and not simply left to chance, if they are to succeed.

By applying Quality Function Deployment (QFD) [14] to the I4-EPP, this study established both the high-level and detailed functional requirements needed to deliver the customer requirements of data-driven acceptance testing, excellent regulatory compliance, high OEE in production, and a fast OEE ramp. The House of Quality (HoQ) structure at the core of QFD is widely accepted by product designers, but it is a complex format and requires considerable effort to reach an acceptable level of familiarity. This study therefore did not consider the HoQ a suitable method of communicating the status of requirements to the various disciplines of an I4-EPP team. Instead, it created an optimized format by taking the outputs of the QFD process and grouping the design requirements by department as follows:

*The Project Manager is responsible for the Design of the I4-EPP to ensure that the engineers utilize the appropriate level of Integration to meet the specified needs of the Validation, Operations and Maintenance customers. There we have it: DIVOM for I4-EPPs, just as we have DFMA for products.*

DIVOM enables each Department to focus on the design requirements of the equipment that are relevant from its own departmental perspective, by defining clear boundaries of responsibility while simultaneously maintaining a concurrent, cross-Department focus on the complete design requirements and I4-EPP objectives.

The design of the I4-PS and I4-ES scorecards is appealing to users because they do not: (1) have too many choices, (2) require too much thought, or (3) suffer from a lack of clarity. These three factors are extremely important because they all increase *cognitive load* [15] on short-term memory, which is only capable of holding *Seven, Plus or Minus Two Objects* [16]. The DIVOM process, which is based upon thousands of requirements, had to be broken down into several stages to achieve similar levels of *simplicity*, while also remaining capable of rolling up into an overall Key Performance Indicator (KPI) which can be easily explained and understood.

An analysis of the OEE metric concludes that it is the *"understandability"* of the metric [17, 18], as opposed to its *numerical accuracy* [19, 20], that has enabled it to gain widespread acceptance. The key technical metric in the DIVOM benchmarking process is the Integration metric, which focuses primarily on the cyber systems as opposed to the physical equipment. This led to the suggestion of the Overall Systems Effectiveness (OSE) metric, which was defined as follows:

OSE = Design × Average (Integration, Validation, Operation, Maintenance). (1)

This formula for the OSE proposes that the Design of the I4-EPP is the enabler for achieving the highest level of OSE and, as such, must have the single biggest influence. Each of the other metrics is of equal importance but has less overall impact than the Design metric. This research is not suggesting that such a simplistic formula can accurately represent every situation, and it will undoubtedly require future refinement. In its current form, however, it leverages the lessons learned from OEE and is sufficient to provide a quantifiable benchmark metric that is fit for purpose.
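As a minimal illustration, Eq. (1) can be written as a short function. The scores used below are hypothetical examples on the 00–10 Attribute scale, not values taken from the study:

```python
def ose_rating(design, integration, validation, operation, maintenance):
    """Eq. (1): OSE = Design * Average(Integration, Validation, Operation, Maintenance).

    All inputs are assumed to be scores on the 00-10 Attribute scale.
    """
    avg_ivom = (integration + validation + operation + maintenance) / 4
    return design * avg_ivom

# Design has the single biggest influence: halving Design halves the OSE,
# while the same IVOM average yields very different overall ratings.
print(ose_rating(8, 6, 7, 5, 6))  # 8 * 6.0 = 48.0
print(ose_rating(4, 6, 7, 5, 6))  # 4 * 6.0 = 24.0
```

The multiplicative Design term captures the claim that Design is the enabler: no amount of Integration, Validation, Operation or Maintenance effort can compensate for a poor Design score.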

A hierarchical structure and an executable application (The OSE Calculator) were developed to support the DIVOM process. The OSE Rating sits at the top of the hierarchy. It is composed of five metrics (D, I, V, O and M), each with three components, and each component consists of 10 Attributes. Attributes are in turn composed of a variable number of requirements, which are omitted from The OSE Calculator application to minimize the cognitive load on participants. It is important to note that omitting the requirements significantly increases the dependence on the facilitator.

The 10-Attribute scale is organized in order of achievement, from 00 (worst) to 10 (best in class). This approach was adopted to expedite user comprehension of the measurement process. The clarity of the Attributes is further augmented by adopting the standard color-coding convention of green = low risk, orange = medium risk and red = high risk, and by displaying the rating graphically (see **Figure 4**).

**Figure 4.** Design of the OSE calculator.
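The metric/component/Attribute hierarchy and the color convention can be sketched as follows. Note that the roll-up rules used here (a component scores the highest Attribute level achieved; a metric averages its three components) and the color cut-offs are illustrative assumptions, not definitions taken from The OSE Calculator:

```python
def component_score(attributes_achieved):
    """Score one component as the highest Attribute level (0-10) achieved."""
    return max(attributes_achieved, default=0)

def metric_score(components):
    """Score a metric (D, I, V, O or M) as the mean of its three components."""
    return sum(component_score(c) for c in components) / len(components)

def risk_color(score):
    """Map a 0-10 score onto the green/orange/red convention (assumed cut-offs)."""
    if score >= 7:
        return "green"   # low risk
    if score >= 4:
        return "orange"  # medium risk
    return "red"         # high risk

# Hypothetical Design metric: three components reaching Attributes 5, 3 and 4.
design = metric_score([{0, 1, 2, 3, 4, 5}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}])
print(design, risk_color(design))  # 4.0 orange
```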


A three-step process was utilized with The OSE Calculator to score the metrics for the calculation of the OSE Rating: (1) specify the *Validation, Operation* and *Maintenance* customer requirements, (2) determine the appropriate level of *Integration*, and (3) ensure that the *Design* of the EPP is correct. The metrics are evaluated by examining each component's Attributes in turn (from 00 to 10) to determine which Attributes will be achieved (see **Figure 4**).
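The Attribute walk-through described above can be sketched as a simple loop. Stopping at the first Attribute that will not be achieved is an assumption of this sketch, and `attribute_met` is a hypothetical callback standing in for the facilitated assessment:

```python
def evaluate_component(attribute_met):
    """Examine a component's Attributes in turn (00 -> 10) and return the
    highest level reached. attribute_met(level) -> bool is the team's verdict
    on whether the Attribute at that level will be achieved.
    """
    score = 0
    for level in range(1, 11):
        if not attribute_met(level):
            break  # sketch assumption: achievement is cumulative, stop here
        score = level
    return score

# Example: a component that meets every Attribute up to level 6.
print(evaluate_component(lambda level: level <= 6))  # 6
```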

Scorecards must be *Repeatable* (an inspector getting the same result when evaluating the same item more than once) and *Reproducible* (inspectors getting the same result when evaluating the same item) as gauges [21]. The first stage of validating the scorecards was conducted at the end of the first semester of the MEng in Mechatronics, University of Limerick, 2017. Eight students worked as a group and utilized the I4-PS and I4-ES to rate two pieces of equipment. The second stage of validating the scorecards was conducted at the end of the second semester. Four random students, who were members of the original team, were requested to utilize the I4-PS and I4-ES again to rate the same two pieces of equipment. The results were analyzed, and significant variation was observed. On the five-point scale of the scorecards, the Lower Control Limit (LCL) lay close to 0 across all metrics and equipment, while the Upper Control Limit (UCL) ranged between 3 and 5. Several factors, such as group dynamics versus individual scoring, new knowledge attained, knowledge forgotten or simply confusion, may have influenced these outcomes. Regardless of the root cause of the variability, these results highlight the fact that gauges which appeal to our desire not to increase our cognitive load [15] and are easy to memorize [16] in no way guarantee that they are accurate.

Industry 3.0 to Industry 4.0: Exploring the Transition http://dx.doi.org/10.5772/intechopen.80347 69

But all is not lost. A detailed review with the students revealed that they had significantly different interpretations of the iconography: the words simply were not descriptive enough and were open to interpretation (e.g. what does *"connected"* really mean?), while many were not *mutually exclusive*. Thus, it can be concluded that with further experiments the content of the scorecards can be optimized to minimize variability and increase accuracy to a point where the scorecard methods can generally be relied upon to achieve their *Function*.
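The control-limit analysis of the repeated student ratings can be sketched as below. The X-bar-style limits (mean ± 3 population standard deviations, clipped to the five-point scale) and the sample scores are assumptions made for illustration; the study does not specify its exact calculation:

```python
import statistics

def control_limits(scores, scale_max=5):
    """Illustrative control limits: mean +/- 3 * population stdev,
    clipped to the scorecard's rating scale [0, scale_max]."""
    mean = statistics.mean(scores)
    spread = 3 * statistics.pstdev(scores)
    return max(0, mean - spread), min(scale_max, mean + spread)

# Hypothetical repeated ratings of one metric by different students.
lcl, ucl = control_limits([1, 4, 2, 5, 3])
print(lcl, ucl)  # 0 5 -- LCL clipped at 0, UCL clipped at the top of the scale
```

With ratings this scattered, the limits span the whole scale, mirroring the variation the study observed (LCL near 0, UCL between 3 and 5).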

**3.2. The OSE calculator and DIVOM method**

The first stage of the validation of The OSE Calculator and DIVOM Method focused on four industrial EPPs from 2012 to 2016. During these EPPs the researcher performed a DIVOM assessment and facilitated OSE Optimization sessions, which evaluated how useful the participants found the overall tools and process. Informal interview and data capture techniques were utilized throughout these sessions.

The case studies clearly demonstrated that the DIVOM benchmarking process achieved its *Function* of delivering tangible business benefits in the form of a Data Driven FAT, increased OEE and improved regulatory compliance, but with two strict provisos: the Project Sponsor must be a Change Agent focused on Industry 4.0 (Case Studies 1 and 4). If the Project Sponsor is not empowered to enact change (Case Study 3) or is a diehard I3-EPP supporter (Case Study 2), then these methods are worthless and should not be utilized. Even though general awareness of I4 should have progressed since the recommendations were published [22], this work has uncovered underlying inhibiting factors which must be addressed.

Most specialists observed during these case studies were unwilling to gain an understanding of an Attribute which they felt was not part of their primary discipline. It appears they were intimidated by having to admit that they needed to learn about these Attributes. They were *"the teachers"*, not the *"students"*. They were extremely quick to disown these Attributes and assign them to other disciplines without personally gaining any knowledge. Even though it is outside the scope

Designing The OSE Rating and The OSE Calculator in this fashion significantly increased the potential to convey a large number of specialized requirements to a general audience in an extremely short time period, while providing five key metrics and an overall KPI to enable a Six Sigma approach to the EPP.

Even though the 5, 3, 10 format of Metric, Component and Attribute creates a uniformity that reduces cognitive load, it introduces a constraint which, although not immediately apparent, may be problematic in some situations: the Attributes in The OSE Calculator may not be applicable in every situation, so the facilitator is required to state this while navigating through the process. Another observation was that participants frequently wanted to "score high". When these two items are combined, participants frequently attempt to utilize various justifications to claim that critical Attributes are not applicable to them. In this scenario, strong leadership skills from the Facilitator are required.
