Therefore, it has to be considered that researching all desired functions will take time and consume money and resources. In most projects, a sword of Damocles hangs over the project team: there is never enough time, money, and resources. In this context, one often hears a contradiction in terms such as "We do not have the time for experiments". This interaction is visualized in Fig. 10.

To briefly illustrate, assume that the functions to be examined need too much time. These circumstances can only be compensated by moving the timeline or by tapping additional resources; both options will impact the budget. Thus, it is important to check at regular intervals whether any (planned) actions are still necessary and related to the results.

Fig. 10. Interaction between the main components of project management.

**3. Screening**

The challenge for machine operators is to complete the run-in process as quickly as possible, i.e. with a minimum number of experiments and a maximum ability to describe cause and effect. Operators should be sensitized to the fact that small adjustments to the machine setup can have a great impact on the quality of the injection-molded parts. A well-structured approach and high-quality data are therefore needed. To reach these goals, the method of Design of Experiments is introduced here on the basis of the software "Modde"<sup>5</sup>, because the further steps, "optimization" and "Design Space Estimation", are tools incorporated in that software.

<sup>5</sup> Modde is a software product of Umetrics, a company of MKS Instruments Inc.

In general, the software is used when there is a lack of knowledge about how cause and effect are related.

The use of Design of Experiments is an admission that the correlation between factors and effects could not be fully captured. This condition is depicted as a black box (Fig. 11). By varying the factors within and according to a structured design, a regression model can be derived, from which the effect of each factor can be calculated. Since the experiments, and thus the factors, are varied around an estimated optimal range, some of the results are, of course, likely to deviate from the optimal targets. Nevertheless, all experiments and their results are very important, because on their basis an entire image of the work-space can be mapped.

Fig. 11. Process Black-Box.

The screening process starts by extracting the most influential factors from the familiarization process. These factors will be used to start the Design of Experiments method. Not all factors must be examined with respect to variability: less important factors should be frozen at a level that still ensures good product quality, because the number of factors substantially affects the total number of experiments, whereas the number of evaluation criteria (responses) is of secondary importance. In the best case, the factors are quantitative, so simple geometric designs can be generated. It is more difficult when they are qualitative, e.g. "machine 1" or "machine 2"; such qualitative or attributive parameters increase the number of experiments because they hamper the generation of the design. Once the factors have been identified, it is necessary to assess their effect, i.e. the change exerted on the target variable when a factor is varied from its minimum to its maximum setting. Since all factors are changed simultaneously in a factorial design, this effect is difficult to estimate in advance.

It is therefore useful to debate and determine the factor variations within a group of experienced staff. Some factors, such as temperature or pressure profiles, are even trickier to formulate than the qualitative factors. Just as in the machine, the profiles can be programmed in the factorial experiment with a number of nodes (initial value + 9 nodes). The start and end values of the profile are known, and the process specifies a sloping curve (Fig. 12, 13). If the profiles were programmed with real numbers, the sloping profile would necessitate a great many extra programmed constraints. Such factor restrictions limit the choice of experimental models and greatly increase the number of necessary experiments. For this reason, a mathematical formulation of the profiles is recommended, which allows the restrictions to be dispensed with entirely. Thus, the pressure profile is calculated, for instance, from the given initial value and the maximum decrease in pressure (in bar) per node:

$$
\Delta p = \frac{\text{initial value}_{\max} - \text{end value}_{\min}}{\text{number of nodes} - 1} \tag{1}
$$

The following variation thus arises for each node (2):

$$
\text{node}(i+1) = \text{node}(i) - (\text{min. } 0,\ \text{max. } \Delta p) \tag{2}
$$
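As a concrete sketch of Eqs. (1)–(2), the following computes the maximum per-node drop and builds one profile. The numbers (800 bar start, 200 bar end, 10 nodes) are invented for illustration; in the design, the factor `delta_p` would be varied between 0 and the Eq. (1) maximum.

```python
# Sketch of Eqs. (1)-(2) with assumed example values.

def max_drop_per_node(initial_value_max, end_value_min, n_nodes):
    # Eq. (1): largest admissible constant pressure drop per node
    return (initial_value_max - end_value_min) / (n_nodes - 1)

def build_profile(initial_value, delta_p, n_nodes):
    # Eq. (2): node(i+1) = node(i) - delta_p, delta_p varied in [0, max]
    profile = [initial_value]
    for _ in range(n_nodes - 1):
        profile.append(profile[-1] - delta_p)
    return profile

# Hypothetical hold-pressure profile: 800 bar falling toward 200 bar over 10 nodes
dp_max = max_drop_per_node(800.0, 200.0, 10)
profile = build_profile(800.0, dp_max, 10)
```

Setting `delta_p` to the Eq. (1) maximum reproduces the steepest admissible profile; smaller values give flatter curves without any extra constraints.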

Another way to represent the profile is the use of a simple two-point linear form (3):

$$
\text{value of factor setting}(x) = m x + b_0 + \varepsilon \tag{3}
$$

where $m$ is the slope, cf. (4); $x$ is the node of the factor profile; $b_0$ is the bias; $\varepsilon$ is noise.


Fig. 12. Injection profile 2.

Fig. 13. Hold pressure 2.

For this, the initial value and the maximum slope (4) between "initial value<sub>max</sub>" and "end value<sub>min</sub>" are required. From these data, the constant increase/decrease per node can be described with two factors instead of several nodes (Fig. 14, 15): the start value and the constant amount of increase/decrease per node, both with a min./max. variation. Therefore, a constraint (5) needs to be defined so that decreasing or increasing by a larger constant amount, beginning from a varied start level, cannot exceed the final max./min. profile levels.

$$
\Delta p = \frac{\text{initial value}_{\max} - \text{end value}_{\min}}{2}, \qquad \Delta p = \text{const. for each node, note (5)} \tag{4}
$$

$$
\text{node}(i+1) = \text{node}(i) - \Delta p, \qquad \text{value}_{\min} \le \text{node}(i) \le \text{value}_{\max} \tag{5}
$$
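A sketch of the two-factor slope formulation with assumed numbers: only the start value and the constant per-node drop are varied, and the constraint simply checks that every node stays inside the allowed band.

```python
# Two-factor slope formulation (sketch, invented pressure values):
# vary only start_value and delta_p; the constraint keeps all nodes
# within [value_min, value_max].

def slope_profile(start_value, delta_p, n_nodes, value_min, value_max):
    profile = [start_value]
    for _ in range(n_nodes - 1):
        profile.append(profile[-1] - delta_p)  # node(i+1) = node(i) - delta_p
    # Constraint: reject factor settings whose profile leaves the band
    feasible = all(value_min <= node <= value_max for node in profile)
    return profile, feasible

# Feasible setting: 750 bar start, 50 bar/node over 10 nodes stays above 200 bar
_, ok = slope_profile(750.0, 50.0, 10, 200.0, 800.0)
# Infeasible setting: a larger drop undershoots the minimum profile level
_, bad = slope_profile(750.0, 70.0, 10, 200.0, 800.0)
```

In a worksheet, infeasible combinations of the two factors would be excluded by exactly this kind of check before the runs are executed.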

Fig. 14. Injection profile 2.

Fig. 15. Hold pressure 2.


Considering that the individual nodes in the first approach can be quantitatively described independently of each other, the experimental scope, and so the number of subsequent experiments, is greatly reduced by freezing these node factors at a promising level or by describing them with the second formulation (a constant amount per node) and its smaller variance space. The second (slope) formulation is also a good way to describe the constraints with a very limited number of experiments, although it is not as independent and individual as the first approach. In most circumstances, the more efficient second approach is recommended. The profile of the injection values can be formulated in the same manner.
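A back-of-envelope comparison makes the reduction concrete. Assuming nine profile nodes, as in the example above, and the 2<sup>k</sup> rule of thumb used later in this chapter:

```python
# Run-count comparison (sketch, assuming 9 profile nodes):
nodes = 9
runs_node_approach = 2 ** nodes   # one independent factor per node
runs_slope_approach = 2 ** 2      # only "start value" and "drop per node"
```

Even before fractional designs are considered, the slope formulation collapses nine profile factors into two, which is why it is recommended in most circumstances.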

#### **3.1 Responses and targets**

The target variables could be geometrical variables, various criteria pertaining to surface quality, as well as some measured process parameters. Because the model calculated later can only be as good as the quantified quality of the test runs, it is very beneficial if the response (target) values can be measured as quantitative numbers. If this is not possible, a qualitative ranking method should be discussed and implemented that contains at least 3 graduations, or better several more, to support a better predictive model. If new quality judgments need to be set up, one should ensure that the ranking is symmetrical, for instance "1" = too hard, "5" = optimum, "10" = too soft. Otherwise, if for instance "1" is optimal filling and "5" can mean either less-filled or over-filled, two sources of failure are mixed up, which degrades the predictability of the model; the mathematical reason is that the distribution will be skewed (Fig. 16). Also, special responses (such as results of product-life tests) tend to deliver Weibull-distributed (Fig. 17) or otherwise skewed data; to handle this, additional knowledge of the data, very sensitive factor settings and possibly a transformation of the response data are required in order to achieve good prediction models. For more detailed mathematical information see (From Wikipedia).

Fig. 16. Skew distribution (From Wikipedia).

Fig. 17. Weibull probability distribution (From Wikipedia).
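The skew effect of an asymmetric ranking can be demonstrated numerically. The ratings below are invented: one set deviates symmetrically around the optimum "5", the other folds both under- and over-filling onto the low end of the scale.

```python
# Sketch (invented ratings): a folded defect scale skews the distribution.

def skewness(xs):
    # Sample skewness: E[(x - mean)^3] / std^3
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / var ** 1.5

symmetric = [3, 4, 5, 5, 5, 6, 7]   # "5" = optimum, deviations on both sides
folded = [1, 1, 1, 2, 2, 3, 5]      # "1" = optimal filling, any defect pushes upward
```

The symmetric scale yields a skewness of zero, while the folded scale is clearly right-skewed, which is exactly the distortion the regression model later has to cope with.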

It should also be verified that the quality judgments are reproducible and have been measured with the same accuracy over the whole experimental space (Fig. 18). Before any experiments are performed, the capabilities of the test methods and test equipment need to be verified. The minimum of such tests should be a test of linearity and reproducibility, representative at the extreme factor settings within the experimental space. If the experimental procedure and the measurement tasks are carried out by more than one person, it should also be verified that the tasks are conducted equally well and reproducibly. Optionally, the participants of the experimental design are documented as block variables *(uncontrollable factors)*. In the ideal case, the person-dependent influence can later be ruled out in a hypothesis test; the influence of the employee should then appear as a very small bar in the "coefficient plot" diagram and therefore be assessed as "not significant".

Fig. 18. Response-factor interaction.

#### **3.2 Safeguarding experiment design space/worksheet**

The factorial design is the foundation upon which all further analyses are developed. Because the experimental scale is kept to a minimum, it is essential to measure the target variables of almost all experiments, as otherwise it is difficult to model reality from the results. The circumstances in which the experiments are performed should be as equal as possible *(same raw material, same machine, same operator and room conditions)*; otherwise, the occurring side effects will be represented as model noise or fuzzily modeled into the terms of the model. Because some of the possible factors need to be kept constant in order to keep the number of varied factors small, the remaining varied factors and their variation will always describe a reduced reality. Factors which cannot be controlled, like relative humidity or changes in raw material, should be documented as uncontrolled factors if they are potentially influential. Later on, if the disturbance variation affects the responses, the correlation can be calculated and interpreted to a certain degree. These calculations cannot establish cause and effect, but they can be analyzed as a trend in order to work out a plan of verification or compensation if necessary. The reason why the influence of uncontrolled factors cannot be ideally assessed is that they vary randomly and not as geometrically organized within the design (Fig. 19).

Fig. 19. Example of two factor design adjustment.
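The operator hypothesis test mentioned in Section 3.2 can be sketched with a Welch t statistic; the two measurement series below are hypothetical part dimensions, invented for illustration.

```python
# Sketch (invented data): do two operators measure the same response
# equally well? A |t| far below ~2 supports assessing the operator
# block variable as "not significant".
import math

def welch_t(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

operator_1 = [10.1, 10.3, 9.9, 10.2, 10.0]   # hypothetical dimensions (mm)
operator_2 = [10.0, 10.2, 10.1, 9.9, 10.3]
t = welch_t(operator_1, operator_2)
```

In practice the software performs this significance assessment via the coefficient plot; the sketch only shows the underlying idea.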

To specify the minimum and maximum of the target factor values, only a few preliminary tests integrated into the factorial design are needed; usually, two successful experiments are sufficient, in which all factors are set to their lowest or highest levels. This ensures that variation in the factors is still measurable in the outcome (responses); otherwise, the variation of individual factors would have to be reduced as displayed in Fig. 19, or extended in a modified factorial design. Once the min., max. and one center-point experiment have been performed, it should also be considered whether the full range of factor variation is necessary, because if the variations are too big, non-linear behavior of some factors will probably occur. In most cases, only a small linear area is of interest (Fig. 18). At the start, because of the lack of knowledge about the factors' effects, and especially because of reinforcing interactions, the factor ranges are often set sub-optimally; if this is the case, the factors' impact could be non-linear. Usually, factorial experiments commence with a large number of factors which are initially studied only for their linear influence.


This approach has the advantage of requiring just a few experiments to find out which factors influence the targets, and to what extent. It is also possible to obtain an initial indication of the suitability of the evaluation criteria and the measuring technology. Since the focus of the investigation is on linear correlations, experiments are not conducted on the interactions between the factors. Were all factors and their interactions to be investigated, the experimental effort would be much greater. A complete linear description of the factors is given by the following rule of thumb (6):

$$\text{Number of experiments } = 2^{\text{number of factors}} \tag{6}$$

within full factorial designs. It is recommended that designs with more than four factors be examined with advanced designs in order to keep the number of experiments small. A comparison of the linear designs is given in Tab. 3.


| No. of Factors | Full Factorial | Fractional Factorial |
|---|---|---|
| 2 | 4 | 4 |
| 3 | 8 | 4 |
| 4 | 16 | 8 |
| 5 | 32 | 16 |
| 6 | 64 | 16 |
| 7 | 128 | 16 |
| 8 | 256 | 16 |
| 9-16 | >512 | 32 |

Table 3. Summary of fractional factorial designs, excluding replicates (AB, 2009).
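Generating the 2<sup>k</sup> full factorial worksheet of Eq. (6) is mechanical; a minimal sketch with coded levels (-1 = min, +1 = max) and invented factor names:

```python
# Sketch: 2^k full factorial worksheet with coded levels.
# Factor names are invented for illustration.
import itertools

def full_factorial(factors):
    # One run per combination of low/high levels: 2**len(factors) rows
    return [dict(zip(factors, levels))
            for levels in itertools.product((-1, +1), repeat=len(factors))]

design = full_factorial(["melt_temp", "hold_pressure", "injection_speed"])
```

Each dictionary is one worksheet row; replicates and center points would be appended separately.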

Now one could say, "With this many experiments, I can do it without a plan/design." This is probably right, and therefore there are more efficient designs in Tab. 4. The difference is that, without statistics software, results cannot be visualized adequately, since most standard spreadsheet programs are limited to two-parameter diagrams. Besides, all the statistics have to be calculated by hand, which makes the approach highly susceptible to calculation errors<sup>6</sup>. In the case of evolutionary by-hand experiments, some time is always needed between the experiments to discuss and decide what has to be done next. In contrast, the structured Design of Experiments method enables more experimentation within shorter intervals. From the perspective of identical process conditions, the time-consuming by-hand process is also more error-prone and must be conducted more critically. Fig. 20 contrasts the "by-hand" and the "statistically structured" processes. Fig. 20 also highlights another benefit of the structured approach: after each set of experiments, the number of factors to be discussed can be reduced on the basis of the regression models. This further decreases the number of factors, and so the number of design experiments, by freezing the unimportant factors at promising levels.

<sup>6</sup> One option could also be to not calculate anything but just perform experiments. Performing no statistics at all will never enable experimenters to judge the quality of their process setup. Even more important, if the process is running insufficiently well, the room for improvement cannot be described, and the sense and definition of the project cannot be estimated.
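The "statistics by hand" the footnote warns about can at least be sketched: for an orthogonal coded 2² design, the least-squares regression coefficients reduce to simple averages of level times response. The response values below are invented.

```python
# Sketch (invented responses): coefficient estimation for a coded 2^2
# factorial. Because the design is orthogonal, least squares reduces
# to averages of level * response.

runs = [  # (x1, x2, response)
    (-1, -1, 50.0),
    (+1, -1, 58.0),
    (-1, +1, 46.0),
    (+1, +1, 54.0),
]

n = len(runs)
b0 = sum(y for _, _, y in runs) / n        # intercept = grand mean
b1 = sum(x1 * y for x1, _, y in runs) / n  # half-effect of factor 1
b2 = sum(x2 * y for _, x2, y in runs) / n  # half-effect of factor 2
```

These coefficients are exactly the bars a coefficient plot would display; doing this for many factors and responses by hand is where the calculation errors creep in.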


Fig. 20. Structured versus random experiments (AB, 2009).

The solid line in Fig. 21 visualizes that, within the structured Design of Experiments approach, the largest proportion of experiments is done at the beginning of the project. This ensures better and faster growth of process knowledge and a more robust production launch. In the other case (dotted line), many resources are needed for improvements after the production launch, blocking resources needed, for instance, for developing the next product or process generation.

Fig. 21. DoE versus COST (Kennedy, 2003).

Table 4 shows the number of runs (excluding replicates) and the alternative supported models. With six or more factors, for instance, "Rechtschaffner" designs (L. Eriksson) may constitute a viable alternative to the fractional factorial designs (for interaction models).


| Number of Factors | Factorial / fractional factorial design | Plackett-Burman design | Rechtschaffner design | L-design |
|---|---|---|---|---|
| 2 | 4 | 8 l | n/a | 9 l(q) |
| 3 | 8 | 8 l | 7 i | 9 l(q) |
| 4 | 16 | 8 l | 11 i | 9 l(q) |
| 5 | 16 | 8 l | 16 i | 18 l(q) |
| 6 | 16 l or 32 i | 8 l | 22 i | 18 l(q) |
| 7 | 16 l or 32 i | 8 l | 29 i | 18 l(q) |
| 8 | 16 l or 32 i | 12 l | 37 i | 18 l(q) |
| 9 | 32 l or 64 i | 12 l | 46 i | 27 l(q) |
| 10 | 32 l or 64 i | 12 l | 56 i | 27 l(q) |

Table 4. Summary of screening design families; l = linear; i = interaction; q = quadratic (AB, 2009).
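The Rechtschaffner column of Table 4 follows directly from the size of a saturated interaction model, N = 1 + k + k(k-1)/2; a one-line sketch cross-checking the table:

```python
# Rechtschaffner run counts: one run per term of the interaction model
# (intercept + k main effects + k*(k-1)/2 two-factor interactions).

def rechtschaffner_runs(k):
    return 1 + k + k * (k - 1) // 2

runs = {k: rechtschaffner_runs(k) for k in range(3, 11)}
```

The formula reproduces the 7, 11, 16, 22, 29, 37, 46, 56 sequence of the table, which is why these designs become attractive from six factors upward compared with 32- or 64-run fractional factorials.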

Because screening examines only the linear correlations, only the linear effects of all factors are plotted in a coefficient bar graph (Fig. 24). The next plot to study is the "N-residual probability plot" (Fig. 25).

Fig. 22. Replicate plot 5.

Fig. 23. Histogram 5.

Fig. 24. Coefficient plot 5.
