#### **3.3 Reproducibility**

So-called center-point experiments serve to evaluate the reproducibility of the measuring method and the process relative to the variation within the factorial experiment. In order for the influence of the respective factors to be determined, these are varied about a mean value, upward and downward in equal measure. The software "Modde" visualizes these relationships with the replicate plot (Fig. 22) and the 4th bar of the summary plot (Fig. 27, 28). If the center points are not close together within the replicate plot, the reason must be analyzed.

Deviating center points could, for instance, be caused by missing factors, incapable measurement equipment or measurement methods, different machine operators, different machines, different batches of raw material and much more. It must also be assumed that the results of the other experiments fluctuate in the same way as the center-point experiments; otherwise the quality of the prediction model would be weak.
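As a minimal sketch of this reproducibility check, the spread of the center-point replicates can be compared to the spread of the whole experiment. All numbers below are hypothetical; the 50 % rule of thumb is an illustrative threshold, not a Modde setting.

```python
# Sketch: comparing center-point scatter to overall scatter (hypothetical data).
# If the center points vary almost as much as the full experiment, the
# measurement system or process is not reproducible enough for modeling.
import statistics

# hypothetical response values (e.g. a part dimension in mm)
factorial_runs = [10.2, 11.8, 9.7, 12.1, 10.9, 11.4, 9.9, 12.3]
center_points  = [11.0, 11.1, 10.9]

sd_center = statistics.stdev(center_points)
sd_total  = statistics.stdev(factorial_runs + center_points)

# rule of thumb: center-point spread should be small relative to total spread
print(f"center-point SD = {sd_center:.3f}, total SD = {sd_total:.3f}")
print("reproducible" if sd_center < 0.5 * sd_total else "investigate causes")
```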

#### **3.4 Design**

A common geometric factorial design is chosen to create the following exemplary, but actually performed, experiment (Moser & Madl, 08/2009). This fractional factorial design entails far fewer experiments than a full factorial design. For comparison: were a full two-level factorial design applied to this factorial experiment, 2<sup>27</sup> = 134,217,728 experiments would be needed. Although other designs, such as "Plackett-Burman" designs (L. Eriksson), would entail even fewer experiments (28), they would rule out the possibility of studying interactions or quadratic terms at a later date.

This screening design consists of two profiles, each of which has ten nodes and seven quantitative factors. This yields 27 factors from which, with the aid of fractional factorial design, an experimental scale of 64 + 3 center-point experiments is generated. This number is derived from the experimental design in which all the factors are studied independently and without interaction at two levels (min./max.). A feasibility study was conducted in which the part is injected in a single-cavity mold. To ensure constant melt and mold temperatures in the process, 30 moldings were initially produced. Two molded parts were then produced and characterized. After the experiments, the moldings were measured on a coordinate measuring table.
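The run-count reduction can be sketched with a small example. The design below is a 2^(4-1) fractional factorial (not the chapter's actual 2^(27-21) design, which works the same way but at a larger scale): the fourth factor is generated from a product of the base factors, which halves the number of runs at the cost of confounding.

```python
# Sketch: a two-level fractional factorial design, illustrated with a small
# 2^(4-1) example (the chapter's design reduces 2^27 runs to 64 the same way).
from itertools import product

# full factorial for the three base factors at coded levels -1/+1
base = list(product([-1, 1], repeat=3))

# the fourth factor is generated as the product D = A*B*C,
# halving the number of runs at the cost of confounded interactions
design = [(a, b, c, a * b * c) for (a, b, c) in base]

print(len(design))   # 8 runs instead of 2**4 = 16
print(2 ** 27)       # full factorial for 27 factors: 134,217,728 runs
```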

The values were transferred to the software. Before a model can be generated from the data, it is very important to look at the raw data. For this purpose, common statistical software supplies replicate, histogram and correlation plots. The first plot to look at is the replicate plot (Fig. 22), which plots all the experiments in a row in order to see whether experiments with equal factor settings (center points and replicates) produce equal results. The second plot is the histogram (Fig. 23), which is important for reviewing the data and checking for a symmetrical Gaussian distribution (Fig. 34, bell curve). Because of the limited number of experiments, it is very important to have close to normally distributed data in order to get a good regression model fitted with the method of least squares.
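A quick numerical version of this raw-data check is to compute the sample skewness of the responses before fitting; values near zero are consistent with a symmetric, bell-shaped distribution. The response values below are hypothetical.

```python
# Sketch: a quick raw-data check before model fitting (hypothetical responses).
# Strong skew suggests a response transformation (e.g. log) before the
# least-squares fit.
import math

responses = [4.1, 4.4, 3.9, 4.6, 4.2, 4.0, 4.5, 4.3, 4.1, 4.4]

n = len(responses)
mean = sum(responses) / n
sd = math.sqrt(sum((y - mean) ** 2 for y in responses) / (n - 1))

# sample skewness: near 0 for symmetric, bell-shaped data
skew = (sum((y - mean) ** 3 for y in responses) / n) / sd ** 3
print(f"mean={mean:.2f}  sd={sd:.2f}  skewness={skew:.2f}")
```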

After this, the "black box" (Fig. 11) between factor settings (cause) and targets (effect) can be graphically modeled. The model itself is a Taylor polynomial, the mathematical formulation of the relationships being derived from the results of the factorial experiment.
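Such a least-squares fit can be sketched as follows for a first-order model with one interaction term on coded factor levels. The data and factor names are hypothetical; for a full two-level factorial the model columns are orthogonal, so each coefficient reduces to a simple dot product.

```python
# Sketch: fitting the "black box" polynomial by least squares on coded
# factor levels (-1/+1). Data and factor names are hypothetical.
# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2

# coded settings for two factors, e.g. holding pressure (x1) and mold temp (x2)
X = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
y = [10.0, 14.0, 12.0, 20.0]

# build the model matrix [1, x1, x2, x1*x2]; for a full 2^2 factorial the
# columns are orthogonal, so each least-squares coefficient is simply the
# dot product of the column with y divided by the run count
n = len(y)
cols = [[1] * n,
        [x1 for x1, _ in X],
        [x2 for _, x2 in X],
        [x1 * x2 for x1, x2 in X]]
coeffs = [sum(c * yi for c, yi in zip(col, y)) / n for col in cols]
print(coeffs)  # [b0, b1, b2, b12] -> [14.0, 3.0, 2.0, 1.0]
```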

Fig. 22. Replicate plot 5.


Fig. 23. Histogram 5.

Fig. 24. Coefficient plot 5.

Since screening examines only the linear correlations, only the linear effects of all factors are plotted in a coefficient bar graph (Fig. 24). The next plot to study is the "N-residual probability plot" (Fig. 25).


Fig. 25. N-Residual Plot 5.

Fig. 26. Variable importance plot 5.

The residuals are plotted on a cumulative normal probability scale. This plot makes it easy to check the normality of the residuals: if the residuals are normally distributed, the points on the "N-residual probability plot" follow a straight line closely. The plot also supports the detection of outliers.

Points deviating from the normal probability line with large absolute studentized<sup>7</sup> residuals, i.e. larger than 4 standard deviations, are indicated by red lines on the plot.

According to the Pareto principle, about 20 % of the factors account for 80 % of the effect. In line with this hypothesis, any factor without influence (significance) can be graphically removed from the coefficient plot, while the software recalculates the updated model from the remaining terms<sup>8</sup> in the background. In the next step, the experimenter can focus on the significant factors. To see which factors are important for all considered responses, the "variable importance plot" (Fig. 26) can be analyzed. This is crucial because the number of factors decisively determines the extent of further experiments. The quality of the test series and its calculated model are also illustrated as several four-bar plots (Fig. 27, 28).
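The pruning step can be sketched as a simple filter over the fitted coefficients. The numbers and the 10 % threshold are hypothetical; the comment on orthogonality holds for balanced two-level designs, where dropping a term leaves the remaining coefficients unchanged.

```python
# Sketch: pruning a non-significant model term and refitting (hypothetical
# numbers). With orthogonal two-level designs, dropping a term does not
# change the remaining coefficients, only the model's error estimate.
coeffs = {"const": 14.0, "x1": 3.0, "x2": 2.0, "x1*x2": 0.05}

# drop terms whose effect is negligible compared to the largest effect
largest = max(abs(v) for k, v in coeffs.items() if k != "const")
pruned = {k: v for k, v in coeffs.items()
          if k == "const" or abs(v) >= 0.1 * largest}
print(pruned)  # the tiny x1*x2 interaction is removed
```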

<sup>7</sup> For more details see Student's t-distribution (From Wikipedia).

<sup>8</sup> For this reason, the order in which non-significant terms are removed is important.

Fig. 27. Summary plot after screening 5.


Fig. 28. Summary plot after optimization 5.

These bars stand for the quality of the results (1st bar), the prognosis for targets in new experiments (2nd bar), the validity (3rd bar) and the reproducibility of the model (4th bar). The bars are shown normalized (0 to 1), i.e. the closer the values are to 1, the better. The model calculated after the screening phase (Fig. 27) was of surprisingly good quality. Fig. 28 is the summary of the enhanced "optimization model". In detail, the "summary plot" is calculated as follows (L. Eriksson) (AB, 2009):

**R²** (Fig. 27, 28; 1st bar): The quality of results, or goodness of fit, is calculated from the fraction of the variation of the response explained by the model (FU 7). The R² value is always between 0 and 1. Values close to 1 for both R² and Q² indicate a very good model with excellent predictive power.

$$R^2 = \frac{\text{SSREG}}{\text{SS}} \qquad \begin{array}{l} \text{SSREG} = \text{the sum of squares of the response (Y),}\\ \text{corrected for the mean, explained by the model.}\\ \text{SS} = \text{the total sum of squares of Y corrected for the}\\ \text{mean.} \end{array} \tag{7}$$
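Numerically, FU 7 can be sketched as follows; the observed and fitted values are hypothetical, and SSREG is obtained as the total corrected sum of squares minus the residual sum of squares.

```python
# Sketch: computing R² (FU 7) from a model's fitted values (hypothetical data)
y     = [10.0, 14.0, 12.0, 20.0, 11.0]
y_hat = [10.2, 13.7, 12.3, 19.8, 11.0]

mean = sum(y) / len(y)
ss_total = sum((yi - mean) ** 2 for yi in y)            # SS, corrected for mean
ss_resid = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
ss_reg = ss_total - ss_resid                            # explained by the model
r2 = ss_reg / ss_total
print(round(r2, 3))
```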


The second column in the summary plot is **Q²** (Fig. 27, 28): the fraction of the variation of the response predicted by the model according to cross validation, expressed in the same units as R². Q² underestimates the goodness of fit (FU 8). Q² usually lies between 0 and 1 but can be negative for very poor models; with PLS, negative Q² values are truncated to zero for computational purposes. Values close to 1 for both R² and Q² indicate a very good model with excellent predictive power.
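The cross-validation idea behind Q² can be sketched with a leave-one-out scheme on a one-factor linear model: each run is left out in turn, the model is refitted on the rest, and the prediction error on the left-out run accumulates into PRESS. All data are hypothetical.

```python
# Sketch: Q² (FU 8) by leave-one-out cross validation for a one-factor
# linear model (hypothetical data)
x = [-1.0, -0.5, 0.0, 0.5, 1.0]
y = [ 9.8, 11.1, 12.0, 13.2, 14.1]

def fit(xs, ys):
    # closed-form least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) \
        / sum((xi - mx) ** 2 for xi in xs)
    return my - b * mx, b

press = 0.0
for i in range(len(x)):
    xs = x[:i] + x[i + 1:]
    ys = y[:i] + y[i + 1:]
    a, b = fit(xs, ys)
    press += (y[i] - (a + b * x[i])) ** 2   # prediction residual

my = sum(y) / len(y)
ss = sum((yi - my) ** 2 for yi in y)        # total SS corrected for the mean
q2 = 1 - press / ss
print(round(q2, 3))
```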

$$\mathbf{Q}^2 = 1 - \frac{\text{PRESS}}{\text{SS}} \qquad \begin{array}{l} \text{PRESS} = \text{the prediction residual sum of squares} \\ \text{SS} = \text{the total sum of squares of Y corrected for the} \\ \text{mean.} \end{array} \tag{8}$$

The third column in the summary plot is the "model validity" (Fig. 27, 28), a measure of lack of fit (FU 9). When the model validity column is larger than 0.25, there is no significant lack of fit of the model. This means that the model error is in the same range as the pure error. When the model validity is less than 0.25, there is a significant lack of fit and the model error is significantly larger than the pure error (reproducibility). A model validity value of 1 represents a perfect model.

$$\text{Validity} = 1 + 0.57647 \ast \log_{10}(p_{lof}) \qquad p_{lof} = \text{p-value for lack of fit.} \tag{9}$$
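A short sketch of FU 9 shows where the 0.25 threshold mentioned above comes from: a lack-of-fit p-value of 0.05 (the usual significance level) maps exactly to a validity of 0.25.

```python
# Sketch: the model-validity transform (FU 9) applied to example
# lack-of-fit p-values
import math

def validity(p_lof):
    # 1 + 0.57647*log10(p) maps p = 1 to validity 1 and p = 0.05 to 0.25
    return 1 + 0.57647 * math.log10(p_lof)

print(round(validity(1.0), 2))    # perfect model: 1.0
print(round(validity(0.05), 2))   # significance threshold: 0.25
```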

The fourth column in the summary plot is the reproducibility (Fig. 27, 28): the variation of the response under the same conditions (pure error) (FU 10), often at the center points, compared to the total variation of the response. A reproducibility value of 1 represents perfect reproducibility.

$$\text{Reproducibility} = 1 - \frac{MS(\text{pure error})}{MS(\text{total SS corrected})} \qquad MS = \text{mean square (variance)} \tag{10}$$
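FU 10 can be sketched directly from replicated runs; the response values below are hypothetical, with the last three entries representing the replicated center points.

```python
# Sketch: reproducibility (FU 10) from center-point replicates
# (hypothetical data)
y_all    = [10.2, 11.8, 9.7, 12.1, 11.0, 11.1, 10.9]   # all responses
y_center = [11.0, 11.1, 10.9]                          # replicated runs

def mean_square(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

reproducibility = 1 - mean_square(y_center) / mean_square(y_all)
print(round(reproducibility, 3))
```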

#### **4. Optimization (of process parameters)**

The screening experiments for this project revealed the region in which the profiles for holding pressure and injection speed must lie. This allows the nodes to be dispensed with in favor of a description of the profiles with a varying initial value that decreases constantly (FU 3). The experimental approach also revealed less significant factors, in this case the back pressure and the temperature of the hot runner nozzles; these factors were frozen at a calculated optimal value. This reduced the number of factors to be varied from twenty-seven to six. These six factors (holding pressure, injection speed, mold temperature, cooling time, switching point and barrel temperature) were studied further in a multilevel geometric experimental design ("Central Composite Face") with 44 experiments and three center points. After executing the additional experiments, the raw data analysis has to be performed again: checking the data for normal distribution and possibly transforming responses, checking the reproducibility, and pruning the model terms in the "coefficient plot" in order to get the best possible prediction model. The resulting summary plot is visualized in Fig. 28.

Now, given the predictive quality of the model, contour plots (Fig. 29) can be generated. These plots are like a map of a process: the ordinate and abscissa represent factors, and the contour lines visualize response values.

Fig. 29. Contour plot 5.

54 Some Critical Issues for Injection Molding


#### **4.1 Finding an optimal process-setup**

While the color shading is value-neutral, it is predefined by default with blue as low and red as high response values.

Since humans are incapable of thinking in more than three dimensions, it is difficult to display more than two factors, plus a target. However, it is possible for software to calculate the degree of fulfillment of any number of evaluation criteria as a function of several factors. This can be shown in a sweet-spot plot (Fig. 30).

Fig. 30. Sweet spot plot 5.

As in set theory, the target functions obtained are displayed on top of each other in different colors. The region in which all targets are met is called the sweet spot (green).
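This set-theoretic intersection can be sketched numerically: evaluate each response model over a grid of factor settings and keep the points where every target limit is met. The two response models, factor ranges and limits below are hypothetical.

```python
# Sketch: a sweet-spot search as a set intersection over a factor grid,
# using two hypothetical response models and target limits
sweet_spot = []
for x1 in [i / 10 for i in range(-10, 11)]:        # coded factor 1
    for x2 in [i / 10 for i in range(-10, 11)]:    # coded factor 2
        shrinkage = 1.2 + 0.8 * x1 - 0.5 * x2      # hypothetical models
        warpage   = 0.6 - 0.3 * x1 + 0.4 * x2
        # the sweet spot is where ALL targets are met simultaneously
        if shrinkage < 1.5 and warpage < 0.7:
            sweet_spot.append((x1, x2))

print(len(sweet_spot) > 0)   # a sweet spot exists for these targets
```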

The injection prototype tool was created to ascertain the process capability of the tool. The determination of the process optimum from the model proved to be so good and reproducible that the project team eschewed a study of the robustness of the optimum. On production tools, and especially when it comes to large piece numbers with very high requirements, further verification steps are inevitable.

#### **5. Robustness testing**

In most cases, some of the target definitions could be fully met, others only to a certain degree. Sometimes a compromise has to be defined to outline actionable new demands of targets and specifications to be derived from this updated knowledge.

After this important step of reflecting on what is possible and defining new process specifications in the light of costs versus benefits, a potential optimum can be calculated with the optimizer and visualized with the sweet spot plot. The Modde optimizer is a software tool which cannot optimize the process by itself, but it is very helpful for searching a multidimensional space for a setting in which all targets are met as far as possible. To do this, the target limits and the factor settings have to be taken over or updated.

At this point, it is also possible to narrow the factor limits or to expand them. This inter- or extrapolation is combined with the possibility of estimating the accuracy of the factor settings, i.e. how accurately each factor can be adjusted to a certain limit. From this data, a reduced linear design is used to place small, mathematical isosceles triangles into the multidimensional working space. From there, these triangles are mirrored on their flanks in order to check whether the new peaks of the mirrored triangles (Fig. 31) are better positioned to fulfill the predefined targets' requirements.

Fig. 31. Sketch: simplex algorithm, (L. Eriksson).

This process is repeated iteratively until no better solution can be found. Because a single small triangle could easily be trapped at a local maximum or minimum, a proportional number of triangles is started from different positions in the factor space. This can be seen in the ending lines in Fig. 32. This so-called "simplex algorithm" is a simple but powerful tool to check to which degree the targets' values can be simultaneously fulfilled. The previously discussed target priorities can be weighted within the response target settings. The process model serves as a basis for finding a setting in which the molded part can be produced with the required quality. For the purpose of evaluating robustness in the region of the calculated optimum, it often takes only a few experiments for the effect of the factor variations to be adequately described. To do this, in fine-tuning or a robustness check, a new small set of linear experiments with a factor variation similar to realistic process conditions should be performed. After executing the experiments and repeating the data analysis, the results can be summed up in one of these four cases:

1. **Case one:** All target values are fulfilled; a good predictive model can be developed. This means the process is robust, but the variation of the factors still influences the target functions. It should be discussed whether some of the process conditions should be improved.
2. **Case two:** All target values are fulfilled; no model can be achieved. This is the best outcome, because the factor variations seem to be too small to affect the results. In this case it should also be discussed whether the process setup can be simplified in order to make the process more cost-efficient.
3. **Case three:** No target values are fulfilled; a model can be developed. This means that the model prediction was not as accurate as expected. The quality of the "predictive basis model" should be rechecked; maybe Q² and the validity are not as good as supposed, or there are outliers in the experiments. By implementing these test experiments in the Modde file, the model quality can be checked as to whether it can be enhanced or whether the results deviate and, if so, for what reasons.
4. **Case four:** No target values are fulfilled; no model can be developed. Typical reasons for this outcome are that process conditions have been changed between the experimental blocks, e.g. different raw materials, machines, operators, tools etc. Another common reason is that, owing to the model's quality, the requirement has been underestimated.

Fig. 32. Response simplex evaluation 5.
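The mirroring of a triangle on its flank can be sketched as one reflection step of a two-factor simplex. The loss function, start simplex and target point are hypothetical; real implementations (e.g. Nelder-Mead) add expansion and contraction steps.

```python
# Sketch: one reflection step of the simplex algorithm in two factors,
# minimizing a hypothetical "distance from target" function
def loss(p):
    x1, x2 = p
    return (x1 - 0.3) ** 2 + (x2 + 0.2) ** 2   # optimum at (0.3, -0.2)

# an isosceles triangle (simplex) in the coded factor space
simplex = [(0.0, 0.0), (0.1, 0.0), (0.05, 0.1)]

simplex.sort(key=loss)                  # best vertex first
worst = simplex[-1]
centroid = tuple((a + b) / 2 for a, b in zip(simplex[0], simplex[1]))
# mirror the worst vertex through the opposite flank
reflected = tuple(2 * c - w for c, w in zip(centroid, worst))

if loss(reflected) < loss(worst):
    simplex[-1] = reflected             # accept: the mirrored point is better
print(simplex)
```

Repeating this step until no reflection improves the worst vertex reproduces the iterative search described above.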



#### **6. Validation/process capability**

Finally, after a process setup has been found and verified to produce parts of sufficient quality, the process capability should be checked. This is a very important step for summarizing the quality and robustness of the process. Within this step, a variation caused by realistic production conditions can be attached to the calculated optimum of the regression model. This is done by performing a Monte Carlo simulation: 100,000 simulations are computed while the uncertainty of the factor settings is randomly applied. One way is to define a narrow factor range, such as in a robustness model or at a certain factor setting spot within an optimization design. To explain the concept, Fig. 33 shows a practical picture in which the process target borders are displayed as the road shoulders and the process itself as a car on this road.

Another way to describe the process quality is to calculate the DPMO which is short for "Defects per Million opportunities outside specifications" and is used as stop criteria in the

Within the Table 5 higher the Cpk is, the better the process capability and robustness are. This can be seen also within the numbers of "DPMO" or the "%Outside" specification. To calculate these numbers from the Design of Experiments experiment data, the "Monte Carlo simulation" (MCS) can be computed round the previously calculated optimal. The factor variance is adjusted to the approximated variance of the process settings within normal working conditions (Assumption 5%). The response variance calculated by the MCS is caused then by the factor setting -- and the model uncertainty. At Fig. 36 The black T-bars represent the space in which one factor can be varied while freezing the other factors and still keeping the calculated response fulfillment (Fig. 37). This is a very important information to set the process tolerances as closely as necessary and as wide as possible.

> **Cpk DPMO % Outside**  0,4 115070 11,51 0,6 35930 3,59 0,8 8198 0,82 1,0 1350 0,13 1,2 159 0,02 1,5 3,4 7,93328E-05 1,8 0,03 3.33204E-06 2,0 0,0010 9.86588E-08

for predictions.

USL = Upper specification limit, LSL = Lower

Ho = Hits outside specifications, Ns = Number of

specification limit and µ = estimated standard deviation

simulations based on an infinite number of predictions. (10)

(9)

��� � ���� ���� � �

���� � ���� ∗

3∗σ ,

design space estimation (FU 10).

� � ��� 3∗σ �

1 000 000 ��

Table 5. Six Sigma, source: (From Wikipedia).

Fig. 36. Design space estimation 5.

Fig. 33. Car example.

The smaller the car, in comparison to the road width, the more easily the process can be controlled. If a grid is applied on the street and the horizontal car positions during driving is added up, the positions within the grid intervals can be plotted as bars in a histogram (Fig. 23) or distribution plot (Fig. 34). Again, it is easy to interpret how close the car is driving righthanded relative to the middle lane marking and thus how safe the driving is. If the standard deviation to the "ideal driving line" is checked as a critical indicator between the expected value *(ideal driving line)* and each process border *(road shoulder),* then the process can be described in terms of quality and robustness. Consequently, it can be assumed that process quality can be described as a function of tolerance and process variation. This number is called Cpk (FU 9), the process capability or capability index and has its origin in "SixSigma" statistics. Within in Fig. 35 normal distributions with differed sigma levels are plotted. At higher CpK levels the underlying data is centered closer to the expected value (µ).

Fig. 34. Normal distribution.

Fig. 35. Sigma distributions 1-6 sigma.
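The grid-and-histogram idea of the car example can be sketched in a few lines of Python. All numbers here (lane width, 0.3 m standard deviation, 0.25 m grid interval) are invented for illustration and are not taken from the text:

```python
import random
from collections import Counter
from statistics import pstdev

rng = random.Random(0)

# Simulated horizontal car positions in metres, relative to the ideal driving line.
positions = [rng.gauss(0.0, 0.3) for _ in range(1000)]

# "Grid on the street": bin the positions into 0.25 m intervals -> histogram bars.
histogram = Counter(round(p / 0.25) * 0.25 for p in positions)

# Standard deviation from the ideal driving line as the critical indicator.
print(f"std dev from ideal line: {pstdev(positions):.2f} m")
```

The smaller this standard deviation is relative to the distance to the road shoulders, the more capable the "driving process" is.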

58 Some Critical Issues for Injection Molding


$$C_{pk} = \min\!\left(\frac{USL - \mu}{3\sigma},\ \frac{\mu - LSL}{3\sigma}\right) \quad (9)$$

USL = upper specification limit, LSL = lower specification limit, µ = predicted mean and σ = estimated standard deviation of the predictions.
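As a minimal sketch, formula (9) can be computed directly; the function and parameter names below are illustrative, not from the text:

```python
def cpk(mean, sigma, lsl, usl):
    """Process capability index (FU 9): distance of the predicted mean to the
    nearest specification limit, expressed in units of 3 standard deviations."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# A process centered between LSL = -3 and USL = +3 with sigma = 1
# has exactly 3 sigma of margin to each limit, i.e. Cpk = 1.0.
print(cpk(mean=0.0, sigma=1.0, lsl=-3.0, usl=3.0))  # → 1.0
```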

$$\mathrm{DPMO} = Ho \cdot \frac{1\,000\,000}{Ns} \quad (10)$$

Ho = hits outside specifications, Ns = number of simulations, based on an infinite number of predictions.
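Formula (10) is a simple ratio, and under a normal-distribution assumption the Table 5 values can be reproduced from Cpk via the one-sided tail probability beyond 3·Cpk standard deviations. A short sketch (function names are illustrative):

```python
from statistics import NormalDist

def dpmo(hits_outside, n_simulations):
    """Formula (10): defects per million opportunities."""
    return hits_outside * 1_000_000 / n_simulations

def dpmo_from_cpk(cpk):
    """One-sided normal tail beyond 3*Cpk standard deviations, scaled to a
    million opportunities; reproduces the DPMO column of Table 5."""
    return 1_000_000 * (1 - NormalDist().cdf(3 * cpk))

print(round(dpmo_from_cpk(1.0)))  # → 1350, matching the Cpk = 1.0 row of Table 5
```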

Another way to describe the process quality is to calculate the DPMO, which is short for "Defects per Million Opportunities" outside specifications and is used as a stop criterion in the design space estimation (FU 10).

Table 5 shows that the higher the Cpk is, the better the process capability and robustness are. This is also reflected in the "DPMO" and "% outside specification" numbers. To calculate these numbers from the Design of Experiments data, a "Monte Carlo simulation" (MCS) can be computed around the previously calculated optimum. The factor variance is set to the approximated variance of the process settings under normal working conditions (assumption: 5%). The response variance calculated by the MCS is then caused by the factor variation and the model uncertainty. In Fig. 36, the black T-bars represent the range over which one factor can be varied, while the other factors are frozen, without losing the calculated response fulfillment (Fig. 37). This information is very important for setting the process tolerances as tightly as necessary and as wide as possible.
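A minimal sketch of such a Monte Carlo simulation follows. The regression model, the optimum and the specification limits are invented for illustration; Modde performs this kind of computation internally:

```python
import random

def monte_carlo_dpmo(model, optimum, lsl, usl, rel_sd=0.05, n=100_000, seed=1):
    """Vary each factor around the optimum with ~5 % relative standard deviation
    (normal working conditions), propagate through the regression model, and
    count response hits outside the specification limits -- formula (10)."""
    rng = random.Random(seed)
    hits_outside = 0
    for _ in range(n):
        factors = [rng.gauss(x, abs(x) * rel_sd) for x in optimum]
        y = model(factors)
        if not (lsl <= y <= usl):
            hits_outside += 1
    return hits_outside * 1_000_000 / n

# Illustrative regression model (not from the text): y = 2*x1 + x2^2
model = lambda f: 2 * f[0] + f[1] ** 2

print(monte_carlo_dpmo(model, optimum=[10.0, 2.0], lsl=20.0, usl=28.0))
```

Tightening `lsl`/`usl` or increasing `rel_sd` raises the estimated DPMO, which is exactly the trade-off behind setting tolerances "as tightly as necessary and as wide as possible".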




Fig. 36. Design space estimation 5.

Fig. 37. Predictive design space histogram (chart legend: Modde-predicted response profile).

The histograms of Fig. 37 represent the response targets based on the regression model at the optimal factor setting, including the factor-uncertainty variation calculated by the MCS.

In addition to this, the temptation is always to minimize work, time and budget with efficient designs *(fewer experiments)* and fewer factors, with the risk that not all important factors are included or that the effect of factors is underestimated because they are, for instance, of a higher order than assumed. So, if a process is not linear and linear designs are used, the predictive capability of this model is very limited. If necessary, due to interaction, squared or cubic factor terms, the design can be complemented step-wise. For higher-order processes, complex designs are not recommended at the beginning, since these drastically increase the number of experiments. Complexity can always be reduced by focusing only on a small process space (Fig. 18).

**8. Conclusion/summary**

After reading the chapter, the readers should now have a good understanding of how far the combined methods can help them to achieve the predefined requirements. In addition, they should also be sensitized to the fact that non-structured approaches are weak and time-consuming. It is also important to understand that while the design of experiments ("DoE") does not necessarily lead to good results or capable processes, it can help to describe and document the potential of a process. Even if the targets cannot be achieved, it is still possible to derive useful, cost-effective and robust knowledge with this structured approach in order to identify and assess possible disturbance factors or possible process constraints. This can provide beneficial clues for fine-tuning the factors and conditions in order to ensure and optimize process capability and success.

By following this good "DoE" practice recommendation, the iterative difficulties in finding the fulcrum or lever at the beginning of an optimization process can be reduced, if not eliminated. And by following this consistently structured approach, the right things can be done in the right way with the right tools. Thus, Pareto's law can be intelligently leveraged, and finally, the optimization team can operate in the most efficient and effective fashion.

**9. References**

Kennedy, M. N. (2003). *Product Development for the Lean Enterprise.* Richmond, Virginia, USA: The Oaklea Press. ISBN 1-892538-09-1.

Klein, B. (2007). *Versuchsplanung-DoE.* München, Germany: Oldenbourg Wissenschaftsverlag GmbH. ISBN 978-3-486-58352-6.

L. Eriksson, E. J.-W. (n.d.). *Design of Experiments: Principles and Applications.* Umea, Sweden. ISBN-10 91-973730-4-4.

Lunau, S. (2006, 2007). *Six Sigma + Lean Toolset.* Berlin, Heidelberg: Springer Verlag. ISBN 978-3-540-69714-5.

Michael L. George, D. R. (2005). *The Lean Six Sigma Pocket Toolbook.* USA: McGraw-Hill. ISBN 0-07-144119-0.

Moser, S., & Madl, D. (08/2009). *Effective Run-In of an Injection Molding Process.* Kunststoffe international: Carl Hanser Verlag, Munich.

AB, U. (2009). *Software & Help File Modde 9.0.* Sweden. ISBN-10 91-973730-4-4.

Wikipedia, the free encyclopedia. (n.d.). Retrieved 09 15, 2011, from http://en.wikipedia.org
