**1. Introduction**

This chapter is intended to give developers and machine operators a better idea of how to set up robust processes and how to review and optimize existing ones. Its clear structure moves from a basic introduction to the in-depth application of the methods and tools, thus guiding readers through these processes. While this chapter cannot replace a deeper study of the subject, it can help readers assess the usefulness of the methods presented.

Since the publication of my article "Effective Run-In of an Injection Molding Process" (Moser & Madl, 08/2009), I have noticed that the start phase of an optimization project and the end phase ("verification / validation") are the most critical parts. For this reason, I have decided to extend the upcoming article with the following chapters. Increasingly, "process capability" is a necessary basis for accomplishing design transfer with the customer on a valid foundation. "Quality by Design" and "design space estimation" are no longer foreign words within the injection molding business either. The medical and automotive industries in particular call for process validation. This new chapter will, therefore, be divided into the following sections:


This procedure will help process managers move through the setup or optimization process. Most students who have attended, for instance, a "Process Capability Statistics" or a "Design of Experiments" course have difficulties finding the fulcrum or lever with which to complete the first steps. Consequently, they often just invest in "trial and error" methods to get their process to work. Common paradigms like "change one parameter at a time" will not help accelerate the optimization either, nor will they enable the improvement team to map the whole process, including interactions or nonlinear behaviours. Therefore, this chapter will outline tools to collect the main process factors, to identify the disturbance factors, and also some more specialized tools to interpret their impact on the process. The best way to get a run-in or optimization process started is to bring a "complementary" team of experts to the table. Within this team, it is important to lift the members into a mode where they are willing and motivated to work on the problem in an open, cooperative, and productive atmosphere. Besides this, because of their rules and structure, "team leading", "mediation skills", and "creativity tools" form an indispensable base for building a mutual attitude towards the improvement process. Furthermore, these tools will lead the team from reflecting on a problem to describing it, and on to joint agreement on supported decisions, working methods, and actions. An additional advantage is a clear structure, such as, for instance, the "DMAIC cycle".

Fig. 1. DMAIC Cycle (Lunau, 2006, 2007).

The DMAIC cycle, which is a logical further development of the Deming cycle, provides a good structure for getting into the problem-solving process. Within this approach, the question "what is the 'real problem'?" is asked. Two different symptoms can easily be mistaken for cause and effect, so it is helpful to discuss this in the team of experts, for instance with the following tools and methods. After the "real problem" has been defined, it is necessary to find a way to measure the cause and effect of the problem. This might sound straightforward and logical, but in most cases it is not done. For instance, a check of whether the measuring equipment is capable of resolving the whole variation range of the "process working space" is often not requested. Measurement methods, as well as the equipment calibration and capability, should be validated *(each time)* prior to the execution of the experiments. Otherwise, it may happen that many expensive, time-consuming experiments are performed and many measurements are taken, but these are not adequate to describe cause and effect within the process space.

The next step in getting factor settings and system or product attributes measurably defined is to analyze their correlation with a structured approach. Design of Experiments is a very powerful tool for doing this. During the experiments, it is recommended to question every step in planning, such as:

- It has to be verified whether the latest setup (factor variation) of the worksheet is adequate for focusing on the desired responses of the targets (Fig. 18).
- If some responses are not measurable or quantifiable, continuing without adaptation leads to fuzziness and, with it, to a bad "goodness of prediction" of the models.
- It is not beneficial for the experimental room to extend far beyond the realm of the objective target functions, because this will automatically lead to more experiments and to fuzziness, due to the more complex mathematics needed to describe cause and effect (Fig. 18).

After the "test and analysis phase", the gap between cause and effect can be closed with a mathematical prediction model. The purpose of the model is to reflect how factors and responses are related. On this basis, "model contour plots" (Fig. 29) can be generated, and potential optima can be calculated and visualized.

In the upcoming Improve phase, the optimum should be verified. After this verification, the robustness of this optimum setting can be rechecked with a reduced factor variation around the optimum, to confirm the model-based calculations. In a last step, or in parallel to the robustness testing, the capability of the optimal setting, including the naturally given process variation, can be determined by using "Monte Carlo simulations". The output will be, for instance, a "Cpk" value or "defects per million" within an estimation of the work-point design space. These "key process indicators" (which will be explained later) will then be a base for validating the process and making it comparable to other subprocesses. *(Cf.3)*

**2. Familiarization**

But again, where to start? The following small collection of tools is a good way to get the first steps done: to reflect on and research the process setup or process problem.

**2.1 Ask why 5 times! (Michael L. George, 2005)**

One of the easiest and most straightforward tools for getting familiar with a process setup or a process problem is just to ask why, why, and why again. Tree or bubble diagrams can be used to document the root cause analysis. This and the following tools should be performed in a team only after it is certain that the team follows the basic rules of good brainstorming and communication practice. This means that no direct finger-pointing or assignment of blame should take place. Also, there should be no criticism during the creative phase, but rather only at the right time, and then only constructively expressed.

**Small example of constructive, drill-down questioning:**

**Why is the injection-molded part not of sufficient quality?** Because it contains some color streaks and dents. **Why does the part contain color streaks and dents?** Because the filling and cooling processes are not as robust as they should be. **Why are the cooling and filling processes not robust?** Because the density of the melted polymer is not homogeneous. **Why is the melted polymer density distribution not as it should be?** Because the polymer granulate was not dry enough; the drier had not dried it properly. **Why did the drier not work as it was supposed to?** Because the service had not been properly done.
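As a small illustration of the prediction-model step described in the overview above, the following Python sketch fits a main-effects-plus-interaction regression model to a two-factor full factorial design. All factor names, coded levels, and response values here are invented for the example and are not taken from this chapter; in practice, the worksheet data (Fig. 18) would supply the real measurements.

```python
# Sketch: fit a DoE prediction model y = b0 + b1*x1 + b2*x2 + b12*x1*x2
# to a 2^2 full factorial with center points (hypothetical data).
import numpy as np

# Coded factor levels (-1/+1): x1 = melt temperature, x2 = injection
# speed (both hypothetical), plus three center-point runs.
x1 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 0.0])
x2 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
# Measured response, e.g. a part dimension in mm (hypothetical values)
y = np.array([9.80, 10.30, 9.95, 10.85, 10.20, 10.18, 10.22])

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = coeffs

def predict(x1_new, x2_new):
    """Predict the response for coded factor settings."""
    return b0 + b1 * x1_new + b2 * x2_new + b12 * x1_new * x2_new

# Evaluating the fitted surface on a grid is the basis for drawing a
# model contour plot and locating potential optima in the design space.
grid = np.linspace(-1.0, 1.0, 21)
surface = np.array([[predict(a, b) for b in grid] for a in grid])
best = np.unravel_index(np.argmax(surface), surface.shape)
print("coefficients:", np.round(coeffs, 3))
print("predicted maximum at x1=%.1f, x2=%.1f" % (grid[best[0]], grid[best[1]]))
```

Because the factorial columns are orthogonal, the least-squares fit reproduces the classic effect estimates; the same grid of predictions can be handed to any contour-plotting routine to visualize the response surface.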

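The capability step from the overview, estimating a "Cpk" value and "defects per million" with a Monte Carlo simulation, can likewise be sketched. The prediction model, the work point, the factor scatter, and the specification limits below are all hypothetical placeholders; in a real study they would come from the verified optimum and the naturally given process variation.

```python
# Sketch: propagate assumed process variation through a (hypothetical)
# prediction model and estimate Cpk and defects per million (DPM).
import numpy as np

rng = np.random.default_rng(seed=1)
N = 200_000  # number of simulated shots

def predict(x1, x2):
    # Hypothetical fitted model in coded factor units
    return 10.21 + 0.35 * x1 + 0.175 * x2 + 0.10 * x1 * x2

# Natural scatter of the factor settings around the work point (0.8, 0.8)
x1 = rng.normal(loc=0.8, scale=0.05, size=N)  # e.g. melt temperature
x2 = rng.normal(loc=0.8, scale=0.05, size=N)  # e.g. injection speed
y = predict(x1, x2) + rng.normal(scale=0.02, size=N)  # measurement noise

# Hypothetical specification limits of the response
LSL, USL = 10.45, 10.85

# Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
mean, sigma = y.mean(), y.std(ddof=1)
cpk = min(USL - mean, mean - LSL) / (3 * sigma)

# Defects per million: fraction of simulated shots outside specification
dpm = 1e6 * np.mean((y < LSL) | (y > USL))
print(f"mean={mean:.3f}  sigma={sigma:.4f}  Cpk={cpk:.2f}  DPM={dpm:.0f}")
```

The same simulation, repeated over a grid of work points, gives exactly the kind of work-point design-space estimate the chapter refers to: the region of settings where the predicted Cpk stays above the required level.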