**2. Theoretical Review**

The history of quality control is as old as the history of industry itself. Before the Industrial Revolution, quality was controlled by the vast experience of the artisans of the time, which guaranteed product quality. The industrial system then entered a new technical era, in which the production process split complex operations into simple tasks that could be performed by workers with specific skills. Thus, the worker was no longer responsible for the entire manufacture of the product, but only for a part of it (Juran, 1993).

It is within this context that inspection arose, which sought to separate non-conforming items on the basis of established specifications and tolerances. Simple inspection did not improve the quality of products; it only provided information on their quality level and sorted the conforming items from the non-conforming ones. The constant concern with costs and productivity led to the question: how can the information obtained through inspection be used to improve product quality?

The solution of this question led to the recognition that variability is a factor inherent in industrial processes and can be understood through statistics and probability, noting that measurements could be made during the manufacturing process without having to wait for the completion of the production cycle.

In 1924, Dr. Walter A. Shewhart of Bell Telephone Laboratories developed a statistical chart to monitor and control the production process, which became one of the tools of Statistical Quality Control. The purpose of these charts was to differentiate between random (unavoidable) causes and assignable causes in a process. According to Shewhart (1931), if only random causes are present, one should not tamper with the process; if assignable causes are present, one should detect and eliminate them. In other words, these charts monitor change, or instability, in the process, thus ensuring product quality.

**3. Statistical Quality Control**

Statistical quality control (SQC) is a technique of analyzing the process, setting standards, comparing performance, verifying and studying deviations, seeking and implementing solutions, and analyzing the process again after the changes, seeking the best performance of machinery and/or persons (Montgomery, 1997).

Applications of Control Charts Arima for Autocorrelated Data (http://dx.doi.org/10.5772/50990)

Another definition is given by Triola (1999), who states that SQC is a preventive method in which results are compared continuously through statistical data, identifying trends toward significant changes and eliminating or controlling these changes in order to reduce them more and more.

According to Woodall et al. (2004), Statistical Quality Control is a collection of tools that are essential in quality improvement activities.

To better understand statistical quality control, it is necessary to bear in mind that the quality of a product manufactured by a process is inevitably subject to variation, which can be described in terms of two types of causes.

A *special cause* is a factor that generates variations that affect the process behavior in unpredictable ways; it is therefore not possible to obtain a pattern or a probability distribution for it.

A *common cause* is defined as a source of variation that affects all the individual values of a process. It results from various sources, none of which predominates over the others.

When these variations are significant in relation to the specifications, there is a risk of producing non-conforming products, i.e., products that do not meet specifications. The elimination of special causes requires local action, which can be taken by people close to the process, for example, workers. Common causes, by contrast, require actions on the work system that can only be taken by management, since the process is itself consistent but still unable to meet specifications (Ramos, 2000).

*Descriptive Statistics*

According to Reid and Sanders (2002), descriptive statistics can be helpful in describing certain characteristics of a product and a process. The most important descriptive statistics are the mean, the range, and the standard deviation.

SPC charts are designed to detect shifts amid the natural fluctuations caused by chance noise. For example, the Shewhart chart uses the standard deviation (SD) statistic to measure the size of the in-control process variability. By graphically contrasting the observed deviations against a multiple (usually triple) of the SD, the control chart is intended to identify unusual departures of the process from its normal (in-control) state.

Under certain assumptions, when the observed deviation from the mean exceeds three SDs, the process is said to be out of control, since there is only a probability of 0.0026 that an observation falls outside the three-SD limits given an unshifted mean; when a point does fall outside these limits, chances are that the process mean has shifted. This Shewhart chart scheme is in effect a statistical hypothesis test that reveals only whether the process is still in control (Chen and Elsayed, 2000).
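As a rough numerical sketch of the three-sigma rule behind the Shewhart chart (all data and parameter values here are hypothetical, not taken from the chapter), the tail probability and the control limits for an individuals chart can be computed as follows:

```python
import math
import random

# Probability that a normal observation falls outside mu +/- 3*sigma:
# 2 * (1 - Phi(3)) = erfc(3 / sqrt(2)), approximately 0.0027
p_outside = math.erfc(3 / math.sqrt(2))
print(f"P(|X - mu| > 3*sigma) = {p_outside:.4f}")

def shewhart_limits(samples):
    """Center line and 3-sigma control limits estimated from in-control data."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return mean - 3 * sd, mean, mean + 3 * sd

# Hypothetical in-control process: mean 10.0, standard deviation 0.5
random.seed(1)
in_control = [random.gauss(10.0, 0.5) for _ in range(200)]
lcl, cl, ucl = shewhart_limits(in_control)
alarms = [x for x in in_control if x < lcl or x > ucl]
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}  alarms={len(alarms)}")
```

Note that texts round the tail probability to either 0.0026 or 0.0027; under an in-control process almost all points fall inside the limits, so alarms are rare.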

Studies by Johnson and Bagshaw (1974) and Harris and Ross (1991) showed that Shewhart and cumulative sum (CUSUM) charts are sensitive to the presence of autocorrelated data (data that are not independent of each other over time), especially when the autocorrelation is extreme; i.e., these tools are not suitable for controlling such processes.
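A minimal sketch of the tabular CUSUM scheme mentioned here, assuming the conventional reference value k = 0.5σ and decision interval h = 5σ (the series and parameter values are illustrative, not from the chapter):

```python
def cusum(data, target, k, h):
    """One-sided tabular CUSUM statistics (C+ and C-) with reference value k
    and decision interval h; returns the indices where either statistic
    exceeds h (out-of-control signals)."""
    c_plus = c_minus = 0.0
    alarms = []
    for i, x in enumerate(data):
        c_plus = max(0.0, c_plus + (x - target) - k)
        c_minus = max(0.0, c_minus + (target - x) - k)
        if c_plus > h or c_minus > h:
            alarms.append(i)
    return alarms

# Hypothetical process: target 10, sigma 1; the mean shifts up by 1 sigma at i = 20.
shifted = [10.0] * 20 + [11.0] * 20
print(cusum(shifted, target=10.0, k=0.5, h=5.0))   # first alarm at index 30
```

Because the CUSUM accumulates deviations, a sustained one-sigma shift is signaled here after about ten observations; the point of the studies cited above is that autocorrelation produces similar accumulations even when no real shift has occurred.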

It is therefore necessary to model the data first and only then monitor them statistically. The presence of autocorrelation in the data leads to growth in the number of false alarms. Alwan and Roberts (1988) show that many false alarms (signals of special causes) may occur even at moderate levels of autocorrelation, which may arise from the measurement system, from the dynamics of the process, or from both; if conventional control charts are used without knowing whether correlation is present or absent, much effort can be spent in vain.
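The inflation of false alarms under autocorrelation can be illustrated with a small simulation. This is a sketch under assumed settings: the AR(1) model with φ = 0.8 and the moving-range estimate of σ for the individuals-chart limits are my choices, not the authors'.

```python
import math
import random

def ar1(n, phi, sigma=1.0, seed=42):
    """Simulate an AR(1) process x_t = phi * x_(t-1) + e_t."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def alarm_rate_mr(data):
    """Fraction of points outside 3-sigma individuals-chart limits, with sigma
    estimated from the average moving range (MR-bar / 1.128, d2 for n = 2)."""
    n = len(data)
    mean = sum(data) / n
    mrbar = sum(abs(data[i] - data[i - 1]) for i in range(1, n)) / (n - 1)
    sigma_hat = mrbar / 1.128
    return sum(1 for v in data if abs(v - mean) > 3 * sigma_hat) / n

iid = ar1(5000, phi=0.0)     # independent data: false-alarm rate near 0.0027
corr = ar1(5000, phi=0.8)    # positively autocorrelated, in-control data
print(f"false-alarm rate, independent data: {alarm_rate_mr(iid):.4f}")
print(f"false-alarm rate, AR(1) phi=0.8:    {alarm_rate_mr(corr):.4f}")
```

Positive autocorrelation makes the moving range underestimate the total process variation, so the limits are too tight and the in-control false-alarm rate typically rises by an order of magnitude or more.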

Many methods have been proposed to deal statistically with autocorrelated data. Interest in the area was stimulated by the work of Box and Jenkins, published in 1970 under the title Time Series Analysis: Forecasting and Control, which presented, among several quantitative methods, a methodology for analyzing the behavior of time series. The Box-Jenkins method uses the concept of a filter composed of three components: the autoregressive component (AR), the integration filter (I), and the moving-average component (MA).
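As a sketch of the autoregressive (AR) component of the Box-Jenkins filter, the AR(1) coefficient can be estimated from the lag-1 sample autocorrelation (the Yule-Walker estimate of order 1). For full ARIMA modelling one would normally use a library such as statsmodels; the pure-Python version below, with a hypothetical simulated series and true φ = 0.7, just keeps the idea visible.

```python
import random

def yule_walker_ar1(x):
    """Estimate the AR(1) coefficient phi as the lag-1 sample autocorrelation
    c1 / c0 (the order-1 Yule-Walker estimate)."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n                        # lag-0 autocovariance
    c1 = sum((x[i] - mean) * (x[i - 1] - mean) for i in range(1, n)) / n  # lag-1
    return c1 / c0

# Simulate x_t = 0.7 * x_(t-1) + e_t and recover phi from the data.
rng = random.Random(0)
x, series = 0.0, []
for _ in range(10_000):
    x = 0.7 * x + rng.gauss(0.0, 1.0)
    series.append(x)

phi_hat = yule_walker_ar1(series)
print(f"estimated phi = {phi_hat:.2f}")   # close to the true value 0.7
```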

The reason for monitoring the residuals of a process is that they are independent and identically distributed with mean zero when the process is in control, and they remain independent of possible shifts in the mean when the process goes out of control. According to Zhang (1998), the traditional Shewhart, CUSUM, and EWMA charts may all be applied to the residuals; residual control charts have the advantage that they can be applied to autocorrelated data, even when the data come from nonstationary processes. When a residual control chart is applied to a nonstationary process, it can only be concluded that the process has some deviation in the system, because a nonstationary process has no constant mean and/or constant variance.
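A residual control chart of this kind can be sketched as follows: take the AR(1) coefficient (assumed known here for simplicity; in practice it is estimated), form the residuals e_t = x_t - phi * x_(t-1), and apply ordinary three-sigma Shewhart limits to them. The series and φ = 0.8 are hypothetical.

```python
import math
import random

def residual_chart(x, phi):
    """Residuals e_t = x_t - phi * x_(t-1) of an AR(1) model, with 3-sigma
    Shewhart limits applied to the (approximately independent) residuals."""
    res = [x[i] - phi * x[i - 1] for i in range(1, len(x))]
    m = sum(res) / len(res)
    sd = math.sqrt(sum((e - m) ** 2 for e in res) / (len(res) - 1))
    alarms = [i for i, e in enumerate(res) if abs(e - m) > 3 * sd]
    return res, (m - 3 * sd, m + 3 * sd), alarms

# Hypothetical autocorrelated process, in control throughout.
rng = random.Random(7)
x, series = 0.0, []
for _ in range(1000):
    x = 0.8 * x + rng.gauss(0.0, 1.0)
    series.append(x)

res, (lcl, ucl), alarms = residual_chart(series, phi=0.8)
print(f"limits: ({lcl:.2f}, {ucl:.2f}), alarms: {len(alarms)}")
```

Because the residuals are approximately i.i.d. when the model fits, the chart recovers the nominal low false-alarm rate that a chart on the raw autocorrelated observations would not achieve.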
