Control chart

In statistical process control, the control chart, also known as the Shewhart chart or process-behaviour chart, is a tool to determine whether a manufacturing or business process is in a state of statistical control. If the chart indicates that the process being monitored is not in control, the pattern it reveals can help determine the source of variation to be eliminated to bring the process back into control. A control chart is a specific kind of run chart.

The control chart is one of the seven basic tools of quality control (along with the histogram, Pareto chart, check sheet, cause-and-effect diagram, flowchart, and scatter diagram). See quality management glossary.

History
The control chart was invented by Walter A. Shewhart while working for Bell Labs in the 1920s. The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a business need to reduce the frequency of failures and repairs. By 1920 they had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of common and special causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Dr. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control." Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.

Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control through carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes never produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (such as the Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.

In 1924 or 1925, Shewhart's innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and then became the mathematical advisor to the United States Census Bureau. Over the next half a century, Deming became the foremost champion and exponent of Shewhart's work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander of the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart's thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s.

More recent use and development of control charts in the Shewhart-Deming tradition has been championed by Donald J. Wheeler.

Details
A control chart consists of the following:
 * Points representing averages of measurements of a quality characteristic in samples taken from the process versus time
 * A centre line, drawn at the process mean
 * Upper and lower control limits ("natural process limits") that indicate the threshold at which the process output is considered statistically unlikely

The chart may contain other optional features, including:
 * Upper and lower warning limits, drawn as separate lines, typically two standard deviations above and below the centre line
 * Division into zones, with the addition of rules governing frequencies of observations in each zone
 * Annotation with events of interest, as determined by the Quality Engineer in charge of the process's quality
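The basic elements above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the sample data, centre line, and standard deviation are hypothetical, and in practice the mean and sigma would be estimated from the process itself.

```python
# Minimal sketch of the basic chart elements: centre line, upper and
# lower 3-sigma control limits, and a flag for points beyond the limits.
# All numbers here are made up for illustration.

def control_limits(mean, sigma, k=3):
    """Return (lower, upper) control limits k standard deviations from the mean."""
    return mean - k * sigma, mean + k * sigma

samples = [10.1, 9.8, 10.3, 9.9, 10.0, 13.2, 10.2]  # subgroup means (hypothetical)
centre = 10.0   # long-term process mean (assumed known here)
sigma = 0.5     # common-cause standard deviation (assumed known here)

lcl, ucl = control_limits(centre, sigma)
out_of_control = [(i, x) for i, x in enumerate(samples) if not lcl <= x <= ucl]
print(lcl, ucl)          # 8.5 11.5
print(out_of_control)    # [(5, 13.2)]
```

In a real chart the points would be plotted against time, with the centre line and both limits drawn as horizontal lines.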

If the process is in control, most points will plot within the control limits. Any observations outside the limits, or systematic patterns within them, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special cause. Since increased variation means increased costs, a control chart signaling the presence of a special cause requires immediate investigation.

Note that in practice, the long-term process mean (and hence the centre line) may not coincide exactly with the ideal value (or target) of the quality characteristic, either because the equipment simply cannot hold the process to the desired precision or because it is too costly to put the process on target.

Control charts omit specification limits because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification, when in fact the least-cost course of action is to keep process variation as low as possible. It is generally much easier to put a process on target than it is to keep variability from creeping into the process, and omitting the specification limits reinforces this thinking. Process capability studies do, however, examine the relationship between the natural process limits (which drive the control limits) and the specification limits.

The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it is clear that the process is truly in control. Note that with three-sigma limits, one expects to be signaled approximately once in every 370 points on average, due to common causes alone.

Choice of limits
Shewhart set 3-sigma limits on the following basis.


 * The coarse result of Chebyshev's inequality that, for any probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 1/k².
 * The finer result of the Vysochanskii-Petunin inequality that, for any unimodal probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 4/(9k²).
 * Empirical investigation of sundry probability distributions, which revealed that at least 99% of observations fell within three standard deviations of the mean.
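These three bounds can be compared numerically at k = 3. A quick sketch using only the Python standard library (`erfc` gives the two-sided tail probability of a normal distribution):

```python
from math import erfc, sqrt

k = 3
chebyshev = 1 / k**2                    # bound for any distribution
vysochanskii_petunin = 4 / (9 * k**2)   # bound for any unimodal distribution
normal_tail = erfc(k / sqrt(2))         # exact P(|Z| > k) for a normal distribution

print(round(chebyshev, 4))              # 0.1111
print(round(vysochanskii_petunin, 4))   # 0.0494
print(round(normal_tail, 4))            # 0.0027
```

Even under the weakest assumptions, at most about 11% of points fall outside 3-sigma limits; for unimodal distributions at most about 5%; for the normal distribution only about 0.27%.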

Shewhart summarised the conclusions by saying:

 "... the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating."

Though he initially experimented with limits based on probability distributions, Shewhart ultimately wrote:

 "Some of the earliest attempts to characterise a state of statistical control were inspired by the belief that there existed a special form of frequency function f, and it was early argued that the normal law characterised such a state. When the normal law was found to be inadequate, then generalised functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted."

The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman-Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques. Deming's intention was to seek insights into the cause system of a process "... under a wide range of unknowable circumstances, future and past ...". He claimed that, under such conditions, 3-sigma limits provided "... a rational and economic guide to minimum economic loss ..." from the two errors:


 * Error 1: Ascribe a variation or a mistake to a special cause when in fact the cause belongs to the system (common cause). (Also known as a Type I error.)
 * Error 2: Ascribe a variation or a mistake to the system (common causes) when in fact the cause was special. (Also known as a Type II error.)

Calculation of standard deviation
The standard deviation required for the calculation of control limits is that of the common-cause variation in the process. Hence, the usual estimator based on the sample variance is not used, as it estimates the total squared-error loss from both common and special causes of variation.

An alternative method is to use the relationship between the range of a sample and its standard deviation derived by Leonard H. C. Tippett, an estimator which tends to be less influenced by the extreme observations which typify special-causes.
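For the common case of an individuals chart, this range-based approach estimates sigma as the average moving range divided by the bias-correction constant d2 = 1.128 (the expected range of two standard normal observations). A minimal sketch, with hypothetical measurements:

```python
# Range-based estimate of common-cause sigma for an individuals chart:
# average of successive absolute differences, divided by d2 = 1.128.
# The measurements below are made up for illustration.

D2 = 1.128  # bias-correction constant for subgroups of size 2

def sigma_from_moving_range(data):
    """Estimate sigma from the average moving range of consecutive points."""
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    return sum(moving_ranges) / len(moving_ranges) / D2

measurements = [10.2, 9.9, 10.4, 10.0, 10.1, 9.8, 10.3]
print(round(sigma_from_moving_range(measurements), 3))  # average range 0.35 / 1.128
```

A single extreme value inflates only one or two moving ranges, so this estimator is less distorted by special-cause points than the sample standard deviation computed over all the data.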

Rules for detecting signals
The most common sets are:


 * The Western Electric rules
 * The Wheeler rules
 * The Nelson rules

There has been particular controversy as to how long a run of observations, all on the same side of the centre line, should count as a signal, with 7, 8 and 9 all being advocated by various writers.
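The disputed rule amounts to a scan for the longest run of consecutive points on the same side of the centre line. A small sketch, in which the data, centre line, and threshold of 8 are hypothetical choices for illustration:

```python
# Sketch of one run rule: signal when run_length consecutive points fall
# on the same side of the centre line (7, 8, or 9 depending on the author).
# Data and centre line are hypothetical.

def longest_same_side_run(points, centre):
    """Length of the longest run of consecutive points strictly on one side."""
    longest = current = 0
    previous_side = 0
    for x in points:
        side = 1 if x > centre else (-1 if x < centre else 0)
        if side != 0 and side == previous_side:
            current += 1
        else:
            current = 1 if side != 0 else 0
        previous_side = side
        longest = max(longest, current)
    return longest

def run_signal(points, centre, run_length=8):
    return longest_same_side_run(points, centre) >= run_length

data = [10.1, 10.2, 10.3, 10.1, 10.2, 10.4, 10.2, 10.3, 9.9]
print(run_signal(data, centre=10.0, run_length=8))  # True: eight points above
print(run_signal(data, centre=10.0, run_length=9))  # False under the stricter rule
```

The example also shows why the threshold matters: the same data signal under a run-of-8 rule but not under a run-of-9 rule.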

The most important principle for choosing a set of rules is that the choice be made before the data are inspected. Choosing rules once the data have been seen tends to increase the economic losses arising from error 1, owing to the effects of testing hypotheses suggested by the data.

Alternative bases
In 1935, the British Standards Institution, under the influence of Egon Pearson and against Shewhart's spirit, adopted control charts, replacing 3-sigma limits with limits based on percentiles of the normal distribution. This approach continues to be advocated by John Oakland and others but has been widely criticised by writers in the Shewhart-Deming tradition.

Performance of control charts
When a point falls outside of the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, then that cause should be eliminated if possible. It is known that even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. Since the control limits are evaluated each time a point is added to the chart, it readily follows that every control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.
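The in-control ARL figure follows directly from the per-point false-alarm probability: under the usual assumption of independent, normally distributed points, the run length is geometric, so the ARL is the reciprocal of that probability. A quick check with the standard library:

```python
from math import erfc, sqrt

# Per-point false-alarm probability for 3-sigma limits on normal data,
# and the resulting in-control average run length (ARL = 1/p).
p = erfc(3 / sqrt(2))   # P(|Z| > 3) for a standard normal, ~0.0027
arl = 1 / p
print(round(arl, 1))    # 370.4
```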

Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.

It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart and the CUSUM chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.
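The EWMA statistic illustrates how such charts pool information: each plotted value is a weighted blend of the newest observation and the previous statistic, z_i = λx_i + (1 − λ)z_{i−1}. In the sketch below, the smoothing weight λ = 0.2 and the data are hypothetical; after a small sustained shift, the smoothed statistic drifts steadily toward the new mean even though no individual point is extreme.

```python
# Sketch of the EWMA statistic: an exponentially weighted average that
# accumulates evidence across observations. lam and the data are
# hypothetical choices for illustration.

def ewma(data, lam=0.2, start=None):
    """Return the sequence of EWMA values z_i = lam*x_i + (1-lam)*z_{i-1}."""
    z = data[0] if start is None else start
    values = []
    for x in data:
        z = lam * x + (1 - lam) * z
        values.append(z)
    return values

# A small sustained 1-unit shift after the third point: individual points
# stay well inside typical 3-sigma limits, but the statistic climbs steadily.
shifted = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print([round(z, 3) for z in ewma(shifted, lam=0.2, start=0.0)])
```

In an actual EWMA chart, these smoothed values are compared against their own (narrower) control limits, which is what produces the shorter out-of-control ARLs for small shifts.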

Criticisms
Several authors have criticised the control chart on the grounds that it violates the likelihood principle. However, the principle is itself controversial and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak.

Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because run lengths usually follow a geometric distribution, which has high variability, so the average alone can be a misleading summary.