A LinkedIn discussion started by Tham Nguyen Khoa asks:

Why [are] control limits on control chart are [sic] drawn at 3σ?

Control limits on a control chart are commonly drawn at 3σ from the center line because 3-sigma limits are a good balance point between two types of errors:

Type I or alpha errors occur when a point falls outside the control limits even though no special cause is operating. The result is a witch hunt for special causes and adjustments made here and there. This tampering usually distorts a stable process and wastes time and energy.

Type II or beta errors occur when you miss a special cause because the chart isn’t sensitive enough to detect it. In this case, you go along unaware that the problem exists and are therefore unable to root it out.

Are there any more reasons?
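
For reference, here is the textbook arithmetic behind that trade-off: a quick sketch, assuming (purely for illustration) a stable process whose output is normally distributed, of how often points fall outside k-sigma limits by chance alone.

```python
# Sketch: false-alarm rate of k-sigma limits on a stable process,
# assuming (for illustration only) normally distributed output.
from scipy.stats import norm

for k in (1, 2, 3):
    alpha = 2 * norm.sf(k)  # chance a single point falls beyond +/- k sigma by chance
    print(f"{k}-sigma limits: false-alarm rate per point ≈ {alpha:.4%}")
```

Under that normality assumption, 3-sigma limits signal by chance only about 0.27% of the time (roughly 1 point in 370); tightening the limits to 2 sigma raises that to roughly 1 point in 22, which is exactly the kind of over-sensitivity that invites tampering.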

The discussion goes on at great length (48 comments as of this writing), but I’ll just post my comment here:

Concepts like Type I and Type II errors apply to enumerative statistics. Control charts are analytic statistical tools, so these terms do not strictly apply here. Type I and Type II error rates can be stated with precision because enumerative inferences apply to a static population. Analytic statistics, in contrast, are used to make inferences about the future performance of a dynamic process, and errors related to inferences about the future can never be precisely calculated.

That being said, it is certainly true that tampering occurs when a process that is not being influenced by special causes of variation is adjusted as if it were, and that tampering makes matters worse. When we want to determine whether a special cause is present in a process, we use data to help us decide. No matter what the data show, there is always a chance that we will mistakenly conclude that a special cause exists (or doesn’t exist). It’s obvious that the further a data point is from the “norm,” the smaller the probability that we’ll mistakenly conclude that a special cause is present. Shewhart did not base control limits on precise calculations of Type I or Type II error. He based them on the fact that, in practice, engineers at Western Electric could easily identify the special cause of variation when observations fell 3 or more sigma from the long-term mean. They were more challenged to find a special cause for observations closer to the mean.
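
For concreteness, here is a minimal sketch of how 3-sigma limits are commonly computed for an individuals (XmR) chart, using the conventional average-moving-range estimate of sigma; the data values below are invented for illustration.

```python
# Minimal sketch: 3-sigma limits for an individuals (XmR) chart.
# The observations are made-up numbers, used only to show the arithmetic.
observations = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3, 10.1]

center = sum(observations) / len(observations)
moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
sigma_hat = mr_bar / 1.128        # d2 bias-correction constant for subgroups of size 2

ucl = center + 3 * sigma_hat      # upper control limit
lcl = center - 3 * sigma_hat      # lower control limit
print(f"center = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```

Points beyond the UCL or LCL are the ones that, in Shewhart’s experience, were worth the trouble of a special-cause search.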

Think about it like this: if you created a list of everything that caused a process to change even a small amount, you would have a very, very long list. You could never pin down the one big thing from this long list, because there is no one big thing. But if you asked for a list of everything that caused a process to change a lot, say by 3 sigma, that list would be relatively short. In between these two extremes are changes of intermediate magnitude and lists that vary between the long “any change” list and the short “3-sigma change” list. Just where to draw the line depends on a large number of things, such as the cost of checking out the possible causes on the list, the cost of missing something, how often changes of a given magnitude occur, and so on. As a default starting point we can use 3-sigma to trigger our special cause search, if for no other reason than that this has worked pretty well for 93 years. But that doesn’t mean it should be accepted as dogma. What we are solving for are lines (control limits) that minimize total costs. In the end, it’s a management decision, hopefully one based on facts and data.
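
To make that cost-minimization framing concrete, here is a deliberately toy sketch of the “where to draw the line” decision. Every cost, probability, and shift size below is an invented assumption; the point is only the shape of the calculation, not the specific numbers.

```python
# Toy sketch: comparing k-sigma limit choices by expected cost per point.
# All inputs are invented assumptions, chosen only to illustrate the trade-off.
from scipy.stats import norm

investigation_cost = 500.0   # assumed cost of chasing a signal
miss_cost = 2000.0           # assumed cost of failing to catch a real shift
p_shifted = 0.01             # assumed fraction of points affected by a special cause
shift = 2.0                  # assumed size of that shift, in sigma units

for k in (2.0, 2.5, 3.0, 3.5, 4.0):
    false_alarm = 2 * norm.sf(k)                        # signal when nothing changed
    detect = norm.sf(k - shift) + norm.cdf(-k - shift)  # signal when the shift is real
    expected_cost = ((1 - p_shifted) * false_alarm * investigation_cost
                     + p_shifted * (1 - detect) * miss_cost)
    print(f"k = {k}: expected cost per point ≈ {expected_cost:.2f}")
```

With these particular invented numbers the minimum happens to land near k = 3; change the assumed costs or how often special causes appear and it moves, which is exactly why the choice of limits is ultimately a management decision.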

