
# Sigma Level Versus Process Capability Estimates

## Question from a Lean Six Sigma Green Belt Student

I am a little confused about the process sigma level calculation:

- The lesson presents a first method based on the rolled throughput yield (RTY): first we compute the normalized yield, then derive the corresponding Z value, and finally add 1.5 to account for long-term process variation. This gives the process sigma level.
- A second method is proposed based on process capability analysis, specifically using Z_{U} and Z_{L}. These values are used to evaluate the overall defective rate, from which we extract the Z value. But this time we do not add 1.5 to get the sigma level. Why?
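The two calculations in the question can be sketched in a few lines of Python using the standard library's normal distribution. This is a minimal illustration, not the lesson's exact procedure; the yield, mean, standard deviation, and spec limits below are made-up example numbers.

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

# Method 1: sigma level from a normalized yield (RTY-based).
# A normalized yield of 95% is assumed purely for illustration.
normalized_yield = 0.95
z_short_term = z.inv_cdf(normalized_yield)   # Z for that yield, ~1.645
sigma_level = z_short_term + 1.5             # add the 1.5 sigma shift
print(f"Method 1 sigma level: {sigma_level:.2f}")

# Method 2: capability-style Z from spec limits (no 1.5 added).
# Hypothetical process: mean 10.0, sd 1.0, specs 7.0 to 13.0.
mean, sd, lsl, usl = 10.0, 1.0, 7.0, 13.0
z_u = (usl - mean) / sd                      # Z to the upper spec limit
z_l = (mean - lsl) / sd                      # Z to the lower spec limit
p_defective = (1 - z.cdf(z_u)) + (1 - z.cdf(z_l))  # both tail areas
z_benchmark = z.inv_cdf(1 - p_defective)     # overall Z, no shift applied
print(f"Method 2 Z (capability): {z_benchmark:.3f}")
```

The only structural difference between the two results is the `+ 1.5` in method 1, which is exactly the point of the question.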

Thank you for your feedback,

## Answer

It’s a difference in terminology: *process capability* versus *process sigma*. Process capability analysis has been around a lot longer than Six Sigma. It does not use the 1.5 sigma “fudge factor” used in process sigma calculations. In the lesson on process capability analysis I note that “*in Six Sigma we usually assume that the customer’s actual experience will be worse than what we predict in capability analysis. We add a fudge factor that assumes that the process will shift by approximately 1.5 standard deviations.*” However, when calculating process capability, statistical software such as Minitab will just show what the capability is based on the actual data, i.e., the traditional way. If you wish you can convert Minitab’s PPM data to process sigma by using this calculator. This will include the 1.5 sigma shift.

Now let me confuse the issue even further. Let’s say that Minitab’s process capability analysis (normal) shows you an expected overall performance of 50,000 PPM, i.e., 5%. If you look up Z for a 5% tail area in a normal table, you will find it is approximately 1.645. However, if you enter 50,000 PPM into the sigma calculator, it will return a process sigma value of 3.14, because it adds 1.5 sigma to the 1.645 and rounds the result down. One might ask why it does this. After all, it makes the process look better rather than worse. The answer is that *the 1.5 sigma adjustment should not be applied to PPM estimates from process capability analysis*. The 1.5 sigma shift is meant to be applied to failures observed in the field to extrapolate back to what the capability estimate originally was. For the above example, if we see a field failure rate of 5%, then the capability of the production process was probably much better: about 3.14 sigma.
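The 50,000 PPM example can be checked numerically. This sketch reproduces the arithmetic described above (the round-down step mimics the calculator's behavior as described; it is an assumption about that tool, not documented behavior).

```python
import math
from statistics import NormalDist

ppm = 50_000                  # expected overall performance from the study
p = ppm / 1_000_000           # 5% defective
z_value = NormalDist().inv_cdf(1 - p)   # traditional capability Z, ~1.645
sigma = z_value + 1.5                   # calculator adds the 1.5 sigma shift
sigma_rounded = math.floor(sigma * 100) / 100  # rounded down, per the text
print(f"Z = {z_value:.3f}, process sigma = {sigma_rounded}")  # Z = 1.645, process sigma = 3.14
```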

## 2 responses to “Sigma Level Versus Process Capability Estimates”

How is applying an arbitrary adjustment to a number (either way) helpful? There is no empirical basis for 1.5 sigma, nor is there a statistical reason for it. It is not only the Minitab calculator that completely disregards sigma shift, e.g., https://www.gigacalculator.com/calculators/six-sigma-dpmo-calculator.php also reports only estimates based on actual numbers, not actual numbers plus an arbitrary “fudge factor”. It’s about time sigma shift is dropped as a concept with any relevance to statistical process control.

You’re preaching to the choir. If it were up to me, I’d drop the 1.5-sigma shift too. Any idea as to how to accomplish this? After 30 years it seems embedded into the very fabric of Six Sigma.
