I am often asked for my opinion on the future of Six Sigma.

Six Sigma continues on despite rumors of its death, which began circulating shortly after its birth in the 1980s. For those doing it right (arguably a minority, but a sizable one), the Six Sigma approach has evolved into a new way to lead and manage an organization. Many have rebranded the approach to shed the baggage that Six Sigma has accumulated during its 27-year run as a “fad.” This new approach to leadership and management is distinguished from the traditional approach by four characteristics:

  1. a balanced approach to stakeholder demands (versus managing primarily for shareholders),
  2. a balance of short- and long-term goals (versus a focus on quarterly results),
  3. an emphasis on facts and data (versus reliance on expert opinion), and
  4. a “horizontal” value stream perspective (versus a top-down command-and-control hierarchy).

Any one of these things would be a game-changer. Taken as a whole, they would ordinarily be considered revolutionary. However, probably because the changes unfolded over nearly three decades, their impact hasn’t been widely recognized. Instead, as organizations using this approach have pushed it upstream to suppliers and downstream to customers, its adoption has slowly spread from United States manufacturers to all industries globally. As a result, it is now commonplace for career guidance counselors to advise people to become Six Sigma certified. Some advise recipients of bachelor’s degrees to become Six Sigma certified before pursuing master’s degrees.

The Next Big Thing: Big Data

One thing I’d like to see embraced by Six Sigma is the Big Data revolution, a theory-free approach to using the data in corporate data warehouses. Big Data is akin to part of the Measure phase of a Six Sigma project, except that instead of using information in a data warehouse to test ad hoc theories, Big Data crunches the warehouse’s contents to look for correlations. The correlations are then used for planning activities, and the cause of a correlation is usually not pursued. This is very different from the use of data in a Six Sigma project, where the analysis is focused on achieving a particular goal. I don’t see Big Data as a competitor but as an opportunity for the Six Sigma community to move into another area. After all, analysis is a skill set Six Sigma practitioners already have. We need to add a few new tools to our toolkit (e.g., data mining tools), but these are similar to the statistical tools we already use.
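
To make the contrast concrete, here is a minimal sketch of what theory-free correlation mining might look like in Python. Everything in it is hypothetical (the warehouse table, the query, the column contents); it simply ranks every pairwise correlation in an extract, with no hypothesis in mind, which is essentially what a Big Data exercise does before anyone asks why:

    import numpy as np
    import pandas as pd

    def top_correlations(df: pd.DataFrame, n: int = 10) -> pd.Series:
        """Rank all pairwise correlations in a table by absolute strength."""
        corr = df.corr(numeric_only=True)                # every pairwise Pearson r
        mask = np.triu(np.ones(corr.shape, dtype=bool))  # drop diagonal and duplicates
        pairs = corr.mask(mask).stack()                  # (var_a, var_b) -> r
        return pairs.reindex(pairs.abs().sort_values(ascending=False).index)[:n]

    # Hypothetical usage on an operational extract:
    # extract = pd.read_sql("SELECT * FROM call_center_facts", connection)
    # print(top_correlations(extract))  # strongest correlations, causes unknown

A Six Sigma project starts at the other end: a specific goal, a specific theory, and a deliberate test of that theory against the data.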

Six Sigma and the quality profession can add a dimension to Big Data by filling in the gap between correlation and causation. By employing our ability to assemble interdisciplinary teams and by using the tools of experimental design, we can go beyond Big Data’s casual acceptance of correlation and answer the all-important question: why does this correlation exist? This is essential if we are to avoid the many traps that come from blindly acting on correlation without a deeper understanding of cause and effect. For example, a call center using Big Data discovered that callers who were kept on hold for as long as an hour were no less satisfied with their experience than callers whose calls were answered immediately, provided their issue was resolved. Further research into this unexpected result revealed the missing variable: many callers hung up rather than wait an hour for their calls to be answered, and customers who abandoned their calls were never asked to complete the after-call survey. When these callers were contacted and their satisfaction scores added to the data, the original finding not only disappeared, it was reversed: customer satisfaction declined as hold time increased.
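
A toy simulation shows how the trap works. All of the numbers below are assumptions chosen for illustration, not the call center’s actual data: answered callers are satisfied when their issue is resolved, regardless of hold time, while the chance of hanging up grows with the wait:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    hold = rng.uniform(0, 60, n)         # minutes on hold (assumed range)
    patience = rng.exponential(30.0, n)  # each caller's tolerance for waiting
    abandoned = hold > patience          # impatient callers hang up, unsurveyed

    # Answered callers are satisfied when the issue is resolved (assumed 90%),
    # independent of how long they waited; abandoned callers are not satisfied.
    satisfied = (rng.random(n) < 0.9) & ~abandoned

    surveyed = ~abandoned
    r_surveyed = np.corrcoef(hold[surveyed], satisfied[surveyed].astype(float))[0, 1]
    r_all = np.corrcoef(hold, satisfied.astype(float))[0, 1]
    print(f"surveyed callers only: r = {r_surveyed:+.2f}")  # roughly zero
    print(f"all callers:           r = {r_all:+.2f}")       # clearly negative

Only when the missing callers are restored to the data does the true relationship between hold time and satisfaction appear.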

Big Data also misses the boat in a number of other ways that Six Sigma and quality professionals can address. There are inherent problems with relying solely on data in data warehouses. These data are generally operational data, not data from planned experiments, so they are often missing important variables. When variables are not manipulated in a planned way, statisticians often cannot disentangle their interrelationships or properly explore the interactions between them. And because operational processes are carefully controlled, the variables involved don’t vary much, leading to the “range restriction effect” that hides underlying relationships. These and other shortcomings of “happenstance data” analysis are well known to Black Belts and Quality Engineers.
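
The range restriction effect in particular takes only a few lines to demonstrate. In this sketch (assumed numbers throughout), a genuine input-output relationship that is obvious over the input’s full range all but vanishes inside tight operational control limits:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 100, 5_000)          # process input over its full range
    y = 0.5 * x + rng.normal(0, 10, 5_000)  # genuine linear effect plus noise

    in_control = (x > 48) & (x < 52)        # what a well-controlled process records
    r_full = np.corrcoef(x, y)[0, 1]
    r_restricted = np.corrcoef(x[in_control], y[in_control])[0, 1]
    print(f"full range:       r = {r_full:.2f}")        # strong (about 0.8)
    print(f"controlled range: r = {r_restricted:.2f}")  # near zero

A designed experiment deliberately varies the inputs across their full ranges precisely to avoid this blind spot.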

Speaking of skilled professionals, the obvious group for addressing Big Data issues is statisticians. However, statisticians have been in notoriously short supply for decades (if not always). Six Sigma “belts,” quality engineers, and reliability engineers are trained in a significant subset of useful statistical techniques. This pool of skilled workers can be leveraged to greatly extend the reach of the few statisticians available in most organizations.

