Saturday, January 18, 2020

Quality management

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Quality_management
Quality management ensures that an organization, product or service is consistent. It has four main components: quality planning, quality assurance, quality control and quality improvement. Quality management is focused not only on product and service quality, but also on the means to achieve it. Quality management, therefore, uses quality assurance and control of processes as well as products to achieve more consistent quality. What a customer wants and is willing to pay for determines quality. It is a written or unwritten commitment to a known or unknown consumer in the market. Thus, quality can be defined as fitness for intended use or, in other words, how well the product performs its intended function.

Evolution

Quality management is a recent phenomenon but important for an organization. Civilizations that supported the arts and crafts allowed clients to choose goods meeting higher quality standards rather than normal goods. In societies where arts and crafts were the responsibility of master craftsmen or artists, these masters would lead their studios and train and supervise others. The importance of craftsmen diminished as mass production and repetitive work practices were instituted, with the aim of producing large numbers of the same goods. The first proponent of this approach in the US was Eli Whitney, who proposed (interchangeable) parts manufacture for muskets, producing identical components and creating a musket assembly line. The next step forward was promoted by several people including Frederick Winslow Taylor, a mechanical engineer who sought to improve industrial efficiency. He is sometimes called "the father of scientific management." He was one of the intellectual leaders of the Efficiency Movement, and part of his approach laid a further foundation for quality management, including aspects like standardization and adopting improved practices. Henry Ford was also important in bringing process and quality management practices into operation in his assembly lines. In Germany, Karl Benz, often called the inventor of the motor car, was pursuing similar assembly and production practices, although real mass production was only properly initiated at Volkswagen after World War II. From this period onwards, North American companies focused predominantly on producing at lower cost with increased efficiency.

Walter A. Shewhart made a major step in the evolution towards quality management by creating a method for quality control for production, using statistical methods, first proposed in 1924. This became the foundation for his ongoing work on statistical quality control. W. Edwards Deming later applied statistical process control methods in the United States during World War II, thereby successfully improving quality in the manufacture of munitions and other strategically important products.

Quality leadership from a national perspective has changed over the past decades. After the Second World War, Japan decided to make quality improvement a national imperative as part of rebuilding its economy, and sought the help of Shewhart, Deming and Juran, amongst others. W. Edwards Deming championed Shewhart's ideas in Japan from 1950 onwards. He is probably best known for his management philosophy establishing quality, productivity, and competitive position. He formulated 14 points of attention for managers, which are a high-level abstraction of many of his deep insights; they should be interpreted by learning and understanding those deeper insights. These 14 points include key concepts such as:
  • Break down barriers between departments
  • Management should learn their responsibilities, and take on leadership
  • Supervision should be to help people and machines and gadgets to do a better job
  • Improve constantly and forever the system of production and service
  • Institute a vigorous program of education and self-improvement
In the 1950s and 1960s, Japanese goods were synonymous with cheapness and low quality, but over time their quality initiatives began to succeed, with Japan achieving high levels of quality in products from the 1970s onward. For example, Japanese cars regularly top the J.D. Power customer satisfaction ratings. In the 1980s Deming was asked by Ford Motor Company to start a quality initiative after it realized that it was falling behind Japanese manufacturers. A number of highly successful quality initiatives have been invented by the Japanese (see, for example, Genichi Taguchi, QFD, and the Toyota Production System). Many of these methods not only provide techniques but also have an associated quality culture (i.e. people factors). These methods are now adopted by the same western countries that decades earlier derided Japanese methods.

Customers recognize that quality is an important attribute in products and services. Suppliers recognize that quality can be an important differentiator between their own offerings and those of competitors (quality differentiation is also called the quality gap). In the past two decades this quality gap has been greatly reduced between competitive products and services. This is partly due to the contracting (also called outsourcing) of manufacture to countries like China and India, as well as the internationalization of trade and competition. These countries, among many others, have raised their own standards of quality in order to meet international standards and customer demands. The ISO 9000 series of standards is probably the best-known international standard for quality management.
Customer satisfaction is the backbone of quality management. Setting up a million-dollar company without taking care of the needs of its customers will ultimately decrease its revenue.

There are many books available on quality management. Some themes have become more significant including quality culture, the importance of knowledge management, and the role of leadership in promoting and achieving high quality. Disciplines like systems thinking are bringing more holistic approaches to quality so that people, process and products are considered together rather than independent factors in quality management.

The influence of quality thinking has spread to non-traditional applications outside the walls of manufacturing, extending into service sectors and into areas such as sales, marketing and customer service.

Principles

The International Standard for Quality Management (ISO 9001:2015) adopts a number of management principles that can be used by top management to guide their organizations towards improved performance.

Customer focus

The primary focus of quality management is to meet customer requirements and to strive to exceed customer expectations. 

Rationale

Sustained success is achieved when an organization attracts and retains the confidence of customers and other interested parties on whom it depends. Every aspect of customer interaction provides an opportunity to create more value for the customer. Understanding current and future needs of customers and other interested parties contributes to sustained success of an organization.

Leadership

Leaders at all levels establish unity of purpose and direction and create conditions in which people are engaged in achieving the organization’s quality objectives. Leadership has to drive the changes required for quality improvement and encourage a sense of quality throughout the organisation.

Rationale

Creation of unity of purpose and direction and engagement of people enable an organization to align its strategies, policies, processes and resources to achieve its objectives.
 

Engagement of people

Competent, empowered and engaged people at all levels throughout the organization are essential to enhance its capability to create and deliver value.

Rationale

To manage an organization effectively and efficiently, it is important to involve all people at all levels and to respect them as individuals. Recognition, empowerment and enhancement of competence facilitate the engagement of people in achieving the organization’s quality objectives.

Process approach

Consistent and predictable results are achieved more effectively and efficiently when activities are understood and managed as interrelated processes that function as a coherent system.

Rationale

The quality management system consists of interrelated processes. Understanding how results are produced by this system enables an organization to optimize the system and its performance.

Improvement

Successful organizations have an ongoing focus on improvement.

Rationale

Improvement is essential for an organization to maintain current levels of performance, to react to changes in its internal and external conditions and to create new opportunities.

Evidence based decision making

Decisions based on the analysis and evaluation of data and information are more likely to produce desired results.

Rationale

Decision making can be a complex process, and it always involves some uncertainty. It often involves multiple types and sources of inputs, as well as their interpretation, which can be subjective. It is important to understand cause-and-effect relationships and potential unintended consequences. Facts, evidence and data analysis lead to greater objectivity and confidence in decision making.

Relationship management

For sustained success, an organization manages its relationships with interested parties, such as suppliers and retailers.

Rationale

Interested parties influence the performance of an organization and its industry. Sustained success is more likely to be achieved when the organization manages relationships with all of its interested parties to optimize their impact on its performance. Relationship management with its supplier and partner networks is of particular importance.

Criticism

The social scientist Bettina Warzecha (2017) describes the central concepts of Quality Management (QM), such as process orientation, controllability, and zero defects, as modern myths. She demonstrates that zero-error processes and the associated illusion of controllability involve the epistemological problem of self-referentiality. The emphasis on processes in QM also ignores the artificiality and thus arbitrariness of the difference between structure and process. Above all, the complexity of management cannot be reduced to standardized (mathematical) procedures. According to her, the risks and negative side effects of QM are usually greater than the benefits (see also brand eins, 2010).

Quality improvement and more

The PDCA cycle
 
There are many methods for quality improvement. These cover product improvement, process improvement and people based improvement. In the following list are methods of quality management and techniques that incorporate and drive quality improvement:
  1. ISO 9004:2008 — guidelines for performance improvement.
  2. ISO 9001:2015 - a certified quality management system (QMS) for organisations who want to prove their ability to consistently provide products and services that meet the needs of their customers and other relevant stakeholders.
  3. ISO 15504-4: 2005 — information technology — process assessment — Part 4: Guidance on use for process improvement and process capability determination.
  4. QFD — quality function deployment, also known as the house of quality approach.
  5. Kaizen — 改善, Japanese for change for the better; the common English term is continuous improvement.
  6. Zero Defect Program — created by NEC Corporation of Japan, based upon statistical process control and one of the inputs for the inventors of Six Sigma.
  7. Six Sigma — 6σ, Six Sigma combines established methods such as statistical process control, design of experiments and failure mode and effects analysis (FMEA) in an overall framework.
  8. PDCA — plan, do, check, act cycle for quality control purposes. (Six Sigma's DMAIC method (define, measure, analyze, improve, control) may be viewed as a particular implementation of this.)
  9. Quality circle — a group (people oriented) approach to improvement.
  10. Taguchi methods — statistical oriented methods including quality robustness, quality loss function, and target specifications.
  11. The Toyota Production System — reworked in the west into lean manufacturing.
  12. Kansei Engineering — an approach that focuses on capturing customer emotional feedback about products to drive improvement.
  13. TQM — total quality management is a management strategy aimed at embedding awareness of quality in all organizational processes. It was first promoted in Japan with the Deming Prize, which was adopted and adapted in the USA as the Malcolm Baldrige National Quality Award and in Europe as the European Foundation for Quality Management award (each with its own variations).
  14. TRIZ — meaning "theory of inventive problem solving"
  15. BPR — business process reengineering, a management approach aiming at optimizing the workflows and processes within an organisation.
  16. OQRM — Object-oriented Quality and Risk Management, a model for quality and risk management.
  17. Top Down & Bottom Up Approaches—Leadership approaches to change
Proponents of each approach have sought to improve them as well as apply them for small, medium and large gains. A simple one is the process approach, which forms the basis of the ISO 9001:2008 Quality Management System standard and is one of the standard's 'Eight principles of Quality management'. Thareja writes about the mechanism and benefits: "The process (proficiency) may be limited in words, but not in its applicability. While it fulfills the criteria of all-round gains: in terms of the competencies augmented by the participants; the organisation seeks newer directions to the business success, the individual brand image of both the people and the organisation, in turn, goes up. The competencies which were hitherto rated as being smaller, are better recognized and now acclaimed to be more potent and fruitful". The more complex quality improvement tools have been tailored for enterprise types they were not originally targeted at. For example, Six Sigma was designed for manufacturing but has spread to service enterprises. Each of these approaches and methods has met with success but also with failures.

Some of the common differentiators between success and failure include commitment, knowledge and expertise to guide improvement, the scope of change/improvement desired (big-bang changes tend to fail more often than smaller changes) and adaptation to enterprise cultures. For example, quality circles do not work well in every enterprise (and are even discouraged by some managers), and relatively few TQM-participating enterprises have won the national quality awards.

There have been well publicized failures of BPR, as well as Six Sigma. Enterprises therefore need to consider carefully which quality improvement methods to adopt, and certainly should not adopt all those listed here.

It is important not to underestimate the people factors, such as culture, in selecting a quality improvement approach. Any improvement (change) takes time to implement, gain acceptance and stabilize as accepted practice. Improvement must allow pauses between implementing new changes so that the change is stabilized and assessed as a real improvement, before the next improvement is made (hence continual improvement, not continuous improvement).

Improvements that change the culture take longer as they have to overcome greater resistance to change. It is easier and often more effective to work within the existing cultural boundaries and make small improvements (that is 'Kaizen') than to make major transformational changes. Use of Kaizen in Japan was a major reason for the creation of Japanese industrial and economic strength.

On the other hand, transformational change works best when an enterprise faces a crisis and needs to make major changes in order to survive. In Japan, the land of Kaizen, Carlos Ghosn led a transformational change at Nissan Motor Company which was in a financial and operational crisis. Well organized quality improvement programs take all these factors into account when selecting the quality improvement methods. 

Quality standards


ISO standards

The International Organization for Standardization (ISO) created the Quality Management System (QMS) standards in 1987. They were the ISO 9000:1987 series of standards comprising ISO 9001:1987, ISO 9002:1987 and ISO 9003:1987; which were applicable in different types of industries, based on the type of activity or process: designing, production or service delivery.

The standards are reviewed every few years by the International Organization for Standardization. The version in 1994 was called the ISO 9000:1994 series; consisting of the ISO 9001:1994, 9002:1994 and 9003:1994 versions.

The last major revision was in the year 2000 and the series was called ISO 9000:2000 series. The ISO 9002 and 9003 standards were integrated into one single certifiable standard: ISO 9001:2000. After December 2003, organizations holding ISO 9002 or 9003 standards had to complete a transition to the new standard.

ISO released a minor revision, ISO 9001:2008, on 14 October 2008. It contains no new requirements. Many of the changes were to improve consistency in grammar, facilitating translation of the standard into other languages for use by the over 950,000 certified organizations in the 175 countries (as of December 2007) that use the standard.

The ISO 9004:2009 document gives guidelines for performance improvement over and above the basic standard (ISO 9001:2000). This standard provides a measurement framework for improved quality management, similar to and based upon the measurement framework for process assessment.

The Quality Management System standards created by ISO are meant to certify the processes and the system of an organization, not the product or service itself. ISO 9000 standards do not certify the quality of the product or service.

In 2005 the International Organization for Standardization released a standard, ISO 22000, meant for the food industry. This standard covers the values and principles of ISO 9000 and the HACCP standards. It provides a single integrated standard for the food industry and is expected to become more popular in the coming years in that industry.

ISO has also released standards for other industries. For example, Technical Standard TS 16949 defines requirements in addition to those in ISO 9001:2008 specifically for the automotive industry.

ISO has a number of standards that support quality management. One group describes processes (including ISO/IEC 12207 and ISO/IEC 15288) and another describes process assessment and improvement (ISO 15504).

CMMI and IDEAL methods

The Software Engineering Institute has its own process assessment and improvement methods, called CMMI (Capability Maturity Model Integration) and IDEAL respectively.

Capability Maturity Model Integration (CMMI) is a process improvement training and appraisal program and service administered and marketed by Carnegie Mellon University and required by many DoD and U.S. Government contracts, especially in software development. Carnegie Mellon University claims CMMI can be used to guide process improvement across a project, division, or an entire organization. Under the CMMI methodology, processes are rated according to their maturity levels, which are defined as: Initial, Managed, Defined, Quantitatively Managed, and Optimizing. The currently supported version is CMMI Version 1.3. CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.

Three constellations of CMMI are:
  • Product and service development (CMMI for Development)
  • Service establishment, management, and delivery (CMMI for Services)
  • Product and service acquisition (CMMI for Acquisition).
CMMI Version 1.3 was released on November 1, 2010. This release is noteworthy because it updates all three CMMI models (CMMI for Development, CMMI for Services, and CMMI for Acquisition) to make them consistent and to improve their high maturity practices. The CMMI Product Team has reviewed more than 1,150 change requests for the models and 850 for the appraisal method. 

As part of its mission to transition mature technology to the software community, the SEI has transferred CMMI-related products and activities to the CMMI Institute, a 100%-controlled subsidiary of Carnegie Innovations, Carnegie Mellon University’s technology commercialization enterprise.

Other quality management information

  • VDA: Organisation developed for the German automobile industry
  • AVSQ: Organisation developed for the Italian automobile industry
  • EAQF: Organisation developed for the French automobile industry
  • QS-9000: Standard developed for the US automobile industry
  • ISO 19011: Standard developed for auditing a management system (international)

Awards

  • EFQM Excellence Award (formerly the European Quality Award): European award for total quality management and organizational excellence, presented since 1991 by the European Foundation for Quality Management (EFQM). www.efqm.org Similar awards are presented by the EFQM's national partner organisations across Europe; for example, in the UK the British Quality Foundation (BQF) runs the UK Excellence Awards, based on the EFQM Excellence Model, an organizational framework. www.bqf.org.uk
  • Deming Prize: Japanese award for quality management, awarded since 1951. www.deming.org
  • Malcolm Baldrige National Quality Award: US award for performance excellence, created in 1987.

Certification

Since 1995, the American Society for Quality has offered a Certified Manager of Quality/Organizational Excellence (CMQ/OE). This was known until 2005 as the Certified Quality Manager (CQM). (ASQ)
 

Quality management software

Quality Management Software is a category of technologies used by organizations to manage the delivery of high-quality products. Solutions range in functionality; however, with the use of automation capabilities they typically have components for managing internal and external risk, compliance, and the quality of processes and products. Pre-configured and industry-specific solutions are available and generally require integration with existing IT architecture applications such as ERP, SCM, CRM, and PLM.

Enterprise Quality Management Software
 
The intersection of technology and quality management software prompted the emergence of a new software category: Enterprise Quality Management Software (EQMS). EQMS is a platform for cross-functional communication and collaboration that centralizes, standardizes, and streamlines quality management data from across the value chain. The software breaks down functional silos created by traditionally implemented standalone and targeted solutions. Supporting the proliferation and accessibility of information across supply chain activities, design, production, distribution, and service, it provides a holistic viewpoint for managing the quality of products and processes.

Quality terms

  • Quality Improvement can be distinguished from Quality Control in that Quality Improvement is the purposeful change of a process to improve the reliability of achieving an outcome.
  • Quality Control is the ongoing effort to maintain the integrity of a process to maintain the reliability of achieving an outcome.
  • Quality Assurance is the planned or systematic actions necessary to provide enough confidence that a product or service will satisfy the given requirements.


Six Sigma

From Wikipedia, the free encyclopedia

Six Sigma (6σ) is a set of techniques and tools for process improvement. It was introduced by American engineer Bill Smith while working at Motorola in 1986. Jack Welch made it central to his business strategy at General Electric in 1995. A six sigma process is one in which 99.99966% of all opportunities to produce some feature of a part are statistically expected to be free of defects.

Six Sigma strategies seek to improve the quality of the output of a process by identifying and removing the causes of defects and minimizing variability in manufacturing and business processes. It uses a set of quality management methods, mainly empirical, statistical methods, and creates a special infrastructure of people within the organization who are experts in these methods. Each Six Sigma project carried out within an organization follows a defined sequence of steps and has specific value targets, for example: reduce process cycle time, reduce pollution, reduce costs, increase customer satisfaction, and increase profits.

The term Six Sigma (capitalized because it was written that way when registered as a Motorola trademark on December 28, 1993) originated from terminology associated with statistical modeling of manufacturing processes. The maturity of a manufacturing process can be described by a sigma rating indicating its yield or the percentage of defect-free products it creates—specifically, within how many standard deviations of a normal distribution the fraction of defect-free outcomes corresponds to. Motorola set a goal of "six sigma" for all of its manufacturing.

Doctrine

The common Six Sigma symbol

Six Sigma doctrine asserts:
  • Continuous efforts to achieve stable and predictable process results (e.g. by reducing process variation) are of vital importance to business success.
  • Manufacturing and business processes have characteristics that can be defined, measured, analyzed, improved, and controlled.
  • Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.
Features that set Six Sigma apart from previous quality-improvement initiatives include:
  • A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.
  • An increased emphasis on strong and passionate management leadership and support.
  • A clear commitment to making decisions on the basis of verifiable data and statistical methods, rather than assumptions and guesswork.
The term "six sigma" comes from statistics and is used in statistical quality control, which evaluates process capability. Originally, it referred to the ability of manufacturing processes to produce a very high proportion of output within specification. Processes that operate with "six sigma quality" over the short term are assumed to produce long-term defect levels below 3.4 defects per million opportunities (DPMO). The 3.4 dpmo is based on a "shift" of ± 1.5 sigma explained by Dr. Mikel J. Harry. This figure is based on the tolerance in the height of a stack of discs. Six Sigma's implicit goal is to improve all processes, but not to the 3.4 DPMO level necessarily. Organizations need to determine an appropriate sigma level for each of their most important processes and strive to achieve these. As a result of this goal, it is incumbent on management of the organization to prioritize areas of improvement.

"Six Sigma" was registered June 11, 1991 as U.S. Service Mark 1,647,704. In 2005 Motorola attributed over US$17 billion in savings to Six Sigma.

Other early adopters of Six Sigma include Honeywell and General Electric, where Jack Welch introduced the method. By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality.

In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to create a methodology named Lean Six Sigma. The Lean Six Sigma methodology views lean manufacturing, which addresses process flow and waste issues, and Six Sigma, with its focus on variation and design, as complementary disciplines aimed at promoting "business and operational excellence".

In 2011, the International Organization for Standardization (ISO) published the first standard, ISO 13053:2011, defining a Six Sigma process. Other standards have been created mostly by universities or companies that have first-party certification programs for Six Sigma.

Difference from lean management

Lean management and Six Sigma are two concepts which share similar methodologies and tools. Both are Japanese-influenced, but they are different programs. Lean management is focused on eliminating waste using a set of proven standardized tools and methodologies that target organizational efficiencies while integrating a performance improvement system used by everyone, while Six Sigma's focus is on eliminating defects and reducing variation. Both systems are driven by data, though Six Sigma is much more dependent on accurate data.

Methodologies

Six Sigma projects follow two project methodologies inspired by Deming's Plan–Do–Study–Act Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.
  • DMAIC ("duh-may-ick", /də.ˈmeɪ.ɪk/) is used for projects aimed at improving an existing business process.
  • DMADV ("duh-mad-vee", /də.ˈmæd.vi/) is used for projects aimed at creating new product or process designs.

DMAIC

The five steps of DMAIC

The DMAIC project methodology has five phases:
  • Define the system, the voice of the customer and their requirements, and the project goals, specifically.
  • Measure key aspects of the current process and collect relevant data; calculate the 'as-is' Process Capability (see the capability sketch after this list).
  • Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out root cause of the defect under investigation.
  • Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future state process. Set up pilot runs to establish process capability.
  • Control the future state process to ensure that any deviations from the target are corrected before they result in defects. Implement control systems such as statistical process control, production boards, visual workplaces, and continuously monitor the process. This process is repeated until the desired quality level is obtained.
Some organizations add a Recognize step at the beginning, which is to recognize the right problem to work on, thus yielding an RDMAIC methodology.
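As a minimal sketch of the 'as-is' capability calculation mentioned in the Measure phase, the Python fragment below computes the Cp and Cpk indices from a set of measurements against specification limits; the measurements, limits, and variable names are invented for illustration and are not part of any particular Six Sigma toolkit.

    # Sketch of the Measure phase's 'as-is' capability calculation.
    # The measurements and specification limits are invented.
    import statistics

    measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7, 10.1, 10.0]
    lsl, usl = 9.0, 11.0                      # lower/upper specification limits

    mean = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)    # sample standard deviation

    cp = (usl - lsl) / (6 * sigma)                    # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # actual capability

    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")

A Cpk of at least about 1.33 is a commonly quoted benchmark for a capable process, though the appropriate target depends on the industry and the customer's requirements.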

DMADV or DFSS

The five steps of DMADV

The DMADV project methodology, also known as DFSS ("Design for Six Sigma"),[7] features five phases:
  • Define design goals that are consistent with customer demands and the enterprise strategy.
  • Measure and identify CTQs (characteristics that are Critical To Quality), measure product capabilities, production process capability, and measure risks.
  • Analyze to develop and design alternatives.
  • Design an improved alternative best suited to the analysis in the previous step.
  • Verify the design, set up pilot runs, implement the production process and hand it over to the process owner(s).

Quality management tools and methods

Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many established quality-management tools that are also used outside Six Sigma.

Implementation roles

One key innovation of Six Sigma involves the absolute "professionalizing" of quality management functions. Prior to Six Sigma, quality management in practice was largely relegated to the production floor and to statisticians in a separate quality department. Formal Six Sigma programs adopt a kind of elite ranking terminology (similar to some martial arts systems, like judo) to define a hierarchy (and special career path) that includes all business functions and levels. 

Six Sigma identifies several key roles for its successful implementation.
  • Executive Leadership includes the CEO and other members of top management. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements by transcending departmental barriers and overcoming inherent resistance to change.
  • Champions take responsibility for Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from upper management. Champions also act as mentors to Black Belts.
  • Master Black Belts, identified by Champions, act as in-house coaches on Six Sigma. They devote 100% of their time to Six Sigma. They assist Champions and guide Black Belts and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent application of Six Sigma across various functions and departments.
  • Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects. They devote 100% of their valued time to Six Sigma. They primarily focus on Six Sigma project execution and special leadership with special tasks, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma.
  • Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities, operating under the guidance of Black Belts.
According to proponents of the system, special training is needed for all of these practitioners to ensure that they follow the methodology and use the data-driven approach correctly. 

Some organizations use additional belt colours, such as "Yellow Belts" for employees who have basic training in Six Sigma tools and generally participate in projects, and "White Belts" for those locally trained in the concepts who do not participate in the project team. "Orange Belts" are also mentioned, for special cases.

Certification

General Electric and Motorola developed certification programs as part of their Six Sigma implementation, verifying individuals' command of the Six Sigma methods at the relevant skill level (Green Belt, Black Belt, etc.). Following this approach, many organizations in the 1990s started offering Six Sigma certifications to their employees. In 2008, Motorola University co-developed with Vative and the Lean Six Sigma Society of Professionals a set of comparable certification standards for Lean Certification. Criteria for Green Belt and Black Belt certification vary; some companies simply require participation in a course and a Six Sigma project. There is no standard certification body, and different certification services are offered by various quality associations and other providers for a fee. The American Society for Quality, for example, requires Black Belt applicants to pass a written exam and to provide a signed affidavit stating that they have completed two projects or one project combined with three years' practical experience in the body of knowledge.

Etymology of "six sigma process"

The term "six sigma process" comes from the notion that if one has six standard deviations between the process mean and the nearest specification limit, as shown in the graph, practically no items will fail to meet specifications. This is based on the calculation method employed in process capability studies

Capability studies measure the number of standard deviations between the process mean and the nearest specification limit in sigma units, represented by the Greek letter σ (sigma). As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, fewer standard deviations will fit between the mean and the nearest specification limit, decreasing the sigma number and increasing the likelihood of items outside specification. Note that the calculation of sigma levels for process data is independent of whether the data are normally distributed. One criticism of Six Sigma is that practitioners using this approach spend a lot of time transforming data from non-normal to normal using transformation techniques, even though sigma levels can be determined for process data that show evidence of non-normality.
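To make the capability-study definition above concrete, here is a minimal Python sketch, using invented data and specification limits, that measures the distance from the process mean to the nearest specification limit in standard-deviation units; dividing that sigma level by 3 gives the short-term Cpk that appears in the table further below.

    # Sketch: sigma level as the distance from the process mean to the
    # nearest specification limit, in process standard deviations.
    # Data and limits are invented for illustration.
    import statistics

    def sigma_level(data, lsl, usl):
        mean = statistics.mean(data)
        s = statistics.stdev(data)
        return min(usl - mean, mean - lsl) / s

    data = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 5.1]
    level = sigma_level(data, lsl=4.5, usl=5.5)
    print(f"sigma level = {level:.1f}, short-term Cpk = {level / 3:.2f}")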

Graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma model. In the centre at 0, the Greek letter μ (mu) marks the mean, with the horizontal axis showing distance from the mean, marked in standard deviations and given the letter σ (sigma). The greater the standard deviation, the greater is the spread of values encountered. For the green curve shown above, μ = 0 and σ = 1. The upper and lower specification limits (marked USL and LSL) are at a distance of 6σ from the mean. Because of the properties of the normal distribution, values lying that far away from the mean are extremely unlikely: approximately 1 in a billion too low, and the same too high. Even if the mean were to move right or left by 1.5σ at some point in the future (1.5 sigma shift, coloured red and blue), there is still a good safety cushion. This is why Six Sigma aims to have processes where the mean is at least 6σ away from the nearest specification limit.
 

Role of the 1.5 sigma shift

Experience has shown that processes usually do not perform as well in the long term as they do in the short term. As a result, the number of sigmas that will fit between the process mean and the nearest specification limit may well drop over time, compared to an initial short-term study. To account for this real-life increase in process variation over time, an empirically based 1.5 sigma shift is introduced into the calculation. According to this idea, a process that fits 6 sigma between the process mean and the nearest specification limit in a short-term study will in the long term fit only 4.5 sigma – either because the process mean will move over time, or because the long-term standard deviation of the process will be greater than that observed in the short term, or both.

Hence the widely accepted definition of a six sigma process is a process that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million outside the limits, when the limits are six sigma from the "original" mean of zero and the process mean is then shifted by 1.5 sigma (and therefore, the six sigma limits are no longer symmetrical about the mean). The former six sigma distribution, when under the effect of the 1.5 sigma shift, is commonly referred to as a 4.5 sigma process. The failure rate of a six sigma distribution with the mean shifted 1.5 sigma is not equivalent to the failure rate of a 4.5 sigma process with the mean centered on zero. This allows for the fact that special causes may result in a deterioration in process performance over time and is designed to prevent underestimation of the defect levels likely to be encountered in real-life operation.
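The non-equivalence stated above can be checked numerically. The sketch below (an illustration assuming SciPy, not part of the article) compares the total out-of-specification fraction of a six sigma process whose mean has shifted 1.5 sigma toward one limit with that of a centred 4.5 sigma process.

    # Sketch: a 1.5-sigma-shifted six sigma process vs a centred 4.5 sigma
    # process (illustration only; requires SciPy).
    from scipy.stats import norm

    shifted_six_sigma = norm.sf(4.5) + norm.sf(7.5)   # tails at 4.5 and 7.5 sigma
    centred_4_5_sigma = 2 * norm.sf(4.5)              # both tails at 4.5 sigma

    print(f"{shifted_six_sigma * 1e6:.2f} DPMO")      # about 3.40
    print(f"{centred_4_5_sigma * 1e6:.2f} DPMO")      # about 6.80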

The role of the sigma shift is mainly academic. The purpose of six sigma is to generate organizational performance improvement. It is up to the organization to determine, based on customer expectations, what the appropriate sigma level of a process is. The purpose of the sigma value is as a comparative figure to determine whether a process is improving, deteriorating, stagnant or non-competitive with others in the same business. Six sigma (3.4 DPMO) is not the goal of all processes. 

Sigma levels

A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward the upper specification limit starting at midnight. Control charts are used to maintain 6 sigma quality by signaling when quality professionals should investigate a process to find and eliminate special-cause variation.
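As a rough sketch of the control-chart signalling described in the caption above, the following Python fragment builds an individuals chart with 3-sigma limits, estimating sigma from the average moving range (a standard individuals-chart convention); the measurements are invented for illustration.

    # Sketch: individuals control chart with 3-sigma limits; sigma is
    # estimated from the average moving range divided by d2 = 1.128.
    # The measurements are invented for illustration.
    measurements = [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 11.8, 10.0, 9.9]

    center = sum(measurements) / len(measurements)
    moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

    for i, x in enumerate(measurements):
        if not lcl <= x <= ucl:
            print(f"point {i} ({x}) breaches ({lcl:.2f}, {ucl:.2f}): investigate")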

The table below gives long-term DPMO values corresponding to various short-term sigma levels.

These figures assume that the process mean will shift by 1.5 sigma toward the side with the critical specification limit. In other words, they assume that after the initial study determining the short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term Cpk value. So, now for example, the DPMO figure given for 1 sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk = –0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note that the defect percentages indicate only defects exceeding the specification limit to which the process mean is nearest. Defects beyond the far specification limit are not included in the percentages.

The formula used here to calculate the DPMO is thus: DPMO = 1,000,000 × (1 − Φ(sigma level − 1.5)), where Φ denotes the cumulative distribution function of the standard normal distribution.
Sigma level | Sigma (with 1.5σ shift) | DPMO | Percent defective | Percentage yield | Short-term Cpk | Long-term Cpk
1 | −0.5 | 691,462 | 69% | 31% | 0.33 | −0.17
2 | 0.5 | 308,538 | 31% | 69% | 0.67 | 0.17
3 | 1.5 | 66,807 | 6.7% | 93.3% | 1.00 | 0.5
4 | 2.5 | 6,210 | 0.62% | 99.38% | 1.33 | 0.83
5 | 3.5 | 233 | 0.023% | 99.977% | 1.67 | 1.17
6 | 4.5 | 3.4 | 0.00034% | 99.99966% | 2.00 | 1.5
7 | 5.5 | 0.019 | 0.0000019% | 99.9999981% | 2.33 | 1.83
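The table can be reproduced directly from the DPMO formula given above; the short Python sketch below (assuming SciPy is available) recomputes the DPMO, yield, and Cpk columns for each short-term sigma level.

    # Sketch: recomputing the table from DPMO = 1e6 * (1 - Phi(level - 1.5)).
    # Requires SciPy; output is rounded differently from the table above.
    from scipy.stats import norm

    for level in range(1, 8):
        dpmo = 1e6 * norm.sf(level - 1.5)
        yield_pct = 100 * norm.cdf(level - 1.5)
        st_cpk, lt_cpk = level / 3, (level - 1.5) / 3
        print(f"sigma {level}: DPMO={dpmo:,.3f}  yield={yield_pct:.7f}%  "
              f"Cpk(short)={st_cpk:.2f}  Cpk(long)={lt_cpk:.2f}")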


Application

Six Sigma mostly finds application in large organizations. An important factor in the spread of Six Sigma was GE's 1998 announcement of $350 million in savings thanks to Six Sigma, a figure that later grew to more than $1 billion. According to industry consultants like Thomas Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six Sigma implementation or need to adapt the standard approach to make it work for them. However, Six Sigma contains a large number of tools and techniques that work well in small to mid-size organizations. The fact that an organization is not big enough to be able to afford Black Belts does not diminish its ability to make improvements using this set of tools and techniques. The infrastructure described as necessary to support Six Sigma is a result of the size of the organization rather than a requirement of Six Sigma itself.

Although the scope of Six Sigma differs depending on where it is implemented, it can successfully deliver its benefits to different applications. 

Manufacturing

After its first application at Motorola in the late 1980s, other internationally recognized firms have recorded large savings after applying Six Sigma. Examples include Johnson and Johnson, with $600 million of reported savings, Texas Instruments, which saved over $500 million, and Telefónica de España, which reported €30 million in savings in the first 10 months. In addition, other organizations such as Sony and Boeing achieved large reductions in waste.

Engineering and construction

Although companies have adopted common quality control and process improvement strategies, there is still a need for more effective methods, as not all desired standards and levels of client satisfaction have been reached. There is still a need for an essential analysis that can control the factors affecting concrete cracks and slippage between concrete and steel. After conducting a case study on Tianjin Xianyi Construction Technology Co., Ltd., it was found that construction time and construction waste were reduced by 26.2% and 67% respectively after adopting Six Sigma. Similarly, Six Sigma implementation was studied at one of the largest engineering and construction companies in the world, Bechtel Corporation, where after an initial investment of $30 million in a Six Sigma program that included identifying and preventing rework and defects, over $200 million were saved.

Finance

Six Sigma has played an important role in improving the accuracy of cash allocation to reduce bank charges, automating payments, improving the accuracy of reporting, reducing documentary credit defects, reducing check collection defects, and reducing variation in collector performance. Two of the financial institutions that have reported considerable improvements in their operations are Bank of America and American Express. By 2004 Bank of America had increased customer satisfaction by 10.4% and decreased customer issues by 24% by applying Six Sigma tools to streamline its operations. Similarly, American Express successfully eliminated non-received renewal credit cards and improved its overall processes by applying Six Sigma principles. This strategy is also currently being applied by other financial institutions such as GE Capital Corp., JP Morgan Chase, and SunTrust Bank, with customer satisfaction being their main objective.

Supply chain

In this field, it is important to ensure that products are delivered to clients at the right time while preserving high-quality standards from the beginning to the end of the supply chain. By changing the schematic diagram for the supply chain, Six Sigma can ensure quality control on products (defect free) and guarantee delivery deadlines, which are the two major issues involved in the supply chain.

Healthcare

Healthcare is a sector that has been well matched with this doctrine for many years because of its zero tolerance for mistakes and the potential for reducing medical errors. The goals of Six Sigma in healthcare are broad and include reducing the inventory of equipment that brings extra costs, altering the process of healthcare delivery to make it more efficient, and refining reimbursements. A study at the University of Texas MD Anderson Cancer Center recorded a 45% increase in examinations with no additional machines and a reduction in patients' preparation time from 45 minutes to 5 minutes in multiple cases.

Criticism


Lack of originality

Quality expert Joseph M. Juran described Six Sigma as "a basic version of quality improvement", stating that "there is nothing new there. It includes what we used to call facilitators. They've adopted more flamboyant terms, like belts with different colors. I think that concept has merit to set apart, to create specialists who can be very helpful. Again, that's not a new idea. The American Society for Quality long ago established certificates, such as for reliability engineers."

Inadequate for complex manufacturing

Quality expert Philip B. Crosby pointed out that the Six Sigma standard does not go far enough—customers deserve defect-free products every time. For example, under the Six Sigma standard, semiconductors which require the flawless etching of millions of tiny circuits onto a single chip are all defective.

Role of consultants

The use of "Black Belts" as itinerant change agents has fostered an industry of training and certification. Critics have argued there is overselling of Six Sigma by too great a number of consulting firms, many of which claim expertise in Six Sigma when they have only a rudimentary understanding of the tools and techniques involved or the markets or industries in which they are acting.

Potential negative effects

A Fortune article stated that "of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since". The statement was attributed to "an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)". The summary of the article is that Six Sigma is effective at what it is intended to do, but that it is "narrowly designed to fix an existing process" and does not help in "coming up with new products or disruptive technologies."

Over-reliance on statistical tools

A more direct criticism is the "rigid" nature of Six Sigma with its over-reliance on methods and tools. In most cases, more attention is paid to reducing variation and searching for any significant factors and less attention is paid to developing robustness in the first place (which can altogether eliminate the need for reducing variation). The extensive reliance on significance testing and use of multiple regression techniques increases the risk of making commonly unknown types of statistical errors or mistakes. A possible consequence of Six Sigma's array of P-value misconceptions is the false belief that the probability of a conclusion being in error can be calculated from the data in a single experiment without reference to external evidence or the plausibility of the underlying mechanism. One of the most serious but all-too-common misuses of inferential statistics is to take a model that was developed through exploratory model building and subject it to the same sorts of statistical tests that are used to validate a model that was specified in advance.

Another comment refers to the often mentioned Transfer Function, which seems to be a flawed theory if looked at in detail. Since significance tests were first popularized many objections have been voiced by prominent and respected statisticians. The volume of criticism and rebuttal has filled books with language seldom used in the scholarly debate of a dry subject. Much of the first criticism was already published more than 40 years ago.

Articles featuring critics have appeared in the November–December 2006 issue of the US Army Logistician regarding Six Sigma: "The dangers of a single paradigmatic orientation (in this case, that of technical rationality) can blind us to values associated with double-loop learning and the learning organization, organization adaptability, workforce creativity and development, humanizing the workplace, cultural awareness, and strategy making."

Nassim Nicholas Taleb considers risk managers little more than "blind users" of statistical tools and methods. He states that statistics is fundamentally incomplete as a field, as it cannot predict the risk of rare events, something Six Sigma is especially concerned with. Furthermore, errors in prediction are likely to occur as a result of ignorance of, or failure to distinguish between, epistemic and other uncertainties. These errors are largest for time-variant (reliability-related) failures.

Stifling creativity in research environments

According to an article by John Dodge, editor in chief of Design News, use of Six Sigma is inappropriate in a research environment. Dodge states "excessive metrics, steps, measurements and Six Sigma's intense focus on reducing variability water down the discovery process. Under Six Sigma, the free-wheeling nature of brainstorming and the serendipitous side of discovery is stifled." He concludes "there's general agreement that freedom in basic or pure research is preferable while Six Sigma works best in incremental innovation when there's an expressed commercial goal."

A BusinessWeek article says that James McNerney's introduction of Six Sigma at 3M had the effect of stifling creativity and reports its removal from the research function. It cites two Wharton School professors who say that Six Sigma leads to incremental innovation at the expense of blue skies research. This phenomenon is further explored in the book Going Lean, which describes a related approach known as lean dynamics and provides data to show that Ford's "6 Sigma" program did little to change its fortunes.

Lack of systematic documentation

One criticism voiced by Yasar Jarrar and Andy Neely from the Cranfield School of Management's Centre for Business Performance is that while Six Sigma is a powerful approach, it can also unduly dominate an organization's culture; and they add that much of the Six Sigma literature – in a remarkable way (six-sigma claims to be evidence, scientifically based) – lacks academic rigor:
One final criticism, probably more to the Six Sigma literature than concepts, relates to the evidence for Six Sigma’s success. So far, documented case studies using the Six Sigma methods are presented as the strongest evidence for its success. However, looking at these documented cases, and apart from a few that are detailed from the experience of leading organizations like GE and Motorola, most cases are not documented in a systemic or academic manner. In fact, the majority are case studies illustrated on websites, and are, at best, sketchy. They provide no mention of any specific Six Sigma methods that were used to resolve the problems. It has been argued that by relying on the Six Sigma criteria, management is lulled into the idea that something is being done about quality, whereas any resulting improvement is accidental (Latzko 1995). Thus, when looking at the evidence put forward for Six Sigma success, mostly by consultants and people with vested interests, the question that begs to be asked is: are we making a true improvement with Six Sigma methods or just getting skilled at telling stories? Everyone seems to believe that we are making true improvements, but there is some way to go to document these empirically and clarify the causal relations.

1.5 sigma shift

The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its arbitrary nature. Its universal applicability is seen as doubtful.

The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that reflect short-term rather than long-term performance: a process that has long-term defect levels corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "six sigma process." The accepted Six Sigma scoring system thus cannot be equated to actual normal distribution probabilities for the stated number of standard deviations, and this has been a key bone of contention over how Six Sigma measures are defined. The fact that it is rarely explained that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma performance rather than actual 6 sigma performance has led several commentators to express the opinion that Six Sigma is a confidence trick.
