
Saturday, January 18, 2020

Six Sigma

From Wikipedia, the free encyclopedia

Six Sigma is a set of techniques and tools for process improvement. It was introduced by American engineer Bill Smith while working at Motorola in 1986. Jack Welch made it central to his business strategy at General Electric in 1995. A six sigma process is one in which 99.99966% of all opportunities to produce some feature of a part are statistically expected to be free of defects.

Six Sigma strategies seek to improve the quality of the output of a process by identifying and removing the causes of defects and minimizing variability in manufacturing and business processes. Six Sigma uses a set of quality-management methods, mainly empirical and statistical, and creates a special infrastructure of people within the organization who are experts in these methods. Each Six Sigma project carried out within an organization follows a defined sequence of steps and has specific value targets, for example: reduce process cycle time, reduce pollution, reduce costs, increase customer satisfaction, and increase profits.

The term Six Sigma (capitalized because it was written that way when registered as a Motorola trademark on December 28, 1993) originated from terminology associated with the statistical modeling of manufacturing processes. The maturity of a manufacturing process can be described by a sigma rating indicating its yield, the percentage of defect-free products it creates; equivalently, the rating states how many standard deviations of a normal distribution fit between the process mean and the nearest specification limit. Motorola set a goal of "six sigma" for all of its manufacturing.

Doctrine

The common Six Sigma symbol

Six Sigma doctrine asserts:
  • Continuous efforts to achieve stable and predictable process results (e.g. by reducing process variation) are of vital importance to business success.
  • Manufacturing and business processes have characteristics that can be defined, measured, analyzed, improved, and controlled.
  • Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.
Features that set Six Sigma apart from previous quality-improvement initiatives include:
  • A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.
  • An increased emphasis on strong and passionate management leadership and support.
  • A clear commitment to making decisions on the basis of verifiable data and statistical methods, rather than assumptions and guesswork.
The term "six sigma" comes from statistics and is used in statistical quality control, which evaluates process capability. Originally, it referred to the ability of manufacturing processes to produce a very high proportion of output within specification. Processes that operate with "six sigma quality" over the short term are assumed to produce long-term defect levels below 3.4 defects per million opportunities (DPMO). The 3.4 DPMO figure is based on a "shift" of ±1.5 sigma explained by Mikel J. Harry, who derived it from the tolerance in the height of a stack of discs. Six Sigma's implicit goal is to improve all processes, but not necessarily to the 3.4 DPMO level. Organizations need to determine an appropriate sigma level for each of their most important processes and strive to achieve them. As a result of this goal, it is incumbent on the management of the organization to prioritize areas of improvement.

"Six Sigma" was registered June 11, 1991 as U.S. Service Mark 1,647,704. In 2005 Motorola attributed over US$17 billion in savings to Six Sigma.

Other early adopters of Six Sigma include Honeywell and General Electric, where Jack Welch introduced the method. By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality.

In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to create a methodology named Lean Six Sigma. The Lean Six Sigma methodology views lean manufacturing, which addresses process flow and waste issues, and Six Sigma, with its focus on variation and design, as complementary disciplines aimed at promoting "business and operational excellence".

In 2011, the International Organization for Standardization (ISO) published the first standard, ISO 13053:2011, defining a Six Sigma process. Other standards have been created mostly by universities or companies that have first-party certification programs for Six Sigma.

Difference from lean management

Lean management and Six Sigma share similar methodologies and tools, and both are Japanese-influenced, but they are two different programs. Lean management focuses on eliminating waste using a set of proven, standardized tools and methodologies that target organizational efficiencies, integrating a performance-improvement system used by everyone, while Six Sigma focuses on eliminating defects and reducing variation. Both systems are driven by data, though Six Sigma depends much more on accurate data.

Methodologies

Six Sigma projects follow two project methodologies inspired by Deming's Plan–Do–Study–Act Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.
  • DMAIC ("duh-may-ick", /də.ˈmeɪ.ɪk/) is used for projects aimed at improving an existing business process.
  • DMADV ("duh-mad-vee", /də.ˈmæd.vi/) is used for projects aimed at creating new product or process designs.

DMAIC

The five steps of DMAIC

The DMAIC project methodology has five phases:
  • Define the system, the voice of the customer and their requirements, and the project goals, specifically.
  • Measure key aspects of the current process and collect relevant data; calculate the 'as-is' Process Capability.
  • Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out root cause of the defect under investigation.
  • Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future state process. Set up pilot runs to establish process capability.
  • Control the future state process to ensure that any deviations from the target are corrected before they result in defects. Implement control systems such as statistical process control, production boards, visual workplaces, and continuously monitor the process. This process is repeated until the desired quality level is obtained.
Some organizations add a Recognize step at the beginning, which is to recognize the right problem to work on, thus yielding an RDMAIC methodology.

DMADV or DFSS

The five steps of DMADV

The DMADV project methodology, also known as DFSS ("Design for Six Sigma"), features five phases:
  • Define design goals that are consistent with customer demands and the enterprise strategy.
  • Measure and identify CTQs (characteristics that are Critical To Quality), measure product capabilities, production process capability, and measure risks.
  • Analyze to develop and design alternatives.
  • Design an improved alternative, best suited per the analysis in the previous step.
  • Verify the design, set up pilot runs, implement the production process and hand it over to the process owner(s).

Quality management tools and methods

Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many established quality-management tools that are also used outside Six Sigma. The following table shows an overview of the main methods used.

Implementation roles

One key innovation of Six Sigma involves the absolute "professionalizing" of quality management functions. Prior to Six Sigma, quality management in practice was largely relegated to the production floor and to statisticians in a separate quality department. Formal Six Sigma programs adopt a kind of elite ranking terminology (similar to some martial arts systems, like judo) to define a hierarchy (and special career path) that includes all business functions and levels. 

Six Sigma identifies several key roles for its successful implementation.
  • Executive Leadership includes the CEO and other members of top management. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements by transcending departmental barriers and overcoming inherent resistance to change.
  • Champions take responsibility for Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from upper management. Champions also act as mentors to Black Belts.
  • Master Black Belts, identified by Champions, act as in-house coaches on Six Sigma. They devote 100% of their time to Six Sigma. They assist Champions and guide Black Belts and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent application of Six Sigma across various functions and departments.
  • Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution and special leadership tasks, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma.
  • Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities, operating under the guidance of Black Belts.
According to proponents of the system, special training is needed for all of these practitioners to ensure that they follow the methodology and use the data-driven approach correctly. 

Some organizations use additional belt colours, such as "Yellow Belts" for employees who have basic training in Six Sigma tools and generally participate in projects, and "White Belts" for those trained locally in the concepts but who do not participate in the project team. "Orange Belts" are also mentioned for special cases.

Certification

General Electric and Motorola developed certification programs as part of their Six Sigma implementations, verifying individuals' command of the Six Sigma methods at the relevant skill level (Green Belt, Black Belt, etc.). Following this approach, many organizations in the 1990s started offering Six Sigma certifications to their employees. In 2008, Motorola University co-developed with Vative and the Lean Six Sigma Society of Professionals a set of comparable certification standards for Lean certification. Criteria for Green Belt and Black Belt certification vary; some companies simply require participation in a course and a Six Sigma project. There is no standard certification body, and different certification services are offered for a fee by various quality associations and other providers. The American Society for Quality, for example, requires Black Belt applicants to pass a written exam and to provide a signed affidavit stating that they have completed two projects, or one project combined with three years' practical experience in the body of knowledge.

Etymology of "six sigma process"

The term "six sigma process" comes from the notion that if one has six standard deviations between the process mean and the nearest specification limit, as shown in the graph, practically no items will fail to meet specifications. This is based on the calculation method employed in process capability studies.

Capability studies measure the number of standard deviations between the process mean and the nearest specification limit in sigma units, represented by the Greek letter σ (sigma). As the process standard deviation rises, or the process mean moves away from the center of the tolerance, fewer standard deviations fit between the mean and the nearest specification limit, decreasing the sigma number and increasing the likelihood of items outside specification. Note that the calculation of sigma levels for process data does not require the data to be normally distributed. One criticism of Six Sigma is that practitioners using this approach spend a lot of time transforming data from non-normal to normal using transformation techniques, even though sigma levels can be determined for process data that show evidence of non-normality.
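As a concrete illustration, this calculation can be sketched in a few lines of Python using only the standard library; the measurements and specification limits below are hypothetical, chosen only to show the arithmetic:

```python
from statistics import NormalDist, fmean, stdev

# Hypothetical measurements of a part dimension (mm) and illustrative
# specification limits; neither comes from a real process.
data = [10.01, 9.98, 10.02, 10.00, 9.99, 10.03, 9.97, 10.01, 10.00, 9.99]
LSL, USL = 9.85, 10.15  # lower and upper specification limits

mu = fmean(data)
sigma = stdev(data)  # sample standard deviation

# Sigma level: standard deviations between the mean and the NEAREST limit
sigma_level = min(USL - mu, mu - LSL) / sigma

# Expected out-of-specification fraction if the process is normal
nd = NormalDist(mu, sigma)
frac_out = nd.cdf(LSL) + (1 - nd.cdf(USL))

print(f"sigma level ~ {sigma_level:.2f}")
print(f"expected fraction out of spec ~ {frac_out:.2e}")
```

Shrinking the standard deviation, or recentering the mean inside the tolerance, raises the sigma level and shrinks the expected out-of-spec fraction, which is exactly the behaviour the capability study measures.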

Graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma model. In the centre at 0, the Greek letter μ (mu) marks the mean, with the horizontal axis showing distance from the mean, marked in standard deviations and given the letter σ (sigma). The greater the standard deviation, the greater is the spread of values encountered. For the green curve shown above, μ = 0 and σ = 1. The upper and lower specification limits (marked USL and LSL) are at a distance of 6σ from the mean. Because of the properties of the normal distribution, values lying that far away from the mean are extremely unlikely: approximately 1 in a billion too low, and the same too high. Even if the mean were to move right or left by 1.5σ at some point in the future (1.5 sigma shift, coloured red and blue), there is still a good safety cushion. This is why Six Sigma aims to have processes where the mean is at least 6σ away from the nearest specification limit.
 

Role of the 1.5 sigma shift

Experience has shown that processes usually do not perform as well in the long term as they do in the short term. As a result, the number of sigmas that will fit between the process mean and the nearest specification limit may well drop over time, compared to an initial short-term study. To account for this real-life increase in process variation over time, an empirically based 1.5 sigma shift is introduced into the calculation. According to this idea, a process that fits 6 sigma between the process mean and the nearest specification limit in a short-term study will in the long term fit only 4.5 sigma – either because the process mean will move over time, or because the long-term standard deviation of the process will be greater than that observed in the short term, or both.

Hence the widely accepted definition of a six sigma process is a process that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million outside the limits, when the limits are six sigma from the "original" mean of zero and the process mean is then shifted by 1.5 sigma (and therefore, the six sigma limits are no longer symmetrical about the mean). The former six sigma distribution, when under the effect of the 1.5 sigma shift, is commonly referred to as a 4.5 sigma process. The failure rate of a six sigma distribution with the mean shifted 1.5 sigma is not equivalent to the failure rate of a 4.5 sigma process with the mean centered on zero. This allows for the fact that special causes may result in a deterioration in process performance over time and is designed to prevent underestimation of the defect levels likely to be encountered in real-life operation.
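The 3.4 DPMO figure can be verified directly from the standard normal distribution. A short Python check, using only the standard library:

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal cumulative distribution function

# Nearest specification limit 6 sigma from the original (unshifted) mean:
unshifted = (1 - phi(6.0)) * 1_000_000   # roughly 1 defect per billion

# After the conventional 1.5 sigma long-term shift, the nearest limit is
# effectively only 4.5 sigma from the shifted mean:
shifted = (1 - phi(4.5)) * 1_000_000     # roughly 3.4 DPMO

print(f"unshifted 6 sigma: {unshifted:.4f} DPMO")
print(f"with 1.5 sigma shift: {shifted:.2f} DPMO")
```

Only the tail beyond the nearest limit is counted here; after the shift, the far limit is 7.5 sigma away and its contribution is negligible.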

The role of the sigma shift is mainly academic. The purpose of six sigma is to generate organizational performance improvement. It is up to the organization to determine, based on customer expectations, what the appropriate sigma level of a process is. The purpose of the sigma value is as a comparative figure to determine whether a process is improving, deteriorating, stagnant or non-competitive with others in the same business. Six sigma (3.4 DPMO) is not the goal of all processes. 

Sigma levels

A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward the upper specification limit starting at midnight. Control charts are used to maintain 6 sigma quality by signaling when quality professionals should investigate a process to find and eliminate special-cause variation.

The table below gives long-term DPMO values corresponding to various short-term sigma levels.

These figures assume that the process mean will shift by 1.5 sigma toward the side with the critical specification limit. In other words, they assume that after the initial study determining the short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term Cpk value. For example, the DPMO figure given for 1 sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk = −0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note that the defect percentages indicate only defects exceeding the specification limit to which the process mean is nearest. Defects beyond the far specification limit are not included in the percentages.

The formula used here to calculate the DPMO is thus: DPMO = 1,000,000 × (1 − Φ(sigma level − 1.5)), where Φ is the cumulative distribution function of the standard normal distribution.
Sigma level | Sigma (with 1.5σ shift) | DPMO | Percent defective | Percentage yield | Short-term Cpk | Long-term Cpk
1 | −0.5 | 691,462 | 69% | 31% | 0.33 | −0.17
2 | 0.5 | 308,538 | 31% | 69% | 0.67 | 0.17
3 | 1.5 | 66,807 | 6.7% | 93.3% | 1.00 | 0.5
4 | 2.5 | 6,210 | 0.62% | 99.38% | 1.33 | 0.83
5 | 3.5 | 233 | 0.023% | 99.977% | 1.67 | 1.17
6 | 4.5 | 3.4 | 0.00034% | 99.99966% | 2.00 | 1.5
7 | 5.5 | 0.019 | 0.0000019% | 99.9999981% | 2.33 | 1.83
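The DPMO and Cpk columns follow mechanically from the shifted-normal model (short-term Cpk is the sigma level divided by 3, long-term Cpk is 0.5 lower, and only defects beyond the nearest limit are counted). A Python sketch that regenerates the table:

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF

print(f"{'level':>5} {'DPMO':>12} {'yield %':>14} {'Cpk st':>7} {'Cpk lt':>7}")
for level in range(1, 8):
    shifted = level - 1.5               # long-term sigmas to the nearest limit
    dpmo = (1 - phi(shifted)) * 1e6     # defects beyond the nearest limit only
    yield_pct = 100 - dpmo / 1e4
    cpk_short = level / 3               # short-term Cpk = sigma level / 3
    cpk_long = shifted / 3              # long-term Cpk is 0.5 lower
    print(f"{level:>5} {dpmo:>12.4g} {yield_pct:>14.9g} "
          f"{cpk_short:>7.2f} {cpk_long:>7.2f}")
```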

Software

 

Application

Six Sigma mostly finds application in large organizations. An important factor in the spread of Six Sigma was GE's 1998 announcement of $350 million in savings thanks to Six Sigma, a figure that later grew to more than $1 billion. According to industry consultants such as Thomas Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six Sigma implementation, or need to adapt the standard approach to make it work for them. Six Sigma, however, contains a large number of tools and techniques that work well in small to mid-size organizations. The fact that an organization is not big enough to afford Black Belts does not diminish its ability to make improvements using this set of tools and techniques; the infrastructure described as necessary to support Six Sigma is a result of the size of the organization rather than a requirement of Six Sigma itself.

Although the scope of Six Sigma differs depending on where it is implemented, it can successfully deliver its benefits to different applications. 

Manufacturing

After its first application at Motorola in the late 1980s, other internationally recognized firms recorded large savings after applying Six Sigma. Examples include Johnson & Johnson, with $600 million of reported savings; Texas Instruments, which saved over $500 million; and Telefónica de España, which reported €30 million in savings in the first 10 months. In addition, organizations such as Sony and Boeing achieved large reductions in waste.

Engineering and construction

Although companies have considered common quality control and process-improvement strategies, there is still a need for more reasonable and effective methods, as the desired standards and client satisfaction have not always been reached. There is still a need for an essential analysis that can control the factors affecting concrete cracks and slippage between concrete and steel. A case study of Tinjin Xianyi Construction Technology Co., Ltd. found that construction time and construction waste were reduced by 26.2% and 67% respectively after adopting Six Sigma. Similarly, Six Sigma implementation was studied at one of the largest engineering and construction companies in the world, Bechtel Corporation: after an initial investment of $30 million in a Six Sigma program that included identifying and preventing rework and defects, over $200 million was saved.

Finance

Six Sigma has played an important role in improving the accuracy of cash allocation to reduce bank charges, automating payments, improving the accuracy of reporting, reducing documentary credit defects, reducing check collection defects, and reducing variation in collector performance. Two financial institutions that have reported considerable improvements in their operations are Bank of America and American Express. By 2004, Bank of America had increased customer satisfaction by 10.4% and decreased customer issues by 24% by applying Six Sigma tools to streamline operations. Similarly, American Express successfully eliminated non-received renewal credit cards and improved its overall processes by applying Six Sigma principles. This strategy is also being applied by other financial institutions such as GE Capital Corp., JP Morgan Chase, and SunTrust Bank, with customer satisfaction as their main objective.

Supply chain

In this field, it is important to ensure that products are delivered to clients at the right time while preserving high-quality standards from the beginning to the end of the supply chain. By changing the schematic diagram for the supply chain, Six Sigma can ensure quality control on products (defect free) and guarantee delivery deadlines, which are the two major issues involved in the supply chain.

Healthcare

Healthcare has long been well matched with this doctrine because of its zero tolerance for mistakes and the potential for reducing medical errors. The goals of Six Sigma in healthcare are broad and include reducing the inventory of equipment that brings extra costs, altering the process of healthcare delivery to make it more efficient, and refining reimbursements. A study at the University of Texas MD Anderson Cancer Center recorded a 45% increase in examinations with no additional machines and a reduction in patients' preparation time of 40 minutes, from 45 minutes to 5 minutes in multiple cases.

Criticism


Lack of originality

Quality expert Joseph M. Juran described Six Sigma as "a basic version of quality improvement", stating that "there is nothing new there. It includes what we used to call facilitators. They've adopted more flamboyant terms, like belts with different colors. I think that concept has merit to set apart, to create specialists who can be very helpful. Again, that's not a new idea. The American Society for Quality long ago established certificates, such as for reliability engineers."

Inadequate for complex manufacturing

Quality expert Philip B. Crosby pointed out that the Six Sigma standard does not go far enough—customers deserve defect-free products every time. For example, under the Six Sigma standard, semiconductors which require the flawless etching of millions of tiny circuits onto a single chip are all defective.

Role of consultants

The use of "Black Belts" as itinerant change agents has fostered an industry of training and certification. Critics have argued there is overselling of Six Sigma by too great a number of consulting firms, many of which claim expertise in Six Sigma when they have only a rudimentary understanding of the tools and techniques involved or the markets or industries in which they are acting.

Potential negative effects

A Fortune article stated that "of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since". The statement was attributed to "an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)". The summary of the article is that Six Sigma is effective at what it is intended to do, but that it is "narrowly designed to fix an existing process" and does not help in "coming up with new products or disruptive technologies."

Over-reliance on statistical tools

A more direct criticism is the "rigid" nature of Six Sigma with its over-reliance on methods and tools. In most cases, more attention is paid to reducing variation and searching for any significant factors and less attention is paid to developing robustness in the first place (which can altogether eliminate the need for reducing variation). The extensive reliance on significance testing and use of multiple regression techniques increases the risk of making commonly unknown types of statistical errors or mistakes. A possible consequence of Six Sigma's array of P-value misconceptions is the false belief that the probability of a conclusion being in error can be calculated from the data in a single experiment without reference to external evidence or the plausibility of the underlying mechanism. One of the most serious but all-too-common misuses of inferential statistics is to take a model that was developed through exploratory model building and subject it to the same sorts of statistical tests that are used to validate a model that was specified in advance.

Another comment refers to the often-mentioned transfer function, which seems to be a flawed theory if examined in detail. Since significance tests were first popularized, many objections have been voiced by prominent and respected statisticians. The volume of criticism and rebuttal has filled books with language seldom used in the scholarly debate of a dry subject. Much of the first criticism was published more than 40 years ago.

Articles featuring critics of Six Sigma appeared in the November–December 2006 issue of the US Army Logistician: "The dangers of a single paradigmatic orientation (in this case, that of technical rationality) can blind us to values associated with double-loop learning and the learning organization, organization adaptability, workforce creativity and development, humanizing the workplace, cultural awareness, and strategy making."

Nassim Nicholas Taleb considers risk managers little more than "blind users" of statistical tools and methods. He states that statistics is fundamentally incomplete as a field, as it cannot predict the risk of rare events, something Six Sigma is especially concerned with. Furthermore, errors in prediction are likely to occur as a result of ignorance of, or failure to distinguish between, epistemic and other uncertainties. These errors are largest for time-variant (reliability-related) failures.

Stifling creativity in research environments

According to an article by John Dodge, editor in chief of Design News, use of Six Sigma is inappropriate in a research environment. Dodge states "excessive metrics, steps, measurements and Six Sigma's intense focus on reducing variability water down the discovery process. Under Six Sigma, the free-wheeling nature of brainstorming and the serendipitous side of discovery is stifled." He concludes "there's general agreement that freedom in basic or pure research is preferable while Six Sigma works best in incremental innovation when there's an expressed commercial goal."

A BusinessWeek article says that James McNerney's introduction of Six Sigma at 3M had the effect of stifling creativity and reports its removal from the research function. It cites two Wharton School professors who say that Six Sigma leads to incremental innovation at the expense of blue skies research. This phenomenon is further explored in the book Going Lean, which describes a related approach known as lean dynamics and provides data to show that Ford's "6 Sigma" program did little to change its fortunes.

Lack of systematic documentation

One criticism voiced by Yasar Jarrar and Andy Neely from the Cranfield School of Management's Centre for Business Performance is that while Six Sigma is a powerful approach, it can also unduly dominate an organization's culture; and they add that much of the Six Sigma literature – in a remarkable way (six-sigma claims to be evidence, scientifically based) – lacks academic rigor:
One final criticism, probably more to the Six Sigma literature than concepts, relates to the evidence for Six Sigma’s success. So far, documented case studies using the Six Sigma methods are presented as the strongest evidence for its success. However, looking at these documented cases, and apart from a few that are detailed from the experience of leading organizations like GE and Motorola, most cases are not documented in a systemic or academic manner. In fact, the majority are case studies illustrated on websites, and are, at best, sketchy. They provide no mention of any specific Six Sigma methods that were used to resolve the problems. It has been argued that by relying on the Six Sigma criteria, management is lulled into the idea that something is being done about quality, whereas any resulting improvement is accidental (Latzko 1995). Thus, when looking at the evidence put forward for Six Sigma success, mostly by consultants and people with vested interests, the question that begs to be asked is: are we making a true improvement with Six Sigma methods or just getting skilled at telling stories? Everyone seems to believe that we are making true improvements, but there is some way to go to document these empirically and clarify the causal relations.

1.5 sigma shift

The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its arbitrary nature. Its universal applicability is seen as doubtful.

The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that reflect short-term rather than long-term performance: a process that has long-term defect levels corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "six sigma process." The accepted Six Sigma scoring system thus cannot be equated to actual normal distribution probabilities for the stated number of standard deviations, and this has been a key bone of contention over how Six Sigma measures are defined. The fact that it is rarely explained that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma performance rather than actual 6 sigma performance has led several commentators to express the opinion that Six Sigma is a confidence trick.

Statistical process control

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Statistical_process_control

Statistical process control (SPC) is a method of quality control which employs statistical methods to monitor and control a process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste (rework or scrap). SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. An example of a process where SPC is applied is manufacturing lines.

SPC must be practiced in two phases: the first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, a decision must be made about the period to be examined, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and the wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures).

An advantage of SPC over other methods of quality control, such as "inspection", is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred.
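A minimal sketch of this early-detection idea, using Shewhart-style three-sigma control limits in Python (the measurements are invented for illustration; real SPC practice would use rational subgroups and standard chart constants rather than a raw standard deviation):

```python
from statistics import fmean, stdev

# Invented baseline measurements taken while the process was stable,
# and newer points whose upward drift imitates a special cause.
baseline = [5.01, 4.98, 5.02, 5.00, 4.99, 5.03, 4.97, 5.01, 5.00, 4.99]
new_points = [5.02, 5.00, 5.07, 5.09, 5.12]

# Control limits from the baseline: centre line +/- 3 standard deviations
centre = fmean(baseline)
s = stdev(baseline)
ucl, lcl = centre + 3 * s, centre - 3 * s

for i, x in enumerate(new_points, start=1):
    flag = ("investigate: outside control limits"
            if not (lcl <= x <= ucl) else "in control")
    print(f"point {i}: {x:.2f} -> {flag}")
```

Points inside the limits are attributed to common-cause variation and left alone; a point outside them signals possible special-cause variation worth investigating before defective product is made.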

In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped.

History

SPC was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson also in 1924 in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II.

W. Edwards Deming invited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture, and served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939) which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, the American Society for Quality Control, which elected Edwards as its first president. Deming traveled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry.

'Common' and 'special' sources of variation

Shewhart read the new statistical theories coming out of Britain, especially the work of William Sealy Gosset, Karl Pearson, and Ronald Fisher. However, he understood that data from physical processes seldom produced a normal distribution curve (that is, a Gaussian distribution or 'bell curve'). He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example, Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as being in (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described as not in control.

Application to non-manufacturing processes

In 1988, the Software Engineering Institute suggested that SPC could be applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept.

The notion that SPC is a useful tool when applied to non-repetitive, knowledge-intensive processes such as research and development or systems engineering has encountered skepticism and remains controversial.

In his seminal article No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software result in inherent and essential variation that cannot be removed. This implies that SPC is less effective in the domain of software development than in, for example, manufacturing.

Variation in manufacturing

In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. Traditionally, in mass manufacturing, the quality of a finished article is ensured by post-manufacturing inspection of the product. Each article (or a sample of articles from a production lot) may be accepted or rejected according to how well it meets its design specifications. In contrast, SPC uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article. Any source of variation at any point in time in a process will fall into one of two classes.
(1) Common causes
'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. These causes collectively produce a statistically stable and repeatable distribution over time.
(2) Special causes
'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable.
Most processes have many sources of variation; most of them are minor and may be ignored. If the dominant assignable sources of variation are detected, they can potentially be identified and removed. Once they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits, at least until another assignable source of variation occurs.

For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights.
If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced). 
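
As a rough numerical sketch of this scenario, the snippet below estimates 3-sigma control limits from an in-control run of the filling line and then checks a drifted run against them. It is illustrative only; the target weight, spread, and drift values are invented for the example.

```python
import random
import statistics

random.seed(42)

# Hypothetical cereal line: target fill 500 g, common-cause spread of ~2 g.
in_control = [random.gauss(500, 2) for _ in range(50)]

# Control limits estimated from the in-control run (mean +/- 3 sigma).
mean = statistics.fmean(in_control)
sigma = statistics.stdev(in_control)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

# Worn cams and pulleys: the machine drifts to overfilling by ~10 g.
drifted = [random.gauss(510, 2) for _ in range(10)]

# Boxes falling outside the limits signal an assignable cause.
out_of_control = [w for w in drifted if w > ucl or w < lcl]
print(f"{len(out_of_control)} of {len(drifted)} drifted boxes fall outside the limits")
```

In a real deployment the limits would come from a properly planned capability study, and run rules (not just single points beyond the limits) would be checked as well.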

Application of SPC

The application of SPC involves three main phases of activity:
  1. Understanding the process and the specification limits.
  2. Eliminating assignable (special) sources of variation, so that the process is stable.
  3. Monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation.

Control charts

The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time. 

Stable process

When the process does not trigger any of the control chart "detection rules", it is said to be "stable". A process capability analysis may be performed on a stable process to predict its ability to produce "conforming product" in the future.

A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index.
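
The capability index mentioned above is conventionally computed as Cp (potential capability) and Cpk (capability adjusted for centering). A minimal sketch, using hypothetical specification limits and sample weights:

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Conventional capability indices for a stable process.

    Cp  = (USL - LSL) / (6 * sigma)                 -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma) -- accounts for centering
    """
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical example: fill weights specified as 500 g +/- 10 g.
weights = [498.5, 501.2, 499.8, 500.4, 497.9, 502.1, 500.0, 499.3]
cp, cpk = capability_indices(weights, lsl=490.0, usl=510.0)
print(f"Cp={cp:.2f} Cpk={cpk:.2f}")
```

Cpk is always at most Cp; the gap between them shows how far the process mean sits from the center of the specification.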

Excessive variations

When the process triggers any of the control chart "detection rules" (or, alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation. The tools used in these extra activities include: Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminate a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs.

Process stability metrics

When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described by Ramirez and Runger.[11] They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups.
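
Metric (1), the Stability Ratio, can be sketched as the ratio of the overall (long-term) standard deviation to a pooled within-subgroup (short-term) standard deviation. This is an illustrative reading of the metric, assuming equal-size subgroups, not necessarily Ramirez and Runger's exact formulation:

```python
import statistics

def stability_ratio(subgroups):
    """Long-term variability divided by short-term variability.

    Short-term sigma is estimated from within-subgroup variation (pooled
    standard deviation, assuming equal subgroup sizes); long-term sigma
    from all observations combined. A ratio well above 1 suggests the
    process mean is wandering between subgroups.
    """
    all_obs = [x for sg in subgroups for x in sg]
    long_term = statistics.stdev(all_obs)
    # Pooled within-subgroup variance (valid for equal subgroup sizes).
    pooled_var = statistics.fmean([statistics.variance(sg) for sg in subgroups])
    short_term = pooled_var ** 0.5
    return long_term / short_term

stable = [[10.1, 9.9, 10.0], [10.0, 10.2, 9.8], [9.9, 10.1, 10.0]]
drifting = [[10.0, 10.1, 9.9], [11.0, 11.1, 10.9], [12.0, 12.1, 11.9]]
print(stability_ratio(stable), stability_ratio(drifting))
```

The drifting process shows a much larger ratio because its subgroup means move while its within-subgroup spread stays small.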

Mathematics of control charts

Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example,
derived value = last value + average absolute difference between the last N numbers.
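
The quoted rule leaves "difference" underspecified; one plausible reading, taking the average absolute difference between successive readings in the last N values, can be sketched as:

```python
def derived_value(history, n=5):
    """Derived value = last value + average absolute difference between
    successive readings among the last n values. This is one reading of
    the rule quoted above; the source does not pin the definition down."""
    window = history[-n:]
    diffs = [abs(b - a) for a, b in zip(window, window[1:])]
    return window[-1] + sum(diffs) / len(diffs)

readings = [10.0, 10.2, 9.9, 10.1, 10.0]
print(derived_value(readings))  # last value 10.0 plus mean step size 0.2
```

A correction would be signaled when the derived value crosses a preset threshold.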

Design for manufacturability

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Design_for_manufacturability

Redesigned for manufacturability

Design for manufacturability (also sometimes known as design for manufacturing or DFM) is the general engineering practice of designing products in such a way that they are easy to manufacture. The concept exists in almost all engineering disciplines, but its implementation differs widely depending on the manufacturing technology. DFM describes the process of designing or engineering a product so as to simplify the manufacturing process and reduce manufacturing costs. DFM allows potential problems to be fixed in the design phase, which is the least expensive place to address them. Other factors may affect manufacturability, such as the type of raw material, the form of the raw material, dimensional tolerances, and secondary processing such as finishing.

There are established DFM guidelines for the various types of manufacturing processes. These guidelines help to precisely define the tolerances, rules, and common manufacturing checks related to DFM.

While DFM is applicable to the design process, a similar concept called DFSS (Design for Six Sigma) is also practiced in many organizations. 

For printed circuit boards (PCB)

In the PCB design process, DFM leads to a set of design guidelines that attempt to ensure manufacturability. By doing so, probable production problems may be addressed during the design stage.

Ideally, DFM guidelines take into account the processes and capabilities of the manufacturing industry. Therefore, DFM is constantly evolving.

As manufacturing companies evolve and automate more and more stages of their processes, those processes tend to become cheaper. DFM is usually used to reduce these costs.[1] For example, if a process can be done automatically by machines (e.g., SMT component placement and soldering), it is likely to be cheaper than doing it by hand.

For integrated circuits (IC)

Achieving high-yielding designs in state-of-the-art VLSI technology has become an extremely challenging task due to miniaturization and the complexity of leading-edge products. Here, the DFM methodology includes a set of techniques to modify the design of integrated circuits (IC) in order to make them more manufacturable, i.e., to improve their functional yield, parametric yield, or reliability.

Background

Traditionally, in the pre-nanometer era, DFM consisted of a set of different methodologies trying to enforce some soft (recommended) design rules regarding the shapes and polygons of the physical layout of an integrated circuit. These DFM methodologies worked primarily at the full chip level. Additionally, worst-case simulations at different levels of abstraction were applied to minimize the impact of process variations on performance and other types of parametric yield loss. All these different types of worst-case simulations were essentially based on a base set of worst-case (or corner) SPICE device parameter files that were intended to represent the variability of transistor performance over the full range of variation in a fabrication process. 

Taxonomy of yield loss mechanisms

The most important yield loss models (YLMs) for VLSI ICs can be classified into several categories based on their nature.
  • Functional yield loss is still the dominant factor and is caused by mechanisms such as misprocessing (e.g., equipment-related problems), systematic effects such as printability or planarization problems, and purely random defects.
  • High-performance products may exhibit parametric design marginalities caused by either process fluctuations or environmental factors (such as supply voltage or temperature).
  • The test-related yield losses, which are caused by incorrect testing, can also play a significant role.

Techniques

After understanding the causes of yield loss, the next step is to make the design as resistant as possible. Techniques used for this include:
  • Substituting higher yield cells where permitted by timing, power, and routability.
  • Changing the spacing and width of the interconnect wires, where possible
  • Optimizing the amount of redundancy in internal memories.
  • Substituting fault tolerant (redundant) vias in a design where possible
All of these require a detailed understanding of yield loss mechanisms, since these changes trade off against one another. For example, introducing redundant vias will reduce the chance of via problems, but increase the chance of unwanted shorts. Whether this is a good idea therefore depends on the details of the yield loss models and the characteristics of the particular design.
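
The via-redundancy trade-off can be made concrete with a toy yield model: a doubled via fails only if both copies fail, but each added copy contributes a small probability of an unwanted short. All probabilities below are hypothetical illustration values, not real process data:

```python
def yield_with_redundancy(n_vias, p_via_fail, p_short_per_redundant):
    """Toy functional-yield model for the via-redundancy trade-off.

    Returns (yield without redundancy, yield with doubled vias).
    With redundancy, a connection fails only if both via copies fail,
    but each extra copy adds a small chance of an unwanted short.
    """
    single = (1 - p_via_fail) ** n_vias
    redundant = ((1 - p_via_fail ** 2) * (1 - p_short_per_redundant)) ** n_vias
    return single, redundant

# Hypothetical design: one million vias, via failure 1e-7, short risk 1e-9.
s, r = yield_with_redundancy(1_000_000, 1e-7, 1e-9)
print(f"single-via yield {s:.4f}, redundant-via yield {r:.4f}")
```

With these numbers redundancy wins decisively; if the short probability were closer to the via failure probability, the conclusion could flip, which is exactly the model-dependence the text describes.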

For CNC machining


Objective

The objective is to design for lower cost. The cost is driven by time, so the design must minimize the time required to not just machine (remove the material), but also the set-up time of the CNC machine, NC programming, fixturing and many other activities that are dependent on the complexity and size of the part. 

Set-Up Time of Operations (Flip of the Part)

Unless a 4th and/or 5th axis is used, a CNC machine can only approach the part from a single direction. One side must be machined at a time (called an operation, or Op). Then the part must be flipped from side to side to machine all of the features. The geometry of the features dictates whether the part must be flipped over. The more Ops (flips of the part), the more expensive the part, because each incurs substantial set-up and load/unload time.

Each operation (flip of the part) has set-up time, machine time, time to load/unload tools, time to load/unload parts, and time to create the NC program for each operation. If a part has only 1 operation, then parts only have to be loaded/unloaded once. If it has 5 operations, then load/unload time is significant. 

The low-hanging fruit is minimizing the number of operations (flips of the part), which can create significant savings. For example, it may take only 2 minutes to machine the face of a small part, but an hour to set the machine up to do it. Or, if there are 5 operations at 1.5 hours each but only 30 minutes of total machine time, then 7.5 hours are charged for just 30 minutes of machining.

Lastly, the volume (number of parts to machine) plays a critical role in amortizing the set-up time, programming time and other activities into the cost of the part. In the example above, the part in quantities of 10 could cost 7–10X the cost in quantities of 100.
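
The arithmetic above (one-time setup amortized over the lot, plus per-part machining) can be sketched as follows. The hourly rate is a hypothetical figure, and real quotes would also amortize tooling, programming, and fixturing:

```python
def cost_per_part(num_ops, setup_hr_per_op, machine_hr, qty, rate=75.0):
    """Billed shop time per part: setup paid once per lot and spread
    across qty parts, plus per-part machining. rate is a hypothetical
    $/hour used only for illustration."""
    setup_hr = num_ops * setup_hr_per_op  # incurred once per lot
    return (setup_hr * rate) / qty + machine_hr * rate

# The example above: 5 operations at 1.5 h setup each, 0.5 h machining.
for qty in (10, 100):
    print(qty, round(cost_per_part(5, 1.5, 0.5, qty), 2))
```

Setup dominates at low volume, which is why small lots cost several times more per part than larger ones.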

Typically, the law of diminishing returns presents itself at volumes of 100–300 because set-up times, custom tooling and fixturing can be amortized into the noise.

Material type

The most easily machined types of metals include aluminum, brass, and softer metals. As materials get harder, denser and stronger, such as steel, stainless steel, titanium, and exotic alloys, they become much harder to machine and take much longer, thus being less manufacturable. Most types of plastic are easy to machine, although additions of fiberglass or carbon fiber can reduce the machinability. Plastics that are particularly soft and gummy may have machinability problems of their own. 

Material form

Metals come in all forms. In the case of aluminum as an example, bar stock and plate are the two most common forms from which machined parts are made. The size and shape of the component may determine which form of material must be used. It is common for engineering drawings to specify one form over the other. Bar stock is generally close to 1/2 of the cost of plate on a per pound basis. So although the material form isn't directly related to the geometry of the component, cost can be removed at the design stage by specifying the least expensive form of the material.

Tolerances

A significant contributing factor to the cost of a machined component is the geometric tolerance to which the features must be made. The tighter the tolerance required, the more expensive the component will be to machine. When designing, specify the loosest tolerance that will serve the function of the component. Tolerances must be specified on a feature by feature basis. There are creative ways to engineer components with lower tolerances that still perform as well as ones with higher tolerances. 

Design and shape

As machining is a subtractive process, the time to remove the material is a major factor in determining the machining cost. The volume and shape of the material to be removed, as well as how fast the tools can be fed, determine the machining time. When using milling cutters, the strength and stiffness of the tool, determined in part by its length-to-diameter ratio, play the largest role in determining feed speed. The shorter the tool is relative to its diameter, the faster it can be fed through the material. A ratio of 3:1 (L:D) or under is optimal; if that ratio cannot be achieved, workarounds exist. For holes, the length-to-diameter ratio of the tools is less critical, but should still be kept under 10:1.

There are many other types of features that are more or less expensive to machine. Generally, chamfers cost less to machine than radii on outer horizontal edges. 3D interpolation is used to create radii on edges that are not on the same plane, which incurs roughly 10X the cost. Undercuts are more expensive to machine. Features that require smaller tools, regardless of L:D ratio, are more expensive.

Design for Inspection

The concept of Design for Inspection (DFI) should complement and work in collaboration with Design for Manufacturability (DFM) and Design for Assembly (DFA) to reduce product manufacturing cost and increase manufacturing practicality. The method can cause schedule delays, since it consumes many hours of additional work, such as preparing design review presentations and documents. To address this, it has been proposed that instead of periodic inspections, organizations adopt a framework of empowerment, particularly at the product development stage, in which senior management empowers the project leader to evaluate manufacturing processes and outcomes against expectations for product performance, cost, quality, and development time. Experts, however, cite the necessity of DFI because it is crucial to performance and quality control, determining key factors such as product reliability, safety, and life cycles. For an aerospace components company, where inspection is mandatory, the manufacturing process must be suitable for inspection; here, a mechanism such as an inspectability index is adopted to evaluate design proposals. Another example of DFI is the cumulative count of conforming chart (CCC chart), which is applied in inspection and maintenance planning for systems where different types of inspection and maintenance are available.

Design for additive manufacturing

Additive manufacturing broadens the ability of a designer to optimize the design of a product or part (to save materials for example). Designs tailored for additive manufacturing are sometimes very different from designs tailored for machining or forming manufacturing operations. 

In addition, due to the size constraints of additive manufacturing machines, larger designs are sometimes split into smaller sections with self-assembly features or fastener locators.

Technology CAD

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Technology_CAD
 
Technology computer-aided design (technology CAD or TCAD) is a branch of electronic design automation that models semiconductor fabrication and semiconductor device operation. The modeling of the fabrication is termed Process TCAD, while the modeling of the device operation is termed Device TCAD. Included are the modelling of process steps (such as diffusion and ion implantation), and modelling of the behavior of the electrical devices based on fundamental physics, such as the doping profiles of the devices. TCAD may also include the creation of compact models (such as the well-known SPICE transistor models), which try to capture the electrical behavior of such devices but do not generally derive it from the underlying physics. (However, the SPICE simulator itself is usually considered as part of ECAD rather than TCAD.)

Introduction

Technology files and design rules are essential building blocks of the integrated circuit design process. Their accuracy and robustness over process technology, its variability and the operating conditions of the IC — environmental, parasitic interactions and testing, including adverse conditions such as electro-static discharge — are critical in determining performance, yield and reliability. Development of these technology and design rule files involves an iterative process that crosses boundaries of technology and device development, product design and quality assurance. Modeling and simulation play a critical role in support of many aspects of this evolution process. 

The goals of TCAD start from the physical description of integrated circuit devices, considering both the physical configuration and related device properties, and build the links between the broad range of physics and electrical behavior models that support circuit design. Physics-based modeling of devices, in distributed and lumped forms, is an essential part of the IC process development. It seeks to quantify the underlying understanding of the technology and abstract that knowledge to the device design level, including extraction of the key parameters that support circuit design and statistical metrology.

Although the emphasis here is on Metal Oxide Semiconductor (MOS) transistors — the workhorse of the IC industry — it is useful to briefly overview the development history of the modeling tools and methodology that has set the stage for the present state-of-the-art. 

History

The evolution of technology computer-aided design (TCAD) — the synergistic combination of process, device and circuit simulation and modeling tools — finds its roots in bipolar technology, starting in the late 1960s, and the challenges of junction-isolated, double- and triple-diffused transistors. These devices and technology were the basis of the first integrated circuits; nonetheless, many of the scaling issues and underlying physical effects are integral to IC design, even after four decades of IC development. With these early generations of IC, process variability and parametric yield were an issue — a theme that will reemerge as a controlling factor in future IC technology as well.

Process control issues — both for the intrinsic devices and all the associated parasitics — presented formidable challenges and mandated the development of a range of advanced physical models for process and device simulation. Starting in the late 1960s and into the 1970s, the modeling approaches exploited were dominantly one- and two-dimensional simulators. While TCAD in these early generations showed exciting promise in addressing the physics-oriented challenges of bipolar technology, the superior scalability and power consumption of MOS technology revolutionized the IC industry. By the mid-1980s, CMOS became the dominant driver for integrated electronics. Nonetheless, these early TCAD developments set the stage for their growth and broad deployment as an essential toolset that has leveraged technology development through the VLSI and ULSI eras which are now the mainstream. 

IC development for more than a quarter-century has been dominated by MOS technology. In the 1970s and 1980s NMOS was favored owing to speed and area advantages, coupled with technology limitations and concerns related to isolation, parasitic effects and process complexity. During that era of NMOS-dominated LSI and the emergence of VLSI, the fundamental scaling laws of MOS technology were codified and broadly applied. It was also during this period that TCAD reached maturity in terms of realizing robust process modeling (primarily one-dimensional), which then became an integral technology design tool, used universally across the industry. At the same time device simulation, dominantly two-dimensional owing to the nature of MOS devices, became the work-horse of technologists in the design and scaling of devices. The transition from NMOS to CMOS technology resulted in the necessity of tightly coupled and fully 2D simulators for process and device simulations. This third generation of TCAD tools became critical to address the full complexity of twin-well CMOS technology (see Figure 3a), including issues of design rules and parasitic effects such as latchup. An abbreviated but prospective view of this period, through the mid-1980s, has been given in the literature, including from the point of view of how TCAD tools were used in the design process.

Modern TCAD

Today the requirements for and use of TCAD cut across a very broad landscape of design automation issues, including many fundamental physical limits. At the core are still a host of process and device modeling challenges that support intrinsic device scaling and parasitic extraction. These applications include technology and design rule development, extraction of compact models, and, more generally, design for manufacturability (DFM). The dominance of interconnects for giga-scale integration (transistor counts on the order of billions and clock frequencies on the order of 10 GHz) has mandated the development of tools and methodologies that embrace patterning by electromagnetic simulations, both for optical patterns and for electronic and optical interconnect performance modeling, as well as circuit-level modeling. This broad range of issues at the device and interconnect levels, including links to underlying patterning and processing technologies, is summarized in Figure 1 and provides a conceptual framework for the discussion that follows.

Figure 1: Hierarchy of technology CAD tools building from the process level to circuits. Left side icons show typical manufacturing issues; right side icons reflect MOS scaling results based on TCAD (CRC Electronic Design Automation for IC Handbook, Chapter 25)
 
Figure 1 depicts a hierarchy of process, device, and circuit levels of simulation tools. On each side of the boxes indicating modeling level are icons that schematically depict representative applications for TCAD. The left side emphasizes DFM issues such as shallow-trench isolation (STI), extra features required for phase-shift masking (PSM), and challenges for multi-level interconnects, including the processing issues of chemical-mechanical planarization (CMP) and the need to consider electromagnetic effects using electromagnetic field solvers. The right-side icons show the more traditional hierarchy of expected TCAD results and applications: complete process simulations of the intrinsic devices, predictions of drive-current scaling, and extraction of technology files for the complete set of devices and parasitics.

Figure 2 again looks at TCAD capabilities, but this time more in the context of design-flow information and how this relates to the physical layers and modeling of the electronic design automation (EDA) world. Here the simulation levels of process and device modeling are considered as integral capabilities (within TCAD) that together provide the "mapping" from mask-level information to the functional capabilities needed at the EDA level, such as compact models ("technology files") and even higher-level behavioral models. Also shown are extraction and electrical rule checking (ERC); this indicates that many of the details that have to date been embedded in analytical formulations may in fact also be linked to the deeper TCAD level in order to support the growing complexity of technology scaling.
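A small example of the kind of "mapping" from simulated device behavior to a compact-model parameter is threshold-voltage extraction. The sketch below fits the classic square-law saturation model to a synthetic transfer curve; the device constants are made-up illustration values, and real technology-file extraction uses far richer models (e.g. BSIM) and measured or TCAD-simulated data.

```python
import numpy as np

# Synthetic saturation-region transfer curve for an idealized square-law MOSFET:
# Id = K * (Vg - Vt)^2 for Vg > Vt. K_true and Vt_true are hypothetical values.
K_true, Vt_true = 2e-4, 0.45              # A/V^2, V
Vg = np.linspace(0.6, 1.2, 13)            # gate voltages above threshold
Id = K_true * (Vg - Vt_true) ** 2         # drain current, A

# In sqrt(Id)-vs-Vg coordinates the square law is a straight line,
#   sqrt(Id) = sqrt(K) * (Vg - Vt),
# so a linear fit yields Vt from the x-intercept and K from the slope.
slope, intercept = np.polyfit(Vg, np.sqrt(Id), 1)
Vt_extracted = -intercept / slope
K_extracted = slope ** 2

print(f"extracted Vt = {Vt_extracted:.3f} V, K = {K_extracted:.2e} A/V^2")
```

Because the synthetic data follow the model exactly, the fit recovers Vt = 0.45 V and K = 2e-4 A/V^2 to numerical precision; with noisy simulated or measured data the same fit gives least-squares estimates.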

Providers

Current major suppliers of TCAD tools include Synopsys, Silvaco, Crosslight, Cogenda Software, Global TCAD Solutions, and Tiberlab. The open-source tools GSS, Archimedes, Aeneas, NanoTCAD ViDES, DEVSIM, and GENIUS offer some of the capabilities of the commercial products.
