
Tuesday, October 23, 2018

Data analysis

From Wikipedia, the free encyclopedia

Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, while being used in different business, science, and social science domains.

Data mining is a particular data analysis technique that focuses on modeling and knowledge discovery for predictive rather than purely descriptive purposes, while business intelligence covers data analysis that relies heavily on aggregation, focusing mainly on business information. In statistical applications, data analysis can be divided into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existing hypotheses. Predictive analytics focuses on application of statistical models for predictive forecasting or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of unstructured data. All of the above are varieties of data analysis.

Data integration is a precursor to data analysis, and data analysis is closely linked to data visualization and data dissemination. The term data analysis is sometimes used as a synonym for data modeling.

The process of data analysis

Data science process flowchart from "Doing Data Science", Cathy O'Neil and Rachel Schutt, 2013

Analysis refers to breaking a whole into its separate components for individual examination. Data analysis is a process for obtaining raw data and converting it into information useful for decision-making by users. Data is collected and analyzed to answer questions, test hypotheses or disprove theories.

Statistician John Tukey defined data analysis in 1961 as: "Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."

There are several phases that can be distinguished, described below. The phases are iterative, in that feedback from later phases may result in additional work in earlier phases.

Data requirements

The data necessary as inputs to the analysis are specified based upon the requirements of those directing the analysis or of the customers who will use the finished product of the analysis. The general type of entity upon which the data will be collected is referred to as an experimental unit (e.g., a person or population of people). Specific variables regarding a population (e.g., age and income) may be specified and obtained. Data may be numerical or categorical (i.e., a text label for numbers).

Data collection

Data is collected from a variety of sources. The requirements may be communicated by analysts to custodians of the data, such as information technology personnel within an organization. The data may also be collected from sensors in the environment, such as traffic cameras, satellites, recording devices, etc. It may also be obtained through interviews, downloads from online sources, or reading documentation.

Data processing

The phases of the intelligence cycle used to convert raw information into actionable intelligence or knowledge are conceptually similar to the phases in data analysis.

Data initially obtained must be processed or organised for analysis. For instance, this may involve placing data into rows and columns in a table format (i.e., structured data) for further analysis in a spreadsheet or statistical software.

Data cleaning

Once processed and organised, the data may be incomplete, contain duplicates, or contain errors. The need for data cleaning arises from problems in the way that data is entered and stored. Data cleaning is the process of preventing and correcting these errors. Common tasks include record matching, identifying inaccurate data, assessing the overall quality of existing data, deduplication, and column segmentation. Such data problems can also be identified through a variety of analytical techniques. For example, with financial information, the totals for particular variables may be compared against separately published numbers believed to be reliable, and unusual amounts above or below pre-determined thresholds may be reviewed. There are several types of data cleaning, depending on the type of data: phone numbers, email addresses, employers, and so on. Quantitative outlier-detection methods can be used to remove data that was likely entered incorrectly. Spell checkers can reduce the number of mistyped words in textual data, but it is harder to tell whether the words themselves are correct.
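A minimal sketch of a few of these cleaning steps in Python with pandas, using a hypothetical DataFrame with made-up customer records (the column names, the 5,000 review threshold, and the 3-standard-deviation rule are all assumptions for illustration, not prescriptions):

```python
import pandas as pd

# Hypothetical raw records; in practice these would come from the processing step.
df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4],
    "email": ["a@example.com", "a@example.com", "b@example", "c@example.com", "d@example.com"],
    "amount": [120.0, 120.0, 95.5, 88.0, 4200.0],
})

# Deduplication: drop exact duplicate records.
df = df.drop_duplicates()

# Basic validity check: flag email addresses that do not match a simple pattern.
df["email_valid"] = df["email"].str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Review flag for amounts outside pre-determined thresholds, as described above.
df["needs_review"] = (df["amount"] < 0) | (df["amount"] > 5_000)

# Quantitative outlier screen: z-score distance from the mean.
z = (df["amount"] - df["amount"].mean()) / df["amount"].std()
df["amount_outlier"] = z.abs() > 3

print(df)
```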

Exploratory data analysis

Once the data is cleaned, it can be analyzed. Analysts may apply a variety of techniques referred to as exploratory data analysis to begin understanding the messages contained in the data. The process of exploration may result in additional data cleaning or additional requests for data, so these activities may be iterative in nature. Descriptive statistics, such as the average or median, may be generated to help understand the data. Data visualization may also be used to examine the data in graphical format, to obtain additional insight regarding the messages within the data.
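A brief exploratory sketch of the descriptive statistics and visual checks mentioned above, assuming a hypothetical cleaned data set with two numeric columns (the advertising and sales figures are invented for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical cleaned data; in practice this would be the output of the cleaning step.
df = pd.DataFrame({
    "advertising": [10, 12, 15, 17, 20, 22, 25],
    "sales":       [120, 130, 148, 160, 175, 182, 200],
})

print(df.describe())   # count, mean, std, min, quartiles, max
print(df.median())     # medians of the numeric columns

df.hist(figsize=(8, 4))                        # distribution of each variable
plt.tight_layout()
plt.show()

df.plot.scatter(x="advertising", y="sales")    # relationship between the two variables
plt.show()
```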

Modeling and algorithms

Mathematical formulas or models called algorithms may be applied to the data to identify relationships among the variables, such as correlation or causation. In general terms, models may be developed to evaluate a particular variable in the data based on other variable(s) in the data, with some residual error depending on model accuracy (i.e., Data = Model + Error).

Inferential statistics includes techniques to measure relationships between particular variables. For example, regression analysis may be used to model whether a change in advertising (independent variable X) explains the variation in sales (dependent variable Y). In mathematical terms, Y (sales) is a function of X (advertising). It may be described as Y = aX + b + error, where the model is designed such that a and b minimize the error when the model predicts Y for a given range of values of X. Analysts may attempt to build models that are descriptive of the data to simplify analysis and communicate results.
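A minimal sketch of fitting Y = aX + b by ordinary least squares for the advertising/sales example above, using invented figures (scipy's linregress is one common way to do this; the numbers themselves are assumptions):

```python
import numpy as np
from scipy import stats

advertising = np.array([10, 12, 15, 17, 20, 22, 25], dtype=float)        # X
sales       = np.array([120, 130, 148, 160, 175, 182, 200], dtype=float)  # Y

result = stats.linregress(advertising, sales)
a, b = result.slope, result.intercept

predicted = a * advertising + b
residuals = sales - predicted          # Data = Model + Error

print(f"Y ~ {a:.2f}*X + {b:.2f},  R^2 = {result.rvalue**2:.3f}")
print("residuals:", np.round(residuals, 2))
```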

Data product

A data product is a computer application that takes data inputs and generates outputs, feeding them back into the environment. It may be based on a model or algorithm. An example is an application that analyzes data about customer purchasing history and recommends other purchases the customer might enjoy.

Communication

Data visualization to understand the results of a data analysis.

Once the data is analyzed, it may be reported in many formats to the users of the analysis to support their requirements. The users may have feedback, which results in additional analysis. As such, much of the analytical cycle is iterative.

When determining how to communicate the results, the analyst may consider data visualization techniques to help communicate the message clearly and efficiently to the audience. Data visualization uses information displays (such as tables and charts) to help communicate key messages contained in the data. Tables are helpful for a user who might look up specific numbers, while charts (e.g., bar charts or line charts) may help explain the quantitative messages contained in the data.

Quantitative messages

A time series illustrated with a line chart demonstrating trends in U.S. federal spending and revenue over time.
 
A scatterplot illustrating correlation between two variables (inflation and unemployment) measured at points in time.

Stephen Few described eight types of quantitative messages that users may attempt to understand or communicate from a set of data and the associated graphs used to help communicate the message. Customers specifying requirements and analysts performing the data analysis may consider these messages during the course of the process.
  1. Time-series: A single variable is captured over a period of time, such as the unemployment rate over a 10-year period. A line chart may be used to demonstrate the trend;
  2. Ranking: Categorical subdivisions are ranked in ascending or descending order, such as a ranking of sales performance (the measure) by sales persons (the category, with each sales person a categorical subdivision) during a single period. A bar chart may be used to show the comparison across the sales persons;
  3. Part-to-whole: Categorical subdivisions are measured as a ratio to the whole (i.e., a percentage out of 100%). A pie chart or bar chart can show the comparison of ratios, such as the market share represented by competitors in a market;
  4. Deviation: Categorical subdivisions are compared against a reference, such as a comparison of actual vs. budget expenses for several departments of a business for a given time period. A bar chart can show comparison of the actual versus the reference amount;
  5. Frequency distribution: Shows the number of observations of a particular variable for a given interval, such as the number of years in which the stock market return is between intervals such as 0–10%, 11–20%, etc. A histogram, a type of bar chart, may be used for this analysis;
  6. Correlation: Comparison between observations represented by two variables (X,Y) to determine if they tend to move in the same or opposite directions. For example, plotting unemployment (X) and inflation (Y) for a sample of months. A scatter plot is typically used for this message;
  7. Nominal comparison: Comparing categorical subdivisions in no particular order, such as the sales volume by product code. A bar chart may be used for this comparison;
  8. Geographic or geospatial: Comparison of a variable across a map or layout, such as the unemployment rate by state or the number of persons on the various floors of a building. A cartogram is a typical graphic used.
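A brief sketch of two of the message types above (a time series as a line chart and a correlation as a scatter plot), using invented unemployment and inflation values purely for illustration:

```python
import matplotlib.pyplot as plt

years = list(range(2010, 2020))
unemployment = [9.6, 8.9, 8.1, 7.4, 6.2, 5.3, 4.9, 4.4, 3.9, 3.7]   # illustrative values
inflation    = [1.6, 3.2, 2.1, 1.5, 1.6, 0.1, 1.3, 2.1, 2.4, 1.8]   # illustrative values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(years, unemployment, marker="o")        # message 1: time series
ax1.set_title("Unemployment rate (time series)")
ax1.set_xlabel("Year"); ax1.set_ylabel("Percent")

ax2.scatter(unemployment, inflation)             # message 6: correlation
ax2.set_title("Inflation vs. unemployment (correlation)")
ax2.set_xlabel("Unemployment (%)"); ax2.set_ylabel("Inflation (%)")

plt.tight_layout()
plt.show()
```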

Techniques for analyzing quantitative data

Author Jonathan Koomey has recommended a series of best practices for understanding quantitative data. These include:
  • Check raw data for anomalies prior to performing your analysis;
  • Re-perform important calculations, such as verifying columns of data that are formula driven;
  • Confirm main totals are the sum of subtotals;
  • Check relationships between numbers that should be related in a predictable way, such as ratios over time;
  • Normalize numbers to make comparisons easier, such as analyzing amounts per person or relative to GDP or as an index value relative to a base year;
  • Break problems into component parts by analyzing factors that led to the results, such as DuPont analysis of return on equity.
Analysts typically obtain descriptive statistics for the variables under examination, such as the mean (average), median, and standard deviation. They may also analyze the distribution of the key variables to see how the individual values cluster around the mean.
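A minimal sketch of two of the checks above (confirming that subtotals add to the total, and normalizing to per-person amounts) together with basic descriptive statistics, using hypothetical departmental figures:

```python
import pandas as pd

spending = pd.DataFrame({
    "department": ["A", "B", "C"],
    "expense": [120.0, 340.0, 90.0],   # hypothetical amounts
    "headcount": [10, 25, 8],
})
reported_total = 550.0                  # hypothetical separately reported total

# Confirm the main total is the sum of the subtotals.
assert abs(spending["expense"].sum() - reported_total) < 1e-6, "subtotals do not add up"

# Normalize to make comparisons easier (amount per person).
spending["expense_per_head"] = spending["expense"] / spending["headcount"]

# Descriptive statistics for the variable under examination.
print(spending["expense"].agg(["mean", "median", "std"]))
print(spending)
```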

An illustration of the MECE principle used for data analysis.

The consultants at McKinsey and Company named a technique for breaking a quantitative problem down into its component parts called the MECE principle. Each layer can be broken down into its components; each of the sub-components must be mutually exclusive of each other and collectively add up to the layer above them. The relationship is referred to as "Mutually Exclusive and Collectively Exhaustive" or MECE. For example, profit by definition can be broken down into total revenue and total cost. In turn, total revenue can be analyzed by its components, such as revenue of divisions A, B, and C (which are mutually exclusive of each other) and should add to the total revenue (collectively exhaustive).

Analysts may use robust statistical measurements to solve certain analytical problems. Hypothesis testing is used when a particular hypothesis about the true state of affairs is made by the analyst and data is gathered to determine whether that state of affairs is true or false. For example, the hypothesis might be that "Unemployment has no effect on inflation", which relates to an economics concept called the Phillips Curve. Hypothesis testing involves considering the likelihood of Type I and type II errors, which relate to whether the data supports accepting or rejecting the hypothesis.
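As a sketch of the example above, one simple way to test "unemployment has no (linear) effect on inflation" is a significance test on the correlation between the two series; the monthly figures below are invented, and the 0.05 significance level is an assumption:

```python
import numpy as np
from scipy import stats

unemployment = np.array([4.1, 4.5, 5.0, 5.6, 6.2, 6.8, 7.3, 7.9])   # hypothetical
inflation    = np.array([2.9, 2.7, 2.4, 2.2, 2.0, 1.7, 1.6, 1.3])   # hypothetical

r, p_value = stats.pearsonr(unemployment, inflation)
alpha = 0.05   # chosen significance level; governs the Type I error rate

print(f"correlation r = {r:.2f}, p = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```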

Regression analysis may be used when the analyst is trying to determine the extent to which independent variable X affects dependent variable Y (e.g., "To what extent do changes in the unemployment rate (X) affect the inflation rate (Y)?"). This is an attempt to model or fit an equation line or curve to the data, such that Y is a function of X.

Necessary condition analysis (NCA) may be used when the analyst is trying to determine the extent to which independent variable X allows variable Y (e.g., "To what extent is a certain unemployment rate (X) necessary for a certain inflation rate (Y)?"). Whereas (multiple) regression analysis uses additive logic where each X-variable can produce the outcome and the X's can compensate for each other (they are sufficient but not necessary), necessary condition analysis (NCA) uses necessity logic, where one or more X-variables allow the outcome to exist, but may not produce it (they are necessary but not sufficient). Each single necessary condition must be present and compensation is not possible.

Analytical activities of data users

Users may have particular data points of interest within a data set, as opposed to the general messages outlined above. Such low-level user analytic activities are presented in the following table. The taxonomy can also be organized by three poles of activities: retrieving values, finding data points, and arranging data points.

  1. Retrieve Value. Description: Given a set of specific cases, find attributes of those cases. Pro forma abstract: What are the values of attributes {X, Y, Z, ...} in the data cases {A, B, C, ...}? Examples: What is the mileage per gallon of the Ford Mondeo? How long is the movie Gone with the Wind?
  2. Filter. Description: Given some concrete conditions on attribute values, find data cases satisfying those conditions. Pro forma abstract: Which data cases satisfy conditions {A, B, C, ...}? Examples: What Kellogg's cereals have high fiber? What comedies have won awards? Which funds underperformed the SP-500?
  3. Compute Derived Value. Description: Given a set of data cases, compute an aggregate numeric representation of those data cases. Pro forma abstract: What is the value of aggregation function F over a given set S of data cases? Examples: What is the average calorie content of Post cereals? What is the gross income of all stores combined? How many manufacturers of cars are there?
  4. Find Extremum. Description: Find data cases possessing an extreme value of an attribute over its range within the data set. Pro forma abstract: What are the top/bottom N data cases with respect to attribute A? Examples: What is the car with the highest MPG? What director/film has won the most awards? What Marvel Studios film has the most recent release date?
  5. Sort. Description: Given a set of data cases, rank them according to some ordinal metric. Pro forma abstract: What is the sorted order of a set S of data cases according to their value of attribute A? Examples: Order the cars by weight. Rank the cereals by calories.
  6. Determine Range. Description: Given a set of data cases and an attribute of interest, find the span of values within the set. Pro forma abstract: What is the range of values of attribute A in a set S of data cases? Examples: What is the range of film lengths? What is the range of car horsepowers? What actresses are in the data set?
  7. Characterize Distribution. Description: Given a set of data cases and a quantitative attribute of interest, characterize the distribution of that attribute's values over the set. Pro forma abstract: What is the distribution of values of attribute A in a set S of data cases? Examples: What is the distribution of carbohydrates in cereals? What is the age distribution of shoppers?
  8. Find Anomalies. Description: Identify any anomalies within a given set of data cases with respect to a given relationship or expectation, e.g. statistical outliers. Pro forma abstract: Which data cases in a set S of data cases have unexpected/exceptional values? Examples: Are there exceptions to the relationship between horsepower and acceleration? Are there any outliers in protein?
  9. Cluster. Description: Given a set of data cases, find clusters of similar attribute values. Pro forma abstract: Which data cases in a set S of data cases are similar in value for attributes {X, Y, Z, ...}? Examples: Are there groups of cereals with similar fat/calories/sugar? Is there a cluster of typical film lengths?
  10. Correlate. Description: Given a set of data cases and two attributes, determine useful relationships between the values of those attributes. Pro forma abstract: What is the correlation between attributes X and Y over a given set S of data cases? Examples: Is there a correlation between carbohydrates and fat? Is there a correlation between country of origin and MPG? Do different genders have a preferred payment method? Is there a trend of increasing film length over the years?
  11. Contextualization. Description: Given a set of data cases, find contextual relevancy of the data to the users. Pro forma abstract: Which data cases in a set S of data cases are relevant to the current users' context? Example: Are there groups of restaurants that have foods based on my current caloric intake?
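A brief sketch of how a few of these tasks (filter, compute derived value, find extremum, sort, determine range) might look on a hypothetical table of cars in pandas; the table and column names are invented for illustration:

```python
import pandas as pd

cars = pd.DataFrame({
    "model":  ["Alpha", "Bravo", "Charlie", "Delta"],
    "mpg":    [32.5, 24.0, 41.2, 28.7],
    "weight": [1250, 1600, 1100, 1450],
})

high_mpg  = cars[cars["mpg"] > 30]                    # 2. Filter
avg_mpg   = cars["mpg"].mean()                        # 3. Compute Derived Value
best      = cars.loc[cars["mpg"].idxmax()]            # 4. Find Extremum
by_weight = cars.sort_values("weight")                # 5. Sort
mpg_range = (cars["mpg"].min(), cars["mpg"].max())    # 6. Determine Range

print(high_mpg, avg_mpg, best["model"], mpg_range, sep="\n")
```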

Barriers to effective analysis

Barriers to effective analysis may exist among the analysts performing the data analysis or among the audience. Distinguishing fact from opinion, cognitive biases, and innumeracy are all challenges to sound data analysis.

Confusing fact and opinion

You are entitled to your own opinion, but you are not entitled to your own facts.
 
Daniel Patrick Moynihan

Effective analysis requires obtaining relevant facts to answer questions, support a conclusion or formal opinion, or test hypotheses. Facts by definition are irrefutable, meaning that any person involved in the analysis should be able to agree upon them. For example, in August 2010, the Congressional Budget Office (CBO) estimated that extending the Bush tax cuts of 2001 and 2003 for the 2011–2020 time period would add approximately $3.3 trillion to the national debt. Everyone should be able to agree that indeed this is what CBO reported; they can all examine the report. This makes it a fact. Whether persons agree or disagree with the CBO is their own opinion.

As another example, the auditor of a public company must arrive at a formal opinion on whether financial statements of publicly traded corporations are "fairly stated, in all material respects." This requires extensive analysis of factual data and evidence to support their opinion. When making the leap from facts to opinions, there is always the possibility that the opinion is erroneous.

Cognitive biases

There are a variety of cognitive biases that can adversely affect analysis. For example, confirmation bias is the tendency to search for or interpret information in a way that confirms one's preconceptions. In addition, individuals may discredit information that does not support their views.

Analysts may be trained specifically to be aware of these biases and how to overcome them. In his book Psychology of Intelligence Analysis, retired CIA analyst Richards Heuer wrote that analysts should clearly delineate their assumptions and chains of inference and specify the degree and source of the uncertainty involved in the conclusions. He emphasized procedures to help surface and debate alternative points of view.

Innumeracy

Effective analysts are generally adept with a variety of numerical techniques. However, audiences may not have such literacy with numbers or numeracy; they are said to be innumerate. Persons communicating the data may also be attempting to mislead or misinform, deliberately using bad numerical techniques.

For example, whether a number is rising or falling may not be the key factor. More important may be the number relative to another number, such as the size of government revenue or spending relative to the size of the economy (GDP) or the amount of cost relative to revenue in corporate financial statements. This numerical technique is referred to as normalization or common-sizing. There are many such techniques employed by analysts, whether adjusting for inflation (i.e., comparing real vs. nominal data) or considering population increases, demographics, etc. Analysts apply a variety of techniques to address the various quantitative messages described in the section above.
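A minimal sketch of this common-sizing idea, computing spending as a share of GDP and as an index against a base year; the figures below are hypothetical and the 2015 base year is an arbitrary choice:

```python
import pandas as pd

data = pd.DataFrame({
    "year":     [2015, 2016, 2017, 2018],
    "spending": [3.7, 3.9, 4.0, 4.1],      # hypothetical, e.g. trillions
    "gdp":      [18.2, 18.7, 19.5, 20.5],  # hypothetical, same units
})

# Normalize relative to the size of the economy.
data["spending_pct_gdp"] = 100 * data["spending"] / data["gdp"]

# Index relative to a base year (2015 = 100).
base = data.loc[data["year"] == 2015, "spending"].iloc[0]
data["spending_index"] = 100 * data["spending"] / base

print(data)
```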

Analysts may also analyze data under different assumptions or scenarios. For example, when analysts perform financial statement analysis, they will often recast the financial statements under different assumptions to help arrive at an estimate of future cash flow, which they then discount to present value based on some interest rate, to determine the valuation of the company or its stock. Similarly, the CBO analyzes the effects of various policy options on the government's revenue, outlays and deficits, creating alternative future scenarios for key measures.

Other topics

Smart buildings

A data analytics approach can be used to predict energy consumption in buildings. The steps of the data analysis process are carried out to realise smart buildings, in which building management and control operations, including heating, ventilation, air conditioning, lighting and security, are performed automatically by mimicking the needs of the building users and optimising resources such as energy and time.

Analytics and business intelligence

Analytics is the "extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions." It is a subset of business intelligence, which is a set of technologies and processes that use data to understand and analyze business performance.

Education

Analytic activities of data visualization users

In education, most educators have access to a data system for the purpose of analyzing student data. These data systems present data to educators in an over-the-counter data format (embedding labels, supplemental documentation, and a help system and making key package/display and content decisions) to improve the accuracy of educators’ data analyses.

Practitioner notes

Initial data analysis

The most important distinction between the initial data analysis phase and the main analysis phase, is that during initial data analysis one refrains from any analysis that is aimed at answering the original research question. The initial data analysis phase is guided by the following four questions:

Quality of data

The quality of the data should be checked as early as possible. Data quality can be assessed in several ways, using different types of analysis: frequency counts, descriptive statistics (mean, standard deviation, median), normality checks (skewness, kurtosis, frequency histograms), and comparison of variables with coding schemes external to the data set, with corrections where the coding schemes are not comparable.
The choice of analyses to assess the data quality during the initial data analysis phase depends on the analyses that will be conducted in the main analysis phase.

Quality of measurements

The quality of the measurement instruments should only be checked during the initial data analysis phase when this is not the focus or research question of the study. One should check whether the structure of the measurement instruments corresponds to the structure reported in the literature.

Measurement quality can be assessed with, for example:
  • Analysis of homogeneity (internal consistency), which gives an indication of the reliability of a measurement instrument. During this analysis, one inspects the variances of the items and the scales, the Cronbach's α of the scales, and the change in Cronbach's α if an item were deleted from a scale.
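A minimal sketch of the internal-consistency check described above: Cronbach's α computed from a hypothetical respondents-by-items matrix (the six response rows and the 4-item scale are invented for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 6 people on a 4-item scale (1-5 Likert).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```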

Initial transformations

After assessing the quality of the data and of the measurements, one might decide to impute missing data, or to perform initial transformations of one or more variables, although this can also be done during the main analysis phase.

Possible transformations of variables are:
  • Square root transformation (if the distribution differs moderately from normal);
  • Log-transformation (if the distribution differs substantially from normal);
  • Inverse transformation (if the distribution differs severely from normal);
  • Make categorical (ordinal / dichotomous) (if the distribution differs severely from normal, and no transformations help).
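A minimal sketch of the transformations listed above, applied to a hypothetical right-skewed variable (the log and inverse transforms assume strictly positive values; the median split at the end is just one way to dichotomize):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000)   # hypothetical skewed variable

x_sqrt = np.sqrt(x)     # moderate departure from normality
x_log  = np.log(x)      # substantial departure
x_inv  = 1.0 / x        # severe departure

# Make categorical (dichotomous) when no transformation helps, e.g. a median split.
x_binary = (x > np.median(x)).astype(int)

for name, v in [("raw", x), ("sqrt", x_sqrt), ("log", x_log), ("inverse", x_inv)]:
    print(f"{name:8s} skewness = {skew(v):.2f}")
```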

Did the implementation of the study fulfill the intentions of the research design?

One should check the success of the randomization procedure, for instance by checking whether background and substantive variables are equally distributed within and across groups.
If the study did not need or use a randomization procedure, one should check the success of the non-random sampling, for instance by checking whether all subgroups of the population of interest are represented in the sample.

Other possible data distortions that should be checked are:
  • Dropout (this should be identified during the initial data analysis phase);
  • Item nonresponse (whether this is random or not should be assessed during the initial data analysis phase);
  • Treatment quality (using manipulation checks).

Characteristics of data sample

In any report or article, the structure of the sample must be accurately described. It is especially important to exactly determine the structure of the sample (and specifically the size of the subgroups) when subgroup analyses will be performed during the main analysis phase.

The characteristics of the data sample can be assessed by looking at:
  • Basic statistics of important variables;
  • Scatter plots;
  • Correlations and associations;
  • Cross-tabulations.

Final stage of the initial data analysis

During the final stage, the findings of the initial data analysis are documented, and necessary, preferable, and possible corrective actions are taken.

Also, the original plan for the main data analyses can and should be specified in more detail or rewritten.

In order to do this, several decisions about the main data analyses can and should be made:
  • In the case of non-normals: should one transform variables; make variables categorical (ordinal/dichotomous); adapt the analysis method?
  • In the case of missing data: should one neglect or impute the missing data; which imputation technique should be used?
  • In the case of outliers: should one use robust analysis techniques?
  • In case items do not fit the scale: should one adapt the measurement instrument by omitting items, or rather ensure comparability with other (uses of the) measurement instrument(s)?
  • In the case of (too) small subgroups: should one drop the hypothesis about inter-group differences, or use small sample techniques, like exact tests or bootstrapping?
  • In case the randomization procedure seems to be defective: can and should one calculate propensity scores and include them as covariates in the main analyses?

Analysis

Several analyses can be used during the initial data analysis phase:
  • Univariate statistics (single variable);
  • Bivariate associations (correlations);
  • Graphical techniques (scatter plots).
It is important to take the measurement levels of the variables into account for the analyses, as special statistical techniques are available for each level:
  • Nominal and ordinal variables
    • Frequency counts (numbers and percentages);
    • Associations
      • cross-tabulations;
      • hierarchical loglinear analysis (restricted to a maximum of 8 variables);
      • loglinear analysis (to identify relevant/important variables and possible confounders);
    • Exact tests or bootstrapping (in case subgroups are small);
    • Computation of new variables;
  • Continuous variables
    • Distribution:
      • Statistics (M, SD, variance, skewness, kurtosis);
      • Stem-and-leaf displays;
      • Box plots.

Nonlinear analysis

Nonlinear analysis will be necessary when the data is recorded from a nonlinear system. Nonlinear systems can exhibit complex dynamic effects including bifurcations, chaos, harmonics and subharmonics that cannot be analyzed using simple linear methods. Nonlinear data analysis is closely related to nonlinear system identification.

Main data analysis

In the main analysis phase, analyses aimed at answering the research question are performed, as well as any other relevant analyses needed to write the first draft of the research report.

Exploratory and confirmatory approaches

In the main analysis phase either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected. In an exploratory analysis no clear hypothesis is stated before analysing the data, and the data is searched for models that describe the data well. In a confirmatory analysis clear hypotheses about the data are tested.

Exploratory data analysis should be interpreted carefully. When testing multiple models at once, there is a high chance of finding at least one of them to be significant, but this can be due to a Type I error. It is important to always adjust the significance level when testing multiple models, for example with a Bonferroni correction. Also, one should not follow up an exploratory analysis with a confirmatory analysis in the same dataset. An exploratory analysis is used to find ideas for a theory, but not to test that theory as well. When a model is found through exploratory analysis in a dataset, following up with a confirmatory analysis in the same dataset could simply mean that the results of the confirmatory analysis are due to the same Type I error that produced the exploratory model in the first place. The confirmatory analysis will therefore be no more informative than the original exploratory analysis.
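A minimal sketch of the Bonferroni correction mentioned above: with m models tested, each is judged at significance level alpha / m. The p-values and the 0.05 level are hypothetical:

```python
p_values = [0.003, 0.020, 0.041, 0.250, 0.700]   # hypothetical, one per model tested
alpha = 0.05
m = len(p_values)

adjusted_alpha = alpha / m   # 0.01 in this example
significant = [p for p in p_values if p < adjusted_alpha]
print(f"per-test threshold = {adjusted_alpha:.3f}; significant p-values: {significant}")
```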

Stability of results

It is important to obtain some indication about how generalizable the results are. While this is hard to check, one can look at the stability of the results. Are the results reliable and reproducible? There are two main ways of doing this:
  • Cross-validation: By splitting the data in multiple parts we can check if an analysis (like a fitted model) based on one part of the data generalizes to another part of the data as well;
  • Sensitivity analysis: A procedure to study the behavior of a system or model when global parameters are (systematically) varied. One way to do this is with bootstrapping.
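A minimal sketch of both stability checks above: k-fold cross-validation of a simple linear fit, and a bootstrap of its slope estimate. The data are simulated and the choices of k = 5 and 1,000 bootstrap resamples are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, size=100)   # hypothetical roughly linear data

# Cross-validation: fit on k-1 folds, measure error on the held-out fold.
k = 5
folds = np.array_split(rng.permutation(len(x)), k)
errors = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    a, b = np.polyfit(x[train], y[train], deg=1)
    errors.append(np.mean((y[test] - (a * x[test] + b)) ** 2))
print("cross-validated MSE per fold:", np.round(errors, 3))

# Bootstrap as a sensitivity check: how stable is the estimated slope under resampling?
slopes = []
for _ in range(1000):
    sample = rng.integers(0, len(x), size=len(x))
    slopes.append(np.polyfit(x[sample], y[sample], deg=1)[0])
print("slope 95% interval:", np.percentile(slopes, [2.5, 97.5]))
```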

Statistical methods

Many statistical methods have been used for such analyses, ranging from simple descriptive summaries to formal model-based techniques.

Free software for data analysis

  • DevInfo – a database system endorsed by the United Nations Development Group for monitoring and analyzing human development;
  • ELKI – data mining framework in Java with data mining oriented visualization functions;
  • KNIME – the Konstanz Information Miner, a user friendly and comprehensive data analytics framework;
  • Orange – A visual programming tool featuring interactive data visualization and methods for statistical data analysis, data mining, and machine learning;
  • PAST – free software for scientific data analysis;
  • PAW – FORTRAN/C data analysis framework developed at CERN;
  • R – a programming language and software environment for statistical computing and graphics;
  • ROOT – C++ data analysis framework developed at CERN;
  • SciPy and Pandas – Python libraries for data analysis.

International data analysis contests

Different companies and organizations hold data analysis contests to encourage researchers to utilize their data or to solve a particular question using data analysis; several well-known international contests are held regularly.

Quality assurance

From Wikipedia, the free encyclopedia

Quality assurance (QA) is a way of preventing mistakes and defects in manufactured products and avoiding problems when delivering solutions or services to customers. ISO 9000 defines it as "part of quality management focused on providing confidence that quality requirements will be fulfilled". This defect prevention in quality assurance differs subtly from defect detection and rejection in quality control, and has been referred to as a shift left, since it focuses on quality earlier in the process (i.e., to the left of a linear process diagram read left to right).

The terms "quality assurance" and "quality control" are often used interchangeably to refer to ways of ensuring the quality of a service or product. For instance, the term "assurance" is often used as follows: Implementation of inspection and structured testing as a measure of quality assurance in a television set software project at Philips Semiconductors is described. The term "control", however, is used to describe the fifth phase of the Define, Measure, Analyze, Improve, Control (DMAIC) model. DMAIC is a data-driven quality strategy used to improve processes.

Quality assurance comprises administrative and procedural activities implemented in a quality system so that requirements and goals for a product, service or activity will be fulfilled. It is the systematic measurement, comparison with a standard, monitoring of processes and an associated feedback loop that confers error prevention. This can be contrasted with quality control, which is focused on process output.

Quality assurance includes two principles: "Fit for purpose" (the product should be suitable for the intended purpose) and "right first time" (mistakes should be eliminated). QA includes management of the quality of raw materials, assemblies, products and components, services related to production, and the management, production and inspection processes. The two principles also manifest against the background of developing (engineering) a novel technical product: the task of engineering is to make it work once, while the task of quality assurance is to make it work all the time.

Historically, defining what suitable product or service quality means has been a more difficult process, determined in many ways, from the subjective user-based approach that contains "the different weights that individuals normally attach to quality characteristics," to the value-based approach which finds consumers linking quality to price and making overall conclusions of quality based on such a relationship.

History

Initial efforts to control the quality of production

During the Middle Ages, guilds adopted responsibility for the quality of goods and services offered by their members, setting and maintaining certain standards for guild membership.

Royal governments purchasing material were interested in quality control as customers. For this reason, King John of England appointed William de Wrotham to report about the construction and repair of ships. Centuries later, Samuel Pepys, Secretary to the British Admiralty, appointed multiple such overseers to standardize sea rations and naval training.

Prior to the extensive division of labor and mechanization resulting from the Industrial Revolution, it was possible for workers to control the quality of their own products. The Industrial Revolution led to a system in which large groups of people performing a specialized type of work were grouped together under the supervision of a foreman who was appointed to control the quality of work manufactured.

Wartime production

During the time of the First World War, manufacturing processes typically became more complex, with larger numbers of workers being supervised. This period saw the widespread introduction of mass production and piece work, which created problems as workmen could now earn more money by the production of extra products, which in turn occasionally led to poor quality workmanship being passed on to the assembly lines. Pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the subsequent varying quality of output. Taylor, utilizing the concept of scientific management, helped separate production tasks into many simple steps (the assembly line) and limited quality control to a few specific individuals, limiting complexity. Ford emphasized standardization of design and component standards to ensure a standard product was produced, while quality was the responsibility of machine inspectors, "placed in each department to cover all operations ... at frequent intervals, so that no faulty operation shall proceed for any great length of time."

Out of this also came statistical process control (SPC), which was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson also in 1924 in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig, he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II.

Postwar

In the period following World War II, many countries' manufacturing capabilities that had been destroyed during the war were rebuilt. General Douglas MacArthur oversaw the re-building of Japan. During this time, General MacArthur involved two key individuals in the development of modern quality concepts: W. Edwards Deming and Joseph Juran. Both individuals, as well as others, promoted the collaborative concepts of quality to Japanese business and technical groups, and these groups utilized these concepts in the redevelopment of the Japanese economy.

Although there were many individuals trying to lead United States industries towards a more comprehensive approach to quality, the U.S. continued to apply the Quality Control (QC) concepts of inspection and sampling to remove defective product from production lines, essentially unaware of or ignoring advances in QA for decades.

Approaches

Failure testing

A valuable process to perform on a whole consumer product is failure testing or stress testing. In mechanical terms this is the operation of a product until it fails, often under stresses such as increasing vibration, temperature, and humidity. This exposes many unanticipated weaknesses in a product, and the data is used to drive engineering and manufacturing process improvements. Often quite simple changes can dramatically improve product service, such as changing to mold-resistant paint or adding lock-washer placement to the training for new assembly personnel.

Statistical control

Statistical control is based on analyses of objective and subjective data. Many organizations use statistical process control as a tool in quality improvement efforts to track quality data. A product can be statistically charted as long as it has a common-cause or special-cause variance to track.

Walter Shewhart of Bell Telephone Laboratories recognized that when a product is made, data can be taken from scrutinized areas of a sample lot of the part, and statistical variances can then be analyzed and charted. Control can then be implemented on the part in the form of rework or scrap, or on the process that made the part, ideally eliminating the defect before more parts like it are made.
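A minimal sketch of the control-chart idea, assuming hypothetical measurements of a machined part dimension: control limits are set from an in-control baseline at the mean plus or minus three standard deviations, and later samples outside the limits are flagged as possible special-cause variation (real Shewhart charts usually derive limits from subgroup ranges; this is a simplified illustration):

```python
import numpy as np

# Hypothetical in-control baseline measurements used to set the limits.
baseline = np.array([10.02, 9.98, 10.01, 10.00, 9.97, 10.03, 9.99, 10.00, 10.01, 9.98])
center = baseline.mean()
sigma = baseline.std(ddof=1)
upper, lower = center + 3 * sigma, center - 3 * sigma

# Hypothetical new samples checked against the established limits.
new_samples = np.array([10.01, 9.99, 10.21, 10.00])
flags = (new_samples > upper) | (new_samples < lower)

print(f"center = {center:.3f}, control limits = ({lower:.3f}, {upper:.3f})")
for value, flagged in zip(new_samples, flags):
    print(f"{value:.2f} -> {'INVESTIGATE' if flagged else 'in control'}")
```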

Total quality management

The quality of products is dependent upon that of the participating constituents, some of which are sustainable and effectively controlled while others are not. The process(es) which are managed with QA pertain to Total Quality Management.

If the specification does not reflect the true quality requirements, the product's quality cannot be guaranteed. For instance, the parameters for a pressure vessel should cover not only the material and dimensions but operating, environmental, safety, reliability and maintainability requirements.

Models and standards

ISO 17025 is an international standard that specifies the general requirements for the competence to carry out tests and/or calibrations. There are 15 management requirements and 10 technical requirements; these requirements outline what a laboratory must do to become accredited. A management system refers to the organization's structure for managing its processes or activities that transform inputs of resources into a product or service which meets the organization's objectives, such as satisfying the customer's quality requirements, complying with regulations, or meeting environmental objectives. The WHO has developed several tools and offers training courses for quality assurance in public health laboratories.

The Capability Maturity Model Integration (CMMI) model is widely used to implement Process and Product Quality Assurance (PPQA) in an organization. The CMMI maturity levels can be divided into 5 steps, which a company can achieve by performing specific activities within the organization.

Company quality

During the 1980s, the concept of "company quality" with the focus on management and people came to the fore in the U.S. It was considered that, if all departments approached quality with an open mind, success was possible if management led the quality improvement process.

The company-wide quality approach places an emphasis on four aspects (enshrined in standards such as ISO 9001):
  1. Elements such as controls, job management, adequate processes, performance and integrity criteria and identification of records;
  2. Competence such as knowledge, skills, experiences, qualifications
  3. Soft elements, such as personnel integrity, confidence, organizational culture, motivation, team spirit and quality relationships;
  4. Infrastructure (as it enhances or limits functionality).
The quality of the outputs is at risk if any of these aspects is deficient.

QA is not limited to manufacturing, and can be applied to any business or non-business activity, including: design, consulting, banking, insurance, computer software development, retailing, investment, transportation, education, and translation.

It comprises a quality improvement process, which is generic in the sense that it can be applied to any of these activities and it establishes a behavior pattern, which supports the achievement of quality.
This in turn is supported by quality management practices which can include a number of business systems and which are usually specific to the activities of the business unit concerned.

In manufacturing and construction activities, these business practices can be equated to the models for quality assurance defined by the International Standards contained in the ISO 9000 series and the specified Specifications for quality systems.

Before the emphasis on company quality, the quality work being carried out was shop-floor inspection, which did not reveal the major quality problems. This led to quality assurance, or total quality control, which has come into being more recently.

In practice

Medical industry

QA is very important in the medical field because it helps to ensure the standards of medical equipment and services. Hospitals and laboratories make use of external agencies, such as the Atomic Energy Regulatory Board (AERB), to ensure standards for equipment such as X-ray machines and diagnostic radiology systems. QA is particularly applicable throughout the development and introduction of new medicines and medical devices. The Research Quality Association (RQA) supports and promotes the quality of research in the life sciences, through its members and regulatory bodies.

Aerospace industry

The term product assurance (PA) is often used instead of quality assurance and is, alongside project management and engineering, one of the three primary project functions. Quality assurance is seen as one part of product assurance. Due to the sometimes catastrophic consequences a single failure can have for human lives, the environment, a device, or a mission, product assurance plays a particularly important role here. It has organizational, budgetary and product developmental independence meaning that it reports to highest management only, has its own budget, and does not expend labor to help build a product. Product assurance stands on an equal footing with project management but embraces the customer's point of view.

Software development

Software Quality Assurance consists of a means of monitoring the software engineering processes and methods used to ensure quality. The methods by which this is accomplished are many and varied, and may include ensuring conformance to one or more standards, such as ISO 9000 or a model such as CMMI. In addition, enterprise quality management software is used to correct issues such as: supply chain disaggregation and regulatory compliance which are vital among medical device manufacturers.

Using contractors or consultants

Consultants and contractors are sometimes employed when introducing new quality practices and methods, particularly where the relevant skills and expertise are not available within the organization or where internal resources cannot be allocated. Consultants and contractors will often employ quality management systems (QMS), auditing and procedural documentation writing, CMMI, Six Sigma, Measurement Systems Analysis (MSA), Quality Function Deployment (QFD), Failure Mode and Effects Analysis (FMEA), and Advanced Product Quality Planning (APQP).

Japanese economic miracle

From Wikipedia, the free encyclopedia

The foundations of the aviation industry survived the war.
 
Japanese-made TV sets during the economic boom.
 
Steel plant of Nippon Steel Corporation in Chiba Prefecture – Japanese coal- and metal-related industry experienced an annual growth rate of 25% in the 1960s.
 
The low-cost Nissan Sunny became a symbol of the Japanese middle class in the 1960s.
 
The Japanese economic miracle was Japan's record period of economic growth from the post-World War II era to the end of the Cold War. During the economic boom, Japan rapidly became the world's second-largest economy (after the United States). By the 1990s, Japan's demographics began stagnating and the workforce was no longer expanding as it had in previous decades, despite per-worker productivity remaining high.

Background

This economic miracle was the result of post-World War II Japan and West Germany benefiting from the Cold War. It occurred chiefly due to the economic interventionism of the Japanese government and partly due to the aid and assistance of the U.S. Marshall Plan. After World War II, the U.S. established a significant presence in Japan to slow the expansion of Soviet influence in the Pacific. The U.S. was also concerned with the growth of the economy of Japan because there was a risk after World War II that an unhappy and poor Japanese population would turn to communism and by doing so ensure that the Soviet Union would control the Pacific.

The distinguishing characteristics of the Japanese economy during the "economic miracle" years included: the cooperation of manufacturers, suppliers, distributors, and banks in closely knit groups called keiretsu; the powerful enterprise unions and shuntō; good relations with government bureaucrats, and the guarantee of lifetime employment (shūshin koyō) in big corporations and highly unionized blue-collar factories.

Governmental contributions

The Japanese financial recovery continued even after SCAP departed and the economic boom propelled by the Korean War abated. The Japanese economy survived the deep recession caused by the loss of U.S. payments for military procurement and continued to make gains. By the late 1960s, Japan had risen from the ashes of World War II to achieve an astoundingly rapid and complete economic recovery. According to Mikiso Hane, the period leading up to the late 1960s saw "the greatest years of prosperity Japan had seen since the Sun Goddess shut herself up behind a stone door to protest her brother Susano-o's misbehavior." The Japanese government contributed to the post-war Japanese economic miracle by stimulating private sector growth, first by instituting regulations and protectionism that effectively managed economic crises and later by concentrating on trade expansion.

History

Brief introduction to the Japanese Economic Miracle

The Japanese economic miracle refers to the significant growth of the Japanese economy between the end of World War II and the end of the Cold War (1945-1993). The economic miracle can be divided into four stages: the recovery (1946-1954), the high-growth period (1955-1972), the steady-growth period (1972-1992), and the low-growth period (1992-2018).

Though heavily damaged by the nuclear bombardment of Hiroshima and Nagasaki and other Allied air raids, Japan was able to recover from the trauma of WWII and managed to become the second-largest economy in the world (after the United States, excluding the Soviet Union) by the 1960s. After three decades, however, Japan experienced the so-called "recession in growth", as the United States imposed protectionist policies that suppressed Japanese production and forced the appreciation of the Japanese yen. To stave off further pressure, Japan pushed its technological advances and accepted the rising value of the yen, since devaluing the yen would have brought further risk and a possible depressing effect on trade. The appreciation of the yen led to a significant economic recession in the 1980s. To alleviate the recession, Japan imposed a series of economic and financial policies to stimulate domestic demand. Nevertheless, the bubble economy of the late 1980s and early 1990s and the subsequent deflationary policy devastated the Japanese economy. After the deflationary policy, the Japanese economy has been through a period of low growth that has lasted until today.

The Recovery Stage (1946-1954)

Japan was seriously harmed in WWII. For instance, during wartime, "the Japanese cotton industry was brought to its knees by the end of the Second World War. Two-thirds of its prewar cotton spindles were scrapped by wartime administrators, and bombing and destruction of urban areas had caused a further loss of 20 percent of spinning and 14 percent of weaving capacity". Nonetheless, the speed of recovery astonished the world, earning the title of "Japanese economic miracle". By and large, every country experienced some degree of industrial growth in the postwar period, but those countries that had suffered a heavy drop in industrial output due to war damage, such as Japan, West Germany and Italy, achieved the most rapid recovery. In the case of Japan, industrial production had fallen by 1946 to 27.6% of the pre-war level, but it regained that level in 1951 and reached 350% of it by 1960.

The first reason for Japan's swift recovery from war trauma was successful economic reform by the government. The government body principally concerned with industrial policy in Japan was the Ministry of International Trade and Industry (MITI). One of the major economic reforms was to adopt the "Inclined Production Mode" (けいしゃせいさんほうしき), which refers to production that primarily focuses on raw materials including steel, coal and cotton. Textile production occupied more than 23.9% of total industrial production. Moreover, to stimulate production, the Japanese government supported the recruitment of new labour, especially female labour. By enhancing the recruitment of female labour, Japan managed to recover from the destruction. The legislation on recruitment contained three components: restrictions placed on the regional recruitment and relocation of workers, a ban on the direct recruitment of new school leavers, and the direct recruitment of non-school leavers under explicitly detailed regulations issued by the Ministry of Labor.

The second reason for Japan's rapid recovery from WWII was the outbreak of the Korean War, as Japan benefited from Special Procurement. The Korean War was fought on the Korean Peninsula, and the United States eventually participated in the war, providing an opportunity for the Japanese economy. Because the Korean Peninsula is distant from US territory, logistics soon became a significant problem. As one of the major supporters of the United States in Asia, Japan stood out, providing ample support for logistical operations and also benefitting from the production of firearms. The United States' orders for large quantities of firearms and other materiel greatly stimulated the Japanese economy, enabling Japan to recover from wartime destruction and providing a basis for the coming high-growth stage.

The High-Growth Stage (1954-1972)

After gaining support from the United States and carrying out domestic economic reform, Japan's economy was able to soar from the 1950s to the 1970s. Japan also completed its industrialization process and became one of the first developed countries in East Asia. The Japanese Economic Yearbooks from 1967 to 1971 recorded the significant increase: the 1967 yearbook said that the Japanese economy in 1966 had advanced more rapidly than previously expected, and the 1968 yearbook said that the Japanese economy continued to grow soundly after bottoming out in the autumn of 1965. The words "increase", "growth" and "upswing" filled the summaries of the yearbooks from 1967 to 1971. The reasons Japan completed industrialization are also complicated, but the major characteristics of this period were the influence of the governmental policies of the Hayato Ikeda administration, vast consumption, and vast exports.

Influence of Governmental Policies: Ikeda administration and keiretsu

In 1954, the economic system MITI had cultivated from 1949 to 1953 came into full effect. Prime Minister Hayato Ikeda, whom Johnson calls "the single most important individual architect of the Japanese economic miracle," pursued a policy of heavy industrialization. This policy led to the emergence of 'over-loaning' (a practice that continues today), in which the Bank of Japan issues loans to city banks, which in turn issue loans to industrial conglomerates. Since there was a shortage of capital in Japan at the time, industrial conglomerates borrowed beyond their capacity to repay, often beyond their net worth, causing city banks in turn to overborrow from the Bank of Japan. This gave the national Bank of Japan complete control over dependent local banks.

The system of over-loaning, combined with the government's relaxation of anti-monopoly laws (a remnant of SCAP control) also led to the reemergence of conglomerate groups called keiretsu that mirrored the wartime conglomerates, or zaibatsu. Led by the economic improvements of Sony businessmen Masaru Ibuka and Akio Morita, the keiretsu efficiently allocated resources and became competitive internationally.

At the heart of the keiretsu conglomerates' success lay city banks, which lent generously, formalizing cross-share holdings in diverse industries. The keiretsu spurred both horizontal and vertical integration, locking out foreign companies from Japanese industries. Keiretsu had close relations with MITI and each other through the cross-placement of shares, providing protection from foreign take-overs. For example, 83% of the Japan Development Bank's financing went toward strategic industries: shipbuilding, electric power, coal and steel production. Keiretsu proved crucial to protectionist measures that shielded Japan's sapling economy.

Keiretsu also fostered an attitude shift among Japanese managers that tolerated low profits in the short-run because keiretsu were less concerned with increasing stock dividends and profits and more concerned about interest payments. Approximately only two-thirds of the shares of a given company were traded, cushioning keiretsu against market fluctuations and allowing keiretsu managers to plan for the long-term and maximize market shares instead of focusing on short-term profits.

The Ikeda administration also instituted the Foreign Exchange Allocation Policy, a system of import controls designed to prevent the flooding of Japan's markets by foreign goods. MITI used foreign exchange allocation to stimulate the economy by promoting exports, managing investment, and monitoring production capacity. In 1953, MITI revised the Foreign Exchange Allocation Policy to promote domestic industries and increase the incentive for exports by revising the export-link system. A later revision tied foreign exchange allocation to production capacity in order to discourage foreign dumping.

Vast Consumption: From survival to recreation

During the period of reconstruction and before the 1973 oil crisis, Japan completed its industrialization process, achieving a significant improvement in living standards and a marked increase in consumption. The average monthly consumption of urban family households doubled from 1955 to 1970. Moreover, the composition of consumption in Japan was also changing: spending on daily necessities, such as food, clothing, and footwear, was falling as a share of the total, while spending on recreation, entertainment, and related goods, including furniture, transportation, communications, and reading, increased. This great increase in consumption stimulated growth in GDP by incentivizing production.

Vast Export: "Golden Sixties" and shift to export trade

The period of rapid economic growth between 1955 and 1961 paved the way for the "Golden Sixties," the second decade that is generally associated with the Japanese economic miracle. In 1965, Japan's nominal GDP was estimated at just over $91 billion. Fifteen years later, in 1980, the nominal GDP had soared to a record $1.065 trillion.
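
As a rough, back-of-the-envelope check on the scale of this expansion, the figures quoted above imply the average annual growth rate sketched below. This is nominal growth in current US dollars, so it reflects inflation and yen appreciation as well as real growth; the calculation uses only the two figures given in the text.

  # Implied average annual growth rate of nominal GDP between 1965 and 1980,
  # using only the figures quoted in the paragraph above.
  gdp_1965 = 91e9        # just over $91 billion (1965)
  gdp_1980 = 1.065e12    # about $1.065 trillion (1980)
  years = 1980 - 1965

  cagr = (gdp_1980 / gdp_1965) ** (1 / years) - 1
  print(f"Implied average nominal growth: {cagr:.1%} per year")  # roughly 18%
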

Under the leadership of Prime Minister Ikeda, a former minister of MITI, the Japanese government undertook an ambitious "income-doubling plan" (所得倍増). Ikeda lowered interest rates and taxes on private players to motivate spending. In addition, thanks to the financial flexibility afforded by the Fiscal Investment and Loan Plan (FILP), Ikeda's government rapidly expanded government investment in Japan's infrastructure: building highways, high-speed railways, subways, airports, port facilities, and dams. Ikeda's government also expanded government investment in the previously neglected communications sector of the Japanese economy. Each of these acts continued the Japanese trend toward a managed economy that epitomized the mixed economic model.

Besides his adherence to government intervention in and regulation of the economy, Ikeda's government pushed trade liberalization. By April 1960, trade imports had been 41 percent liberalized (compared to 22 percent in 1956). Ikeda planned to liberalize trade to 80 percent within three years. His plans, however, met severe opposition both from industries that had thrived on over-loaning and from a nationalist public that feared foreign enterprise takeovers. The Japanese press likened liberalization to "the second coming of the black ships," "the defenselessness of the Japanese islands in the face of attack from huge foreign capitalist powers," and "the readying of the Japanese economy for a bloodstained battle between national capital and foreign capital." Ikeda's income-doubling plan was largely a response to this growing opposition and widespread panic over liberalization, adopted to quell public protests. Ikeda's motivations, however, were purely pragmatic and based on foreign policy: he moved toward liberalization of trade only after securing a protected market through internal regulations that favored Japanese products and firms.

Ikeda also set up numerous allied foreign-aid distribution agencies to demonstrate Japan's willingness to participate in the international order and to promote exports. The creation of these agencies not only acted as a small concession to international organizations but also dissipated some public fears about trade liberalization. Ikeda furthered Japan's global economic integration through membership in the GATT (1955), the IMF, and the OECD (1964). By the time Ikeda left office, GNP was growing at a phenomenal rate of 13.9 percent.

In 1962, Kaname Akamatsu published his famous article introducing the Flying Geese Paradigm. It postulated that Asian nations would catch up with the West as part of a regional hierarchy in which the production of commoditized goods would continuously move from the more advanced countries to the less advanced ones. The paradigm was so named because Akamatsu envisioned this pattern as geese flying in formation, with Japan as the obvious leader.

The Steady Increasing Stage (1973-1992)

In 1973, the first oil-price shock struck Japan (the 1973 oil crisis). The price of oil rose from 3 dollars per barrel to over 13 dollars per barrel. During this period, Japan's industrial production decreased by 20%, as supply capacity could not respond effectively to the rapid expansion of demand, and increased investment in equipment often invited unwanted results, such as tighter supply and higher commodity prices. The second oil shock of 1978 and 1979 exacerbated the situation, as the oil price again increased, from 13 dollars per barrel to 39.5 dollars per barrel. Despite being seriously affected by the two oil crises, Japan was able to withstand the impact and shifted from a product-concentrated to a technology-concentrated form of production.

The transformation was, in fact, a product of the oil crises and of United States intervention. Since the price of oil had risen roughly tenfold, the cost of production also soared. After the oil crises, to save costs, Japan had to make products that were more environmentally friendly and consumed less oil. The biggest factor driving industrial change after the oil crises was the increase in energy prices, including crude oil. As a result, Japan converted to a technology-concentrated program, ensuring the steady growth of its economy and standing out among capitalist countries that had been significantly wounded by the oil crises. Another factor was friction with the United States, since Japan's rapid economic growth could potentially harm American economic interests. In 1985, the United States signed the Plaza Accord with Japan, West Germany, France, and Britain. The Plaza Accord was an attempt to devalue the US dollar, yet it harmed Japan the most. Japan attempted to expand its international markets through the appreciation of the Japanese yen, yet the yen over-appreciated, creating a bubble economy. The Plaza Accord was successful in reducing the U.S. trade deficit with Western European nations but largely failed to fulfill its primary objective of alleviating the trade deficit with Japan.

Role of the Ministry of International Trade and Industry

The Ministry of International Trade and Industry (MITI) was instrumental in Japan's post-war economic recovery. According to some scholars, no other governmental regulation or organization had more economic impact than MITI. "The particular speed, form, and consequences of Japanese economic growth," Chalmers Johnson writes, "are not intelligible without reference to the contributions of MITI" (Johnson, vii). Established in 1949, MITI's role began with the "Policy Concerning Industrial Rationalization" (1950) that coordinated efforts by industries to counteract the effects of SCAP's deflationary regulations. In this way, MITI formalized cooperation between the Japanese government and private industry. The extent of the policy was such that if MITI wished to "double steel production, the neo-zaibatsu already has the capital, the construction assets, the makers of production machinery, and most of the other necessary factors already available in-house". The Ministry coordinated various industries, including the emerging keiretsu, toward a specific end, usually toward the intersection of national production goals and private economic interests.

MITI also boosted the industrial security by untying the imports of technology from the imports of other goods. MITI's Foreign Capital Law granted the ministry power to negotiate the price and conditions of technology imports. This element of technological control allowed it to promote industries it deemed promising. The low cost of imported technology allowed for rapid industrial growth. Productivity was greatly improved through new equipment, management, and standardization.

MITI gained the ability to regulate all imports with the abolition of the Economic Stabilization Board and the Foreign Exchange Control Board in August 1952. Although the Economic Stabilization Board was already dominated by MITI, the Yoshida government transformed it into the Economic Deliberation Agency, a mere "think tank," in effect giving MITI full control over all Japanese imports. Power over the foreign exchange budget was also given directly to MITI.

MITI's establishment of the Japan Development Bank also provided the private sector with low-cost capital for long-term growth. The Japan Development Bank introduced access to the Fiscal Investment and Loan Plan, a massive pooling of individual and national savings. At the time FILP controlled four times the savings of the world's largest commercial bank. With this financial power, FILP was able to maintain an abnormally high number of Japanese construction firms (more than twice the number of construction firms of any other nation with a similar GDP).

Conclusion

The conclusion of the economic miracle coincided with the conclusion of the Cold War. The Japanese stock market hit its all-time peak at the end of 1989, recovered briefly in 1990, and then dropped precipitously in 1991. The end of the Japanese asset price bubble coincided with the Gulf War and the dissolution of the Soviet Union.

W. Edwards Deming

From Wikipedia, the free encyclopedia

W. Edwards Deming
Born: October 14, 1900, Sioux City, Iowa
Died: December 20, 1993 (aged 93), Washington, D.C.
Alma mater: University of Wyoming (BS), University of Colorado (MS), Yale University (PhD)
Fields: Statistics
Influences: Walter A. Shewhart

William Edwards Deming (October 14, 1900 – December 20, 1993) was an American engineer, statistician, professor, author, lecturer, and management consultant. Educated initially as an electrical engineer and later specializing in mathematical physics, he helped develop the sampling techniques still used by the U.S. Department of the Census and the Bureau of Labor Statistics. In his book, The New Economics for Industry, Government, and Education, Deming championed the work of Walter Shewhart, including statistical process control, operational definitions, and what Deming called the "Shewhart Cycle" which had evolved into Plan-Do-Study-Act (PDSA). This was in response to the growing popularity of PDCA, which Deming viewed as tampering with the meaning of Shewhart's original work. Deming is best known for his work in Japan after WWII, particularly his work with the leaders of Japanese industry. That work began in August 1950 at the Hakone Convention Center in Tokyo, when Deming delivered a speech on what he called "Statistical Product Quality Administration". Many in Japan credit Deming as one of the inspirations for what has become known as the Japanese post-war economic miracle of 1950 to 1960, when Japan rose from the ashes of war on the road to becoming the second-largest economy in the world through processes partially influenced by the ideas Deming taught:
  1. Better design of products to improve service
  2. Higher level of uniform product quality
  3. Improvement of product testing in the workplace and in research centers
  4. Greater sales through side [global] markets
Deming is best known in the United States for his 14 Points (Out of the Crisis, by W. Edwards Deming, preface) and his system of thought he called the "System of Profound Knowledge". The system includes four components or "lenses" through which to view the world simultaneously:
  1. Appreciating a system
  2. Understanding variation
  3. Psychology
  4. Epistemology, the theory of knowledge
Deming made a significant contribution to Japan's reputation for innovative, high-quality products, and for its economic power. He is regarded as having had more impact on Japanese manufacturing and business than any other individual not of Japanese heritage. Despite being honored in Japan in 1951 with the establishment of the Deming Prize, he was only just beginning to win widespread recognition in the U.S. at the time of his death in 1993. President Ronald Reagan awarded him the National Medal of Technology in 1987. The following year, the National Academy of Sciences gave Deming the Distinguished Career in Science award.

Overview

Deming received a BS in electrical engineering from the University of Wyoming at Laramie (1921), an MS from the University of Colorado (1925), and a PhD from Yale University (1928). Both graduate degrees were in mathematics and physics. He had an internship at Western Electric's Hawthorne Works in Cicero, Illinois, while studying at Yale. He later worked at the U.S. Department of Agriculture and the Census Department. While working under Gen. Douglas MacArthur as a census consultant to the Japanese government, he was asked to teach a short seminar on statistical process control (SPC) methods to members of the Radio Corps, at the invitation of Homer Sarasohn. During this visit, he was contacted by the Japanese Union of Scientists and Engineers (JUSE) to talk directly to Japanese business leaders, not about SPC, but about his theories of management, returning to Japan for many years to consult. Later, he became a professor at New York University, while engaged as an independent consultant in Washington, DC.

Deming was the author of Quality, Productivity, and Competitive Position, Out of the Crisis (1982–1986), and The New Economics for Industry, Government, Education (1993), as well as books on statistics and sampling. Deming played the flute and drums and composed music throughout his life, including sacred choral compositions and an arrangement of The Star-Spangled Banner.

In 1993, he founded the W. Edwards Deming Institute in Washington, DC, where the Deming Collection at the U.S. Library of Congress includes an extensive audiotape and videotape archive. The aim of the institute is to "Enrich society through the Deming philosophy."

Deming's teachings and philosophy are clearly illustrated by examining the results they produced after they were adopted by Japanese industry, as the following example shows. Ford Motor Company was simultaneously manufacturing a car model with transmissions made in Japan and in the United States. Soon after the model was on the market, Ford customers were requesting the version with the Japanese transmission over the US-made transmission, and they were willing to wait for it. Since both transmissions were made to the same specifications, Ford engineers could not understand the customer preference, so they decided to take apart the two transmissions. The American-made parts were all within the specified tolerance limits. The Japanese parts, however, were virtually identical to each other and much closer to the nominal values: for example, if a part was supposed to be one foot long, plus or minus 1/8 of an inch, the Japanese parts were all within 1/16 of an inch of the nominal length. This reduced variation made the cars with Japanese transmissions run more smoothly, and customers experienced fewer problems.
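
To make the contrast concrete, the sketch below uses invented measurements (not Ford's actual data) to show how two batches of parts can both meet the same tolerance while differing greatly in their spread around the nominal value.

  import statistics

  # Invented measurements: two batches of parts, both within the same
  # +/- 1/8 inch tolerance, but with very different spread around the
  # 12-inch nominal length.
  nominal = 12.0
  tolerance = 0.125

  us_parts = [11.90, 12.11, 11.88, 12.12, 11.95, 12.09, 11.89, 12.10]
  japanese_parts = [11.99, 12.01, 12.00, 11.98, 12.02, 12.00, 11.99, 12.01]

  for name, parts in [("US-made", us_parts), ("Japanese", japanese_parts)]:
      within_spec = all(abs(x - nominal) <= tolerance for x in parts)
      print(f"{name}: within spec = {within_spec}, "
            f"stdev = {statistics.stdev(parts):.3f} in")
  # Both batches pass inspection, but the Japanese batch varies far less
  # around the nominal value, which is the point of the example above.
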

Family

Born in Sioux City, Iowa, William Edwards Deming was raised in Polk City, Iowa, on his grandfather Henry Coffin Edwards's chicken farm, and later on a 40-acre (16 ha) farm purchased by his father in Powell, Wyoming. He was the son of William Albert Deming and Pluma Irene Edwards. His parents were well educated and emphasized the importance of education to their children. Pluma had studied in San Francisco and was a musician. William Albert had studied mathematics and law.

He was a direct descendant of John Deming (1615–1705), an early Puritan settler and original patentee of the Connecticut Colony, and Honor Treat, the daughter of Richard Treat (1584–1669), an early New England settler, deputy to the Connecticut Legislature and also a patentee of the Royal Charter of Connecticut, 1662.

Deming married Agnes Bell in 1922. She died in 1930, a little more than a year after they had adopted a daughter, Dorothy (d. 1984). Deming made use of various private homes to help raise the infant, and following his marriage in 1932 to Lola Elizabeth Shupe (d. 1986), with whom he coauthored several papers, he brought Dorothy back home to stay. He and Lola had two more children, Diana (b. 1934) and Linda (b. 1943). Deming was survived by Diana and Linda, along with seven grandchildren.

Early life and work

Deming was a professor of statistics at New York University's graduate school of business administration (1946–1993), and taught at Columbia University's graduate school of business (1988–1993). He also was a consultant for private business.

In 1927, Deming was introduced to Walter A. Shewhart of the Bell Telephone Laboratories by C.H. Kunsman of the United States Department of Agriculture (USDA). Deming found great inspiration in the work of Shewhart, the originator of the concepts of statistical control of processes and the related technical tool of the control chart, as Deming began to move toward the application of statistical methods to industrial production and management. Shewhart's idea of common and special causes of variation led directly to Deming's theory of management. Deming saw that these ideas could be applied not only to manufacturing processes, but also to the processes by which enterprises are led and managed. This key insight made possible his enormous influence on the economics of the industrialized world after 1950.

In 1936, he studied under Sir Ronald Fisher and Jerzy Neyman at University College, London, England.

Deming edited a series of lectures delivered by Shewhart at USDA, Statistical Method from the Viewpoint of Quality Control, into a book published in 1939. One reason he learned so much from Shewhart, Deming remarked in a videotaped interview, was that, while brilliant, Shewhart had an "uncanny ability to make things difficult." Deming thus spent a great deal of time both copying Shewhart's ideas and devising ways to present them with his own twist.

Deming developed the sampling techniques that were used for the first time during the 1940 U.S. Census, formulating the Deming-Stephan algorithm for iterative proportional fitting in the process. During World War II, Deming was a member of the five-man Emergency Technical Committee. He worked with H.F. Dodge, A.G. Ashcroft, Leslie E. Simon, R.E. Wareham, and John Gaillard in the compilation of the American War Standards (American Standards Association Z1.1–3 published in 1942) and taught SPC techniques to workers engaged in wartime production. Statistical methods were widely applied during World War II, but faded into disuse a few years later in the face of huge overseas demand for American mass-produced products.
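
For readers curious about the iterative proportional fitting mentioned above, the following is a minimal sketch of the procedure: a seed table is rescaled until its row and column totals match target marginals. The seed values and targets are invented purely for illustration and are not drawn from the 1940 Census work.

  import numpy as np

  # Iterative proportional fitting (the Deming-Stephan procedure), sketched:
  # alternately rescale rows and columns of a seed table so its margins
  # converge to the target row and column totals.
  def ipf(seed, row_targets, col_targets, tol=1e-9, max_iter=1000):
      table = seed.astype(float).copy()
      for _ in range(max_iter):
          table *= (row_targets / table.sum(axis=1))[:, None]   # fit row totals
          table *= (col_targets / table.sum(axis=0))[None, :]   # fit column totals
          if np.allclose(table.sum(axis=1), row_targets, atol=tol):
              break
      return table

  seed = np.array([[1.0, 2.0], [3.0, 4.0]])
  fitted = ipf(seed, np.array([40.0, 60.0]), np.array([35.0, 65.0]))
  print(fitted)                                   # fitted cell estimates
  print(fitted.sum(axis=1), fitted.sum(axis=0))   # ~[40, 60] and ~[35, 65]
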

Work in Japan

In 1947, Deming was involved in early planning for the 1951 Japanese Census. The Allied powers were occupying Japan, and he was asked by the United States Department of the Army to assist with the census. He was brought over at the behest of General Douglas MacArthur, who grew frustrated at being unable to complete so much as a phone call without the line going dead due to Japan's shattered postwar economy. While in Japan, his expertise in quality-control techniques, combined with his involvement in Japanese society, brought him an invitation from the Japanese Union of Scientists and Engineers (JUSE).

JUSE members had studied Shewhart's techniques, and as part of Japan's reconstruction efforts, they sought an expert to teach statistical control. From June to August 1950, Deming trained hundreds of engineers, managers, and scholars in SPC and concepts of quality. He also conducted at least one session for top management, including top Japanese industrialists such as Akio Morita, the cofounder of Sony Corp. Deming's message to Japan's chief executives was that improving quality would reduce expenses, while increasing productivity and market share. Perhaps the best known of these management lectures was delivered at the Mt. Hakone Conference Center in August 1950.

A number of Japanese manufacturers applied his techniques widely and experienced heretofore unheard-of levels of quality and productivity. The improved quality combined with the lowered cost created new international demand for Japanese products.

Deming declined to receive royalties from the transcripts of his 1950 lectures, so JUSE's board of directors established the Deming Prize (December 1950) to repay him for his friendship and kindness. Within Japan, the Deming Prize continues to exert considerable influence on the disciplines of quality control and quality management.

Honors

In 1960, the Prime Minister of Japan (Nobusuke Kishi), acting on behalf of Emperor Hirohito, awarded Deming Japan's Order of the Sacred Treasure, Second Class. The citation on the medal recognizes Deming's contributions to Japan's industrial rebirth and its worldwide success. The first section of the meritorious service record describes his work in Japan; the second half lists his service to private enterprise through the introduction of epochal ideas, such as quality control and market survey techniques.

Among his many honors, an exhibit memorializing Deming's contributions and his famous Red Bead Experiment is on display outside the board room of the American Society for Quality.

He was inducted into the Automotive Hall of Fame in 1991.

Later work in the U.S.

David Salsburg wrote:
"He was known for his kindness to and consideration for those he worked with, for his robust, if very subtle, humor, and for his interest in music. He sang in a choir, played drums and flute, and published several original pieces of sacred music."
Later, from his home in Washington, DC, Deming continued running his own consultancy business in the United States, largely unknown and unrecognized in his country of origin and work. In 1980, he was featured prominently in an NBC TV documentary titled If Japan can... Why can't we? about the increasing industrial competition the United States was facing from Japan. As a result of the broadcast, demand for his services increased dramatically, and Deming continued consulting for industry throughout the world until his death at the age of 93.

Ford Motor Company was one of the first American corporations to seek help from Deming. In 1981, Ford's sales were falling. Between 1979 and 1982, Ford had incurred $3 billion in losses. Ford's newly appointed Corporate Quality Director, Larry Moore, was charged with recruiting Deming to help jump-start a quality movement at Ford. Deming questioned the company's culture and the way its managers operated. To Ford's surprise, Deming talked not about quality, but about management. He told Ford that management actions were responsible for 85% of all problems in developing better cars. In 1986, Ford came out with a profitable line of cars, the Taurus-Sable line. In a letter to Autoweek, Donald Petersen, then Ford chairman, said, "We are moving toward building a quality culture at Ford and the many changes that have been taking place here have their roots directly in Deming's teachings." By 1986, Ford had become the most profitable American auto company. For the first time since the 1920s, its earnings had exceeded those of archrival General Motors (GM). Ford had come to lead the American automobile industry in improvements. Ford's following years' earnings confirmed that its success was not a fluke, for its earnings continued to exceed GM and Chrysler's.

In 1982, Deming's book Quality, Productivity, and Competitive Position was published by the MIT Center for Advanced Engineering; it was renamed Out of the Crisis in 1986. In it, he offers a theory of management based on his famous 14 Points for Management. He argues that management's failure to plan for the future brings about loss of market, which brings about loss of jobs. Management must be judged not only by the quarterly dividend, but also by innovative plans to stay in business, protect investment, ensure future dividends, and provide more jobs through improved products and services. "Long-term commitment to new learning and new philosophy is required of any management that seeks transformation. The timid and the fainthearted, and the people that expect quick results, are doomed to disappointment."

In 1982, Deming, along with Paul Hertz and Howard Gitlow of the University of Miami Graduate School of Business in Coral Gables, founded the W. Edwards Deming Institute for the Improvement of Productivity and Quality. In 1983, the institute trained consultants of Ernst and Whinney Management Consultants in the Deming teachings. E&W then founded its Deming Quality Consulting Practice which is still active today.

His methods and workshops regarding Total Quality Management have had broad influence. For example, they were used to define how the U.S. Environmental Protection Agency's Underground Storage Tanks program would work.

Over the course of his career, Deming received dozens of academic awards, including another, honorary, PhD from Oregon State University. In 1987, he was awarded the National Medal of Technology: "For his forceful promotion of statistical methodology, for his contributions to sampling theory, and for his advocacy to corporations and nations of a general management philosophy that has resulted in improved product quality." In 1988, he received the Distinguished Career in Science award from the National Academy of Sciences.

Deming and his staff continued to advise businesses large and small. From 1985 through 1989, Deming served as a consultant to Vernay Laboratories, a rubber manufacturing firm in Yellow Springs, Ohio, with fewer than 1,000 employees. He held several week-long seminars for employees and suppliers of the small company where his infamous example "Workers on the Red Beads" spurred several major changes in Vernay's manufacturing processes.

Deming joined the Graduate School of Business at Columbia University in 1988. In 1990, during his last year, he founded the W. Edwards Deming Center for Quality, Productivity, and Competitiveness at Columbia Business School to promote operational excellence in business through the development of research, best practices and strategic planning.

In 1990, Marshall Industries (NYSE:MI, 1984–1999) CEO Robert Rodin trained with the then 90-year-old Deming and his colleague Nida Backaitis. Marshall Industries' dramatic transformation and growth from $400 million to $1.8 billion in sales was chronicled in Deming's last book The New Economics, a Harvard Case Study, and Rodin's book, Free, Perfect and Now.

In 1993, Deming published his final book, The New Economics for Industry, Government, Education, which included the System of Profound Knowledge and the 14 Points for Management. It also contained educational concepts involving group-based teaching without grades, as well as management without individual merit or performance reviews.

Deming died in his sleep at the age of 93 in his Washington home from cancer on December 20, 1993. When asked, toward the end of his life, how he would wish to be remembered in the U.S., he replied, "I probably won't even be remembered." After a pause, he added, "Well, maybe ... as someone who spent his life trying to keep America from committing suicide."

Deming philosophy synopsis

The philosophy of W. Edwards Deming has been summarized as follows:
"Dr. W. Edwards Deming taught that by adopting appropriate principles of management, organizations can increase quality and simultaneously reduce costs (by reducing waste, rework, staff attrition and litigation while increasing customer loyalty). The key is to practice continual improvement and think of manufacturing as a system, not as bits and pieces."
In the 1970s, Deming's philosophy was summarized by some of his Japanese proponents with the following "a"-versus-"b" comparison:
(a) When people and organizations focus primarily on quality, defined by the following ratio,
Quality = (Results of work efforts) / (Total costs)
quality tends to increase and costs fall over time;
(b) However, when people and organizations focus primarily on costs, costs tend to rise and quality declines over time.

The Deming System of Profound Knowledge

"The prevailing style of management must undergo transformation. A system cannot understand itself. The transformation requires a view from outside. The aim of this chapter is to provide an outside view—a lens—that I call a system of profound knowledge. It provides a map of theory by which to understand the organizations that we work in.

"The first step is transformation of the individual. This transformation is discontinuous. It comes from understanding of the system of profound knowledge. The individual, transformed, will perceive new meaning to his life, to events, to numbers, to interactions between people."

"Once the individual understands the system of profound knowledge, he will apply its principles in every kind of relationship with other people. He will have a basis for judgment of his own decisions and for transformation of the organizations that he belongs to."

Deming advocated that all managers need to have what he called a System of Profound Knowledge, consisting of four parts:
  1. Appreciation of a system: understanding the overall processes involving suppliers, producers, and customers (or recipients) of goods and services (explained below);
  2. Knowledge of variation: the range and causes of variation in quality, and use of statistical sampling in measurements;
  3. Theory of knowledge: the concepts explaining knowledge and the limits of what can be known;
  4. Knowledge of psychology: concepts of human nature.
He explained, "One need not be eminent in any part nor in all four parts in order to understand it and to apply it. The 14 points for management in industry, education, and government follow naturally as application of this outside knowledge, for transformation from the present style of Western management to one of optimization."

"The various segments of the system of profound knowledge proposed here cannot be separated. They interact with each other. Thus, knowledge of psychology is incomplete without knowledge of variation.

"A manager of people needs to understand that all people are different. This is not ranking people. He needs to understand that the performance of anyone is governed largely by the system that he works in, the responsibility of management. A psychologist that possesses even a crude understanding of variation as will be learned in the experiment with the Red Beads (Ch. 7) could no longer participate in refinement of a plan for ranking people."

The Appreciation of a system involves understanding how interactions (i.e., feedback) between the elements of a system can result in internal restrictions that force the system to behave as a single organism that automatically seeks a steady state. It is this steady state that determines the output of the system rather than the individual elements. Thus it is the structure of the organization rather than the employees, alone, which holds the key to improving the quality of output.

The Knowledge of variation involves understanding that everything measured consists both of "normal" variation due to the flexibility of the system and of "special causes" that create defects. Quality involves recognizing the difference in order to eliminate "special causes" while controlling normal variation. Deming taught that making changes in response to "normal" variation would only make the system perform worse. Understanding variation includes the mathematical expectation that normal variation will fall within a span of six standard deviations centered on the mean (that is, within three standard deviations on either side).
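
As a simplified illustration of this distinction (a sketch in the spirit of Shewhart-style control limits, not Deming's own procedure, with all measurements invented), the following sets limits at three standard deviations on either side of a baseline mean and flags later points outside them as likely special-cause variation.

  import statistics

  # Baseline of in-control measurements used to estimate the limits,
  # then a few later observations to classify. All numbers are invented.
  baseline = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02, 9.96, 10.01]
  new_points = [10.00, 9.98, 10.41, 10.02]

  mean = statistics.mean(baseline)
  sigma = statistics.stdev(baseline)
  lower, upper = mean - 3 * sigma, mean + 3 * sigma

  for i, x in enumerate(new_points):
      if lower <= x <= upper:
          # Common-cause variation: reacting to each such point ("tampering")
          # tends to make the process worse.
          print(f"point {i}: {x:.2f} within [{lower:.3f}, {upper:.3f}]")
      else:
          # Outside the limits: evidence of a special cause worth investigating.
          print(f"point {i}: {x:.2f} OUTSIDE [{lower:.3f}, {upper:.3f}]")
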

The System of Profound Knowledge is the basis for application of Deming's famous 14 Points for Management, described below.

Key principles

Deming offered 14 key principles to managers for transforming business effectiveness. The points were first presented in his book Out of the Crisis. (p. 23–24) Although Deming does not use the term in his book, it is credited with launching the Total Quality Management movement.
  1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive, to stay in business and to provide jobs;
  2. Adopt the new philosophy. We are in a new economic age. Western management must awaken to the challenge, must learn their responsibilities, and take on leadership for change;
  3. Cease dependence on inspection to achieve quality. Eliminate the need for massive inspection by building quality into the product in the first place;
  4. End the practice of awarding business on the basis of a price tag. Instead, minimize total cost. Move towards a single supplier for any one item, on a long-term relationship of loyalty and trust;
  5. Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs;
  6. Institute training on the job;
  7. Institute leadership (see Point 12 and Ch. 8 of Out of the Crisis). The aim of supervision should be to help people and machines and gadgets do a better job. Supervision of management is in need of overhaul, as well as supervision of production workers;
  8. Drive out fear, so that everyone may work effectively for the company;
  9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and usage that may be encountered with the product or service;
  10. Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the work force;
  11. a. Eliminate work standards (quotas) on the factory floor. Substitute with leadership;
      b. Eliminate management by objective. Eliminate management by numbers and numerical goals. Instead substitute with leadership;
  12. a. Remove barriers that rob the hourly worker of his right to pride of workmanship. The responsibility of supervisors must be changed from sheer numbers to quality;
      b. Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objectives;
  13. Institute a vigorous program of education and self-improvement;
  14. Put everybody in the company to work to accomplish the transformation. The transformation is everybody's job.
"Massive training is required to instill the courage to break with tradition. Every activity and every job is a part of the process."

PDCA myth

It is a common myth to credit Plan-Do-Check-Act (PDCA) to Deming. Deming referred to the PDCA cycle as a "corruption." Deming worked from the Shewhart cycle and over time eventually developed the Plan-Do-Study-Act (PDSA) cycle, which has the idea of deductive and inductive learning built into the learning and improvement cycle. Deming finally published the PDSA cycle in 1993, in The New Economics on p. 132. Deming has added to the myth that he taught the Japanese the PDSA cycle with this quote on p. 247, "The PDSA Cycle originated in my teaching in Japan in 1950. It appeared in the booklet Elementary Principles of the Statistical Control of Quality (JUSE, 1950: out of print)."

Seven Deadly Diseases

The "Seven Deadly Diseases" include:
  1. Lack of constancy of purpose
  2. Emphasis on short-term profits
  3. Evaluation by performance, merit rating, or annual review of performance
  4. Mobility of management
  5. Running a company on visible figures alone
  6. Excessive medical costs
  7. Excessive costs of warranty, fueled by lawyers who work for contingency fees
"A Lesser Category of Obstacles" includes:
  1. Neglecting long-range planning
  2. Relying on technology to solve problems
  3. Seeking examples to follow rather than developing solutions
  4. Excuses, such as "our problems are different"
  5. The mistaken belief that management skills can be taught in classes
  6. Reliance on quality control departments rather than management, supervisors, managers of purchasing, and production workers
  7. Placing blame on workforces, who are responsible for only 15% of mistakes, while the system designed by management is responsible for 85% of the unintended consequences
  8. Relying on quality inspection rather than improving product quality
Deming's advocacy of the Plan-Do-Study-Act cycle, his 14 Points and Seven Deadly Diseases have had tremendous influence outside manufacturing and have been applied in other arenas, such as in the relatively new field of sales process engineering.

Works


  • Deming, W. Edwards (1964) [1943]. Statistical Adjustment of Data. Dover. ISBN 0-486-64685-8. LCCN 64-24416.
  • Deming, W. Edwards (1966) [1950]. Some Theory of Sampling. Dover. ISBN 0-486-64684-X. LCCN 66-30538.
Introduction to entropy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Introduct...