
Saturday, January 18, 2020

Statistical process control

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Statistical_process_control

Statistical process control (SPC) is a method of quality control which employs statistical methods to monitor and control a process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste (rework or scrap). SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. An example of a process where SPC is applied is manufacturing lines.

SPC must be practiced in two phases: the first is the initial establishment of the process, and the second is its regular production use. In the second phase, a decision must be made about the period to be examined, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and the wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures).

An advantage of SPC over other methods of quality control, such as "inspection", is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred.

In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped.

History

SPC was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson also in 1924 in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II.

W. Edwards Deming invited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture, and served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939) which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, the American Society for Quality Control, which elected Edwards as its first president. Deming traveled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry.

'Common' and 'special' sources of variation

Shewhart read the new statistical theories coming out of Britain, especially the work of William Sealy Gosset, Karl Pearson, and Ronald Fisher. However, he understood that data from physical processes seldom produced a normal distribution curve (that is, a Gaussian distribution or 'bell curve'). He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example, Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as being in (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described as not in control.

Application to non-manufacturing processes

In 1988, the Software Engineering Institute suggested that SPC could be applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept.

The notion that SPC is a useful tool when applied to non-repetitive, knowledge-intensive processes such as research and development or systems engineering has encountered skepticism and remains controversial.

In his seminal article No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software result in inherent and essential variation that cannot be removed. This implies that SPC is less effective in the domain of software development than in, e.g., manufacturing.

Variation in manufacturing

In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass-manufacturing, traditionally, the quality of a finished article is ensured by post-manufacturing inspection of the product. Each article (or a sample of articles from a production lot) may be accepted or rejected according to how well it meets its design specifications. In contrast, SPC uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article. Any source of variation at any point in time in a process will fall into one of two classes.
(1) Common causes
'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. Causes of this type collectively produce a statistically stable and repeatable distribution over time.
(2) Special causes
'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable.
Most processes have many sources of variation; most of them are minor and may be ignored. If the dominant assignable sources of variation are detected, they can potentially be identified and removed. Once they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits, at least until another assignable source of variation occurs.

For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights.
If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced). 
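As a rough illustration (not from the article), control limits for such a filling line could be computed with an individuals (I-MR) chart. The weights below are hypothetical, and 2.66 is the standard I-MR chart constant (3/d2 with d2 = 1.128):

# A minimal sketch of individuals (I-MR) control limits for the cereal-box example.
# The sample weights are hypothetical.
import numpy as np

weights = np.array([501.2, 499.8, 500.5, 498.9, 500.1,
                    501.0, 499.5, 500.7, 499.9, 500.3])  # grams, illustrative samples

x_bar = weights.mean()                        # process centre line
moving_range = np.abs(np.diff(weights))       # |x_i - x_(i-1)|
mr_bar = moving_range.mean()

ucl = x_bar + 2.66 * mr_bar                   # upper control limit
lcl = x_bar - 2.66 * mr_bar                   # lower control limit

out_of_control = (weights > ucl) | (weights < lcl)
print(f"centre = {x_bar:.2f} g, UCL = {ucl:.2f} g, LCL = {lcl:.2f} g")
print("points signalling special-cause variation:", np.where(out_of_control)[0])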

Application of SPC

The application of SPC involves three main phases of activity:
  1. Understanding the process and the specification limits.
  2. Eliminating assignable (special) sources of variation, so that the process is stable.
  3. Monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation.

Control charts

The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time. 

Stable process

When the process does not trigger any of the control chart "detection rules", it is said to be "stable". A process capability analysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future.

A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index.
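As a sketch of what a capability analysis might look like in practice (the data and specification limits below are hypothetical, and the Cp/Cpk formulas are the conventional ones rather than anything specific to this article):

# A minimal capability sketch, assuming approximately normal data from a stable process.
import numpy as np

def capability(data, lsl, usl):
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability allowing for centring
    return cp, cpk

weights = np.random.normal(500.2, 0.8, size=200)     # hypothetical stable process data
cp, cpk = capability(weights, lsl=498.0, usl=502.0)  # hypothetical spec limits
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")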

Excessive variations

When the process triggers any of the control chart "detection rules" (or, alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation. The tools used in these extra activities include: Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminate a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs.
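As a simple illustration of one of these tools, a Pareto analysis amounts to ranking the candidate sources of special-cause variation by frequency and accumulating their share; the categories and counts below are invented for illustration only:

# A minimal Pareto-analysis sketch with made-up defect categories and counts.
defect_counts = {"mis-feed": 42, "worn cam": 25, "operator error": 12,
                 "material variation": 8, "other": 5}

total = sum(defect_counts.values())
cumulative = 0.0
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:20s} {count:4d}  cumulative {cumulative:5.1f}%")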

Process stability metrics

When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described in Ramirez and Runger. [11] They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups.
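The sketch below shows, under simplifying assumptions, how such metrics might be computed: the Stability Ratio is taken here as long-term (overall) variance over short-term (within-subgroup) variance, and the Instability Ratio checks only the first Western Electric rule (a subgroup mean beyond 3 sigma). Refer to Ramirez and Runger for the exact definitions; the data below are simulated.

# A minimal stability-metrics sketch on simulated subgrouped data.
import numpy as np

rng = np.random.default_rng(0)
subgroups = rng.normal(500, 1.0, size=(30, 5))     # 30 subgroups of 5 measurements

within_var = subgroups.var(axis=1, ddof=1).mean()  # short-term (within-subgroup) variance
overall_var = subgroups.ravel().var(ddof=1)        # long-term (pooled) variance
stability_ratio = overall_var / within_var

centre = subgroups.ravel().mean()
means = subgroups.mean(axis=1)
sigma_xbar = np.sqrt(within_var) / np.sqrt(subgroups.shape[1])
violations = np.abs(means - centre) > 3 * sigma_xbar   # Western Electric rule 1 only
instability_ratio = violations.sum() / len(means)

print(f"Stability Ratio   = {stability_ratio:.2f}")
print(f"Instability Ratio = {instability_ratio:.2f}")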

Mathematics of control charts

Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example,
derived value = last value + average absolute difference between the last N numbers.
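Read literally, that rule could be implemented as follows; the window size N and the readings are illustrative, and the exact rule set of a real digital control chart would differ by implementation:

# A direct, minimal implementation of the rule quoted above.
def derived_value(history, n=5):
    """Last value plus the average absolute difference between the last n numbers."""
    window = history[-n:]
    diffs = [abs(b - a) for a, b in zip(window, window[1:])]
    return window[-1] + sum(diffs) / len(diffs)

readings = [500.1, 500.4, 499.8, 500.6, 500.2, 500.9]   # illustrative measurements
print(derived_value(readings))   # would signal a correction if it drifts past a limit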

Trump may not be removed by the Senate — but he’s still terrified of his trial. Here’s why.


Donald Trump is scared. The Senate trial following his impeachment for a blackmail and campaign cheating scheme starts next week, and it’s driving him to distraction. He was supposed to host a lame event at the White House on Thursday to bolster fake concerns that white evangelicals are being oppressed, but blew off pandering to his strongest supporters for an hour, likely because he couldn’t pry himself away from news coverage of the impeachment trial’s kickoff. After ending the event swiftly, Trump then tweeted angrily, “I JUST GOT IMPEACHED FOR MAKING A PERFECT PHONE CALL!”

(As with most things the president says, this was untrue — he was impeached weeks ago, in December.)

Trump’s cold sweats are significant, because everyone who has been following this case knows that the Senate will acquit him. Not because he’s innocent — no one who has actually consulted the evidence is foolish enough to believe that — but because Senate Majority Leader Mitch McConnell and the Republicans who control the Senate decided long ago that they would cover up for their shamelessly corrupt president no matter what he does. With such an assured outcome, Trump’s fears seem overblown and silly, even for someone crippled by sociopathic narcissism and its accompanying paranoia.

But it’s also true that high-profile travesties of justice, such as the one Senate Republicans are currently preparing to commit, can often provoke major political backlash. Getting a jury to acquit the obviously guilty can, as history shows, cause a public that’s already outraged about the crime to get even more furious. That, I suspect, is what Trump is sweating.

What the Senate is about to do is akin to the practice of jury nullification. That's where a jury decides either that they don't think the crime should be a crime at all, or that they believe people like the defendant should be above the law, and so they refuse to convict no matter how guilty the defendant is. This is something that, in theory and sometimes in practice, can be used for good, as when a jury refuses to throw someone in prison for a low-level drug offense, or refuses to enforce a law restricting free speech. But historically in the U.S., jury nullification has tended to be used to uphold injustice and reinforce racist or sexist systems of power.

In other words, exactly what Senate Republicans are planning to do. That becomes more obvious every day as more evidence of Trump’s guilt comes out, from the revelations by Rudy Giuliani’s former associate Lev Parnas to the Government Accountability Office declaring that Trump broke the law by withholding military aid to Ukraine.

The most disturbing and frequent historical examples of jury nullification come from the Jim Crow South, where it was normal for all-white juries to acquit Klansmen and others who committed racist murders — not because they genuinely believed they were innocent, but because they believed it should be legal for white people to murder black people in cold blood.

The most famous of these cases was that of Roy Bryant and J.W. Milam, two white men who murdered a black teenager named Emmett Till in Mississippi in 1955. That the men had committed the crime was not in doubt — they described the murder in great detail to a reporter for Look magazine. But the all-white, all-male jury refused to convict, and didn’t really bother to hide the fact that they did so because they didn’t think white men should be punished for killing black people.

Unfortunately, this problem of white jurors refusing to convict in cases where the victims are black has not gone away. For instance, in the 2012 Florida killing of black teenager Trayvon Martin by George Zimmerman, a nearly all-white jury voted to acquit Zimmerman, even though Martin was apparently just walking home after buying some snacks and Zimmerman had been warned by a 911 operator not to pursue him — and even though Zimmerman’s only basis for suspecting Martin of anything was his race. The one woman of color on the jury has since publicly lamented the process and describes what sounds a lot like bullying from the white women in the room.

The defendants in those cases walked free, but the outrage that followed had political ramifications. Till’s murder helped draw national attention to the evils of the Jim Crow South and helped bolster support for the burgeoning civil rights movement. Martin’s murder, decades later, helped build support for what became known as the Black Lives Matter movement.

Sometimes the backlash to injustice can be earth-shaking, as happened in 1992, when Los Angeles was torn up by riots in the wake of the acquittal by a majority-white jury of four cops who were caught on video severely beating Rodney King, a black motorist they had pulled over for speeding.

These are all racially loaded cases, of course, which sets them apart from Trump’s impeachment over his efforts to cheat in the 2020 election and his cavalier willingness to use government resources to force foreign leaders to help him do so. Trump’s inevitable acquittal in the Senate won’t be quite the gut-punch so many people feel when white men get sprung for committing racist crimes.

Still, the social circumstances of Trump’s upcoming acquittal go straight back to those same forces of white supremacy that have led to so many other travesties of justice in the past. After all, the main reason Senate Republicans are averse to taking what seems to be an easy way out — convicting the obviously guilty Trump and letting his Republican Vice President, Mike Pence, take over — is because they fear crossing the notoriously loyal Trump base, who represent their only possible chance of holding onto the Senate or retaking the House this November.

And the reason that base is so loyal, as with many things in this country, relates to racism. Trump’s base is motivated by what sociologists delicately call “racial resentment,” which is a nice way of saying that these white people see changing demographics in the U.S. and growing challenges to white domination, and they’re angry about it. Furthermore, they see President Trump, a blatant and shameless racist, as their best weapon to fight to preserve a system of white supremacy.

As long as Trump keeps delivering on the racism — which he has done in a myriad of ways — his base doesn’t care what crimes he commits. After all, Trump committed his crime to hang onto power so he can continue to inflict cruel racist policies on our entire nation. In that sense, this case shares a common root with those more explicitly racist acquittals of the past. They’re all part of the long and ugly American tradition of letting white people get away with crime, so long as they do it in the name of white supremacy.

But watching obviously guilty people get away with it can also have a galvanizing political effect, and not just when the crime itself is racially provocative. As the #MeToo movement and the Women’s March demonstrated, Americans have also been roused to outrage when men commit sexual assaults and get away with it. And the ongoing fascination with gangsters who finally get caught after evading justice for years — Al Capone, Whitey Bulger, John Gotti — suggests a real hunger to see bad guys pay for what they do.

That’s what Donald Trump fears: That his acquittal will not be read as an exoneration, but as yet another famous miscarriage of justice that leads to outrage across the nation. Let’s hope his worst fears come true.

Design for manufacturability

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Design_for_manufacturability


Design for manufacturability (also sometimes known as design for manufacturing or DFM) is the general engineering practice of designing products in such a way that they are easy to manufacture. The concept exists in almost all engineering disciplines, but the implementation differs widely depending on the manufacturing technology. DFM describes the process of designing or engineering a product so as to facilitate the manufacturing process and reduce its manufacturing costs. DFM allows potential problems to be fixed in the design phase, which is the least expensive place to address them. Other factors may affect manufacturability, such as the type of raw material, the form of the raw material, dimensional tolerances, and secondary processing such as finishing.

Depending on the type of manufacturing process, there are established DFM guidelines. These guidelines help to precisely define the various tolerances, rules, and common manufacturing checks related to DFM.

While DFM is applicable to the design process, a similar concept called DFSS (Design for Six Sigma) is also practiced in many organizations. 

For printed circuit boards (PCB)

In the PCB design process, DFM leads to a set of design guidelines that attempt to ensure manufacturability. By doing so, probable production problems may be addressed during the design stage.

Ideally, DFM guidelines take into account the processes and capabilities of the manufacturing industry. Therefore, DFM is constantly evolving.

As manufacturing companies evolve and automate more and more stages of their processes, these processes tend to become cheaper. DFM is usually used to reduce these costs.[1] For example, if a process can be performed automatically by machines (e.g., SMT component placement and soldering), it is likely to be cheaper than the same work done by hand.

For integrated circuits (IC)

Achieving high-yielding designs in state-of-the-art VLSI technology has become an extremely challenging task due to the miniaturization as well as the complexity of leading-edge products. Here, the DFM methodology includes a set of techniques to modify the design of integrated circuits (ICs) in order to make them more manufacturable, i.e., to improve their functional yield, parametric yield, or reliability.

Background

Traditionally, in the pre-nanometer era, DFM consisted of a set of different methodologies trying to enforce some soft (recommended) design rules regarding the shapes and polygons of the physical layout of an integrated circuit. These DFM methodologies worked primarily at the full chip level. Additionally, worst-case simulations at different levels of abstraction were applied to minimize the impact of process variations on performance and other types of parametric yield loss. All these different types of worst-case simulations were essentially based on a base set of worst-case (or corner) SPICE device parameter files that were intended to represent the variability of transistor performance over the full range of variation in a fabrication process. 

Taxonomy of yield loss mechanisms

The most important yield loss models (YLMs) for VLSI ICs can be classified into several categories based on their nature.
  • Functional yield loss is still the dominant factor and is caused by mechanisms such as misprocessing (e.g., equipment-related problems), systematic effects such as printability or planarization problems, and purely random defects.
  • High-performance products may exhibit parametric design marginalities caused by either process fluctuations or environmental factors (such as supply voltage or temperature).
  • The test-related yield losses, which are caused by incorrect testing, can also play a significant role.

Techniques

After understanding the causes of yield loss, the next step is to make the design as resistant as possible. Techniques used for this include:
  • Substituting higher yield cells where permitted by timing, power, and routability.
  • Changing the spacing and width of the interconnect wires, where possible
  • Optimizing the amount of redundancy in internal memories.
  • Substituting fault tolerant (redundant) vias in a design where possible
All of these require a detailed understanding of yield loss mechanisms, since these changes trade off against one another. For example, introducing redundant vias will reduce the chance of via problems, but increase the chance of unwanted shorts. Whether this is a good idea, therefore, depends on the details of the yield loss models and the characteristics of the particular design.
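As a rough illustration of that trade-off, consider a toy probability model (illustrative only, not taken from the article or from any particular yield model): doubling a via lowers the chance of an open at the cost of a small extra chance of a short.

# A toy model of the redundant-via trade-off; all probabilities are made-up placeholders.
p_open_single = 1e-6      # assumed chance a single via fails open
p_short_per_via = 2e-8    # assumed extra chance of an unwanted short per added via

def failure_prob(redundant: bool) -> float:
    p_open = p_open_single ** 2 if redundant else p_open_single   # both vias must fail
    p_short = p_short_per_via if redundant else 0.0
    return p_open + p_short

print("single via :", failure_prob(False))
print("redundant  :", failure_prob(True))   # better only if shorts stay rare enough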

For CNC machining


Objective

The objective is to design for lower cost. Cost is driven by time, so the design must minimize not only the time required to machine the part (remove the material), but also the set-up time of the CNC machine, NC programming, fixturing, and the many other activities that depend on the complexity and size of the part.

Set-Up Time of Operations (Flip of the Part)

Unless a 4th and/or 5th axis is used, a CNC machine can only approach the part from a single direction. One side must be machined at a time (called an operation, or Op). The part must then be flipped from side to side to machine all of the features. The geometry of the features dictates whether the part must be flipped over or not. The more Ops (flips of the part), the more expensive the part, because each incurs substantial set-up and load/unload time.

Each operation (flip of the part) has set-up time, machine time, time to load/unload tools, time to load/unload parts, and time to create the NC program for the operation. If a part has only one operation, parts only have to be loaded and unloaded once; if it has five operations, the load/unload time becomes significant.

The low-hanging fruit is minimizing the number of operations (flips of the part), which can create significant savings. For example, it may take only 2 minutes to machine the face of a small part, but an hour to set the machine up to do it. Or, if there are 5 operations at 1.5 hours of set-up each but only 30 minutes of total machine time, then 7.5 hours is charged for just 30 minutes of machining.

Lastly, the volume (number of parts to machine) plays a critical role in amortizing the set-up time, programming time and other activities into the cost of the part. In the example above, the part in quantities of 10 could cost 7–10X the cost in quantities of 100.

Typically, the law of diminishing returns presents itself at volumes of 100–300 because set-up times, custom tooling and fixturing can be amortized into the noise.
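A small sketch of that amortization arithmetic; the hourly rate, programming time and per-part machining time below are placeholders, and only the structure (one-time costs spread over the lot size) reflects the argument above:

# A rough per-part cost sketch: setup and programming are one-time costs amortized over the lot.
def cost_per_part(n_parts, n_ops, setup_hr_per_op=1.5, machine_hr_per_part=0.5,
                  program_hr=4.0, rate=90.0):
    one_time = (n_ops * setup_hr_per_op + program_hr) * rate   # amortized over the lot
    per_part = machine_hr_per_part * rate                      # recurring machining cost
    return per_part + one_time / n_parts

for qty in (10, 100, 300):
    print(f"qty {qty:4d}: ${cost_per_part(qty, n_ops=5):.2f} per part")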

Material type

The most easily machined types of metals include aluminum, brass, and softer metals. As materials get harder, denser and stronger, such as steel, stainless steel, titanium, and exotic alloys, they become much harder to machine and take much longer, thus being less manufacturable. Most types of plastic are easy to machine, although additions of fiberglass or carbon fiber can reduce the machinability. Plastics that are particularly soft and gummy may have machinability problems of their own. 

Material form

Metals come in all forms. In the case of aluminum as an example, bar stock and plate are the two most common forms from which machined parts are made. The size and shape of the component may determine which form of material must be used. It is common for engineering drawings to specify one form over the other. Bar stock is generally close to 1/2 of the cost of plate on a per pound basis. So although the material form isn't directly related to the geometry of the component, cost can be removed at the design stage by specifying the least expensive form of the material.

Tolerances

A significant contributing factor to the cost of a machined component is the geometric tolerance to which the features must be made. The tighter the tolerance required, the more expensive the component will be to machine. When designing, specify the loosest tolerance that will serve the function of the component. Tolerances must be specified on a feature-by-feature basis. There are creative ways to engineer components with looser tolerances that still perform as well as ones with tighter tolerances.

Design and shape

As machining is a subtractive process, the time to remove the material is a major factor in determining the machining cost. The volume and shape of the material to be removed, as well as how fast the tools can be fed, will determine the machining time. When using milling cutters, the strength and stiffness of the tool, which is determined in part by its length-to-diameter ratio, will play the largest role in determining that speed. The shorter the tool is relative to its diameter, the faster it can be fed through the material. A ratio of 3:1 (L:D) or under is optimal; if that ratio cannot be achieved, other tooling solutions may be needed. For holes, the length-to-diameter ratio of the tools is less critical, but should still be kept under 10:1.
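Those rules of thumb can be captured in a small helper; the 3:1 and 10:1 thresholds are as quoted above, while the function itself is only an illustrative sketch:

# A small check of the length-to-diameter rules of thumb (3:1 for milling cutters, 10:1 for holes).
def ld_ratio_ok(length_mm: float, diameter_mm: float, is_hole: bool = False) -> bool:
    ratio = length_mm / diameter_mm
    limit = 10.0 if is_hole else 3.0
    return ratio <= limit

print(ld_ratio_ok(30, 12))               # 2.5:1 end mill -> True
print(ld_ratio_ok(50, 4, is_hole=True))  # 12.5:1 drill   -> False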

There are many other types of features which are more or less expensive to machine. Generally, chamfers cost less to machine than radii on outer horizontal edges. Radii on edges that do not lie in the same plane require 3D interpolation and incur roughly 10X the cost. Undercuts are more expensive to machine. Features that require smaller tools, regardless of L:D ratio, are more expensive.

Design for Inspection

The concept of Design for Inspection (DFI) should complement and work in collaboration with Design for Manufacturability (DFM) and Design for Assembly (DFA) to reduce product manufacturing cost and increase manufacturing practicality. In some instances this method can cause calendar delays, since it consumes many hours of additional work, for example when design review presentations and documents must be prepared. To address this, it is proposed that instead of periodic inspections, organizations adopt a framework of empowerment, particularly at the product development stage, wherein senior management empowers the project leader to evaluate manufacturing processes and outcomes against expectations for product performance, cost, quality and development time. Experts, however, cite the necessity of DFI because it is crucial to performance and quality control, determining key factors such as product reliability, safety, and life cycles. For an aerospace components company, where inspection is mandatory, the manufacturing process must be suitable for inspection. Here, a mechanism such as an inspectability index is adopted to evaluate design proposals. Another example of DFI is the cumulative count of conforming chart (CCC chart), which is applied in inspection and maintenance planning for systems where different types of inspection and maintenance are available.

Design for additive manufacturing

Additive manufacturing broadens the ability of a designer to optimize the design of a product or part (to save materials for example). Designs tailored for additive manufacturing are sometimes very different from designs tailored for machining or forming manufacturing operations. 

In addition, due to the size constraints of additive manufacturing machines, larger designs are sometimes split into smaller sections with self-assembly features or fastener locators.

Technology CAD

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Technology_CAD
 
Technology computer-aided design (technology CAD or TCAD) is a branch of electronic design automation that models semiconductor fabrication and semiconductor device operation. The modeling of the fabrication is termed Process TCAD, while the modeling of the device operation is termed Device TCAD. Included are the modelling of process steps (such as diffusion and ion implantation), and modelling of the behavior of the electrical devices based on fundamental physics, such as the doping profiles of the devices. TCAD may also include the creation of compact models (such as the well known SPICE transistor models), which try to capture the electrical behavior of such devices but do not generally derive them from the underlying physics. (However, the SPICE simulator itself is usually considered as part of ECAD rather than TCAD.)


Introduction

Technology files and design rules are essential building blocks of the integrated circuit design process. Their accuracy and robustness over process technology, its variability and the operating conditions of the IC — environmental, parasitic interactions and testing, including adverse conditions such as electro-static discharge — are critical in determining performance, yield and reliability. Development of these technology and design rule files involves an iterative process that crosses boundaries of technology and device development, product design and quality assurance. Modeling and simulation play a critical role in support of many aspects of this evolution process. 

The goals of TCAD start from the physical description of integrated circuit devices, considering both the physical configuration and related device properties, and build the links between the broad range of physics and electrical behavior models that support circuit design. Physics-based modeling of devices, in distributed and lumped forms, is an essential part of the IC process development. It seeks to quantify the underlying understanding of the technology and abstract that knowledge to the device design level, including extraction of the key parameters that support circuit design and statistical metrology.

Although the emphasis here is on Metal Oxide Semiconductor (MOS) transistors — the workhorse of the IC industry — it is useful to briefly overview the development history of the modeling tools and methodology that has set the stage for the present state-of-the-art. 

History

The evolution of technology computer-aided design (TCAD) — the synergistic combination of process, device and circuit simulation and modeling tools — finds its roots in bipolar technology, starting in the late 1960s, and the challenges of junction-isolated, double- and triple-diffused transistors. These devices and technology were the basis of the first integrated circuits; nonetheless, many of the scaling issues and underlying physical effects are integral to IC design, even after four decades of IC development. With these early generations of IC, process variability and parametric yield were an issue — a theme that will reemerge as a controlling factor in future IC technology as well.

Process control issues — both for the intrinsic devices and all the associated parasitics — presented formidable challenges and mandated the development of a range of advanced physical models for process and device simulation. Starting in the late 1960s and into the 1970s, the modeling approaches exploited were dominantly one- and two-dimensional simulators. While TCAD in these early generations showed exciting promise in addressing the physics-oriented challenges of bipolar technology, the superior scalability and power consumption of MOS technology revolutionized the IC industry. By the mid-1980s, CMOS became the dominant driver for integrated electronics. Nonetheless, these early TCAD developments set the stage for their growth and broad deployment as an essential toolset that has leveraged technology development through the VLSI and ULSI eras which are now the mainstream. 

IC development for more than a quarter-century has been dominated by MOS technology. In the 1970s and 1980s NMOS was favored owing to speed and area advantages, coupled with technology limitations and concerns related to isolation, parasitic effects and process complexity. During that era of NMOS-dominated LSI and the emergence of VLSI, the fundamental scaling laws of MOS technology were codified and broadly applied. It was also during this period that TCAD reached maturity in terms of realizing robust process modeling (primarily one-dimensional), which then became an integral technology design tool, used universally across the industry. At the same time device simulation, dominantly two-dimensional owing to the nature of MOS devices, became the workhorse of technologists in the design and scaling of devices. The transition from NMOS to CMOS technology resulted in the necessity of tightly coupled and fully 2D simulators for process and device simulation. This third generation of TCAD tools became critical to address the full complexity of twin-well CMOS technology (see Figure 3a), including issues of design rules and parasitic effects such as latchup. An abbreviated but prospective view of this period, through the mid-1980s, can be given from the point of view of how TCAD tools were used in the design process.

Modern TCAD

Today the requirements for and use of TCAD cross-cut a very broad landscape of design automation issues, including many fundamental physical limits. At the core are still a host of process and device modeling challenges that support intrinsic device scaling and parasitic extraction. These applications include technology and design rule development, extraction of compact models and, more generally, design for manufacturability (DFM). The dominance of interconnects for giga-scale integration (transistor counts on the order of a billion and clocking frequencies on the order of 10 gigahertz) has mandated the development of tools and methodologies that embrace patterning by electro-magnetic simulations — both for optical patterns and electronic and optical interconnect performance modeling — as well as circuit-level modeling. This broad range of issues at the device and interconnect levels, including links to underlying patterning and processing technologies, is summarized in Figure 1 and provides a conceptual framework for the discussion that now follows.

Figure 1: Hierarchy of technology CAD tools building from the process level to circuits. Left side icons show typical manufacturing issues; right side icons reflect MOS scaling results based on TCAD (CRC Electronic Design Automation for IC Handbook, Chapter 25)
 
Figure 1 depicts a hierarchy of process, device and circuit levels of simulation tools. On each side of the boxes indicating modeling level are icons that schematically depict representative applications for TCAD. The left side gives emphasis to Design For Manufacturing (DFM) issues such as: shallow-trench isolation (STI), extra features required for phase-shift masking (PSM) and challenges for multi-level interconnects that include processing issues of chemical-mechanical planarization (CMP), and the need to consider electro-magnetic effects using electromagnetic field solvers. The right side icons show the more traditional hierarchy of expected TCAD results and applications: complete process simulations of the intrinsic devices, predictions of drive current scaling and extraction of technology files for the complete set of devices and parasitics.

Figure 2 again looks at TCAD capabilities but this time more in the context of design flow information and how this relates to the physical layers and modeling of the electronic design automation (EDA) world. Here the simulation levels of process and device modeling are considered as integral capabilities (within TCAD) that together provide the "mapping" from mask-level information to the functional capabilities needed at the EDA level such as compact models ("technology files") and even higher-level behavioral models. Also shown is the extraction and electrical rule checking (ERC); this indicates that many of the details that to date have been embedded in analytical formulations, may in fact also be linked to the deeper TCAD level in order to support the growing complexity of technology scaling. 

Providers

Current major suppliers of TCAD tools include Synopsys, Silvaco, Crosslight, Cogenda Software, Global TCAD Solutions and Tiberlab. The open source GSS, Archimedes, Aeneas, NanoTCAD ViDES, DEVSIM, and GENIUS have some of the capabilities of the commercial products.

Friday, January 17, 2020

Bipolar junction transistor

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Bipolar_junction_transistor
 
Typical individual BJT packages. From top to bottom: TO-3, TO-126, TO-92, SOT-23
 
A bipolar junction transistor (bipolar transistor or BJT) is a type of transistor that uses both electrons and holes as charge carriers. 

Unipolar transistors, such as field-effect transistors, use only one kind of charge carrier. BJTs use two junctions between two semiconductor types, n-type and p-type. 

BJTs are manufactured in two types: NPN and PNP, and are available as individual components, or fabricated in integrated circuits, often in large numbers. 

Usage

BJTs can be used as amplifiers or switches. This ability gives them many applications in electronic equipment such as computers, televisions, mobile phones, audio amplifiers, industrial control, and radio transmitters. 

Current direction conventions

By convention, the direction of current on diagrams is shown as the direction that a positive charge would move. This is called conventional current. However, current in many metal conductors is due to the flow of electrons. Because electrons carry a negative charge, they move in the direction opposite to conventional current. On the other hand, inside a bipolar transistor, currents can be composed of both positively charged holes and negatively charged electrons. In this article, current arrows are shown in the conventional direction, but labels for the movement of holes and electrons show their actual direction inside the transistor. The arrow on the symbol for bipolar transistors indicates the PN junction between base and emitter and points in the direction in which conventional current travels. 

Function

BJTs are available in two types, or polarities, known as PNP and NPN based on the doping types of the three main terminal regions. An NPN transistor comprises two semiconductor junctions that share a thin p-doped region, and a PNP transistor comprises two semiconductor junctions that share a thin n-doped region. 

NPN BJT with forward-biased E–B junction and reverse-biased B–C junction
 
Charge flow in a BJT is due to diffusion of charge carriers across a junction between two regions of different charge concentrations. The regions of a BJT are called emitter, base, and collector. A discrete transistor has three leads for connection to these regions. Typically, the emitter region is heavily doped compared to the other two layers, and the collector is doped much lighter than the base (collector doping is typically ten times lighter than base doping). By design, most of the BJT collector current is due to the flow of charge carriers (electrons or holes) injected from a heavily doped emitter into the base where they are minority carriers that diffuse toward the collector, and so BJTs are classified as minority-carrier devices. 

In typical operation of an NPN device, the base–emitter junction is forward-biased, which means that the p-doped side of the junction is at a more positive potential than the n-doped side, and the base–collector junction is reverse-biased. When forward bias is applied to the base–emitter junction, the equilibrium between the thermally generated carriers and the repelling electric field of the n-doped emitter depletion region is disturbed. This allows thermally excited electrons to inject from the emitter into the base region. These electrons diffuse through the base from the region of high concentration near the emitter toward the region of low concentration near the collector. The electrons in the base are called minority carriers because the base is doped p-type, which makes holes the majority carrier in the base. In a PNP device, analogous behaviour occurs, but with holes as the dominant current carriers.

To minimize the fraction of carriers that recombine before reaching the collector–base junction, the transistor's base region must be thin enough that carriers can diffuse across it in much less time than the semiconductor's minority-carrier lifetime. Having a lightly doped base ensures recombination rates are low. In particular, the thickness of the base must be much less than the diffusion length of the electrons. The collector–base junction is reverse-biased, and so negligible electron injection occurs from the collector to the base, but carriers that are injected into the base and diffuse to reach the collector-base depletion region are swept into the collector by the electric field in the depletion region. The thin shared base and asymmetric collector–emitter doping are what differentiates a bipolar transistor from two separate and oppositely biased diodes connected in series. 

Voltage, current, and charge control

The collector–emitter current can be viewed as being controlled by the base–emitter current (current control), or by the base–emitter voltage (voltage control). These views are related by the current–voltage relation of the base–emitter junction, which is the usual exponential current–voltage curve of a p–n junction (diode).

The explanation for collector current is the concentration gradient of minority carriers in the base region. Due to low-level injection (in which there are far fewer excess carriers than normal majority carriers), the ambipolar transport rate (in which the excess majority and minority carriers flow at the same rate) is in effect determined by the excess minority carriers.

Detailed transistor models of transistor action, such as the Gummel–Poon model, account for the distribution of this charge explicitly to explain transistor behaviour more exactly. The charge-control view easily handles phototransistors, where minority carriers in the base region are created by the absorption of photons, and handles the dynamics of turn-off, or recovery time, which depends on charge in the base region recombining. However, because base charge is not a signal that is visible at the terminals, the current- and voltage-control views are generally used in circuit design and analysis.
In analog circuit design, the current-control view is sometimes used because it is approximately linear. That is, the collector current is approximately β times the base current. Some basic circuits can be designed by assuming that the base-emitter voltage is approximately constant and that collector current is β times the base current. However, to accurately and reliably design production BJT circuits, the voltage-control (for example, Ebers–Moll) model is required. The voltage-control model requires an exponential function to be taken into account, but when it is linearized such that the transistor can be modeled as a transconductance, as in the Ebers–Moll model, design for circuits such as differential amplifiers again becomes a mostly linear problem, so the voltage-control view is often preferred. For translinear circuits, in which the exponential I–V curve is key to the operation, the transistors are usually modeled as voltage-controlled current sources whose transconductance is proportional to their collector current. In general, transistor-level circuit analysis is performed using SPICE or a comparable analog-circuit simulator, so mathematical model complexity is usually not of much concern to the designer, but a simplified view of the characteristics allows designs to be created following a logical process.
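A minimal numeric sketch of the voltage-control view, using the conventional exponential diode law and the linearized transconductance gm = IC/VT; the saturation current, thermal voltage and an assumed beta of 100 are typical illustrative values, not data for any particular device:

# A minimal sketch of the exponential base-emitter relation and the linearized transconductance.
import math

I_S = 1e-14      # saturation current (A), illustrative
V_T = 0.02585    # thermal voltage near 300 K (V)

def collector_current(v_be: float) -> float:
    return I_S * (math.exp(v_be / V_T) - 1.0)   # diode-law collector current

v_be = 0.65
i_c = collector_current(v_be)
g_m = i_c / V_T            # transconductance of the linearized (small-signal) model
i_b = i_c / 100.0          # current-control view with an assumed beta of 100
print(f"IC ~ {i_c*1e3:.3f} mA, gm ~ {g_m*1e3:.2f} mS, IB ~ {i_b*1e6:.2f} uA")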

Turn-on, turn-off, and storage delay

Bipolar transistors, and particularly power transistors, have long base-storage times when they are driven into saturation; the base storage limits turn-off time in switching applications. A Baker clamp can prevent the transistor from heavily saturating, which reduces the amount of charge stored in the base and thus improves switching time. 

Transistor characteristics: alpha (α) and beta (β)

The proportion of carriers able to cross the base and reach the collector is a measure of the BJT efficiency. The heavy doping of the emitter region and light doping of the base region causes many more electrons to be injected from the emitter into the base than holes to be injected from the base into the emitter. A thin and lightly-doped base region means that most of the minority carriers that are injected into the base will diffuse to the collector and not recombine.

The common-emitter current gain is represented by βF or the h-parameter hFE; it is approximately the ratio of the DC collector current to the DC base current in forward-active region. It is typically greater than 50 for small-signal transistors, but can be smaller in transistors designed for high-power applications. Both injection efficiency and recombination in the base reduce the BJT gain.

Another useful characteristic is the common-base current gain, αF. The common-base current gain is approximately the gain of current from emitter to collector in the forward-active region. This ratio usually has a value close to unity, typically between 0.980 and 0.998. It is less than unity due to recombination of charge carriers as they cross the base region.

Alpha and beta are related by the following identities:

αF = βF / (βF + 1)    and    βF = αF / (1 − αF)

Beta is a convenient figure of merit to describe the performance of a bipolar transistor, but is not a fundamental physical property of the device. Bipolar transistors can be considered voltage-controlled devices (fundamentally the collector current is controlled by the base-emitter voltage; the base current could be considered a defect and is controlled by the characteristics of the base-emitter junction and recombination in the base). In many designs beta is assumed high enough so that base current has a negligible effect on the circuit. In some circuits (generally switching circuits), sufficient base current is supplied so that even the lowest beta value a particular device may have will still allow the required collector current to flow.
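Expressed as code, the two identities simply convert between the two gains; the numeric values are examples only:

# Converting between common-base (alpha) and common-emitter (beta) current gains.
def beta_from_alpha(alpha: float) -> float:
    return alpha / (1.0 - alpha)

def alpha_from_beta(beta: float) -> float:
    return beta / (beta + 1.0)

print(beta_from_alpha(0.995))   # ~199
print(alpha_from_beta(100.0))   # ~0.990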

Structure

Simplified cross section of a planar NPN bipolar junction transistor

A BJT consists of three differently doped semiconductor regions: the emitter region, the base region and the collector region. These regions are, respectively, p type, n type and p type in a PNP transistor, and n type, p type and n type in an NPN transistor. Each semiconductor region is connected to a terminal, appropriately labeled: emitter (E), base (B) and collector (C).

The base is physically located between the emitter and the collector and is made from lightly doped, high-resistivity material. The collector surrounds the emitter region, making it almost impossible for the electrons injected into the base region to escape without being collected, thus making the resulting value of α very close to unity, and so, giving the transistor a large β. A cross-section view of a BJT indicates that the collector–base junction has a much larger area than the emitter–base junction.

The bipolar junction transistor, unlike other transistors, is usually not a symmetrical device. This means that interchanging the collector and the emitter makes the transistor leave the forward active mode and start to operate in reverse mode. Because the transistor's internal structure is usually optimized for forward-mode operation, interchanging the collector and the emitter makes the values of α and β in reverse operation much smaller than those in forward operation; often the α of the reverse mode is lower than 0.5. The lack of symmetry is primarily due to the doping ratios of the emitter and the collector. The emitter is heavily doped, while the collector is lightly doped, allowing a large reverse bias voltage to be applied before the collector–base junction breaks down. The collector–base junction is reverse biased in normal operation. The reason the emitter is heavily doped is to increase the emitter injection efficiency: the ratio of carriers injected by the emitter to those injected by the base. For high current gain, most of the carriers injected into the emitter–base junction must come from the emitter. 

Die of a KSY34 high-frequency NPN transistor. Bond wires connect to the base and emitter
 
The low-performance "lateral" bipolar transistors sometimes used in CMOS processes are sometimes designed symmetrically, that is, with no difference between forward and backward operation.

Small changes in the voltage applied across the base–emitter terminals cause the current between the emitter and the collector to change significantly. This effect can be used to amplify the input voltage or current. BJTs can be thought of as voltage-controlled current sources, but are more simply characterized as current-controlled current sources, or current amplifiers, due to the low impedance at the base.

Early transistors were made from germanium but most modern BJTs are made from silicon. A significant minority are also now made from gallium arsenide, especially for very high speed applications (see HBT, below). 

NPN

The symbol of an NPN BJT. A mnemonic for the symbol is "not pointing in".

NPN is one of the two types of bipolar transistors, consisting of a layer of P-doped semiconductor (the "base") between two N-doped layers. A small current entering the base is amplified to produce a large collector and emitter current. That is, when there is a positive potential difference measured from the base of an NPN transistor to its emitter (that is, when the base is high relative to the emitter), as well as a positive potential difference measured from the collector to the emitter, the transistor becomes active. In this "on" state, current flows from the collector to the emitter of the transistor. Most of the current is carried by electrons moving from emitter to collector as minority carriers in the P-type base region. To allow for greater current and faster operation, most bipolar transistors used today are NPN, because electron mobility is higher than hole mobility.

PNP

The symbol of a PNP BJT. A mnemonic for the symbol is "points in proudly".

The other type of BJT is the PNP, consisting of a layer of N-doped semiconductor between two layers of P-doped material. A small current leaving the base is amplified in the collector output. That is, a PNP transistor is "on" when its base is pulled low relative to the emitter. In a PNP transistor, the emitter–base region is forward biased, so holes are injected into the base as minority carriers. The base is very thin, and most of the holes cross the reverse-biased base–collector junction to the collector.

The arrows in the NPN and PNP transistor symbols indicate the PN junction between the base and emitter. When the device is in forward active or forward saturated mode, the arrow, placed on the emitter leg, points in the direction of the conventional current.

Heterojunction bipolar transistor

Bands in graded heterojunction NPN bipolar transistor. Barriers indicated for electrons to move from emitter to base and for holes to be injected backward from base to emitter; also, grading of bandgap in base assists electron transport in base region. Light colors indicate depleted regions.
 
The heterojunction bipolar transistor (HBT) is an improvement of the BJT that can handle signals of very high frequencies up to several hundred GHz. It is common in modern ultrafast circuits, mostly RF systems.

Symbol for NPN bipolar transistor with current flow direction
 
Heterojunction transistors have different semiconductors for the elements of the transistor. Usually the emitter is composed of a larger bandgap material than the base. The figure shows that this difference in bandgap allows the barrier for holes to inject backward from the base into the emitter, denoted in the figure as Δφp, to be made large, while the barrier for electrons to inject into the base Δφn is made low. This barrier arrangement helps reduce minority carrier injection from the base when the emitter-base junction is under forward bias, and thus reduces base current and increases emitter injection efficiency.

The improved injection of carriers into the base allows the base to have a higher doping level, resulting in lower resistance to access the base electrode. In the more traditional BJT, also referred to as homojunction BJT, the efficiency of carrier injection from the emitter to the base is primarily determined by the doping ratio between the emitter and base, which means the base must be lightly doped to obtain high injection efficiency, making its resistance relatively high. In addition, higher doping in the base can improve figures of merit like the Early voltage by lessening base narrowing.

The grading of composition in the base, for example, by progressively increasing the amount of germanium in a SiGe transistor, causes a gradient in bandgap in the neutral base, denoted in the figure by ΔφG, providing a "built-in" field that assists electron transport across the base. That drift component of transport aids the normal diffusive transport, increasing the frequency response of the transistor by shortening the transit time across the base.

Two commonly used HBTs are silicon–germanium and aluminum gallium arsenide, though a wide variety of semiconductors may be used for the HBT structure. HBT structures are usually grown by epitaxy techniques like MOCVD and MBE.

Regions of operation

Junction type | Applied voltages | B–E junction bias | B–C junction bias | Mode
NPN           | E < B < C        | Forward           | Reverse           | Forward-active
NPN           | E < B > C        | Forward           | Forward           | Saturation
NPN           | E > B < C        | Reverse           | Reverse           | Cut-off
NPN           | E > B > C        | Reverse           | Forward           | Reverse-active
PNP           | E < B < C        | Reverse           | Forward           | Reverse-active
PNP           | E < B > C        | Reverse           | Reverse           | Cut-off
PNP           | E > B < C        | Forward           | Forward           | Saturation
PNP           | E > B > C        | Forward           | Reverse           | Forward-active

Bipolar transistors have four distinct regions of operation, defined by BJT junction biases.
Forward-active (or simply active)
The base–emitter junction is forward biased and the base–collector junction is reverse biased. Most bipolar transistors are designed to afford the greatest common-emitter current gain, βF, in forward-active mode. If this is the case, the collector–emitter current is approximately proportional to the base current, but many times larger, for small base current variations.
Reverse-active (or inverse-active or inverted)
By reversing the biasing conditions of the forward-active region, a bipolar transistor goes into reverse-active mode. In this mode, the emitter and collector regions switch roles. Because most BJTs are designed to maximize current gain in forward-active mode, the current gain in inverted mode (βR) is several times smaller (2–3 times for the ordinary germanium transistor). This transistor mode is seldom used, usually being considered only for failsafe conditions and some types of bipolar logic. The reverse-bias breakdown voltage to the base may be an order of magnitude lower in this region.
Saturation
With both junctions forward-biased, a BJT is in saturation mode and facilitates high current conduction from the emitter to the collector (or the other direction in the case of NPN, with negatively charged carriers flowing from emitter to collector). This mode corresponds to a logical "on", or a closed switch.
Cut-off
In cut-off, biasing conditions opposite of saturation (both junctions reverse biased) are present. There is very little current, which corresponds to a logical "off", or an open switch.
 
Input and output characteristics for a common-base silicon transistor amplifier; the output characteristics show the avalanche breakdown region.

The modes of operation can be described in terms of the applied voltages (this description applies to NPN transistors; polarities are reversed for PNP transistors):
Forward-active
Base higher than emitter, collector higher than base (in this mode the collector current is proportional to the base current by βF).
Saturation
Base higher than emitter, but collector is not higher than base.
Cut-off
Base lower than emitter, but collector is higher than base. It means the transistor is not letting conventional current go through from collector to emitter.
Reverse-active
Base lower than emitter, collector lower than base: reverse conventional current goes through transistor.
In terms of junction biasing: a reverse-biased base–collector junction means VBC < 0 for the NPN (and the opposite for the PNP).

Although these regions are well defined for sufficiently large applied voltage, they overlap somewhat for small (less than a few hundred millivolts) biases. For example, in the typical grounded-emitter configuration of an NPN BJT used as a pulldown switch in digital logic, the "off" state never involves a reverse-biased junction because the base voltage never goes below ground; nevertheless the forward bias is close enough to zero that essentially no current flows, so this end of the forward active region can be regarded as the cutoff region. 
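To make the bias table above concrete, the following minimal Python sketch (not part of the original article; the function name bjt_mode and its arguments are illustrative) classifies the region of operation from the three terminal voltages. It ignores the small-bias overlap discussed above and treats a junction as forward biased whenever the base is more positive (for NPN) than the other terminal.

# Minimal sketch (illustrative only): classify a BJT's region of operation
# from its terminal voltages, following the junction-bias table above.
def bjt_mode(v_e, v_b, v_c, polarity="NPN"):
    """Return the operating region implied by the terminal voltages."""
    if polarity == "PNP":
        # A PNP is the mirror image of an NPN: flip all polarities.
        v_e, v_b, v_c = -v_e, -v_b, -v_c
    be_forward = v_b > v_e   # base-emitter junction forward biased
    bc_forward = v_b > v_c   # base-collector junction forward biased
    if be_forward and not bc_forward:
        return "forward-active"
    if be_forward and bc_forward:
        return "saturation"
    if not be_forward and not bc_forward:
        return "cut-off"
    return "reverse-active"

# Example: NPN with emitter grounded, base at 0.7 V, collector at 5 V
print(bjt_mode(0.0, 0.7, 5.0))   # forward-active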

Active-mode transistors in circuits

Structure and use of NPN transistor. Arrow according to schematic.
 
The diagram shows a schematic representation of an NPN transistor connected to two voltage sources. (The same description applies to a PNP transistor with reversed directions of current flow and applied voltage.) This applied voltage causes the lower P-N junction to become forward biased, allowing a flow of electrons from the emitter into the base. In active mode, the electric field existing between base and collector (caused by VCE) will cause the majority of these electrons to cross the upper P-N junction into the collector to form the collector current IC. The remainder of the electrons recombine with holes, the majority carriers in the base, making a current through the base connection to form the base current, IB. As shown in the diagram, the emitter current, IE, is the total transistor current, which is the sum of the other terminal currents, (i.e., IE = IB + IC).

In the diagram, the arrows representing current point in the direction of conventional current – the flow of electrons is in the opposite direction of the arrows because electrons carry negative electric charge. In active mode, the ratio of the collector current to the base current is called the DC current gain. This gain is usually 100 or more, but robust circuit designs do not depend on the exact value (for example see op-amp). The value of this gain for DC signals is referred to as βDC, and the value of this gain for small signals is referred to as βac. That is, when a small change in the currents occurs, and sufficient time has passed for the new condition to reach a steady state, βac is the ratio of the change in collector current to the change in base current. The symbol β is used for both βDC and βac.

The emitter current is related to VBE exponentially. At room temperature, an increase in VBE by approximately 60 mV increases the emitter current by a factor of 10. Because the base current is approximately proportional to the collector and emitter currents, they vary in the same way.
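As a rough illustration of this exponential behaviour, the short Python sketch below (illustrative only; the saturation-current value is an assumption) shows the roughly factor-of-10 change in emitter current for a 60 mV step in VBE at room temperature.

# Minimal sketch (illustrative values): the "factor of 10 per ~60 mV"
# behaviour of the emitter current at room temperature.
import math

V_T = 0.026           # thermal voltage in volts at ~300 K (assumed)
I_S = 1e-14           # assumed saturation current in amperes

def emitter_current(v_be):
    # Ideal exponential law; ignores high-level injection, series resistance, etc.
    return I_S * (math.exp(v_be / V_T) - 1.0)

i1 = emitter_current(0.60)
i2 = emitter_current(0.66)             # 60 mV higher
print(i2 / i1)                          # about 10 (e^(0.060/0.026) is roughly 10)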

History

The bipolar point-contact transistor was invented in December 1947 at the Bell Telephone Laboratories by John Bardeen and Walter Brattain under the direction of William Shockley. The junction version, known as the bipolar junction transistor (BJT), invented by Shockley in 1948, was for three decades the device of choice in the design of discrete and integrated circuits. Nowadays, the use of the BJT has declined in favor of CMOS technology in the design of digital integrated circuits. The incidental low-performance BJTs inherent in CMOS ICs, however, are often utilized as bandgap voltage references, silicon bandgap temperature sensors, and to handle electrostatic discharge.

Germanium transistors

The germanium transistor was more common in the 1950s and 1960s but has a greater tendency to exhibit thermal runaway.

Early manufacturing techniques

Various methods of manufacturing bipolar transistors were developed.

Theory and modeling

Band diagram for NPN transistor at equilibrium
 
Band diagram for NPN transistor in active mode, showing injection of electrons from emitter to base, and their overshoot into the collector

Transistors can be thought of as two diodes (P–N junctions) sharing a common region that minority carriers can move through. A PNP BJT will function like two diodes that share an N-type cathode region, and the NPN like two diodes sharing a P-type anode region. Connecting two diodes with wires will not make a transistor, since minority carriers will not be able to get from one P–N junction to the other through the wire.

Both types of BJT function by letting a small current input to the base control an amplified output from the collector. The result is that the transistor makes a good switch that is controlled by its base input. The BJT also makes a good amplifier, since it can multiply a weak input signal to about 100 times its original strength. Networks of transistors are used to make powerful amplifiers with many different applications. In the discussion below, the focus is on the NPN bipolar transistor. In the NPN transistor in what is called active mode, the base–emitter voltage and collector–base voltage are positive, forward-biasing the emitter–base junction and reverse-biasing the collector–base junction. In the active mode of operation, electrons are injected from the forward-biased n-type emitter region into the p-type base, where they diffuse as minority carriers to the reverse-biased n-type collector and are swept away by the electric field in the reverse-biased collector–base junction. For a figure describing forward and reverse bias, see semiconductor diodes.

Large-signal models

In 1954, Jewell James Ebers and John L. Moll introduced their mathematical model of transistor currents.

Ebers–Moll model

Ebers–Moll model for an NPN transistor. IB, IC and IE are the base, collector and emitter currents; ICD and IED are the collector and emitter diode currents; αF and αR are the forward and reverse common-base current gains.
 
Ebers–Moll model for a PNP transistor
 
Approximated Ebers–Moll model for an NPN transistor in the forward active mode. The collector diode is reverse-biased so ICD is virtually zero. Most of the emitter diode current (αF is nearly 1) is drawn from the collector, providing the amplification of the base current.
 
The DC emitter and collector currents in active mode are well modeled by an approximation to the Ebers–Moll model:

I_E = I_ES (e^(V_BE/V_T) − 1)
I_C = α_F I_E
I_B = (1 − α_F) I_E

The base internal current is mainly by diffusion (see Fick's law) and

J_n,base = (q D_n n_bo / W) e^(V_EB/V_T)

where
  • V_T is the thermal voltage kT/q (approximately 26 mV at 300 K ≈ room temperature)
  • I_E is the emitter current
  • I_C is the collector current
  • α_F is the common-base forward short-circuit current gain (0.98 to 0.998)
  • I_ES is the reverse saturation current of the base–emitter diode (on the order of 10^−15 to 10^−12 amperes)
  • V_BE is the base–emitter voltage
  • D_n is the diffusion constant for electrons in the p-type base
  • W is the base width
The α_F and forward β_F parameters are as described previously. A reverse β_R is sometimes included in the model.

The unapproximated Ebers–Moll equations used to describe the three currents in any operating region are given below. These equations are based on the transport model for a bipolar junction transistor:

I_C = I_S [ (e^(V_BE/V_T) − e^(V_BC/V_T)) − (1/β_R)(e^(V_BC/V_T) − 1) ]
I_B = I_S [ (1/β_F)(e^(V_BE/V_T) − 1) + (1/β_R)(e^(V_BC/V_T) − 1) ]
I_E = I_S [ (e^(V_BE/V_T) − e^(V_BC/V_T)) + (1/β_F)(e^(V_BE/V_T) − 1) ]

where
  • I_C is the collector current
  • I_B is the base current
  • I_E is the emitter current
  • β_F is the forward common-emitter current gain (20 to 500)
  • β_R is the reverse common-emitter current gain (0 to 20)
  • I_S is the reverse saturation current (on the order of 10^−15 to 10^−12 amperes)
  • V_T is the thermal voltage (approximately 26 mV at 300 K ≈ room temperature)
  • V_BE is the base–emitter voltage
  • V_BC is the base–collector voltage
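A minimal Python sketch of these transport-model equations is given below; the parameter values (I_S, β_F, β_R) are illustrative assumptions, not values from the article.

# Minimal sketch of the transport-model (Ebers-Moll) equations above for an
# NPN device. Parameter values are illustrative assumptions.
import math

I_S    = 1e-15   # reverse saturation current (A)
BETA_F = 100.0   # forward common-emitter current gain
BETA_R = 2.0     # reverse common-emitter current gain
V_T    = 0.026   # thermal voltage (V) at ~300 K

def bjt_currents(v_be, v_bc):
    """Return (i_C, i_B, i_E) from the transport model."""
    x_be = math.exp(v_be / V_T) - 1.0
    x_bc = math.exp(v_bc / V_T) - 1.0
    i_t  = I_S * (x_be - x_bc)                 # transport current I_S(e^VBE/VT - e^VBC/VT)
    i_c  = i_t - (I_S / BETA_R) * x_bc
    i_b  = (I_S / BETA_F) * x_be + (I_S / BETA_R) * x_bc
    i_e  = i_c + i_b                            # Kirchhoff: i_E = i_C + i_B
    return i_c, i_b, i_e

# Forward-active example: V_BE = 0.65 V, V_BC = -4 V (reverse biased)
print(bjt_currents(0.65, -4.0))
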
Base-width modulation

Top: NPN base width for low collector–base reverse bias; bottom: narrower NPN base width for large collector–base reverse bias. Hashed regions are depleted regions.

As the collector–base voltage (V_CB) varies, the collector–base depletion region varies in size. An increase in the collector–base voltage, for example, causes a greater reverse bias across the collector–base junction, increasing the collector–base depletion region width and decreasing the width of the base. This variation in base width is often called the Early effect after its discoverer James M. Early.

Narrowing of the base width has two consequences:
  • There is a lesser chance for recombination within the "smaller" base region.
  • The charge gradient is increased across the base, and consequently, the current of minority carriers injected across the emitter junction increases.
Both factors increase the collector or "output" current of the transistor in response to an increase in the collector–base voltage. 

In the forward-active region, the Early effect modifies the collector current (I_C) and the forward common-emitter current gain (β_F) as given by:

I_C = I_S e^(V_BE/V_T) (1 + V_CE/V_A)
β_F = β_F0 (1 + V_CE/V_A)
r_o = V_A / I_C

where:
  • V_CE is the collector–emitter voltage
  • V_A is the Early voltage (15 V to 150 V)
  • β_F0 is the forward common-emitter current gain when V_CB = 0 V
  • r_o is the output impedance
  • I_C is the collector current
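The following short Python sketch (with assumed bias and Early-voltage values, not taken from the article) evaluates these corrections and the resulting output resistance r_o = V_A / I_C.

# Minimal sketch (illustrative values) of the Early-effect corrections above:
# collector current and current gain grow roughly linearly with V_CE, and the
# small-signal output resistance is approximately V_A / I_C.
import math

I_S     = 1e-15   # assumed saturation current (A)
V_T     = 0.026   # thermal voltage (V)
V_A     = 80.0    # assumed Early voltage (V), typically 15-150 V
BETA_F0 = 100.0   # assumed gain at V_CB = 0

def collector_current(v_be, v_ce):
    return I_S * math.exp(v_be / V_T) * (1.0 + v_ce / V_A)

def beta_f(v_ce):
    return BETA_F0 * (1.0 + v_ce / V_A)

i_c = collector_current(0.65, 5.0)
r_o = V_A / i_c                      # output resistance (ohms)
print(i_c, beta_f(5.0), r_o)
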
Punchthrough
When the base–collector voltage reaches a certain (device-specific) value, the base–collector depletion region boundary meets the base–emitter depletion region boundary. In this state the transistor effectively has no base, and the device thus loses all gain.

Gummel–Poon charge-control model

The Gummel–Poon model is a detailed charge-controlled model of BJT dynamics, which has been adopted and elaborated by others to explain transistor dynamics in greater detail than the terminal-based models typically do. This model also includes the dependence of transistor β-values upon the direct current levels in the transistor, which are assumed to be current-independent in the Ebers–Moll model.

Small-signal models


Hybrid-pi model

Hybrid-pi model
The hybrid-pi model is a popular circuit model used for analyzing the small-signal and AC behavior of bipolar junction and field-effect transistors. It is sometimes also called the Giacoletto model because it was introduced by L. J. Giacoletto in 1969. The model can be quite accurate for low-frequency circuits and can easily be adapted for higher-frequency circuits with the addition of appropriate inter-electrode capacitances and other parasitic elements.
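As an illustration, the sketch below computes the usual hybrid-pi small-signal parameters from an assumed DC operating point (g_m = I_C/V_T, r_pi = β/g_m, r_o ≈ V_A/I_C); all numerical values are assumptions, not values from the article.

# Minimal sketch (assumed bias values) of the usual hybrid-pi small-signal
# parameters derived from a DC operating point.
I_C  = 1e-3     # assumed collector bias current (A)
BETA = 150.0    # assumed small-signal current gain
V_A  = 80.0     # assumed Early voltage (V)
V_T  = 0.026    # thermal voltage (V)

g_m  = I_C / V_T          # transconductance (A/V)
r_pi = BETA / g_m         # base-emitter small-signal resistance (ohms)
r_o  = V_A / I_C          # output resistance from the Early effect (ohms)

print(g_m, r_pi, r_o)     # roughly 0.038 A/V, 3.9 kohm, 80 kohm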

h-parameter model

Generalized h-parameter model of an NPN BJT.
Replace x with e, b or c for CE, CB and CC topologies respectively.

Another model commonly used to analyze BJT circuits is the h-parameter model, closely related to the hybrid-pi model and the y-parameter two-port, but using input current and output voltage as independent variables, rather than input and output voltages. This two-port network is particularly suited to BJTs as it lends itself easily to the analysis of circuit behaviour, and may be used to develop further accurate models. As shown, the term, x, in the model represents a different BJT lead depending on the topology used. For common-emitter mode the various symbols take on the specific values as:
  • Terminal 1, base
  • Terminal 2, collector
  • Terminal 3 (common), emitter; giving x to be e
  • ii, base current (ib)
  • io, collector current (ic)
  • Vin, base-to-emitter voltage (VBE)
  • Vo, collector-to-emitter voltage (VCE)
and the h-parameters are given by:
  • hix = hie, the input impedance of the transistor (corresponding to the base resistance rpi).
  • hrx = hre, represents the dependence of the transistor's IB–VBE curve on the value of VCE. It is usually very small and is often neglected (assumed to be zero).
  • hfx = hfe, the current-gain of the transistor. This parameter is often specified as hFE or the DC current-gain (βDC) in datasheets.
  • hox = 1/hoe, the output impedance of the transistor. The parameter hoe usually corresponds to the output admittance of the bipolar transistor and has to be inverted to convert it to an impedance.
As shown, the h-parameters have lower-case subscripts and hence signify AC conditions or analyses. For DC conditions they are specified in upper-case. For the CE topology, an approximate h-parameter model is commonly used which further simplifies the circuit analysis. For this, the hoe and hre parameters are neglected (that is, the output impedance 1/hoe is taken as infinite and hre is set to zero). The h-parameter model as shown is suited to low-frequency, small-signal analysis. For high-frequency analyses, the inter-electrode capacitances that are important at high frequencies must be added.
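As a usage illustration, the sketch below applies the simplified common-emitter h-parameter model (hre and hoe neglected, as described above) to estimate the voltage gain of a resistively loaded stage; the component values are assumptions made for the example.

# Minimal sketch (assumed component values) of the simplified common-emitter
# h-parameter analysis, with h_re and h_oe neglected.
h_ie = 2.5e3    # assumed input impedance (ohms)
h_fe = 120.0    # assumed small-signal current gain
R_C  = 4.7e3    # assumed collector load resistor (ohms)

# With h_re = 0 and h_oe = 0:  i_b = v_in / h_ie,  i_c = h_fe * i_b,
# v_out = -i_c * R_C, so the voltage gain is:
A_v = -h_fe * R_C / h_ie
print(A_v)      # about -226
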
Etymology of hFE
The h refers to its being an h-parameter, a set of parameters named for their origin in a hybrid equivalent circuit model. F is from forward current amplification, also called the current gain. E refers to the transistor operating in a common-emitter (CE) configuration. Capital letters in the subscript indicate that hFE refers to a direct-current circuit.

Industry models

The Gummel–Poon SPICE model is often used, but it suffers from several limitations. These have been addressed in various more advanced models: Mextram, VBIC, HICUM, Modella.

Applications

The BJT remains a device that excels in some applications, such as discrete circuit design, due to the very wide selection of BJT types available, and because of its high transconductance and output resistance compared to MOSFETs.

The BJT is also the choice for demanding analog circuits, especially for very-high-frequency applications, such as radio-frequency circuits for wireless systems. 

High-speed digital logic

Emitter-coupled logic (ECL) uses BJTs.

Bipolar transistors can be combined with MOSFETs in an integrated circuit by using a BiCMOS process of wafer fabrication to create circuits that take advantage of the application strengths of both types of transistor. 

Amplifiers

The transistor parameters α and β characterize the current gain of the BJT. It is this gain that allows BJTs to be used as the building blocks of electronic amplifiers. The three main BJT amplifier topologies are:
  • common emitter
  • common base
  • common collector

Temperature sensors

Because of the known temperature and current dependence of the forward-biased base–emitter junction voltage, the BJT can be used to measure temperature by subtracting two voltages at two different bias currents in a known ratio.
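A minimal sketch of this idea follows (not from the article; the constants are physical, but the function name and the example reading are illustrative): biasing the same junction at two currents in a known ratio N gives ΔV_BE = (kT/q)·ln(N), so the temperature follows directly from the measured difference.

# Minimal sketch of Delta-V_BE thermometry: two bias currents in a known
# ratio N give Delta V_BE = (kT/q) * ln(N), from which T is recovered.
import math

K = 1.380649e-23      # Boltzmann constant (J/K)
Q = 1.602176634e-19   # elementary charge (C)

def temperature_from_delta_vbe(delta_vbe, current_ratio):
    return Q * delta_vbe / (K * math.log(current_ratio))

# Example: a 59.6 mV difference with a 10:1 current ratio corresponds to ~300 K
print(temperature_from_delta_vbe(0.0596, 10.0))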

Logarithmic converters

Because base–emitter voltage varies as the logarithm of the base–emitter and collector–emitter currents, a BJT can also be used to compute logarithms and anti-logarithms. A diode can also perform these nonlinear functions but the transistor provides more circuit flexibility. 
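A brief sketch of the logarithmic relationship (illustrative values only, not from the article): inverting the ideal exponential law gives V_BE ≈ V_T·ln(I_C/I_S), so the base–emitter voltage is a logarithmic measure of the collector current.

# Minimal sketch (illustrative values) of the logarithmic relationship exploited
# by a BJT log converter: V_BE is proportional to the log of the collector current.
import math

V_T = 0.026     # thermal voltage (V)
I_S = 1e-15     # assumed saturation current (A)

def vbe_from_ic(i_c):
    # Inverting the ideal exponential law: V_BE = V_T * ln(I_C / I_S)
    return V_T * math.log(i_c / I_S)

# A decade change in current shifts V_BE by ~60 mV, i.e. the output tracks log(I_C)
print(vbe_from_ic(1e-6), vbe_from_ic(1e-5))   # about 0.539 V and 0.599 V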

Vulnerabilities

Exposure of the transistor to ionizing radiation causes radiation damage. Radiation causes a buildup of 'defects' in the base region that act as recombination centers. The resulting reduction in minority carrier lifetime causes gradual loss of gain of the transistor. 

Transistors have "maximum ratings", including power ratings (essentially limited by self-heating), maximum collector and base currents (both continuous/DC ratings and peak), and breakdown voltage ratings, beyond which the device may fail or at least perform badly.

In addition to normal breakdown ratings of the device, power BJTs are subject to a failure mode called secondary breakdown, in which excessive current and normal imperfections in the silicon die cause portions of the silicon inside the device to become disproportionately hotter than the others. The electrical resistivity of doped silicon, like other semiconductors, has a negative temperature coefficient, meaning that it conducts more current at higher temperatures. Thus, the hottest part of the die conducts the most current, causing its conductivity to increase, which then causes it to become progressively hotter again, until the device fails internally. The thermal runaway process associated with secondary breakdown, once triggered, occurs almost instantly and may catastrophically damage the transistor package.

If the emitter-base junction is reverse biased into avalanche or Zener mode and charge flows for a short period of time, the current gain of the BJT will be permanently degraded.
