
Tuesday, September 22, 2015

Transistor


From Wikipedia, the free encyclopedia


Assorted discrete transistors. Packages in order from top to bottom: TO-3, TO-126, TO-92, SOT-23

A transistor is a semiconductor device used to amplify and switch electronic signals and electrical power. It is composed of semiconductor material with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals changes the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Today, some transistors are packaged individually, but many more are found embedded in integrated circuits.

The transistor is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems. Following its development in 1947 by American physicists John Bardeen, Walter Brattain, and William Shockley, the transistor revolutionized the field of electronics, and paved the way for smaller and cheaper radios, calculators, and computers, among other things. The transistor is on the list of IEEE milestones in electronics,[1] and the inventors were jointly awarded the 1956 Nobel Prize in Physics for their achievement.[2]

History


A replica of the first working transistor.

The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony. The triode, however, was a fragile device that consumed a lot of power. Physicist Julius Edgar Lilienfeld filed a patent for a field-effect transistor (FET) in Canada in 1925, which was intended to be a solid-state replacement for the triode.[3][4] Lilienfeld also filed identical patents in the United States in 1926[5] and 1928.[6][7] However, Lilienfeld did not publish any research articles about his devices, nor did his patents cite any specific examples of a working prototype. Because the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device had been built.[8] In 1934, German inventor Oskar Heil patented a similar device.[9]

John Bardeen, William Shockley and Walter Brattain at Bell Labs, 1948.

From November 17 to December 23, 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in the United States performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater than the input.[10] Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors. The term transistor was coined by John R. Pierce as a contraction of the term transresistance.[11][12][13] According to Lillian Hoddeson and Vicki Daitch, authors of a biography of John Bardeen, Shockley had proposed that Bell Labs' first patent for a transistor should be based on the field-effect and that he be named as the inventor. Having unearthed Lilienfeld's patents, which had gone into obscurity years earlier, lawyers at Bell Labs advised against Shockley's proposal because the idea of a field-effect transistor that used an electric field as a "grid" was not new. Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first point-contact transistor.[8] In acknowledgement of this accomplishment, Shockley, Bardeen, and Brattain were jointly awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect."[14]

In 1948, the point-contact transistor was independently invented by German physicists Herbert Mataré and Heinrich Welker while working at the Compagnie des Freins et Signaux, a Westinghouse subsidiary located in Paris. Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort during World War II. Using this knowledge, he began researching the phenomenon of "interference" in 1947. By June 1948, observing currents flowing through point contacts, Mataré produced consistent results using samples of germanium produced by Welker, similar to what Bardeen and Brattain had accomplished earlier in December 1947. Realizing that Bell Labs' scientists had already invented the transistor before them, the company rushed to get its "transistron" into production for amplified use in France's telephone network.[15]

Philco surface-barrier transistor developed and produced in 1953

The first high-frequency transistor was the surface-barrier germanium transistor developed by Philco in 1953, capable of operating up to 60 MHz.[16] These were made by etching depressions into an N-type germanium base from both sides with jets of indium(III) sulfate until it was a few ten-thousandths of an inch thick. Indium electroplated into the depressions formed the collector and emitter.[17][18] The first all-transistor car radio, produced in 1955 by Chrysler and Philco, used these transistors in its circuitry, and they were also the first transistors suitable for high-speed computers.[19][20][21][22]

The first working silicon transistor was developed at Bell Labs on January 26, 1954, by Morris Tanenbaum. The first commercial silicon transistor was produced by Texas Instruments in 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs.[23][24][25] The first MOS transistor was built by Kahng and Atalla at Bell Labs in 1960.[26]

Importance


A Darlington transistor opened up so the actual transistor chip (the small square) can be seen inside. A Darlington transistor is effectively two transistors on the same chip. One transistor is much larger than the other, but both are large in comparison to transistors in large-scale integration because this particular example is intended for power applications.

The transistor is the key active component in practically all modern electronics. Many consider it to be one of the greatest inventions of the 20th century.[27] Its importance in today's society rests on its ability to be mass-produced using a highly automated process (semiconductor device fabrication) that achieves astonishingly low per-transistor costs. The invention of the first transistor at Bell Labs was named an IEEE Milestone in 2009.[28]

Although several companies each produce over a billion individually packaged (known as discrete) transistors every year,[29] the vast majority of transistors are now produced in integrated circuits (often shortened to IC, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors whereas an advanced microprocessor, as of 2009, can use as many as 3 billion transistors (MOSFETs).[30] "About 60 million transistors were built in 2002 ... for [each] man, woman, and child on Earth."[31]

The transistor's low cost, flexibility, and reliability have made it a ubiquitous device. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical control function.

Simplified operation


A simple circuit diagram to show the labels of an n–p–n bipolar transistor.

The essential usefulness of a transistor comes from its ability to use a small signal applied between one pair of its terminals to control a much larger signal at another pair of terminals. This property is called gain. It can produce a stronger output signal, a voltage or current, which is proportional to a weaker input signal; that is, it can act as an amplifier. Alternatively, the transistor can be used to turn current on or off in a circuit as an electrically controlled switch, where the amount of current is determined by other circuit elements.

There are two types of transistors, which have slight differences in how they are used in a circuit. A bipolar transistor has terminals labeled base, collector, and emitter. A small current at the base terminal (that is, flowing between the base and the emitter) can control or switch a much larger current between the collector and emitter terminals. For a field-effect transistor, the terminals are labeled gate, source, and drain, and a voltage at the gate can control a current between source and drain.

The image to the right represents a typical bipolar transistor in a circuit. Charge will flow between emitter and collector terminals depending on the current in the base. Because internally the base and emitter connections behave like a semiconductor diode, a voltage drop develops between base and emitter while the base current exists. The amount of this voltage depends on the material the transistor is made from, and is referred to as VBE.
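
As a rough numerical illustration of this control action, here is a small Python sketch; the current gain and the ~0.7 V base–emitter drop are typical assumed values for a small silicon transistor, not figures from the text above:

    # Illustrative numbers only: a small-signal silicon BJT with an
    # assumed current gain (beta) of 100 and a ~0.7 V base-emitter drop.
    BETA = 100          # common-emitter current gain, device-dependent
    V_BE = 0.7          # volts, typical for a silicon transistor

    i_base = 100e-6               # 100 microamps driven into the base
    i_collector = BETA * i_base   # the controlled, much larger current

    print(f"Base current:      {i_base * 1e6:.0f} uA")
    print(f"Collector current: {i_collector * 1e3:.1f} mA")  # 10.0 mA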

Transistor as a switch


BJT used as an electronic switch, in grounded-emitter configuration.

Transistors are commonly used in digital circuits as electronic switches which can be either in an "on" or "off" state, both for high-power applications such as switched-mode power supplies and for low-power applications such as logic gates. Important parameters for this application include the current switched, the voltage handled, and the switching speed, characterised by the rise and fall times.

In a grounded-emitter transistor circuit, such as the light-switch circuit shown, as the base voltage rises, the emitter and collector currents rise exponentially. The collector voltage drops because of reduced resistance from collector to emitter. If the voltage difference between the collector and emitter were zero (or near zero), the collector current would be limited only by the load resistance (light bulb) and the supply voltage. This is called saturation because current is flowing from collector to emitter freely. When saturated, the switch is said to be on.[32]

Providing sufficient base drive current is a key problem in the use of bipolar transistors as switches. The transistor provides current gain, allowing a relatively large current in the collector to be switched by a much smaller current into the base terminal. The ratio of these currents varies depending on the type of transistor, and even for a particular type, varies depending on the collector current. In the example light-switch circuit shown, the resistor is chosen to provide enough base current to ensure the transistor will be saturated.
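
A minimal sketch of that base-resistor calculation, assuming a 5 V drive signal, a silicon base–emitter drop of about 0.7 V, and the common rule of thumb of demanding roughly ten times the minimum base current (a "forced beta" of 10). All component values are hypothetical:

    # Sizing the base resistor so the switch saturates.
    # All values are assumptions for illustration, not from the article.
    V_DRIVE = 5.0     # volts available to drive the base
    V_BE = 0.7        # base-emitter drop when conducting (silicon)
    I_LOAD = 0.1      # amps the light bulb draws when on
    FORCED_BETA = 10  # overdrive: far more base current than beta requires

    i_base = I_LOAD / FORCED_BETA        # base current for hard saturation
    r_base = (V_DRIVE - V_BE) / i_base   # Ohm's law across the base resistor
    print(f"Base resistor: {r_base:.0f} ohms")  # 430 ohms; pick 390, the
                                                # next standard value down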

In a switching circuit, the idea is to simulate, as near as possible, the ideal switch having the properties of open circuit when off, short circuit when on, and an instantaneous transition between the two states. Parameters are chosen such that the "off" output is limited to leakage currents too small to affect connected circuitry; the resistance of the transistor in the "on" state is too small to affect circuitry; and the transition between the two states is fast enough not to have a detrimental effect.

Transistor as an amplifier


Amplifier circuit, common-emitter configuration with a voltage-divider bias circuit.

The common-emitter amplifier is designed so that a small change in voltage (Vin) changes the small current through the base of the transistor; the transistor's current amplification combined with the properties of the circuit mean that small swings in Vin produce large changes in Vout.
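
One standard textbook approximation for this stage's gain is gain = -Rc / re, with re = VT / Ic. A sketch of the arithmetic; the bias current and collector resistor below are assumed values, not taken from the figure:

    # Small-signal voltage gain of a common-emitter stage, using the
    # standard approximation gain = -Rc / re with re = VT / Ic.
    # Bias point and resistor value are illustrative assumptions.
    V_T = 0.025      # thermal voltage at room temperature, ~25 mV
    I_C = 1e-3       # assumed 1 mA collector bias current
    R_C = 4700.0     # assumed collector resistor, ohms

    r_e = V_T / I_C            # intrinsic emitter resistance, 25 ohms
    gain = -R_C / r_e          # inverting voltage gain
    print(f"Voltage gain: {gain:.0f}")  # about -188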

Various configurations of single transistor amplifier are possible, with some providing current gain, some voltage gain, and some both.

From mobile phones to televisions, vast numbers of products include amplifiers for sound reproduction, radio transmission, and signal processing. The first discrete-transistor audio amplifiers barely supplied a few hundred milliwatts, but power and audio fidelity gradually increased as better transistors became available and amplifier architecture evolved.

Modern transistor audio amplifiers of up to a few hundred watts are common and relatively inexpensive.

Comparison with vacuum tubes

Prior to the development of transistors, vacuum (electron) tubes (or in the UK "thermionic valves" or just "valves") were the main active components in electronic equipment.

Advantages

The key advantages that have allowed transistors to replace vacuum tubes in most applications are
  • No cathode heater (which produces the characteristic orange glow of tubes), reducing power consumption, eliminating delay as tube heaters warm up, and conferring immunity to cathode poisoning and depletion.
  • Very small size and weight, reducing equipment size.
  • Large numbers of extremely small transistors can be manufactured as a single integrated circuit.
  • Low operating voltages compatible with batteries of only a few cells.
  • Circuits with greater energy efficiency are usually possible. For low-power applications (e.g., voltage amplification) in particular, energy consumption can be very much less than for tubes.
  • Inherent reliability and very long life; tubes always degrade and fail over time. Some transistorized devices have been in service for more than 50 years.
  • Complementary devices available, providing design flexibility including complementary-symmetry circuits, not possible with vacuum tubes.
  • Very low sensitivity to mechanical shock and vibration, providing physical ruggedness and virtually eliminating shock-induced spurious signals (e.g., microphonics in audio applications).
  • Not susceptible to breakage of a glass envelope, leakage, outgassing, and other physical damage.

Limitations

  • Silicon transistors can age and fail.[33]
  • High-power, high-frequency operation, such as that used in over-the-air television broadcasting, is better achieved in vacuum tubes due to improved electron mobility in a vacuum.
  • Solid-state devices are susceptible to damage from very brief electrical and thermal events, including electrostatic discharge in handling; vacuum tubes are electrically much more rugged.
  • Sensitivity to radiation and cosmic rays (special radiation-hardened chips are used for spacecraft devices).
  • Vacuum tubes in audio applications create significant lower-harmonic distortion, the so-called tube sound, which some people prefer.[34]

Types

BJT and JFET symbols: p–n–p and n–p–n BJTs; P-channel and N-channel JFETs.
JFET and MOSFET symbols: P-channel and N-channel JFETs; enhancement-mode and depletion-mode IGFETs (MOSFETs), in labelled and simplified forms.

Transistors are categorized by
  • semiconductor material (e.g. the elemental semiconductors germanium and silicon, or compounds such as gallium arsenide);
  • structure (BJT, JFET, IGFET/MOSFET, and others);
  • electrical polarity (n–p–n or p–n–p for BJTs; N-channel or P-channel for FETs);
  • maximum power rating (low, medium, or high power);
  • maximum operating frequency (audio, radio, or microwave frequency);
  • application (switch, general purpose, audio, high voltage, and so on);
  • physical packaging (through-hole metal or plastic, surface-mount).
Thus, a particular transistor may be described as a silicon, surface-mount, BJT, n–p–n, low-power, high-frequency switch.

Bipolar junction transistor (BJT)

Bipolar transistors are so named because they conduct by using both majority and minority carriers. The bipolar junction transistor, the first type of transistor to be mass-produced, is a combination of two junction diodes, and is formed of either a thin layer of p-type semiconductor sandwiched between two n-type semiconductors (an n–p–n transistor), or a thin layer of n-type semiconductor sandwiched between two p-type semiconductors (a p–n–p transistor). This construction produces two p–n junctions: a base–emitter junction and a base–collector junction, separated by a thin region of semiconductor known as the base region (two junction diodes wired together without sharing an intervening semiconducting region will not make a transistor).
BJTs have three terminals, corresponding to the three layers of semiconductor—an emitter, a base, and a collector. They are useful in amplifiers because the currents at the emitter and collector are controllable by a relatively small base current.[36] In an n–p–n transistor operating in the active region, the emitter–base junction is forward biased (electrons and holes recombine at the junction), and electrons are injected into the base region. Because the base is narrow, most of these electrons will diffuse into the reverse-biased (electrons and holes are formed at, and move away from the junction) base–collector junction and be swept into the collector; perhaps one-hundredth of the electrons will recombine in the base, which is the dominant mechanism in the base current. By controlling the number of electrons that can leave the base, the number of electrons entering the collector can be controlled.[36] Collector current is approximately β (common-emitter current gain) times the base current. It is typically greater than 100 for small-signal transistors but can be smaller in transistors designed for high-power applications.

Unlike the field-effect transistor (see below), the BJT is a low-input-impedance device. Also, as the base–emitter voltage (Vbe) is increased, the base–emitter current and hence the collector–emitter current (Ice) increase exponentially, according to the Shockley diode model and the Ebers–Moll model. Because of this exponential relationship, the BJT has a higher transconductance than the FET.
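
A short sketch of that exponential relationship under the simple Shockley diode model, from which the transconductance gm = Ic / VT follows. The saturation current Is below is an assumed, device-dependent figure:

    import math

    # Shockley-model sketch: collector current rises exponentially with
    # Vbe, so transconductance gm = Ic / VT grows with bias current.
    I_S = 1e-14      # saturation current, amps (assumed)
    V_T = 0.025      # thermal voltage, volts

    def collector_current(v_be):
        return I_S * (math.exp(v_be / V_T) - 1.0)

    for v_be in (0.60, 0.65, 0.70):
        ic = collector_current(v_be)
        gm = ic / V_T
        print(f"Vbe = {v_be:.2f} V -> Ic = {ic*1e3:7.3f} mA, "
              f"gm = {gm*1e3:7.1f} mS")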

Bipolar transistors can be made to conduct by exposure to light, because absorption of photons in the base region generates a photocurrent that acts as a base current; the collector current is approximately β times the photocurrent. Devices designed for this purpose have a transparent window in the package and are called phototransistors.

Field-effect transistor (FET)

The field-effect transistor, sometimes called a unipolar transistor, uses either electrons (in an n-channel FET) or holes (in a p-channel FET) for conduction. The four terminals of the FET are named source, gate, drain, and body (substrate). On most FETs, the body is connected to the source inside the package, and this will be assumed for the following description.

In a FET, the drain-to-source current flows via a conducting channel that connects the source region to the drain region. The conductivity is varied by the electric field that is produced when a voltage is applied between the gate and source terminals; hence the current flowing between the drain and source is controlled by the voltage applied between the gate and source. As the gate–source voltage (Vgs) is increased, the drain–source current (Ids) increases exponentially for Vgs below threshold, and then at a roughly quadratic rate, Ids ∝ (Vgs − VT)², where VT is the threshold voltage at which drain current begins,[37] in the "space-charge-limited" region above threshold. A quadratic behavior is not observed in modern devices, for example, at the 65 nm technology node.[38]
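
A sketch of that classical square-law model; the gain factor k and threshold voltage below are assumed values, and, as the text notes, modern short-channel devices deviate from this behavior:

    # Square-law sketch of an n-channel MOSFET above threshold:
    # Ids = k * (Vgs - VT)^2. The gain factor k and threshold VT are
    # assumed values for illustration only.
    K = 2e-3    # A/V^2, assumed device gain factor
    V_TH = 1.0  # volts, assumed threshold voltage

    def drain_current(v_gs):
        if v_gs <= V_TH:
            return 0.0  # ignoring subthreshold (exponential) leakage
        return K * (v_gs - V_TH) ** 2

    for v_gs in (0.5, 1.5, 2.0, 3.0):
        print(f"Vgs = {v_gs:.1f} V -> Ids = {drain_current(v_gs)*1e3:6.2f} mA")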

For low noise at narrow bandwidth the higher input resistance of the FET is advantageous.

FETs are divided into two families: junction FET (JFET) and insulated gate FET (IGFET). The IGFET is more commonly known as a metal–oxide–semiconductor FET (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor. Unlike IGFETs, the JFET gate forms a p–n diode with the channel, which lies between the source and drain. Functionally, this makes the n-channel JFET the solid-state equivalent of the vacuum tube triode which, similarly, forms a diode between its grid and cathode. Also, both devices operate in the depletion mode, both have a high input impedance, and both conduct current under the control of an input voltage.

Metal–semiconductor FETs (MESFETs) are JFETs in which the reverse biased p–n junction is replaced by a metal–semiconductor junction. These, and the HEMTs (high-electron-mobility transistors, or HFETs), in which a two-dimensional electron gas with very high carrier mobility is used for charge transport, are especially suitable for use at very high frequencies (microwave frequencies; several GHz).

FETs are further divided into depletion-mode and enhancement-mode types, depending on whether the channel is turned on or off with zero gate-to-source voltage. For enhancement mode, the channel is off at zero bias, and a gate potential can "enhance" the conduction. For the depletion mode, the channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the channel, reducing conduction. For either mode, a more positive gate voltage corresponds to a higher current for n-channel devices and a lower current for p-channel devices. Nearly all JFETs are depletion-mode because the diode junctions would forward bias and conduct if they were enhancement-mode devices; most IGFETs are enhancement-mode types.

Usage of bipolar and field-effect transistors

The bipolar junction transistor (BJT) was the most commonly used transistor in the 1960s and 70s. Even after MOSFETs became widely available, the BJT remained the transistor of choice for many analog circuits such as amplifiers because of its greater linearity and ease of manufacture. In integrated circuits, the desirable properties of MOSFETs allowed them to capture nearly all market share for digital circuits. Discrete MOSFETs are also applied in analog circuits, voltage regulators, amplifiers, power transmitters, and motor drivers.

Other transistor types


Transistor symbol created on Portuguese pavement in the University of Aveiro.

Part numbering standards / specifications

The types of some transistors can be parsed from the part number. There are three major semiconductor naming standards; in each, the alphanumeric prefix provides clues to the type of the device.

Japanese Industrial Standard (JIS)

JIS Transistor Prefix Table
Prefix Type of transistor
2SA high-frequency p–n–p BJTs
2SB audio-frequency p–n–p BJTs
2SC high-frequency n–p–n BJTs
2SD audio-frequency n–p–n BJTs
2SJ P-channel FETs (both JFETs and MOSFETs)
2SK N-channel FETs (both JFETs and MOSFETs)
The JIS-C-7012 specification for transistor part numbers starts with "2S",[44] e.g. 2SD965, but sometimes the "2S" prefix is not marked on the package – a 2SD965 might only be marked "D965"; a 2SC1815 might be listed by a supplier as simply "C1815". This series sometimes has suffixes (such as "R", "O", "BL"... standing for "Red", "Orange", "Blue" etc.) to denote variants, such as tighter hFE (gain) groupings.

European Electronic Component Manufacturers Association (EECA)

The Pro Electron standard, the European Electronic Component Manufacturers Association part numbering scheme, begins with two letters: the first gives the semiconductor type (A for germanium, B for silicon, and C for materials like GaAs); the second letter denotes the intended use (A for diode, C for general-purpose transistor, etc.). A 3-digit sequence number (or one letter then 2 digits, for industrial types) follows. With early devices this indicated the case type. Suffixes may be used, with a letter (e.g. "C" often means high hFE, such as in: BC549C[45]) or other codes may follow to show gain (e.g. BC327-25) or voltage rating (e.g. BUK854-800A[46]). The more common prefixes are:

Pro Electron / EECA Transistor Prefix Table
Prefix  Type and usage                                               Example   Equivalent
AC      Germanium small-signal AF transistor                         AC126     NTE102A
AD      Germanium AF power transistor                                AD133     NTE179
AF      Germanium small-signal RF transistor                         AF117     NTE160
AL      Germanium RF power transistor                                ALZ10     NTE100
AS      Germanium switching transistor                               ASY28     NTE101
AU      Germanium power switching transistor                         AU103     NTE127
BC      Silicon, small-signal transistor ("general purpose")         BC548     2N3904
BD      Silicon, power transistor                                    BD139     NTE375
BF      Silicon, RF (high frequency) BJT or FET                      BF245     NTE133
BS      Silicon, switching transistor (BJT or MOSFET)                BS170     2N7000
BL      Silicon, high frequency, high power (for transmitters)       BLW60     NTE325
BU      Silicon, high voltage (for CRT horizontal deflection)        BU2520A   NTE2354
CF      Gallium arsenide small-signal microwave transistor (MESFET)  CF739     —
CL      Gallium arsenide microwave power transistor (FET)            CLY10     —

Joint Electron Devices Engineering Council (JEDEC)

The JEDEC EIA370 transistor device numbers usually start with "2N", indicating a three-terminal device (dual-gate field-effect transistors are four-terminal devices, so begin with 3N), then a 2, 3 or 4-digit sequential number with no significance as to device properties (although early devices with low numbers tend to be germanium). For example, 2N3055 is a silicon n–p–n power transistor, 2N1301 is a p–n–p germanium switching transistor. A letter suffix (such as "A") is sometimes used to indicate a newer variant, but rarely gain groupings.
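
Pulling the three schemes together, here is a rough Python classifier that inspects only the prefix. It is deliberately naive and, because it looks at prefixes alone, inherits all the ambiguities described under "Naming problems" below:

    # Rough part-number classifier built from the prefix tables above.
    JIS = {
        "2SA": "high-frequency p-n-p BJT", "2SB": "audio-frequency p-n-p BJT",
        "2SC": "high-frequency n-p-n BJT", "2SD": "audio-frequency n-p-n BJT",
        "2SJ": "P-channel FET",            "2SK": "N-channel FET",
    }

    def classify(part: str) -> str:
        part = part.upper()
        for prefix, kind in JIS.items():
            if part.startswith(prefix):
                return f"JIS: {kind}"
        if part.startswith("3N"):
            return "JEDEC: four-terminal device (e.g. dual-gate FET)"
        if part.startswith("2N"):
            return "JEDEC: three-terminal device"
        if part[:1] in "ABC" and part[1:2].isalpha():
            material = {"A": "germanium", "B": "silicon",
                        "C": "GaAs and similar"}[part[0]]
            return f"Pro Electron: {material} device"
        return "unknown or proprietary scheme"

    for p in ("2SC1815", "2N3055", "BC548", "AD133", "CLY10", "MPF102"):
        print(p, "->", classify(p))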

Proprietary

Manufacturers of devices may have their own proprietary numbering system, for example CK722. Since devices are second-sourced, a manufacturer's prefix (like "MPF" in MPF102, which originally would denote a Motorola FET) now is an unreliable indicator of who made the device. Some proprietary naming schemes adopt parts of other naming schemes, for example a PN2222A is a (possibly Fairchild Semiconductor) 2N2222A in a plastic case (but a PN108 is a plastic version of a BC108, not a 2N108, while the PN100 is unrelated to other xx100 devices).

Military part numbers sometimes are assigned their own codes, such as the British Military CV Naming System.

Manufacturers buying large numbers of similar parts may have them supplied with "house numbers", identifying a particular purchasing specification and not necessarily a device with a standardized registered number. For example, an HP part 1854,0053 is a (JEDEC) 2N2218 transistor[47][48] which is also assigned the CV number: CV7763[49]

Naming problems

With so many independent naming schemes, and the abbreviation of part numbers when printed on the devices, ambiguity sometimes occurs. For example, two different devices may be marked "J176" (one the J176 low-power Junction FET, the other the higher-powered MOSFET 2SJ176).

As older "through-hole" transistors are given surface-mount packaged counterparts, they tend to be assigned many different part numbers because manufacturers have their own systems to cope with the variety in pinout arrangements and options for dual or matched n–p–n+p–n–p devices in one pack. So even when the original device (such as a 2N3904) may have been assigned by a standards authority, and well known by engineers over the years, the new versions are far from standardized in their naming.

Construction

Semiconductor material

Semiconductor material characteristics

Semiconductor    Junction forward    Electron mobility    Hole mobility       Max. junction
material         voltage V @ 25 °C   m²/(V·s) @ 25 °C     m²/(V·s) @ 25 °C    temp. °C
Ge               0.27                0.39                 0.19                70 to 100
Si               0.71                0.14                 0.05                150 to 200
GaAs             1.03                0.85                 0.05                150 to 200
Al–Si junction   0.3                 —                    —                   150 to 200
The first BJTs were made from germanium (Ge). Silicon (Si) types currently predominate but certain advanced microwave and high-performance versions now employ the compound semiconductor material gallium arsenide (GaAs) and the semiconductor alloy silicon germanium (SiGe). Single element semiconductor material (Ge and Si) is described as elemental.

Rough parameters for the most common semiconductor materials used to make transistors are given in the table to the right; these parameters will vary with increase in temperature, electric field, impurity level, strain, and sundry other factors.

The junction forward voltage is the voltage applied to the emitter–base junction of a BJT in order to make the base conduct a specified current. The current increases exponentially as the junction forward voltage is increased. The values given in the table are typical for a current of 1 mA (the same values apply to semiconductor diodes). The lower the junction forward voltage the better, as this means that less power is required to "drive" the transistor. The junction forward voltage for a given current decreases with increase in temperature. For a typical silicon junction the change is −2.1 mV/°C.[50] In some circuits special compensating elements (sensistors) must be used to compensate for such changes.
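
A quick illustration of that temperature dependence, combining the table's 0.71 V silicon figure with the −2.1 mV/°C coefficient quoted above:

    # How far the silicon junction forward voltage drifts with
    # temperature, using the -2.1 mV/C figure quoted in the text.
    V_BE_25C = 0.71    # volts at 25 C (from the table, Si at 1 mA)
    TEMPCO = -2.1e-3   # volts per degree C

    for temp_c in (25, 50, 100, 150):
        v_be = V_BE_25C + TEMPCO * (temp_c - 25)
        print(f"{temp_c:3d} C -> Vbe ~ {v_be:.3f} V")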

The density of mobile carriers in the channel of a MOSFET is a function of the electric field forming the channel and of various other phenomena such as the impurity level in the channel. Some impurities, called dopants, are introduced deliberately in making a MOSFET, to control the MOSFET electrical behavior.

The electron mobility and hole mobility columns show the average drift speed of electrons and holes through the semiconductor material with an electric field of 1 volt per meter applied across the material (a short numeric illustration follows the list below). In general, the higher the electron mobility the faster the transistor can operate. The table indicates that Ge is a better material than Si in this respect. However, Ge has four major shortcomings compared to silicon and gallium arsenide:
  • Its maximum temperature is limited;
  • it has relatively high leakage current;
  • it cannot withstand high voltages;
  • it is less suitable for fabricating integrated circuits.
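
The promised illustration: drift velocity is mobility times field, v = μE. This uses the electron mobilities from the table above at the table's reference field of 1 V/m:

    # Drift velocity v = mobility * field, using the mobility figures
    # from the table above (m^2/(V*s)) at a field of 1 V/m.
    MOBILITY = {       # electron mobility at 25 C, m^2/(V*s)
        "Ge": 0.39,
        "Si": 0.14,
        "GaAs": 0.85,
    }
    E_FIELD = 1.0      # volts per meter (the table's reference condition)

    for material, mu in MOBILITY.items():
        v = mu * E_FIELD   # meters per second
        print(f"{material:4s}: drift velocity ~ {v:.2f} m/s at 1 V/m")
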
Because the electron mobility is higher than the hole mobility for all semiconductor materials, a given bipolar n–p–n transistor tends to be swifter than an equivalent p–n–p transistor. GaAs has the highest electron mobility of the three semiconductors. It is for this reason that GaAs is used in high-frequency applications. A relatively recent FET development, the high-electron-mobility transistor (HEMT), has a heterostructure (junction between different semiconductor materials) of aluminium gallium arsenide (AlGaAs)-gallium arsenide (GaAs) which has twice the electron mobility of a GaAs-metal barrier junction. Because of their high speed and low noise, HEMTs are used in satellite receivers working at frequencies around 12 GHz. HEMTs based on gallium nitride and aluminium gallium nitride (AlGaN/GaN HEMTs) provide a still higher electron mobility and are being developed for various applications.
Max. junction temperature values represent a cross section taken from various manufacturers' data sheets. This temperature should not be exceeded or the transistor may be damaged.
Al–Si junction refers to the high-speed (aluminum–silicon) metal–semiconductor barrier diode, commonly known as a Schottky diode. This is included in the table because some silicon power IGFETs have a parasitic reverse Schottky diode formed between the source and drain as part of the fabrication process. This diode can be a nuisance, but sometimes it is used in the circuit.

Packaging


Assorted discrete transistors
Discrete transistors are individually packaged transistors. Transistors come in many different semiconductor packages (see image). The two main categories are through-hole (or leaded), and surface-mount, also known as surface-mount device (SMD). The ball grid array (BGA) is the latest surface-mount package (currently only for large integrated circuits). It has solder "balls" on the underside in place of leads. Because they are smaller and have shorter interconnections, SMDs have better high-frequency characteristics but lower power rating.
Transistor packages are made of glass, metal, ceramic, or plastic. The package often dictates the power rating and frequency characteristics. Power transistors have larger packages that can be clamped to heat sinks for enhanced cooling. Additionally, most power transistors have the collector or drain physically connected to the metal enclosure. At the other extreme, some surface-mount microwave transistors are as small as grains of sand.
Often a given transistor type is available in several packages. Transistor packages are mainly standardized, but the assignment of a transistor's functions to the terminals is not: other transistor types can assign other functions to the package's terminals. Even for the same transistor type the terminal assignment can vary (normally indicated by a suffix letter to the part number, e.g. BC212L and BC212K).
Nowadays most transistors come in a wide range of SMT packages; by comparison, the list of available through-hole packages is relatively small. Common through-hole transistor packages include: ATV, E-line, MRT, HRT, SC-43, SC-72, TO-3, TO-18, TO-39, TO-92, TO-126, TO-220, TO-247, TO-251, TO-262, ZTX851.

Flexible transistors

Researchers have made several kinds of flexible transistors, including organic field-effect transistors.[51][52][53] Flexible transistors are useful in some kinds of flexible displays and other flexible electronics.

Vacuum state

From Wikipedia, the free encyclopedia

In quantum field theory, the vacuum state (also called the vacuum) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. Zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.

According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space",[1] and again: "it is a mistake to think of any physical vacuum as some absolutely empty void."[2] According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence.[3][4][5]

The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s it was reformulated by Feynman, Tomonaga and Schwinger, who jointly received the Nobel prize for this work in 1965.[6] Today the electromagnetic interactions and the weak interactions are unified in the theory of the electroweak interaction.

The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions.[7]

Non-zero expectation value

The video of an experiment showing vacuum fluctuations (in the red ring) amplified by spontaneous parametric down-conversion.

If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator (or more accurately, the ground state of a QM problem). In this case the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, Quantum chromodynamics or the BCS theory of superconductivity) field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass.

Energy

In many situations, the vacuum state can be defined to have zero energy, although the actual situation is considerably more subtle. The vacuum state is associated with a zero-point energy, and this zero-point energy has measurable effects. In the laboratory, it may be detected as the Casimir effect. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. In fact, the energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg (or 0.6 eV).[8] An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant.

Symmetry

For a relativistic field theory, the vacuum is Poincaré invariant, which follows from the Wightman axioms but can also be proved directly without these axioms.[9] Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEVs. The VEV may break some of the internal symmetries of the Lagrangian of the field theory. In this case the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred. See Higgs mechanism, Standard Model.

Electrical permittivity

In principle, quantum corrections to Maxwell's equations can cause the experimental electrical permittivity ε of the vacuum state to deviate from the defined scalar value ε0 of the electric constant.[10] These theoretical developments are described, for example, in Dittrich and Gies.[5] In particular, the theory of quantum electrodynamics predicts that the QED vacuum should exhibit nonlinear effects that will make it behave like a birefringent material with ε slightly greater than ε0 for extremely strong electric fields.[11][12] Explanations for dichroism from particle physics, outside quantum electrodynamics, also have been proposed.[13] Active attempts to measure such effects have been unsuccessful so far.[14]

Notations

The vacuum state is written as |0⟩ or |⟩. The vacuum expectation value (see also expectation value) of any field φ should be written as ⟨0|φ|0⟩, but is usually condensed to ⟨φ⟩.

Virtual particles

The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not.[15] The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state,[16] and is described picturesquely as evidence of "virtual particles".[17]
It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle:
    ΔE Δt ≥ ħ,
(with ΔE and Δt being the energy and time variations respectively; ΔE is the accuracy in the measurement of energy and Δt is the time taken in the measurement, and ħ is the Planck constant divided by 2π) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.[18]

Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal.[19][20] One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation (such as [q, p] = iħ).[21] Various schemes have been advanced to construct an observable that has some kind of time interpretation and yet satisfies a canonical commutation relation with energy.[22][23] The many approaches to the energy-time uncertainty principle remain a long and continuing subject of study.[23]

Physical nature of the quantum vacuum

According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a Gedankenexperiment the quantum vacuum state."[1]
According to Fowler & Guggenheim (1939/1965), the third law of thermodynamics may be precisely enunciated as follows:
It is impossible by any procedure, no matter how idealized, to reduce any assembly to the absolute zero in a finite number of operations.[24] (See also [25][26][27].)
Photon-photon interaction can occur only through interaction with the vacuum state of some other field, for example through the Dirac electron-positron vacuum field; this is associated with the concept of vacuum polarization.[28]

According to Milonni (1994): "... all quantum fields have zero-point energies and vacuum fluctuations."[29] This means that there is a component of the quantum vacuum respectively for each component field (considered in the conceptual absence of the other fields), such as the electromagnetic field, the Dirac electron-positron field, and so on.

According to Milonni (1994), some of the effects attributed to the vacuum electromagnetic field can have several physical interpretations, some more conventional than others. The Casimir attraction between uncharged conductive plates is often proposed as an example of an effect of the vacuum electromagnetic field. Schwinger, DeRaad, and Milton (1978) are cited by Milonni (1994) as validly, though unconventionally, explaining the Casimir effect with a model in which "the vacuum is regarded as truly a state with all physical properties equal to zero."[30][31] In this model, the observed phenomena are explained as the effects of the electron motions on the electromagnetic field, called the source field effect. Milonni writes: "The basic idea here will be that the Casimir force may be derived from the source fields alone even in completely conventional QED, ..." Milonni provides detailed argument that the measurable physical effects usually attributed to the vacuum electromagnetic field cannot be explained by that field alone, but require in addition a contribution from the self-energy of the electrons, or their radiation reaction. He writes: "The radiation reaction and the vacuum fields are two aspects of the same thing when it comes to physical interpretations of various QED processes including the Lamb shift, van der Waals forces, and Casimir effects."[32] This point of view is also stated by Jaffe (2005): "The Casimir force can be calculated without reference to vacuum fluctuations, and like all other observable effects in QED, it vanishes as the fine structure constant, α, goes to zero."[33]

Virtual particle


From Wikipedia, the free encyclopedia

In physics, a virtual particle is an explanatory conceptual entity that is found in mathematical calculations about quantum field theory. It refers to mathematical terms that have some appearance of representing particles inside a subatomic process such as a collision. Virtual particles, however, do not appear directly amongst the observable and detectable input and output quantities of those calculations, which refer only to actual, as distinct from virtual, particles. Virtual particle terms represent "particles" that are said to be 'off mass shell'. For example, they can progress backwards in time, can have apparent mass very different from their regular particle namesake's, and can travel faster than light. That is to say, when looked at individually, they appear to be able to violate basic laws of physics. Regular particles of course never do so. On the other hand, any particle that is actually observed never precisely satisfies the conditions theoretically imposed on regular particles. Virtual particles occur in combinations that mutually more or less nearly cancel from the actual output quantities, so that no actual violation of the laws of physics occurs in completed processes. Often the virtual-particle virtual "events" appear to occur close to one another in time, for example within the time scale of a collision, so that they are virtually and apparently "short-lived". If the mathematical terms that are interpreted as representing virtual particles are omitted from the calculations, the result is an approximation that may or may not be near the correct and accurate answer obtained from the proper full calculation.[1][2][3]

Quantum theory is different from classical theory. The difference is in accounting for the inner workings of subatomic processes, which classical physics cannot describe. It was pointed out by Heisenberg that what "actually" or "really" occurs inside such subatomic processes as collisions is not directly observable, and no unique and physically definite visualization is available for it. Quantum mechanics has the specific merit of bypassing speculation about such inner workings: it restricts itself to what is actually observable and detectable. Virtual particles are conceptual devices that, in a sense, try to bypass Heisenberg's insight by offering putative or virtual explanatory visualizations for the inner workings of subatomic processes.

A virtual particle does not necessarily appear to carry the same mass as the corresponding real particle. This is because it appears as "short-lived" and "transient", so that the uncertainty principle allows it to appear not to conserve energy and momentum. The longer a virtual particle appears to "live", the closer its characteristics come to those of an actual particle.

Virtual particles appear in many processes, including particle scattering and Casimir forces. In quantum field theory, even classical forces — such as the electromagnetic repulsion or attraction between two charges — can be thought of as due to the exchange of many virtual photons between the charges.

Virtual particles appear in calculations of subatomic interactions, but never as asymptotic states or indices to the scattering matrix. A subatomic process involving virtual particles is schematically representable by a Feynman diagram in which they are represented by internal lines.

Antiparticles and quasiparticles should not be confused with virtual particles or virtual antiparticles.
Many physicists believe that, because of its intrinsically perturbative character, the concept of virtual particles is often confusing and misleading, and is thus best avoided.[4][5]

Properties

The concept of virtual particles arises in the perturbation theory of quantum field theory, an approximation scheme in which interactions (in essence, forces) between actual particles are calculated in terms of exchanges of virtual particles. Such calculations are often performed using schematic representations known as Feynman diagrams, in which virtual particles appear as internal lines. By expressing the interaction in terms of the exchange of a virtual particle with four-momentum q, where q is given by the difference between the four-momenta of the particles entering and leaving the interaction vertex, both momentum and energy are conserved at the interaction vertices of the Feynman diagram.[6]:119

A virtual particle does not precisely obey the energy–momentum relation m²c⁴ = E² − p²c². Its kinetic energy may not have the usual relationship to velocity; indeed, it can be negative.[7]:110 This is expressed by the phrase off mass shell.[6]:119 The probability amplitude for a virtual particle to exist tends to be canceled out by destructive interference over longer distances and times. Quantum tunnelling may be considered a manifestation of virtual particle exchanges.[8]:235 The range of forces carried by virtual particles is limited by the uncertainty principle, which regards energy and time as conjugate variables; thus, virtual particles of larger mass have more limited range.[9]
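
A back-of-envelope version of that range argument: a virtual particle of mass m can exist for roughly Δt ≈ ħ/(mc²), so the force it mediates reaches about cΔt = ħ/(mc), the reduced Compton wavelength. A sketch using the standard constant ħc ≈ 197.3 MeV·fm:

    # Rough range of a force carried by a virtual particle of mass m:
    # range ~ hbar/(m c), the reduced Compton wavelength.
    HBAR_C = 197.327   # MeV * fm, a standard constant

    def range_fm(mass_mev):
        return HBAR_C / mass_mev   # femtometers

    print(f"pion   (~140 MeV): range ~ {range_fm(140):.1f} fm")     # ~1.4 fm
    print(f"W boson (~80 GeV): range ~ {range_fm(80_000):.4f} fm")  # tiny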

In the usual mathematical notation of the equations of physics, there is no mark of the distinction between virtual and actual particles. The amplitude that a virtual particle exists interferes with the amplitude for its non-existence, whereas for an actual particle the cases of existence and non-existence cease to be coherent with each other and do not interfere any more. In the quantum field theory view, actual particles are viewed as being detectable excitations of underlying quantum fields. Virtual particles are also viewed as excitations of the underlying fields, but appear only as forces, not as detectable particles. They are "temporary" in the sense that they appear in calculations, but are not detected as single particles. Thus, in mathematical terms, they never appear as indices to the scattering matrix, which is to say, they never appear as the observable inputs and outputs of the physical process being modelled.

There are two principal ways in which the notion of virtual particles appears in modern physics. They appear as intermediate terms in Feynman diagrams; that is, as terms in a perturbative calculation. They also appear as an infinite set of states to be summed or integrated over in the calculation of a semi-non-perturbative effect. In the latter case, it is sometimes said that virtual particles contribute to a mechanism that mediates the effect, or that the effect occurs through the virtual particles.[6]:118

Manifestations

There are many observable physical phenomena that arise in interactions involving virtual particles. For bosonic particles that exhibit rest mass when they are free and actual, virtual interactions are characterized by the relatively short range of the force interaction produced by particle exchange. Examples of such short-range interactions are the strong and weak forces, and their associated field bosons. For the gravitational and electromagnetic forces, the zero rest-mass of the associated boson particle permits long-range forces to be mediated by virtual particles. However, in the case of photons, power and information transfer by virtual particles is a relatively short-range phenomenon (existing only within a few wavelengths of the field-disturbance, which carries information or transferred power), as for example seen in the characteristically short range of inductive and capacitive effects in the near field zone of coils and antennas.

Some field interactions which may be seen in terms of virtual particles are:
  • The Coulomb force (static electric force) between electric charges. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space this exchange results in the inverse square law for electric force. Since the photon has no mass, the coulomb potential has an infinite range.
  • The magnetic field between magnetic dipoles. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space this exchange results in the inverse cube law for magnetic force. Since the photon has no mass, the magnetic potential has an infinite range.
  • Electromagnetic induction. This phenomenon transfers energy to and from a magnetic coil via a changing (electro)magnetic field.
  • The strong nuclear force between quarks is the result of interaction of virtual gluons. The residual of this force outside of quark triplets (neutron and proton) holds neutrons and protons together in nuclei, and is due to virtual mesons such as the pi meson and rho meson.
  • The weak nuclear force: it is the result of the exchange of virtual W and Z bosons.
  • The spontaneous emission of a photon during the decay of an excited atom or excited nucleus; such a decay is prohibited by ordinary quantum mechanics and requires the quantization of the electromagnetic field for its explanation.
  • The Casimir effect, where the ground state of the quantized electromagnetic field causes attraction between a pair of electrically neutral metal plates.
  • The van der Waals force, which is partly due to the Casimir effect between two atoms.
  • Vacuum polarization, which involves pair production or the decay of the vacuum, which is the spontaneous production of particle-antiparticle pairs (such as electron-positron).
  • Lamb shift of positions of atomic levels.
  • Hawking radiation, where the gravitational field is so strong that it causes the spontaneous production of photon pairs (with black body energy distribution) and even of particle pairs.
  • Much of the so-called near field of radio antennas, where the magnetic and electric effects of the changing current in the antenna wire, and the charge effects of the wire's capacitive charge, may be (and usually are) important contributors to the total EM field close to the source. Both of these effects are dipole effects that decay with increasing distance from the antenna much more quickly than does the influence of "conventional" electromagnetic waves that are "far" from the source ("far" in terms of the ratio of antenna length or diameter to wavelength). These far-field waves, for which E is (in the limit of long distance) equal to cB, are composed of actual photons. Actual and virtual photons are mixed near an antenna, with the virtual photons responsible only for the "extra" magnetic-inductive and transient electric-dipole effects, which cause any imbalance between E and cB. As distance from the antenna grows, the near-field effects (as dipole fields) die out more quickly, and only the "radiative" effects that are due to actual photons remain as important effects. Although virtual effects extend to infinity, their field strength drops off as 1/r² rather than as the 1/r of EM waves composed of actual photons (the corresponding powers decrease as 1/r⁴ and 1/r², respectively). See near and far field for a more detailed discussion, and near field communication for practical communications applications of near fields.
Most of these have analogous effects in solid-state physics; indeed, one can often gain a better intuitive understanding by examining these cases. In semiconductors, the roles of electrons, positrons and photons in field theory are replaced by electrons in the conduction band, holes in the valence band, and phonons or vibrations of the crystal lattice. A virtual particle is in a virtual state where the probability amplitude is not conserved. Examples of macroscopic virtual phonons, photons, and electrons in the case of the tunneling process were presented by Günter Nimtz[10] and Alfons A. Stahlhofen.[11]

History

Paul Dirac was the first to propose that empty space (a vacuum) can be visualized as consisting of a sea of electrons with negative energy, known as the Dirac sea. The Dirac sea has a direct analog to the electronic band structure in crystalline solids as described in solid state physics. Here, particles correspond to conduction electrons, and antiparticles to holes. A variety of interesting phenomena can be attributed to this structure. The development of quantum field theory (QFT) in the 1930s made it possible to reformulate the Dirac equation in a way that treats the positron as a "real" particle rather than the absence of a particle, and makes the vacuum the state in which no particles exist instead of an infinite sea of particles.

Feynman diagrams


One particle exchange scattering diagram

The calculation of scattering amplitudes in theoretical particle physics requires the use of some rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented as Feynman diagrams. The appeal of the Feynman diagrams is strong, as they allow for a simple visual presentation of what would otherwise be a rather arcane and abstract formula. In particular, part of the appeal is that the outgoing legs of a Feynman diagram can be associated with actual, on-shell particles. Thus, it is natural to associate the other lines in the diagram with particles as well, called the "virtual particles". In mathematical terms, they correspond to the propagators appearing in the diagram.

In the image to the right, the solid lines correspond to actual particles (of momentum p1 and so on), while the dotted line corresponds to a virtual particle carrying momentum k. For example, if the solid lines were to correspond to electrons interacting by means of the electromagnetic interaction, the dotted line would correspond to the exchange of a virtual photon. In the case of interacting nucleons, the dotted line would be a virtual pion. In the case of quarks interacting by means of the strong force, the dotted line would be a virtual gluon, and so on.

One-loop diagram with fermion propagator

Virtual particles may be mesons or vector bosons, as in the example above; they may also be fermions. However, in order to preserve quantum numbers, most simple diagrams involving fermion exchange are prohibited. The image to the right shows an allowed diagram, a one-loop diagram. The solid lines correspond to a fermion propagator, the wavy lines to bosons.

Vacuums

In formal terms, a particle is considered to be an eigenstate of the particle number operator a†a, where a is the particle annihilation operator and a† the particle creation operator (sometimes collectively called ladder operators). In many cases, the particle number operator does not commute with the Hamiltonian for the system. This implies the number of particles in an area of space is not a well-defined quantity but, like other quantum observables, is represented by a probability distribution. Since these particles do not have a permanent existence, they are called virtual particles or vacuum fluctuations of vacuum energy. In a certain sense, they can be understood to be a manifestation of the time-energy uncertainty principle in a vacuum.[12]
An important example of the "presence" of virtual particles in a vacuum is the Casimir effect.[13] Here, the explanation of the effect requires that the total energy of all of the virtual particles in a vacuum can be added together. Thus, although the virtual particles themselves are not directly observable in the laboratory, they do leave an observable effect: Their zero-point energy results in forces acting on suitably arranged metal plates or dielectrics.[14] On the other hand, the Casimir effect can be interpreted as the relativistic van der Waals force.[15]

Pair production

Virtual particles are often popularly described as coming in pairs, a particle and antiparticle, which can be of any kind. These pairs exist for an extremely short time, and then mutually annihilate. In some cases, however, it is possible to boost the pair apart using external energy so that they avoid annihilation and become actual particles.
This may occur in one of two ways. In an accelerating frame of reference, the virtual particles may appear to be actual to the accelerating observer; this is known as the Unruh effect. In short, the vacuum of a stationary frame appears, to the accelerated observer, to be a warm gas of actual particles in thermodynamic equilibrium.

Another example is pair production in very strong electric fields, sometimes called vacuum decay. If, for example, a pair of atomic nuclei are merged to very briefly form a nucleus with a charge greater than about 140 (that is, larger than about the inverse of the fine-structure constant, which is a dimensionless quantity), the strength of the electric field will be such that it is energetically favorable to create positron–electron pairs out of the vacuum or Dirac sea, with the electron attracted to the nucleus to annihilate the positive charge. This pair-creation amplitude was first calculated by Julian Schwinger in 1951.
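
Schwinger's calculation is associated with a characteristic "critical" field strength for the electron, E_c = m²c³/(eħ), above which pair creation from the vacuum becomes significant. A quick evaluation with standard constants (a sketch of the arithmetic, not of Schwinger's derivation):

    # Schwinger critical field E_c = m^2 c^3 / (e * hbar) for the
    # electron, evaluated with standard physical constants.
    M_E = 9.109e-31       # electron mass, kg
    C = 2.998e8           # speed of light, m/s
    E_CHARGE = 1.602e-19  # elementary charge, C
    HBAR = 1.055e-34      # reduced Planck constant, J*s

    e_crit = (M_E**2 * C**3) / (E_CHARGE * HBAR)
    print(f"Schwinger critical field: {e_crit:.2e} V/m")  # ~1.3e18 V/m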

Actual and virtual particles compared

As a consequence of quantum mechanical uncertainty, any object or process that exists for a limited time or in a limited volume cannot have a precisely defined energy or momentum. This is the reason that virtual particles — which exist only temporarily as they are exchanged between ordinary particles — do not necessarily obey the mass-shell relation. However, the longer a virtual particle exists, the more closely it adheres to the mass-shell relation. A "virtual" particle that exists for an arbitrarily long time is simply an ordinary particle.

However, all particles have a finite lifetime, as they are created and eventually destroyed by some processes. As such, there is no absolute distinction between "real" and "virtual" particles. In practice, the lifetime of "ordinary" particles is far longer than the lifetime of the virtual particles that contribute to processes in particle physics, and as such the distinction is useful to make.

Why people like Mike Huckabee are not legally qualified to hold any government position, let alone the presidency

It's quite simple, really. Upon winning an election, the president-to-be has to take an oath "to uphold, protect, and defend the Constitution ..." If, however, people like Huckabee declare that the Bible (or Quran, or the Communist Manifesto) should be above the Constitution, then he cannot take that oath without committing perjury. And since the oath is legally binding for as long as he is in power, he also cannot later adopt this position without violating that oath, for then he is committing treason against the country. Either way, he is committing "high crimes and misdemeanors" and is subject to impeachment and prosecution.

This is not a religious test for office. A government official can hold any religious or other ideological ideas as he pleases; he just can't, by his actions, impose them against the US and local constitutions. He can even advocate his beliefs, and argue that the Constitution should be amended or rewritten to reflect them; he just can't act on them out of personal conviction or conscience.

Of course, I am alluding to Kim Davis as well, who is basically in the same position. She isn't a county clerk simply because she won the largest number of votes; she too must have taken an oath of office which mentioned both the state and federal constitutions. Perjury cannot be proved here, because her views on gay marriage only came out recently. But she is violating her oath by ignoring the Supreme Court and other courts, those determiners of what constitutions mean. That too is treasonous, albeit on a much smaller scale. It is a crime which should lead to her impeachment and prosecution. And if impeachment is unobtainable, she still should be prosecuted and punished if found guilty. Of course, the honorable thing to do would have been to resign, citing her conscience -- but that wouldn't have brought forth fame, legions of followers and supporters, and guaranteed royalties on the inevitable ghostwritten book.

That, ironically, is something she does have the right to do.
